Cohort Considers the Impact of Generative AI on the Faculty Experience
The Duke Faculty Academy brought together eight faculty members from across disciplines to discuss how generative AI impacts the faculty experience.
Organized by the Office for Faculty Advancement with key university partners, the Duke Faculty Academy is a new professional development program designed to help participants develop creative solutions to issues that impact the faculty experience.
Yakut Gazi, vice provost for learning innovation and digital education, and Jon Reifschneider, executive director of the Master of Engineering in AI for Product Innovation Program, advised the cohort.
Beginning with the Emerging Pedagogies Summit, the faculty members engaged with national experts who study aspects of generative AI and came together regularly for interdisciplinary discussions. “Every time I attended the conversations, I learned something,” Gazi said.
“The interdisciplinary nature of the program made our learning experience both unique and highly effective,” said cohort member David Brown.
Inaugural Cohort of the Duke Faculty Academy
- Katherine Brading, Professor of Philosophy
- David Brown, Snow Family Business Distinguished Professor of Business
- Michael Cary, Associate Professor in the School of Nursing
- Eileen Cheng-yin Chow, Associate Professor of the Practice of Asian and Middle Eastern Studies
- Michael Murphy, Clinical Professor of Law (Teaching)
- Ronald Parr, Professor of Computer Science
- Carlo Tomasi, Iris Einheuser Distinguished Professor of Computer Science
- Alex Zhang, Archibald C. and Frances Fulk Rufty Research Professor of Law
Questions for Consideration
“We worked collaboratively using design thinking exercises,” said Sherilynn Black, associate vice provost for faculty advancement. “There was so much intellectual work, care and thought from these faculty fellows.”
The cohort worked to reflect the voices of their communities and disciplines to generate a list of critical questions about AI use in the Duke community. This took extensive collaborative thought and a deep interrogation of how they each viewed and experienced AI in their personal and professional lives. Focusing and narrowing the questions took significant time, as the cohort wanted to ensure that the list of “entry point” questions would allow every member of the Duke faculty to see their own views reflected and find a way to engage with the topic. These questions are:
- How should faculty approach AI literacy? What are the considerations faculty should interrogate if they want to use generative AI?
- Assuming that efficiency is the goal of most faculty, how do we use generative AI in ways that reliably save time?
- How do I know whether something is AI-generated (e.g., when non-native speakers are wrongly accused of plagiarism)?
- How much can we trust the accuracy of generative AI, especially in our research?
- What does it mean to be human (e.g., uniqueness, creativity) in the age of AI?
- How do we navigate the bias that is inherent in generative AI? What should we consider, and can we use this to our advantage?
- Why are we asking about using generative AI more and not less?
- Is there a way that faculty can use generative AI for skill development that will make us better at what we do?
- What is acceptable use of generative AI in the classroom? How should students use it? What boundaries should we as faculty put on student use?
- What are examples of generative AI helping the learning process vs. hurting the learning process?
- How can we use AI to explore areas not yet explored when we aren’t sure how trustworthy the output is?
Comments from the Capstone
At a capstone event last month, the cohort invited members of the Duke community to join them in discussion.
“No part of a large language model is ever trained to tell the truth, or to tell truth from falsehood.” –Carlo Tomasi
“AI literacy is not a skill. If you want to incorporate it into your life, you need to make a commitment to engage with this evolving technology. One of the key things to focus on is to challenge AI and yourself. Try to get it to give bad answers. You’ll quickly start to understand what it’s actually doing.” – Ronald Parr
“Generative AI tools present an undeniable opportunity, forging new pathways for discovery and productivity. However, the presence of embedded biases demands our collective and unwavering attention. As a leading university, we must explore the strengths of generative AI but not lose sight of the importance of identifying their limitations. I believe the mission is clear: empower our faculty, staff, students and the public through education for responsible use.” –Michael Cary
“Just being in a room with people from different disciplines kind of blows your mind. I hope we can keep on doing that and being challenged in our perspectives.” –Eileen Cheng-yin Chow
New Resources for Faculty
Black announced that in the fall, the cohort will work collaboratively with Duke Learning Innovation & Lifetime Education (LILE) to lead learning communities for faculty around aspects of generative AI based on the list of entry point questions developed over the last year. The cohort will also create an introductory video highlighting AI entry points as a guide for other faculty to engage with the topic.
The learning communities will also produce newly developed resources in response to what emerges from the year’s discussions.
The aim is to help all faculty better understand ways that they can effectively engage with generative AI, regardless of discipline, literacy or expertise. LILE will offer a series of workshops allowing for deep dives into each entry point topic.
Main image, above: Duke Faculty Academy cohort members Alex Zhang, Michael Cary, Michael Murphy, Ronald Parr, David Brown, Eileen Cheng-yin Chow and Carlo Tomasi (not pictured: Katherine Brading)