The Future of Assessment: Rethinking AI’s Role in Teaching and Learning

How is AI reshaping teaching and assessment? In this podcast episode, Eric Mazur and David Joyner explore the opportunities and challenges of generative AI in education—discussing its impact on assessment, academic integrity, and student engagement. Listen in for practical strategies to integrate AI into teaching in meaningful ways.

Want to explore these ideas further? Join David in a Perusall Engage event starting April 7, 2025, for an interactive reading experience with fellow educators. Learn more at perusall.com/engage.

Eric Mazur
Thank you for joining us today for this episode on the impact of generative AI on education in the Social Learning Amplified podcast series. I'm your host, Eric Mazur, and our guest on the episode today is David Joyner. David is the inaugural holder of the Zvi Galil Peace Faculty Chair in the School of Interactive Computing at the Georgia Tech College of Computing. He's also the executive director of online education and the Online Master of Science in Computer Science at Georgia Tech. His research focuses on online education and learning at scale, especially as they intersect with for-credit offerings at the graduate and undergraduate levels. His emphasis is on designing learning experiences that leverage the opportunities of online learning to compensate for the loss of synchronous co-located class time, something we all struggled with during the pandemic.

This includes leveraging artificial intelligence for student support and assignment evaluation, facilitating student communities in large online classes, and investigating strategies for maintaining an interactive presentation of online instructional material. He's also chair of the steering committee of the ACM Learning at Scale conference, as well as the general chair for the 2019, 2020, and 2024 conferences.

David has received numerous awards for his work in teaching online, including most recently the 2023 Georgia Tech Outstanding Professional Education Award and the 2022 College of Computing Outstanding Faculty Leadership Award. He was also named to the Georgia Tech Alumni Association's 40 Under 40 in 2022. David, thank you for being here today.

David Joyner
Thank you so much for having me and thank you so much for reading all that. I need to shorten my bio.

Eric Mazur

Well, it's just an honor to have you here and to talk about this absolutely fascinating subject. You recently wrote a book on generative AI in education. The title, if I got it right, is A Teacher's Guide to Conversational AI: Enhancing Assessment, Instruction, and Curriculum with Chatbots. It was published, I think last year, by Routledge. Now, as the title states, your book explores the practical role that language-based artificial intelligence tools play in the classroom, you know, both the teaching and the learning as well as the assessment. And there's no question that generative AI is having a major impact on education. In fact, it's all that my colleagues can talk about. And in my opinion, most people tend to focus on the challenges, but there are also opportunities, and the publication of your book couldn't really have been timed better. What prompted you in particular to write this book?

David Joyner
Yeah, absolutely. I think that's exactly kind of what the initial impetus behind the book was. I was having a lot of conversations with faculty, with teachers at lots of different levels. I talked to teachers at my kids' school. I talked to faculty here at Georgia Tech. I talked to faculty around the country and around the world. And the initial reaction to, you know, ChatGPT, especially coming out in November 2022, and everything that came after that was a lot of, I don't want to say fear, because fear is a strong word, but very negative, very kind of.

This is a new thing that we have to deal with. It's something that students now have in their pockets, and we have not designed our classes, we've not designed our instruction, we've not designed our assessments with those things in mind. I like the analogy that it's as if in the mid-80s, every student woke up one day and suddenly had a TI-83 on their nightstand and had never had one before. And how would teachers, how would math teachers, deal with the fact that students now have this super powerful tool in their pocket?

It's exactly what happened with everything that generative AI can do. So I was seeing a lot of that kind of fear, that kind of negative reaction. In fact, one of the working titles for the book was Chatbots Everywhere All at Once, because this was the same year that Everything Everywhere All at Once came out and won Best Picture at the Oscars, which was a fantastic movie. But it described how this renaissance kind of felt to a lot of people. Just suddenly it's everywhere. Before that,

the chatbots were the little things in the corner of the websites you go to, where if you want to chat with somebody and get customer support, you can explore the site that way. It was, you know, very, very contained. We had Siri and Alexa, but those were very kind of regimented interactions. And then suddenly chatbots were everywhere. So that was a working title. And so the idea of the book was basically to meet teachers where they were and say, hey, you're worried about what this means for assessment. You're worried about what this means for instruction. Let's talk about how you can tweak things in some low-overhead, low-effort kind of ways to mitigate some of these kinds of things. I was seeing a lot of teachers deciding, you know, now everything needs to be done in the classroom and everything needs to be proctored and things like that. And that was just kind of a negative reaction to these kinds of new technologies. And so the book was really, let's meet you where you are. Let's talk about how we can design assessments, design instructional experiences, design kind of learning environments that account for the fact that AI is there, rather than treat it as something that happened to us. And in that regard, you brought up the pandemic earlier; it's been weird to me that a lot of the same kinds of rapid adjustment we went through for emergency remote teaching during the pandemic are the same kinds of emergency reactions we had to generative AI suddenly being there. My kids' school, for example, that first year banned any use of generative AI, which, as someone who works in AI and education, gave me the reaction of, do I have my kids at the right school if they're taking that approach?

But their response was very measured. It was, we don't yet know how powerful this is. We don't yet know how students are going to use it. So for right now, we're just going to say no. And we're going to monitor and watch and see how it goes. And then the next year, they actually said, OK, it's OK in these environments; we're actually going to introduce a generative AI tool into the classrooms that students are assigned to use sometimes to augment their learning. And so they kind of had that kind of view. So the book is really that kind of angle of, you're worried, you're afraid, you're seeing these kinds of things. Let's meet you where you are and start there.

But then let's use that as a gateway. So that's how we start the conversation. We start the conversation with, this is presenting immediate challenges to you today. Here's how to resolve those. But now that you've built a little bit more familiarity with generative AI, here's how you can use it to actually create better learning experiences or create things that you couldn't do at scale or you couldn't do as much as you really wanted to.

And by the way, here are all the ways it can help you and save you time and help you do some of your teacher responsibilities better. One thing I think teachers need to know about generative AI, and most have learned it by now, although things move so fast that I find this conversation has to be updated every few months because people are learning and catching up, is that some of the things generative AI is really good at are the things that teachers have to do but don't like to do. They're the repetitive things, the kind of routine reworking of content and routine updating of things that isn't why teachers generally get out of bed in the morning. Those are the kinds of things generative AI very often is really, really good at. So it has a really big role to play as far as helping teachers out as well. My experience has been that if you go to someone and say, hey, that new technology that you're angry about because your students are using it to cheat, by the way, it can help you too, people are just kind of like, don't evangelize that technology to me. I don't want it. I don't like it. I don't like what I'm seeing.

But if you start the conversation from, here's how we actually resolve the issues you're seeing right now, then we have a kind of dialogue going and can move towards, here's how it can help you. So that was the point of the book and the MOOC series that accompanied it.

Eric Mazur
I see. I see. I'm actually leading a faculty learning community right now focused on active learning, given my background and my own interests. And I decided to start that faculty learning community by having the participants focus on backward design and on learning outcomes. You just mentioned how generative AI can help faculty do things that they don't want to do.

But it also can help faculty, I've noticed, including myself, do things that we're not that good at. For example, designing good learning outcomes that have an action verb aligned with Bloom's taxonomy, that specify the subject of the learning, and that also provide a context for that learning.

Outcomes that satisfy those three criteria. And then also start to think about an assessment that would actually permit you to decide whether or not you've reached that learning outcome. And I noticed, both for my own learning outcomes in my course and the assessment, that typically the learning outcome tends to be at a higher Bloom's taxonomy level than the assessment. By playing with ChatGPT, I was able to convince myself and many of the participants that there's actually a great way of rethinking what is, in a certain sense, the foundation of your course. So I hope that we'll get a chance to talk more at some point about how to use ChatGPT to improve instruction. And you mentioned assessment. I definitely want to come back to that. But before we dive into those parts, I saw that your book is not just a book, but also a MOOC. Can you tell me a little bit more about this MOOC series and what's the difference between the two?
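
The exercise Mazur describes here, drafting learning outcomes with an action verb, a subject, and a context, and pairing each with an assessment at the same Bloom's level, lends itself to a reusable prompt template. A minimal sketch, assuming you paste the resulting prompt into ChatGPT or a similar chatbot; the wording is hypothetical, not quoted from the episode:

```python
# Hypothetical prompt template for the learning-outcome workflow described
# above. The three numbered criteria mirror those named in the conversation;
# the phrasing itself is an illustrative assumption.
def outcome_prompt(topic: str, bloom_level: str) -> str:
    """Build a prompt requesting one learning outcome and one aligned assessment."""
    return (
        f"Draft one course learning outcome about {topic} at the "
        f"'{bloom_level}' level of Bloom's taxonomy. The outcome must "
        "(1) begin with an action verb aligned with that level, "
        "(2) name the subject of the learning, and "
        "(3) specify a context for the learning. "
        "Then propose one assessment, at the same Bloom's level, that "
        "would let me decide whether the outcome has been reached."
    )

print(outcome_prompt("Newton's second law", "analyze"))
```

Keeping the three criteria explicit in the prompt makes it easy to check the chatbot's draft against them, and to spot the mismatch Mazur mentions, where the assessment lands at a lower Bloom's level than the outcome.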

David Joyner
Yeah. Yeah, absolutely. It actually dates back to something we did several years ago. Back in 2017, McGraw-Hill was looking at MOOCs and textbooks and the alignment between them. And they were very generous and gave us a grant to develop a MOOC and a textbook together, to kind of look at the way those things interact. And I was asked to be the instructor for it. And so we developed a Python course, an introduction to programming course, or Introduction to Computing in Python. And so we developed a textbook and we developed a video series, a MOOC series. And the MOOC series went along with the textbook, and I described them as congruent in the sense that the organizational schemes of the course and the textbook exactly mirrored each other: the chapters were named the same, the examples were the same. They were just different mediums, different media for the same content. And what I found was a couple of things. One, that 90% of the work that went into developing an online course was the same work that went into developing a textbook. So much of it was writing out the content, scripting it, sculpting it, really focusing on good instruction, good examples, and good analogies, and things like that. And that applies whether you're having students read a book or you're teaching them live in person. And the content can be a little bit different, because there are certain things you can do in text that you can't do as easily in video and vice versa. But the overall work was very similar. So I found essentially that doing the work for one very quickly overlaps with the work for the other. But more importantly, they meet different needs for different people. One of the great things about the MOOC series is that it is available for continuing education credits, or continuing education units. And teachers in very many states have a requirement to earn continuing education units every year.

And so it's great to be able to take something really recent and really important and make it so that teachers also have an incentive to engage with it. I think one of the reasons why the advent of generative AI has been somewhat of a hassle for teachers is that teachers are already way too busy. We overwork them like crazy. So the idea of now saying, take everything that you have to do anyway, and on top of that you also have to figure out how to adjust to AI, it's just too much.

But if you are able to go in and say, OK, you're already required to go through an annual continuing education kind of thing to learn more about something, you can use that to learn more about AI in education, there's a great connection there. There are other things, too: certain people in certain areas need to be able to demonstrate that they learned something. And so the MOOC series has built-in assessments and a built-in community. It's really meant to be more of a community-driven experience with some built-in credentialing, which is exactly what a lot of people need. For others, the book is the perfect thing, because they're really looking for something they can reference regularly. And you can reference a course regularly, but something about a book being text, being able to just flip through, being able to earmark pages, it lends itself to a different kind of approach. So content-wise, the two are extremely similar, but they serve different functions. One is really targeted at people who need to upskill, need to take courses and prove that they're staying current. And the other is for people who want to have kind of a quick-reference document, a quick-reference book, that they can refer back to and piece through and share with friends and things like that. So it was partially opportunistic, partially really trying to meet those different audiences. I will say they also lend themselves differently to keeping things updated, because this field is moving very quickly. An online course is easier in some ways to keep current, because you can go in and modify it anytime. It's like a Wikipedia page. You can go in and change it as things change. But video itself, you know, tends to be kind of high overhead. So refilming videos and things like that can be more difficult. But the course lets you go in and say, by the way, in this video we said that this is called this. They changed the name of that, you know, five months later, because of course they did. They waited for me to put it into a video to change the name of it, because then it would be hard once I'd actually recorded a video saying, this is called this.

But in an online course, it's easier to do that. The book is kind of static. It's there until you do a bigger revision over time. So they also lend themselves to different things that way. I think the book is a great snapshot of what working with generative AI was like when the book came out. And I think most of the pedagogy in the book is still very current, but there are certain things that have changed over time. The course is meant to be a kind of closer reflection of the way the field is moving right now.

Eric Mazur
I see, I see. I want to come back for a moment to the idea of challenge and opportunity, and also to assessment, which has come up in quite a few of my podcasts. I think the thing that scared faculty the most about generative AI was, what is this going to do to my assessment? So I would love to hear what you see as the challenges and opportunities in the field of assessment in times of generative AI.

David Joyner
Yeah, and I think there are a lot of different levels to that. I think lately I've been really encouraged by the number of new initiatives that have come out, mostly from companies, mostly startups, that have looked at how you use generative AI to improve learning in authentic ways. I think early on, especially, we saw a lot of people just kind of rushing to it. And there was kind of a gold rush of people just slapping an educational logo on GPT and calling it a new tool, or companies that have not changed what they're doing but are calling what they've been doing all along AI, because AI is such a big, amorphous term that you can call anything AI if you really want to. But more recently, we've been seeing a lot more authenticity in this area, in terms of things that really do improve it. There have been a couple of projects here at Georgia Tech that really encouraged me. Jill Watson was the very famous AI TA that predated ChatGPT by five years, that was meant to be an assistant to teaching assistants, or an assistant to instructors, especially those of us who have to run very large forums in our classes. I teach several very large classes. I get dozens and dozens of forum posts every single day. And just to have something that's able to help me write the best response and things like that is really powerful, when used appropriately. I think that there's a big authenticity angle here as well: there are certain threads, certain messages, where what the student cares about is not getting the right answer. What they care about is hearing an answer from the instructor. Because it's more of an opinion question, it's more of a what's-your-view-on-this, can-you-explain-this, and there's power in it being authentic to the person. But where AI can really help in those areas is with helping us focus our attention on the places where human involvement is going to have the highest impact. They don't necessarily need a human to answer questions along the lines of, when is this due?

Or, what time zone is this in? That doesn't need to come from a person. They just need the right answer, and they need it as fast as possible. So that's one place where I've been encouraged, though that's nothing to do with assessment; that's more about the instruction side and the facilitation side. On the assessment side, one of the products that's come out of Georgia Tech is something called Socratic Mind, which I'm using in one of my classes this semester. Socratic Mind is an AI-augmented oral assessment interface. Students are given assignments in there, and the AI poses a question to them and they answer the question out loud, ideally out loud. They have the option to type as well, but it really tends to be better out loud. And what's really powerful about it is the AI can then ask follow-up questions about the student's answer. So if the question was, explain the process of mitosis, for example, the student answers, and, I should have chosen an example I actually know something about, but the AI picks up on the fact that, okay, you described the first part perfectly well.

But the second part, you said something there. So let me ask a follow-up question about the second part. And so it moves away from the environment where it's just one question, one answer, and if you get it right, you get it right, and if you don't, you don't, and toward something more dialogue-based, a back-and-forth. There are a lot of benefits. It's a better learning process, in my opinion, because you actually learn through the conversation. It forces the student to reflect on their own understanding, as opposed to just submitting their answer and finding out two weeks later whether they got it right or not. It gives them the opportunity to recover as well, if they knew the content but just didn't think to include part of it; the AI actually prompts them for something else. It's a learning activity as well, because we know we learn more through that kind of conversation. I try to push back as much as possible on the conception, from many students and many teachers, unfortunately, that assessment is assessment and not learning, that the learning happens over there and the assessment happens over here and never the two shall meet. When really, the assessment's actually meant to be a learning activity.

Eric Mazur
Absolutely, for learning rather than of learning.

David Joyner
Yeah, and people talk about formative assessment, but even formative assessment, I think, undersells it, because that's just an assessment that's kind of built in and done more frequently. Whereas here, really, just the act of doing the assessment improves their learning. And we know that this is true from the testing effect and everything like that. But Socratic Mind is a way of kind of building that into the classroom. And as a happy additional point, it also helps with integrity, because now, instead of students being able to get their assignment, go over to ChatGPT, go over to Copilot, ask the question, copy the answer, maybe tweak it a little bit so it sounds a little bit more like them, it's live, it's interactive. And if someone's going to sit there, you know, copying between Socratic Mind and ChatGPT and back, it's going to become pretty obvious. And when we think about integrity in these kinds of assessments, that's really where I try to think about it: the best approaches to academic integrity are the ones where a student who isn't trying to cheat doesn't even realize there's anything there stopping them from cheating. I use digital proctoring in many of my classes because I think it's kind of a necessary evil, because my classes are asynchronous. But it's an example of the kind of approach to integrity I don't like. So I use it because I feel like I have to for these kinds of assessments. But, you know, whether you're cheating or not, you're aware: this camera is watching me, the microphone is listening to me. This is all an integrity measure. I know that this is being done for this reason. With Socratic Mind, if you're not trying to, you know, get around the restrictions, then you never really notice it. And yeah, it would be hard to cheat that way, because it's going to record and be aware that I don't sound like myself, I'm not sounding like my normal dialogue, and things like that.
So it's one of the ones that we've seen come out. We're using another one in one of my classes called Visible AI. It's an AI-augmented assignment composition interface.

And it kind of moves away from the idea that we don't want students to use AI and says, we want you to use AI. You're encouraged to use AI. We want you to learn how to use it properly. So we're going to build the AI agent into the assignment interface. And when you submit your assignment, we're going to see both what you wrote and the conversation you had with the AI alongside it. So I can come in and say, hey, you had a great brainstorming session with the AI. It looks like you got a great idea from it about that. And then you really built on it and made it your own. And I can see that process, I can see that back and forth. Or I can come in and say, hey, you were supposed to write about three causes of World War II, and you asked generative AI for three causes of World War II, and you repeated the ones that it gave you. We wanted you to think about that in a bit more depth. And I can actually see that relationship. So I think we're really seeing a big development in tools that are now solving the kinds of problems that generative AI seemingly caused.

And the calculator analogy in some ways is tired. It's been used so much. But I think that, when used appropriately, that analogy still has some really useful connections. Here's what I compare it to: it's like the first textbooks that started to come out that included sections on, here's how you graph a parabola on your TI-83. Now that you know how to do it on your calculator, it doesn't replace the fact that you learned how to do it by hand. But now we're going to teach you to do it with a calculator also, because now that you can do it with a calculator, I can assign you problems that require you to go into much more depth, because I know you can generate that graph in 30 seconds instead of four minutes. And because that part is now just 30 seconds, I can expect you to do more. That's kind of where I see many of these new things in generative AI going: they are the things that we can now incorporate proactively into the learning process, knowing they're going to allow students to reach even higher heights because they have access to these new tools.

Eric Mazur
Yeah, I like that point of view very much. And I think we should really focus on the opportunities rather than the challenges, particularly because many of the challenges are the result of things that we impose on the students. You mentioned cheating a couple of times, but that's the result of the high-stakes environment in which we assess them. So changing that may actually alleviate the problem substantially.

David Joyner
Mm-hmm. Oh yeah, and it also can help with that. Because, like, the Socratic Mind kind of thing, what I also love about it is that high-stakes assessment exists in person as well, in regular classes as well. We talk about Socratic Mind as a support for online classes, for, you know, asynchronous, at-scale oral assessment. But even the in-person classes that have oral assessments have difficulty running them for too many students at a time. Now you can have students do an oral assessment twice a week, and you couldn't do that at scale in person. So now you can actually build that in as a fundamental part of what you're doing as well.

Eric Mazur
Right. You mentioned before, you know, the pace at which things change. It's dizzying, right? What might be true at the beginning of the semester may no longer hold at the end. So you've kind of alluded already to the relevancy of your book and, you know, the MOOCs. How relevant do you think the book still is today, however many months after it came out? I forgot what month in 2024 it came out. Ten months?

David Joyner
It came out 10 months ago, which means I finished writing it about 20 months ago. So it's funny, on the pace of change, this is still my go-to story to exemplify how fast things are changing. In my classes, my tests are open book, open note, open internet. I basically say, I don't differentiate between whether you can think of the answer to this question off the top of your head or you can find the answer in 20 seconds. To me, for this kind of knowledge, for what I'm testing here, those are the same thing.

And my tests tend to be open all semester long. So in 2022, when ChatGPT came out, I had students who took the final exam before ChatGPT came out. And then as soon as it came out, students were asking, are we allowed to use this while we're taking the exam? And other students said, no. Like, of course you're not allowed to use that. It's so smart. It's so powerful. Of course you can't use it on the exam. And I came along and said, well, I'm a stickler for sticking to my own syllabus policies.

And I didn't say you can't use it. I said you can use anything except for talking to another person. It's not a person, no matter how the media will frame it. It's not yet reached the level at which we can have that conversation. So yeah, you're allowed to use it. And I had one student write to me complaining, saying, if I'd known ChatGPT was going to come out, I would have waited to take the final exam until after it came out. I said, I'm sorry. I couldn't have predicted that. You couldn't have predicted that. None of us could have. He got an A anyway.

That's how fast it changes: things can happen in a week that change what you would have done a week earlier. And I can't think of any other time when technology has moved quite that fast. We talked about how the internet, you know, the internet and smartphones and calculators and everything like that have fundamentally changed what we do in education. But they changed it on the scale of years or, at worst, maybe months. In fact, there's a chart I've shown in some of my talks about the rate of adoption of certain technologies. The only thing that compares to the adoption of ChatGPT is, I can barely even remember the name of it, what was it, Google Plus. So Google Plus was Google's Facebook competitor. It's the only thing that compares to ChatGPT's adoption rate. It got as many users in two months as ChatGPT did, which is not an indicator of longevity, clearly. But nothing else has come nearly that close. All this is to say, yes, it's changing really quickly. There are things that we wrote in that book, things we thought about in that book, and I think the pedagogical underpinnings of that book still remain very sound, because they all come from this view that this is a super powerful tool that students are going to have access to, and we need to treat it as something for them to learn how to use responsibly. And that hasn't changed. In fact, I think that's gotten even more true. There are wrinkles to how it's developed that I didn't anticipate. One of the big things is that, with the way so many of these tools work, I didn't anticipate that they'd be really expanding how much memory they can hold at a time. And so one of our tips was, when using it, to remember to treat each conversation as independent, because it hasn't really gotten to know you; it's like you're getting somebody new each time. That no longer has to be true.
So there's some specific little wrinkles that have become different, which I don't really think fundamentally change much of the pedagogical underpinning of everything.

But they are things that are worth keeping in mind. The things that I think change more are, and this gets into an area that's hard to explain to some people because it really gets into the nuts and bolts of how these systems work, I like to describe it as: these things' knowledge bases are usually stuck in time. And so if you want to ask about something, you may be asking about something that's kind of outside its knowledge base.

And they've gotten better at live processing, live interacting with the open web, which they got better at a lot faster than I anticipated, but they haven't gotten better to the point where the advice really goes away. They're still at the level where there's a fundamental difference between when they're talking about things that these systems understand in their own long-term memory and when they're recycling and reformatting and repeating things that they're reading right now. And it's still the case that the assessments you can build based on more recent things are stronger

than assessments you build about well-trodden topics. They're not as impossible to use AI on as they used to be. I used to use the example of a particular book that came out after the dataset cutoff of one of the first versions of ChatGPT. And so you could assign students an assignment to write a book report about that book, knowing that if they tried to use AI, either it would say, I don't know about that book, or it would

make up something about that book, which was always more fun, when it came up with completely wrong summaries of the book's plot and the characters and things like that. It was just guessing, based on who the author was and the other things that author had written, what the book was probably about from the title, which was just fun to watch. Now it's at the point where, if you asked that, it would be able to do a better job, even on a more recent source.

But because it has such sparse data on anything more recent, its summaries of more recent material are going to be much more surface level and much more similar across different prompts. So something we've started to do with some of our assignments that ask students to summarize a recent paper or anything like that is we will go ahead and ask, okay, if I asked ChatGPT to summarize this, to describe this for me, what does it look like? And then when a student submits some work, we compare it to what ChatGPT does. Not from an integrity angle, necessarily. It's not a case of, you know, it's very similar, therefore you cheated, therefore you get a zero. It's nothing like that. It's more from the perspective of: if the summary you generated is no better than the summary I could have gotten from AI, then maybe you used AI, in which case you don't get credit. But even if you didn't use AI, you're clearly not at a level of depth that's valuable for us. It's like teaching a college student how to do basic addition nowadays. We expect you to have a level of knowledge deeper than what we can just get out of the tool. And so if you're staying at that level, you haven't gone deep enough with it quite yet. So I think a lot of the things are still pretty relevant and pretty accurate. There are some nuances, though, that can be pretty distracting.

Eric Mazur
I have a feeling that you'll be coming out with a new edition sooner rather than later.

David Joyner
I feel like I need to, and I feel like there's a lot more to say. I feel like it's one of those things that...

Eric Mazur
Yeah. But you mentioned earlier that the focus really is on the pedagogy. And I think that should be true in general about any technology in education. Pedagogy first, technology thereafter. So regardless of the edition, regardless of whether we're having this conversation now or two months from now or a year from now, what's the one takeaway that you hope your readers will walk away with?

David Joyner
I think the biggest one is that, and I don't know how well this comes out right now, this might be something to highlight more in a future edition as well, is that when used correctly, these new generative AI tools can connect much better to the way people learn. If you look back in human history, the idea of learning from books and learning from lectures and things like that is a relatively new phenomenon. And the fact that it's a relatively new phenomenon means it's not the way our brains developed to work. It's not the way our species developed to learn things. There's a research area in education, and a fantastic book, called Legitimate Peripheral Participation. It talks about how learning happens in certain fields, and it's really apprenticeship learning. You know, the new members get in and they're just there during the process, and they just observe and they pick up on little things. And over time, it's just kind of like, okay, now you can do this little step, and now you can do this little step.

That's how they go through their journey to eventually become a master who can handle everything. They learn just a little bit at a time, and they learn socially. They learn from other people who are doing the thing that they're learning to do. And they're learning by doing. They're learning by actually doing it and being around it while it's being done, not reading about it, not learning about things from people who are just describing what others have done. And...

No one has ever doubted, I think, that that's the better way to learn. We know about tutoring, we know about all those kinds of things that more accurately capture a lot of that. The challenge has always been how you standardize that and scale that and distribute that. If you want to train a whole lot of blacksmiths, you can't have a hundred people all apprenticing with one blacksmith at a time. It just doesn't work that way. You have to create these other kinds of approaches.

But generative AI has the potential to build those kinds of environments. It has the potential to say, we're going to give you a virtual agent, we're going to give you a virtual internship. There's a company that actually does virtual internships. So we can actually put you in the position of someone doing the work, have you actually go through the process, and see virtual agents doing other parts of it. And you can make it repeatable as well. That's the other thing that's often a big challenge in some of these fields. Surgery, for example: you don't want surgeons learning while they're doing surgery, because

every single case is a patient whose life is at stake. Emergency medical technicians as well: you don't want them learning on the job, because it's high stakes every single time. But with some of these new technologies, both generative AI and also virtual reality, which has played a big role in this going back for years, you can create the situations to learn from in a lower-stakes environment, and they're still authentic. That's still learning by doing, still learning from people doing parts of the process.

And so that's really what I hope people get out of it: it has the potential to create learning experiences that better connect to how people really learn. I think the challenge is that, for a lot of those things, in order to really realize that, you have to get into the technical side. There are ways you can design assignments, assessments, and activities that basically boil down to: go to a generative AI tool and have this conversation with it.

And you can get a decent amount of the way there that way, but for the real potential, you have to have somewhat more custom things. So I hope it's also a case of, you know... people are sick of hearing about AI, I think, for a lot of different reasons. Some of it is because it's just been so prevalent. Some of it is due to the news articles and all the controversy around various elements of it. And some of it is because it's being used in so many places in the wrong way. One of my soapboxes is about how generative AI is really a solution looking for problems to solve.

And some of the problems people have tried to solve with generative AI were not problems to begin with, let alone problems for generative AI to solve. They really missed the authenticity of the experience. They thought that just replacing human interaction with interaction with an AI was going to scale up the value, but it loses the authenticity. So I hope people don't just surrender on that.

Eric Mazur
And you also mentioned earlier the social aspect of learning and the importance of it. I think that's something that will remain true whether or not we incorporate generative AI more in the future. So, David...

David Joyner
Yeah. Yeah.

Eric Mazur
We're unfortunately out of time here, and I think we could have gone on for quite a while longer. I really want to thank you for this thought-provoking discussion, and I would like to conclude by thanking our audience for listening and inviting everyone to return for our next episode. Maybe we can have you come back when edition two comes out, David. On behalf of all our listeners, thank you, David, for joining us today.

David Joyner
Thank you so much for having me.

Eric Mazur
You can find David's Teacher Guide on the Routledge website as well as at Amazon and other bookstores, but I have a more exciting opportunity for all our listeners today. Starting on April 7th, 2025, you can participate in an Engage event with David and his book on Perusall. For those of you who might not be familiar with Engage events, they are author-facilitated communal reading events where, for a nominal fee, you not only get access to David's book, but you will also have an opportunity to engage with David and other like-minded educators like myself and brainstorm how to best use generative AI in education. You can secure a spot at this upcoming Engage event by going to perusall.com slash engage. To find our Social Learning Amplified podcast and more, go to perusall.com slash social learning amplified, all one word. Subscribe to find out about our upcoming episodes, and I hope to welcome you back on a future episode.
