
Is AI the Beginning or End of Learning? | TAPP 131

by Kevin Patton

Is AI the Beginning or End of Learning?

TAPP Radio Episode 131


Episode | Quick Take

In Episode 131, Kevin Patton discusses the use of ChatGPT and other chatbots and artificial intelligence (AI) in teaching and learning. We learn what’s going on, what to be concerned about, and what to look forward to. And how to keep breathing.

  • 0:00:00 | Introduction
  • 0:00:53 | What’s a Chatbot and Why Should I Care?
  • 0:08:44 | Sponsored by AAA
  • 0:09:51 | Kevin Asks ChatGPT Some Questions
  • 0:21:25 | Sponsored by HAPI
  • 0:22:01 | Is ChatGPT Amazing? Is It Accurate?
  • 0:37:23 | Sponsored by HAPS
  • 0:38:11 | Arms Race or Tool Box?
  • 0:46:55 | Calculators, Typewriters, and Grammarly
  • 0:58:36 | Cool Tools and Alternative Assessments
  • 1:06:20 | Worry?
  • 1:13:47 | Staying Connected



Episode | Show Notes

Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks. (Stephen Hawking)

 

What’s a Chatbot and Why Should I Care?

8 minutes

ChatGPT is a very hot topic of conversation among anatomy and physiology faculty. It’s a chatbot. But what IS a chatbot? Why is the use of chatbots by students a concern for learning and academic integrity? And are we in an arms race?

Please rate & review The A&P Professor—it helps others decide whether to give us a try! 😁


Sponsored by AAA

63 seconds

A searchable transcript for this episode, as well as the captioned audiogram of this episode, are sponsored by the American Association for Anatomy (AAA) at anatomy.org.

Searchable transcript

Captioned audiogram 

Artificial intelligence and clinical anatomical education: Promises and perils (descriptive article by Michelle Lazarus et al. in Anatomical Sciences Education) AandP.info/p0q

Don’t forget—HAPS members get a deep discount on AAA membership!


Kevin Asks ChatGPT Some Questions

11.5 minutes

Kevin asks questions of ChatGPT, and it generates some answers. Can you tell that it’s not human? Listen and find out!

 

Sponsored by HAPI Online Graduate Program

32 seconds

The Master of Science in Human Anatomy & Physiology Instruction—the MS-HAPI—is a graduate program for A&P teachers, especially for those who already have a graduate/professional degree. A combination of science courses (enough to qualify you to teach at the college level) and courses in contemporary instructional practice, this program helps you be your best in both on-campus and remote teaching. Kevin Patton is a faculty member in this program at Northeast College of Health Sciences. Check it out!

northeastcollege.edu/hapi


Is ChatGPT Amazing? Is It Accurate?

15.5 minutes

Okay, what is it? Did ChatGPT do a good job of answering Kevin’s interview questions? Can you tell it’s not human? How might this work if a student used ChatGPT to generate content for exam answers or assignments?

  • Mock Interview About Podcasts with Kevin Patton and ChatGPT
    • I have not been able to post both the transcript and this mock interview as a separate bonus item in the TAPPapp, as I’d thought I’d be able to do. The link above (also available in the TAPPapp) is the best way to access the mock interview.
  • Microsoft’s AI Program Can Clone Your Voice From a 3-Second Audio Clip (article from PC Magazine; this is scary) AandP.info/vdy
  • Opinion | Human This Christmas (interesting take on the human aspect of what’s going on with ChatGPT from The New York Times) AandP.info/6th
  • Abstracts written by ChatGPT fool scientists (article from Nature discusses using AI bots for writing abstracts) AandP.info/5s0
  • OpenAI begins piloting ChatGPT Professional, a premium version of its viral chatbot (I told you so) AandP.info/kkh
  • ChatGPT Advice Academics Can Use Now (article from Inside Higher Ed) AandP.info/fef
  • Truce Be Told: Just a few years after banning Wikipedia, some educators are starting to make peace with the popular online encyclopedia that anyone can write and edit (from Harvard’s Ed. Magazine) AandP.info/ztp
  • Wikipedia, Once Shunned, Now Embraced in the Classroom (article from Inside Higher Ed) AandP.info/97v

 

Sponsored by HAPS

45 seconds

The Human Anatomy & Physiology Society (HAPS) is a sponsor of this podcast. You can show your appreciation for their support by clicking the link below and checking out the many resources and benefits you’ll find there. Watch for virtual town hall meetings and upcoming regional meetings!

Human Anatomy & Physiology Society

theAPprofessor.org/haps


Arms Race or Tool Box?

9 minutes

What are some ways we can meet the challenges and opportunities offered by chatbots?

 

Calculators, Typewriters, and Grammarly

12 minutes

Kevin discusses some ideas from past episodes, an episode of the Teaching in Higher Ed podcast, and other sources.

  • How Artificial Intelligence is Impacting Higher Education (Cynthia Alby, coauthor of Learning That Matters discusses how artificial intelligence [like ChatGPT] is impacting higher education on episode 448 of the Teaching in Higher Ed podcast hosted by Bonni Stachowiak) AandP.info/mal
  • Teaching Writing in an Age of AI (John Warner explains that when we write, we are both expressing and exploring an idea) AandP.info/5ok
  • ChatGPT: Understanding the new landscape and short-term solutions (Tips and insights compiled by Cynthia Alby, Co-Author of Learning That Matters: A Field Guide to Course Design for Transformative Education) AandP.info/1ua
  • Artificial intelligence may improve accuracy of gestational age estimation (article on using AI in medicine; one of a bazillion) AandP.info/vm2
  • The nail in the coffin: How AI could be the impetus to reimagine education (article by Cynthia Alby) AandP.info/adz
  • ChatGPT Zotero group (public group in Zotero with shared list of ChatGPT references) AandP.info/9x6
  • Update Your Course Syllabus for chatGPT (ideas for staying ahead of the game) AandP.info/k9q
  • Teaching: Will ChatGPT Change the Way You Teach (column from the Teaching newsletter in the Chronicle of Higher Education) AandP.info/ra0
  • What are We Doing About AI Essays? (very insightful article from Faculty Focus) AandP.info/kc3
  • Grammarly (automatic grammar/style checker) app.grammarly.com/
  • Speechelo (AI Text To Voice Tool) AandP.info/sk4

 

Cool Tools and Alternative Assessments

8 minutes

We also explore some ways AI can be used appropriately in teaching and learning. And there’s a comment or two about looking for opportunities to experiment with alternative grading, ungrading, and authentic assessments.

 

Worry?

7.5 minutes

What? Me worry? Not much, when it comes to chatbots used by students. There are even things I can do to dial back my worry when I recognize that it’s not appropriate.

  • Clouded Leopard Found at Dallas Zoo After ‘Suspicious’ Tear in Enclosure (mentioned in this segment) AandP.info/sw6
  • Why we are hard-wired to worry, and what we can do to calm down (article mentioned in this segment) AandP.info/auz
  • SUN TAI CHI CHUAN KUAN-STYLE FOR BEGINNERS – Section I – Preview (Kevin’s tai chi teachers) youtu.be/Y59gqWHntkU

 

People

Contributors: Terry Thompson, ChatGPT

Production: Aileen Park (announcer), Andrés Rodriguez (theme composer,  recording artist), Rev.com team (transcription), Kevin Patton (writer, editor, producer, host)

Need help accessing resources locked behind a paywall?
Check out this advice from Episode 32 to get what you need!

Episode | Transcript

The A&P Professor podcast (TAPP radio) episodes are made for listening, not reading. This transcript is provided for your convenience, but hey, it’s just not possible to capture the emphasis and dramatic delivery of the audio version. Or the cool theme music.  Or laughs and snorts. And because it’s generated by a combo of machine and human transcription, it may not be exactly right. So I strongly recommend listening by clicking the audio player provided.

This searchable transcript is supported by the
American Association for Anatomy.
I'm a member—maybe you should be one, too!



Introduction

Kevin Patton (00:00:00):
Physicist Stephen Hawking once wrote, “Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last unless we learn how to avoid the risks.”

Aileen Park (00:00:20):
Welcome to The A&P Professor. A few minutes to focus on teaching human anatomy and physiology with a veteran educator and teaching mentor, your host, Kevin Patton.

Kevin Patton (00:00:35):
In this episode, I chat about chatbots and artificial intelligence in teaching and learning, and we learn how not to panic.

What’s a Chatbot and Why Should I Care?

(00:00:53):
Weeks and weeks ago, I ran across a couple of media items about students using chatbots of various levels of sophistication to help them with their tests and assignments, often in unethical ways. What is a chatbot? Well, IBM defines a chatbot as a computer program that uses artificial intelligence, AI, and natural language processing, NLP, to understand customer questions and automate responses to them, simulating human conversation.

(00:01:36):
Because I thought this chatbot thing might become a trend, I added it to my list of ideas for my annual psychic predictions for the world of teaching and learning human anatomy and physiology. But then I would see another story about this, and then another and another. And well, it soon got to where there was nowhere I could look without seeing something about these chatbots, or AI, or robots taking over higher education. To borrow the phrase of actor George Takei, “Oh my.”

(00:02:21):
Oh man, by bringing it up here, I guess I’m jumping on that bandwagon. I know. I mean, I don’t want to be left out of a big important debate that all my friends are jumping into, do I? I thought, well, I’ll just spend 30 seconds on it in my list of 2023 predictions, which was supposed to be one of the segments of this episode, which was supposed to be my annual debriefing episode. That’s what I had scheduled in my planner.

(00:02:52):
But the more I interacted with A&P professors and textbook authors and other colleagues, the more interesting angles and applications came up in our conversations. And a whole lot of worry. Oh my. A whole lot of worry. So I put off my debriefing until the next episode, and I’ll just talk about a few things that have come up for me lately regarding this phenomenon.

(00:03:24):
And perhaps most importantly, I’m going to share why I’m not worried, and why you shouldn’t be worried either. Well, not too, too worried. Starting with a sort of spoiler, I’ll tell you that where I’m going to end up is not far from where I ended up back in a previous episode, episode 47, where I had a segment titled Teachers Versus Robots: AI In Teaching. I basically said that AI is a tool, and it’s going to change how we teach and learn, but if we focus on the human part of what we’re doing, well, then it’s going to be okay in the end, maybe even better. Even if AI gets more human-like, which is probable, we are all still humans and are still teaching humans and facilitating their growth in specific human outcomes.

(00:04:31):
A major thing that sparked the current tsunami of talk about AI in education is the recent availability of online software called ChatGPT. There’s a link to it in the show notes and episode page where you can try it out for yourself for free. At least, I think it’s still free, but I’ll circle back to that in a minute. You have to register, but you don’t need to enter a credit card or anything. But you do have to check the box that states that you’re not a robot each time you log in. Which I don’t know, I thought that was super ironic when I had to do that.

(00:05:11):
If you ask a student an open-ended question, perhaps asking them to explain the citric acid cycle, or the Krebs cycle-

Other (00:05:21):
No, no, no.

(00:05:21):
Not the Krebs cycle.

Kevin Patton (00:05:24):
Or the role of carbaminohemoglobin in the body, they can put your question into ChatGPT or another chatbot, and it can create a unique answer to that specific question. And that answer will often be, well, pretty darn good, worth full credit on your test. Not always, but often. Clearly that’s an issue for learning and for academic integrity.

(00:05:55):
The issue for learning is that if a student cannot answer the question, and relies on a chatbot to create an answer, then well, learning hasn’t happened. So why even go through the motions? It’s a waste of time and effort and a seat in the course. And that’s why it’s an integrity issue. If no learning has taken place but credit is given, and eventually a degree is issued, then academic credentials will have no value. They’ll be a lie. And in A&P, which is the foundation for clinical professions, well, that costs lives, including mine and yours, and all kinds of other people I love. So this is a huge problem for me, as I’m sure it is for you.

(00:06:49):
Of course, our initial natural reaction to this is some level of panic, right? But you and I know that it’s a long tradition among students to find newer and better and stealthier ways around the rules, shortcuts to gaining a good grade. And you and I also know that we educators do a pretty solid job of keeping up with that arms race. So now we’re facing an escalation of that arms race. And naturally that causes alarm, but we’ll figure it out. It’ll be okay, really. Let’s take a moment to breathe in. Now, breathe out very slowly. It’s going to be okay. I promise. We’re all in this together. And I have some brilliant friends, and so do you. And I’m sure that some of them will find solutions and share those solutions with you and me. Heck, there are already some solutions, and I’ll get to a few of those in a bit.

(00:08:10):
But before I do that, let’s get a better feel for the current state of artificial intelligence and related tools. Like many others have done, I took ChatGPT for a test drive. And I interviewed it as if it were a guest on this podcast. I’ll play part of that interview next, converting the chatbot text into synthesized human voice using yet another AI tool.

Sponsored by AAA

(00:08:44):
A searchable transcript and a captioned audiogram of this episode are funded by AAA, the American Association for Anatomy. One of their journals, Anatomical Sciences Education, has a recent article by Michelle Lazarus and her colleagues called Artificial Intelligence and Clinical Anatomical Education: Promises and Perils. It’s a really thoughtful look at many of the issues we’re all talking about right now. One of the things that really struck me is how the authors set out five tensions between the promises and perils of AI in anatomy education. For example, one of these tensions involves human variations, but you need to read this yourself. Check it all out at anatomy.org, or use the direct link I’ve provided in the show notes.

Kevin Asks ChatGPT Some Questions

Kevin Patton (00:09:51):
Hi ChatGPT. How are things going for you today?

ChatGPT (00:09:55):
As a language model, I do not have the ability to experience emotions or have a physical presence, so I do not have the ability to have a good or bad day. However, I am here and ready to assist you with any questions or tasks you have. How can I help you today?

Kevin Patton (00:10:13):
Well, then you won’t be offended when I say that’s not a great way to answer your first interview question on a podcast. I don’t think my listeners are going to warm up to you. And I think they’re already coming into this not liking you very much. Have you ever thought of running for political office?

ChatGPT (00:10:33):
As a language model, I am not capable of running for political office or any other real world activities, as I do not have physical form or consciousness. My purpose is to assist users in generating human-like text based on the input provided to me. However, there are use cases of AI in politics such as assist human politicians in decision-making or even predict the outcome of an election.

Kevin Patton (00:11:00):
You know what? I’m kind of sorry I asked that question. Let’s try this. Should instructors of undergraduate anatomy and physiology courses be concerned about academic dishonesty when students use ChatGPT for assignments, reports, and test items?

ChatGPT (00:11:19):
Instructors of undergraduate anatomy and physiology courses should be concerned about academic dishonesty when students use chatbots such as ChatGPT for assignments, reports, and test items. While chatbots can be useful tools for generating ideas or providing information on a topic, they should not be used as a substitute for original work, or as a means of cheating on assignments or exams. Academic dishonesty, which includes plagiarism, fabrication, and cheating, is a serious offense that can have serious consequences for both students and instructors. It is important for instructors to clearly communicate their expectations for academic integrity to their students and to establish policies and procedures to prevent and detect academic dishonesty.

(00:12:10):
One way to prevent academic dishonesty is to provide clear guidelines for the use of chatbots and other technology tools in assignments and exams. And to make it clear to students that using chatbots or other automated tools to complete assignments or cheat on exams is not acceptable. Instructors may also consider using plagiarism detection tools to check students’ work for originality, or requiring students to complete assignments or exams under supervision to prevent cheating. Overall, it is important for instructors to be proactive in addressing academic dishonesty and to establish a culture of integrity in their courses to ensure that students are learning and engaging with the material in an honest and ethical manner.

Kevin Patton (00:12:55):
What are valid predictions of what might happen over the next year in teaching college level anatomy and physiology courses?

ChatGPT (00:13:05):
It is difficult to make specific predictions about what might happen over the next year in teaching college level anatomy and physiology courses, as this will depend on a variety of factors, including the evolving COVID-19 pandemic, local and national education policies, and technological advancements. However, it is possible that some teaching methods may shift to more online or hybrid formats, as many colleges and universities have done in recent years, to adapt to the pandemic. This may involve the use of online lectures, discussions, and assessments, as well as virtual laboratory simulations and other interactive tools to help students learn and engage with the material.

(00:13:49):
It is also likely that faculty will continue to explore and adopt new technologies and teaching strategies to enhance student learning and engagement, such as using virtual reality, augmented reality, and other interactive technologies. In addition, there may be a continued emphasis on adapting course content and teaching methods to meet the needs of a diverse student body, including students with disabilities, students from underrepresented groups, and students with varying learning styles and backgrounds. Overall, it is important for instructors to stay up to date on developments in their field and to be flexible and adaptable in their teaching approach to ensure that students have the best possible learning experience.

Kevin Patton (00:14:34):
Why is carbaminohemoglobin one of the most joyous terms in human biology?

ChatGPT (00:14:41):
Carbaminohemoglobin is not generally considered to be one of the most joyous terms in human biology. Carbaminohemoglobin is a type of complex molecule that is formed when carbon dioxide, CO2, binds to the protein hemoglobin in red blood cells. It plays an important role in the body’s respiratory system as it helps to transport CO2 from the tissues to the lungs where it can be exhaled. While carbaminohemoglobin may be a fascinating and important topic of study for those interested in human biology, it is not necessarily a term that would be considered joyous in and of itself. The joy that individuals may experience while learning about carbaminohemoglobin or other scientific concepts may come from the sense of accomplishment and understanding that comes with learning and understanding new information.

Kevin Patton (00:15:36):
Are you trying to get under my skin? Because you are. Okay. I’ll try a different one. Why do many people cringe when asked about the Krebs cycle?

ChatGPT (00:15:51):
There are a few possible reasons why many people may cringe when asked about the Krebs cycle. Some possible explanations include, one, difficulty understanding the concept. The Krebs cycle, also known as the citric acid cycle, is a complex metabolic pathway that occurs in the mitochondria of cells. It involves a series of chemical reactions that produce energy for the cell, but these reactions can be difficult for some people to understand and remember. As a result, some people may cringe when asked about the Krebs cycle because they are unsure of how to explain it or because they have struggled to grasp the concept in the past.

(00:16:34):
Two, fear of being tested on the material. For some people, the thought of being tested on the Krebs cycle or other scientific concepts may be intimidating or stressful. This fear of being tested or evaluated on the material may cause them to cringe when asked about the Krebs cycle or other scientific topics.

(00:16:56):
Three, negative associations with past learning experiences. Some people may associate negative experiences, such as feeling overwhelmed or frustrated while learning about the Krebs cycle, with the concept itself. This can lead them to cringe when asked about the topic, even if they have a good understanding of it.

(00:17:17):
Overall, it is important to recognize that learning can be challenging, and that it is normal to feel intimidated or unsure when faced with new or complex concepts. By seeking help when needed, staying positive, and being persistent in your learning efforts, you can overcome any difficulties and gain a deeper understanding of the material.

Kevin Patton (00:17:39):
All right. Let’s change gears. Can you recommend a specific tool for determining whether work submitted by an undergraduate college student has been generated by ChatGPT?

ChatGPT (00:17:53):
There are several tools that you can use to determine whether work submitted by an undergraduate college student has been generated by ChatGPT or other chatbots. Some options include plagiarism detection software. Many plagiarism detection tools such as Turnitin or PlagScan can help you identify instances of copied or paraphrased content that may have been generated by a chatbot. These tools work by comparing the submitted work to a database of other texts and identifying any significant overlaps.

(00:18:28):
Manually reviewing the work. Another option is to manually review the submitted work and look for any signs that it may have been generated by a chatbot. For example, you might look for repetitive or formulaic language, or for text that seems disjointed or unrelated to the topic at hand.

(00:18:47):
Asking the student about their work process. If you have concerns about the originality of a student’s work, you may also consider asking the student about their work process, and how they came up with their ideas. This can help you determine whether the student was actively engaged in the learning process, or whether they may have relied on a chatbot or other automated tool to complete the assignment.

(00:19:13):
Ultimately, it is important to be proactive in addressing concerns about academic dishonesty and to establish clear guidelines for the use of chat bots and other technology tools in your courses. By clearly communicating your expectations and taking steps to prevent and detect academic dishonesty, you can help ensure that your students are learning and engaging with the material in an honest and ethical manner.

Kevin Patton (00:19:39):
Will challenges in managing academic dishonesty get better or worse in 2023?

ChatGPT (00:19:47):
It is difficult to predict with certainty whether challenges in managing academic dishonesty will get better or worse in 2023. This will depend on a variety of factors, including the availability and use of technology tools and resources, the effectiveness of institutions’ policies and procedures for preventing and detecting academic dishonesty, and the level of commitment to academic integrity among students and faculty.

(00:20:14):
That being said, it is likely that advances in technology will continue to present new challenges and opportunities for managing academic dishonesty. As technology continues to evolve and new tools and resources become available, it will be important for institutions and instructors to stay up to date on these developments and to adapt their policies and practices accordingly.

(00:20:38):
To help mitigate challenges in managing academic dishonesty, it is important for institutions and instructors to be proactive in addressing the issue and to establish clear expectations for academic integrity. This may involve implementing policies and procedures for preventing and detecting academic dishonesty, providing resources and support for students to help them understand the importance of academic integrity, and taking appropriate action when instances of academic dishonesty are identified. Overall, it is essential for institutions and instructors to remain vigilant and proactive in addressing academic dishonesty to ensure that students are learning and engaging with the material in an honest and ethical manner.
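
Before we move on, a quick aside: that “comparing the submitted work to a database of other texts and identifying any significant overlaps” idea that ChatGPT described a couple of answers back is easier to picture with a toy example. Here’s a minimal sketch of the general overlap idea in Python, counting shared five-word phrases between two texts. It’s my own illustration, not how Turnitin, PlagScan, or any other real product actually works.

```python
# Toy overlap detector: counts shared word 5-grams between a submission
# and one reference text. Illustrates the general idea only; commercial
# tools compare against huge databases and use far smarter matching.

def ngrams(text, n=5):
    """All n-word sequences in the text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission, reference):
    """Fraction of the submission's 5-grams that also appear in the reference."""
    sub = ngrams(submission)
    if not sub:
        return 0.0
    return len(sub & ngrams(reference)) / len(sub)

a = "the citric acid cycle is a series of chemical reactions that produce energy"
b = "the krebs cycle is a series of chemical reactions that produce energy for the cell"
print(f"{overlap_score(a, b):.0%} of the first text's 5-grams appear in the second")
```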

Sponsored by HAPI

Kevin Patton (00:21:26):
The free distribution of this podcast is sponsored by the Master of Science in Human Anatomy and Physiology Instruction, the HAPI degree. I’m on the faculty of this online graduate program at Northeast College of Health Sciences, and I’d love to have you as a student in my courses. Check it out at northeastcollege.edu/hapi. That’s H-A-P-I. Or click the link in the show notes or episode page.

Is ChatGPT Amazing? Is It Accurate?

Kevin Patton (00:22:01):
Okay. In that mock interview, you didn’t hear any of the witty banter that you usually hear in our interviews, or sighs or gasps, or laughs and snorts. Why? Because it’s a robot. It’s not human. Also, it doesn’t ask me questions, or break into my questions, or ask for clarification, or have anything resembling insightful, deep, natural human conversation. Especially informal conversation, which is what you’d want for an interview, right? I heard somebody compare it to a ping pong match where conversation just goes back and forth, back and forth. Ping, pong, ping, pong.

(00:22:54):
It’s not so obvious here because ChatGPT really is pretty darn good, but those chatbots we often get instead of a live person when we’re looking for customer support online or on the phone, often don’t really answer our question. They pick up on key phrases in our question and answer a common question about that topic, hoping that will satisfy us. Which in my experience, it rarely does because I’m never asking that generic question. I already read the FAQs. So I have some other question that goes beyond that, which is why I click the chat thing because often those are marked as being live chat. Yeah, live with a robot, it turns out. So I’m getting baited and hooked by that less and less. I’m getting baited, but not hooked so much as I used to.
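
If you’ve ever wondered why those customer-service bots keep doing that, a toy example makes it obvious. Here’s a bare-bones keyword-matching “chatbot” in Python, the old-school style I’m describing here, which is nothing like the huge neural language model behind ChatGPT. The FAQ entries are made up for illustration.

```python
# A toy FAQ "chatbot" that matches key phrases, not meaning. Purely
# illustrative; modern systems like ChatGPT use large neural language
# models, not keyword lookup tables like this.

FAQ = {
    "password": "To reset your password, click 'Forgot password' on the login page.",
    "shipping": "Standard shipping takes 5-7 business days.",
    "refund": "Refunds are processed within 10 business days.",
}

def faq_bot(question):
    """Return the canned answer for the first keyword found in the question."""
    q = question.lower()
    for keyword, answer in FAQ.items():
        if keyword in q:
            return answer  # answers the generic topic, not your actual question
    return "I'm sorry, I didn't understand. Could you rephrase?"

# The bot latches onto "refund" and ignores everything specific:
print(faq_bot("My refund arrived but it was the wrong amount, so who do I call?"))
```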

(00:23:53):
Oh, how many times have you and I seen students answer a question that way, though? They pick out a term that they recognize, and then riff on that, hoping they’ll get at least partial credit. They do that when they don’t know the answer and don’t want to figure it out. If a student copies a chatbot response into their assignment, that does the very same thing. And that’s even worse, I think.

(00:24:24):
Did you notice that the answers in that mock interview were a bit repetitive within themselves? And some seemed to be padded with information that wasn’t asked for. For example, the carbaminohemoglobin question got a gratuitous explanation of what carbaminohemoglobin is. Gosh, I love saying that word, carbaminohemoglobin. It gives me a feeling of joy. I’m sorry. It does. Putting that definition in was clearly part of a programmed formula, a formula for how to respond to any question: put a definition in there right after the opening bit. But that definition didn’t really add anything to the answer, did it? The question wasn’t about what carbaminohemoglobin is. It was really more about the joy it produces. And yeah, I guess a definition could work in there, but the way it got slid in there, I mean, it clearly just seemed to be part of a formula. At least to me, it did.

(00:25:32):
And once you start really looking at these chatbot answers, you start to recognize that algorithm that’s built in there, or at least the fact that there is an algorithm built in there. To me, when that happens, it feels like when students who lack confidence in their answer state all kinds of facts, hoping that something will stick and get them full credit for their answer. Sort of as if they’re thinking, I can’t answer your question directly, but here’s something I know about that topic. And in that mock interview, there were no contractions in any of the answers. I don’t call that natural language. That doesn’t sound human to me. Even when we’re speaking at a formal occasion, we still manage to get a lot of informal language devices in there. Even if they don’t jump out at people, they’re in there.

(00:26:29):
Even so, chatbots are getting better and better all the time. And this tool is good enough to fool us at least some of the time. The answers in the mock interview you just heard were actually pretty good. Maybe a bit off in a few places and missing some pieces that could have been included and weren’t. But overall, not too bad.

(00:26:56):
However, I tried something else just for fun. I asked which podcast I should listen to if I want science updates and teaching tips for anatomy and physiology faculty. That’s sort of like looking yourself up on Google, right? So that’s what I did. Okay. What’s a good podcast to listen to? And you know what? It gave me a podcast name and a host that seemed to be a fabricated guess. No such podcast exists. And the host was an A&P instructor, but didn’t seem to have a podcast, never had a podcast as far as I could tell. Not a podcast that was ever available to the public at least.

(00:27:43):
And in follow-up questions, ChatGPT admitted it was wrong several times in a row, because I kept asking questions to try to get it to give me the right answer. Sort of like when we’re dealing with a student or a group of students, and we ask more and more pointed questions, trying to lead them, point them exactly in the direction of where the right answer is. I was trying to do that for ChatGPT: oh, yeah, yeah, yeah, that was wrong. Well, here’s another answer. Oh, oh, oh, wait a minute. That one was wrong. Of course it didn’t say it that way, because that would be informal natural language. Instead it was like, “I apologize for getting that one wrong.” Blah, blah, blah. And so again, it sounded like a robot to me.

(00:28:28):
ChatGPT finally did give me a list of podcasts that might be useful for A&P faculty, none of which, though, had that as their specific mission. They were just general sciencey kinds of podcasts, or education-type podcasts, not really specific to A&P and not necessarily aimed at that group. Some of them were actually fairly close, but not really on the money. At least one in the list was listed with the wrong host. It’s The Medical Mnemonist podcast, and I know the real host, Chase DiMarco. I know him personally. He was a guest on this podcast back in episode 64. And I’ve listened to that podcast, so I knew that the host it was listing was not right. I even double-checked to see, wait a minute, I haven’t heard it lately, so maybe they changed it up and have a new host. No, no, no. It just was wrong on that part. But at least it got it half right, I guess. So that was better than the first couple of times I tried to ask that question.

(00:29:36):
But there were some obvious podcasts missing, not just The A&P Professor podcast, which you’re listening to now, but, for example, Succeed in A&P. That’s another podcast, a very good podcast, probably the only one out there that’s really specifically for anatomy and physiology faculty. And then there are some others that are interesting to A&P faculty, that you probably listen to and I listen to, and those weren’t listed. But when I asked specifically about The A&P Professor podcast, it gave me a pretty solid endorsement. So that made me feel a bit better about my podcast, but it made me feel far less confident in any information I’d get from ChatGPT. I mean, it was really a glowing recommendation. I’m thinking, holy smoke, you’re making this up. This podcast isn’t that well regarded. Come on.

(00:30:37):
It’s a big gamble for anyone using it as their sole source, or even as a background source, without fact checking every single element of a response, at least based on that experience I had with ChatGPT. I would not trust it, at least not until it gets way better than it is now. So that first round in that mock interview, that gave me some maybe false confidence in how accurate it could be. When I went back and asked about these podcasts, whoa, man, it fell flat on its face. Now, you can read that mock interview about the podcast by downloading it from the show notes or episode page. Or if you’re using the free mobile app, the TAPPapp, it’s right there in the app as an extra bonus content for this episode. And you can get that free TAPPapp by searching for the A&P Professor, or searching for Kevin Patton in your device’s app store.

(00:31:34):
Now, right now ChatGPT is using information from a dataset that was closed in 2021, so no new information after that cutoff point. It’s not connected to the entire internet, and it’s not staying up to date. ChatGPT is a test product, not the final product. So I think we can count on all the flaws I just mentioned being corrected to some degree going forward, and on other, better chatbots being created in the near future.

(00:32:09):
Now, before we move on to some strategies for dealing with chatbots and artificial intelligence tools in general, I just want to say that the discussions I’ve been having and have been listening to are all sounding more and more like the discussion we were having years ago about Wikipedia. Am I right? If you were teaching when Wikipedia first became well known, there was a lot of fretting about its accuracy and its reliability. A lot of professors and institutions banned its use entirely for students. Some still do. We fretted about students simply copying and pasting information from Wikipedia, or simply ignoring other sources and using only Wikipedia. And some students did that, and some students still do that.

(00:33:10):
But as Wikipedia evolved into something much more accurate and reliable, not totally accurate and reliable, but much more than it was, and as we eventually understood how to use it as a tool more effectively, we’re really not so worried about it anymore. If any of us want to find a quick rundown of something we never heard of, or something we used to know but forgot about, like what was that philosophical school of thought that has just come up in a paper I’m reading now? I don’t remember enough from my philosophy courses, or my reading, or wherever it was that I first learned that. Or maybe it’s some completely new thing that has recently been invented or discovered or formulated or whatever, and I want to find out about it. Well, any of us can just hop on Wikipedia and get a pretty good idea of what that thing is all about.

(00:34:14):
I can go back to that philosophical school of thought, and think, oh yeah, yeah, transcendentalism. Oh, I remember what that is now. We talked a lot about that in philosophy class. How could I have forgotten? So it all comes back. Or that new thing about chatbots or artificial intelligence, which is relatively new and I don’t know much about. So I’m going to go in there and check it out, and get an overarching idea. I’m not going to learn everything about it, and there might even be a mistake or two in there, or something that’s out of date, but I’ll get a pretty solid, actually probably a very solid idea of what’s going on. And if we want to, we can use Wikipedia to not only get that sort of big view, that overview, we can follow some of their citations in those Wikipedia articles. Follow them to the primary literature where there we can start to dive into a legitimate scholarly review of accurate source information.

(00:35:16):
So what I’m saying is we’ve learned that Wikipedia is a good place to figure out where to start. If we don’t know where to start, then yeah, it’s going to get us going. So that’s its best use. Sometimes that’s the hardest part of answering an exam question, or writing a paper for that course, or organizing a podcast episode. I didn’t use it for this episode, by the way. But I have used and continue to use Wikipedia occasionally when I’m like, I don’t know what I want to say about this topic for this application or that application. And so let me look it up on Wikipedia and see where they’re going with it, and maybe that’ll spark an idea. I’m not necessarily going to go in the same direction. I usually don’t because, well, that’s just my personality. Everybody’s going this way. I’m going to say, yeah, it’s pretty good, but I’m going to veer off just a little bit. I’m going to walk alongside where you’re going and not go exactly where you’re going.

(00:36:16):
And so that’s really a spark for creativity. It’s not something that cancels out creativity, and isn’t that how we originally thought about Wikipedia, that this is going to eliminate learning, it’s going to reduce learning, it’s going to make students less creative. And oh my gosh, if a scholar or some other professional person ever used Wikipedia, that would be horrifying. But now everybody uses Wikipedia. And a lot of people, a lot of really high powered professionals are contributing to Wikipedia as well. So it’s not surprising that more and more courses and more and more institutions are now encouraging the use of Wikipedia for its proper applications. We’ve turned around quite a bit on that one, so just keep that story in mind. I’ll be right back with some pitfalls, problems, strategies, and applications for ChatGPT and similar AI tools in just a moment.

Sponsored by HAPS

(00:37:23):
Marketing support for this podcast is provided by HAPS, the Human Anatomy and Physiology Society, with a mission of promoting excellence in the teaching of human anatomy and physiology. Hey, you ought to check out the HAPS Educator, where there are some really interesting articles that can help you as an A&P instructor. For example, the current edition reveals the results of a study comparing student preferences for presentation format before and after pandemic teaching. This and other issues are available online at theAPprofessor.org/haps. That’s H-A-P-S.

Arms Race or Tool Box?

Kevin Patton (00:38:11):
Just looking at it from the arms race angle, where we as educators are trying to find defenses against the use of ChatGPT and similar tools for cheating or other forms of academic dishonesty, well, there are some things to try. For example, a student made an app called GPTZero that he claims can detect the output of ChatGPT. Now, I was going to try that out. You know how I am. Okay, well, let me try it and see if it works. I was going to check the answers that ChatGPT gave me for the interview. I was going to try some of those answers. But you know what? When I went to GPTZero, I got put on a waiting list. It’s not open to everybody yet. It’s still being worked on.
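
While we’re waiting for access, we can at least picture the kind of statistical signal a detector like that might look at. GPTZero’s creator has described scoring text on things like “burstiness,” meaning how much sentence length and structure vary; humans tend to vary a lot, and early-2023 AI text tended to be more uniform. Here’s a rough Python toy of that one signal. To be clear, this is not GPTZero’s actual code or method, and real detectors use much more than this.

```python
# Toy "burstiness" check: humans tend to vary sentence length a lot,
# while early AI text tended to be more uniform. A rough illustration
# of one signal GPTZero's creator has described, NOT GPTZero's actual
# method or thresholds.

import re
import statistics

def burstiness(text):
    """Standard deviation of sentence lengths, measured in words."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.stdev(lengths) if len(lengths) > 1 else 0.0

human = ("Not the Krebs cycle. Please. Anything but that long, looping "
         "parade of intermediates we all memorized once and promptly forgot.")
bot = ("The Krebs cycle is a metabolic pathway. It occurs in the mitochondria "
       "of cells. It produces energy for the cell.")
print(burstiness(human), ">", burstiness(bot))  # wider spread suggests human
```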

(00:39:07):
But as I was trying to get in, I had this interesting thought: here’s this college student, university student, who has developed this tool and is apparently still working hard on it. I wouldn’t be surprised if he gets several other university students working on it. And I mean, shouldn’t they be studying for their courses? So they’re putting time and energy and creativity into creating this tool. Don’t we often lament all the effort that goes into cheating and plagiarism and getting around the rules and so on? If you put all that energy into learning, it would’ve been a lot easier just to learn the stuff so you wouldn’t have to do any of that.

(00:39:50):
But as I was having that thought, I had another thought. So I’ve got this meta-meta thing going on about this whole cheating business. Yeah, okay, they’re putting a lot of time and energy into creating GPTZero and other kinds of things, time they probably should set aside for their courses. But you know what? They’re learning a lot. And they’re being creative, and they’re going to be better off having done those projects than not. So I don’t know, maybe there is some kind of silver lining to all that time and effort and problem solving that students sometimes do to be dishonest. I’m not encouraging that, because I’m not a fan of dishonesty. I’m all for ethical behavior, if for no other reason than that it’s part of being a professional, and that’s what we’re teaching students. But there are other reasons why I embrace ethical behavior as well. I mean, sometimes that problem-solving thing is, in and of itself, not such a bad thing.

(00:40:57):
So another strategy in this arms race that we can employ against the cheating angle: OpenAI, which is the organization responsible for ChatGPT, says that it’s working on some anti-plagiarism tools that are going to be built into ChatGPT, including possibly a digital watermark that will alert those who have the right software installed, or a plugin installed in their browser or in some other piece of software. It’s going to alert them that the source of whatever they’re looking at, whatever they put into that software, was ChatGPT. So it’s a way for ChatGPT to identify its own product in a sort of secretive, hidden way, so that students are not necessarily going to be thinking about the fact that, yeah, we can tell.
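
OpenAI hasn’t published how its watermark would actually work, but one scheme that’s been discussed openly in the research community gives the flavor: the generator secretly favors words whose hash, under a secret key, lands on a “green list,” and a detector that knows the key checks whether a suspiciously large fraction of the words are green. Here’s a toy Python sketch of just the detection side; the key, the hashing scheme, and everything else here are made up for illustration.

```python
# Toy watermark detector, illustrating one publicly discussed scheme,
# NOT OpenAI's actual (unpublished) design. A watermarking generator
# would secretly favor "green" words; the detector, knowing the secret
# key, checks whether a suspicious fraction of the words are green.

import hashlib

SECRET_KEY = "demo-key"  # hypothetical shared secret

def is_green(word):
    """Pseudorandomly assign each word to the green list (about half by chance)."""
    digest = hashlib.sha256((SECRET_KEY + word.lower()).encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text):
    words = text.split()
    return sum(is_green(w) for w in words) / len(words) if words else 0.0

# Unwatermarked text hovers near 0.5; watermarked text would sit well above.
print(green_fraction("Carbaminohemoglobin is one of the most joyous terms in biology"))
```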

(00:41:53):
Now, so far, I think the more we see ChatGPT output showing up in work that’s supposed to be coming from students, the more likely it is we’re going to develop our own spidey senses, where we’re going to be able to spot it easily and without too much effort. But to back that up, there will be this automated thing already built in there that is going to help us with that as well. And there have been all kinds of studies showing that AI, when used in concert with human observation and reasoning, does tend to work pretty well. Either one by itself, eh, has some flaws, but working together they can be pretty powerful.

(00:42:34):
So maybe that’s what we’re going to start seeing, but they’re working on it. So you know what? There are dozens, maybe hundreds, of geniuses working on different defenses like those digital watermarks. They’re working on them at this very moment. So really, I’m not too worried about this whole thing. It’s going to work out. I know it is. And maybe that’s because I kind of buy into the transcendentalist philosophy of… Let’s not go down that road.

(00:43:07):
Another thing to consider is that this particular chatbot is a test version. At any moment, OpenAI has said, they’re going to start charging for it. It’s going to be Chatbot, wait a minute, Chatbot… oh, they ought to change the name of it. I’m having a hard time with it. ChatGPT Professional. They already announced they’re going to start charging for it. And you know what? I think that’s going to greatly reduce its use by students trying to pull one over on us. But still, it’s going to be like Whack-a-Mole, isn’t it? There’ll be the next AI chatbot, or report writer, or test answerer, popping up before too long. There are a bunch of geniuses working on that too, I’m sure of it. They’re supposed to be studying for their courses, but they’re working on new chatbots, I bet. Some of them are.

(00:44:04):
However, besides any built-in anti-cheating mechanisms that they do end up putting into these things, there’s another set of strategies that I’ve been talking about for a while, as have many others. Even ChatGPT brought them up in my interview. For example, let’s have an ongoing and deep and transparent talk about academic integrity. Let’s build a culture of integrity in our courses. I’ve been saying that for a long, long time. And it’s not just me, it’s you too. I know it. And it’s many others. If chatbots or other AI tools can be used to mimic true learning, then, well, let’s talk about that with the class. Look, there are these tools, and they can make it seem as if you’ve learned something when you haven’t really.

(00:45:02):
Let’s try to get our students to a place where they understand integrity and understand the value of integrity in the health professions. And if we do that, that may be time well spent, effort well spent. Because, I don’t know, that might be a much more important learning outcome of my course than anything related to that darn Krebs cycle-

Other (00:45:29):
No, no, no.

(00:45:29):
Not the Krebs cycle.

Kevin Patton (00:45:32):
… or to carbaminohemoglobin. I think we also ought to check out our own reactions to this type of tool. Let’s ask ourselves, are they good or are they bad in the context of teaching and learning? Well, is a hammer good or is it bad? If I use it to help my neighbor build her house, well then that hammer is good. But if somebody uses a hammer to bash in your car’s windshield, well then that hammer is bad. Not really. But I mean, we think of it as bad, right? Like any potentially dangerous tool, we need to learn how to use a hammer properly. We ought to keep it away from young children and those prone to violence. We ought to ask students not to bring a hammer to A&P class. But in the end, the hammer is just a tool. We know that chatbots can be used for cheating, including plagiarism, but it’s just a tool. And there are some ethical ways of using chatbots and other AI tools that can help learning and can help us in our professions as well, can help in the teaching part of this teaching learning cycle.

Calculators, Typewriters, and Grammarly

(00:46:55):
As you know, I’ve been around for a long, long time. Okay, I’ll give it three longs. I’ve been around for a long, long, long time. So I’m old enough to have been around when there was a similar controversy with calculators. Oh my gosh, if students start using calculators in a math class, they’re going to get lazy and they’re not going to know how to do anything. And now there are so many math classes, and other applications of mathematics in engineering and science and so on, where a calculator is a given. It’s sort of like, I can’t really perform in class because I forgot my calculator today. Teachers carry extra calculators with them for students like me who occasionally forget or misplace their calculator. I’m sure there are students all over the world being scolded for not having their calculator.

(00:47:47):
Yet, when I was a youngster, it was like, “You have a calculator? You shouldn’t even be bringing that to school.” But we learned how to use them properly. Use them for what we need to use them for, so that we can free up our energy and the computing power in our brain to focus on the actual problems that we’re trying to learn how to solve, and not so much on the computation part of it.

(00:48:11):
I remember those kinds of conversations about typewriters. If we use a typewriter and don’t write things out in longhand, isn’t that going to damage our character and damage our ability to learn? And then beyond typewriters, it was word processors. No, no, no. You need to be using a typewriter, because if you’re not able to use a typewriter, then you’re going to fail later on. And I already talked about the Wikipedia thing. These are all very similar.

(00:48:37):
Now, more recently, when this all came up, I’m thinking, you know what? I use chatbots all the time. And they’re actual artificial intelligence, at least some of them are. For example, I sometimes use Grammarly. Grammarly is a chatbot that I subscribe to. What it does is go beyond the grammar checker that’s already built into your word processor. And a lot of us rely on that as well. The spell checker, grammar checker, built-in thesaurus, built-in dictionary, all that stuff is usually in our word processor.

(00:49:07):
But Grammarly goes a step beyond that. It’s a little bit more intelligent, and looks at things a different way. And what I like about it is that it’s sort of like having my own built-in copy editor. Grammarly is a chatbot, or at least AI, that will go in there and say, “Do you want to say this, or do you want to say this instead? It’s better if you say this.” And so I can click on that other suggestion and it’ll just replace it. So is that cheating when I do that? No. All kinds of professional writers do that. You’re expected to be able to do it. Do I ask Grammarly to write an essay on transcendental philosophy? No. That would be ridiculous. It’s not capable of doing that anyway. Maybe ChatGPT can. I’ll have to try that sometime. But it’s going to be like Grammarly’s substitutions. Sometimes Grammarly’s substitutions just don’t get what I was trying to say. No, don’t do that. That’s not what I want to say.

(00:50:06):
Human copy editors, they sometimes do that, but not usually. They usually can kind of tell what it is you’re trying to say. But occasionally I have to reject what the copy editor changed and change it back to the way it was. Or maybe say, oh, maybe it wasn’t too clear. Maybe I better find yet another way, a better way than either of those to do that. And you know what? I love copy editors. I love the copy editing phase where I get my manuscript back, and I look at it and I get all these recommendations. And you know what? I love that about Grammarly too.

(00:50:39):
And there’s another one I’ve been using in addition to Grammarly and my copy editors. It’s called QuillBot. Actually, I use it for this podcast. If you go to any of the recent episode pages and scroll all the way down to the bottom, there’s a little citation there. If you ever want to mention something you learned in one of the episodes in another work and you want to cite it, thank you very much for doing that, if that’s what you want to do. Oh, man, it would just warm my heart if I found out you were doing that. But if you did that, well, the citation is already built for you down at the bottom. Because one of QuillBot’s several AI tools works like this: you just put in a URL, or whatever other information you have, and it’ll figure out how to turn it into an MLA citation or an APA-style citation. Any of the major styles, it’ll do that.
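
That kind of citation builder is less magical than it sounds, at least the formatting half of it. Here’s a bare-bones Python sketch of the idea: take structured metadata and emit an APA-style reference string. The example data is invented, and the hard part that a tool like QuillBot actually does, digging the metadata out of an arbitrary URL, isn’t shown.

```python
# Bare-bones APA-style citation formatter. This is only the easy half of
# what a tool like QuillBot does; extracting metadata from an arbitrary
# web page is the hard half. All example data below is invented.

def apa_citation(author, year, title, site, url):
    """Assemble one APA-style reference string from ready-made metadata."""
    return f"{author} ({year}). {title}. {site}. {url}"

print(apa_citation(
    author="Patton, K.",
    year=2023,
    title="Is AI the beginning or end of learning?",
    site="The A&P Professor",
    url="https://example.com/episode-131",  # placeholder URL
))
```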

(00:51:32):
But it also has a little plugin that does the Grammarly sort of thing where it’ll check your grammar or check your spelling and do that sort of thing. Sometimes I get annoyed with them and like, no, no, no, just stop correcting me for a minute. I got to get this done, and I think I’m okay for this. It’s just an email to somebody. It’s not gotta be formal. It’s not gotta be absolutely correct. And see what I just did, I used the term gotta. That’s not even a word, gotta. G-O-T-T-A. Gotta. But you know what I’m saying? You know. I mean, we do that all the time. And sometimes in an email, I want to be super informal. I mean, that’s a tone I want to impart. Yeah, these chatbots,

(00:52:15):
They’re going to have to learn how to do those sorts of things. But my point is that I use them as tools. A lot of us use those as tools, and there are many other kinds of tools. Let’s get off it. I mean, I could go down that road for a long time, and we don’t have time in this podcast to do that.

(00:52:30):
I want to pull back here and talk about Cynthia Alby. I just listened to her on one of my favorite podcasts. It’s called Teaching in Higher Ed. It’s hosted by Bonni Stachowiak. And Cynthia Alby is the co-author of a fairly recent book called Learning That Matters. I can’t wait to read this book. I haven’t read it yet, but I want to read this book, Learning That Matters. And she was in a recent episode of Teaching in Higher Ed Podcast. And that episode is called How Artificial Intelligence is Impacting Higher Education.

(00:53:09):
And so they’re having this kind of conversation I said that so many people are having right now. And it was great to listen because they’re both very experienced in teaching and they’re both very knowledgeable about different strategies in teaching, and modern philosophies of how best to teach our higher ed students. So one of the things that they brought up that I think is important to think about as a strategy for how to deal with students trying to use chatbots is transparency. And that’s kind of what I’m coming back to from where I was just a few minutes ago, trying to get back on that track again.

(00:53:50):
We need to discuss with our students the ethical use of artificial intelligence in our courses, the ethical use of chatbots in our courses. They spent some time on that. You might want to listen to it because they have some different angles than I’m able to share with you right now. And there are some other sources that look at that as well. And I have a bunch of them in the show notes. I’ll talk more about that at the end of this episode.

(00:54:16):
But if we talk with students and sort of work it out with them, like, okay, do you think it would be ethical to use a chatbot to do this? And get buy-in from everybody. Like, no, that’s clearly not right. Or maybe this thing over here, yeah, we can use it for that, and this and not that. And that really gets them thinking deeply. I mean, it really takes them to a higher level of thinking, a more complex level of thinking. And we’re applying it to our subject too. So it really primes the pump for some deep thinking in the actual content of anatomy and physiology. At least I believe it does.

(00:54:54):
There are some things we can do with that to have them solve problems around it. I’ve mentioned a couple of times on this podcast this first-day activity that I do. It’s got a lot of elements to it, but one of the things I do is split the students up into small groups for most of this first-day activity. And what I give them is a set of little mini case studies, all cases about integrity in the course. If you witness a student doing this or that, what would be the best response? They’re all actual things that I’ve seen happen, or suppose probably could happen, in my class.

(00:55:38):
And so I have them discuss that. And then one person gets my answer to it, how I see that integrity issue. And they’re encouraged to call me over to their small group to defend my answer, explain my answer, and clarify my answer if they want that. So they are actually doing that sort of thing. That’s a way of doing it instead of doing it as a whole class. Because a lot of times when you do it in a small group, you get a lot more interaction, a lot more engagement from each individual student than if you do it in a big class. And when I say big class, that would be a class of about 25 students. But if you’re in a group of four, it’s much more likely you’re going to be engaged in that process.

(00:56:23):
Another thing that I think is important for us to do is to model ethical behavior in our own lives. I know a lot of us use images from the textbook, but do we actually cite them for students and say, “This is from the textbook,” so they know where we got them? And maybe explain once or twice, or at various points in the course, that we have permission to use the textbook images because we’ve adopted the textbook for their learning, so it’s okay for us to do that. And when we’re not using images from the textbook, well, where did we get them? We need to model ethical behavior in our own courses. Because our students are going to use ChatGPT to cheat if they see us cheating by using images we’re not supposed to be using, or copying things into handouts that we’re not supposed to be copying, and not citing things in a proper professional way, the way that we’re supposed to be doing, the way we expect them to do. So, ethical use of media.

(00:57:26):
What about ethical use of specimens? That’s a way to model ethical behavior. Maybe not dressing up human remains for the holidays. And you think, oh, I don’t do that. I don’t even have body donors in my course. Well, do you have a human skull? Do you have a replica of a human skull? Do you put a hat on it at holiday time, or for somebody’s birthday or something? Is that an ethical use of that specimen? I don’t know. I think we should have a transcendental philosophical discussion about that.

(00:58:01):
What about ethical use of parking spaces? I mean, there are all kinds of ways we can be ethical in our lives. You know that; I know that. But are we thinking about how we’re being ethical in front of our students and modeling that ethical behavior? Because if we’re not, then we can’t expect our students to be ethical. These are the early days of artificial intelligence and all that, so I think we’re probably worrying more than we need to. I think things are going to work out.

Cool Tools and Alternative Assessments

(00:58:36):
I also want to spend a little more time talking about some useful applications that our students can use, and that we can use in our teaching. Just a couple, because there are so many, but enough to get you and me thinking about this a little bit. One that I recently ran across is called SciSpace. It’s billed as your AI copilot to decode any research paper, the quickest way to read and understand scientific literature. Now, if you get my update newsletters a couple or three times a week, which you can get at theAPprofessor.org/updates, then you’ve already seen a story on this.

(00:59:18):
What it does is let you plug in a scientific paper when you’re having trouble with some part of it. Like, I don’t even get what the abstract is trying to tell me, the one part I should be able to understand. Paste the paper in, highlight the abstract, and it will reinterpret it in simpler language for you. So that might be a good tool not only for us teachers, especially when we’re reading a paper that’s not all that well written, or one that’s not in our area of expertise, but for our students as well. If we assign them papers, we can give them this as a tool, because they’re often very intimidated. And I suspect that the more they use the tool, the less they’ll need to. I think it’s more of a training-wheels kind of thing.

(01:00:04):
That’s what I find happening with my human copy editors, and with Grammarly and QuillBot: I learn a lot from them. I learn things like, oh, I never knew it’s supposed to be which and not that here, or that and not which there. So we can use SciSpace to help us learn how to interpret complex scientific papers. Another tool comes from my friend Terry Thompson, who you’ve heard from in various episodes; she alerted me to this thing called Hotpot. And I have links to all of this in the show notes, remember I mentioned that? Hotpot pulls together a bunch of artificial intelligence tools that can create diagrams for you, or create images, or take images you have and alter them in various ways. You’ve probably seen this on social media, where people take their avatar or profile picture and create all kinds of new pieces of art of themselves using artificial intelligence.

(01:01:14):
But we can use that for education as well. We can take that stick-figure diagram of a cell and its parts, and maybe artificial intelligence can fill it out and make it look a lot cooler, and maybe a lot more effective for what we want to do with it, than we could manage ourselves. I still like stick figures, so don’t get me wrong. But sometimes we want something more than that. Hotpot can also create badges. We’ve talked before about microcredentials, digital credentials, and using badges in our courses. Well, yeah, you’ve got to create those badges, and there are various templates you can use and so on, but artificial intelligence can do that as well in the Hotpot set of tools. It can create social media posts, memes, and different things like that, too. And I’m sure that’s not the only set of tools like that.

(01:02:09):
Another one that maybe I ought to bring up is Speechelo. I’ve been using it for a while on this podcast. When I did that interview with ChatGPT, remember, ChatGPT is a text tool; it does not talk to you. So what I did was take each of those text answers from ChatGPT, put them into Speechelo, and pick a voice. There are something like 150 voices to choose from, and I forget exactly which one I picked. I think her name is Summer. I can speed up her voice or slow it down, change her tone, and do all this stuff. She ends up sounding kind of human, though I think you could tell she probably wasn’t. Now, I could have had someone else read those answers, but I didn’t. I specifically wanted to use artificial intelligence because, well, that’s what we’re talking about. But in past episodes, when I’ve mentioned the Krebs cycle…

Other (01:03:03):
Not the Krebs cycle.

Kevin Patton (01:03:06):
Yeah. See? Those voices are all from Speechelo. I just picked several different voices and overlapped the sound files that were produced, and that’s how I made that sound effect. There are several others I’ve done in the past where the voice you heard was generated by artificial intelligence. Is that unethical? No, I don’t think so. But if I had ChatGPT write an episode, plugged that into Speechelo, and didn’t tell you it was synthesized artificially, I think I’d be crossing an ethical line. At least in my own set of ethics, I’d be crossing a line. Maybe not legally or anything, but in my own mind.
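Speechelo itself is a point-and-click tool, by the way. But if you’re curious what that kind of overlapping might look like in code, here’s a minimal sketch using the Python library pydub; the file names are hypothetical, and this is just one way to layer sound files, not how Speechelo does it.

    from pydub import AudioSegment  # pip install pydub (exporting MP3 also needs ffmpeg)

    # Load two hypothetical voice clips exported from a text-to-speech tool
    voice_a = AudioSegment.from_file("voice_one.mp3")
    voice_b = AudioSegment.from_file("voice_two.mp3")

    # Lay the second clip over the first, starting half a second in
    # (position is given in milliseconds)
    layered = voice_a.overlay(voice_b, position=500)

    # Save the layered result as a single sound-effect file
    layered.export("sound_effect.mp3", format="mp3")

Note that overlay() keeps the result the same length as the first clip, so anything in the second clip that runs past the end of the first is simply cut off.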

(01:03:55):
And we’ve heard of media outlets doing that. I think it was CNET, which publishes a lot of articles on technology, that admitted some of its articles were produced by artificial intelligence. And there’s probably a lot more of that going on than we think; we’re just not hearing about it. So maybe that’s a good ethical question to discuss in our courses.

(01:04:17):
Another thing I’m thinking about, which many others have also thought and talked about, including Cynthia Alby and Bonni Stachowiak in the Teaching in Higher Ed podcast I just mentioned, is that instead of spending a lot of our time, effort, and worry on protecting our current tests and assignments, we should take this opportunity to rethink how we assess and grade our students. Maybe now is the perfect time to start learning more about options for alternative grading, for example. Maybe we should look at alternative assessments as part of that quest for something better for our courses.

(01:05:04):
For example, consider the authentic assessment movement, where instead of giving traditional tests and exams and term papers, we have students perform their knowledge, skills, application, and problem solving in some way that is more like real life, life outside the academy. That kind of assessment doesn’t lend itself to having a robot dishonestly do the work for them. Or students can practice using robots in ways that are ethical, such as when a writer uses Grammarly to help polish their writing, or uses a chatbot answer just to get an idea of where to start with a term paper: I don’t even know what to think about this topic I was assigned, or that I chose. So you use the chatbot answer, that generates some ideas, and then you creatively, independently, and humanly go from there. So yeah, I think maybe there’s a good outcome to all this. Think about it.

Worry?

Kevin Patton (01:06:20):
Am I worried about the fact that we’ve already entered the age of robots in higher ed and we can’t go back? We can never go back. We’ve got robots now. We have artificial intelligence now. Am I worried? Not really. Hardly at all, actually. Partly that’s because, well, I’m old enough to have lived through crises before. Heck, we all lived through the sudden chaos of pandemic teaching, didn’t we? Maybe we’re not fully recovered yet, but we’re on our way, I hope. I was there way back in the olden days at the dawn of learning management systems. I was even there for the dawn of using any kind of computer or software in the classroom. PowerPoint? Nah, we didn’t have that. Nor did we have digital cameras. Oh man. And what about smartphones and tablets and earbuds? Well, there’s a pretty long list of things we didn’t have in the olden days when I first started teaching.

(01:07:30):
All those things have been used for evil and not for good by some students. But you know what? We’ve learned to cope. We’ve evolved our thinking, and we’ve evolved our doing, when it comes to teaching and learning. And we’re now using all these things mostly not with evil intent, but for the good of our students, and for the good of their eventual patients and other clients. So yeah, some aspects are concerning. And I wouldn’t be honest if I didn’t admit that when I hear some horrifying anecdote, I do reach for my metaphorical worry beads. But overall, I think all this might have a silver lining of moving our teaching strategies in a better direction.

(01:08:27):
But worrying is natural. I have a link in the show notes to an article that points out that worry is good for survival. We evolved this emotion for a reason: it keeps us alert, and therefore alive, in potentially dangerous situations. But sometimes things turn out to be not so dangerous after all. In other words, it’s not a clouded leopard that just escaped at the local zoo while nobody knows where it is. By the way, that did just happen at the Dallas Zoo, and I was a little worried. I was worried for the zookeepers. I was worried for the safety of the people living around the Dallas Zoo. And I was really worried for the cat, because it was probably terrified. Not only that, it was in grave danger of being killed immediately if spotted by the wrong person under the wrong circumstances.

(01:09:25):
So yeah, that was worrying. But it turns out they caught the cat. Now I don’t have to worry. And I probably didn’t have to worry much because it really wasn’t me that was in danger at that point. So I was able to manage that worry. I even dialed it back a little bit. And you know what? We can manage our worry. We can dial it back a bit. Sometimes it takes a lot of work, and it’s not always an easy thing to do. Sometimes we’re not able to do it, but we can work on it. And the more we do it and the more practice we have doing it, the better we get at it.

(01:09:59):
That article I mentioned suggests doing something simple to practice mindfulness, because that’s a good way to dial back the worry, at least when we don’t need to be worried. I mean, sometimes we do need to be worried. If the tornado sirens have gone off, worry, please worry, because that will get you to shelter and make it more likely that you will survive. But if you’re hearing about a tornado in another city or state, yeah, it’s natural to worry, but I don’t think we should stop doing whatever it is we need to be doing in our lives. We need to dial it back at least a little bit.

(01:10:37):
So if we practice mindfulness, as this article suggests, we can do it in a very simple way. That is, we can just take a moment to focus on our breathing. Maybe take some slow breaths. Or, if we’re walking, we can think about each step we take, the process of stepping left, then right, then left, then right. If we focus on that, and not on that clouded leopard that escaped, or on that tornado in another state, then we can dial back our worry a little bit.

(01:11:12):
But we’re talking about artificial intelligence and being worried about students cheating. If you’re in that situation and you suspect it’s going to happen, then yeah, okay, that worry can be helpful. It can direct us to do something. But let’s not worry about it too much. Let’s dial it back when it’s appropriate to dial it back. And there are all kinds of ways to get out of worry mode and practice mindfulness, or do something else. Maybe we can just intentionally focus on the simplicity of everyday things in our lives. Not just breathing and walking, but grading papers, straightening up the books on our shelves, and dusting. I ought to do that someday, dust off the books on my shelf. Dusting can be very therapeutic, very restorative. And there are a lot of things like that.

(01:12:10):
Speaking of the clouded leopard at the Dallas Zoo: back in the olden days when I was a zookeeper, I spent a lot of my day cleaning out cages. I can’t tell you how many stable floors I swept out before putting fresh bedding down. And you know what? I actually miss that. And you know what I miss about it? Not the smell, not the mess. Besides missing the animals, I miss the meditative effect that sweeping out those stalls had. It really was very meditative. Some people get that from gardening, or from other hobbies that are simple and repetitive.

(01:12:51):
As I’ve mentioned many times in previous episodes, I use tai chi to achieve that state, and it works very well for me. The more I do tai chi, the better it works, and the more quickly I can get into that non-worry mode. Because when you’re doing tai chi, or something like yoga, you’re focusing on your movements, on how your body is moving, on not falling over. And by doing that, you’re turning down the worry. You’re dialing it back. I suggest we all find things we can practice so we can quickly walk away from the worry of artificial intelligence. And another thing to remember is that we’re all in this together. You know that the A&P teaching community is a very, very, very supportive group. Notice I used three verys there. We’re here for you. I’m here for you. Just don’t forget to breathe.

Staying Connected

(01:13:47):
In this episode, we talked about how the robots are continuing their march toward taking over the process of teaching and learning in A&P. Oh my. And we figured out that, eh, it’s not all bad, really. Yeah, some bad things could happen, especially if we’re not careful. But maybe we can get along with robots. There’s no way I could touch on every aspect of chatbots and education in this brief episode, so I have a ton of links for you, as usual. Actually, I have a bigger ton of links than usual. And you know what? That’s just a drop in the bucket, because just as many new resources and opinions have been posted in the time it took to listen to this episode. This is a really hot topic right now.

(01:14:48):
If you don’t see my links in your podcast player, go to the show notes at the episode page at theAPprofessor.org/131. And while you’re there, you can claim your digital credential for professional development for listening to this episode. You can do your part to keep this conversation going, supporting each other and ourselves, by calling in with your questions, comments, stories, anecdotes, poetry, and all kinds of ideas at the podcast hotline. You can reach us at 1-833-LION-DEN. That’s 1-833-546-6336. Or send a recording or a written message to podcast@theAPprofessor.org. Hey, if you get caught up in a debate about artificial intelligence and teaching and learning, and you will, why not share this episode with your colleagues? Just have them search for The A&P Professor wherever they listen to audio, or in their device’s app store. I’ll see you down the road.

Aileen Park (01:16:12):
The A&P Professor is hosted by Dr. Kevin Patton, an award-winning professor and textbook author in human anatomy and physiology.

Kevin Patton (01:16:24):
Some speakers in this episode were robots, not people. Do not trust them.

Episode | Captioned Audiogram

This podcast is sponsored by the
Human Anatomy & Physiology Society
HAPS logo

Transcripts & captions supported by
The American Association for Anatomy. 
AAA logo

Stay Connected

The easiest way to keep up with new episodes is with the free mobile app:

Download on the App Store | Available at Amazon | Get it on Google Play

Or wherever you listen to audio!

Listen on Apple Podcasts | Google Podcasts | Spotify | Pandora | Amazon Music | Audible

Click here to be notified by email when new episodes become available (make sure The A&P Professor option is checked).

Call in

Record your question or share an idea and I may use it in a future podcast!

Toll-free: 1-833-LION-DEN (1-833-546-6336)
Email: podcast@theAPprofessor.org

Share

Please click the orange share button at the bottom left corner of the screen to share this page!

Kevin's bestselling book!
Available in paperback
Download a digital copy
Please share with your colleagues!

Tools & Resources

TAPP Science & Education Updates (free)
TextExpander (paste snippets)
Krisp Free Noise-Cancelling App
Snagit & Camtasia (media tools)
Rev.com ($10 off transcriptions, captions)
The A&P Professor Logo Items
(Compensation may be received)

 

Cite This Page

Patton, K. (2023, January 19). Is AI the Beginning or End of Learning? | TAPP 131. The A&P Professor. https://theapprofessor.org/podcast-episode-131.html

More citation formats from QuillBot Citation Generator

 

Last updated: January 19, 2023 at 10:56 am



Don't TAPP alone! Join The A&P Professor Community

 

 

 
