Online Learning in the Second Half
In this podcast, John Nash and Jason Johnston take public their two-year-long conversation about online education and their aspirations for its future. They acknowledge that while some online learning has been great, there is still a lot of room for improvement. While technology and innovation will be topics of discussion, the conversation will focus on how to get online learning to the next stage, the second half of life.
Episodes
In this episode, John and Jason talk with Christelle Daceus of Johns Hopkins University about digital neo-colonialism and efforts to humanize online learning through training about AI and promoting inclusive practices. See complete notes and transcripts at www.onlinelearningpodcast.com
Join Our LinkedIn Group - *Online Learning Podcast (Also feel free to connect with John and Jason at LinkedIn too)*
Links and Resources:
Christelle Daceus, M.Ed., is a Course Support Specialist at the Whiting School of Engineering, Johns Hopkins University, and the Founder and CEO of Excellence Within Reach
Watch for Christelle’s book chapter, “Combatting Virtual Exchange’s Predisposition to Digital Colonialism: Culturally Informed Digital Accessibility as a Tool for Achieving the UN SDGs,” coming late 2024 from Springer Nature Press in *Using Global Learning through the Collaborative Online International Learning Model to Achieve Sustainable Development Goals by Building Intercultural Competency Skills*, co-edited by Kelly Tzoumis and Elena Douvlou
Johns Hopkins Excellence in Online Teaching Symposium
John & Jason’s 6 Guideposts - Slide Deck (via Gamma.app)
Christelle’s symposium video
Theme Music: Pumped by RoccoW is licensed under an Attribution-NonCommercial License.
Transcript
We use a combination of computer-generated transcriptions and human editing. Please check with the recorded file before quoting anything. Please check with us if you have any questions or can help with any corrections!
[00:00:00] Jason Johnston: What'd you have for breakfast?
[00:00:01] Christelle Daceus: I did not have breakfast. I was thinking here that I have two dogs, so my mornings consist of a lot of making sure they get their walk in, and getting my nice kind of walk in the morning and things like that. It helps me start my day. And I spend a lot of time just hydrating. Tea, I like, because I think I have a full plate, I would call it.
I like to have a really quiet morning, just the simplest morning that I can have, depending on what my first thing to do is that day. This is my first meeting today, so I was like, okay, I'm just gonna chill with the dogs, get into my emails and things like that.
[00:00:40] John Nash: Nice. We've been getting more into tea lately. There's a wonderful woman-owned emporium near our house called White Willow, and they've got a new herbalist, and we picked up a lavender Earl Grey tea there last night.
[00:00:53] Christelle Daceus: Ooh, that sounds good.
[00:00:54] John Nash: The little things.
I'm John Nash here with Jason Johnston.
[00:01:00] Jason Johnston: Hey, John. Hey, everyone. And this is Online Learning in the Second Half, the online learning podcast.
[00:01:05] John Nash: Yeah, we're doing this podcast to let you in on a conversation we've been having for the last two years about online education. Look, online learning's had its chance to be great, and some of it is, but there's still quite a bit that isn't. And Jason, how are we going to get to the next stage?
[00:01:20] Jason Johnston: That's a great question. How about we do a podcast and talk about it?
[00:01:24] John Nash: That's perfect. What do you want to talk about today?
[00:01:27] Jason Johnston: Well, today we're probably going to hit some pretty big themes, John, and it's partly because we have connected with somebody that we first connected with at the Johns Hopkins Online Teaching Excellence Symposium.
So we have with us today, Christelle Daceus. Thank you so much for joining us. And we're really just looking forward to talking to you today.
[00:01:51] Christelle Daceus: Me too. Thank you so much.
[00:01:54] Jason Johnston: Well, we wanted to get started by just talking a little bit about what it is you do currently. You're connected in with JHU, maybe talk about that first, but I also know that you're an entrepreneur and have other pursuits outside of JHU as well.
[00:02:07] Christelle Daceus: Yeah, I am a longtime educator. I've had my hands in all things education at various levels. And now I'm at JHU, working for the School of Engineering, for the Center for Learning Design and Technology. I work as a course support specialist with the instructional designers and technologists, creating materials for courses at the School of Engineering at Homewood, making sure that those materials are accessible: that videos have captions, and that materials are able to be read by screen readers. And then we also have the Faculty Forward Academy, where we provide professional development for faculty. I have some awesome opportunities to collaborate with the School of Education in their international student work group, and I'll be working in some workshops for them in April, working with faculty on AI and different AI tools, how they can incorporate them into learning, and a no-fear approach to AI, because there's a lot of anxiety there, I think, for faculty. And that's my goal with that workshop: to meet them in the middle and show them that AI is here. We can't quite get rid of it, but we can elevate our learning and how we work with students. And so I'm super excited for that. I also work in some research with global learning, so I have some international partners I'm doing exciting things with.
And we have a book coming out in May or June with Springer Nature Press. That book is about global learning and how sustainability in education can be affected by the United Nations Sustainable Development Goals. We just launched our book recently at the World Environmental Education Congress in Abu Dhabi, just a few weeks ago; we talked about our book and had a panel there, and that was super exciting.
Very excited for that work. Obviously, it was again like that natural opportunity I was talking about earlier, where it's just, I'm meeting good people, talking about the good work, and then we started creating some great work together. I'm really excited about that. And then, yeah, like I said, I'm an entrepreneur.
So I have a business in Baltimore City, which is an academic center that's really starting to connect with the community and grow into a very well-rounded program, which is exciting because I'm just maybe a few months in. But it's one of those moments where hard work is paying off, even in the new pursuit, where a lot of the relationships that I've valued and forged within Baltimore and within education systems and Baltimore City Schools are starting to grow, and I'm able to really reach students.
Because, just moving here, I'm actually from New Jersey, and I moved here maybe five years ago, and I've had an opportunity to contract in schools and things like that. And Baltimore City Schools is constantly in the news for its educational needs and things like that. And because my career started in K-12, I really wanted to connect the work that I do at this higher level, right?
Accessibility, advocacy, inclusive education, but bring it to a community level. And I think one of the things you guys asked me was about affecting the individual: how can we do that work and reach the individual, and not just put out the research and all these kinds of things? Which is amazing, and it's important to have those conversations and keep pushing forward with workshops and conferences and getting those ideas out there. But then I have an opportunity to not only give opportunities to other educators to bring those opportunities to students, but also really impact the community, a community that needs it. Yeah. I also am a mom, and I have a son. He's four.
His name is Malcolm. He's the greatest. And yeah, I'm just a busy bee. I'm all over the place. But I love everything I do. And I think I have a good balance right now. So I'm lucky to do the things that I love.
[00:06:16] Jason Johnston: So we sent you some questions, but you just landed us with four pretty big things that you do. We could probably spend the entire time talking about any one of those things. So I'm going to have to show some restraint, because there are some things we would like to get to, and why we connected over this, that I think are really important.
I don't want to derail anything here, but I was really curious, and I'm sorry to our listeners, because we keep saying that we're going to stop talking about AI, and then it just keeps coming back.
[00:06:44] Christelle Daceus: You can't, that's what I'm saying. That's the workshop. You cannot run away from AI. I'm so sorry.
[00:06:51] Jason Johnston: And we love it. We like, it's really interesting to us. And all the time are like texting each other things. I actually texted my wife yesterday by accident, something I meant to text to John, and it made no sense to her whatsoever.
[00:07:05] John Nash: Does that make us work spouses right?
[00:07:08] Jason Johnston: I think so, or at least AI spouses. Because every time something comes up, I'm like, Oh, John, did you see this? Did you see that? And he's sending me stuff back as well. Anyways, tell us a little bit about your approach with the "no fear AI."
Cause I really, I haven't heard that particular phrase, but I'm interested, because I think we're all in the same space in education.
[00:07:32] Christelle Daceus: Like I said, with the School of Education, they have a work group that works toward how we work with international students, and within their own faculty groups they make sure that their programming and professional development includes that kind of work. And so they approached me because a lot of faculty just don't know what to do, right? The biggest issue is plagiarism. Like, how do we keep up with this? How do we know that students are submitting authentic work? And that's the idea behind how I'm planning the workshop: we're talking about, first, what really is AI, right? It's not this solve-everything.
Like, there's so much more we need to know. There are so many kinks that need to be figured out. And it's so exciting when you see ChatGPT create a menu for you, create a business plan and all these kinds of things. But people like us who work in online learning and work with technology, we know that there are limitations to the authenticity of it, to the humanization of the technology, because there are people who create these technologies.
And these people are often in an industry that is dominated by people who look a specific way, right? And so those people have specific ideologies. And so when they're creating their work, they're using their specific values and ideologies and biases to create that work. And it's amazing work, but it's not something that is full-spectrum, hitting the complexity of humanism.
And I won't scare the faculty by phrasing it that way. But that's really the conversation: just letting them know that there are limitations, and as much as it looks like it can do, we still have the power in our hands, right? Because we have this thing that AI or any kind of technology would never have, which is the human brain.
And it's capable of so many things that no matter what we create, and no matter how exciting and shiny and new it is, it's just never going to be more meaningful than that. And the important thing is not allowing it to, right? Not allowing ourselves to give AI and VR and XR and all these kinds of things the power to take our human interactions or communication or connections and make them artificial, right?

So yeah, that's the idea behind the workshop: that we are going to now give them the tools, right? Okay, so what does that look like? You're telling me don't be scared, don't be nervous, and just embrace it. Okay, what does that look like, embracing it? Right? And so I want to talk about some faculty that are already doing that. How can we use ChatGPT, which is what everybody knows, to review work that students are turning in? Tell them, sure, use it, get it out of their system, and they're going to start to recognize, if you show them, okay, the reason we're concerned about this is because you're not getting accurate information, right? So let's have the students sit down and compare some of their own research to ChatGPT's research on a similar topic, compare those things, and analyze the technology itself, and it's going to teach them some things, which is exciting, right?
It's going to give us some new things, but at the same time, it's going to help them question their learning in an authentic way, so that it's not just, I'm answering the question and that's it, but, I'm having this moment where I'm thinking about my thinking, right? It's something that we created within the engineering school.

But this metacognition of remembering that it is a technology, right? It's not our reality. It's just a tool that can be applied to the courses, especially online.
[00:11:11] John Nash: Wonderful. Really cool. I have a million questions. I've been worried about the historical bias inside the large datasets that these LLMs get built on, even as actors inside universities like mine who are doing sub-projects can go out and get the, I guess I'm learning about these, but there's the Common Crawl dataset.

There's BookCorpus, Wikipedia, these things where the data comes from. And then on top of that, as you just noted, the developers' values and ideologies get put on top of that. And so I'm thinking about ways to help others, particularly teachers, see their evolving role as an actor inside this network of flow of information from the large language model to a learner, whether they're over 13 years old and it's okay for them to use them, or whether they're in post-secondary.
And I'm wondering how you're feeling about that too. I see now teachers are needed more than ever as the mediator between the screen and the learner, in helping set up critical conversations. And I'm thinking about these guideposts that Jason and I talked about at the symposium at Johns Hopkins: being human to your students and yourself, treating humans as individuals, and you helped us expand on a point, which was to recognize that not all humans are present. And so I'm thinking about that. Are you still feeling that there's a place for teachers to help learners remember that not all humans have been present in this AI flow of information?
[00:12:53] Christelle Daceus: I think the difficult part is having the time for those conversations in the classroom. I think that's where, immediately, teachers are like, this is just another thing, right? On our plate for us to have to deliver. But that's where I'm hoping to encourage authentic interactions and opportunities to have those conversations, right?
And so I really try to encourage faculty to talk about their own process, their approach to an assignment, right? So let's say we have this AI assignment, or whatever assessment we have in a course, and they can talk for a moment, whether that's in the overview of the assignment or in the overview of the module, where they're saying, okay, here's what's assigned this week.

Here are some things that I would keep in mind when I'm approaching this, and here's how I would approach an assessment like this or an assignment like this. And just remind them that they're not on their own, right? Especially online, it's so easy to just be on the other side of the screen and not really connect.
But if you remind them that, hey, I'm still here, and I try to do these things too. I think a really good habit that I'd love to see is that faculty, in their course introductions or syllabus, can talk about how they got to their role as a professor. Like, yes, we have the bios, and they tell a little background, but really, what courses did they take? How did they approach their learning in those courses?
A lot of programs, if you think about the School of Engineering, these are common courses a lot of engineers have to take to reach their programming, so a lot of these more senior engineers and people in the industry, they've had those experiences. They've had to approach the learning, and the learning might look different right now, but there are things that just work when you're gaining retention or learning new things, no matter how the learning is approached.

And so what I realized is there's an assumption that because you're at a certain level, you just know those things, and you should just know how to really organize yourself well enough, organize your course materials, and prioritize your learning in an independent way, when in actuality, online learning is so new, there's no real approach to it, right?
There's no real guideline to, okay, well, this is what the norm of learning online is for the student, right? I think we spend a lot of time making sure that teaching is accurate, that we're putting out good materials, and that we're accessible and all these things, but then students, they're just told, log in, learn, even though it's different than anything you've ever done for the majority of your academic experience.

But do it, and do it well. And so, yeah, those are the things I think about: that technology moves so fast that we forget to step back and make sure that everyone has the steps to apply it and be a part of it and participate. And I think that's what true accessibility is: not pinpointing the people who are most in need all the time, but sometimes, if everyone can reach this, most likely that's the best product, right?
That's the best experience. And so that's how I approach accessibility and online learning and the design of those courses.
[00:16:14] John Nash: I don't want to oversimplify something you just said, but did it seem like I was hearing you say that there are too many instructors who take on an online teaching endeavor and inadvertently throw the students to the wolves a little bit? There's not enough thought going into everything.
[00:16:32] Christelle Daceus: I'd even say it's at an institutional level, because half the time, the faculty or teachers are also being thrown into new technology. They start the school year and they say, hey, these are the things we're using in our courses, this is the LMS that we're using. Teachers don't really have an opportunity to decide on those things. So I think that's really what it is: yes, there's the aspect that teachers could step in, in the ways that I talked about, right? And help them adjust to the technology. But we have to make sure that as an institution, we're reaching them. And me working in K-12, that's where I see that the most, right? They put these laptops in classrooms and they have all these kinds of very amazing educational technology, but half the time, it's just, this is what we're using now.
This is how we're looking at the data, how we're tracking our students' progress, and all these kinds of things. And you just have to adapt. And what happens to the teachers that can't, right? Which is what happened in higher ed with COVID. Hundreds, thousands of classes all around the country were placed online, and everyone said, figure it out
[00:17:39] John Nash: Yep.
[00:17:40] Christelle Daceus: and not only in higher ed, but then there's all these K-12 kids logging into Zoom with no idea what they're doing.
And that's the example I would use of technology moving a touch too fast, right? We saw an emergency, which is the pandemic, and we're shutting down, we're locked down, we're in our houses. And someone said, oh, but we have the technology, we've created this, we've got it. But we didn't think, okay, but schools are safe places for students.
Right? And especially at the K-12 level, are we making sure that this is safe, right? Are they logging into secure servers and all these kinds of things? That's where you saw Zoom immediately change its entire interface very quickly. They were like, oh, we can't allow these Zoom links to be shared all over the place, with people popping into different rooms and things like that.
And so you started seeing more of the enterprise model for schools and things like that, which is important. It's important for us to learn, but we don't want to put our most vulnerable people, our most vulnerable stakeholders, at risk, which are our students, right? At any level. They are the stakeholders investing, if not just their time with younger students, then also financial investments at the higher ed level. They invested in this product, which is their higher education experience, and they want to make sure that it's high quality, that it's reaching them in a meaningful way, right, and that they're walking away with that experience. And so I always say I am so happy I didn't graduate around that time, and I wasn't trying to go to college, because that experience of, oh, I'm having my first, second year of college, and all of a sudden, they're like, get off campus and go on your laptop.

You still have to pay that ticket price. You still have to pay to be there and be present and reach all the same goals, but it's a completely different environment. And we don't even know if you're going to be able to succeed in that environment, but we all just have to. Because we want to, well, this is the colonialist piece, so I won't get too much into that. But yeah, it's just the continuation of capitalism. That was the priority, right? We needed to keep doors open, we needed to keep institutions pushing, and we're literally dealing with a global health pandemic. People's lives are at risk, people are dying. And instead of taking a second to make sure we're delivering this essential need, right, of education in the best possible way,
it was a little rushed, and we put kids in danger. We put institutions in danger in that way. So
[00:20:21] Jason Johnston: I feel like whether it's a global pandemic pushing us in this direction, or a school pivoting to online, or even down to a teacher who's been asked to move their classes online, our default is to try to continue the same things that we've been doing, but just stick them online.
So if a teacher is very comfortable, and this is the way they've always done it, with specific kinds of assessments or a very lecture-based approach, then everything just goes online and all of a sudden becomes this kind of same stuff, different package.
[00:20:58] Christelle Daceus: It's a folder, right? It's just holding all the things, and we hop online, we do our little lecture or recording, and that's learning. And we try to do interaction through discussion boards and things like that, but I think even with the creation of discussion boards, why did we need to replicate discussion like that?
Why did we not instead create moments of authentic discussion? Which is harder, of course, to analyze quantitatively, but I understand we have to find a balance; it's not easy. But this is why I say my approach to thinking about the professional development of educators is to show them the way, right?
Am I making sure that my materials are reaching every student in the room, right? And that means taking a moment to check in on whether there are translating opportunities, right? What is the demographic in my room? Am I making sure that the content is culturally relevant to them? Am I sure that the words that I'm using are sensitive to the kinds of cultural mindsets that are in my classroom?
And sometimes as educators, you're not in a room with people who look like you. I hope most of the time that's not how that looks, and you don't wanna miss opportunities for a student to grow and to reach the really good content that you're trying to deliver because they couldn't access it online, right?
Let's think about international students who are checking in online, and we have links to sites that in their country are banned. So then we have a student that's like, okay, but I really want to go to this school, so I'm going to get a VPN.
And maybe it's normed, but is that really what we want as institutions or as educators, that students are risking themselves in a, I guess, legal or judicial way, where they have to go this extra mile, versus the educator creating unique materials in such a way that they don't have to click on a link, right?
The learning is in the LMS. There's interaction there with their peers. They're really having an authentic experience instead of going into another space. Maybe you send that information in a different way. Maybe you have alternatives and you can still have your link, but making sure that they can reach that in some way, right?
I've, through this work, found out there are YouTube alternatives and all these kinds of things in places like China and the UAE. Getting familiar with that, or at least, in education, if you know that's a demographic that you serve, that should be a part of your own professional development, right?
That you're pursuing how to adjust your teaching for those students. But I think as institutions and as educators, we have to norm those conversations, norm it in a way that, I think, once you start saying inclusion and diversity, people get, "Oh, but I am, like I am, I'm doing the right thing.

I'm doing my best," and everyone's doing their best. But once you start to put practical steps to it, okay, well, there are things I can just add to what I'm already doing, and we just enhance the overall quality of education. And everybody wins, ideally.
[00:24:24] Jason Johnston: Yeah. Yeah, that theme of intentionality was something that came up over and over again in that JHU symposium, and what I hear you saying is part of that intentionality is taking the time to do professional development so that you can take a step back, you can think about maybe where some practices need to change, and ideally, as part of the professional development,

here are some practical things that you could do today. Maybe some small steps, or maybe some individual examples of things that could be done.
[00:24:58] Christelle Daceus: Yeah, and I would say it doesn't have to be the big conference or all these things. It can be reading a really good book, a really good author who's familiar with the work
[00:25:06] Jason Johnston: Yeah.
[00:25:07] Christelle Daceus: If that's of concern to you, relating yourself to the other voices that are matching the values that you want to bring into your classroom.
And I would say, even at conferences, you get to sign up for different sessions, and my favorite sessions to sign up for are the small ones that they put in the room that's all the way down the hall. And there are only a couple of attendees, because we sit in there and have amazing conversations, because everyone's being heard.
And it's not just anybody talking at you. It's real educators, and they're having real conversations and then putting in some action steps. "Okay, how can I help you with this at your institution?" And how can we collaborate in that way? And actually, at the conference I went to recently, we had field trips. I think on the last day of the conference we were on one of the charter buses with a colleague from London; they're working on some environmental work there.
We just connected immediately, and he starts talking about how he "is looking for how to elevate the design and meet the community and be inclusive and all these things." I was like, "Oh, I love that. That's what I love. I love to do all those things." And that didn't happen because I sat in his session and heard all his bullet points and stuff like that.
But it's because we came together as educators, trying to have an authentic experience. Abu Dhabi is very sustainable and environmentally aware, and so we were going to a mangrove where they plant trees and expand foliage there. And it was great to have this authentic moment where we were like, "this is just something that I love."
And conferences are almost like a safe place to nerd out about the things that you really love in education. And so you get into these conversations. "Oh, what do you do?" And then, all of a sudden, you've found your match, somewhere in another institution but doing similar work, and you're seeing that the things you're doing work, but maybe in a different way somewhere else.
And you're getting new ideas, and we're building education in those ways. So that's what I'd like to see, I think, in the future of professional development and conferences: having those more authentic conversations, open discussion, on these real things. Like, how are we really holding back our students by allowing colonialist practices to seep into education, where there's one voice, there's one identity that kind of leads the way, right? There's one version of what's the most, what is the word? Something that has, I don't know, you're more important because you went to a certain institution, you're from a certain part of the world, or from a certain culture. There's a better word for it, but my point is that we hold our students to a lesser standard when we stop short, replicating in-person online.
When we have educators, creatives, who could really come together and say, "This is an opportunity to create a whole different educational environment that can reach students in a different way. It doesn't have to be the end-all be-all; we don't have to get rid of schools or anything like that."
But there's a lot, especially at the K-12 level, where schools are fully online and they're interacting with students like that. And I would hate to think that a student spends 12, 14, 15, 16 years of their education just staring at the same thing year after year, just reading things online, and missing opportunities to interact with their peers, grow their ideas, and hear validation and feedback like we did sitting in the classroom. Yeah,
[00:28:52] John Nash: You brought up the notion of colonialism, and you've talked a little in the past about digital neo-colonialism. Could you give our listeners the digital neo-colonialism 101?
[00:29:05] Christelle Daceus: Yeah. So, this idea that, I think I just mentioned, colonialist practices are replicated through education, right? And if we're thinking about imperialism, it's this pursuit of resources, right? In the past, it was the pursuit of humans, right? And the institution of slavery was the exploitation of human labor and human bodies and cultures, and the eradication of cultures so that other cultures could be elevated and given power, socially and economically. That stands to this day, right?
And when you don't have the massive institution of slavery, it continues in different ways. We saw things like the Black Codes and all the limitations that freed Black persons had to deal with after emancipation, which kind of limited how people of color could be successful.
And that's just an example at the domestic level. But then when you really think of it globally, there's just a continued repression of so many cultures, whether that's in the Caribbean, or in Africa and Asia, these cultures that were impacted by colonialism and intruded upon. And in some of these places, their colonizers are still there, right?
They have embassies there and offices, and they made these laws and all these things, right? And it's the same thing in education. Just like the for-profit prison system, right? That's a continuation of enslavement, of control over the population; a way to control, and bring consequences to, what the larger society decides is criminal behavior, what is dangerous to the society we are trying to uphold. And of course, that's important, but when it's designed based on stereotype and race and these false ideologies of inferiority, due to differences of skin color, or being an immigrant, or a different economic class, that's when those things get spread further and further, right?
And so in education, this looks like having international students come to American schools to become more legitimate. That's the word I was looking for earlier: these institutions legitimize you, right? Whereas you don't have American students going to some of the other institutions, because in certain places, the Global South is what they'll call it, right, those "third world" countries or whatever you want to call them, you don't see American or British or Asian students going to those countries, because the legitimacy is not there. The social legitimacy of that degree would not carry the same weight, right? Even though I'm sure there are plenty of institutions doing great work that can say, I have partners all over the world.
And so what does that do? It brings more economic growth to certain institutions, certain regions, certain countries; it brings more influence, because this education is legitimate, so the research coming out of this institution is more legitimate than those other ones. And so the perspectives of the people who can afford to go to those institutions are the ones pushed forward. It's this kind of continued elevation of a certain voice, right? Of a certain pedagogy, even. Again, we're going back to replicating what's in person online, and that doesn't work, because it was already barely working in person, right? We're still figuring that part out. So to replicate online something that doesn't have as strong a foundation as we wanted, something we don't even know as much as we could about, it becomes just this loose experience, right? People aren't getting as much as they're investing into it. I think we're all spending a lot of time getting familiar with technology, investing in it, incorporating it into our lives, and we want to make sure that what we're getting back is not just a regurgitation of colonialist thought, of making sure that the majority is elevated, that the Global North stays in its position. It's an opportunity for the Global North to move out of the way and say: yes, because we have this technology that allows us to talk to people from all over the world, this is an opportunity for us to just give them that platform, right?
We want to give them the opportunity to speak for themselves. We don't need to advocate or save or any of those things; we just need to not bombard the industry, right? We don't need to dominate in a way that doesn't leave space for the Global South, or different institutions, or different voices, to actually be heard, which is something I talk about in my chapter as well.
[00:33:59] Jason Johnston: Yeah, this idea, and please correct me if this is not part of what you're talking about here: one of the practical ways of moving forward is this idea of allyship. Does that resonate with you, or is that different than what you're saying here?
[00:34:16] Christelle Daceus: Yeah, I think that's a really good word to put to it. I love it when big ideas can be consumable, right? And yeah, it's this authentic allyship, right? We remember that, yes, there are pursuits of greater things. However, we don't want to perpetuate competition and capitalism and growth just for the sake of being bigger than the guy next to you, but rather, if you think about the SDGs, the Sustainable Development Goals, the goal is to really elevate our earth, right?
And to expand the longevity of our earth and our climate, making sure that in all aspects, industry and education and health and economics, we're all growing and we all have the same opportunities to be players on the world market. And so the allyship comes from first accepting that the end-all be-all is not being the person that's most on top. And even if you are the person most on top, there's no problem with helping those who come behind you, right?
Or who are in a different position than you are, and bringing them to where you are, right? I think we have to get out of this illusion that technology and being online create, that this is just a person on a screen. No, the world is still the world. If we're connecting the world and having these international conversations, or conversations with people all over the country, or even in your own community, we're not even meeting in person.
I could be in Baltimore, still having my Zoom meeting with someone that's a couple blocks down. We don't do that anymore, right? It's, oh, I don't want to meet you at your office; I'm just going to hop on Zoom, and that's it. And not forgetting, when we do have in-person interactions, to make them meaningful in a new way, I think, because they're becoming less frequent and less available to us, and enjoying life in that way.
And as professionals, just really, like I said, recognizing, one, where you're coming from and what your strengths, privileges, whatever you want to call them, are. And when you are thinking about enhancing or growing that work, making sure that it's not just one voice that you're hearing in your head, right?
That you're trying to elevate those other voices that are available to us and trying to learn from us, right? They deserve that.
[00:36:51] Jason Johnston: You wonder what this disembodiment of meeting together will do to our psyches over time, the fact that we're just floating heads here in Zoom looking at each other, versus being in body with one another.
Anyways, that's a whole other topic. But I think I recognize what you're saying there in terms of our meeting together. The digital, although there are some amazing affordances to Zoom and we can span distances,
we would not be connecting otherwise. I don't know the next time I'm going to be in Baltimore; it might be a while. And so this is a wonderful way that we're using digital technology to span a distance that couldn't be spanned otherwise,
which is amazing. And even in our conversation today, all the things you're talking about have expanded my way of thinking, hopefully helping move me toward more humanizing of people who are online, as we were talking about, but also recognizing some of the dangers in the affordances that we're using.
That's good.
[00:37:57] John Nash: Yeah, I appreciate you helping me remember. I think I was a little bit harsh on my own ilk, the instructors. I don't know if victim is the wrong word, but I see systems rolling along, and instructors are not victims of the system, but they're caught in the system and don't see opportunities to change.
And then in turn, learners don't get to see the opportunities for change. But I appreciate that. I think I'm too hard on my fellow instructors, thinking that it's all at their feet to make the difference. We have some agency, and we should be bringing our thoughts to that.
But yeah, I appreciate that.
[00:38:37] Christelle Daceus: Yeah. Even in my chapter, I go through things at every level, right? What we can do policy-wise: what is the government giving us to even work with? What are we doing with vendors, the people who are creating this technology? And how are we connecting them to the actual institutions, and the leadership of those institutions, and the staff and the faculty, and then the students, right?
After all that is said and done, that's where I think the biggest missing piece is, and where I always go back to: I really want people to just give students the path, right? Give them the steps to succeed. Education doesn't need to be this "you just have to figure it out,
you have to find the answers yourself," right? And I think when we are more empathetic, like you said, to our faculty, where it's like, they're learning too, right? We're all on this new kind of adventure together. Let's do this; approach it as a community, right? And see how, instead of replicating the logistics of education, we can replicate community online.
Right? How can we bring that experience where you see your favorite teacher, or you knock on their door during lunchtime, and nobody else is in there, and you finally get to talk to them and share your favorite TV show from the weekend, or whatever it is, those little experiences?
How can we bring that online, along with the rest of the learning? Because we have the teachers already, right? We already have good learning, and we have people like myself and my team who are working on accessibility, making sure that people with different abilities can reach the material; people like my partners who are doing global learning and VR, but making sure that students who are blind can still participate in that, right?
Going that extra mile for them, because the students are saying, I don't care that I have different abilities; I want the experience. I invested in this the same way that my classmates did, and just because I have a different ability doesn't mean that I only get part of the experience. And so it's our job to meet those students where they are and make sure that they're having that experience.
Right. And they're having equivalent experiences across the board.
[00:40:53] Jason Johnston: Yeah, I think meeting students where they are, that's a great place to land. And unfortunately, we're going to have to land this. I have a thousand more questions for you, and I think John probably does too. I think we could talk for a long time, so I'll just put a pin in that to say, let's do this again.
Okay.
[00:41:07] Christelle Daceus: Absolutely. Happy to.
[00:41:09] Jason Johnston: This has been really good. Also, your chapter, your book, it's yet to come out, is that correct? When do you expect it to be published?
[00:41:17] Christelle Daceus: Springer Nature Press; we're working toward June, I believe, end of May, beginning of June. I'm not the editor, so I don't know all the logistics of those things, but I will send that information to you, and then you can share it with the people when it's ready.
[00:41:31] Jason Johnston: Okay, we'll put that in the show notes, as well as our slides from JHU, where you can see some of these quotes. And your session from JHU is now up on video. It's great; I've sent it on to quite a number of people. So many good things. So if you want to hear more from Christelle, check out our show notes, and she did a great session at JHU that you can watch as well.
Thank you so much for being with us. This has been great talking to you. We really appreciate you taking the time to share.
[00:42:00] Christelle Daceus: Thank you. I am happy to do this again anytime and talk with you guys. So thank you so much for, giving me some time with you guys.
[00:42:09] John Nash: Yeah, and hey, Christelle, I'll send you an email, but we have a, it's not a parting gift, because we're going to see each other again, but anyway, we send a, Jason doesn't know this, it's a new policy, we send a mug, one of our Online Learning in the Second Half podcast mugs, as a thank-you gift.
[00:42:25] Christelle Daceus: Oh, thank you!
John Nash: I'll send you an email; you can let me know an appropriate mailing address. Something real and physical will arrive for you, and you can drink your tea out of it if you want.
Christelle Daceus: Thank you so much for thinking of my tea.
[00:42:38] John Nash: All right. Thank you all.
[00:42:40] Christelle Daceus: Have a
[00:42:41] Jason Johnston: Thank you so much. Yeah. Have a great day. Bye.
[00:42:43] John Nash: Bye.
Monday Apr 01, 2024
In this episode, John and Jason talk IN PERSON, reflecting on year one of their podcast. Keeping with the theme, they also find a few rabbit holes to chase, consider developments in AI, and talk about educational and ethical considerations around AI-generated music and video. See complete notes and transcripts at www.onlinelearningpodcast.com
Join Our LinkedIn Group - *Online Learning Podcast (Also feel free to connect with John and Jason at LinkedIn too)*
Links and Resources:
Hard Fork Podcast
SORA OpenAI Video
Alibaba EMO Video Demo (Jason’s LinkedIn post)
Suno.ai
Support Human Artists! Gangstagrass
Mr. Beast on Youtube (not that he needs any more clicks)
The makeup brush holder John keeps his pens in
Transcript
We use a combination of computer-generated transcriptions and human editing. Please check with the recorded file before quoting anything. Please check with us if you have any questions!
1 Year Anniversary Special
[00:00:00] Jason: Would you happen to have a pen I could borrow? Yeah.
[00:00:02] John: Felt blue, black.
[00:00:04] Jason: That is amazing. I've just this moment noticed your incredible... you've got like a pen store.
[00:00:10] John: These are makeup brush holders.
[00:00:12] Jason: Oh really? Okay. Black, please.
[00:00:15] John: Ballpoint, Flair?
[00:00:17] Jason: Flair pen. Perfect.
[00:00:19] John: yeah
[00:00:19] Jason: And would you happen to have any sticky notes? That's incredible. You are really set up here. That is something else.
[00:00:24] John: I dream that someone visits me, but no one does. I'm set up for a full-on brainstorming session with a gigantic five-foot by three-foot whiteboard and 500 colored sticky notes.
[00:00:34] Jason: Sticky notes galore.
[00:00:35] John: Yeah, I'm ready to change things if anybody wants to come over.
[00:00:38] John: I'm John Nash here in the same room with Jason Johnston.
[00:00:43] Jason: Hey John, hey everyone, and this is Online Learning in the Second Half, the Online Learning Podcast.
[00:00:48] John: Yeah, we're doing this podcast to let you in on a conversation we've been having for the last couple of years about online education. Look, online learning's had its chance to be great, and some of it is, but there's still a lot that isn't quite there. How are we going to get to the next stage, Jason?
[00:01:02] Jason: How about we create a podcast and talk about it?
[00:01:06] John: How about we do that? How about we create a podcast, do it for a year, and then talk about what that year was like?
[00:01:11] Jason: That sounds great! Happy anniversary, John!
[00:01:13] John: Happy anniversary, Jason.
[00:01:15] Jason: I should have brought you something.
I didn't. I'm sorry. How about we go out to lunch and celebrate?
[00:01:20] John: yeah, and maybe we can get a demo of the Apple Vision.
[00:01:23] Jason: Oh, that'd be cool. Yeah. There's a little place right there where we can grab some lunch and maybe go over to the Apple store. See what's going on.
[00:01:30] John: Yeah,
[00:01:31] Jason: That would be thematic. A lot of this podcast has been a number of things. One, talking about online learning, but also talking about the new tech and how it might affect online learning in the last year.
[00:01:41] John: Yeah. We are EdTech nerds also.
[00:01:43] Jason: We are, we tend to nerd out on a few of these things.
Today on my way over here, because I had to drive to this podcast today.
I didn't do this podcast in my pajamas.
[00:01:54] John: Horrors. And you drove yourself.
You had to operate a machine to get here.
[00:01:59] Jason: But it afforded me a little bit of time in the car to listen to a podcast. I listened to our first episode. It was kind of nostalgic.
[00:02:06] John: You weren't tuning in to our first episode just out of some kind of vanity thing? "Oh, I love listening to me."
[00:02:12] Jason: No, it was not because I like the sound of my own voice. Although after doing a podcast for a year, you get used to it.
[00:02:18] John: you don't even know what you sound like. You're just like,
[00:02:20] Jason: I listened in because I was curious about what we talked about in our first podcast. Whether or not, what we talked about then rang true in our first year of podcasting and maybe looking ahead to see what's going to be different.
And what I found was, we basically talked about what we were going to talk about, which was online learning, the second half. Check. We've been talking about this all last year. How technology affects online learning? Check. We've definitely had a lot of that. We also thought our big theme was going to be humanizing online learning.
Check. We've had a bunch of that. However, one thing we had slightly wrong: our topic of the month, which was AI.
[00:03:03] John: Yes.
[00:03:04] Jason: It's become the topic of the year, probably.
[00:03:07] John: The topic of the year and a half, yeah.
[00:03:12] Jason: that's the one thing that we probably got wrong. The other thing that I would say that we didn't know about, as we couldn't quite see into the future with this, but one of the big things that you and I have talked about is how much we've enjoyed having guests.
We started this as a conversation between you and me. But how great it's been to bring other voices in this year.
[00:03:34] John: It has been remarkable to have other voices in. It's been amazing having guests, because I feel as though it's a privilege that we get to have this kind of professional development that we create, I guess, is how I look at it. And I think we do something for our guests, too.
They feel good about being able to talk about their work, but the breadth and depth of the things we've talked about with some amazingly smart people has been just a privilege for me.
[00:04:01] Jason: Yeah, a privilege. That is a great way to put it. And just being able to talk with some of these experts over the last year, to get a completely different take (for some of them, anyway) on the things that we've been talking about, has been challenging, informing, and guiding for me, so that we're not just talking in a vacuum here.
Really, our first guests were when we did the Podcast Super Friends episode a little less than a year ago at OLC, and we did another one just a few episodes ago to wrap up the year. And then we had some amazing guests: Dr. Michelle Miller, Dr. Enilda Romero-Hall. Then we were able to talk to Dr. Kristen DeCerbo from Khan Academy, and that continues to be a big thing out there. We made a great connection to OLC keynote speaker Dr. Brandeis Marshall, Michelle Ament, and Dr. Alicia Magruder at Johns Hopkins, which actually then led into a podcast recording at their symposium, which was so much fun.
[00:05:01] John: That was so fun and so innovative to be able to have a, almost a simulcast of the podcast as the concluding session of an online teaching symposium.
It has been good in that regard. And also a chance to connect these ideas over time with other things that come across our desk, as it were. So I think about Michelle Miller, and we keep talking about same-side pedagogy; that keeps coming up as a relevant thing. Brandeis Marshall's notion of what's un-AI-able: I continue to talk about that. Even this morning, a provost from a two-year college in Texas was talking about this.
[00:05:41] Jason: You know what's cool? I was talking to somebody at UT the other day who has been listening to our podcast, and he quoted Brandeis Marshall from our podcast about...
[00:05:51] John: That's fabulous. Yeah. And then, you know what I think surprised me the most over time is how certain things are emerging now that are more important than anything else that's happened with AI in the last 12, 13 months, which is still the topic of ethics. It's not about the technology. It's not about the advancements.
We're coming up on March of 2024. So it's one year since the old March madness, when GPT-4 came out, and then Anthropic came out, and Bard; all of them were releasing, and it was an arms race in March of 2023 to see what these models would look like. And now, we haven't seen a massive boost in model capabilities in the last 12 months, but a bigger discussion, I think, has happened over ethical use and the creation of guidelines, particularly in the education space.
[00:06:46] Jason: Yeah. When we recorded our first episode a year ago, we didn't even know of the existence of GPT-4 at that point.
[00:06:54] John: No, we did not.
[00:06:56] Jason: And so that just started that whole year of recognizing, first, that AI is a thing.
And then all of a sudden people realized, oh, wait, it's actually a pretty significant thing. When that next model came out, we realized that the real capabilities of AI were much deeper, much better than what we expected, even on the front end.
[00:07:18] John: But the two guys that run the Hard Fork podcast were talking about how Sydney, at the time (there have been all these name changes since), was Bing Chat, the Microsoft thing. It was Kevin Roose, right? Kevin Roose was advised by Sydney to break up with his wife and start dating Sydney. You, similarly, had your heart broken by Bing.
[00:07:43] Jason: Right. I was on Bing Chat and had some very strange conversations with Sydney right in that same time frame. So it was just the wild west in some ways. Some of those initial concerns about the chatbots have been tamed since, I would say.
[00:08:00] John: Yeah. Yeah, they were.
But we were so amazed by the 3.5 model, we couldn't stop talking about it. We thought we'd be done in a month.
[00:08:08] Jason: I would agree. After that initial surge, I think what we've seen is a lot of third-party companies starting to leverage this power, and, as we predicted, a lot of edtech companies starting to add it. We talked about that; we predicted it back last year in March. And if you look back at our episode number nine, "How Are EdTech Vendors Humanizing Online Education?", when we were walking the floor of OLC Nashville, that was March of last year, it was very different from walking the floor at the fall conference, in terms of who, at least as far as I found, was already talking about it at
[00:08:59] John: that
[00:09:00] Jason: point. At that point, they were talking about it, not really implementing it, and we had some interesting responses. And then by the fall, they were really advertising AI,
[00:09:09] John: AI. In fact, the vendors, I think, that were concerned about having AI be part of their products were the ones trying to catch kids cheating using AI, not thinking about how AI might be embedded into their tool to advance some feature that they wanted.
[00:09:24] Jason: Yeah, it was much more that concern. Yeah, it's interesting. And the other part, I feel like what we're seeing more of lately in the arms race, and this is why some of our ethical conversations have taken this turn, is the capabilities of AI beyond just the chatbot language model, into the areas of media, when it comes to video. One of the things we've seen in the last few weeks is OpenAI's Sora, S-O-R-A. Even just yesterday, I saw that Alibaba, which, you know, I don't even know what it is, I've never bought anything from it, but it looks like a place where you can buy cheap stuff on a wholesale kind of level.
They have a model that they're working on for lip syncing that's quite impressive. We can put a link to that model in the show notes, but I feel like what we're seeing are these kinds of video lip-syncing ideas, as well as, if you think about what has happened in the last year in terms of image creation, how much better it's gotten.
And then even audio. I was doing a few of these audio demos that are out there right now, one that's actually built into Copilot; you can ask it to make a song for you. Oh, it's cool. It's pretty wild. And maybe we'll make a little clip in here.
Okay. I'll quickly make something, and then we'll take a listen to it, and maybe close out the show with it at the end. But yeah, you can just put in a prompt telling it to make a theme song in a given style, using your lyrics if you want to, and then you can actually edit it afterwards.
[00:11:06] John: Are you noticing the same thing I'm noticing, too, about the sort of seamless integration of generative AI into, I don't want to sound hyperbolic, but almost every app that has been popular? They've all decided to seamlessly integrate AI, making its presence felt in operations that are not very transparent to the user. Notion, Copilot, Canva, you name it; they're all putting AI operations in. Zoom, too. And I'm wondering if this sort of invisible AI is going to lull users into thinking that this is just part of the app, when they may not even realize it's AI.
I think about Zoom and its meeting summary feature. We were talking about this at our university in our policy group, because I think a lot of people think, if Zoom has this feature, then it must be okay to use; it's part of our acceptable use, maybe it's inside our privacy guidelines, so I'm going to turn it on and we're going to use it. But that's not necessarily the case.
And so, if you're recording meetings, or you're putting in student data, or you're having... I don't know, it's interesting to think about, because I think it can enhance the user experience, but I think it can also lull people into thinking that this is safe AI.
[00:12:19] Jason: Yeah, using their brand acceptance. We work at institutions where there's quite a vetting process to get something inside our doors. So we know that we're obviously working with Google, Microsoft, Zoom, and Canvas; all four of those, at both of our institutions.
Those are the four biggies.
[00:12:40] John: Yes.
[00:12:41] Jason: And so you're saying that if something comes in alongside those packages, or with those packages, then all of a sudden we just accept it. It doesn't have to go through a new vetting process.
Whereas if, say, there were some new video product, and that was how we would get AI video summaries, it would have to go through a whole new vetting process. But we're not doing that; it's just happening.
[00:13:08] John: Yeah. And the underlying models are suspect at times. If we look at Gemini, Google's Gemini, as we record this on March 1st, 2024, in the past seven to nine days they had a major generative AI failure in their image model.
If those are the underlying engines, if you will, that are adopted and licensed by these brand-accepted tools, how safe are things going to be? Can Zoom stop OpenAI if they're using that engine? They can't really put new guardrails on top of what it does with the data, because the model's the model. I'm not technical enough to know the answer to that, if I'm making sense.
[00:13:50] Jason: Yeah, you're making sense. I don't even know if Zoom is using OpenAI, because it just appears, and I think we get a lot of wrappers around things as well that are really OpenAI underneath.
And then we get this new wrap around it, and other things that are more like companies doing their own thing. So it's really hard to track down.
[00:14:10] John: And the question for me becomes even more important to discuss when we think about all the wrappers that have been created for P-12 teachers, like MagicSchool and Diffit; a couple of others come to mind, but I don't remember the names. They're all also running on top of these models, which are only as safe as their developers make them. So yeah, I think it's something to talk about.
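As a rough illustration of why a wrapper inherits the behavior of the model underneath it, here is a minimal sketch of the pattern John describes. This is hypothetical code, not any actual vendor's implementation; `base_model` is a stand-in for whatever hosted model API a wrapper would call, and `classroom_wrapper` is an invented example name:

```python
def base_model(prompt: str) -> str:
    # Stand-in for a call to an underlying hosted model API.
    # In a real wrapper product, this is the part the wrapper
    # vendor does not control: its safety and data handling
    # belong to the model provider.
    return f"[model output for: {prompt}]"


def classroom_wrapper(task: str, grade_level: str) -> str:
    """A thin 'edtech wrapper': add prompt framing, then forward
    everything to the base model.

    Whatever the teacher types in `task` (including any student
    data) travels straight through to the underlying model, so the
    wrapper's privacy posture can be no stronger than that model's.
    """
    framed = f"You are a helpful assistant for grade {grade_level} students. Task: {task}"
    return base_model(framed)


print(classroom_wrapper("summarize photosynthesis", "7"))
```

The point is structural: the wrapper can add framing and branding, but by itself it cannot add guardrails stronger than the engine underneath, which is why the vetting question in the conversation above does not stop at the wrapper vendor.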
[00:14:34] Jason: You know what's funny?
This audio creation program. So, we've got Sora by OpenAI, which is this brand-new video model, and then we've got Suno, S-U-N-O, with this audio that's coming in with Copilot, anyway.
[00:14:51] John: I didn't know it was inside Copilot. So what app are you using in Microsoft to get, to invoke Suno?
[00:14:57] Jason: It's part of Copilot. What kind of style should we do today?
[00:15:20] John: We're sitting together in a room in Lexington, Kentucky. Can we do some bluegrass?
[00:15:25] Jason: Yeah, in a bluegrass style. Any other parameters we want to put on it? What's really important to us? This is our year-in-reflection song; what do we want in the chorus to really hit home for the listener?
[00:15:42] John: Let's see. Human-centeredness is the key, ethics is important, and learner outcomes are paramount.
[00:15:57] Jason: Okay.
Say in the chorus, make sure to include something about human-centered online learning. And then I, I got caught...
[00:16:07] John: in your,
[00:16:07] Jason: your superlatives. What was the, what were the, what was the second one?
[00:16:10] John: Ethical use of AI.
[00:16:14] Jason: It should be, maybe we
[00:16:16] John: And belonging. Okay, let's see.
Can you shorten your prompt to fit belonging in there?
[00:16:24] Jason: Yeah, I'll try to.
[00:16:25] John: Nice.
[00:16:27] Jason: Yep, okay, it's creating it. It's going to give me two versions and we can take a listen to both of these.
[00:16:32] John: Okay, excellent.
[00:16:33] Jason: We can talk about other things.
[00:16:34] John: Yeah, while it's cooking, yeah.
[00:16:35] Jason: Here's what's amazing: the first version is already ready. I thought it was going to take longer.
Now the second version is ready.
[00:16:45] John: Oh, okay.
[00:16:45] Jason: I'm not sure how we're gonna be able to listen to this just because of the current setup here.
[00:16:53] John: Let's see what happens.
[00:16:54] Jason: But we can put it.
Song “Keep on Learnin’” plays in a bluegrass style:
[Verse] Gather 'round, folks, and lend an ear There's a podcast here that we hold dear (oh-yeah) It's all about learnin', in an online way Discoverin' new knowledge every single day (ooh)
[Chorus] Human centered, always yearnin' For that ethically tech and belonging learnin' (learnin') Tune in and listen, don't you ever stray Online Learning Podcast, we're here to stay (heyy) (Join us now, keep on learnin') (Oh-yeah, yeah-yeah) Keep on learnin' (Oh-yeah, yeah-yeah) Keep on learnin' (Oh-yeah, yeah-yeah) Keep on learnin'
[00:17:00] John: ha ha ha…
[00:17:08] John: oh, a little Cher. What? You're the audio guy. What is that?
[00:17:12] Jason: It's like a little new, yeah, like a new bluegrass.
[00:17:17] John: It's a little country, though. I think it's not quite...
[00:17:20] Jason: ...quite bluegrass. Yeah, it's not quite,
[00:17:21] John: but.
[00:17:22] Jason: Okay, so that was the first one. It's called "Keep on Learning," with a little apostrophe: "Keep on Learnin'."
[00:17:28] John: There are two people singing in this, apparently, and there's someone who goes, "oh yeah."
[00:17:33] Jason: The things that impress me: a year ago, since this is a bit of a podcast-in-review, not even close. The things that were out there that could create music sounded like a mishmash, like something you would hear in a Star Wars film, where they're trying to make it sound different and spacey and non-human.
[00:17:56] John: Or it was the third or fourth duplication on your Maxell tape. Yes. Yeah. And it just degraded and degraded.
[00:18:06] Jason: So, the first thing that impressed me is just where we've come in a year, the quality. The second, the clever turnarounds on the lyrics. And the third, adding pop elements that are very catchy for the listener, these kinds of echoes, as you said, and so on.
[00:18:26] John: Yeah, for the TikTok Nation.
[00:18:28] Jason: The TikTok Nation.
[00:18:29] John: Yeah,
[00:18:30] Jason: Yeah, which is basically all of our listeners, right? TikTok nation.
[00:18:32] John: Basically, yes, that's right.
[00:18:34] Jason: Listen up, TikTok Nation. Is that how we should start our podcast?
[00:18:37] John: Maybe our podcast should be 60 seconds long if we want to, if we want to capture them.
[00:18:43] Jason: Okay, here's the second one. That was Keep on Learnin'. This one is called Learning in Harmony. Uplifting folksy bluegrass.
[Verse] Well, gather 'round folks, I've got a story to tell 'Bout a podcast that's got a lot to propel Online Learning Podcast, it's the name Where knowledge and wisdom come together like a flame
[Chorus] In the world of bytes and screens, we find our way Human-centered online learning, come what may From the hills to the valleys, we all belong Ethical tech use, we'll sing this song
[00:18:50] Jason: Not sure about the chord progressions in that one.
[00:18:53] John: More than I would about that, yeah. And I put these out here with full understanding that part of my brain and heart is, "wow, this is so cool that technology can do this."
[00:19:05] Jason: And another part of me... I've written a few songs in my life, and I enjoy playing guitar, and there was probably even a moment that, if the winds of success had taken me in that direction, I would have done music full time. And it's both scary and a little offensive when I think on that side of it.
[00:19:22] John: Yeah. So, let's go to the offensive part, because I think we're both having conversations with colleagues, and I'm also seeing reports online of research on where instructional design is going with AI, and how these tools, Sora and others, are making graphic designers, and drone operators who do B-roll, feel a little at risk. And I think, I bet there's some offended feelings there too about their art.
[00:19:48] Jason: Yeah. Actually, it's not completely true that I make $0 a year from my music, John. I'm raking in some Spotify money. I didn't know if you knew this or not. I think I get like point zero zero three cents per play, and I think my last cash-out was maybe around $2 or something. So, I really am a professional musician. But I say that to say that this is not something I'm trying to make a livelihood from.
It also is not something that feeds my own sense of self-worth at this point in my life.
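As a rough sketch of the streaming-royalty arithmetic Jason is describing: the figures below are his off-hand, approximate numbers from the conversation, and the rate is interpreted as roughly $0.003 per play (a commonly cited ballpark), not an official Spotify figure.

```python
# Back-of-the-envelope streaming royalty math, using the approximate
# figures from the conversation (not official Spotify rates).
rate_per_play = 0.003   # dollars per stream (assumed ballpark figure)
cash_out = 2.00         # dollars in the last payout (approximate)

# How many plays does a payout of that size correspond to?
plays_needed = cash_out / rate_per_play
print(f"~{plays_needed:.0f} plays to earn ${cash_out:.2f}")
```

At that assumed rate, a $2 payout corresponds to roughly 667 plays, which is the scale behind the "professional musician" joke.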
[00:20:28] John: Yeah, but how would you feel if you were trying to make your livelihood from this?
[00:20:31] Jason: I think it would depend on the person and what I was trying to do. But I would say almost every musician would feel a little scathed by this, because even if their livelihood is mostly playing live concerts, which this is not going to do.
[00:20:49] John: No.
[00:20:50] Jason: And developing a fan base, which this is not going to do. Part of your livelihood is getting yourself noticed in this enormous sea of other talent that's out there. And then also, I know people who make their living as singer-songwriters, and it's great to get those, what they call sync royalties, when you get a song placed in a movie or a TV show.
[00:21:14] John: I was just thinking about that because I'm wondering what Hollywood will do with this capability. I think that Hollywood feels like they want to protect the rights and the livelihoods of artists writ large. So, they probably wouldn't do what I'm suggesting, but television production could decide to use Suno to do the theme songs for new TV shows.
I'm thinking about one of my favorite bands, Gangstagrass.
[00:21:37] Jason: Oh yeah. I love them.
[00:21:38] John: Yeah, they blend, if folks don't know, bluegrass and hip hop, and they're amazing. I've seen them three times. They're coming to town here in Lexington soon; we're going to go see them. But my point here is that they became more famous because their music was used as the opening theme song for the television show Justified.
And if I wanted to do that again, if I were in production, could I just skip all that and have a theme song written right off the bat by AI?
[00:22:06] Jason: Yeah, if you're looking for a particular kind of sound and that kind of mix, if you wanted something a little gritty but Southern, but also urban, then that would do it.
And then, essentially, while I was talking, Suno was able to recreate our learning theme song in a bluegrass hip-hop style, right? So, you think about how quickly this can happen and the capabilities that we have today. Here's song number one.
[Verse] Well, gather 'round folks, let me tell you a tale 'Bout a podcast that'll make you wanna prevail (oh yeah) With a blend of hip hop and old-time string We're gonna dive deep, learnin' ev'rything (ooh-yeah)
[Chorus] Human centered online learnin', take a seat on the track Ethical tech use, we ain't gonna lack Belongin' is the rhythm, that's our podcast groove Put your hands in the air, let the beat make you move
[00:22:34] Jason: And this is song number two.
[Verse] Well, gather 'round now, y'all, let me tell you a tale 'Bout a podcast that's bridgin' the gap without fail It's online learnin', it's the way of the world With a touch of bluegrass and some beats that'll twirl
From the hills of Kentucky, to the streets of the city This podcast brings the vibes, all witty and gritty Talkin' 'bout human-centered online learning, y'all And ethical tech use, that's what we're all 'bout
[Chorus] Come on now, let's sing it loud and clear Human-centered learnin' and ethical tech use right here Belonging is the key, come join the crowd Discover new knowledge, sing it out proud (yeehaw)
[00:22:36] John: oh my.
[00:22:38] Jason: The second one particularly. I'm a fan of Gangstagrass, and that second one particularly
[00:22:42] John: hit it, it approached it.
[00:22:44] Jason: Old school. Yes. Hip hop.
[00:22:45] John: But that first one, I don't wanna offend anybody, but I don't know what that was. Was that some kind of Toby Keith kind of thing?
I'm out on that. And it's funny how musical tastes run, too. Also, I'm not a big traditional country fan, like CMA-style country, but I'll go to every Gangstagrass concert I can get my hands on. But you're right, the second one approached it. But still. And then I started thinking about cultural appropriation, and what is this?
Yeah. This is AI's attempt at understanding culture, which is risky.
[00:23:16] Jason: Yeah, we got yes, tricky waters right there.
[00:23:19] John: Incredibly tricky.
[00:23:21] Jason: So, we've talked about ethically doing this in light of the musicians themselves. But, I'm a big jazz fan as well. I like a lot of different kinds of music, but I'm a big jazz fan, so I'm watching the Ken Burns series on jazz, which I highly recommend. It's slow, it's long, but it's beautiful. And how many times have we, as a dominant white race, taken an art form from another people group and then appropriated it because we figured out how we could monetize it a different way? Or, in this kind of case, how can we non-monetize it? So maybe they're not even making money off of this song. Maybe these aren't going to show up on iTunes, 'cause I know iTunes has made some rules about this.
YouTube has now made some rules about this, but maybe they'll show up in the next ad for whatever, and they've made it for free.
So basically, the Suno terms of agreement say that if you pay for it, you have full mechanical rights to these songs.
[00:24:25] John: So, if I make a Suno song... were you logged into your University of Tennessee controlled garden for this? If I make a Suno song inside my University of Kentucky controlled garden of Copilot, does the University of Kentucky own that song?
[00:24:40] Jason: That's getting into the whole intellectual property end of things.
That's a whole... They have the mechanical rights to this really crappy, culturally appropriated piece of junk that I created.
[00:24:51] John: And you're right. But look at how much of advertising now... I've cut the cord on my television, and whenever I accidentally happen to go back to watching network TV or watch my local news, I'm shocked, and also simultaneously not shocked, that the insipid advertising I grew up with in the 70s really hasn't changed much. So, your comment about Madison Avenue using tools like this to create jingles and other things to cut out artists for their clients? Absolutely.
I bet they'll do it. I'm very cynical about this. I think it'll, yeah, I think that's where this is going.
[00:25:26] Jason: And you talked about networks. Maybe some of the big ones, for the sake of their already large group of customers, will make some rules about this to please people. But the networks are not just competing with other networks.
They're competing with Mr. Beast.
[00:25:43] John: Yes, they are.
Yes,
[00:25:45] Jason: Like, Mr. Beast is enormous. He has an enormous viewership, and my guess is that his income per year probably rivals, if not some of the smaller networks, then some of the smaller production houses for sure.
And I only know about Mr. Beast because I have teenage kids who drive these whole things, one of my kids particularly. And also Dude Perfect. They're not utilizing traditional streams, and so they're not going to be beholden to these kinds of larger ethical restrictions.
[00:26:18] John: Now, Mr. Beast, for folks who don't know, how would you describe him? He's an internet creator. I'm logging on to Variety.com: his annual earnings hit $82 million last year, more than double any other digital creator. And it's funny, his name, Mr. Beast, sounds, for those who aren't in the know, like some kind of awful weird guy, but he's just this young guy, right?
[00:26:44] Jason: Yep, he seems to be, like, who knows. I've listened to some other podcasts that talk about him, and actually even Hard Fork, which we mentioned, I think talked about him one time, his kind of use of YouTube. Who knows what all his motivations are; regardless, he does give away a lot of things, and he seems to be fairly kind to people in that way.
[00:27:00] John: His real name is Jimmy Donaldson, for the
[00:27:03] Jason: Oh yeah, yeah, of course I know that. I follow him on LinkedIn.
[00:27:06] John: oh, you're going to be a gigantic creator on LinkedIn now with the beast.
[00:27:11] Jason: Our connection is pending, so yeah, remarkable. My kids watched Rhett and Link throughout. Do your kids watch Rhett and Link?
[00:27:20] John: Okay, they're at 35 million, second place, but they're 50 million away from Mr. Beast.
[00:27:24] Jason: Yeah, that's wild. I think that points to the fact that ethics is a huge topic right now, and one of our last podcasts was about this: we can't rely on the companies coming up with the ethics to guide us.
[00:27:38] John: No.
[00:27:39] Jason: Partly because it won't be comprehensive enough. It's one thing if Apple comes up with some ethics, or Microsoft.
But not everybody's gonna abide by these rules, and there's gonna be so many startups that would
[00:27:54] John: Mm-hmm.
[00:27:55] Jason: just do an end run around any of these kinds of companies to get a few more views.
[00:28:00] John: Yeah. As we talked about in that episode on ethics, I think we've got two sets of ethical books going: one by the companies, to be sure that they can sell as well as possible. I'm calling those the less ethical set of books. And there's a public persona of wanting to be safe, so they put in enough guardrails, through their red teaming and things like that, that we can't get instructions to do awful things. But then they stop right there. After that, you're on your own.
[00:28:28] Jason: Yeah, and depending on what AI you use, and you can always find one that can do what you want it to do.
[00:28:33] John: That's right. Or you download your own LLM, you get a Llama and run it on your own. And then there's no guardrails, no red teaming.
[00:28:41] Jason: It's crazy. I had a little bit of space this week to follow some rabbit trails, and one of them was looking at Hugging Face, trying to understand a little bit about what it's all about. It's a place where you can actually download models. So, you talked about this one model. But have you been on here? Should I ask the question?
[00:28:59] Jason: Should I quiz you on this?
John: No, do not quiz me on this.
[00:29:02] Jason: For those listening, I won't quiz John on this, because it's hard not to be in the know sometimes about Hugging Face. I didn't know; I had no idea that this was going on.
[00:29:14] John: I just want to say that I'm comfortable being in the dark around you. Because you're kind to me.
[00:29:19] Jason: Oh good, that's great. And I put this out here to say I'm oblivious, and I don't really understand all the implications of this.
However, right now on Hugging Face, which is almost more of an open-source AI model aggregator, there are currently... and I'll take a pause here. Podcast listeners, guess, to yourself or to somebody you're listening with, maybe say it out loud: how many LLM models do you think there are right now to download on Hugging Face?
[00:29:49] John: Okay. And while people are thinking about that, I will too. So, what you're saying is that Hugging Face sounds like it's kind of a marketplace for large language models, where you make your own, I'm air-quoting, "GPTs," and then you can go get one and download it and run it yourself.
[00:30:06] Jason: Yes. I would call it more of a GitHub than a marketplace. I didn't say anything was for sale. And so, it feels like GitHub when you get there, where you can do different forks of different LLMs.
[00:30:17] John: And on this LLM landscape inside Hugging Face, do some of these have special purposes? In that way, they're like the GPTs that you could make for
[00:30:28] Jason: Exactly. So, all these would have different purposes. These aren't like the big models we're talking about; many of them are leveraging those big models.
[00:30:36] John: Okay, cool.
[00:30:36] Jason: These are GPTs, many of them, that you can download and use, most of them on your own computer. Your own home computer.
[00:30:45] John: All right. So how many are there out there?
[00:30:47] Jason: Right now, as of today, March 1st, 2024, and this will change, there are currently 531,270 that one could download.
[00:31:02] John: Little large language models, little AIs
Jason: Yep.
John: that I can then pull onto my hard drive, and never have to get on the internet, and ask it anything I darn well please.
[00:31:13] Jason: Exactly. Yeah. We were talking about our one-year retrospective. Some of our predictions about what we were going to talk about last year were true. But we didn't know we were going to be talking about AI for this long, or that it would move this quickly; that was one of the differences from last year of doing this podcast. Here's what I think happens with all these creative elements: it's going to start with some professors thinking, "I don't need a production company to help me do these things," and they're going to create, maybe just for fun at the beginning, a theme song for the class, or a video of them teaching the class in Mandarin, or the class being taught by some historical character with their voice to it, or using some AI-created images in their slides, which is already happening, right?
And at first, it's going to be a little gimmicky. And then we're going to cross a threshold where, A, it's no longer gimmicky, and, B, it actually starts to affect workflow and the people that we use for doing this work, particularly at large institutions. What do you think of that kind of prediction?
[00:32:27] John: I don't know; we'll have to see. Based upon some of the surveying I'm doing before I go talk with groups, about whether or not they've ever even used a large language model, used ChatGPT: 50% routinely state that they either have never used it in their lives, or have used it once or twice ever in the time since it came out.
[00:32:51] Jason: So that's over a year.
[00:32:52] John: And so, if half of our educators out there are in that space, then I don't think they're going to be using these models in any deliberate way to advance their teaching and learning goals. They'll be using them as the platforms, like we talked about before, start to integrate these tools; that's how they'll get used.
[00:33:14] Jason: Yeah. I think you're right. Yeah. The average professor, I agree, is not going to be going into Hugging Face and probably downloading and creating.
[00:33:22] John: I was just going to say the same thing. I'm crazy enough to go do that, but no one else is. Not no one else, but I don't know anybody but maybe you, in my circle of friends and colleagues, that would do that.
[00:33:35] Jason: Yeah. And even I don't think I would use it, except out of curiosity, to see what's going on and so that I can understand it. Actually, this may seem strange, but my tinkering is actually a leadership mechanism for myself.
I think part of my job is to be able to see down the road a little bit. And to be able to anticipate it and figure out how we're going to react and how we're going to guide this whole thing.
[00:34:04] John: I agree with you 100%. And actually, I coined a little term in the AI bootcamp that I'm doing now: "you have to try AI before you guide AI."
[00:34:14] Jason: That's good.
[00:34:15] John: Because how can you talk about the direction in your institution, organization, department, unless you've tried it out yourself and can talk about what you know of its ramifications or even how you feel about it.
[00:34:27] Jason: Yeah. Yeah. Unless you understand where it's at, where the power of it is at right now then yeah, your ethical guidelines are going to be all over the place.
You're not going to really be able to hit the mark, especially when, as we talked about in our previous episode, we're talking about contextual ethical guidelines that really have some teeth and examples to them.
[00:34:49] John: That's the key for the contextual guidelines, because our universities have broad institutional guidelines, but what happens once the classroom door is closed? Completely different matter.
[00:35:00] Jason: So, I agree with you about Hugging Face. We're going to get a few crazy people like us poking around with this stuff. Now, Canva, though: think about how many teachers are using Canva. You can get a free educational license that gives you extra stuff, the templates are there, and it's crazy what it can do.
They have AI baked right into it, right? Zoom, it's baked right into it. If you have an Adobe license now, Firefly is baked right into it. This week they have a music production thing that they're starting to demo. I think those are the places where it's going to sneak up on us, in ways where it's not going to go through a regulatory body deciding whether or not we can use this software.
It's going to be on our computers with the next update.
[00:35:43] John: Yeah, no, that's my point entirely: this will just become embedded in the ways of working throughout. I was talking with some folks from a two-year college in Texas where 94% of their graduates either go on to a four-year institution or go into the world of work.
And so, they've got an academic side, where they give an AA, an associate degree. But then they've got the welding and the HVAC and all that, and they're painfully aware that AI is going to be embedded across both those paths for their graduates. How is that going to look? And how should they be thinking about preparing folks for the world of work?
[00:36:19] Jason: Yes. And I think that part of our job as educators, of course, as we've talked about, is not just disseminating knowledge, right?
It is preparing students for not just a vocational life, but the life ahead of them, right? I think part of our mandate is that we are forming students at significant times in their lives, whether they're just coming to college for the first time and they're 18, 19 years old, or they're adults coming back to college and trying to re-equip themselves for the life ahead of them.
These are significant times in these people's lives. And we owe it to them to prepare them for the world that is out there right now, and the world that is coming.
[00:37:00] John: yeah. Agreed. Agreed. I can't do better than that.
[00:37:04] Jason: On that note, thank you for listening, everybody. And John, thank you for having me. This is the good thing about doing it face to face: you know that there's a real person at the other end.
[00:37:15] John: Yes. Yes. It is better than Zoom in a lot of ways.
I can put my hand up and tell you I want to say something, and then, yeah, it was pretty good.
[00:37:25] Jason: Yep. And you can pass me a pen out of your enormous collection.
[00:37:29] John: I have an enormous collection of pens inside Lucite makeup brush holders. They're beautiful. Cool.
[00:37:36] Jason: Thanks for listening, everybody. You can get the show notes, and thank you to everybody that has listened and commented and encouraged us since last year.
It's been exciting to be part of this, and that excitement partly comes from you. If people weren't listening, actually, we might keep doing this.
[00:37:52] John: Actually, we might. That is a thing about our lives.
[00:37:58] Jason: I don't know. What do you think, John? If nobody was listening, would we keep doing it?
[00:38:01] John: Maybe. Maybe not.
[00:38:04] Jason: Maybe for a little while?
[00:38:04] John: Maybe for a little while.
[00:38:06] Jason: We'd just get rid of the microphones and just have a conversation.
[00:38:07] John: Yeah. How do we even know if anybody's listening? What's our threshold for "anybody's listening"?
But if you get a chance, we would love it if you'd go out there and give us a rating and leave us a comment. We like to feed the algo, as we say, but it also helps us know that you're out there, and it helps us get out to more people. So yeah, leave us a rating and a comment, and we'll get back to you too.
[00:38:28] Jason: Yeah, absolutely. We do get back to you, and it won't be AI. And join us on LinkedIn; we've got a little community there, and you can find us there to message with and to see other posts. And I'm gonna say, John's a good one to follow on LinkedIn. He's creating some incredible content these days.
A lot of it is around these conversations. So, I would highly recommend at least going on to LinkedIn and following John. If you don't want to follow me, that's fine, but at least follow John, because he's got some good stuff.
[00:38:55] John: I recommend you follow Jason as well because he goes into more rabbit holes than I do, so I think that, and they're illuminating.
[00:39:02] Jason: Yeah. I don't know about illuminating, but I definitely have some rabbit holes. Mine tend to be less structured and thought out; it's just what I'm thinking about in that moment, and I post something off of my phone. And at that: happy anniversary, John. This has been great.
John: Happy anniversary, Jason.
(bluegrass style AI created song outro)
Talkin' 'bout human-centered online learning, y'all And ethical tech use, that's what we're all 'bout
[Chorus] Come on now, let's sing it loud and clear Human-centered learnin' and ethical tech use right here Belonging is the key, come join the crowd Discover new knowledge, sing it out proud (yeehaw)
Wednesday Mar 20, 2024
EP 25 - AI Guidance from Oregon State University Ecampus with Karen Watté
In this episode, John and Jason talk to Karen Watté, the Senior Director of Course Development and Training at Oregon State University’s Ecampus about their free tools for AI guidance in higher education and how to humanize online education. See complete notes and transcripts at www.onlinelearningpodcast.com
Join Our LinkedIn Group - *Online Learning Podcast (Also feel free to connect with John and Jason at LinkedIn too)*
Links and Resources:
Oregon State University - eCampus AI Tools: https://ecampus.oregonstate.edu/faculty/artificial-intelligence-tools/
Michelle Miller’s Newsletter: Teaching from the Same Side https://michellemillerphd.substack.com/p/r3-117-september-15-2023-reflection
OSU eCampus Readiness Playbook https://ecampus.oregonstate.edu/faculty/artificial-intelligence-tools/readiness-playbook/
Transcript
We use a combination of computer-generated transcriptions and human editing. Please check with the recorded file before quoting anything. Please check with us if you have any questions!
[00:00:01] Jason Johnston: I picture everyone in Oregon in Log cabins and so on. Is that correct?
[00:00:04] Karen Watté: no, not at all.
[00:00:06] Jason Johnston: What?
[00:00:07] Karen Watté: I always tell our candidates who are coming, we have the best of both worlds. You're an hour from some beautiful ski areas; you're an hour from the coast. And boy, if you wanna see the desert, you just head on a little bit further, and we've got the high desert. So, we've got something for everyone here. I've lived other places too, and I come back and say, oh, this has got it all.
[00:00:31] Jason Johnston: I grew up in Canada, and sometimes we would talk to people about the igloos that we lived in and having to check our dog sleds at the border and those kinds of things. Sometimes they believed us, sometimes they didn't.
[00:00:44] Karen Watté: Yeah.
[00:00:45] John Nash: I'm John Nash here with Jason Johnston.
[00:00:48] Jason Johnston: Hey, John. Hey, everyone. And this is Online Learning in the Second Half, the online learning podcast.
[00:00:53] John Nash: We're doing this podcast to let you in on a conversation that we've been having for the last couple of years about online education. Look, online learning's had its chance to be great, and some of it is, but there's still a lot that really isn't. So, Jason, how are we going to get to the next stage?
[00:01:08] Jason Johnston: That is a great question. How about we do a podcast and talk about it?
[00:01:13] John Nash: I love that idea. What do you want to talk about today?
[00:01:16] Jason Johnston: I am really excited to be talking today with Karen Watté. She's the Senior Director of Course Development and Training at Oregon State University's Ecampus. Welcome, Karen. How are you?
[00:01:28] Karen Watté: I'm good. Thank you.
[00:01:29] Jason Johnston: We connected at OLC, the Online Learning Consortium conference, as part of the leadership day that they do ahead of time. And it was very fortuitous, I think, because we had just come through this summer where everybody was scrambling around AI, trying to figure out what to do. And while we were trying to come up with some ideas, all of a sudden Oregon State had a full-fledged website built out with resources and stuff like that.
And over here at the University of Tennessee, we're like, this is amazing. It was really well done. So, we got chatting about that at OLC, and then we got chatting about being on the podcast. So, thanks for joining us, 'cause I'm really excited about talking with you today.
[00:02:10] Karen Watté: Yeah. Thanks for inviting me. Glad to be here.
[00:02:12] Jason Johnston: Tell us a little bit about what you do at Oregon State and your role there.
[00:02:17] Karen Watté: Yeah, as you mentioned, I'm the Senior Director of Course Development and Training with eCampus. At Oregon State, eCampus is a centralized distance education unit, so we're serving all of the colleges within OSU. We have about 13,000 fully online students that we serve; about one third of all the students enrolled at Oregon State are fully distance.
[00:02:42] John Nash: Wow, a third of them. Do you know what the history is of deciding to do a centralized distance learning unit? I know some campuses do that and some campuses don't, and I'm curious a little bit about that.
[00:02:54] Karen Watté: We've been in online learning for quite a long time, 20-plus years. Oregon State is the land grant institution in Oregon, and maybe 25-plus years ago we were doing television-based learning, sending it out to everyone in the state. That unit, of course, was extremely small, and as online learning developed, it changed and morphed into what it is today.
So, it's always been that central support unit, and the way the funding was established at OSU to support that unit encouraged it to remain centralized.
[00:03:33] John Nash: I see.
[00:03:34] Karen Watté: It's been a really nice advantage, I think, for OSU to have that centralization.
[00:03:38] John Nash: Yeah, I get the sense that there are advantages to it. My institution isn't so centralized. It still has a unit that supports that, but it's not connected to tight instructional design support, and I'm sure there are disadvantages to that. You said something that was interesting: we're the land grant institution here at the University of Kentucky, too, and it's something about funding from 50 years ago that seems to set these things in motion. So, it sounds like that was a centralized sort of ITV unit,
and then it moved into what it is now. It's interesting. It's more decentralized here.
[00:04:13] Jason Johnston: Yeah, and we are also the land grant here in Tennessee, so I think we've got a common thread here. And as we've talked about, becoming really a modern land grant, some of it is strategically thinking about how we're going to continue to serve everyone in Tennessee, right?
In the olden days, that was setting up outposts in every county; we've got 95 counties, I think, in Tennessee. And in these days, we're talking a lot more about online learning and about trying to connect. There are almost a million Tennesseans who started their undergrad degree and didn't finish it.
And how do we serve those students in 2024 to help them move forward? So that's good. I knew there was something else that probably connected us on a deeper level, and it's that land grant, I think.
And you direct the course development and training. So, does that mean both developing the courses from a production standpoint, and also professional development for teachers?
[00:05:16] Karen Watté: Yes. My particular team has about 45 professionals. We're about half instructional designers, and the other half is a media development unit. We also have a handful of folks that focus just on faculty development. Our media unit does videography and animation, and we have quite a number of programmers. So, we do a lot of work; we're basically the faculty-facing side of Ecampus.
[00:05:43] Jason Johnston: And so how many are dedicated then, within your 40-some-odd, to professional development?
[00:05:49] Karen Watté: In terms of just doing faculty development and training, I would say we have about 3 individuals that really focus on that, but all of our instructional design staff, as part of their duties, also provide training and support. That could be one-on-one, but it could also be assisting with specialized trainings that we're putting together for faculty as well.
[00:06:13] Jason Johnston: So, did you get to this role through a faculty pathway, or instructional design, or media, or how'd you get here?
[00:06:21] Karen Watté: I have a unique background. Years ago, in the early 2000s, after I got my MBA, I was actually working in private industry as an operations manager for FedEx Logistics, which was embedded into Hewlett Packard, which, if you are aware, we have a huge Hewlett Packard facility here in Corvallis, Oregon.
And then prior to coming to OSU, for about seven years or so, I was actually faculty at a local community college in their business technology and computer systems department. And then I went to OSU about 15 years ago, and I started in faculty development and training with Ecampus, really establishing the foundational trainings that we base a lot of our course developments on today.
And then I just moved up as eCampus has grown, because eCampus has grown quite dramatically, and I would say in the last 10 years especially.
[00:07:17] John Nash: What infrastructure was in place for you to come into your role at OSU and start to do that training? Or did you bring your experience from your past positions in and start to develop that?
[00:07:28] Karen Watté: Well, I brought in a lot of my previous experience. When I started, I was the fourth person to be hired into this unit. And so, then we hired on an instructional designer who actually is my supervisor right now, Shannon Riggs, and she and I together crafted the foundational trainings that go into what we provide for faculty today.
And of course, there's been many improvements since. We've brought on very skilled people, and they've added to this suite of trainings, but we started it about 15 years ago when we came in. She had come from a Quality Matters institution. I, of course, had a background in training, both in private industry and then at the community college as well.
And together we put this program in place.
[00:08:20] John Nash: Yeah. And then together you've grown it. What did you say? 40 folks?
[00:08:25] Karen Watté: On our team, I have about 45 folks. Ecampus as a whole is slightly over 100 staff.
[00:08:35] Jason Johnston: And what's the online population these days at Oregon State? I know you talked about it in terms of percentages of Oregon State, but how many are online?
[00:08:46] Karen Watté: So, we have a little over 13,000 fully online students. And like I had mentioned, it's one in every three OSU students now is a fully distance student. But in terms of, how many students do we touch every year? I think our last report showed that we had 29,000 unique students who took an eCampus course because a lot of our campus-based students will also take an eCampus course here or there during the year.
They find it very helpful, and it allows them to have a flexible schedule.
[00:09:20] John Nash: Yeah. Cool.
[00:09:21] Jason Johnston:
Going back to our earlier note about these AI resources, and we'll put the link into the chat for people that are listening. But I just thought there's a number of things on here, and just so people can visualize it even without seeing it: you've got some ethics and principles kind of statements.
But then you get into an AI decision tree, like a guide to how to incorporate AI, or whether you should incorporate it, into your work, as well as a reimagining of Bloom's taxonomy, which is really like instructional design love language. Bloom's taxonomy, we've got a few of them, and that's one, it's up there.
And so I appreciated how you wrapped that into things, just to give people a little bit of a landscape of it. But I wanted to talk about, as we're all dealing with AI at our respective institutions, and John and I are both involved with various conversations around that.
How did this come about? Was this in general, like where was the impetus for this? Is this something within eCampus or was this a kind of a provost said, you must do this, or we'd love for you to do this? Or were the faculty rising up and saying, give us AI guidance, or how did this all happen?
[00:10:33] Karen Watté: Yeah, that's a great question. I think back in winter of '23, we realized at that point that we were just really dealing with a situation that was like none other we had ever seen before. Here's this digital tool, just exploding in capability, faster than anything that we had seen before.
And like many institutions, I think we had sessions, talking sessions with faculty, where we introduced them to this idea. We wanted to have discussions with them. And certainly there was a lot of curiosity out there, but there was also a lot of fear. And so I know that in the early spring, we actually had at least one program leader who said, we're waiting for Ecampus to figure this out. And so there was some real pressure there. But I think I knew at that point, after having a number of conversations, that we were going to have individual faculty coming to us very soon with a lot of questions about: what does this mean? What are the implications of these tools? Should I put them in my class? How can I avoid my students using them?
And so, at that point, I basically said, we've got two things we have to do, and we have to do them very quickly. Number one, we have to figure out what is the Ecampus stance on these tools, because clearly, we were not getting a lot of guidance from any other location. The university did have a small task force, and I was on that task force, and we were looking at what was happening, but there wasn't real action happening in terms of how we were going to support our faculty going into the next year.
And so, number one, we had to figure that out. And then number two, we needed to get some resources in place, because we were going to be providing training and support all through the summer and into the fall for faculty who were trying to grapple with this. And so that's really where that came from. And at that point, I said, okay, we've got a lot of really great thinkers here on this team. A lot of people have done a lot of innovative stuff. I knew we had a lot of folks who were very interested in it on the Ecampus team. And so, I handpicked 12 people based on their diverse backgrounds and what they were interested in.
And I said, you are our AI council, and these are the three things we're going to do. We're going to figure out what Ecampus thinks about these tools, and we're going to take a stand on it. And then secondly, we're going to figure out some kind of taxonomy that will allow us to identify what AI skills are needed. And I had, through some other conversations, been inspired to think about it in that way.
And then finally, third, I needed some practical strategies. We needed a library of strategies that our instructional designers could pull upon as they had questions from faculty. So that's really where it came from, organically, as we were having conversations and knowing that there was this sense of urgency, that we needed to get our house in order so we could help faculty who were going to be coming at us all through the summer.
[00:13:45] John Nash: The tool page, and I'm looking at it now, is notable for a number of reasons from my perspective, and that is, you start with an ethics statement, but then it follows with some principles, and principle number one of seven is be student-centered. Now, when Jason and I, and maybe you, are hanging around having coffee, this seems obvious to us, that this ought to be number one, but it's not, actually, for most people, maybe. Maybe I'm stretching. It's not for many people. At our institution, and as I work also with P-12 schools around how leaders are going to articulate guidelines for AI, they aren't always first thinking about being student-centered; it's more administrative, or it's a lockdown attitude, or it's an integrity issue.
Can you talk with us a little bit about the conversations you may have had, and why being student-centered is number one on the principles?
[00:14:38] Karen Watté: When we were trying to decide what we needed to do first, that was to establish this ethical foundation: what are we going to say we stand for, and what's important to us? And forever, Ecampus has always been student-centered. So, when we talked about what's important to us when we're evaluating these tools and whether we should use them, we went back to OSU values, but also our Ecampus values, which articulate that the student comes first.
We do things for the student. So that seemed like just a natural piece to bring over as one of the principles that we're going to abide by when we're looking at these tools. The other principle I think is very important there on the list is that last one, which is accountability, because I think that kind of wraps up the fact that, regardless of whether you're using AI, the human author is ultimately responsible. So, there's all these other issues that we want to consider, but we also want to ensure accountability for everything that's being produced here.
[00:15:41] Jason Johnston: And just to read that one, number seven, it says: establish accountability. Regardless of how or whether AI is used, emphasize that the human author is accountable for all content produced.
[00:15:54] John Nash: Yeah, that's key. I've been involved with the generation of a document that's going to help our faculty have productive and developmental conversations about their distribution of effort, how you're going to actually work on your teaching, research, and service, and we relied a little bit on AI to help us brainstorm through some of those conversations, to turn a very transactional document into something that's more of a developmental conversation. And yeah, we placed a statement in the end notes about how it was used, but then also that we stand by the facts in the document as authors and contributors.
[00:16:28] Karen Watté: Yeah, so important.
[00:16:29] Jason Johnston: Yeah.
And your number two talks about demonstrating transparency, again along with that: if it's being used and integrated, recommending that faculty are clear in the syllabus that such tools will be used. And that's another place we've been talking a lot with our faculty about, which is that transparency, both on the faculty side but also on the student side, creating a space in which things are transparent. And I think one of the outcomes of that is that you create a more trusting environment.
Along with that, I noticed you don't say anything about AI detectors here on your list. There's no number eight, thou shalt use AI detectors, or thou shalt not use AI detectors. Do you have anything that you are willing to put on the record about AI detectors?
[00:17:15] Karen Watté: We haven't been impressed so far. I'll just say that. I think there is a lot of information out there pointing to the fact that they don't do the type of job that they should be doing, or that they claim to do. And often the bias that seems to come out in their results is very disturbing.
So, at OSU we have stayed away from that. That is not the direction we want to go at this time.
[00:17:44] Jason Johnston: Yeah. We've talked about this before; if you listen to this podcast, this will be the fourth time you've heard it, maybe fifth, but Michelle Miller talks about same-side pedagogy and about, within the classroom, what are we building together? With an AI detector, are we building a community of trust and co-learning together, or are we building a community of distrust and separation between the student and the teacher? And I think, it's a rhetorical question the way I phrased it, but I think we know the answer to that, which is, AI detectors do not help with same-side pedagogy, putting us on the same side as the students, right?
[00:18:27] Karen Watté: And I think, really, I would emphasize just the inaccuracy of these. I was just reading some information from some R1 institutions that have done a little bit of testing in-house, and these AI detectors just don't measure up to what they claim they can do. So it's just best to avoid them for now.
It's not something you want to get into.
[00:18:51] John Nash: For me, it's almost as though your first principle of being student-centered suggests that the AI detectors aren't necessary. That if you're being student-centered, doing as Dr. Miller at Northern Arizona says, having a same-side pedagogy, not an adversarial one, then you're going to be okay.
[00:19:11] Karen Watté: Yes. Yeah.
[00:19:12] Jason Johnston: So, I had mentioned this before: we looked at your decision tree here at UT as we were trying to work as a team to figure out when and when not to use AI in our own work, and then what we recommend as we talk with faculty, because I'm in kind of the same sort of position that you are, in terms of working with course production but also doing professional development with faculty.
[00:19:38] Jason Johnston: It seems like a lot of work to have gotten to this place in terms of the decision tree. Did it come easily as you were going through things? Or did you base it on some other previous kind of work that you had been doing, even just around the implementation of technology, because I think there's some overlap here? Or how did this specifically come about?
[00:20:01] Karen Watté: I think all of the hard work and conversation around what our values and principles will be really led naturally into the creation of that decision tree, because you can see each branch correlates very closely with many of the principles that we identified. So, in that respect, that piece of it was easy, but of course it was vetted numerous times among the small work group that created it.
And then with the larger council. And we added that very first question toward the end of creating it, which is, we must check with the department and the program first. That is always the first step: does the department or the program have a policy in place? At the time that we were creating this, very few had any policies in place. They were still in conversation, but I think that will be changing over time. So, we'll check there. And then the second one, of course, we're very student-centered. So, the second question is, how would this impact your pedagogy? How does this lead to better outcomes?
What is the impact to students? And if you can articulate that well, and it makes sense, then you continue on down through that tree. But those first two questions are critical. If you can't get past those, then you should stop at that point, essentially.
[00:21:18] Jason Johnston: Yeah.
[00:21:19] John Nash: The decision tree, for those who are not looking at it right now, is a guide that was developed by your unit to help decide when and how to incorporate AI into your work. It's aimed at the teacher or the instructor. Is that fair? Or could it also be for an administrator, an associate dean, or someone who's thinking about using it for non-instructional purposes?
[00:21:44] Karen Watté: Yeah, that's a good question. I think it certainly could be repurposed. When we were creating it, of course, it was meant as a guide for our staff and for faculty who are working on course development, but certainly many of those questions are very applicable. If you're looking at AI to improve a business process at the university, you may want to review some of those kinds of questions.
So, I think it certainly could be applicable to other questions, other spaces.
[00:22:13] John Nash: Have you been approached as a unit by folks who are looking to do as you advise here? When an answer is no, your recommendation is to pause and seek consultation, and then with an asterisk, you note that would be consulting a supervisor or other person who can provide expertise. When I think about, for instance, my department: we don't have a policy in the unit. I would consult my chair. They would shrug their shoulders. I might look inside my college; they would likewise shrug their shoulders, and I think this might actually escalate up to maybe our center for learning and teaching or something like that.
Are you seeing similar things and how is this playing all the way down to the unit in terms of people's capacity to look at these questions?
[00:22:57] Karen Watté: Yeah, we've used it in a few different contexts. So, for example, a faculty member came to us wanting to create some AI-supported materials for their course development. The first question was back to the department, and the department at that moment said, absolutely not.
You're not going to do that. So that was the end of that. But then we've had another situation where we had a faculty who came, and they said we would like some graphics created to support this particular concept. And by the way, it's okay if we look at AI image generators to help support this piece.
And so, then we had a conversation within our team, and specifically with our videographer who was helping to pull some of these images together, around, okay, what are the concerns? And let's look at the limbs of this tree that are most applicable here, which of course would be copyright.
How are we certain that there's not a copyright issue if you use this particular engine to develop a few images to support this particular learning object? And so, we were able to clear those hurdles, but this decision tree gave us that sort of framework for the conversation and to ask those kinds of questions.
And so, I think those are a couple examples of where I think it was useful.
[00:24:14] John Nash: Those are great examples, because I think a tree like this really is less about being a dictatorial policy and more a driver to engender conversation around what people want to accomplish. Yeah.
[00:24:28] Jason Johnston: You'd mentioned your media team, and you've got a pretty large team. Have you found a variety of opinions in terms of the use of AI within your own team? You don't have to name names on the podcast. Or have people tended to get behind the same horse on this one?
[00:24:49] Karen Watté: Generally, I think we have a pretty innovative group of people. So, they've been quite open to it and all of that. Although, I will say that we have a couple of instructional designers who are particularly concerned about privacy issues when it comes to using these, and copyright and all of that, which is rightful, and so we've had conversations around that component. They're not quite as excited to start experimenting and putting things up into these systems, which totally makes sense. But otherwise, I would say we're probably a lot more willing to get out there and try things, just because of the nature of what we do every day.
[00:25:32] Jason Johnston: That's impressive, that you've been at this for a little while here at Oregon State and that you continue to be innovative. Only because it feels, and please correct me if I'm wrong on this one, but it feels like our institutions of higher learning, our land-grant, established, longstanding institutions, don't tend to go that way all the time.
They tend to maybe favor the more traditional. And so how do you think you've kept this going? You've been early adopters when it comes to online, and you continue to innovate forward.
[00:26:06] Karen Watté: I think it's just the culture of the unit. Essentially, it started out as this little skunkworks area. We were trying things that no one else would try, and so the university continues to turn to us to do those kinds of experiments when it comes to teaching and learning, and then we're hiring people that have that same mindset.
And we're telling them it's okay to take a risk. It's okay to try something. And if you fail, that's all right, because we're learning. I think it's just the culture, and maintaining that momentum about being innovative, but innovating in a careful way. We are, of course, research-based. Much of what we do, we experiment with when we find that there's a research basis for it.
It's not just the Wild West. So in that regard, we value research just as much as the rest of the faculty at the university, but we do try to push and experiment with new things when we think that there's a valid reason to do so.
[00:27:05] Jason Johnston: So, it's been maybe seven months at this recording since you put these out, which is like 20 years in AI years, I think, right? Is there a calculation for that yet, John?
[00:27:15] John Nash: We could take dog years times cat years and divide by Moore's law. I think we'll get somewhere in the ballpark of that.
[00:27:26] Jason Johnston: Yeah, exactly. I think that sounds right. We'll work on that and, like everything else, put the formula in the show notes.
[00:27:33] John Nash: Yes,
[00:27:33] Jason Johnston: John?
[00:27:33] John Nash: I was going to ask Bard, but I can't anymore because Bard is now called Gemini.
[00:27:38] Jason Johnston: Yes. We'll ask. I've got the advanced version. Anyways, that's a whole other conversation, so we'll talk later. Anyways, back to the question. Since the seven months have gone by, first, is there anything that you would change about what you put out there before?
[00:27:54] Karen Watté: I think we had made it very clear that what we put out there was really a snapshot in time, that this is what we see today, particularly around that Bloom's taxonomy one. This is AI capabilities as they are in the summer of '23. So, we pretty much knew that we were going to have to revisit this in a year or sooner, and we will be reconvening our AI council in the spring to start thinking about what may need to change. Certainly that tool will have to be looked at again. The decision tree, I think, still probably stands as it is; I don't anticipate there will be a lot of change. But again, this is a conversation we're planning to have here very soon.
[00:28:38] Jason Johnston: Are there other ways that you think you might expand? Like, what are some of the other gaps that you're seeing that you would like to help with at your university?
[00:28:46] Karen Watté: Yeah. This fall we had some conversations around helping program leads, department chairs, anyone in a kind of leadership position, facilitate conversations around AI. And one of my colleagues, Dr. Katherine McAlvich, worked up a short guide that she calls a readiness playbook for department chairs.
That's actually posted out there on our website as well. It's about a five-page document just to give some starting prompts, to encourage them to start speaking with faculty if they haven't already started that conversation. Because I think we're getting to a point, very soon, where we're going to see some need for curriculum updates based around this. I'm starting to see case studies about industries and how they're integrating it into work. And so that means that what we teach at the university, or at any institution, is going to soon have to reflect what the reality is out in the workforce. So, I think those conversations, trying to encourage that and get folks to talk about that, is probably the next step.
[00:29:51] Jason Johnston: We look forward to more updates. Yeah, we'll be watching that. Thank you for being open-handed. We're having conversations here about what goes on the web and what doesn't.
And we strongly advocate for sharing resources on the web for others to be able to see, because they're helpful, and we've been helped by yours, so thank you for that.
[00:30:12] Karen Watté: You're welcome. And this is a topic that no one institution can answer, can manage alone. It is such a huge undertaking. We look to all of our colleagues too for help, guidance, and ideas around this topic, because it's certainly a collaborative effort. It has to be. It's just something that's so unusual at this time.
[00:30:35] John Nash: Can we pivot away from AI a little bit and talk about learners?
[00:30:39] Jason Johnston: I guess so, John.
[00:30:41] John Nash: It turns out we didn't mean to, but about half our talk is about AI. Then the other half is actually about learners, I think...
But yeah.
You did an interview in 2017 for the Oregon State Ecampus News, and you were asked what your best piece of advice for instructors was, and you said, "Be sure to let your personality come through in your online course. Communicate regularly with your students and provide them with timely feedback. Your interaction with your students is the most important part of the student's online experience." And it feels like that advice never gets old, but feels fresh to some. Can you just say a little bit more about why this wisdom is so important?
[00:31:23] Karen Watté: Yeah, we survey our Ecampus students every year, and it's interesting to note that even to this day, they continue to say that the number one indicator of their satisfaction in an online course is the interaction that they have with their instructor. So, I would say that our data continues to bear that out year after year.
So, instructor presence is just absolutely critical in an online class. And now you even see this reflected in, the Department of Ed's requirements around regular and substantive interaction, which a lot of folks have spent time thinking about as well.
[00:32:02] John Nash: That first part of your response, which was be sure to let your personality come through: what is some advice that you have for teachers who are thinking about upping their game in that area?
[00:32:14] Karen Watté: We of course love to try to get them on video if we can, at least an intro video in every course. We love to have them do video overviews, if they're willing, for each activity. But then, even if they're not able or willing to do that, they can just infuse their actual personality and their passion for the subject into the announcements that they make, into the content that they're delivering to the students.
So, we really try to work on helping each faculty bring out the best and put their personality into a course.
[00:32:48] John Nash: Fantastic. We see that more and more. I know in a recent episode, we had the privilege to record a session with Johns Hopkins University's online teaching symposium, and their speaker for that symposium was Flower Darby, and she was very clear about letting your personality come through in your course.
And so, it feels like yellow Volkswagen theory. Once you buy a yellow Volkswagen, then all you see on the road are yellow Volkswagens. And so, once you start talking about letting your personality come through in your course, you start picking up on it every time someone says something about it.
But yeah, that's it's really good advice.
[00:33:24] Jason Johnston: And that feels like a good Oregon thing, too, right? Yellow Volkswagens. You have a lot of yellow Volkswagens out there. Is that another stereotype that I have about Oregon?
[00:33:32] Karen Watté: We've got some on the road.
[00:33:33] Jason Johnston: I've got a few. Got a few. Yeah. And along with that too, one of our themes here is talking about how we humanize online learning, right? As John always eloquently introduces us, you know, we've done a lot of things great. And some of it, not so much. And I think one of the places that we want to grow in this next season of online life, now that we can get content to people, we figured that one out, right?
We figured that one out a long time ago. Now we're learning to maybe make it a little bit more interesting and interactive. But how do we humanize it? And I really like that point about making sure that personality comes through in your online course as part of that. Are there other ways, as a group, or in your professional development, or in your course production process, that you help faculty to really humanize their online courses?
[00:34:25] Karen Watté: Yeah, that's a great question. I think a lot of that kind of comes down to just ensuring that you're explicitly designing in opportunities for engagement, because, unlike an on-campus course, where you have that natural opportunity, online it has to be designed in.
And so, as you're designing that in, you're thinking about: is that channel easily accessible to students? Is it easy for the faculty to use? Is it easy to manage while you're teaching that course, the kind of communication that would allow you to connect easily with your students? What does the feedback look like in the class?
What's the pacing, and do you have enough time to provide the kind of feedback that you'd like to provide, so students feel like they're really having a good learning experience and connecting with you? So ultimately, I think a lot of this just has to be built into the course through the course development process, in the conversations that the instructional designer is having with the faculty as they talk about what this course is going to look like when it's actually being taught.
[00:35:34] Jason Johnston: Mm-hmm. Yeah, in this symposium that John had mentioned, there's a bit of a common thread, and one of them was talking about intentionality. That's one thing I really like about course design, instructional design, and the process: we just don't expect faculty to arrive in their online course and for everything to be there and just work. Side note: we shouldn't expect this in their face-to-face classes either, but there's not always a lot of concentration on that. However, we're talking about online here. But I think there's an intentionality about design that I love, and I think that if we can take a step back and think about what it is we're intentionally trying to do here, we can really move the needle.
[00:36:17] Karen Watté: Absolutely. Yeah, it's really thinking ahead. And the lovely thing is that we ask that online courses be entirely developed prior to the actual launch of the class. So, we're not developing them on the fly as the course is underway. And I think that really lends itself to some thoughtful kinds of activities and communications, and I think it just makes for a better opportunity for the instructor to teach well and a better opportunity for the students to learn well, if you have everything ready to go, and then they're not worrying about whether the content's up and ready and available.
[00:36:55] Jason Johnston: This has been a great conversation. I think that's a great place to land. What do you think, John?
[00:36:59] John Nash: I think it's, yeah, the perfect place to land. I think that was very intentional of you.
[00:37:04] Jason Johnston: It was not so much intentional of us; thank you for landing there. I think that is a great place for us to think about: this intentionality and the design, and the students being student-centered, being humanized, in all that we do as we think forward.
Karen, thank you so much for taking the time to talk with us. We really appreciate it.
[00:37:25] Karen Watté: Thank you. I enjoyed this.
Monday Feb 19, 2024
In this episode, John and Jason talk about the ethics of AI, including how ethics are formed and a few scenarios like if it’s ethical to use Midjourney. Listen in to find out who says no! See complete notes and transcripts at www.onlinelearningpodcast.com
Join Our LinkedIn Group - *Online Learning Podcast (Also feel free to connect with John and Jason at LinkedIn too)*
Links and Resources:
Article: Harvard Business Review Ethics in the Age of AI Series: Part 1, Part 2, and Part 3
Article: It's Not Like a Calculator, so What Is the Relationship between Learners and Generative Artificial Intelligence?
Jason’s FAFSA Assistant GPT
”Right Choices: Ethics of AI in Education” - John hosts Jason in an episode of the School Leadership + Generative AI series
John’s School Leader AI Bootcamp
Transcript
We use a combination of computer-generated transcriptions and human editing. Please check with the recorded file before quoting anything. Please check with us if you have any questions!
Podcast Episode on AI Ethics - January 29, 2024
False Start
[00:00:00] John Nash: Should we do the intro?
[00:00:01] Jason Johnston: Yeah, let's do the intro.
[00:00:03] John Nash: I'm John Nash here with Jason Johnston.
[00:00:06] Jason Johnston: Hey, John. Hey, everyone. And this is Online Learning Podcast. The Online Learning Podcast. Let's try it again.
[00:00:12] John Nash: I'm John Nash here with Jason Johnston.
[00:00:14] Jason Johnston: That reminded me of, do you ever watch The Office? My name is Kevin, because that's my name. My name is Kevin, because that's my name. So this is the Online Learning Podcast, the Online Learning Podcast.
Episode
[00:00:30] John Nash: I'm John Nash here with Jason Johnston.
[00:00:32] Jason Johnston: Hey, John. Hey, everyone. And this is Online Learning in the Second Half, the Online Learning Podcast.
[00:00:38] John Nash: Yeah, we're doing this podcast to let you in on a conversation we've been having for the last couple of years about online education. Look, online learning's had its chance to be great, and some of it is, but still a lot of it isn't. How are we going to get to the next stage, Jason?
[00:00:52] Jason Johnston: That is a great question. Why don't we do a podcast and talk about it?
[00:00:56] John Nash: That's perfect. What do you want to talk about today?
[00:00:59] Jason Johnston: John, I've got some ethical questions for you.
[00:01:02] John Nash: You do?
[00:01:03] Jason Johnston: I've been wondering about the ethics of using AI for certain tasks. And maybe we'll get back to some specifics later on.
But how do we form our ethics to begin with when it comes to AI and using AI these days when we think about education?
[00:01:19] John Nash: I'm stealing your line from the intro. That is a great question. How do we form our ethics? I think they're formed by the values and the beliefs we bring to anything we do. You've had a longer background in thinking about and considering ethics, both in your professional life and your education life.
What do you think about in terms of what sensibilities people bring to any task?
[00:01:45] Jason Johnston: Yeah, I think so. I like where you started there, because sometimes people start externally. They think ethics are clear, right? We're not supposed to steal people's cars, and we're not supposed to kill people when we walk in front of them, or whatever. But it's not that clear when it comes to certain things.
Certainly we can follow the ethics of a country or a city or an institution, but AI is something new. We haven't dealt with some of these questions before. And because of that, it does take some ethical reasoning. I happened to talk to a number of PhD students taking an instructional systems design course.
I was asked to come in by one of our previous guests, Dr. Enilda Romero-Hall, to talk about ethics in instructional design. And where I started with that was this question of what we bring to the table. If we can understand what forms our ethics, our beliefs, our positionality to begin with, then we can start to understand why we might have some knee-jerk reactions to certain things.
[00:02:49] Jason Johnston: And we might be more willing to concede on some things for the sake of the common good, as we talk about ethics within a context, or within a group of people, or a community, or what have you.
[00:03:02] John Nash: Do you think the ethics of the companies that are creating these models drive how people feel ethically about using them, or is it the other way around? Did the companies decide they needed to sound ethical because they knew people were going to clamor about whether these models might be used in unethical ways?
[00:03:26] Jason Johnston: Yeah, this is a great question. They've all got them, right? You can look these up: OpenAI, IBM, Anthropic. If you start to read down those ethics, typically you resonate with a lot of them. They're good things, typically, about security and inclusivity and being unbiased and private and so on. But then you've got to ask yourself what is really driving these companies to do what they do, and what is not being said, right?
What's between the lines here, and what is missing? And this is where I think we need to go beyond what the companies are saying and think ethically about our own context.
As educational institutions, I don't think we can just rely on these. Do you think we can rely on these ethics to help guide our use of AI? Are they good enough, John?
[00:04:19] John Nash: Can we rely on them?
[00:04:21] Jason Johnston: Yes.
[00:04:22] John Nash: To what extent? I think, of course, they're a good start. They're a start. Maybe even "good" gets left off of that last statement. They're a start. What's been put out there is certainly not unethical, I don't think. But the companies are no fools.
They know that they're for-profit companies, and if they were to put out statements around ethics that didn't seem to meet what generally accepted moral principles look like, they would be derided in the marketplace.
[00:04:50] Jason Johnston: So do you think these ethical guidelines are crafted by philosophers within their midst or marketing people within their midst?
[00:04:58] John Nash: Certainly, I think it's more of the latter than the former. Many of them are Bay Area companies, and there's an ethos of the Bay Area and these guys and how they think.
I think they probably want to be ethical. Google, now infamously, once said, "Don't be evil." And then of course later got into many different kinds of arrangements that were not unevil.
[00:05:19] Jason Johnston: Yeah, you'd sent me an article a little while ago in the Harvard Business Review. They had an AI ethics series, and I can put the links in the show notes here, where they looked at avoiding the ethical nightmares of emerging technology and questions about AI responsibility. And one of the questions was, what does the tech industry value? And it looked at some of the ideologies around the culture of speed.
And so I think my question with some of these, if you look at any of these big companies, Google, IBM, Anthropic with Claude, OpenAI: they have a list of ethics, but I think we always have to ask the question, what's not there that's driving them? And I think this is one of those things, this culture of speed, and the fact that it almost seems like their guiding point is that we need to do this as quickly as possible and get out there in front of other people. And that guides them ethically in terms of the choices that they make.
[00:06:22] John Nash: I agree with you. I think that they have two books of ethics, maybe, almost as though, like a business that's got a second set of books. So they've got the public ethics around keeping people safe and data safe, and ensuring that the responses of our machines, which are very human-like in their responses, are safe. And then the other set of ethical books says we need to move on this like our board members want, because shareholder value.
[00:06:52] Jason Johnston: Yeah. Yeah. And because of that, they may be willing to let some of those guardrails down a little bit to allow for the speed. And some of these posthumanist or transhumanist kind of people that are running a lot of these companies, from an ethical standpoint, they're taking more of a teleological approach, which is just looking at it as the ends justify the means: if, in my mind, this is going to improve society so radically, then we're willing to let a few things slide here along the way.
And I think that's where the speed comes in, is that if we can get there quicker, and we can improve society sooner, then we're willing to let a few, little ethical oversights go by while we're building whatever it is we're building.
[00:07:42] John Nash: Yes, because if you take what Marc Andreessen recently said, there is a belief amongst some of these founders that they are actually saving the world, that these are technologies that are going to save humans.
[00:07:56] Jason Johnston: I resonate with that idea of there being two books, and we've got to ask what the closed book, the secret book of ethics, is, and what the open book of ethics is. The open book of ethics almost always now talks about safety and inclusivity and privacy and these kinds of things, whereas the closed book probably holds more things like speed, about having a perception of what the public needs in order to adopt it versus it actually being there.
So, managing basically your market and managing what the market perception is of a particular thing is more important in these cases than the actual thing itself.
[00:08:44] John Nash: Yeah, or what problem the thing is solving. We've not been privy to the real internal discussions at, say, OpenAI when they said we will publicly release 3.5. I don't know what problem they saw being solved in the marketplace by releasing this.
[00:09:01] Jason Johnston: Right?
[00:09:01] John Nash: I don't know that there was one exactly, except that it's just, it's a fascinating technology and fun to play with and mind blowing.
But that's about it. And yet they were able to monetize that, because people wanted to play with it and actually do work with it. Yeah, I think these were all products that were solutions in search of a problem.
[00:09:21] Jason Johnston: Yeah, it's strange. And this is what makes it really unlike a lot of other inventions. And I think because it's so open ended, it's so user driven,
[00:09:30] John Nash: Yes.
[00:09:31] Jason Johnston: And inquiry based that it doesn't need to be a solution to any one problem. That it's like an open ended potential solution,
[00:09:41] John Nash: Yeah, unlike the sundial, or the scientific calculator, or the phonograph, or the chalkboard, or go on. Yeah.
[00:10:05] Jason Johnston: On paper and in their heads, but you're right.
We continue to press math forward. However, again, here's a piece of technology, just like you mentioned there, with a very specific use in mind, right? And it has certain limitations to it. I think AI is more like the internet, where it's wide open, or, as a recent article I read said, it's more like electricity. Somebody told me this about their grandfather who lived in Eastern Kentucky and didn't have electricity, and they were asking, do you want us to run the lines out to your home?
And he's like, why would I need lines? I don't have anything to run on the electricity. Which is true, right? It's an absolutely true statement. But electricity was almost like a solution without a problem, because as soon as you got it, then you figured out ways to use it.
[00:10:51] John Nash: I've been wrestling in my head whether or not this is like a utility. I don't think it's necessarily a public good, but people are paying for it like it's a utility: they pay a monthly fee, like they pay for their electricity, they pay for access to ChatGPT-4. But in doing so, is it just creating a situation where people need to get a bunch of stuff or do things that they didn't necessarily need?
[00:11:19] Jason Johnston: Yeah, I think my own use of it is probably a mixed bag. I sometimes come away and it feels like I've been on the internet and didn't get anywhere and then sometimes you go on the internet and you get some places, right?
[00:11:30] John Nash: Right.
[00:11:30] Jason Johnston: And you find the answers that you need, or sometimes you get lost in a string of cat videos and you don't know how you got there.
And I feel like, because it has such a lack of focus, there's a lot of experimenting still to be done with it that doesn't necessarily give you helpful results for your time investment.
[00:11:50] John Nash: What do you think about the ethics of all of the little GPTs that are getting built in the marketplace? Some of them are completely frivolous, some of them are a little malevolent, others could be useful. Do you think that the people who create a little GPT also need to have an ethical code?
[00:12:12] Jason Johnston: Yeah, that's a great question. I think, and this could lead into some other discussions about more contextual ethics. I do think that one can rely a lot on whatever the bigger ethics are in the system that you find yourself in, or the community, or the organization, or the country. So they can rely a lot on those larger ethics, but typically those larger ethics are general enough that they cannot always be helpful to guide what you should and shouldn't do in the specifics. Does that make sense?
[00:12:51] John Nash: think so.
[00:12:53] Jason Johnston: So, like, maybe somebody running a little GPT might be generally guided by a care ethic, or an ethic of how it might respond about certain races or stereotypes or people or whatever. I think it behooves the person who's making it to ensure that's true, to do enough testing, and to think about enough use cases that might be used to get around these kinds of general ethics, to help guide it and keep it on track.
I really think a lot of people don't really start with ethics when it comes to developing these things. I think it starts a lot with innovation, which is okay. I understand that they're trying to, like you said, solve a problem. I've got a, this is a good time to plug my own GPTs, so people can use them.
And I don't know, is this some sort of pyramid scheme? If I get people to use my GPTs or make GPTs, do I make money off of their GPTs?
[00:13:47] John Nash: Yeah yeah, no, I don't think so.
But I think you should, if you'd like, I would pose that to OpenAI to see if
[00:13:54] Jason Johnston: Really, I'm trying to find certain solutions. So I made a GPT because I've got questions: my kids are coming of age, and I've got FAFSA questions. So I made a FAFSA GPT that is trained specifically on the information from the government so that it could answer questions from a reliable source.
And I think it was helpful for me personally. And so maybe it'd be helpful for other people, but honestly, I didn't really necessarily think of the ethics of that. It was just a utility.
[00:14:27] John Nash: You did think about the ethics tacitly because you wouldn't punk your kids on the FAFSA GPT.
[00:14:35] Jason Johnston: that's true. And I said things like, yeah, I think there would be maybe some specific ethics that we know, for instance, the, of the many qualities that GPT had, especially in the beginning, we still know that it can be very confidently wrong, right? And a lot of the other things it's grown away from, but it still can be very confidently wrong about certain things and it can hallucinate and so on.
And so I told it specifically to only give truthful answers, and if it doesn't know, to say it doesn't know, those kinds of things. Whether or not that works, I don't know. Sometimes it does, I think; sometimes it doesn't. But by guiding it to only use these resources, bang, then hopefully it will provide what I was hoping for: a truthful answering of my questions, for myself and hopefully for other people, so people wouldn't get steered wrong. So I guess you're right, yeah.
[00:15:23] John Nash: Yeah. So what do you think our advice is for teachers as they think about how they might integrate ChatGPT, Claude, or other large language models into their work routines? Either as an instructional design assistant, which is how I use them a lot, more that way than as a tool for my students to solve a problem, or for students doing their work, or some hybrid of both. If we're thinking about our notion of being human-centered in our work, and encouraging others to be that way, what do you think we should say?
[00:16:06] Jason Johnston: Yeah, that's a great question. I would say on the front end that, whatever institution or community you're in, we should be at the place where people have some pretty clear ethical guidelines to guide them as a community, some principles that were agreed upon by a number of stakeholders across the community or institution, that could be more general. I was very thankful to be part of a committee that developed some of these principles at UT, which can be really guiding principles. And so there are things like "we use AI intentionally, it's human-centered, it's inclusive, it's open and transparent, we engage with it critically," and so on.
But then, when I'm working with my media team and my instructional designers, as we're talking about the use within our day-to-day work, I found that these guidelines were good, overarching guidelines for us that we could all agree upon. But then it came down to really specific kinds of questions that we needed to talk about.
For instance, do we use AI image generators, right? And if we do, which ones do we use? Do we open-handedly use them? Do we just use specific ones? Are we concerned about things like copyright? Are we concerned beyond copyright? What other questions do we have in our smaller community? Questions that didn't even come up around faculty, around creative works: not just about whether or not copyright is taken care of, but is there work creep happening when this person who's not a graphic designer uses AI to create graphics where another human would have typically done that, right? And so it starts to create much more of a specific kind of context for principles. And we were able to come up with, and we're still working on, some more guiding principles that can help inform our day-to-day work within our team.
[00:18:02] John Nash: Yeah, the graphic example is great, because if you've got graphic designers and illustrators on your team, they take a brief from a client, they have to interpret that contextually, and then they create an illustration, let's say. If they or someone else uses an image generation model like DALL-E or Midjourney, they put in a prompt and it puts out something technically beautiful and maybe aesthetic, but does it hit the mark in terms of the contextual interpretation that the client desired?
That's very different. And if it can be created, and say it does hit the mark, and it's created by someone who's an 18-year-old intern, let's say, that you hire, you have a new power dynamic problem. Now we're back to my original problem, right?
[00:18:49] Jason Johnston: Yeah.
[00:18:49] John Nash: You are usurping traditional power dynamics about who's supposed to do what.
[00:18:55] Jason Johnston: And that's where it becomes so contextual, right? Because, as you said, yeah, there are a lot of ethical ways that you can talk about this, right? There's the copyright part of things. You can just lay that aside and say we're not gonna cross copyright laws, and so we're just not gonna do it at this point, or whatever.
But there are other ethical considerations beyond that: someone's livelihood, potentially; there could be some power dynamics; there could be some lack of care and respect for people who have done this job for a lifetime, and they're trained to do this, and they have the tools, and then all of a sudden some idiot with a Midjourney account
[00:19:28] John Nash: Yeah.
[00:19:30] Jason Johnston: thinks that they can make graphics better than they do, and it's just not kind. And so I think that there are many ways to look at that. Now, there could be another situation where somebody has a one-person shop, and they're doing tech, they're doing instructional design, they're doing a little teaching and professional development, and they're expected to do graphics on top of this, and they don't have the budget. They've been told you can't hire anybody else, you don't have the budget, whatever. It may be in those situations that the ethical thing to do could be to go ahead and use those graphics.
[00:20:00] John Nash: You've hit the nail on the head. Context is everything. Because you're right: if you're a solopreneur who, say, makes logos for a living, then you are doing client development, you're doing billing and invoicing, and you're doing the creative work. I think you're probably using LLMs and image generation models all day long to help manage that process.
But that's different from a general ethic of care for just understanding how to deal with humans in the context of an organization, and whether you usurp their work without talking to them.
[00:20:32] Jason Johnston: Yeah. Yeah.
Let's do one other thought experiment here. Actually, I'll do two thought experiments.
[00:20:38] John Nash: A 20-year-old junior at university uses an LLM to critically examine the assignments given to them by a professor and writes back, giving them a critique of how the assignments don't really help them achieve the learning goals intended for the course.
Or a parent decides to write the lesson plans for a 10th-grade English composition teacher.
This sort of power still sits there. And so could a teacher's aide do the design work for a course instead of the teacher? Or should they? I think those are leadership questions. Those are ethical questions. Those are organizational culture questions.
[00:21:18] Jason Johnston: Yeah. I liked how your sentence changed there, because this is a great indicator that we're doing some moral reasoning. A great indicator that we're doing some moral reasoning is when your question shifts from could to should,
right?
And so could that parent do that?
Yeah, they certainly can. Everything's there. Should they? That is the ethical question, and I think that takes some reflection. It probably takes some conversation, perhaps, even, to be able to work in empathy with other people. And so if we're trying to follow an ethic of care, then empathy is pretty high up there in terms of understanding.
And I'll be honest, and this is also completely contextual. I'm not saying anybody else should do this, especially present company, but I canceled my Midjourney subscription. Hands down, it's making the best AI images out there, without question. It was worth it to me from that standpoint, and so on. But I canceled it because of some of these conversations I was having with creatives, and it didn't feel good anymore to have it.
[00:22:32] John Nash: Say more. In what context? Like, would you stop using DALL-E now? You could still make images with Midjourney without a subscription, right? And even if you can't, I'm just curious: would you never use it under any circumstances now?
I guess is what I'm trying to understand.
[00:22:51] Jason Johnston: In my current context, the things that tipped me over were some of the copyright issues, in terms of using artists' work without their payment or their knowledge, which didn't feel good to artists in general. And the fact that I was paying for it as well, so somebody's making some bank off of this, right?
And so it's not experimental. This is a business. And then really thinking about this idea of why, and should I be, right? Why am I doing this? Do I really need to be making images of this high quality? If it's important to somebody else that I'm doing this, is it that important to me that I'm doing it?
So that was my reasoning around it. I'm not saying I would never, for any circumstance, but it was partly a little bit of a statement, to be able to say, oh yeah, I just decided not to. It was an interesting experiment for a few months. And we have an Adobe Firefly subscription.
They have an ethic that includes paying artists and only using works that they have full license to. And it's not as good, but I'm willing to do that for now if I need to use AI. And to be thinking about, if there is anything that somebody who has the skills should be doing, what place do they have in all this?
Should I be giving them opportunity and chance to do this?
[00:24:20] John Nash: Fantastic rationale. Yeah, you've convinced me I need to think about dropping mine.
[00:24:27] Jason Johnston: Again, I believe it's context. I think that people need to think about it for themselves. I'm not going to go around wagging my finger at people via LinkedIn about it, although I have considered at least putting my thoughts out there. So maybe this will spur me to put some of my
[00:24:41] John Nash: Well, you know, there's nothing worse than a reformed anybody.
[00:24:47] Jason Johnston: That's right. Nobody wants to talk to that person. Yeah.
This has been good, John. I feel like we've covered a fair bit of ground. We partly started talking about this because we did a video, which we'll also put in the show notes, where you and I broke some general ethics down in about 15 minutes.
You invited me to come talk to you, and this is part of a boot camp you're doing as well, tied in with that. Perfect.
[00:25:12] John Nash: Yeah, you and I had a chat in a series I've launched called School Leadership and Generative AI, all in about 15 minutes, where we cover pretty big topics on the top of mind of school leaders, but we get to it as quick as we can so they can gain some ground on some of these bigger issues. I did one with Dr. Kurt Reese on data privacy with students. And then, yeah, with you on ethics. And it's connected to my School Leader AI Boot Camp that I've got on Maven that people can enroll in; we'll put a link to that in the show notes too. But yeah.
This was a good conversation today, I think. It made me rethink some things.
Made me really think about context.
I was going to say earlier too, maybe we fit this into the other part of the conversation. There were some articles six months ago or so, maybe, about a firm in China that was going to have its CEO be a generative AI bot, and it was going to run the company.
And I don't know where that's landed since, but it made me think: could or should an AI bot run a school district? Could it even run a school? Could we have an AI LLM provost at a university? How difficult are those decisions anyway? That'll rankle some folks, my just even asking, but I think it's interesting to think about, because this is the direction these are going.
Already with the terrible news of deepfakes coming out around Taylor Swift and others, and then with the election coming up, with malevolent actors using these tools in bad ways, I think we're on the cusp of seeing the same sort of thing happening for leadership in organizations. Maybe not malevolently, but it's going to be there.
We're going to have avatars that look very real, that'll get past the uncanny valley, that will be driven by large language models that sound like they know what they're doing. So I think another level of ethical discussions is coming around how badly we need all these personnel.
[00:27:15] Jason Johnston: Yeah, all of those coming along. I'm convinced more than ever that we need to be thinking ethically about these, and not just thinking about it for ourselves but talking about it in our communities, coming up with standards that we can support one another with, and bringing all kinds of people into those circles so that we can think about not just ourselves and those ethics, but how it affects the people around us.
[00:27:39] John Nash: Yeah.
[00:27:40] Jason Johnston: Yeah, this is good. Thank you, John, for this great conversation. And all of you, if you want the show notes, we're at OnlineLearningPodcast.com, and you can check out all of our podcasts there as well as show notes. Thanks for listening. And as well, if you have a chance, if you find us on Apple Podcasts, you can leave us a review and send us a note there.
You can always find us on LinkedIn as well and connect with us there. We've got a community there too, and the links are in the show notes.
[00:28:10] John Nash: Is it ethical for me to say that we found out that the algorithms like it when people go on Apple Podcasts and rate us and leave a comment?
Or is that just stating a fact? Am I just stating a fact without ethical considerations? It's okay to state
[00:28:28] Jason Johnston: I think that if it's true, it's ethical. And the fact that we're being transparent about this: we would like you to leave comments, not just for our own egos, but also to help the algorithm so other people can find this podcast. So, yeah, as long as we're being transparent, I think that's ethical, right? All about the algo.
[00:28:49] John Nash: Cool. Talk to you later.
[00:28:51] Jason Johnston: Talk to you soon. Bye.
[00:28:53] John Nash: Yeah, fun. I'll talk to you soon.
[00:28:55] Jason Johnston: Bye.
Monday Jan 29, 2024
In this episode, John and Jason close off the 2023 Johns Hopkins University Excellence in Online Teaching Symposium with a live podcast recording, summarizing the day’s sessions and interacting with the audience around 6 Pillars of Humanizing Online Learning in the Second Half. See complete notes and transcripts at www.onlinelearningpodcast.com
Join Our LinkedIn Group - *Online Learning Podcast (Also feel free to connect with John and Jason at LinkedIn too)*
Links and Resources:
6 Guideposts - Slide Deck (via Gamma.app)
Johns Hopkins Excellence in Online Teaching Symposium
Jana Lay-Hwa Bowden, Leonie Tickle & Kay Naumann (2021) The four pillars of tertiary student engagement and success: a holistic measurement approach, Studies in Higher Education, 46:6, 1207-1224, DOI: 10.1080/03075079.2019.1672647
Peabody Institute and their “Path to Funding” guide
Advancing Diversity in AI Education and Research Symposium - Stanford
Dr. Michelle Miller Substack - Teaching from the Same Side and the idea of “same-side pedagogy”
Theme Music: Pumped by RoccoW is licensed under an Attribution-NonCommercial License.
Transcript
We use a combination of computer-generated transcriptions and human editing. Please check with the recorded file before quoting anything. Please check with us if you have any questions!
[00:00:00] Introducer: Welcome everyone. It's been a great day, and we have a very fun way that we're going to be ending today.
So this is our final session. I appreciate everyone greatly for attending our inaugural Excellence in Online Teaching Symposium, and we're going to be ending our session with a live recorded podcast. We have Jason Johnston and John Nash; go ahead and take it away whenever you are ready.
[00:00:33] John Nash: Hi, I'm John Nash and I'm here with Jason Johnston
[00:00:36] Jason Johnston: Hey, John. Hey, everyone. And this is Online Learning in the Second Half, the online learning podcast.
[00:00:44] John Nash: Yeah, and we are doing this podcast to let you all in on a conversation we've been having and to let you be part of the conversation that we are having about online education.
Look, online learning has had its chance to be great and some of it is, but there's still quite a ways to go. What are we going to do to get to the next stage, Jason?
[00:01:05] Jason Johnston: That's a great question. How about we make a podcast and talk about it?
[00:01:10] John Nash: That sounds great. What do you want to talk about?
[00:01:13] Jason Johnston: Today I think it'd be great to continue our theme of how to humanize online learning in the second half and to do it with a number of our friends here.
So today we want to not only do a podcast, but do a session here at the Johns Hopkins Excellence in Online Teaching Symposium, the first ever. Is this right, Olysha? We're at the first ever?
[00:01:36] Olysha Magruder: That's correct. This is the inaugural symposium. So you're a part of the new wave.
[00:01:43] Jason Johnston: We're so glad to be here. Thank you for the invitation.
And this is exciting that we're here and we're doing a live session where we are recording. And we had the auspicious and difficult task of trying to bring a little summary to this day. It's been a good day, hasn't it, John?
[00:02:01] John Nash: Yeah, it's been amazing. We've been in every session that we could attend.
We split up and took some notes along the way about what the overarching themes were and where we see some opportunity, but we're so excited to see what you all think as well and what you took away.
[00:02:17] Jason Johnston: Yeah, so here's how we are planning to proceed in the next little bit here. Our idea, as we were looking at the day, is to try to give us some guidelines to talk about. We tried to pull a few quotes. We have a little bit of an outline that will guide us, but first we thought we should probably introduce ourselves.
John, you wanna go first?
[00:02:41] John Nash: Yeah, sure. I'm John Nash. I am an associate professor at the University of Kentucky in the Department of Educational Leadership Studies, where I'm also the director of graduate studies. We are an all-online department and graduate program offering the master's and the doctorate at the EdD and PhD level, and I'm also the director of the Laboratory on Design Thinking at the University of Kentucky, where we look at human-centered design and its application in organizations and leadership in schools.
[00:03:11] Jason Johnston: And I am Jason Johnston. I'm at the University of Tennessee, Knoxville. I'm the executive director of online learning and course production. So my big thing here is helping to stand up online programs, and I do it with a fabulous team of instructional designers, some of whom are here. That's not the only reason why I said that, but some of them are.
And media personnel who help to stand up online learning here at the University of Tennessee and do an amazing job of that. That's who we are. We also would like to keep in mind that this is a recorded session. We would like, as we go along, to talk to all of you and hear from you as we proceed.
Please feel free to unmute your mic if you have something to say or questions. And to quote Dr. Olysha Magruder, "I'm not sure what's going to happen in that one." That was her plug for our session today.
[00:04:13] Jason Johnston: We're not either, because part of this session is actually hearing from all of you, but we do have a few guiding ideas and guideposts that will help guide our discussion. John, you want to show our slides?
[00:04:29] Jason Johnston: And those who want to follow along at home can find these slides in the show notes.
[00:04:32] John Nash: And the link that you got in the chat should track with what we're doing here today. This document is made with Gamma.app, and so it's a presentation deck. It's also a living document. It's a webpage, and it's a handout. It's the new Shimmer, if you will, of media.
And if you get that, then I love you. So browse through it before and after the session. As we grow in our conversation this hour, some of that material may show up in here, and please reuse and remix, because we want you to do that. And so, yeah, we're not sure what's going to happen in this one, but I think it's going to go well.
And we want to start to talk about being human to each other. The focus of our podcast is to think about the second half of life for online learning. And we know it probably has much more life than we have in ourselves. But as we noted in the beginning, we think it's had its chance to be good, and we think that there's another chance here to be even better.
This whole day has really been about that. And so as we go forward, we want to talk about what we picked up on today and also really hear from what you picked up on. So Jason, do you want to say a little bit about where to find our podcast after this is done and people can listen to this?
[00:05:52] Jason Johnston: Yeah, onlinelearningpodcast.com. That URL will take you to our entire podcast. Not only is this session going to be edited and probably put out there, maybe January, John, but we just released on Monday, hot off the digital press, a conversation with Dr. Olysha Magruder.
And so you can go check it out and listen to that podcast. We had a great conversation. One of the reasons why we're here today is that connection. Please listen in, and let us know what you want to hear about. Like this session, we want this podcast to be a conversation, to be talking with all of you around the topics and subjects that you are interested in.
And without further ado, John and I were trying to think of some larger themes. We guessed at a number of them before this day began by looking at some of the session titles and by thinking about some of the ways in which we're thinking about humanizing online learning. So we have these six guideposts, if you would. I think I was thinking about guideposts because of my home here in Knoxville.
Pretty much every side of the driveway is a drop-off, and so there's a little turnaround where, if you're somebody like me that drives a really cool car like a long minivan, there's a fair bit of maneuvering to be done. I have to go into this turnaround and move forward. What I did when I first got this place was put in guideposts for myself, because I did not want to end up with a minivan in the ditch.
My own ditch, of my own making. And so I put in guideposts, especially at these kind of key spots, as I'm coming up over the top onto the driveway and as I'm going into this turnaround: some that were lit, other ones that were just like those reflectors, and others that actually are barriers that don't permit me to go over some of the spots.
And so today, if you will, walk with us through these six guideposts for humanizing online learning, some of which were drawn from today, some from our podcast this year and our own thinking. How we're going to proceed is that we'll talk about each guidepost, give maybe a little summary and a
couple of quotes that we found from today, and then open it up to you for any other things that you heard, maybe particularly from today. So maybe a little bit of a focus on today's sessions. My one request would be that we're now down to less than 10 minutes for each one of these guideposts, so just try to keep the comments fairly quick if you can as we get there. Shall we go on to guidepost number one, John?
[00:08:33] John Nash: Yeah, let's go to guidepost number one. And that would be this notion of being human to your students and yourself. And the two gems that I picked up on from this came from Flower Darby: this idea of sharing a little of yourself, and this idea that connection doesn't happen by accident. What did you think of these? And particularly, I think, really not just sharing a little of yourself, but sharing quite a bit of yourself, if you're comfortable, as a model for students to be able to do that back with you.
And that the connection doesn't happen by accident really feels like a thread that went through almost all of the sessions today. I remember I texted Jason in the middle of one of the sessions that the word intentionality is just coming up every time.
Everything must be intentional. Nothing really happens with hope or luck. And back to you all: what struck you as Flower was talking about these things and this idea of really being human to your students? Come off mic and think with us.
[00:09:40] Jason Johnston: And if you would, when you come off mic, would you say your name and where you are right now, like your institution? That would be perfect. And then, whatever it is you have to say.
[00:09:49] Jody: Okay, I'll come in, John. Joe DeBonis, Dublin City University, or DCU. Loved the way Flower shared pictures of her family.
It's something I would love to do, I just never thought it would be appropriate, but of course it's fine. So it's something I will do in future.
[00:10:11] John Nash: I agree, Jody. I had not been so apt to do that kind of sharing at that level. I try to bring a level of energy and enthusiasm for the topic, but had not thought to talk about how my wife is an important thought partner in everything I do, and an important critic, but I don't honor her the way I might.
And so I'm really going to think about doing that. And Austin is getting some love in the chat because I love this idea of that luck is the residue of design.
It's almost like this idea that luck favors the prepared mind. I don't know if you want to say more about that on mic, Austin, but I really love that comment. It's very tweet-worthy.
[00:10:49] Austin: Yeah, I appreciate that.
I'm Austin Tremblay and I work at Johns Hopkins University in the Center for Learning Design and Technology.
I just think it's one thing to think that results are purely contingent on luck, as if we have no agency in the matter. But to think that we can put plans in place and create our own results, rather than relying on just this passive act of being lucky,
I think that's a nice way to think about it.
[00:11:19] John Nash: Wonderful. And I just also want to give a shout out to Sarah Schunkweiler, if you want to say something, but this lovely idea of really going forward with an informal module that has intro videos with dogs in the background in real life.
Do you want to add anything to that?
[00:11:37] Sara: I'm Sarah Schunkweiler. I'm an instructional designer also in Olysha's group and I work with Austin, but I'm also a faculty member, so I record informal videos myself. So my engineering professors have started doing that also.
And so they have their kids walk in, they have the dog there. The dog is there during office hours, so why not be there during the informal videos as well? And students are highly amused by the dog trying to get out of the office behind the instructor.
[00:12:06] John Nash: That's great. Awesome. Jason, do you feel like you want to go on to number two?
[00:12:10] Jason Johnston: Sure. Yeah, those are great. Thank you. Thank you all for being brave enough to speak out on this live recording. Number two: encourage students to be human to one another. We can set the tone ourselves as teachers, as we are setting the pace and the culture and, as Austin was talking about, the design of the course. We can encourage students to be human to one another. Joe said, do you have a way in the course to meet other students, so you can help them and they can help you? He was stressing the social pillar of engagement. And then Flower Darby talked about adding emotional presence to the community of inquiry framework of social, cognitive, and teaching presence. If you were in that session, you remember the three kind of concentric circles that overlap, and in the middle you have a learning experience where there are typically social, cognitive, and teaching presences within an educational setting.
And Flower talked about how this emotional presence helps support the whole thing. And this is where it's not just encouraging student-to-student interaction ("help each other with your homework") or student-to-student assignments ("do this assignment together"), but actual support, this emotional support, where they can be human to one another, not just act like a human.
Any other thoughts on this one? What what other ways can we encourage students to be human to one another in our online classes?
[00:13:39] Mike Reese: I can jump in if it helps out. This is Mike Reese from the Johns Hopkins Center for Teaching Excellence and Innovation. I was in Joe's session, and this idea of social engagement came from a model that he was presenting on four pillars of student engagement, from an article by Bowden, Tickle, and Naumann in 2021.
And I think what was really helpful when he was discussing this was giving examples of how to engage in both synchronous and asynchronous environments and really stressing the importance of peer support, regardless of the modality. And for students to be able to support each other, that requires them to have some sort of human connection.
[00:14:24] Jason Johnston: That's great. That's a great resource talking about those four pillars. Very helpful. Thank you.
[00:14:30] John Nash: That was good, because those pillars, Mike, when he talked about the affective, emotional engagement: do you feel safe and welcome in class?
Do you have an opportunity to do your best? Do you have friends or teachers that recognize and praise you for doing good work, among other things? Those almost have to be preconditions for the part where you're saying, "Do I have a way to meet other students? Do I feel good about meeting other students?"
And yeah, I really like that.
Any other comments on encouraging students to be human to one another?
[00:15:03] Andrea C: If I could, I've got two things. I'm Andrea Srevec, and I am the Director of the Office of Faculty Development and Advancement at the South Dakota School of Mines and Technology. One is setting ground rules for your class. So just that basic: you're going to respect each other.
If we do any discussion boards, try to keep your comments on topic. But also, I taught an asynchronous class for the first time. And in engineering, they don't necessarily like to use the discussion boards. It's just not their thing; they're more, let me run some numbers and don't make me talk.
But I told them before the class even started, you're going to have to be able to upload videos in this class. And you're going to have to upload videos of yourself in this class. So I had them do introduction videos. And they had to comment on each other's introduction videos. And it was just nice in a class where they were never pretty much going to be in the same room together or online together, to at least have them know who else was there, so that when they got back, because it was a summer class, when they got back on campus, they would know who was in their department.
And it seemed to work pretty well. They were really quite good about responding and, oh, I play an instrument too, kind of thing.
[00:16:13] John Nash: Beautiful. Thank you so much.
[00:16:16] Jason Johnston: I was going to say, before you go on to the next one, and this kind of ties into another one of our guideposts: just thinking about the different contexts, right? We're all in different domains, in different contexts, teaching different kinds of classes, and what might look like a really good human thing to do in one kind of class may not be the same for another program or another class, and there may be different ways to approach that.
That's great. John, number three? Sure.
[00:16:44] John Nash: Number three is we should endeavor to create content that is human-centric. And we heard this across several of the topics today. Flower talked about how engagement precedes learning. This is an important notion to keep in mind. And not as many AI references today.
A lot of really just good ID stuff today. But Luke talked about how AI is a kickstarter. I know you were in that session, Jason. I didn't go to that one.
[00:17:13] Jason Johnston: Yeah. Can I just say that the idea behind that is that, rather than letting AI take front and center and removing some of the human-centricness of our classes, we use AI to help us be more human in our classes by our design, to be more thoughtful, and to help spur ideas that are more human.
I thought that was a great idea.
[00:17:36] John Nash: Yeah, that is a great idea. And it's a strategy I've been using in my own courses: using AI as this kickstarter for instructional design to create active learning environments and activities for my students, but the students never really interface with AI.
They just interface with the good learning that I've been able to create with the help of AI. And then Becky was talking about interface design impacting the learning experience, and so there's an aesthetic portion to this that really brings in learners. And then this idea of education happening in major and minor learning spaces where the interaction is taking place, from learner to teacher, from learner to learner. Really good examples there. But how does this strike you all? We're talking about engagement, but we're also talking about AI. We're talking about aesthetic design and the importance of interface design to get us to a point where the content is really human-centric.
Does this strike you all as worthy?
Great, please. Yeah, Caroline.
[00:18:38] Caroline: Yeah, it certainly does strike me as worthy.
My name is Caroline Egan and I am a program manager for the Center for Teaching Excellence and Innovation at Hopkins.
And I think it's going to be an evergreen topic with digitally delivered content, whether that's asynchronous or synchronous. And I think that having such great, concrete examples, like Flower Darby's "show me a photo of yourself," is just an excellent way of taking small steps toward encouraging that humanity in the digital interface.
[00:19:11] John Nash: Nice.
[00:19:12] Jason Johnston: Yeah, I think there were a lot of great examples today of that, like the Flower Darby example. One of the reasons why I thought that last quote was great, about education happening in major and minor learning spaces, is that it helped me rethink even the title that we put on this number three, which is creating content that is human-centric.
It could be creating learning spaces in general that are human-centric. It could be fill-in-the-blank that is human-centric, in terms of our interactions or the ways in which we structure our courses, our grading, or our assignments. I think what helped me here is that it got me out of thinking about online courses as content.
John and I talk about this all the time, but there's such a knee-jerk reaction to think about online courses as content. And I fully admit that as we were making these titles, our knee-jerk reaction was to talk about content that is human-centric. But I would like to offer that I think maybe we should move more into this idea of learning spaces. So, like the examples that were just given, it's about the interactions, about the relationships, about all of it being human-centered.
[00:20:29] John Nash: It seems like the more we talk about this topic, the more we realize that, and I love your idea about content, the online courses are not content. The online courses are experiences. And so in order to create a great experience, we have to thread in all of these other things with intentionality: the aesthetics, the engagement, authentically bringing our real selves, and then having really great active experiences on the learning side as the learner goes through the journey.
[00:21:01] Mike Reese: John and Jason, if I can jump in, this is Mike Reese again. You've got some great examples here. I heard them throughout all the sessions, but one of the best that I heard today was from our colleagues at the Peabody Conservatory. They're leading a course that is essentially preparing artists to go out and take their talents into the world, teaching them the business of being an artist.
And it's not just about making money. It's really to ensure that these artists know how they can be entrepreneurs, advocating for themselves so that their talents will be seen and heard by other people. And what was so exciting to me about this, and it really speaks to this third principle here, is that it's not just about simply creating an environment that is human-centric.
But it is a curriculum that has been designed to allow them to go out and connect with others and really allow their talents to be seen by others. And one of the great things about this program that they've put together is that the course for the Peabody students has been so successful that they have gone on to share an open education resource, a book, that anybody can access to learn these same lessons.
I just threw a link to it in the chat that any artist now can benefit from what they've developed at the Peabody Conservatory.
[00:22:27] John Nash: That's fantastic. What strikes me as you talk about what they're accomplishing there is that they're building agency within the learners to go out and tackle the world in real-life ways with the skills they pick up in the course. That's wonderful.
We've got that link and we'll put it underneath this topic here. So it's everybody can have that.
Great. So let's go to four, Jason.
[00:22:49] Jason Johnston: Yeah. And this one is treating humans as individuals. So rather than just thinking about humanity as a whole, we're becoming increasingly aware that there are a lot of different humans, and that there are ways we can respond to all of humanity in more individualistic ways within online learning, if we take the extra effort. One of the quotes that I found was talking about how learning styles aren't real, but we can use AI to guide us toward various learning preferences: thinking about adapting online learning to help with individual activities and to meet the diverse needs and interests of your students.
So some of this is really about adaptation as an instructor. We talk about this here, and here meaning the University of Tennessee: the idea that when you're talking to teachers about their online courses, you sometimes have to remind them that this course isn't for them, right?
It's for the individuals that they're there to serve, and figuring out ways that we can adapt it to serve even sub-pockets within their own courses. What are some ways, that you either heard today or that you can speak to from experience, that we can adapt our online learning to treat humans more as individuals?
[00:24:20] Olysha Magruder: This is Olysha again, Johns Hopkins University. I wanted to mention that Jodi from Dublin City University presented about the tool Flip, and I feel like this gives an opportunity for educators to have the students interact in a way that they prefer. So you can do a video, or you can do audio, you can comment with a video, you can do text. It gives them an opportunity.
And I know there's a lot of other tools that do that as well, but she demonstrated a really awesome way to do that I think that connects to this idea.
[00:24:54] Jason Johnston: Yeah, I love it. And it's almost a reverse, in some ways, of UDL's thinking about multiple means of representation: perhaps giving students multiple ways of responding, right?
And so if they feel more comfortable with this particular way or that way, in the moment it can respond to them depending on how they're tuned in and where their comfort level is. That's good.
[00:25:18] John Nash: I think about taking a page from the P-12 world. I came out of a meeting last week here in the Commonwealth of Kentucky where all the superintendents of public schools were meeting, talking about this idea of a portrait of a learner: an attempt to individualize instruction on core topics that really can't change (so math, social studies, history, what have you), but where the work the student does is predicated on their own personal interests, what they want to do and what they are interested in.
And so they're able to work cross-curricularly, if that's a term we can use, with the personal interests of the student woven into the curriculum, along with the outcomes that are necessary for them to be successful in school. So that's what I think about with treating humans as individuals: we're also learning about their personal interests, what they really want to do, what they like to do, and then how the content can be adapted so that they can apply those interests within the content.
[00:26:27] Jason Johnston: Yeah, that's good. And you and I, John, have talked about this concern about the industrialization of online learning, where it tends to mass-market to the larger swath. I think that is a nice response to that, thinking about more personalization. Yeah, that's good.
Other comments on that one?
[00:26:49] Mello: Hi. Hello. Can you hear me? Yes. Hi. I'm Mello. I'm coming also from the Center for Teaching Excellence and Innovation at Johns Hopkins. And I really like number four. It reflects one of the sessions that I went to, and I think I see some folks: it was the one about equity. I see Sarah Schunkweiler was there, Chris Sett was there. But basically they were talking about decolonial ways of engaging students and students with disabilities. And basically (oh, and Rolando, thank you) the more we engage and allow people to embrace their identities, the more we are helping them live authentically. I feel like this is hitting at that core: when you give students that space to be authentic and acknowledge who they are and what their struggles are and what their joys are, then they will feel more human and they will share more of themselves.
[00:28:02] Jason Johnston: Yeah, I love it. That was a great session, and I agree. And I would say treating humans as individuals is maybe a first step. We've got a couple of other points that maybe point to that a little bit, or encourage us a little bit more to be actively creating spaces for inclusivity.
And maybe, John, you want to go on to number five, since that's a good lead-in to it?
[00:28:25] John Nash: Yeah, absolutely. Which is this notion of making space for all humans. And we picked up on four different things that struck us as supporting this idea. And of course, back to Flower Darby and thinking about this idea that excellence in online teaching is an equity imperative.
And so when we think about people being their authentic selves and coming in with their own identity, then we can really create this culture of inclusion and really advocate for, support, and empower faculty, as Sarah was noting. Nice thoughts around UDL: how do we make a meal for a lot of different people?
You've got this sort of buffet, and so you have to be thoughtful and intentional again. This idea of intentionality is always coming up. And then this notion here from Rolando, which is being focused on teaching accessibility and supporting efforts to teach accessibly. Yeah, really nice.
What do you all take from this idea of making space for all humans? I think Mello really teed that up for us here. What else strikes you as we try to make space for all humans?
[00:29:39] Jason Johnston: Melissa, go ahead.
[00:29:40] Mel R.: My name is Mel Rizzuto. I'm an instructional designer in the Center for Learning Design and Technology in the Whiting School at Hopkins. And I love this idea of making space for all humans. In a session that I facilitated earlier with a few of my former colleagues, we talked about how we developed a tool to assist faculty with evaluating their online teaching practices.
And we were very careful to include a standard about immediacy and inclusion in that tool, because we really wanted faculty to reflect on their own practices and their strategies for fostering belonging for students, and also on modeling communication and positive messaging for students.
And so I think a lot of times we get caught up in just the design of the course itself. And we, I don't want to say we fail, but we neglect the professional development needed for faculty in the actual delivery of their instruction. And so I think we have to be mindful of that.
[00:30:47] John Nash: Really nice.
Mello, did you want to add something?
Oh, no,
[00:30:50] Mello: I totally agree. We're talking about students here, but faculty are also humans. Staff are also humans. We're thinking about training the students, but we should also train ourselves so that we can better train others.
[00:31:07] John Nash: Yeah, we have a long runway in front of us as instructional designers, as supporters of those who want to do good instructional design.
There's a lot of faculty who want to do well, but don't have the tools. And I think that they should be considered part of our human set that we want to bring about here.
[00:31:25] Kim V: I just want to add in. This is Kim Vars. I'm an instructional designer at the Center for Learning Design and Technology at Johns Hopkins. In the session just before this one, which I moderated, Chris Ryder and Pankaj were perfectly paired, in a way. Pankaj talked about, and even showed, this perfect image of those really uncomfortable chairs that you remember sitting in during your childhood in school, and talked about how you're not just putting content up on the screen to get the content across,
as if you were to hand off a textbook to someone. Instead, Chris was talking about creating that space that is comfortable for students, for all students, to feel as though they have a place there where they can communicate. And I know as an instructional designer, I often think most about getting the space to exist and not necessarily about ensuring that space truly is comfortable for everyone, which is, like everyone has been saying today, truly not an accident.
It has to be intentionally designed in a way that allows all folks, like Dr. Hobson was saying, anybody who has any kind of learning or eating preference, to be offered this buffet that they can pick and choose from and craft their own perfect meal in their comfortable course. A lot of work to be done, for sure.
[00:33:02] John Nash: Yeah, definitely. Fantastic.
[00:33:05] Jason Johnston: It made me think as well of what Sarah Schunkweiler, who is here, talked about: the steep steps, both perceived and actual barriers, that she spoke about in her session. Alongside those was the visual icon of the classic education building, with the big steps going up and the pillars in between, and how, whether students can manage those steps or not, when they perceive them, it becomes a visual icon for spaces that maybe are not welcoming, or where they're not welcomed enough to actually go in. And it made me think of probably the first story we heard of the day, which was Flower Darby talking about her Pilates class and finding a person who was lost in the hallway, because nobody was in the room, the lights weren't on, and it didn't feel like a welcoming place. So they didn't think this was where they belonged. I think all of this fits together for creating good space for all humans.
[00:34:10] John Nash: Good. Should we do number six then, Jason? Sure.
[00:34:14] Jason Johnston: Yeah, sounds good.
John and I were quickly Brainstorming and wrapping up right before this session, talking about today's wonderful symposium, and we had come up with a number of these before, but we wanted to create a space that was a bit of a wild card. What, what doesn't fit? And this was actually one that we just arrived on an hour ago, and we've talked about before, but it just fit well, in addition to creating inclusive space and treating individuals humans as individuals.
Number six: recognizing that not all humans are present. Whatever space you are in, wherever you're making decisions, whether they're design decisions or teaching decisions, not all humans are going to be represented there, and it's important to be thinking beyond those spaces and what we see in front of us.
And John and I talk about this all the time. We're two, admittedly, middle-aged, educated white guys, right? We have a very similar culture. We can talk about a lot of the same things, but one of the things we strive to do within our podcast is bring other voices in, because we recognize that we can't understand everything, we don't see all the corners, and we need to be able to see outside of ourselves.
So this was a great session, really a mini session, talking about neocolonialism. First, earlier in the day, Luke Hobson talked about how it's not just DEI, it's also about JB, justice and belonging. Moving beyond that, Christelle Daceus was talking about the idea of the Northern Hemisphere versus the Southern Hemisphere and how too often we are essentially using up the resources from the Southern Hemisphere, speaking as somebody who is originally from Haiti.
And so she said, what is digital neocolonialism? Online education is another vessel of imperialist practice to gather human and biological resources through technological means.
And she also went on to say that the resources are digital human data, and that we haven't taken the time to realize the impact of data that is most likely being sold back as a product.
Anyways, I thought these were very heavy statements, impactful for me, because they were a different voice than maybe we were hearing earlier in the day. And a voice that takes things a step further, as represented in the title, beyond just thinking about the typical groups in which perhaps we're making these decisions.
[00:36:49] John Nash: Yeah. Part of this reminds me of the conversation that's going on in parallel around the fight to reclaim AI and other things from big tech's control. You look at the story of Timnit Gebru, and the work to think about how content moderation is going on in other corners, and how this is really affecting the mental health of moderators, all in the name of trying to keep the machine going, as it were. So how do we think about what we do day by day as online instructors, online designers of experiences, and keep in mind the recognition that the way it's been presented to us may not be the best way?
I don't know if I put that so well, but I'm also appreciating Sarah Schunkweiler's comment here that students might be forced to use illegal means to access online education, and that digital human data can be dangerous. How does this all strike you as we think about this last, sixth point?
[00:37:51] Jason Johnston: Yeah, thoughts on this. How does that strike you? What other ways can we be more mindful of this?
[00:37:56] Austin: This is Austin Tremblay from Johns Hopkins University again, and I just thought this was fantastic to include, because if you are establishing guideposts but your vantage point doesn't include, you know, the totality of the space you're designing around, that's a dangerous way to design guideposts.
So I think that this informs that idea of the design of the course itself.
[00:38:24] Jason Johnston: Yeah, you're getting a lot of head nods. We realize head nods don't really translate into podcasting, but yes. So we're giving you a, we got some amens here, Austin, on that one. Thank you for that.
[00:38:36] Mello: Can I just, I put something in the chat, but I feel like everything that we've all been talking about here since the beginning of the hour is really about considering DEI and justice and belonging in terms of AI education and maybe research. And I feel like this is all really relevant to a symposium that I'm a part of, that I'm helping organize.
And so I put the link in the chat, but basically it's a symposium with the AAI, and it's called Advancing Diversity in AI Education and Research. It's at Stanford in March, and I invite you all to submit something. It's due in early January, and even if you don't submit something and just want to check it out, you can also attend for fun and education, obviously.
[00:39:31] Jason Johnston: That sounds great. Thank you for that. And we'll make sure that we get these links into our show notes. So if anybody's listening to the show and would like to know about that symposium or the submission, we'll put those into the show notes, as well as these slides with all the quotes and all the people, so that everybody gets referenced that way.
All those will be in the show notes. Thank you for that.
We had a final quote that I wanted to get right. Christelle had said something about humanizing, and I didn't get the full quote, so I wanted to get it right. Olysha connected with her, and I was able to get this quote so that we get it right. She said as part of her session,
"Humanizing happens when the instructor takes time to talk about how they got to the work and their personal influences. The intentional sharing creates a culture of genuine interaction. This empowers students to show up as their authentic selves, share their own narratives, and bring their funds of knowledge to the classroom to make learning more relevant and meaningful." And that's by Christelle Daceus.
And I just thought this was a great quote to land on, because it seemed to wrap up so many of the themes from today, the themes that we were finding, and the thoughts around humanizing online learning and what that looks like with these different guideposts. It just wraps so much in there.
John, other thoughts on that?
[00:41:04] John Nash: I regret that I wasn't in that session, and you're right, this passage really captures what we tried to think about today throughout all the sessions from beginning to end: this idea of intentional sharing, genuine interaction, and then this empowerment to bring our authentic selves to the table, to the conversation.
I think it's wonderful.
[00:41:29] Mello: I was in that session, and yeah, this also really resonated with me. When I teach a class, in the very first session on the very first day, I always talk about funds of knowledge, especially when I'm with students who maybe have never heard of that before, or have never taken a class on the topic that I'm teaching. But I always tell them, you're not a blank slate. You're coming here with your funds of knowledge from basically living your life, and you're bringing something to the table.
And I think that's really powerful, especially if you're suffering from some kind of imposter syndrome, right? Just knowing that you're bringing something to the table is really powerful. And I think Mike and Caroline took my class where I talked about funds of knowledge.
I don't know if they want to say something.
[00:42:22] Caroline: I can absolutely, 100 percent, reiterate that. It was very helpful to me, understanding that I brought funds of knowledge to a subject matter that I thought I knew really nothing about, which was educational research; my own academic training is in a different field. And Mello said, oh no, Caroline, you have funds of knowledge.
And it turns out that I did. So it's a great way of anticipating people's insecurities and reassuring them that they should be in the room with you.
[00:42:54] Mike Reese: Yeah, I'll just add that Mello, when she first pitched what we would typically call a workshop, described it as an experience, and it truly was that, because of the way she engaged us and set up the activities throughout the event to really allow all of us to learn from each other based on these different funds of knowledge that we have.
[00:43:21] Sarah: Hello, I had a comment that went along with that. This is Sarah Schunkweiler again from Johns Hopkins. In Christelle's, Rolando's, and my presentation, we were talking about accessibility. As an instructional designer, I go to the first office hours for a lot of my engineering courses so we can talk about the accessibility features in the course.
And we can talk about things like student disability services and student advocacy and speaking up for yourself. So we're empowering students to show up in class and ask for what they need. And I had a student reach out to me earlier this year, this fall, who was an engineering student from another country.
She was new to the U.S. and ran into some housing insecurity issues, and because I had gone to those office hours and we had normalized the conversation about asking for what you need, she reached out to me directly, and the faculty and I and our support services worked with her. And she told me later that in her country, where she came from, it wouldn't be normal.
It would be culturally unusual to reach out for that support. So she really appreciated us making that available and opening up that conversation for her. It supported her as a student, and it's supporting her as a working professional in the field as well.
[00:44:43] Jason Johnston: That's great. This has been an amazing conversation. Thank you all for jumping in. John, this has been a great day of learning from all these different presenters, as well as being able to wrap it up with these folks here. Thanks for jumping in. Thanks for being brave, jumping into the arena and being willing to speak up even though it's being recorded. We promise to hold all of you with respect, and as we put this out, just know that it's with our great thanks that you have jumped into this conversation with us.
John, anything else?
[00:45:20] John Nash: Yes. I've been struggling to think about how to put a point on all of this, and I'm reminded that Flower Darby invoked a quote from Michelle Miller today, and I thought of another one from Michelle Miller.
If I think about the entire day and everything we've talked about, it goes to something that Dr. Miller said in her Substack, and we can put a link to it in the notes, about this idea of "same-side pedagogy." So much of what we're trying to do is undo an us-versus-them kind of approach to learning. What she was saying in her article was that if we come to a same-side pedagogy, where we're co-designing with learners and we're seeing each other and students as equal partners in the same goal, which is to reach this learning destination, then things will really come together.
And I think everything today was an intentional piece in this notion of us all being on the same side.
Do you mind if I close it out? Please do.
[00:46:22] Olysha Magruder: Just to go back to the quote you quoted me on earlier: now I know what has happened, and it was all good. Very good. I want to thank everyone for participating in today's event.
As we mentioned at the top of this hour, this is our inaugural Excellence in Online Teaching Symposium. We plan to have this every year going forward. We will be sending out a link for you all to give us feedback on this event so we can take that into consideration as we plan for next year.
And yeah, I'm just really happy that we all came together today, and it's pretty amazing, this final session you put together, because you really were able to connect all of the dots, which I feel like we don't get to do that much when we come together for things like this. I appreciate you all, and I appreciate everyone who participated and attended.
Thank you. Thank you all.
[00:47:14] John Nash: Thank you all. Goodbye, everyone.
Monday Jan 08, 2024
In this episode, John and Jason have a “year in review” conversation with their podcast superfriends about why they podcast, the impact of artificial intelligence on education, the importance of human interaction in learning, and their collective efforts in forming a community of education podcasters. See complete notes and transcripts at www.onlinelearningpodcast.com
Join Our LinkedIn Group - *Online Learning Podcast (Also feel free to connect with John and Jason at LinkedIn too)*
Links and Resources:
Amanda Bickerstaff AI In Education Year 1 Timeline (on LinkedIn)
Course Stories, Season 4, Episode 2: The AI Whisperer: Faculty and Students on ChatGPT Dialogues
Planet Money Podcast: Can ChatGPT write a podcast episode? Can AI take our jobs?
Book Recommendation: A More Beautiful Question: The Power of Inquiry to Spark Breakthrough Ideas
Request to join the Network of Education Podcasters on LinkedIn (active education podcasters only please!)
ASU Academic Dishonesty Risk Reduction Guide
ASU Online Eventbrite Webinars
Here’s a link to our original Superfriends episode:
https://www.onlinelearningpodcast.com/e/ep-10-podcast-super-friends-crossover-episode-at-olc-innovate-23/
Our Podcast Superfriends:
Josh Reppun
What School Could be
https://whatschoolcouldbe.org/
Bio: Former chef, hotel manager and history teacher, Josh Reppun is the founder of Plexus Education, LLC, dba as Most Likely to Succeed in Hawai’i, a “movement” founded by extraordinary people dedicated to developing global public, private and charter school conversations around Ted Dintersmith’s film, Most Likely to Succeed and his book, What School Could Be. Josh is also the founder of Josh Reppun Productions. He is the host of the What School Could Be Podcast and the producer of two films: Ka Helena Aʻo: The Learning Walk and The Innovation Playlist, both about creative, imaginative and innovative educators and education leaders. Josh’s podcast, edited by the talented Evan Kurohara, with music by Michael Sloan, has now reached nearly 80,000 downloads in over 100 countries.
Course Stories (from EdPlus at ASU)
https://teachonline.asu.edu/podcast/course-stories/
Mary Loder
Mary Loder is an Online Learning Manager at EdPlus, supporting Faculty professional development and training along with managing special projects in a variety of disciplines. She is also co-creator and co-host of Course Stories, a podcast where an array of course design stories are told alongside other designers and faculty from Arizona State University.
Ricardo Leon
Ricardo Leon is a Media Developer Sr for EdPlus and is a co-creator and co-host of Course Stories. He has developed a number of other podcasts and various other forms of instructional media.
Tom Pantazes
ODLI On Air
Tom Pantazes, Ed.D. is an Instructional Designer with the Teaching & Learning Center at West Chester University who loves helping instructors integrate technology and robust learning pedagogy. His research interests include digital instructional video, extended reality, content interactivity, and simulations. If he is not cheering on Philly sports teams, camping or building Legos, you can catch him as a cohost of the ODLI on Air podcast.
Specific Episodes:
Generative AI in teaching
Ram Poll gauging student opinions
Lee Skallerup Bessette on LinkedIn
All the Things ADHD Podcast
https://allthethingsadhd.com/
Theme Music: Pumped by RoccoW is licensed under an Attribution-NonCommercial License.
Transcript
We use a combination of computer-generated transcriptions and human editing. Please check with the recorded file before quoting anything. Please check with us if you have any questions!
EP 22 - Podcast Super Friends II
Intro
[00:00:00] Jason Johnston: Questions? Anyone?
[00:00:02] John Nash: They're podcasters. They don't talk.
[00:00:06] Ricardo Leon: We listen.
[00:00:07] Mary Loder: That's right, intently.
[00:00:09] Jason Johnston: That's right. It's going to be all questions, actually. The whole podcast is people asking each other questions.
Start of Episode
[00:00:15] John Nash: I'm John Nash here with Jason Johnston.
[00:00:18] Jason Johnston: Hey, John. Hey, everyone. And when I say everyone, I mean everyone that I'm looking at as well. This is Online Learning in the Second Half, the online learning podcast.
[00:00:26] John Nash: Yeah, we're doing this podcast to let you all in on a conversation we've been having for the last couple of years about online education. Look, online learning has had its chance to be great, and some of it is, but there's still a lot that isn't. So how are we going to get to the next stage, Jason?
[00:00:42] Jason Johnston: That is a great question. How about we do a podcast and talk about it?
[00:00:46] John Nash: That sounds perfect. How about we do a podcast and talk with a bunch of people about it?
[00:00:50] Jason Johnston: That sounds amazing. We're so excited today to have our next Super Friends episode, Podcast Super Friends II, with a bunch of our friends. So let's get into it and meet some of our friends. How does that sound?
[00:01:04] John Nash: Yeah, let's do it.
[00:01:06] Jason Johnston: All right. Let's have each one of you introduce yourself and the podcast that you represent, and maybe just a little something about where you're located, your podcast, and what you currently do. Starting with Josh.
[00:01:21] Josh Reppun: Good morning, everybody. It's a little after 7 a.m. in Honolulu, where we are experiencing torrential rains here at the end of November, the beginning of December. My name is Josh Reppun, and I'm the host of the What School Could Be podcast. And it's just an absolute blast to be on this episode today and to be with other podcasters as part of this conversation.
So glad to be here
[00:01:44] Jason Johnston: Thank you. Lee?
[00:01:47] Lee Skallerup Bessette: Hey, I am Lee Skallerup Bessette. I'm coming at you from just outside of D.C. I work at Georgetown University, where I'm the Assistant Director for Digital Learning at our Center for New Designs in Learning and Scholarship, also known as CNDLS. And I have a little podcast with a colleague of mine, Amy Morrison, up in Canada, called "All the Things ADHD," where we talk about neurodivergence generally, but since we're both in higher education, more specifically in higher education.
[00:02:20] Jason Johnston: Amazing.
[00:02:22] Tom Pantazes: Hi everybody, I'm Tom Pantazes. Really excited to be here on the sequel of Super Friends, and I am one of four co-hosts of the "ODLI On Air" podcast that runs out of the West Chester University Teaching and Learning Center.
[00:02:36] Jason Johnston: Amazing. Ricardo.
[00:02:38] Ricardo Leon: I am Ricardo Leon. I am one of the hosts and producers of the "Course Stories" podcast, which is produced through EdPlus at ASU, Arizona State University in Tempe, Arizona, where I am right now. It's a little drizzly, but it looks like the sun's coming out, so we're doing good. In addition to the podcast, I'm also part of a studio that runs quite a few things, including something coming out in January of 2024 called "Space for Humans," a YouTube program about how we design futures in space that are accessible and inclusive.
[00:03:10] Jason Johnston: Amazing. And Mary.
[00:03:12] Mary Loder: And I'm Mary Loder, and I am with Ricardo on "Course Stories." We created this about two years ago. Is that right, Ricardo? I think so.
[00:03:19] Ricardo Leon: I don't know. I have no idea.
[00:03:21] Mary Loder: That is a weird concept. But yeah, we're excited to be here. I'm the manager of professional development and training for Arizona State University's department called EdPlus, on the team that Ricardo and I are on, called Instructional Design and New Media.
So there's three layers to understanding where we are at our very large university, but we're really excited to have been invited back. Thanks guys.
[00:03:41] John Nash: I get to say it: "And you guys work together."
[00:03:43] Mary Loder: Oh my gosh, you said that perfect.
[00:03:44] Ricardo Leon: yeah,
[00:03:46] Jason Johnston: That's good. That's great. And John, I guess maybe we should introduce ourselves in case this is the first podcast that people are listening to. I'm Jason Johnston. I'm the executive director of online learning and course production at the University of Tennessee, Knoxville. And this is our podcast, the online learning podcast, "Online Learning in the Second Half." John?
[00:04:06] John Nash: Yeah, I'm John Nash. I'm an associate professor of educational leadership studies at the University of Kentucky and the director of graduate studies in that department, where 95 percent of our instruction is online. I'm also the director of the Laboratory on Design Thinking at the University of Kentucky.
Yeah, this is fun.
[00:04:26] Jason Johnston: This is fun. I love the fact that we're spread all over the place and coming from different institutions. I'm just really excited about this conversation. Starting off, I'm curious: other than the fame and the fortune and the notoriety of doing podcasts.
We all know, we all share in that. We all understand how all that works. But other than those aspects, why did you either start the podcast, or why do you continue the podcast? What is your why in this situation? Just a few sentences for each of you.
[00:04:59] Lee Skallerup Bessette: I'll go first. My co-host Amy and I, there wasn't a podcast out there like what we were talking about, particularly with our context of two middle-aged women in academia who got late-in-life neurodivergence diagnoses. And we thought, surely we can't be the only ones, but even if we are, it gives us an excuse to talk to each other for an hour once every other week or month, depending on when we can get our act together.
And we keep doing it because it has resonated with so many people. And it is not just about our experiences, but our experiences as educators, how our diagnoses have shaped and reshaped our pedagogies and how we're thinking about these things. And just to help people feel less alone, less isolated, less weird in terms of the experiences or the struggles that they're having, even if they're neurotypical and dealing with neurodivergent students or peers and those kinds of things.
So the reception has been so positive and gracious, and that, that's really what keeps us going even when our lives are turned upside down and we have trouble finding an hour once a month to get together, let alone once a week.
[00:06:10] Jason Johnston: That's great.
[00:06:12] Ricardo Leon: This is flashing me back to the one time that I definitely lied in my interview when I came to EdPlus. They make thousands of videos a year, and the interviewer said, it's kind of a factory kind of a process; what are you going to do about getting bored? And I said, oh, I'll find something.
So I guess I didn't lie, because I did find something. And there would have been some iteration of this, because I just can't help myself. I think podcasts are great. And I think it's a wonderful way to do that knowledge share, that, what do they call it? When you go out and you share that knowledge as a... Mary, help me out. What do they encourage us to do when we go to conferences?
[00:06:50] Mary Loder: I don't know the term for it. I know
[00:06:52] Ricardo Leon: There's a term for it. Yeah, Justin, our boss, has something for it. But, just that we're making a name for our institutions by the work that we're doing.
And there's an easy way to associate us with those things.
[00:07:03] Mary Loder: And I would say Ricardo's like the king of podcasting. I've literally said that on our podcast, after correcting myself for calling somebody else the king of podcasting. But Ricardo's literally the king of producing podcasts. He's so good at it. And so we're really lucky that he, one, has the capacity to fit this in where he does.
Because we have the same thing, Lee: where are we going to do this? We really want to do it, but when and where are we going to do it in our existing work? But the
[00:07:26] Ricardo Leon: Thought leadership. Thought leadership.
[00:07:28] Mary Loder: Yeah, thought leadership! Good callback. Yes. Part of the reason we started this podcast was I was already having conversations with faculty on the things they were doing that were working.
And it was like, okay, it's great that I'm having these conversations and I can share with others when I meet with them, but what's an intentional way that we can package this to have a larger reach and impact? So a lot of the reason "Course Stories" exists is because there are lots of faculty doing amazing things.
And a lot of instructional designers doing amazing things alongside them. Being able to package that in a way that's funny, because again, Ricardo is really great at creating a podcast that's entertaining, but also in a way that's actually meaningful to the work that we do, was the reason that "Course Stories" continues and why we keep trying to push it through.
And we're lucky now that we have a producer, Liz Lee, who's amazing and helps us do all the things that became heavily task-weighted.
[00:08:23] Jason Johnston: That's great. I believe deeply in thought leadership. I believe I'm part of thought leadership. However, and I'm sorry if this offends anybody, if the second thing in your LinkedIn profile after your name is that you're a thought leader, I may or may not accept that invite. I'm just putting it out there.
I have a little bit of a thing about that, but I think this is what podcasting is about: thought leadership. In a sense, that's what John and I were certainly about when we wanted to do this. We didn't feel like doing another paper together, but we wanted to quickly be able to disseminate not only answers but also questions out there into the real world, as we talk about things that are happening in the online space.
[00:09:04] John Nash: I'm a failed solo podcaster, so I realized I needed a partner to work with. And that's where this came from, because we each bring different strengths to the microphone, to the back end, to the production, strengths that are complementary and don't really overlap too much. And then, isn't it fair to say that you can't call yourself a thought leader? People have to decide you're a thought leader.
[00:09:27] Ricardo Leon: You can perform thought leadership.
[00:09:30] Jason Johnston: Yes. Yes.
[00:09:31] John Nash: Say that one more time, Ricardo. I'm sorry.
[00:09:33] Ricardo Leon: Oh, I'm sorry. You can perform thought leadership.
[00:09:36] Jason Johnston: Just don't put it on your t-shirt.
[00:09:39] John Nash: I'm with thought leader.
[00:09:42] Ricardo Leon: Thought boss.
[00:09:43] Tom Pantazes: So for me, one of the reasons why I keep doing this, because we're in year two now at this point, is the joy that comes with being in on all the parts of the podcast. We started for the same reasons that Mary and Ricardo talked about, in terms of trying to tell stories about the folks that work here at West Chester and the work that they're doing that's cool and creative and different.
But every time we reach out to somebody to say, hey, do you want to be on the podcast? and they're just tickled to even be asked, there's joy in that. And then you have this hour-long opportunity to just sit and listen to them talk about something that they're passionate about, or that they like to do, or that they're an expert in.
And there's joy in getting to hear that story. And then there's the producing side as well, the editing side, of trying to take that gift they gave you and turn it into a less-than-30-minute chunk of time to put out into the world. There's joy in that part. And then the last part, too: when it goes into the world and Derek Bruff likes and shares your social media posts about it, there's joy when you see that kind of thing going on.
Thank you, Derek. You can keep doing that. Every part of it for me brings joy.
[00:10:49] Josh Reppun: That's awesome, Tom. I agree. It is about the joy. It doesn't quite start out that way; it's very daunting in the very beginning. But for me, starting the What School Could Be podcast: I actually work part time at Apple, which is where I get my health benefits from, and my podcast was launched in the back of the Apple store three years ago.
Right when we were closing the store, a tech geek here in Hawaii, Ryan Ozawa, was sitting in the back messing with his phone, and he and I started a conversation about technology and immediately discovered that we both wanted to start a podcast. I had the mission and vision. He had the technical expertise.
And we just went to a whiteboard space here in Honolulu, whiteboarded the thing out, and launched it. So for me, very briefly: Ted Dintersmith produced the film Most Likely to Succeed in 2015. Then he went on his 50-state tour, which ended in Hawaii, and I was privileged to curate his visit here in May of 2016. Then he wrote his book, What School Could Be, and it was a real joy, Tom, to have a chapter at the back end of that book about his time in Hawaii.
It's going to be the only time in my life that I'm listed in the index of a book. And so the podcast was born because I wanted to figure out a way to make that chapter longer and longer, because I knew that Ted wasn't going to write another version of the book. And so now, about to do my 117th interview, it's just an absolute joy to do this work, and I'm very privileged.
I sit in a privileged position because I'm underwritten by Ted, and that means I can actually spend a tremendous amount of time researching my guests. And I know we're going to talk about this, Jason, a little bit later, the research process and all that, but what a joy to do a two-week deep dive into somebody's life and education, and then to be able to craft that story and to have an actual professional editor do the work.
Evan Kurohara, he's amazing, a super creative guy and audio engineer. And then you get this episode, and it's amazing. People actually download the darn thing and listen to it. Go figure, right? So yeah, very much about joy, Tom, very much.
[00:13:06] John Nash: Josh was asking: did I see Amanda Bickerstaff's timeline that she put on LinkedIn the other day? Amanda Bickerstaff, if you don't know her, is with AI for Education. She founded this group that does a lot of good work around thinking about applications of AI, mostly in the P-12 space.
But she put up a great year-in-ChatGPT graphic, and I keep reminding folks that we're now 53 weeks in, which is wild. And so it would be a good time to ask, as we think about the year in review: what have been the impacts of AI from your perspective, and what do you think is going to happen next?
Jason and I were talking earlier, asking, has anything changed? And I surprised myself by saying no, I don't think much has changed, even though a ton has changed. But we'd love to hear what you all are thinking, particularly from the perch of your podcast and your audience.
Lee, do you have thoughts on this?
[00:14:02] Lee Skallerup Bessette: In my job it's that same sort of thing: it's changed everything and also nothing, in terms of just being able to say good pedagogy is good pedagogy. But I've been thinking about how it could be an assistive technology, in particular thinking about it through the lens of disability and neurodivergence, in terms of what students, and even faculty and staff, struggle with, with ADHD, with autism, and thinking through those possible applications for it.
What's it good at? What's it not good at? How can it be used as an assistive technology? And thinking through all the ways that we throw the proverbial baby out with the bathwater, a lot of times, when it comes to new technological innovations, for better or worse. We say it's horrific in all of these kinds of ways, so we're just going to get rid of it, even though there is a definite benefit to certain subpopulations. So there's a balancing act: the horrific environmental destruction involved, but also how it could really help someone with ADHD get started on a paper, because that's notoriously something that people with ADHD have difficulty with. But is it worth all of that? Really taking a step back and being able to think through those things, and thinking about specific populations and how it can have an impact.
[00:15:28] John Nash: Yeah, I like that a lot. And this notion of taking a step back. Of course, we're at the end of a calendar year now, so everybody's doing a kind of year in review, taking a step back. But I feel like even in the middle of this calendar year, say in the summer, in July, educators, people like ourselves, were already asking, how do we take a step back? How is this different? How is this the same? Josh, that's a theme you've been talking about the last little bit, and I know we're going to talk more about it next week, but this idea of taking a step back, what do you feel about that?
[00:16:02] Josh Reppun: Yeah, Lee, I'm struck by, well, first of all, I'm deliberately choosing not to get into the weeds around AI with the folks that I'm interviewing, educators and education leaders, because it feels almost a little bit too early to do something like that. But I am conscious of our position as podcasters, as producers and hosts, that we have an opportunity, as John described, to lift ourselves up to that kind of hot-air-balloon level and be able to comment on what's happening.
And John, you and I have had a conversation at a different time about how, when EdTech emerged for all of us in the mid-2000s, everybody just went gaga over the devices. I remember I went nuts over the iPad. I thought it was the second coming. And then slowly but surely the whole EdTech world righted the ship and went back to the pedagogy again.
And it feels to me like AI is very similar. We're all going bonkers about the bots, and indeed even the individualized bots that can do things in the field of neurodivergence. And this is what Amanda is doing at AI for Education: she's having these very specific conversations around AI and special ed, AI and this, that, and the other.
So, slowly over time, I think we're going to go back to just thinking: what is the most engaging teaching and learning? What is the most learner-centered, the most student-driven and real-world? And then we're going to look at the tools that are coming out of AI and say, how can we use them? That's what I'm looking forward to in 2024 with my guests, to have those conversations that are both at the meta level and then in the weeds about how the tools are actually being used.
Yeah, it's going to be an interesting year coming up for sure.
[00:17:54] John Nash: For sure. And I like what you said there, and you alluded to it as well, Lee: this idea of getting back to first principles around pedagogy. I certainly noticed when the pandemic forced a lot of our colleagues to move their courses online, it laid bare a myriad of instructional design holes in their approaches.
And I think, to your point, Josh, it was an awakening: good instructional design, it turns out, is just good instructional design. Back in the early 2000s, I was hanging around people who were concerned about these horse-race studies about whether online learning was better than in-person learning.
And I think that's been settled. It's just all about good instructional design. But that sort of takes us to the online learning space. Mary, what are you thinking as we come 52 weeks into our friends Chad and Claude and Bard? How are you feeling?
[00:18:48] Mary Loder: Probably overly confident, to be honest. So, like you said, it exposed what was already there, right? The fear laid within the framework of things that already really weren't working well, and it was probably a really hard shift for a lot of people. So what we did at Arizona State University this summer is create this course on teaching with generative AI, really intentionally learning how to use it, because that's the first step, and then getting curious about what that means for your classroom and framing it around your learning objectives.
And then how can you leverage it in interesting ways that not only help you be intentional in the inclusion, but help create a space for your students to be literate in the technology, right? Because that's probably a primary responsibility that the fear's not gonna help us with.
So jumping in is a good thing, and we've seen so many people jump in; we just celebrated over a thousand faculty registering for that course. So that's really good. We went from a place where people were intentionally avoiding generative AI last year to intentionally seeking out opportunities to improve their experiences and help improve their students' experiences. So I'm feeling very confident, because we have such good energy around it now in just one year. And it has been a journey for a lot of people, right? But some of them were already really excited too, because we're very lucky to work with some extremely innovative individuals. Like this summer, Andrew Maynard had a course where he taught students how to use ChatGPT, which is great, because if you're not taught how to use it, you might think it's not a great tool, but if you learn how to write a prompt properly, you've increased your efficiency in so many places.
Ricardo, I don't know if you know this, but I'm going to try to figure out our podcasting timestamp issues, which so many of us have, by feeding all of our transcripts into ChatGPT and then asking it to do some things for me. I think there are some major efficiencies that can happen when you know how to write a proper prompt.
And buy all the additional Plus options, specifically in ChatGPT, with being able to feed in websites or feed in PDFs or whatever you need. And to what Lee was saying: being able to reframe and redefine education, as a person who doesn't get it because of how it's being worded, through a system like that, through a conversation.
What an amazing opportunity for access for someone to really better understand the environment that they're in and then be prepared to interact in that environment.
[00:21:15] Ricardo Leon: Oh, Mary, where can we hear more about this ChatGPT course?
[00:21:19] Mary Loder: Oh, it's funny you should ask. Season four, "Course Stories", episode two? Or one? I don't remember. It's on our Teach Online page. Yeah.
[00:21:29] Lee Skallerup Bessette: Put it in the show notes. Put it in
[00:21:31] Mary Loder: Yeah, we'll definitely
[00:21:32] Jason Johnston: It'll be in the show notes. It'll all be in the show notes, bit of it.
[00:21:36] John Nash: I can vouch for that episode. I listened to it and it was great. Yeah.
[00:21:41] Jason Johnston: Yeah, me too. I was quite interested in that. It was really interesting to hear the approach and what you're doing. I was curious about others who were podcasting this year. John and I started our podcast this year, in February, in some of the fervor of ChatGPT really hitting people hard.
And so I think we had four podcasts where we said, "Oh, we'll move on to other topics next podcast." And then it kept getting stranger and more advanced. And we kept talking about it and talking about it. And then we dropped into kind of getting a little more organized about what we're doing.
And so we started talking to people, still with AI as part of the conversation. I'm just curious about other people who were doing podcasts. I know, Tom, that you had a couple of episodes this year that were more around the theme. Did you find it ebbed and flowed this year, or what were you finding?
[00:22:32] Tom Pantazes: We actually stayed away from it for a little while, mostly because we were spending a lot of time trying to understand it. And it gets to what Mary was talking about. I found a great quote from Seth Godin. He said, "AI is a mystery. To many, it's a threat. But it turns out that understanding a mystery not only makes it feel less like a threat, it gives us the confidence to make it into something better."
So we spent some time just trying to get our heads around it. One example of that: Planet Money did a great three-part series on "Can AI take our jobs?" I highly recommend that three-part listen if you haven't heard it yet. They do a great job, in the way that they do, of telling that story and exploring what it might look like for AI to take their jobs as podcasters.
And I'm not going to spoil the ending; you have to go listen to see what they ultimately settled on. But just recently, I think last week and the week before, we did our first episode with some of our local experts here about AI, and got them to speak a little bit about how they've seen the impact happen in their classes, as folks who are comfortable using it. They found the students were mostly using it to ask questions about things from class that they weren't particularly clear about, which the instructor thought was pretty interesting, and he would love to get those chat logs as a way to better understand where his students were struggling.
So that's where we've been. We haven't stepped into it too hard yet somewhat intentionally in order to get our heads around it a little bit better than we had in the past.
[00:24:01] Mary Loder: I will say, guys, I loved listening to you try to figure it out in your first episodes. They were really entertaining, Jason and John. It was fun to listen to you figure out things that were working. Or, Jason, I think you got kicked out because you were having a mental health conversation. There were just some really fun parts of your episodes.
[00:24:19] Jason Johnston: Yeah, ChatGPT broke my heart at one point.
[00:24:23] John Nash: Yeah, it was fun working in that time period, particularly as Jason was noting. I couldn't believe we kept talking about AI. We really thought we would move on. Surely there's more to online learning than this. And then it just kept pushing us into the breach, as it were.
[00:24:40] Josh Reppun: John, can I just, I'd love to share a quick story. A couple of weeks ago I attended our 16th annual Honolulu-based Schools of the Future conference, and there were about, I think, 1,400 educators and education leaders there at our convention center. And on day two, our lunchtime keynote was Kevin Roose, who's the co-host of the Hard Fork podcast, which I'm completely obsessed with.
I listen to every episode. And lately, with the whole business of Sam Altman going to Microsoft and coming back to OpenAI: Kevin was actually sitting at his table 10 minutes before his keynote, and there were a bunch of us in a group chat iMessaging each other, and that's when the news broke that Altman was out. Kevin was literally writing his column 10 minutes before he went up on stage to deliver this really broad and beautiful overview of the whole last year of AI to these 1,300, 1,400 educators. And it just really struck me that it must be bewildering for a lot of educators to look at the kind of national, global landscape and wonder, what the heck is going on here, right?
Because it just seems very chaotic. And I know that's something, John, that I would love to talk to you about in our upcoming event, just about the design of AI and how it's unfolding for educators, and how it must be traumatizing in some way, because it's just upset their normal procedures, much in the way that iPads did as well.
Yeah.
[00:26:18] John Nash: Yeah, I'm wondering how many of the stripes of educators in a system have been affected, and to what level. I'm going to talk to some superintendents next week at their statewide convention, and some of my early forays into talking to those participants suggest that a lot of superintendents still really aren't using AI, or have used it once or twice, and so they're not really thinking about it. So I think there's also a conversation to be had, to Mary's point. I think there are opportunities for leaders to be thinking about what they're doing with it as leaders, but also, how are superintendents and others thinking about managing this change at the teacher level?
Yeah, I think it's different. This goes back to this point that so many things have changed, and yet some things are not changing at all.
[00:27:09] Jason Johnston: I don't know if we explained this or not, but we did a previous podcast episode like this at OLC, in the spring of 2023, that we called Podcast Super Friends. Those of you listening can look it up.
It's just a name that came about as we were talking about how this is like those crossover episodes where people come in. So we're calling this Super Friends II, and these are definitely podcast super friends here. But one of the things I was thinking about was how podcasting is a form of translatable research, as we're all dipping into these different fields and then coming together, and here we are coming together on this podcast. And I was thinking about getting divergent views.
We had a couple of podcast interviews side by side that really had different views about AI. We talked to Kristen DiCerbo from Khan Academy. Obviously they're pushing out this whole Khanmigo chatbot, and they basically scrapped all of their plans for the next year to put their development efforts behind that.
And then we had a great conversation with Brandeis Marshall, who is much more of a, I don't know what you'd call it, maybe, John, you have a word for this. Even though she's deep, she knows so much more about coding and about data and so on than we do, she's almost like a reluctant technologist in some ways when it comes to AI, trying to take a slow approach to it and being skeptical about its abilities and about what it is we should be putting our hopes and dreams into here.
Have other people found with their podcasting that they're able to find divergent views, or views that maybe have challenged you in this regard, over the last year?
[00:28:45] Ricardo Leon: I was going to keep quiet, because I'm not a fan of those kinds of AI solutions. I think it's really not good enough yet to be using it as much as we're using it. Yeah, I'm just not a fan of that. Mary's like, oh, I'm going to use ChatGPT to do this or that. And I'm like, already our transcripts are run through our editing software, which creates a really rough transcript. So "pedagogy" is going to come up as "purple monkey dishwasher." You know what I mean? And then we're going to use AI to leverage that "purple monkey dishwasher" to create this or that. So I think there's been a lot more excitement or interest in AI than in human capital.
I think that sometimes you just have to do the sifting through, and I know it's painful and it's time-consuming. We just had a hack day, where you look to solve a problem, and at the end of it you have a presentation.
On my team we had me and one of our designers, a really great designer, Ron, and we were able to put together a video with an interface aspect to it that was really well designed, and I was really happy with that. And I saw some of the other teams that didn't have that human capital, and they were using AI to develop some of their slides, and I could see the characters on the slides having multiple fingers, things just not right. That's so distracting to me. Maybe it's just the creative stuff, for me at least, but it drives me crazy. It's not good enough, and I think we rely on it way too much. Of course we're going to try to eliminate as much human capital as possible, but those are, I think, still really valuable things until we have these perfect dream machines. I think it's great, it enables a lot to happen, it makes everybody a jack of all trades, but there's another half of that idiom: you're a master of none. And so I think we're going to lose out on a lot of that stuff if we rely on these technologies too much.
[00:30:39] Mary Loder: We are the divergent views on the podcast. I'm just kidding. Actually, we don't usually disagree, but we do disagree there. I can't wait to prove you wrong with my amazing prompt.
[00:30:48] Ricardo Leon: I can't wait for the "purple monkey dishwasher" podcast episode.
[00:30:52] John Nash: Ricardo, I appreciate what you're saying, and it goes back to what Brandeis Marshall wrote on Medium, which I still gush about: there are these things that are un-AIable. And I was hoping you'd also say, was it a hack day, or what was it? Yeah. Did you win?
Because I think those skills that you talked about are the ones that we still need in great measure to do great creative work that AI can't do.
[00:31:33] Mary Loder: We'll know what timestamps go where. So he can quickly go, timestamp to this one, timestamp to that one, oh, that's the spot, and just splice them all together. So I'm hopeful I prove you wrong, Ricardo, and we improve some of the experiences for you. Because what you do is time-consuming, although highly necessary and totally creative as well.
[00:31:50] Tom Pantazes: So I'm running a little experiment right now. We did the recording already; I grabbed the transcript, threw it into Claude, and said, where would you cut this down? How would you bring it down? It totally butchered the timestamps, but I held on to that, and I'm going to do the edit myself. I'm going to do the work as I normally do, but then I want to compare the two when I'm done, to see if what it spit out actually would have helped me. We'll see what we get, and I'll try to share back about that at some point.
[00:32:17] Mary Loder: Yeah, please email me and let me know if it works. Maybe I'll hold off on writing my prompt till I see your outcome.
[00:32:23] Lee Skallerup Bessette: But I think this is where we have an opportunity. Like I was saying, I've said this a lot: I think we, not necessarily here on this podcast, but we within higher education and elsewhere, are having the wrong conversations, in terms of how we're framing, how the discussions are framed, I'm going to use the passive voice there. And one of the things that I've really enjoyed in podcasting and talking about it is getting to have that reframing. Getting to say, okay, we're talking about it like this, but why aren't we talking about it like that?
Would it be more beneficial? Would it be more generative and generous to talk about it in this kind of way? Which is, again, why I've been very big on thinking about it as an assistive technology, because I think that's a way to reframe it so that it's not going to take over.
It's not going to... Grammarly is an assistive technology, right? My alarm is an assistive technology. My calendar notifications are an assistive technology. In what ways can this be an assistive technology? And thinking about what is it good at, what is it not good at, and then thinking about those affordances, like any digital tool, right? We go through this with any digital tool that comes out. What are its affordances? What is it good at? What is it not good at? Then how do we use it in our teaching and learning, or in our lives? And so, to be able to use these conversations as opportunities for reframing and rethinking, and having those moments of friction, I think that's the real power of it. Because, and I'm a writer, I love writing, but there's an immediacy to the podcast. Putting something out on the web now, particularly with what was formerly known as Twitter having gone downhill... there used to be an immediacy in those kinds of conversations that I think is being picked up again in podcasts, when, as you're saying, people want to come on and have these conversations, and listen to them as well, because there's an immediacy to it that I think is really unique generally about podcasts.
[00:34:28] Josh Reppun: Ricardo, what you said really resonated with me. Again, from a privileged position of having the time to do it, I spend two weeks getting ready for a guest, and I've developed a very intricate Google Docs process of creating raw questions that come from information provided via an intake form.
And then, once I have the big giant bank of raw questions, I start to move them over into a final script that I'm going to use in the interview. I find that process extremely humanizing for me. I'm a huge fan of Warren Berger's "A More Beautiful Question," the book, and it's just such a beautiful process to go through, creating a beautiful question. And last summer, just on a lark, I asked ChatGPT to create a dozen questions based on a short bio of a guest I was about to interview, and in just 20 seconds it did all the work that would take me two weeks. And I reared back from that like I'd just seen the devil. It was a horrible moment where I was like, I'm not going there, because it's going to take away from me my very human process of getting to know somebody. So, to what I think you're saying, writ large, I feel like we're in a moment where we have to have these conversations about where the humanity remains and where the technology becomes helpful to us.
And in podcasting, it's just a great medium to be able to have those kinds of conversations. So appreciate what you said.
[00:36:03] Jason Johnston: Yeah, I appreciate that as well.
[00:36:06] Lee Skallerup Bessette: I think another thing, in this moment that we're having, is: what are the systems in place that make it so we would want to use ChatGPT to save time,
[00:36:20] Josh Reppun: Yeah.
[00:36:21] Lee Skallerup Bessette: right? And that, like you were saying, the luxury of having two weeks to research and have those questions to be able to go forward.
But there is still this pressure of time, of efficiency, of whatever it is, that, you know, if I have a choice between two weeks going through this very humane and humanizing process versus five minutes with ChatGPT, what are the systems that are informing my decision to pick one or the other?
This is pre-tech, but when I was teaching, at one point I got a TA, and so the weekly quizzes that my students did, I could give to my TA to grade. It was this huge class that was extraordinarily time-consuming.
The trade-off was, I didn't get to know my students very well that semester, because that was the way I did it. But I was a PhD student; I had a dissertation to write. And so ChatGPT, again, is exposing a lot of these inequities and pressures that have always been there; it's just really highlighting them and bringing them to the forefront. And again, it's that thinking of reframing the conversation around generative AI to be able to say, okay, what is this telling us more largely about our work practices, about work-life balance, about our strategies, about our values, about all of these kinds of things?
And so, to be able to blow that up and have, again, those larger conversations about the society in which ChatGPT is growing and being adopted, and then also the nitty-gritty of what is it good at, what can it do, why does it do what it does, and those kinds of things. I think there's a really great spectrum that we can hit on in this medium.
[00:38:00] Josh Reppun: Lee, I would add, I'm becoming aware now that there's a company called Magic School, which is experiencing explosive growth. Basically, Magic School is a tool that saves teachers time through generative AI. Awesome, glad there's explosive growth, but what are you doing with that time that you're now freed up to experiment with?
Like, how about you dip your toes into design thinking? How about you dip your toes into real-world assessments, or assessments for deeper learning, right? That's the reframing of the conversation. If we give ourselves time, what do we do with that time? And that's what I think we can do as podcasters in 2024: start having those conversations with people about what they're going to do and how they're going to be more student-focused.
[00:38:49] John Nash: The AI doesn't know what else you could do with your time because you decided to use AI, and that's part of the problem. And I'm also appreciating all y'all's comments about the tireless and generous list of questions that ChatGPT can give you; it robs you of that process by which you've decided to ask the question. And it's that thing we're always talking about: how do we better humanize what we're doing in the presence of all this technology?
[00:39:20] Ricardo Leon: And we're just the current generation of that, too. I think about the students, the younger students who are coming along, and this is just going to be ubiquitous. So, for me, the real bummer is the next generation of people, where we're finding out... what are they going to be doing? What is work going to be in the future?
[00:39:42] Jason Johnston: Yeah, my kids don't know a world without internet or even a world without cell phones. They will never experience what it is like to be wandering a country having to ask people for help.
[00:39:57] Lee Skallerup Bessette: Physical maps.
[00:39:59] Jason Johnston: or physical maps. It's interesting. It's...
[00:40:01] John Nash: Hitchhike! Can I throw that in there? They're not going to know what it's... man, I had to hitchhike in Ireland, because they're like, how am I going to get back? I don't know, put your thumb out.
[00:40:18] Jason Johnston: ...as we're seeing this kind of transition. And I think what intrigues me about AI is not that I'm so enamored by it, but that I have a sense this is an internet, Gutenberg-press kind of moment in time. There will be a before and an after, and things are shifting in a way that means we need to have our eyes open.
One of the things about talking to different people with different perspectives, and I appreciate what you're saying, Ricardo, is that even within my own team, my media, more creative types have a different approach and different thoughts about this than my instructional designers do. And I appreciate hearing all of those, and it's giving me pause, both about what I'm chasing after and about the way in which I speak about such things. Because hopefully, at the end of the day, and I'm still growing and learning, but hopefully at the end of the day, it's not just about using the thing; it's about me being more human, and more empathetic and understanding about how all of this is affecting everyone around me and the people within my own touch.
[00:41:25] Mary Loder: I have one more gem to share from ASU. Sorry, Tom, I'm going to be fast. It's the Academic Risk Reduction Guide. I'm going to give that to you to put in the show notes, because we created that, and when I say we, I don't mean me: Deanna Soth and Tamara Mitchell, with the guidance of the Office of the Provost, created this awesome pedagogical guide.
And it's not focused on generative AI, but it helps to address generative AI through just good instructional design and pedagogy. So I just wanted to put that out there.
[00:41:54] Jason Johnston: Wonderful.
[00:41:55] Tom Pantazes: I was just going to point out that what I'm hearing is not losing the human aspects as we move into these generative AI tools and their use. And I'm reminded of those things I've seen floating around. It may even have been from you, John, you may have posted one of these: the AI writes the questions, and then the AI answers the questions, and there's no human work or understanding or labor that takes place in that scenario.
And I've seen lots of variations of that. So, how do we help folks, at least in my role, create situations and scenarios where we're using AI as that assist, and not in a way that removes the humanity from the situations and the work that we do?
[00:42:35] Jason Johnston: That's good. Yeah. And how do we work in our own communities, whether it's our podcasting community or our working community, to help, as you were talking about, Mary, bring together some guidelines that we can form together, so it's not just one segment of the population? And as you brought up, Lee, which I so appreciate, we're thinking about how this impacts multiple kinds of learners. And also, Ricardo, in terms of different workers and different aspects of what our work looks like. I firmly believe that we can form ethics that are objective, meaning ones that we form together as groups of people and can fiercely defend and fiercely move forward to help guide us during these times. And I think that if we can't do it in our educational circles, I don't know who's going to be able to do it, really. We can't depend on the ed tech folks to do it.
We can't depend on the AI companies; they're not going to bring up black-box transparency and things like this, right? So I think that we are some of the people who need to be doing that, as well as using our platforms to help move it forward.
[00:43:50] Josh Reppun: If I could add, Mary, to your comment: one of the things I've been thinking about, partly as a result of all the work of doing these episodes over the course of 2023, which is a fantastic learning curve that any host would be on, is that I'm maybe a little bit worried about the potential for uneven presentation of professional development around AI.
Very similar to what happened in EdTech, very similar to what happened with project-based learning. When that became a word or a phrase for people, there were lots of entities that jumped into the arena to offer professional development around project-based learning, but a lot of it was really uneven, and a lot of it really wasn't student-focused. And I worry a little bit now: is that going to play out with AI? And who are the entities we really trust in this space, who are going to deliver professional development that is student-centered and focused on learner-centered pedagogy?
That's something that's been on my mind, and something I'd like to keep on my mind as I go through episodes in 2024.
[00:45:01] Tom Pantazes: I trust these guys on this podcast called "Online Learning in the Second Half."
[00:45:05] Josh Reppun: Good place, great resource. Absolutely.
[00:45:09] John Nash: I've met those guys. They're not thought leaders.
[00:45:13] Jason Johnston: More question leaders than anything else. I think those guys have fewer things to say and more things to stir up. Yeah,
[00:45:20] John Nash: Yeah.
[00:45:20] Lee Skallerup Bessette: But it brings up a good point, though, and I think you're really right on this: this is a space where I think podcasts can really fill a gap. I'm at Georgetown now; we are very well resourced, our center is very well staffed, we are doing all the things to support faculty in teaching and learning with AI and a myriad of other things.
I've also worked at regional comprehensive public institutions where there isn't that kind of robust support for faculty and staff around any of these things. Project-based learning, again, I think is an excellent example, where podcasts like this become a way for the dissemination of knowledge and the dissemination of discussions. And one of the things I found most difficult, being at a regional comprehensive, was this sense of isolation. Who else is thinking these things? Who else is having these conversations? Nobody. Again, the time factor: everybody's on 4-4 or 5-4 course loads.
How are we, how can we deal with any of this stuff? And to be able to listen in on conversations, participate in these conversations know that other people are having these conversations and thoughtful ways that we'd hope to be having them on our own campus, being able to bring them to our colleagues and peers.
I think again, that's one of the strengths of having podcasts like these and having these conversations is again, providing. prOviding resources and hoping those resources get to places where they wouldn't have typically gotten in the past.
[00:47:00] Mary Loder: I mean, for instance, thank you for the ability to plug again. At ASU Online, we have these webinars. They're open to anybody. They're free. So educators, please go to asuonline.eventbrite.com and join our instructional designers and our faculty in the conversations and presentations around learner-centered pedagogy.
[00:47:18] Jason Johnston: Great. Yeah. And I think we're all in agreement. A hundred percent of those surveyed, I think, say yes to what you're saying there, Lee, about some of the strength and purpose of podcasting. And at that, why don't we spend a minute to go around and just let us know how we can find your podcast as we are wrapping things up here.
Maybe starting with Lee, tell us how we can find you and listen.
[00:47:41] Lee Skallerup Bessette: It's all the things ADHD. It is available if you just search all the things ADHD on just about every podcast distribution service. Wow. I can't even think of the word for that right now. What is it? Platform. That's it. Syndicators. There we go. Or you can go to allthethingsadhd.com where we also have every single one of our episodes. That's where the RSS feed is generated for all of the other platforms. And you can find me online as ReadyWriting on literally all the socials. I just went through and claimed it on all of them. And I'll also share the podcast there when we do actually get around to recording it every once in a while.
[00:48:21] Jason Johnston: That's great. Mary and Ricardo?
[00:48:24] Ricardo Leon: We are "Course Stories". You can listen to us anywhere that you find podcasts. Also, like I said earlier in the episode, I am producing a program called "Space for Humans," which can be found on YouTube starting in January. It's a weekly show. It's a partnership with the Interplanetary Initiative, and we are talking about how we design space futures that are inclusive and accessible.
[00:48:51] Mary Loder: And if you want our show notes, which have bios and all the links to all the things people share, that's at teachonline.asu.com/podcasts/course-stories. We need a better website for that, but you can just go to teachonline.asu.com, and we're under Podcasts.
[00:49:09] Jason Johnston: And we'll put all these in the show notes as well. Tom?
[00:49:13] Tom Pantazes: So, "ODLI on Air," oddly spelled O-D-L-I when you're searching for it in your podcast provider of choice. But you can also catch our episodes and our links from our wcu-tlc.org website.
[00:49:28] Jason Johnston: wonderful.
[00:49:29] Josh Reppun: Yep. And so you can find the What School Could Be podcast on all of the podcast platforms, including Apple and everything else. You can also go to whatschoolcouldbe.org and, in the nav bar at the top, just tap on Podcasts and that'll take you directly to my podcast website.
And Mary, just feedback on what you said in terms of learner-centered, student-driven learning: at whatschoolcouldbe.org, if you go to the nav bar and tap on Innovation Playlist, that's another awesome resource for student-driven learning.
It's really nice. That we're all working now a little bit more deliberately in these spaces where students are the center of the conversation.
And Jason, I guess this is the right moment to mention that you and John and I have been working on a project here. Which is something called the Network of Education Podcasters.
And when this episode goes live, there will be an NEP group on LinkedIn, and we invite anyone who's podcasting, hosting, or producing in the education space or related spaces to join us on LinkedIn, and we'll just keep this conversation going. Lee, I loved what you said about how, if we as podcasters are all talking to each other, there's no possible way that we can all listen to each other's episodes; there's not enough time in the day, right?
But when we have these kinds of conversations, we actually can move the thought leadership forward over the course of the next couple of years. And I love that idea, and it puts fuel in my tank and makes me want to just keep right on going. Network of Education Podcasters on LinkedIn: join us and we'll start working together.
[00:51:11] Jason Johnston: Sounds great. And we are found at onlinelearningpodcast.com. You can find all the show notes, which will include all of these links, as well as information about each of these fine people that joined us today. So please find us there or on LinkedIn. I think everybody's on LinkedIn.
You could probably find us and hit us up there as well and make some connections, because for us, and I think for all of you, it's not just about the one-way dissemination of information, but also about creating community and connections and getting your questions and your feedback. We'd love to hear what you think about this podcast and others.
So right.
[00:51:46] John Nash: Yeah, absolutely. And I'm grateful to you, Jason, for helping us put this all together. It's like you're the Garry Shandling and I'm your Hank. I think that's how it goes.
[00:51:56] Jason Johnston: I have a vague idea of what that means, but not completely.
[00:52:02] John Nash: Yeah, that's right.
[00:52:03] Lee Skallerup Bessette: We lost anyone under the age of 40 just now. That was it. Anyone under the age of 40 is like, "I don't know what is going on at the moment."
[00:52:10] John Nash: We'll put a Hank "Hey now!" GIF in the show notes.
[00:52:14] Tom Pantazes: That would help me out.
[00:52:17] Jason Johnston: It's a positive thing though, John. That was a positive thing?
[00:52:20] John Nash: Yeah, absolutely. Yeah, you're, yeah, you're glib and interesting and I just go, "yeah, that." So
[00:52:25] Jason Johnston: Oh, I see. So it was a little self-deprecating. That's not the case at all. But yeah, for sure. Oh, that's good. Thank you, everybody. This was a great conversation. Appreciate all of you. And we'll see you out there in the podcasting world someplace.
[00:52:41] John Nash: Yeah.
[00:52:42] Josh Reppun: Thank you, Jason. Thank you, John.
[00:52:43] Ricardo Leon: Thank you.
[00:52:44] Lee Skallerup Bessette: Thank you so much.
[00:52:44] Tom Pantazes: Thanks for having us.
[00:52:45] Jason Johnston: Yeah.
Outro
[00:52:48] Lee Skallerup Bessette: I love how even when we record podcasts, we all wave like
[00:52:51] Mary Loder: I literally, yeah, I couldn't even help myself. Yes, absolutely.
Tuesday Dec 19, 2023
In this episode, John and Jason talk about dangers and opportunities in the second half of online life, from their Online Learning Consortium (OLC) 2023 presentation and “live off the OLC floor” interviews. See complete notes and transcripts at www.onlinelearningpodcast.com
Join Our LinkedIn Group - *Online Learning Podcast (Also feel free to connect with John and Jason at LinkedIn too)*
Links and Resources:
See slides from the full presentation here
More about OLC here
Theme Music: Pumped by RoccoW is licensed under an Attribution-NonCommercial License.
Transcript
We use a combination of computer-generated transcriptions and human editing. Please check with the recorded file before quoting anything. Please check with us if you have any questions!
False Start
[00:00:00] John Nash: I took a class from a professional in San Francisco for voice acting. I thought I wanted to be a voice actor. So yeah, that
[00:00:07] Jason Johnston: and here you are doing a podcast. You basically are a voice actor, except you happen to be acting like John
[00:00:13] John Nash: Like John Nash, not like Barney the dinosaur, or doing my Louis Armstrong imitation or something like that.
Start of Episode
[00:00:20] John Nash: I'm John Nash here with Jason Johnston.
[00:00:23] Jason Johnston: Hey, John. Hey, everyone. And this is Online Learning in the Second Half, the online learning podcast.
[00:00:28] John Nash: Yeah. And we are doing this podcast to let you in on a conversation we've been having for the last two and a half years about online education. Look, online learning's had its chance to be great. And some of it is, but a lot still isn't. And so how are we going to get to the next stage?
[00:00:43] Jason Johnston: That is a great question. How about we do a podcast and talk about it?
[00:00:47] John Nash: That's perfect. What do you want to talk about today?
[00:00:50] Jason Johnston: So John, would you call yourself a techno-optimist or a techno-pessimist? Do you think all of this is winding up into a better world? Or is technology taking us down this path of doomsday and destruction?
[00:01:06] John Nash: If the left side is doomsday and destruction and the right side is optimism and happiness, I'm a cautious optimist. I think I'm a little bit to the right of a cautious optimist. I'm no Marc Andreessen, who's recently come out with a tech manifesto suggesting that anybody who doesn't believe the bros in Silicon Valley can fix everything is crazy. I'm not like that at all.
I do worry about my own critical thinking around technology and how it may be exacerbating environmental problems and social problems. Because I love playing with these tools so much, I think I'm clouded a little at times. But yeah, I'm right of center; if being right is optimistic, I'm over there.
[00:01:55] Jason Johnston: Yeah, I find myself in the same space, not because I necessarily have a lot of optimism around technology. I do think it's pretty consumer-driven and profit-driven, and so that doesn't build in me a lot of optimism for its final outcome. However, I have an optimistic view of humanity: that we typically work together towards our own survival when it comes down to it, and that there are a lot more good people in this world than bad people. And maybe I'm an idealist in that I think the good will win out; not because I believe technology is going to save us by any means, but because there are usually enough good people helping to drive technology that I think we'll get to a better place.
[00:02:46] John Nash: Yes. Yes, I think that's well put. I think I'm in the same space you are because we're both educators and we surround ourselves with other educators who are interested in applying the use of technology to help learners achieve their goals. I'm not on the side of thinking "the technology we need to have in place to save the world is that which puts billionaires in space."
I'm not thinking that's the way to go, but you're right. I think when we surround ourselves with people who are interested in applying technology, particularly the technology that allows us to have online learning, and create more equitable, lower cost, high impact activities, then I think we're in a good place.
[00:03:29] Jason Johnston: Yeah, I agree. So you don't think you're going to climb into the next Mars shuttle to help expand us into a multi-planet species?
[00:03:37] John Nash: No, I'm not in line for that. I'll watch the rockets leave Earth.
[00:03:40] Jason Johnston: Oh yeah. I will too. I would love to watch the rockets leave, but I don't have any interest in doing it nor do I think it's the best place. I think we have enough issues and good things to put our money towards here on this planet with these people that we have in front of us that I'm not really in line with that.
[00:03:57] John Nash: Yeah, I agree. So where does that put us? We're both on the optimistic side of center here. But that doesn't mean we're not without some dangers.
[00:04:08] Jason Johnston: That's right. And so today I would love to talk about our last OLC presentation, around the theme of turning dangers into opportunities in online learning. Online learning in the second half: looking at the dangers, turning them into opportunities.
How does that sound?
[00:04:26] John Nash: Yeah, that sounds really good. And let's remind our listeners what OLC is. That's the Online Learning Consortium, and they hold two major conferences every year; this fall conference was in Washington, DC.
[00:04:42] Jason Johnston: Fall of 2023. If you're listening to us in the future, it's fall 2023.
And also we're sorry. That's the other part. If you're listening to us in the future we really are trying our best, but I know we could have done more. That's all.
[00:04:55] John Nash: That's right. So we had a presentation where we were able to talk with participants at the conference about the potential challenges that we have in front of us with online learning, and really disambiguating those from the dangers that we might also face. Jason, I think the word danger might sound a little alarmist to some of our listeners.
Maybe we ought to put that into context.
[00:05:22] Jason Johnston: Yeah, and we found that as we were talking to people. So we roamed the snack area, basically, and accosted people with our microphone, asking them this big question. And I think a lot of times "dangers" took them aback just a little bit, and they'd say, "Danger? Could I talk about a concern or a problem?"
And we said yes, but we're really looking for dangers. We're thinking about the big threats here, the bigger, more existential threats to online learning. What are the big things that come to mind? But we did talk a little bit about what "challenges" were versus "dangers": challenges are more like the obstacles or difficulties, things that you could overcome with some effort and creativity and so on; dangers are really these bigger challenges that could pose significant risks or threats and have some harmful consequences if they're not addressed.
[00:06:13] John Nash: Let's also put some more context on the danger and the things that we're concerned about. Among the people that go to the Online Learning Consortium meetings, there are certainly some vendors who supply tools and packages and other technology for institutions of higher ed and P-12 to do online learning, but it's also significantly populated with instructional designers and people who are really interested in bringing about higher-quality experiences for learners in online environments.
And so when we talk about dangers, we're really talking about what may be in front of us that could really threaten quality of learning experience. Is that fair?
[00:06:56] Jason Johnston: I think so. I think most of the people that we talked to are well versed in building online classes, not just from a theoretical stance, but a practical stance of getting in there and making them happen from a quality standpoint. And so that certainly puts a particular context on this. Nobody was talking to us about the enrollment cliff or things like that.
They tended to be around more of the issues that are apparent within the course and programs that are being delivered online.
[00:07:29] John Nash: Yeah.
[00:07:31] Jason Johnston: Shall we listen to a few quotes from the OLC floor?
[00:07:34] John Nash: Yeah, absolutely. Let's get on the floor interrupt some some snack time that people are having and hear what they were thinking was a potential danger to online learning in the future.
OLC FLOOR INTERVIEWS
Yeah. My name's John Ruzicka. I'm with Learning Sandbox. I feel like the greatest danger to online learning is overreliance on what I would call the shiny new object. So a couple of years ago at this conference, you might've heard a lot of talk about the metaverse.
Today, it's all about generative AI, OpenAI. And so what will it be in the next two or three years? It depends. I mean, of course, these are topical things we need to all think about and know about and experiment with, but I think the overreliance and over-indexing on that new technology could be a distraction.
My name is Carrie Kennedy. I'm here with the University of North Carolina at Charlotte, and I would say the biggest danger or risk that I'd like to make sure that my university avoids is being too slow to consider workforce impact and mobile pathways between non-credit and academic credit.
I think we're already a little bit behind in doing that, and I think, to keep up with the demands of employers and skill gaps, we need to have those pathways in place.
I'm Ellen Rogers with Penn State University. Big concern might be, if the faculty get too good at all this online learning and instruction, what happens to the need for instructional designers?
Bill Egan, instructional designer at Penn State World Campus. One of the biggest threats, to play off of that: what if we get rid of faculty, because we're using things like AI and other content experts to generate the content? Which is one of the biggest obstacles from an instructional design perspective, working with faculty, getting content on time, etc. So I answered a question with a question.
I'm Cody House, I'm the Director of Academic Programs at George Washington University with the College of Professional Studies. I see the thing that most influences or should be challenging to higher education and online learning in the next few years is how slowly institutions are to accept change and to embrace innovations.
You know, obviously, I feel like for this question, most people have probably said generative AI. But I think even that conversation shows you how slow institutions are to figure out what their stance on change is. They're coming up with committees about committees to figure out nomenclature for new terms and new credential terminology.
And so I think that institutions need to just figure out how to streamline processes and make decisions quicker to accept change and move on to the next thing.
Jamie Holcomb, and I'm with Unitech Learning, and I think one of the greatest challenges for online learning coming up will be the disparity between the consumer experience and the online learning experience.
And the expectations that consumers have for the quality of interactions that they have from other platforms where they engage with frequently. I think online learning is lagging there. So, to me, that's one of the greatest challenges we have coming up.
I'm Carrie Brown Parker from North Carolina State University, and I guess I think the danger in terms of student work or student productivity maybe is AI tools, although there's also a great inspiration there for instructors to get creative and do new work with students.
Caleb Hutchins instructional designer for the community colleges of Spokane. I think the greatest danger is probably commercialization, to be honest.
I perceive that a lot of different colleges are moving towards standardized publisher content as much as possible. And I think that I think that more and more it's taking away instructor agency and instructor interaction with students. I think publisher content has its place, but I think that when it starts to become a replacement for the teacher, then we have a problem.
My name is Dr. Sonja Dennis. I'm with Morehouse College. I think the biggest threat would be the lack of in-depth knowledge, or lack of in-depth understanding: where students have so much information at their fingertips, is any deep learning really occurring?
My name is John Moraine LaSalle. I am with Montclair State University. Specifically, I'm an instructional designer, part of the team. And I think that one of the biggest dangers, while I want to say it's potentially artificial intelligence, it's not specifically that; it's more so the growing danger of feeling isolated in the online environment, and I feel that artificial intelligence poses the risk of making it even easier for students to disconnect from each other.
They're already struggling in the online environment sometimes with that. So that is what I think the bigger threat is from AI. Not so much, "Oh, they could use it to try to get a solution or an answer," but how it could basically, almost Pavlovian-like, make them just immediately go, "I'm going to go to ChatGPT to figure out the best way to discuss, the best way to find an answer or a solution," rather than their actual peers in this virtual environment with them.
My name is Yingjie Liu. I'm the lead instructional designer from San Jose State University. I would say we might be too slow to catch up with what's going on in the world, especially with XR and with AI. We are slow to integrate those into our teaching and learning.
But I'm wondering, when the students are already using technology like AI in their learning, how we update our teaching, especially our pedagogy and best practices, to catch up with what's going on and what students need, right? The students might have different ways to learn; they might have different preferences, and we are exploring that direction. Just hope we catch the speed of things evolving.
My name is Vincent Del Casino and I'm at San Jose State University.
That's a really interesting question. I think the potential for it to become so diffuse that it loses its center point, in the sense that anyone thinks they could get into the game, and it has the potential to lose the kind of engaged pedagogical value that you sometimes see. And I think one of the areas is corporate in particular: going out there and building courses and programs and thinking they've nailed what we haven't been able to deliver on. But some of the criticality, some of the other things like that, can have a real impact on how people think and imagine what value higher ed brings. And we tend to move a little slower sometimes.
But I would argue with a little more thoughtfulness. And I think that could be a risk for us in the future.
END FLOOR INTERVIEWS
[00:14:52] John Nash: Yeah, wow, so what'd you think of those, Jason?
[00:14:56] Jason Johnston: Yeah, there were some parts where I was not surprised by some of the themes that were coming out, especially those around AI, institutional change, quality, and so on. But yeah, it was really interesting to talk to people just to get their initial reaction on the floor.
[00:15:13] John Nash: Yeah, you never know where people are heading with what their concerns are going to be. We hear them talking about over reliance on new technologies, maybe slow adaptation to workforce needs, to redundancy of instructional designers. It's a conference of instructional designers, of course. AI is on everybody's mind.
Will they be put out of a job? Will faculty be put out of a job? So I think that's, yeah, it's interesting. And then, of course, the ever-present institutional resistance to change.
[00:15:44] Jason Johnston: Yes. Yeah, which, as we talked about a little bit, and we'll go through and talk about these individually, is probably both to our benefit and to our demise in some ways: our resistance to change, right? How quickly we move into things like this,
[00:15:59] John Nash: I think it's, yeah, it's a risk to our demise. I think that the glacial pace of change in a lot of places is going to be a threat going forward. At the same time, I'm not advocating a move-fast-and-break-things approach, but I think we need to find a more middle ground. The institution's responsiveness to change through its leadership, to understand what expertise needs to be brought to bear to fix the problems in front of us, is just not responsive enough.
[00:16:31] Jason Johnston: But aren't you from San Francisco? Aren't you one of the bros?
[00:16:33] John Nash: I am not one of the bros because I don't know how to code. I can write rudimentary HTML, but that's about it.
[00:16:40] Jason Johnston: Okay. I thought everybody from San Francisco just believed in, in moving fast and breaking things and seeing what happens.
[00:16:47] John Nash: Yeah, I like to prototype things. And I grew up in Menlo Park where all the VCs are. But I am not a VC myself, nor do I really know any.
[00:16:56] Jason Johnston: Huh. Interesting. As we were looking at this, we were looking at a pivot, listening to what people were saying from the floor, listening to what people were saying in our conference room, and thinking about how we could create this pivot of transforming dangers into opportunities. What are the top dangers?
And then how could we pivot to opportunities? And we came up with this three-part response and approach, which is, one, to assess the threat level. Is it a real danger? How likely is this danger to destroy a fundamental part of academic life? Then, two, how could we simply survive the danger? What are the basic skills necessary?
And then, three, how could we thrive within this danger, or in response to this danger? Use it as an opportunity to create a better, in our case, in our theme, more humanized online education.
[00:17:52] John Nash: That level of discussion, where we were talking about the threat level and whether it's a real danger, was really important for the folks we were talking to because it helps us start to disentangle hyperbole from real concerns. I mean, you get into a room with enough people and there's always going to be some kind of complaint about something that's going on, but is it going to be a real threat?
Is what we're hearing in the rumor mill and in the world around AI (and right now, we are recording this at the time when OpenAI's board has fired Sam Altman, their CEO) a real threat to what we're going to do? No. So I think it's important to vet those discussions in such a way that we think about what the real danger is, reframing it to what's actually real.
Then taking it to a discussion: how are we going to survive this? And then how can we actually thrive in it? How can we flip it on its head?
[00:18:46] Jason Johnston: And so that we could be specific, we tried to frame the dangers that we're going to present here as "the danger of blank to the existence of blank." So it doesn't become just this kind of nebulous danger that's out there; if it is a danger, what exactly is it a danger to?
So our first one that was coming up over and over again, obviously a big topic of conversation, was around AI, but specifically, danger number one, AI threatening our ability to assess student learning in online courses. What do you think the threat level is for this? AI threatening our ability to assess student learning in online courses.
What do you think the threat level is and why?
[00:19:36] John Nash: I think the threat level is in the middle there. If we're going from one to five, we're about at a three. I think that AI's threat to our collective ability as instructors to assess student learning in online courses is as large as the instructor's capacity to pivot and change what they assign. I'm going to go back to an adage; it's not an old adage, because AI has only been around 51 weeks, by the way, at this point in time as we record.
But it's an...
[00:20:10] Jason Johnston: Happy birthday,
[00:20:11] John Nash: you
[00:20:11] Jason Johnston: AI.
[00:20:12] John Nash: Happy birthday, GPT-3.5. The adage goes something like this: if you're assigning work that can be done by AI, you need to rethink what you're assigning. And I think that's where the threat sits. So the question then is how we survive that. But what do you think the threat level is there?
Do you think that AI threatens our ability to assess student learning in online courses?
[00:20:35] Jason Johnston: Yeah, my answer is that it's hard to give it a number, because it depends, right? In short, I would say high for those who are inflexible about changing and rethinking their assignments, but also high for people or programs where the typical assignment being assessed is easily replicated by AI, meaning that it's not just about rethinking the process toward whatever it is you're learning, but the final product is something that could be easily replicated by AI.
So I think it's a high threat to those kinds of programs and people, and a more challenging threat, I would say. So how do we survive this threat, then, if we've assessed it and we're looking to survive it?
[00:21:24] John Nash: I think one thing that instructors can do to just merely survive is to start to communicate with their students about the presence of AI and how they feel about it; "they" meaning, how does the instructor feel about it, and how do students feel about it themselves? And so there's this communication component that I think is going to be the lowest-threshold, highest-impact thing at the surviving level.
If you're not prepared to think about your assignments in terms of redesigning them or thinking about the way you assess the assignments that you give at least you could be talking about what it is you believe about this and why you also believe the assignments you give are the ones that you want to do.
[00:22:07] Jason Johnston: Yeah. So taking an active communication stance, being transparent. We heard a lot of people talking about creating policies and principles, which I think are ways to survive, but not necessarily thrive. Still, they are ways to approach things, and that maybe comes in with some of your communication.
Also, being in a place where you can really test out and figure out what AI is doing and how it affects your assignments. So it's not just this unknown boogeyman threat in the closet that you don't know what it looks like; you have a clear sense of it. I've heard of instructors basically putting their assignments into AI to see what it would spit out.
And that gives you a clear sense of really where this threat is at, rather than this unknown, nebulous kind of threat.
[00:22:49] John Nash: Yeah. So what about thriving? How do we flip this on its head?
[00:22:54] Jason Johnston: Yeah, one of the first things that we had talked about was our conversation with Dr. Brandeis Marshall in episode 18 about making assignments un-AIable. And I think that's one way to thrive: as we've talked about, going beyond just communication and transparency to actually reforming, reimagining our assignments in the age of AI, so that these assignments could not only help us really assess where the students are at, but actually prepare them for a future of work and life and scholarship with AI.
[00:23:30] John Nash: Yeah, that's right. When we talked to Dr. Marshall in episode 18, that was really inspirational, and it made me think about ways in which assignments could be turned into more public demonstrations of learning, more about oral defense of ideas. And a polite pushback to that might be that it takes more time. If I'm going to do an oral defense of ideas with every student and I have 200 students, that may not be scalable. So I think we also have to be thinking as a community about how we can support instructors at scale.
[00:24:05] Jason Johnston: Yeah. With all these ideas, we're not assuming that there's a one-size-fits-all or a silver bullet that's going to solve this for every single kind of program and class size and so on. We've got to be thoughtful about this. That's right. Yeah,
[00:24:19] John Nash: One way we might think about scale that could work in larger classes and inside a learning management system is, for instance, letting students cheat on purpose with ChatGPT or Claude or Bard and then ask them to rate the quality of that response to a prompt that you might ordinarily just give to students on their own.
And so you start to get this sort of metacognitive, critical thinking lens going, and you get ideas as an instructor on what the AI can really do, and also help students see the limitations of what AI can do.
[00:24:58] Jason Johnston: That's good. And we talked a little bit about scaling online classes and humanizing those classes with Dr. Enilda Romero-Hall in episode 13. And within that thinking too, about how we might focus on skills and maybe rethink grading in those situations as well. Those are things that could be scaled, because it's just a shift in what it is we're assessing and also just a different process in terms of grading, which could actually turn into, let's say on the surface level, less work, not more work, for the instructor when it comes to assessing where their students are at.
[00:25:34] John Nash: Yes, and Dr. Romero-Hall's presence in the classroom is really predicated on a community presence, and with a feminist pedagogy lens, bringing in student voice along the way. And so that could also be scaled to some extent through the LMS and through polling and questions and even discussion posts, to say: how might we together consider how we want to address this learning goal in the presence of AI, and with these kinds of activities that we must get done? That could happen as a community.
[00:26:08] Jason Johnston: Yeah, it reminds me of a quote that I ran into this last week by Paulo Freire, and the quote is this: "the answer does not lie in the rejection of the machine, but rather in the humanization of man (or people)." This is from "Education for Critical Consciousness."
And it reminds me of this idea that we just can't have large classes and actually humanize them. That may not be the case, right? We can think about our approaches even in the face of AI. We can think about our approaches in large classes, where maybe because of AI, we're forced to think about the humanization of students within the context of these large classes in ways that we didn't have to think about before, because we were just following what Paulo would also call "the massification of education."
We're just following this incremental enlargement of the class size without really critically reflecting upon what it means to continue to humanize the students in these contexts.
[00:27:10] John Nash: And that's related to the webinar we did recently with the group from Inscribe, looking at the impact of AI on student connection and belonging. With AI, we are able to explore opportunities in large classes to help differentiate instruction, to help think about ways to advance belonging among large swaths of students. So I think there are ways to get at this if you're thoughtful about it.
[00:27:35] Jason Johnston: Yeah, absolutely. Oh, there's so much there. I just read a great article about belonging from research with 26,000 students across 22 institutions. Anyways, that's a whole nother episode. We should go there. We should find somebody that can talk to us about that. And let's do a whole episode on belonging online.
Let's move on to danger two, though. So this is what we saw from our group and from the floor of OLC: danger number two was institutional resistance to change in pedagogical approaches. So what do you think the threat level of this is?
[00:28:07] John Nash: I don't know, maybe I'm too close to the mothership, but I feel like it's a little high. I'll give it a four out of five.
[00:28:14] Jason Johnston: Yeah. Yeah. And I think, for me, again, I'm sorry for this big cop out, but it depends, right? I think there are certain units and certain programs that are embracing change. There are others that are quite resistant. And I think there are certainly ways in which a lot of people across units are wanting to hold on to the way we've always done things versus adapting.
So for instance, they want to just have, TurnItIn 2.0 so that it can detect AI versus rethinking the way that we're interacting with students around plagiarism detection and our relationship with students.
[00:28:53] John Nash: Yeah, I mean, what I would hope for is that as institutions think they're responding to the need for change, it's not that they're bringing in new tools like the "TurnItIn 3.0" that's going to let us catch more cheaters, but rather they're thinking about ways to do capacity building that are akin to what we learned from Olysha Magruder's episode and what they do at Johns Hopkins in the School of Engineering, which is: everybody who's teaching in an online program has to go through the online instructional design process. My institution doesn't necessarily require that. I think that would raise the tide for all the quality across our institution if we did that, and right now I think it's more akin to: here are tools you can use, and we hope you use them.
[00:29:41] Jason Johnston: Yeah, which is probably some of the survival part of it: we have provided you with some tools to use and given you some guidance around that, maybe surviving this threat level right now in terms of this change that AI is bringing about, this disruption really. AI is a disruptor, it's not a calculator, I don't think. I've decided that's an interesting analogy, AI being like the calculator.
It's not really like a calculator, because it crosses so many boundaries of everything; it's a disruptor across every single discipline. And so part of this survival is maybe giving people some ways to adapt and giving them guidance and so on. How would we thrive, though, in response to the danger of institutions resisting change?
So how do we turn this into something that could really take us into that second half of online life that we're imagining?
[00:30:41] John Nash: I think it's when institutions can become learning organizations and start to see the richness of the opportunity: when they are able to build capacity amongst faculty, create environments where faculty want to learn, and also, for Research-One institutions like the ones you and I are at, create incentive structures for faculty to be really interested in taking on that capacity building.
[00:31:08] Jason Johnston: Yeah, that's good. And I think along with that too, that learning organization, capacity building, creating systems and positions that help us adopt and adapt to innovations. So creating pockets to test and educate and try out new things and help us with this transition of new technologies, I think is really important.
I've heard more and more people getting assigned these kinds of roles, like a dean of AI and these kinds of things, to help move us in that direction. And I think that is really important, because faculty are really busy, and they're just trying to make it through the semester, and I think that they would welcome people who are taking the time to really dig into this and come at it from an institutional standpoint, guardrails up, and think about how this change is affecting and should be affecting...
[00:32:01] John Nash: Yeah, there's a quote that came across my desk. Have you heard of the app Readwise?
[00:32:07] Jason Johnston: Oh, yeah. Yep. Yep.
[00:32:08] John Nash: One of my doctoral students recommended Readwise and its connection to the app Notion, and I get a little push of everything I've highlighted on my Kindle. There's a quote from Daniel Priestley in his book, "24 Assets." And he says, "Systems aren't there to replace people, they are there to make your life easier."
So I think thriving also means that institutions will put into place the resources to create systems that really work across the spectrum of services that we provide as faculty and that the staff do to make things go.
[00:32:41] Jason Johnston: Yeah, that's good. Could you say that quote one more time?
[00:32:44] John Nash: Yeah, "systems aren't there to replace people. They are there to make your life easier. Your teams, your customers, yours. Everybody's life."
[00:32:53] Jason Johnston: That's good. So danger number three: increasingly low quality of online education. So the threat, the danger, is increasingly low quality of online education. I'll say just from the top that this seemed to really resonate with the people that we were talking to. Of course, we were with a bunch of instructional designers who were concerned about online education getting watered down, about it becoming much more mechanized, much more shovelware.
And I think I share that concern as well. That's probably a higher concern for me than a concern around AI taking over things. How about for you?
[00:33:29] John Nash: Yeah, I am concerned about that. And I think I want to frame our danger a little bit, because the way we've stated it here, "increasingly low quality of online education," suggests that we are on a downward trend. I think that the threat really is that there's a chance that we will see an increase in low-quality online education. Doesn't the way we've stated it sound like we're saying there already is an increasingly low quality of online education?
[00:33:54] Jason Johnston: I think that is the danger: that online education will not get better. It's going to get worse. It's going to go lower and lower in quality as we develop this out into this next decade.
[00:34:06] John Nash: And that threat manifests itself because of a potentially crowded vendor marketplace, a potential run to make money in the space, what have you.
[00:34:16] Jason Johnston: Yeah. And tied in with the AI as well. I'm not a,
[00:34:19] John Nash: yes.
[00:34:20] Jason Johnston: I'm not a fortune teller, but I guarantee you a year from now, we're going to have thousands more online learning opportunities that have been created with AI as the subject matter expert. So we're not working with subject matter experts anymore on this.
There are people just cranking these things out because the access to the information is there and there's an opportunity to get it in front of people and maybe make a couple of bucks or not.
[00:34:47] John Nash: Yes, I think that's a part of that threat. And how do I feel about it? I was concerned that the level of quality of online learning that presented itself during the pandemic, which was horrid, would bring people to a certain belief that this is what good online learning looks like, or, I guess, this is as good as it's going to get.
And a lot of new people came into the space with not a lot of experience: good teachers who had never taught online before, but then did not-so-great online teaching and learning. And so I wondered if that was going to, and I posed that question to Dr. Olysha Magruder on episode 20, and she actually turned me around a little bit on that and said, yeah, but you know what?
Look at all the people who got exposed to teaching online and, that's more people than would have otherwise. And so at least we have those people knowing that you can teach online and that there is a, there's a light at the end of that tunnel where you can get better and better at it.
[00:35:42] Jason Johnston: Yeah, and there are interventions that can happen to help us get there. And I guess, talking about the surviving, would that be surviving, or was that really moving into thriving? I think surviving would be things like applying quality rubrics and continuing to bang the drum of: we've got to continue to have quality online, for all the reasons that we should be building quality online.
I think thriving, though, is really continuing to work heavily, recognizing the importance of working with our instructors, not just to develop good online learning courses, but also to have the right tools and approaches to make those courses good, to make them excellent, right?
[00:36:27] John Nash: Yeah. I think that one thing that going online lays bare for new instructors in the space is that good instructional design trumps everything. There are a lot of things that you can get around and avoid when you're teaching face to face; you have that sort of context. But when you start to move online, you've really got to have quality rubrics, good instructional design, and some professional development under your belt on how to really have a presence online. So yeah, I think you're right.
[00:36:57] Jason Johnston: And of course, in our last episode, we talked with Olysha Magruder about a Coursera course that she created about excellence in teaching online. And the other cool thing is that on December 12th, we were invited to do a podcast wrap-up session for their conference called the Excellence in Online Teaching Symposium.
And this is just a plug, of course, but also just an excellent moment to stop and say, yeah, we can always learn more. And this is how we thrive in the face of the potential of low quality: by continuing to connect with peers, continuing to work at professional development, and looking for these opportunities to continue to grow and to learn and get better.
[00:37:47] John Nash: Yeah, I learn best from concrete examples that I can copy and steal, and I'm happy to do the same for others. I think that this opportunity with the Johns Hopkins School of Engineering showing excellent examples of online learning is a model for what we need to see more of.
[00:38:05] Jason Johnston: So as we think about wrapping up this OLC discussion around the future dangers of online learning, what are your overall thoughts about our approach, or about your optimism for what we have in the years to come?
[00:38:22] John Nash: I think we don't want to come off as alarmist talking about danger, but I think we can take some time here like we did today to understand challenges versus dangers. Challenges are the hurdles that can be overcome with some effort and the dangers are significant threats that could have potentially harmful outcomes for the way we want to see online learning go forward.
And so I think identifying some of these and then pivoting to opportunities is a great way for us to keep optimism in the mix of our conversations. Because I think there are practical strategies that we can take for a lot of these things. And it's just a matter of us working as a community to find out where they are, share those out, and then be kind and empathetic to those that are coming along.
I think that there's a good opportunity here for online learning to be great. We just have to be vigilant.
[00:39:14] Jason Johnston: Yes, I agree. And I think OLC was an excellent example, again, of connecting with a larger community around these questions, so that we're sharing these dangers, we're coming up with solutions together, we're maybe validating or invalidating some of these, as the case may be, in terms of talking us down from the ones that we feel might be a really big danger, but, by specifying, we realize that maybe it's not as big as other things.
And I also think, like, the ongoing community-- I want to encourage people and welcome them to join us on LinkedIn as well, so they can connect with us there as our community.
[00:39:55] John Nash: Yeah.
[00:39:55] Jason Johnston: And we'd love to hear from you about what you think about these top dangers, these top three that we talked about, but also if there are other ones that you want to talk about, or you have other solutions, just reach out and we'd love to hear more. And I think this will not be the last time we talk about this.
Do you think John?
[00:40:11] John Nash: No, we're never going to talk about AI again. We're never going to talk about online learning again. Actually we have to, that's the podcast, right? Okay.
I'm actually looking forward to the balance of 2023 and talking to you more. I think we've got some good stuff lined up. A potential year in review. And maybe even a second Super Friends episode.
[00:40:35] Jason Johnston: I think we might even try to combine those two, maybe a year in review with our super friends. We'll see how that goes.
[00:40:41] John Nash: Yeah. I think that would be a great idea. Let's do that.
[00:40:44] Jason Johnston: Okay. That sounds good. This has been great, John. And again, as we said, connect with us on LinkedIn. Also, you can find all these podcasts at our website, OnlineLearningPodcast.com, as well as show notes. We'll put as many links as we can about the things we've talked about today in there. And anything else, John?
[00:41:02] John Nash: You can catch the transcript of the podcast episode as well on our website. And yeah, do join our LinkedIn group.
[00:41:10] Jason Johnston: Yeah. And we want to hear from you and the kinds of things that you want to talk about. And if you like what you hear, please review us on Apple Podcasts. I understand that the AI likes that and will push us up to even more stardom and success, if real humans go in there and review our show.
[00:41:31] John Nash: So we have concerns about AI, but we still treat it with a cheerful tone, because one day when it does become sentient, it's going to remember that we were nice.
[00:41:40] Jason Johnston: That's right. I always say thank you.
[00:41:42] John Nash: Always say thank you. Thank you, Jason.
Monday Dec 11, 2023
In this episode, John and Jason talk with Dr. Olysha Magruder about the future of online education, a three-pronged approach to faculty development including JHU’s Coursera MOOC Course, and time boxing to help achieve successful outcomes. See complete notes and transcripts at www.onlinelearningpodcast.com
Join Our LinkedIn Group - *Online Learning Podcast (Also feel free to connect with John and Jason at LinkedIn too)*
Links and Resources:
Dr. Olysha Magruder is the Interim Assistant Dean in the Center for Learning Design at Johns Hopkins University and can be found here at LinkedIn
Excellence in Online Teaching Coursera Course
Johns Hopkins Excellence in Online Teaching Symposium
Beth McMurtrie on Teaching: What happens to teaching after Covid? (Chronicle of Higher Ed Paywall)
Theme Music: Pumped by RoccoW is licensed under an Attribution-NonCommercial License.
Transcript
We use a combination of computer-generated transcriptions and human editing. Please check with the recorded file before quoting anything. Please check with us if you have any questions!
False Start
[00:00:00] Jason Johnston: Any other questions for us before we get rolling? We'll do our normal kind of intro here, and then we'll get into the conversation.
[00:00:07] Olysha Magruder: No, no questions. I hope I don't sound too goofy, but...
[00:00:10] John Nash: No, we like goofy.
[00:00:11] Jason Johnston: Yeah, you'll fit right in! We decided on the front end we're just going to let it roll that way. And I feel like, John, people have appreciated that.
[00:00:18] John Nash: I even laugh at our own dumb intros, but yeah, we're not too stiff about it. We have a serious topic here, but yeah, we're still humans.
Start of Episode
[00:00:27] John Nash: I'm John Nash here with Jason Johnston.
[00:00:30] Jason Johnston: Hey, John. Hey, everyone. And this is online learning in the second half, the online learning podcast.
[00:00:35] John Nash: Yeah, we're doing this podcast to let you in on a conversation we've been having for the last two years about online education. Look, online learning's had its chance to be great, and some of it is, but a lot still isn't. How are we going to get to the next stage, Jason?
[00:00:49] Jason Johnston: That is a great question. How about we do a podcast and talk about it?
[00:00:54] John Nash: I agree. Let's do a podcast and talk about it right now. What do you want to talk about today?
[00:01:00] Jason Johnston: Wait, we are doing a podcast to talk about it. That's the weird thing about our intro. We're already doing a podcast.
[00:01:05] John Nash: Yeah. It's very meta.
[00:01:07] Jason Johnston: A little meta that way. Yeah. Yeah, today we are going to talk with Olysha Magruder. Dr. Olysha Magruder from John Hopkins Whiting School of Engineering.
[00:01:22] Olysha Magruder: Hello, Olysha here.
[00:01:25] Jason Johnston: Did I say all that right?
[00:01:27] Olysha Magruder: There is one funny thing about the name of my university, which is that it's named after somebody who has a weird first name, and it's Johns. That was his name. It's very common to say John because it feels weird to say Johns, and in fact, when I originally applied for my position, in my cover letter I also said John.
[00:01:47] Jason Johnston: Oh boy.
[00:01:48] Olysha Magruder: I learned very quickly that, oops, it's a weird name, Johns Hopkins, but everything else, yes.
[00:01:53] Jason Johnston: I'm glad you, you made it through that first that first test and they were kind to you, somebody not too long ago spelled Tennessee wrong on a cover letter, hard one to look over, easy to do though, easy to do, but also a little hard to look over sometimes.
Yeah nice to have you here. So Johns Hopkins Whiting School of Engineering.
[00:02:14] Olysha Magruder: Yes, that's right. Whiting School of Engineering.
[00:02:17] Jason Johnston: Yeah. And tell us about what you do there.
[00:02:21] Olysha Magruder: So I am the interim assistant dean of learning design and innovation, which is a somewhat new position, not on the team, but for me. I lead the learning design team of instructional designers and course support specialists, and we work collaboratively with our multimedia and instructional technology team to create online courses.
So we have our main program that we, I guess we could say, service: the Engineering for Professionals program, and there are 22 online master's degrees that we help support. So we run hundreds of courses at any given time. Right now we have about 130-plus courses in development with our instructional designers, and yeah, that's what we do.
[00:03:09] Jason Johnston: Yeah. That's exciting. You've got a lot going on there, though, with a hundred-plus...
[00:03:15] Olysha Magruder: Yeah,
[00:03:15] Jason Johnston: Does it feel like a lot?
[00:03:18] Olysha Magruder: it does, but we've structured our team so that it's very collaborative amongst our instructional designers, faculty, the multimedia folks, the course support specialists. So we have what we call pods of team members, and they work together for certain programs. That way they can discuss the program-specific needs, and because of that, we have a very nice workflow that we've created.
So it is a lot, but it's manageable, so far. I haven't had anybody tell me that they're ready to put their hands up in frustration. That happens occasionally; we get through it.
[00:03:53] Jason Johnston: Yeah.
[00:03:55] John Nash: Olysha, does every program that you just mentioned receive the benefits of your services, or do some still go on their own?
[00:04:02] Olysha Magruder: They all go through us to create their courses. There are a couple of programs that have a slightly different approach that we accommodate, but eventually they end up with us. So everything that goes online through those master's programs, through that one Engineering for Professionals program, all the courses go through us at some point. And we have the course support specialists.
We also have a quality assurance manager. They make sure everything is accessible, and we have all these quality checks throughout the process. That's by design. So all of those courses are vetted, reviewed, et cetera.
[00:04:41] John Nash: That's wonderful. I know at my institution, there are online programs that do the best they can, and we have folks that can help us, but it's not as systematic as that.
[00:04:52] Olysha Magruder: Yeah, we are somewhat unique, and part of that is because for these big online programs, most of the faculty are, by design, people in the field. So they're super busy people and they're incredible people. It's pretty cool working with people who work at NASA, or worked on the asteroid deflection mission, or are doing biomedical engineering things that you see on LinkedIn.
And I don't even know half the time what they're working on, but then I see these posts and I'm like, whoa, they're revolutionizing the medical field. So they're super busy people, and we've designed this to work with that particular type of person. It's a committed process, and it's a longer-term process, but it works, mostly.
[00:05:39] John Nash: Nice.
[00:05:40] Jason Johnston: If we ever wondered whether or not online learning was important, we now know that it is, because we're training up the next people that will actually stop that asteroid from hitting the earth, right? The current people that know how to save us aren't going to be alive when the asteroids hit. So online learning is essential for training up the next generation.
[00:06:02] Olysha Magruder: We actually had one of the mission coordinators for the DART mission speak at a recent event. I don't think she teaches for us, but she's affiliated with one of our units. Anyways, she gave this whole awesome session on the DART mission, which was the asteroid deflection mission.
She reassured everyone that it's very rare that's going to happen, that we'll have an asteroid hit that will destroy the earth or what have you. Some of the statistics she put up there, she seemed reassured by, but I was like, oh my gosh, I don't feel good about these numbers.
I know she does, but still, any number seems a little bit...
[00:06:38] Jason Johnston: right? If it's not zero, it still feels like something that could happen.
[00:06:42] Olysha Magruder: Exactly,
[00:06:43] Jason Johnston: Where's Bruce Willis when you need him, right? Because he's not going to be saving us. That's an Armageddon reference. I don't know if you've seen it. Have you seen that one, John?
[00:06:50] John Nash: I have years ago, 80s or 90s, I think. Yeah, but yeah,
[00:06:55] Olysha Magruder: I'll have to look for it. I don't think I've seen it.
[00:06:58] Jason Johnston: Yeah. It's an action movie where they're the ones that are skilled and determined to, to take care of this asteroid before it hits the earth. So I won't tell you how it ends.
[00:07:09] Olysha Magruder: The good news is, with the real-life DART mission, they managed to hit, not the big one... I guess there are two, a big asteroid and a small asteroid. This is all layman's terms, because...
[00:07:19] Jason Johnston: I don't know.
[00:07:21] Olysha Magruder: but they managed to hit the small asteroid that was orbiting the larger one and they managed to change the time of it by 30 minutes.
So it would, the orbit would take 15 hours and I don't know, I'm just gonna make this up, 50 minutes. But after they hit it, it's now orbiting at 15 hours and 20 minutes. And her point was like, this was just an experiment to see if we could, and we can, and so we can change the sort of, I don't know.
It's wild. You can change space. We can change space.
[00:07:54] John Nash: it is wild. It's as if they're playing a game of galactic billiards.
[00:07:58] Olysha Magruder: Yeah. So all that to say, I get to work with cool people like that. We get to work with cool people like that and have to accommodate their extremely busy professional lives.
[00:08:08] Jason Johnston: And it sounds like your approach, because as John was talking about, there are a lot of centralized units where the approach is a little bit more of a consulting approach: you come to us if you need something, and we're going to help you do a couple of things in Canvas or your LMS or whatever it is that you're in. Versus a, let's walk you through more of a systematic process to really build this out.
We're going to hold your hand through this. We're going to try to help this happen in a reasonable timeframe and to a good quality and that kind of thing. So it sounds like yours is more of that build-out kind of approach, right?
[00:08:46] Olysha Magruder: Yeah. We call it a high-touch approach, versus what you were saying, the consulting approach. I was at an institution prior to this one where it was very much: come to us, we will help you. But we only had three instructional designers for an entire college campus, so it wasn't a very one-on-one, walk-you-through-everything kind of situation.
[00:09:07] Jason Johnston: Your team is also called Learning Design and Innovation. So what does innovation mean for you right now? I'm just curious about what things are on your plate.
[00:09:17] Olysha Magruder: Funny you should mention that. I wrote a dissertation about faculty development and getting faculty on board to adopt some things. And this was a while back, before I was at this institution, but I like to go back to my definition of innovation, which is something that is new to someone. Basically, a pencil could be an innovative tool if one has never used a pencil before, to keep it super simplistic.
So to me, innovation is what is new, even if it's only new to someone. One of the things I'm interested in right now, one of the things I'm working on, is looking at bilingual programming, and whether that's something that we can pursue, so I have some things in the works.
Because that increases the audience of people that could obtain this sort of education. That's one of the things. Obviously, AI is on everyone's mind, and we'd be remiss not to include that in our innovative approaches. So we're looking at... several of my team members are using a tool called Descript.
I don't know if you've heard of that
[00:10:23] Jason Johnston: We use it for our podcasts. So we are a little familiar and will put a link in the show notes for listeners
[00:10:28] Olysha Magruder: cool. Yeah. So they're trying to use that to help with the editing of faculty videos. Because there are lots of ums and ahs and I hope that you will use that to help with this.
[00:10:39] Jason Johnston: Oh yeah, we're there for that. Yeah.
[00:10:41] Olysha Magruder: One thing I'm really trying to help with is moving away from the typical PowerPoint video lecture kind of thing. And so we have team members working with different interactive tools to create material that is, I would say, engaging in a different kind of way. So yeah, those are the specific things that we're working on that I think could be under the umbrella of innovation.
[00:11:07] Jason Johnston: We actually connected at OLC Innovate this last spring, 2023, over some common people that we knew, but also around this Coursera course that you helped develop called Excellence in Online Teaching. Could you tell us a little bit about that course?
[00:11:33] Olysha Magruder: Sure. I'll start by sharing just how it came to be, because it was a cross-divisional effort across the campus of Hopkins. So within Hopkins, and I don't want to get too in the weeds with this, but they offer something called Delta Grants internally. And the Delta Grant acronym is Digital Education and Learning Technology Acceleration.
So it's pretty cool that Hopkins is invested in supporting these kinds of initiatives. And so in 2021, I and a few others put together a grant for this project. We call it the online excellence, multi-pronged approach to prepare faculty for excellence in online teaching and learning. It has three different prongs; I like to think of it as like a spork, perhaps, or maybe a plug, an outlet plug. Anyway, one of them is this MOOC that we created. Another one is an internal certificate of completion for Hopkins folks. And then the third one is a conference or symposium that showcases excellence in online teaching.
So the MOOC was the most straightforward one to work on, so that's what we started with. We have Coursera relationships across the campus. The School of Public Health has a lot of Coursera courses, engineering as well; we've added a few to our portfolio there. So we had a relationship with Coursera already, and we were able to pitch this.
They gave us a good idea of what we could expect in terms of enrollments, and we went forward. And then what we also wanted to do was, like I said, it was cross-divisional. So we had an advisory board or committee created, where we invited people from all over the campus to join us to talk about what topics the MOOC and the Certificate of Completion program should hold.
So we worked with them to come up with the topics that made the most sense. And a big part of this was to actually focus on the faculty who are teaching online in our institution and to showcase them. So we recruited those faculty, we got the topics, and put together the MOOC. One of my instructional designers on the team, Kimberly Barce, was the one who led the project and basically managed to get all these people together and do all these different recordings.
And we do have studio recording spaces on campus, but a lot of folks are remote. For example, I am remote. So we were flexible in terms of how the content was created. But essentially, in the Coursera MOOC, I think it's five modules, and they cover some pretty broad topics. And then within those modules are short videos and lessons created by those faculty.
And then, also the content gathered through that. So that's the structure.
[00:14:40] John Nash: And the intended audience is beyond Hopkins though, isn't that the case? Obviously, I was looking at it today, and it goes pretty broad.
[00:14:49] Olysha Magruder: Yes, the intended audience is anyone who wants to pick up some general practices and ideas about excellent online teaching. And we really tried to steer away from course design, because that's a different ballgame altogether. We were talking more about the facilitation, the actual student interaction piece of online learning.
[00:15:16] John Nash: Can I follow up on that? I think that's really interesting. And that may be for some who listen to this and others in our sort of colleague space as a distinction without a difference. Can you talk for a second about that decision? And what is the difference between the two?
[00:15:30] Olysha Magruder: Sure, so from our perspective, and this is very much from what my team does: when you speak of course design to us, that implies you're laying out the blueprint for the course. You're deciding, you're making these curriculum decisions. I also teach for the School of Education within Hopkins, and I did the same thing with them.
So, what do I want to teach? What are the topics? How am I going to present this content? What are the resources I want to pull in? It's like a flipped house: you're taking the structure and just making those design decisions.
And then the other side of that is the teaching piece. That's when you actually have the students in the house, and they're using the rooms in the house and exploring, and you're there to help them along. So that was just a really random comparison I came up with, but hopefully it makes sense.
[00:16:26] Jason Johnston: I think it makes a lot of sense to me, the building of it versus the using of it. As we define it out when we're talking with our faculty, we talk about design, development, and delivery, with delivery being a complete focus on that teaching aspect.
We do internal professional development, both synchronous and asynchronous, with our faculty, but we just focus internally, selfishly, on our own University of Tennessee faculty. What kind of took you to that decision of using Coursera as a platform? It was being used already there, but it's a public platform. Was there any conversation about the good and bad of this being in more of a public forum, or was there an impetus there someplace to really make it public?
[00:17:17] Olysha Magruder: That's a really good question. I think I could say a couple of things to that. The first is a really pragmatic comment, which is that part of the grant proposal was to have a sustainability plan. And as with anything, you need resources, and resources means money. So we had to be creative: okay, if we're going to offer something campus-wide or externally, like this symposium, which I'll talk about later if you'd like, because that's coming up, how do we actually pay for this stuff? So that's why we were like, the MOOC will help. With Coursera, there's a share in whatever income the course makes.
It's not a whole lot. It's not like we're out here making a bunch of money off of people or something, but it's enough to feed back into the programming. And so we've worked it out with our institution where we get those funds put back into that sort of bucket that will then help us make these other things happen. So that was one piece of it: this could help us sustain this plan. And then the other piece of it was just, we internally know we have all these cool things happening.
Is there a way we can showcase this and get more awareness around online learning at Hopkins? So it's just more of a, let's shine some light on this and celebrate some of our faculty who are doing some cool stuff. So those were the two main reasons for choosing the MOOC.
[00:18:48] Jason Johnston: Have you been happy with using the Coursera platform? I've taken Coursera courses there. I probably have 20 that I started and haven't completed, is my guess, at least. But were you happy working within that structure? I've never made one there.
[00:19:04] Olysha Magruder: Yeah, Coursera has a really good process set up for people developing materials. You basically work with them. So our instructional designer, Kim, worked closely with their processes, and it was pretty easy for her. Now, in terms of the platform itself, I agree. I tend to go into things and I don't always finish them.
But the way I look at this particular MOOC is that, again, we tried to make things intentionally bite-sized. We wanted you just to get a taste of some of these topics. And then if you're curious or interested, we have this symposium that we're setting up to have a more live presence to talk about these topics in more depth.
So I think if you use a MOOC as a sort of a springboard for other things, it can be effective. Or at least for faculty development, that's my thought anyway.
[00:19:59] John Nash: Innovation. One of the examples you talked about was getting past PowerPoint lectures, and I was thinking of the term shovelware. In the Coursera course, in one of the modules, maybe on student engagement, there's a video by your colleague Mike Reeson, and I could see several elements in that video that aligned with our theme for this podcast, which is around humanizing online learning: lots of things around building relationships and community, sharing personal experiences, this idea of explicitly centralizing climate. Some neat ideas in there. It made me wonder about your experiences and how you've navigated the challenges that come with transitioning faculty and instructors to online or hybrid models, both as an instructor yourself and now in your administrative role.
Are there specific strategies or tools or things you've found effective in fostering belonging and active participation in online courses, that help faculty who maybe have not really been there, or don't yet have it in their head that that's what they have to do?
[00:21:05] Olysha Magruder: I'd say that's always an ongoing challenge, to be honest. It's a definite mind shift because, with face-to-face teaching, you walk into a physical space, people are smiling, happy faces, you can gauge their reactions to things. And then you go into this sort of online situation, and it can be nerve-wracking, I think, for people.
How do I know how people are actually thinking or feeling? What we've done is really that instructional designer relationship with the faculty, the high-touch model. We basically push some of these ideas. Maybe I shouldn't have used the word push, but it is a matter of persuasion.
We have a thing we call the course design matrix. Sounds really fancy; it's just basically the design document. And one of the things we ask them to lay out is, what is your plan for interaction? What are your interaction plans? So up front, we're having them think about it, and then throughout the development process, that is a key piece that is constantly reiterated, and we have tools that can help with interaction.
It's required that they have office hours. And I think there's one video in there, I can't remember which one, but another colleague, David Porter, who's amazing, does really interesting office hours. So he talks about that in the MOOC. It's not just, I'm going to show up, and if somebody shows up, cool; if not, I'm just going to be sitting here emailing or what have you.
He actually does an engagement piece where the students love to come to those office hours. And so we try to also share those things out with our faculty, celebrate them, put them at the center, give examples. We have what we call our cohort training program for the design of their courses.
They have to go through this eight-week process to build their document. That's why we believe in that relationship between faculty and instructional designers, because that's the ticket.
[00:23:03] John Nash: Yeah, I can see that, and I get what you're saying about pushing. You're advocates; you advocate for that, and you privilege it a little bit up front to showcase its importance. It's interesting. I think there was a really timely article this week in the Chronicle of Higher Ed.
It was Beth McMurtrie's piece thinking about how hybrid and online learning is just going to get bigger and larger, but there was a neat sort of discussion in there about the legacy from the pandemic, and how these pandemic-induced online learning courses really weren't representative of well-prepared online, asynchronous teaching and learning.
Do you think that lulled instructors into not taking online learning as seriously as they could? Part of me feels like the rush to online learning during the pandemic left maybe a slight negative mark in terms of perceptions and practices surrounding online ed. Do you have any thoughts on that?
[00:24:02] Olysha Magruder: I see where you're coming from, and a part of me also thinks maybe not, because otherwise, when would many of these people have even tried it? Let's just throw out some fake numbers here. I'm just going to make this all up. Please, if you're listening, this is not based on any reality.
Just a theory here. Let's say we had 80% more faculty teaching online, and of those, 10% bought into it and are now committed to quality online learning. I'd say we won there a bit. And even if 70% aren't buying in, they had experience with it, so at least they understand it.
At least they know that there's an LMS, and a thing called Zoom, and a thing called flexibility for students. So I don't know, I'm going to take that as a win, personally.
[00:24:54] John Nash: I like that view. I like that. Yeah. If you listen to our conversations over time, I'm a regular title series, tenure-track associate professor at a Research One, and I have this sort of jaded look at how everything goes.
And I think it is interesting that, in that same piece, McMurtrie talks about how traditional-age students are increasingly interested in online options. It's not just your father's Oldsmobile anymore, sort of thing. And so I also worry about incentive structures inside research institutions like mine and yours, where they may favor research over teaching quality.
And so you kind of touched on it, I mean, for places that have a consultative model for instructional design support, is there a risk that we may still have a hard time getting a lot of instructors to do this work?
[00:25:44] Olysha Magruder: Yeah, probably, but there are things that we can do. I mentioned I was at another institution before Hopkins; it was a state college, mostly teaching faculty. So that's a little different than what you're talking about. But what I started there, and I actually did a similar thing at Hopkins, is creating these faculty development programs in which faculty go through as students.
I think any time that you can get faculty to experience things as a student, you're going to help increase awareness. And so we run that through our school, through the Whiting School, but it's called the Faculty Forward Fellowship. It's been going on for almost five years now, and we've had about a hundred faculty go through it across the campus; we've opened it up to everyone.
And that's one of the main things that we try to get across. It's actually a hybrid program, so we do meet face-to-face if they want to. It's very flexible. But we have four weeks of online modules. We want them to know how it feels, basically. We have a group project. We have weekly office hours. We use the tools and the LMS that we would recommend they use. We show various ways of presenting content. We talk about hot-topic issues. This past one, we did an activity with AI. So we're trying to situate them in the student perspective, and I think that really helps. Of course, they have to have an interest, and if they're not interested, they're not going to be interested. We can't really do anything if they're not signing up for these kinds of things. So that's why I think it's probably always going to be a challenge.
[00:27:16] John Nash: Yeah, but that's lovely, having them walk in the shoes of students so that they can see what's going to happen on the other end. Do you get that kind of feedback when they're done? Do you ever hear about that?
[00:27:27] Olysha Magruder: Yeah, definitely. I think there's a deeper appreciation for what students experience. And interestingly, on the flip side, we always have a deeper appreciation for what faculty experience, because we're constantly dealing with, "Oh, this didn't work," or, "Why didn't you give me a score on this?" We deal with all of those issues that faculty deal with on a regular basis.
So for us, it's humbling as well.
[00:27:55] Jason Johnston: Yeah, it's easy for us to say on a design basis, oh, here, do this online discussion board, right? Without being in that place of actually trying to run one, or run them week after week. We start to get a bit of a, yeah, like you said, John, a walk-a-mile-in-their-shoes kind of experience as we try to get busy faculty to engage in a professional development online course and do the same things that they might be asking students to do.
Yeah, that's good.
[00:28:27] John Nash: Were you also saying that when you put faculty in a course and the roles are reversed, you still find out that they're grade grubbers, just like undergraduates?
[00:28:37] Olysha Magruder: Some, yeah. We actually did a book club, I think it was in the spring of this past year, and it was about ungrading. I don't know if you've read the book.
[00:28:47] John Nash: Love that. Yeah.
[00:28:48] Olysha Magruder: Yeah.
We, in the spirit of that, took out all grading metrics in that program. And so now it's just more, did you finish it or not?
And we give you feedback or what have you. But before that, yes, things were scored, and it was always like, this is just to show you how things can be scored, or show you a rubric. And there was definitely anxiety, and again, that's always insightful from this perspective.
[00:29:17] John Nash: It is. Once a student, always a student, I say. Yeah.
[00:29:20] Olysha Magruder: Yeah. That's true.
[00:29:22] Jason Johnston: And I've felt the same thing. We've tried a variety of different ways to go about professional development, as I'm sure that you have, from fully self-paced to fully synchronous. Now we are experimenting with something shorter, and we're coming up with better language for this, but we're talking about self-paced but within a period of time, right? So it's asynchronous, but with a synchronous element, because we want you to do these assignments this week and then complete it. We'll do a two-week kind of module for faculty and have certain things completed. Do you have a better name for that? Because it's asynchronous, but we've got date constraints on it on either end.
[00:30:08] Olysha Magruder: So we've done this before too, and I feel like we were in the same kind of boat, where we call it self-paced, but there are due dates or deadlines.
[00:30:17] John Nash: There's a term that we'll use inside design sprints called "time boxing." I wonder if it's time boxed.
[00:30:23] Olysha Magruder: Let's write that down.
[00:30:27] Jason Johnston: I like that. If the three of us at three different institutions start calling this middle thing time boxing, then everybody knows: of course, we've got the fully asynchronous, then the synchronous, and then the time boxed, right?
[00:30:41] Olysha Magruder: Timebox course.
[00:30:43] Jason Johnston: Timebox. That's good. But I've found that adding grades in, even if they're meaningless, really, like these aren't going on your faculty transcripts or anything like that.
They get a sense of a little bit of that stress, and about feedback and so on. I've felt the same way about time boxing, now I'm using it as a verb, at least time boxing some of our assignments, because faculty, as adult learners, will then get a sense of what this means and also what it means on the other end: "Oh, I might need an extension on this. How does this work? Can I ask for an extension? What are the expectations? How flexible is my teacher going to be? How does this make me feel? How does this affect my own outcomes?" I think for the most part, faculty appreciate things being time boxed, because it helps them to prioritize, and it actually does help them learn. It's accountability. It helps them do something. They've already decided, I want to learn this, I want to do this, and I need someone to just tell me what to do and when to get it done so that it actually comes to pass. Does that make sense?
[00:31:51] Olysha Magruder: Sure. Yeah. And in that program I talked about, the fellowship program, we have taken off grades, but we still have a deadline-based structure.
And then another thing we have always had is journal coaches, is what we call them. So there's a self-reflection journal throughout the program, because reflection helps you in your practice.
And what we did this past year, which I'm in love with and we're going to continue, is we actually took previous participants, and they are now facilitating many pieces of the program.
[00:32:23] Jason Johnston: Nice.
[00:32:23] Olysha Magruder: So they became the journal coaches, they facilitate the discussions and they even helped with the live sessions that we had.
But that was a fun thing to shift to. Now, we also have some people who are probably like many of our students, where they put things off and it's, "Can I please have more time? I still haven't finished it." It's always funny just to watch the spectrum of participants, but we have all kinds.
[00:32:52] Jason Johnston: Yes, as we do. You had mentioned before, just shifting topics a little bit, the symposium, and I wanted to give you a moment to talk about that. I like the spork analogy, actually, because, you said, it's like a trident, a three-prong, but I actually like the spork, because there may be something else you need to scoop up, right? You've got these three prongs, and then you've still got that part of the spork where, if you need it for something, you've got it there, even if you don't have a little thing of mashed potatoes or something to eat with it.
[00:33:20] John Nash: you can stab it and scoop it.
[00:33:23] Jason Johnston: And do it at the same time. Anyways, the symposium was one of the prongs.
Tell us a little bit about the symposium.
[00:33:32] Olysha Magruder: First I think I need to change the tagline to stab it or scoop it.
[00:33:36] John Nash: That's my only job here. That's all I do, just little quips that might be useful.
[00:33:42] Jason Johnston: Pretty good, John. You've got two now: timebox, and stab it or scoop it.
[00:33:48] Olysha Magruder: Yeah, so it's the JHU Online Excellence Symposium, and at this moment we still have the call for proposals open, but by the time this airs, I think it will be closed. It's on December 12th of 2023, and it will be a virtual event, a half day. We have Flower Darby, who is our keynote speaker; she co-wrote Small Teaching Online, I think is the title.
I don't have it right in front of me, but we love her, we think she's great, so she's going to keynote. There is a small fee to attend. Again, that goes back to our sustainability piece; we need to feed it back in to help with our programming.
I'm seeing people from all over, even from Ireland, and if you want me to tell you a little bit about why that is, I will, but we have people from all over the world submitting proposals. And it's just going to be a half day, 12 to 5 p.m. Eastern time, of learning about excellence in online teaching from people who are doing this on a day-to-day basis.
[00:34:52] Jason Johnston: That sounds great. I'm putting it on my calendar for sure. Yeah, we'll put the link in our show notes as well. Tell us verbally how to find you, though, just in case, because I fear that people don't go to our show notes as much as I feel like they should. Tell us verbally how one might search for you and find the name of this symposium.
[00:35:14] Olysha Magruder: I'm just doing it now to make sure. Yes, if you Google JHU Excellence in Online Teaching Symposium, you'll see there's a website, teaching.jhu.edu, and it has all the information on it. There's more to that web link, but if you get to that teaching.jhu.edu site, you'll find it. And it has all the information about registration and details and so forth.
Yes, we'll definitely link to it. That would be awesome.
[00:35:40] Jason Johnston: Yeah, that's great. And what other kinds of topics are you hoping will come out of this? Did you have different tracks for the symposium, or is it just wide open, whatever you want underneath the umbrella of excellence in teaching?
[00:35:54] Olysha Magruder: Yeah, we wanted to keep it pretty open this year. We had talked about whether we want to have themes or tracks, but since this was our first time trying this out, we wanted to be pretty broad, so that if you feel like you have something to contribute in this area, we want to hear about it.
And we did say in the description, "the session should provoke conversation, spark new thinking, and advance the ongoing pursuit of online education excellence by actively engaging participants." So that was the idea behind these proposals. And we have a little bit of criteria, but since we're still in the planning phase, we have yet to decide which proposals will be on the program. I'm excited by what I'm seeing so far, though.
[00:36:37] Jason Johnston: That's great. Oh, that sounds good. And we're here for all of that. I think the conversations around all these things are so important, the more that we have them as administrators and faculty, and even in this kind of context, which is really the reason why we're doing this podcast. It's just really an excuse for John and me, mostly, to get together to have conversations, but then to be able to talk to cool people like yourself, doing amazing things out there that really line up with the things that get us excited about the future of online education. I think it's looking up, myself.
Do you have a positive outlook for online education in the future?
[00:37:17] Olysha Magruder: I do, because I interviewed for a position a long time ago at another institution, and they asked, what do you think is the most exciting thing on the horizon? And I was like, I don't know, it may not sound very exciting, but blended learning. That's where we're headed.
And this was pre-pandemic, a long time ago. And here we are. I think no matter what happens, it's just becoming a reality, whether some of us do it or don't. Students especially, because even my son's in third grade; he's not taking online classes, but he's doing online stuff all the time.
High school students are often required to take online courses. It's just an expectation that's going to be more and more prevalent, and we've got to stay up on these things. So I have a positive outlook on it too. And by the way, I was going to say, you all should submit a proposal before the end of the day to talk about your podcast.
[00:38:21] John Nash: I just saw that, and I thought, Hey Jason, they're due today, want to try one?
[00:38:27] Jason Johnston: It closes today. Oh.
[00:38:29] Olysha Magruder: I'm sure I could get you an extension.
[00:38:31] John Nash: It's only 300 words, surely.
[00:38:33] Jason Johnston: We could get there.
[00:38:34] John Nash: Claude can write 300 words before we're done with the podcast.
[00:38:39] Jason Johnston: From your short conversation with us, if you were to see a proposal from somebody like John and myself, what would this proposal perhaps be about?
[00:38:50] Olysha Magruder: Just riffing here, but I think it would be cool if you talked about your podcast, but also the trends you're seeing or the things that you and your guests are talking about because it's all related to online teaching. So you could highlight some of your guests or you could even double up and record some of the audience for one of your episodes.
I don't know. Throwing stuff out there.
[00:39:16] John Nash: We could make it live, yeah, we could do it as a podcast, yeah.
[00:39:20] Olysha Magruder: Yeah. Why not kill two birds... Oh wait, that's not the term people like to use these days.
[00:39:23] Jason Johnston: yeah, there's a list of non violent ones like that I can't remember what the equivalent is for that one, though, but I also try to avoid skin a cat, and I think it's I don't know what it is.
[00:39:33] Olysha Magruder: Crack two eggs with one hand? I don't know.
[00:39:35] John Nash: Yes. That's it. We'll go into Descript and we'll overdub some good examples in a minute.
[00:39:42] Olysha Magruder: There you go. Thank you.
[00:39:43] Jason Johnston: Yeah.
[00:39:45] John Nash: That is interesting, though, and I think one thing that we could start to explore more, Jason, thanks to Olysha's good suggestion here, is, we don't really know enough about, I'll just say, rank-and-file instructors. We've been talking to a lot of experts, thought leaders, and thinkers from psychology and from instructional design.
But just, yeah, the everyday instructor who wants to do well in this space. I'd love to know more and think more clearly about their concerns and worries, and what benefits would accrue to them most.
[00:40:18] Olysha Magruder: That'd be cool.
[00:40:19] Jason Johnston: I think it's a great idea, and it would kind of wrap up some of what we've been learning this last year, and let us hear from other people that attend. Okay, you're on. Challenge accepted. We're going to try. We've got a deadline now, John; we've been time boxed. Even though we probably have other priorities today, we're going to rise and meet this challenge.
[00:40:46] Olysha Magruder: See, now you know how I operate. You invite me here thinking that you're going to get something, but I'm in fact getting something from you.
[00:40:54] Jason Johnston: And here I've got more work on my plate. Come on. This is good. This has been a great conversation. Thank you so much for spending this time with us. We'll obviously put in links for the Coursera course and the symposium, but is there a preferred way for people listening to get in touch with you if they want to?
[00:41:19] Olysha Magruder: Sure. LinkedIn, I'm there; I know we've connected via LinkedIn. And then olysha@jhu.edu. O-L-Y-S-H-A. I keep it simple.
[00:41:29] Jason Johnston: Okay. That's great.
[00:41:31] Olysha Magruder: Reach out if anybody wants to talk about anything. I'm pretty open.
[00:41:35] Jason Johnston: Awesome. Thank you so much. This has been great, and we appreciate you and the work you're doing, and you simply taking the time. And for those listening, please check us out at onlinelearningpodcast.com, or you can find the same on LinkedIn. Just search Online Learning Podcast; we've got a group there you can ask to be part of.
And yeah, feel free to connect with us on LinkedIn as well. That tends to be where we're hanging out more than other places, places that we're not completely out of, but we probably won't name, because I'm not really at the other places that much anymore.
[00:42:12] Olysha Magruder: Same. But thank you all so much for having me. It's been a lot of fun talking to you all.
[00:42:16] John Nash: It was a ton of fun. Thank you so much.
[00:42:18] Jason Johnston: Yeah, that's great.
END
Friday Dec 01, 2023
In this episode, John and Jason engage in a discussion with Dr. Michelle Ament about the impact of AI on education, its role in reducing transactional tasks for educators, the significance of human intelligence and soft skills in an AI-driven world, how AI can be leveraged in professional development, and the potential future of AI-integrated, relationship-based classroom environments tailored to individual student needs. See complete notes and transcripts at www.onlinelearningpodcast.com
Join Our LinkedIn Group - Online Learning Podcast
Links and Resources:
Dr. Michelle Ament is the Chief Academic Officer at ProSolve
Michelle Ament on LinkedIn
Jason's AI 4 Language Translation Video
An intro to the Zone of Proximal Development
Transcript
We use a combination of computer-generated transcriptions and human editing. Please check with the recorded file before quoting anything. Please check with us if you have any questions!
False Start
[00:00:00] Michelle Ament: Thank you for having me this morning. I'm so looking forward to this conversation.
[00:00:04] Jason Johnston: Yeah, and we just wanted to get started just to understand a little bit about you and your background, where you've come from. Currently, you're a chief academic officer. And
John Deletes Jason's Notes
[00:00:14] John Nash: Oh, I did that, didn't I?
[00:00:16] Jason Johnston: John just deleted all my notes.
[00:00:19] John Nash: No, I didn't. I moved my notes and put them below yours.
[00:00:25] Jason Johnston: I'll try again.
[00:00:28] John Nash: Podcasting at its best.
[00:00:31] Michelle Ament: This is fun.
Start Intro
[00:00:33] John Nash: I'm John Nash here with Jason Johnston.
[00:00:37] Jason Johnston: Hey, John. Hey, everyone. And this is Online Learning in the Second Half, the Online Learning Podcast.
[00:00:42] John Nash: Yes, we're doing this podcast to let you in on a conversation that we've been having for the last couple of years about online education. Look, Online learning's had its chance to be great, and a lot of it is, but there's still a bit that isn't. And how are we going to get to the next stage, Jason?
[00:00:58] Jason Johnston: And that's a great question. How about we do a podcast and talk about it?
[00:01:03] John Nash: I love that. Perfect. What do you want to talk about today?
[00:01:06] Jason Johnston: I would love to talk about online learning. How does that sound as a theme, overall theme, for our conversation today? But more specifically, I would love to talk with, we've got a guest with us, Dr. Michelle Ament. Welcome, Michelle.
[00:01:21] Michelle Ament: Good morning.
[00:01:23] Jason Johnston: How are you?
[00:01:24] Michelle Ament: I'm doing great. Thank you for having me this morning. I'm so looking forward to this conversation.
[00:01:30] Jason Johnston: We're really looking forward to having you with us. We just wanted to get started by understanding a little bit about you and your background. You're currently the chief academic officer, and you've had a background in personalized learning, technology and learning, curriculum and design, and as a fifth grade teacher. My question for you today, Michelle, is: how did you get to where you are today?
[00:01:51] Michelle Ament: Great question. When I think about how I got to where I am today, it's that I love the design of learning. So when I went into teaching, I was a classroom teacher. I've been in education 25 years. And like you said, I started in fifth grade and was an elementary teacher. And what I loved about teaching first was the daily interaction with kids, of course, but also the design of learning.
So it was all about learning: figuring out what learners needed, what were some of their strengths, what were some of their areas of growth, and then figuring out how to design really engaging learning. And so in the classroom, that's what fueled me every day. It was like a problem to solve. How could I design something that was really relevant, highly engaging?
And authentic for kids. And then I went on to lead in that way and led in several different positions, like you mentioned, with technology and learning, with professional development, and a focus on personalized learning. And at that point, it really became what haven't I done?
And the next step was really the superintendency. And I made a conscious decision at that point to think, is this really where I want to go, or do I want to look outside of the public education sector? I decided to make the move to ProSolve. I found this rapidly growing company that is really focused on learning design, focused on how do we create learning experiences that really are authentic for kids, relevant for kids. And so it fit with my background in design.
[00:03:27] Jason Johnston: And what do you do on a day to day basis?
[00:03:30] Michelle Ament: What I do on a day-to-day basis is a whole lot of things. I don't know if either of you have ever worked for a rapidly growing company. When I started with ProSolve, there were eight team members, and now we have 35, over about 18 months. So that's rapid growth. I've done everything from designing learning to leading customer success to running an amazing shipping department. Oh, there was a skill set I didn't know I had. And I'm really serving on our executive team, helping lead the company, and really having an impact on school districts across the country.
And so I'm driven by impact. And I had an impact in my classroom and an impact in my school district, but now I get to have an impact nationwide on a whole lot of school districts and see learning really changing as a result of experiences that teachers are designing.
[00:04:23] Jason Johnston: Great. And ProSolve, just to understand it, is really an organization that focuses on, is it mostly K-12?
[00:04:33] Michelle Ament: Yes, we are focused K to 12. And what we're really all about is, we believe, and I think this will resonate with both of you, given what I know about your backgrounds, we believe that the education system has been really focused on knowledge dissemination. The educator, the professor, the instructor is just providing information to the learner.
And so what we are really about is shifting that paradigm to more of an experiential-based learning: an opportunity to create an experience that learners can reflect on and apply to their day-to-day world. And we believe school isn't relevant for kids anymore. We've seen a decrease in attendance and enrollment, and now with the rise of artificial intelligence, they have all the knowledge at their fingertips; everything that they need to know they can find.
They don't need the traditional classroom. So we believe we've got to change. We've got to figure out how to move more towards that experiential, application-based learning in our classrooms so that students find it relevant. That's where learning sticks. It becomes sticky when you can apply it to something that's important to you.
[00:05:46] Jason Johnston: Your engagement with school systems, is it on a contract basis? Or do they buy, essentially, curriculum and packages from you to use? How does your engagement work with school systems and schools?
[00:06:02] Michelle Ament: We have several different options. We do have a curriculum that we offer. We are focused on supporting schools with social emotional learning. And one of the things with our social emotional learning curriculum is that it is hands-on. Students have an opportunity to be in a situation where, well, I have to tell you about Quest.
So Quest is this game that is part video game, part board game, part escape room. They are immersed in this experience where all of the adults have vanished from Sarabella Falls, and they have to figure out what has happened to all of the adults. And so they work as teams to solve challenges, collect food and reputation points, and move through this series of episodes in this experience. And all the while, doesn't that sound fun?
Don't you two want to play, right?
[00:06:57] Jason Johnston: Yeah, for sure. And I was just thinking, you're starting with every kid's fantasy, that every adult has disappeared and we can just run this place as we wanted to.
[00:07:08] Michelle Ament: And they've got to figure out why the adults have vanished, which, yes, is every child's dream, but also a pretty dystopian type genre, which is what kids are really interested in. And as they work in teams, they compete against each other and they have to collaborate. They have to make decisions together.
They have to compromise. They have to communicate. They have to persevere. So all of those are real-world skills, right? And they are developing them in an authentic context. That right there is different than any kind of social emotional curriculum in the K-12 market. Typically you are listening to the teacher tell about a scenario, and you're giving them the answers they want to hear, and then nothing really changes, and now our SEL time is done and we're moving on to math time, and then we move on to reading time.
So Quest, that's one of our services that we offer. We have a service learning solution where we help students, and that's where we leverage design thinking, which I'm really excited to talk to John about, design thinking and how we bring that into the classroom. And then we have professional development. How do we make professional development look different?
I'm sure you've all seen that meme where the teachers are falling asleep, falling out of their chairs, with traditional professional development. So we do things differently. We engage teachers in fun experiences that really push their thinking and have them reflect on, okay, how do I do more of this in my classroom?
How do I make learning different and more, I keep saying the word relevant on this podcast. You'll probably go back and count how many times I say relevant and authentic. But it's true: how do we just change what school looks like for today's learners? Those are our solutions: professional development, an SEL curriculum, and a service learning curriculum.
[00:09:07] John Nash: Nice. When we were talking a little bit on LinkedIn, we had a little back and forth thinking about what we might talk about today with you, and you were thinking out loud a little bit about how human-centered design and AI might augment or support all of this.
We see more and more people talking about the power of human intelligence now that AI is in front of us, although we've had this human intelligence in front of us all this time; now all of a sudden we're worried about it because AI is here. You were thinking maybe we could think together out loud, because you said you hadn't fully formed your ideas on this, but you wanted to dissect the possibilities of how using AI makes us better humans.
What are you thinking about that now?
[00:09:52] Michelle Ament: I think you're right on a couple of points you bring up. We have had human-centered learning forever, right? And we valued the soft skills and the four C's in our classrooms. But now that we see that more technical jobs, repetitive tasks I should say, will be taken over by AI, our workforce is going to need to leverage those human intelligence skills even more than they ever have before. And so when I think about AI in our classrooms, I'm less focused on helping teachers use AI or teach students how to use AI. I'm more focused on helping them think about how to build human intelligence.
I think it's going to be crucial for our workforce to have really strong employees coming out of college and out of high school having those soft skills: being able to be on a team, being able to think critically, being able to collaborate. And I've heard you both talk a lot about ways that you've used ChatGPT and had to really be able to analyze what it was saying, making sure that there wasn't a hallucination present, and that's that critical thinking.
And so those are some of my thoughts about AI in education and why the importance of human intelligence. Before I talk about design thinking, I have a question. When I say human intelligence to you two, what do you think of? What comes to mind with that term? Because we've been tossing that around.
Is that the word we want to use to describe this?
[00:11:30] John Nash: That's a great question. I think for me, I'll continue to promote the Brandeis Marshall piece she did on Medium about what's un-AI-able, which is context, critical thinking, and conflict resolution. These are definite human intelligence skills or properties that AI can't really do well. It can only fake it.
And then the other thing I heard her say is that AI doesn't know when to shut up. It will just continue to talk and talk as long as you let it. And so I think that's another thing that comes with the context bit: humans are here to be thoughtful about when to speak, when not to speak, and what makes the most appropriate response at any given time to advance resolution towards a challenge or a problem.
[00:12:14] Michelle Ament: Yeah, I hear you talking about self awareness there. It's not really self aware.
[00:12:19] John Nash: Well said. Yes. Yes, I think so. Yeah.
[00:12:22] Michelle Ament: I love that, AI-proof skills. So human intelligence resonates with you then?
[00:12:29] John Nash: Yeah, I think we've both been saying it. And you bring up an interesting point: as generative AI becomes more, I was going to say sophisticated, it's already sophisticated, but as generative AI becomes more interwoven into our work lives and our personal lives in more transparent ways, you're right that the work skills that we want people to have are going to be more on the human intelligence side. And so people will not necessarily be brought in to do tasks that AI can handle now. They'll be expected to step up, as it were, a little bit in terms of having the ability to do these other things that we think are more important.
[00:13:12] Michelle Ament: Definitely. And there are some really alarming stats out there that employers, when they go to hire, aren't finding employees that have those skills. And so while our K-12 system is really focused on college and career ready, I feel like we've been more focused on the college ready: prepping our students to go to higher ed, a college or university, and be successful academically.
I have a 22-year-old who just graduated from university, and he is brilliant. He's a double major, a triple major actually, in computer science, mathematics, and data science. And he is struggling to get out there in the workforce and have those soft skills that are needed even in an industry like that.
And so I'm curious about you. I don't know if you want to talk about the kids that you interact with when they come right out of high school, but I'm just not sure that we're preparing them to be college or career ready using those soft skills that we're talking about.
[00:14:19] Jason Johnston: Can I add one thing on that human intelligence? You mentioned critical thinking. Is that a hard skill? It's a soft skill, right? And I like this idea of human intelligence, and it reminded me of another conversation I was having with a very intelligent person, my wife, the other day.
We were talking about AI in general. And she said, figuring out what is real will be the next big work for our kids and their generation. And I think that is both a hard skill and a soft skill, right? Am I right on this? This is part of human intelligence: being able to take that higher-order thinking of evaluating what is true.
We're looking at some of these images, and I just produced a video that I showed her where I spoke in four different languages; I don't know any other languages. And evaluation and critical thinking is a soft skill because it's not focused on, you can do this task and this task.
It's more focused on an overarching human intelligence that helps you manage and move forward any task. Am I on the right track here?
[00:15:34] Michelle Ament: I think so. Absolutely. I think about the five-paragraph essay: educators right now are really concerned about kids using AI to write a typical five-paragraph essay or any kind of writing. And I think that we're thinking about it a little bit incorrectly. I think we need to be, not trying to remove students, or, I should say, let me start over.
I think instead of trying to figure out how to prevent students from using generative AI with their writing, it's about tasking them to think critically about their writing. Maybe they all write something. Maybe they write two or three versions of something, and then with a classmate, in a collaborative setting, they have to analyze which one feels more accurate, which one was written by the student versus by generative AI.
This is a very simple idea that I'm bringing forward, but I think the idea here is to think differently about how we design learning: in the face of our students having these tools, how does it look different and feel different? Instead of wanting to do the same thing that we've always done and put restrictions in place or prevent kids from using these tools.
I just think it's about critical thinking, as you mentioned, and how we design with that being the outcome, in how they analyze or critically think about their writing versus producing a level of writing.
[00:17:10] John Nash: Yeah. Can I ask you a question about your work to design experiential curriculum or experiential work with students? How do you gather insights from students and teachers and others to understand their needs and perspectives to build those experiences? And then maybe we can talk after that. Actually, I'd just like to stop there with the human intelligence part, but also how maybe AI could help augment that design research process. I come to you with this question with our overarching mission here, which is to humanize online learning. When you bring your full suite of thoughts and tools and mindsets to bear, what do you do to gather those insights to make great experiential curriculum?
[00:17:57] Michelle Ament: That's, yeah, that's a great question. I think it really is about getting to know your learners: asking some overarching questions, some essential questions that help them see what's important to them in the process; surveys; introductions; spending time both in an online environment and in the classroom, really building that community and understanding who they are, what's important to them, what drives them, what curiosities they have.
And I think it's less about me as the designer saying, okay, you're really curious about this idea, here's the task for you; and you're really curious about this idea, here's the task for you. Instead, in the learning design, having it be more overarching, having a universal question that we are aiming for.
And then we're having them connect their interests to that overarching question. And so then, I think, each learner can have a path towards learning and understanding. And then the assessment is where you start to have a similar rubric; the things that you're looking for in the assessment design are how you start to find the parallels between what every learner is researching or learning about.
And then I think even with assessment, there is an opportunity to have some differentiation in how learners show you what they know. So does that resonate with you in terms of how you build your online communities and start to think about how to personalize learning? And then, how does AI fit into that? I don't know. It would be a powerful way to have them even begin to use the tool to make connections to that overarching question or that principle.
[00:19:54] John Nash: Yeah, quite possibly. We meet a lot of our peers and other instructors in the P-12 space who feel like they're horses that have been brought to water and don't want to drink, in terms of being told, you must teach online. Certainly the pandemic put a lot of people in a place to do online learning without really feeling like they had the tools or the capabilities or predispositions for it.
And I love what you had to say there about that design. That's the mission. When you approach new instructors who want to do well in this space, what are the kinds of practical advice that you give them, or might give them, to say: everything's going to be okay, you're going to do fine. I've suggested that you get to know your learners; now here are three things you could do that are low-threshold, high-impact things that might get you going, to give you the confidence to do well.
[00:20:48] Michelle Ament: I think first it would be giving those learners a suite of different tools that they might use. I think of Flipgrid as a video-type tool. Maybe it's as simple as making a slide deck where you show some images or some videos about three overarching questions.
Who are you? What's important to you? Where do you want to go with your life? I'm just really spitballing right now, but it could be something like that. And I earned my doctorate in education from Capella, completely online. And that was a fascinating experience, to be in a really cerebral space and still trying to develop community.
And I think we could have spent more time in that space building community. And then, as we continued to go through the process, we'd have relationships, and relationships drive learning. And so we could have supported each other even more.
So I think it's simple video introductions. It could be a slide deck introduction. I think it's moving past the narrative; when we see each other is when it's most impactful. It could even be pulling people together in community forums like this and just having time and space to share a little bit about ourselves, share things we're passionate about, questions we have, and just building those connections with one another.
Because we're all relational, and what drives human capacity is that relationship, having empathy for one another. And so creating situations where that empathy comes out, I think, can go a long way in learning.
[00:22:28] John Nash: Really nice. I think that's a great point, that relationships drive learning, because I've fallen into this trap, and I think my colleagues do too: I have so much to cover, I don't have time to do community building. And I've found that when I've taken the time to do it, it's paid off in a big way on the other side, where students become peer support for each other, where we have a mission together, not this divide between teacher and students, but we're trying to get somewhere together.
I think then the learning becomes so much more powerful and easier to also carry out.
[00:23:03] Michelle Ament: Right, it becomes personal to everyone in the group or in the class. And I think once you start to do that, you can deepen learning so much more quickly, because it becomes personal; you become invested in both your own learning and your peers' and colleagues' learning as well.
[00:23:22] Jason Johnston: I've got a question. So a couple weeks ago, John and I talked with Dr. Kristen DiCerbo, who's the chief learning officer at Khan Academy. They're conceiving of this AI chatbot that would be there for students, first in K-12 and then in higher ed, and how Khanmigo would always be there for that student to help them along anytime they have trouble, to move them forward when teachers can't. Okay. So thinking about that, thinking about how we're trying to build human intelligence, and then this idea, which I absolutely agree with, that it's relationships that drive learning.
What is it about relationships that you think really drives learning? You touched on a couple things, but what is it about relationships that you think drives learning within an online context that an AI bot would not do?
[00:24:20] Michelle Ament: When you were asking the question, one of the things I was thinking a lot about first, and then maybe I'll get to the online piece, is that I went into education, I told you both this, I went into teaching because I love kids. There is an art of teaching, an art of being an educator: being relational. I have a deep sense of empathy for others, and I think that is part of why I was able to personalize learning and really understand the strengths and the areas of growth and what matters to that individual. I call that the art of teaching. And I think we have seen our school system be so much more focused on the science of teaching.
I believe strongly in standards alignment and I believe strongly in assessing what students should know and be able to do, but I feel like along the way our educators have lost that ability to show up with what they're really good at and that is the art of teaching. Understanding their students, having that sense of empathy.
I think if you asked most educators, they would say they went into teaching because they love learning, they love students. And so I think that is an area that we need to really focus on, bringing back that art of teaching. And I'd be curious, in your online world, if you would agree that there has been a little bit of that loss of the art of teaching in higher ed and more of a focus on the science of teaching. And I'm seeing you nod. Would you agree?
[00:25:52] John Nash: if I may,
[00:25:53] Jason Johnston: Yes.
[00:25:54] John Nash: No, if I may, perhaps I might even call it the art of teaching versus the transaction of teaching. I would love more science of teaching, actually, in light of all the transactional teaching I see going on, particularly in the online space. And vendors don't always help in that way either.
They've sold us packages that make us think that teaching is a transactional act. I've got more on that, but I wanted to just add that. I think
[00:26:23] Michelle Ament: I do really like that, because I think even in the pre-K-12 system, it does become more transactional. And as you mentioned, we just bring students to what we need them to know, and then we move on to the next thing, and the next thing, and the next thing, because we've got to cover everything and check off all the things.
And I actually, to be honest, haven't looked at Khan Academy's new tool. I've heard amazing things about it, and it feels, all of a sudden in this conversation, like it could help solve some of that transaction: be that place where learners could go for information, so then learners come to the instructor to really be in that relationship, be able to share ideas, get feedback from the instructor, and be in that relational setting.
And what I think drives why learning is relational is that it becomes a part of who we are. The times when I have internalized learning are because it mattered to me and it impacted me in a, here we go, relevant and authentic way. It was something that was important to me. And so I can think about times where just being in conversation with someone has deepened that learning far more than reading an article or watching or reading a text on it. Now, that helps me build my background, so then I can be in a conversation with someone. But there's that piece of being able to push each other's thinking and be in that cognitive dissonance.
And that doesn't come just because two humans are talking. It comes because of trust: because I believe you have good intentions in helping me by pushing my thinking or disagreeing with something I'm saying. And so I think as online educators, if you had the time and space to really build that trust, to be in those places, to talk about ideas, to push thinking, that could go a long way. And with a more technical tool to support that transactional teaching, the possibilities are endless when I think about that.
[00:28:27] Jason Johnston: I love that idea of trust. I think that could be one of our distinctions, a distinctive factor. I think there are a lot of things that these AI tutor bots can do. But Khan Academy intentionally does not want these tutor bots to be a student's best friend, or for students to have a real relationship with these tutor bots.
They intentionally move them over into this helpful, empathetic AI, but without that attachment; they don't think it'd be healthy for students to form attachments to these tutor bots, right? And I think attachments come through trust. I also think trust becomes a bit of a tether for students, especially those that are struggling, to help pull them forward in this journey of education. What do you think about that idea?
[00:29:19] Michelle Ament: Yes, I think having those AI bots take the load of some of the instructor's work, the transactional work, lets them really foster that trust. And again, there's intentionality there. Because trust is a two-way street, you have to be thinking about how you get your learner to trust you, but also how you trust the learner too.
And I think with teachers right now, there's this sentiment out there around AI that doesn't promote trust: it's about cheating, about using it in inappropriate ways. So I think we're onto something. We've got to be thinking about what the opportunity is here, so that we can, as educators, build more relationships, have more trust, bring back that art of teaching, and not be adversarial to what AI is bringing into our classrooms.
And I'm using very broad brushstroke language here, because I know there are a lot of educators, a lot of school systems, that are embracing the use of the technology. But it's so new; we're just at the beginning of how this is going to change teaching and learning. And I think there's opportunity to focus the conversation on the opportunities it will give educators to build human intelligence, to build relationships, to do what they've gone into education for, which is supporting the learning of their students in a relational way.
[00:30:53] John Nash: I love the focus on trying to help improve professional development. We run a lot of dissertations through our department where students are looking at how to improve that, because it seems like it's a perennial problem that will never go away.
When you think about stakeholder buy-in and how key that is for any education initiative, how do you think AI might help model and predict the impact of a professional development initiative and build support for a change in a school?
[00:31:32] Michelle Ament: I haven't thought about that yet, so this is fresh thinking. I think anytime we can build some background, that is a good thing. I'm also thinking about those leading initiatives in a school system: how do they really understand change management, and how could an AI tool help? It would be fascinating to put into your ChatGPT: what is the initiative, and what is the problem you're solving? And then what steps would it tell you that you maybe need to consider? What pitfalls might it help you anticipate? I would be really curious what it would tell you, because that then could help you in the planning, the design. I think my experience with professional development and change is that school districts dive right into the next initiative.
We're going to plan this great PD day, and they're not really thinking about the long game. What do the next three to five years look like? Change doesn't happen overnight, and I think that's part of the pitfall there. We will do a couple of professional development sessions, and then we're like, geez, nothing's changing.
They're not implementing it with fidelity; things aren't really happening. It must be that. Let's go to this solution now, let's move to this initiative. And I think implementation science tells us that it takes three to five years for change to happen, and there have to be really intentional change management steps.
And so for me, I wonder how the AI tool could help you in predicting what pitfalls you might have and what change management principles you might put in place. I also wonder, just in the design, how it could prompt and give you ideas to begin to work from. I know I've used the AI tool in that way to get me started. When we work from a blank sheet of paper, sometimes that can be challenging; using a tool to help us brainstorm and get ideas might help, especially if you prompt it with some parameters, say, I want an immersive experience, I want something that is innovative. It might at least get you and your team started with the design.
So those are my thoughts. What do you think? Are any of those resonating with you?
[00:33:53] John Nash: Yeah, those are resonating with me. And I could see the tool doing some of the things you mentioned at the start of that, which were ideas around giving designers, and it could even be instructors in online programs, or designers of PD programs, the questions they should be asking. I think the tool is very useful for brainstorming questions if you give it the parameters around the kind of audience and some of the perceived risks that you see with what might happen with the initiative. You can get a good set of questions to ask your stakeholders. But prior to doing anything, I think it's also good for, you alluded to it, but it sounded almost like doing a pre-mortem.
It's here's what we're gonna do and here's where we want to be. Now, what could go wrong here? And that could be fed back into a loop of asking questions and even doing scenario thinking with people before anything even starts and how likely do you think this is to happen? And how much do you think we have the bandwidth or the initiative to carry out these things?
That can even tell you more about where the tailoring of the PD should be and things like that.
[00:34:59] Jason Johnston: Yeah. And I think when you're talking about change management in the school as well, that predicting, as you said, John and Michelle, where some of those pain points are going to be could help us. It could help us think about some possible negative scenarios, when we're so often, I think, optimistic about our change initiatives, right?
We're optimistic about, oh yeah, this is the perfect plan. It's just going to move forward and so on. And maybe AI would help break through a little bit of that optimism, in a good way, to help us be predictive, as you said, and be thinking about that. That's great.
[00:35:36] Michelle Ament: John, I think we started to go down this path, and I'd be curious to talk a little bit about how you think AI would fit with human-centered design thinking, because we don't want to lose the human-centered nature of it.
That is what makes design thinking marvelous. But have you thought at all about where, in leading people through design thinking, you might leverage the AI tools?
[00:36:06] John Nash: I have been thinking about this. There are a couple of places where I see it being pretty useful, and there are several places where I advise not using it at all. Let's start with where you wouldn't want to use it; I almost alluded to it in the comments before about professional development.
It's alluring for new users of the large language models to think that they can interview it as a user, and you shouldn't do that, in my book. I don't believe that should ever be done, because AI is not human.
It's gonna give a very authentic-sounding response that seems like it's you talking to the people that you want to serve, and it's not. And so I think that's the place where you really shouldn't use it. I don't believe it should be used for initial brainstorming on challenges either. I think that's best left to humans.
I think that in a true, sort of, air quotes here (we're on audio, but I have my fingers in the air) design thinking process, you might do some empathetic need finding, and then you define the problem, and then you brainstorm on that problem and select some things from that. It's the synthesis of all that data from all those interviews and the observations you do that I think should be done by humans, but then can be augmented by AI.
So then let's talk about where AI could be useful in design thinking. I think it can be useful in making sense of a whole series of interviews, looking at some themes and surfacing them after the human design team has really thought through what they think, so we can affirm those.
I think in a brainstorm, it's really good for extending the brainstorm. So after a group has sat down and really thought through all the ideas to solve a challenge, then I think you could feed those into the model and get some extensions. I think it serves a pretty interesting role in potential prototype ideas. And I'm in favor of people having really wild ideas to get to the tame, useful ideas. So you can prompt the model to say, here are some things that we're thinking about doing, here's the solution, how could this show itself? I think it's also good for storyboarding. We do a lot of storyboarding for prototyping, and we use Pixar's story spine technique to tell a story of a user going through a problem and having a challenge.
And then the solution that's been developed by the students comes into play, and then their life becomes better. And I was just working with a colleague last week in their class on using the AI model to generate these quick stories of scenarios that might happen when people are experiencing a solution created through design thinking.
So that was a bit of a list, but those are my thoughts at this moment.
[00:38:51] Michelle Ament: Something I was thinking about was, during the empathize stage, definitely not using AI, because yes, it would feel like it's empathizing with users, but it isn't the users you're serving. It isn't the users you're designing for. And I don't even like to say that word, user, because it feels impersonal. I've used design thinking a lot in my classroom and in the school district and now with ProSolve, and it is about listening to those you're serving and hearing what their needs are.
And AI would make it seem like you were, but you're not even talking to the people that you are designing for. So I think that is for sure. I really liked where you were going with the prototyping, because, again, starting with the human and then augmenting: how do we take this and make our prototype even better?
So here's all the things we've thought of. What haven't we thought of? What are ways that we might improve this? That could go a long way. Sometimes it's hard for people to come up with empathy questions at the initial stage. So my mind also went to: we're interviewing students, and we want to understand their school experience. What questions might we ask? It might generate five really great questions, and then you could even prompt it a second time: we really want to understand how they're feeling. If all the questions come back very basic, or maybe more technical, then you could prompt it and get some of those deeper questions about getting a sense of how people are feeling about something, for example.
So those are things that come to mind there.
[00:40:32] John Nash: I work with a lot of, I'll call them novice designers. Sometimes they're teachers in our graduate class, sometimes they're undergraduates, but they've not gone through a creative thinking process to creatively solve an ambiguous challenge. And so when they get into the brainstorming stage, or even at the prototyping stage where they're thinking about how they might manifest their solutions, they'll run out of steam, and we like them to live with the problem and think through the problem.
But I'm seeing AI as a way to help them reinvigorate their thinking after they run out of steam, because they're not accustomed to their brains thinking in these sorts of ways. I've also thought about it as an adaptive tutor. And I stole an idea from Ethan Mollick, who was talking about deliberate practice and using ChatGPT as a tutor.
And I think it can be helpful in teaching students how to use open-ended questions for empathetic interviews and in giving feedback on the quality of their follow-up questions in a mock interview, to get them accustomed to doing that. Without that, we usually will just have students do it: we'll teach them the empathetic interview process, we'll give them a protocol, and they'll go out and do it.
But their first interviews really are just practice, and they're not as strong as they could be. And I haven't tested this empirically, but I'm wondering, if we did the practice first, whether those first interviews might be stronger and they might get better data, if they had an opportunity to practice first with AI.
[00:41:58] Michelle Ament: I like that. I like where you're going with that, because there are two things. I do know that fatigue: when leading design thinking, I'll say, okay, you've gotten all your ideas, now do one more. And they look at me like, I don't have one more. So that might be a way to just spark some of that creativity again, and to save time. Design thinking can be something that goes over the course of weeks, or several class periods, for example, and sometimes we don't have that. And so I like that idea of practicing the interview with the chat as a way to build some capacity before they go in front of the people they're designing for.
[00:42:39] Jason Johnston: And one thing you said there, Michelle, reminded me of something you've said before, John, which is using it to try to get through roadblocks. So you maybe don't use it before you hit that roadblock. I've got four ideas and I can't come up with any more; maybe chat can help generate a few more ideas, if you've really worked through that. And that's good.
As we're wrapping things up here, we're wondering what you think about a future with AI. Our guess is that this is not going away. Online learning is not going away. We're gonna see more online learning, we're gonna see more AI. It's probably going to be more working side by side with AI, or AI even taking the place of some of the things that we traditionally do as teachers, as educators. What do you think the world is going to look like 10 years from now? What are some of the things that excite you about that world? What are the things that perhaps concern you about that world?
[00:43:39] Michelle Ament: I think there are a lot of things that excite me, probably more than concern me. I think that our AI tools can become really great teacher assistants. I don't believe it's going to replace any educators, or the system at large. I think: how do we start to try different things now as educators?
Maybe we're not even using them in the classroom. Maybe we're just personally trying things out, seeing what we learn. Because I agree with you, it's not going anywhere, and I believe our classrooms will become places where AI becomes an assistant, where we can create efficiencies, but also where we can use it for learning design, to be able to say, here's what I want to teach, here are a couple of different things my learners are interested in, what are different ways to design the learning so that it is relevant?
And I just go back to what we've already talked about. It's going to free up a teacher to go back to why we all went into education: to have that art of teaching, to build those relationships, building that human intelligence in the classroom. So that's where I think the focus is. It really is opportunistic.
I think if we approach it that way, with a really positive mindset, and also be aware of the cautions, the things we have to think about. A huge thing that I know our school systems have to think about is data privacy and the amount of information that could potentially be harmful for children.
I'm not naive in thinking that there aren't any cautions or things we need to be thinking about, but I'm trying to stay with more of that glass-half-full mentality: what are the opportunities, and how do we envision a classroom where what students are learning is really tailored to their needs?
It's exactly at their zone of proximal development. It's very relevant to them and it has an opportunity to be applied to their day to day life and the real world and through, and teachers having the opportunity to build those relationships and foster that human intelligence. What do you two envision your classrooms to look like in 10 years?
[00:46:01] John Nash: I'm so glad you took the question, because I tend to avoid answering that. My answer lately is, after what we've seen in the last nine, ten months, I can't tell you what the next six months will look like. But my hope is that it's something that feels a little bit seamless, but still, to Jason's point earlier about his conversation with his wife, that we're still able to really know what's real, what's artificial, and what's human. But it doesn't feel, I don't know, what's the word I'm looking for?
Right now we have to decide to go and have a chat, right? It's not going to be just text based, it's not going to be image based. Somehow it's all going to be integrated in some seamless way, and I think that will be interesting to see how that plays out.
[00:46:44] Jason Johnston: My answer is, how about we do a podcast and talk about it? Which is really kind of a veiled, like, yeah, I think there's so much more in the conversation to talk about. But I think those are great answers. We talked about how this is not a video podcast, which it won't be, but I kind of wish it was, in part because I try not to interrupt, but I'm nodding a lot. So people can picture me nodding a lot when Michelle and John are talking, because there are so many good ideas here. This is a great conversation. We're going to put links in our show notes for ProSolve so people can get in contact with Michelle and her team if they wish to do that.
We should have a transcript there and any other resources that we can connect you with. As well, please join us on our LinkedIn group. Our podcast is at onlinelearningpodcast.com; if you look up our LinkedIn group on LinkedIn, you should be able to find it, and feel free to send messages to John or myself, or probably Michelle.
Do you take messages on LinkedIn every once in a while, Michelle?
[00:47:51] Michelle Ament: Absolutely. I love talking with people on LinkedIn. There are so many great ideas. And it goes back to a theme of this entire conversation: learning is relational. And so if I can build a relationship with people, that's where the heart of learning happens. So certainly reach out to me and let's have some conversations.
[00:48:11] Jason Johnston: That's right, that's good. Again, lots of head nods.
[00:48:14] John Nash: Yes, lots of head nods.
[00:48:16] Jason Johnston: This is why we're doing it, and we learned lots from you today, Michelle. Thank you. And as always, learning lots from you, John. So thank you for this great conversation.
[00:48:24] John Nash: Thank you both.
[00:48:25] Michelle Ament: Thank you.
Tuesday Oct 24, 2023
In this episode, John and Jason talk with Dr. Brandeis Marshall about making online assignments Un-AIable, understanding data science, concerns & opportunities of using AI in the classroom, and the new digital AI divide. See complete notes and transcripts at www.onlinelearningpodcast.com
Join Our LinkedIn Group - Online Learning Podcast
Dr. Brandeis Marshall Links and Resources:
Dr. Brandeis Marshall’s Website and LinkedIn
What’s Un-AIable by Dr. Brandeis Marshall on Medium (Paywall)
Book by Dr. Marshall - Data Conscience: Algorithmic Siege on our Humanity
WaPo article on Harriet Tubman and Khan Academy and Dr. Marshall’s article how not to use AI
Rebel Tech Newsletter
Other Reading / Resources:
These Women Tried to Warn Us About AI
On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?
Dr. Brandeis Marshall Bio:
Brandeis Marshall is Founder and CEO of DataedX Group, a data ethics learning and development agency for educators, scholars and practitioners to counteract automated oppression efforts with culturally-responsive instruction and strategies. Trained as a computer scientist and a former college professor, Brandeis teaches, speaks and writes about the racial, gender, socioeconomic and socio-technical impact of data operations on technology and society. She wrote Data Conscience: Algorithmic Siege on our Humanity (Wiley, 2022) as a counter-argument reference for tech's "move fast and break things" philosophy. She pinpoints, guides and recommends paths to moving slower and building more responsible human-centered AI approaches.
Transcript
We use a combination of computer-generated transcriptions and human editing. Please check with the recorded file before quoting anything. Please check with us if you have any questions!
Intro
[00:00:00] Jason: Some banter on the front end.
[00:00:02] Brandeis: Oh, I'm great at banter.
[00:00:03] Jason: Oh, good.
[00:00:04] Brandeis: I've been teaching for 23 years, so you have to have that conversation with the students before classes begin.
[00:00:13] Jason: If you like banter, then you've come to the right place, because this podcast is mostly banter.
[00:00:18] John: I'm John Nash here with Jason Johnston.
[00:00:21] Jason: Hey, John. Hey, everyone. And this is Online Learning in the Second Half, the online learning podcast.
[00:00:27] John: Yeah, and we are doing this podcast to let you all in on a conversation we've been having for the last two and a half years about online education. Look, online learning's had its chance to be great, and some of it is, but a lot of it still isn't. So how are we going to get to the next stage, Jason?
[00:00:43] Jason: That is a great question. How about we do a podcast and talk about it?
[00:00:47] John: I love that idea. What do you want to talk about today?
[00:00:50] Jason: Um, to talk with you, like usual.
[00:00:53] John: That's overrated, but that's...
[00:00:54] Jason: But I would also love to talk to you. We have a very special guest with us, Dr. Brandeis Marshall. Welcome.
[00:01:01] Brandeis: Thank you both for having me.
[00:01:03] Jason: And is it okay if we call you Brandeis?
[00:01:05] Brandeis: Yes, feel free to
[00:01:06] Jason: Okay. Thank you. It's so great to have you here. And Brandeis, I'd love for you to introduce yourself, but in general: she's the Founder and CEO of DataedX Group, a data ethics learning and development agency for educators, scholars, and practitioners to counteract automated oppression efforts with culturally responsive instruction and strategies. Not only that, but she has a background in education, and we'd love to talk to you a little bit about that. What would you like to say about yourself here today?
[00:01:40] Brandeis: Listen. I am an educator, a data person. Like I think everyone is at this point, in this age of AI and whatnot, and what it is and what it isn't. And I also lead Black Women in Data, which is really focused on increasing the number of Black women in the data industry.
So that's all I want to say about myself. I have books and I write things and I talk to people, but those are the main things about me.
[00:02:07] Jason: You're humble. She writes books. She talks about some things. She has excellent posts. She continues to be an educator for us, in the ways we have connected with some of her writings that we'll talk about. But yeah, thanks so much for being with us.
Un-AIable Assignments
[00:02:23] John: I'm just going to get it out of the way. I'm gushing a little bit, but I'm very excited to get to talk to you today, Dr. Marshall. So there, I just got it out of the way. Mostly because the number one piece of reading that I've been telling everybody I know, particularly those who are in education circles and worrying about AI, is your Medium piece called "What's Un-AIable."
[00:02:47] Brandeis: Yes. I keep telling people to just calm down, and I'm now seeing commercials that are like, we're going to be using AI basically as an assistant. I'm thinking, I've been saying that since March, but yes. Thank you for sharing the piece, and hopefully people get something good out of it. It seems as though it has been very well received, and people are like, yeah, that's right. AI can't do context. It cannot. AI cannot do conflict resolution. It cannot. What happens? AI will literally get to a place where it has a fork in the road.
And then what does it do? Abort. It just aborts. You can't abort as a human. You gotta decide what you're gonna do. Doing nothing is still a decision.
But AI will be like, it'll just end the program. And you'll be like, what happened? It's like, what is that, Cyber Monday? That happens right after the holidays? At startup, everything's just frozen. That's what it does. It just aborts.
[00:04:00] John: I'm in the business of preparing P-12 school principals and superintendents. So my students are adult teachers who are going to be leaders of schools.
And so that puts me in circles of people who are talking about: what are students going to do in my classrooms now, what are they going to create, and what am I going to do to be able to thwart this? And my response was, perhaps don't try to thwart this, but how might you look at things that students can do that are un-AIable?
And then I share your piece on that. And you cover three big things: AI does not have contextual awareness, it cannot do conflict resolution, and it cannot do critical thinking. And when I mention that, the teachers just lean back a little, their shoulders relax, and they go, yeah, you're right.
It can't. And we can still teach that. And we can ask students to demonstrate that to us. Talk a little bit about what drove you to write that piece and why we should always be thinking carefully about what's un-AIable.
[00:05:05] Brandeis: Yeah. So I wrote the piece because I was having similar conversations because I do teach adults as well. And some of them are instructors, right? Some of them are new instructors. Some of them have been in the education industry for a while at all levels. And I just one day in this conversation just sat back and was like, there are things that this AI cannot do.
And I was in a room with people who were just so enamored with all of the new generative AI tools that had just come out, 'cause this was like April, May, and everything had hit the scene. And I was just like, y'all are excited for no reason. And so I sat down and I just thought about: what can't AI do?
And as with many people who write pieces, you get your best ideas when you're not trying to get the idea. So I think I was in the shower or something, and I started to list these things in my head. And these were the three things that bubbled up. And then I thought, this needs to be front and center for a lot of instructors, and a lot of people in general. Everybody is trying to adopt AI without understanding its limitations.
And so I wrote the piece as a way to provide a grounding and a practicality on what you cannot make AI do, nor do we want to, which is the other part of the piece: we don't want AI to do any of this stuff.
Understanding Data Science
[00:06:34] Jason: Tell me, what does it really mean? I'm not a data scientist. I have a sense of what that means because, as an educator, of course, we work with data, but I realize that I'm not a data scientist. What does that really mean to be a data scientist?
[00:06:49] Brandeis: Data scientist as a profession has changed over the last five years or so. Originally, data scientist was just a big umbrella for anyone who worked in data: if you were working in an Excel spreadsheet, you were coding in a particular language, or you were a journalist talking about different types of visualizations and figures and charts and statistics. A data scientist now has elevated to be someone who is more of a data architect, really trying to deal with the strategy behind how data is modeled and organized, and then how it is interpreted by a team to help decision makers, both human and automated. Most times they tend to be managers, and they also can be more on the technical, statistical, and computer science side, where they're actually doing some coding and managing projects.
So it really does depend on the organization of what a data scientist is. I call myself a data person because I think it's important to think about the full ramifications of data and how unifying and divisive it can be, right? Because data is everywhere. Every company is essentially a data company now, because they're trying to get your data.
They're trying to understand it in ways in order to market better to you. Capitalistic society. So I call myself a data person. My real niche is in data engineering, which is all the data modeling side. So I'm that database person that everyone says, oh, that's boring stuff. I'm like, I'm here for it.
Give me a SQL query and a table, and I'm happy as a plant. But I do dabble in some of the other areas of data: analysis, so a little bit of statistical background, as well as visualization, and everyone knows the pros and cons of data visualization, right?
So, a long answer for a very easy question. Sorry, you are talking to a fellow educator.
[00:08:56] Jason: No, that's a great answer. And I think it's interesting for those of us that are not in it to understand the shifts as well. This kind of shift in role, I'm assuming, is partly just from the massive amounts of data that we have nowadays, as opposed to even 10 years ago.
[00:09:12] Brandeis: Yeah, that's, it's because data science came on the scene. What about, let's say about 2008 or nine, like pretty heavily. It got more into the public eye about 2014, 15. That's when I started to really see myself in the data science realm. But then companies had to figure out what a data team consisted of inside their organization.
And they started to figure out that a data team is different than a software development team, because they were merging the two. A lot of companies merged the two; they think, oh, if you're a coder, you also know data. No, you don't. Because there are statistics, there are implications of that data, there's where the data is being sourced.
There's where the data is being output, and how it is being interpreted and communicated to the public or to stakeholders within an organization. So there's a whole ecosystem of data that then started to really tease out about 2015, 2016. So then you had these different roles really populate. You had data engineering on that back end.
Then you had more of the data analysis piece: all of the data analysts that are really dealing with statistical models, and machine learning engineers and scientists. Then you had data visualizers that were dealing with Tableau and all other types of data tools. And then you also had people who were just doing the communication component; that was their job. They were like the marketing and PR people, but just for all of the data work that had been done. So then it all got separated, and then you needed someone to be in charge of all these different roles, and that's where you have a data scientist or a data project manager. So more roles just came out, because everyone started to notice: it's not about software development.
This is literally about how do you deal with data.
[00:11:09] John: So do you think about approaching the question of AI in the classroom differently now than you would have a decade ago?
[00:11:18] Brandeis: Most definitely.
Because AI was extremely dumb. It's still dumb, right? Let's be clear. Let's level set: AI is dumb. It's just a parrot, right? Whatever input is given, it churns out a variation of it to you as a human. But 10 years ago, it was very much in its infancy.
10 years ago, we were just able to go onto the interwebs and type in a phrase and have it autocomplete. And that autocomplete wasn't done well. Now we can type a phrase into pretty much any search box, and it has a pretty good likelihood of being what we want, or at least providing us several options. But 10 years ago, I would have said, okay, AI isn't really a factor in my life as much. It's not a day-to-day interaction. There was a little bit less surveillance, a little bit less grabbing my data and using it for nefarious purposes, because there are data breaches happening every single day now.
Now we have generative AI, which is automated generation of content. 10 years ago, that wasn't a thing. It was in process, but it wasn't public. And now it's public. You can create images, you can create words, you can create essays, you can create tests, you can create paragraphs, right? And that is a different set of AI algorithms than it was 10 years ago.
So it's scary. It's also good. Because you can now tell what is AI and what isn't if you really know how to attune to that stuff. But it's, yeah, it's just very different. Ten years ago it was very different. I could easily change my questions around and the students wouldn't be able to find the answer.
Now they can dump the question into an automated system of generative AI and produce a response that may or may not be accurate, and I might not be able to tell.
How LLMs are Trained
[00:13:45] John: I would love to talk about the data that these LLMs are trained on, and that great article recently in Rolling Stone about Timnit Gebru. But yeah, really...
[00:13:54] Brandeis: Yeah, there are a lot of lawsuits in process, because a lot of the data has just been web scraped and no one's given consent. There's even a lawsuit against GitHub for their Copilot software, because GitHub Copilot is generating computer code.
They used all the GitHub repository code in order to create Copilot. And then those who had created code using GitHub said, hey, this is supposed to be open source, and therefore if you use my data to create an automated tool that creates software, you're infringing upon my copyright.
And so there's a lawsuit there. And then of course Microsoft slash GitHub is making a subscription model for Copilot. What are the royalties for those software developers and other coders? Yeah, that Rolling Stone article is just the tip of the iceberg of all the different issues when it comes to how these LLMs were trained. And of course the main part of the article was all about the discrimination and the bias that's built in. So it's very skewed toward certain demographics, and anyone that's outside of those demographics isn't represented at all.
Or it's very oppressive in the way that it returns results, right? To Black and brown people in particular, and also to women. Yeah, that's a whole 20-minute conversation for me, because there's a lot there.
[00:15:39] John: Yeah. Were they shocked when they scraped Twitter and Reddit and then ended up with white supremacist, misogynistic responses?
[00:15:47] Brandeis: Right? In my book, Data Conscience, I have a whole chapter that talks about the discrimination piece of it. There's a whole chapter that talks about algorithmic influencers; that's basically the name of the chapter. And I just go down the list of all the content moderation failures on the human side and the automated side.
There's just a lot of breakdown in how content moderation does not moderate the content, right? The people who are trying to say, hey, this content is bad, if they do it in the wrong way, the way the system doesn't understand, then they become the harasser instead of the harassed.
[00:16:29] John: Remarkable.
[00:16:31] Brandeis: Yep. So yeah, do not put a comment under a post that you believe is harassing, because you become the harasser. The way most of these platforms work is that the originator of the post is treated as the victim in the situation. So if you want to say, "hey, this is bad content," you want to repost with your own thoughts, so that you become the originator and not the commenter or the replier.
[00:17:08] John: Wow. I had no idea
[00:17:09] Brandeis: fascinating and terrible all at the same time.
[00:17:12] John: Yeah.
[00:17:14] Jason: It's context again, right? It just takes this basic hierarchical kind of approach, and they can completely miss it.
[00:17:21] Brandeis: Yeah, I...
[00:17:22] Jason: In terms of understanding what the power dynamic is, or what the context is, or anything, they completely miss it.
[00:17:28] Brandeis: Exactly. And so my example that I put in this chapter was from one very well-known tweeter, Shana White. Dr. White was commenting on a video of a congressperson, I believe, who was talking about Indigenous people in an oppressive way. And she made a comment saying, "This is not right. No, Indigenous people are Americans and you need to stop doing this. And why don't you, I don't know, walk out into traffic," or something. That comment got her banned from the platform because she commented. Based upon the rules of that platform at the time, which was Twitter, she was a commenter, so what she said was considered bullying and harassment and inciting someone to kill themselves, and therefore she was banned from Twitter, not looking at the fact that the original post was content that was inaccurate and derogatory. Yeah. It's terrible. But this is the way content moderators work, because it's looking for a pattern. It's an algorithm. It's trying to find a pattern. And a lot of patterns work well for very niche problems, but when it comes to general purpose, they fall apart. To your earlier point, Jason.
There's a certain point where just having everything in a general arena doesn't work anymore, because you completely lose the context.
[00:19:02] John: A set of data that is ripped from sources that are already very white, very male. To your point earlier, we don't have a corpus of data from Sub-Saharan Africa that could be brought in to think that all through. And so we have to start in Silicon Valley, unfortunately,
[00:19:25] Brandeis: Unfortunately. And then some people believe that someone should go get data from Sub-Saharan Africa and put it into our digital infrastructure. And I go, why would you want to do that? Would the people of Sub-Saharan Africa want that to happen? Would they want their stuff digitized?
Like, do they understand what the ramifications would be?
[00:19:46] John: Yeah. To your agency point.
[00:19:47] Brandeis: Yeah, to the agency point, exactly. So there might be cases where you might not want your data in there. Every time you go to purchase something, do you always register? Or do you sometimes continue as guest?
[00:20:02] John: Right.
[00:20:02] Brandeis: And sometimes we just want to continue as guest. We don't want to have a profile ID and a password with our name and our email and our address and a credit card on file and
[00:20:13] John: Yeah.
[00:20:14] Jason: Do you envision a world... you're a data scientist, you have a good understanding of how this all works, how it's growing. Do you envision a world where there's enough data to draw upon that these systems can understand things like context?
[00:20:31] Brandeis: No. I can't even imagine that. And here's why. When we think about AI learning our patterns, we tend to forget as humans that our patterns change at the same time as, or even faster than, the AI, right? As AI evolves, we evolve, so we're going to be evolving faster than the AI, because how can we create any AI tool without us evolving ahead of it? So I imagine a world where we would have these AI tools that could be a complement and help support us. I think taking some of the canned questions that you give in a classroom, putting them inside these AI tools, having the AI tools provide a response, and really having the students critique it...
I think that would be a very interesting application, right? How do you know that this literature synopsis is terrible, or that these sources are not accurate, right? I think that is a great learning option for instructors. But people tend to think that human evolution is going to stop somehow, and then AI is going to just go beyond. No, this is not Minority Report.
[00:21:57] Jason: Thank you.
[00:21:59] Brandeis: We live in the real world, where every single time we are evolving faster. And given that our world is so global, there are many different cultures that are trying to be represented by this general AI, and it's very hard to capture the nuances of different cultures. You can't capture the culture that might be in Nigeria, versus that in Cameroon, versus that in the Bronx, New York, versus that in Akron, Ohio, where I'm from. You can't capture those pockets of culture within an AI system, because the AI is built for the general, right?
Because it's trying to find a pattern, and that's what everyone needs to understand about algorithms. They're looking for a pattern. They can't find a pattern in all of these different cultures in order to be accurate enough to circumvent your own human understanding of how that culture interacts.
Like as soon as someone says pop, I know that they're from the Midwest.
[00:23:01] John: Yeah.
[00:23:02] Brandeis: how they say it, I can have an idea if they're from Ohio like me, or from Michigan, so there's a difference. And even though it's the same word, it has a different tone, there's a different way you say it. I had to break myself out of saying pop when I got to college in New York, because everyone made fun of me. And now I some, I don't slip up, but then when other people say, they're like, Oh, where are you from?
So you from Chicago? You're from Ohio? Like, where are you from? And then there's a different conversation.
[00:23:28] Jason: I think that is a great point about imagining an AI. Sometimes people imagine an AI that, say, out-humans us. Okay, I'm just gonna put that as a general sense.
They're thinking about a very static point in time, as if AI is somehow able to achieve the singularity of everything that we know at this moment. But it does not seem possible, because we continue to move forward as human beings. And it seems like... tell me this from a data standpoint: it seems like data, when generalized, may always get it wrong.
[00:24:10] Brandeis: Yes, because there's no context around the data,
right?
And there's also another issue when it comes to data: people think that data exists forever. And right now it does. You can pretty much store data and put it on some cloud somewhere. You can pay for that storage. But there will come a point where environmental justice is going to become more well known.
And people are going to start to say, why am I storing junk? If I were an astronomer talking about the cosmos and the galaxy, and about how much debris from all of the satellites exists around Earth, well, we're essentially doing the same thing with data. We just have a whole bunch of junk that we're holding on to, and "we" meaning companies, people, etc.
And we're going to need to start to be more deliberate and intentional about what data we collect, what data we store, and actually deleting data.
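[Editor's note: a minimal sketch of the "deliberate and intentional" retention idea just described: keep only records that are recent or explicitly marked worth keeping, and delete the rest. The record fields and cutoff are hypothetical.]

```python
# Sketch of a simple data-retention pass: anything older than the
# retention window is deleted unless it carries an explicit "keep" flag.
from datetime import date, timedelta

def apply_retention(records, today, max_age_days=365):
    cutoff = today - timedelta(days=max_age_days)
    keep = [r for r in records if r["keep"] or r["created"] >= cutoff]
    deleted = len(records) - len(keep)
    return keep, deleted

records = [
    {"name": "dissertation.pdf", "created": date(2007, 5, 1), "keep": True},
    {"name": "tmp_export.csv",   "created": date(2019, 3, 2), "keep": False},
    {"name": "notes.txt",        "created": date(2023, 6, 1), "keep": False},
]
keep, deleted = apply_retention(records, today=date(2023, 9, 1))
print([r["name"] for r in keep], deleted)
# prints ['dissertation.pdf', 'notes.txt'] 1
```

The 2007 dissertation survives because it is flagged as worth keeping; stale, unflagged junk is removed, which is the intentionality being argued for.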
Environmental Concerns of Data
[00:25:19] Jason: You mean my 2010 YouTube cat video isn't going to live forever?
[00:25:23] Brandeis: Oh God, I hope not. I think we need to think about how we are going to remove data, because the computing power to store and maintain the data in the digital infrastructure is harming our world, and we're not talking about that. With the heat that we've experienced this summer, there are data centers that are just combating: how do we cool?
Because it's just so hot outside that their AC units are being taxed, right? So there are a lot of residual effects of having all of this data. Yes, data needs to have a limit. Yes, we need to think more broadly about the ramifications of data, and about how we deal with that infrastructure in ways that work and don't work for us as humans.
[00:26:22] Jason: I saw a report the other day, and I had not even thought about this direction. I obviously had thought about the amount of cooling that it takes for data centers and so on, but they talked about the amount of water that it actually takes to maintain a data center. As a data person, and I assume throughout your life you will continue to use large amounts of data and be an expert in that direction, do you feel some conflict with that? Just moving forward, my assumption for you, anyways, is that you'll find yourself managing large sets of data for all time,
[00:27:07] Brandeis: Eons.
[00:27:08] Jason: but, and I don't think of it as a boring thing, per your previous conversation; I think it's a very creative thing to think about how we might wield this data in ways that can improve humanity and so on.
However, do you feel some conflict with the more environmental issues around that?
[00:27:29] Brandeis: I do. But what I try to do is hold on to devices for much longer than I need to. Like my computer that we're on right now: it's from 2017. I don't need the newest version, right? I do have external drives instead of storing everything on my laptop. I make sure to shut down my laptop as often as I can.
So it tends to be five to six days a week, and I try to unplug even the charger, right? I do my best to be conscious of those little things. I make sure that even my mobile device is not the newest. I can't remember what iPhone I'm on. It's old. And so I do have conflict, but I do my best to make the individual effort to take the machinery to its limit, right? Rather than being the person who needs to have every new gadget as soon as it comes out, or a new machine every 18 months. I used to be like that, and then I stopped. I was like, I really don't.
I really haven't used all the gigs. And of the gigs that I'm using, I could just delete some of this stuff. Mind you, I still have my dissertation in digital form from 2007. But a lot of the other stuff is gone.
But it's hard. Yeah, but there is conflict.
[00:28:57] John: So I hear you saying, in sum, that data uses a massive amount of electricity, and this is a problem. And Artificial General Intelligence, AGI, the next terrible thing that's supposedly coming when the robots rise, is probably not on the horizon in our lifetimes, or at all.
[00:29:15] Brandeis: I would say so. I think people are excited about this general purpose AI, and I roll my eyes. I'm like, really? You really want general purpose AI? Hasn't AI been really trashy thus far? You want that on steroids? Really? I don't. I'm not a fan.
And realize that a lot of our core systems don't subscribe to all of the AI hype. I'm talking about banking and healthcare. Every time you want to get access to their systems, you have to do a separate login. You're not logging in using your Gmail credentials or some other single sign-on; you have to create a completely separate sign-on, because it's a different security level.
[00:30:12] John: Yeah,
[00:30:14] Brandeis: If the core systems aren't subscribing to it, then that should give you cause for pause.
[00:30:23] John: In fact, the core systems are probably still pretty dumb, actually, I think. When I log into my healthcare system or my bank, and again, I'm not a data person, but aren't I just querying a SQL database or something like that? And it's just spitting stuff back and forth to me.
[00:30:42] Brandeis: Most likely. On their back end, they probably do have a pretty sophisticated data repo, probably hosted on some type of cloud, but it's under such severe security that we don't see it at the commercial level. It has to maintain privacy for this reason. So a lot of the data is compartmentalized.
It's not just put in a general pool and can therefore be used to train
[00:31:09] John: Yeah.
[00:31:10] Brandeis: some type of generative AI system, which is a good thing.
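[Editor's note: a minimal sketch of the kind of plain database lookup John describes, where a banking or healthcare portal just runs a parameterized SQL query and returns rows, with no AI involved. The table, columns, and user ID are hypothetical; Python's built-in sqlite3 stands in for the real backend.]

```python
# A bank-portal style lookup: a parameterized SELECT against a
# compartmentalized store. Nothing is learned, nothing is generated.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (user_id TEXT, balance REAL)")
conn.execute("INSERT INTO accounts VALUES ('jnash', 1250.75)")

def get_balance(user_id):
    # the "?" placeholder keeps user input out of the SQL text itself
    row = conn.execute(
        "SELECT balance FROM accounts WHERE user_id = ?", (user_id,)
    ).fetchone()
    return row[0] if row else None

print(get_balance("jnash"))  # prints 1250.75
```

Because the data only ever answers scoped queries like this, it never sits in a general pool where it could be swept up as training material.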
[00:31:13] John: I wonder if we could talk a little bit about that tangent in a couple of ways, and that is the underlying data on which these new generative AI systems, the LLMs, are trained. You have a newsletter called the Rebel Tech Newsletter, and in a recent issue called "How Not to Use AI," you wrote about Washington Post contributor Gillian Brockell's interview, and I'm using air quotes with "interview," with Harriet Tubman, and this appeared in the Washington Post.
She used Khan Academy's Khanmigo to have this interview, and you noted three key issues: that it was a little bit of hubris on her part to assume the right for AI to generate an interview with Harriet Tubman without consent; that the learning goals for the AI interview exercise were pretty vague and not measurable; and that there was a lack of authentic Tubman source material to train the AI system, which led to some pretty superficial outputs. I smiled a little along with the notions of hubris, because Brockell said she was so relieved to find that the Tubman simulation used modern conversational language.
So in that piece, you have some suggestions for a more methodical approach to testing AI interview generation, like getting consent and customizing the model. It made me think more, and I wondered if you could say something about AI-generated content, and how teachers and instructors in general should be thinking about this carefully as they present it to learners.
[00:32:53] Brandeis: Yeah, so I think when it comes to online educators, they need to think about where this AI tool goes wrong, and how to provide some clarity, like guardrails, for their students to understand where it goes wrong and why. What I mean by that is, and I wrote about this in a Medium piece as well: "Hey, if you're going to use these generative AI tools in the classroom, I suggest, if you're allowed, if it's not banned by your institution or the organization, presenting the solution from the AI tool and then having that be the commentary. What do your learners think? And this is for adult learners or even K-12. Is this insightful? Is it basic? Is it bringing up other things for you? What is it?"
I think that's a good conversation piece. And then, especially if there are sources, there's another level of vetting of research. This is getting into that critical thinking arena. How do you know if this source is really good or not? What makes a source good in your discipline?
In my discipline, if you use Wikipedia, people are like, "Really? You're using Wikipedia?" Or say you're using a seminal work, but the seminal work doesn't have the quotation that is being cited by the generative AI system. How are you going to catch that? So I think that is one way that we, as educators, can really leverage these AI tools in ways that spark conversation.
So then students have a better understanding and grounding of what the limitations are, and of how to handle the output. But this is what I find when it comes to, let's say, the college student, and I taught college for a number of years: they don't want to spend that much time thinking about it.
But as an educational tool: when they get into their first job, they might be asked to leverage these general AI tools, and therefore they're going to need to have that skill set. So that's how you can try to position it. And adult learners are already using it in their day-to-day interactions at their full-time jobs.
So they are going to apply it right away, because they're going to go, "Yes, I was wondering, how am I supposed to use this? Is this supposed to be helping me brainstorm? Or is this helping me figure out what my next steps are? And then do I leave out some of this AI-generated stuff, or do I embed some of it?"
And that's, again, another conversation. Yeah, I think there's opportunity, in this phase where generative AI is really in its infancy and people don't know how to deal with it, to spark the conversation and then start identifying your own criteria for what makes sense.
Criteria for when to use it, and for when not to use it, right? You can't use it for a project, and this is the limitation on the project side, so you can't use it on the project. If you try to use it in a writing assignment, here's the issue you're going to run into: you're going to spend more time vetting the responses than you would have spent writing it. And if you're caught plagiarizing the AI system, then there are going to be repercussions.
[00:36:40] John: Yeah.
[00:36:41] Brandeis: That's a whole other conversation, about how you know whether or not it's AI-generated content and things like that, but I just want to put that out there.
[00:36:50] John: I think we're kindred spirits here. Your first comment, about how the instructor, the teacher, can actually have students almost cheat on purpose, another air quotes again with "cheat on purpose," and have them then vet and critically analyze what the system gave them, and then talk about that.
Then that sort of takes the pressure off of wondering what the system is going to do with your assignments.
[00:37:17] Brandeis: And if I were teaching in a college environment, I might actually give assignments that go, "Okay, use the AI tool to get the result, give me the result, and then tell me how you vetted it for accuracy, and you tell me what grade you would give it."
[00:37:34] John: Nice.
[00:37:34] Brandeis: And so then they will see it's about 30 percent accurate or 40 percent accurate.
Okay, if it's at that 30 to 40 percent, that would be your grade. Is that what you want? And of course students are going to say no. But I think that's how I would practically set up an assignment: go, yes, "cheat on purpose," quote unquote, as you said, and then tell me how you vetted it.
Take me through those logical steps. That makes the students have to go through context, right? That awareness makes them have to understand conflict, because if there's something in the response that they cannot validate or verify, then what is that? An AI hallucination, yes or no? And then, lastly, it makes them think critically. And I think I would have done my job as an educator if I make them un-AI-able.
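[Editor's note: a small sketch of the "cheat on purpose" grading idea Brandeis outlines, where the student marks each AI claim as verified or not, and the AI output's accuracy rate becomes the grade that output would have earned. The claim list and function are hypothetical.]

```python
# The AI's accuracy, as established by the student's own vetting,
# becomes the grade the AI-generated answer would receive.
def ai_accuracy_grade(claims):
    """claims: list of (claim_description, verified) pairs from vetting."""
    verified = sum(1 for _, ok in claims if ok)
    return round(100 * verified / len(claims))

vetted = [
    ("Quote appears in the cited seminal work", False),  # hallucinated quote
    ("Publication year of the source", True),
    ("Summary of the article's main finding", True),
    ("Second cited source actually exists", False),       # fabricated reference
    ("Author affiliation", True),
]
print(f"The AI output would earn {ai_accuracy_grade(vetted)}%")
# prints "The AI output would earn 60%"
```

Students see concretely that an unvetted AI answer is a failing grade, which is the incentive the assignment design relies on.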
[00:38:34] Jason: That's good. I love that approach. And I think what I also love about it is that it puts you on a better side of the equation, so to speak, in terms of where AI is and where your students are, rather than being on the side of banning it and then trying to detect it among your students.
You're pulling AI over onto your side, showing ways to use it, ways to flip it, in a sense, to make it push students higher. And, I would hope, creating more of a transparent relationship with your students and with AI as you're working through these assignments.
[00:39:15] Brandeis: Yes, I would think so as well. And more important than all of that is that the classroom is a culture. Every classroom that we teach, every group of students that we teach, there's a certain culture that we are fostering for whatever time period we have them. And we have to build that trust.
And so by banning AI in the classroom, especially these generative AI tools, what we're doing is almost criminalizing students, because if we think it's generative AI, or automated in some way, then it's plagiarism. Now, there is some plagiarism that will always be there, but as instructors we are the role models who ensure that the environment is healthy. So I think it's important that we keep that in mind as instructors: that we do our best to cultivate trust, so that students understand why we're doing what we're doing, and not just say, "Oh, it's AI-generated, you cheated, now you have to go off to the Dean's office," or whatever that next step is; instead, we insist that they are trustworthy people from the beginning. And the second thing that is important about bringing AI more to our side is capturing in the classroom when AI changes. For instance, Bard is being updated more regularly than, let's say, ChatGPT; it's the way that these two LLMs are structured. So if one set of students used ChatGPT and another set used Bard, would the responses be the same or different? And depending on when the students did their assignment, would the responses be different? That's another conversation piece.
Students understand that AI is evolving. And so if you just provide an answer through AI, and just rely on AI, here are some of the fallacies, right?
Maybe there's an update that actually is more accurate. So one student would say, "This Bard response is about 40 percent right," and another student would say, "My Bard response, I thought it was about 50 percent right." But their responses would be different because they did them at different times.
[00:41:50] Jason: That's a great point. And while you were saying that, I was just thinking about what could be another digital divide. Like, John and I have at least enough means that we can buy GPT-4, right?
Which is significantly superior to 3.5, and more consistent than the free version of Bard and the free version that you can find on Bing, right? I like that from a transparency side of things, to be able to talk and compare with students as well, so that you don't have people coming to the classroom better equipped than other people, simply by the fact that they have the means to spend 20 bucks a month on whatever.
[00:42:37] Brandeis: Exactly. And I think, especially given the way that these AI tools are being updated so regularly, and how much is more and more behind a paywall, there's a possibility. I think it was happening from the beginning, and I think it was really COVID that provided a break point where people understood the digital divide a whole lot better. But with GPT-4, and whatever the next versions of ChatGPT are going to be, there will be a divide. If you can afford access to the most up-to-date version of the generative AI tool, you're going to be a step ahead. But that is something we've been battling in different variations for decades, when it came to just who had the internet and who didn't.
There are still rural parts of the United States that don't have access to the internet, which just seems preposterous to me in this year of 2023. Like, why is this still an issue? So it's just going to be exacerbated, to your point. And I think it's happening. I just think we haven't had the conversation.
There hasn't been articles written about it, but it's happening.
[00:43:49] John: I'm noticing already that the web apps that have a wrapper around the GPT-4 engines are very expensive, because OpenAI charges an arm and a leg to those vendors for its use. And I think you're right. I think the divide is going to get larger as more tools start to offer niche products with the chat engines.
And then to have that value add, they're going to have to charge a lot more to the user for that.
[00:44:18] Brandeis: Exactly. And it's interesting. It's called OpenAI, but it's not open. It started out open, but once they figured out their business model, all of a sudden it became closed, and all of a sudden there were tiers in what you could pay for and how much access you could get. Again, following very much the trajectory of how internet access works, right?
Internet service providers: it used to be relatively open, and then it became, "Oh, are you residential or are you business?"
[00:44:51] Jason: And
[00:44:51] Brandeis: Do you want residential plus? Do you want business plus? Or do you want enterprise? So we've had that, and then there was a whole net neutrality conversation that happened about four or five years ago, where we were teetering on not having net neutrality.
Yeah, I think OpenAI is not quite doing what it said its mission was.
[00:45:15] Jason: Yeah. And we were just talking with Dr. Kristen DiCerbo, who is the Chief Learning Officer at Khan Academy, and it probably will be the episode before your episode. So this will be a really nice little back and forth here. Yeah, wonderful. Because one of the things that came up, and we didn't even really get into this, is this idea: their tool is driven by GPT-4, and she made a little side note about how it costs money.
[00:45:44] Brandeis: Mmmhmmm
[00:45:44] Jason: Up to this point, Khan Academy has been pretty open-handed with its tool, and their mission is to help schools and to provide learning for everybody. But there's a little side note: it does cost money to access GPT-4, and that's what it's trained on, and that's who they're partnering with. So what happens moving forward for those schools, or even whole school districts, that maybe can't afford it, even if it is $10 a year per student?
It could amount...
[00:46:18] Brandeis: becomes a lot when there's a lot of students.
[00:46:20] Jason: Exactly. And students have come to depend on it. So then, do we have a new digital divide? Say these chat tools do actually help to accelerate learning, give people a sense of competency with this one-on-one tutoring, and help people achieve better results or whatever.
Now we're starting to see this access thing happen again and again, right?
[00:46:45] Brandeis: Yeah, exactly. And I've been saying this since the beginning of the pandemic: there is an attack on education, and the attack is that the business model of education isn't tenable. It never was. It was always based on free access to the content and almost privatizing the tutoring. And now that we're full-on into the AI realm, that divide, as you are both mentioning, is becoming even bigger, because it's less about the content and more about the tutoring, right? You can go on YouTube and get a lot of the content, but the context and the understanding of the content happen in tutoring. That's where the digital leader, the online educator, the in-person educator, the traditional tutor you might have, actually comes into play. And so I think education professionals as a whole should really think about how we fund this to be more equitable.
Do we think about banding together these different units? Like Canada, which has it so that everyone gets access to Jupyter Notebooks. Jupyter Notebooks is hosted on the cloud, and it has a lot of computer programming resources to help all the students.
So as soon as you are part of a Canadian university, you have access to Jupyter Notebooks. That is something that was done at the government level. But is there something in the United States that we could do similarly, to make sure that the cost of access to these platforms, any type of AI-assisted platform, becomes part of the standard and not an add-on? That would take us banding together, having conversations with institutions and K-12, really making it equitable, so that this conversation happens and all people have access, at the very least the opportunity to be able to use the same tools, no matter which district you're in or what part of the district you're in.
[00:49:31] Jason: Leave it to Canada to give universal access to important things.
[00:49:34] Brandeis: I think that's the reason why Canada has been a forerunner in a lot of the development of some of the data tools: everyone in Canada that has a dot-edu now has access to Jupyter Notebooks, so they can learn how to code, they can create their own projects, they can write their notes, and they can share their Jupyter Notebooks with each other.
It's not like in the United States, where you have to have the right Google account with the right amount of storage to be able to share it, and the other person has to have the right storage to see it, and all these other kinds of roadblocks to working collaboratively.
[00:50:14] Jason: I think this has been a great conversation so far. Very challenging. I'm going to admit, and I'm going to speak for John maybe a little bit here as well, that we're both a little bit AI fanboys. We love digging in, we think it's pretty cool, and we go into it wide-eyed; we're always texting each other saying, "Hey, check this out," kind of thing, right? So we do think it's cool. But I think your voice today has helped challenge us to rethink it, because you're critical in a good way, and we probably don't come at it critically enough. And I'm just wondering, do you have other big concerns? We've talked about the quality of the data, we've talked about transparency, we've talked about proper use of it in the classroom. Do you have other big concerns when it comes to AI in education?
[00:51:04] Brandeis: I think the biggest conversation that we haven't touched on is: how do you vet what a student actually knows? Even if you do all the things that we've talked about, and talk with them and are critical about AI together, once you get their paper, sorry, their digital submission, what do you do as an instructor? How do you then examine it and assess it? I think that's a whole separate conversation, but it's front of mind for me because it's hard, especially if these students are new to you. If you've had them a couple of times, you know how they will respond in a class, you have an idea, and the class isn't too big, right? But if you're in a large class of a hundred or more students, and you're not even the one grading it, you have a team of undergraduate and graduate students grading it, how do you then help the graders assess whether or not the learner actually has learned? So that's the biggest concern.
What does that look like in the digital leader space? That's what I'm gnawing on right now, because it's a hard problem.
[00:52:24] John: It is a hard problem. I've been gnawing on that too. I was in a room recently where the question was, how do we scale un-AI-able learning outcomes? How do we scale public demonstrations of learning, for instance, in a large lecture? And it's a tough nut to crack.
[00:52:43] Brandeis: It's a very tough nut. I did it in my classrooms by doing projects. And that does mean that you're going to have to divide the students into smaller teams for them to work on a project, and then provide those tasks and milestones to see if they reach them.
But yeah, in a very large classroom, I don't foresee it being scalable. I think you're still going to need that human in the loop to really handle that context and that conflict resolution piece. But as I said, I'm still gnawing on that factor, because it's difficult: as an instructor, you trust your learners.
But there will be people who are going to push the envelope, and you don't want them to get a pass. So what do we do? Again, I think that's a whole separate conversation. You'll have to have me back, and then we can just...
[00:53:36] Jason: Sounds like a good problem for another podcast. But yeah, I'm gnawing on that in a similar way, alongside the large-classroom question, thinking about fully asynchronous learning, because a lot of my work is in helping programs and teachers develop fully asynchronous online learning, which I think presents another layer
of challenge. Partly because, not that people can't learn that way, but when we're talking about authentic assessments, and really trying to figure out if the students have learned it, it does provide another layer of possibility for the use of AI in this asynchronous space. And a lot of the solutions that I've seen people throw out there are things like one-on-one interviews, or synchronous group projects, or recording videos, which you can do some of, or using blue books while you watch them, so
a lot of those things aren't very
[00:54:38] Brandeis: practical.
[00:54:40] Jason: They aren't very practical, exactly. And I always feel it, because I'm a horrible handwriter. I always did terribly in those high-pressure blue book situations, because if I got nervous, and if I was trying to rush, it was like reading a doctor's
[00:54:56] Brandeis: yeah, it's chicken scratch at that
[00:54:58] Jason: it's horrible.
And yeah. So I don't know what the answer is, but I think that's a really great question you pose here at the end, and it gives us something more to
[00:55:07] Brandeis: more to think about. Yeah, a lot of the solutions I thought of as well are very similar. Even the grading scale is another way I thought of: you grade the assignments, which are very much tutorials and demonstrations, much lower than you do exams and projects. But then again, you still have to deal with how you grade those exams and projects, and create them so that it will help suss out anyone who is trying to be a bad actor, right?
Yeah, as I said, I've got questions. I don't claim to have a lot of solutions, but I do have questions.
[00:55:48] Jason: I did want to ask, just as we're closing off here. Some of our big focus, obviously, is on online learning, on humanizing online learning. The answer doesn't have to be AI or anything to do with that.
To step away a little bit from that: what thoughts do you have in that direction, for us and for our listeners, as we're thinking about reimagining what online learning might look like in the second half of life as we move forward? What are some of your thoughts?
[00:56:22] Brandeis: I think when it comes to learning, we have to get into this notion that we're always going to be learning. Having a practice of reading newsletters on a regular basis. Medium is a good place for that, and there are also weekly or monthly newsletters that are good. I think also being intentional about having conversations with real humans. Let's go back to the coffee shop, where you sit down and you are just literally talking about whatever is the current news within your discipline. Those forms of learning, I think we need to enhance, because that's where we get the richness that helps us deal with all the other stuff in the education industry. And I don't know if we are taking our time to do those things, right? We're so into the AI-ness of it that we're not going back to some of the regular, traditional, or old-school ways.
[00:57:38] Jason: , those collegial conversations may be more important than ever. The more siloed we get and the more automated we get. Yeah. Yeah.
[00:57:47] Brandeis: yeah. And I think that's going to be something that we'll, I think =
we'll get to as educators that will come back and have these like conversation pieces. But I want us to accelerate that. Can we get back to that? Because there's times which when, especially when I was new in the industry.
I was talking to people and I'd be like, you did what? Like, I had this great line in my syllabus, which was: after one week, the grades are final. So then students wouldn't come back and try to challenge questions; you know how students are at the end of the semester. And that was because I was literally on campus.
I was at Purdue at the time. I walked down the hall, two doors down, and I'm talking to my colleague, and he tells me, I put this in my syllabus because someone else told me. And I was like, aha. And since then, when I've shared my syllabus with other people, they're like, I love that line. And I'm like, I got that from someone 10 or 15 years ago. So I think that is what we need more of.
[00:58:51] Jason: Yeah. That's one of the reasons why we do this podcast, frankly: just to continue our conversation and to have these amazing conversations with people that we might not get a chance to otherwise, right? And this is just a good excuse to get together and to talk about these important things.
So thank you. For listeners out there as well: we try to continue the conversation on LinkedIn, so you'll find this podcast posted there. If you have any comments, suggestions, corrections, or challenges, please put them in there and let us know what you think as well.
[00:59:26] John: Yeah,
[00:59:26] Brandeis: Yes, please do.
[00:59:27] John: I think it's great. We're talking about online learning, and we were talking about your article on Medium and how I thought it was so useful for so many other educators. And then here, lo and behold, you'll be a keynote speaker at the Online Learning Consortium meeting in Washington, D.C., where we'll really get to bring some of these points home, because I don't think that this has really hit in the online learning space as much as it has in the sort of classroom space. And so it's going to be a great conversation. I'm really excited to see you out there too.
[01:00:03] Brandeis: Yes, I'm excited to meet both of you in person. This is my first time as a speaker at OLC, so hopefully I bring these nuggets out, and some other things, as I noodle around some other ideas about how you teach with generative AI, what that means for navigating this space as an instructor, and then how you assess the learner's knowledge.
[01:00:32] Jason: great. We'll be doing a session, actually two sessions, the day after you speak. I think you're on the Wednesday and we're on the Thursday. So it'll be great because we'll do a lot of talkback in both of those sessions, a lot of conversation. And so I am sure your session will come up in the points that you bring up there.
So I'm excited about the dynamic, excited about being there and learning and yeah,
[01:00:57] Brandeis: Awesome. Yes. Very excited.
[01:01:00] Jason: Yeah, I think that's about it. For those listening: at OnlineLearningPodcast.com you can find all of our episodes, including this one, as well as the show notes, where we'll put in links for Brandeis, to our LinkedIn, and to the articles we've referenced. And then you can join us, of course, on LinkedIn as well.
Thank you so much, Brandeis, for joining us. This has been a great conversation.
We've learned a lot from you.
[01:01:28] Brandeis: Thank you, Jason. Thank you, John. This has been fun and hopefully we can do it again because there's much more to talk about.
[01:01:35] John: Oh, a lot. Thank you so much. Great.
[01:01:38] Brandeis: All right.
[01:01:40] Jason: Thank you.