Image created with ChatGPT

Generative artificial intelligence (GenAI) is a highly controversial topic among experts in education. Some argue for an outright ban on AI in schools, while others believe we must adapt AI to our needs. While I see the potential threat of GenAI, I don’t believe that banning it is a long-term solution. When something desirable is outright banned, the response usually isn’t to simply stop using it. I think of when I was younger and my parents told me I wasn’t allowed to watch TV or play video games: I did everything in my power to find a way to do those things without them knowing. I believe GenAI will function in a similar way. As AI grows in popularity and is embedded into more applications, children will gain access to it at younger ages and more frequently. I believe our role as educators is to give children information that will prepare them for the world. The world has decided to embrace GenAI, so as educators we need to teach students about AI. When children are young we can start with simple information, and as they get older move into more complex discussions.

Everyone seems to have a view on what GenAI is, what it can be used for, and who should use it. This made me curious about how AI views itself, so I asked three different GenAI platforms to “create an image of how you view yourself as an AI entity.” The images above are what the GenAI created as a result. I then took a deeper dive into images of AI that people had created, to see how people might view AI. I found an interesting blog post by Tristan Ferne called “What Does AI Really Look Like?” He discusses reframing how we view AI by using accurate imagery of what AI really is: math, data, code, and information. This means that AI has to draw on information that was created at some point, but that information may not be correct. AI can create more content at faster rates, and algorithms then target people’s interests, spreading misinformation that aligns with how we already feel. This article by the American Psychological Association discusses how and why false information spreads further. AI also lacks the complex reasoning we have as humans and may misinterpret the context of information or a situation when summarizing it. Our brains use many top-down and bottom-up processes that let us think in complex ways we are still learning to understand ourselves.

As educators, the behaviours we model play a large role in children’s lives. This means we have to be extremely thoughtful when we use AI, because we are modelling its use to students. If we begin using AI for daily tasks like marking a student’s worksheet, we might be breaking school policies and students’ trust, and we miss the chance to build personal relationships with students. Teachers have a delicate role in the classroom as they balance students’ personal information, everyday jobs like teaching math, and teaching new and complex topics like AI. When using AI in classrooms, we need to consider whether the situation is ethical and maintains the safety and privacy of students. To explore this, I asked Google Gemini, “What are the ethical limitations of using AI?” I’m sure I could have come up with several ethical considerations myself, but mine would have been broad and not focused on British Columbia educators. One of my biggest concerns with adopting AI on such a large scale is the environmental impact: the data centres powering AI, and the cooling they require, will only grow as AI scales up. CBC created a video that highlights some of the impacts this may have on Canada.

I commonly use GenAI for things like summarizing articles and videos, checking my writing for errors, and sometimes creating images. I am still very skeptical of AI, and I will often skim the original articles to make sure the information is accurate. Often I find it does a significantly worse job than I would have done just by skimming the article myself, and it gets noticeably worse when I give it larger sources. What I do love is that these tools will continue to improve and become more useful for learners in the future. I really like the idea of trainable AI for courses, so that students can draw on the information they learned in that class. Magic School is a good example of what the future of AI might look like for educators.