I presented on AI use in the classroom at the ECOO conference in the fall of 2019, pre-COVID. Gord from IBM even came all the way down to Niagara Falls to offer world-class support. The room was all but empty:
If you ever wonder why education always seems two steps behind emerging technologies that will have profound impacts on classrooms, here's a fine example. Except you won't even see four people sitting in an empty room in 2024, because edtech conferences like ECOO that focused on teacher technology integration have evaporated in Ontario.
***
OK, so I've been banging my head against pedagogically driven AI engagement in education for almost a decade, only to watch it swamp an oblivious education system anyway. So what's happening now? I'm researching the leading edge of this technology to see if we can't still rescue a pedagogically meaningful approach to it.
In the summer Katina Papulkas from Dell Canada put out a call for educators interested in action research on AI use in learning. I've been talking to Aman Sahota and Henry Fu from Factors Education over the past year, looking for an excuse to work on something like this, so I pitched this idea: de-blackboxing AI technology by using AI itself to show how it works. Our plan is to use the Factors AI engine that Henry himself built and Aman administrates to create custom data libraries that will support an AI agent, one that interacts with students and encourages them to ask questions to better understand how generative AI works. As mentioned before on Dusty World, GenAI isn't intelligent, and it's important that people realize what it is and how it works, both to demystify it and then to apply it effectively. Getting misdirected by marketing-driven AI hype isn't helpful.
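To make the "custom data library" idea concrete, here's a toy sketch of the general pattern: before answering, the agent pulls the most relevant passage from a curated library and responds from that, rather than from whatever the base model happens to know. The function names and the crude keyword-overlap scoring are my own illustration, not how the Factors engine actually works.

```python
# Toy retrieval over a curated data library: find the passage that best
# matches a student's question. A real engine would use semantic embeddings;
# word overlap is a deliberately simple stand-in.

def score(query: str, passage: str) -> int:
    """Count how many words from the query also appear in the passage."""
    query_words = set(query.lower().split())
    passage_words = set(passage.lower().split())
    return len(query_words & passage_words)

def retrieve(query: str, library: list[str]) -> str:
    """Return the library passage with the highest overlap score."""
    return max(library, key=lambda passage: score(query, passage))

# A miniature curated library, like the modules described above.
library = [
    "Transformers use attention to weigh every word against every other word.",
    "Early neural networks were limited by available computing power.",
    "Training data quality shapes what a generative model can produce.",
]

print(retrieve("how does attention work in transformers", library))
```

The point of the pattern is that the curation step, deciding what goes into the library, is where the educator's expertise lives; the model only looks as smart as the material it retrieves from.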
So far we've built modules that describe the history and development of AI, how AI works, and the future of AI. In the process I've come across all sorts of public-facing research material that breaks down generative AI for you (like Deep Learning from MIT Press), but it's technically dense and not accessible to the casual reader.
During the last week of August, Factors held a meeting with interested educators through UofT's OISE (their AI system came out of the OISE edtech accelerator). In the presentation I demonstrated how the AI engine might be used to break down a complex article for easier consumption through agent interaction. The example was WIRED's story on how Google employees developed the transformer architecture that moved generative AI from a curiosity to real-world usefulness in the late 2010s. I picked this one because it explains some of what happens inside the 'black box' that AI is so often hidden in.
With some well-crafted prompting and then conversational interaction, students can get clear, specific answers to technical details that might have eluded them in the long-form article. The reading-support side of GenAI hasn't been fully explored yet (though WIRED did publish an interesting recent piece on cloning famous authors to become AI reading buddies as you tackle the classics, which is in the ballpark).
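The "well-crafted prompting" part is less mysterious than it sounds: it mostly means wrapping the article text in instructions that set the agent's role and reading level before the conversation starts. A hedged sketch of that scaffolding, with template wording that is my own illustration rather than the prompt used in the demo:

```python
# Build a reading-support prompt around a technical article, so the agent
# explains in plain language and invites follow-up questions.

def build_reading_prompt(article_text: str, reading_level: str = "high school") -> str:
    """Wrap an article in role and task instructions for a reading agent."""
    return (
        f"You are a patient reading companion for a {reading_level} student.\n"
        "Below is a technical article. Summarize its main argument in plain "
        "language, define any jargon you use, and end by inviting the student "
        "to ask about anything that is still unclear.\n\n"
        f"ARTICLE:\n{article_text}"
    )

prompt = build_reading_prompt("Transformers changed how language models handle context...")
print(prompt)
```

Once that scaffold is in place, the follow-up questions a student types just get appended to the conversation, which is where the "clear, specific answers" come from.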
What have I learned from working in the engine room (by the way, that image at the top is from Adobe Firefly's AI image generator), building an AI data library and then tuning it? AI isn't automatic at all. It demands knowledgeable people providing focus and context to aim it in the right direction and maximize productive responses with users. An interesting example of this was finding documents that provided relevant data on the subjects we wanted the AI to respond to. When I couldn't find specific ones, Henry suggested using Perplexity, an AI research tool that collates online sources and then gives you concise summaries along with a bibliography of credible sources.
I thought I was being perverse in asking Factors to design an AI that explains AI using AI, but Henry's always a step ahead. His suggestion is to use an AI to build a library of information to feed the AI engine, which then uses AI to interact with the user... about AI. It's turtles all the way down!