Thursday 21 July 2022

Dancing in the Datasphere 2022 Edition: AI Refined User Interfaces!

This quote is 12 years old now, but it's truer than ever, and our technology is about to take another leap forward that will make our current passive, screen-based approach to digital information look as outdated as a fax machine.
Way back in 2011 I made one of my first presentations for a provincial education conference (Dancing in the Datasphere).  Leveraging years in IT prior to teaching, I tried to edge teachers closer to an understanding of how the rest of the world had moved on in its digital engagement.  Stepping out of IT in 2003 to become a teacher felt like time warping back 20 years, so out of date was the use of technology in education.  In 2019 I attended Cisco Live and discovered that the rest of the world had moved on again, leveraging cloud-based systems in a way that no one in education was; the anti-tech habits of education persist.  The need for online/cloud-based systems in education is apparent (especially since the pandemic began), but poor cybersecurity management is often used as an excuse to stay out of it.  We're still the only school in Southwestern Ontario doing CyberTitan and one of only five in the province with any kind of cyber focus.

In the past decade education has staggered into the 21st Century, though Ontario went out of its way to fear and shun technology until the tech-haters suddenly, desperately needed it during the pandemic.  The past two years have forced a recognition of the importance of digital fluency, though there is still no mandatory digital literacy course in any Ontario high school.


On To The Future, Ready or Not...

With all that in mind, what's coming next offers some exciting possibilities, not that education will leverage them before I retire.  Machine learning, and the artificial intelligence growing out of it, are already offering students a silent AI partner for coding in GitHub's Copilot.  The OpenAI GPT-3 system that Copilot runs on is already producing original text, and perhaps even some of the 'original' essays that teachers think are written by students.

As systems become smarter, information falls to hand more readily and old habits (like memorizing phone numbers) become irrelevant.  So, for several years now I've had grade 10s building IBM Watson-powered AI chatbots, and this past semester several of my seniors used Copilot to build their culminating coding projects.  Being able to communicate effectively with ML and AI is going to become increasingly important in the next decade.
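The pattern those chatbot projects are built on, classifying the user's utterance into an intent and then responding from that intent, can be sketched in a few lines of Python.  This is an illustrative toy using simple word overlap, not IBM's implementation; the intents, example phrases and responses are invented for the example.

```python
from typing import Optional

# Each intent is defined by a handful of example utterances, the same way
# platforms like Watson Assistant are configured. (Invented sample data.)
INTENTS = {
    "greeting": ["hello", "hi", "good morning"],
    "hours": ["when are you open", "opening hours", "what time do you close"],
    "goodbye": ["bye", "see you later", "goodbye"],
}

RESPONSES = {
    "greeting": "Hello! How can I help?",
    "hours": "We're open 9am to 5pm, Monday to Friday.",
    "goodbye": "Goodbye!",
    None: "Sorry, I didn't catch that.",  # fallback when no intent matches
}

def classify(utterance: str) -> Optional[str]:
    """Return the intent whose example phrases share the most words with the input."""
    words = set(utterance.lower().split())
    best_intent, best_score = None, 0
    for intent, examples in INTENTS.items():
        score = max(len(words & set(example.split())) for example in examples)
        if score > best_score:
            best_intent, best_score = intent, score
    return best_intent

def reply(utterance: str) -> str:
    """Map the user's input to a canned response via its classified intent."""
    return RESPONSES[classify(utterance)]
```

Real platforms replace the word-overlap scoring with a machine learning classifier trained on those example utterances, but the intent/response structure the students build is the same.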

But what really excites me about intelligent machines is how they're able to simulate activities with human users in order to streamline and improve the human-machine interface.  Last week we were watching FITC's Spotlight UX, an online conference about the multidisciplinary field of User Experience (UX), which draws on digital design, ergonomics and user interfaces.  UX considers every aspect of digital design from the user's point of view; it has a lot in common with student-centred learning in education.  The opening speaker was an ethnologist before getting into UX, and her background allowed her to dismantle many of the assumptions that alienate users, especially in online systems that may be designed in one country and used in many others.

At the same time I was reading Guy Huntington's piece on The Coming Classroom Revolution.  One of the things he covers is the concept of a virtual-self personal learning assistant.  Guy looks at the AssistBot from a legal/privacy perspective in the article, but a complex digital model of a student's learning habits offers some interesting possibilities.  What if the virtual student could be run through simulations using various software?  User interface issues could be recognized before a student ever picks up a new device or piece of software.  Interfaces refined by AI-driven user simulations would feel intuitive in a way they never have before, because each user would be interacting with digital information through an interface custom designed for them, based on thousands of hours of simulation run before they ever touch it.

The learning benefits should also be apparent if everyone is walking around with a digital doppelganger in tow.  A teacher might pitch a lesson into a simulation space, the virtual student-bots would show where it does and doesn't work for them, and the lesson could then be customized for each student before they ever see it.  Classrooms would become radically personalized after more than a century of factory conformity and low-resolution information sharing.

A buzzword flying about at the moment is 'metaverse', especially since Facebook rebranded itself Meta.  In the last post I talked about my long involvement with interactive and immersive virtual reality.  After years of development we are close to finally making it happen on a system-wide scale, and it's going to happen while the systems themselves are becoming intelligent and the web is attempting to evolve past the attention-merchant economy that web2.0 became.


Back in April I watched FITC's big early-year conference, where Jared Ficklin keynoted about how web3 (driven by blockchain encryption) might give us back control of our own data and change the paradigm we're stuck in online, with multinationals selling our data as if they owned it.  It was a thrilling talk, and I've since come across similar thinking in WIRED.

Web3's a bit of a dog's breakfast thanks to crypto and the mess it has made, but the possibility of individuals owning their online presence is a thrilling return to what the internet once was and might be again.

Combining all of these converging ideas into a viable technological future is ambitious, but it's worth pursuing, because if you don't push for the best outcome for the most people, you end up with what we have now.

Could the internet provide us with secure interaction and storage without abusing our information?  Could we move past the low-resolution two dimensional windows that we all peer into the datasphere with now?  Could we leverage machine intelligence to treat each other in a more human way than our 'superior' one teacher to 30+ student brick-in-the-wall classrooms continue to do even now?


Imagine, if you will, a future where you are able to move in and out of digital information at will without it ever distracting you from the real world as it does now.  Peripheral user interface ergonomics will drastically improve as we get clear of the smartphone myopia we're currently stuck in.  When deep diving into digital data you'll be able to do it through complex multi-dimensional interfaces that make our current screen fixation look positively archaic.  Haptic IoT devices mean you'll interact with data using more than just your fingertips, allowing for much more nuanced control of your digital interactions.  Your awareness of that environment will also be dimensionally greater than peering through a 2D screen.  Moving three-dimensionally through digital data offers a much richer connection to your digital self.

A better interface with digital information is already here and will only improve.  Though Web3 struggles to make sense at the best of times, the idea that we could bring our shared network back to a user-centric experience, where privacy and personal information are owned and controlled by users, points to a possible future beyond the tyranny of the attention economy.  But what excites me most is the idea that we could run simulations on virtual versions of our own habits to produce software experiences unlike any we've had before.  That efficiency, combined with all these other converging technologies, points to a digital future much richer than the step we're stuck on now.

Imagine opening a brand new app and discovering that it intuitively makes sense to you because it was designed using thousands of simulated hours with your digital avatar.  This also offers some interesting security opportunities, because no two interfaces would be the same; each would be tailored to its user.  Combine that with a more privacy-friendly web, multi-dimensional user interfaces and machine learning that refines the human-machine connection before first use, and the cybernaut of the future will be doing things in digital spaces that challenge what we think is possible.  That matters, because we will be interacting with ever more complex artificial intelligences whenever we're digitally connected, and if we don't refine and improve our ability to operate in digital spaces, we'll rapidly lose touch with what these automated intelligences are doing.

Rachel, one of our founding Terabytches, who took care of Cisco networking during cybersecurity competitions, has moved on to computer science at university.  Back when she was doing co-op she developed a simple machine intelligence to make sense of what was going on in large datasets.  One of the biggest surprises for us both was how much work is involved in the Explore/Transform stage of that hierarchy.  ML, AI and deep learning offer us a new way to understand large, complex datasets, but they also need human oversight to make them work.  The automation possible in modern data science is another one of those 21st Century skills most classrooms don't consider.
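Rachel's experience is easy to demonstrate in miniature: even a trivial analysis spends most of its code on the Explore/Transform step, cleaning and reshaping raw records before any conclusion can be drawn.  A minimal sketch (the field names and messy sample values here are invented for illustration):

```python
from statistics import mean

# Raw records as they might arrive from a real export: stray whitespace,
# missing values, sentinel strings. (Invented sample data.)
raw = [
    {"student": " Alice ", "score": "87"},
    {"student": "Bob",     "score": ""},      # missing score
    {"student": "Cara",    "score": "91.5"},
    {"student": " Dev",    "score": "n/a"},   # sentinel for missing
]

def transform(rows):
    """The Explore/Transform step: normalize names, parse scores, drop bad rows."""
    clean = []
    for row in rows:
        name = row["student"].strip()
        try:
            score = float(row["score"])
        except ValueError:
            continue  # skip rows whose score can't be parsed
        clean.append({"student": name, "score": score})
    return clean

cleaned = transform(raw)
average = mean(row["score"] for row in cleaned)
```

Nearly everything above is cleanup; the "analysis" is one line at the end, which matched what Rachel found at scale.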