Seeing Through the Hype of AI in Education
I spent a couple of days last month at the Re:Work Deep Learning Summit in London. It was a friendly affair where those involved in AI and Deep Learning got together to ‘show and tell’ about their latest work and challenges they’re facing.
It was a great sanity check for me in that it helped clarify my thoughts towards AI and its use in education.
I’m very much a layman when it comes to the deep technical and scientific aspects of developing AI systems. I know enough not to be completely lost, but I still have a huge amount to learn. But, since part of my role is to help identify potential future developments for the Open University’s learning systems, it’s really useful for me to understand emerging hot topics well enough that:
- I know enough to hold high level conversations on a topic with experts in the field without looking like a complete idiot.
- I am reasonably aware of what I don’t know.
Having this base level of understanding helps me to recognise realistic potential benefits of an emerging technology. And, more importantly, it allows me to see through a lot of the unrealistic hype.
And this was the main thing that struck me about the Deep Learning Summit, and what made it quite fascinating and enjoyable — there was no hype.
When you’ve got the big guns like Facebook, Amazon, Google, DeepMind, NASA and MIT in a room talking deep learning, there’s no scope for bluster and unrealistic claims.
What we did get was a glimpse into their current work. Seeing what’s gone well. What problems they’ve faced. What’s surprised them. And what they hope to do next.
There were some fascinating insights from the very start when Fabrizio Silvestri of Facebook examined how they’re using deep learning to help personalise the experience on their platform. For example, if you type a search query into Facebook the results should be personal to you. A search for ‘Pictures of Frisky’ should understand that this refers to pictures of your cat Frisky and should be able to return pictures of said feline even if you didn’t tag him in all the images. This is a completely different ball game to the Google search scenario that we’re all familiar with.
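To make the difference concrete, here is a toy sketch of why personal context changes retrieval. This is my own illustration, not Facebook’s actual system: the user names and the `USER_CONTEXT` mapping are entirely hypothetical, and real systems learn this mapping rather than hard-coding it.

```python
# Toy illustration (not Facebook's actual system): the same query string
# should resolve differently depending on who is asking.

# Hypothetical per-user context: personal entity names each user has used.
USER_CONTEXT = {
    "alice": {"frisky": "alice's cat"},
    "bob": {"frisky": "bob's dog"},
}

def personalised_search(user, query):
    """Resolve personal entity names in a query using that user's own context."""
    terms = query.lower().split()
    resolved = [USER_CONTEXT.get(user, {}).get(term, term) for term in terms]
    return " ".join(resolved)

# The identical query means different things to different users:
print(personalised_search("alice", "pictures of Frisky"))  # pictures of alice's cat
print(personalised_search("bob", "pictures of Frisky"))    # pictures of bob's dog
```

A generic search engine only has the query; a personalised one has to learn and maintain this per-user context, which is part of what makes the problem so much harder.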
It was telling that Facebook, with all their data, all their processing power, and all their data science resources, are still a long way from nailing personalisation. It’s so, so tricky to do right.
If you hadn’t already realised it, it’s clear we have a lot to do in education if we’re really looking to use AI to personalise a student’s experience. We risk getting personalisation wrong, and bad personalisation is worse than no personalisation at all. (Personally I think ‘niche at scale’ is our best interim option — but that’s another blog that probably won’t be written.)
Professor Agata Lapedriza, from the Universitat Oberta de Catalunya and MIT Media Lab, gave a great talk about the range of challenges associated with determining a person’s emotions from looking at their face. Your expression may be more to do with an action you’re carrying out than any emotion you’re feeling. You may have just won the lottery, but if you’re struggling to get the lid off a jar of celebratory pickled onions your face is unlikely to show emotion akin to joyous delight. That’s my analogy, by the way, not Professor Lapedriza’s.
So any pitches you might have heard from an edtech start-up determining a student’s frame of mind by looking at them through a webcam? Yeah, maybe not.
And so the two days continued. Each talk adding another couple of useful tidbits of information based on the real world experience of major players in the field of AI and deep learning.
There were two main themes that came through time and again from the presentations during the event.
Theme One: We’re only just getting started.
If we assume that AI, like many past technologies, is going to follow an S-curve path of increasing capability then we’re still near the bottom at the first up-turn. The concepts used in AI have been around for decades. But, in recent years, changes have allowed far more complex algorithms to work on far more complex tasks.
The machine learning S-curve — which kind of ties in with what I was saying.
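The S-curve shape is usually modelled with a logistic function: progress looks almost flat at the start, accelerates through the middle, then plateaus. This is a minimal sketch of that shape; the `midpoint` and `steepness` values are arbitrary illustrative choices, not a claim about where AI actually sits on the curve.

```python
import math

def s_curve(t, midpoint=50.0, steepness=0.1):
    """Logistic function: slow start, rapid middle, plateau at the top."""
    return 1.0 / (1.0 + math.exp(-steepness * (t - midpoint)))

# Early on (where the author suggests AI is now), each step of progress
# is small; capability grows fastest around the midpoint, then levels off.
for t in (0, 25, 50, 75, 100):
    print(f"t={t:3d}  capability={s_curve(t):.3f}")
```

The point being made at the summit is that we are still on the shallow early section, where the concepts are old but the rate of improvement is only just picking up.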
For example, we have hardware that is much faster now, allowing many more calculations to be processed in a set period of time. This faster hardware is also far more accessible. Where once a large lab of machinery was needed to seriously work on AI, anyone can now get their hands on the necessary level of processing power through the cloud, and at a fraction of the cost.
And, of course, there is now much more data available to train and test algorithms.
So, exciting things are starting to happen. But it won’t be a straightforward, incident-free parade through to the broad sunlit uplands of an AI-infused future.
A recurring theme was one of projects progressing one problem at a time. Constantly building upon their work, addressing problems by adding capability and improving efficiency. Slow and steady progress towards a useful and workable outcome. There was no headlong rush towards delivering a sparkly, awe-inspiring showstopper of a product.
Theme Two: Context, Context, Context.
The second main theme is one that really resonates when it comes to the potential uses for AI in education. Context is everything.
Without context humans can struggle to clearly understand information they receive. This is finely illustrated by the very scientific Venn diagram below.
On its own ‘Put your hands up’, although a clear instruction, is ambiguous in its intent. How should you feel after hearing this? Scared? Happy? Slightly uncomfortable?
Other inputs, such as what type of building you are in, and your personal feelings towards playing an active part in a party, organised religion and/or violent crime, will make it far clearer which emotion you should be feeling.
Without context we can struggle to determine intent and the likely cause of what someone says or how they act. We need additional information to give that context. So do AI processes.
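The ‘Put your hands up’ example can be sketched as a toy lookup: the utterance alone is ambiguous, and only a second input (the setting) pins down the likely emotion. The settings and emotion labels below are my own invention for illustration, not anything from the talks.

```python
# Toy sketch: the same utterance maps to very different emotions once a
# second input (the setting) is available. Mappings are illustrative only.
INTERPRETATIONS = {
    ("put your hands up", "party"): "joy",
    ("put your hands up", "church"): "worship",
    ("put your hands up", "bank robbery"): "fear",
}

def likely_emotion(utterance, setting=None):
    """Guess the emotion behind an utterance, given optional context."""
    if setting is None:
        return "ambiguous"  # the text alone isn't enough
    return INTERPRETATIONS.get((utterance.lower(), setting), "unknown")

print(likely_emotion("Put your hands up"))           # ambiguous
print(likely_emotion("Put your hands up", "party"))  # joy
```

Real systems obviously don’t use a lookup table, but the shape of the problem is the same: a single input stream leaves the interpretation underdetermined.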
Having more than one mode of input can greatly improve the performance of AI processes. I’d always been wary of edtech solutions that offered AI insights from limited inputs, such as sentiment analysis purely from small bodies of text. I now understand why I was right to be doubtful.
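To see why single-mode sentiment analysis on small bodies of text is so fragile, here is a deliberately naive lexicon-based scorer. The word lists are made up for the example; this is a caricature of the approach, not any particular product.

```python
# Toy lexicon-based sentiment scorer: count positive minus negative words.
# Word lists are illustrative only.
POSITIVE = {"great", "joy", "won"}
NEGATIVE = {"struggling", "crime", "fear"}

def naive_sentiment(text):
    """Score a snippet of text with no context at all."""
    words = set(text.lower().replace(".", "").split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

# A single word flips the score, with no idea of the real situation:
print(naive_sentiment("Just won the lottery"))          # +1: looks positive
print(naive_sentiment("Struggling with this jar lid"))  # -1: looks negative
# ...yet the second snippet may come from the delighted lottery winner
# wrestling with their celebratory pickled onions.
```

Adding a second mode of input (tone of voice, activity, history) is what gives a system any chance of telling these situations apart.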
Try an AI conference.
If you are keen to understand the potential for AI in education then try to attend an event like this. It might not be directly related to learning and teaching, in the human sense. But it’s a great sanity check in a world full of hype.
AI in education.
So, having said all that, what do I feel the future is for AI in education?
There are many positive use cases. I firmly believe AI can have a beneficial impact on learning and on the running of teaching institutions. But I suspect, in the short to medium term, most of the benefits in teaching and learning will be indirect.
Things like the automation of less glamorous tasks, dealing with low-level repetitive queries, and helping teachers to find patterns in student actions as they explore content can help reduce the workload of teachers and provide insights that were previously out of reach.
The increase in time available, and the improved information at hand, could enhance the human interaction between tutors and students, and help amplify efforts to support all learners.
But efforts in these areas will take a little time to nurture to fruition, and to bed into practice. AI is not just a technological change. The human processes and interactions with AI will need just as much forethought, care and attention.
My feeling is that education’s strategy should be to resist the temptation to go for a mixed bag of tactical, buzzword-based ‘quick fixes’ with a debatable chance of success, just so we feel like we’re doing something in this space.
A pragmatic, well-understood long-term strategy that develops the correct foundations and continuously evolves our capabilities will deliver the best results in the long run.
My fear is that poor experiences from unrealistic, hype-based AI experiments might damage the confidence and willingness of organisations to undertake and fund these longer-term AI strategies.
Let’s try not to do that.
As I mentioned earlier, we are very much at the start of this artificial intelligence journey. AI is a long-term game with huge rewards for those who play it well.
Now, when’s that Blockchain summit taking place?