The power and potential of EdTech: Using AI as a study tool … and maybe more
This summer, I've been diving deep into the application of AI in the education sector. In particular, I'm interested in how we can provide access to educational supports at scale, especially in countries experiencing fragility, conflict, and/or violence, where many learners lack access to educational resources.
As reported in The Verge, Quizlet has introduced AI features to assist students using its tools. The notion sounds compelling, right? The idea of AI guiding what and how we learn is captivating! But these tools also tend to label a learner as a 'good student' or a 'bad student'. Is that a fair judgement from a machine? Let's explore...
The question is, can we make use of AI to create authentic, personalized learning experiences? And can we trust it to measure how well we do?
The potential of AI in educational technology is massive. EdTech companies like Quizlet utilize AI's adaptive learning capabilities to provide students with tailored, personalized study aids. Each student is different: they have unique learning patterns, abilities, and pace. The one-size-fits-all instructional model doesn't work well for everybody. That's where AI comes into play!
With AI, each student's study path can be customized to their strengths and areas needing improvement. By generating personalized study sets and suggesting problem-solving methods based on a student's learning behavior, AI tools aim to facilitate efficient, personalized learning. It's not just about studying more; it's about studying smartly!
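To make the adaptive idea concrete, here is a minimal sketch, in Python, of one way a tool could bias practice towards a student's weak spots. This is not Quizlet's actual algorithm; the function name, data shape, and weighting scheme are all invented for illustration.

```python
import random

def pick_next_card(history):
    """Pick the next flashcard, favouring cards the student misses more often.

    history maps card -> (times_wrong, times_seen). Cards with a higher
    miss rate get a higher sampling weight, so practice adapts to the
    learner's behaviour. Add-one smoothing keeps unseen cards in rotation.
    """
    weights = {
        card: (wrong + 1) / (seen + 1)
        for card, (wrong, seen) in history.items()
    }
    total = sum(weights.values())
    r = random.uniform(0, total)
    for card, w in weights.items():
        r -= w
        if r <= 0:
            return card
    return card  # fallback for floating-point edge cases

history = {
    "photosynthesis": (3, 5),  # missed often, so drawn more
    "mitosis": (0, 5),         # mastered, so drawn less
}
print(pick_next_card(history))
```

Even this toy version shows the appeal: the study path shifts automatically towards areas needing improvement, with no one-size-fits-all sequence.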
And then there is the trap of measurement. Being designated a 'good student' or a 'bad student' by an AI tool raises serious concerns about objectivity and about the effect on a learner's attitude towards studying. Learning is a complex process that spans comprehension, application, creativity, and critical thinking. Can an AI tool capture all of that?
AI tools base their judgement on your usage of the tool. Mistakes are learning opportunities for students and a chance for teachers (or AI) to provide constructive feedback. However, labeling a learner as a 'bad student' can lead to discouragement and demotivation rather than fostering a growth mindset.
Moreover, these designations are simplistic judgements of a learner's interaction with the app; they don't fully represent a learner's effort, comprehension, or real-world application skills. If a student doesn't engage with the tool regularly, or in the manner the tool expects, the likelihood of being labeled 'bad' increases. It's a narrow judgement on the AI tool's part, one that misses the comprehensive picture of a student's learning journey.
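To see just how narrow such a judgement can be, consider a deliberately crude caricature of engagement-based labelling, where the label hinges on a single usage metric. The function name and the threshold are invented, but they illustrate the problem: usage is not understanding.

```python
def label_student(sessions_per_week):
    """A caricature: label learners purely by how often they open the app.

    Hypothetical example only; no real product is this blunt, but any
    label driven mainly by in-app engagement inherits the same blind spot.
    """
    return "good student" if sessions_per_week >= 5 else "bad student"

# A learner who studies deeply from a textbook and rarely opens the app
# gets labelled 'bad' regardless of what they actually understand:
print(label_student(1))

# A learner who taps through cards daily without retaining anything
# gets labelled 'good' on usage alone:
print(label_student(7))
```

Effort, comprehension, and real-world application never enter the calculation, which is exactly the gap described above.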
Looking forward ...
AI in learning holds promise, but it's still a work in progress. The value of AI depends on how it's programmed, used, and interpreted. For now, it's pivotal for EdTech companies to fine-tune these features to create a more comprehensive, objective, and encouraging learning environment. The goal is to ensure that AI enhances educational experiences rather than restricting them.
So, to all the 'bad students' out there, remember you're not 'bad' or 'good' based on an AI tool's perception. Use it as a tool for assistance and not as a judge of your learning ability. Embrace your learning journey, knowing that every stumble and fumble is just a stepping-stone towards your growth.
AI in learning has bright prospects, and, as researchers and educators, we must work collaboratively to mould it into a tool that breathes life into learning rather than reducing it to categorizations. We must strive to develop AI into an advocate of learning, a tutor that motivates and encourages growth, rather than a judge that defaults to labels.
But, then again, if it is clear that the one-size-fits-all model for education doesn't work, maybe there's a greater role for AI in the development of informal and non-formal approaches to learning, such as through free play. If we can use these technologies to measure the easy, objective stuff, can we also use them to measure hard-to-measure skills and competencies? Can we trust technologies to aid in our subjective judgement?