AI – Fantastic But Flawed

by Maria Campbell

Life in our digital, robotic age is great. The Roomba vacuums for you. Cars can navigate and drive themselves. You can 3D-print your own products, live in an interactive smart home, or undergo da Vinci-assisted surgery. Electronic devices have also revolutionized classrooms and instruction. The Internet of Things and the explosion of computer processing capability have us feeling powerful, pampered, more efficient, and more advanced. Students will now grow up learning and working in environments with machinery that learns and thinks as well. Machine learning is already here, automating processes, extracting patterns from big data, and making predictions and decisions. Machine learning algorithms are lightning-fast and complex, but they have flaws and problems too.

Algorithms are an essential part of digital technology, but we have always used them. Sorting laundry, planning a trip, or following a recipe are all simple algorithms. Algorithms are merely problem-solving procedures: step-by-step instructions for reaching a result. Students use algorithmic logic when playing games, assembling projects, doing research, and even planning parties. Humans use algorithms every day to make decisions and complete tasks. Algorithms are also the key component in Artificial Intelligence (AI).
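
To make that concrete, here is a minimal sketch in Python (one of the languages mentioned below) of the laundry-sorting example; the piles and rules are illustrative assumptions, not a standard procedure.

```python
# A step-by-step "sort the laundry" procedure, written as code.
# The categories and rules here are illustrative assumptions.
def sort_laundry(items):
    piles = {"whites": [], "darks": [], "delicates": []}
    for item in items:
        if item["fabric"] == "delicate":
            piles["delicates"].append(item["name"])
        elif item["colour"] == "white":
            piles["whites"].append(item["name"])
        else:
            piles["darks"].append(item["name"])
    return piles

laundry = [
    {"name": "dress shirt", "colour": "white", "fabric": "cotton"},
    {"name": "jeans", "colour": "blue", "fabric": "denim"},
    {"name": "scarf", "colour": "red", "fabric": "delicate"},
]
print(sort_laundry(laundry))
# {'whites': ['dress shirt'], 'darks': ['jeans'], 'delicates': ['scarf']}
```

The same three ingredients appear in every algorithm, however complex: inputs, a fixed sequence of decisions, and an output.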

Many teachers and students are tech savvy, but most of us are not calculus or coding experts. In math and programming classrooms, students create and work through algorithms. However, industry IT algorithms are much more complicated, high-speed calculations that anchor a myriad of different and pervasive systems. The Internet, finance, trade, healthcare, employment, manufacturing, immigration, data collection, security, and processing are all rooted in the multi-levelled decision-making of AI algorithms. Are you aware of their profound impact and influence?

Algorithms in Daily Life

Algorithms determine the TV and movies we stream, dating and shopping apps, and the online games and quizzes we’re offered. Algorithms connect us to networks and opportunities—and restrict us from others. Students are graduating into a world where unknown software algorithms will determine the job opportunities and interviews they receive, which financial products and bank loans they’ll be offered, and even their international travel and immigration options. Pause to think about that.

I have three mantras in my classroom: Reflect. Ask questions. Make connections. Across Canada, we teach our students to be critical thinkers and we expect new technology to get better and better. Circuitry and machines don’t “feel” or “discriminate” as humans can; they merely function 24/7. Right? Not quite. Software functions the way human programmers design it to—and there are inherent problems at that fundamental stage.

AI Flaws and Dilemmas

Creating a computer algorithm requires clearly defined inputs and outcomes. Coding languages like JavaScript or Python add their own syntax and semantics pitfalls as well. Limits and bias get built into the initial datasets and decision-making criteria, intentionally or not. Then, as AI makes new decisions based on previous patterns and outcomes, skewed results and lopsided systems become the new norm. This can develop rapidly and well beyond our knowledge and plans because, despite being digital and mathematical, AI algorithms can still be incorrect. A model that underfits is too simplistic to capture the real pattern; a model that overfits is so tuned to its training examples that it fails on new ones. Algorithms can also induce or replicate stereotypes and bias in data, procedures, and outcomes based on gender, race, age, location, or online history.
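
Here is a minimal sketch of underfitting and overfitting, assuming the scikit-learn library; the noisy curve and the polynomial degrees are fabricated purely for illustration.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Fabricated data: a sine curve with a little random noise.
rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 1, 40)).reshape(-1, 1)
y = np.sin(2 * np.pi * X).ravel() + rng.normal(0, 0.1, 40)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for degree in (1, 4, 25):  # too simple, reasonable, too complex
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    print(f"degree {degree}: train {model.score(X_train, y_train):.2f}, "
          f"test {model.score(X_test, y_test):.2f}")

# Degree 1 scores poorly even on its own training data (underfitting).
# Degree 25 scores almost perfectly on the training data but collapses
# on the held-out test points (overfitting): it memorized the noise.
```

Both failure modes matter here: an underfit system makes crude generalizations, while an overfit one mistakes historical accidents, including historical prejudices, for rules.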

Consider some possible costs or dilemmas. The more that students use Instagram, TikTok, YouTube, or Netflix, the more corporate algorithms determine content rather than users searching or choosing for themselves. Automated decision systems soon predict, recommend, select, and limit your viewing options. This is also true for search engines and websites like Google or Amazon. Who is truly searching and discerning information: you or the corporate algorithm? Who actually stores or owns it? Different users get different options and results. So, is this technological assistance or manipulation?
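
A minimal sketch of that feedback loop follows; the categories, the click probabilities, and the "recommend the most-clicked category" rule are all fabricated assumptions, far simpler than any real platform.

```python
import random
from collections import Counter

random.seed(1)
categories = ["news", "sports", "comedy", "science", "music"]
clicks = Counter({c: 1 for c in categories})  # start with no real preference

for _ in range(50):
    # The system mostly recommends whatever you have clicked most...
    recommended = clicks.most_common(1)[0][0]
    # ...and you usually watch what you are shown, rarely browsing freely.
    chosen = recommended if random.random() < 0.8 else random.choice(categories)
    clicks[chosen] += 1

print(clicks.most_common())
# One category quickly dominates: the system's early guess becomes your
# "preference", and the other options fade from view.
```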

Uber or Google Maps can determine the best route to get you to a destination. They also redirect traffic patterns and can influence or isolate entire neighbourhoods. AI systems like the American COMPAS tool use algorithms to score a prisoner's risk of reoffending and inform parole decisions, but its predictions have repeatedly been shown to be biased and incorrect. Will they scrap the system? Will it get worse? AI can also personalize and privatize the pace and curriculum of online education, but then what happens to collective knowledge and experience? What happens to students’ social skills?

Digital Discrimination

If the initial data used to program and train machine-learning algorithms is historically biased or limited, the result is digital discrimination. Voice recognition software can misinterpret intonation or accents and misidentify or refuse people. Facial analysis tools misread expressions, misjudge darker complexions, and “red flag” or filter out otherwise perfectly qualified candidates for jobs or travellers at airports. How do your students feel about this?

Canada has received proposals for AI solutions to manage Immigration, Refugees and Citizenship Canada procedures from American tech giants Amazon and IBM and from the consulting firm PwC. What if similar discriminatory faults or problems come with their AI? Will one nation’s AI solutions translate cohesively to another nation’s system or situation? Should we even use automated algorithms for such pivotal and individual decision-making?

If the original data pool favours one race, gender, background, or set of skills, then decisions to eliminate, accept, or promote future candidates inherit that bias. We strive to promote and celebrate diversity and equity in our schools, so it’s alarming to think that AI systems might undermine those very ideals. Underrepresented or disadvantaged individuals can be overlooked or automatically excluded by algorithms long before the stage of in-person interviews. This is certainly not the goal but can still be the reality. Our digital age comes with many caveats. If the input is flawed, the outcomes are too.
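
A minimal sketch of that mechanism, assuming scikit-learn; the eight applicants and their "hired" labels are fabricated to mimic a skewed historical pool, not drawn from any real system.

```python
from sklearn.linear_model import LogisticRegression

# Features: [belongs_to_group_A, qualification_score out of 10].
# In this fabricated history, group A was hired at modest qualifications,
# while group B was hired only at the very top.
X = [[1, 9], [1, 6], [1, 4], [1, 2],   # group A applicants
     [0, 9], [0, 6], [0, 4], [0, 2]]   # group B applicants
y = [1, 1, 1, 0,
     1, 0, 0, 0]                       # historical hiring outcomes

model = LogisticRegression().fit(X, y)

# Two equally qualified new applicants who differ only in group:
print(model.predict([[1, 6], [0, 6]]))
# With this toy pool the model predicts [1 0]: it has learned the old
# favouritism and screens out the group-B applicant before any human
# ever sees the file.
```

Nothing in the code mentions discrimination; the bias arrives entirely through the historical labels it was trained on.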

Critics Are Required

Youth today are transfixed by devices. So are industry and society. We are too far down the technological road to turn back. Artificial Intelligence and machine learning are very exciting and already industry essentials. Unfortunately, there is minimal legislation and oversight of AI. Public scrutiny and government auditing could help standardize algorithms and encourage better, bias-free software, but algorithms are proprietary and profitable. Private companies and multinational corporations fiercely protect and even obscure their intellectual property. Some audit internally, to be more efficient or competitive, but most refuse to reveal or share their datasets and code.

We must teach our students to recognize prejudice and privilege in our world’s problems and in our problem-solving algorithms. Today’s students will be tomorrow’s coders, or will certainly be affected by their code. Algorithms already anchor our digital economic and social systems. They’ll grow increasingly complex as Artificial Intelligence learns to use and alter information and procedures on its own. Our students are already tech consumers. Some will become the next tech creators. Will the rest be critics or victims? Unfortunately, there is no simple flowchart to decide their fate.


ABOUT THE AUTHOR

Maria Campbell
Maria Campbell, OCT is a career teacher with the Ottawa Catholic School Board, currently at Lester B. Pearson High School. She tries to model rigorous research skills, fiction and non-fiction writing, and lifelong learning in and out of the classroom—we must practice what we preach. She also believes that as the globe grows more digital and interconnected, educators and critical thinkers are more vital than ever.


This article is featured in the Fall 2022 issue of Canadian Teacher Magazine.
