Futurons

How an AI grading system missed the mark

19 August 2020
A huge controversy in the U.K. over an algorithm used as a substitute for university-entrance exams highlights the problems with deploying AI in the real world.

From bail decisions to hate speech moderation, invisible algorithms are increasingly making recommendations that have a major impact on human beings. If they’re seen as unfair, what happened in the U.K. could be the start of an angry pushback.

Every summer, hundreds of thousands of British students sit for advanced-level qualification exams, known as A-levels, which help determine which students go to which universities.

  • Because of the coronavirus pandemic, however, the British government canceled A-levels this year. Instead, the government had teachers give an estimate of how they thought their students would have performed on the exams.
  • Those predicted grades were then adjusted by Ofqual, England’s regulatory agency for exams and qualifications, using an algorithm that weighted the scores based on the historic performance of individual secondary schools.
  • The idea was that the algorithm would compensate for the tendency of teachers to inflate the expected performance of their students and more accurately predict how test-takers would have actually performed.
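The general shape of the criticism can be illustrated with a toy sketch. This is not Ofqual's actual model (the function name, data, and grading scheme below are invented for illustration); it only shows the core idea of standardizing results against a school's past record: rank students within a school by teacher prediction, then hand out the grades the school historically achieved.

```python
# Toy illustration (NOT Ofqual's actual algorithm): reassign grades so a
# school's results match its historic distribution, regardless of how
# strong this year's individual students were predicted to be.

def adjust_grades(predicted, historic_distribution):
    """predicted: list of (student, teacher_predicted_score) pairs.
    historic_distribution: the school's past grades, best first,
    one per student in the cohort."""
    # Rank students by their teacher's prediction, best first.
    ranked = sorted(predicted, key=lambda p: p[1], reverse=True)
    # Hand out the school's historic grades in rank order.
    return {student: grade
            for (student, _), grade in zip(ranked, historic_distribution)}

cohort = [("Amira", 92), ("Ben", 85), ("Cleo", 78), ("Dev", 70)]
# Suppose this school historically produced one A, two Bs, and a C:
historic = ["A", "B", "B", "C"]
print(adjust_grades(cohort, historic))
# → {'Amira': 'A', 'Ben': 'B', 'Cleo': 'B', 'Dev': 'C'}
```

Under a scheme like this, even a student predicted a near-perfect score can receive no better than the top grade their school has historically produced, which is how a standardization step can bake a school's past (and the inequalities behind it) into an individual's result.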

It didn’t quite work out that way. The testing algorithm essentially reinforced the economic and societal bias built into the U.K.’s schooling system, leading to results that a trio of AI experts writing in The Guardian called "unethical and harmful to education."

After days of front-page controversies, the British government on Monday abandoned the algorithmic results, instead deciding to accept teachers’ initial predictions. British students may be relieved, but the A-level debacle showcases major problems with using algorithms to predict human outcomes. "It’s not just a grading crisis," says Anton Ovchinnikov, a professor at Queen’s University’s Smith School of Business who has written about the situation. "It’s a crisis of data abuse."
