The elasticity of “ethical AI”
This is a nice review by @carlykind_ of the evolution of the term “ethical AI”:
I wonder if the term is now becoming too broad to be useful. For example, the Court of Appeal’s decision on police facial recognition systems was on straightforward human rights law grounds, not ethics, and the #ALevelFiasco outcome was simple politics? (And did the mark moderation algorithm have anything to do with AI/ML? I thought it was relatively straightforward statistical fitting? As @RDBinns commented, isn’t most “AI”/“Machine Learning”?!)
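To make the “statistical fitting, not ML” point concrete, here is a deliberately crude sketch of the general kind of rank-based moderation that was reported: each centre’s teacher-ranked students are mapped onto that centre’s historical grade distribution. This is my own toy illustration, not Ofqual’s published model; the function, names, and numbers are all invented for the example.

```python
# Toy sketch (my assumption, NOT Ofqual's actual formula): map a centre's
# teacher-ranked students onto the centre's historical grade distribution.
# Nothing is trained or "learned" -- it is plain statistical fitting.

def moderate_centre(students, historical_grade_shares, grades):
    """students: list of (name, teacher_rank), rank 1 = strongest.
    historical_grade_shares: fraction of the centre's past cohort awarded
    each grade, in the same order as `grades` (best grade first)."""
    ranked = sorted(students, key=lambda s: s[1])
    n = len(ranked)
    awarded, cursor = {}, 0
    for grade, share in zip(grades, historical_grade_shares):
        quota = round(share * n)  # seats available at this grade
        for name, _ in ranked[cursor:cursor + quota]:
            awarded[name] = grade
        cursor += quota
    # Any rounding remainder falls into the lowest grade.
    for name, _ in ranked[cursor:]:
        awarded[name] = grades[-1]
    return awarded

centre = [("Asha", 1), ("Ben", 2), ("Chloe", 3), ("Dev", 4), ("Ema", 5)]
print(moderate_centre(centre, [0.2, 0.4, 0.4], ["A", "B", "C"]))
# {'Asha': 'A', 'Ben': 'B', 'Chloe': 'B', 'Dev': 'C', 'Ema': 'C'}
```

Even in this crude form, the design choice is visible: the “model” is just the centre’s past distribution, so an individual student’s result is capped by their school’s history rather than anything learned about the student.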
“Framing the problem”, in the sense used, is something privacy campaigners have been doing for decades; I remember well that this was Barry Steinhardt’s position on the “Snooper Bowl” 20 years ago.
Useless as the NHSX #TracingApp 1.0 was, I think its ethics board did a better job than most in exposing its issues, even though it was told it was not there to assess the overall approach, and was then shut down when it became too much of an obstacle to politicians.
I also wonder how far these three phases happened in parallel rather than sequentially. For example, Oscar Gandy has been asking at least some of the “third phase” questions since the late 1980s… And the “second phase” started at the latest with Cynthia Dwork’s turn to fairness in 2011, much earlier than many of the endless “AI ethics” codes.
Finally, this excellent @adwooldridge column makes clear that some of these issues have been raised over several centuries!
‘Scientific management retreated in the face of popular fury: Charles Dickens satirised it in the person of Mr Gradgrind, who wanted to “weigh and measure every parcel of human nature”. F.R. Leavis, a literary critic, dubbed it “technologico-Benthamism”.’
As Adrian Wooldridge added, ‘The universities to which A-level students are struggling to get admitted provide an example… Tenure and promotion are awarded on the basis of the production of articles (which can be measured) rather than teaching (which can’t), so students suffer.’ See also Goodhart’s Law, and this excellent letter from the director of the UK’s national institute for AI and data science:
while we’d be happy to support third parties to develop and deploy artificial intelligence and data science ethically and efficiently, it didn’t take an algorithm — or in this case a statistical model — to spot that the main issue was human. Its formula may have done exactly as it was meant to — but the Department for Education and Ofqual lacked the open, interdisciplinary, accountable, equitable and democratically-governed processes to ensure a fairer result for our students.
Adrian Smith, Director, Alan Turing Institute
These are all very crucial points.
Certainly the A-Level algorithm was too arbitrary to decide the future of these students and their admission to their chosen universities. Even from a mathematical point of view, I think the algorithm has been criticised for not taking account of all the relevant factors. Playing the devil’s advocate, though: is unfairness not inherent to any selection? I remember getting a high mark through the luck of drawing a question on my favourite chapter, and sometimes just the opposite. I remember missing my baccalauréat after spending the night before in A&E. We all have our good and bad experiences with exams, and some very bright people can be terrible at them. Coming back to this specific context of COVID and A Levels, what would be the most ethical and fair option?