The elasticity of “ethical AI”

This is a nice review by @carlykind_ of the evolution of the term “ethical AI”:

I wonder if the term is now becoming too broad to be useful. For example, the Court of Appeal’s decision on police facial recognition systems was made on straightforward human rights law grounds, not ethics. And the #ALevelFiasco outcome was simple politics? (Besides, did the mark moderation algorithm have anything to do with AI/ML? I thought it was a relatively straightforward statistical fitting exercise. As @RDBinns commented, isn’t most “AI”/“machine learning” just that?!)
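On that statistical point, here is a minimal, hypothetical sketch of the kind of rank-based moderation reportedly at the core of the Ofqual approach: map a teacher-supplied rank order of students onto the school’s historical grade distribution. The function name, inputs, and numbers are illustrative assumptions, not the actual Ofqual formula; the point is simply that nothing in it involves machine learning.

```python
# Hypothetical sketch of rank-based grade moderation. This is NOT the
# actual Ofqual formula, only an illustration of "statistical fitting"
# with no machine learning involved.

def moderate_grades(ranked_students, historical_distribution):
    """Assign grades so the cohort's profile matches past cohorts.

    ranked_students: list of student names, ordered best to worst
                     by teacher-supplied rank.
    historical_distribution: dict mapping grade -> proportion of the
                             school's past cohorts awarded that grade,
                             ordered best grade first.
    """
    assert historical_distribution, "need at least one grade band"
    n = len(ranked_students)
    grades = {}
    cumulative = 0.0
    start = 0
    for grade, proportion in historical_distribution.items():
        cumulative += proportion
        # Index of the last student covered by this grade boundary.
        end = round(cumulative * n)
        for student in ranked_students[start:end]:
            grades[student] = grade
        start = end
    # Any rounding leftovers get the lowest grade.
    for student in ranked_students[start:]:
        grades[student] = grade
    return grades


if __name__ == "__main__":
    # Illustrative inputs only.
    cohort = ["Asha", "Ben", "Chloe", "Dev", "Ema"]  # best to worst
    history = {"A": 0.2, "B": 0.4, "C": 0.4}         # past grade profile
    print(moderate_grades(cohort, history))
    # {'Asha': 'A', 'Ben': 'B', 'Chloe': 'B', 'Dev': 'C', 'Ema': 'C'}
```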

“Framing the problem” in the sense used here is something privacy campaigners have been doing for decades; I remember well that this was Barry Steinhardt’s position on the “Snooper Bowl” 20 years ago.

Useless as the NHSX #TracingApp 1.0 was, I think its ethics board did a better job than most in exposing its issues, even though it was told it was not there to assess the overall approach, and was then shut down when it became too much of an obstacle for politicians.

I also wonder how far these three phases happened in parallel rather than sequentially. For example, Oscar Gandy has been asking at least some of the “third phase” questions since the late 1980s… And the “second phase” started at the latest with Cynthia Dwork’s turn to fairness in 2011, much earlier than many of the endless “AI ethics” codes.

Finally, this excellent @adwooldridge column makes clear that some of these issues have been raised over several centuries!

‘Scientific management retreated in the face of popular fury: Charles Dickens satirised it in the person of Mr Gradgrind, who wanted to “weigh and measure every parcel of human nature”. F.R. Leavis, a literary critic, dubbed it “technologico-Benthamism”.’

As Adrian Wooldridge added, ‘The universities to which A-level students are struggling to get admitted provide an example… Tenure and promotion are awarded on the basis of the production of articles (which can be measured) rather than teaching (which can’t), so students suffer.’ See also Goodhart’s Law, and this excellent letter from the director of the UK’s national institute for AI and data science:

while we’d be happy to support third parties to develop and deploy artificial intelligence and data science ethically and efficiently, it didn’t take an algorithm — or in this case a statistical model — to spot that the main issue was human. Its formula may have done exactly as it was meant to — but the Department for Education and Ofqual lacked the open, interdisciplinary, accountable, equitable and democratically-governed processes to ensure a fairer result for our students.

Adrian Smith, Director, Alan Turing Institute