Dangerous machines: what not to trust about data analytics
I am deep in conversation with Professor Stephen Roberts, Professor of Machine Learning in Information Engineering at the University of Oxford.
We are preparing for his talk at the forthcoming STEER conference day on educating the mind in a robotic age at the 2019 FT Weekend Oxford Literary Festival. We are discussing the contribution of big data, data analytics and machine learning to solving some of the world’s problems.
He says three things that surprise me.
First, he says that much of the current excitement around AI is hype. AI has not moved on anything like as fast as people are being led to believe. The basic mechanisms and mathematics of interrogating datasets have been around for 40 years. What has changed is the scale of the data on which machines run that mathematics: networked machines mean that huge, aggregated datasets can be created, and patterns within them detected.
What people don’t realise, he says, is that this mathematics is very limited when facing situations in which the data is unpredictable, poorly described, open-ended or even unknown. In other words, winning at Go is easy for a machine; better addressing the social care crisis is hard. AI is a bit like a very clever but narrow pupil: brilliant when sat down at a specific board game he has practised, but left without guidance to choose the best activities to educate himself in a room full of puzzles, books, posters and people, he doesn’t know where to start.
Second, he says that people need to realise that algorithms are sociopathic. We tend to think of machines as morally neutral; in fact, they are morally blind. He explains that algorithms must first be trained on a set of data in order to recognise patterns. The data they are trained on determines the patterns they detect and the meanings they ascribe to them. If the dataset is distorted (for example, skewed towards negative reports of violence committed by women), then the machine will be trained to see women as more violent and will project that bias into its future predictions.
You may have seen the BBC report entitled Are you scared yet? Meet Norman, the psychopathic AI (https://www.bbc.co.uk/news/technology-44040008), which illustrated this point vividly.
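The mechanism is easy to see in miniature. Below is a toy sketch with entirely invented data: the "model" is nothing more than a frequency count, but the principle is the same for any pattern-learner, which can only reproduce the distribution it was shown.

```python
# A minimal sketch (with invented data) of how a skewed training set
# shapes a model's predictions. The "model" is just a frequency count,
# but any statistical learner inherits its training data's distortions.
from collections import Counter

def train(records):
    """Estimate P(violent | group) from labelled (group, is_violent) pairs."""
    totals, violent = Counter(), Counter()
    for group, is_violent in records:
        totals[group] += 1
        if is_violent:
            violent[group] += 1
    return {g: violent[g] / totals[g] for g in totals}

# A distorted sample: negative examples about women were over-collected.
biased_sample = (
    [("women", True)] * 8 + [("women", False)] * 2
    + [("men", True)] * 2 + [("men", False)] * 8
)

model = train(biased_sample)
print(model)  # {'women': 0.8, 'men': 0.2}
```

The model is not malicious; it is faithful. It "learns" that women are four times more likely to be violent, because that is exactly what the gathered data said.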
Third, following from the second, he says that the moralities of the data gatherer and the algorithm developer determine the moral trustworthiness of the machine’s conclusions. In this regard, machines don’t remove moral risks for societies; they amplify them, creating a new priesthood with the power to morally steer society.
Unlike its mediaeval counterpart, this priesthood is not locked in the scriptorium with closed access to Vulgate biblical texts. The new priesthood resides in anonymous warehouses and offices in California, Oregon, Moscow, Beijing, Pyongyang and London: engineers and mathematicians who write the code that analyses our world, interprets its meaning and reflects back to us the world we believe in.
Understanding the opportunities, risks and threats of machines is critical for Heads engaged in educating the next generation.
STEER is delighted to announce the first FT Weekend Oxford Literary Festival Educational Leaders Day
Educating the Human Mind in a Robotic Age
April 1st 2019, Oxford
The day is designed for educational leaders and policy makers at the 2019 FT Weekend Oxford Literary Festival, and will address the changes required to educate the human mind in a robotic age.
- The morning session will focus on the effects of social media and digital technologies on the human mind, the ability to learn and our mental health.
- The afternoon session will focus on the unique cognitive capabilities required by graduates to succeed in an economy of machine learning and AI.
KEYNOTE SPEAKERS INCLUDE:
• Professor John Bargh, Director of the Automaticity in Cognition, Motivation, and Evaluation Lab at Yale University. John has led global research into cognitive priming for the past three decades. John is uniquely positioned to explain the unconscious impacts of the real and digital environments on the minds of young people.
• Professor Stephen Roberts, Professor of Machine Learning in Information Engineering at the University of Oxford. Stephen has pioneered the development of intelligent algorithms to analyse big datasets. Stephen will clarify both the power and limits of machine learning, identifying the uniquely human cognitive capacities which will remain critical to educate in a robotic age.
• UNESCO ICT in Education, sharing global perspectives on technology in education.
• The day will be hosted by Dr Simon Walker, Co-founder of STEER. Simon has led STEER’s pioneering work in reducing mental health risks, signposting learning-to-learn skills and improving employability in students across more than 100 schools.
OTHER HIGHLIGHTS INCLUDE:
- Data from an ongoing study of the development of adolescent social cognition between the ages of 8 and 18, involving 30,000 students.
- An extended panel interview and Q&A with keynote speakers.
Event places are limited to 100 and are available to headteachers, deputies and policy makers in educational trusts and UK government on a first-come, first-served basis.