“If we create sentient beings will that change how we feel about ourselves?”
Dr Beth Singler
She is speaking as part of the Cambridge Series and has been selected as one of the Hay 30, a list of thinkers to watch compiled to mark the prestigious literary festival’s 30th anniversary.
Singler’s talk is based on a film she made for the Cambridge Shorts scheme, funded by the Wellcome Trust and the University of Cambridge. The film was screened at the Cambridge Festival of Ideas and Singler, a Research Associate on the Human Identity in an Age of Nearly-Human Machines project at the Faraday Institute for Science and Religion, has been showing it and speaking about it in public talks and at schools.
The feedback has been so good that three further films are being made with funding from the Faraday Institute.
“We are interested in the bigger questions,” says Singler. “People think, for instance, that pain is a simple issue, but it is complex and opens up all sorts of questions about consciousness.”
The films will follow a similar format to the original Pain in the Machine film, incorporating narratives, clips from science fiction and interviews with experts.
Cambridge Shorts supports early career researchers in making professional-quality short films with local artists and filmmakers. Singler was one of a number of researchers who submitted proposals for short films.
As part of the process, a kind of speed-dating event was held, and there Singler met Ewan St John Smith, who is based in the Department of Pharmacology, where he is group leader of the sensory neurophysiology and pain group. That event was followed by another, at which the two researchers met Colin Ramsay and James Uren of Little Dragon Films.
The film includes science fiction clips, and Singler says science fiction is often the first to introduce the possibilities of technology to a wider audience and can influence and spur on technological developments. However, she says sci-fi tends to depend on the idea of conflict to drive the narrative and can also shape the way we view technology. “It depends on binaries of utopia or dystopia. It doesn’t take the complexities in the middle into account,” she says.
Singler’s research involves looking at human identity in an age of nearly human machines. She studies how technological advances will affect society in the near future and how they will alter how we view ourselves as human beings. “If we create sentient beings will that change how we feel about ourselves?” she asks.
Because she is based at the Faraday Institute for Science and Religion there is also a religious aspect to the research. Singler says a lot of talk about technology “sounds a lot like end of days eschatological narratives from the Judaeo-Christian traditions”.
She adds that discussions around robots can tend towards anthropomorphism on the one hand (attributing human characteristics to robots) and robomorphism on the other (responding to humans as if they were machines). “The lines are blurring,” she says.
She refers to TV programmes such as Humans, but also to DeepMind’s AI programme AlphaGo, saying that technology doesn’t have to look like a human for people to develop a relationship with it. People even created fan art around AlphaGo.
On the robomorphism issue, Singler cites projects such as Neuralink, Elon Musk’s company, whose ambition is to link the human brain to a machine interface. Musk’s view is that Artificial Intelligence and machine learning will create computers so sophisticated that humans will need to implant “neural laces” in their brains to keep up.
Singler says the questions opened up by AI are profound. “AI will not just replace the simple physical jobs that we may not want to do. It may also replace the mental jobs that we still want to do. We need to prepare and ask questions about what humans are actually for. For centuries we have been defined by what we do. Maybe we need to think about a post-work future and to redefine how we think about ourselves.”
She adds that we need to create frameworks in which to discuss the implications of AI and cites the EU proposal for rights for electronic persons as one example of a legal framework.
She says: “People say that we should stop making robots, but I think that is unlikely. I don’t think there are any definitive answers, but people need to have the spaces where they can learn about what is happening and we need to enable conversations.”
Source: University of Cambridge