Are we asking the wrong questions about AI?
There's no lack of discussion about whether machines can be conscious and whether they can undertake all that is distinctly human. But these discussions tend to centre on the relatively narrow question of machines' computational capabilities, obscuring important aspects of how we think about consciousness.
Let's begin with Alan Turing's seminal paper, Computing Machinery and Intelligence, where he proposes replacing the abstract question "Can machines think?" with a clever thought experiment called the Imitation Game, now more popularly known as the Turing Test. In this set-up, an interrogator communicates with someone in another room using only typewritten messages, and may ask whatever questions he wants. Turing's proposal is that, instead of wondering in the abstract whether machines are capable of thought, we take it as sufficient for a machine to count as thinking that a digital computer can answer the interrogator's questions well enough to fool the interrogator into believing it is human.
Turing gives examples of how exchanges in this game could occur:
Q: Please write me a sonnet on the subject of the Forth Bridge.
A: Count me out on this one. I never could write poetry.
Q: Add 34957 to 70764.
A: (Pause about 30 seconds and then give as answer) 105621.
Q: Do you play chess?
A: Yes.
Q: I have K at my K1, and no other pieces. You have only K at K6 and R at R1. It is your move. What do you play?
A: (After a pause of 15 seconds) R-R8 mate.
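As an aside, the arithmetic exchange is easy to check, and the machine's quoted answer is actually wrong: 34957 + 70764 comes to 105721, not 105621. A minimal sketch of the check (note that "make mistakes" appears later in the very list of things sceptics say machines cannot do):

```python
# Verify the sum from Turing's sample dialogue.
correct = 34957 + 70764
machine_answer = 105621

print(correct)                    # the true sum: 105721
print(machine_answer - correct)   # the machine's answer is off by -100
```
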
He argues that this set-up is valuable because it is "suitable for introducing almost any of the fields of human endeavour that we wish to include".
This aspect of the test is important to note because the stringency of its requirements is often overlooked. For example, the recent unveiling of Google Duplex, Google Assistant's newest feature that automatically sets up appointments for its users, was met with excited headlines like Did Google's Duplex AI Demo Just Pass the Turing Test?. While the system certainly seems competent at its narrow goal, it does not come close to capturing the massive variability and depth of human communication, and so plainly fails the Turing Test.
Turing's paper came out in 1950, and he hoped that within a century, it would be commonsensical that machines could think:
I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.
While the century hasn't run out just yet, this transformation in the way we think hasn't quite come to pass. One reason for this is a class of arguments Turing termed "Arguments from Various Disabilities": even if certain human capabilities could be carried out by machines, the objection goes, it takes more than that to actually think or be conscious. There will always be certain things they wouldn't be able to do, including:
Be kind, resourceful, beautiful, friendly, have initiative, have a sense of humour, tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream, make some one fall in love with it, learn from experience, use words properly, be the subject of its own thought, have as much diversity of behaviour as a man, do something really new.
Turing's own response to these was that they were a result of faulty scientific induction. According to him, people had just been exposed to a small range of machines with limited capabilities, and had made sweeping and unwarranted assumptions about the limitations of all machines based on these. This is almost certainly right, but here Turing fails to develop a line of inquiry which I believe is vital to understanding the force of this objection.
Turing was sensitive to the fact that an adult mind doesn't leap into existence from nowhere. He points out that along with the mind at birth, there is much that is taught and experienced which eventually shapes how the adult mind functions. But here Turing doesn't go far enough in seeing how dependent consciousness is on other people. After all, his way of talking about learning and experiencing treats machines as purely cerebral and solipsistic.
As Abeba Birhane explains in a recent Aeon article titled Descartes was wrong: 'a person is a person through other persons', there are aspects of human identity and being which are irreducibly relational. The presence of others and paying attention to their perspectives (both actual and imagined) play crucial roles in how humans develop a sense of self and function in the world.
I'm not suggesting Turing necessarily missed this; after all, a key reason for his development of the Imitation Game was to produce a stripped-down test that would not require much background information. But by not exploring questions about the nature of the self in his paper, Turing inadvertently kicked off a research programme that centred questions about the capabilities of machines in isolation, and to this day this colours the way we think about AI. To move past this, we'll have to face head-on those possibilities where machines develop their capabilities over time, through interactions with humans and each other, all while being able to run computations much faster than we ever could.
I suspect that doing this will force us to confront the very plausible scenario of our oncoming obsolescence. It's tempting to pretend this isn't a serious issue, that it can never come about, but to echo Turing, I think "consolation would be more appropriate".