05 Feb When Will The Machines Wake Up?
Machines matter to people. But they "matter" only because they affect people. It's widely supposed that today's machines themselves cannot be "affected" — that they have no feelings, no conscious thought, no sentience.
Interestingly enough, it might not always be that way.
While biology has held a near-total monopoly on consciousness for the last few hundred million years, many machine learning researchers believe that humans may eventually replicate self-awareness and inner experience (rough terminology that we'll use to stand in for the broad term "consciousness" throughout this article) in our machines. Some of their timelines are sooner than one might expect.
Over the last three months I’ve interviewed more than 30 artificial intelligence researchers (essentially all of whom hold PhDs). I asked them why they believe or don’t believe that consciousness can be replicated in machines.
One of the most common arguments for why consciousness will eventually be replicated runs as follows: nature bumbled its way to human-level conscious experience through blind evolution, so with a deeper understanding of the neurological and computational underpinnings of what "happens" to create a conscious experience, we should be able to engineer the same result deliberately.
Professor Bruce MacLennan sums up the sentiments of many of the researchers in his response: “I think that the issue of machine consciousness (and consciousness in general) can be resolved empirically, but that it has not been to date. That said, I see no scientific reason why artificial systems could not be conscious, if sufficiently complex and appropriately organized.”
It might be supposed that attaining conscious experience in machines may require more than just a development in the fields of cognitive and computer science, but also an advancement in how research and inquiry are conducted. Dr. Ben Goertzel, artificial intelligence researcher behind OpenCog, had this to say: “I think that as brain-computer interfacing, neuroscience and AGI develop, we will gradually gain a better understanding of consciousness — but this may require an expansion of the scientific methodology itself.”
Some researchers are even more optimistic, believing that machines may in some form already be conscious (such as Dr. Stephen Thaler of Imagitron, LLC), or that they have a good likelihood of attaining consciousness within the next five years (like Dr. Pieter Mosterman of McGill University in Canada); others are less hasty with their timelines.