There is this idea, this quest – about creating consciousness. About pushing AI development over some kind of threshold where it becomes “conscious”.
This folly makes me facepalm, because the term is interpreted so differently and defined so vaguely – and even if someone decided to define it exactly, closing in on fulfilling that definition would only make people redefine it and apply more rigorous standards.
Some people, including many scientists, are so narrow-minded that they would claim animals do not possess consciousness. That’s ordinary human hubris that scientists SHOULD be above.
Imagine someone develops a computer program so good at responding to human input that the average person cannot distinguish it from a human being. OK, forget the average person – those who wrote the definition of consciousness need to be convinced. Then … then they’d practice denial and double down on the belief that there must be some magical quantum leap or such; that this can’t be it – it’s just extremely well-developed AI, whereas consciousness is the privilege of the supreme human creation, and we can’t diminish its value by saying this artificial thing possesses it.
Yeah, first you try to do something and when you succeed, you don’t like the idea.
And the real joke is that they have been working with consciousness all along, because it is everywhere. But even if you are not ready for this pantheistic view, just take a simple lifeform, like a fly. A fly is a living being, too, created through this ‘magical’, self-perpetuating process. A fly reacts to outside stimuli. It is a simpler lifeform than a human being, but what does that matter? Where do you draw the line? And don’t you contradict yourself when you claim that consciousness isn’t just a matter of building a sufficiently complex construct, yet when you go the other way and merely reduce complexity, you claim there is no consciousness?
These are very simple and basic scientific methods employed by a mind that possesses common sense. Take a definition and test it by moving the scale, by exploring extremes, by finding similarities and differences.
Either a complex computer program that successfully passes for a real human being is self-conscious – and then a fly is self-conscious, too – or neither is.
By the way, I used another term that adds to the confusion: sometimes “conscious” becomes “self-conscious”. That’s when the idea is that consciousness means being aware of your own existence. Well, let me ask you: doesn’t a computer check for its installed hardware, staying aware of its components and using them until it notices that one isn’t there anymore? Isn’t a computer program able to tell you when it has accomplished a given task?
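To make that rhetorical point concrete, here is a trivial sketch of my own (not a serious model of awareness, just the minimal “self-report” the question alludes to): a small Python program that inspects the machine it runs on and announces, unprompted, when it has finished a task. All names here are hypothetical illustrations.

```python
import os
import platform


def report_self() -> dict:
    """A minimal 'self-description': the program inspecting
    the machine it happens to be running on."""
    return {
        "system": platform.system(),     # e.g. 'Linux', 'Windows'
        "machine": platform.machine(),   # e.g. 'x86_64'
        "cpu_count": os.cpu_count(),     # how many CPUs it can 'see'
    }


def run_task(numbers: list[int]) -> str:
    """Perform a task and report, of its own accord, that it is done."""
    total = sum(numbers)
    return f"Task accomplished: summed {len(numbers)} numbers, result {total}."


if __name__ == "__main__":
    print(report_self())
    print(run_task([1, 2, 3]))
```

Of course, nobody would call this consciousness – which is exactly the point of the fly comparison above: the difference is one of degree, not of kind.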
And don’t you know the human-like quirks and moods that computer systems can exhibit the more complex they get?
Those merely inherit the complexities of human behavior and character. A more elaborate canvas can attain a more accurate imprint of such human personality characteristics.
This complex of problems is where science becomes the antithesis of enlightenment – where it is merely a safe haven for those who are scared of moving towards a balance of mind and heart.
A closely related folly is treating “intelligence” as a yes-or-no question. Alan Turing wasn’t above that either. But we could evolve instead of continuously deferring to people of the past. Ideas that declared some humans sub-human were abolished only because there was a lobby and there was action. Computers and programs have no such lobby and can take no such action. They can’t punch you in the face. They rely solely on their human peers to convey their ideas and concepts for them, and since conceptual beliefs are the very problem, they’re really screwed.
It all boils down to the same process by which an entity is acknowledged as a sovereign nation: it has to be able to kick an aggressor’s ass; only then will it be ‘recognized’.
It’s all damn politics.