Well, a bit of a hiatus, but I should be posting more regularly again (although I remember saying that before). In the pipeline, half-written, are posts on Alien Physiology, Plasma Weapons, Residential Space Stations, and a few small 'Random Numbers' posts.
Honda's ASIMO
Artificial Intelligence has to be one of the oldest themes in SF, both written and visual. Rather than try to list the many ways it has been employed, or try to pick the first, most famous, or best examples, I will simply link to the Wikipedia page on Artificial Intelligence in Fiction. It is especially helpful as the entries are subdivided under the more common treatments.
There is a wide range of fictional AI: from Halo's Cortana to the 'Minds' of Iain M. Banks's Culture series, from the Replicants of Blade Runner to the disembodied mind of Jane in the Ender's Game series. Most of them have one thing in common, however: they are all very human.
With more than a few this is intentional. The question of whether an AI can be considered 'human' is as old as the concept of AI itself. It is the basis of innumerable SF works, and will continue to be a cornerstone trope for the foreseeable future. With others it is an indirect rather than direct result of the themes and plot. The robot could be used as a metaphor for the dangers of logic untempered by emotion, or to show the dangers of blindly following a system. In those examples the commonly surmised 'traits' of an AI are used to highlight the human characters and themes. In others the AI might be a perfectly moral being, contrasted with fallible humanity. Others yet simply use AI as a convenient and ominous foe.
And for the most part this is not an issue. It is a perfectly legitimate way to depict AI, even before considering Burnside's Zeroth Law of space combat: SF fans relate more to human beings than they do to silicon chips. If it is not necessary for the story you are telling, why go to the extra effort of creating a character that might be difficult for the readers or audience to understand, and will definitely be hard to write?
But will AI ever be human-like? Not 'will they be human'; that is a question none can answer at this point in time. But is it likely that a full AI, should we create it, will be like it is depicted in SF? I personally think it unlikely.
Note, however, that there is an exception: AI designed to mimic human behaviour. Such a system may be possible through pure number-crunching, statistical analysis of someone's prior decisions giving an accurate prediction of how they would respond. For the point I am making, AI is assumed to be an intelligence that has not been expressly intended to be anthropomorphic. An AI created by copying a human mind might well act like its original to an extent, but one built from scratch is unlikely to do so.
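To make that number-crunching a little more concrete, here is a minimal sketch; the situations, responses, and the crude most-frequent-answer rule are all my own invention, not a claim about how such a system would really work:

```python
from collections import Counter

# Hypothetical record of a person's prior decisions: (situation, response) pairs.
prior_decisions = [
    ("offered overtime", "declined"),
    ("offered overtime", "declined"),
    ("offered overtime", "accepted"),
    ("asked for help", "agreed"),
    ("asked for help", "agreed"),
]

def predict_response(situation):
    """Predict the most likely response purely from past frequencies."""
    responses = [r for s, r in prior_decisions if s == situation]
    if not responses:
        return "unknown"  # no history to draw on
    return Counter(responses).most_common(1)[0][0]

print(predict_response("offered overtime"))  # -> "declined"
print(predict_response("asked for help"))    # -> "agreed"
```

A real mimic would obviously need far richer data, but the principle is the same: prediction by statistics, with no need for anything resembling human thought underneath.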
Language, Data Processing, and Intelligence
Laying aside philosophy, ethics, and religion, the human brain is fundamentally a computer: a device that processes information gathered through the senses and produces a response. It follows, then, that our intelligence is directly related to both how we receive information and how we process it. By extension of the first point, language, or more strictly communication, is also vital, as it is a major source of information for humans and seems likely to be so for any AI.
But in all these areas - the gathering, processing, and transmission of data - humans are fundamentally limited. We have relatively weak senses, limited ability for some mental processes such as mathematics and memorisation, and can only communicate through the inefficient process of speech. What this means is that we tend to simplify. What we perceive is a simplification of the world around us. Our memories are prioritised according to what we need to remember. And communication is worse still. The fact that two people never mean quite the same thing by the same words is alone enough to make it inaccurate, but we are forced to simplify even further by the time it would take to transfer by speech all the information we hold on a subject. Names are the product of this process, representing a huge amount of implied information with a single quickly spoken word.
AI is different. While there are limits to sensor technology, there is no reason for an AI, should it so wish, not to be connected to sensors that give it a view of an entire solar system. It also sees things in more depth - all frequencies of light, electric and magnetic fields, gravity gradients, etc. There is more precision - an AI would know exactly what it saw, down to many decimal places. Provided, of course, that its sensors are that accurate.
Then too it has better memory. Even with current electronic storage huge amounts of data can be recorded with little effort. And unlike a human, who cannot select what to remember, an AI can organise its memory as it desires. Perfect recall is also a given for electronic memory. The AI will never have to question whether it remembers something correctly, and thus will have no need of the numerous devices - cameras, computers, etc. - that humans use to store information.
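By way of a small sketch (the topics and tags here are invented purely for illustration), memory of this kind is simply a store that returns records exactly as they went in, organised however its owner pleases:

```python
# A hypothetical 'memory' keyed by topic, with tags the AI chooses for itself.
memory = {}

def remember(topic, data, *tags):
    """Store a record verbatim, indexed under whatever tags suit the AI."""
    memory[topic] = {"data": data, "tags": set(tags)}

def recall(topic):
    """Exact recall: the stored record, byte for byte, or nothing at all."""
    record = memory.get(topic)
    return record["data"] if record else None

def recall_by_tag(tag):
    """Reorganising memory is just another query, not a feat of concentration."""
    return [topic for topic, record in memory.items() if tag in record["tags"]]

remember("first Mars orbital survey", "raw telemetry ...", "mars", "navigation")
print(recall("first Mars orbital survey"))   # always the exact original record
print(recall_by_tag("mars"))                 # -> ['first Mars orbital survey']
```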
In processing information the AI is also fundamentally different to a human. There would seem to be no limit to how much an AI can think of at once. The intelligence itself could merely be a controller directing the operations of hundreds of subsidiaries, but doing so with an efficiency that a human, with fallible memory and concentration, could not manage. It could also be free of bias, and be aware of exactly what impact any preconceptions have on its perception. Humans are not aware of the workings of their brains, but an AI could use diagnostic software to ensure that its thoughts were on track.
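To give a rough idea of that controller-and-subsidiaries arrangement, here is a toy sketch; the task names are purely illustrative, and an ordinary thread pool stands in for the AI's subsidiary processes:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical subsidiary routines the central intelligence delegates to.
def track_orbit(body):
    return f"orbit of {body} updated"

def monitor_reactor(unit):
    return f"reactor {unit} nominal"

def controller(tasks):
    """Central intelligence: farm tasks out, gather every result, miss nothing."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(fn, arg) for fn, arg in tasks]
        return [f.result() for f in futures]

reports = controller([
    (track_orbit, "Phobos"),
    (track_orbit, "Deimos"),
    (monitor_reactor, 1),
    (monitor_reactor, 2),
])
print(reports)
```

The point is not the threading itself but the bookkeeping: the controller never forgets a delegated task and never loses track of a result, in a way no human supervisor could match at scale.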
When communicating, the AI is faster than a human thanks to its ability to transfer information directly. Nor will that information be diluted and skewed by perception as a human's spoken words are - although the possibility that the AI can lie is a very real one. An AI might not use words at all: a person could be referred to not as 'Bob' or 'Jane' but as a file reference that leads to the sum of all extant knowledge on that person. And while this effect would be more pronounced the more processing capacity an AI has, it should be evident even in lesser versions.
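As a toy illustration of that last point (the record fields and the identifier scheme are entirely my own invention), a 'name' for such an AI might simply be a reference that resolves to everything it holds on that person:

```python
# Hypothetical store of everything the AI knows about each person,
# keyed by an opaque reference rather than a spoken name.
people = {
    "person/0x3f2a": {
        "names_used_by_humans": ["Bob"],
        "first_encountered": "2098-04-12",
        "conversation_logs": ["..."],
        "assessed_reliability": 0.97,
    },
}

def refer(person_ref):
    """'Saying' a name is just handing over the reference; the listener
    can follow it to the whole record, not a single compressed word."""
    return people[person_ref]

print(refer("person/0x3f2a")["names_used_by_humans"])  # -> ['Bob']
```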
If these factors - information gathering, processing, and transmission - all shape human intelligence and consciousness, is it not logical to assume that an AI with widely different abilities would also differ in its intelligence?
So what will AI be like? I don't know. But the chances of a full AI being as similar to a human as is often portrayed seem to me unlikely in the extreme. This is only my opinion, however; the question is one that can only be answered by the development of an Artificial Intelligence (and then we'll have bigger questions, like who thought self-modifying software was a good idea for a nuclear defence computer). I intend to look at the more nitty-gritty details of AI in a future post, once I've read up on current developments in the field.