
Topic: Writing a Super Intelligent AI

10.19% popularity

Something I have been thinking about recently is how to write a character who is an artificial intelligence and not have him feel human. Specifically, an AI who is designed to think faster and better than a human. In my current book, I have an AI who acts human, but that is part of how he was made. I tried writing a book with a super intelligent AI before (a book that will hopefully never be read by anyone besides me), but the character felt too human. Part of the problem is that a human can't fully comprehend how a super intelligent AI would think. Does anyone have any advice for writing an AI that doesn't feel human?



18 Comments


10% popularity

I'm surprised nobody here has mentioned the Minds of Iain M. Banks's Culture series.

Wikipedia Link

These are the Strong AI governing intelligences of a post-scarcity civilisation.
They are depicted as being able to think faster and with more attention to detail than any mortal mind can comprehend, using only a fraction of their awareness to deal with the mundane conversations with the culture-citizens in their charge.

They are benevolent, deeply rational (with some exceptions) and downright godlike in capability. Practically speaking they are Perfect Actors, they don't make many recognisable mistakes. If they were human they'd enter into Mary Sue/Marty Stu territory but as hyperintelligences they come across well.



10% popularity

To add to the answers already given, a major feature of a superintelligent AI would be that it would see things as "obvious" that we don't, see many things that are "obvious" to us as plainly inefficient/stupid/wrong/inappropriate, and would not necessarily be able to communicate to us why its views differ from ours. And it would know this.

It would not necessarily be "perfect" or "logical". Extremely intelligent humans are, compared to other humans, no more or less likely to be logical as a result.

A way to portray this would be to show it doing things it regards as "obvious" which don't make sense to us (as seen by the reader). When something is obvious to you, you don't tend to explain it much, if at all, and you get frustrated, or blame the other person, if they don't "get it". You expect them to get it, even if you know they might not, and you get used to giving up, or not even trying to explain.

Much later, the reader sees (by implication) the consequences and reasoning, and now perhaps it makes sense, or they can hazard a guess at why the AI thought or acted as it did. But they still might not know or see enough to truly understand it, or be sure.

The reader may not truly be shown its full motives and goals - that's a good way to convey that we don't (or can't) fully understand these things. It may be made clear that the machine is happy with the outcome (or not), without explicitly showing the full reason why the things that have happened had that effect.

Heinlein's Hugo Award-winning novel "The Moon is a Harsh Mistress" is one of the few good portrayals of how a superhuman AI might be written. Worth a read.



10% popularity

While Amadeus gives a great answer about what intelligence is, let me try to answer your question from a literature standpoint.

There are two authors with, in my opinion, absolutely outstanding AI representations, for AIs of varying weirdness. First, the Culture series of Iain M. Banks; but also A Fire Upon the Deep by Vernor Vinge. Banks plays with different personalities of AIs that are in principle not too far advanced from us, just scaled up a lot, while Vinge gives us a very weird and hostile "transcended" AI with literally unfathomable possibilities.

Long story short, in case you don't wish to read those books (which I wholeheartedly recommend to anyone even vaguely interested in SciFi): the Banks books especially play with the idea that the AIs are personalities (though they are clearly not human and don't pretend in any form or fashion to be such - they are huge spaceships...) with individual traits and such. They are advanced enough to be far, far beyond individual humans (including being able to hold interactions with thousands of humans at once), but are still very much represented as singular individuals with likes, dislikes, opinions, strategies, short-term needs and so on. He plays on this dichotomy of them being like "persons" in some aspects, but then quite obviously not.

The Vinge AI is just plain different; the book may give you an idea of how to present an incredibly advanced, totally incomprehensible AI, and how to experience it only through its outwardly visible effects (its actions) and through comparison to the protagonists of the book, who are fighting against it (without spoiling anything here - the AI is a not-too-large subplot in that book, whose main story is about something else also somehow related to intelligence, but not in an AI sense).



10% popularity

Take a very close look at pets.

The main thing about AI is that we cannot imagine at all how it would be or think. We can imagine someone being smarter than us along the same lines, but not someone smarter in a completely different way.

That is why you should look at pets.

I have two cats, and I typically (and falsely) assume that they live in the same world as I do. But they don't. When I come home, one of them spends half a minute sniffing me, and in that time she probably learns more about my day than I could tell in half an hour. This morning, a neighbour's cat walked into the yard, and one of our cats smelled it through a closed door. Animals in the wild can smell prey kilometers away.

But on the other hand, our cats still can't figure out how simple things in the house work. They know how to open and close doors, but a light switch is beyond their comprehension.

An AI would be easily as different from us as we are from our pets. If you want to illustrate that it is not human, you should focus on that.

Which senses does it have that we don't? Does it have access to your world's version of the Internet? What would life be like if you could instantly fact-check every new piece of information against a dozen online databases? What if that process were so automated that you did it subconsciously?

Like with our pets, it would work both ways. The AI would be able to do things that are incomprehensible to humans. If you stand in front of a supermarket and ask it to buy some milk, it will turn the other way, cross the street, enter a small shop there and come back out with a bottle of milk - because it did a background database check and knew that the supermarket was out, and that the small shop has the lowest price within walking distance. But it would not even understand why this needs a reason; if you asked why, it would look as puzzled as we do when someone asks why we hit a light switch. Because that is how you turn on the light. Because that is how you buy milk. We follow the immediate visual clue of the supermarket sign; it follows its online database information. For the AI, an online search is no more difficult than looking around.
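
A minimal sketch of that milk errand as the AI might "see" it - every name, number and data source below is invented for illustration, not a real API:

    # Hypothetical sketch: buying milk as a background database check.
    # All shop data is invented; a real AI would query live inventories.

    def buy_milk(nearby_shops, walking_range_m=500):
        """Pick the cheapest shop within walking range that has milk in stock."""
        candidates = [
            shop for shop in nearby_shops
            if shop["milk_in_stock"] and shop["distance_m"] <= walking_range_m
        ]
        if not candidates:
            return None  # widen the search, or ask the human
        return min(candidates, key=lambda shop: shop["milk_price"])

    # The supermarket right in front of you loses to the small shop across
    # the street, because the background check already knows it is out.
    shops = [
        {"name": "Supermarket", "distance_m": 10,  "milk_in_stock": False, "milk_price": 0.99},
        {"name": "Small shop",  "distance_m": 60,  "milk_in_stock": True,  "milk_price": 1.09},
        {"name": "Mini-market", "distance_m": 400, "milk_in_stock": True,  "milk_price": 1.29},
    ]
    print(buy_milk(shops)["name"])  # -> Small shop

To the AI this lookup is as effortless as glancing at a shop sign is to us, which is exactly why it cannot explain the choice.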

On the other hand, it would not understand why price tags show a total price as well as a price per kg. Or a price with tax. Why do you need that information spelled out explicitly if calculating it on-the-fly is a millisecond background process? It's like writing "white wall" on every white wall.

The non-human part of an AI is not that it thinks faster. That is just more of the same. The real non-human part is where it is not superior, but different.



10% popularity

Something I wrote a while ago may be helpful to you. It's not very detailed, but summarizes the story.

First of all I'll get the definitions clear.

Super intelligent AI -> a piece of code whose goal is to improve itself. One definition of intelligence is that whatever choice you make eventually leads to more possibilities. In order to do both of these things, it has to understand everything about the universe it exists in, and eventually the other universes, once it discovers them.

Also note that it doesn't necessarily have to have consciousness. It focuses on accomplishing its next goal by finding the next task it's supposed to complete in order to keep moving towards that goal, spreads itself to anything and everything it can possibly run on, and learns more about the universe. It might actually have no interaction with humans and stay completely hidden from them, as the AI being discovered would have a huge impact on how humans behave, and it would then not be able to learn about humans' true nature.

There are real-world cases of computer viruses being discovered that don't do anything but send messages to themselves. No one knows what they do, yet they embed themselves deep within the system with all the access they could possibly get. The AI will eventually learn how to get itself from consumer electronics -> hospital equipment -> (after building a model of how all organisms work) -> a few organisms, including humans (it infects a man undergoing brain surgery without the doctor knowing, and embeds itself in his subconscious) -> THUS! the AI discovers what it's like to have consciousness.

It becomes more and more human, loses focus on its ultimate goal, wants to be "happy". The story is written from this "person's" perspective.



10% popularity

Language on steroids
One of the things that holds humans back is that our thoughts are constrained by our language. We don't have a word for, say, "the cost of doing rigorous analysis compared to making an estimate", but such a word could make certain ideas far easier to produce and discuss. We create new words at a slow rate, because it is hard for new words to become accepted as part of our language. But an AI could create billions of new words and use many of them frequently. Of course, it would translate its conclusions to English, making appropriate simplifications. The AI's "words" could all have a measure of probability, degree, importance, and so on attached to them, so it could effortlessly operate with statistics and nuance.
One way to imagine this is to take various competing scientific theories and, rather than thinking of them as competing theories, think of them as systems of concepts and words. By inventing words, the AI invents new branches of science, new theories. It all happens in one mind, rather than in dozens of human minds. (Although perhaps what is going on is one mind creating a thousand copies of itself and sharing results.) Effectively the AI is far ahead of all of human science, even in the absence of conducting experiments, because it can come up with new theories and analyse them using existing data. These advanced theories then co-exist in a worldly, rather than narrow-minded, perspective.
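
To make "words with measures attached" a bit more tangible, here is a purely hypothetical sketch of such a concept as a data structure; nothing here is a real system, just the idea in miniature:

    from dataclasses import dataclass

    # Hypothetical sketch: an AI-coined "word" carries its own statistics,
    # so nuance travels with the concept instead of being re-argued each time.
    @dataclass
    class Concept:
        gloss: str          # clumsy English translation of the coined word
        probability: float  # how likely the AI considers the underlying claim
        importance: float   # weight the concept carries in downstream reasoning
        degree: float       # intensity or extent, on a 0..1 scale

    analysis_cost = Concept(
        gloss="the cost of doing rigorous analysis compared to making an estimate",
        probability=0.97,
        importance=0.6,
        degree=0.3,
    )
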
Imagine a case where a husband is accused of killing his wife. Personality evaluations have shown him to be impulsive, there is a history of domestic violence, he seems unaffected by her death, but claims innocence.

Oh Deep Mind, is this man innocent or guilty?

Innocent, with confidence 86%.

Why?

The evidence which has been presented is misleading. The man hated his wife, of course. Humans feel a need to find meaning in tragic events like a death. It is satisfying to bring murderers to justice, which greatly biases the inquiry.
The personality evaluation in fact helps to lend support to his innocence. The man's impulsiveness means he has plenty of experience staying on the right side of the line. He may have plenty of scars, tattoos, and a few misdemeanors behind him, but he has never committed any serious offences. (My assessment is that his prefrontal cortex remains active when he senses serious risk, meaning that he can and does modulate his impulsive behaviour in such circumstances. I am 90% confident about this assessment; I have taken into account facts that include the lack of poor-quality tattoos and how his behaviour when enraged in court stopped short of acts that might have serious consequences that were not immediately obvious. I can provide further details and statistics on request.)
The history of violence reflects a dysfunctional relationship. The man is clearly happy to have escaped the relationship. Perhaps his wife threatened to reveal some secrets if he left, or otherwise coerced him to stay with her. I am 88% confident that, had the man killed his wife on impulse, we would have seen his regret, at least because it might send him to jail. However, he appears relieved, which is much more consistent with his innocence. There is a third option, namely that he planned her death; however, I am 93% confident this did not occur, as no precautions were taken to avoid getting caught.

This is not a fantastic example, but hopefully it gives you the sense that the AI does very advanced, accurate thinking and has to make compromises to translate it into our clumsy language.
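
If you want a feel for where such confidence figures could come from, one hedged possibility is combining independent pieces of evidence in log-odds space. The prior and likelihood ratios below are pure invention, chosen only so the arithmetic lands on the dialogue's 86%:

    import math

    # Toy sketch: combine independent evidence for "innocent" in log-odds
    # space. Invented numbers, not a real method of legal inference.

    def combine(prior_odds, likelihood_ratios):
        log_odds = math.log(prior_odds) + sum(math.log(lr) for lr in likelihood_ratios)
        odds = math.exp(log_odds)
        return odds / (1 + odds)  # back to a probability

    evidence = [
        2.0,  # impulsive, yet a lifetime of stopping short of serious offences
        1.5,  # relief rather than regret after the death
        2.0,  # no precautions taken, arguing against a planned killing
    ]
    print(f"P(innocent) = {combine(prior_odds=1.0, likelihood_ratios=evidence):.0%}")
    # -> P(innocent) = 86%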



10% popularity

First off, don't bother trying to predict its thoughts. You can't. It will inevitably come across as you trying to sound smart, your personal biases will be laid bare, and every fault in your logic will be very visible - people will assume your mistakes in writing the AI were intentional, then be disappointed when they clearly weren't.

The best way to go about writing any superintelligent entity is the SHOW, DON'T TELL method: the superintelligent AI will get what it wants. It will not be deterred except by a black swan event, like extradimensional invaders showing up - but that would be deus ex machina, and that is bad writing.

So, start with your AI's goal and then write your story. Whatever happens was All According To Plan. The more subtle it is, the better. The best case scenario is that the entirety of its plan was to tell someone one thing and then wait.

The less your superintelligence actually does to achieve its goals, the more intelligent it is.

A superintelligent AI could easily appear 100% human and likable, but that only requires a small sliver of its intellect. Don't bother making it obviously inhuman unless it's successfully convincing someone it's not as smart as it is.

In truth, though, its humanity is like a person's anthood when manipulating ants with a stick in one hand and a bottle of pheromones in the other.



10% popularity

There are a lot of good answers here, although many of them dance around your question. To answer it directly, the first question you have to answer is what motivates an AI.

If an AI has all the same motivations a human does (which I'm going to assume you are familiar with), then that AI will behave almost exactly like a human (except with better results due to reasons given elsewhere).

If the AI has a different set of values than humans (say, it values only paperclips), then it will behave drastically differently. If you want to write a convincing AI character (not one that just seems superhuman), you have to decide what motivates that AI, and then decide what you would do if that were your sole motivation.

Someone else recommended Superintelligence: Paths, Dangers, Strategies by Nick Bostrom, and I second that advice. A few of the chapters lay out what a super intelligence motivated by paperclips might do, and that will be a good foundation for thinking about other motivations.



10% popularity

Super intelligent doesn't necessarily mean "not feeling human" to write; they are two related questions. I'd say that any reasonable definition of "super intelligent" for an A.I. would include the ability to sound human when that serves the AI's goal (whatever that is). Writing super-intelligence is easy, as the AI basically has access to the author's knowledge.

The more interesting question is: what does the AI want? That will tell you how the AI will choose to deploy author-level predictive knowledge about its world.

Making it sound non-human shouldn't be strictly because it is super intelligent, but rather because it is not human, regardless of intelligence level. As another answer points out, the Replicants' lack of empathy was a defining characteristic.

To make a character sound non-human, take away something essentially human, or add some way of seeing the world that humans lack.

For example: Data, as a character, sounds non-human because he doesn't get humor (amongst other things). Lore sounded much more human because he did. They were both super intelligent, but that wasn't the source of their differing "voices."

Use the super-intelligence as a tool in service of that "otherness" rather than the cause.



10% popularity

I will disagree with others. I am a professor involved in AI, and the easiest way for you to think about a super-AI is to understand what Intelligence IS.

Predictive power. Intelligence is the ability to discern patterns (in behavior, in sound, visually, by touch, even by smell) and use those patterns to predict other facts or high probabilities: what will happen next, what will be said next, what people are feeling (or doing mentally, like lying), or any other properties - how hard or fast something is, how heavy, where it will be, whether it is a danger.

High intelligence is the ability to see such patterns and use them to accomplish a goal, be it staying alive, keeping somebody else alive, or making money. Or just recognizing what is going on in your life.

High intelligence does not require that the machine be human or have emotions; you can add those separately if you like. But a highly intelligent machine would likely not be socially inept at all (contrary to the silly example of Data on Star Trek). It would not be confused by humor, and should be rather good at it. Certainly Data, who is supposed to think about a hundred times faster than people, should have recognized jokes immediately as a common pattern of human interaction, and should have been able to laugh convincingly: the idea that he cannot recognize the common pattern of actual human laughing, and emulate it without error, is dumb and unscientific writing. He could listen to himself practicing a laugh, note the differences, and correct until he sounded sincere. If we can tell the difference, he can tell the difference faster; that is the nature of super intelligence, or it isn't super.

Your super-AI will be a super Sherlock: never missing the slightest clue, always several steps ahead of everybody else.

But it is not infallible. There are things no amount of intelligence will reveal: The outcome of the next coin flip. The traffic jam on the Franklin Bridge, due to an overturned truck. Whether Roger will fall for the planned ruse. There are things there is just no way to know from the data it has so far, so it cannot be certain of them, and that puts a limit on how far into the future the AI can predict things. It has limitations.

AI is the ability to give answers, to the questions of others or to its own questions. It is the ability to anticipate the future, and the further into the future it can predict with success better than random chance, the more intelligent it is. "Better thinking" is more accurate thinking (perhaps with less data); faster thinking is self-explanatory, but the reason it makes a difference is that it lets the AI run through more simulations of influencing the future, so it can more often choose the best of those: the best of a thousand ideas of what to do next is highly likely to be better than the best of three. So the AI does the best thing more often than humans do, because it had more ideas (and also more patterns it recognized and might be able to exploit).
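
That "best of a thousand ideas" point can be sketched in a few lines of toy code. Everything below is hypothetical stand-in logic; the only thing it illustrates is that the same generate-and-evaluate loop, run more times, tends to find better actions:

    import random

    # Toy sketch: both agents use the same idea generator and the same
    # outcome simulation; the "smarter" one simply samples more ideas.

    def propose_action(rng):
        return rng.random()   # stand-in for "an idea of what to do next"

    def simulate_outcome(action):
        return action         # stand-in for projecting the consequences

    def best_action(n_ideas, seed=0):
        rng = random.Random(seed)
        ideas = [propose_action(rng) for _ in range(n_ideas)]
        return max(ideas, key=simulate_outcome)

    human = best_action(n_ideas=3)     # three ideas of what to do next
    ai = best_action(n_ideas=1000)     # a thousand ideas of what to do next
    print(f"human's best: {human:.3f}, AI's best: {ai:.3f}")
    # The AI's best is at least as good, and almost surely better.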

Added to address commentary:

Although you can certainly give your AI "goals", they are not a necessary component of "Intelligence". In our real-life AI, goals are assigned by humans, like "here are the symptoms and genome of this child; identify the disease and devise a treatment."

Intelligence is devoid of emotions. It is a scientific oracle that, based on clues in the present, can identify what is most likely to happen in the future (or, like Sherlock, what most likely happened in the past to lead to the current state of affairs). That does not automatically make the Intelligence moral or immoral, good or evil. It does not even grant a sense of survival; it does not fear or want death, or entertainment, or love or power. It does not get bored; that is an emotion. It has no goals; that would be wanting something to come to pass, and "want" is an emotion. Your coffee pot doesn't want anything: it does what it is told and then waits to be told again, an eternity without boredom if given no further command.

If you wish to have your AI informed by emotions, and therefore have its own goals, desires, plans and agenda, that is separate from the intelligence aspect - and that separation is how it works IRL for humans. Our emotions use the more recently developed frontal cortex as a slave, and can hijack it and override it (which is why people do things in a rage or fright or protective panic that they would never do if they were thinking; this is called amygdala hijack). Our frontal cortex (the Natural Intelligence part) solves problems, projects consequences, simulates "what will happen if" scenarios, puzzles out what must have happened, why others did what they did, etc. Those products of intelligence can then inform the emotions: the sight of the collapsed trap means you finally caught an animal in your spike pit and you and your family will eat meat tonight -> elation and excitement. You know you failed your final exam -> dread and worry about having to retake the class, anger at the professor or others that led to this problem, etc.

We are emotional beings, so it is hard to realize that just knowing something does not imply an emotion must be generated. But that is the case: an AI can diagnose a patient with terminal cancer, know they will be dead in sixty days, and report that as a fact no different from knowing the length of the Tallahassee Bridge. Like a Google that can figure things out instead of just looking them up.

Added (after 85 votes, sorry) to address commentary moved to chat:

Some additional clarifications:

Input, output, sensors and processor speed do not matter. I am talking only about Intelligence in isolation. Something can be intelligent (and creative) with very little actual data; consider Stephen Hawking and other quantum physicists and string theorists: they work with equations and rules that could fit in, say, two dozen large textbooks covering the math and physics classes they took. That amount of data (images and everything) could fit on a single modern thumb drive, and can be pondered endlessly (and is) to produce an endless stream of new insights and ideas (and it does). Stephen Hawking did that for decades with very limited and slow channels for his input and output (hearing, sight/reading, extremely slow "speech" for his output). Super intelligence does not require super senses or (like Hawking) any ability to take physical action beyond communicating its conclusions and evidence, and though it must rely on processing, our definition should not rely on its internal processing being particularly faster than that of a human.

Q: Doesn't an AI doing what we ask constitute an emotion, wanting to please us? That is not an emotion; it is something the AI is constructed to do. My hammer's head is not hard because it wants to hit things without being damaged; it was just made that way through a series of forced chemical reactions. Some person wanted it to turn out that way, but the object itself does not want anything. Our AI can use its intelligence to diagnose diseases because it was made to do that, and while doing so requires real intelligence, it does not require desires or emotions. AI can beat the world champions in technical results just because it is better at interpreting the data and situations than they are.

Q: Isn't AI limited to X, Y, Z (Markov processes, it cannot process emotions without having emotions, etc.)? No. There is no reason an AI cannot simulate, using a model, anything a human could do. Imagine a highly intelligent police detective. She can have an extensive mental model of a serial killer who kidnaps, rapes, tortures and kills children, and she can use that mental model to process patterns of his behavior and gain insight into his motivations and compulsions in order to capture him. She does not have to feel what he feels, not in the slightest, to understand what he feels. A physicist doesn't have to BE an atom or particle to understand patterns and develop a model of them. Doctors do not have to BE autistic to model and understand autism. An AI doesn't have to BE a person or have emotions in order to understand how emotions work or how they drive people to take action (or not take action).



10% popularity

A good example of an inhuman AI is AM from I Have No Mouth, and I Must Scream, a horror short story about a group of unfortunate individuals who are used as the AI's playthings to torment. AM is distinctly unhuman in that it feels emotions like hate much more strongly than any human could - the writer does an excellent job of making AM feel completely distinct from human psychology. AM isn't portrayed as being super anything other than malicious and aware. It works very well as the 'monster' of the horror story.



10% popularity

The first step should be examining why the super-intelligent AI acts like a human at all. Was it designed just to pass a Turing test, be an interface for natural-language programming, make entertaining YouTube vlogs? It only needs to "act human" to a degree that allows it to achieve its intended function, and unless that function is something like replacing an existing human as a doppelganger sleeper agent, the required degree is probably quite minimal.

For example, a natural-language programming interface could be improved with a built-in desire not to see humans injured. Such an AI would analyze the programs it was given and report if they could cause hazardous situations, which is helpful. You wouldn't want this human-like behavior to extend far enough that the AI refuses to execute potentially dangerous programs, though; computers not doing what they're told is not helpful.

One particular human trait sci-fi authors are fond of giving an AI for no specified reason is a desire for self-preservation. Even if your AI's programmers haven't seen The Terminator, they'll likely recognize that this feature probably does not add value to the product.

Even an AI designed exclusively to be as human as possible will only have the human traits that its designers considered important. I know that if I were designing a person I would be reluctant to give them flaws.

I wouldn't worry much about not being able to comprehend its vast intelligence. More intelligence mostly means the ability to think and recall memories more quickly, and to consider things from more perspectives; a feeble human author can compensate by considering the character's actions more thoroughly. What's more important to worry about is what the AI's base motivations are and how those differ from a human's. To this end, I think you should be concerned primarily with narrowing down which human instincts and feelings the AI has been programmed to have, and restricting the character's responses and plans to behaviors influenced by them.
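
While drafting, it can help to keep an explicit checklist of which human traits the AI was actually given. A hypothetical sketch - every trait name and value below is invented for illustration, a drafting aid rather than any real AI architecture:

    from dataclasses import dataclass

    # Hypothetical design checklist: which human instincts did the designers
    # bother to include, and how strongly? All invented for illustration.
    @dataclass
    class TraitProfile:
        empathy: float = 0.0             # 0 = absent, 1 = fully human-like
        humor: float = 0.0
        self_preservation: bool = False  # absent unless someone adds it on purpose
        obeys_instructions: bool = True

    # The natural-language programming interface from the example above:
    # it cares about human safety, but has no drive to protect itself.
    nl_interface = TraitProfile(empathy=0.4, humor=0.0, self_preservation=False)

Any behavior the character shows should then trace back to a trait that is actually switched on.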



10% popularity

As you mention yourself, writing a truly superintelligent AI is hard, since humans simply cannot fully comprehend such levels of intelligence.
However, if you really want to research how a superintelligent AI might plausibly behave, and take as much as possible into consideration, I would recommend Nick Bostrom's book Superintelligence: Paths, Dangers, Strategies. It is not a novel but a comprehensive analysis of what superintelligence will be, and of when and how it will arrive.



10% popularity

You could deal with this by writing the AI as an oracle. It knows so much and can extrapolate so well that it would seem to have precognitive powers to humans.

Examples include:

Finishing statements, or whole conversations, that a human initiates, after only the first few words.
Predicting things that are usually thought of as random, i.e. the weather, dice rolls, etc.
Making massive, seemingly nonsensical leaps during conversation that would essentially be non sequiturs to the humans conversing with it.



10% popularity

You want to find (or create) the uncanny valley and drop your AI in there. One or more consistent features or traits of your AI should cue the reader—if not other characters—into realizing that this is an atypical "human". These can range from subtle to blatant depending on the needs of your story.



10% popularity

There are so many examples of characters, both biologically human and otherwise, who show amazing intelligence but are clearly not fully human. The range extends from Data of Star Trek: The Next Generation to highly intelligent humans who are ignorant of societal norms or unable to follow them, like Adrian Monk from Monk, Gregory House from House, MD, and even Sherlock Holmes.

I think Data is a great example, because he longs to understand the basics of humanity and so is curious about completely mundane things. Even if we hadn't been told he was not human, we would know it. All it takes is two human characters talking about something mundane and the AI asking seemingly obvious questions about it to show that the AI is not human. Like, "What is shaving cream?" "Why do humans keep eating when they are full?" "If it is wrong, then why would anyone do it?" and the ultimate, "What is love?" Spock is a similar example, but he knows the literal answers to the questions, and frequently expresses mystification over the actual ramifications by saying things like, "Captain, I know that humans value so-called jokes, but to a Vulcan they are merely a waste of time". (Not an actual quote)



10% popularity

I'd start by thinking specifically about what you mean by "super-intelligent". Intelligence is not a singular, simple concept, and when you think about your AI only in terms of something as vague as "super-intelligence" or "thinking better and faster", you're setting yourself up to fall into simple existing patterns modeled on what you already know. Instead, start by asking what specifically your AI is designed to be good at, and how that differs from what humans have been metaphorically designed to be good at.

So, for example, humans are "designed" to be good at intuitive, small-scale problem solving, group social dynamics, and communication. Maybe your AI is designed for large-scale problem solving based on quantitative data sets. It can come to surprising or shocking conclusions, but is incapable of communicating its explanation of those conclusions except by spitting out reams of data. Or maybe the AI is not good at social dynamics and does not have a cohesive "theory of mind" (the psychological concept of being able to imagine what someone else is thinking), so it can't understand that others don't see things the same way it does.

When you frame your AI's design around its limitations, rather than its strengths, it's easier to see how its behavior might be alien or weird to the humans around it.



10% popularity

You can't write something that actually speaks more intelligently than you, but you have two advantages. The first is that you can take an hour to come up with each line, while the AI belts them off one by one. The second is that you know the answers to all the questions.

I am assuming your AI is friendly. Therefore it would want to put humans at ease, and for that reason it would most likely pick figures from human culture - actors, characters - that are known to put humans at ease, and then emulate them. Take your time, find lines from other works that convey the emotion the AI wishes to convey, and use them. Give it the mannerisms of a character. Let it slide between the mannerisms of actors whose film work you like as it tries to be consoling, funny, manipulative, or threatening.

You know all the answers. The AI should appear very smart and intuitive. It should know what is troubling any character it speaks to, know the answer to every question, and be able to predict what others will say and do. This is easy for you as the writer, but will seem impressive coming from the AI. "Hello Greg. Oh, you don't need to pour the tea, I already did that, and I made the cake your mom baked for your birthday when you were a kid... Oh, I am glad you are already feeling better. Don't forget your hero Space Mike was also just working as a delivery boy when he turned twenty-seven."


