
Topic: Characterizing a sentient robot: inhuman PoV


Following the previous question:

Characterizing a sentient robot: sensory data

I'm writing a robot character with a particular PoV. In the previous question I wanted to talk about sensory data; here I would like to open a wider topic. I know that this risks going into opinion-based territory, but I want to take this chance.

Context:
Some of my sentient beings mimic humans (they experience the world as we do; they have a notion of emotions, bonding, and a similar level of intellect), while others are closer to the classic all-knowing, uncaring AI (extremely analytical, purpose-driven, but devoid of emotional intelligence or feeling).

In the middle ground, I have robots. Sentience is one of the themes of my novel, so I'm avoiding black-and-white definitions. Some robots can think; some cannot. Some are capable of performing complex tasks yet are not "sentient", while others have more mundane abilities yet possess a notion of self.

In this gray scale of various degrees of intelligence, my PoV character has to feel both alive, sentient, relatable and alien, different, and inhuman.

I'm struggling to work out how to characterize his way of thinking without resorting to cheap tricks. Yet I don't want him to be just a "brain in a metallic body" - as I mentioned, I already have cyborgs and simil-humans. I do not need more.

I'm interested in ways to portray an internal thought process different from the human one. To be fair, the same question could apply to an alien character whose biology is extremely different from our own.

And therein lies the problem. As a human, I can only think as a human would.

How do you write a PoV and thought processes of something inherently different from a human mind?

Related, in another genre:

How to Write an Eldritch Abomination?





2 Comments



You mention that the robot

has to feel both alive, sentient, relatable and alien, different, and inhuman... I'm interested in ways to portray an internal thought process different from the human one.

Taking those first three points individually will help answer the other points:

Alive means there is some capability of growth and death. So how would a robot differ in this? How does a robot grow? Yes, it can gain more "information" in its memory (learn), but what does it do to change itself, its life status, its relations? Does it worry about having enough power to fuel its existence (and does it have a means of persisting in some "hibernation" should power fail or run too low)? Does it fear death... or seek death (perhaps feeling trapped in "life" by its makers)? How does it fix itself (or can it, does it require external robots or humans to do so) if damage occurs? The answer to that will then relate to what it sees as important to have "on hand" should the need for fixing arise. Can it feel pain (and if so, why... does it need such a warning to know that something is "wrong" with it)? Whatever the answers to these types of questions, they will be different (in various ways) from a human, but relate to its "life" as a robot.
Sentient means aware of sensory perceptions in some conscious way. I think your prior question already deals with some distinctions of how a robot might be different from a human in this area, yet still be sentient.
Relatable (in this context) deals with sympathy toward, and similarities with, something one can understand. Yet there are various ways something can be relatable. A human does not relate to a dog, an ant, and a tree in the same way, yet there are still things a human can be sympathetic toward in any of them. So a robot might relate to a human who is feeling tired or weak through the parallel experience of being low on power (having to conserve energy by not being as "active" in certain ways as it would normally be). But could a robot relate to how a human "forgets"? Probably not: if something is erased, it is gone from its memory, and likely no inkling of that prior memory remains "there" yet inaccessible (as happens with the human brain) - though there could be exceptions, depending on how the robot's memory is set up to function. Humor would probably be different, if it exists in a robot at all.

So just thinking through how the robot is different in being alive, sentient, and relatable will automatically reveal ways that it is alien, different, and inhuman, and then affect its thought process appropriately.




One way this has been done is by using a human foil - perhaps somebody who doesn't trust the robot.
The way this works is that the robot presents as a caring human the reader could care about. But the robot will answer honestly if the foil asks how it came to a decision, and the explanation can be quite unlike a human's: deeply analytical, even manipulative.

Foil: "I'm thinking of taking a walk."
AI: "This is an excellent time to take a walk. You should wear the light tan jacket, shall I bring it?"
Foil: "Wait. Tell me why this is an excellent time to take a walk."

When queried, the AI explains: it has checked the latest satellite weather data and extrapolated it to the local area; it has checked traffic patterns and local air-quality sensors; it is monitoring the police channel and city-services channel to ensure the walk will be safe, traffic will be light, air quality is decent, and they won't encounter any trouble or obstacles. It has also been monitoring the apartment cameras: Mrs. Razwicky returned with groceries twenty minutes ago, and based on her normal schedule they should not encounter her in the halls until after four fifteen. It suggests the light tan jacket because the temperature on the walk will be three degrees below what the foil usually finds comfortable at this time of day, and that jacket will keep him closest to his comfortable temperature.
You probably need only one instance of something like this: show an inhuman amount of academic and technical knowledge and thought going into a simple reply, delivered without any delay. Readers will understand, from this throwaway conversation, that this is the norm for this AI. Kind of like IBM's Watson (the real thing) referencing 100,000 works and tens of millions of words in a few hundred milliseconds to answer a simple Jeopardy! clue.
Then at the end of all this, your human foil can say, "Yeah, get my jacket."
Or, "I don't really feel like a walk."
Currently, at least, this is how AIs differ from humans: they process enormous amounts of data and models to come up with relatively simple answers. We aren't sophisticated enough, in either measurement or understanding, to truly mimic a brain the way a brain actually works. We have a very superficial understanding of how the brain works - or, as some researchers (like me) would say, no real understanding of how it all works together. (It is like understanding 90% of the parts of a machine without understanding how they actually come together to make the machine work.)
Human brains do not work like Watson; we know that. IBM found a way to use a million times the processing power of a human brain to simulate the results of a tiny part of a brain, without simulating how the brain actually does it.
That is the key to your desired alienness: the AI manages to appear human while internally doing a million or a billion times as much work as a human to achieve this charade. And, like Watson, it can do that faster and better than any human ever could, just as computers can do arithmetic faster and better than any human ever could.


