Topic: Characterizing a sentient robot: sensory data

I have a sentient robot in my novel.
Truth be told, I have many. Sentience is somewhat cheap to achieve in this setting, meaning that there are multiple artificial beings that can be considered sentient by our standards.

I'm already drawing some distinctions and showing how he perceives the world through his set of sensory arrays.
One of the core differences between us and a sentient machine, I imagine, would be sensory precision.

If I see a colour, I might describe it by picking from around twenty different terms. If I had been trained all my life to distinguish between shades of colours, maybe I could get up to sixty.
But a sentient machine could, theoretically speaking, access raw data from its optic system. A robot could select an exact range of pixels from its optical "nerve" and return a hexadecimal value that represents the shade with far more precision.

"Bring me the faint yellow dress, please."

"Oh, you mean the #EEFEEF one?"

(Worse still if the robot reports in some other notation, like rgb: "Wait, I only see an rgb(255,255,250) dress here!")
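For reference, a hex code and an rgb() triplet are two notations for the same 24-bit value, two hex digits per channel, so the robot's choice between them is purely stylistic. A minimal sketch of the conversion:

```python
# Hex colour codes and rgb() triplets encode the same 24-bit value:
# two hexadecimal digits per channel.
def hex_to_rgb(code: str) -> tuple:
    code = code.lstrip("#")
    return tuple(int(code[i:i + 2], 16) for i in range(0, 6, 2))

def rgb_to_hex(r: int, g: int, b: int) -> str:
    return f"#{r:02X}{g:02X}{b:02X}"

print(hex_to_rgb("#EEFEEF"))      # (238, 254, 239)
print(rgb_to_hex(238, 254, 239))  # #EEFEEF
```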

Coming to my question: I was thinking of characterizing my robotic PoV through this heightened sensory data. Examples of this could be him commenting on the exact weight of an object he lifts, the exact distance between his location and a point he has to reach, and so on.

Is this a good idea, or would it be tiring for the reader?


4 Comments


So, not only do I love exploring the nature of being through fiction about conscious robots, I also taught a robot to see in color as part of my senior project in college, so I'd say this is right up my alley.

So here's the conundrum. If I were the lead coder of the robot's color-recognition system, I would code it to recognize a certain RGB threshold and to vocalize that color as a language term for the user interface. So internally it sees color codes, but in discussing the matter, it would express the specific color name.
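The threshold-to-name approach described above could be sketched like this; the ranges and names here are purely illustrative, not any real calibration:

```python
# Hypothetical UI layer: the robot works internally in RGB values, but a
# lookup table maps recognised threshold ranges onto human colour words.
COLOR_RANGES = {
    # name: ((r_min, r_max), (g_min, g_max), (b_min, b_max)) -- illustrative
    "yellow": ((200, 255), (200, 255), (0, 120)),
    "red":    ((200, 255), (0, 90),    (0, 90)),
    "blue":   ((0, 90),    (0, 120),   (200, 255)),
}

def vocalize(r, g, b):
    """Return a language term if the value falls in a known range,
    otherwise fall back to the raw encoding."""
    for name, ((rl, rh), (gl, gh), (bl, bh)) in COLOR_RANGES.items():
        if rl <= r <= rh and gl <= g <= gh and bl <= b <= bh:
            return name
    return f"rgb({r},{g},{b})"

print(vocalize(240, 230, 60))   # yellow
print(vocalize(10, 200, 180))   # rgb(10,200,180)
```

The fallback branch is the interesting bit for characterization: an unnamed shade is exactly where a robot narrator would drop back into raw codes.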

If this is a learning robot and I were a clever coder, I might teach it that what it identifies as a particular RGB encoding is called yellow, and over time it would reinforce the learned nature of the color. With enough repeated correction, it would call something yellow on its own because it falls within the accepted range of defined yellow (which would be some range with the red and green channels high and the blue channel near zero).

But a robot that is self-aware, that's a whole other machine. To be fully self-aware, a machine would have to have a coded acceptance of its existence and its responses... and be able to choose to ignore that code. If it sees something yellow, it's coded to say it is yellow, but it might, for any illogical reason, describe it with the RGB or hexadecimal code for that shade, because it chose to do so. Thus, the robot might decide that with some people it will be hyper-logical and use ludicrous precision, while with others it will describe the object as "smaller than a breadbox", depending on its situation. It could be that the insanely precise answer is annoying to the human companion, and the robot knows this and wishes to amuse itself. It could be that society is not ready for a robot that is fully self-aware, so it affects an "accent" among those who assume it is an ordinary robot, but returns to a more natural register among those in the know, because it's as much an individual as they are.

At the end of the day, you too could use RGB codes and hexadecimal color codes to describe colors. It's insanely unnecessary, and plain color words like red, green, and blue are much better for getting the point across to other humans, but hey, it's your choice; forget what other people say. If you want to say your favorite color is rgb(255, 208, 92), you do you, boo!

After all, try to describe that color without using any words for it or likening it to other objects. What does it look like? Hard, right? How do I know that if we look at the same red apple, I am seeing the exact same color that we both call red? Maybe red to me looks purple to you? How would we know? This is a phenomenon of thought called qualia: an observable quality that we lack the language to describe. I cannot tell you what a color is without defining it by other scales. It's got an RGB value. It's got a hexadecimal value. It's got a CMYK value... and a wavelength... but I cannot describe yellow without likening it to other things. This is a limitation of experience. How do I describe a color to someone who has never seen that color before?

Oh, and fun fact: if your story is dovetailing into a techno-Pinocchio, describing color is a wonderful discussion piece. The human eye can experience a wider range of colors than common display encodings can represent... so when your robot becomes human, he will notice the world is more colorful, but he couldn't explain why. Perhaps he could tell that he's never seen the sky look that blue, but he couldn't adequately communicate the difference to someone who was human from the start.



If it's just reporting raw values all the time, as others have said, it would probably get tiresome -- which can be useful but probably isn't the look you're going for on average.

There are a few cases in Ann Leckie's Ancillary Justice books where information overload in moments of stress is used quite nicely to amplify that stress. Those books are also good examples of the case where an AI used to be able to take in huge data sets but now can't, and finds that problematic. It's a bit off your situation but might still be useful to review.

Instead of raw sensor reporting, the machine could cleverly (but still unsuccessfully) relate the precise data to something it thinks the human can understand:

"Oh, you mean the one which carries most of the same colours as the corn you had at dinner 4 years ago?"

(This is from life; my sweetie thinks I can remember stuff like that.)



As @Amadeus points out, a robot programmed to interact with humans would know what range of colours "yellow" corresponds to, and would use "yellow" when interacting with humans. Interacting with other robots, a robot might find it more comfortable to use the specific wavelength, or some similar representation. I can easily imagine an AI being more comfortable with precise information than with an approximation.

However, there is a third option: your sentient robot might wish to be obnoxious. In that case, insisting on this precision, showing off their superiority compared to humans, would be fitting.

Would it tire the reader? Only if you over-use it. You might remember that in the original Star Trek, Spock had a gimmick: he was overly precise with calculations and with any mathematical figures. It showed up no more than once or twice per episode, not in every scene. More than that, and it would have been too much. This sprinkling of precision was just enough to keep Spock firmly in the "stranger" slot; his precision was non-human.

Your robots' precision, just as Spock's precision, might come in useful. Once you've established it, a character might make use of it on occasion, asking a robot for a precise figure.



I suggest this long answer of mine (90 votes) on a similar topic; it will define some of the terms you are using.

A "sentient" or self-aware being (machine or biological) will have an internal model of itself in the world, and be able to model (or simulate) with relative success how its own actions will cause changes in the world.

Most sentient beings would realize that using RGB codes with a human is pointless; their internal model of humans (necessary for them to work properly) will know that humans don't distinguish colors, weights, distances, etc. with any precision. So it is not realistic for them to use these when talking with humans.

Edit: I should point out that "self-awareness" does not imply "emotional". We already have self-aware robots in this sense: self-driving cars and other robots that navigate a natural environment, or that need to be careful not to bump into or hit people or things. It only implies an ability to represent itself as the one object it can control, in an environment of other (fixed or moving) objects it cannot: it is aware of itself.

Also, as a professional artist once informed me, nothing is ever one color. Even on a clear summer day, the sky is not "blue", it is fifteen shades of blue, and my shirt is not red, it is at least five shades of red depending on lighting, shadows, folds and wrinkles. So even the robot would know that the yellow dress is NOT #EEFEEF, but a whole spectrum of colors, predominantly or on average what a human would term "yellow". To be effective, that would be built into its AI; it wouldn't constantly say things the humans cannot understand and then be confused by their confusion.
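That "average over many shades, then pick the human word" behavior could be sketched as follows; the tiny named palette and the sample patch are hypothetical:

```python
# Sketch: summarise a patch of pixels by its mean RGB value, then report
# the nearest human colour word by squared Euclidean distance.
NAMED = {  # illustrative reference shades, not a real palette
    "yellow": (240, 230, 50),
    "white":  (255, 255, 255),
    "blue":   (40, 60, 230),
}

def nearest_name(pixels):
    n = len(pixels)
    mean = tuple(sum(p[i] for p in pixels) / n for i in range(3))
    return min(
        NAMED,
        key=lambda name: sum((NAMED[name][i] - mean[i]) ** 2 for i in range(3)),
    )

# Many slightly different shades, one human word.
dress_patch = [(238, 225, 60), (245, 232, 48), (230, 228, 55)]
print(nearest_name(dress_patch))   # yellow
```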

That said, robot to robot, they might be precise and say "walk five thousand, nine hundred and eighty three feet."

Also, if a human requests it, they might report the exact value of their sensors, for sound-level, air pressure, temperature, humidity, distance (or distance walked), weight, altitude, compass direction, etc. Just like I might check my GPS coordinates on my iPhone.
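The split between robot-to-robot precision and human-facing rounding amounts to formatting the same sensor reading differently per listener. A sketch, with all names and the rounding rule purely illustrative:

```python
# Same distance reading, rendered differently depending on who is asking.
def report_distance(feet: float, listener: str) -> str:
    if listener == "robot":
        return f"{feet:.3f} feet"                  # full sensor precision
    return f"about {round(feet, -2):.0f} feet"     # rounded for a human

print(report_distance(5983.127, "robot"))   # 5983.127 feet
print(report_distance(5983.127, "human"))   # about 6000 feet
```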

Personally, I think this kind of commentary would be tiring for the reader. It might be fun a few times for a curious human to ask this sort of thing; I can only imagine a two- or three-year-old's endless "why" questions posed to an encyclopedic and willing caregiver robot. Of course, the robot is not an endless fount either; eventually, like a human, it would have to admit it doesn't know how it knows something, or would have to cite a source it believes is true. But the chain would be longer, and it might be entertaining if you pick the right starting question.

