
Topic: Can anyone direct me to writers' resources on AI or robotic ethics? - selfpublishingguru.com


I am in the process of writing a science fiction novel that will involve a large quantity and variety of synthetic life forms. I have been highly influenced by the works of Asimov, and while I may not explicitly state the laws I use to govern artificial life, I want a solid footing for how my robots (or whatever they might be called) are able to make ethical decisions. From a plot perspective, I also want some realistic ways that these rules could be hacked or otherwise subverted.

I know I could ask this question on a pure coding site, but I don't want to be inundated with lines of raw code. At the same time, I don't want to read through tomes of philosophy that have no foundation in modern computer programming. I'm looking for something in between... So I thought there must be some other sci-fi minded folks around writers.SE with experience in this type of research. Pseudo-code would be fine, but I really don't want to sit down with AI scripts and try to project them far out into the future. For those who might think I'm trying to avoid doing research, I would merely say: I love doing research, but I could use some help on where to focus it at the moment.

So to restate the question: Are there any writers here who can recommend good middle-ground resources that deal with the concepts of the philosophy and programming of artificial intelligence and/or systematic ethics?

Sorry if this is a strange question, and thank you to everyone who takes the time to consider it.




More posts by @Kaufman555

1 Comment



There is one basic problem with describing ethics as algorithms: there are too many different opinions on what is right and wrong (or on whether 'right' and 'wrong' even exist).

Asimov, for example, had his robots follow a very simple ethical code (no pun intended!). As a result, his robots cannot make difficult decisions. A robot would not be able to cope with a situation where it must kill a terrorist to save someone else's life, for instance.

Will your synthetic life forms all follow the same ethical guidelines, or will different 'species' (or models?) have different ideas of what is right and wrong?

Once you've settled on a set of ethical rules, I don't think you need to get too technical with the actual coding. Even today, programming requires fewer technical details and lets you simply state what you want done. (As someone put it, "Nowadays, you deal with the what and let the code deal with the how.")

In short, a simple algorithm should do, as long as it doesn't leave any holes.
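To make the point concrete, here is a minimal sketch of such a "simple algorithm", loosely modeled on Asimov's First Law. All names and the `Action` structure are hypothetical, invented for illustration; the interesting part is the dilemma at the end, where every available option violates the rules and the robot simply freezes.

```python
# A hedged sketch of a priority-ordered ethical rule system,
# loosely inspired by Asimov's Three Laws. All names are hypothetical.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool   # would this action directly injure a human?
    allows_harm: bool   # would choosing it let a human come to harm?
    obeys_order: bool   # does it follow a human's order? (Second Law)
    preserves_self: bool  # does it protect the robot? (Third Law)

def permitted(action: Action) -> bool:
    """First Law check: forbid any action that harms a human or,
    through inaction, allows a human to come to harm."""
    return not action.harms_human and not action.allows_harm

def choose(actions):
    """Return the first permitted action, or None if the rules deadlock.
    (Second and Third Law fields would only break ties among permitted
    actions; they never override the First Law.)"""
    for a in actions:
        if permitted(a):
            return a
    return None  # every option violates the First Law: the robot freezes

# The terrorist dilemma from above: both options harm someone.
shoot = Action("shoot terrorist", harms_human=True, allows_harm=False,
               obeys_order=True, preserves_self=True)
wait = Action("do nothing", harms_human=False, allows_harm=True,
              obeys_order=True, preserves_self=True)

print(choose([shoot, wait]))  # None: the simple rule set has no answer
```

The deadlock is exactly the kind of "hole" a plot could exploit: a character who controls how `harms_human` or `allows_harm` gets labeled effectively controls the robot's ethics without touching the laws themselves.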


