Transterrestrial Musings
Biting Commentary about Infinity, and Beyond!

Asimov 2.0

Phil Bowermaster has some thoughts on updating the Three Laws of Robotics.

Posted by Rand Simberg at May 21, 2007 07:32 AM
Comments

I like the initiative, but not the first cut of a new draft. Based on the "revised" laws, a robot could assure the survival of life and intelligence, and its safety, by taking us two by two into protective care. Only the term "happiness" might prevent the extermination of all but an "Adam and Eve" kept in protective care to assure the survival of life and intelligence. If "happiness" is determined by thoughtful individuals, then slaughter may not be acceptable. If "happiness" is determined by jihadists, then all three rules can be met while robots systematically start the next holocaust.

To put it simply, apply the three revised rules to "The Matrix" and see if they fail. Life survives; intelligence is left free to seek happiness, freedom, and well-being in a virtual world. Safety is provided until the usefulness of life is no longer valid. The decision to terminate a life can also be made by other sentient beings in the virtual world.

Posted by Leland at May 21, 2007 08:29 AM

I can't comment over there -- his blog requires registration -- so I'll comment here instead. Phil needs to find and read an old SF story called "With Folded Hands," by Jack Williamson. It illustrates the utter folly of phrasing robot operating rules the way Phil wants to.

Not that the original Three Laws are much better. Especially the First Law. It was fine the way Asimov first conceived it, but he never thought about just how far-reaching an absolute rule can be.

Posted by wolfwalker at May 21, 2007 08:50 AM

The original three laws would never have survived multiculturalist nanny-stateism.

"Or through inaction bring harm..." If you look at the judicial awards handed out for mental "suffering," it's not hard to imagine the system breaking down quickly. Even a heated debate between two people would probably cause them to blow out their positronic nets.

Posted by rjschwarz at May 21, 2007 09:04 AM

That kind of discussion on Asimov's three laws makes me want to crush my monitor or turn nihilist - it's concentrated stupidity. I almost couldn't stomach reading it all (and it didn't help that the non-Asimov parts were devoid of intelligence as well - simplistic and banal).

It's not that people can't read any book any way they want and get something different out of it (which might or might not please them as readers). But if you want to read Asimov's works as science fiction with a cerebral message, rather than as some form of cheap fantasy, it doesn't take much to see that Asimov created those laws and wrote about robots to comment on humans, on human systems and society, and on our failings on a host of issues, including science.

In particular, he did so to exemplify the extremely common and routine error of humans - no matter how intelligent or well educated - of fooling themselves with invalid over-simplifications.

That's why the laws are shown to be complete failures by Asimov right there in the stories themselves; to hopefully make the point inescapable to any thinking reader.

It ought to be painfully obvious, but apparently it isn't, since those involved at the linked page commit exactly the mistakes Asimov was arguing and warning against.

Taking Asimov's intentionally flawed "laws of robotics" out of their context is idiocy; using them as templates in an attempt to actually build some kind of working system is simply astonishing. I guess they haven't actually read the stories.

Posted by Habitat Hermit at May 21, 2007 12:24 PM

Habitat Hermit, I'm not sure you're correct. I remember an anecdote from Arthur C. Clarke about Asimov watching 2001 and getting bent out of shape that HAL 9000 wasn't following the three laws. That doesn't sound like someone who wrote them to exemplify humans' routine self-deception by invalid over-simplification.

Posted by rjschwarz at May 21, 2007 02:42 PM

And keep in mind that the Three Laws almost always worked in Asimov's world. For example, what's the point of having a detective story involving a robot as murderer, if violations of the Three Laws were a routine matter?

Posted by Karl Hallowell at May 21, 2007 02:53 PM

An interesting complaint in the article was that Asimov's Three Laws were too negative. He wished to replace them with positive laws.

I took a bible class in college, and the professor pointed out a fundamental truth about the ten commandments. They're all phrased in the negative.

And there's a reason for it. Once you call out the ten specific things we're not allowed to do, everything else is open.

Likewise, the three/four laws told robots what they were not allowed to do. That allows a considerable amount of personal freedom.
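The contrast MJ draws can be sketched in a few lines of code. This is a hypothetical toy, not anything proposed in the post; the rule sets and function names are invented for illustration:

```python
# Negative framing (commandment / Asimov style): enumerate what is
# forbidden; everything not listed is permitted by default.
FORBIDDEN = {"harm human", "allow harm through inaction", "disobey order"}

def permitted_negative(action: str) -> bool:
    """An action is allowed unless it is explicitly forbidden."""
    return action not in FORBIDDEN

# Positive framing: enumerate what must be served; anything that serves
# no listed goal is forbidden by default.
REQUIRED_GOALS = {"maximize happiness", "ensure well-being"}

def permitted_positive(action: str) -> bool:
    """An action is allowed only if it serves an enumerated goal."""
    return action in REQUIRED_GOALS

print(permitted_negative("write a poem"))  # True: not forbidden
print(permitted_positive("write a poem"))  # False: serves no listed goal
```

The default matters: the negative framing leaves the whole unlisted space of behavior free, while the positive framing closes it off, which is exactly the "considerable amount of personal freedom" point above.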

And as it has been pointed out in these comments, one realizes that intelligence will find ways to break whatever laws that are supposed to constrain it. What we call humanity, with all its strengths and weaknesses, may simply be the core of any intelligent race, natural or artificial. Perhaps there's not a meaningful difference.

Of course, it's all conjecture until we find another intelligent race. :^)

Posted by MJ at May 21, 2007 03:38 PM

rjschwarz, using Google I found only Greg Bear briefly mentioning that Asimov was "indignant" that Clarke and Kubrick didn't use the three laws for HAL. He doesn't give any specific or deeper reason, so it could be just about anything, nor does he say where he got it from or who said it.

The search terms I used were 'asimov hal robot', 'asimov hal robot law', and 'asimov indignant hal' but the only relevant result was that Greg Bear article. I didn't go past page 3 on any of the results as they quickly became irrelevant. The internet hasn't got everything but unless there's something more specific it's hard to say much about it.

This is where Greg Bear mentions it.

Karl Hallowell, "almost always" is what makes the stories. The more unusual and perplexing the violations of the laws appear the more it ought to underscore the unexpected but inherent limitations and fallacies of humans as well as their laws (for robots or otherwise).

Posted by Habitat Hermit at May 21, 2007 04:56 PM

My point here is that, as far as human laws go, the Three Laws are unusually sound. Saying that they're "intentionally flawed" misses the point. And if the breaking of certain robotics laws occurs only through really bizarre and unusual circumstances, then that underscores the solidity of the laws in question rather than their limitations.

Ultimately, if you build something with a sophisticated brain, how do you keep it from causing you or others harm? Despite claims to the contrary, I still think something like the Three Laws is essential.

Posted by Karl Hallowell at May 21, 2007 08:37 PM

Although breaking those laws is bizarre and unusual in the stories, it wouldn't be nearly as unusual in reality, if one could even make the concept "work" at all - and I doubt Asimov didn't realize that.

A sophisticated brain employing such laws would be rendered mostly inactive in relation to the outside world; the laws actually presuppose omnipotence to work "perfectly." The brain would for the most part lose itself in a multitude of quite possibly endless reiterative loops, attempting to evaluate the consequences of the consequences of its actions or inactions before choosing those actions or inactions.

That's insanity - or, if it actually did work, God (but the only way to do that would be for the brain to exist independent of space-time - that's the only place all consequences are simultaneously available).

If one decides on a cutoff point to evade those reiterations, one seriously undermines the intent of the laws in the first place.

It's a lose-lose situation.
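The blowup being described is easy to make concrete. Here is a toy sketch (all names invented, nothing from Asimov or the post) of a robot that must score an action by recursively evaluating the consequences of its consequences; without a depth cutoff the recursion never terminates, and the cutoff is exactly the compromise the comment objects to:

```python
def consequences(action):
    """Toy world model: each action spawns three follow-on outcomes."""
    return [f"{action}->{i}" for i in range(3)]

def evaluate(action, depth, counter):
    """Score an action by recursing into all downstream consequences.

    `counter` tracks how many states get examined. The `depth == 0`
    check is the arbitrary cutoff: below it, consequences are simply
    ignored, which is where the laws' intent leaks away.
    """
    counter[0] += 1
    if depth == 0:
        return 0  # cutoff: stop caring about deeper consequences
    return sum(evaluate(c, depth - 1, counter) for c in consequences(action))

counter = [0]
evaluate("open door", depth=6, counter=counter)
print(counter[0])  # 1093 states examined: sum of 3**k for k = 0..6
```

Even with a branching factor of only three, the work grows exponentially with the horizon; with no cutoff it is unbounded, which is the "endlessly reiterative loops" point in the comment above.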

Higher intelligence necessitates some degree of free will; it's the only "rule" flexible enough to handle situations and actions/inactions and to decide, on a per-case basis, the cutoff point for consideration - and it still won't be anywhere close to perfect: look at humans.

Posted by Habitat Hermit at May 22, 2007 05:31 AM

