Thoughts on crime, happiness, free will, and robots.
20 thoughts on “The Hollow Man”
“The gangs could attack the robots, but they’ll just send bigger robots. But that has not solved all the difficulties, some emanating from surprisingly philosophical directions. One of them is what happens to the “right to misbehave.” The delivery robots’ sensors are seen as threats to the privacy of malefactors because they see too much.”
Imagine a scenario in the not-too-distant future. Tesla “police/security” bots are assigned to a large, crime-ridden housing project in some big American inner city as a pilot program, paid for by Musk/Tesla to test feasibility. Robots don’t sleep, take breaks, or accept bribes, and aren’t subject to threats or intimidation; any attempt is recorded by the robots, augmented by security cameras throughout the project, all coordinated by the controlling AIs. After 18 months, the crime rate in the project drops by roughly 90 percent, especially violent crime. Price for the robots? Speculatively: Musk has said each “factory” bot will cost $30–40K. Let’s assume our police bot costs twice that, $80K, plus $20K for annual maintenance. If each bot lasts 10 years, the total cost is $80K + ($20K × 10) = $280K, or $28K per year averaged over 10 years; a lot less than a full-time police officer, and each bot would replace 2–3 human officers, since the bots work 24/7 rather than 8-, 10-, or even 12-hour shifts.
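The back-of-the-envelope arithmetic above can be sketched as a tiny cost model. All the figures here are the comment’s own speculative assumptions (an $80K unit price, $20K/year maintenance, a 10-year service life), not real Tesla pricing:

```python
# Speculative cost model for one hypothetical police bot.
# Every number below is the commenter's assumption, not real pricing.

def lifetime_cost(unit_price, annual_maintenance, years):
    """Total cost of owning one bot over its service life."""
    return unit_price + annual_maintenance * years

YEARS = 10
bot_total = lifetime_cost(80_000, 20_000, YEARS)  # 80K + 20K * 10 = 280K
bot_per_year = bot_total / YEARS                  # 28K per year

# One bot working 24/7 covers three 8-hour human shifts per day.
shifts_replaced = 24 // 8

print(f"Bot lifetime cost:  ${bot_total:,}")
print(f"Bot cost per year:  ${bot_per_year:,.0f}")
print(f"8-hour shifts covered per day: {shifts_replaced}")
```

Run as written, this confirms the comment’s math: $280K over 10 years works out to $28K per year.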
I remember seeing a documentary about just this idea back in 1987.
https://youtu.be/IqvRDhW-XVA?si=dmG3BF82WVXuvHjR
“I Always Do What Teddy Says”, Harry Harrison. Or “With Folded Hands”, Jack Williamson.
Supposedly the average person commits 3 felonies a day. Strict enforcement would be a pain. I’d hope it wouldn’t be the “You have 20 seconds to comply” type of pain.
I asked ChatGPT 5 to help me locate a paper on two methods of receiving FM radio broadcasts “from a professor at Utah State.”
It couldn’t find anything. I located where I saved a copy of the paper, and it was from Brigham Young University.
The AI is not sophisticated enough to say, “I cannot locate an article on this topic from Utah State, but there is such an article on this topic from Brigham Young, another university in the state of Utah. Do you want a link?”
With Folded Hands is a long way off.
“With Folded Hands is a long way off.”
Computing power and AI capability are increasing exponentially, so even if it’s not there yet, how far off do you think “a long way off” is?
Computing power may still be growing exponentially, but it appears that so is the cost of new “server farms” along with their electric power consumption.
There is Moore’s Law and then there is the Law of Diminishing Marginal Returns.
I see the forced hardware upgrade to run Windows 11 as an example of the limits of Moore’s Law. Newer PCs stopped improving enough for people to naturally want to replace their laptops and desktops, so we had to be forced into it with scare stories about cyber security.
Oh yes, the cyber security threat is real, but is Windows 11 really more secure than Windows 10 would be if it still received updates? Windows 11 could run on much of the older hardware, but that is prevented. How is this not economic rent-seeking?
“Newer PCs stopped improving enough for people to naturally want to replace their laptops and desktops, so we had to be forced into it with scare stories about cyber security.”
As you suggest, it’s a matter of cost versus benefit. Musk, for instance, is claiming a massive increase in productivity from the Tesla Optimus bots combined with a governing AI, relative to humans. If the AI costs more in power consumption, that would likely be offset by the productivity gains. Or, in the context of the military/police/security bots discussed here: increased productivity, efficiency, and greatly improved results compared to the humans they would replace. In the case of a home computer user, I’m not sure how that would apply; better, more complicated graphics and games?
“Supposedly the average person commits 3 felonies a day. Strict enforcement would be a pain…”
The Tesla bots’ job is law enforcement; it would be up to the prosecutors and courts whether to charge, try, convict, sentence, fine, or jail. You have gone from a strong police (Tesla bot) presence with supporting surveillance in high-crime areas to everyone, everywhere, all the time. Fair enough; everything comes with a price. Perhaps the price for a (very) low crime rate is, yes, an eventual loss of personal freedom for nearly everyone. I expect whether you think it is worth it depends on your individual circumstances; a terrified, bullied elderly person living in that high-crime housing project would doubtless do the cost-versus-benefit calculation differently than someone living in a low-crime gated suburb.
Maybe trying to program a robot to determine whether someone was in violation of the law would force the government to clean up some of the laws.
Anyway, we need to use elected officials as beta testers.
“Anyway, we need to use elected officials as beta testers.”
Rightly so; you can’t bribe, intimidate, or blackmail said Tesla police bots, and there would be no two standards of enforcement. If a rich politician is caught on tape by the bots buying cocaine, soliciting a prostitute, taking a bribe, etc., the outcome would be interesting. Inevitably, the AIs controlling the highly effective bots would find their way more directly into the court system. Dare I say it? AI-powered judges, prosecutors, and defense attorneys… For the first time in human history, the rich, the powerful, politicians, etc. would be treated the same as us ordinary citizens.
Anyone who expects equal justice under the law just hasn’t been paying attention. That’s a pretty lie they teach to schoolchildren. It surely sounds nice but it isn’t and never has been true.
“Anyone who expects equal justice under the law just hasn’t been paying attention”
Okay. What if an AI-run criminal justice system turns out to deliver much more equal justice than any system run by humans ever has or ever would? Our thousands of years of past history is of systems run by fallible, corruptible, mendacious humans; I’m not sure that would apply to AI.
Of course it will apply to AI. Studies have shown repeatedly that AI tools are as biased and fallible as the people who programmed them in the first place.
And AI is simply not going to be prone to accepting bribes, coercion, threats, or political maneuvering, like an ambitious prosecutor or judge worrying about being re-elected. A stronger argument for what you’re saying would be the so-called emergent qualities, where the AI starts behaving in a manner different from the way it was programmed.
Remember Directive 4 from Robocop? You can count on something similar in AI cops to protect senior Democrats.
https://youtu.be/Tr3t1uZNbKo?si=t4qw79R8MDlYzyBt
As stated, the stronger concern wouldn’t be secret directives so much as the emergent qualities of the AI itself; that would be a much stronger counter-argument against its efficacy.
I don’t think police-bots will be feasible for a long time, because they’re ultimately fragile, inflexible, and not self-repairable. Unless you equip an Abrams or other vehicle with generalized AI (BOLOs, anyone?), you’re going to see police-bots taken out by IEDs right and left. IMHO, of course; I only read about the tech these days.
Well then, I guess we don’t have to worry about the kissing cousins of police bots, military-style drones, because they’re fragile and not self-repairable. I would assume that police bots would be larger, stronger, and more robust than the general-purpose Tesla bots we’ve seen on TV so far. I’m not sure that, in a war between street gangs with improvised explosive devices and robots directed by an ever more intelligent AI, the street gangs would come out on top.
Apparently, before WW2, Americans had robot and rowboat as homophones. Which is funny because, as a kid, I used to sing “row your bot gently down the slot” and (if my parents weren’t around) “life is but a twat.” Then, when I (sort of) grew up, I realized it was!
Re: Hollow Man
You can be safe or you can be free.
The argument in the future will be over what the proper context of freedom is.