A World Without Humans

We don’t worry enough about it.

I think that AI is a much bigger danger than “climate change.” Of course, some people dream of the end of humanity. Many of them are the same ones who worry too much about climate change.

[Update a few minutes later]

Peripherally related: More thoughts on much of the Left’s apparent hatred of humanity:

You know, it’s almost as if, having lost the doctrine of original sin and Christian forgiveness, these poor women are left with nothing but the free-floating, universalized guilt that makes them hate themselves and life. Maybe that’s unfair. I don’t know these ladies. But life hatred — humanity hatred, self-hatred and ultimately God hatred — seem to permeate so much of radical leftism. Feminism and Marxism with their revulsion at human nature, environmentalism with its elevation of greenery over humankind, radical groups like PETA that put the love of animals before the love of neighbor, the sweaty insistence on self-esteem and feeling good about yourself, giving praise, praise, praise for nothing, nothing, nothing, the ceaseless need to define your opposition as hateful… and abortion as a positive. It all smacks of self-hatred, doesn’t it? The love of death over life.

Actually, Bob Zubrin wrote a good book about that.

7 thoughts on “A World Without Humans”

  1. I think that AI is a much bigger danger than “climate change.”

    I dunno.

    Climate change exists – it’s just not catastrophic or significantly anthropogenic.

    AI doesn’t exist just yet … and honestly might not ever, as much as we love imagining it in SF.

    (And if it does, remember that Moore’s Law is already running into quantum issues, and silicon isn’t going to keep getting A Lot Faster. If an AI as smart as a toddler ends up requiring a room full of computers in 20 years, it’s not going to Destroy Us All.

    Stross’ scenario where a rogue process on a random home PC server can be meaningfully self-aware – and thus AI might be a significant threat – doesn’t seem likely to ever happen.)

    1. “And if it does, remember that Moore’s Law is already running into quantum issues, and silicon isn’t going to keep getting A Lot Faster”

      Moore’s Law isn’t a given. It relies on the people behind the technology working to see that it remains true.

    2. A fallacy to avoid is the idea that speed == intelligence. Many a book has made the point that AI may not depend on digital computer circuitry at all. In fact, the neurons in our brains operate at “speeds” that are orders of magnitude slower than even our slowest commercial CPU chips.

      Quantum computers may lead to an AI breakthrough, but before that the technology will first have to prove its value in non-AI applications. It has a very, very long way to go.

      Neural networks are our closest physical approximation, but it is unknown how to scale them to human-level AI. Nor is it clear they can be brought “to maturity” any faster than it takes to raise a human child. (A rough sketch of the speed-versus-structure point follows below.)
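
      A back-of-the-envelope sketch in Python of that point. The figures are illustrative assumptions of mine (a ~200 Hz sustained neuron firing rate, a ~3 GHz commodity CPU core), and the toy two-layer network is hypothetical, not anything described in this thread:

          import numpy as np

          # Assumed, illustrative figures: cortical neurons sustain at most a few
          # hundred firing events per second, while a commodity CPU core ticks a
          # few billion times per second -- about seven orders of magnitude apart.
          NEURON_HZ = 200
          CPU_HZ = 3_000_000_000
          print(f"CPU-to-neuron speed ratio: ~{CPU_HZ / NEURON_HZ:.0e}")  # ~2e+07

          # A toy two-layer network: whatever interesting behavior it shows comes
          # from the arrangement of many slow, simple units, not from the clock
          # speed of any single unit.
          rng = np.random.default_rng(0)
          W1 = rng.standard_normal((16, 4))  # input -> hidden weights
          W2 = rng.standard_normal((1, 16))  # hidden -> output weights

          def forward(x):
              hidden = np.tanh(W1 @ x)  # each "neuron": weighted sum + squashing
              return W2 @ hidden

          print(forward(np.array([0.5, -1.0, 0.3, 0.8])))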

    3. “AI doesn’t exist just yet … and honestly might not ever, as much as we love imagining it in SF.”

      AI sure does exist. It’s so ubiquitous that it melts into the background. Whenever we reach a goal in AI, the bar gets raised. Once Deep Blue beat Garry Kasparov, chess was no longer used as an AI benchmark. The best Jeopardy player in the world is Watson – and now recognizing speech isn’t an AI benchmark anymore. Cars can parallel park themselves, and once you’re sharing the road with lots of driverless cars, a car driving itself will no longer be considered an AI benchmark.

      Google will translate these into English for you.

  2. I haven’t read Zubrin’s book yet – but it does seem that we have our own home-grown, whiny version of Fred Saberhagen’s Berserkers. Maybe their reluctance (and probable inability) to do weapons engineering (and hatred of evil firearms) is a godsend…

  3. Easily solved. Yesterday was Mother’s Day. See how fast a liberal leftist mother of a newborn can be converted by telling her that we can’t save her baby because the research that would have provided a cure would have killed a primate.

    1. AI is an interesting topic for me. Studied it long and hard. Then I had a dream one night. An AI is running in a lab environment as a “virtual entity”, one whose parameters and living conditions, the “lab environment” in which it “lives”, are instantly alterable by the “experimenters”. Think SHRDLU on steroids. Then along comes a human researcher who is smart enough to devise a VR device that connects his brain directly into the simulation for better interaction. The AI soon takes notice of this new form of interaction and notices that it is a two-way street. Our poor experimenter soon suffers irreparable brain damage from an experiment conducted by the AI that has gone awry. Do we kill the AI when it was only practicing moral “equivalence”, without provable malice? The golden rule as a double-edged sword. Curiosity kills more than the cat. But if AI is achievable, this little dream drives right to the crux of the dilemma. If AI holds fantastic promise, it also holds fantastic questions of ethics and morality.
