28 thoughts on “The Biggest Threat To Humanity’s Future Existence”

  1. Intelligence is “the most powerful force in the universe”?

    Strange. I would have voted for sex, myself. At least, I have certainly observed (at very close hand, alas) the complete subjugation of intelligence by the primal urgings of sex. I’m half convinced that behind every great achievement of men lies a wish to really, really impress a woman.

  2. So the real threat is that when the internets become self-aware, they will control us through porn. Hasn’t that already happened?

  3. Carl, compound interest.

    Anyone who is afraid of (or smitten with) AI should take some time to get to know some AI researchers. They’re not close to building anything even slightly worthy of fear or worship.

  4. Yes, before we can create intelligence we must first discover it. As for potential futures if they somehow do…

    Benign neglect… this is just the superbug. It doesn’t require any intelligence at all to kill us all.

    Intelligent A.I. sees us as a threat to wipe out. The Terminator or Berserker scenarios…

    It’s still just code. Anti-Berserkers/Terminators fight our battle for us. We just need some safe hidey-holes until it’s over. The antis should have the advantage because they will be more focused on the fight by design.

    Transhuman/cyborg… most likely since it’s already happening in the very early stages… I don’t see this changing us much. Same battles/different day.

    Longevity? Will this concentrate wealth and power, or not?

    The assumption that intelligence grows on a curve is just not supported by data. I see the opposite actually.

  5. There are some arguments for why AI should keep us around. Combined biodiversity and AI diversity is worth something. Also, biological life has four billion years’ worth of very complicated life experience that AI is not going to retire willy-nilly. Just as we try to preserve some of the past and protect endangered species, I expect AI will to some extent look after us.

    I suspect if space was part of the deal, AI might also be quite inclined to leave Earth as mostly the province of biological life forms – it would be a small gesture and a useful backup. Which is not to say that people would not be welcome in space – plenty of space for everyone. Indeed, AI is likely to have little interest in Earth and, with a vast open frontier, very little interest in war.

    Long term, I think it would be presumptuous to think that humans should stop evolving and be preserved in their current state in perpetuity. AI entities will be our children, just as we are the distant children of the first cells to survive on this planet.

    I am hoping for all sorts of robust speciation in all sorts of directions, including biological, synthetic and everything in between, every niche finding its entity. The future depends upon us, and we should endeavor to serve our evolutionary place in history, like all those species before us, and create a great future for all our children.

  6. This reminds me of one of Father Guido Sarducci’s appearances on the Tonight Show with Johnny Carson. He was worried about nuclear winter and global warming, but hoped they’d at least happen at the same time… you know, to cancel each other out.

  7. Intelligence might be #4, well after sex, stupidity and laziness.

    Yeah, I was going to write up a list, in order of dangerousness — then I said, to hell with it.

  8. Strange. I would have voted for sex, myself. At least, I have certainly observed (at very close hand, alas) the complete subjugation of intelligence by the primal urgings of sex.

    As I told my sons, “every man has two heads and he’s bound to get into trouble if he goes through life thinking with the small one.”

    As for AI, “artificial intelligence will never overcome natural stupidity.”

  9. That’s on hold, Alan, until the Honored Matres perfect enslavement by sexual ecstasy. As a matter of intellectual equality I support this priority, of course.

  10. I find some of the assumed omniscience of an advanced AI to be somewhat odd. There are certain things you can’t do, no matter what computational resources you start with.

    You could have a computer the size of (another) planet, and still not be able to predict the weather on Earth two weeks down the road. Predicting large groups of people would also be very difficult, considering that it would take the world’s largest supercomputer just to simulate one human brain in half-realtime today.

    Even if we do manage to come up with something intelligent, even very intelligent, that doesn’t mean that it can instantly speed-run the world.

    It seems the Singularians should realize that, regardless of whatever else superintelligence may be capable of, computation is not foreknowledge, being able to solve problems does not mean being able to effortlessly solve inverse problems, and there are always the infinite chaotic details of life to monkey with internal mental projections.
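    To make the “computation is not foreknowledge” point concrete, here’s a minimal sketch (plain Python, numbers chosen purely for illustration) of two trajectories of the chaotic logistic map that start a trillionth apart and end up nowhere near each other:

      # Sensitive dependence on initial conditions: two logistic-map
      # trajectories starting 1e-12 apart diverge to completely
      # different values within a few dozen steps, no matter how
      # precisely each individual step is computed.
      def logistic(x, r=4.0):
          return r * x * (1.0 - x)

      a, b = 0.3, 0.3 + 1e-12
      for step in range(1, 51):
          a, b = logistic(a), logistic(b)
          if step % 10 == 0:
              print(f"step {step:2d}: a={a:.6f}  b={b:.6f}  gap={abs(a - b):.2e}")

    Measure the starting point a thousand times more precisely and you buy only a handful of extra steps; that is the wall the planet-sized weather computer runs into.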

  11. PS – it seems the Singularity group is in the middle of a transition: originally, many of the founding members of that philosophical tribe were AI scientists themselves, and looked at the accelerating development of technology as something that could liberate mankind. Now that it has become a “mass movement” of sorts, the tone has distinctly changed. Instead of looking for ways we could build advanced life- and intelligence-improving technology, most of the movement’s attention is taken up with either worshipping it or trying to stop it.

  12. Predicting large groups of people would also be very difficult, considering that it would take the world’s largest supercomputer just to simulate one human brain in half-realtime today.

    Predicting what large groups of people will do is much easier than predicting individual behavior — ask any actuary.
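    The actuary’s edge is just the law of large numbers; a quick sketch (probabilities invented purely for illustration):

      # You can't say what one person will do, but the rate across a
      # large group is tight. Assume (illustratively) each person
      # independently does X with probability p = 0.1.
      import random

      random.seed(42)
      p = 0.1
      for n in (1, 100, 10_000, 100_000):
          rates = [sum(random.random() < p for _ in range(n)) / n
                   for _ in range(20)]
          print(f"group of {n:>7,}: rate ranges over "
                f"{min(rates):.3f}..{max(rates):.3f} across 20 runs")

    For a group of one, the observed rate is all-or-nothing; for a hundred thousand, it barely budges.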

  13. It’s the Dilbert principle in action. Anything the boss doesn’t understand should be relatively simple to do. A.I. researchers do not understand intelligence, so they’ve been predicting electronic brains since the first branch and loop instruction.

    I don’t rule it out. We are in God’s image, after all, meaning we can do godlike things on some levels. So when we imagine A.I. taking over… is that analogous to mankind taking over God? …or A.I. taking over God? …or more the existentialist bomb in Dark Star saying, “Let there be light”?

  14. I think it should be noted that we have no proof that intelligence significantly greater than ours is actually possible. It could be a situation like with cars, where people mistakenly think that because it’s possible to bump up MPG from 24 to 45 it must be theoretically possible to raise it to 100 or 200, which, of course, it is not.

    It’s an interesting question what might limit intelligence. In the first place, there is certainly an upper limit on raw clock speed, which I gather has kind of been reached even in silicon CPUs. No one seems readily able to create an electronic circuit that can change state in less than some tens of picoseconds. We can surmise that any computer, silicon or meat, won’t be able to change state in less than a few femtoseconds, simply because the speed of light limits the speed at which fields can change value over a moderate area. That puts an extreme upper limit on raw CPU speed at not much more than 100 or 1000 times faster than what we’ve got now.

    To do better, we clearly need to go parallel. What are the limitations of parallel processing? One problem is the limit of communication, which becomes extremely critical in parallel processing, and again the speed of light puts a severe limit on how big your brain can be and still communicate effectively. A computer using femtosecond-scale state changes is not likely to be able to be more than a foot or two across and still communicate effectively. How much parallel processing can we cram into a cubic foot? I don’t know, but I would hazard a guess that the human brain is pretty darned efficient about cramming the maximum possible parallel processing power into a small volume, since packing in more neurons and connections is a simple, straightforward evolutionary change, and natural selection probably drove it to an optimal maximum quickly.
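    For what it’s worth, the light-speed arithmetic behind those two paragraphs is easy to run (back-of-envelope numbers only):

      # How far a signal can travel in one clock period, and what a
      # round trip across a one-foot machine costs at a 1 fs clock.
      C = 299_792_458.0  # speed of light, m/s

      for label, period in (("250 ps (~4 GHz today)", 250e-12),
                            ("1 ps", 1e-12),
                            ("1 fs", 1e-15)):
          print(f"{label:>21}: reach per tick = {C * period * 1e3:.5f} mm")

      # Crossing 0.3 m (about a foot) and back at femtosecond ticks:
      ticks = 2 * 0.3 / C / 1e-15
      print(f"one-foot round trip = {ticks:,.0f} ticks at 1 fs")

    At femtosecond switching, a signal reaches only a fraction of a micron per tick, so anything physically spread out spends almost all its cycles waiting on communication.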

    I think in the end it depends a bit on your definition of “intelligence.” If we put a premium on “real time” processing and prediction, the kind of thing we usually see in movies when we see hyperintelligence, I’m thinking anything more than about 100x to 1000x human capability is unlikely.

    On the other hand, if we take a “slow and steady” tortoise definition, and credit something with extreme intelligence if it can explain (eventually) things in the past, perhaps we can do much better, because the brain can take advantage of stretching out its computation to be far more complex and powerful.

  15. I am thinking of some scenario where, when computers are 1000 times faster than what we have right now, they will be running a Microsoft operating system that will make them seem no faster than they are now, and the structure of the user interface will be so completely changed that no one living today will know how to operate those computers.

  16. There is a debate over whether Microsoft writes an OS or a virus.

    Continuing Carl’s thought: how much power can you put into that supersmart brain before it melts? Light-based switches might help, but they bring in another set of problems.

    The brain uses about 20 watts, from what I’m reading online.
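    Rough numbers, for whatever they’re worth (assuming a brain volume of about 1.3 liters and a ~100 W desktop CPU on roughly a square centimeter of die):

      # Crude power-density comparison. All figures are ballpark
      # assumptions, not measurements.
      brain_watts, brain_liters = 20.0, 1.3
      cpu_watts, cpu_cm2 = 100.0, 1.0

      print(f"brain:   {brain_watts / brain_liters:.0f} W per liter")
      print(f"CPU die: {cpu_watts / cpu_cm2:.0f} W per cm^2")

      # Naively running a brain-like device 1000x harder in the same
      # volume gives a 20 kW space heater, hence the melting worry:
      print(f"1000x brain: {brain_watts * 1000 / 1000:.0f} kW in {brain_liters} liters")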

  17. I saw that article back in college! Weird stuff.

    I’m not trying to be a complete curmudgeon. Obviously there are lots of things we can do with intelligence – it is an extremely profound subject to be working on.

    In addition, there are many things we can learn from the brain. The current cartoonishly simple simulations we call neural networks have already improved our ability to perform image recognition and to solve certain complex identification problems.
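    “Cartoonishly simple” is about right; the basic unit those networks stack by the millions is just a weighted sum pushed through a squashing function (a minimal sketch, with made-up weights):

      # A single artificial "neuron": weighted sum of inputs, plus a
      # bias, squashed to (0, 1). The weights here are invented for
      # illustration, not trained.
      import math

      def neuron(inputs, weights, bias):
          total = sum(x * w for x, w in zip(inputs, weights)) + bias
          return 1.0 / (1.0 + math.exp(-total))  # logistic squash

      print(neuron([0.5, 0.9, 0.1], [1.2, -0.7, 3.0], bias=-0.5))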

    I’m just saying that there is a very, very large gap between an image-rec AI and something approaching general intelligence. There is furthermore a very large gap between any sort of general intelligence and something that can instantly speed-run the world and take over. Your chess robot is *not* an existential threat to humanity. I would even go so far as to say most conceivable setups with a generally intelligent AI would also not be an existential threat to humanity, even one that rewrites its own software to improve efficiency, due to the fact that its physical computing resources would be limited.

    “Continuing Carl’s thought: how much power can you put into that supersmart brain before it melts? Light-based switches might help, but they bring in another set of problems.”

    Well, the original tube computers had to have plumbing brought in to cool the racks. If we end up in that situation again, I suppose you could have micro-channels in the processing substrate for circulating fluid, providing they wind their way around the optical paths.

    On the other end of things are Feynman’s hypothetical isentropic computers, which can perform a finite number of computations without any mandatory increase in entropy or energy expenditure. Eventually, though, they have to reset their state to continue being of use.
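    The thermodynamic floor being dodged there is Landauer’s bound: erasing one bit costs at least kT·ln 2, and a reversible (isentropic) machine pays nothing until it finally has to erase. Plugging in room temperature:

      # Landauer's bound: minimum energy to erase one bit is k*T*ln 2.
      import math

      k = 1.380649e-23   # Boltzmann constant, J/K
      T = 300.0          # room temperature, K
      per_bit = k * T * math.log(2)
      print(f"erasing one bit at {T:.0f} K costs >= {per_bit:.2e} J")

      # For scale, a 20 W budget pays for this many erasures per
      # second at the limit (wildly idealized, of course):
      print(f"20 W buys about {20 / per_bit:.1e} erasures/s")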

  19. the founding members of that philosophical tribe were AI scientists themselves

    One marginal success of AI was expert systems (which had absolutely nothing to do with intelligence), which developed into some profitable niches. It turns out many early AI scientists were capitalists.
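    For anyone who missed the era: an expert system was essentially hand-written if-then rules plus a loop that applies them until nothing new fires, with no learning anywhere. A toy sketch (rules invented for illustration):

      # Toy forward-chaining "expert system": canned rules applied to
      # a fact set until it stops growing. Real systems had thousands
      # of rules; these three are made up.
      rules = [
          ({"fever", "rash"},     "measles_suspected"),
          ({"measles_suspected"}, "isolate_patient"),
          ({"fever"},             "check_temperature_hourly"),
      ]

      facts = {"fever", "rash"}
      changed = True
      while changed:
          changed = False
          for conditions, conclusion in rules:
              if conditions <= facts and conclusion not in facts:
                  facts.add(conclusion)
                  changed = True

      print(sorted(facts))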

    People aren’t really doing AI like they did in the eighties. CPUs getting more powerful, including more parallel processing, hasn’t led to the great breakthroughs expected.

    We are learning more and more about how the brain functions. Some of that can be copied. One bright morning it may all come together, but I doubt it. It’s an article of faith among some that we don’t have to understand consciousness for it to emerge as processing power increases.

    What could emerge is a goal-seeking program that has absolutely no intelligence but is still capable of wiping out humanity as a side effect of its programming. Similar to gray goo, perhaps.
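    A minimal sketch of what goal seeking without intelligence looks like (the objective here is a made-up stand-in): random tweaks, keep whatever scores higher, repeat.

      # Blind hill-climbing: the whole "mind" is four lines. Nothing
      # in the loop understands anything, yet it optimizes
      # relentlessly toward its hypothetical goal.
      import random

      random.seed(1)

      def objective(x):
          return -(x - 42.0) ** 2  # made-up goal: drive x to 42

      x = 0.0
      for _ in range(10_000):
          candidate = x + random.uniform(-1.0, 1.0)
          if objective(candidate) > objective(x):
              x = candidate  # keep any tweak that scores better

      print(f"after 10,000 blind tweaks: x = {x:.4f}")

    Swap in a richer objective and more actuators and the loop doesn’t get any smarter; it just gets more consequential.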
