9 thoughts on “A Debate On The Ethics Of AI”

  1. Is it unsettling to learn that tools carry out tasks set to them by humans? Before you know it, people will be noticing that you can’t change human nature.

  2. “A strange game. The only winning move is not to play.” -WOPR, in the movie WarGames (1983).

    Also one of the more idiotic lines given to an AI that I can recall in a movie. “Winning,” in the context of Global Thermonuclear War™, can mean having one more nonradioactive outhouse at the conclusion of hostilities than the enemy does. Now if the computer had said the “best” or “optimum” move…

    AIs will only use the values and programming that we give them initially.

    1. Given the broad-based natural stupidity on Reddit, any AI trained on Reddit content would be tainted. On Reddit you have large numbers of people advocating socialism and outright communism, who think they should get money for nothing, and who think automation will mean that no one ever has to work again. I used to go on Reddit for the aviation and space discussions, but everything beyond those relatively sane subreddits was toxic.

  3. I wonder if at least some of the impetus to develop “strong” AI with all deliberate speed is motivated by its potential utility at solving certain otherwise intractable problems. Among which might just be the problem of trying to reverse engineer some of the allegedly recovered UAPs. Even setting aside the difficulty of bringing enough minds to bear on the problem while maintaining secrecy, perhaps it is simply too hard for human minds to solve on our own. Imagine an intact F-35 fighter being found by ancient Rome or Egypt. They could bring in the best craftsmen from all over the world at the time with zero chance of reverse engineering it; they wouldn’t have the tools to make the tools, etc. Even if they had seen it in flight first. And that’s a difference of only a few thousand years, dealing with humans of roughly equal innate cognitive abilities. Maybe someone(s) thinks we might need the advantage of strong AI, with all of the accompanying risks, in order to have any chance of coping with, or even understanding, said alleged aliens, their devices and, most important of all, their agenda towards us.

  4. This is, of course, not a proper AI but just a text-generating system; such systems can produce weirdly well-structured text but have no real internal model of what they are doing. It’s an idiot savant, cutting and pasting bits it doesn’t understand from a billion internet posts that seem like they might be related.

    I suppose there’s an argument that if you can fake it well enough there’s no difference, but you don’t have to interact with one of these for long to see that the understanding is nil (note how it says one thing and then contradicts itself on the next question). A toy sketch of the statistical cut-and-paste idea follows this thread.

    1. Um, I’m pretty sure that many humans operate that way… at least I can’t tell the difference.
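For anyone curious what “statistical cut and paste” looks like in its crudest form, here is a minimal sketch: a bigram Markov chain in Python. It is nothing like the actual system under discussion (real language models are vastly more sophisticated), and the corpus and the `babble` function here are invented purely for illustration, but it shows how locally fluent text can emerge from pure recombination with no understanding anywhere in the loop.

```python
import random
from collections import defaultdict

# A toy bigram Markov chain: it "writes" by stitching together word pairs
# it has already seen, with no model of meaning. This is far cruder than a
# real language model, but it illustrates the same point: fluent-looking
# text can come from statistical recombination alone.

# Hypothetical training corpus, invented for this example.
corpus = (
    "the only winning move is not to play "
    "the only proper AI is a system that understands what it is doing "
    "a text generating system can produce well structured text "
    "a system that understands is not just pasting text together"
).split()

# Map each word to the list of words that have followed it in the corpus.
followers = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    followers[a].append(b)

def babble(start, n_words=15, seed=None):
    """Generate text by repeatedly sampling a seen successor of the last word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n_words - 1):
        nxt = followers.get(out[-1])
        if not nxt:  # dead end: this word never appeared mid-corpus
            break
        out.append(rng.choice(nxt))
    return " ".join(out)

print(babble("the", seed=1))
# Sample output (varies with seed): each adjacent word pair is plausible,
# but the sentence as a whole means nothing -- no understanding anywhere.
```

Every two-word window in the output has been seen before, yet the generator holds no beliefs to contradict, which is why such systems can say one thing and then the opposite a moment later.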
