12 thoughts on “Neuralink”

  1. “We are the Elon. Your biological and technological distinctiveness will be added to our own. Resistance is futile.”

  2. Imagine reading that article a day after reading EFF’s Trusted/Treacherous Computing article. We’re talking fascism to the google power. Yes, not googleplex. I was being intentional.

  3. Reading Jon’s mind: “While I understand Musk’s sentiments, that’s quite an arrogant goal and does a disservice to the deity who designed thinking brain organs. It really lowers my opinion of Musk.”

    1. Getting out my ice skates and heading for Hell – Bob-1 finally wrote something I agree with.

  4. I’ve been hearing about this for decades. Thank God it does not work properly. This has to be one of the most intrusive systems one can think of. Elon is also being remarkably stupid. There are much lower hanging fruit in speech and visual recognition and we don’t even use those universally yet. They’re also a lot less noisy and simpler to process than brainwaves.

  5. The Ghost in the Shell remake is timely.

    The problem with this technology is that learning is idiosyncratic in many ways. Think back to when you were a student and people in class had many different opinions about the lessons. Everyone heard the same lecture, read the same book, and did the same homework but they all had different opinions and insights.

    Would this be lost by uploading learning into your brain?

    Who would be in charge of determining the lessons programmed into humans? The same people in charge of colleges right now?

    Because the experience and struggles of learning things shape our personalities, what effects would this have on who we are?

  6. Long, but interesting article. I actually don’t think he can reach the “wizard hat” scale to allow for mind-meld engineering sessions, though that would be kind of cool. As the article states, and was mentioned above, learning and storage in the brain are very idiosyncratic. You’d need an interface between all the BMI users that could handle a nearly infinite variety of permutations.

    Godzilla said Musk was kind of stupid and is ignoring “much lower hanging fruit in speech and visual recognition and we don’t even use those universally yet”. The writer of the article actually touched on this in his map of Elon’s overall process of transformation, as shown in SpaceX and Tesla. The obvious and stated driver is sight, sound, and motor control as the money-making business end that stimulates further research into his ultimate goal of a full mind-machine or mind-mind link. So sight and sound for those lacking them, and human-controlled robotic extensions. It would be far easier and better to build habs from orbit around the Moon or Mars and then move into them than to settle down in a tiny hab and gradually build it out.

  7. “Because, as the human history case study suggests, when there’s something on the planet way smarter than everyone else, it can be a really bad thing for everyone else.”

    Have humans been a bad thing for cats?
    Or cattle or chickens?
    Are Native Americans worse off than before the white man?
    Assuming the white man was smarter, though the record seems to indicate it could have been otherwise.
    Or has multicellular life been bad for the microbe?
    Or has life been bad for rocks?

    It seems at worst, humans have been an insurance policy for all life on Earth, though it’s never a sure thing that any insurance policy will actually work. The only certainty is that an insurance policy costs something in the present and provides vague promises regarding the future.

    It seems the only bad thing AI might “do” is prevent humans from reaching some kind of destination, or keep them from getting somewhere they otherwise would have eventually arrived at. And if you apply that metric to cats, cattle, chickens, Indians, microbes, and rocks, then there was little danger of that actually being the case.
    Whereas if you “look” at the political animals, they seem to have done lots of damage in this regard.
    https://richardlangworth.com/worst-form-of-government

    For example, it seems the political animals are preventing people from getting into space, by pretending they are ‘working on it’ when they clearly are not.
    They take our money and squander it [and that is the better thing {or least thing} rather than the worst thing that they do].

    “Many forms of Government have been tried, and will be tried in this world of sin and woe. No one pretends that democracy is perfect or all-wise. Indeed it has been said that democracy is the worst form of Government except for all those other forms that have been tried from time to time.…”

    Now imagine if we had AI as politicians.
    I can’t make up my mind whether it would or would not be an improvement, but the AI could be cheaper. And people could be more skeptical and expect better results [a part of an argument for it being an improvement]. Or I suppose one could ask whether AI could be as dishonest as a typical human politician.

  8. I think it would be neat to avoid death through augmented intelligence, just not from the [check]point of backup….
