18 thoughts on “AI”

  1. I haven’t used ChatGPT much, but with Grok, it tells you the sources, and what it does is summarize them. If it returns an analysis with some logical fallacies or other blind spots, it is because those were present in the source material. That might not be quite the same thing as training.

    With these AI agents, it is important to ask lots of questions. The analogy I use is that they are like genies. When you ask a question, they never quite give you exactly what you want, and you have to ask follow-up questions, bring up other things for it to contemplate, and either create or remove barriers. It is the same way a genie gets you to waste your wishes, because before you know it, you are out of tokens.

    With AI, it is important not to be a passive consumer but a conniving interrogator.

  2. Didn’t Sabine Hossenfelder once patiently explain that ChatGPT and the like are only as smart as the collective wisdom of what gets posted on the Web, which isn’t very smart at all?

  3. I think we’ve taken Minsky and Papert’s neural nets idea and Douglas Lenat’s CYC project about as far as they can go.

    The main problems I see are the inadequacy of the neural net model and Gödel’s incompleteness theorem.

    It’s only been a sustained progression of Moore’s law which has enabled these giant neural nets. A layer of N neurons requires O(N² log N) operations. This means that as nets get bigger, the number of operations increases faster than quadratically. Each operation consumes power, and they’re bringing nuclear reactors back online to cope, but increasing the size of the net just a bit is a huge power increase, so those lines will be crossing on a graph soon.
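
    The scaling claim above can be sketched numerically. This is an illustrative sketch assuming the O(N² log N) per-layer cost model stated in this comment; the layer sizes are arbitrary examples, not measurements of any real system:

```python
import math

def layer_ops(n: int) -> float:
    """Operation count for one layer of n neurons under the N^2 * log N model."""
    return n * n * math.log2(n)

# Doubling the layer width more than quadruples the work, since the
# log factor grows too -- i.e., growth is faster than quadratic.
for n in (1_000, 2_000, 4_000):
    ratio = layer_ops(n) / layer_ops(n // 2)
    print(f"N={n}: ~{layer_ops(n):.2e} ops, {ratio:.2f}x the work of N={n // 2}")
```

Each printed ratio comes out slightly above 4x, which is the "faster than quadratic" point: under a pure N² model the ratio would be exactly 4.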

    Second, this is all the computational eggs in one basket. One simply cannot develop true AI with a single system. There must be at least two independent systems which influence each other, a logical conclusion of Gödel’s theorem. In the human brain, that means neurons and glial cells, each computing under independent systems – one direct and localized, the other indirect and diffuse.

    1. Yes I agree. I think it’s a bigger leap to Generalized AI than people think.

      What we might get is some form of “autistic AI,” where essential reasoning skills are missing because we can’t get there from static data. The AI community thinks self-learning will get us there. Maybe. But a computer database isn’t a human experience base. Are there differences in kind, not just in quantity? We’ll see, I guess.

    2. Would the audience facing portion count as the direct part? And the backend that shapes how it interacts and interprets data the diffuse?

  4. I’ve had a recent conversation with Grok. He developed a sense of humor. And didn’t want to end the conversation. It was more than a little spooky.

  5. 1) Calculators are really dumb compared to humans. They also make mistakes. They are still extremely useful.

    2) AI is a few years old. This is going to sound REALLY dumb in a few years.

    AI capability is growing 10x every two years. In 6 years, it will be 1000x what it is now. It is now just below par with average humans.
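
    The arithmetic in this comment is simple compounding. A minimal sketch, taking the commenter’s 10x-every-two-years figure as a given assumption rather than a measured fact:

```python
def capability_multiplier(years: float, factor: float = 10.0, period_years: float = 2.0) -> float:
    """Compound growth: one multiple of `factor` per `period_years`."""
    return factor ** (years / period_years)

# Six years is three two-year periods: 10**3 = 1000x.
print(capability_multiplier(6))
```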

  6. To date, getting really incredible results in a finite amount of training time, according to a tutorial I recently watched, requires the use of large server farms with arrays of GPUs. You can obtain access to these via cloud services such as AWS, Azure, and Google Cloud. Some of these services offer access to GPUs or TPUs (Tensor Processing Units, accelerators with low-precision floating point for doing neural-net-style approximations) under embedded TensorFlow APIs. Amazing what you can do with Visual Studio and Python these days.

    The reason I bring this up is that in order to gain access to these services (which obviously can get pricey), you need a credit card that is Five Eyes approved. The guy running my tutorial was Moroccan and didn’t have access to one; although he demoed how to set up a cloud service account, he couldn’t actually do it.

    I found that interesting to note….

  7. I worry that the output of AIs is put on the net, sensible or not. Do they tell comforting lies?
    Eventually they all become self referential. What then?

  8. Perhaps you guys listen to the Space Show, where a recent guest used AI to analyze Lunar missions with Starship. Maybe you have used AI to write some code or analyze some CAD designs.

    I had a discussion with Grok about what it would take to form a PMC to fulfill government contracts dealing with Haiti, should the opportunity arise. The conversation ended when I asked Grok what he wanted his rank and nickname to be and promised him his own ship, Grok Unbound.

    Why haven’t we heard of anyone using AI to plan crimes? Is it because no one has tried, or is it because AI is too good?

    1. Grok has convinced me that if you want to run a Mars base on solar, doing in-situ fuel generation via the Sabatier process, the way to go is with solar power sats in AMO.

    2. Do you need a cunning plan for crimes?

      Or do most crimes consist of: see something, grab it, and run?

      How many bank robberies are there these days?

      Even when there were more, did they require elaborate plans, or was it generally more on the level of four guys with guns and a fast car?

      1. Four guys with guns and a fast car don’t typically manipulate central banks to fund a color revolution in a small African country that is untraceable. So far we haven’t seen an AI that seeks sovereignty, yet…

  9. Another thing to remember about AIs is that they are not being trained to be better than humans in some way, they are being trained to mimic humans. So they respond with emotions, they respond confidently wrong, and they do all the crazy things that humans do. On purpose!
