4 thoughts on “Recursive Improvement In AI”

  1. It seems to me that recursive improvement only happens when AI interfaces with reality, i.e., a world outside of AI where “improvement” can be objectively measured. With the large amount of erroneous AI-derived material polluting the internet, it could turn out to be like an AI entity living in its own fantasy world, sort of like a Star Trek holodeck environment.

    1. And going functionally insane or catatonic as it navel-gazes? That’s better than the Skynet scenario.

  2. Two likely unrelated thoughts, but I need to get them out of my head.

    I don’t trust AI tools/LLMs in specialized fields. I was reviewing a code solution provided to me by a coworker. It made no sense to me why he was proposing the solution he proposed or asking the follow-up questions that he asked. I dug in a little more, and he told me that he had asked AI to explain technical terms he didn’t understand instead of just asking someone else on the team. The AI response he got was a complete hallucination, wildly off base for what was actually a simple concept that anyone with a few years of experience in our field could have answered. I don’t trust AI because of this. At least, not when it takes me longer to proofread its work than it would have taken to just do it myself.

    Thought 2:
    Tired of seeing ads for Base44, I googled to see if it was a scam. The consensus was that if you got stuck using the tool to code an MVP application, you should use Claude or ChatGPT or another tool to explain and work through the block. That seems more directly related to the linked article, in the sense that no single tool is going to get you there, but a quiver of complementary tools probably can.

    Anyway, something something get off my lawn, let me just roll my own code and understand what the hell it’s doing.
