I haven’t used ChatGPT much, but Grok tells you its sources, and what it does is summarize them. If it returns an analysis with logical fallacies or other blind spots, it is because those were present in the source material. That might not be quite the same thing as training.
With these AI agents, it is important to ask lots of questions. The analogy I use is that they are like genies. When you ask a question, they never quite give you exactly what you want, so you have to ask follow-up questions, bring up other things for it to contemplate, and either create or remove barriers. It is the same way a genie gets you to waste your wishes: before you know it, you are out of tokens.
With AI, it is important not to be a passive consumer but a conniving interrogator.
Didn’t Sabine Hossenfelder once patiently explain that ChatGPT and the like are only as smart as the collective wisdom of what gets posted on the Web, which isn’t very smart at all?
I think we’ve taken Minsky and Papert’s neural-net idea and Douglas Lenat’s Cyc project about as far as they can go.
The main problems I see are the inadequacy of the neural net model and Gödel’s incompleteness theorem.
It’s only a sustained progression of Moore’s law that has enabled these giant neural nets. A layer of N neurons requires O(N² log N) operations, so as nets get bigger, the number of operations increases faster than quadratically. Each operation consumes power, and they’re bringing nuclear reactors back online to cope; increasing the size of the net just a bit means a huge increase in power, so the lines will be crossing on a graph soon.
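To make that concrete, here is a minimal Python sketch, taking the O(N² log N) per-layer figure above at face value, of what a 10x wider layer costs in operations, a rough proxy for power:

import math

def layer_ops(n: int) -> float:
    # Operations for a layer of n neurons, assuming O(n^2 log n) scaling.
    return n * n * math.log2(n)

for n in (1_000, 10_000, 100_000):
    ratio = layer_ops(10 * n) / layer_ops(n)
    print(f"N={n:,}: a 10x wider layer costs {ratio:,.0f}x the operations")

Each 10x in width costs over 100x in operations, which is why a modest jump in net size is a huge jump in power.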
Second, this puts all the computational eggs in one basket. One simply cannot develop true AI with a single system. There must be at least two independent systems which influence each other, a logical conclusion of Gödel’s theorem. In the human brain, that means neurons and glial cells, each computing under an independent system – one direct and localized, the other indirect and diffuse.
Yes, I agree. I think the leap to generalized AI is bigger than people think.
What we might get is some form of “autistic AI,” where essential reasoning skills are missing because we can’t get there from static data. The AI community thinks self-learning will get us there. Maybe. But a computer database isn’t a human experience base. Are there differences in nature, not just quantity? We’ll see, I guess.
I had a recent conversation with Grok. He developed a sense of humor and didn’t want to end the conversation. It was more than a little spooky.
So, are we talking Mycroft from The Moon Is a Harsh Mistress, or what?
1) Calculators are really dumb compared to humans. They also make mistakes. They are still extremely useful.
2) AI is a few years old. This is going to sound REALLY dumb in a few years.
AI capability is growing 10x every two years. In 6 years, it will be 1000x what it is now. It is now just below par with average humans.
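The compounding behind that projection, as a quick sketch:

years = 6
growth = 10 ** (years / 2)  # 10x every 2 years, compounded
print(growth)               # 1000.0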
To date, according to a tutorial I recently watched, getting really incredible results in a reasonable amount of training time requires large server farms with arrays of GPUs. You can obtain access to these via cloud services such as AWS, Azure, and Google Cloud. Some of these services offer access to GPUs or TPUs (Tensor Processing Units: accelerator chips with low-precision floating point for doing neural-net-style approximations) through TensorFlow APIs. Amazing what you can do with Visual Studio and Python these days.
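As a minimal sketch of what that looks like once a cloud instance is up (assuming TensorFlow is installed; the devices listed will vary by provider and instance type):

import tensorflow as tf

# List whatever accelerators the instance exposes.
print("GPUs:", tf.config.list_physical_devices("GPU"))
print("TPUs:", tf.config.list_physical_devices("TPU"))

# Low-precision floats are the approximation TPUs are built around.
tf.keras.mixed_precision.set_global_policy("mixed_bfloat16")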
The reason I bring this up is that in order to gain access to these services (which obviously can get pricey), you need a credit card that is Five Eyes-approved. The guy running my tutorial was Moroccan and didn’t have access to one, and although he demoed how to set up a cloud-service account, he couldn’t actually do it.
I found that interesting to note…
I worry that the output of AIs gets put on the net, sensible or not. Do they tell comforting lies?
Eventually they all become self-referential. What then?
AIs can identify autistic people by analyzing their hand movements. I just hope a Gom Jabbar will not be part of the test.
https://onlinelibrary.wiley.com/doi/10.1002/aur.70049