Tech Workshop ... AI cannot learn

Because the technical definition of learning is flawed. There, boom, done, end of post, end of line, end of file. Now that I have your mind sufficiently agitated and you’re ready to flog me with whips made of CAT6 cables, I implore you to sit down and seriously listen, because you might actually align with what I’m saying. And brace yourselves, because what follows will probably cause some real head-scratching moments and very likely shatter your ideas about “universal AI”.

Anyone who has dabbled even a little in coding or gamedev has heard terms like “machine learning” or AI. Hell, the behaviour of non-player actors in games is colloquially referred to as AI despite being something completely different. When you look under the hood, you can see the “magic” behind it: decision trees, neural networks, all the maths that would fry one’s brain for a while. And yet none of this is anywhere remotely close to intelligence. It’s not even close to learning. This is not the fault of the algorithms or the tools but of those who defined the technical term. How come? Well, first of …
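To make that “colloquial AI” point concrete, here’s a minimal sketch (all names invented for illustration) of what game “AI” usually is under the hood: a hand-authored decision tree, with no learning anywhere in it.

```python
# A typical game "AI": a hand-written decision tree. Nothing is
# learned; every branch was authored by a programmer.
def npc_action(health: int, player_distance: float) -> str:
    if health < 20:
        return "flee"      # low health: run away
    if player_distance < 5.0:
        return "attack"    # player in range: fight
    if player_distance < 20.0:
        return "chase"     # player visible: close in
    return "patrol"        # default behaviour

# The "intelligence" here is entirely the author's, frozen into branches.
```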

Accumulating knowledge isn’t learning

How many times have you been told in school to “study” a certain subject, only to memorise it and repeat it when asked during a small exam? Were you given an explanation? Did you seek out reasons? If the answer to either of the latter two is “no”, then you didn’t learn. At best, you accumulated knowledge. It’s as if you got a book only to put it on a bookshelf without ever reading it; and by reading I don’t just mean flipping through the pages but actually reading the book start to end, pondering the thoughts of the author.

Machine learning operates on this exact principle. You take knowledge and cram it into the machine without explanation or any reasoning. You’re effectively saying: “Here, study this and then recite it word for word. Points down for any inaccuracy.” If that sentence made your blood boil, I’m sorry you had to experience some shitty teachers (as we all did).
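As a rough sketch of what that “study and recite” loop looks like in code, here’s a toy supervised model fitting y = 2x, where the loss is literally the “points down for any inaccuracy”:

```python
# Supervised "learning" in miniature: memorise input->output pairs by
# minimising a penalty ("points down for any inaccuracy" = the loss).
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # targets follow y = 2x

w = 0.0                          # the single "weight" being adjusted
lr = 0.05                        # learning rate
for _ in range(200):
    for x, y in data:
        error = w * x - y        # how wrong the recitation is
        w -= lr * 2 * error * x  # nudge w to reduce the squared error

print(round(w, 3))  # converges towards 2.0 — recitation, not understanding
```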

“But unsupervised learning …” Yes, unsupervised learning is a thing. It’s effectively supposed to be an analogy of self-learning. But it still isn’t learning. Why? Exactly because the “why” is missing. Humans are naturally curious; kids even more so because they’re not jaded yet. A machine is not, because it doesn’t understand the concept of curiosity. It won’t seek reasoning. And without reasoning, there can be no improvement, because you can’t identify the mistakes to learn from. To quote Vesemir from the prologue of The Witcher 3 (can’t believe I’m actually doing this): “Don’t practice alone. It’ll only embed your errors.” How does this quote apply? Because you don’t know you’re making a mistake. And once you find out, it’ll take ages to unlearn AND relearn the skill. To give you an idea, it takes roughly the same amount of time to unlearn a mistake as it took to learn the skill to your current level with the mistake embedded in it.
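For concreteness, here’s what unsupervised learning typically amounts to in its simplest form — a 1-D k-means-style sketch. The machine groups numbers by similarity, and at no point does it ask, or answer, why the groups exist.

```python
# Unsupervised "learning" at its simplest: split one pile of numbers
# into two piles (a 1-D k-means sketch). It finds structure; it never
# explains it.
data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]
c1, c2 = data[0], data[-1]  # initial cluster centres

for _ in range(10):
    # assign each point to its nearest centre
    pile1 = [x for x in data if abs(x - c1) <= abs(x - c2)]
    pile2 = [x for x in data if abs(x - c1) > abs(x - c2)]
    # move each centre to the mean of its pile
    c1 = sum(pile1) / len(pile1)
    c2 = sum(pile2) / len(pile2)

print(sorted(pile1), sorted(pile2))  # two tidy piles, zero "why"
```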

“But backpropagation, weight adjustments, …” Let me stop you right there with all the fancy words. None of these are even remotely close to reasoning. Why? Because …
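(For the record, here is roughly what those fancy words denote: one backpropagation step for a single sigmoid neuron, spelled out by hand. Every line is chain-rule arithmetic; it computes by how much the output was wrong, never why.)

```python
import math

# One backpropagation step for a single sigmoid neuron, written out.
# Illustrative toy values only.
x, target = 1.5, 1.0
w, b = 0.2, 0.0

# forward pass
z = w * x + b
out = 1 / (1 + math.exp(-z))       # sigmoid activation

# backward pass: the chain rule, term by term
d_loss_d_out = 2 * (out - target)  # from the squared-error loss
d_out_d_z = out * (1 - out)        # derivative of the sigmoid
d_z_d_w = x
grad_w = d_loss_d_out * d_out_d_z * d_z_d_w

w -= 0.1 * grad_w                  # the "weight adjustment"
```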

Intelligence isn’t purely statistical

The human brain doesn’t operate on statistics. Sure, thinking involves uncertainty and probability, but their mathematical equivalents are only approximations of those concepts. And we don’t use these two as primary tools. They come into play when we’re dealing with something not previously observed. Otherwise we reach for tools which are far more deterministic.

“Oh, so like associative memory. A cache …” Technically yes. Except you’re not replacing items in the cache but always expanding it AND adjusting the mechanism which made the initial decision. And the latter you need to do WITHOUT unintentionally impacting anything unrelated. The human brain can do that. An artificial “brain”? That’s a different story.
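A rough sketch of that distinction, with invented names: deterministic recall for situations already seen, a guess only for the unseen. The part the paragraph calls hard — adjusting the fallback mechanism without touching unrelated entries — is exactly what a table does trivially and a tangle of shared weights does not.

```python
# An ever-expanding associative memory: deterministic recall for the
# seen, a probabilistic fallback only for the unseen.
memory: dict[str, str] = {}

def recall(situation: str) -> str:
    if situation in memory:
        return memory[situation]  # deterministic: seen before
    return "best-guess"           # uncertainty, only for the new

def learn(situation: str, outcome: str) -> None:
    memory[situation] = outcome   # expand; never disturb other entries

learn("hot stove", "do not touch")
print(recall("hot stove"))   # exact recall, not statistics
print(recall("cold stove"))  # falls back to a guess
```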

Now, to make matters worse, all of the above was about analytical ML; the kind of machine learning which is meant to be fed a massive pile of data to give you a rough idea of what’s in there, and to break the pile into smaller piles which are easier to work with later. When we enter the space of generative ML, things crumble even more spectacularly because …

There can be no creation without intention

This simple sentence is basically the nail in the coffin of generative ML/AI and THE REASON why creative work is irreplaceable; by which I mean writers, visual artists, engineers, etc. Machine learning doesn’t act with intention. It doesn’t act of its own will, for it has none. It acts without understanding. That AI-generated image with six fingers on one hand? That’s because the AI doesn’t know it’s a human hand. And it’ll never learn that it’s supposed to be a human hand. The underlying model might “know” what a human hand is. It might be full of references to different poses, shapes, sizes, etc. And yet it’ll still produce errors. Because the WHY isn’t there.

The same applies to text. Why is AI-generated text so “painful” to read? Because there’s no intention behind the words. It’s pretty much “math turned into words”, which in itself is horrendous to read (if you’ve ever read anything written by a mathematician, you know the pain). It’s void of intent, distilled into pure, rigid description. And the level of rigidity is so high that it doesn’t even work for spaces where precision is needed, because there it fails due to the loss of broader scope. Long story short, AI doesn’t understand that words have meaning and that one formulates a thought in a certain way because there’s an intention behind it. Yes, even a seemingly boring scientific article is written the way it is intentionally.
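If you want to see “math turned into words” in its rawest form, here’s a toy bigram generator: every next word is drawn purely from counted frequencies, so whatever comes out carries no intention at all.

```python
import random

# "Math turned into words": a toy bigram model. Each next word is
# sampled purely from counted word-pair frequencies.
corpus = "the cat sat on the mat and the cat slept".split()

# count which word follows which
follows: dict[str, list[str]] = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

random.seed(0)
word, out = "the", ["the"]
for _ in range(6):
    # sample the next word; fall back to any word if none was ever seen
    word = random.choice(follows.get(word, corpus))
    out.append(word)

print(" ".join(out))  # grammatical-looking, intention-free
```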

And that intention is something that’s not possible to simulate. Not even agentic systems operate with intention. They’re autonomous and operate mostly without constant supervision, but they don’t start doing anything without an external stimulus. They still need a list of tasks. In the context of coding, they can at best implement a feature, not come up with a new one. So if someone tells you they use “AI” for brainstorming, they don’t realise that they’re already doing the brainstorming themselves and don’t need AI in the first place. In that case the AI is literally doing nothing, because it’s by definition incapable of doing so.

AI will never learn

In the end, Artificial “Intelligence” will never become intelligent. Why? Exactly because the WHY will never be there. It’ll never be capable of understanding the subject at hand, no matter how specialised it is. It’ll never be more capable than its user, for the user will ALWAYS be the limitation. And that limitation is irremovable, because the user is the ONLY way to provide the model with some form of reasoning; a form which is still very inaccurate and needs constant adjustment, because the retention of that reasoning is impossible. In the end, AI never learns. Even the most stubborn person in the universe will eventually learn (unless they intentionally refuse to, but then again, there’s the intention).

So yeah, I “hate” to say it: universal AI is by definition impossible, because your definition of learning doesn’t match the actual process of learning.

R.R.A.