Limitations of LLMs

Published March 5, 2026, 9:45 p.m.

There is a lot of hype around LLMs, and justifiably so: they are pretty incredible. I use LLMs throughout my day, and they make me more productive. However, there are limits to what the technology can do versus what the hype sometimes suggests. In their current form, LLMs can't become AGI.

What I consider the basic limitations:

- Decisions
- Learning
- Memory
- Self awareness

First, these are limitations of where the technology stands today. That will likely change, but whatever changes it will be a structurally different technology from what exists today.

Let's look at each of these.

Decisions

An LLM acts only on what it is told to do, and it can keep acting in that capacity indefinitely. Most likely you would not give an AI a broad scope of functions, because the odds of it going off the rails are too high. Thus, in production you need to limit that scope. Sure, you could add layer upon layer and thereby widen the scope. But the AI will never decide on its own that there is a better way to do something, even when a new technology for doing it comes along.

Learning

Of course AI learns! However, once an LLM has been trained, it no longer learns anything new. Part of the reason is how neural networks work: if the model learns something new, it overrides something it already learned, and you don't have much control over what gets overwritten. Additionally, LLMs do not learn like humans. You can show a person how to do a simple task three or four times and they will be able to do it; an LLM needs to be shown thousands of examples during training. A human given a task with minor exceptions will often intuitively know how to handle those exceptions; an LLM has to be trained on them. The learning of LLMs is fundamentally different from how humans learn, and LLMs can't become a generalized intelligence without a better capacity to learn.
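The overwriting described above can be demonstrated with a toy model. This is a deliberately minimal sketch, not a real neural network: a single weight fit by gradient descent, where the training data and function names are made up for illustration.

```python
# Toy illustration of training overriding prior learning: a single-weight
# model fit by gradient descent "forgets" task A once it is trained on task B.
# train(), task_a, and task_b are hypothetical names for this sketch.

def train(w, data, lr=0.1, epochs=200):
    """Fit y = w * x by gradient descent on squared error."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x  # d/dw of (w*x - y)^2
            w -= lr * grad
    return w

task_a = [(1.0, 2.0), (2.0, 4.0)]    # task A: y = 2x
task_b = [(1.0, -3.0), (2.0, -6.0)]  # task B: y = -3x

w = train(0.0, task_a)
print("after task A, w =", round(w, 2))  # converges near 2.0

w = train(w, task_b)
print("after task B, w =", round(w, 2))  # converges near -3.0; task A is gone
```

With only one weight there is nowhere to store both tasks, so the second round of training erases the first. Real networks have many weights, but the same effect shows up at scale, which is part of why deployed LLMs are frozen after training.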

There have been some steps to incorporate reinforcement learning, likely influenced by Richard Sutton's essay "The Bitter Lesson". However, I would argue that reinforcement learning is only one of many forms of learning humans use. Another is imitation.

Memory

LLMs in their current state are terrible employees: they don't remember whether they completed a task or not, because they don't have memory. This is also likely an impediment to their ability to learn. There are workarounds that effectively increase the context window, but that isn't really the same thing as memory.
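The context-window workaround can be sketched in a few lines. This is an assumption-laden toy, not any real system's API: `build_prompt` is a hypothetical helper, and token counting here is a crude word count rather than a real tokenizer.

```python
# Minimal sketch of the context-window workaround: the model's "memory" is
# just recent messages re-sent with every request. Anything trimmed from the
# window is gone for good. build_prompt is a hypothetical name.

def build_prompt(history, max_tokens=25):
    """Keep only the most recent messages that fit the token budget."""
    kept, used = [], 0
    for msg in reversed(history):
        cost = len(msg.split())  # crude stand-in for real tokenization
        if used + cost > max_tokens:
            break  # older messages fall out of the window entirely
        kept.append(msg)
        used += cost
    return list(reversed(kept))

history = [
    "user: my name is Ada",
    "assistant: nice to meet you Ada",
    "user: here is a long unrelated question about something else entirely",
    "assistant: here is a long unrelated answer about something else entirely",
]
prompt = build_prompt(history)
# The earliest messages no longer fit, so the model would not "remember" the name.
```

Because the window is finite, older facts simply fall off the end. That is why a bigger context window helps but is not the same as memory: nothing is stored, only re-sent.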

Self Awareness

Machines by definition don't have self awareness, but I mean this in a more general sense. They don't know when they go off track. They often don't know when they are hallucinating. They aren't aware when they are doing something wrong. Again, the current fix is to give them a very narrow scope, but that is not generalized intelligence.
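The "narrow scope" fix amounts to an outer guardrail that the model cannot cross. A minimal sketch, assuming a hypothetical whitelist and `run_action` helper (not any real framework's API):

```python
# Sketch of a narrow-scope guardrail: since the model can't tell when it is
# off track, an outer layer permits only a fixed set of actions and rejects
# everything else. ALLOWED_ACTIONS and run_action are hypothetical names.

ALLOWED_ACTIONS = {"summarize", "translate", "classify"}

def run_action(proposed: str) -> str:
    """Execute a model-proposed action only if it is on the whitelist."""
    if proposed not in ALLOWED_ACTIONS:
        raise ValueError(f"action {proposed!r} is outside the permitted scope")
    return f"running {proposed}"
```

The check lives outside the model precisely because the model can't be trusted to notice its own mistakes, which is the point of the section above: the awareness is bolted on, not generalized.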

These limitations will not be fixed by scaling LLMs as they are currently built. LLMs can't become AGI with these limitations, and they will not take over all human jobs while they have them. However, you will lose your job if you don't learn how to be more productive with them. In the 1990s, if you didn't use the internet and email, your career would have stalled at best.
