How To Read The Future Of Large Language Models

In the realm of artificial intelligence (AI), large language models (LLMs) are contributing to a slew of innovative use cases. I have been tinkering with these models for a few years now. Not long ago, I recall trying to stitch together an AI conversational partner (a goofy one at that) powered by GPT-2.
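To give a sense of what that early tinkering looked like, here is a minimal sketch of a GPT-2 toy conversational partner built with the Hugging Face transformers library. To be clear, this is an illustrative assumption, not the code from that original project.

```python
# A minimal sketch of a GPT-2 "conversational partner" of the kind
# described above, using the Hugging Face transformers library.
# This is an illustrative assumption, not the original project's code.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # keep the goofiness reproducible

# Frame the conversation as a plain text prompt; GPT-2 has no chat format
prompt = "Human: What should I have for breakfast?\nAI:"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(result[0]["generated_text"])
```

GPT-2's replies tend to wander off topic quickly, which is part of what made such early chatbots goofy rather than productive.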

Fast forward to the present, and here I am power-coding complex applications with the help of models like GPT-4. We went from goofing around to power productivity in a matter of a few years. What changed, and where are we headed?

As I work ever more intensively with and on large language models, I feel that I have some insight into their future and into how we (as a society) might cope with and benefit from them.


The Science Behind Large Language Models

Contrary to popular belief today, most of the fundamental science behind LLMs is not new; some of these concepts are even decades old. What is new is the innovation on two primary fronts:

1. Hardware

2. Engineering

Many Machine Learning and Artificial Intelligence “Researchers” would like to believe that the current state-of-the-art LLMs are marvels of science. I don’t think so.

These models are beautiful creations, but they do not add much to the body of scientific truths as we know it.

Instead, AI researchers use our current scientific knowledge to “engineer” complex adaptive systems (LLMs) that are able to solve the challenging problems that we throw at them.

Don’t get me wrong. I am not trying to take any credit away from our excellent AI researchers who have created such models.

I am just differentiating between “science” and “engineering”; one pushes our understanding of fundamental ground truths, and the other uses those ground truths to build useful tools for society.

Speaking of tools, none of these “engineering” feats would have been possible without the advancements in computational hardware that we have seen over the decades. That hardware is a key reason these complex adaptive systems are able to function so time-efficiently.

Who is Driving the Current State of the Art?

As I see the landscape, two groups of people are driving the current state of the art of LLMs:

1. AI and Machine Learning Researchers

2. Hackers and Enthusiasts

While the researchers deliver discrete jumps in technological advancement, the hackers and enthusiasts push the boundaries of these models and find interesting and unique applications for them.

The dance between these two unlikely allies makes for a fascinating and exciting time to be involved in this field.

While I have friends who are active researchers in the field, I belong more to the latter category than the former.


My Latest Project

I have been intensely working on a project that involves an image-generation large language model. At this point, you might be asking the following question:

“What does image generation have to do with language?”

Well, quite a lot, actually. Think of it this way: we use words all the time to describe images, and we use our “imagination” all the time to construct mental images from the words we hear.

In essence, our language is connected to images and vice versa. Throw in advanced image algorithms, tensor calculus, tensor algebra, and a healthy dose of engineering genius, and you are left with large language models that enable us to generate and edit images. What a fascinating time to live in!
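To make that connection concrete, here is a minimal, hedged sketch of text-to-image generation using a publicly available diffusion model via the Hugging Face diffusers library. This is not my project's code; the model checkpoint and settings are assumptions chosen purely for illustration.

```python
# A minimal text-to-image sketch using a public diffusion model.
# NOTE: this is an illustrative assumption, not the author's product;
# the model checkpoint and settings are placeholders for demonstration.
import torch
from diffusers import StableDiffusionPipeline

# Load a publicly available text-to-image checkpoint
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes a CUDA-capable GPU

# A plain text prompt is all it takes to bridge language and imagery
prompt = "a whiteboard-style sketch that reads 'The future of LLMs'"
image = pipe(prompt).images[0]
image.save("future_of_llms.png")
```

The point of the sketch is simply that language is the interface: the entire image is specified by a sentence.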

I am making rapid progress with my project, and all the signs indicate that it will launch as a product soon. The word “product” unfortunately also means that I have to be conservative about what I reveal about it here.

So, that is all I can share about my project at this point, I guess. But what about the future? Where is this journey headed?

How to Read the Future of Large Language Models?

First and foremost, I would like to openly admit that I possess no crystal ball and therefore, cannot say anything about the future with 100% certainty.

However, I do have some unique insights from which I can make some educated bets.

I won’t beat around the bush. Here are five educated bets that I am willing to make given my current knowledge of LLMs:

1. The use cases for LLMs will continue to increase geometrically.

2. LLMs will not function without human guidance/assistance. This is due to the tail risks that arise from edge cases. No matter how complex your tensor operations are, nature will feature edge cases that are at least an order of magnitude beyond their reach.

3. Advances in scientific truths will slow down the growth of LLMs and give birth to the next generation of (engineered) artificial intelligence architectures.

4. These newer generations of AI architectures (which are unlikely to be language models) will be the ones that slowly inch towards performing without human guidance/assistance.

5. LLMs will still strongly serve future AI architectures (that are not language models themselves) as these systems are likely to employ higher-order languages that are beyond human comprehension.

One or more of these points might be controversial takes, but I am not aiming to be scientifically correct here. These are my bets, given my current knowledge of LLMs.

The Future of Humanity

As the current generation of LLMs develops, I believe that we will continue to see more and more complex human problems get solved and automated. As a result, our society will become more efficient and environmentally friendly.

This will lead to another problem, though. Many human beings who are in the workforce today will not have much to do in the near-future economy. What then?

The future of LLMs — Illustration created by the author

One solution is Universal Basic Income (UBI). But this still does not solve the problem of “life’s purpose”. Another alternative could be human beings working under LLMs that aim to maximise human fulfilment as well as operational efficiency.

It is indeed a strange thought to work not only with LLMs but for them.

But given the rate of development of LLMs, I think it is wise for us to mentally prepare to cope with both the potentially fruitful and the adverse changes that await us in the near future.

