
LLMs are the new fire. JEPA has the potential to be the new electricity.

April 25th, 2025
By: Skip van der Meer


Skip van der Meer
svandermeer@yellowtail.nl

On 31 August 1955, John McCarthy, Marvin Minsky, Nathaniel Rochester and Claude Shannon coined a term that is now so commonplace that not a day goes by without hearing it: "Artificial Intelligence". As of February 2025 – three Matrix films, an Ex Machina and an I, Robot movie later – ChatGPT has approximately 400 million (!) weekly active users.

In recent times, Large Language Models (LLMs) have brought about a leap forward similar to what the discovery of fire once meant to humanity: suddenly there is a powerful tool that completely changes the world. At the same time, we also know the limitations and risks. This blog dives into that "talking fire", showing its possibilities, as well as its dangers. Next, we take a look at the potential successor to the current generation of LLMs: Joint Embedding Predictive Architecture (JEPA), which by some is already labelled as the new "electricity" that can further transform the world of AI.

What are LLMs and how do they work?

LLMs are AI systems trained on massive amounts of text, images and video. This wealth of training data allows the models to generate content and to interpret existing information in context – hence the term Generative AI. Examples include chatbots, automatic report generation and real-time risk calculations based on unstructured data.

In the financial sector – and beyond – LLM use cases are already in full development. This ranges from replacing telephone customer services to AI Co-scientists. The possibilities are almost endless and almost everyone uses them. However, it's important to be aware of the limitations of LLMs. An LLM is like a self-learning assistant that talks quickly, but is not always sure what it's talking about. As long as you deploy an LLM in a structured and manageable way, it can act as a huge accelerator. Replacing a financial advisor with a fully autonomous AI robot like the NS-5 from I, Robot? Well, that's still science fiction for now.

Limitations of LLMs

Despite their strengths in generating text, LLMs also have clear limitations – especially in the financial sector, where accuracy, reliability and risk management are key. For example, they don't really 'understand' the world the way people do. They have no common sense or mental representation of physical reality. LLMs predict purely based on patterns in language and the probabilities derived from them, without knowing whether a sentence is logical or true; they lack an understanding of the real world.

A direct consequence of this is the phenomenon of so-called hallucinations: answers that sound convincing, but that are factually incorrect. Compare it to Jordan Belfort selling stocks – you hesitate, but it sounds so good you almost have to believe it. These hallucinations can cause enormous problems in a business context. Take, for example, the lawsuit in which an Air Canada chatbot falsely promised a discount – and the company was ultimately held liable (source). This example shows how important it is not to blindly trust generative AI systems, especially in customer-oriented environments.

Another shortcoming of LLMs is that they lack long-term memory and planning skills. An LLM generates each subsequent word without an explicit plan for the entire sentence or paragraph. This can cause small errors at the beginning of an answer that ultimately grow into an incorrect or inconsistent outcome. In short, LLMs are great pattern recognizers in the world of language, but they lack a deeper understanding of what that language actually describes in the real world. In sectors such as finance, this is a significant problem. Some progress has been made recently by adding short-term and long-term memory, but that's not enough.
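The "one word at a time, no plan" point can be made concrete with a deliberately tiny toy, far removed from a real LLM: a greedy generator over bigram counts. Each next word is chosen purely from local statistics of the previous word, so there is no view of the sentence as a whole, and one locally plausible but wrong choice steers everything that follows (the corpus and function names below are illustrative inventions):

```python
# Toy illustration (not a real LLM): greedy next-word generation from
# bigram counts. Each word is chosen purely from the previous word's
# statistics -- there is no plan for the sentence as a whole.
from collections import Counter, defaultdict

corpus = "the rate is fixed . the rate is variable . the fee is fixed .".split()

# Count which word tends to follow which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(start, steps):
    """Greedily pick the most frequent next word, one word at a time."""
    out = [start]
    for _ in range(steps):
        followers = bigrams[out[-1]]
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

print(generate("the", 4))  # "the rate is fixed ."
```

Real LLMs replace the bigram table with a deep network over a long context, but the generation loop is the same in spirit: each step commits to a token before the rest of the answer exists.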

In short: LLMs make a lot possible, but can be frustrating at the same time. They are useful as assistants – not as advisors. Therefore, human supervision remains crucial. Only through professional assessment and critical thinking can we guarantee the quality and reliability of the output. After all, how can I trust an LLM to properly explain quantum mechanics to me, if it doesn't even know how many times the letter R appears in the word strawberry?

Applications in practice

As previously indicated in this blog, there are many valuable applications that can be realized with the current generation of LLMs and the potential still left in that architecture. It's therefore no coincidence that leading companies in the financial sector such as Klarna, Rakuten and Adyen are frequently using AI and LLMs to optimize processes and enhance customer experience. Setting this up properly and in a controlled manner is important. Without a clear framework, the application may spiral out of control or exhibit unwanted behaviour.

AI-powered mortgage advice platform

Conclusion is fully committed to generative AI. A textbook example of this is how Maarten Tellegen, Director of Hypact Advisor at Yellowtail | Conclusion, and his team developed an AI-driven platform that took mortgage advice to a higher level: from inventory and advice to brokerage and customer communication. With this platform, he lays the foundation for a data-driven and AI-enriched approach. Under Maarten's leadership, the platform is now being rapidly expanded with AI functionalities that, among other things, automatically read documents, extract customer data and check for accuracy. This means that applications are now more likely to be processed correctly ('first-time-right'), resulting in higher customer satisfaction and less workload for advisors. Even the preparation of the advisory report is automated using AI: customer meetings are translated into clear, substantiated reports, including personal motives — fully compliant, without additional manual work. Read the full article here.

Chat and voice bots make customer contact more efficient

Another example is how Yellowtail | Conclusion and Future Facts | Conclusion helped a large Dutch pension provider to improve the customer journey through generative AI. Where traditional chatbots often fall short, Yellowtail and Future Facts have developed a smart voicebot that, thanks to GenAI, can conduct natural, interactive conversations and provide customers with the right information on the spot. This solution increases customer satisfaction through faster and more personal contact. Instead of a standalone application, they jointly developed a broader GenAI framework that allows multiple AI solutions to be implemented with safety and scalability in mind. This framework takes into account risks such as hallucinated output and is aligned with existing IT processes.

JEPA: the potential of new electricity

LLMs have developed at lightning speed, but their limitations remain apparent. What if we could tackle those limitations? Yann LeCun (Meta), one of the three "godfathers of AI", highlights the fundamental shortcomings: "they don't understand how the world works and they can't remember anything, they can't reason like humans, nor can they plan" (Meta’s Yann LeCun Wants to Ditch Generative AI). That's why in 2023 his team developed JEPA: an AI model designed to let AI learn the way we humans do naturally: through experience and by forming a mental model of the world. Instead of making 'educated guesses' at the next word, JEPA focuses on representational learning.

At the risk of delving too much into the terminology, we can illustrate the difference between JEPAs and LLMs through a metaphor: imagine you are putting together a jigsaw puzzle and you need to find the next piece.

· The LLM approach:

Tries to guess which piece of the puzzle comes next by looking purely at shape and colour, piece by piece, without actually understanding the picture of the entire puzzle.

· The JEPA approach:

First forms a mental image of what the overall picture of the puzzle should be and then uses that to find the next logical piece. In other words, JEPA builds an abstract understanding of the whole puzzle rather than working from one isolated piece at a time.
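The puzzle metaphor can be sketched in a few lines of code. The sketch below is a deliberately simplified, assumed rendering of the JEPA idea, not Meta's actual architecture: an encoder maps both the visible context and the hidden target into an abstract embedding space, a predictor maps the context embedding to a guess at the target embedding, and the loss is computed between embeddings rather than between raw pixels or words (all weights and shapes here are arbitrary placeholders):

```python
# Conceptual JEPA-style sketch (simplified assumption, not Meta's actual
# model): predict the *embedding* of a hidden target from the embedding
# of the visible context, and measure error in that abstract space.
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W):
    """Toy encoder: a single linear map standing in for a deep network."""
    return W @ x

# Hypothetical input split into a visible context part and a masked target.
context = rng.normal(size=8)
target = rng.normal(size=8)

W_enc = rng.normal(size=(4, 8))   # shared toy encoder weights
W_pred = rng.normal(size=(4, 4))  # predictor operating in embedding space

z_context = encoder(context, W_enc)   # "mental image" of what is visible
z_target = encoder(target, W_enc)     # abstract view of the hidden piece
z_predicted = W_pred @ z_context      # predict the hidden piece's embedding

# A generative model would compare raw signals (pixels, words);
# the JEPA-style loss compares abstract representations instead.
jepa_loss = float(np.mean((z_predicted - z_target) ** 2))
print(f"embedding-space loss: {jepa_loss:.3f}")
```

The design point this illustrates: because the comparison happens in embedding space, the model is free to ignore unpredictable surface detail (exact pixel noise, exact word choice) and focus on what the scene or sentence means.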

The first JEPA model focused on image recognition. A video version has now been added to this: V-JEPA, which simply learns by observing. Like a child who understands its environment by looking, V-JEPA can passively learn context and recognize patterns, without active input. This way of learning opens the door to AI systems that can plan and reason more effectively and are less susceptible to hallucinations.

However, JEPA is still in its infancy. It's not a one-size-fits-all model that generates ready-made texts like ChatGPT. Additional components are needed to create useful output. But the potential is enormous.

Conclusion: two variants of progress

We've seen that LLMs are already creating impact and are widely applicable, but they have their limitations as well. On the other hand, new AI architectures such as JEPA are emerging, which show a lot of promise in solving those limitations, but are still under development. How quickly and how far both directions will develop is difficult to predict. One thing is certain: both technologies will continue to evolve and make an impact in their own unique way. That's why the comparison to "fire and electricity" is so fitting – fire brought us far, but electricity sparked an entirely new era. Perhaps the same will soon apply to LLMs and JEPA. Until then, I'll put on Ex Machina again and dream of a future of JEPA-powered Avas.
