
Maximilian Schwarzmüller

AI Has A Favorite Tech Stack. That's A Problem!


Large Language Models (LLMs) have rapidly become crucial tools for (web) developers.

Whether accessed through direct interfaces like ChatGPT or Gemini, integrated into coding environments like Cursor, or used to power app builders like Lovable or v0, their ability to quickly generate code snippets and even entire (simple) application structures is very helpful.

Ask one of these models to build the frontend of a web application, and you’ll likely get functional code, often even visually decent for an initial draft (things often fall apart once applications become more complex, though).

But there’s a problem!

There’s a high likelihood of getting the same default tech stack over and over again: React, paired with Tailwind for styling and often ShadCN for components.

If you ask for a web app, that’s what you’ll typically get (for simple apps it might also just be HTML + JS + CSS, though).

Unfortunately, this presents a potentially significant problem for the future of web development.

Innovation? 😔

The most obvious concern with this default tech stack is its impact on competition.

If the go-to tool for generating frontend code consistently favors one ecosystem (React, Tailwind, ShadCN), what happens to alternatives like Angular, Vue, or the myriad of smaller, innovative frameworks and libraries?

While some might welcome a reduction in “framework fatigue” (2019, anyone?), a lack of competition is rarely beneficial in any field. After all, competition is one of the key drivers of innovation.

It pushes developers to find better ways of doing things, leading to new ideas, improved performance, and diverse approaches to solving problems.

Different tasks often benefit from different tools. Framework A might be perfect for a complex enterprise application, while Framework B is ideal for a lightweight, static site. If LLMs (almost) always suggest only one path, developers might become less aware of alternatives that could be better suited for specific needs.

This reduces overall choice and adaptability in the ecosystem!

Of course, this does not mean that Angular, Vue, and many other frameworks are suddenly going to disappear. They have large user bases, corporate backing, and a wealth of existing applications built on them.

But innovation outside the dominant stack could be hampered, making the ecosystem less vibrant and less responsive to diverse developer needs over time.

Who Cares About New Versions?

Beyond the issue of tech stack homogeneity, there’s another, perhaps more hidden, problem: version stagnation.

LLMs are trained on vast datasets scraped from the internet, including mountains of existing code. While models are periodically updated, there’s a delay between new versions of frameworks and libraries being released and that knowledge being incorporated into the model’s training data.

This could lead to LLMs habitually generating code based on outdated versions of React, Tailwind, or other libraries.
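
To make that lag concrete, consider React’s root API: React 18 (released in 2022) replaced the long-standing ReactDOM.render entry point with createRoot, yet a model trained mostly on older code may still happily suggest the legacy call. A minimal illustration (the <App /> component here is just a placeholder):

```tsx
// Both snippets mount the same component; only the second reflects React 18+.
import { createRoot } from "react-dom/client"; // React 18+ entry point
// import ReactDOM from "react-dom";           // pre-18 entry point

function App() {
  return <h1>Hello</h1>;
}

// What a model trained mostly on pre-2022 code tends to produce:
// ReactDOM.render(<App />, document.getElementById("root"));

// The current React 18+ equivalent:
createRoot(document.getElementById("root")!).render(<App />);
```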

While model updates might eventually include newer versions, we could face a persistent lag - potentially six months, a year, or even more - between a technology evolving and the LLM catching up.

Making things worse, if a significant amount of new code being added to the internet is itself generated by LLMs using older versions, future models trained on this data might perpetuate the use of these outdated practices. We’ll end up with self-fulfilling prophecies.

Unless models are specifically fine-tuned or designed to prioritize up-to-date information, they could get stuck in a loop of generating legacy code.

There Are Solutions

There are potential ways to mitigate the outdated code problem.

LLMs could be fine-tuned to better understand versioning or, more practically, leveraged in conjunction with tools that provide access to current documentation.

Techniques like Retrieval Augmented Generation (RAG), where the model consults external, up-to-date information sources before generating a response, could help ensure code reflects the latest best practices and API calls.

Tools like Cursor, or MCP (Model Context Protocol) servers integrated into editors, could build this bridge, allowing the AI to pull in relevant, current documentation for the libraries it intends to use.
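
As a rough sketch of what such a bridge could look like: fetch the current documentation for the libraries the model intends to use and feed it into the prompt before any code is generated. The callModel helper and the docs URL below are placeholders, not a real API.

```ts
// Minimal RAG-style sketch: pull current documentation into the prompt so the
// generated code matches today's APIs. `callModel` and the docs URL are
// placeholders for whatever model client and documentation source a tool uses.

async function fetchCurrentDocs(library: string): Promise<string> {
  // Hypothetical docs endpoint; a real tool might hit the official docs site,
  // a package registry, or an MCP server instead.
  const res = await fetch(`https://docs.example.com/${library}/latest`);
  return res.text();
}

async function generateWithCurrentDocs(
  task: string,
  libraries: string[]
): Promise<string> {
  const docs = await Promise.all(libraries.map(fetchCurrentDocs));

  const prompt = [
    "Use only the APIs described in the documentation below.",
    ...docs,
    `Task: ${task}`,
  ].join("\n\n");

  return callModel(prompt); // placeholder for the actual LLM call
}

// Assumed to be provided by the editor, tool, or SDK in use.
declare function callModel(prompt: string): Promise<string>;
```

Whether this lives in the editor, a CLI, or the model provider’s own tooling is an implementation detail; the point is that code gets generated against retrieved, current documentation instead of the model’s training snapshot.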

However, while such solutions might address the versioning issue for popular libraries, they don’t fully solve the problem. They might still struggle with smaller, less well-known libraries.

More fundamentally, accessing the latest documentation doesn’t resolve the core issue of the LLM defaulting to one specific tech stack in the first place.

A Different Approach to Information

Consider how we used to research how to build something before LLMs became ubiquitous.

A Google search (despite its own modern issues with ads and SEO spam) would typically yield results from multiple sources, surfacing different approaches, frameworks, and libraries. You could hardly miss the fact that there are options and alternatives.

With an LLM, a simple prompt like “build a web app frontend” often results in a single, large code block using the preferred default stack. There’s no presentation of alternatives, no explanation of why React + Tailwind + ShadCN was chosen over Vue + Bootstrap, or Svelte + your-favorite-CSS-framework.

Discovering alternatives then requires the user to already possess the knowledge that other options exist and explicitly ask the LLM to use them. This demands a level of programming knowledge and experience that simply asking for “a web app” does not. It creates a barrier to entry for newcomers and limits the exposure of more experienced developers to stacks outside the LLM’s default bubble, especially if the number of actively maintained alternatives dwindles.

And this dynamic isn’t just about frontend development.

Ask for a simple backend, and Node with Express is a common suggestion.

Need a utility script? Python is frequently the default.

The tendency towards a predominant, AI-preferred answer seems to span different programming domains and potentially extends beyond programming entirely.

The Path Forward

It’s not inevitable that we end up in a world with severely limited technological choice. Existing applications built with Angular, Vue, or other stacks won’t suddenly vanish just because LLMs favor React. Companies won’t rewrite massive codebases overnight.

However, the subtle, continuous pressure of newly generated code consistently favoring a narrow set of technologies could, over time, steer the ecosystem. It might make it harder for new frameworks to gain traction, reduce the incentive for maintaining less popular ones, and narrow the skill sets of developers who rely heavily on AI defaults without exploring alternatives.

This trend towards a potential monoculture is a development worth watching closely: a limited set of technologies could come to dominate the landscape not because they are objectively superior for all tasks, but simply because they are the easiest default output for powerful generative models. It highlights the need for developers to remain critical, informed, and proactive in exploring the rich, diverse ecosystem of tools available, rather than passively accepting the AI’s default suggestions.