Why AI is redefining legacy modernisation

by Michael Heß, Area Manager Software Development
Legacy systems do not fail because of their age. They fail because nobody really understands them anymore.
That sounds trivial. But it isn’t. In most companies, the actual knowledge about business-critical systems isn’t found in the documentation, but in the minds of three people who have been there since Y2K. And one of them is retiring next year.
Modernisation has therefore always been an organisational risk. Not primarily a technical one. Anyone seeking to renew a system they do not fully understand is making decisions on an uncertain basis. The result is oversized projects, political safeguards and stagnation.
AI shifts the bottleneck. The focus is no longer on months of manual reconstruction, but on faster, reproducible system exploration. While the decisions remain political and economic, they are less often based on gut feeling.
And that changes everything that comes after.
What legacy modernisation used to mean
Legacy modernisation is defined as the systematic renewal of existing IT systems to restore their maintainability, security and adaptability. That is the clean definition.
The reality, however, is quite different.
In practice, legacy modernisation means months of analysis phases in which teams attempt to reconstruct how a system works. The documentation is either missing, out of date, or both. A cost calculation with 14 special cases that only a single developer can explain. Dependencies that only become apparent during operation. And stakeholders who cannot agree on whether the system should be touched at all.
The pattern is almost always the same: the technical complexity is manageable. What slows projects down is a lack of transparency – regarding the actual state of the system, the real dependencies and the economic implications of individual decisions.
If you don’t know what you’ve got, you can’t make sensible decisions about what to do with it.
Why modernisation fails before it even starts
If legacy modernisation is clearly necessary, why is it so rarely carried out? In almost every project, three root causes appear.
Loss of knowledge
The biggest risk in legacy systems does not come from outdated technology. It is the lost knowledge. System logic that was never documented. Business rules that only exist in the code, and even there only implicitly. Or interfaces whose behaviour nobody can explain anymore because the person who programmed them has been retired for five years.
And the code is rarely the whole truth. In many legacy environments, crucial knowledge is embedded in database schemas, stored procedures, batch jobs, configurations and runtime behaviour. Anyone who only reads the source code therefore sees, at best, half the picture.
Legacy is often not old. Legacy is undocumented.
Invisible complexity
Monoliths look stable from the outside. Inside, they resemble grown landscapes: layer upon layer, workaround upon workaround, special logic for a customer who has long since left. This complexity did not arise maliciously. It is the result of years of pragmatic decisions made under time pressure.
The problem, however, is that nobody knows the actual scope of a change. So it’s better not to change anything.
Fear of risk
Almost every modernisation becomes a large-scale project. Large projects create governance. This, in turn, generates coordination rounds, steering committees and risk analyses of the risk analyses. In the end, you get a project plan so conservative that the system is effectively preserved.
If a system is ‘too critical to touch’, it becomes more critical with every passing month. That is not a paradox. It is the norm.
The real paradigm shift through AI
The obvious expectation is: AI automates modernisation. Code in, modern code out. That sounds good in keynote speeches. In practice, however, it is unrealistic.
The real leverage lies elsewhere.
AI does not primarily change how modernisation is done. It changes the basis on which decisions are made. That is a significant difference.
Previously, every modernisation started with uncertainty: What exactly does the system do? What dependencies exist? What happens if we intervene here? The answers, if they came at all, came from workshops, interviews and manual code analysis. Often the result was an expensive approximation of the truth.
AI turns this equation around. Not because it is omniscient, but because, given access to code and artefacts, it can do in days what teams previously needed months for: read systems, extract relationships and recognise patterns.
This not only reduces the effort involved. It also changes the quality of decision-making. Those who understand a system before they touch it plan differently. Budget differently. Prioritise differently. And that is precisely the difference between a modernisation project that fizzles out in the steering committee and one that achieves its goals.
How AI works in practice
Strategy matters. But at some point it has to become concrete. What exactly can AI do in legacy modernisation, how does it differ from existing tools and where are its limits?
Reconstruction of system knowledge
Static analysis tools and architecture scanners have been available on the market for years. They provide dependency graphs and metrics. What they cannot do is reconstruct business logic semantically. They cannot answer what a system does and why, only how it is connected.
AI, on the other hand, can read and analyse source code and translate it into comprehensible contexts. That means business logic that has lived only in code for years can be extracted. Interface behaviour can be described even when no diagram exists. Documentation can be generated where none exists. Test scenarios can be proposed based on actual system behaviour rather than assumptions.
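To make this concrete: before an AI model can summarise business logic, that logic first has to be located in the code. A minimal sketch of that first step, in Python for illustration, harvests every branch condition from a legacy module, since hard-coded thresholds in if-statements are where undocumented business rules typically hide. The sample function and its rules are invented for this example; real legacy code would of course be read from files.

```python
import ast
import textwrap

# Illustrative legacy code: a shipping rule that only ever lived in the source.
LEGACY_SOURCE = textwrap.dedent("""
    def shipping_cost(order_total, country):
        if order_total > 150 and country == "DE":
            return 0          # free-shipping rule, never documented
        if country in ("CH", "NO"):
            return 19.90      # non-EU surcharge
        return 4.95
""")

def extract_branch_rules(source: str) -> list[str]:
    """Return the source text of every if-condition in the module.

    Each condition is a candidate business rule that an AI summary
    (or a human reviewer) can then name, explain and document.
    """
    tree = ast.parse(source)
    return [ast.unparse(node.test)
            for node in ast.walk(tree)
            if isinstance(node, ast.If)]

for rule in extract_branch_rules(LEGACY_SOURCE):
    print(rule)
```

The point of the sketch is the division of labour: deterministic tooling finds the candidate rules with exact source references; the AI layer adds the semantic explanation on top.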
It is not the graph itself that is new. What is new is that teams can generate it faster, explain it and enrich it with domain context, including source references and identified uncertainties.
This does not replace developers. But it saves weeks of manual analysis that previously marked the start of every project and whose results were nevertheless incomplete.
Complexity becomes measurable
One of the most persistent challenges with legacy systems is that no one can quantify just how complex an intervention actually is. Decisions are made based on experience and gut feeling. Sometimes correct, often not.
AI enables something that was previously simply too time-consuming: systematic impact analyses across the entire codebase. It creates dependency graphs that do not need to be maintained manually, and risk scoring at module level. And it allows early hypotheses about the impact of changes to be formulated and then validated through testing and telemetry before a single line of production code is touched.
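What module-level risk scoring can look like is easiest to show with a toy example. The sketch below assumes the dependency map has already been extracted from the codebase (the module names and churn figures are invented) and applies one deliberately naive heuristic: modules that many others depend on and that change often are the riskiest places to intervene. Real tooling would use richer signals, but the principle, measurability instead of intuition, is the same.

```python
from collections import defaultdict

# Hypothetical module-level dependency map, as extracted tooling might emit it.
DEPENDS_ON = {
    "billing":   ["pricing", "customers"],
    "pricing":   ["customers"],
    "reporting": ["billing", "pricing"],
    "customers": [],
}
# Assumed churn metric: commits touching each module in the last year.
CHANGES_LAST_YEAR = {"billing": 42, "pricing": 7, "reporting": 3, "customers": 1}

def fan_in(graph: dict[str, list[str]]) -> dict[str, int]:
    """How many modules depend on each module (i.e. break if it changes)."""
    counts: dict[str, int] = defaultdict(int)
    for deps in graph.values():
        for dep in deps:
            counts[dep] += 1
    return counts

def risk_scores(graph: dict[str, list[str]], churn: dict[str, int]) -> dict[str, int]:
    """Naive heuristic: risk = fan-in x churn."""
    incoming = fan_in(graph)
    return {module: incoming[module] * churn.get(module, 0) for module in graph}

for module, score in sorted(risk_scores(DEPENDS_ON, CHANGES_LAST_YEAR).items(),
                            key=lambda kv: -kv[1]):
    print(f"{module:10s} risk={score}")
```

Even this crude score surfaces the intuition the three remaining experts carry in their heads: the billing module is where a change hurts most.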
Measurability instead of intuition. That may sound dry. For project owners, it is liberating.
Modernisation becomes economically scalable
Incremental modernisation was possible even before AI and often the better approach. The problem was that each step required new analysis, new context building and new risk mitigation. Transaction costs per increment were so high that small steps rarely paid off. So either everything was done at once or nothing at all.
AI changes this equation. Once an understanding of the system has been established, it remains available: it is reproducible, updatable and shareable. Test scenarios can be generated specifically for each module rather than across the board. The impact of changes can be estimated upfront instead of discovered in production.
The cost per modernisation step therefore drops. And that makes viable what was always the right approach anyway: step by step, validated, without all-or-nothing bets.
Four concrete outcomes of an AI-supported assessment
The previous steps – reconstructing system knowledge, measuring complexity and making modernisation economically scalable – culminate in a structured assessment. The result is not a set of slides with recommendations, but four actionable artefacts.
Knowledge Map: A documented overview of business logic, domain rules and system behaviour, extracted from code, configuration and data flows. The implicit knowledge that previously existed only in individual minds becomes explicit and shareable.
Maintainability Map: A structured evaluation of how maintainable the system is: How understandable is the code? How well is it documented? How high is the coupling? Where does the most effort arise during changes? This transforms maintainability from a vague feeling into a controllable variable.
Risk Register: A systematic overview of risks across all levels: from security vulnerabilities and code quality to dependencies on individuals and macro risks such as the availability of skills for a given technology stack.
Strategic Evolution Roadmap: A weighted roadmap that makes clear which modernisation paths are possible and what leverage they offer. It is not about individual recommendations, but about providing a basis for decision-making with options and the ability to think in scenarios.
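To give the Risk Register some shape: one plausible form for an entry, sketched here in Python with field names that are illustrative rather than any fixed schema, scores each risk by likelihood and impact and keeps source references as evidence, so the register stays traceable back to the system itself.

```python
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    """One entry in a hypothetical risk register (field names illustrative)."""
    risk_id: str
    level: str             # e.g. "code", "architecture", "people", "macro"
    description: str
    likelihood: int        # 1 (rare) .. 5 (almost certain)
    impact: int            # 1 (minor) .. 5 (critical)
    evidence: list[str] = field(default_factory=list)  # source references

    @property
    def severity(self) -> int:
        # Classic risk-matrix score: likelihood x impact.
        return self.likelihood * self.impact

register = [
    RiskEntry("R-001", "people",
              "Only one developer understands the cost calculation",
              likelihood=4, impact=5,
              evidence=["pricing module, cost-calculation routines"]),
    RiskEntry("R-002", "macro",
              "Skills for the technology stack are hard to hire",
              likelihood=3, impact=4),
]

for entry in sorted(register, key=lambda r: -r.severity):
    print(entry.risk_id, entry.level, entry.severity)
```

The value is not the data structure itself but the discipline it enforces: every risk carries a level, a score and, where possible, evidence pointing back into the codebase.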
Conclusion
For a long time, legacy modernisation was seen as a necessary evil. It was costly, risky and difficult to justify. The problem was not missing technology, but missing transparency.
AI changes that. Not through automation at the touch of a button. But by making visible what was previously hidden: system knowledge, dependencies, risks and leverage points. What used to take months of manual analysis is now available within a few days. What used to be intuition becomes a reliable basis for decision-making.
This does not make legacy modernisation any easier. But it does make it manageable. As a result, it transforms from a project that is constantly put off into a strategic option that is consciously utilised.
Those who understand their systems can change them. Those who can change them can innovate. Addressing technical debt today is not just about better software, but about enabling everything that comes next.
Transparency is the prerequisite for any good decision. AI provides it. What becomes of it is up to people.
FAQ
Is the use of AI in legacy modernisation safe?
Yes, if controlled. AI provides analyses, suggestions and drafts. It does not make autonomous decisions or deploy code. Every recommendation is reviewed by humans and existing governance processes. The risk is not the AI itself, but using it without clear guardrails.
Does AI replace developers?
No. AI augments developers. It takes over time-consuming, repetitive analysis tasks that previously dominated early project phases. Responsibility for architecture, business evaluation and implementation remains with humans. Productivity increases. Judgement remains irreplaceable.
Does this also work with COBOL, RPG or very old technology stacks?
Yes, as long as the source code is structured and accessible. The challenge is less the age of the technology and more the quality of available data. A well-structured COBOL system is easier to analyse than a chaotic Java project from 2015.
How complex is getting started?
Less than expected. An initial assessment in the form of a codebase analysis, dependency mapping and risk assessment delivers reliable results within a few days. It is not a major project, and no months-long preliminary study is required. Anyone who wants to know where they stand can find out quickly. Shameless plug: 7P offers such an assessment. It is the ideal starting point for considering modernisation.
Contact
Are you looking for an experienced and reliable IT partner?
We offer customised solutions to meet your needs – from consulting, development and integration to operation.