It’s an inescapable fact that any technology company will eventually have to deal with legacy code.
Systems naturally age over time and often become more difficult to change or understand. Business logic becomes tangled with implementation details in a sprawling monolithic codebase, or systems get stuck on older versions of languages or frameworks. Sometimes we can live with those risks for internal-only tools, but what about external, revenue-generating applications?
In this post I’ll share my team’s decommissioning journey and how we rebuilt a legacy frontend as a modern microservice. If you’re reading this and currently involved with a decommissioning effort yourself, hopefully parts of this journey will resonate. These projects aren’t easy, but they can be done!
So why was the legacy code a problem?
The old application code resided within a monolith, which made our releases slow, our environments difficult to configure, and testing cumbersome. The frontend used Jakarta Server Pages (JSP), a pattern that interweaves Java code with HTML. This worked fine at the time, but over the years it became increasingly misaligned with the rest of Rightmove’s frontend ecosystem.
Some of the pain points we experienced with this app included:
- Non-standard frontend stack
  Anyone making frontend changes needed some Java knowledge as well. Since this application wasn’t using React, we also couldn’t leverage our internal component library.
- Integration challenges
  It would have been difficult to integrate directly with modern in-house services, e.g. address matching.
- Poor user experience
  The design had become outdated, and there was limited flexibility to add new features or update it.
- On-premise application
  This reduced our flexibility around releases and configuring our CI/CD pipeline.
Actively trying to develop on an application like this presents a number of challenges and it was clear we couldn’t stick with the legacy frontend for much longer.
Microservices to the rescue!
We decided to rebuild the app from scratch into several cloud-based microservices. On the frontend, we paired React with an Express/Node backend-for-frontend, while on the backend we created a new service to manage the saving and retrieving of reports. We also leveraged an existing GraphQL API for fetching data. This allowed us to separate concerns and align with the rest of the Rightmove architecture, as well as:
- Use shared component libraries and middleware
  The new app has a similar look and feel to other parts of the site, and we didn’t have to reimplement logic we already have libraries for.
- Implement a modular design for the different user flows
  We could easily request different data based on the user type and increase component re-use on the frontend for shared flows.
- Modernise the UX
  The new app has improved performance and better accessibility across a variety of devices and screen sizes.
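As a rough illustration of that modular design, a backend-for-frontend can compose a GraphQL query from shared fields plus per-user-type extras. The user types, field names, and query below are hypothetical stand-ins, not Rightmove's actual schema:

```typescript
// Hypothetical user types and GraphQL fragments; the real names are
// Rightmove-internal. The point is the shape: shared fields plus
// per-type extras, composed into one query per flow.
type UserType = "agent" | "homeowner";

// Report fields every flow needs.
const BASE_FIELDS = `report { id createdAt status }`;

// Extra data requested only for a given user type.
const EXTRA_FIELDS: Record<UserType, string> = {
  agent: `branch { name address }`,
  homeowner: `property { address bedrooms }`,
};

// The BFF builds one query per user type, so each flow fetches only
// the data it needs while re-using the shared report fields.
export function buildReportQuery(userType: UserType): string {
  return `query ReportPage {
  ${BASE_FIELDS}
  ${EXTRA_FIELDS[userType]}
}`;
}
```

Keeping the per-type differences in data rather than in branching code is what lets the frontend share components across flows: the same report components render whatever fields come back.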
Innovation through a new PDF service
We were also able to innovate by introducing a new microservice to produce PDFs of the final reports. We opted for a solution using Puppeteer, which provides more flexibility around PDF generation than some other third party libraries.
The PDF flow is fairly straightforward:
- The service receives JSON input, containing all the fields required for the PDF.
- The JSON is passed into React components representing the PDF template.
- Static HTML is created from these React components.
- Puppeteer spins up a headless Chromium browser, loads the static HTML and any CSS, and produces a styled PDF.
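The steps above can be sketched roughly as follows. The input fields and template are hypothetical; in the real service the template is a set of React components rendered to static HTML (e.g. via `react-dom/server`), whereas plain string templating is used here to keep the example dependency-free. The Puppeteer step is shown as a commented outline:

```typescript
// Hypothetical report input; the real service accepts a richer JSON payload.
interface ReportInput {
  title: string;
  ownerName: string;
}

// Steps 1-3: turn the JSON input into static, styled HTML for the PDF.
// (In production this is React components rendered with renderToStaticMarkup.)
export function renderReportHtml(input: ReportInput): string {
  return `<!DOCTYPE html>
<html>
  <head><style>h1 { font-family: sans-serif; }</style></head>
  <body>
    <h1>${input.title}</h1>
    <p>Prepared for ${input.ownerName}</p>
  </body>
</html>`;
}

// Step 4 (outline): Puppeteer loads the HTML in headless Chromium and
// prints it to PDF, along these lines:
//
//   const browser = await puppeteer.launch();
//   const page = await browser.newPage();
//   await page.setContent(html, { waitUntil: "networkidle0" });
//   const pdf = await page.pdf({ format: "A4", printBackground: true });
//   await browser.close();
```

Because the template is just HTML and CSS rendered by a real browser, styling the PDF is the same job as styling any other page.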
The big advantage of rendering in a browser environment is that it fully supports CSS3 and JavaScript, so we can build complex, well-styled layouts with relative ease. It also means we can skip the steep learning curves and restrictive component sets of other PDF libraries. To further sweeten the deal, the approach is generic enough that other microservices across the company can use it too! 💪
Challenges and lessons learned
So far, I’ve condensed several years of work into just a few paragraphs. Although it may sound like a linear journey, it definitely wasn’t smooth sailing the entire time. As a team, we overcame late changes to requirements that were outside our control, complex pieces of work around permissions and user types, and finally challenges around migrations and onboarding. We already had beta users on the new app, so minimising disruption to them was a critical requirement alongside delivering the remaining features. In a sense, this meant finding ways to perform gradual open-heart surgery on the new application.
A genuine highlight to come out of those challenges is that we achieved our goal of minimising user disruption. This came down to a few key practices, which were a mix of things we were already doing and processes we adopted.
- Release strategies 📋
  We planned carefully for edge cases and always had a rollback strategy for releases. This meant no user downtime while we continued to add features and change core flows.
- Front-loading complexity 💭
  We tackled the hardest problems as early as possible, saving us from painful surprises later.
- Balancing our focus ⚖️
  We sprinkled in slower sprints to address tech debt alongside the intense feature-focussed sprints.
- Proactive monitoring and alerting 📈
  This was in place from the very beginning, letting us investigate potential issues before they affected users.
- Test coverage 🧪
  We implemented functional tests for the critical user types and journeys early on. This caught several bugs during development and let us release new changes with confidence.
- Scaling onboarding 🚌
  We built internal backend mechanisms for bulk user onboarding, which saved us from managing permissions one at a time for thousands of users.
Each of these lessons strengthened the project and how our team approaches this kind of large-scale decommissioning work in general (though sadly we’re not done with legacy systems just yet 😄).
Parting thoughts
The new tool is now live and already making a measurable difference. It’s faster, easier to maintain, and better aligned with the rest of the company architecture. On top of that, our PDF service is ready to be reused and start supporting other teams.
Decommissioning is not a glamorous topic, but it is a necessary one. Hopefully this post has shown that with the right approach, it’s possible to modernise and innovate while keeping user disruption low. If we can do it, you can too!





