Legacy Replacement Playbook
Problem Statement
Many organisations have legacy systems (and the same applies equally to legacy processes). These are usually characterised by more than one of the following:
- Documentation and/or source code is not available,
- The hardware platform is outdated and/or costly,
- Only a few people have any idea of what the system actually does,
- The system's lack of flexibility is holding back the business,
- As a result, the risk of change (or of failure) is unacceptable.
Many startups accumulate legacy as they grow, usually in the form of data in spreadsheets and the processes (stored in people's heads) used to manage them.
The common approach is to cobble together a partial replacement, or to copy what the system is believed to do (often trying to write it down first as a requirements document), build the copy, then "big-bang" replace it, with fingers crossed.
Alternatively, the code can be machine-converted to another language (e.g. COBOL to Java), but this often leaves you with a nightmare of converted code that is harder to maintain than what you started with; and one of the primary reasons for replacement is usually that the system needs altering in some way for compliance, regulatory, or business purposes.
We believe there is a better way...
Rulevolution Approach
The approach is applicable whether a batch, inline (straight-through processing), or screen-based environment is being replaced, although some of the finer details will differ:
- First, data is captured from the old system (or read from a database); volume does not matter, but ideally a wide range of examples should be found. Rulevolution allows this data to be visualised in a structured manner known as a knowledge (or conceptual) graph,
- Consulting with existing personnel, the processes are replicated within the Rulevolution system. Bugs are often found, and informed decisions will need to be made on whether to replicate them as well (often necessary for downstream systems) or to correct them,
- Once believed to be complete, the processes can be run against larger datasets; differences can be isolated and answers found (sometimes debugging the existing code may be required, if it is available). The processes can then be refined and this step repeated,
- With the processes replicated, a period of parallel running is recommended to confirm the duplication,
- The processes can then be evolved to the new requirements (where required) in a controlled and incremental manner.
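The "run against larger datasets and isolate differences" step above amounts to pairing up the outputs of the old and replicated systems and reporting every field that disagrees. A minimal sketch, assuming both systems emit records as dictionaries with a shared key field (the function and field names here are illustrative, not part of Rulevolution):

```python
def isolate_differences(legacy_records, replica_records, key="id"):
    """Pair records from both systems by key and report mismatching fields."""
    legacy_by_key = {r[key]: r for r in legacy_records}
    diffs = []
    for rec in replica_records:
        old = legacy_by_key.get(rec[key])
        if old is None:
            # Replica produced a record the legacy system never did.
            diffs.append((rec[key], "missing in legacy output", None, None))
            continue
        for field, legacy_value in old.items():
            if rec.get(field) != legacy_value:
                diffs.append((rec[key], field, legacy_value, rec.get(field)))
    return diffs

# Example: one record matches, one differs in a single field.
legacy = [{"id": 1, "amount": 10}, {"id": 2, "amount": 20}]
replica = [{"id": 1, "amount": 10}, {"id": 2, "amount": 21}]
print(isolate_differences(legacy, replica))  # [(2, 'amount', 20, 21)]
```

Each difference found is a question to put to the existing personnel (or to the old code, if available); the replica is refined and the comparison re-run until the list is empty.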
Via this route a thorough replication of the existing system can be proven (more so than with other approaches), a new system is available for running in a new environment (e.g. cloud) quickly, and the risk to the organisation is quickly reduced. Incremental, controlled, and version-controlled adjustments can then be made to evolve the behaviour to the final required state.
The benefits of this approach:
- Actual examples are a much better way to extract tacit knowledge from the heads of experts (who have an idea of what the system achieves) than asking them to write it down or to review a document for omissions,
- The replicated processes can be “used as is” to reduce costs immediately,
- The replicated processes can be tested against (or used to test) any future system,
- Running the replicated processes in parallel for a period reduces the risk of future issues; boundary conditions can be used to flag "new" or unseen data for checking, rather than just processing it blindly,
- System evolution then comes down to incremental (and controllable) steps, removing the risk of a large "big-bang" implementation,
- At any point the requirements can be exported as a document (coming soon), so you are never at risk of an unknown system or lock-in.
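The "boundary conditions" idea mentioned above can be as simple as recording, from the data already captured, the observed range of each numeric field and the set of values seen for each categorical field, then flagging any live record that falls outside them. A hypothetical sketch (the function names are illustrative, not Rulevolution's API):

```python
def learn_bounds(training_records):
    """Record min/max for numeric fields and the set of seen values otherwise."""
    bounds = {}
    for rec in training_records:
        for field, value in rec.items():
            if isinstance(value, (int, float)):
                lo, hi = bounds.get(field, (value, value))
                bounds[field] = (min(lo, value), max(hi, value))
            else:
                bounds.setdefault(field, set()).add(value)
    return bounds

def flag_unseen(record, bounds):
    """Return the fields of `record` whose values were never seen in training."""
    flagged = []
    for field, value in record.items():
        seen = bounds.get(field)
        if seen is None:
            flagged.append(field)          # entirely new field
        elif isinstance(seen, tuple):
            lo, hi = seen
            if not (lo <= value <= hi):
                flagged.append(field)      # outside observed numeric range
        elif value not in seen:
            flagged.append(field)          # unseen categorical value

    return flagged

bounds = learn_bounds([{"country": "UK", "amount": 10},
                       {"country": "FR", "amount": 50}])
print(flag_unseen({"country": "DE", "amount": 30}, bounds))  # ['country']
print(flag_unseen({"country": "UK", "amount": 99}, bounds))  # ['amount']
```

Records with flagged fields are routed for human checking during the parallel run instead of being processed blindly.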
If this sounds like a better way, please leave us a message...