Core banking implementations bear a striking resemblance to changing the engines of a plane while it is in the air, as the context here is not very different. The bank implementing the solution is, in most situations, a live, running and operational entity, and when the core engine that runs the bank has to be changed, it has to be done with zero disruption to its operations and minimal inconvenience to its customers. Ask anyone who has recently been through an implementation, and you would hear them agree wholeheartedly, and also add that this is easier said than done!
So what does a typical core banking implementation entail? What does it take for a programme to be successful, or at least to avoid the most common mistakes and pitfalls? Where do implementations tend to go wrong, and how does one pre-empt it? While the questions and answers may be unending, this article identifies the seven phases of a typical core banking transformation: what each phase is about and what it means to the project, what happens in it, what to look out for and, more importantly, how to avoid some of the errors that are all too common and sometimes all too inviting. Having successfully executed over 40 large technology transformation programmes, from fast-track eight-month implementations to large end-to-end four-year programmes, there are indeed quite a few things that one learns along the way, and this is an attempt to share the key ones.
Programme planning: the journey starts
Perhaps the most obvious activity for any large project is the plan. As they say, if you fail to plan, you are planning to fail.
The activities, inter-dependencies and pre-requisites
Even before we get to the timelines, it is important that all activities governing the core banking programme, and all other aligned activities, are defined exhaustively. For example, if the bank is undertaking a major expansion programme on its channels, it cannot be carried out in isolation without impacting the core banking transformation, and vice versa. The identification of dependencies and pre-requisites is key.
Timelines, milestones and critical path definition
A typical core banking programme plan would have at least three to four thousand line items. It is important that at least ten major milestones and about 20 minor milestones are identified in the core banking programme, as they provide the guiding validation that the programme is proceeding on track. Some of these milestones also get linked to payments, and therefore become even more important to track, with clear definitions of entry and exit criteria.
A critical path is also drawn from the start to the end of the project, which helps in understanding the impact of any slippage on the final go-live date. It also helps to begin with the end in mind: for example, it is important to determine early whether the roll-out is likely to be 'big-bang' or phased.
Team definition and activity ownership
While the activities themselves are either the primary responsibility of the bank or of the vendor (or third-party service providers), it is also quite useful to ensure each activity is assigned to a specific owner, who tends to be a part of the core team. Identifying the key members of the team and clearly assigning their responsibilities is integral to this exercise. More importantly, it is critical that all key resources to be mobilised from the supplier's end are identified, with a clear onboarding plan for the programme.
Any programme plan or charter would be incomplete without an explicit definition of the modus operandi for communication, both internal and external. Internal communication covers the periodic status updates to all stakeholders, including the steering committee, and the format of reporting. External communication covers the key milestones and schedules at which customer and regulatory communications are to be made.
The plan, once finalised, would need to be agreed and signed off by all stakeholders.
There have been quite a few instances where more than one version of the plan is being followed, and that can be quite ominous: having a single view of the programme, with everyone aligned to it, is very important.
Customisation: taming the animal
Even before the right core banking solution supplier is identified, banks would typically have gone through a long process of determining the key gaps in the system that need to be customised, in order for the solution to meet the bank's requirements. One of the very first activities of the implementation programme is for the bank team to review the product in detail and revisit these gaps, so that they are fully understood by the supplier's development team and correctly reflected in the functional and system requirement specifications document. It is important to have this validated and signed off, as it thereafter becomes the key reference document for the product enhancement team, who typically sit offshore where the customisations are carried out.
Other than the interfaces required for the core banking platform to co-exist with other surviving applications, customisations generally comprise product-level changes as well as requirements prevalent in the region where the bank operates. Additionally, there will be 'bank-specific' customisations required to align with the way the bank operates. Now this is where the trouble starts. As long as the changes required in the system are critical, from a regulatory or regional-practice standpoint, they are quite inevitable and need to be accommodated. However, the floodgates tend to open when the bank team goes a little overboard and looks to tweak the system into doing what the bank has been 'traditionally' practising, from a process perspective. This can get as bizarre as buying an Airbus A380 and making it run on the road! It needs to be contained, and nipped in the bud. Remember, the fewer the customisations, the higher the chances of a successful implementation and a smooth sail thereafter.
A simple rule of thumb that helps determine if a customisation is required is this three-point checklist:
- Is the absence of this customisation likely to violate regulatory compliance norms?
- Would there be a serious deviation from the local/regional practices without this customisation?
- Is there a very high financial or customer service impact, should this customisation not be done?
If the answer to one or more of the above questions is yes, then you should allow the customisation to happen. Where the answer is no across all three, it is an obvious case for dropping. From my own experience, at least 50% of the initially identified customisation items do not have an affirmative answer to any of the above!
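The three-point checklist above amounts to a simple triage rule. The sketch below shows one hypothetical way to encode it; the field names and sample requests are illustrative assumptions, not from any real bank or tool.

```python
# Hypothetical sketch of the three-point customisation checklist.
# All names and sample data here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class CustomisationRequest:
    name: str
    breaks_regulatory_compliance: bool   # Q1: would absence violate regulation?
    deviates_from_local_practice: bool   # Q2: serious deviation from regional practice?
    high_financial_or_service_impact: bool  # Q3: major financial/service impact if dropped?

def should_customise(req: CustomisationRequest) -> bool:
    """Allow the customisation only if at least one answer is 'yes'."""
    return (req.breaks_regulatory_compliance
            or req.deviates_from_local_practice
            or req.high_financial_or_service_impact)

requests = [
    CustomisationRequest("Central-bank reporting format", True, False, False),
    CustomisationRequest("Replicate legacy screen layout", False, False, False),
]
approved = [r.name for r in requests if should_customise(r)]
rejected = [r.name for r in requests if not should_customise(r)]
```

Run against a real backlog, a rule like this makes the 'no across all three' cases, often half the list, immediately visible for dropping.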
Data migration: lock, stock and barrel
Data migration is the single longest track that starts way upfront in the programme, and continues till the last migration run, which is also called the final cutover. The whole idea is to ensure that all the data that is required to run the new core banking system gets migrated from the existing sources of data, including the current core banking platform and the aligned ancillary systems. Akin to the shifting of a house, the process of migrating data brings about some striking similarities:
- The migration plan is drawn bearing in mind what the ‘end-state’ requirements are. The key aspect here is to ensure the new platform has what it needs, and the objective is not to migrate everything that is presently in the old system.
- Just as we align furniture to a new home’s layout, the data that is migrated will need to be enriched or enhanced to ensure requirements are met. There are multiple approaches from having default fill-in values to an end-to-end enrichment programme based on the complexity of data and availability of time.
- A series of 'mock' migrations are conducted to fine-tune the migration logic, validate the migration code and, finally, to clock the migration time.
- It is important that the final cutover is executed within 24 to 36 hours, and the repeated mock migrations help sharpen the axe for the final cut.
What to watch out for is the readiness and accuracy of the approach adopted by the team assigned to validate the data migration. Remember, this is the activity that ensures all customer details, including account balances, and the financials of the bank are migrated accurately from one system to another.
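The validation run after each mock migration typically reconciles record counts, control totals and field-level values between the old and new systems. A minimal sketch of such a reconciliation is below; the account layout and tolerance are assumptions for illustration, not a real bank schema.

```python
# Illustrative reconciliation check run after each mock migration.
# The record layout ({"id", "balance"}) and tolerance are assumptions.
def reconcile(source_accounts, target_accounts, tolerance=0.0):
    """Compare counts, balance totals and per-account values across systems."""
    issues = []
    # Control check 1: record counts must match
    if len(source_accounts) != len(target_accounts):
        issues.append(f"count mismatch: {len(source_accounts)} vs {len(target_accounts)}")
    # Control check 2: total balances must match within tolerance
    src_total = sum(a["balance"] for a in source_accounts)
    tgt_total = sum(a["balance"] for a in target_accounts)
    if abs(src_total - tgt_total) > tolerance:
        issues.append(f"balance mismatch: {src_total} vs {tgt_total}")
    # Field-level check: every migrated account must match its source record
    src_by_id = {a["id"]: a for a in source_accounts}
    for acct in target_accounts:
        src = src_by_id.get(acct["id"])
        if src is None:
            issues.append(f"unexpected account {acct['id']}")
        elif src["balance"] != acct["balance"]:
            issues.append(f"account {acct['id']}: {src['balance']} -> {acct['balance']}")
    return issues

old = [{"id": 1, "balance": 100.0}, {"id": 2, "balance": 250.5}]
new = [{"id": 1, "balance": 100.0}, {"id": 2, "balance": 250.0}]
problems = reconcile(old, new)
```

An empty issues list after the final mock run is one signal, among many, that the cutover logic is ready.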
Parameterisation: the fine print
It is always amazing to note that no matter how many implementations are done with the same product, and no matter how similar two banks are in terms of geography, size, operating model, regulations and so on, there are always significant variances in the subtle nuances of what products and services they offer, what procedures are adopted, and how the financials are recorded and reported.
Well-established global core banking solutions address these differences through a very important facet of the implementation, called parameterisation. From defining the key variables that build the attributes of a product, to the segments of a customer and the aligned fee structure, to the accounting treatment to be followed and the resultant financial reporting, all of the fine print of the bank gets captured and defined in the system through this phase. Ensuring that a core team from the bank becomes fully familiar with the parameters and the application is key to ensuring the needs are logically articulated and captured in these parameters. Some forward-looking banks also leverage this opportunity to develop system-aligned process flow documents (PFDs) that map the key processes of the bank, reflecting the steps to be executed both within and outside the system. The PFD also helps to align the various roles of members within the bank, and serves as a quick reference document both for testing the solution flow and for training the end users.
Testing: the defining moment
If one lists all the core banking programmes that have failed in the history of technology platform transformations, most of them are likely to have been called off during testing. The testing phase, as it is rightly called, is where the bank gets to validate whether the software is ready to be rolled out, and there are three important questions that need confirmation here:
- Is the product doing all that it is supposed to do – for which the bank had invested in it?
- Have all the customisations that the bank asked for, been delivered and are the parameterised values working well?
- Has the data used for testing been migrated accurately?
The experience of the User Acceptance Test (UAT) is always the lead indicator of what end users are likely to experience once the product is rolled out, and it is therefore important to ensure this is managed well. Large core banking implementations typically have at least four to five rounds of UAT before there is a general consensus for the product to be rolled out. Additionally, a series of System Integration Tests (SIT) are conducted prior to the UAT to validate the technical aspects of the system and the interfaces that have been built. There are also specialised third-party testing vendors whose services are leveraged for this activity, and practices around Agile testing methods are being adopted as well. The key is to start this activity quite early in the game.
In addition to the UAT, there are two other tests that banks are increasingly looking to conduct, and quite rightly so. The first is performance testing, wherein the speed and performance of the system are validated on the specific hardware, to ensure the response time and end-user experience (and, in the case of channel transactions, customer experience) are assured. The second is penetration testing, which validates that there are no soft spots for external access into the system. This is all the more important where the system is exposed on the internet.
Training: unlearning and learning
No matter how good the system is, and no matter how well the product is customised, parameterised and tested, if the training on the product is incomplete or insufficient, there is every likelihood that the product gets 'disowned' by the users. The risk of this is especially large where the users are used to an old platform and the merits and benefits of the new platform are not sufficiently explained and appreciated. The 'unlearning' of the old ways of doing things is equally important, if not more so, than the learning of the new platform.
One of the most common errors many banks make is to limit training to technology or system training in the classroom. This false comfort results in serious challenges after the system goes live, as users find it difficult to apply the knowledge in the real world. Training should not only impart knowledge of the new screens and the processes around them, but also ensure the users have actually 'played with' the new system in their own environment (and not just in the classroom or via CBT). This is also addressed by way of 'business simulations', where all users across the bank simulate a normal working day, posting transactions just the way they would after go-live, so that the efficacy of the process and the accuracy of the system and its reporting are assured.
Going live and roll-out: rubber hits the road
When the big moment does arrive, there will be a stage where the marathon becomes a little tiring, and you want to get the system rolled out as soon as you can. There will always be two schools of thought: one believes it is time 'to take the plunge', while the other says exactly the opposite, that 'it's too deep to leap now'. The decision should therefore not be left to how people think or feel about their respective readiness, but should leverage a structured framework to measure things holistically. Cedar's RAPID framework (see p29) is extensively used to determine if the bank is indeed ready to launch the new platform, measuring readiness across five parameters: Resources, Application, Processes, Infrastructure and Data.
Needless to say, the above is not meant to be exhaustive, but rather a definitive and vital checklist for the launch of the core banking platform. Whether it is a 'big-bang' go-live or a phased roll-out, unless you have a green light across all of the above parameters, it would be a little premature to announce the arrival of the newborn. That said, the success and efficacy of the new system is not measured by how smooth the cutover was, but almost always by how good the experience is after the system goes live. And that, as we discussed before, is a function of how well the users learn the new platform and unlearn the old ways of doing things. After all, the only constant, as they say, is change!
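The 'green light across all parameters' rule for the RAPID readiness check can be sketched as a simple gating function. The green/amber/red scoring scale below is an assumption for illustration; Cedar's actual framework is richer than a boolean gate.

```python
# Sketch of a go/no-go gate across the five RAPID parameters.
# The green/amber/red status scale is an illustrative assumption.
RAPID_PARAMETERS = ["Resources", "Application", "Processes", "Infrastructure", "Data"]

def ready_to_go_live(status: dict) -> bool:
    """Green light only when every RAPID parameter reports 'green'."""
    return all(status.get(p) == "green" for p in RAPID_PARAMETERS)

# One amber parameter is enough to hold the launch
status = {"Resources": "green", "Application": "green", "Processes": "amber",
          "Infrastructure": "green", "Data": "green"}
```

The point of the gate is exactly the one made above: a single amber or red, on any parameter, makes the launch premature.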