There is much to learn from Ajit Kanagasundram, former managing director, technology, Asia Pacific at Citibank, who built the bank's global card processing capability from Singapore. This four-part series will reveal his insights into the people, processes and decisions made over the years that provide considerable lessons for generations to come.
November 16, 2017 | Ajit Kanagasundram
- Citibank successfully converted 12 businesses into a new standardised platform based on Systematics
- The Asia Pacific data centre was set up to handle banking support across Asia
- Citibank allotted a central budget of $500 million to rectify the issues brought by the Y2K phenomenon, which was later revealed to have been overhyped
This is the third of four parts. Click here for part one of this article, How the bank built the CORE; Part two: The global cards project; and part four: Lessons learnt from the “Global Go to Common” project. In this part, Kanagasundram shares the story behind the Systematics banking project implemented by Citibank across 12 geographies and the Asia Pacific data centres while preparing for the Y2K challenge.
In 1993, following the success of centralising cards processing, Rana Talwar decided to do the same for banking. This was much more complex than cards, as the banking businesses had varied processes in each country.
He appointed George Dinardo, responsible for setting up a computer centre for Mellon Bank, to head this project. The task ahead was daunting – convert 12 businesses into a new standard platform based on Systematics but significantly enhanced for Citibank, and transfer data centre processing to Singapore simultaneously. All this had to be done in less than five years to meet the looming Y2K deadline.
This project was executed like a blunderbuss, full steam ahead. Yet there were no major blunders, and the project was completed roughly on schedule, although we exceeded our budget by $12 million.
When the Systematics project in Asia was well underway, and clearly going to succeed, Europe started to show interest. This alarmed the New York office and, to show their relevance to the process, they came up with their own banking solution – a package called Sanchez – and claimed it was superior as it allowed for online, real-time posting of transactions. This argument was spurious, as real-time posting was unnecessary for consumer banking needs, where the illusion of real-time processing can be created at touchpoints like the ATM, teller, call centre or internet using shadow transaction files.
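The shadow-file technique mentioned above can be sketched in a few lines. This is an illustrative Python sketch, not Citibank's actual implementation; the class and method names are invented for the example. The idea is that the ledger balance is only updated by the overnight batch run, while intraday transactions accumulate in a shadow file that touchpoints add on the fly to present an apparently real-time balance.

```python
# Hypothetical sketch of the "shadow file" technique: batch posting runs
# overnight, intraday transactions go to a shadow file, and customer-facing
# touchpoints compute an apparent balance by combining the two.

class Account:
    def __init__(self, posted_balance: float):
        self.posted_balance = posted_balance  # from last night's batch run
        self.shadow = []                      # intraday, not yet posted

    def record(self, amount: float) -> None:
        # An ATM/teller/call-centre entry is appended to the shadow file only;
        # the ledger itself is untouched until the batch run.
        self.shadow.append(amount)

    def apparent_balance(self) -> float:
        # What the customer sees: batch balance plus shadow transactions.
        return self.posted_balance + sum(self.shadow)

    def end_of_day_post(self) -> None:
        # The overnight batch folds the shadow file into the ledger.
        self.posted_balance = self.apparent_balance()
        self.shadow.clear()

acct = Account(1000.0)
acct.record(-200.0)              # ATM withdrawal during the day
print(acct.apparent_balance())   # 800.0 – looks real-time to the customer
acct.end_of_day_post()
print(acct.posted_balance)       # 800.0 – ledger catches up overnight
```

The design choice here is exactly the one the article describes: the customer perceives real-time behaviour at every touchpoint, while the core system remains a simpler, more robust batch poster.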
The results were predictable – huge teams were set up in London and New York for requirement gathering and project management, with the vendor handling business-specific changes. There was never any real progress made and, with the Y2K deadline looming, the project was quietly cancelled without any fanfare in late 1998. The pilot business to be converted was Hungary, with only 2,000 customers, and even this was not achieved despite the expenditure of $17 million – many times more than this business would ever have made even if it had been converted.
The Systematics project, once completed, left the bank in Asia and CEMEA with an enhanced platform with a high degree of standardisation and full Y2K compliance by mid-1998. There were some variations in the code bases because two teams had developed the base code. In the event, this was easily corrected after Y2K by a project called the Enhanced Banking System (EBS).
The Asia Pacific Computer Centre
When I set up the Regional Cards centre, I was given the freedom to set up my own captive data centre, which I named the Asia Pacific Computer Centre (APCC). I decided to order the latest equipment (largely IBM) and, more importantly, recruit the best possible team. Together we built a superior and robust foundation that was later tasked to handle banking support across Asia as well with the Systematics project. When we did the Latam cards project, the APCC also extended support to Latin America.
This grew into a massive, world-class facility – the largest in Asia outside Japan, with superb technical infrastructure and Six Sigma service levels. From the outset we maintained a perfect 5 rating from audit. We also pioneered many firsts – the first to use mirrored mainframes connected by IBM technologies called Sysplex and CICSplex, the first to use a messaging product called MQSeries, the first to use a dedicated non-switched fibre optic connection for data centre back-up, and so on. But we were not blinded by the new – we remained with an older IBM communications protocol (Systems Network Architecture, or SNA) over the new protocol called TCP/IP, as the former was more robust and manageable, despite immense pressure from our communications unit. It was only when TCP/IP had matured and the tools were available to manage it that we switched.
When I eventually gave up the data centre in 1999 to be merged with the Corporate bank centre, it became the foundation for the combined facility and today runs the entire bank's data processing. The Corporate bank facility was eventually closed and processing moved to the APCC. The APCC was one of the less obvious reasons for the success of the International Cards and Systematics projects.
The India team independently developed an entire suite of applications for both cards and bank – a remarkable achievement. The two best examples of the talent and ideas coming out of India were the data analytics system and the mutual funds product processor.
S Ramakrishnan came to Singapore to develop and deploy the analytics system, called the data warehouse, for bank and cards. This was one of the first banking implementations of the data cubes that were the basis for analytics. He used off-the-shelf components that were commercially available. This data warehouse was used throughout Asia and was later taken up by Europe and Latam.
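The data cubes mentioned above are worth a brief illustration. The sketch below is not the warehouse's actual design (which used commercial components); it is a minimal Python example of the general technique: facts are pre-aggregated along every combination of dimensions, so analytic queries become simple lookups rather than full scans. All names and figures are invented for the example.

```python
# Illustrative only: a data "cube" pre-aggregates facts along dimensions,
# with None standing in for the "all" roll-up level of each dimension.
from collections import defaultdict
from itertools import product

transactions = [
    # (country, product, amount) – made-up sample facts
    ("SG", "cards", 120.0),
    ("SG", "deposits", 300.0),
    ("HK", "cards", 80.0),
    ("SG", "cards", 50.0),
]

# Build the cube: accumulate totals for every combination of
# (specific value, roll-up) across both dimensions.
cube = defaultdict(float)
for country, prod, amount in transactions:
    for c, p in product((country, None), (prod, None)):
        cube[(c, p)] += amount

print(cube[("SG", "cards")])   # 170.0 – cards sold in Singapore
print(cube[(None, "cards")])   # 250.0 – cards across all countries
print(cube[(None, None)])      # 550.0 – grand total
```

Pre-computing the roll-ups trades storage for query speed, which mattered greatly in an era when storage was cheap enough but scanning the full transaction history for every report was not.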
Similarly, Raghavan came to the Singapore regional office with a mutual fund processor developed for the India business, where I helped to transfer it from its Oracle platform to the IBM DB2 platform. It was a complete solution for managing funds and was later extended to cover share finance as well. This too was taken up by Europe.
The Y2K boondoggle
From the 1980s, all computer technologists knew that they faced a problem when the century changed on December 31, 1999 to January 1, 2000. This was because hardware microprocessors and software systems used two digits for the year in the date field. So computer applications with a two-digit date field would have seen January 1, 2000 as 01/01/00 – indistinguishable from January 1, 1900. This would have caused chaos in the date and, above all, interest calculation routines. This saving of two digits was done partly to save on storage (which was expensive in those days) but mainly because no one thought so far ahead or expected their systems to still be in use 20 years on. COBOL, the main language in which banking applications were written, had this problem.
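The failure mode, and the standard remedy, can be shown in a few lines. This is an illustrative Python sketch (the original systems were in COBOL); the function names are invented, and the "pivot window" shown was one common Y2K fix, not necessarily the one Citibank used.

```python
# Illustration of why a two-digit year field breaks date arithmetic
# across the century boundary, and how a pivot window repairs it.

def years_elapsed_2digit(start_yy: int, end_yy: int) -> int:
    # Legacy logic: subtract the two-digit years directly.
    # Works fine until the century rolls over.
    return end_yy - start_yy

def years_elapsed_windowed(start_yy: int, end_yy: int, pivot: int = 50) -> int:
    # A common Y2K fix: a "pivot window" interprets YY below the pivot
    # as 20YY and YY at or above it as 19YY before subtracting.
    def expand(yy: int) -> int:
        return 2000 + yy if yy < pivot else 1900 + yy
    return expand(end_yy) - expand(start_yy)

# A deposit opened in 1998 ("98") and valued in 2000 ("00"):
print(years_elapsed_2digit(98, 0))    # -98: interest routines would misfire
print(years_elapsed_windowed(98, 0))  # 2: correct after windowing
```

A negative elapsed time is exactly the kind of result that would have corrupted interest calculations, which is why the remediation effort, however overhyped, could not simply be skipped.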
In the Citibank case, a central budget of $500 million was set up to rectify this, and a corporate office was set up to monitor compliance by mid-1998, after which there would be a freeze on new developments. Every technology office had to have a Y2K compliance plan, monitored first monthly, then weekly, and in 1999 even daily (!). All businesses used this pot of money to rectify their systems and replace hardware, whether justified or not. Only the Asia business did not take a dollar for either cards or bank – we were able to do the Y2K rectification within our normal budget, as it was a point of pride for George Dinardo and his team to show the whole thing was overhyped.
Watch the TABLive interview with Ajit Kanagasundram.
Ajit Kanagasundram is formerly managing director, technology, Asia Pacific at Citibank. The views expressed herein are strictly of the author.
Categories: Financial Technology, Retail Banking, Technology & Operations, Transaction Banking