By Clinton Jones
July 2, 2012 01:42 PM EDT
The recent train wreck that was the CA batch process failure at RBS should be ringing alarm bells for anyone running batch processes in their systems. More importantly, it should remind us of two things:
- Batch processing is real and present in many facets of IT data processing and systems at large
- Never take short cuts on adequate testing of changes and test planning
On a call I participated in a couple of months back with some students from an esteemed North American college, the conclusion was that what I was talking about and showing them wasn't very interesting, because much of what I was focusing on was batch processing of data.
For some naïve and deluded reason they were of the opinion that real-time OLTP was the more interesting story, and so they wanted to focus on that aspect of the Winshuttle technology stack rather than on the mass- and batch-related activities.
In reality, our world of data processing and systems relies heavily on both. I have to confess that I hardly ever give the batch vs. real-time distinction much thought, but this recent failure in the payment runs for thousands of banking customers brought home the importance of batch processing and reminds us that it is alive and well everywhere.
Payroll, transfers, interest calculations, reporting, diary actions, archiving, amendments, deletions, applications, returns, balancing, etc. are all activities that the banking industry often handles in batch processing cycles. In the longer term many of these batch processes may move to real-time, but there are significant benefits to the current batched approach.
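To make the idea concrete, here is a minimal sketch of the kind of job that runs in such a cycle: a nightly interest posting that sweeps over every account in one pass. The account identifiers, ledger structure, and `apply_daily_interest` helper are hypothetical, purely for illustration.

```python
def apply_daily_interest(accounts, annual_rate):
    """Post one day's interest to every account in a single batch pass."""
    daily_rate = annual_rate / 365
    for account in accounts:
        # Round to cents, as a real posting engine would
        interest = round(account["balance"] * daily_rate, 2)
        account["balance"] += interest
    return accounts

ledger = [
    {"id": "A-1001", "balance": 1000.00},
    {"id": "A-1002", "balance": 2500.00},
]
apply_daily_interest(ledger, 0.0365)  # 3.65% p.a., i.e. 0.01% per day
```

The point of the batch shape is that the whole sweep runs once, off-peak, against staged balances, rather than recalculating interest on every individual transaction in real time.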
Practically speaking, batch processing often represents a better value proposition for activities that involve pre-staged or staged data. Even if your system is a genuine real-time OLTP system, its statistical and key-figure reporting is likely still bound up in batch processing. Beyond this facet of summarization, aggregation reporting, and data staging, there is also the factor of cost: OLTP infrastructure capable of supporting everything you could conceivably want to do is expensive, and some processes are simply not urgent enough to demand that all system resources be available 24x7 on massively capable infrastructure.
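The summarization work mentioned above is a good example of what typically stays in batch even on an OLTP system: rolling the day's individual postings up into per-account totals for reporting. A minimal sketch, with hypothetical field names:

```python
from collections import defaultdict

def summarize_daily_transactions(transactions):
    """Roll individual postings up into per-account daily totals."""
    totals = defaultdict(float)
    for txn in transactions:
        totals[txn["account"]] += txn["amount"]
    return dict(totals)

day = [
    {"account": "A-1001", "amount": 25.0},
    {"account": "A-1001", "amount": -10.0},
    {"account": "A-1002", "amount": 5.0},
]
summarize_daily_transactions(day)  # {'A-1001': 15.0, 'A-1002': 5.0}
```

Running this aggregation once per cycle costs far less than maintaining live running totals on every transaction, which is precisely the value proposition the paragraph describes.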
Even point-in-time recovery of your system, for example, can be incredibly expensive in terms of system resources and equipment. On the hardware side we have features like database mirroring, hardware redundancy, multithreaded operating systems, and built-in resource redundancy for failover contingency. For the same reason, database software companies have developed technologies such as archive logs, which facilitate point-in-time recovery of the database without having to revert to the state captured at the last backup. This capability has improved disaster recovery and confidence that systems can be restored accurately even after unexpected mishaps.
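The archive-log mechanism can be sketched in miniature: restore the last full backup, then replay logged writes forward until the chosen recovery timestamp. The log format and key/value semantics here are hypothetical simplifications, not any particular database's implementation.

```python
def recover_to_point_in_time(backup_state, archive_log, target_ts):
    """Replay archived writes on top of a backup, stopping at target_ts."""
    state = dict(backup_state)  # start from the last full backup
    for ts, key, value in sorted(archive_log):
        if ts > target_ts:
            break  # stop replaying past the chosen recovery point
        state[key] = value
    return state

backup = {"balance:A-1001": 100}
log = [
    (1, "balance:A-1001", 150),
    (2, "balance:A-1002", 40),
    (3, "balance:A-1001", 90),
]
restored = recover_to_point_in_time(backup, log, target_ts=2)
# restored reflects the database as of timestamp 2, not the later write
```

Real systems (e.g. write-ahead logs in relational databases) are vastly more sophisticated, but the principle of backup-plus-log-replay is the same.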
As I said earlier, batch processes aren't going away any time soon; in fact, major analyst firms estimate that some 70-90 percent of enterprise integration requirements are for batch processes, and analyst research further suggests that batch processes often account for a significant share of planned system downtime. Moving that figure would take considerable cost and effort, and it doesn't necessarily make sense to invest in changing it even where the will to commit exists. If one considers that batch processes are often bound up in automation tools, one becomes acutely aware that these batched automation procedures actually provide greater visibility into the circumstances of the business and can provide reassurance about the integrity of processes. Compliance strictures introduced by legislation like Sarbanes-Oxley and HIPAA are more easily addressed in particular, since processing activity is more readily identifiable through batch processing audit reports that are easily produced and consumed.
My second point about this whole fiasco concerns testing and test planning. Ideally, any change you plan to institute in a system should be tested in a development environment, then performance- and regression-tested in a QA or pre-production environment, and only after all testing is complete and all issues are addressed should it be scheduled for application against productive systems.
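That promotion discipline can be reduced to a simple gate: a change moves to production only once every prior stage has passed. The stage names and result structure below are hypothetical, a sketch of the policy rather than any real deployment tool.

```python
# Required stages, in the dev -> QA -> production order described above
REQUIRED_STAGES = ["dev_tests", "qa_regression", "qa_performance"]

def ready_for_production(results):
    """Return True only if every required test stage has passed."""
    return all(results.get(stage) == "pass" for stage in REQUIRED_STAGES)

results = {
    "dev_tests": "pass",
    "qa_regression": "pass",
    "qa_performance": "fail",
}
ready_for_production(results)  # False: performance testing is not clean yet
```

The value of making the gate explicit is that a mid-week, partially tested change like the one at issue simply cannot be scheduled against the productive system.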
The RBS incident reminds us that even the most mundane of changes can have very far-reaching implications. There will no doubt be a lot of finger-pointing and deconstruction of the events that led up to the problem, but the most important questions to ask are: why did this change happen mid-week, and was it tested adequately before it was approved?
As the post-mortem of the event progresses, no doubt we will learn more. For ourselves, at least, we should take away the lesson and make sure that our own batch and non-batch processes are not put at risk by poorly planned, inadequately tested, and ill-prepared changes to our own systems.