At the recent SAPPHIRE conference in Madrid, SAP Executive Board member Dr Vishal Sikka boldly declared that it is now within our ability to put an end to batch processing. Speaking about the opportunities that the in-memory database HANA offers, Sikka believes that just-in-time information processing will enable organizations to rid themselves of the need to run batch jobs.
Coincidentally, achieving this goal would fulfil a long-term ambition of SAP co-founder Hasso Plattner, who chose the letter 'R' to mean 'real-time' when naming the ERP solutions SAP R/2 and R/3. This advance could represent a radical change to the way many IT departments operate. With some SAP users running up to 700,000 batch jobs a day, transitioning away from environments built around rapid-response, on-line transaction processing working in tandem with high-volume, heavy-duty batch updates and reporting will require significant planning and execution.
Of course, any organization considering this exercise must be prepared to provision all the software and hardware required to support the journey.
While enterprises that are wall-to-wall SAP shops might reap benefits from abandoning batch, you have to wonder how organizations with even the slightest degree of heterogeneity might fare. A manufacturer that relies on a third-party warehouse management system alongside SAP technology will probably still need to run a batch process if customer orders are to be shipped.
For better or worse, and often this is a consequence of today's innovation becoming tomorrow's legacy – think CRM, or a company merger – most organizations have to wrangle with multiple in-house and packaged IT applications to support their daily business operations. Many business executives rely on corporate performance data to make informed decisions. Much of their input comes from financial accounting, sales analysis, inventory control and other reports detailing overall company activity.
Generating these reports, especially around operational spikes such as end-of-quarter, requires significant number crunching, which has historically been done by running batch jobs. Trying to leverage in-memory database technology for this workload would mean that large portions of entire databases need to reside in physical memory – making batch processing, which might previously have been viewed as cost-efficient, a premium service to run. The SAPPHIRE announcement does, however, draw attention to a changing role for batch in IT workload automation.
Advances in computing power, network performance, and database and application technology have enabled much more to be done on-line – be it transaction processing or real-time analytics. Run times for traditional high-volume batch workloads, such as month-end F&A reporting, have come down dramatically. At the same time, however, increasing globalization and growing IT complexity have resulted in additional back-end IT processes that need to be performed to drive business operations. This latter requirement is addressed not only by running batch requests but also through the sequencing of the multiple steps required to perform these back-end processes.
Many of these processes are driven by manual submissions, file transfers and the use of disparate technologies. Removing manual interventions, orchestrating business application requests and eliminating unnecessary latency between steps become critical if swift and accurate execution of business operations is to be assured. So, while most high-volume jobs remain best suited to running in batch, another role for batch is emerging, with the automation of batch processing offering significant advantages – especially in any environment where there are heterogeneous systems or activities driven by external events.
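The sequencing described above can be sketched in miniature. The example below is a hypothetical illustration, not any vendor's scheduler: the job names (extract_orders, reconcile, and so on) and their dependencies are invented, and the "run" step is a placeholder for submitting work to a real workload automation tool. It uses Python's standard-library topological sorter to ensure each step starts only after its prerequisites complete, removing the manual hand-offs between steps.

```python
# Minimal sketch of dependency-ordered batch orchestration.
# All job names and dependencies here are illustrative assumptions.
from graphlib import TopologicalSorter

# Map each job to the set of jobs it depends on.
dependencies = {
    "extract_orders": set(),
    "extract_inventory": set(),
    "reconcile": {"extract_orders", "extract_inventory"},
    "generate_report": {"reconcile"},
}

def run_job(name: str) -> None:
    # Placeholder: in practice this would submit the job to a scheduler
    # and wait for (or be notified of) its completion.
    print(f"running {name}")

# static_order() yields each job only after everything it depends on,
# so the report is never generated before reconciliation finishes.
for job in TopologicalSorter(dependencies).static_order():
    run_job(job)
```

A real orchestration layer would add failure handling, retries and event triggers, but the core idea is the same: encode the step dependencies once, and let the tool drive execution order instead of relying on manual submissions.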
Clearly, the role of batch processing has changed. But are rumours of its impending demise greatly exaggerated?