In-Memory Computing

When engineers designed disk-based databases in the 1980s, they focused on the communication between memory and disk, aiming to make it as fast as possible to move data out of main memory and write it to disk. To reach this goal, they built their on-disk data structures around the unit of that communication – the block. The new reality of systems with large amounts of main memory enabled SAP engineers to think differently: SAP HANA's engineers focused on what is now the scarce resource – data transfer between memory and CPU – and chose data structures better suited to this kind of communication.
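One data structure commonly associated with this memory-to-CPU focus is a column-oriented layout: scanning a single attribute then touches contiguous memory, so CPU cache lines carry only useful bytes. The following is a minimal sketch of the idea in Python, not HANA's actual implementation; the table and field names are invented for illustration.

```python
# Illustrative sketch: the same table stored row-wise and column-wise.
# Field names ("id", "price", "qty") are hypothetical.

rows = [{"id": i, "price": i * 1.5, "qty": i % 7} for i in range(1000)]

# Row store: each record is contiguous; scanning one attribute
# drags every other attribute of the record along with it.
row_total = sum(r["price"] for r in rows)

# Column store: each attribute is contiguous, so a scan over one
# column reads only the data it actually needs -- a layout suited
# to the memory-to-CPU transfer path rather than the disk block.
columns = {
    "id":    [r["id"] for r in rows],
    "price": [r["price"] for r in rows],
    "qty":   [r["qty"] for r in rows],
}
col_total = sum(columns["price"])

assert row_total == col_total
```

In a real column store the columns would additionally be compressed (for example dictionary-encoded), which shrinks the amount of data moved between memory and CPU even further.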

This new technology leverages the hardware to crunch through massive amounts of data in almost no time. Large data volumes can be held in memory and processed on demand. The compute power of these new CPUs is capable of working with all of the detailed data coming from business applications, without preprocessing or transforming the data in a load routine – the foundation of real-time computing.

Because the cores doing the calculation work are full-blown processors, they can also execute very complex workloads – for example, running forecasts on business data, solving optimization problems, or executing entire parts of the business logic that used to run in applications. This ability to execute business logic directly on the data improves the responsiveness of applications. If data is held in memory, it takes much longer to move the data to another server and execute some logic on it there than it takes to transfer the logic to the data. Delegating complex or data-intensive procedures to in-memory computing opens up the potential for optimizing the run-time behavior of applications well beyond the pure database query times.
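The "ship the logic to the data" trade-off can be illustrated with a toy example, here using Python's built-in SQLite as a stand-in for an in-memory database; the table and column names are made up. The first variant moves every row to the application before aggregating; the second pushes the aggregation into the data tier, so only one result row crosses the interface.

```python
import sqlite3

# SQLite in-memory database as a stand-in; the schema is hypothetical.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, amount REAL)")
con.executemany("INSERT INTO sales VALUES (?, ?)",
                [("EMEA", 120.0), ("EMEA", 80.0), ("APJ", 50.0)])

# Moving data to the logic: every row is transferred to the
# application, which then filters and aggregates.
rows = con.execute("SELECT region, amount FROM sales").fetchall()
app_side = sum(amount for region, amount in rows if region == "EMEA")

# Moving logic to the data: the filter and aggregation run where
# the data lives, and only the single result travels back.
(db_side,) = con.execute(
    "SELECT SUM(amount) FROM sales WHERE region = 'EMEA'").fetchone()

assert app_side == db_side == 200.0
```

With three rows the difference is invisible; with billions of rows held in memory, the second variant avoids moving the bulk of the data at all.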

These capabilities provide the basis for overcoming the divide between online transaction processing (OLTP) and online analytic processing (OLAP). Applications write their transactional data into memory, where it is used directly in business transactions and, at the same time, to gain insights for analysis and prediction. Applications will be able to run complex transactions and analyses on massive amounts of data while delivering immediate results to mobile and other devices. In addition, removing the separate analytic, predictive, and search environments, together with the design of new “extreme applications” running inside the data tier, has the potential to significantly reduce the complexity of enterprise IT landscapes.
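The point about combining OLTP and OLAP can be sketched in the same toy setting (SQLite again standing in for the in-memory data tier; the schema is invented): a transactional write and an analytic aggregate operate on the same table, with no extract-and-load step in between.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (customer TEXT, amount REAL)")

# OLTP: a transactional write lands directly in the shared table.
con.execute("INSERT INTO orders VALUES ('ACME', 100.0)")
con.commit()

# OLAP: an analytic query sees the new order immediately --
# no separate warehouse, no nightly load routine in between.
(total,) = con.execute("SELECT SUM(amount) FROM orders").fetchone()
assert total == 100.0
```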

Add to this the reduced complexity of the software itself, and applications become easier to deploy and maintain, and faster to adapt to changes in business and to new regulatory requirements. This new era in computing promises to remove constraints we have grown used to working with in the past. Business processes that had to be defined along technological constraints can now be re-thought and re-focused on the actual business needs.