A customer loads 10 million records of cost center data into a data warehouse each week. The data is booked across eight different booking levels to prepare it for the final reporting. Each level builds on the previous one and persists its results once it has been calculated. In essence, this means the data is stored nine times before it is ready for reporting. The analysis covers all data of the current year, so by the end of each year up to a billion records are crunched to produce the results.
Due to highly optimized reporting processes with many pre-calculated aggregates, the reporting speed is acceptable, and the data load can be completed within the available load window. The main issue the customer faces is the effort required for the booking process (across all eight levels) whenever something changes. In this scenario, changes to the data model or booking rules require all data to be unloaded and then, after the change, reloaded and rebooked.
After the customer implemented SAP HANA, these issues were eliminated. The data model and booking rules are now completely virtual: any change to the analytic model is made without touching the physical data. The time previously lost to the unload and reload process now translates into measurable savings, and the customer can adopt even small changes to the cost center process whenever needed, with higher-quality results.
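The contrast between the two approaches can be sketched in a few lines of Python. This is an illustration only, not the customer's actual implementation: the level functions, record layout, and names are hypothetical stand-ins for the real booking rules. The point is structural: persisting every level yields nine stored copies (initial load plus eight levels), while composing the levels virtually stores only the source and computes the same result on demand.

```python
from functools import reduce

# Hypothetical booking level: each level is a transformation of the
# previous level's records. Here it merely tags each record; the real
# booking rules would allocate and aggregate costs.
def make_level(n):
    def book(records):
        return [record + [f"L{n}"] for record in records]
    return book

BOOKING_LEVELS = [make_level(n) for n in range(1, 9)]  # eight levels

def book_materialized(records, store):
    """Old approach: persist the initial load and the result of every
    level -- nine stored copies of the data in total."""
    store.append(records)          # initial load
    for level in BOOKING_LEVELS:
        records = level(records)
        store.append(records)      # persist each intermediate level
    return records

def book_virtual(records):
    """Virtual approach: compose all levels into one on-demand
    calculation; only the source data is ever stored."""
    return reduce(lambda recs, level: level(recs), BOOKING_LEVELS, records)

source = [["cc-100", 42.0]]
store = []
old_result = book_materialized(source, store)
new_result = book_virtual(source)

assert old_result == new_result    # identical reporting results
assert len(store) == 9             # but the old way stored 9 copies
```

A change to the booking rules in the virtual variant means replacing one function in `BOOKING_LEVELS`; in the materialized variant it means invalidating and rebuilding every persisted level above it.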
Additionally, with SAP HANA the data is stored only once, which dramatically improves both overall load times and reporting speed. As a side effect, the system size shrank considerably because the redundant data storage could be eliminated. This is a good example of a custom scenario built on SAP HANA's flexibility.