The single most important thing about SAP data management
Keep system growth under control. Improve performance for your most critical data. Reduce the total cost of ownership. These are the main selling points we hear in every data management discussion. It sounds good, and it makes sense. Yet data management (or archiving, in older terminology) always plays second fiddle. I think that’s fair and that it should stay this way. Let’s explain why, and at the same time propose how to tune this “melody” so we can actually enjoy it.
It’s all about priorities
A BW team faces plenty of tasks every day: daily maintenance, adjusting processes as technical objects evolve, building new queries and reports in various reporting tools, ABAP development of functions in reports and routines, and so on. All of these activities should serve one goal: fulfilling business requirements. That is the most important function of a BW team and of the warehouse in general.
On top of that, while gathering and refining business requirements to build a quality data warehouse, the BW team must think about costs. A simple rule in life is that we keep and invest in things that generate value, where the return in benefits is greater than the resources spent. I’m not saying that BW systems aren’t worth it; quite the opposite. But the ratio between cost and generated value deteriorates over time. Diagram #1 demonstrates this simply: the amount of old data grows faster than the amount of reporting-relevant data.
This is especially relevant with the trend toward in-memory databases such as SAP HANA, where every gigabyte is expensive and should be spent effectively.
The general recommendation is to categorize your data into two or three classes based on importance and usage. Keep the reporting-critical data in your HANA DB for the best performance. Historical data, on the other hand, should go to storage where availability and cost matter more than performance. This is shown in diagram #2.
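As a rough illustration of this categorization (not an SAP API; the tier names and age thresholds are assumptions for the sketch), the decision can be expressed as a simple age-based rule:

```python
from datetime import date

# Hypothetical thresholds -- the right cut-offs depend on your reporting needs.
WARM_AFTER_DAYS = 365       # older than 1 year: move out of hot (HANA) storage
COLD_AFTER_DAYS = 3 * 365   # older than 3 years: cheap cold/archive storage

def classify_partition(last_used, today=None):
    """Assign a temperature tier based on how recently a partition was used."""
    today = today or date.today()
    age_days = (today - last_used).days
    if age_days <= WARM_AFTER_DAYS:
        return "hot"    # keep in HANA for best query performance
    if age_days <= COLD_AFTER_DAYS:
        return "warm"   # cheaper tier, still reasonably fast
    return "cold"       # availability and cost beat performance here
```

In practice the rule would also weigh business importance (e.g. legal retention or year-end reporting), not just age, but age is usually the dominant factor.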
The only way this can work is if we automate the process. BW is built for automation, and it works because of it; just think of batch processes and process chains. Once the initial development and setup are done, most of the crucial processes in BW run automatically. No BW team could monitor and run hundreds of activities by hand. Why not use automation in data management, too, if it is already a proven methodology?
It has already started
Automation for data management is already available today. SAP introduced Data Tiering Optimization (DTO) with the release of BW/4HANA. It is based on Advanced DataStore Object partitions: a partition “temperature” is set as a local setting, and a periodic job moves the data to the defined storage. There is no need to create a Data Archiving Process (DAP) anymore or to define archiving requests. DTO is a great thing, but the storage selection for cold data is limited, and it still suffers from many of the restrictions of its predecessor, near-line storage (NLS).
Another option for smart, automated data tiering (available for classic BW as well, not just BW/4HANA) is Datavard OutBoard. It takes a somewhat different approach, based on more than ten years of archiving experience and best practices. To have something fully automated, we need to remove all the restrictions; otherwise it is 10% automation and 90% exception handling.
OutBoard has features that turn this ratio around. With automated straggler management and automated adjustment of dependent objects such as lookups and queries, it can really ease the daily tasks of a BW team. And it doesn’t stop there: by defining data management rules, it can regularly scan your system and propose new objects suitable for archiving, based on their size and usage.
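A size-and-usage rule of that kind could look like the following sketch. The object names, statistics, and thresholds are invented for illustration; a real tool would read this information from the system’s own usage statistics rather than a hard-coded list:

```python
# Invented usage statistics; a real tool would collect these from the system.
objects = [
    {"name": "ZSALES_2015", "size_gb": 120.0, "queries_last_90d": 0},
    {"name": "ZSALES_2024", "size_gb": 80.0,  "queries_last_90d": 450},
    {"name": "ZFIN_OLD",    "size_gb": 35.0,  "queries_last_90d": 2},
]

# Example rule: large and rarely queried -> propose for archiving.
MIN_SIZE_GB = 30.0
MAX_RECENT_QUERIES = 5

proposals = [
    o["name"]
    for o in objects
    if o["size_gb"] >= MIN_SIZE_GB and o["queries_last_90d"] <= MAX_RECENT_QUERIES
]
print(proposals)  # -> ['ZSALES_2015', 'ZFIN_OLD']
```

The point of such rules is that the proposal step runs unattended; a human only reviews the candidate list instead of hunting for archiving candidates manually.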
Now look at diagram #2 again: this type of automation is what makes the model work. I believe automation is the only way data management in BW should be done, so that it is actually effective, requires low effort, and doesn’t shift attention away from more important topics.