SLT and BODS do not always make things easier

In this blog post, I want to show how easy it can be to tap into ERP data from SAP: find the relevant data; read, filter, enrich, and contextualize it; and finally store it natively in HIVE for further data processing.

You may appreciate this run-through especially if you have been performing the same tasks with SAP’s out-of-the-box solutions, such as a combination of SLT and BODS. In their own right, these SAP solutions are extremely powerful, and BODS in particular has a wide scope. However, for integration between SAP’s business solutions, where you need to unravel the mysteries of SAP’s ERP data model on top of performing delta capture and running ETL processes, we feel that a solution built on SLT and BODS tends to be overcomplicated.

An alternative solution

In order to enrich a data lake (e.g. HIVE tables on Hadoop) using Datavard Glue, you need to perform a one-time setup of Datavard’s storage management component. This is similar to creating an SAP RFC connection: you let Glue know which storage technology lives where.

Next, you can use Glue’s BPL (Business Process Library) to identify relevant SAP tables. Or, if you know which data you want, you can simply go for the tables directly. The beauty of this solution is that you will find plenty of content in the BPL above and beyond information on SAP tables. For example, you can automatically translate SAP fields to friendly fields, and using the BPL Business Functions you can implement lookups for data contextualization without the need for any coding.
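To make the “friendly fields” idea concrete, here is a minimal Python sketch of what such a field-name translation does. The mapping below is an illustrative assumption based on a few well-known BSEG field names; Glue’s actual BPL content is far more complete.

```python
# Illustrative only: a few well-known technical field names from the SAP
# FI line item table (BSEG), mapped to human-readable ("friendly") names.
# Datavard Glue's BPL ships its own mapping; this dict is an assumption
# made purely for demonstration.
FRIENDLY_NAMES = {
    "BUKRS": "COMPANY_CODE",
    "BELNR": "DOCUMENT_NUMBER",
    "GJAHR": "FISCAL_YEAR",
    "BUZEI": "LINE_ITEM",
    "DMBTR": "AMOUNT_LOCAL_CURRENCY",
}

def to_friendly(record: dict) -> dict:
    """Rename technical SAP field names to friendly ones; keep unknown fields as-is."""
    return {FRIENDLY_NAMES.get(field, field): value for field, value in record.items()}

row = {"BUKRS": "1000", "BELNR": "4900000001", "GJAHR": "2018", "DMBTR": "125.00"}
friendly_row = to_friendly(row)
```

The resulting HIVE table then carries self-explanatory column names instead of cryptic four- or five-character SAP abbreviations.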

The solution is very lightweight, and since it is implemented in SAP’s own ABAP technology, it integrates natively. This has major advantages for both agility and security. I will discuss security in the next blog post of this series. As for agility, you can literally create a data model for your business data on Hadoop (e.g. for financial documents), set up a data flow, and run that data flow within minutes.

The individual steps for this are:

  1. Optional: use the Glue BPL (Business Process Library) to identify the relevant table you want to tap into.
  2. For transactional data, my recommendation is to use the line item table (e.g. BSEG) as a template and create this table on Hadoop using the Glue BPL “friendly fields” option.
  3. Add fields from the header table you may need (e.g. the posting date, user, document type, etc. from BKPF).
  4. Create an extractor with a 1:1 field mapping between SAP and Hadoop. For the fields from the header table, you can easily use the BPL functions in the extractor to populate the additional fields.
  5. Create & execute a variant for your extractor.
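Conceptually, the data flow built in steps 2–4 behaves like the following Python sketch: line item fields are copied 1:1, and each record is enriched with fields looked up from the header table. The table contents and target field names are illustrative assumptions; Glue performs this inside SAP and writes the result to HIVE.

```python
# Toy header table (BKPF), keyed by company code, document number, fiscal year.
# Real BKPF holds the posting date (BUDAT), user (USNAM), document type (BLART), etc.
BKPF = {
    ("1000", "4900000001", "2018"): {"BUDAT": "2018-03-31", "USNAM": "JSMITH", "BLART": "SA"},
}

# Toy line item table (BSEG); field contents are made up for illustration.
BSEG = [
    {"BUKRS": "1000", "BELNR": "4900000001", "GJAHR": "2018", "BUZEI": "001", "DMBTR": "125.00"},
]

def extract():
    """Sketch of steps 3 and 4: 1:1 field mapping plus header enrichment."""
    out = []
    for item in BSEG:
        record = dict(item)  # step 4: 1:1 mapping of the line item fields
        header = BKPF[(item["BUKRS"], item["BELNR"], item["GJAHR"])]
        record.update({      # step 3: additional fields from the header table
            "POSTINGDATE": header["BUDAT"],
            "USERNAME": header["USNAM"],
            "DOCUMENTTYPE": header["BLART"],
        })
        out.append(record)
    return out

rows = extract()
```

In Glue, the header lookup in the middle is exactly what the BPL Business Functions provide out of the box, so no equivalent of this code has to be written by hand.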

These steps don’t take much time or effort. The trick is to leverage the Glue BPL, along with the SAP DDIC, to unlock the SAP data. This screenshot shows an example of how a HIVE table with SAP Finance data can look on Hadoop when the BPL “friendly fields” function is used:

To populate this table, we simply use a 1:1 field mapping and fill the additional fields using BPL Business Functions, e.g. “POSTINGDATE_OF_FIDOC”. These functions can easily be identified through the business function search help.

In a nutshell, these functions work like lookups, with one additional benefit: anybody who is comfortable with calculation functions in a tool like MS Excel can use them. This way, you can implement lookups into other tables – without knowing the table names and without knowing how to code in ABAP!
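As a rough spreadsheet analogy, a business function behaves like a VLOOKUP: you supply key values and get back a single field from another table. The sketch below is purely hypothetical (table contents, helper name, and signature are all assumptions); it only illustrates the lookup semantics, not Glue’s actual implementation.

```python
# Stand-in for a document header table, keyed by (BUKRS, BELNR, GJAHR).
# Contents are invented for illustration.
HEADERS = {
    ("1000", "4900000001", "2018"): {"POSTING_DATE": "2018-03-31", "USERNAME": "JSMITH"},
}

def lookup(table, key_fields, return_field, **keys):
    """Generic VLOOKUP-style helper: fetch one field from a keyed table."""
    key = tuple(keys[field] for field in key_fields)
    return table[key][return_field]

# A call like "POSTINGDATE_OF_FIDOC" then boils down to roughly this:
posting_date = lookup(
    HEADERS, ("BUKRS", "BELNR", "GJAHR"), "POSTING_DATE",
    BUKRS="1000", BELNR="4900000001", GJAHR="2018",
)
```

The point of the BPL is that the user never writes even this much: the key fields and the target field are selected from the search help, and the function does the rest.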
