SAP BW Interview Questions & Answers

50+ SAP BW Interview Questions

Explain the architecture of the SAP BW system and its components?

Answer

  • OLAP processor
  • Metadata repository
  • Process designer and other functions
  • Business Explorer (BEx) − a reporting and analysis tool that supports query, analysis, and reporting functions in BI. Using BEx, you can analyze historical and current data at different levels of detail.

What data sources have you used to acquire data in the SAP BW system?

Answer

  • SAP systems (SAP Applications/SAP ECC)
  • Relational Database (Oracle, SQL Server, etc.)
  • Flat files (Excel, CSV/text files)
  • Multidimensional Source systems (Universe using UDI connector)
  • Web Services that transfer data to BI by means of push

When you are using SAP BI 7.x, to which component can you load the data?

Answer

In BW 3.5, you could load data from the source system into the Persistent Staging Area (PSA) and also directly into data targets. In SAP BI 7.0 and later versions, data loads from the source system should be restricted to the PSA only.

What is an InfoPackage?

Answer

An InfoPackage specifies how and when to load data into the BI system from different data sources. It contains all the information about how data is loaded from the source system into a DataSource or the PSA, including the selection conditions for requesting data from the source system.

Note that with an InfoPackage in BW 3.5 you could load data into the Persistent Staging Area and also into data targets from the source system, but in SAP BI 7.0 and later versions the data load should be restricted to the PSA only.

What is extended Star schema? Which of the tables are inside and outside cube in an extended star schema?

Answer

In the extended star schema, the fact table is connected to dimension tables, the dimension tables are connected to SID tables, and the SID tables are connected to the master data tables. The fact and dimension tables are inside the cube, whereas the SID and master data tables are outside the cube. When you load transactional data into an InfoCube, DIM IDs are generated based on the SIDs, and these DIM IDs are used in the fact table.
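
The table layout is easier to picture with a tiny sketch. The following Python snippet is purely illustrative (the material MAT001, the SID/DIM ID values, and the table structures are made up, not real BW-generated tables); it shows how a fact row refers to a dimension table via a DIM ID, which in turn points to SIDs and master data outside the cube.

```python
# Minimal sketch of the extended star schema, assuming a single
# "Material" characteristic. Names and values are illustrative only.

# Master data table (outside the cube): characteristic value -> attributes
master_data = {"MAT001": {"material_group": "PUMPS"}}

# SID table (outside the cube): characteristic value -> surrogate ID
sid_table = {"MAT001": 1001}

# Dimension table (inside the cube): DIM ID -> SIDs of its characteristics
dimension_table = {1: {"material_sid": 1001}}

# Fact table (inside the cube): DIM IDs + key figures
fact_table = [{"dim_id": 1, "revenue": 2500.0}]

# Resolving a fact row back to readable master data mimics what the
# OLAP processor does when a query drills down on a characteristic.
for row in fact_table:
    sids = dimension_table[row["dim_id"]]
    material = next(k for k, v in sid_table.items() if v == sids["material_sid"])
    print(material, master_data[material]["material_group"], row["revenue"])
```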

How is the extended star schema different from the star schema?

Answer

  • In the extended star schema, one fact table can connect to up to 16 dimension tables, and each dimension table can be assigned a maximum of 248 SID tables. The SID tables represent characteristics, and each characteristic can have master data tables such as ATTR, Text, etc.
  • In the classic star schema, each dimension is joined to a single fact table. Each dimension is represented by one dimension table and is not further normalized.
  • A dimension table contains the set of attributes that is used to analyze the data.

What is an InfoObject and why is it used in SAP BI?

Answer

InfoObjects are the smallest units in SAP BI and are used in InfoProviders such as InfoCubes, DSOs, MultiProviders, etc. Each InfoProvider contains multiple InfoObjects.

InfoObjects are used in reports to analyze the stored data and to provide information to decision-makers.

What are the different categories of InfoObjects in the BW system?

Answer

InfoObjects can be categorized into the following categories −

  • Characteristics like Customer, Product, etc.
  • Units like quantity sold, currency, etc.
  • Key Figures like Total Revenue, Profit, etc.
  • Time characteristics like Year, quarter, etc.

What is the use of an InfoArea in the SAP BW system?

Answer

An InfoArea in SAP BI is used to group similar types of objects together. InfoAreas are used to manage InfoCubes and InfoObjects. Each InfoObject resides in an InfoArea, and you can think of an InfoArea as a folder that holds related objects together.

How do you access source system data in BI without extraction?

Answer

You can access source system data directly in BI, without extraction, by using VirtualProviders. VirtualProviders are InfoProviders in which no transactional data is stored; they allow only read access to the data.

What are the different types of Virtual providers?

Answer

  1. VirtualProviders based on DTP
  2. VirtualProviders with function modules
  3. VirtualProviders based on BAPIs

In which data extraction scenarios are VirtualProviders used?

Answer

VirtualProviders based on DTP −

This type of VirtualProvider is based on a DataSource or an InfoProvider and takes over the characteristics and key figures of the source. The same extractors are used to select data in the source system as when you replicate data into the BI system.

When should you use VirtualProviders based on DTP?

Answer

Use them when only a small amount of data is accessed, when you need up-to-date data from an SAP source system, and when only a few users execute queries simultaneously on the database.

Virtual Provider with Function Module −

This VirtualProvider is used to display data from non-BI data sources in BI without copying the data into the BI structures. The data can be local or remote. This is used primarily for SEM applications.

What is the use of a transformation and how is the mapping done in BW?

Answer

The transformation process is used to perform data consolidation, cleansing, and integration. When data is loaded from one BI object into another BI object, a transformation is applied to the data. The transformation converts the fields of the source into the format of the target object.

  • Transformation rules −

Transformation rules are used to map source fields and target fields. Different rule types can be used for transformation.
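
As an illustration of rule types only (the real rules are maintained in the BW transformation editor, not in Python), the sketch below maps a hypothetical source record to a target format using direct assignment, a constant, and a simple formula; all field names are invented.

```python
# Hypothetical transformation: maps a source record to the target format.
# Rule types shown: direct assignment, constant, and formula.

def transform(source_record):
    return {
        "CUSTOMER": source_record["KUNNR"],                           # direct assignment
        "CURRENCY": "EUR",                                            # constant
        "NET_VALUE": source_record["GROSS"] - source_record["TAX"],   # formula
    }

source = {"KUNNR": "C-100", "GROSS": 119.0, "TAX": 19.0}
print(transform(source))  # {'CUSTOMER': 'C-100', 'CURRENCY': 'EUR', 'NET_VALUE': 100.0}
```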

How do you perform real-time data acquisition in the BW system?

Answer

In real-time data acquisition, data is sent to the delta queue or the PSA table in real time.

  • Real-time data acquisition can be achieved in two scenarios −

By using an InfoPackage for real-time data acquisition via the Service API, or by using a Web Service to load data into the Persistent Staging Area (PSA) and then a real-time DTP to move the data into a DSO.

  • Real-time Data Acquisition Background Process −

To process data with the InfoPackage and the data transfer process (DTP) at regular intervals, you can use a background process known as a daemon. The daemon gets all the information from the InfoPackage and DTP about which data is to be transferred and which PSA and DataStore objects are to be loaded with data.
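
Conceptually, the daemon behaves like a scheduled loop that checks for waiting requests and pushes them on. The snippet below is a simplified, hypothetical model of that behaviour, not the actual RDA daemon; the queue contents and interval are made up.

```python
import time

# Hypothetical queues standing in for the delta queue / PSA and the target DSO.
psa_queue = [{"request": "REQ1", "rows": 120}, {"request": "REQ2", "rows": 80}]
dso = []

def daemon_cycle():
    """One daemon interval: move everything waiting in the PSA into the DSO."""
    while psa_queue:
        request = psa_queue.pop(0)
        dso.append(request)          # real-time DTP step (simplified)
        print(f"Transferred {request['request']} ({request['rows']} rows)")

# The real daemon runs at a configured interval; here we run two cycles.
for _ in range(2):
    daemon_cycle()
    time.sleep(0.1)  # stands in for the daemon's scheduling interval
```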

What is the InfoObject catalogue?

Answer

InfoObjects are created in an InfoObject catalog. An InfoObject can be assigned to more than one InfoObject catalog.

What is the use of DSO in the BW system? What kind of data is stored in DSOs?

Answer

A DSO is a storage location for cleansed and consolidated transaction or master data at the lowest level of granularity, and this data can be analyzed using a BEx query.

A DataStore object contains key figures and characteristic fields, and the data from a DSO can be updated using a delta update into InfoCubes, other DataStore objects, or master data. DataStore objects are stored in two-dimensional transparent database tables.

What are the different components in DSO architecture?

Answer

A standard DSO consists of three tables −

  • Activation Queue −

This table is used to store the data before it is activated. Its key consists of the request ID, package ID, and record number. Once activation is complete, the request is deleted from the activation queue.

  • Active Data Table −

This table is used to store the current active data, and it contains the semantic key defined for data modeling.

  • Change Log −

When you activate the object, the changes to the active data are stored in the change log. The change log is a PSA table and is maintained in the Administration Workbench under the PSA tree.
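
The interplay of the three tables can be sketched as follows. This is a conceptual model with invented keys and fields (and simplified before/after image handling), not the real activation program.

```python
# Conceptual model of standard DSO activation (illustrative keys and fields).
activation_queue = [
    {"request": "REQ1", "package": 1, "record": 1, "order_id": "4711", "amount": 100.0},
]
active_data = {"4711": {"order_id": "4711", "amount": 80.0}}  # semantic key: order_id
change_log = []

def activate():
    while activation_queue:
        new = activation_queue.pop(0)        # request leaves the activation queue
        key = new["order_id"]
        old = active_data.get(key)
        if old:
            # before/after images are what later enable delta updates
            change_log.append({**old, "recordmode": "X"})   # before image (sign handling simplified)
        change_log.append({"order_id": key, "amount": new["amount"], "recordmode": ""})
        active_data[key] = {"order_id": key, "amount": new["amount"]}

activate()
print(active_data)   # {'4711': {'order_id': '4711', 'amount': 100.0}}
print(change_log)    # before image (80.0) and after image (100.0)
```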

To access data for reporting and analysis immediately after it is loaded, which Datastore object is used?

Answer

A DataStore object for direct update allows you to access data for reporting and analysis immediately after it is loaded. It differs from the standard DSO in the way it processes data: data is stored in the same format in which it was written to the DataStore object for direct update by the application.

Explain the structure of direct update DSOs?

Answer

A direct update DSO consists of only one table for active data; there is no activation queue and no change log. Data is retrieved from external systems using APIs.

The following APIs exist −

  • RSDRI_ODSO_INSERT: used to insert new data.
  • RSDRI_ODSO_INSERT_RFC: similar to RSDRI_ODSO_INSERT; can be called remotely.
  • RSDRI_ODSO_MODIFY: used to insert data with new keys; for data with keys already in the system, the data is changed.
  • RSDRI_ODSO_MODIFY_RFC: similar to RSDRI_ODSO_MODIFY; can be called remotely.
  • RSDRI_ODSO_UPDATE: used to update existing data.
  • RSDRI_ODSO_UPDATE_RFC: similar to RSDRI_ODSO_UPDATE; can be called remotely.
  • RSDRI_ODSO_DELETE_RFC: used to delete data.
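
The semantic differences between these calls (insert of new keys only, modify as insert-or-change, update of existing keys only, delete) can be shown with a simple key-value model. The helpers below only illustrate those semantics in Python; they are not the actual RSDRI function module interfaces or parameters.

```python
# Conceptual model of a direct update DSO's active table (key -> record).
# These helpers only illustrate the semantics; they are not the RSDRI APIs.
active = {}

def insert(key, record):      # like RSDRI_ODSO_INSERT: new keys only
    if key in active:
        raise KeyError(f"{key} already exists")
    active[key] = record

def modify(key, record):      # like RSDRI_ODSO_MODIFY: insert or change
    active[key] = record

def update(key, record):      # like RSDRI_ODSO_UPDATE: existing keys only
    if key not in active:
        raise KeyError(f"{key} does not exist")
    active[key].update(record)

def delete(key):              # like RSDRI_ODSO_DELETE_RFC (conceptually)
    active.pop(key, None)

insert("DOC1", {"amount": 10})
modify("DOC2", {"amount": 20})   # new key -> inserted
update("DOC1", {"amount": 15})   # existing key -> changed
delete("DOC2")
print(active)                    # {'DOC1': {'amount': 15}}
```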

Can we perform delta uploads in direct update DSOs?

Answer

No. Since the structure of this DSO contains only a table for active data and no change log, it does not allow delta updates to InfoProviders.

What are write-optimized DSOs?

Answer

In a write-optimized DSO, data that is loaded is available immediately for further processing.

Where do we use write-optimized DSOs?

Answer

A write-optimized DSO provides a temporary storage area for large sets of data if you are executing complex transformations on this data before it is written to the DataStore object. The data can then be updated to further InfoProviders. You only have to create the complex transformations once for all the data.

Write-optimized DataStore objects are used as the EDW layer for saving data. Business rules are only applied when the data is updated to additional InfoProviders.

Explain the structure of write-optimized DSOs? How is it different from standard DSOs?

Answer

It contains only a table of active data, and there is no need to activate the data as is required with a standard DSO. This allows you to process the data more quickly.

To perform a join on a dataset, what type of InfoProvider should be used?

Answer

InfoSets are a special type of InfoProvider whose data sources contain a join rule on DataStore objects, standard InfoCubes, or InfoObjects with master data characteristics. InfoSets are used to join data, and that joined data is then used in the BI system.

What is a temporal join?

Answer

Temporal joins are used to map a period of time. At the time of reporting, other InfoProviders handle time-dependent master data in such a way that the record that is valid for a predefined, unique key date is used each time. A temporal join must contain at least one time-dependent characteristic or a pseudo time-dependent InfoProvider.
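
The key-date logic can be illustrated with a small example. The validity intervals, field names, and data below are invented for illustration; the point is that the master data record picked for each fact depends on the key date.

```python
from datetime import date

# Time-dependent master data: each record carries a validity interval.
sales_rep_assignments = [
    {"customer": "C1", "rep": "Smith", "valid_from": date(2020, 1, 1), "valid_to": date(2020, 6, 30)},
    {"customer": "C1", "rep": "Jones", "valid_from": date(2020, 7, 1), "valid_to": date(9999, 12, 31)},
]
transactions = [{"customer": "C1", "revenue": 500.0}]

def temporal_join(facts, assignments, key_date):
    """Join each fact with the master data record valid on the key date."""
    joined = []
    for fact in facts:
        for a in assignments:
            if a["customer"] == fact["customer"] and a["valid_from"] <= key_date <= a["valid_to"]:
                joined.append({**fact, "rep": a["rep"]})
    return joined

print(temporal_join(transactions, sales_rep_assignments, date(2020, 3, 15)))  # rep: Smith
print(temporal_join(transactions, sales_rep_assignments, date(2020, 9, 1)))   # rep: Jones
```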

Where do we use InfoSet in BI system?

Answer

InfoSets are used to analyze data in multiple InfoProviders by combining master data characteristics, DataStore objects, and InfoCubes.

You can use a temporal join with an InfoSet to specify a particular point in time at which you want to evaluate the data.

Using an InfoSet, you can also report with the Business Explorer (BEx) on a DSO without enabling the BEx indicator.

What are the different types of InfoSet joins?

Answer

  • Inner Join
  • Left Outer Join
  • Temporal Join
  • Self Join

What is the use of InfoCube in BW system?

Answer

An InfoCube is defined as a multidimensional dataset that is used for analysis in a BEx query. An InfoCube consists of a set of relational tables that are logically joined to implement the star schema. A fact table in the star schema is joined with multiple dimension tables.

You can add data from one or more InfoSources or InfoProviders to an InfoCube. InfoCubes are available as InfoProviders for analysis and reporting purposes.

What is the structure of InfoCube?

Answer

An InfoCube is used to store data physically. It consists of a number of InfoObjects that are filled with data from staging. It has the structure of a star schema.

In SAP BI, an InfoCube is based on the extended star schema.

An InfoCube consists of a fact table that is surrounded by up to 16 dimension tables, with the master data lying outside the cube.

What is the use of real-time InfoCube? How do you enter data in real-time InfoCubes?

Answer

Real-time InfoCubes are used to support parallel write access. Real-time InfoCubes are used in connection with the entry of planning data.

You can enter data into real-time InfoCubes in two different ways −

  • Transactions for entering planning data
  • BI staging

How do you create a real-time InfoCube in administrator workbench?

Answer

A real-time InfoCube can be created using the Real-Time Indicator checkbox.

Can you make an InfoObject an InfoProvider, and why?

Answer

Yes. When you want to report on characteristics or master data, you can make them InfoProviders.

Is it possible to convert a standard InfoCube to real-time InfoCube?

Answer

Yes. To convert a standard InfoCube to a real-time InfoCube, you have two options −

  • Conversion with loss of transactional data
  • Conversion with retention of transactional data

Can you convert an InfoPackage group into a Process chain?

Answer

Yes. Double-click the InfoPackage group, choose the Process Chain Maintenance button, and type in a name and description.

When you define aggregates, what are the available options?

Answer

  • H − Hierarchy
  • F − Fixed value
  • Blank

Can you set up InfoObjects as VirtualProviders?

Answer

Yes.

To perform a Union operation on InfoProviders, which InfoProvider is used?

Answer

A MultiProvider. A MultiProvider combines data from several InfoProviders using a union operation.
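
A union (in contrast to the join an InfoSet performs) simply stacks the rows of the underlying providers, and characteristics that a provider does not deliver stay initial. A minimal, hypothetical illustration:

```python
# Hypothetical union of two InfoProviders with partly different characteristics.
actuals = [{"region": "EMEA", "amount": 100.0}]                     # no 'version' characteristic
plan    = [{"region": "EMEA", "version": "PLAN1", "amount": 120.0}]

def multiprovider_union(*providers):
    """Union: concatenate rows; characteristics a provider lacks stay initial ('')."""
    all_fields = {f for p in providers for row in p for f in row}
    return [{f: row.get(f, "") for f in all_fields} for p in providers for row in p]

for row in multiprovider_union(actuals, plan):
    print(row)
```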

Explain the difference between an Operational DataStore (ODS), an InfoCube, and a MultiProvider?

Answer

ODS −

Provides granular data, allows overwrites, and stores data in transparent tables; ideal for drill-down and report-to-report interface (RRI).

InfoCube −

Uses the star schema; data can only be appended (no overwrite); ideal for primary reporting.

MultiProvider −

It does not contain physical data; it allows access to data from different InfoProviders via a union.

What do you understand by start and update routines?

Answer

  • Start Routines −

The start routine is run for each Data Package after the data has been written to the PSA and before the transfer rules have been executed. It allows complex computations for a key figure or a characteristic. It has no return value. Its purpose is to execute preliminary calculations and to store them in global Data Structures. This structure or table can be accessed in the other routines. The entire Data Package in the transfer structure format is used as a parameter for the routine.

  • Update Routines −

They are defined at the InfoObject level and are similar to start routines, but they are independent of the DataSource. We can use them to define global data and global checks.
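
A rough, non-ABAP sketch of the difference: the start routine sees the whole data package once, while an update routine runs per record for one target field. The data and rules below are invented for illustration; real routines are written in ABAP inside the transfer/update rules.

```python
# Illustrative only: the real routines are ABAP code in the transfer/update rules.
data_package = [
    {"order": "A1", "qty": 5, "status": "OPEN"},
    {"order": "A2", "qty": 0, "status": "CLOSED"},
]

def start_routine(package):
    """Runs once per data package: preliminary filtering/calculations."""
    return [rec for rec in package if rec["qty"] > 0]   # e.g. drop empty records

def update_routine_qty(record):
    """Runs per record for one target key figure/characteristic."""
    return record["qty"] * 2   # e.g. a simple unit conversion

filtered = start_routine(data_package)
print([update_routine_qty(rec) for rec in filtered])   # [10]
```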

What is the use of Rollup?

Answer

Rollup is used to load new data packages (requests) into the InfoCube aggregates. If we have not performed a rollup, the new InfoCube data will not be available in reporting on the aggregate.
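
Conceptually, a rollup re-aggregates only the newly loaded requests into the aggregate. A simplified illustration with invented data:

```python
from collections import defaultdict

# Aggregate grouped by region; only rolled-up requests are visible on the aggregate.
aggregate = defaultdict(float)

def rollup(aggregate, new_request):
    """Add the new request's rows into the aggregate (group by region)."""
    for row in new_request:
        aggregate[row["region"]] += row["revenue"]

request_1 = [{"region": "EMEA", "revenue": 100.0}, {"region": "APJ", "revenue": 50.0}]
request_2 = [{"region": "EMEA", "revenue": 25.0}]

rollup(aggregate, request_1)
# Until request_2 is rolled up, queries on the aggregate do not see its data.
rollup(aggregate, request_2)
print(dict(aggregate))   # {'EMEA': 125.0, 'APJ': 50.0}
```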

How can you achieve performance optimization in SAP Data Warehouse?

Answer

During loading, perform the steps in the following order −

First load the master data in the following order: First attributes, then texts, then hierarchies.

Load the master data first and then the transaction data. By doing this, you ensure that the SIDs are created before the transaction data is loaded and not while the transaction data is being loaded.

To optimize performance when loading and deleting data from the InfoCube −

  • Indexes
  • Aggregates
  • Line item and high Cardinality
  • Compression

To achieve good activation performance for DataStore objects, you should note the following points −

Creating SID Values

Generating SID values takes a long time and can be avoided in the following cases −

Do not set the ‘Generate SID values’ flag, if you only use the DataStore object as a data store. If you do set this flag, SIDs are created for all new characteristic values.

If you are using line items (document number or timestamp, for example) as characteristics in the DataStore object, set the flag in characteristic maintenance to show that they are “attribute only”.

What is Partition of an InfoCube?

Answer

It is the method of dividing a table for report optimization. SAP uses fact table partitioning to improve performance. We can partition only on 0CALMONTH or 0FISCPER. Table partitioning helps reports run faster because data is read only from the relevant partitions. Table maintenance also becomes easier.
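
The effect of partitioning on 0CALMONTH can be sketched as partition pruning: a query restricted to a month range only reads the matching partitions. This is a conceptual illustration in Python, not the database-specific partitioning itself; the data is made up.

```python
from collections import defaultdict

# Route fact rows into partitions keyed by 0CALMONTH (illustrative data).
partitions = defaultdict(list)
facts = [
    {"calmonth": "202301", "amount": 10.0},
    {"calmonth": "202302", "amount": 20.0},
    {"calmonth": "202303", "amount": 30.0},
]
for row in facts:
    partitions[row["calmonth"]].append(row)

def query(partitions, from_month, to_month):
    """Partition pruning: only partitions inside the month range are scanned."""
    relevant = [m for m in partitions if from_month <= m <= to_month]
    return sum(r["amount"] for m in relevant for r in partitions[m])

print(query(partitions, "202302", "202303"))   # 50.0 -- partition 202301 is skipped
```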

Explain the difference between InfoCube and ODS?

Answer

An InfoCube is structured as a star schema in which a fact table is surrounded by dimension tables that are linked via DIM IDs.

An ODS is a flat structure with no star schema concept; it holds granular (detailed-level) data and provides overwrite functionality.

What is the use of Navigational attributes?

Answer

A navigational attribute is used for drilling down in the report.

While loading data from flat files, what happens when separators are used inconsistently? How will this be read in the BI load?

Answer

If separators are used inconsistently in a CSV file, the incorrect separator is read as a character, both fields are merged into one field and may be truncated, and subsequent fields are then no longer in the correct order.
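
The effect is easy to reproduce with Python's csv module: if a file that actually uses semicolons is read with a comma delimiter, the fields are not split and each record collapses into a single column.

```python
import csv
import io

# A flat file whose separator is ';', read once correctly and once with the wrong delimiter.
flat_file = "1000;Smith;500.00\n1001;Jones;750.00\n"

correct = list(csv.reader(io.StringIO(flat_file), delimiter=";"))
wrong   = list(csv.reader(io.StringIO(flat_file), delimiter=","))

print(correct[0])  # ['1000', 'Smith', '500.00']  -> three separate fields
print(wrong[0])    # ['1000;Smith;500.00']        -> fields merged into one, order lost
```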

To load the data from a file source system, what is a requirement in the BI system?

Answer

Before you can transfer data from a file source system, the metadata must be available in BI in the form of a DataSource.

In SAP BW, is it possible for multiple DataSources to have one InfoSource?

Answer

Yes.

How is data stored in the PSA?

Answer

Data is stored in the form of PSA tables.

What is the use of DB Connect in SAP BW data acquisition?

Answer

DB Connect is used to define database connections in addition to the default connection, and these connections are used to transfer data into the BI system from tables or views.

To connect to an external database, you need the following −

  • Tools
  • Source Application knowledge
  • SQL syntax in Database
  • Database functions
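
As a stand-in for the idea only (not the actual DB Connect configuration), the snippet below reads rows from an external relational table and hands them over as flat records, the way a DB Connect DataSource exposes a table or view to BI. The sqlite3 in-memory database and the sales table are invented for the example.

```python
import sqlite3

# Hypothetical external database table standing in for a DB Connect source view.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (customer TEXT, calmonth TEXT, revenue REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("C1", "202301", 100.0), ("C2", "202301", 250.0)],
)

# Extraction step: read the table/view and pass flat records on (e.g. into the PSA).
cursor = conn.execute("SELECT customer, calmonth, revenue FROM sales")
columns = [d[0] for d in cursor.description]
psa_records = [dict(zip(columns, row)) for row in cursor.fetchall()]
print(psa_records)
conn.close()
```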

What is UD connect in the SAP BW system? How does it allow reporting in BI system?

Answer

Universal Data Connect (UD Connect) allows you to access relational and multidimensional data sources and transfer the data into BI in flat form. Multidimensional data is converted into a flat format when UD Connect is used for data transfer.

UD Connect uses the J2EE connector architecture to allow reporting on SAP and non-SAP data. Different BI Java connectors are available as resource adapters for various drivers and protocols −

  • BI ODBO Connector
  • BI JDBC Connector
  • BI SAP Query Connector
  • XMLA Connector