SAP BODS Expert Quiz

1) To enable a full push-down from the source to the target, which of the following features are used?

  1. Changing Array Fetch size & Rows per commit
  2. Running the dataflows in parallel
  3. Data_Transfer transform & Database links
  4. Linked datastores & caching data
Answer : C

2) Which transform is used to parse and format custom or person and firm data, as well as phone numbers, dates, e-mail addresses, and Social Security numbers?

  1. Associate Transform
  2. Geocoder
  3. Match
  4. Data Cleanse
Answer : D

3) A global variable is set to restrict the number of rows being returned by the Query transform. Which method can you use to ensure the value of the variable is set correctly?

  1. Add the variable to a script inside a print statement
  2. Click Validate All to see the values
  3. View the job monitor log for the variable value
  4. Initialize the variable in a script
Answer : A
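
A quick way to verify this in practice: inside a script, print() writes to the trace log, and square brackets inside the string substitute in an expression's value. A minimal sketch, assuming a hypothetical job-level global variable $G_Row_Limit:

  # $G_Row_Limit is a hypothetical global variable defined at the job level.
  # [expression] inside the string is replaced with its value in the trace log.
  print('Row limit is set to [$G_Row_Limit]');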

4) You are trying to improve the performance of a simple data flow that loads data from a source table into a staging area and only applies some simple remapping using a Query transform. The source database is located on the WAN. The network administrator has told you that you can improve performance if you reduce the number of round trips that occur between the Data Integrator job server and the source database. What can you do in your data flow to achieve this?

  1. Increase the array fetch size parameter in the source table editor
  2. Increase the commit size in the target table editor.
  3. Increase the commit size in the source table editor
  4. Replace the source table with the SQL transform
Answer : A

5) You have a data flow that reads multiple XML files from a directory by specifying a wildcard in the file name. Which method can you use to link the XML file name to the records being read?

  1. Select “include file name column” in the XML source file
  2. Use the function get_xml_file_name in the query mapping
  3. Use the column “XML_fileNAME” listed at the top of the XML file structure
  4. Use the variable $current_XML_file in the query
Answer : A

6) In which object can you use audit points and rules?

  1. Data flow
  2. Job
  3. Script
  4. Work flows
Answer : A

7) Which item is not included on the Operational Dashboards?

  1. Job Execution Statistics History
  2. Job Execution Duration History
  3. Job Schedule History
  4. View Audit Data
Answer : C

8) Which of these functions cannot be used in a script?

  1. sleep
  2. sysdate
  3. to_char
  4. Merge
Answer : D
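
The other three options are ordinary script functions; Merge is a transform that can only be placed inside a data flow. A minimal script sketch using the three callable ones:

  # Pause execution for 5 seconds (sleep takes milliseconds).
  sleep(5000);
  # sysdate() returns the current date; to_char() formats it as a string.
  print('Job started on ' || to_char(sysdate(), 'YYYY.MM.DD'));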

9) Which option is not available for selection when an audit rule fails?

  1. Raise Exception
  2. Email to List
  3. Script
  4. Email to the user

Answer : D



10) When is a template table created in the database?

  1. You create a template table on the Data Flow
  2. You execute a Job
  3. You right-click a template table and select “Import” to convert it into a permanent table
  4. You right-click and select “Create” on a template table
Answer : B

11) Which one of the following engines processes jobs on the application server?

  1. Access Server
  2. Profiler
  3. Designer
  4. Repository Manager
Answer : A

12) What is the function of the Case transform?

  1. To join data sets from separate streams based on conditional logic
  2. To map a column value based on multiple conditions
  3. To select a Job path based on conditional logic
  4. To split data sets based on conditional logic into separate streams.
Answer : D

13) Which technique cannot be used when recovering a Job to avoid duplicate data loading?

  1. Run the Job in Recover mode
  2. Select “Auto Correct Load” on the Target Table Options
  3. Use the Table_Comparison transform
  4. Delete data and re-execute
Answer : A

14) Which function must you use to call an external program?

  1. Call
  2. Exec
  3. Run
  4. System
Answer : B
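
A sketch of exec() in a script: it takes the command, a parameter string, and a flag controlling wait/return behavior. The path, variable, and flag value here are illustrative assumptions, so check the flag semantics for your version:

  # exec(command, parameter_list, flag) launches an external program.
  # Flag 8 (as assumed here) waits for completion and returns the output.
  $G_Output = exec('/bin/ls', '-l /data/staging', 8);
  print('exec returned: [$G_Output]');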

15) You want to print the “Employee’s name” string to the trace log. Which expression is correct?

  1. print('Employee\'s name');
  2. print('Employee's name');
  3. print("Employee's name");
  4. print('Employee"s name');
Answer : A
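
The reason only option A validates: script string literals are delimited by single quotes, and an embedded apostrophe must be escaped with a backslash:

  # The backslash escapes the apostrophe inside the single-quoted literal.
  print('Employee\'s name');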

16) Which lookup function returns multiple columns?

  1. Lookup
  2. Lookup_Adv
  3. Lookup_Ext
  4. Lookup_Seq
Answer : C
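
lookup_ext() is usually generated through its function wizard rather than typed by hand. The sketch below only indicates the general call shape with hypothetical datastore, table, and column names; the exact argument grouping is produced by the editor and varies by version:

  # Returns NAME and CITY from the lookup table for the matching CUST_ID;
  # plain lookup() can only return a single column.
  lookup_ext([DS_Stage.DBO.CUSTOMER, 'PRE_LOAD_CACHE', 'MAX'],
             [NAME, CITY], [NULL, NULL],
             [CUST_ID, '=', Query.CUST_ID])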

17) Data Integrator contains “Execute only once” logic on which two objects?

  1. Conditionals, Workflows
  2. Data flows, Work flows
  3. Work flows, Try Catch
  4. Conditionals, Dataflows
Answer : B

18) You are sourcing COBOL data from a mainframe. Which COBOL feature is not supported natively by Data Integrator?

  1. OCCURS DEPENDING
  2. REDEFINES
  3. RECORD OCCURS
  4. RECORD DELETES
Answer : D

19) You are working in a multi-user central repository based environment. You select “Rename owner” on an object which is not checked out. The object has one or more dependent objects in the local repository. What is the outcome?

  1. Data Integrator displays a second window listing the dependent objects. When you click “Continue”, the object owner is renamed and all of the dependent objects are modified.
  2. Data Integrator renames the individual object owner.
  3. Data Integrator displays the message “This object is checked out from central repository X. Please select Tools > Central Repository to activate that repository before renaming.”
  4. Data Integrator renames the owner of all objects within the selected data store.

Answer : A



20) Which function must you use to retrieve the current row number of your data set?

  1. Row
  2. Current_Row
  3. Gen_Row_Num
  4. Key_Generation
Answer : C
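
gen_row_num() takes no arguments and is typically used as the column mapping of a Query transform output column; it returns an incrementing number for each row processed:

  # Mapping for a hypothetical ROW_NO output column in a Query transform:
  gen_row_num()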

21) You want to join “sales”, “customer”, and “product” tables. Each table resides in a different data store and the join will not push down to one SQL command. The “sales” table contains approximately five million rows. The “customer” table contains approximately five thousand rows. The “product” table contains fifty records. How would you set the source table options to maximize the performance of this operation?

  1. Set the sales table join rank to 30 and cache to “no”. Set the customer table join rank to 20 and cache to “yes”. Then set the product table join rank to 10 and cache to “yes”.
  2. Set the sales table join rank to 10 and cache to “no”. Set the customer table join rank to 20 and cache to “yes”. Then set the product table join rank to 30 and cache to “yes”.
  3. Set the sales table join rank to 20 and cache to “no”. Set the customer table join rank to 10 and cache to “yes”. Then set the product table join rank to 10 and cache to “yes”.
  4. Set the sales table join rank to 20 and cache to “no”. Set the customer table join rank to 20 and cache to “yes”. Then set the product table join rank to 10 and cache to “yes”.
Answer : A

22) Which two objects must you use to create a valid real-time job?

  1. A data flow that contains an XML source message
  2. A data flow that contains an XLS source message
  3. A data flow that contains an XML source file and has the “Make Port” option selected
  4. A data flow that contains an XLS target message
Answer : A

23) A system configuration allows you to group data store configurations together with which of the following setups?

  1. Multiple data store configurations for multiple data stores
  2. A single configuration from multiple data stores
  3. Multiple data store configurations for a single data store
  4. A single configuration for a single data store
Answer : B

24) You create a job containing two work flows and three data flows. The data flows are single threaded and contain no additional function calls or sub data flow operations running as separate processes. How many “al_engine” processes will run on the job server?

  1. Four
  2. One
  3. Six
  4. Two
Answer : A

25) You need to build a job that reads a file that contains headers and footers. The header record always starts with 00. The body records start with 01. The footer record starts with 99. The header record contains customer details. The body records contain sales information. The footer indicates the number of rows in the file. The three record types contain different numbers of fields. You need to use all information in the file for your data flow. Which technique can you use to interpret this type of file?

  1. Create three file format templates, one each for the header, body, and footer records. Load the file using three data flows and use the “ignore row markers” option to separate out the header, body, and footer records.
  2. Create one file format template for the record type that contains the most fields. Use this format in one data flow and use a Case transform to separate out the header, body, and footer records.
  3. Create one file format template and three data flows, configuring the “ignore row markers” option to interpret the different parts of the file.
  4. Create one file format template, select “yes” for “File contains Header/Footer”, specify the header and footer markers, and use the format in one data flow.
Answer : A

26) Which statement about the lookup functions is correct?

  1. lookup(): returns a single value based on multiple conditions
  2. lookup_ext(): returns one value based on a single condition or multiple conditions
  3. lookup_seq(): returns multiple values based on a sequence number
  4. lookup_seq(): does not return multiple values based on a sequence number
Answer : C

27) You are unfamiliar with the data in your customer dimension table, so you decide to run a column profile on the table. Which of the following is not available when viewing the column profile results?

  1. Count of distinct values in a column
  2. Distinct values in a column
  3. Maximum string length for a varchar column
  4. Minimum string length for a varchar column
Answer : D

28) You need to use a web service within Data Integrator. What information do you need to configure Data Integrator to use this web service?

  1. Document Type definition (DTD)
  2. XML schema definition (XSD)
  3. Web Service Definition Language (WSDL)
  4. Web service URL
Answer : C

29) You have two related source tables (Department and Employees). Some employee records do not contain values for Department_ID. Which method ensures all Employees records are selected in the resulting query?

  1. Specify Department as the outer source and Employees as the inner source in the OUTER JOIN
  2. Specify Employees as the outer source and Department as the inner source in the OUTER JOIN
  3. Specify Employees.Department_ID (+) = Department.Department_ID in the WHERE clause
  4. Specify Employees.Department_ID = Department.Department_ID (+) in the WHERE clause

Answer : B
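
The reasoning behind option B: to keep every Employees row, Employees must be the outer (preserved) source, which corresponds to a LEFT OUTER JOIN from Employees. As a rough illustration, the equivalent SQL can be issued from a script with the sql() function (the datastore name and variable are hypothetical):

  # sql(datastore, command) runs raw SQL and returns the first value.
  # The LEFT OUTER JOIN keeps employees whose Department_ID is NULL.
  $G_Count = sql('DS_HR', 'SELECT COUNT(*) FROM Employees e LEFT OUTER JOIN Department d ON e.Department_ID = d.Department_ID');
  print('Employee rows after join: [$G_Count]');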



30) Which combination of input and output schemas is not permitted in an embedded data flow?

  1. 1 input and 0 outputs
  2. 0 inputs and 1 output
  3. 1 input and 1 output
  4. 0 inputs and 0 outputs
Answer : D

31) You load over 10,000,000 records from the “customer” source table into a staging area. You need to remove duplicate customers while loading from the source table. You do not need to record or audit the duplicates. Which de-duplication technique will ensure the best performance?

  1. Use a Query transform, do not sort the incoming data set, and use the previous_row_value function in the WHERE clause to filter out duplicate rows.
  2. Use a Query transform to order the incoming data set, then a Table_Comparison transform with the “Input contains duplicates” and “Sorted input” options selected.
  3. Use the Table_Comparison transform with the “Input contains duplicates” and “Cached comparison table” options selected.
  4. Use a Map_Operation transform, then a Table_Comparison transform, and then remove duplicates in a Query transform.
Answer : B

32) Which syntactical item can you not check by selecting “Validate All”?

  1. Contents of SQL transform
  2. Existence of variables
  3. Job structure
  4. Contents of the script
Answer : A

33) Some of your incoming data is rejected by the database table because of conversion errors and primary key violations. You want to edit and reload the failed data rows manually using a SQL query tool. How can you perform this action?

  1. In the target table editor, select “Use overflow file”, select “Write SQL”, and enter the file name.
  2. In the job properties, select “Trace SQL errors” and copy the failed SQL command from the job trace log.
  3. Use the SQL contained in the error log file in the “BusinessObject/data integration/logs…” directory.
  4. In the data flow properties, select “SQL Exception file” and enter the filename
Answer : A

34) You want to split your data set into three separate tables based on the region_id field. Which method will accomplish the desired result?

  1. Use the Case transform and specify three expressions based on the region_id value.
  2. Use three Table Comparison transforms based on the region_id value.
  3. Use the Validation transform and specify a validation rule based on the region_id value
  4. Use a Data Integrator While loop to separate out the three region_id values
Answer : A

35) Which method can you use to specify multiple files in the same source file format?

  1. Create a file list .txt file
  2. Use wildcards (*, ?)
  3. Use pipe separated file
  4. Create a file list .xls file
Answer : B

36) You create a real-time job that processes data from an external application. Which two mechanisms enable the external application to send messages to and receive messages from the real-time job?

  1. Adapter instance, Web service call
  2. E-mail
  3. Web service call, function call
  4. Function call, Adapter instance
Answer : A

37) You are running a job in debug mode. How long is the captured data persisted?

  1. Permanently stored in the Job Server Logs
  2. Until the Job is re-executed
  3. While the Debug is active
  4. While the Data Flow is active in the workspace
Answer : C

38) You create an expression that tests if the “Zip code” field matches a standard format of five numeric digits where the value begins with a 1 or 2. Which expression must you use to do this?

  1. match_pattern(value, '[12]9999')
  2. match_pattern(value, '[112]9999')
  3. match_pattern(value, '?[12]9999')
  4. match_pattern(value, '?[112]9999')
Answer : A
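
For context on why option A fits: in match_pattern() patterns, 9 matches any single digit and a bracketed set matches one character from that set, so '[12]9999' matches exactly five digits starting with 1 or 2. A quick sketch with a hypothetical variable:

  # match_pattern() returns 1 for a match, 0 otherwise.
  $G_Is_Valid = match_pattern('19501', '[12]9999');   # 1: five digits, starts with 1
  $G_Is_Valid = match_pattern('90210', '[12]9999');   # 0: first digit is not 1 or 2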

39) Your table contains the “sales_date” and “sales_time” fields; both fields are of data type varchar(20). The “sales_date” format is ’21-JAN-1980′ and the “sales_time” format is ’18:30:12′. You need to combine both fields and load the result into a single target field of the datetime data type. Which expression must you use to perform the conversion?

  1. to_date(sales_date || ' ' || sales_time, 'dd-mon-yyyy hh24:mi:ss')
  2. to_date(sales_date & ' ' & sales_time, 'dd-mon-yyyy hh24:mi:ss')
  3. to_date(sales_date || ' ' || sales_time, 'dd-mmm-yyyy hh24:mi:ss')
  4. to_date(sales_date & ' ' & sales_time, 'dd-mmmm-yyyy hh24:mi:ss')

Answer : A
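
The two points that make option A correct: || is the string concatenation operator (& is not an operator in Data Services), and the format mask must describe the combined string. A sketch with literal values standing in for the two columns and a hypothetical variable:

  # Concatenate the date and time strings, then convert to datetime.
  $G_Sold_At = to_date('21-JAN-1980' || ' ' || '18:30:12', 'dd-mon-yyyy hh24:mi:ss');
  print('Converted datetime: [$G_Sold_At]');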



40) What is the maximum number of loaders?

  1. 5
  2. 4
  3. 3
  4. 2
Answer : A

41) How long is the table data within a persistent cache data store retained?

  1. Until the execution of the batch job
  2. Until the job server is restarted
  3. Until the table is reloaded
  4. Until the real-time service is restarted
Answer : C

42) You are using an Oracle 10g database for the source and target tables in your data flow. In which circumstance will Data Integrator optimize the SQL to use the Oracle “MERGE” command?

  1. The “Auto correct load” option is selected on the target table
  2. The Map_Operation transform is used to map all rows from “normal” to “update” operations
  3. A Table_Comparison transform is used to compare the source with the target
  4. The “Use input keys” option is selected in the target table editor
Answer : A

43) How do you set the degree of parallelism value that a data flow will use?

  1. Right-click the work flow, select “Properties”, and enter a number for the degree of parallelism.
  2. Select the target table editor and enter a number for the “Number of loaders”.
  3. Right-click the data flow, select “Properties”, and enter a number for the degree of parallelism.
  4. Select each transform in the data flow, select “Properties”, and enter a number for the degree of parallelism.
Answer : C

44) Which of the following Match transform configurations is used to identify matching records based on similar address data?

  1. AddressSingleField_MatchBatch
  2. Address_MatchBatch
  3. FirmAddress_MatchBatch
  4. NameAddress_MatchBatch
Answer : B

45) Which interface does not require an adapter data store?

  1. COBOL copybook
  2. MQ series
  3. Web service
  4. Excel Workbook
Answer : D

46) How do you create multiple instances of the same Data Flow?

  1. Right-click + Replicate on the Data Flow in the Local Object Library
  2. Right-click + Copy/Paste the Data Flow from the Job workspace
  3. Right-click + Copy/Paste the Data Flow in the Local Object Library
  4. Right-click + Replicate the Data Flow from the Job workspace
Answer : A

47) Your sales order fact table load contains a reference to a customer_id not found in the customer dimension table. How can you replace the customer_id with a default value and preserve the original record using the Validation transform?

  1. Select “Exists in table” and “Action On Failure / Send to Both”, then select “For Pass, substitute with”.
  2. Select “Exists in table” and “Action On Failure / Send to Fail”, then select “For Pass, substitute with”.
  3. Select the “In” option and “Action On Failure / Send to Both”, then select “For Pass, substitute with”.
  4. Select the “In” option and “Action On Failure / Send to Fail”, then select “For Pass, substitute with”.
Answer : A

48) Which lookup caching method reduces the number of round trips to the translate table?

  1. Demand_Load_Cache
  2. No_Cache
  3. Pre_Load_Cache
  4. Smart_Cache
Answer : C

49) Which SQL statement is displayed when the “Trace SQL Readers” option is set to “Yes”?

  1. SQL from the source tables
  2. SQL to the target tables
  3. SQL from the Lookup_ext function
  4. SQL from the Table_Comparison transform
Answer : A

50) What is the correct sequence of transforms to populate a Type II Slowly Changing Dimension (SCD II)?

  1. Key_Generation, Table_Comparison, History_Preserving
  2. History_Preserving, Table_Comparison, Key_Generation
  3. Table_Comparison, History_Preserving, Key_Generation
  4. Key_Generation, History_Preserving
Answer : C

51) You have a source table that contains fifty columns. You need to place business rules on thirty of the columns to check the format of the source data and filter the valid and invalid records. You also want to analyze the column values that fail. What is the recommended method you should use?

  1. Use a Case transform to create two conditions that filter the invalid records
  2. Use a Map_Operation transform to map valid and invalid data rules
  3. Use a Validation transform and enable validation rules on the required columns
  4. Use two Query transforms with different WHERE clauses to filter the invalid records

Answer : C