5/20/08

What Is SPRO In BW Project?


1) What is spro?
2) How to use in bw project?
3) What is difference between idoc and psa in transfer methods?

1. SPRO is the transaction code for the Implementation Guide (IMG), where you can make configuration settings.
* Type SPRO in the transaction box and you will get the screen Customizing:
Execute Project.
* Click on the SAP Reference IMG button; you will come to the Display IMG screen.
* The following path will allow you to make the configuration settings:
SAP Customizing Implementation Guide -> SAP NetWeaver -> SAP Business Information Warehouse.

2. SPRO is used to configure the following settings:
* General settings: printer settings, fiscal year settings, ODS object settings, authorisation settings, settings for displaying SAP documents, and so on.
* Links to other systems: links between flat files and BW systems, between R/3 and BW and other data sources, and between the BW system and Microsoft Analysis Services, Crystal Enterprise, and so on.
* UD Connect settings: configuring the BI Java Connectors, establishing the RFC destination for SAP BW on the J2EE Engine, and installing availability monitoring for UD Connect.
* Automated processes: settings for batch processes, background processes, and so on.
* Transport settings: settings for source system name change after transport, and creating a destination for import post-processing.
* Reporting-relevant settings: BEx settings and general reporting settings.
* Settings for Business Content, which is already provided by SAP.

3. PSA (Persistent Staging Area): a holding area for raw data. It contains detailed requests in the format of the transfer structure. It is defined per DataSource and source system, and is source-system dependent.

IDocs (Intermediate DOCuments): data structures used as API working storage for applications that need to move data into or out of SAP systems.

Use of manual security profiles with SAP BW


-----Original Message-----
Subject: Use of manual security profiles with BW? (Business Information Warehouse)

Our company is currently on version 3.1H and will be moving to 4.6B late
summer 2000. Currently all of our R/3 security profiles were created
manually. We are also in the stage of developing and going live with the
add-on system of Business Warehouse (BW). For consistency, we wish
to use manual profiles within the BW system and later convert all of our
manual security profiles (R/3 and BW) to profile-generated ones.

Is there anyone else that can shed any light on this situation? (Success
or problems with using manual security profiles with BW?)

Any feedback would be greatly appreciated.

Thank you,

-----Reply Message-----
Subject: Use of manual security profiles with BW? (Business Information Warehouse)

Hi ,
You are going to have fun doing this upgrade. The 4.6B system is a
completely different beast from the 3.1H system. You will probably find a
lot of areas where you have to extend your manually created profiles to
cover new authorisation objects (but then you can have this at any level).

In 4.6B you really have to use the Profile Generator, but at least there is
a utility to allow you to pick up your manually created profiles and have them
converted to an activity group for you. This will give you a running start
in this area, but you will still have a lot of work to do.

The fact that you did not use the PG at 3.1H will not matter, as it changed at
4.5 too, and the old activity groups need the same type of conversion (we
are going through that bit right now).

Hope this helps

-----End of Message-----

SAP Business Information Warehouse


-----Original Message-----
Subject: Business Information Warehouse

Ever heard about apples and oranges? SAP R/3 is an OLTP system, whereas BIW
is an OLAP system. LIS reports cannot provide the functionality provided
by BIW.

-----Reply Message-----
Subject: Business Information Warehouse

Hello,

The following information is for you to get more clarity on the subject:
SAP R/3 LIS (Logistics Information System) consists of infostructures (which
are representations of reporting requirements). So whenever any event (goods
receipt, invoice receipt, etc.) takes place in an SAP R/3 module, if it is relevant
to the infostructure, a corresponding entry is made in the infostructure.
Thus infostructures form the database part of the data warehouse. For
reporting on the data (based on OLAP features such as drill-down, ABC analysis, graphics,
etc.), you can use SAP R/3 standard analysis (or flexible analysis),
Business Warehouse (which is Excel based), or Business Objects (which is a
third-party product but can interface with SAP R/3 infostructures using BAPI
calls).

In short, the infostructures (which are part of SAP R/3 LIS) form the data
basis for reporting with BW.

Regards

-----End of Message-----

SAP Data Warehouse

We have large amounts of historical sales data stored on our legacy system (i.e. multiple files with 1 million+ records). Today the users use custom-written programs and the Focus query tool to generate sales-type reports.

We want that existing legacy system to go away and need to find a home for the data and for the functionality to access and report on that data. What options does SAP offer for data warehousing? How does it affect the response of the SAP database server?

We are thinking of moving the data onto a scalable NT server with a large amount of disk (10 GB+) and using PC tools to access the data. In this environment, our production SAP machine would perform weekly data transfers to this historical sales reporting system.

Has anybody implemented a similar solution or have any ideas on a good attack method to solve this issue?

You may want to look at SAP's Business Information Warehouse. This is their answer to data warehousing. I saw a presentation on this last October at the SAP Technical Education Conference and it looked pretty slick.

BIW runs on its own server to relieve the main database from query and report processing. It accepts data from many different types of systems and has a detailed administration piece to determine data source and age. Although the Information System may be around for some time, it sounded like SAP is moving towards the Business Information Warehouse as a reporting solution.

The Three Layers of SAP BW



SAP BW has three layers:

* Business Explorer: As the top layer in the SAP BW architecture, the Business Explorer (BEx) serves as the reporting environment (presentation and analysis) for end users. It consists of the BEx Analyzer, BEx Browser, BEx Web, and BEx Map for analysis and reporting activities.

* Business Information Warehouse Server: The SAP BW server, as the middle layer, has two primary roles:

• Data warehouse management and administration: These tasks are handled by the production data extractor (a set of programs for the extraction of data from R/3 OLTP applications such as logistics and controlling), the staging engine, and the Administrator Workbench.
• Data storage and representation: These tasks are handled by the InfoCubes in conjunction with the data manager, Metadata repository, and Operational Data Store (ODS).
* Source Systems: The source systems, as the bottom layer, serve as the data sources for raw business data. SAP BW supports various data sources:

• R/3 Systems as of Release 3.1H (with Business Content) and R/3 Systems prior to Release 3.1H (SAP BW regards them as external systems)
• Non-SAP systems or external systems
• mySAP.com components (such as mySAP SCM, mySAP SEM, mySAP CRM, or R/3 components) or another SAP BW system.



5/12/08

Tickets and Authorization in SAP Business Warehouse



What are tickets? Give an example.

Typical tickets in production support work could be:
1. Loading any of the missing master data attributes/texts.
2. Create ADHOC hierarchies.
3. Validating the data in Cubes/ODS.
4. If any of the loads runs into errors then resolve it.
5. Add/remove fields in any of the master data/ODS/Cube.
6. Data source Enhancement.
7. Create ADHOC reports.

1. Loading any of the missing master data attributes/texts - This would be done by scheduling the infopackages for the attributes/texts mentioned by the client.
2. Create ADHOC hierarchies. - Create hierarchies in RSA1 for the info-object.
3. Validating the data in Cubes/ODS. - By using the Validation reports or by comparing BW data with R/3.
4. If any of the loads runs into errors then resolve it. - Analyze the error and take suitable action.
5. Add/remove fields in any of the master data/ODS/Cube. - Depends upon the requirement
6. Data source Enhancement.
7. Create ADHOC reports. - Create some new reports based on the requirement of client.

Tickets are the tracking tool by which the user tracks the work that we do. A ticket can be a change request, a data load issue, or anything else. Tickets are of types critical or moderate; a critical ticket may need to be solved in a day or half a day, depending on the client. After solving the issue, the ticket is closed by informing the client that it is resolved.

Tickets are raised during a support project for any issues or problems. If the support person faces an issue, he will request the operator to raise a ticket; the operator raises the ticket and assigns it to the respective person. "Critical" means the most complicated issues, though how you measure this varies. The concept of a ticket varies from contract to contract between companies. Generally, a ticket raised by the client is handled based on priority, e.g. high priority, low priority, and so on. A high-priority ticket has to be resolved ASAP; a low-priority ticket is considered only after attending to the high-priority tickets.

Checklists for a support project of BPS - To start the checklist:

1) InfoCubes / ODS / datatargets
2) planning areas
3) planning levels
4) planning packages
5) planning functions
6) planning layouts
7) global planning sequences
8) profiles
9) list of reports
10) process chains
11) enhancements in update routines
12) any ABAP programs to be run and their logic
13) major bps dev issues
14) major bps production support issues and resolution

Differences Between BW and BI Versions




List the differences between BW 3.5 and BI 7.0 versions.

Major differences between SAP BW 3.5 and SAP BI 7.0:

1. In Infosets now you can include Infocubes as well.
2. The Remodeling transaction helps you add new key figures and characteristics, and handles historical data as well without much hassle. This is only for InfoCubes.
3. The BI accelerator (for now only for InfoCubes) helps reduce query run time by almost a factor of 10 to 100. The BI accelerator is a separate box and would cost more; vendors for it would be HP or IBM.
4. Monitoring has been improved with a new portal-based cockpit, which means you would need an EP (Enterprise Portal) resource on your project to implement the portal.
5. Search functionality has improved: you can search for any object, unlike in 3.5.
6. Transformations are in and routines are passé, though you can always revert to the old transactions too.
7. The Data Warehousing Workbench replaces the Administrator Workbench.
8. Functional enhancements have been made for the DataStore object: a new type of DataStore object, and enhanced settings for performance optimization of DataStore objects.
9. The transformation replaces the transfer and update rules.

10. New authorization objects have been added
11. Remodeling of InfoProviders supports you in Information Lifecycle Management.
12. The DataSource:
There is a new object concept for the DataSource.
Options for direct access to data have been enhanced.
From BI, remote activation of DataSources is possible in SAP source systems.
13. There are functional changes to the Persistent Staging Area (PSA).
14. BI supports real-time data acquisition.
15. SAP BW is now formally known as BI (part of NetWeaver 2004s). It implements Enterprise Data Warehousing (EDW). The new features / major differences include:

a) The ODS is renamed DataStore.
b) Inclusion of the write-optimized DataStore, which does not have a change log and whose requests do not need any activation.
c) Unification of transfer and update rules.
d) Introduction of the "end routine" and "expert routine".
e) Push of XML data into the BI system (into the PSA) without the Service API or delta queue.
f) Introduction of the BI accelerator, which significantly improves performance.
g) Load through the PSA has become a must (I am not entirely sure about this, but it looks like we no longer have the option to bypass the PSA).

16. Load through the PSA has become mandatory; you can't skip it, and there is also no IDoc transfer method in BI 7.0. The DTP (Data Transfer Process) replaced the transfer and update rules. Also, in the transformation we can now use a start routine, expert routine and end routine during data load.
New features in BI 7 compared to earlier versions:
i. New data flow capabilities such as Data Transfer Process (DTP), Real time data Acquisition (RDA).
ii. Enhanced and Graphical transformation capabilities such as Drag and Relate options.
iii. One level of Transformation. This replaces the Transfer Rules and Update Rules
iv. Performance optimization includes new BI Accelerator feature.
v. User management (includes new concept for analysis authorizations) for more flexible BI end user authorizations.

What Is Different Between ODS & IC



What is the difference between an IC and an ODS? How do you load flat data to an IC and an ODS?

By: Vaishnav

An ODS is a data store where you can store data at a very granular level. It has overwriting capability. The data is stored in two-dimensional tables. A cube, on the other hand, is based on multidimensional modeling, which facilitates reporting on different dimensions. The data is stored in aggregated form, unlike in an ODS, and has no overwriting capability. Reporting and analysis can be done on multiple dimensions, unlike with an ODS.

ODSs are used to consolidate data. Normally an ODS contains very detailed data; technically there is the option to overwrite or add single records. InfoCubes are optimized for reporting. There are options to improve performance, like aggregates and compression, and it is not possible to replace single records: all records sent to an InfoCube are added up.

The most important difference between an ODS and an InfoCube is the existence of key fields in the ODS. In the ODS you can have up to 16 InfoObjects as key fields; any other InfoObjects will either be added or overwritten. So if you have flat files and want to be able to upload them multiple times, you should not load them directly into the InfoCube; otherwise you need to delete the old request before uploading a new one. There is the disadvantage that if you delete rows in the flat file, the rows are not deleted in the ODS.
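The overwrite-versus-add behaviour described above can be sketched in a few lines of Python (an illustration only, not SAP code; the record layout and field names are made up):

```python
# Illustrative sketch: contrast ODS overwrite-by-key semantics with
# InfoCube additive semantics. Hypothetical record structure.

def ods_update(ods, records, key_fields):
    """Overwrite: a record with an existing key replaces the old one."""
    for rec in records:
        key = tuple(rec[f] for f in key_fields)
        ods[key] = rec  # overwrite (or insert if new)
    return ods

def infocube_update(cube, records, dims):
    """Additive: key figures for the same dimension combination add up."""
    for rec in records:
        key = tuple(rec[d] for d in dims)
        cube[key] = cube.get(key, 0) + rec["amount"]
    return cube

# Loading the same flat file twice:
ods, cube = {}, {}
load = [{"order": "4711", "amount": 100}]
for _ in range(2):
    ods_update(ods, load, key_fields=["order"])
    infocube_update(cube, load, dims=["order"])

print(ods[("4711",)]["amount"])   # 100 -> overwritten, still correct
print(cube[("4711",)])            # 200 -> added twice, doubled
```

This is why the text above warns against loading a flat file directly into the InfoCube multiple times without deleting the old request first.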

I also use ODS objects to upload control data for update or transfer routines. You can simply do a select on the ODS table /BIC/A00 to get the data.

An ODS is used as an intermediate storage area of operational data for the data warehouse. An ODS contains highly granular data. ODSs are based on flat tables, resulting in simple modeling. We can cleanse, transform, merge and sort data to build staging tables that can later be used to populate an InfoCube.

An InfoCube is a multidimensional data container used as a basis for analysis and reporting processing. The InfoCube consists of a fact table and its associated dimension tables in a star schema: the fact table appears in the middle, surrounded by several dimension tables. The central fact table is usually very large, measured in gigabytes; it is the table from which you retrieve the interesting data. The size of the dimension tables amounts to only 1 to 5 percent of the size of the fact table. Common dimensions are unit, time, etc. There are different types of InfoCubes in BW, such as basic InfoCubes, remote InfoCubes, etc.

An ODS is a flat data container used for reporting and data cleansing/quality assurance purposes. ODSs are not based on a star schema and are used primarily for detail reporting rather than for dimensional analysis.

An InfoCube has a fact table, which contains its facts (key figures), and a relation to dimension tables. This means that an InfoCube consists of more than one table, and these tables all relate to each other. This is also called the star schema, because the dimension tables all relate to the fact table, which is the central point. A dimension is, for example, the customer dimension, which contains all data that is important for the customer.

An ODS is a flat structure: just one table that contains all the data. Most of the time you use an ODS for line-item data, then aggregate this data into an InfoCube.
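As a rough illustration of the star schema idea (hypothetical data, not SAP structures), a query on an InfoCube essentially joins the central fact table to its dimension tables and aggregates the key figures:

```python
# Minimal star-schema sketch: a central fact table referencing a
# customer dimension table, aggregated for reporting.
fact = [  # fact table: dimension key + key figure
    {"cust_id": 1, "revenue": 500},
    {"cust_id": 1, "revenue": 300},
    {"cust_id": 2, "revenue": 200},
]
dim_customer = {1: "ACME", 2: "Globex"}  # dimension table

# Drill down revenue by customer (what a query on an InfoCube does):
report = {}
for row in fact:
    name = dim_customer[row["cust_id"]]
    report[name] = report.get(name, 0) + row["revenue"]

print(report)  # {'ACME': 800, 'Globex': 200}
```

The ODS equivalent would simply be the `fact` list itself, one flat table holding the line items.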



Difference Between PSA, ALE IDoc, ODS



What is the difference between PSA and ALE IDoc? And how is data transferred using each of them?

The following update types are available in SAP BW:
1. PSA
2. ALE (data IDoc)

You determine the PSA or IDoc transfer method in the transfer rule maintenance screen. The process of loading the data for both transfer methods is triggered by a request IDoc to the source system. Info IDocs are used in both transfer methods; Info IDocs are transferred exclusively using ALE.

A data IDoc consists of a control record, a data record, and a status record. The control record contains administrative information such as the receiver, the sender, and the client. The status record describes the status of the IDoc, for example "Processed". If you use the PSA for data extraction, you benefit from increased flexibility (treatment of incorrect data records). Since you are storing the data temporarily in the PSA before updating it into the data targets, you can check the data and change it if necessary. Unlike a data request with IDocs, the PSA gives you various options for additional data updates into data targets:

InfoObject/Data Target Only - This option means that the PSA is not used as a temporary store. You choose this update type if you do not want to check the source system data for consistency and accuracy, or you have already checked this yourself and are sure that you no longer require this data since you are not going to change the structure of the data target again.

PSA and InfoObject/Data Target in Parallel (Package by Package) - BW receives the data from the source system, writes the data to the PSA and at the same time starts the update into the relevant data targets. Therefore, this method has the best performance.

The parallel update is described in detail in the following: a dialog process is started per data package, in which the data of this package is written into the PSA table. If the data is posted successfully into the PSA table, the system releases a second, parallel dialog process that writes the data to the data targets. In this dialog process the transfer rules are applied to the data records of the data package, the data is transferred to the communication structure, and then written to the data targets. The first dialog process (data posting into the PSA) confirms in the source system that it is completed, and the source system sends a new data package to BW while the second dialog process is still updating the data into the data targets.

The parallelism relates to the data packages; that is, the system writes the data packages into the PSA table and into the data targets in parallel. Caution: the maximum number of processes set in the source system in Customizing for the extractors does not restrict the number of processes in BW. Therefore, BW can require many dialog processes for the load process. Ensure that there are enough dialog processes available in the BW system; if there are not enough processes on the system side, errors occur. For this reason, this method is the least recommended.

PSA and then into InfoObject/Data Targets (Package by Package) - Updates data in series into the PSA table and into the data targets, by data package. The system starts one process that writes the data packages into the PSA table. Once the data is posted successfully into the PSA table, it is then written to the data targets in the same dialog process. Updating in series gives you more control over the overall data flow compared to the parallel transfer, since there is only one process per data package in BW. In the BW system, the maximum number of dialog processes required for each data request corresponds to the setting you made in Customizing for the extractors on the control parameter maintenance screen. In contrast to the parallel update, the system confirms that the process is completed only after the data has been updated into the PSA and into the data targets for the first data package.
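The serial and parallel update modes described above can be sketched as follows (a simplified illustration only; real BW uses dialog work processes, for which Python threads stand in here, and the PSA/target writes are just stubs):

```python
# Hedged sketch of the two package-by-package update modes.
from concurrent.futures import ThreadPoolExecutor

def write_to_psa(package):
    return list(package)          # stand-in for posting to the PSA table

def write_to_targets(psa_rows):
    return len(psa_rows)          # stand-in for applying transfer rules

packages = [[1, 2], [3, 4, 5]]

# Serial: PSA first, then targets, in the same process per package.
serial_total = 0
for pkg in packages:
    psa_rows = write_to_psa(pkg)
    serial_total += write_to_targets(psa_rows)

# Parallel: once a package is in the PSA, a second worker updates the
# targets while the next package is already being received.
with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(write_to_targets, write_to_psa(p)) for p in packages]
    parallel_total = sum(f.result() for f in futures)

print(serial_total, parallel_total)  # 5 5 -> same result, different scheduling
```

The trade-off mirrors the text: the parallel mode overlaps PSA posting and target updates but needs more concurrent processes; the serial mode uses one process per package and is easier to control.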

Only PSA - The data is not posted further from the PSA table immediately. It is useful to transfer the data only into the PSA table if you want to check its accuracy and consistency and, if necessary, modify the data. You then have the following options for updating data from the PSA table:

Automatic update - In order to update the data automatically in the relevant data target after all data packages are in the PSA table and updated successfully there, in the scheduler when you schedule the InfoPackage, choose Update Subsequently in Data Targets on the Processing tab page. *-- Sunil

What is difference between PSA and ODS?

PSA: This is just an intermediate data container; it is NOT a data target. Its main purpose is data quality maintenance. It holds the original (unchanged) data from the source system.

ODS: This is a data target. Reporting can be done through an ODS. ODS data is overwritable. For DataSources for which delta is not enabled, an ODS can be used to upload delta records to an InfoCube.

You can do reporting on an ODS; in the PSA you can't do reporting directly.

An ODS contains detail-level data. In the PSA, the requested data is saved unchanged from the source system: request data is stored in the transfer structure format in transparent, relational database tables in the Business Information Warehouse. The data format remains unchanged, meaning that no summarization or transformations take place.

An ODS has three tables (active data, new data, and change log); the PSA does not.

Difference Between BW Technical and Functional



In general, "functional" means deriving the functional specification from the business requirement document. This job is normally done either by the business analyst or by a systems analyst who has very good knowledge of the business. In some large organizations there will be a business analyst as well as a systems analyst.

Any business requirement or need for new reports or queries originates with the business user. This requirement is recorded after discussion by the business analyst. A systems analyst analyses these requirements and generates the functional specification document. In the case of BW this could also be called the logical design, in data modeling terms.

After review, this logical design is translated into a physical design. This process defines all the required dimensions, key figures, master data, etc.

Once this is approved and signed off by the requester (users), it is converted into practically usable tasks using the SAP BW software. This is what is called "technical". The whole process of creating InfoProviders, InfoObjects, InfoSources, source systems, etc. falls under the technical domain.

What role does a consultant play if the title is BW administrator? What are his day-to-day activities, and which main focus areas should he be proficient in?

A BW administrator is the person who provides authorization access to different roles and profiles, depending on the requirement.

For example, there are two groups of people: Group A and Group B.

Group A - Manager

Group B - Developer

Now the authorizations or access rights for the two groups are different.

For this sort of activity, we require an administrator.

Tips by : Raja Muraly, Rekha

Which one is more in demand in SAP Job, ABAP/4 or BW?

In terms of opportunities, a career in SAP BW sounds better.

ABAP knowledge will help you excel as a BW consultant, so taking the training in ABAP will be worth it.

You can shift to BW from either an ABAP or a functional consultant background. The advantage of the ABAP background is that you will find it easier to understand the technical aspects of BW, such as when you need to create additional transfer structures or program conversion routines for the data being uploaded, as well as being familiar with the source tables in SAP R/3.

The advantage of coming from a functional consultant background is the knowledge of the business process. This is important when you're modeling new infocubes. You should be familiar with what kind of data/information your user needs and how they want to view/group the data together.

Daily Tasks in Support Role and Infopackage Failures


1. Why are there frequent load failures during extractions, and how are they analysed?

If these failures are related to data, there might be data inconsistency in the source system, even though you are handling it properly in the transfer rules. You can monitor these issues in transaction RSMO and in the PSA (failed records), and update from there.

If you are talking about the whole extraction process, there might be issues with work process scheduling and IDoc transfer from the source system to the target system. These can be re-initiated by cancelling that specific data load (usually by changing the request colour from yellow to red in RSMO) and restarting the extraction.

2. Can anyone explain briefly about 0RECORDMODE in an ODS?

0RECORDMODE is an SAP-delivered InfoObject that is added to the ODS object on activation. Using it, the ODS is updated during delta loads. It has possible values such as X, D and R: D and R are for deleting and removing records, and X is for skipping records during the delta load.
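A rough sketch of how these record-mode values could drive a delta update (simplified; the real BW semantics are richer and include further modes):

```python
# Illustrative delta handling keyed on a record-mode flag.
def apply_delta(active, delta):
    for key, mode, data in delta:
        if mode == "X":            # skip: record carries no usable image
            continue
        elif mode in ("D", "R"):   # delete / reverse: remove the record
            active.pop(key, None)
        else:                      # blank: after-image, overwrite
            active[key] = data
    return active

active = {}
delta = [
    ("100", "",  {"qty": 5}),   # new record
    ("100", "",  {"qty": 8}),   # changed record, overwrites
    ("200", "",  {"qty": 1}),
    ("200", "D", None),         # deletion
    ("300", "X", None),         # skipped
]
apply_delta(active, delta)
print(active)  # {'100': {'qty': 8}}
```

This is only meant to show why the overwrite capability of the ODS is what makes delta handling with record modes possible at all.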

3. What is reconciliation in BW? What is the procedure for reconciliation?

Reconciliation is the process of comparing the data, after it is transferred to the BW system, with the source system. The procedure: either check the data via SE16 if it comes from one particular table only, or, if the DataSource is a standard DataSource drawing from many tables, ask the R/3 consultant to run a report on those particular selections, get the data in an Excel sheet, and reconcile it with the data in BW. If you are familiar with the R/3 reports, you need not depend on the R/3 consultant (it is better to know which reports to run to check the data).

4. What are the daily tasks we do in production support? How many times do we extract the data, and at what times?

It depends. Data load timings are in the range of 30 minutes to 8 hours. The time depends on the number of records and the kind of transfer rules you have provided. If the transfer rules are roundabout, or the update rules have calculations for customized key figures, long run times are to be expected.

Usually you need to work in RSMO, see which records are failing, and update from the PSA.

5. What are some of the frequent failures and errors?

As for frequent failures and errors: there is no fixed reason for a load to fail. From an interview perspective, I would answer it this way:

a) Loads can be failed due to the invalid characters
b) Can be because of the deadlock in the system
c) Can be because of a previous load failure, if the load is dependent on other loads
d) Can be because of erroneous records
e) Can be because of RFC connections

These are some of the reasons for the load failures

Questions Answers on SAP BW



What is the purpose of setup tables?

Setup tables are a kind of interface between the extractor and the application tables. The LO extractor takes data from the setup tables during initialization and full upload, so hitting the application tables for selection is avoided. As these tables are required only for full and init loads, you can delete the data after loading in order to avoid duplicates. Setup tables are filled with data from the application tables; they sit on top of the actual application tables (i.e. the OLTP tables storing transaction records) and are filled during the setup run. Normally it is good practice to delete the existing setup tables before executing the setup runs, so as to avoid duplicate records for the same selections.

We have a cube; what is the need to use an ODS? Why is an ODS necessary when we already have a cube?

1) Remember that a cube has aggregated data and an ODS has granular data.
2) In the update rules of an InfoCube you do not have an overwrite option, whereas for an ODS the default is overwrite.

What is the importance of transaction RSKC? How is it useful in resolving issues with special characters?

How to handle double data loading in SAP BW?

What do you mean by SAP exit, User exit, Customer exit?

What are some of the production support issues (a troubleshooting guide)?

When do we go for Business Content extraction, and when for LO/COPA extraction?


What are some of the InfoCube names in SD and MM that we use for extraction and loading into BW?

How to create indexes on ODS and fact tables?

What is the data load monitor (RSMO or RSMON)?


1A. RSKC.

Using this transaction code, you can allow the BW system to accept special characters in the data coming from source systems. The list of characters can be obtained after analysing the source system's data, or can be confirmed with the client during the design-specification stage.
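The idea behind the check can be sketched like this (the permitted character set below is illustrative, not SAP's exact default list):

```python
# Sketch of the idea behind RSKC: values containing characters outside
# the permitted set are rejected before they reach a data target.
PERMITTED = set("ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 -_./")

def check_value(value, extra_allowed=""):
    """Return True if every character is permitted.
    extra_allowed plays the role of the characters maintained in RSKC."""
    allowed = PERMITTED | set(extra_allowed)
    return all(ch in allowed for ch in value.upper())

print(check_value("MAT-001"))        # True
print(check_value("MAT#001"))        # False -> would fail the load
print(check_value("MAT#001", "#"))   # True  -> after maintaining '#' in RSKC
```

This is why loads with, say, a stray '#' in a characteristic value fail until that character is added to the permitted list.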

2A. Exits.

These exits are customized for handling data transfer in various scenarios
(e.g. replacement path in reports - a way to pass a variable to a BW report).
Some can be developed by a BW/ABAP developer and inserted wherever required.

Some of these programs are already available as part of SAP Business Content; these are called SAP exits. Depending on the requirement, we need to extend some exits and customize them.

3A.

Production issues are different for each BW project and most common issues can be obtained from some of the previous mails. (data load issues).

4A.

LIS extraction is rather old school and not preferred for big BW systems; here you can expect issues related to performance and data duplication in the setup tables.

LO extraction came with most of the advantages; using it, you can extend existing extract structures and use customized DataSources.

If you can fetch all the required data elements using SAP-provided extract structures, you don't need to write custom extractions. You will get a clear idea of this after analysing the source system's data fields and the required fields in the target system's data target structure.

5A.

MM - 0PUR_C01 (purchasing data), 0PUR_C03 (vendor evaluation)
SD - 0SD_C01 (customer), 0SD_C03 (sales overview), etc.

6A.

You can do this by choosing the "Manage Data Target" option and clicking the buttons available on the "Performance" tab.

7A.

RSMO is used to monitor the data flow to the target system from the source system. You can see data by request, source system, time, request ID, etc.


What is KPI?

KPI stands for Key Performance Indicator.
KPIs are values companies use to manage their business, e.g. net profit.

In detail:

A KPI is used to measure how well an organization or individual is accomplishing its goals and objectives. Organizations and businesses typically outline a number of KPIs to evaluate progress made in areas where performance is harder to measure.

For example, job performance, consumer satisfaction and public reputation can be determined using a set of defined KPIs. Additionally, KPI can be used to specify objective organizational and individual goals such as sales, earnings, profits, market share and similar objectives.

The KPIs selected must reflect the organization's goals, they must be key to its success, and they must be measurable. Key performance indicators are usually long-term considerations for an organization.
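As a toy illustration of the idea above (not SAP code), the following sketch computes two hypothetical KPIs, net profit and net margin, from simple revenue and cost figures:

```python
# Illustrative sketch (not SAP code): two hypothetical KPIs computed
# from plain revenue/cost figures.

def net_profit(revenue: float, costs: float) -> float:
    """Net profit: revenue minus total costs."""
    return revenue - costs

def net_margin(revenue: float, costs: float) -> float:
    """Net margin as a percentage of revenue."""
    return net_profit(revenue, costs) / revenue * 100

print(net_profit(1_200_000, 900_000))            # 300000
print(round(net_margin(1_200_000, 900_000), 1))  # 25.0
```

In a BW context such values would typically be modeled as key figures and calculated key figures on an InfoProvider rather than computed by hand.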

Business Warehouse SAP Interview



1. How to convert a BeX query Global structure to local structure (Steps involved)?


Steps to convert a BEx query global structure to a local structure:
Use a local structure when you want to add structure elements that are unique to the specific query. Changing a global structure changes it for all the queries that use it; that is the reason to go for a local structure.

Coming to the navigation part--

1. In the BEx Analyzer, from the SAP Business Explorer toolbar, choose the Open Query icon (the icon that looks like a folder).
2. In the SAP BEx Open dialog box, choose Queries.
3. Select the desired InfoCube and choose New.
4. On the Define the Query screen, expand the Structure node in the left frame.
5. Drag and drop the desired structure into either the Rows or Columns frame.
6. Select the global structure, right-click, and choose Remove Reference. A local structure is created.

Remember that you cannot revert the changes made in this way. You will have to delete the local structure and then drag and drop the global structure back into the query definition.

When you try to save a global structure, a dialog box prompts you to confirm changes to all queries; that is how you identify a global structure.

2. I have an RKF and a CKF in a query. If the report gives an error, which should be checked first, the RKF or the CKF, and why? (This was asked in an interview.)

An RKF consists of a key figure restricted by certain characteristic combinations; a CKF contains calculations that make full use of various key figures.

They are not interdependent on each other; you can have both at the same time.

To my knowledge there is no documented limit on the number of RKFs and CKFs; the only concern would be performance. Restricted and calculated key figures would not be an issue, but the number of key figures you can have in a cube is limited to around 248.

Restricted key figures restrict the key figure values based on a characteristic. (Remember, this restricts only the KF values, not the whole query.)

Ex: You can restrict the values based on a particular month.

Now I create an RKF like this (ZRKF): restrict a funds key figure with a period variable entered by the user.

This is defined globally and can be used in any of the queries on that InfoProvider. In the columns, let us assume there are three company codes. In a new selection, I drag in:

ZRKF
Company Code 1

Similarly, I do the same for the other company codes.

This means I have created the RKF once and am using it in different ways in different columns (restricting with other characteristics too).

In the properties I set the relevant target currency, and the value is displayed after conversion from the native currency to the target currency.

Similarly for other two columns with remaining company codes.
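The restriction idea above can be sketched outside of BEx as well. In this hypothetical example (field names company_code, period, and amount are made up, not from any SAP structure), a "restricted key figure" is simply the key figure summed only over rows matching the characteristic restriction:

```python
# Illustrative sketch (not BEx itself): an RKF as a key figure summed
# only over rows matching a characteristic restriction.
# All field names below are hypothetical.

rows = [
    {"company_code": "1000", "period": "001", "amount": 100.0},
    {"company_code": "2000", "period": "001", "amount": 250.0},
    {"company_code": "1000", "period": "002", "amount": 80.0},
]

def restricted_kf(rows, **restrictions):
    """Sum the key figure over rows matching all characteristic restrictions."""
    return sum(r["amount"] for r in rows
               if all(r[k] == v for k, v in restrictions.items()))

# One RKF definition reused per column, each time restricted differently:
print(restricted_kf(rows, company_code="1000", period="001"))  # 100.0
print(restricted_kf(rows, company_code="2000", period="001"))  # 250.0
```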

3. What is the use of Define cell in BeX & where it is useful?

Use of cells in BEx:

When you define selection criteria and formulas for structural components, and a query has two structural components, generic cell definitions are created at the intersections of the structural components; these determine the values presented in the cells.

Cell-specific definitions allow you to define explicit formulas, along with implicit cell definition, and selection conditions for cells and in this way, to override implicitly created cell values. This function allows you to design much more detailed queries.

In addition, you can define cells that have no direct relationship to the structural components. These cells are not displayed and serve as containers for help selections or help formulas.

You need two structures to enable the cell editor in BEx. Every query has one structure for key figures; you then have to create another structure with selections or formulas inside.

With two structures, their cross product yields a fixed reporting area of n rows * m columns. The intersection of any row with any column can be defined as a formula in the cell editor.

This is useful when you want a particular cell to behave differently from the general behaviour described in your query definition.

For example, imagine the following, where % is the formula kfB / kfA * 100:

     kfA  kfB    %
chA    6    4  66%
chB   10    2  20%
chC    8    4  50%

Now suppose you want the % for row chC to be the sum of the % for chA and the % for chB. In the cell editor you can write a formula specifically for that cell as the sum of the two cells above it, chC/% = chA/% + chB/%. Then:

     kfA  kfB    %
chA    6    4  66%
chB   10    2  20%
chC    8    4  86%
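The worked example above can be sketched in a few lines (an illustration of the idea, not BEx code): the generic formula applies to every row, and the cell-specific definition then overrides the single cell chC/%.

```python
# Illustrative sketch: generic formula % = kfB / kfA * 100 for every row,
# with a cell-specific definition overriding the single cell chC/%.

data = {"chA": (6, 4), "chB": (10, 2), "chC": (8, 4)}

# Generic cell values: chA ~66.7, chB 20.0, chC 50.0
pct = {row: kfB / kfA * 100 for row, (kfA, kfB) in data.items()}

# Cell-specific definition: chC/% = chA/% + chB/%
pct["chC"] = pct["chA"] + pct["chB"]

print({row: round(v, 1) for row, v in pct.items()})
```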

SAP BW Interview Questions 2


1) What is a process chain? How many types are there? How many do we use in real-time scenarios? Can we define interdependent processes, with tasks like data loading, cube compression, index maintenance, and master data & ODS activation, with the best possible performance and data integrity?
2) What is data integrity and how can we achieve it?
3) What is index maintenance and what is its purpose in real time?
4) When and why do we use InfoCube compression in real time?
5) What is meant by data modelling and what does the consultant do in data modelling?
6) How can we enhance Business Content, and for what purpose do we enhance it (given that we can simply activate Business Content)?
7) What is fine-tuning, how many types are there, and for what purpose is tuning done in real time? Can tuning only be done with InfoCube partitions and aggregates, or by other means as well?
8) What is meant by a MultiProvider and for what purpose do we use one?
9) What are scheduled and monitored data loads, and for what purpose?


Ans # 1: Process chains exist in the Administrator Workbench. Using them we can automate ETL processes; they allow BW administrators to schedule all activities and monitor them (T-code: RSPC).

PROCESS CHAIN - Before defining a PROCESS CHAIN, let us define a PROCESS within a chain: a procedure, either within SAP or external to it, with a start and an end. This process runs in the background.

A PROCESS CHAIN is a set of such processes linked together in a chain. In other words, each process is dependent on the previous process, and the dependencies are clearly defined in the process chain.

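A dependency chain like the one just described can be sketched as follows (an illustration only, not RSPC): each process runs only after its predecessor, and a failure stops the rest of the chain.

```python
# Illustrative sketch (not RSPC): a process chain as processes executed
# in dependency order; the chain stops at the first failing process.

def load_data():     return True   # hypothetical process steps
def compress_cube(): return True
def rebuild_index(): return True

chain = [("Load InfoCube",     load_data),
         ("Compress InfoCube", compress_cube),
         ("Rebuild Indexes",   rebuild_index)]

def run_chain(chain):
    """Run each process in order; abort the chain on the first failure."""
    for name, process in chain:
        if not process():
            return f"Chain aborted at: {name}"
    return "Chain finished"

print(run_chain(chain))  # Chain finished
```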
This is normally done in order to automate a job or task that has to execute more than one process in order to complete.

The following steps show how to find and cancel the background job of a process chain in the source system:

1. Check the source system for that particular process chain.
2. Select the request ID of the process chain (it will be in the Header tab).
3. Go to SM37 in the source system.
4. Double-click on the job.
5. You will navigate to a new screen.
6. There, click the "Job Details" button.
7. A small pop-up window appears.
8. In the pop-up, take note of a) the executing server and b) the WP number/PID.
9. Open a new SM37 session (/OSM37).
10. Click the "Application Servers" button.
11. You can see the different application servers; go to the executing server (point 8a) and double-click it.
12. Go to the PID (point 8b).
13. On the far left you can see a checkbox.
14. Check the checkbox.
15. On the menu bar you can see "Process".
16. Under "Process" you have the option "Cancel with Core".
17. Click on that option. * -- Ramkumar K

Ans # 2: Data integrity is about eliminating duplicate entries in the database and achieving normalization.

Ans # 4: InfoCube compression eliminates duplicates by collapsing all requests into one. Compressed InfoCubes require less storage space and are faster for retrieving information. The catch is that once you compress, you can no longer delete the data by request. You are safe as long as you don't have any errors in your modeling.

This compression can be done through Process Chain and also manually.

Tips by: Anand
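The compression idea above can be illustrated with a small sketch (field names and values are hypothetical, not an actual BW fact table): records sharing the same characteristic key are collapsed, the request ID is dropped, and the key figures are summed.

```python
# Illustrative sketch: compression collapses records that share the same
# characteristic key (dropping the request ID) and sums their key figures.
from collections import defaultdict

records = [  # (request_id, material, quantity) -- hypothetical data
    (1, "MAT-A", 10),
    (2, "MAT-A", 5),   # same material, different request
    (2, "MAT-B", 7),
]

compressed = defaultdict(int)
for _request_id, material, qty in records:
    compressed[material] += qty    # the request ID is eliminated

print(dict(compressed))  # {'MAT-A': 15, 'MAT-B': 7}
```

Because the request ID no longer exists after this step, deleting by request is impossible, which is why compression should only be run on verified loads.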

Ans#3: Indexing is a process where data is stored with an index. E.g., a phone book: when we write down somebody's number, Prasad's number goes under "P" and Rajesh's number under "R". The phone book works by indexing; similarly, storing data by creating indexes is called indexing.
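The phone-book analogy can be sketched directly: an index maps a key (here, the first letter) to the entries under it, so a lookup scans only one small bucket instead of the whole table.

```python
# Illustrative sketch of the phone-book analogy: the index maps a first
# letter to the names under it, so a lookup scans only one bucket.
from collections import defaultdict

names = ["Prasad", "Rajesh", "Priya", "Ravi"]

index = defaultdict(list)
for name in names:
    index[name[0]].append(name)   # build the index once

print(index["P"])  # ['Prasad', 'Priya'] -- only the "P" bucket is scanned
```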

Ans#5: Data modeling is the process where you collect the facts, the attributes associated with the facts, navigational attributes, etc., and, after collecting all of these, decide which ones you will be using. The collection is done by interviewing the end users, the power users, the stakeholders, etc. It is generally done by the team lead, the project manager, or sometimes a senior consultant (4-5 years of experience), so if you are new you don't have to worry about it. But do remember that it is an important aspect of any data warehousing solution, so make sure that you have read up on data modeling before attending any interview or even starting to work.

Ans#6: We can enhance Business Content by adding fields to it. Since Business Content is delivered by SAP, it may not contain all the InfoObjects, InfoCubes, etc. that you want to use according to your company's data model. E.g., Business Content has a customer InfoCube, but your company uses an additional attribute, say apartment number; then, instead of constructing a whole new InfoCube, you can add that field to the existing Business Content InfoCube and get going.

Ans#7: Tuning is one of the most important processes in BW. Tuning is done to increase efficiency: lowering the time to load data into a cube, to access a query, to perform a drill-down, etc. Fine-tuning means lowering time for everything possible. Tuning can be achieved by many means, not only partitions and aggregates; there are various other things you can do, e.g. compression.

Ans#8: A MultiProvider combines various InfoProviders for reporting purposes. For example, you can combine 4-5 InfoCubes, or 2-3 InfoCubes and 2-3 ODS objects, or InfoCubes, ODS objects, and master data, etc. You can refer to help.sap.com for more information.
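Since a MultiProvider works as a union of its underlying providers (it stores no data of its own), the idea can be sketched like this (provider contents are hypothetical):

```python
# Illustrative sketch: a MultiProvider unions the rows of its underlying
# InfoProviders at query time; it stores no data itself.

sales_cube = [{"material": "MAT-A", "sales_qty": 10}]
stock_ods  = [{"material": "MAT-A", "stock_qty": 4},
              {"material": "MAT-B", "stock_qty": 9}]

def multiprovider(*providers):
    """Union all rows from every underlying provider."""
    rows = []
    for provider in providers:
        rows.extend(provider)
    return rows

print(len(multiprovider(sales_cube, stock_ods)))  # 3
```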

Ans#9: A scheduled data load means you have scheduled the loading of data for a particular date and time; you can do this in the scheduler tab of the InfoPackage. Monitored means you are monitoring that particular data load, or other loads, using the data load monitor (transaction RSMO).

*****





SAP BW Interview Questions


What is ODS?
It is an Operational Data Store. The ODS is a BW architectural component that sits between the PSA (Persistent Staging Area) and InfoCubes and allows BEx (Business Explorer) reporting.

It is not based on the star schema and is used primarily for detail reporting rather than for dimensional analysis. ODS objects do not aggregate data the way InfoCubes do. Data is loaded into an ODS object by inserting new records, updating existing records, or deleting old records, as specified by the RECORDMODE value.
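The insert/update/delete behaviour driven by the record mode can be sketched as follows. This is only an illustration against a plain dictionary; the record-mode values used here ('N', '', 'D') are a simplified subset, not the full set of SAP RECORDMODE values.

```python
# Illustrative sketch: applying delta records to an ODS-like key/value
# store. The record mode decides insert/overwrite vs delete; the values
# 'N', '' and 'D' here are a simplified subset.

ods = {}

def apply_delta(ods, key, recordmode, values):
    if recordmode == "D":
        ods.pop(key, None)        # delete the existing record
    else:
        ods[key] = values         # insert new or overwrite existing

apply_delta(ods, "DOC-1", "N", {"amount": 100})   # insert
apply_delta(ods, "DOC-1", "",  {"amount": 120})   # overwrite (after-image)
apply_delta(ods, "DOC-1", "D", None)              # delete

print(ods)  # {}
```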

1. How much time does it take to extract 1 million of records from an infocube?
2. How much does it take to load (before question extract) 1 million of records to an infocube?
3. What are the four ASAP Methodologies?
4. How do you measure the size of infocube?
5. Difference between infocube and ODS?
6. Difference between display attributes and navigational attributes?


1. Ans: This depends. If you have complex coding in the update rules it will take longer; otherwise it will take less than 30 minutes.

3. Ans:
Project plan
Requirements gathering
Gap Analysis
Project Realization


4. Ans:
In number of records.


5. Ans:
An InfoCube is structured as an (extended) star schema, where a fact table is surrounded by different dimension tables which link to SIDs. Data-wise, you will have aggregated data in the cubes.
An ODS is a flat structure (a flat table) with no star schema concept, and it holds granular (detailed-level) data.
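The contrast can be sketched with toy data (all tables and values below are hypothetical): the star schema keeps aggregated facts with dimension keys resolved via lookup tables, while the ODS is a single flat table at document level.

```python
# Illustrative sketch: star-schema-style aggregated facts with a dimension
# lookup, vs. an ODS-style flat, document-level table.

dim_customer = {1: "ACME", 2: "Globex"}          # dimension table
fact = [(1, 500), (2, 300)]                       # (customer_dim_id, revenue)

# Resolve dimension keys at query time, as a cube query would:
star_report = [(dim_customer[dim_id], rev) for dim_id, rev in fact]

ods_flat = [  # flat table: all detail in one record, document-level grain
    {"doc": "D1", "customer": "ACME", "revenue": 200},
    {"doc": "D2", "customer": "ACME", "revenue": 300},
]

print(star_report)  # [('ACME', 500), ('Globex', 300)]
```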


6. Ans:
A display attribute is used only for display purposes in the report, whereas a navigational attribute is used for drilling down in the report. We don't need to maintain a navigational attribute in the cube as a characteristic in order to drill down (that is the advantage).


*****

Q1. SOME DATA IS UPLOADED TWICE INTO INFOCUBE. HOW TO CORRECT IT?
Ans: But how is that possible? If you loaded it manually twice, you can delete the duplicate by request.
[Use the delta upload method.]

Q2. CAN U ADD A NEW FIELD AT THE ODS LEVEL?
Sure you can. An ODS is nothing but a table.

Q3. CAN A NUMBER OF DATASOURCES HAVE ONE INFOSOURCE?
Yes, of course. For example, for loading texts and hierarchies we use different DataSources but the same InfoSource.

Q4. BRIEF THE DATAFLOW IN BW.
Data flows from the transactional system to the analytical system (BW).
A DataSource on the transactional system needs to be replicated on the BW side and attached to an InfoSource and update rules respectively.

Q5. CURRENCY CONVERSIONS CAN BE WRITTEN IN UPDATE RULES. WHY NOT IN TRANSFER RULES?

Q6. WHAT IS PROCEDURE TO UPDATE DATA INTO DATA TARGETS?
Full and delta.

Q7. AS WE USE Sbwnn,SBiw1,sbiw2 for delta update in LIS THEN WHAT IS THE PROCEDURE IN LO-COCKPIT?
There is no LIS in the LO cockpit. We will have DataSources which can be maintained (append fields). Refer to the white paper on LO Cockpit extraction.

Q8. SIGNIFICANCE OF ODS.
It holds granular data.

Q9. WHERE THE PSA DATA IS STORED?
In PSA table.

Q10.WHAT IS DATA SIZE?
The volume of data one data target holds(in no.of records)

Q11. DIFFERENT TYPES OF INFOCUBES.
Basic,Transactional and Virtual Infocubes(remote,sap remote and multi)

Q12. INFOSET QUERY.
It can be made up of ODS objects and characteristic InfoObjects.

Q13. IF THERE ARE 2 DATASOURCES HOW MANY TRANSFER STRUCTURES ARE THERE.
In R/3 or in BW? 2 in R/3 and 2 in BW.

Q14. ROUTINES?
Routines exist in InfoObjects, as transfer routines, update routines, and start routines.

Q15. BRIEF SOME STRUCTURES USED IN BEX.
Rows and Columns,you can create structures.

Q16. WHAT ARE THE DIFFERENT VARIABLES USED IN BEX?
Variable with default entry
Replacement path
SAP exit
Customer exit
Authorization

Q17. HOW MANY LEVELS YOU CAN GO IN REPORTING?
You can drill down to any level you want using navigational attributes and jump targets.

Q18. WHAT ARE INDEXES?
Indexes are database indexes, which help in retrieving data quickly.

Q19. DIFFERENCE BETWEEN 2.1 AND 3.X VERSIONS.
Refer to the documentation.

Q20. IS IT NESSESARY TO INITIALIZE EACH TIME THE DELTA UPDATE IS USED.
Nope

Q21. WHAT IS THE SIGNIFICANCE OF KPI'S?
KPIs indicate the performance of a company. These are key figures.

Q22. AFTER THE DATA EXTRACTION WHAT IS THE IMAGE POSITION.
After image(correct me if I am wrong)

Q23. REPORTING AND RESTRICTIONS.
Refer to the documentation.

Q24. TOOLS USED FOR PERFORMANCE TUNING.
ST*,Number ranges,delete indexes before load ..etc

Q25. PROCESS CHAINS: IF YOU ARE USING THEM, HOW WILL YOU SCHEDULE DATA LOADS DAILY?
There should be a tool to run the job daily (SM37 jobs).

Q26. AUTHORIZATIONS.
Profile generator[PFCG]

Q27. WEB REPORTING.

Q28. CAN A CHARACTERISTIC BE AN INFOPROVIDER? CAN AN INFOOBJECT BE AN INFOPROVIDER?
Of course

Q29. PROCEDURES OF REPORTING ON MULTICUBES.
Refer to the help. What are you expecting? A MultiCube works on a union condition.

Q30. EXPLAIN TRANSPORT OF OBJECTS.
Dev ---> Q and Dev ---> P

SAP BW FAQ

BW Query Performance
Question:

1.
What kind of tools are available to monitor the overall Query Performance?
o BW Statistics
o BW Workload Analysis in ST03N (Use Export Mode!)
o Content of Table RSDDSTAT

2.
Do I have to do something to enable such tools?
o Yes, you need to turn on the BW Statistics:
RSA1, choose Tools -> BW statistics for InfoCubes
(Choose OLAP and WHM for your relevant Cubes)

3.
What kind of tools are available to analyse a specific query in detail?
o Transaction RSRT
o Transaction RSRTRACE

4.
Do I have an overall query performance problem?
o Use ST03N -> BW System load values to recognize the problem. Use the
number given in table 'Reporting - InfoCubes:Share of total time (s)'
to check if one of the columns %OLAP, %DB, %Frontend shows a high
number in all InfoCubes.
o You need to run ST03N in expert mode to get these values

5.
What can I do if the database proportion is high for all queries?
Check:
o If the database statistic strategy is set up properly for your DB platform
(above all for the BW specific tables)
o If database parameter set up accords with SAP Notes and SAP Services (EarlyWatch)
o If Buffers, I/O, CPU, memory on the database server are exhausted?
o If Cube compression is used regularly
o If Database partitioning is used (not available on all DB platforms)

6.
What can I do if the OLAP proportion is high for all queries?
Check:
o If the CPUs on the application server are exhausted
o If the SAP R/3 memory set up is done properly (use TX ST02 to find
bottlenecks)
o If the read mode of the queries is unfavourable (RSRREPDIR, RSDDSTAT,
Customizing default)

7.
What can I do if the client proportion is high for all queries?
o Check whether most of your clients are connected via a WAN Connection and the amount
of data which is transferred is rather high.

8.
Where can I get specific runtime information for one query?
o Again you can use ST03N -> BW System Load
o Depending on the time frame you select, you get historical data or
current data.
o To get to a specific query you need to drill down using the InfoCube
name
o Use Aggregation Query to get more runtime information about a
single query. Use tab All data to get to the details.
(DB, OLAP, and Frontend time, plus Select/ Transferred records,
plus number of cells and formats)

9.
What kind of query performance problems can I recognize using ST03N
values for a specific query?
(Use Details to get the runtime segments)
o High Database Runtime
o High OLAP Runtime
o High Frontend Runtime

10.
What can I do if a query has a high database runtime?
o Check if an aggregate is suitable (use All data to get values
"selected records to transferred records", a high number here would
be an indicator for query performance improvement using an aggregate)
o Check if the database statistics are up to date for the
  Cube/Aggregate; use TX RSRV output (use the database check for statistics
  and indexes)
o Check if the read mode of the query is unfavourable - Recommended (H)

11.
What can I do if a query has a high OLAP runtime?
o Check if a high number of Cells transferred to the OLAP (use
"All data" to get value "No. of Cells")
o Use RSRT technical Information to check if any extra OLAP-processing
is necessary (Stock Query, Exception Aggregation, Calc. before
Aggregation, Virtual Char. Key Figures, Attributes in Calculated
Key Figs, Time-dependent Currency Translation)
together with a high number of records transferred.
o Check if a user exit is involved in the OLAP runtime
o Check if large hierarchies are used and the entry hierarchy level is
as deep as possible. This limits the levels of the
hierarchy that must be processed. Use SE16 on the inclusion
tables and use the List of Value feature on the column successor
and predecessor to see which entry level of the hierarchy is used.
- Check if a proper index on the inclusion table exist

12.
What can I do if a query has a high frontend runtime?
o Check if a very high number of cells and formats is transferred
 to the frontend (use "All data" to get the value "No. of Cells"), which
 causes high network and frontend (processing) runtime.
o Check if the frontend PCs are within the recommendations (RAM, CPU MHz)
o Check if the bandwidth for WAN connection is sufficient

Important Transaction Codes For BW

1 RSA1 Administrator Work Bench
2 RSA11 Calling up AWB with the IC tree
3 RSA12 Calling up AWB with the IS tree
4 RSA13 Calling up AWB with the LG tree
5 RSA14 Calling up AWB with the IO tree
6 RSA15 Calling up AWB with the ODS tree
7 RSA2 OLTP Metadata Repository
8 RSA3 Extractor Checker
9 RSA5 Install Business Content
10 RSA6 Maintain DataSources

11 RSA7 BW Delta Queue Monitor
12 RSA8 DataSource Repository
13 RSA9 Transfer Application Components
14 RSD1 Characteristic maintenance
15 RSD2 Maintenance of key figures
16 RSD3 Maintenance of units
17 RSD4 Maintenance of time characteristics
18 RSBBS Maintain Query Jumps (RRI Interface)
19 RSDCUBE Start: InfoCube editing
20 RSDCUBED Start: InfoCube editing

21 RSDCUBEM Start: InfoCube editing
22 RSDDV Maintaining
23 RSDIOBC Start: InfoObject catalog editing
24 RSDIOBCD Start: InfoObject catalog editing
25 RSDIOBCM Start: InfoObject catalog editing
26 RSDL DB Connect - Test Program
27 RSDMD Master Data Maintenance w.Prev. Sel.
28 RSDMD_TEST Master Data Test
29 RSDMPRO Initial Screen: MultiProvider Proc.
30 RSDMPROD Initial Screen: MultiProvider Proc.

31 RSDMPROM Initial Screen: MultiProvider Proc.
32 RSDMWB Customer Behavior Modeling
33 RSDODS Initial Screen: ODS Object Processng
34 RSIMPCUR Load Exchange Rates from File
35 RSINPUT Manual Data Entry
36 RSIS1 Create InfoSource
37 RSIS2 Change InfoSource
38 RSIS3 Display InfoSource
39 RSISET Maintain InfoSets
40 RSKC Maintaining the Permitted Extra Chars

41 RSLGMP Maintain RSLOGSYSMAP
42 RSMO Data Load Monitor Start
43 RSMON BW Administrator Workbench
44 RSOR BW Metadata Repository
45 RSORBCT BI Business Content Transfer
46 RSORMDR BW Metadata Repository
47 RSPC Process Chain Maintenance
48 RSPC1 Process Chain Display
49 RSPCM Monitor daily process chains
50 RSRCACHE OLAP: Cache Monitor

51 RSRT Start of the report monitor
52 RSRT1 Start of the Report Monitor
53 RSRT2 Start of the Report Monitor
54 RSRTRACE Set trace configuration
55 RSRTRACETEST Trace tool configuration
56 RSRV Analysis and Repair of BW Objects
57 SE03 Transport Organizer Tools
58 SE06 Set Up Transport Organizer
59 SE07 CTS Status Display
60 SE09 Transport Organizer

61 SE10 Transport Organizer
62 SE11 ABAP Dictionary
63 SE18 Business Add-Ins: Definitions
64 RSDS Data Source Repository
65 SE19 Business Add-Ins: Implementations
66 SE19_OLD Business Add-Ins: Implementations
67 SE21 Package Builder
68 SE24 Class Builder
69 SE80 Object Navigator
70 RSCUSTA Maintain BW Settings

71 RSCUSTA2 ODS Settings
72 RSCUSTV
73 RSSM Authorizations for Reporting
74 SM04 User List
75 SM12 Display and Delete Locks
76 SM21 Online System Log Analysis
77 SM37 Overview of job selection
78 SM50 Work Process Overview
79 SM51 List of SAP Systems
80 SM58 Asynchronous RFC Error Log

81 SM59 RFC Destinations (Display/Maintain)
82 LISTCUBE List viewer for InfoCubes
83 LISTSCHEMA Show InfoCube schema
84 WE02 Display IDoc
85 WE05 IDoc Lists
86 WE06 Active IDoc monitoring
87 WE07 IDoc statistics
88 WE08 Status File Interface
89 WE09 Search for IDoc in Database
90 WE10 Search for IDoc in Archive

91 WE11 Delete IDocs
92 WE12 Test Modified Inbound File
93 WE14 Test Outbound Processing
94 WE15 Test Outbound Processing from MC
95 WE16 Test Inbound File
96 WE17 Test Status File
97 WE18 Generate Status File
98 WE19 Test tool
99 WE20 Partner Profiles
100 WE21 Port definition

101 WE23 Verification of IDoc processing
102 DB02 Tables and Indexes Monitor
103 DB14 Display DBA Operation Logs
104 DB16 Display DB Check Results
105 DB20 Update DB Statistics
106 KEB2 DISPLAY DETAILED INFO ON CO-PA DATA SOURCE R3
107 RSD5 Edit InfoObjects
108 SM66 Global work process Monitor
109 SM12 Display and delete locks
110 OS06 Local Operating System Activity

111 RSKC Maintaining the Permitted Extra Chars
112 SMQ1 qRFC Monitor (Outbound Queue)
