For all entries
Select using JOINS
Use the selection
Use the aggregated functions
Select with view
Select with index support
Select … Into table
Select with selection table
Key access to multiple lines
Copying internal tables
Modifying a set of lines
Deleting a sequence of lines
Linear search vs. binary search
Comparison of internal tables
Appending two internal tables
Deleting a set of lines
Tools available in SAP to pin-point a performance problem
Optimizing the load of the database
Other General Tips & Tricks for Optimization
The FOR ALL ENTRIES construct creates a WHERE clause in which all the entries in the driver table are combined with OR. If the number of entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the length of the WHERE clause.
Advantages: a large amount of selection criteria is possible, and selection and reading of data happen in one step without reprocessing of the data in the program. Disadvantage: memory consumption could become critical (use FREE or PACKAGE SIZE).
Some steps that might make FOR ALL ENTRIES more efficient:
- Removing duplicates from the driver table.
- Sorting the driver table.
- If possible, convert the data in the driver table to ranges so a BETWEEN statement is used instead of an OR statement: FOR ALL ENTRIES IN i_tab WHERE mykey >= i_tab-low AND mykey <= i_tab-high.
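A minimal sketch of preparing the driver table along these lines (the table and field names are illustrative; note also the well-known pitfall that FOR ALL ENTRIES with an empty driver table selects every row of the database table):

```abap
DATA: i_tab TYPE STANDARD TABLE OF zdriver WITH HEADER LINE,
      i_res TYPE STANDARD TABLE OF zresult WITH HEADER LINE.

* Sort the driver table and remove duplicates so the generated
* WHERE clause contains as few OR conditions as possible.
SORT i_tab BY persnr.
DELETE ADJACENT DUPLICATES FROM i_tab COMPARING persnr.

* Guard against an empty driver table: FOR ALL ENTRIES with an
* empty table would select ALL rows of the database table.
IF NOT i_tab[] IS INITIAL.
  SELECT * FROM zresult INTO TABLE i_res
    FOR ALL ENTRIES IN i_tab
    WHERE persnr = i_tab-persnr.
ENDIF.
```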
The alternative techniques involve similar trade-offs. A select using JOINs needs only a small amount of coding, performs selection and reading of data in one step, and is easy to code and to read; it suits large amounts of data when reprocessing isn’t needed, and behaves like nested selects in which the accesses are planned by the programmer. Nested selects themselves are not so memory-critical, but selection and reading of data in one step is not possible, and for very large amounts of data the repeated inner selects become very expensive.
Tools available in SAP to pin-point a performance problem:
- The runtime analysis (SE30)
- SQL Trace (ST05)
- Tips and Tricks tool
- The performance database
Using table buffering

Using buffered tables improves performance considerably. Note that in some cases a statement cannot be used with a buffered table; when such statements are used, the buffer is bypassed. These statements are:
- ORDER BY / GROUP BY / HAVING clauses
- Any WHERE clause that contains a subquery or an IS NULL expression
- A SELECT … FOR UPDATE statement
If you want to explicitly bypass the buffer, use the BYPASSING BUFFER addition to the SELECT statement.
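For example (a sketch; T000 is used here only because it is a standard buffered table):

```abap
DATA wa_t000 TYPE t000.

* BYPASSING BUFFER forces the read to go to the database even
* though T000 is buffered, e.g. when current data is essential.
SELECT SINGLE * FROM t000 BYPASSING BUFFER
       INTO wa_t000
       WHERE mandt = sy-mandt.
```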
Use the ABAP SORT Clause Instead of ORDER BY

The ORDER BY clause is executed on the database server, while the ABAP SORT statement is executed on the application server. The database server will usually be the bottleneck, so sometimes it is better to move the sort from the database server to the application server.

If you are not sorting by the primary key (e.g. using the ORDER BY PRIMARY KEY statement) but are sorting by another key, it could be better to use the ABAP SORT statement to sort the data in an internal table. Note however that for very large result sets this might not be a feasible solution, and you would want to let the database server do the sort.
Avoid the SELECT DISTINCT Statement

As with the ORDER BY clause, it can be better to avoid SELECT DISTINCT if some of the fields are not part of an index. Instead, use ABAP SORT plus DELETE ADJACENT DUPLICATES on an internal table to delete the duplicate rows.
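A sketch of the SORT plus DELETE ADJACENT DUPLICATES pattern (the table name zorders is illustrative):

```abap
TYPES: BEGIN OF ty_cust,
         kunnr TYPE kunnr,
       END OF ty_cust.
DATA itab TYPE STANDARD TABLE OF ty_cust WITH HEADER LINE.

* zorders stands for any application table with duplicates.
SELECT kunnr FROM zorders INTO TABLE itab.

* Same result as SELECT DISTINCT kunnr, but the work is done
* on the application server instead of the database server.
SORT itab BY kunnr.
DELETE ADJACENT DUPLICATES FROM itab COMPARING kunnr.
```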
- Use the GET RUN TIME command to help evaluate performance. It’s hard to know whether an optimization technique REALLY helps unless you test it out. Using this tool can show you what is effective, and under what kinds of conditions. GET RUN TIME has problems under multiple CPUs, so you should use it to test small pieces of your program, rather than the whole program.
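A sketch of the measuring pattern (the measured block is whatever small piece of code you want to time):

```abap
DATA: t1   TYPE i,
      t2   TYPE i,
      diff TYPE i.

* The difference between two GET RUN TIME calls is the elapsed
* time in microseconds for the code in between.
GET RUN TIME FIELD t1.
* ... small piece of code under test ...
GET RUN TIME FIELD t2.

diff = t2 - t1.
WRITE: / 'Runtime in microseconds:', diff.
```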
- Generally, try to reduce I/O first, then memory, then CPU activity.
I/O operations that read/write to hard disk are always the most expensive
operations. Memory, if not controlled, may have to be written to swap space on
the hard disk, which therefore increases your I/O read/writes to disk. CPU
activity can be reduced by careful program design, and by using commands such
as SUM (SQL) and COLLECT (ABAP/4).
- Avoid ‘SELECT *’, especially in tables that have a lot of fields. Use
SELECT A B C INTO instead, so that fields are only read if they are used. This
can make a very big difference.
- Field-groups can be useful for multi-level sorting and displaying.
However, they write their data to the system’s paging space, rather than to
memory (internal tables use memory). For this reason, field-groups are only
appropriate for processing large lists (e.g. over 50,000 records). If you have
large lists, you should work with the systems administrator to decide the
maximum amount of RAM your program should use, and from that, calculate how
much space your lists will use. Then you can decide whether to write the data
to memory or swap space.
- Use as many table keys as possible in the WHERE part of your select statements.
- Whenever possible, design the program to access a relatively constant
number of records (for instance, if you only access the transactions for one
month, then there probably will be a reasonable range, like 1200-1800, for the
number of transactions inputted within that month). Then use a SELECT A B C
INTO TABLE ITAB statement.
- Get a good idea of how many records you will be accessing. Log into your
productive system, and use SE80 -> Dictionary Objects (press Edit), enter the
table name you want to see, and press Display. Go To Utilities -> Table
Contents to query the table contents and see the number of records. This is
extremely useful in optimizing a program’s memory allocation.
- Try to make the user interface such that the program gradually unfolds
more information to the user, rather than giving a huge list of information
all at once to the user.
- Declare your internal tables using OCCURS NUM_RECS, where NUM_RECS is the
number of records you expect to be accessing. If the number of records exceeds
NUM_RECS, the data will be kept in swap space (not memory).
- Use SELECT A B C INTO TABLE ITAB whenever possible. This will read all of
the records into the itab in one operation, rather than repeated operations
that result from a SELECT A B C INTO ITAB… ENDSELECT statement. Make sure
that ITAB is declared with OCCURS NUM_RECS, where NUM_RECS is the number of
records you expect to access.
- If the number of records you are reading is constantly growing, you may be
able to break it into chunks of relatively constant size. For instance, if you
have to read all records from 1991 to present, you can break it into quarters,
and read all records one quarter at a time. This will reduce I/O operations.
Test extensively with GET RUN TIME when using this method.
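A sketch of reading one quarter at a time (the table zdoc, the field budat and the date values are illustrative):

```abap
DATA itab TYPE STANDARD TABLE OF zdoc WITH HEADER LINE.
RANGES r_date FOR sy-datum.

* First quarter of 1999; repeat the block below with the next
* quarter's interval instead of reading all years in one go.
r_date-sign   = 'I'.
r_date-option = 'BT'.
r_date-low    = '19990101'.
r_date-high   = '19990331'.
APPEND r_date.

* APPENDING TABLE adds the quarter's rows to the rows already
* collected, keeping each database access a constant size.
SELECT * FROM zdoc APPENDING TABLE itab
  WHERE budat IN r_date.
```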
- Know how to use the ‘collect’ command. It can be very efficient.
- Use the SELECT SINGLE command whenever possible.
- Many tables contain totals fields (such as monthly expense totals). Use these to avoid wasting resources by calculating a total that has already been calculated and stored.
ABAP/4 Development Code Efficiency
ABAP/4 (Advanced Business Application Programming 4GL) language is an
“event-driven”, “top-down”, well-structured and powerful programming language.
The ABAP/4 processor controls the execution of an event. Because the ABAP/4 language incorporates many “event” keywords, and these keywords need not be in any specific order in the code, it is wise to implement in-house ABAP/4 coding standards. SAP-recommended customer-specific ABAP/4 development guidelines can be found in the SAP documentation.
This page contains some general guidelines for efficient ABAP/4 program development that should be considered to improve system performance in the following areas:
- Physical I/O – data must be read from and written to I/O devices. This can be a potential bottleneck. A well-configured system always runs ‘I/O-bound’ – the performance of the I/O dictates the overall performance.
- Memory consumption of the database resources, e.g. buffers, etc.
- CPU consumption on the database and application servers.
- Network communication – not critical for small data volumes, but it becomes a bottleneck when large volumes are transferred.
Policies and procedures can also be put into place so that every SAP-customer
development object is thoroughly reviewed (quality – program correctness as well
as code-efficiency) prior to promoting the object to the SAP-production
system. Information on the SAP R/3 ABAP/4 Development Workbench programming
tools and its features can be found on the SAP Public Web-Server.
CLASSIC GOOD 4GL PROGRAMMING CODE-PRACTICES GUIDELINES
Remove unnecessary code and redundant processing
Spend time documenting and adopt good change control practices
Spend adequate time analyzing business requirements, process flows, data-structures and the data-model
Quality assurance is key: plan and execute a good test plan and testing process
SELECT * FROM <TABLE>
SELECT * FROM <TABLE> WHERE <condition>
In order to keep the hit set (the amount of data which is relevant to the query) small, avoid using SELECT+CHECK statements wherever possible. As a general rule of thumb, always specify all known conditions in the WHERE clause (if possible). If there is no WHERE clause, the DBMS has no chance to make optimizations. Always specify your conditions in the WHERE clause instead of checking them yourself with CHECK statements. The database system can then also potentially make use of a database index (if possible) for greater efficiency, resulting in less load on the database server and considerably less network traffic as well.
Also, it is important to use EQ (=) in the WHERE clause wherever possible,
and analyze the SQL-statement for the optimum path the database optimizer will
utilize via SQL-trace when necessary.
Also, ensure careful usage of “OR”, “NOT” and value range tables (INTTAB), which are easily used inappropriately in Open SQL statements.
SELECT SINGLE *
If you are interested in exactly one row of a database table or view, use the
SELECT SINGLE statement instead of a SELECT * statement. SELECT SINGLE requires
one communication with the database system whereas SELECT * requires two.
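For example, a sketch of reading exactly one row by its full primary key (SCARR, the standard flight-carrier demo table, is used for illustration):

```abap
DATA wa_scarr TYPE scarr.

* The full primary key is specified, so at most one row
* qualifies and a single database round trip suffices.
SELECT SINGLE * FROM scarr
       INTO wa_scarr
       WHERE carrid = 'LH'.
IF sy-subrc = 0.
  WRITE: / wa_scarr-carrname.
ENDIF.
```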
SELECT * FROM <TABLE> INTO <INT-TAB>
SELECT * FROM <TABLE> INTO TABLE <INT-TAB>
It is usually faster to use the INTO TABLE version of a SELECT statement than to fill an internal table with APPEND statements in a SELECT … ENDSELECT loop.
SELECT … WHERE + CHECK
SELECT using aggregate function
If you want to find the maximum, minimum, sum, average value or the count of a database column, use a select list with aggregate functions instead of computing the aggregates within the program. The RDBMS then performs the aggregation instead of large amounts of data being transferred to the application. Overall network, application-server and database load is also reduced.
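A sketch of the aggregate form (SFLIGHT, the standard flight demo table, is used for illustration):

```abap
DATA: max_price TYPE sflight-price,
      cnt       TYPE i.

* The database computes the aggregates; only two values are
* transferred to the application server instead of all rows.
SELECT MAX( price ) COUNT( * )
       INTO (max_price, cnt)
       FROM sflight
       WHERE carrid = 'LH'.
```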
SELECT INTO TABLE <INT-TAB> + LOOP AT <INT-TAB>
SELECT * FROM <TABLE> INTO TABLE <INT-TAB>.
LOOP AT <INT-TAB>.
SELECT * FROM <TABLE>
If you process your data only once, use a SELECT-ENDSELECT loop instead of collecting the data in an internal table with SELECT … INTO TABLE. Internal table handling takes up much more space.
Nested SELECT statements:
SELECT * FROM <TABLE-A>
SELECT * FROM <TABLE-B>
Select with view
SELECT * FROM <VIEW>
To process a join, use a view wherever possible instead of nested SELECT statements. Nested selects are a low-performance technique: the inner select statement is executed several times, which can be a considerable overhead. In addition, less data must be transferred if another technique is used, e.g. a join implemented as a view in the ABAP/4 Repository.
· SELECT … FOR ALL ENTRIES
· Explicit cursor handling (for more information, go to transaction SE30 – Tips & Tricks)
SELECT * FROM pers WHERE condition.
  SELECT * FROM persproj WHERE person = pers-persnr.
    … process …
  ENDSELECT.
ENDSELECT.

SELECT persnr FROM pers INTO TABLE ipers WHERE cond.

SELECT * FROM persproj FOR ALL ENTRIES IN ipers
         WHERE person = ipers-persnr.
  … process …
ENDSELECT.
In the lower version the new Open SQL statement FOR ALL ENTRIES is used.
Prior to the call, all interesting records from ‘pers’ are read into an internal
table. The second SELECT statement results in a call looking like this (ipers
containing: P01, P02, P03):
(SELECT * FROM persproj WHERE person = ‘P01’)
(SELECT * FROM persproj WHERE person = ‘P02’)
(SELECT * FROM persproj WHERE person = ‘P03’)
In the case of large statements, the R/3 database interface divides the statement into several parts and recombines the resulting sets into one. The advantage here is that the number of transfers is minimized and there are minimal restrictions due to the statement size (compare with range tables).
SELECT * FROM <TABLE>
SELECT <column(s)> FROM <TABLE>
Use a select list or a view instead of SELECT *, if you are only interested
in specific columns of the table. If only certain fields are needed then only
those fields should be read from the database. Similarly, the number of columns
can also be restricted by using a view defined in ABAP/4 Dictionary. Overall
database and network load is considerably less.
SELECT without table buffering support
SELECT with table buffering support
For all frequently used, read-only (few updates) tables, do attempt to use SAP buffering for improved response times. This reduces the overall database activity and network traffic.
LOOP AT <INT-TAB>.
  INSERT INTO <TABLE> VALUES <INT-TAB>.
ENDLOOP.

INSERT <TABLE> FROM TABLE <INT-TAB>
Whenever possible, use array operations instead of single-row operations to
modify the database tables.
Frequent communication between the application program and database system
produces considerable overhead.
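A sketch of the array form (ztab is an illustrative transparent table):

```abap
DATA itab TYPE STANDARD TABLE OF ztab WITH HEADER LINE.

* ... fill itab ...

* One database call inserts all rows of the internal table,
* instead of one INSERT per row inside a LOOP.
INSERT ztab FROM TABLE itab ACCEPTING DUPLICATE KEYS.
IF sy-subrc <> 0.
  " handle the error: some rows could not be inserted
ENDIF.
```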
SELECT * FROM <TABLE>
UPDATE <TABLE> SET <COLUMN-UPDATE STATEMENT>
Wherever possible, use column updates instead of single-row updates to update your database tables.
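A sketch of a column update (the table and field names are illustrative):

```abap
* One statement updates the column for all qualifying rows on
* the database; no rows are transferred to the application
* server and updated one by one.
UPDATE zstock SET quantity = quantity + 10
       WHERE plant = '1000'.
```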
DO….ENDDO loop with Field-Symbol
Using CA operator
Use the special operators CO, CA, CS instead of programming these operations yourself. If ABAP/4 statements are executed per character on long strings, CPU consumption can rise substantially.
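For example, CA (“contains any”) checks in a single statement whether a string contains any character from a set (a sketch; the value is illustrative):

```abap
DATA text(20) TYPE c VALUE 'invoice-4711'.

* CA scans the string in one kernel operation; a per-character
* DO loop would consume far more CPU on long strings.
IF text CA '0123456789'.
  " sy-fdpos holds the offset of the first digit found
  WRITE: / 'First digit at offset', sy-fdpos.
ENDIF.
```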
Use of a CONCATENATE function module
Use of a CONCATENATE statement
Some function modules for string manipulation have become obsolete, and
should be replaced by ABAP statements or functions
STRING_CONCATENATE… —> CONCATENATE
STRING_SPLIT… —> SPLIT
STRING_LENGTH… —> strlen( )
STRING_CENTER… —> WRITE…TO…CENTERED
STRING_MOVE_RIGHT… —> WRITE…TO…RIGHT-JUSTIFIED
Moving with offset
Use of the CONCATENATE statement
Use the CONCATENATE statement instead of programming a string concatenation of your own.
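A sketch of the statement form (the values are illustrative):

```abap
DATA: first(10) TYPE c VALUE 'James',
      last(10)  TYPE c VALUE 'Bond',
      name(25)  TYPE c.

* One kernel operation instead of MOVE statements with offsets;
* trailing blanks of the character fields are ignored, and
* SEPARATED BY inserts the space between the fragments.
CONCATENATE first last INTO name SEPARATED BY space.
WRITE: / name.                        " James Bond
```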
Use of SEARCH and MOVE with offset
Use of SPLIT statement
Use the SPLIT statement instead of programming a string split yourself.
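A sketch of the statement form (the values are illustrative):

```abap
DATA: csv(30)  TYPE c VALUE 'LH,0400,FRA',
      carr(3)  TYPE c,
      conn(4)  TYPE c,
      dep(3)   TYPE c.

* One statement splits at every delimiter; no SEARCH and MOVE
* with offsets is needed.
SPLIT csv AT ',' INTO carr conn dep.
```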
Shifting by SY-FDPOS places
Using SHIFT…LEFT DELETING LEADING…
If you want to delete the leading spaces in a string, use the ABAP/4 statement SHIFT…LEFT DELETING LEADING… Other constructions (with CN and SHIFT… BY SY-FDPOS PLACES, with CONDENSE if possible, with CN and ASSIGN CLA+SY-FDPOS(LEN) …) are not as fast.
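A sketch of the recommended form (the value is illustrative):

```abap
DATA text(20) TYPE c VALUE '   ABAP'.

* Removes exactly the leading blanks in one kernel operation,
* without CN, SY-FDPOS arithmetic or CONDENSE side effects.
SHIFT text LEFT DELETING LEADING space.
WRITE: / text.                        " ABAP
```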
Get a check-sum with field length
Get a check-sum with strlen ()
Use the strlen( ) function to restrict the DO loop to the relevant part of the field, e.g. when determining a check-sum.
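A sketch of the pattern with a toy check-sum (the field value and the alphabet-position scheme are illustrative):

```abap
DATA: field(50) TYPE c VALUE 'ABAP',
      alpha(26) TYPE c VALUE 'ABCDEFGHIJKLMNOPQRSTUVWXYZ',
      len       TYPE i,
      idx       TYPE i,
      checksum  TYPE i.

* Loop only over the occupied part of the field, not over all
* 50 characters, by limiting DO with strlen( ).
len = strlen( field ).
DO len TIMES.
  idx = sy-index - 1.
  " for A CA B, sy-fdpos is the offset in alpha of the first
  " character also contained in the current single character
  IF alpha CA field+idx(1).
    checksum = checksum + sy-fdpos + 1.
  ENDIF.
ENDDO.
```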