SCN Blog List – ABAP Testing and Troubleshooting

Sample Code for Eliminating a High Percentage of Identical Selects


Here is the main concept for eliminating a high percentage of identical selects with a local buffer.

  1. Two internal tables (A and B) are defined to buffer the result of the SELECT statement on table BSEG. Table A holds the search keys that exist in the database; table B holds the search keys that do not exist in the database.

  2. Read internal table A. If the key is found, return the buffered value. If not, go to step 3.

  3. Read internal table B, which holds the keys that do not exist in the database. If the key is not in this buffer either, go to step 4.

  4. Access the database, fetch the record, and log the result in the corresponding internal table.

 

types: begin of s_bseg,
         bukrs type bseg-bukrs,
         gjahr type bseg-gjahr,
         belnr type bseg-belnr,
         buzei type bseg-buzei,
         wskto type bseg-wskto,
         shkzg type bseg-shkzg,
       end of s_bseg.

data: itb_bseg_found    type sorted table of s_bseg
                        with non-unique key bukrs gjahr belnr buzei,
      itb_bseg_notfound type sorted table of s_bseg
                        with non-unique key bukrs gjahr belnr buzei,
      wa_bseg           type s_bseg.

 

   

read table itb_bseg_found into wa_bseg
     with key bukrs = bsas-bukrs
              gjahr = bsas-gjahr
              belnr = bsas-belnr
              buzei = bsas-buzei.
if sy-subrc = 0.
  " Hit in the "found" buffer: reuse the buffered values.
  bseg-wskto = wa_bseg-wskto.
  bseg-shkzg = wa_bseg-shkzg.
else.
  read table itb_bseg_notfound transporting no fields
       with key bukrs = bsas-bukrs
                gjahr = bsas-gjahr
                belnr = bsas-belnr
                buzei = bsas-buzei.
  if sy-subrc <> 0.
    " Key is in neither buffer: access the database once ...
    select single wskto shkzg
      from bseg
      into (bseg-wskto, bseg-shkzg)
      where bukrs = bsas-bukrs and
            gjahr = bsas-gjahr and
            belnr = bsas-belnr and
            buzei = bsas-buzei.

    " ... and log the key in the matching buffer. The assignments below do
    " not change sy-subrc, so it still holds the result of the select single.
    wa_bseg-bukrs = bsas-bukrs.
    wa_bseg-gjahr = bsas-gjahr.
    wa_bseg-belnr = bsas-belnr.
    wa_bseg-buzei = bsas-buzei.
    wa_bseg-wskto = bseg-wskto.
    wa_bseg-shkzg = bseg-shkzg.

    if sy-subrc = 0.
      insert wa_bseg into table itb_bseg_found.
    else.
      insert wa_bseg into table itb_bseg_notfound.
    endif.
  endif.
endif.

 

Note: Declare the internal tables itb_bseg_found and itb_bseg_notfound with STATICS if this code is called inside a function module or a form routine, so that the buffer is kept between calls.
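As a minimal sketch of that note (the wrapper routine and its parameter names are assumptions, not part of the original code), the STATICS declaration inside a FORM could look like this:

form get_bseg_values using    is_bsas  type bsas
                     changing cv_wskto type bseg-wskto
                              cv_shkzg type bseg-shkzg.
  " statics keeps the buffer contents between calls of this routine
  statics: itb_bseg_found    type sorted table of s_bseg
                             with non-unique key bukrs gjahr belnr buzei,
           itb_bseg_notfound type sorted table of s_bseg
                             with non-unique key bukrs gjahr belnr buzei.
  " ... buffering logic from above, writing into cv_wskto / cv_shkzg ...
endform.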

You are welcome to comment on this topic.


ABAP Performance on a Cluster Table


Recently I came across a case of an ABAP performance issue on a cluster table. I think a blog is a good place to share this topic. Your comments are really appreciated.

 

Issue description and analysis

 

Database perspective: the SQL trace showed an expensive statement on the table cluster KOCLU:

SQL_Trace.gif

ExecutionPlan.gif

A full table scan is performed on the table cluster KOCLU, which is the reason why this SQL statement is so expensive.

But why does the DBI ignore the values of the primary key fields, even though these values are already provided in the ABAP source code?

ABAP perspective

source Code.gif

The cluster table KONV exists only in the ABAP Dictionary and is stored in the table cluster KOCLU at the database level.

KONV Primary key fields:         

MANDT
KNUMV
KPOSN
STUNR
ZAEHK

From the field list above, the key field KNUMV was specified in the WHERE clause but was not transferred to the database side. The reason for this is an OR operation in the WHERE clause. After the OR was removed, the key field was transferred to the database and an index range scan was chosen instead of the full table scan.
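The source code is only shown as a screenshot, so the following is a hypothetical reconstruction of the pattern (variable names are assumptions). When the key field KNUMV sits inside an OR, the DBI cannot pass any key value down to the table cluster KOCLU, which results in the full table scan; splitting the condition restores the key access:

" Problematic pattern (hypothetical): KNUMV is part of an OR condition,
" so no key value reaches the database and KOCLU is scanned completely.
DATA: lt_konv          TYPE STANDARD TABLE OF konv,
      lv_knumv_order   TYPE konv-knumv,
      lv_knumv_billing TYPE konv-knumv.

SELECT * FROM konv
  APPENDING TABLE lt_konv
  WHERE knumv = lv_knumv_order
     OR knumv = lv_knumv_billing.

" Equivalent logic without the OR: each SELECT has a plain equality on
" KNUMV, the key is transferred, and an index range scan is used.
SELECT * FROM konv
  APPENDING TABLE lt_konv
  WHERE knumv = lv_knumv_order.

SELECT * FROM konv
  APPENDING TABLE lt_konv
  WHERE knumv = lv_knumv_billing.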

The quality of an answer depends significantly on the quality of the question (or: how to ask good questions)


Try to remember: when was the last time you got a question and invested some time (and enjoyed it) to provide an answer? What were the characteristics of the question? Was it a specific or a rather unspecific question? Was the question well prepared, providing all (or most of) the details you needed to find the answer, or did it take several round trips to collect those details? Did you have the feeling that the person asking the question invested some time in asking it, or not?

At least for me the following is true: I invest time in finding an answer when I have the feeling that the person asking the question invested time as well. If a question is well prepared and provides all the details, I find it motivating to invest time in finding an answer.

How to ask good questions:

If you have a question on ABAP coding, provide the relevant ABAP code (not more, not less) and the SE30/ST12 traces you took. Also give some background information on the context, e.g. what you are trying to achieve, how often the program will run, and any relevant figures you have (e.g. the size of an internal table, the number of hits of a LOOP WHERE …).

If you have a question on ABAP OPEN SQL, provide the OPEN SQL statement (ABAP) and the native SQL statement as it was sent to the DB, taken from ST05/ST12. Furthermore, provide the KPIs from ST05 (number of executions, number of records per execution, time per execution, average time per record). The DB platform, the execution plan, and the statistics for all involved tables and all indexes on these tables are very helpful as well.

In general: the more relevant details you provide in the question, the better the quality of the answer can be. If you get answers that are not very meaningful or helpful, rethink your question. If you get the answer 42, you have asked the Ultimate Question of Life, the Universe, and Everything.

What’s the (practical) upper limit of indexes per table?


I recently delivered a service (ABAP OPEN SQL optimization) for a customer, and the customer asked me: “What's the upper limit of indexes per table?” Three possible answers came immediately to my mind:

 

  1. 32767; since that was the technical limit on that DB.
  2. 42; since 42 is always a nice answer when asked for a figure.
  3. “IT DEPENDS”; since that answer is almost always correct.

 

I get this question on a regular basis, approximately once a month. I immediately discarded 1.), since that limit is written in the documentation and an RTFM (Read The Fine Manual) would have been enough; I assumed the customer wanted to know the practical upper limit, not the technical one. So I was left with 2.) and 3.), and I decided for 2.). There were many good old (grey-haired) ABAP developers in the group, so I thought it was safe to give this answer and then come to 3.).

 

This is what happened:

 

Customer: “What's the upper limit of indexes per table?”

 

Me: “There is only one definite answer to this question: The upper limit of

indexes per table is 42!”

 

Nobody was laughing. Silence. Everybody was staring at me.
Obviously nobody knew The Hitchhiker's Guide to the Galaxy.

LEARNING: Never use 42 as an answer when people don’t know it.

Now my trouble was two-fold: first I had to explain why I said 42, and then I had to explain that 42 is not the correct answer, but rather “IT DEPENDS”.

 

The nice thing about answer 3.) “IT DEPENDS” is that it is almost always a correct answer. Always? No… of course it depends on the question whether “IT DEPENDS” is the correct answer… ;-) The bad thing about this answer is that you are supposed to explain on what it depends…

 

So: The upper limit of indexes per table depends on

 

  • the type of the SQL and DML workload on the table
  • whether reading (SQL) or writing (DML) performance
    matters in your business processes
  • the columns in the indexes and the type of changes on
    these columns
  • your hardware, your cache size, your IO subsystem
  • and even more things

         

If you are really interested in the details of indexes, I recommend reading this book. In my book you will find a section on this topic as well.

 

You might be interested in how many indexes per table you have in your systems. On Oracle you can find the top scorers with this SQL:

    

select table_name, count(*) as cnt
from dba_indexes
group by table_name
having count(*) > 3
order by cnt desc

 

On DB6 you can find the top scorers with this SQL:

 

select tbname, count(*) as cnt
from sysibm.sysindexes
group by tbname
having count(*) > 3
order by cnt desc

Crossing Checkpoint Charlie in a SAAB


Confusing title? I am referring to the checkpoint group transaction SAAB and the third (Charlie = C = third letter of the Latin alphabet) option of checkpoint groups, namely logpoints. ABAP offers logpoints as of NW 2004s.

 

Since the beginning of checkpoint groups in 2005 the topic has been covered in several blogs on SCN, for example

 

 

Despite the obvious advantages of checkpoint groups, I get the impression that they are still not widely used in custom ABAP code. Maybe it is because developers cannot find good arguments that justify the implementation effort. Therefore this blog focuses on describing one scenario where logpoints are beneficial. I will also provide a step-by-step example of how to use logpoints. Let me emphasize that I am not covering all the functionality of checkpoint groups; please read about checkpoint groups on help.sap.com if you need this information. Also keep in mind that the ABAP coding is written with the aim of making the scenario understandable and is not an example of how to program spotless ABAP!

 

1. Scenario

Time and again I face a recurring problem. There is a critical issue in the production environment which cannot be reproduced in the development or test systems. The functional experts are blaming the developers, and the developers are sure that the problem is configuration related. After all, the developer has compared the code of the development, test, and production systems and has not found any deviations. Since it works fine in the development and test systems, the developer concludes that the problem must be configuration related. The functional expert, however, has compared the configuration and is sure that it is just another ABAP bug that has found its way into the production system. Meanwhile, the business is pressing to get the problem fixed. The helpful developer therefore decides to have a further look at the problem. By reading the application log she can find out approximately where the issue is, but she does not understand why it occurs. It would be helpful to see the values used in the custom implementations and compare them with the successful values in the development system. Debugging the production system is not possible for numerous reasons.

 

2. Example

The next example is very simple and helps to show how easy it is to develop logpoints. It describes how to log the input and output parameters of a method. My recommendation is that logging the input and output parameters of important methods is a minimum. Developers, however, are free to add as many (and as complex) logs as they wish on top of that.

 

2.1 Code

Following the theme of this blog, I have created a class with a method called BORDER_CONTROL. It has an importing table and an exporting table. The structure of the table consists of a name, description, nationality, date, and a flag indicating whether or not the person is allowed to cross the border at Checkpoint Charlie.
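The class and the logpoint are only shown in the screenshots below, so here is a hypothetical sketch of what the logpoint inside BORDER_CONTROL could look like (the checkpoint group name ZCHECKPOINT_CHARLIE and the parameter names are assumptions):

METHOD border_control.
  " ... business logic deciding who may cross the border ...

  " Log the importing and exporting tables under the checkpoint group.
  " While the group is inactive, the statement has no effect.
  LOG-POINT ID zcheckpoint_charlie
            SUBKEY 'BORDER_CONTROL'
            FIELDS it_travellers et_travellers.
ENDMETHOD.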

CCC1.jpg

 

CCC2.gif

 

For testing purposes I have created a little program calling the class method BORDER_CONTROL.

CCC3.gif

 

2.2 Activating the Checkpoint Group

In order to log the logpoints of method BORDER_CONTROL, it is necessary to activate the checkpoint group.

CCC4.jpg

 

Switch the radio button from Inactive to Log.

CCC5.jpg

 

It is possible to restrict who should trigger the log by selecting specific users. Press the User button.

CCC6.jpg

 

Press CCC7.jpg. Type in the user name.

CCC8.jpg

 

Save the checkpoint group. Upon saving, the following dialog appears. Here you can restrict how long the activation is valid. In a production environment, I recommend activating the checkpoint group just before running a process and deactivating it immediately thereafter.

CCC9.jpg

 

Now you can run the above-mentioned test program Z_CHECKPOINTS by hitting F8. A log entry has been created. It can be viewed by pressing the Log tab of the checkpoint group.

CCC10.jpg

 

By expanding the log tree it is possible to get to the log fields. In this case the importing and exporting tables of method BORDER_CONTROL.

CCC11.jpg

 

These are the values of the importing table.

CCC12.gif

 

The exporting table contains an allowed flag for Lou Gram.

CCC13.gif

 

With this type of information the developer can get an overview of the data flowing in and out of custom methods, which should help identify potential issues and/or reproduce the problems in other systems.  The development effort is minimal and I hope that the use of checkpoint groups in customer ABAP code will increase.

 

In this blog I have described one reason why I use logpoints in my custom code. There are plenty of other reasons which are not covered here. How do you use checkpoint groups?

 

(For more information on Checkpoint Charlie and Peter Fechter, please visit http://en.wikipedia.org/wiki/Checkpoint_charlie)

SELECT..INTO TABLE faster than SELECT/ENDSELECT. The real reason.


Let me begin by asking a simple question. Say I have a database table dbtab that contains 1000 rows of data, and I run a SELECT/ENDSELECT query on dbtab. Now, does this statement hit the database 1000 times?

Most beginners (and sometimes even ABAP “veterans”) would emphatically reply – YES. The purpose of this blog is to examine whether this is really true.

 

Consider the following code snippet:

 

Case 1:

DATA: WA_T005 TYPE T005,
      IT_T005 TYPE STANDARD TABLE OF T005.

SELECT *
  FROM T005
  INTO WA_T005.
  APPEND WA_T005 TO IT_T005.
ENDSELECT.

 

Now, go to transaction ST05 (Performance Trace) and activate the trace. Execute the above code snippet and then view the SQL trace in ST05. The following is a screenshot of the “Detailed Trace List” view of the SQL trace.

 

Detailed Trace List.jpg 

Fig 1: Detailed Trace List.

 

Before going any further, let me explain briefly what ST05's SQL trace shows. By means of the SQL trace, one can analyze all database statements that were sent from the DBI (Database Interface) to the database. In other words, every time the system hits the database and data is returned to the DBI (which is a component of the work process, so the DBI is part of the application server), this is logged in the SQL trace of ST05. The SQL trace is therefore a crude way of knowing roughly: “How many times has the system gone from the DBI to the database?”

 

So if the SELECT/ENDSELECT had hit the database 1000 times, we should have seen 1000 FETCH entries in the above trace. Instead we see only one FETCH, and all 236 records present in table T005 are pulled in that single FETCH operation!

 

What actually happens under the hood is that the DBI automatically implements an optimization. The data is NOT sent from the database to the DBI one row at a time. The data is sent from the database to the application server (specifically to the DBI) in “packages”. The data in the packages is buffered in the DBI and then sent to the ABAP program. In the case of a SELECT/ENDSELECT, the data is sent from the DBI to the ABAP program one row at a time. Why one row at a time? Because the ABAP program can store data only in WA_T005, which is a work area and can hold only a single line at a time. So it is important to realize that there are two steps involved: the transfer of data from the database to the DBI, and the transfer of data from the DBI to the appropriate variable in the ABAP program. The transfer from the database to the DBI ALWAYS happens in packages (unless SELECT..UP TO 1 ROWS or SELECT SINGLE is used).

 

Consider Case 2: If I had used the following code:

SELECT *
FROM T005
INTO TABLE IT_T005.

 

In this case also, the data is sent from the database in “packages” and buffered in the DBI. And the data is sent from the DBI to the ABAP program only after ALL the packages have been received. At this point, I have to back up for a moment and explain what exactly a package is.

 

A package is a “packet of data”, that is, a set of rows. The package size depends on the respective database platform and on whether it is a Unicode system or not. Usually, package sizes are between 8 KB and 128 KB and are a multiple of 8 KB. Suppose I have a database table with 100,000 records and a line width of 64 bytes, and my SELECT query fetches 2,500 records from it. The data to be fetched then occupies 160,000 bytes, or 160 KB (2,500 multiplied by 64). Assume the package size on my database platform is 32 KB. The entire 160 KB will then be transferred in 5 packages (160/32) from the database to the application server. So if the total size of the records to be fetched is larger than a single package, the data is transferred in multiple discrete packages. Now would be a good time to look back at Fig 1, where you see the columns “Array” and “Recs”. “Array” is the maximum number of records/rows that a single package may contain; “Recs” is the number of records/rows transferred in that particular FETCH operation. From Fig 1 it can be seen that each package can accommodate 32,767 rows. The total number of records in T005 is 236, and since my SELECT query has no WHERE condition, 236 records are to be fetched. The package can take more than 236 rows, therefore all the data for this query is fetched in a single package, and there is just one hit to the database.

 

Now coming back to Case 2: the DBI waits until it has received all the data to be fetched for the query. Say there were 50,000 countries in T005 (let's hope the world doesn't become that fragmented!), the DBI would wait for two packages to arrive from the database. Only then would it send all this data to the ABAP program (i.e. to the internal table IT_T005). In this case the data is NOT sent one row at a time from the DBI to the ABAP program, because the query now has a variable that can hold multiple rows (the internal table IT_T005). So in Case 2, the optimization is implemented on two levels: at the database level, which transfers data not in individual rows but in packages, and at the ABAP program level, where data is transferred from the DBI to the program not in single rows but in bulk.

 

In fact, this is the actual reason why SELECT..INTO TABLE is faster than SELECT/ENDSELECT. Very often I hear the answer: “SELECT/ENDSELECT is slower because the system hits the database many times. On the other hand, in the case of SELECT..INTO TABLE, the system hits the database only once.” That is not correct. In both statements the number of database hits is the same (for Case 2, you may look into the SQL trace – the number of DB hits would be 1). The correct answer is: in the case of SELECT..INTO TABLE there is an optimization at two levels, whereas with SELECT/ENDSELECT the optimization happens at only one level.

 

 

Case 3:

SELECT *
  FROM T005
  INTO TABLE IT_T005_TEMP
  PACKAGE SIZE 10.

ENDSELECT.

 

In this case the difference is that the DBI does not wait until it has received all the packages from the database. As soon as it gets a package from the DB, it buffers it and immediately starts sending rows to the internal table IT_T005_TEMP in the ABAP program. With each loop pass, the next set of rows is placed into IT_T005_TEMP, overwriting the data previously present there (unless the addition APPENDING TABLE is used).
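A minimal sketch of how such a package loop is usually processed (the WRITE output is purely illustrative): each portion of rows has to be handled inside the SELECT ... ENDSELECT block, because the next pass replaces the table content.

DATA: IT_T005_TEMP TYPE STANDARD TABLE OF T005,
      WA_T005      TYPE T005.

SELECT *
  FROM T005
  INTO TABLE IT_T005_TEMP
  PACKAGE SIZE 10.

  " Process the current portion here; the next loop pass overwrites
  " IT_T005_TEMP with the following 10 rows.
  LOOP AT IT_T005_TEMP INTO WA_T005.
    WRITE: / WA_T005-LAND1.
  ENDLOOP.

ENDSELECT.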

 

Summary:

  • An analogy may be drawn to Max Planck's quantum theory here: data is fetched from the database to the application server in discrete packets called “packages”, unless SELECT..UP TO 1 ROWS or SELECT SINGLE is used.
  • When a SELECT/ENDSELECT query is run on a table with 1000 records, this does NOT mean that the statement hits the database 1000 times. The number of database hits depends on the package size, the line width of the table, and the number of rows to be fetched.
  • The reason why SELECT..INTO TABLE is faster than SELECT/ENDSELECT is that there is an optimization on two levels (at the DBI/ABAP program interface and at the database level) rather than on just one level (the database level).

 

References:

[1] Gahm, Hermann. ABAP Performance Tuning. SAP-Press. 1st Edition. 2009.

 

NOTE: This blog may not be expressing a very important point, as this is highly theoretical stuff. Even without knowing it, most ABAPers know that SELECT..INTO TABLE is faster than SELECT/ENDSELECT. It is just that the reasoning usually given for the better performance of SELECT..INTO TABLE is not technically accurate. It would be nice to know the technically correct reason, and that is why I felt this post had to be written. In fact, I started this post with a question that was asked to me in an interview. No surprises – I gave the wrong answer. The interviewer then explained the correct answer; I couldn't digest all that he said then and there. I had to ponder on it for a while and then refer to the book by Hermann Gahm to clearly understand and organize my thoughts.

TechEd 2012: The Brand-New ABAP Test Cockpit – A New Level of ABAP Quality Assurance


Are you interested in the ABAP Test Cockpit but couldn't attend the corresponding SAP TechEd sessions?

 

Watch this recording of session CD101 by Ekaterina Zavozina and Boris Gebhardt to learn more about the ABAP Test Cockpit and how to use it efficiently:

 

The Brand-New ABAP Test Cockpit – A New Level of ABAP Quality Assurance

Bugs in your custom ABAP code can be quite expensive when they impact critical business processes, which is why ABAP quality assurance of custom code is receiving more and more attention in business. SAP is developing a great deal of ABAP code and they use the ABAP test cockpit as a central check tool. Why should customers not use the same ABAP check tool SAP is using? Good question. That's why SAP started a pilot project with two big customers in order to find out if the ABAP test cockpit is beneficial for custom code. The results of this pilot project were very promising and now we can proudly announce that we plan to release the ABAP test cockpit to customers. In this session, we will show you in live demos why the ABAP test cockpit is a “must have” for your custom ABAP development.

 

ABAP Unit Tests without database dependency - DAO concept


Hi

 

In this blog I would like to present a technique for testing business logic without database dependency. It is implemented with an object-oriented design. Many developers complain that they cannot write many unit tests because their reports use the database and table contents may easily change. Removing the database from testing is the key factor for successful unit tests.

 

As a reminder, good unit tests:

  • Always give the same result.
  • Can be run in any order – each runs independently and must work on its own.

 

Let's imagine a test that uses the database:

  • It creates a new row.
  • It runs the business logic, which reads that row.
  • It checks the result.
  • It deletes the row at the end.

 

Such a test can work fine. But it may not always give the same result. If someone else manually creates, changes, or deletes the row during the test run, we get an error that interrupts the test, or invalid results.

 

That is why good unit tests:

  • Do not use the database.
  • Do not rely on network calls or files.

 

I think it is a really bad thing to have “random” test failures: the logic of the program and of the test is correct, but the test fails occasionally because of the environment setup or other factors. We need to eliminate this, because a unit test failure must indicate a defect in the business logic, not in the testing environment.

 

The technique I present is called dependency injection. In general, we need to replace database queries with something that pretends to be (mocks) the database. We inject a new object with new behavior into the code under test, so that in the end no database queries are executed – that is why it is called dependency injection.

There are many ways of doing this, for example using interfaces or inheritance.

 

I want to recommend one approach that uses inheritance, because:

  • It is simple.
  • It does not require creating a separate interface.
  • We only extend the test code to mock the database; the production code is not influenced.

 

We need to make a distinction between:

  • Production code – business logic executed by the real program in the production system. It is usually a global class, a report, or an include.
  • Testing code – used only for testing, never run in production. Test code must not be put into the production code even if it is unused, so a production global class should not have methods like set_customer_for_test_only( ).

 

There is a design template that we can use for testing database-dependent logic with dependency injection. If you follow this approach, it is easy to extend the production code, the database queries, and the tests in the future.

 

 

1. Build a Data Access Object (DAO), which will be the single access point to the database.

 

  • Create a class method get_instance( ) that returns the singleton instance of the object.
  • Create a class method set_instance( ) that makes it possible to inject a mock instance when we need it.
  • Each business domain should have its own DAO, like ZCL_CUSTOMER_DAO, ZCL_CONTRACT_DAO, ZCL_WORK_ORDER_DAO etc. Initially we can have one DAO per report, but if there are too many queries for different tables in it, complexity increases. It is better to split it logically into separate DAO units that everyone can use, so try to make DAOs domain-specific and not report-oriented. Keep it simple.
  • Methods in the DAO should suit the needs of our program, especially for performance reasons. If our program reads a table many times and requires only the values of a single column, then build a method that returns a table of those column values only. However, if the program reads the table just a few times, you can build a method that returns full rows and extract the column in your program.
  • Methods in the DAO are mainly database queries, but also function/BAPI/object calls that use the database internally.
  • Database logic is extracted and separated from the business logic.
  • There is only one access point to the database queries because the singleton pattern is used.

 

DEFINITION:

    CLASS-DATA mo_dao_instance TYPE REF TO zcl_employee_dao.

    CLASS-METHODS get_instance
      RETURNING VALUE(ro_instance) TYPE REF TO zcl_employee_dao.

IMPLEMENTATION:

    METHOD get_instance.
      IF ( mo_dao_instance IS INITIAL ).
        CREATE OBJECT mo_dao_instance.
      ENDIF.
      ro_instance = mo_dao_instance.
    ENDMETHOD.
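The set_instance( ) method mentioned above is not shown in the original snippet. A minimal sketch (the signature is an assumption) could look like this:

    " Assumed sketch of set_instance( ): lets a test inject a mock
    " (a subclass of ZCL_EMPLOYEE_DAO) as the singleton instance.
    CLASS-METHODS set_instance
      IMPORTING io_instance TYPE REF TO zcl_employee_dao.

    METHOD set_instance.
      mo_dao_instance = io_instance.
    ENDMETHOD.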

 

 

2. The global class (production code) keeps an attribute mo_employee_dao, which is initialized in the constructor.

    METHOD constructor.
      me->mo_employee_dao = zcl_employee_dao=>get_instance( ).
    ENDMETHOD.

 

 

3. All database operations in the production code must be delegated to the DAO instance.

 

  • The global class never runs direct database queries inside its own methods.
  • Instead, all queries are delegated to the DAO instance, for example:

 

    ls_employee  = mo_employee_dao->get_employee_by_id( i_employee_id ).
    lt_employees = mo_employee_dao->get_employees_from_department( i_department ).

 

 

4. For testing scenarios, we create a new class that pretends to be the real DAO but has predefined results for each method.

 

  • It may be a local class if we need to mock results only for a local program, or a global class if we want to share it more widely.
  • The class inherits from ZCL_EMPLOYEE_DAO, so inheritance is used here.
  • I use the suffix _mock in the name to identify it as a mocking class (a convention from Java development).

 

     CLASS lcl_employee_dao_mock DEFINITION INHERITING FROM zcl_employee_dao.

 

  • We only need to redefine the methods that will be used in the testing scenario.
  • However, if we do not redefine a method and it is used during a test, real database access will be performed – watch out for that.
  • Optionally, all methods can be redefined with empty content (so an empty result is returned by default), and implementations are then written only for the methods needed in the test scenarios.

 

DEFINITION:

    METHODS get_employee_by_id FINAL REDEFINITION.
    METHODS get_employees_from_department FINAL REDEFINITION.

 

  • In the method implementations of lcl_employee_dao_mock we create the fixed values that we assume would be returned from the database. We can program conditions to return different results for different input parameters.

 

METHOD get_employee_by_id.
  IF ( i_employee_id = '00001' ).
    rs_employee-id         = '000001'.
    rs_employee-name       = 'Adam Krawczyk'.
    rs_employee-age        = 29.
    rs_employee-department = 'ERP_DEV'.
    rs_employee-salary     = 10000.
  ELSEIF ( i_employee_id = '00002' ).
    rs_employee-id         = '000002'.
    rs_employee-name       = 'Julia Elbow'.
    rs_employee-age        = 35.
    rs_employee-department = 'ERP_DEV'.
    rs_employee-salary     = 15000.
  ENDIF.
ENDMETHOD.                    "get_employee_by_id

 

  • Implementing the methods requires knowledge of the database content. During development I often take real database values found during debugging or manual queries and prepare the test case from them. In this way you show in the code what can actually be expected from the database – not fake, but real possible values. That also helps others to understand the logic.
  • We must know the possible input values and the expected results. If we do not know them, how can we be sure that our production code works correctly? Not knowing the business domain or lacking test data cannot be an excuse for not having unit tests.

 

 

6. After we have everything above set up, we can easily inject the mock DAO in unit tests.

 

  • In the class_setup method of the unit test class, which is run once before all tests are executed, inject the mock DAO into the real DAO:

DEFINITION:

    CLASS-METHODS class_setup.

IMPLEMENTATION:

    METHOD class_setup.
      DATA lo_employee_dao_mock TYPE REF TO lcl_employee_dao_mock.
      CREATE OBJECT lo_employee_dao_mock.
      zcl_employee_dao=>set_instance( lo_employee_dao_mock ).
    ENDMETHOD.

 

  • And that is it. The mock DAO will now be used, and during all tests the predefined result sets from our own implementation in LCL_EMPLOYEE_DAO_MOCK are returned.
  • Initially I also used to restore the original DAO instance in the teardown method, which runs after the tests are finished. However, this is not needed.
  • In ABAP, a singleton instance defined as in point 1 exists only within one session. This means the mock DAO instance is injected into ZCL_EMPLOYEE_DAO only during the unit test execution. Even if the unit tests run for a long time and I run production code in parallel from a new session (a new transaction or a program run with F8), the real DAO will be used there, because that is a separate session.

 

Below is a summary of all the described steps, showing an end-to-end example of production code and test code.

 

1. Type definitions used in the classes.

  • Let's define the types that will be used in the example below.
  • The structure represents the basic data of an employee.
  • A hashed table of employees with a unique ID key.

01_types_definition.PNG

2. DAO class for employee - definition.

  • get_instance( ) and set_instance( ) are created according to the described template.
  • Three methods for database queries.

02_dao_definition.PNG

 

3. DAO class for employee - implementation.

  • get_average_age is a specialized method that moves the average calculation to the database.
  • get_employees_from_department returns a table of employees that will be used for other statistics calculations.
  • For demonstration purposes, zemployee_db_table is used and we assume it contains the same columns as the structure.

03_dao_implementation.PNG

4. Business class - employee statistics - definition.

  • Example production code that reads employee statistics: employee data, the average age of all employees, and the average salary in a specific department.

04_employee_statistics_definition.PNG

5. Business class - employee statistics - implementation.

  • mo_employee_dao is initialized in the constructor and is the access point to the database for the business logic.
  • No direct database access.
  • All queries to the database are done through the mo_employee_dao object.
  • This is a simple example for demo purposes. In real life the logic can be more complicated, but still only single queries are sent to the database and the program logic processes the results.
  • The average age is read directly from the database through the DAO.
  • The average salary in a department is calculated by the program. For demonstration purposes, the DAO returns the list of employees from the department and the program calculates the average. In reality it would be easier to implement this as a single database query in the DAO as well.

05_employee_statistics_implementation.PNG

6. Mock DAO - definition.

  • The mock DAO extends the real DAO, so it has the same methods available.
  • All methods of the real DAO are redefined in this case.
  • FINAL REDEFINITION indicates that we do not want anyone to redefine the lcl_employee_dao_mock methods further; we could also use the REDEFINITION keyword alone.
  • In points 7 and 8 you will see different ways of implementing test data, for demonstration purposes. In reality it is better to keep to one convention in the mock DAO class.

06_dao_mock_definition.PNG

7. Mock DAO - implementation part 1.

  • One way of preparing test data.
  • There is an internal table that corresponds to the real database table.
  • In the constructor of the mock DAO we initialize the table just as we would prepare the real database table before a test.
  • In the mock DAO methods we use the internal table, rather than the real database table, to find the results.

 

07_dao_mock_implementation_table.PNG

 

8. Mock DAO - implementation part 2

  • If we do not need to simulate the full table content, we can implement the test data directly in the methods.
  • Based on conditions on the input parameters, we define the returned results.
  • It is easy to extend the test data in the future: just build new data for a new input parameter designed for a new test scenario.
  • Sometimes we can also hardcode database values as the result of a method, as in the case of get_average_age.

08_dao_mock_implementation_direct.PNG

9. Unit Test class definition.

  • class_setup is needed – it runs once before all tests. We replace the real DAO with the mock DAO there.
  • The setup method runs before each test. A fresh instance of the object under test is created.
  • lo_employee_statistics is the business logic object that we want to test.
  • Three methods are tested; two of them are tested with both found and not-found values.

09_test_class_definition.PNG

 

10. Unit Test class implementation - part 1.

  • It is enough to replace the instance in ZCL_EMPLOYEE_DAO with the mock DAO instance before all tests are started.
  • This is the key point of the dependency injection used here.
  • Any further call during test execution by the production code (for example the constructor of lo_employee_statistics) that tries to get the DAO instance via ZCL_EMPLOYEE_DAO=>GET_INSTANCE( ) will now get our prepared mock DAO instance.
  • It is safe to inject the fake DAO, as this affects only the current user session, which ends after the tests are executed. Any other session that calls ZCL_EMPLOYEE_DAO=>GET_INSTANCE( ) will get the real DAO.

10_test_class_implementation_1.PNG

 

11. Unit Test class implementation - part 2.

 

11_test_class_imlpementation_2.PNG

 

I am also attaching a text file with all the code from the presented example, so you can use it for testing.

 

I hope that after reading this blog you will see how easy it is to write unit tests, even for logic that requires database access. If it looks like a lot of code for such a simple example, believe me that it is worth spending the time to create unit tests anyway. Complex reports especially need it: a simple change in the future may impact behavior, and a non-author is not sure whether he can add a new line there or not. If the code is well tested, there is less chance of unexpected errors. Lately there are also tools that let you easily execute unit tests and measure code coverage, but that is another story.

 

Keep in mind that unit tests that skip the database by mocking it verify the business logic, but not the end-to-end program behavior. If there is an error in a real DAO method, for example in a SELECT statement, our tests will not discover it. That is why end-user tests are important as well. But users have less to test, and are less likely to discover a logic bug, after the code has already been tested at the unit level. Of course, it is also possible to write tests for the DAO class itself, for example by inserting, reading, validating, and deleting rows. But as I mentioned at the beginning, these are not pure unit tests, although they may still be helpful – just group them in the category “may fail randomly”.

 

There is one more advantage of using the DAO concept. If we delegate all database operations to DAO classes in our development, they can be reused by anyone. Additionally, the class can be tested with F8 and single methods can be executed. In this way we can check the results of database statements (or function calls) that are already implemented in the DAO – no need to implement temporary code or think about how to query a table or write a join.

 

I recommend using the DAO concept and always extracting database logic from the business layer. I strongly encourage you to write unit tests whenever possible – try it and see the long-term benefits.

 

Good luck!


Creating ABAP unit tests in eclipse and SE80


Hi,

 

I believe that writing unit tests is very important for reliable, good-quality software. I have heard many times from other developers that customers are not interested in unit tests because they do not want to pay for them – but do they want to pay for even higher maintenance costs? Unit tests are not an additional feature; they are part of good, clean code that works. In this blog I want to show you how simple it is to create basic unit tests and what the difference is between SE80 and eclipse in this area. Even if you have already learned how to create unit tests in SE80, it may be difficult to start again in eclipse.

 

This article has the following sections:

  1. Why we need unit tests
  2. Where to use unit tests
  3. Unit test class format
  4. Predefined unit test methods
  5. How to build single unit test case
  6. Unit tests in SE80
  7. Unit tests in eclipse
  8. Executing unit tests
  9. Summary

 

 

1. Why we need unit tests?

 

- They verify if code behavior is correct.

- They run fast and give quick status feedback.

- They lead to more reliable code, main bugs are found earlier.

- They lead to better and simpler code design.

- They help to consider all possible input/output values.

- They can be automated.

- In the long term, they greatly reduce maintenance costs.

 

Having just a few unit tests is not a lot, but it is a good start. If every developer starts to create unit tests regularly, there will be small islands of tested code, which will finally lead to larger, well-tested areas.

 

 

2. Where to use unit tests?

 

Unit tests work best with classes. They are built inside a class and are an integral part of object-oriented design. However, it is possible to write unit tests for function modules as well. A block of code (method, function, form) is easy to test if it is isolated – it works only on input and output parameters and does not use external or global variables. That is why it is more difficult to test old legacy code and much easier to test new development – active use of unit tests leads to better code design.

 

In general, unit tests, as the name suggests, are designed for a limited scope: testing basic unit behavior. It is easy to test the smallest methods, but not the complex logic of a whole report. If we are not able to test the full report flow, we should still try to test small parts of it, so that the solution is tested at the component level even if not as a whole. For example, let's say there is one method run_program_logic with 10 sub-methods inside. If we test all 10 sub-methods without testing the main method, the main flow logic will probably still work correctly – at least we will not experience problems with basic things like calculations, data conversion, or display formatting. End-to-end testing must be done in user acceptance tests anyway, not as unit tests. Testing a full report in unit tests requires more time because we need to simulate many database queries etc., but at least we can try to test the main business logic.

 

We cannot directly test report event blocks like INITIALIZATION, but we can implement them with object methods and then test those. For example:

 

INITIALIZATION.
lo_report->initialize_screen_fields( ).

 

For such code we can write unit tests for lo_report->initialize_screen_fields( ). Modularization is important.
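As a minimal, hypothetical sketch (the report class lcl_report and the attribute mv_default_date are assumptions, not from the original), such a test could look like this:

CLASS ltcl_report DEFINITION FOR TESTING
  DURATION SHORT RISK LEVEL HARMLESS.
  PRIVATE SECTION.
    METHODS initialize_screen_fields FOR TESTING.
ENDCLASS.

CLASS ltcl_report IMPLEMENTATION.
  METHOD initialize_screen_fields.
    DATA lo_report TYPE REF TO lcl_report.   " assumed report class
    CREATE OBJECT lo_report.
    lo_report->initialize_screen_fields( ).
    " Assumed behavior: the method sets a date screen field to today.
    cl_abap_unit_assert=>assert_equals(
      act = lo_report->mv_default_date
      exp = sy-datum
      msg = 'Screen default date must be initialized to today' ).
  ENDMETHOD.
ENDCLASS.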

 

 

3. Unit test class format

 

Unit tests are built in a class (local or global) with specific additions in the definition part:

class ltcl_my_test_class definition
  for testing
  duration short
  risk level harmless.

 

Duration and risk level are test class attributes. Duration describes how long the system allows the test to run before termination; risk level allows test execution to be disabled in the case of high risk. Good unit tests should have the default values shown above – they run quickly and do no harm to the system.

 

Usually a global class will have a local class for testing its own code, but it is possible to use a global test class if we want to share test code across more objects and reuse it outside the class.

 

 

4. Predefined unit test methods

 

There are some method names that are reserved; if we implement them, they will be called automatically by the unit test framework:

- class_setup – class method called once before all tests are run. A place for general initialization of static variables.

- class_teardown – class method called once after all tests are executed.

- setup – method called before each single test, commonly used for data preparation.

- teardown – method called after each single test.

 

All these methods are optional. I recommend always using at least the setup method to create a new object for testing. Each single test case should perform its steps on a clean instance.
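A minimal sketch of that pattern (the class under test, lcl_calculator, is an assumption; the add method matches the example used later in this blog):

CLASS ltcl_calculator DEFINITION FOR TESTING
  DURATION SHORT RISK LEVEL HARMLESS.
  PRIVATE SECTION.
    DATA mo_cut TYPE REF TO lcl_calculator.  " class under test (assumed)
    METHODS setup.
    METHODS add FOR TESTING.
ENDCLASS.

CLASS ltcl_calculator IMPLEMENTATION.
  METHOD setup.
    " Runs before each test, so every test works on a fresh instance.
    CREATE OBJECT mo_cut.
  ENDMETHOD.

  METHOD add.
    cl_abap_unit_assert=>assert_equals(
      act = mo_cut->add( i_num1 = 5 i_num2 = 10 )
      exp = 15
      msg = '5 + 10 must be 15' ).
  ENDMETHOD.
ENDCLASS.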

 

 

5. How to build single unit test case.

 

A single unit test is implemented as a method in the unit test class. We can identify three phases of a test:

1. Initialization.

2. Code execution.

3. Results validation.

 

The first two are nothing new – we write code that prepares data and runs the production code we want to test, for example a single method of a class. The third phase, however, requires additional, special methods to be called that validate whether the results are correct. We call them assertion methods.

 

There are different types of assertions; they are available as static methods of the standard class cl_abap_unit_assert. The most popular assertions are:

- assert_equals – checks whether two values are equal,

- assert_initial – checks whether a value is initial,

- assert_true – checks whether a condition is true.

 

The goal of an assertion method is to compare the actual and the expected value and to raise an error to the unit test framework if they do not match. The important parameters are:

- act – the actual value retrieved from the code execution (phase 2).

- exp – the value we expect. Some assertions do not need it, like assert_initial or assert_true – only the act value is needed.

- msg – the message shown to the user if the test fails. I recommend using it often, although it is an optional parameter – it helps others to understand the meaning of the test case.

 

Example of method assertion:

 

    cl_abap_unit_assert=>assert_equals(
      act = lo_calculator->add( i_num1 = 5
                                i_num2 = 10 )
      exp = 15
      msg = '5 + 10 must be 15' ).

 

The example above shows a case where all three test phases are written as one statement: initialization (i_num1 = 5 and i_num2 = 10), execution (lo_calculator->add) and verification (assert_equals). This flexible call style is possible with the object-oriented approach, and I recommend using it, as it saves space.

 

In general, each test method should validate a single scenario, so it is good to have separate methods for the different variants. As an example, consider test methods like:

METHODS divide_success.

METHODS divide_div_zero_exception.

METHODS divide_missing_params.

 

It is better to have three methods instead of one that tests all three cases inside. Why? If a test case fails, we know from its name exactly what stopped working (say only divide_div_zero_exception failed). The name of the test then describes not only which method was tested but also which case of that method was run. This brings value especially if we have automated tests scheduled periodically and a general overview of all tests. Of course, it is also acceptable to have only one test method “divide” and test the different scenarios inside it – we will still see that the “divide” method fails. Sometimes it is easier to have more scenarios in one test method, as we do not need to copy-paste the same code; in general, however, it is better to split the scenarios into different test methods and assign meaningful names.

 

 

6. Unit tests in SE80

 

It is easy to create unit tests for a global class in SE80. There is a nice wizard (Utilities -> Test Classes -> Generate) that helps us to create unit tests for chosen methods of a class:

se80_wizard_methods.png

 

In the wizard we can also decide which predefined methods we want to have and what the attributes of the class are.

- Fixture creates the setup and teardown methods.

- Class fixture creates the class_setup and class_teardown methods.

- Invocation creates the method execution with default initial parameters assigned.

- Default Assert Equals syntax may also be created.

I recommend selecting all options to have the full test class generated; we can remove some parts later if we do not need them.

se80_wizard_predefined.png

 

After we go through the wizard, a local class with unit tests is generated automatically. If we selected the “Generate Fixture” option, a setup method is created together with an f_cut object, which stands for “class under test” (the f_ prefix is used, although the usual convention for objects is mo_). We also see that a new object instance is created in setup, so each test will have a fresh instance to test.

 

Now the only thing we need to do is fill the test method bodies with the test case scenarios. In addition, from the source code editor we can add new test cases to the class if we want.

 

The wizard creates the same names for the test methods as in the tested class. I think this is a good approach. Normally we could add a test_ prefix, but there are only 30 characters available for a method name.

 

 

7. Unit tests in eclipse

 

Eclipse does not have a wizard for automatic test creation. Initially I missed that very much, so I created unit tests from the embedded SAP GUI and then updated the test code again in eclipse. However, after some time I learned how to write unit tests even more efficiently in eclipse, and I do not need the wizard any more. Eclipse offers dynamic and flexible templates; it is important to use them and be aware of that feature. These templates can be used for efficient code handling.

 

The first difference we see in eclipse is the tabs at the bottom of the source code, where we can clearly see which part of the class is the global class (production code) and which are the local types or test classes. I like it, because it is now easy to find the place where to write each kind of code.

 

bottom_tabs.PNG

 

To create a test class, just type “test” in the “Test Class” view and press CONTROL + SPACE to see the template suggestions:

test_template.png

 

After choosing “testClass - Test class (ABAP Unit)”, an initial version of the test class is generated. Templates contain predefined variables, which are marked with an editor frame. We can jump between the variables by pressing TAB and change the names. Renaming a single variable changes all its occurrences in the source code, which is convenient. In the default template we need to update the local test class name, the test attributes, and the first method name. If we press Enter, the variable editing mode ends and we are back in the standard source code editor mode. Notice that the default prefix for the local class is “ltcl_” (local test class), and I use it as well.

 

source_template_variables.png

 

Templates are very powerful and useful. It is possible to change existing templates or create new ones, and they can be used for any development, not only for unit tests. Templates can be modified in the “ABAP Templates” settings in the Window -> Preferences menu:

 

templates_settings.png

 

In my example the standard testClass template is extended with an mo_cut object, which is created in the setup method by default, as I know I use this pattern often.

 

With the default template we have only one test. How do we add new tests now? Quite simply: just type a new line in the definition section:

 

METHODS my_second_test FOR TESTING.

 

Then place the cursor on my_second_test, press CONTROL + 1 (Quick Fix), and confirm with ENTER that you want to create the new method:

 

new_test_method.png

 

This creates an empty implementation of your method, and the cursor jumps inside that method so you can immediately start to write code. Initialize the variables, execute the method that you want to test, and write the assertion. You do not need to remember the syntax of the assertions – again, templates come to help. Just type assert, press CONTROL + SPACE, and choose the assertEquals template. By default the assertion is written in one line; I updated my template to have each parameter on a new line, as that suits my needs better.

 

These small tricks with CONTROL + SPACE and CONTROL + 1 for automated code generation speed up development a lot and make me work faster than in SE80. Technically it is now easier to practice Test-Driven Development (TDD) in eclipse, as we can write the tests first and design the global class by creating empty methods via the CONTROL + 1 Quick Fix feature from the test class.

 

Unfortunately, there is no wizard that automatically generates tests for all methods as in SE80. On the other hand, it is now up to the developer to decide which methods should be tested, and it does not take much time to auto-generate the method signatures. Another option is to use the wizard from the embedded GUI.

 

 

8. Executing unit tests.

 

CONTROL + SHIFT + F10 is a shortcut worth remembering: it runs the unit tests for the current object. We can also execute unit tests at package level, via right-click and the Execute Tests option. In eclipse there is also the convenient shortcut CONTROL + SHIFT + F11, which runs the unit tests with code coverage – useful for seeing how well the unit tests cover the tested class's functionality. In SE80 we must run this from the menu: Local Test Classes -> Execute -> Execute Tests with -> Code Coverage.

 

It is possible to schedule automated test runs via:

  • the Code Inspector (SCI), with a check variant that contains unit test execution,
  • the program RS_AUCV_RUNNER, where we can specify packages/programs and automatic email notification.

 

 

9. Summary


Unit tests are important. However, it is not so easy to start creating them. I once compared learning unit tests to a trip up a high mountain – it is hard to climb, and you may think it is not worth trying. But if you reach the top, there are beautiful views and you do not regret the decision. This is how I feel – standing on top, happy with my unit tests as part of daily development. Unit tests make you think more broadly about your code; you need to consider all input and output parameters. You see a wider horizon, as you are on the top of that mountain.

 

It is easier to start with unit tests in SE80 because there is an automatic wizard. On the other hand, it is more efficient to write them in eclipse, as there are flexible templates and code completion. I spent more time explaining the eclipse part not because it is more complex, but because it has more potential and tricks that are worth knowing.

 

If you want to write more advanced unit tests without database dependency, please read also my blog:

http://scn.sap.com/community/abap/testing-and-troubleshooting/blog/2013/03/21/abap-unit-tests-without-database-dependency--dao-concept

 

Good luck with unit testing!

Unit tests for exceptions


Hi,

 

In this short blog I would like to explain how we can write a unit test for a method that throws an exception. Even if this is a simple case for many, I got at least one question about it, which means the hints may be useful for others.

 

As far as I know, there is no assertion method or built-in feature in the ABAP unit test framework that checks whether a method throws an exception. (By the way, it would be a good candidate for extending the framework.) We need to handle the exception situation ourselves.

 

Normally, if an exception is thrown during test method execution, the test is marked as failed. We need to avoid this error escalation and implement the test logic in a way that controls the exception and verifies that it actually occurred. I propose two similar variants for that:

 

Variant 1:

If the exception is not thrown, we call the fail method, which makes the test fail. If the method under test raises the exception as expected, the fail line is never reached and the test passes automatically:

 

    TRY.
        mo_cut->raise_exception( ).
        cl_abap_unit_assert=>fail( 'Exception did not occur as expected' ).
      CATCH cx_root.
    ENDTRY.

 

Variant 2:

We introduce a flag that records whether the exception occurred. It has the value abap_false by default and changes only if we enter the CATCH section:

 

    DATA l_exception_occured TYPE abap_bool VALUE abap_false.

    TRY.
        mo_cut->raise_exception( ).
      CATCH cx_root.
        l_exception_occured = abap_true.
    ENDTRY.

    cl_abap_unit_assert=>assert_true(
      act = l_exception_occured
      msg = 'Exception did not occur as expected' ).

 

The first variant is shorter, but the second is more self-explanatory.

 

If you already use ABAP in eclipse, I recommend creating a new template (Window -> Preferences -> ABAP Development -> Source Code Editor -> Templates), calling it assert_exception, and using it during unit test creation just by typing “assert”, then CONTROL + SPACE + assert_exception + ENTER. That helps.

 

It is also worth mentioning that there is already an assertion that checks the system return code: cl_abap_unit_assert=>assert_subrc. This method is similar to assert_equals; the difference is that we can skip the act parameter, as it defaults to sy-subrc.
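A minimal sketch of how that assertion is typically used (the internal table and its type are illustrative only):

DATA lt_employees TYPE STANDARD TABLE OF zemployee_s.  " illustrative type

READ TABLE lt_employees TRANSPORTING NO FIELDS
     WITH KEY id = '000001'.

" act defaults to sy-subrc and exp defaults to 0, so the assertion
" fails if the row was not found.
cl_abap_unit_assert=>assert_subrc(
  msg = 'Employee 000001 must be present in the result' ).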

 

Kind regards

Adam

Reduce run-time of batch program - Part 1


Many programs, especially reports generated at year-end or run once every month or quarter, take a lot of time. These reports are run in the background because they take so long, or they are executed automatically via job scheduling.

There are various ways of improving the performance, or rather reducing the execution time, so that the report output is available ahead of time, or at least within the timelines that the user or customer desires.

 

Here are some of the different ways to reduce the run-time of a batch program, which I have come across and worked on, and which should be helpful to you in one way or another. They can be applied not only to existing programs that take a lot of time, but also before creating such report programs. There are two kinds of reports: standard and custom-created report programs.

 

1. Standard programs:

 

When talking about the run-time of standard batch programs, we do not have much control over it. As an ABAPer or developer, what can be done is to check for SAP Notes available for the program that takes a long time; SAP will usually have released notes if the program is commonly run as a batch program and many customers have raised messages to SAP. Implementing the SAP Notes will mostly resolve the issue or reduce the run-time.

Sometimes a memory parameter setting or system setting is needed in addition to the note implementation.

 

If no SAP Note is available, we can raise a message to SAP after checking the other parameters that can be configured, such as memory and system parameters.

 

Apart from setting or configuring the parameters for the job, there are some things related to the program logic, i.e. the performance of the program/job, that need to be taken care of even in standard programs: check whether any customer exit or BAdI coding is involved that might need fine-tuning.

 

2. Custom programs:

 

In the case of custom programs, there are a lot of things which, as ABAPers, we can plan before creating a report program or when tuning an existing program to reduce its run-time. Some existing programs can even be changed to run in the foreground (if needed) instead of the background if most of the points below are taken care of.

There are two ways in which run-time can be reduced: one is the memory and system settings for the job/program, and the second is the coding itself.

We have to concentrate more on the coding part to improve the performance and reduce the run-time of the program. The report has to be coded with all performance tuning techniques in mind; each technique plays a part in reducing the run-time. For example, populating an internal table as below:

 

                         itab1[] = itab[].          " fast: copies the table body in one step

                         LOOP AT itab.              " slower: copies the rows one by one
                           APPEND itab TO itab1.
                         ENDLOOP.

 

This saves some seconds, depending on the total number of records in ITAB[]; applied consistently across the whole program, such choices make a big difference and reduce its run-time. Also, clearing the memory used by an internal table at the end of a PERFORM, if its data is not needed later, frees memory and thereby indirectly helps performance.
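
As a small sketch (lt_results is a made-up table name), this is the kind of statement meant here:

    " CLEAR/REFRESH only removes the rows; FREE additionally releases the
    " memory allocated for the table body.
    FREE lt_results.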

 

When it comes to performance tuning in coding, there are three areas: ABAP statements, database statements or queries, and the system. Of these, the ABAP and database statements are under our control.

 

ABAP statements:

Each ABAP statement has a specific execution time. Thus, when coding, do not only think of a statement that suits the requirement, but also analyse which statement is best suited, as in the example above for populating internal table data when both tables have the same type.

ABAP statements can sometimes take more time than database statements when they are written without analysis or without knowing their impact. It is better to analyse even a simple ABAP statement while writing it.

 

To understand the execution time of different ABAP statements, log in to SAP and execute program RSHOWTIM; alternatively, navigate there via the menu bar -> Environment -> Examples -> Performance Examples. This will be useful.

 

There are a lot of artifacts and documents already available in the SDN wiki and forums which might be useful to you for performance tuning.

http://wiki.sdn.sap.com/wiki/display/ABAP/ABAP+Performance+and+Tuning

 

Database statements:

 

Most of the time, when we do a run-time analysis, the largest share of the time is consumed by database access, i.e. reading data from database tables or updating/deleting/inserting data. Hence, proper analysis has to be done when coding queries. It is better to understand the requirements first: what the program is going to do, how frequently it will run, how many records and tables are involved, what kind of tables they are, whether the program will be enhanced later, etc. (Most programs are initially created without knowing whether they will be enhanced later.)

We should take the time to analyse which type of query should be coded, whether to go for joins like INNER JOIN or OUTER JOIN or for FOR ALL ENTRIES, depending on the tables used, especially when cluster tables and views are involved.

Sometimes FOR ALL ENTRIES takes less time than an INNER JOIN, so choose the query depending on the tables involved. When reading data, use the primary key fields in the WHERE clause, in the same order as they appear in the table.
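
As a hedged sketch of the FOR ALL ENTRIES variant (the tables VBAK/VBAP and the variable names are only an example), note the guard against an empty driver table, since FOR ALL ENTRIES with an empty table would select all rows:

    TYPES: BEGIN OF ty_vbap,
             vbeln TYPE vbap-vbeln,
             posnr TYPE vbap-posnr,
             matnr TYPE vbap-matnr,
           END OF ty_vbap.
    DATA: lt_vbak TYPE STANDARD TABLE OF vbak,      " driver table, assumed filled earlier
          lt_vbap TYPE STANDARD TABLE OF ty_vbap.

    IF lt_vbak IS NOT INITIAL.
      SELECT vbeln posnr matnr
        FROM vbap
        INTO TABLE lt_vbap
        FOR ALL ENTRIES IN lt_vbak
        WHERE vbeln = lt_vbak-vbeln.                " key field, in table key order
    ENDIF.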

Try to use standard function modules, if available, in place of SELECT queries; they not only reduce the execution time but also come with error handling and better-written queries.

 

There is also the option to configure settings for each job/batch program, with parameters like priority or job class, which influence the run-time. This can be done for both custom and standard batch programs.

 

Existing custom programs:

For existing custom programs, if we want to reduce the run-time, we can use the approaches below.

Do a run-time analysis in transaction SE30 and check how the execution time is split between ABAP, database and system. From that we get to know what needs fine-tuning: the ABAP statements, the database statements, or the system settings. Also do a performance analysis in ST05: activate the trace, execute the program and analyse how much time each SELECT query of the program has taken.

 

The above approaches are also applicable to function modules, class methods and module pools.

 

Any additional information on this topic is welcome.

Basics of eCATT (Video Series) Part 1 : System Data Container


Preface

Inspired by the video tutorials made by Thomas Jung and also by the openSAP courses (http://open.sap.com), I decided to try my hand at video tutorials. I have always liked the "seeing and learning" experience, especially when starting out with a new technology.

 

Lesson 1 : System Data Container

 

 

Best Regards,

Gopal Nair.

Basics of eCATT (Video Series) Part 2 : Test Script Initial Creation & Testing


Preface

Inspired by the video tutorials made by Thomas Jung and also by the openSAP courses (http://open.sap.com), I decided to try my hand at video tutorials. I have always liked the "seeing and learning" experience, especially when starting out with a new technology.

 

Lesson 2 : Test Script Initial Creation & Testing

 

 

Best Regards,

Gopal Nair.

Basics of eCATT (Video Series) Part 3 : Test Script Recording


Preface

Inspired by the video tutorials made by Thomas Jung and also by the openSAP courses (http://open.sap.com), I decided to try my hand at video tutorials. I have always liked the "seeing and learning" experience, especially when starting out with a new technology.

 

Lesson 3 :Test Script Recording

 

Best Regards,

Gopal Nair.

Basics of eCATT (Video Series) Part 4 : Test Script Recording Initial Dry Run


Preface

Inspired by the video tutorials made by Thomas Jung and also by the openSAP courses (http://open.sap.com), I decided to try my hand at video tutorials. I have always liked the "seeing and learning" experience, especially when starting out with a new technology.

 

Lesson 4 :Test Script Recording Initial Dry Run

 

 

 

Best Regards,

Gopal Nair.


Basics of eCATT (Video Series) Part 5 : Creating Test Script Parameters


In this video, we will be editing the test script we recorded, and replacing the hardcoded values with parameters.

 

Lesson 5 :Creating Test Script Parameters

 

Best Regards,

Gopal Nair.

Basics of eCATT (Video Series) Part 6 : Creating Test Data Container


Introduction

In this video, we will discuss the test data container and its "Internal" and "External" variants. We will see how to import the parameters we defined in our test script (discussed in part 5 of this video series). Finally, the internal variants defined will be used to create a template file for creating an external variant file.

 

Lesson 6 : Creating Test Data Container

 

Best Regards,

Gopal Nair.

Overview of Eclipse IDE


Below is an easy-to-remember short description of the Eclipse IDE:

(E)ditor for many programming languages

(C)ode Faster

(L)ess Typing with Code Completion

(I)ntegrated Development Environment

(P)latform

(S)yntax Highlighting

(E)xtensible

 

Note: This is not an official expansion of Eclipse.

 

If you are still wondering what it means, please read ahead.

 

Following are the advantages of using Eclipse as a development tool.

  • Open and extensible development environment - the open plug-in architecture provides a suitable platform for extending it with more specific features and combining them together.
  • Coverage of the full software life cycle - applications can be designed (modeling), constructed (coding) and maintained (deploying, debugging, monitoring, testing...) directly from the tools.
  • Integrated environment - seamless integration.

 

Its openness and interoperability through standards facilitate open-source integration.


Following are the Components of Eclipse Platform

  • Eclipse SDK
    • Eclipse JDT
    • Eclipse PDE
    • Eclipse Platform (RCP)
      • Eclipse UI
      • Eclipse File System
      • Eclipse Runtime
    • Eclipse Modeling Framework

 

References

The SAP Eclipse Story - http://www.sdn.sap.com/irj/sdn/nw-devstudio?rid=/library/uuid/10c671f2-6364-2a10-8d96-8b3145d4a478


Tutorials

  1. Eclipse IDE Tutorial - http://www.vogella.de/articles/Eclipse/article.html
  2. OSGi with Eclipse Equinox - Tutorial - http://www.vogella.de/articles/OSGi/article.html
  3. Eclipse Plugin Development Tutorial - http://www.vogella.de/articles/EclipsePlugIn/article.html
  4. Eclipse RCP Tutorial - http://www.vogella.de/articles/EclipseRCP/article.html
  5. Eclipse Modeling Framework (EMF) - Tutorial - http://www.vogella.de/articles/EclipseEMF/article.html

A useful trick to debug pop-up...


Sometimes we need to debug a process, but the logic we need to debug sits behind a button event, after three pop-ups. So, what do you do? Debug everything and try to figure out where your point of interest starts... NO! You can create a debugging shortcut on your desktop, drag and drop it onto the pop-up or just before the event, and debugging will start right after it.

 

Creating a debug shortcut:

Shorcut_step1.png

 

Change the title to help you identify the client, change the transaction code to /h and choose a place to save the shortcut.

Shorcut_step2.png

 

And Finish.

 

Go to your desktop and find your shortcut:

Shorcut_step3.png

 

Now, here is how the magic happens:

 

Shorcut_step3.png

 

A message will be shown...

Continue the process... and the debugger will start after your click event!

 

 

 

Hope it helps

How to trigger Code Inspector checks during the release of a 'task'?


Bugs in your custom ABAP code can be quite expensive when they impact critical business processes, which is why quality assurance of custom ABAP code is receiving more and more attention. Detecting bugs early in the development stage, before they can be moved across the landscape, ensures that the cost and risk impact is minimal. To reach this goal, SAP offers the ABAP Test Cockpit (ATC) and the Code Inspector as quality assurance tools.

 

The ATC is available with EhP2 for SAP NetWeaver 7.0 support package stack 12 (SAP Basis 7.02, SAP Kernel 7.20) and EhP3 for SAP NetWeaver 7.0 support package stack 5 (SAP Basis 7.31, SAP Kernel 7.20).

 

General process for releasing a transport request

The Transport Organizer is a tool for managing the objects that gather the changes made during the development and configuration phases, and for transporting them across the landscape. The two kinds of objects used are the Request and the Task.

The Request is the main container, which contains zero to any number of Tasks. The CTS automatically creates one task for each user who adds objects to the Request, so an ABAP transport request may contain many tasks assigned to different users. When you want to transport the Request, you first have to release all the tasks of the request and then the request itself. When it is released, the transport is performed automatically or manually by the administrator. The transport goes to the systems and clients defined in the transport routes.

 

Current behavior of Code Inspector checks during the release of a transport request or a transport task

Releasing a transport request or a task can be considered the first quality gate to ensure that poor-quality custom code is not transported across the landscape. Currently, Code Inspector checks can be activated during the release of a transport request. To activate this feature, perform the following steps:

  1. Go to transaction SE03
  2. Double click on the entry 'Global Customizing' (Transport Organizer)
  3. Under 'Check Objects when Request Released', select the option 'Globally Activated'.

 

This activates the check for a transport request. But there may also be the requirement to check the individual 'tasks'. Currently, automatically triggering Code Inspector checks during the release of a 'task' is not available as standard. To address this requirement, SAP provides the standard BAdI 'CTS_REQUEST_CHECK', which can be implemented by customers to trigger Code Inspector checks during the release of a task.

 

In this blog, I will illustrate the steps required to implement the BAdI, which when activated will trigger the checks during the release of a task.

(Please adapt the naming conventions, texts, BAdI names, class names, etc. to your requirements.)

Steps for triggering Code Inspector Checks during the release of tasks

  1. Go to transaction SE19
  2. In the 'Create Implementation' box, provide the name of the classic BAdI 'CTS_REQUEST_CHECK' and click on the button 'Create Impl.'

Image_1.png

   3. Provide a BAdI implementation name

badi_impl_name.png

   4. Provide a short text and click on the 'Save' button. Provide the package details when prompted.

badi_short_text.png

   5. Double-clicking on the method 'check_before_release' of the BAdI interface takes you to the method implementation of the generated ABAP Objects class that was created during the BAdI implementation creation.

5_badi.png

   6. In the method 'CHECK_BEFORE_RELEASE', first check whether the release concerns a transport request or a transport task. Code the following portion in the method:

Code1.PNG
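
In case the screenshot is hard to read, here is a hedged sketch of what this check could look like. It assumes the BAdI method's importing parameter REQUEST holds the transport number; a task can be recognized because its parent request is stored in E070-STRKORR, which is empty for a request. The call to the private method SCI_CHECK (created in step 7) is only indicative - adapt it to the parameters you define there.

    METHOD if_ex_cts_request_check~check_before_release.
      " Sketch only: a task carries its parent request in E070-STRKORR,
      " while a request has this field empty.
      DATA lv_strkorr TYPE e070-strkorr.

      SELECT SINGLE strkorr
        FROM e070
        INTO lv_strkorr
        WHERE trkorr = request.

      IF sy-subrc = 0 AND lv_strkorr IS NOT INITIAL.
        " We are releasing a task -> trigger the Code Inspector check
        " via the private method SCI_CHECK from step 7.
        me->sci_check( ).
      ENDIF.
    ENDMETHOD.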

   7. For calling the actual Code Inspector check, create a new private method SCI_CHECK in the class 'ZCL_IM__CTS_REQUEST_CHECK'.

  sci_chk1.png

     Provide the following parameters for the method SCI_CHECK:
sci_check2.png

     Create the method exception:

sci_chk3.png

  8. The rest of the method SCI_CHECK contains the various steps of creating the Code Inspector check, assigning variants, object sets, etc. It is sufficient to copy the piece of code from the attachment 'sci_check.txt.zip'.


  9. Finally, create the message class 'ZSCI' with the following values:
msg_class.png

  10. Save and activate all your changes. Do not forget to activate the BAdI implementation in transaction SE19.

 

* Deactivate the BAdI in SE19 if you do not wish to use this feature.

 

 

 

