
Rolling Out the ABAP Test Cockpit - A First Experience Report


While there are quite a few good documents on SDN about setting up the ABAP Test Cockpit (ATC) (cf. http://scn.sap.com/docs/DOC-32791 and http://scn.sap.com/docs/DOC-32628), I haven't seen any experience reports about an ATC roll-out yet. Therefore I decided to blog about my current experiences rolling out the ATC in our development organization.

 

Step 0: Some Background

Before describing what we did in our ATC roll-out, I want to give you some background about the environment. At my company we manage and maintain several SAP system landscapes for different customers. A typical customer landscape consists of a SAP CRM, a SAP IS-U (ERP) and a SAP BW system, together with several non-SAP systems (e.g. an output management system and an archive system). In addition, we have a central development system which is used to develop core functionality and distribute it across the customer systems. These core functionalities are typically developed in our own namespace. Therefore, each of our customer systems contains a set of custom developments in the customer namespace and a set of developments in our own namespace.

The second important aspect of our environment is the diversity of the developers working in the systems. Firstly, we have a core development team. This team consists of people with deep knowledge of software development, mostly with some formal training in the area (e.g. a computer science degree). Secondly, we have a team of functional consultants with a wide range of development skills, from basic ABAP knowledge to very deep knowledge. And finally, we usually have several external consultants developing in the different customer systems as well.

As you might have guessed, the result of this environment is a quite diverse code base, containing anything from well-designed, reusable components to unmaintainable one-time reports.

 

Step 1: Analysis of our Custom Code

The first step I took in order to roll out ATC was to perform an initial check run using a default check variant, both in the customer system with the largest code base and in our central development system. The result of this first analysis was quite disillusioning: the first run of the default check variant across this code base resulted in roughly 700 priority 1 errors, 2,500 priority 2 errors and nearly 10,000 priority 3 errors.

 

Step 2: Discussion within the Core Developer Team

The next step was to discuss the check results with the core development team. This discussion basically consisted of two parts.

 

Firstly, when I presented the tool, everyone agreed that it would be very useful and that we should use it. When we then had a detailed look at the check results from the two systems, the mood was not that positive any more. The main criticism concerned the errors raised by the ATC. Especially some of the more common errors led to quite some discussion about whether the reported error was really an error or rather a false positive. Furthermore, it turned out that some of the default checks are simply not valid in our system landscape. An example is the Extended Program Check's test for conditions that are always false. In the context of SAP IS-U, the pattern "IF 1 = 2. MESSAGE ..." is used extensively throughout the SAP standard. Consequently, it is also widely used in our custom code. However, the Extended Program Check reported each of these IF statements. The reason is that the check only accepts the pattern "IF 0 = 1. MESSAGE ...".
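
To illustrate the pattern (message class and number are invented for the example):

  IF 1 = 2. MESSAGE e001(zmsg). ENDIF. " never executed at runtime

As far as I know, the statement is intentionally unreachable; it only serves to register the message in the where-used list for messages that are raised dynamically elsewhere. The Extended Program Check, however, only tolerates the "IF 0 = 1" variant, so every IS-U style occurrence of "IF 1 = 2" gets reported.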

 

Secondly, we discussed extensively how we should approach the large number of issues in our code base. It was obvious that we wouldn't be able to fix all reported issues, and it would not have been very sensible either. One reason is that a lot of the programs for which issues were reported might not be in use any more.

 

As a result of the discussion we decided to:

  • define a custom check variant including only the relevant checks
  • define a custom object set.

 

Step 3: Definition, Testing and Rollout of a custom Check Variant

The next step we took was the definition of a custom check variant. This process consisted of several parts. We started by defining an initial set of checks that we wanted to use. Furthermore, we adjusted the priorities of the checks to our needs. It's pretty obvious that each error that might cause a short dump needs to be a priority 1 error. For other checks, however, the correct priority is not that clear. Consider, for example, the check for a missing WHERE clause in a SELECT. A program containing such a statement might cause severe performance problems in production if it is executed on a large table; nevertheless, it might be fine in a small correction program that is only executed once. Last but not least, we modified some of the default checks (cf. the IF 1 = 2 pattern mentioned above) to suit our needs. Unfortunately, modifying the default checks required a modification of the SAP standard in some cases.
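
To make that example concrete, this is the kind of statement such a check flags (BKPF, the accounting document header table, is just an illustration of a typically large table):

  DATA lt_headers TYPE STANDARD TABLE OF bkpf.

  SELECT * FROM bkpf INTO TABLE lt_headers. " no WHERE clause: reads the whole table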

After the initial definition of the check variant we set up daily check runs in the QA system, including the replication of the results into the development system. With this setup we worked for some weeks and iteratively refined our check variant.

 

Step 4: Definition of a custom Object Set

Besides the executed checks, we also needed an approach to cope with the large number of errors present in our code base. We decided that from now on we only wanted to transport objects without any priority 1 or priority 2 errors into the production system. However, we also decided that we didn't want to correct legacy code unless we were modifying it anyway (for example as a result of a bug fix or a new feature request). Therefore we created a custom object set and a custom object collector. The custom object collector only adds objects to the object set if they have been modified after a certain date. This way we get check results only for new or recently modified objects.
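
A minimal sketch of the kind of date filter such a collector can apply (Code Inspector object collectors derive, as far as I know, from CL_CI_COLLECTOR_ROOT; using REPOSRC-UDAT as the last-change date is an assumption that works for programs only, other object types need their own lookups):

  DATA: lt_prognames     TYPE STANDARD TABLE OF progname,
        lv_changed_after TYPE d VALUE '20140101'. " hypothetical cut-off date

  * Collect only custom programs whose active source was changed after the cut-off
  SELECT progname FROM reposrc INTO TABLE lt_prognames
         WHERE progname LIKE 'Z%'
         AND   r3state  = 'A'
         AND   udat    >= lv_changed_after.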

Note that this approach has an important drawback. If, for example, the interface of a method is changed (e.g. by adding an additional required parameter), this might cause a syntax error in some other program using the class. With our custom object collector, however, ATC will not be able to find this error, as the program using the class is itself unchanged. Nevertheless, this was the approach we chose to cope with the large amount of legacy code.

 

Step 5: Rollout across all Developers

After the core development team had been working with the described setup for a while, we were quite comfortable with the results that the ATC produced. Therefore we decided to roll out the ATC to all developers working in our systems. This was done by informing everybody about the ATC and by setting up the execution of the ATC checks upon release of a transport request. Note that, for now, we only executed the checks upon release of a transport but did not block any transports because of ATC errors.

As a result of executing ATC upon the release of a transport request, basically every developer was immediately using ATC, even if they had not integrated it into their workflow yet. This proved very successful, especially with the less experienced developers. As the ATC provides useful explanations together with each error, it triggered quite some discussion and learning about good ABAP code that wouldn't have happened otherwise.

 

Summary and Next Steps

After working with the described setup for a few weeks now, the roll-out of ATC has proved quite successful in our development organisation. Especially the detailed documentation of the ATC errors helps to improve the knowledge across the organisation. With respect to the roll-out, I think the involvement of the core developers from the very beginning was very important. Only because we agreed on a set of ATC checks, sometimes only after a few discussions, does everyone accept the raised errors and fix them. Had we simply used the default check variants without the adaptations mentioned above, I don't think the ATC would have been accepted as a tool to improve code quality (e.g. due to the large number of false positives).

 

The next step we will take is the roll-out of the ATC exemption process in our development organisation. The reason is that we have already noticed that some priority 2 errors can't be fixed due to various restrictions (e.g. usage of standard SAP functionality in custom code that leads to error messages). Therefore we need the exemption process in order to remove the errors in those special cases. Furthermore, I see the exemption process as a prerequisite for blocking the release of transport requests as long as ATC errors are present.

 

Finally, I'd be happy to discuss experiences with other ATC users.

 

Christian


Source Code Inspector : Check objects according to creation date


Summary

This blog is about changing the way the Source Code Inspector (transaction SCI) works, especially when Transport Organizer integration is activated. Transport Organizer integration can be activated using transaction SE03. Thanks to SAP for this talented and flexible tool.

 

Problem

In one of our projects we needed to separate the SCI checks according to the creation date of objects. We needed this because the aforementioned project was started 12 years ago and, as you can guess, quality and security standards have changed over time. At some point the integration of SCI/ATC and the Transport Organizer (SE01) was activated, so developers cannot release a request before handling the errors reported by SCI. But how can you force a developer who made just a single line of change to a huge program? How can he or she handle all the errors reported by the checks without knowing the semantics of this huge program? What if this change has to be transported to the production system immediately? The solution was to separate the check variants according to the creation date of objects.

Also, periodic checks can be planned to bring old objects up to the new standards.

 

In this blog I will try to explain what I did to work around this issue. To benefit from the solution you should be familiar with adding your own test class to SCI.

You can find information about adding your own test class to SCI at http://scn.sap.com/community/abap/blog/2006/11/02/code-inspector--how-to-create-a-new-check and http://wiki.scn.sap.com/wiki/download/attachments/3669/CI_NEW_CHECK.pdf?original_fqdn=wiki.sdn.sap.com

 

Solution summary

First, I created a test class ZCL_SCI_TEST_BYDATE (derived from CL_CI_TEST_ROOT) that has just two parameters: a date (mv_credat) and a check variant (mv_checkvar). This class decides whether the tests in mv_checkvar are required for the object under test by checking its creation date. If the object is 'new', it runs the additional tests.

 

Secondly, I created two SCI check variants: BASIC_VARIANT and EXTENDED_VARIANT. The first one is for old development objects, the second one holds the additional tests for 'new' objects. 'New' means the object was created after a certain date (ZCL_SCI_TEST_BYDATE->mv_credat). The first check variant includes my custom test mentioned above (ZCL_SCI_TEST_BYDATE), with EXTENDED_VARIANT given as the mv_checkvar parameter. The second check variant is complementary to the first one and includes different tests.

 

Finally, to enable navigation by double-clicking on the check results, I had to make one simple repair and two enhancements.

 

Step 1: The ZCL_SCI_TEST_BYDATE class


 

The most important method of this class is - naturally - run( ).

The run( ) method checks whether the object was created after mv_credat, gets the test list for EXTENDED_VARIANT and starts a new test run for that test list.

 

METHOD run.
  DATA lo_test_ref TYPE REF TO cl_ci_tests.

* Check whether the object was created after mv_credat
  IF me->is_new_object( ) NE abap_true.
    EXIT.
  ENDIF.

  me->modify_insp_chkvar( RECEIVING eo_test_list = lo_test_ref ).

* RUN_BEGIN
  lo_test_ref->run_begin(
    EXPORTING
      p_no_aunit    = 'X'
      p_no_suppress = 'X'
      p_oc_ignore   = 'X' ).

* RUN
  lo_test_ref->run( p_object_type  = object_type
                    p_object_name  = object_name
                    p_program_name = program_name ).

* RUN_END
  lo_test_ref->run_end( ).
ENDMETHOD.
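
The run( ) method relies on is_new_object( ), which is not listed above. A minimal sketch of a possible implementation for programs (the returning parameter name rv_is_new is my assumption; REPOSRC-CDAT holds the creation date of a program's source, and other object types would need their own lookups):

METHOD is_new_object.
* program_name is inherited from CL_CI_TEST_ROOT, as used in run( ) above
  DATA lv_cdat TYPE d.
  SELECT SINGLE cdat FROM reposrc INTO lv_cdat
         WHERE progname = program_name
         AND   r3state  = 'A'. " active version
  IF sy-subrc = 0 AND lv_cdat >= mv_credat.
    rv_is_new = abap_true.
  ENDIF.
ENDMETHOD.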

 

Another important method is modify_insp_chkvar, which returns the test list for EXTENDED_VARIANT.

 

METHOD modify_insp_chkvar.
* Returns the test list for mv_checkvar (EXTENDED_VARIANT).
* This method also combines the test lists of BASIC_VARIANT and EXTENDED_VARIANT
* on INSPECTION. Only needed when check results are double-clicked.
* (I could not handle it with the MESSAGE event of CL_CI_TEST_ROOT.)
  DATA lo_check_var      TYPE REF TO cl_ci_checkvariant.
  DATA lo_check_var_insp TYPE REF TO cl_ci_checkvariant.
  DATA lt_var_test_list  TYPE sci_tstvar.
  FIELD-SYMBOLS: <l_var_entry> TYPE sci_tstval.

  CLEAR eo_test_list.
* Get reference for EXTENDED_VARIANT - additional checks for new objects
  cl_ci_checkvariant=>get_ref(
    EXPORTING
      p_user = ''
      p_name = mv_checkvar
    RECEIVING
      p_ref  = lo_check_var
    EXCEPTIONS
      chkv_not_exists   = 1
      missing_parameter = 2
      OTHERS            = 3 ).
  IF sy-subrc NE 0.
    MESSAGE e001(z_sci_msg) WITH mv_checkvar description. "Check variant &1-&2 does not exist.
  ENDIF.

  IF lo_check_var->variant IS INITIAL.
    lo_check_var->get_info(
      EXCEPTIONS
        could_not_read_variant = 1
        OTHERS                 = 2 ).
    IF sy-subrc NE 0.
      EXIT.
    ENDIF.
  ENDIF.

* Get test list of EXTENDED_VARIANT - additional checks for new objects
  cl_ci_tests=>get_list(
    EXPORTING
      p_variant = lo_check_var->variant
    RECEIVING
      p_result  = eo_test_list
    EXCEPTIONS
      invalid_version = 1
      OTHERS          = 2 ).
  IF sy-subrc NE 0.
    EXIT.
  ENDIF.
*...
ENDMETHOD.

 

 

That covers the important points about my custom class definition. I have attached the full source code.

If you want to add parameters to your custom test classes, look at the query_attributes, get_attributes and put_attributes methods of ZCL_SCI_TEST_BYDATE.

 

To add the new test class to the SCI test list, I opened SCI -> Management of Tests, selected my new test class and clicked the save button.

 

 

Step 2: Check variants

As I mentioned before, I created two check variants. Below is BASIC_VARIANT, which is valid for all programs. The selected test list in the figures below is just an example. Notice that my new test 'Additional tests for new programs' is selected. The parameters of the new test can be seen in this picture.

 

 

The next picture depicts the second check variant, which is valid for objects created after '01.01.2014' (mv_credat).

 

 

PS: SE01 uses the SCI check variant TRANSPORT by default, but there is a way to change this - thanks to SCI. I replaced the default check variant with my BASIC_VARIANT by changing the record in table SCICHKV_ALTER which has 'TRANSPORT' in the CHECKVNAME_DEF field.

Note that, AFAIK, the DEFAULT check variant is used by SE80, so it can be changed the same way.


Step 3: Adding the check results of the custom test class to SCI

 

(This step is not related to the main idea; the first two steps are sufficient to express it.)

 

After the creation of the new test class and check variants I was able to run additional checks for new objects, but the SCI result list did not navigate to EXTENDED_VARIANT's test results when I double-clicked. My guess is that SCI is only aware of BASIC_VARIANT's test list and cannot navigate to an unknown test's results; I had to add my additional tests to the inspection object's test list.

I made a single-line repair (CL_CI_INSPECTION->EXECUTE_DIRECT) and an enhancement to CL_CI_TESTS->GET_LIST. The aim of these modifications is to fill the 'inspection' property of ZCL_SCI_TEST_BYDATE. (ZCL_SCI_TEST_BYDATE inherits a property named inspection from CL_CI_TEST_ROOT, but it is empty while the tests are running. I don't know whether that is a bug or not.)

 

PS: The CL_CI_TEST_ROOT class has a method 'inform' and an event MESSAGE, but I was not able to pass my additional check results to the SCI result list that way. I will keep working on this; if it works out, step 3 will become unnecessary.

 

CL_CI_TESTS->GET_LIST enhancement

 

 

* Class CL_CI_TESTS, Method GET_LIST, End
*$*$-Start: (1)---------------------------------------------------------$*$*
ENHANCEMENT 1 z_sci_enh_imp2. "active version
  IF p_inspection IS NOT INITIAL.
    p_result->inspection = p_inspection.
    LOOP AT p_result->list INTO l_ref_test.
      l_ref_test->if_ci_test~inspection = p_inspection.
    ENDLOOP.
  ENDIF.
ENDENHANCEMENT.
*$*$-End: (1)-----------------------------------------------------------$*$*

CL_CI_INSPECTION->EXECUTE_DIRECT repair:

*...
CALL METHOD cl_ci_tests=>get_list
  EXPORTING
    p_variant    = chkv->variant
    p_inspection = me   "added line
  RECEIVING
    p_result     = l_test_ref
  EXCEPTIONS
    invalid_version = 1
    OTHERS          = 2.
*...

Portal & Backend behavior testing: Behave Alike Testing


I have used this testing technique during one of my test phases, where we were testing portal applications.

This test technique is applicable where we have a portal application and equivalent functionality in R/3 (back-end) as well.

I will take examples from EAM, where we have both portal and R/3 transactions available to create/change/display the objects Equipment, Functional Location, Orders, Task List, Notifications etc.


Portal applications have their own benefits; end users need not remember all the transactions. But at the same time, the functionality must behave the same whether it is launched from the portal or from R/3 transactions.

We tested different combinations and ensured that the functionality behaves in the same manner in all cases. Wherever it deviates from the expected behavior, we can analyze further and report an issue.

If we test both (R/3 and portal) separately, without comparison, it is difficult to validate the exact expected behavior.

 

Prerequisites:  

  • Portal configuration for the system should already be taken care of.
  • User Roles

 

I have described below a few aspects of the functionality which we should test.

A few combinations which we validated were:


Open the object in change mode in the portal and try changing it in R/3, and vice versa.

Expected result: the object should be locked and not available for changes.


Change some customizing in R/3 and check the impact in the portal.

Expected result: the customizing change should have an impact on the portal too.


Create an object in the portal and check in R/3 transactions/tables, and vice versa.

Expected result: objects created in the portal should be available in the database tables of the back-end system.


Set the object status to inactive in R/3.

Expected result: the status should be updated in the portal for the respective object, and we should not be able to change it any further.


There are many other cases/combinations which can be compared. With this test technique we can ensure the functionality is robust and identical, and does not change its behavior with a change in test environment or technology.


This article might be useful for testers who are testing portals and should help them design their testing even better.

I will further share my findings and new ways of testing new functionality from my future test phases.


Data-driven testing with ABAP Unit


In this blog I would like to describe the idea of data-driven testing and how it can be implemented in ABAP Unit.

 

Data-driven testing is used to separate test data and expected results from the unit test source code.

It allows running the same test case on multiple data sets without the need to modify the test code.

 

It does not replace techniques such as test doubles and mock objects. It is still a good idea to abstract your business logic in a way that allows you to test independently of data. But even if your code is built that way, you can still benefit from parametrized testing and the ability to check many inputs against the same code.

It is particularly useful for methods which implement more complex computational formulas and algorithms. The input space is very wide in such cases and there are many boundary cases to consider. It is then easier to maintain the test data outside of the code.

 

Other xUnit frameworks, like NUnit for .NET or JUnit for Java, provide built-in capabilities to run parametrized test cases and implement data-driven testing.

I was missing such features in ABAP Unit and started looking for potential solutions.

 

The solution which I will present is based on eCATT test data containers and the eCATT API.

eCATT test data containers are used to store the input parameters and expected results. ABAP Unit is used as the execution framework for the unit tests.

 

For the sake of example, let's take a simple class with a method which determines the triangle type.

It returns:

  • 1 for Scalene (no two sides are the same length)
  • 2 for Isosceles (two sides are the same length and one differs)
  • 3 for Equilateral (all sides are the same length)

and raises an exception if the provided input is not a valid triangle.

 

METHODS get_type
  IMPORTING
    a TYPE i
    b TYPE i
    c TYPE i
  RETURNING VALUE(triangle_type) TYPE i
  RAISING lcx_invalid_param.
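
The implementation itself is not the point of this blog, but for completeness here is a straightforward sketch of get_type (assuming constants c_scalene = 1, c_isosceles = 2 and c_equilateral = 3, as used in the tests below):

METHOD get_type.
  " not a valid triangle: a non-positive side, or one side not shorter
  " than the sum of the other two
  IF a <= 0 OR b <= 0 OR c <= 0 OR
     a + b <= c OR a + c <= b OR b + c <= a.
    RAISE EXCEPTION TYPE lcx_invalid_param.
  ENDIF.
  IF a = b AND b = c.
    triangle_type = c_equilateral.
  ELSEIF a = b OR b = c OR a = c.
    triangle_type = c_isosceles.
  ELSE.
    triangle_type = c_scalene.
  ENDIF.
ENDMETHOD.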

Now we proceed with creating unit tests.

There are two typical approaches:

- Creating a separate test method for each test case

- Bundling test cases in a single method with multiple assertions

 

Usually I'm in favor of the first approach, as it provides a better overview in the test logs when some of the test cases fail. It is also easier to debug a single test case.

 

An example test case could look like this:

...
METHODS test_is_equilateral FOR TESTING.
...

METHOD test_is_equilateral.
  cl_abap_unit_assert=>assert_equals(
      act = lcl_triangle=>get_type( a = 3
                                    b = 3
                                    c = 3 )
      exp = lcl_triangle=>c_equilateral ).
ENDMETHOD.

Each time we want to add coverage and test some additional inputs, either a new test method has to be created or a new assertion has to be added.

 

To overcome this we create a test data container in transaction SECATT.

(Screenshot: test data container ZTRIANGLE_TEST_01 in transaction SECATT)

 

And define test variants

 

(Screenshot: test variants defined in the test data container)
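
Each variant holds one data set; conceptually something like this (illustrative names and values):

Variant          A  B  C  EXP_TRIANGLE_TYPE
SCALENE_OK       3  4  5  1
ISOSCELES_OK     3  3  4  2
EQUILATERAL_OK   3  3  3  3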

 

In the ABAP code we define a test method which uses the eCATT API class CL_APL_ECATT_TDC_API to retrieve the variant values:

 

METHOD test_get_type.
  DATA: a        TYPE i,
        b        TYPE i,
        c        TYPE i,
        exp_type TYPE i.

  DATA: lo_tdc_api  TYPE REF TO cl_apl_ecatt_tdc_api,
        lt_variants TYPE etvar_name_tabtype,
        lv_variant  TYPE etvar_id.

  lo_tdc_api  = cl_apl_ecatt_tdc_api=>get_instance( 'ZTRIANGLE_TEST_01' ).
  lt_variants = lo_tdc_api->get_variant_list( ).

  " skip default variant
  DELETE lt_variants WHERE table_line = 'ECATTDEFAULT'.

  " execute test logic for all data variants
  LOOP AT lt_variants INTO lv_variant.
    get_val: 'A' a,
             'B' b,
             'C' c,
             'EXP_TRIANGLE_TYPE' exp_type.

    cl_abap_unit_assert=>assert_equals(
        exp  = exp_type
        act  = lcl_triangle=>get_type( a = a
                                       b = b
                                       c = c )
        quit = if_aunit_constants=>no ).
  ENDLOOP.
ENDMETHOD.

 

...
DEFINE get_val.
  lo_tdc_api->get_value(
      EXPORTING
        i_param_name   = &1
        i_variant_name = lv_variant
      CHANGING
        e_param_value  = &2 ).
END-OF-DEFINITION.

In my project I ended up creating a base class for parametrized unit tests which takes care of reading the variants and running the test methods.

It has one method which does all the work:

 

METHOD run_variants.
  DATA: lt_variants TYPE etvar_name_tabtype,
        lo_ex       TYPE REF TO cx_root.

  "SECATT Test Data Container
  TRY.
      go_tdc_api = cl_apl_ecatt_tdc_api=>get_instance( imp_container_name ).
      " Get all variants from test data container
      lt_variants = go_tdc_api->get_variant_list( ).
    CATCH cx_ecatt_tdc_access INTO lo_ex.
      cl_aunit_assert=>fail(
          msg  = |Variant { gv_current_variant } failed: { lo_ex->get_text( ) }|
          quit = if_aunit_constants=>no ).
      RETURN.
  ENDTRY.

  "skip default variant
  DELETE lt_variants WHERE table_line = 'ECATTDEFAULT'.

  " execute test method for all data variants
  " method should be parameterless and public in the child unit test class
  LOOP AT lt_variants INTO gv_current_variant.
    TRY.
        CALL METHOD (imp_method_name).
      CATCH cx_root INTO lo_ex.
        cl_aunit_assert=>fail(
            msg  = |Variant { gv_current_variant } failed: { lo_ex->get_text( ) }|
            quit = if_aunit_constants=>no ).
    ENDTRY.
  ENDLOOP.
ENDMETHOD.

The modified test class using this approach looks as follows:

 

CLASS ltc_test_triangle DEFINITION FOR TESTING DURATION SHORT RISK LEVEL HARMLESS
  INHERITING FROM zcl_zz_ca_ecatt_data_ut.
  PUBLIC SECTION.
    METHODS test_get_type FOR TESTING.
    METHODS test_get_type_variant.
    METHODS test_get_type_invalid_tri FOR TESTING.
    METHODS test_get_type_invalid_tri_var.
ENDCLASS.

CLASS ltc_test_triangle IMPLEMENTATION.
  METHOD test_get_type.
    "run method TEST_GET_TYPE_VARIANT for all variants from container ZTRIANGLE_TEST_01
    run_variants(
        imp_container_name = 'ZTRIANGLE_TEST_01'
        imp_method_name    = 'TEST_GET_TYPE_VARIANT' ).
  ENDMETHOD.

  METHOD test_get_type_variant.
    DATA: a        TYPE i,
          b        TYPE i,
          c        TYPE i,
          exp_type TYPE i.

    get_val: 'A' a,
             'B' b,
             'C' c,
             'EXP_TRIANGLE_TYPE' exp_type.

    cl_abap_unit_assert=>assert_equals(
      exp  = exp_type
      act  = lcl_triangle=>get_type( a = a
                                     b = b
                                     c = c )
      quit = if_aunit_constants=>no
      msg  = |Wrong type returned for variant { gv_current_variant }| ).
  ENDMETHOD.

  METHOD test_get_type_invalid_tri.
    "run method TEST_GET_TYPE_INVALID_TRI_VAR for all variants from container ZTRIANGLE_TEST_02
    run_variants(
        imp_container_name = 'ZTRIANGLE_TEST_02'
        imp_method_name    = 'TEST_GET_TYPE_INVALID_TRI_VAR' ).
  ENDMETHOD.

  METHOD test_get_type_invalid_tri_var.
    DATA: a TYPE i,
          b TYPE i,
          c TYPE i.
    get_val: 'A' a,
             'B' b,
             'C' c.
    TRY.
        lcl_triangle=>get_type( a = a
                                b = b
                                c = c ).

        cl_abap_unit_assert=>fail(
            msg  = |Expected exception not thrown for invalid triangle - variant { gv_current_variant }|
            quit = if_aunit_constants=>no ).
      CATCH lcx_invalid_param.
        " OK - expected
    ENDTRY.
  ENDMETHOD.
ENDCLASS.

 

As you can see, with this approach it's very easy to create parametrized test cases where the data is maintained in an external container. Adding new cases only requires adding a new variant to the TDC.

It proved to be very useful for test cases checking complex logic, where multiple input sets need to be covered.

 

There are also some challenges with this approach:

- you need to remember to pass quit = if_aunit_constants=>no in assertions, otherwise the test will stop at the first failed variant

- in the ABAP Unit results report only one method is visible, and it does not reflect the number of variants tested

 

For those challenges I would love to see some improvements in future versions of ABAP Unit, similar to what is available in other xUnit frameworks.

Ideally there should be a way to provide the variants in a declarative way, and they should be visible as separate nodes in the test run results.

 

Kind regards,

 

Tomasz

SAP Consulting X issues


By  Ramesh Vodela

A couple of months back I wrote a blog in the interoperability section (mobile development with C# and Xamarin). I felt it was too technical, and I wanted to write a blog that is fun to read but also helps readers, and where readers can participate to help others. I titled this blog SAP Consulting X issues (like the X-Files) as I found some of these issues quite strange. However, the issues I list here as X issues could be N issues (normal issues) for others. I would really encourage others to declassify my X issues as their N issues (if they have the answer), or to raise new X issues, so that readers can benefit by being aware of some issues and work out a suitable solution, or avoid a potentially time-consuming issue.

 

X1) In 1996 I was given a SAP help CD (my first exposure to SAP; I'm a developer) and randomly clicked on a topic, which turned out to be the Special Purpose Ledger (FI configuration). I came to the US in 1997 through a consulting company and went to my first project at Hershey (PA), the Hershey Canada project, to develop Report Painter reports. I was in the FI team. In the first team meeting there was an issue that was becoming critical (to do with multi-currency reporting). Prior to my arrival the team had collected about 12 possible solutions. I suggested using the Special Purpose Ledger to create a ledger with the required data, and this became the 13th solution. The idea was accepted for a trial, and I was given a sandbox to try it out. I configured the SPL and could populate all the fields except two, which required the use of ABAP exits. As a developer I thought this would be easy, since I had already done the configuration, which was not even my skill set. I wrote the exits and configured the ABAP program as described in the documentation. But no matter what I did, control never reached the exit, and hence the two fields could not be populated (batch population was not accepted). The manager was obviously disappointed. Some colleagues used to call me SAP ALL because, although I was a developer, I showed interest in functional modules - from SAP ALL I went to SAP NONE. After this I went back to the job I had come to Hershey for and developed 50 Report Painter reports. The Hershey Canada project went live, and there was a party for the go-live. My project ended; the next phase was Hershey US, which was to start later.

 

PS1) In late 2001 I was watching CNN news and heard that there were problems with the SAP implementation which affected the share price.

PS2) Sydney, 2003 - I was asked by a professor in Accounting to configure and document the Special Purpose Ledger. I had the exact same document I had used at Hershey. I did the configuration and wrote the exits as well - and the exit worked the very first time, with the exact same steps I had used at Hershey. I was dumbfounded and tried to search for an answer on the net. I am not 100% sure of the accuracy of what I read, which was: "There is a Basis setting that actually makes sure that flow control does not reach the exit". This was a strange finding.

 

X2) After Hershey, my consulting company sent me to another project in Wisconsin (1998). This project was about reporting with the Logistics Information System (could the client put off BW reporting and manage with LIS reporting?). Having faced the exit issue before, I made sure that all the exits were working in my company's system before heading off to Wisconsin. Again, in this project I configured LIS and wrote the exits - and again control was not reaching them. I spoke to the manager and we had decided to raise an OSS message - but before I could, the exits started working on their own. I find this strange as well.

 

X3) In 2006 I was doing application development with .NET C# and ABAP services - my ASP.NET screens invoked ABAP services. In one situation I was sending a char30 field to SAP. What I found in the debugger (I could step from ASP.NET into the ABAP code) was that one of the characters in the middle of the string was getting corrupted (not the same as the one sent from ASP.NET). This happened only with one particular FM. I had no explanation, but I could circumvent it by sending a duplicate variable which was not getting corrupted. I find this very strange.

 

X4) In 2013 I was developing ABAP in ECC with CRM and PI. Sales order creation starts in CRM and flows to ECC, and I had to make a number of enhancements in ECC to implement some rules. As there were different teams working, and to make troubleshooting easy, I created a Z table populated with some of the values CRM sends, so that if any issue came up I could classify it as a CRM issue or an ECC issue and the problem could be resolved. To populate the Z table I implemented an enhancement in the FM in ECC which is the first point of entry from CRM to ECC. After a few weeks I found the table was not being populated, and on close examination I found that the FM being called (the sales order creation FM) was totally different from the FM called before - the FM where I had been populating the table was not being called at all. The Basis people told me, after verifying the system, that they had made no changes. I find this issue strange as well.

 

If you have experienced such issues, do document them, as it will help others.

 

Automation.. lessons learnt


Ask a layman what he understands by "Automation" and the most expected answer is "Doing something automatically" .

     Right!!! When something is done without the intervention of a human, it is automation. And how would you answer "Why automation?" Is it because we trust machines more than humans, because machines can work tirelessly, or because they can do the same job tenfold faster?

The answer is "All of it and much more".

     Automation helps us with all of this. But keep in mind that we are humans, and 'to err is human'. What if the creator of this unit of automation (in our context, the automated script) does it the wrong way? The "wrong" would also get multiplied, and multiply faster than we can realize something is not right. The whole idea is to do it the right way, and from the very beginning. It is the small things we ignore in the initial stages which later manifest as huge problems when the automation happens at a large scale. Everything multiplies, including the mistakes we have made, and it becomes very difficult to correct them.

This is one of the reasons why some people still prefer manual testing: they think more time goes into the maintenance and correction of scripts, in addition to their creation and execution.

     The power of automation has often been undermined by a lack of organization, structure and method in script creation. An automated script is best utilized when it is most reliable and reusable. These two factors contribute towards easy execution (once the scripts are ready), maintenance (whenever there's a change in the application) and accuracy of results (when the scripts are executed).

 

     A reliable script can be created only when the tester has a good understanding of the application, its usage and the configuration behind it. This requires a lot of reading and investigation of the application to know how it behaves under a given circumstance. Once this is done, the script can be created such that it handles all possible application flows.

 

     A reusable script truly defines the meaning and purpose of automation. With a perfectly reusable script, further automation of upcoming applications becomes easier and faster. Maintenance is another takeaway from this attribute of a script. Reusability is a result of standardization of a script in all aspects, like structure and naming convention. Let us look at them individually and see how they add to the script's reusability.

 

Structure of an automated script: A well-structured script becomes easy to understand and adapt, especially for those who take it over from others. It makes the script crisp, without any unwanted coding. It is important to strictly limit the script to its purpose and keep only the absolutely necessary.

For example, the validation part, which can be done in many ways (message check, status check, table check, field check and so on), might not be required in every case. Also remember that a DB table check takes extra effort from the script to connect to the system and read the table. One execution may not make a difference, but on a large-scale execution it does matter.

 

Such additional coding needs to be identified and eliminated. Let us analyze the necessary coding according to the purpose of the test:

1.      Functional Correctness: Validation is required before and after the execution of the application to see how the transaction has affected the existing data.

                         Validation before test --> Execution of tcode under test  --> Validation after test

2.      Performance Measurement: Performance is considered only after the application has been tested for its functional correctness. Validation has no purpose here, as the focus of the test is non-functional

         Execution of tcode under test

3.      Creation of Data for Performance: Usually massive data is required for Performance measurement.

      For example, 1000 customers with 150 line items each… the same could be repeated for vendors, cost centers, and so on. Table checks on this scale of execution would create a huge load on the system, and it would take hours to create such data, maybe even days in some cases. It is best to avoid validations/table reads of any kind. Another point to keep in mind here is that using a function module or a BAPI to create data saves a lot of time and effort. A TCD recording or a SAPGUI recording should only be the last option.

                        Execution of tcode for data creation --> Validation after test

4.      Creation of data for system Setup: this is usually done on a fresh system, with no data. Hence verification only at the end would suffice.

                         Execution of tcode for data creation --> Validation after test

 

There is also a subtle aspect of being structured… the naming convention.

Testers usually tend to name their scripts to suit their own needs, ignoring the fact that these transactions can be used by anyone in an integrated environment. Searching for existing scripts becomes easy when proper rules are followed while naming them. It may happen that more than one script exists for the same purpose; such duplication has to be avoided. Attributes like the purpose (unit or integration testing, customizing or performance), the tcode executed, the action (change, create or delete) and the release need to be kept in mind while setting up the rules.

The same goes for parameters as well. Work becomes easier when binding the called script and the calling script (script references). Quick log analysis is another takeaway from common naming conventions for parameters.

There is another factor that makes automation more meaningful and complete in every sense: documentation. Documentation is a record of what exactly is expected of the script. Its importance is realized at the time of handover, maintenance and adaptation. 'Document creation' itself can be dealt with as a separate topic; the idea is that it should not be disregarded as unimportant.

Having done all this, we need to watch out for the scope of the test. With new functionality being developed on top of the old (e.g. enhancement of features), re-prioritization needs to be done regularly. Old functionality may not be relevant anymore, or may be stable enough to be omitted from the focus topics. This way the new features and developments get tested better.

Now let us summarize the write-up. The aspects mentioned above are not strictly indispensable; automation can still happen without any of these factors. However, the benefits we draw from them can make a huge difference to the time and effort of both automation and maintenance. Understanding a script authored by someone else, knowledge transfer, adaptation, corrections... these are just a few of the advantages.

The world of automation is very vast and its benefits still remain unexplored.

Attachment feature in eCATT


Documentation is an important aspect of scripting. Good documentation should always go hand-in-hand with the automation script, and it should clearly explain the whole purpose of the script. Moreover, nothing beats having this documentation easily accessible to the user. Normally the documents are stored in folders on local servers. If, for some reason, the server is down, these documents are not accessible. We might even lose the documents altogether if the server crashes.

 

The reason I'm writing this blog is to create awareness and share my experience with one of the useful features of eCATT, which allows attaching documents (usually eCATT specification/design documents) to the eCATT script. It provides the option to either attach the document directly or to provide a link to it. Once the documentation is attached, it is visible from the Test Catalog and also from the eCATT log file. Anybody who executes the eCATT script can easily find the documentation as part of the log file. This documentation serves as a ready reckoner and one-point reference for information about the script. It therefore helps the script executor understand what the script does, and also helps in troubleshooting any issues faced. Using this feature has helped me maintain the documents effectively, and it has freed up local server space. I no longer need to go searching for the script documentation. It has also immensely helped in easy and effective handover of the scripts to new joiners in the team.

 

Benefits:

  • Documentation is readily available along with the eCATT log file.
  • Helpful in script maintenance.
  • Easy Troubleshooting by comparing the log file with the document.
  • Quite useful during handover of automated scripts.
  • Frees up server space.

 

Limitations:

  • Consumption of eCATT storage space, if the documents are directly attached. Nevertheless, the document has to be stored somewhere, so why not with the script itself! To avoid this situation, a link to the documentation can be provided instead, but in that case the document has to be maintained on the local server.

 

Steps to be followed:

    1. Call transaction code SECATT and enter the eCATT test configuration name in the "Test Configuration" field.

    2. Navigate to the "Attributes" tab and then to the "Attachments" tab.

    3. Attach the document either as a file or as a link at the test configuration level. If you have maintained individual documents for each variant within the test configuration, you can attach one for each variant.


 

(Screenshot: Attachments tab in the eCATT test configuration)

  • Once the above steps are done, the documents will be visible in the Test Catalog.


(Screenshot: attached document visible in the Test Catalog)

  • The eCATT documentation is also available within the eCATT log file. Just click on the Doc icon to open the document.


(Screenshot: Doc icon in the eCATT log file)

Hope this information is useful. This has helped me and I am sure that this is going to help you as well.

STOP filling your Custom ABAP Code with Business hard coding


Updated: you may also want to check out the subsequent blog Get rid of Business Hard Code from your custom ABAP Code

 

Introduction

IMHO, "Business hard coding" is one of the worst and most underestimated ABAP programming practices.

Here is just an intro to the topic; in a subsequent blog you'll find useful stuff to help get rid of it.

 

I have always considered hard coding a really bad practice but, only recently, I got real evidence of how widely it is used. It happened during the Custom ABAP Code review services we're delivering at TechedgeGroup.

Hard coding requires the program's source code to be adjusted any time the context changes, and in business that happens quite often.

With "Business hard coding" I'm referring to the practice of hard coding strings (literals) corresponding to codes (IDs) related to Organizational Units, Document Types and even Master Data, and that is one of the worst kinds of hard coding.

 

Some examples are Company Codes, Purchase Organizations, Sales Organizations and Accounting document types.

(Sketch: hard-coded Company Codes)

 

Instead, I would not be too worried about "Technical hard coding", the practice of hard coding strings corresponding to technical stuff like dictionary objects and output formats (e.g. tables, fields, colors, icons).

 

In addition, hard-coded strings returned to end users as part of messages, titles and column headers belong to a different bad practice, related to the internationalization (i18n) topic.

 

A couple of examples

For a better comprehension, a couple of examples follow.

In the next picture, method ADDRESS_CONTROLS_IN contains two hard coded strings used to differentiate message severity. The first is related to Company Code and the second to Purchasing Group. Here hard code is even used generically to check everything starting with IN*.

(Screenshot: hard-coded checks in method ADDRESS_CONTROLS_IN)

I would guess that India has a specific business requirement.
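
Since the screenshot cannot be reproduced here, an illustrative reconstruction of that kind of code (all names and values are invented):

  IF iv_bukrs CP 'IN*' OR iv_ekgrp CP 'IN*'.
    lv_msgty = 'E'. " hard-coded: stricter message severity only for Indian org. units
  ELSE.
    lv_msgty = 'W'.
  ENDIF.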

 

In the next picture, method MANDATORY_VATCODE contains multiple hard-coded strings to differentiate message severity. The first is related to the Country Code, the next to the Company Code, and my favorite one verifies that the GL Account begins with '004'.

(Screenshot: hard-coded checks in method MANDATORY_VATCODE)

I would guess that Poland has a specific business requirement to be combined with a certain type of GL Accounts.



Why are real-life SAP systems full of Business hard code?

I'm sure most developers will justify the use of Business hard coding by explaining that they were in a hurry and there was no time to create a new customizing table or BRF+ rule. In part they are right: I know that customers (internal or external) often demand very fast results, and developers operate accordingly.

I also have evidence that a large number of developers consider Business hard coding the only way to go and let's say even a good practice.

When discussing with them, to demonstrate they are wrong, I like to point out that in hundreds of millions of lines of standard SAP code there is no occurrence of "Business hard coding" (to be honest, with very few exceptions like country codes and partner functions).

 

Why is Business hard coding so bad?

Probably the hard-coder (the author) will be proud to show his or her skills by solving issues and adjusting the business hard code only he or she is aware of (lock-in). Even if everything at customizing level is correct and identical to a working scenario, different behaviors of a transaction or report are often due to business hard code.

 

Time saved during the development phase will lead to much additional effort during the next roll-out, or at the next Merge & Split when business requirements change.

Where is Business hard coding acceptable?

In reality, Business hard coding is an acceptable practice in:

  • Throw-away objects
  • Short-lived projects

 

Maybe it can also be useful to classify the above exceptions by assigning the objects to specific throw-away Packages (Development Classes), similar to $TMP but transportable to production.

Speaking about serious and productive Custom ABAP Code, I'm sure you want to get rid of Business hard coding as soon as possible.

Best alternatives

In modern SAP systems there are many alternatives to Business hard coding, for example customizing tables and BRF+ rules.
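
As a simple illustration of the difference (table and field names are invented):

  * Business hard coding: the rule lives in the source code
  IF ls_doc-bukrs = 'IT01'.
    lv_check_active = abap_true.
  ENDIF.

  * Alternative sketch: the rule lives in a customizing table
  SELECT SINGLE active FROM zfi_check_act INTO lv_check_active
         WHERE bukrs = ls_doc-bukrs.

When the business context changes, the second version only needs a new table entry instead of a code change and a transport.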

Conclusions

Very soon I'm also going to share the way we at TechedgeGroup perform a full scan of custom ABAP code looking for Business hard coding, and I'm very interested to hear your experiences and ideas.


Get rid of Business Hard Code from your custom ABAP Code


In previous blog STOP filling your Custom ABAP Code with Business hard coding I started a discussion about a popular coding bad practice that affects most of the SAP ERP systems.

In a week, the blog got more than 2,000 visits and a 5-star rating, and the several interesting comments are even more valuable than the blog itself.

 

To get rid of Business hard code, I'm describing here a way to scan your SAP system (e.g. SAP ECC) and get a clear picture of its occurrences.

 

With the term Business hard code, I'm referring to the practice of hard coding strings (literals) corresponding to codes (IDs) related to Organizational Units or Document Types and even Master Data. Examples are Company Codes, Purchase Organizations, Sales Organizations, Accounting document types and also Country Codes.

(Sketch: hard-coded Company Codes)

Problem Domain

The ABAP Workbench provides a lot of tools to perform source code scanning.

Occurrences of a given literal (e.g. 'IT01') can easily be obtained via report RS_ABAP_SOURCE_SCAN or the Code Inspector check Scan for ABAP Tokens. In real life, this is the use case of a Split & Merge when, for example, one Company Code is going to be merged with another.


Here I try to solve the problem of obtaining the occurrences of any literal referring to business-related domains, without knowing in advance the values to be found.

 

Techedge@SCN ALM

Before diving deep into the solution, let me confirm the attitude we have at TechedgeGroup towards sharing stuff (for free) on SCN.

First, we are proud of the idea and first implementation of abap2xlsx by Ivan Femia. I think it is one of the most popular SCN projects in terms of downloads, usage and software contributors.

Specifically in the domain of Application Lifecycle Management (ALM), we have shared a number of ideas and tools with the community over the years.

 

 

Download and install

This time Techedge is sharing with the SCN community the product Doctor ZedGe - Hard!Code that you can get for free without worrying about license or expiration time.

Doctor ZedGe - Hard!Code is the Community Edition of the larger product Doctor ZedGe, which includes an advanced dashboard to analyze ABAP Test Cockpit results and publish them in nice-looking MS Excel reports, and also a specific ABAP report to download the ATC results, including the statements with issues, to MS Excel.

So, at the bottom of the page Doctor ZedGe | Techedge you will find instructions to order Doctor ZedGe. Simply ask for the Community Edition. You'll soon receive the comprehensive documentation and the complete source code, installable via a simple Copy & Paste.


This time we decided to distribute Doctor ZedGe - Hard!Code from our Techedge web site, and not only because Code Exchange has been closed.

Indeed, since we are delivering the software, we can assure enterprises that it is secure, well developed and well documented. We'll also provide technical support in case of issues.

 

Thanks to the step-by-step guides you will get something like the following ATC result in less than an hour.

(Screenshot: Hard!Code findings in the ABAP Test Cockpit result list)

 

 

Or, if you prefer, here is the result in ABAP in Eclipse (AiE):

(Screenshot: the ATC result in ABAP in Eclipse)

As you know, the ABAP Test Cockpit provides a handy statistics overview (top), the worklist (middle) and the finding detail (bottom). Navigation to the code (picture on the right) is a click away.

(Screenshot: ATC statistics overview, worklist and finding detail with code navigation)

 

How does it work?

The idea of this custom Code Inspector check is first to get the hard-coded string (literal), then discover the corresponding operand (context) and recognize if it refers to a Business entity.

(Sketch: determining the context operand of a literal)

As you know, ABAP syntax is very flexible, and the challenge is determining the context (the related operand) of a given literal. In the above example, the related operand of the literal '3000' is field LT_FILE-PLANT2; this time it is on the left of the operator '='.

In the case of '3200' it is instead GT_FILE-WERKS, which is on the right of the operator '='.

 

CASE and WHEN are even more challenging:

(Sketch: context determination for CASE ... WHEN)

In the above example, the context of both 3000 and 3200 is found by jumping back to the CASE statement to identify LT_FILE-PLANT2 as the context (related operand).
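
Expressed in code, the patterns the check has to resolve look like this (illustrative):

  IF lt_file-plant2 = '3000'. " context operand LT_FILE-PLANT2, left of '='
    ...
  ENDIF.

  IF '3200' = gt_file-werks.  " context operand GT_FILE-WERKS, right of '='
    ...
  ENDIF.

  CASE lt_file-plant2.
    WHEN '3000' OR '3200'.    " context only found by jumping back to the CASE
      ...
  ENDCASE.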

 

Literals

Literals are strings and represent the hard coding Anti-pattern.

The Code Inspector check Doctor ZedGe - Hard!Code includes a Literal length parameter, defaulted to consider literals with a length between 2 and 18.

Statements:

In this version, the analysis considers the following statements, which cover most of the scenarios:

  • CONSTANTS, DATA, STATICS or TYPES
  • COMPUTE including the implicit form like A = B
  • conditional statements IF, ELSEIF, CHECK or WHEN
  • MOVE or WRITE (for WRITE x TO y)
  • CALL, PERFORM or SUBMIT
  • Open SQL statements SELECT, INSERT, UPDATE or DELETE

Known limitations

It's easy to catch hard-coded strings (literals), but a naive scan yields a huge number of false positives that make it unusable.

It's not so easy, instead, to distinguish the literals related to business entities. After scanning millions of lines of code, we believe that Doctor ZedGe - Hard!Code can identify around 95% of the Business hard coding related to the following set of critical domains:

 

Domain   Description
BUKRS    Company Code
WERKS    Plant
EKORG    Purchase Org.
VKORG    Sales Organization
VTWEG    Distribution Channel
SPART    Division
LGORT    Storage Location
GSBER    Business Areas
WAERS    Currency
LAND1    Countries
MSEHI    Unit of Measurement
PARVW    Partner Function
KTOKD    Customer account groups
KTOKK    Vendor account groups
MTART    Material Type
AUART    Sales Document Type
LFART    Delivery Type
FKART    Billing Type
BSART    Purchasing Document Category
BWART    Movement Type
VSBED    Shipping conditions
PSTYP    Item Category in Purchasing Document
PSTYV    Sales document item category
KSCHL    Condition Type
BLART    Document Type (FI)
PLTYP    Price list type
MATNR    Material
KUNNR    Customer
LIFNR    Vendor
SKA1     G/L Account Master (Chart of Accounts)
BELNR    Accounting Document Number

 

Note that, in case you need it, it's very easy to extend the code to take other domains into account.

Keep in mind that, since we target an installation performed via one copy & paste (one ABAP class), we avoided in this version the use of tables to define the list of domains to be analyzed. We'll see in the future whether that makes sense.

 

In addition, since Doctor ZedGe - Hard!Code leverages the power of the ABAP Test Cockpit and Code Inspector, it also suffers from the same known limitations:

  • it works only on workbench objects belonging to a custom main object. It can analyze BADIs and Customer Exits (CMOD), but it cannot analyze ABAP code contained in user-exit includes (e.g. SAPMV45A is a SAP standard object, thus MV45FZ01 cannot be analyzed; as per Best Practices, such includes should just call custom Customer Exit FUNCTION MODULEs or custom BADIs)
  • it works on PROGRAMs (PROG), FUNCTION MODULEs (FUGR) and CLASSEs (CLASS) but not on SAPScripts and SMARTForms.

What's next?

Doctor ZedGe - Hard!Code could provide value not only to Developers and Quality Managers but also to Functional specialists, Team leaders, Project managers and even IT managers.

 

Here follows a list of possible use cases:

  • get a weekly system certification in terms of Business Process standardization (no hidden different behaviors)
  • before starting a roll-out, get the list of Business hard code that will require adjustments
  • during the handover phase, when the Project team describes to the AMS what has been realized, get the list of Business hard code and evaluate it accurately
  • to discourage the bad practice, you may want to add an ABAP Test Cockpit run at Change Request release

Test Driving the ABAP Test Double Framework


SAP has been doing some really good work upgrading its tools. We have recently upgraded to SAP_ABAP 740. I'm an advocate of ABAP Unit testing, and this upgrade gave me the opportunity to try an example with the new Test Double Framework. Prajul Meyana's ABAP Test Double Framework - An Introduction says that the new framework is available from SP9. We're on SP8, but I couldn't wait to test-drive this, so I started poking around. One of my colleagues pointed out that CL_ABAP_TESTDOUBLE is delivered with the release. YEY!


Below is an example of behavior verification using the framework, and it appears to work. Maybe later I'll make a much simpler cut; at this stage I just wanted to run it through a real-life example within our code base.

 

Below is my application code. It's a simple custom service implementation to create Chart of Authority (COA) records for OpenText Vendor Invoice Management. (It's not relevant here, but note that we use FEH to manage exceptions for enterprise service errors. Maybe I can show a test of that exception in a later blog.)

 

Further below is one  of my test classes with one of the test methods implemented.

 

The test double framework does three important things in this example:

  • It sets the behavior of get_manager_id( ) for when the test is executed.
  • It sets the expected invocation parameters of set_coa_details( ).
  • verify_expectations( ) verifies that set_coa_details( ) on the service interface has been invoked as expected.

 

I use a factory implementation to inject test doubles; a minimal sketch of that pattern follows below. Some of you won't like it, and I understand that. I hope it doesn't distract from the intent. Have fun. As I said when my colleague Custodio de Oliveira pointed out that it's available: "Let's break it".
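
For context, here is a minimal sketch of what such an injectable factory could look like. This is an assumed shape only: the real zcl_vim_coa_user_factory is custom code not shown in this post, the parameter types are guesses, and zcl_opentext_vim_coa_user is a hypothetical name for the productive implementation.

CLASS zcl_vim_coa_user_factory DEFINITION CREATE PRIVATE.
  PUBLIC SECTION.
    CLASS-METHODS get_instance
      RETURNING VALUE(ro_instance) TYPE REF TO zcl_vim_coa_user_factory.
    " called by test code only: remembers the double for later hand-out
    METHODS set_coa_user
      IMPORTING io_coa_user TYPE REF TO zif_opentext_vim_coa_user.
    METHODS get_coa_user
      IMPORTING iv_windows_id        TYPE string " assumed type
                iv_active_users_only TYPE abap_bool DEFAULT abap_true
      RETURNING VALUE(ro_coa_user)   TYPE REF TO zif_opentext_vim_coa_user
      RAISING   zcx_opentext_service.
  PRIVATE SECTION.
    CLASS-DATA go_instance TYPE REF TO zcl_vim_coa_user_factory.
    DATA mo_injected TYPE REF TO zif_opentext_vim_coa_user.
ENDCLASS.

CLASS zcl_vim_coa_user_factory IMPLEMENTATION.
  METHOD get_instance.
    IF go_instance IS NOT BOUND.
      go_instance = NEW #( ).
    ENDIF.
    ro_instance = go_instance.
  ENDMETHOD.

  METHOD set_coa_user.
    mo_injected = io_coa_user.
  ENDMETHOD.

  METHOD get_coa_user.
    IF mo_injected IS BOUND.
      ro_coa_user = mo_injected. " unit test: hand out the injected double
    ELSE.
      " production: create the real depended-on object (hypothetical class)
      ro_coa_user = NEW zcl_opentext_vim_coa_user( iv_windows_id        = iv_windows_id
                                                   iv_active_users_only = iv_active_users_only ).
    ENDIF.
  ENDMETHOD.
ENDCLASS.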

 

App Code

  METHOD zif_feh~process.

    DATA:
      lo_coa_user    TYPE REF TO zif_opentext_vim_coa_user,
      lo_cx_opentext TYPE REF TO zcx_opentext_service,
      lo_cx_coa_user TYPE REF TO zcx_opentext_service,

      ls_main_error  TYPE bapiret2,
      lt_coa_details TYPE zopentext_coa_details_tt,
      ls_coa_details TYPE LINE OF zopentext_coa_details_tt,
      lv_manager_id  TYPE /ors/umoid,
      lv_max_counter TYPE /opt/counter.

    FIELD-SYMBOLS:
      <ls_process_coa_details> TYPE LINE OF zopentext_coa_detl_process_tt.

    me->_s_process_data = is_process_data.

    TRY.
        TRY.
            lo_coa_user   = zcl_vim_coa_user_factory=>get_instance( )->get_coa_user(
                              iv_windows_id        = _s_process_data-windows_id
                              iv_active_users_only = abap_false ).
            lv_manager_id = lo_coa_user->get_manager_id( ).
          CATCH zcx_opentext_service INTO lo_cx_coa_user.
            CLEAR lv_manager_id.
        ENDTRY.

        LOOP AT _s_process_data-coa_details[]
          ASSIGNING <ls_process_coa_details>
          WHERE start_date <= sy-datum
          AND   end_date   >= sy-datum. " Record removed in ECC if not in validity date

          ADD 1 TO lv_max_counter.
          ls_coa_details-counter        = lv_max_counter.
          ls_coa_details-expense_type   = <ls_process_coa_details>-expense_type.
          ls_coa_details-approval_limit = <ls_process_coa_details>-approval_limit.
          ls_coa_details-currency       = <ls_process_coa_details>-currency.
          ls_coa_details-bukrs          = '*'. " Functional requirement in ECC to set CoCode to *. Assumption: from corp - 1 user = 1 co code
          ls_coa_details-kostl          = '*'.
          ls_coa_details-internal_order = '*'.
          ls_coa_details-wbs_element    = '*'.
          ls_coa_details-manager_id     = lv_manager_id. " For new entries, Manager Id is the same as on existing COA entries for the user.
          APPEND ls_coa_details TO lt_coa_details.
        ENDLOOP.

        " Ignore the message
        IF ( lo_cx_coa_user IS NOT INITIAL OR lo_coa_user->is_deleted( ) ) " The user is deleted or does not exist
           AND lt_coa_details IS INITIAL.                                  " AND all the inbound records are deletions
          RETURN. " Ignore transaction - finish ok.
        ENDIF.

        " Raise missing user
        IF lo_cx_coa_user IS NOT INITIAL.
          RAISE EXCEPTION lo_cx_coa_user.
        ENDIF.

        " Updates
        IF lo_coa_user->is_deleted( ).
          " User &1 is deleted. COA cannot be updated.
          "****  ZCX_FEH EXCEPTION RAISED HERE *****
        ENDIF.

        lo_coa_user->set_coa_details( lt_coa_details[] ).
        lo_coa_user->save( ).

      CATCH zcx_opentext_service INTO lo_cx_opentext.
        "****  ZCX_FEH EXCEPTION RAISED HERE *****
    ENDTRY.

  ENDMETHOD.



 

 

Local Test Class

CLASS ltc_process DEFINITION FOR TESTING
  DURATION SHORT
  RISK LEVEL HARMLESS
  FINAL.

  PRIVATE SECTION.
    METHODS: setup.
    METHODS: test_2auth                        FOR TESTING.
*    METHODS: test_2auth_1obsolete              FOR TESTING.
*    METHODS: test_missinguser_coadeletions     FOR TESTING.
*    METHODS: test_update_on_deleted_user       FOR TESTING.
*    METHODS: test_opentext_error               FOR TESTING.

    DATA: mo_coa_user TYPE REF TO zif_opentext_vim_coa_user.
    CLASS-DATA: mo_coa_user_factory TYPE REF TO zif_vim_coa_user_factory.
    DATA: mo_si_opentext_delegauth_bulk TYPE REF TO ycl_si_opentext_coa.

ENDCLASS.

CLASS ltc_process IMPLEMENTATION.

  METHOD setup.
    mo_si_opentext_delegauth_bulk ?= ycl_si_opentext_coa=>s_create( iv_context = zcl_feh_framework=>gc_context_external ).
  ENDMETHOD.

  METHOD test_2auth.
*----------------------------------------------------------------------*
*  This tests the scenario where the user has 2 authority records      *
*  and both are saved properly.                                        *
*----------------------------------------------------------------------*
    DATA ls_process_data        TYPE zopentext_deleg_auth_process_s.
    DATA ls_coa_details_process TYPE zopentext_coa_detl_process_s.

    DATA lt_coa_details TYPE zopentext_coa_details_tt.
    DATA ls_coa_details TYPE LINE OF zopentext_coa_details_tt.

    " Configure the test double call to get_manager_id( )
    mo_coa_user ?= cl_abap_testdouble=>create( 'ZIF_OPENTEXT_VIM_COA_USER' ).
    cl_abap_testdouble=>configure_call( mo_coa_user )->returning( 'WILLIA60' ).
    mo_coa_user->get_manager_id( ).

    " Expected results
    ls_coa_details-counter        = 1.
    ls_coa_details-currency       = 'NZD'.
    ls_coa_details-approval_limit = 200.
    ls_coa_details-expense_type   = 'CP'.
    ls_coa_details-bukrs          = '*'.
    ls_coa_details-kostl          = '*'.
    ls_coa_details-internal_order = '*'.
    ls_coa_details-wbs_element    = '*'.
    ls_coa_details-manager_id     = 'WILLIA60'.
    APPEND ls_coa_details TO lt_coa_details.

    ls_coa_details-counter        = 2.
    ls_coa_details-currency       = 'NZD'.
    ls_coa_details-approval_limit = 300.
    ls_coa_details-expense_type   = 'SR'.
    ls_coa_details-bukrs          = '*'.
    ls_coa_details-kostl          = '*'.
    ls_coa_details-internal_order = '*'.
    ls_coa_details-wbs_element    = '*'.
    ls_coa_details-manager_id     = 'WILLIA60'.
    APPEND ls_coa_details TO lt_coa_details.

    " Configure the expected invocation of set_coa_details( )
    cl_abap_testdouble=>configure_call( mo_coa_user )->and_expect( )->is_called_times( 1 ).
    mo_coa_user->set_coa_details( lt_coa_details ).

    " Inject the test double into the factory which will be used inside the method under test
    TRY.
        zcl_vim_coa_user_factory=>get_instance( )->set_coa_user( mo_coa_user ).
      CATCH zcx_opentext_service ##no_handler.
    ENDTRY.

    " SETUP - inputs to the method under test
    ls_process_data-windows_id = 'COAUSER'.

    ls_coa_details_process-currency       = 'NZD'.
    ls_coa_details_process-approval_limit = 200.
    ls_coa_details_process-expense_type   = 'CP'.
    ls_coa_details_process-bukrs          = '1253'.
    ls_coa_details_process-start_date     = '20060328'.
    ls_coa_details_process-end_date       = '29990328'.
    APPEND ls_coa_details_process TO ls_process_data-coa_details.

    ls_coa_details_process-currency       = 'NZD'.
    ls_coa_details_process-approval_limit = 300.
    ls_coa_details_process-expense_type   = 'SR'.
    ls_coa_details_process-bukrs          = '1253'.
    ls_coa_details_process-start_date     = '20060328'.
    ls_coa_details_process-end_date       = '29990328'.
    APPEND ls_coa_details_process TO ls_process_data-coa_details.

    " EXECUTE the method under test
    TRY.
        mo_si_opentext_delegauth_bulk->zif_feh~process( is_process_data = ls_process_data ).
      CATCH zcx_feh ##no_handler.
    ENDTRY.

    " Verify interactions on the test double
    cl_abap_testdouble=>verify_expectations( mo_coa_user ).

  ENDMETHOD.

ENDCLASS.

 

 

 

 

Some Test Tools available in SAP_ABA 740

Test Summary - 1 test method successful
Capture1.GIF

 

 

Test Coverage - only 1 test >> so it's pretty poor
Capture2.GIF

 

 

 

Test Coverage - lots of  untested code in red!
Capture3.GIF

 

(Sorry, Eclipse fans. I re-flashed my PC to 64-bit and haven't had the chance to re-install my Eclipse tools. Those coverage tools are there too!)

CREMAS05 with extension not triggering through BD21 (change pointers) or the background job of program RBDMIDOC when filters are maintained for company code and purchasing organization in BD64 (distribution model)


We faced a weird issue today in the production system that we hadn't faced in quality. The scenario: we had to send vendor location information from ECC to an external system. For this we are using the message type CREMAS with the basic type CREMAS05, extended with an extension to add a custom segment with custom fields in it. The change pointers were configured for these custom fields as well. When the IDocs were triggered using BD21 for CREMAS after changing a vendor, it worked fine in the quality system.

However, when we moved the custom code, i.e. the enhancement implementation in the function module MASTERIDOC_CREATE_CREMAS, along with the change pointers and all other configuration such as partner profiles, it did not work as expected in the production system.

The difference: either no filters were maintained in the quality system, or the IDocs worked fine with the filters maintained in the quality system but not with the filters maintained in the production system.

The scheduled background job of the program RBDMIDOC was failing with an error saying that the custom segment created using the extension does not exist.

 

Exact Error message: "Segment <our Y custom segment name> does not exist for message type CREMAS"

 

When we checked, though, all the transports had arrived correctly and we were able to view the custom segment in WE30 in the production system.

Then we checked the partner profile too, to see if the extension had been missed, but no, even that was maintained correctly.

 

After scratching our heads for a few days and trying everything possible under the sun, we figured out that it was the filters in the distribution model that were causing the issue: on removing the filters, the IDocs were triggered fine. So we narrowed it down to the filters and searched SDN and the internet in general. We stumbled across a few posts saying it had something to do with conversion routines, but after a lot of trial and error with the various solutions we found, the one that worked for us was to pass the name of the extension to the function module MASTER_IDOC_DISTRIBUTE, which is called at the end of the function module MASTERIDOC_CREATE_CREMAS that we were using.

The structure F_IDOC_HEADER, which is a work area, contains the field CIMTYP, which needs to be populated with the name of the extension that has been created for the standard IDoc.

 

So, on adding one line of code:

 

F_IDOC_HEADER-CIMTYP = 'YR1UMMCREMAS05'.

 

before the line


CALL FUNCTION 'MASTER_IDOC_DISTRIBUTE'

 

solved our problem.
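
Put together, the relevant part of the enhancement looks roughly like this (a sketch: the parameter names follow the standard signature of MASTER_IDOC_DISTRIBUTE, while the table variable names are illustrative, not the actual ones from the standard include):

" inside the enhancement at the end of MASTERIDOC_CREATE_CREMAS
F_IDOC_HEADER-CIMTYP = 'YR1UMMCREMAS05'. " name of the extension created for CREMAS05

CALL FUNCTION 'MASTER_IDOC_DISTRIBUTE'
  EXPORTING
    master_idoc_control        = F_IDOC_HEADER
  TABLES
    communication_idoc_control = T_COMM_CONTROL " control records of created communication IDocs (illustrative name)
    master_idoc_data           = T_IDOC_DATA.   " the IDoc data records (illustrative name)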

 

Now the CREMAS IDoc started flowing fine even with the filters for company codes and purchasing organizations maintained in the distribution model.

ANST can help in effective test automation


ANST, the Automated Notes Search Tool, is a powerful tool for searching SAP Notes for issues you encounter in your SAP system. As this tool is now part of the SAP standard and has been of great use to customers, partners and development teams, in this blog I am exploring the possibilities of using it from a quality engineer's testing point of view.

 

Before doing any scenario testing and test automation, it is most important to ensure that the required customizing is correct and complete enough to support the execution of the test case. This tool can be of great help in ensuring exactly that and in achieving more effective testing.

 

Let's start from the point when we design a test case: it is very important to define the prerequisite steps, including the required customizing, correctly here. One common way of finding the important customizing tables involved in process testing is to ask development colleagues or the application responsible; in that case, however, we depend on the correctness and completeness of the information provided.

 

Here is another way to do it, using ANST to find the right tables/views and to ensure all customizing is covered in test automation prior to scenario testing. ANST is able to list all the tables that are used in a particular test execution. Before automating, to get maximum coverage of the tables which may impact the test execution, manually perform each test step while ANST is recording. The trace will capture all the tables from the different components touched during the test execution. After capturing them, you can select the area you want to test/automate and navigate from there to the corresponding tables/views.

With this, you can make sure to include all of the required customizing in your automation script and avoid customizing errors during test execution. The tool offers the excellent capability of getting all the customizing tables in one place for a scenario or transaction under test.

 

Let's try this with a simple scenario where a user wants to create a warranty claim using transaction WTY:

 

Steps :

 

Log in to the test system and start transaction ANST. Enter the transaction you are testing and a description.

I suggest giving a meaningful description, as this will help you find the trace later if needed.

 

Anst_1.png

 

After executing, the tool will take you to the transaction screen. Enter the necessary parameters and perform the transaction.

 

ANST_2.png

ANST_3.png

 

On completion of the transaction, click the "Customizing tables" button on the screen below.

 

ANST_4.png

 

 

The screen below shows all the tables which were touched during this test. There are component-specific table lists as well, and the important tables can be scanned for data checks.

 

anst_6.png

 

 

You can double-click a particular table to navigate to its details. With this analysis, you can decide which customizing steps should be included as prerequisite steps in the test automation script for this transaction.

 

You can also check the trace later by opening it with the description saved earlier, as below:

ANST_5.png

 

 

 

I hope this helps in designing automated tests better. If you want to know more about ANST, you can refer to some of the other blogs on it:

 

What is ANST....and why aren't you using it?

The power of tools - How ANST can help you to solve billing problems yourself!

Exchanging ST12 traces as files


You need to exchange an ST12 trace with your counterpart (e.g. SAP support).

You have created traces in transaction ST12 as described here:

Single Transaction Analysis (ST12) – getting started[http://scn.sap.com/community/abap/testing-and-troubleshooting/blog/2009/09/08/single-transaction-analysis-st12-getting-started]

 

or here

 

ST12 – tracing user requests (Tasks & HTTP) [http://scn.sap.com/community/abap/testing-and-troubleshooting/blog/2010/03/22/st12-tracing-user-requests-tasks-http]

 

 

To store your trace into file, perform the following:

 

  1. Start transaction ST13
  2. Enter tool name: ANALYSISBROWSER
  3. Execute (F8)
    ST13.PNG
  4. Select your analysis

select_analysis.PNG

  5.  Select menu: Download -> Text file download -> Export to frontend

 

Export_to_frontend.png


   6.  Enter file name and format (leave ASC), click Transfer.

 

File_name.png

 

 

To upload the trace from the file, perform the following steps:

 

  1. Repeat steps 1 – 3 from above.
  2. Select menu: Download -> Text file download -> Import from frontend

 

import_from_frontend.png

  3.  Select your file (e.g. D:\trace1.trc) and click Open

 

  4.  Click "Yes" on Import Analysis popup.

Import_Analysis.png

 

 
Hint: When exchanging trace files, don't forget to compress them. You can use RAR or ZIP archivers for that.

To buffer or not to buffer a database table?


It is common knowledge that buffering database tables improves system performance, provided the buffering is done judiciously, i.e. only those tables that are read frequently and updated rarely are buffered. But how exactly can we determine whether a table is read frequently or updated rarely?


Also, the state of a buffered table in the buffer area is a runtime property which keeps changing over time. How can an ABAP developer know whether a buffered table actually exists in the buffer at a given instant? This is a critical question when analyzing the performance of queries on buffered tables.


This blog post attempts to answer the above questions.


This blog post is divided into 3 sections and structured as follows:

  • Section 1: Prerequisites (Recap of Table Buffering Fundamentals and its Mechanism)
  • Section 2: How to use the Table Call Statistics Transaction
  • Section 3: Interpreting the results of the Table Call Statistics Transaction to answer the questions posed above.

You might find the blog to be slightly lengthy but the content will NOT be more than what you can chew. Trust me!


Section 1: Prerequisites (Recap of Table Buffering Fundamentals and its Mechanism)

Buffering is the process of storing table data (which always remains present in the database) temporarily in the RAM of the application server. Buffering is specified in the technical settings of a table’s definition in the DDIC.


The benefits of buffering are:

  • Faster query execution – A query is at least 10 times faster when it fetches data from the buffer compared to fetching it from the database, because the delays involved in waiting for the database and the network that connects it are eliminated. The performance of the application which uses this query improves.
  • Reduced DB load and reduced network traffic – Since not every query needs to hit the DB, the load on the DB is reduced, as is the network traffic between the application layer and the database layer. This improves the performance of the entire system.


The buffering mechanism can be visualized in Figure 1 below:

 

Figure 1.jpg

          Figure 1: Buffering Mechanism


The SAP work processes of an application server have access to the SAP table buffer. The buffers are loaded on demand via the database connection. If a SELECT statement is executed on a table selected for buffering, the SAP work process initially looks up the desired data in the SAP table buffer. If the data is not available in the buffer, it is loaded from the database, stored in the table buffer, and then copied to the ABAP program (in the internal session). Subsequent accesses to this table would fetch the data from the buffer and the query need not go to the database to fetch it.
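
In Open SQL terms, a developer can also deliberately skip this mechanism. A small illustration (using the single-record buffered table TSTC, which also appears in Section 3; this snippet is mine, not part of the original mechanism description):

DATA: LS_TSTC TYPE TSTC.

" Served from the SAP table buffer once TSTC has been loaded into it
SELECT SINGLE * FROM TSTC INTO LS_TSTC WHERE TCODE = 'SE38'.

" Forces a database access even though TSTC is buffered
SELECT SINGLE * FROM TSTC BYPASSING BUFFER INTO LS_TSTC WHERE TCODE = 'SE38'.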


It must be understood that RAM space on the application server is limited. Let’s say dbtab1 is a buffered table whose data is present in the buffer. When there is a query on another buffered table, dbtab2, its data has to be loaded into the buffer, which might displace the data of dbtab1 from the buffer.


When there is a write access to a buffered table, the change is done in the database and the old table data which is present in the buffer (of the application server from which the change query originated) is just flagged as “Invalid”. At this instant, the buffer and the database hold different data for the same table. A subsequent read access to the table would initiate a reload of the table data from the database to the buffer. Now the buffer holds the same data as the database.


Buffering a table that gets updated very frequently might actually end up increasing the load on the DB and increasing the network traffic between the application layer and the database. This would slow down the system performance and defeat the purpose of buffering.


Key Takeaways from Section 1:

  • The contents of a buffered table in the buffer area is completely runtime dependent. At one instant, there might be data, and at another instant, it might not be present.
  • Only a table with the following characteristics must be buffered:

          (a)     Read frequently

          (b)     Updated rarely

          (c)     Contains less data


Section 2: How to use the Table Call Statistics Transaction


This is accessed by the Tcode – ST10. The following is the initial screen:

Figure 2.jpg

                         Figure 2: ST10 - Initial Screen


A few points may be noted in Figure 2:

  • An access to every table, regardless of whether it is non-buffered/single record buffered/generic area buffered/fully buffered, would be reported by this transaction.
  • Analysis of the table accesses may be restricted to a specified time frame (by choosing the radiobuttons - This day/Previous Day/This Month/Previous Month etc). Or the transaction may be run without any restriction on the time period by choosing - “Since Startup”.
  • If the SAP system consists of multiple application servers, the accesses (i.e. queries) to the tables originating from any of the servers can be reported by this transaction. On the other hand, the transaction may be restricted to table accesses originating from a specific server (by either choosing the radiobutton – “This Server” or by explicitly specifying the server).

 

Let’s explore the results returned by the transaction when the radiobutton – “Not Buffered” is chosen.

Figure 3.jpg

     Figure 3: Results of ST10 when the radio buttons - “Non-Buffered”, “This Server” and “From startup” are chosen


Let me explain the significance of each column –

  • Direct Reads – The number of SELECT queries on the table in which the entire primary key was specified in the WHERE clause.
  • Seq. Reads – The number of SELECT queries on the table in which the entire primary key was NOT specified in the WHERE clause; more than one record can satisfy the WHERE clause.
  • Changes – The number of write accesses (INSERT/UPDATE/MODIFY/DELETE) on the table.
  • Total – The total number of accesses (read + write) to the table in the chosen time frame and on the chosen server:

                    Total = Direct Reads + Seq. Reads + Changes.

  • Changes/Total % – Also termed the “change rate”, this is the percentage of accesses that are write accesses. For example, a table with 9,900 reads and 100 changes has a change rate of 100 / 10,000 = 1%.
  • Rows Affected – Not very relevant for an ABAP developer. Any operation that accesses the database increases this count; swaps would increase it as well.


Let’s explore the results when the radio button – “Generic Key Buffered” is chosen:

Figure 4.jpg

          Figure 4: Results of ST10 when the radio buttons - “Generic Key Buffered”, “This Server” and “From startup” are chosen


There are some new columns here, which were not present in Figure 3. They are:

  • Buf key opt – This describes the buffering type of the table. Its possible values are:

          (a)     SNG – Single Record Buffered Table

          (b)     FUL – Fully Buffered Table

          (c)     GEN – Generic Area Buffered Table

  • Buffer State – This describes the state of the table in the buffer. For all the possible values and their meaning, place the cursor on this column and hit F1. Briefly, some of the possible values are:

          (a)     VALID – The table content in the buffer is valid. Read accesses take place in the buffer.

          (b)     ABSENT – The table has not been accessed yet, so the table buffer is not yet loaded with data.

          (c)     DISPLACED – The table content has been displaced from the buffer.

          (d)     INVALID – The table content is invalid and there are open transactions that modify the table content. Read accesses take place in the database.

          (e)     ERROR – The table content could not be placed in the buffer because of insufficient space.

          (f)     LOADABLE – The table content in the buffer is invalid but can be loaded on the next access.

          (g)     MULTIPLE – Relevant only for generic area buffered tables, whose generic areas can have different buffer states.

  • Invalidations – Specifies how often the table was invalidated because of “changes” (i.e. write accesses).


NOTE: All the table buffers in the current application server can be cleared by entering the Tcode- “/$TAB”.


Note that the user can toggle between one result set and another by using the buttons in the Application Toolbar (as shown in Figure 5):

Figure 5.jpg

          Figure 5: Application Toolbar of the primary list screen of ST10.


  • While the result screen of ST10 is open in one session, there may be accesses to tables in other sessions or by other users. Use the “Refresh” button so that the transaction shows the latest data: the latest buffer state of the tables, the latest number of accesses, etc.
  • The “Reset” button sets all the counts to zero (number of reads/changes/DB calls etc.).
  • Detailed information about buffer administration etc. may be viewed by double-clicking on any entry, or by placing the cursor on a row and clicking the “Choose” button in the application toolbar. The secondary list will look like Figure 6.

Figure 6.jpg

               Figure 6: Secondary List



Section 3: Interpreting the results of the Table Call Statistics Transaction to answer the questions posed above.


How to determine a non-buffered table which is suited to be buffered?

  • Begin the ST10 transaction by clicking on the “Non-Buffered” radiobutton.
  • Notice the “Change Rate” value for each table. The higher the Change Rate for a table, the less suited it is for buffering.
  • The non-buffered tables with the following properties may be considered for buffering:

        (a)     Low Change Rate (under 0.5%)

        (b)     High number of reads (Direct Reads + Seq.Reads)

        (c)     Data volume not too large

       

          If it is to be buffered, what should be its buffering type?

             

    • We are guided by the relative numbers of Direct Reads and Seq. Reads. If most of the reads are Direct Reads, categorize the table as “Single Record Buffered”.
    • On the other hand, if most of the reads are Seq. Reads, classify it as either Generic Area Buffered or Fully Buffered: if the data volume is small, the table can be considered for Full Buffering; if the data volume is higher, or if certain “groups” of data in this table are accessed frequently, classify it as “Generic Area Buffered”.


How to determine the efficiency of the buffer setting of already buffered tables?

  • Begin the ST10 transaction by clicking on either the “Generic Key Buffered” or “Single Record Buffered” radiobutton (depending upon the table whose buffer setting, you would like to verify).
  • Notice the “Change Rate” value for each table. The higher the Change Rate for a table, the less suited it is for buffering. One might consider switching OFF the buffering for such tables.
  • A wrong decision with respect to the Buffering Type may also be diagnosed here. For a Single Record Buffered table, if the no. of Seq. Reads is higher relative to the number of Direct Reads, one might consider changing the buffering type from Single Record to Fully Buffered or Generic Area Buffered.


NOTE: Ensure that the time frame for which the transaction is run is significant enough such that all the reports/applications were run in that period and all business scenarios occurred in that period. Only then, can this transaction guide us effectively in deciding which table’s buffer settings are to be altered.


Case Study:

Based on the above guidelines, let’s consider some examples in Figure 7, which shows the Non-Buffered Tables:

 

Figure 7.jpg

               Figure 7: List of accesses to non-buffered tables.

 

I would like to draw your attention to the 3 tables enclosed by a green rectangle. Based on the trends for these three tables, it can be temporarily concluded that:

  • The table ABDBG_LISTENER is NOT a candidate for buffering, because it has a high change rate.
  • The table ABDBG_INFO can be considered for buffering, and it may be set up as a “Single Record Buffered” table since all of its accesses were Direct Reads.
  • The table ADCP can be considered for either Full Buffering or Generic Area Buffering, because most of its accesses were Seq. Reads.

  The above points are not the final decisions but just guidelines. Other aspects like data volume, size category, access frequency etc are to be considered.


How can an ABAP developer know whether a buffered table actually exists in the buffer at a given time instant?


  • Clear all the table buffers from the buffer area by running the tcode – “/$TAB”.
  • Consider the single record buffer table – TSTC. Its buffer state would say – LOADABLE as shown in Figure 8 below:

Figure 8.jpg

          Figure 8: Buffer State of TSTC table after clearing the buffers using - /$TAB.


  • Now, run the following code snippet in a program:


DATA: GW_TSTC TYPE TSTC.
CONSTANTS: C_SE38 TYPE TSTC-TCODE VALUE 'SE38'.

SELECT SINGLE *
  FROM TSTC
  INTO GW_TSTC
  WHERE TCODE = C_SE38.
  • After running the above code snippet, press the “Refresh” button in the Application Toolbar of the ST10 transaction. This would reflect the new buffer state of the TSTC table – VALID.

Figure 9.jpg

          Figure 9: Buffer State of TSTC table after the above code snippet is run


  • Basically, the SELECT SINGLE query first looked for the relevant record in TSTC’s table buffer. It did not find it (because the table buffer had no data; its state was LOADABLE earlier). So the query fetched the relevant record from the database (this can be confirmed from the ST05 SQL trace in Figure 10) and loaded that data into the buffer.

Figure 10.jpg

          Figure 10: ST05-SQL Trace when the above code snippet is run for the first time. Data is fetched from database.


  • Subsequent reads of TSTC looking for the same record (i.e. TCODE = ‘SE38’) fetch the data from the buffer itself (this can be confirmed from the ST05 buffer trace in Figure 11). This fetch is several times faster than fetching from the DB.

Figure 11.jpg

          Figure 11: ST05-Buffer Trace when the above code snippet is run for the second time. Data is fetched from buffer.


Conclusion:

ST10 is a very useful transaction that can guide you in answering the following questions:

  • Based on the accesses over a period of time from a particular server, can a non-buffered table be buffered?
  • Can a table that was wrongly buffered be identified?
  • How can an ABAP developer know whether a buffered table actually exists in the buffer at a given time instant?


References:

[1]           Gahm, H., “Chapter 3 – Performance Analysis Tools,” ABAP Performance Tuning, 1st ed., Galileo Press, Boston, 2010, pp. 51-54.

SAP NetWeaver AS, add-on for code vulnerability analysis for ABAP 7.5 is out!


With SAP NetWeaver AS, add-on for code vulnerability analysis 7.5, scanning ABAP sources for security weaknesses became even easier. Besides allowing more systems to be scanned for even more types of defects, the new release is also more flexible and can now be deployed centrally.

 

Using the new central security scan support, customers can now overcome the release limitations of previous versions. With this approach, only one SAP NetWeaver AS 7.5 basis system is required; the systems containing the code to be scanned can be on releases down to SAP NetWeaver AS ABAP 7.00 (for details check SAP Note 2190113).

 

 

A further benefit of this approach is that, in the future, an upgrade of the central scan system makes the latest checks available for all remote systems as well.

 

Thanks to an updated scan engine, you can now analyze BSP pages and even navigate directly into the BSP sources to fix your web applications in case of security issues.

In addition, there are new checks, such as checks to identify code with insufficient authorization checks. You can find more details on the new and revised checks in SAP Note 1921820 - SAP NetWeaver AS, add-on for code vulnerability analysis - support package planning.

 

If you want more details, check our new roadmap https://service.sap.com/~sapidb/011000358700000256742014E.pdf on the SAP Service Marketplace (SMP).
 


Unit test mockup loader for ABAP


Hi Community !

 

I'd like to share a tool for unit testing that my team and I developed recently for our internal usage.

 

The tool was created to simplify data preparation/loading for ABAP unit tests. In one of our projects we had to prepare a lot of table data for unit tests, for example, a set of content from the BKPF, BSEG and BSET tables (an FI document). The output to be validated is also often a table or a complex structure.

 

Data loader

 

Hard-coding all of that data was not an option: too much to code, difficult to maintain, and terrible code readability. So we decided to write a tool which would get the data from TAB-delimited .txt files, which, in turn, could be prepared in Excel in a convenient way. Certain objectives were set:

 

  • all the test data should be combined together in one file (zip)
  • ... and uploaded to SAP - test data should be a part of the dev package (a W3MI binary object fits)
  • the loading routine should identify the file structure (fields) automatically and verify its compatibility with a target container (structure or table)
  • it should also be able to safely skip fields missing in the .txt file, if required (non-strict mode), e.g. when processing structures (like an FI document) with too many fields, most of which are irrelevant to a specific test.

 

Test class code would look like this:

 

...
call method o_ml->load_data " Load test data (structure) from mockup
  exporting i_obj       = 'TEST1/bkpf'
  importing e_container = ls_bkpf.

call method o_ml->load_data " Load test data (table) from mockup
  exporting i_obj       = 'TEST1/bseg'
            i_strict    = abap_false
  importing e_container = lt_bseg.
...
call method o_test_object->some_processing " Call to the code being tested
  exporting i_bkpf   = ls_bkpf
            it_bseg  = lt_bseg
  importing e_result = l_result.

assert_equals(...).
...

 

The code above takes the TAB-delimited text file bseg.txt from the TEST1 directory of a ZIP file uploaded as a binary object via transaction SMW0...

 

BUKRS BELNR GJAHR BUZEI BSCHL KOART ...
1000  10    2015  1     40    S     ...
1000  10    2015  2     50    S     ...

 

... and puts it (with proper ALPHA conversion exits etc.) into an internal table with the BSEG line type.

 

Store/Retrieve

 

Later another objective was identified: some code is quite difficult to test when it has a SELECT in the middle. Of course, good code design would isolate DB operations from the business logic code, but that is not always possible. So we needed a way to substitute the SELECTs in the code with a simple call which takes the prepared test data instead, whenever a test environment is identified. We came up with a solution we called the Store (which, by the way, might nicely co-work with the newly announced TEST-SEAM feature; see the sketch near the end of this post).

 

Test class would prepare/load some data and then "store" it:

 

...
call method o_ml->store " Store some data under the 'BKPF' label
  exporting i_name = 'BKPF'
            i_data = ls_bkpf. " One-line structure
...

 

... And then "real" code is able to extract it instead of selecting from DB:

 

...
if some_test_env_indicator = abap_false. " Production environment
  " Do DB selects here

else.                                    " Test environment
  call method zcl_mockup_loader=>retrieve
    exporting i_name  = 'BKPF'
    importing e_data  = me->fi_doc_header
    exceptions others = 4.
endif.

if sy-subrc is not initial.
  " Data not selected -> do error handling
endif.
...

 

In the case of multiple test cases it can also be convenient to load a number of table records and then filter them based on some key field available in the code under test. This option is also possible:

 

Test class:

 

...
call method o_ml->store " Store some data under the 'BKPF' label
  exporting i_name   = 'BKPF'
            i_tabkey = 'BELNR'  " Key field for the stored table
            i_data   = lt_bkpf. " Table with MANY different documents
...

 

"Real" code:

 

...
if some_test_env_indicator = abap_false. " Production environment
  " Do DB selects here
else.                                    " Test environment
  call method zcl_mockup_loader=>retrieve
    exporting i_name  = 'BKPF'
              i_sift  = l_document_number " Filter key from a real local variable
    importing e_data  = me->fi_doc_header " Still a flat structure here
    exceptions others = 4.
endif.

if sy-subrc is not initial.
  " Data not selected -> error handling
endif.
...

 

As the final result, we can run completely dynamic unit tests in our projects, covering most of the code, including DB-select-related code, without actually accessing the database. Of course, it is not the mockup loader alone which ensures that: it requires accurate design of the project code, separating DB selection from processing code. But the mockup loader and the "store" functionality make it more convenient.
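
As a sketch of the TEST-SEAM co-working mentioned above (the seam name and surrounding variables are hypothetical; the retrieve call is the same one shown earlier), the environment-indicator IF could be replaced by a seam in the domain code, with the retrieve call injected from the test class:

" domain code
test-seam read_fi_header.
  select single * from bkpf
    into me->fi_doc_header
    where belnr = l_document_number. " plus the remaining key fields in real code
end-test-seam.

" test class, e.g. in the setup method
test-injection read_fi_header.
  call method zcl_mockup_loader=>retrieve
    exporting i_name  = 'BKPF'
              i_sift  = l_document_number
    importing e_data  = me->fi_doc_header
    exceptions others = 4.
end-test-injection.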

 

illustration.jpg

 

Links and contributors

 

The tool is the result of the work of my team, including:

 

The code is freely available at our project page on github - sbcgua/mockup_loader · GitHub

 

I hope you find it useful.

 

Alexander Tsybulsky

Working effectively with ABAP legacy code - ABAP Test Seams are your friend


Hello Community,

ABAP Test Seams are available with the latest SAP_BASIS stack, but what are they good for?

Consider that you need to change some legacy code from ancient times. The code is deeply nested, mixes concerns and has no unit tests. Changing the code seems risky and you want to add at least characterization tests. But even adding such a minimal safety net requires changes to the old stuff. It seems you are stuck between a rock and a hard place.

By using ABAP Test Seams one can replace the test-unfriendly behaviour of any statement within the same program. Just tag the code region in the domain part with a TEST-SEAM and inject other statements into this region. The injected code can alter variables, perform extra validations or simply do nothing.

For example you can:

  • replace the effect of an authority-check by setting the return code in SY-SUBRC
  • substitute a database query by populating the internal table with well-known values
  • create test doubles instead of test-unfriendly depended-on objects
  • validate content instead of writing it to the database

 

Methods of test classes basically replace the code of the test seam with the code of the test injection. The code of the test injection is executed in the runtime context of the test seam. Consequently the injected code has access only to variables and members visible to the domain code. Conversely, the injected code has no access to variables visible to the test method performing the injection. If you want to pass content from the test class to the injected code, you may consider using global variables. Although test seams are declared within the domain code, they do not alter the behaviour of the domain code in the productive use case. Therefore test seams do not have the smell of the anti-pattern "test code in production".


Test seams are expected to be especially useful to:

  • get legacy code under test
  • substitute dependency that shall not get exposed

Test seams are not the first choice for:

  • integration and component tests - as test injections are restricted to the same program
  • scenarios where dependency shall get exposed by object seams for architectural reasons

 

Try it and judge how to benefit from this new unique test technique in ABAP.

 

Example Snippet: Replace Authority Check

 

Domain code with seam:

test-seam authorization_Seam.

  authority-check
    object 'S_CTS_ADMI'
    id     'CTS_ADMFCT'
    field  'TABL'.

end-test-seam.

if ( 0 eq sy-subrc ).
  is_Authorized = abap_True.
endif.

Test code with injection:

test-injection authorization_Seam.

  sy-subrc = 0.

end-test-injection.

 

Example Snippet: Substitute Database Query With Well Known Content

 

Domain code with seam:

test-seam read_Content_Seam.

  select * from sflight
    into table @flights[]
    where carrid in @carrid_Range[]
      and fldate eq @sy-datum.

end-test-seam.

Test code with injection:

test-injection read_Content_Seam.

  flights =
    value #(
      ( carrid = 'LHA' connid = 100 seatsmax = 30 )
      ( carrid = 'AF3' connid = 7   seatsmax = 1 ) ).

end-test-injection.

 

Example Snippet: Inject Test Double Instead of Test Unfriendly Dependency

 

Domain code with seam:

test-seam inject_Double_Seam.

  me->f_Repository =
    new zcl_Db_Query( ).

end-test-seam.

Test code with injection:

test-injection inject_Double_Seam.

  me->f_Repository =
    new th_Dummy_Repository( ).

end-test-injection.


Example Snippet: Validate Instead of Writing to Database Table

 

Domain code with seam:

test-seam inject_Validate_Seam.

  modify sflight
    from table @altered_Flights[].

end-test-seam.

Test code with injection:

test-injection inject_Validate_Seam.

  cl_Abap_Unit_Assert=>assert_Equals(
    act = altered_Flights[]
    exp = th_Global_Buffer=>exp_Flights[] ).

end-test-injection.


Full Example: Legacy Function Module With Test Seams

For example, the function module RS_AU_SAMPLE_PROPOSE_BOOKING mixes up authorization, database access and business logic. In such a case one can embrace the authorization logic and the repository logic each with a test seam, as shown below.

 

function rs_Au_Sample_Propose_Booking
  importing
    value(i_Carr_Id)     type s_Carr_Id
    value(i_Conn_Id)     type s_Conn_Id
    value(i_Flight_Date) type s_Date
  exporting
    value(e_Booking_Id)  type s_Book_Id
  exceptions
    not_Authorized
    invalid_Flight
    no_Free_Seats.

  data:
    is_Authorized     type abap_Bool,
    flight            type sflight,
    total_Of_Bookings type i.

  test-seam authorization_Check.
    authority-check object 'S_CTS_ADMI' id 'CTS_ADMFCT' field 'TABL'.
    if ( 0 eq sy-subrc ).
      is_Authorized = abap_True.
    endif.
  end-test-seam.

  if ( abap_False eq is_Authorized ).
    message 'You are not authorized to issue bookings'(noa) type 'E' raising not_Authorized.
  endif.

  test-seam read_From_Db.
    select single *
      from sflight into flight
      where carrid = i_Carr_Id
        and connid = i_Conn_Id
        and fldate = i_Flight_Date.

    select count( * )
      from sbook
      into (total_Of_Bookings)
      where carrid = i_Carr_Id
        and connid = i_Conn_Id
        and fldate = i_Flight_Date.
  end-test-seam.

  " validate content
  if ( flight is initial ).
    message 'Flight does not exist'(nof) type 'E' raising invalid_Flight.
  elseif ( flight-seatsocc >= flight-seatsmax ).
    message 'Limit of Bookings exceeded'(nob) type 'E' raising no_Free_Seats.
  endif.

  " propose new id
  e_Booking_Id = total_Of_Bookings + 1.

endfunction.

The test injection of test class tc_Authorization_Check alters the content of the state variable is_Authorized with literals. Please note that it is not possible to pass the parameter i_Is_Permitted directly, as this variable is not known in the context of the seam. The code injected into the seam has access to exactly the same artefacts as the domain code!

class tc_Authorization_Check definition for testing risk level harmless.
  private section.
    methods:
      pass_Permission_Check    for testing,
      raise_Missing_Permission for testing,
      set_Authorization
        importing i_Is_Permitted type abap_Bool,
      has_Passed_Authorization_Check
        returning value(result) type abap_Bool.
endclass.

class tc_Authorization_Check implementation.

  method pass_Permission_Check.
    set_Authorization( abap_True ).
    cl_Abap_Unit_Assert=>assert_True( has_Passed_Authorization_Check( ) ).
  endmethod.

  method raise_Missing_Permission.
    set_Authorization( abap_False ).
    cl_Abap_Unit_Assert=>assert_False( has_Passed_Authorization_Check( ) ).
  endmethod.

  method set_Authorization.
    " the code within the TEST-INJECTION statement will substitute the code
    " within the according TEST-SEAM statement in the same program.
    " please note the statements within the body may only make use of
    " variables visible in the context of the seam.
    if ( abap_True eq i_Is_Permitted ).
      test-injection authorization_Check.
        is_Authorized = abap_True.
      end-test-injection.
    else.
      test-injection authorization_Check.
        is_Authorized = abap_False.
      end-test-injection.
    endif.
  endmethod.

  method has_Passed_Authorization_Check.
    call function 'RS_AU_SAMPLE_PROPOSE_BOOKING'
      exporting  i_Carr_Id      = ''
                 i_Conn_Id      = 0
                 i_Flight_Date  = sy-datum
      exceptions not_Authorized = 1
                 others         = 9.
    if ( 1 = sy-subrc ).
      result = abap_False.
    else.
      result = abap_True.
    endif.
  endmethod.

endclass.

 

The test injection of test class tc_Propose assigns, in the setup method, static public member (global) variables that shall be used instead of the database query. As the static public members are visible to the domain code as well as to the injected code, direct assignments are possible. (See also test class include LRS_AU_SAMPLE_TEST_SEAMST99, SAP_BASIS 7.50 onwards.)

 

" The test class 'tc_Propose' makes use of the test seam 'authorization_Check'
" to bypass the authorization check and makes use of the test seam
" 'read_From_DB' to provide well known input to be business logic. Finally
" the output of the business logic gets exercised and checked.
" As local data / private members of the test class can not be used in the
" Injection the test class exposes the test data via public attributes, another
" possibility might have been public getter methods.
class tc_Propose definition for testing risk level harmless.   private section.     class-data:       fg_Flight                type sflight,       fg_Total_Of_Bookings     type i.   methods:     setup,     book_Non_Existing_Flight_Fails for testing,     book_Full_Flight_Fails for testing,     book_Available_Flight for testing.
endclass.
class tc_Propose implementation.  method setup.    test-injection read_From_Db.      flight =              th_Dummy_Repository=>flight.      total_Of_Bookings =   th_Dummy_Repository=>total_Of_Bookings.    end-test-injection.    test-injection authorization_Check.      is_Authorized = abap_True.    end-test-injection.    clear th_Dummy_Repository=>flight.    clear th_Dummy_Repository=>total_Of_Bookings.  endmethod.  method book_Non_Existing_Flight_Fails.    call function 'RS_AU_SAMPLE_PROPOSE_BOOKING'      exporting     i_Carr_Id =     ''                    i_Conn_Id =     0                    i_Flight_Date = sy-datum      exceptions    not_Authorized = 1                    invalid_Flight = 2                    no_Free_Seats =  3                    others =         9.     cl_Abap_Unit_Assert=>assert_Subrc( act = sy-subrc exp = 2 msg = 'no flight' ).  endmethod.  method book_Full_Flight_Fails.    constants:  c_Count_Bookings type i value 10.    th_Dummy_Repository=>flight-seatsmax = c_Count_Bookings.    th_Dummy_Repository=>flight-seatsocc = c_Count_Bookings.    th_Dummy_Repository=>total_Of_Bookings = c_Count_Bookings.    call function 'RS_AU_SAMPLE_PROPOSE_BOOKING'      exporting     i_Carr_Id =     ''                    i_Conn_Id =     0                    i_Flight_Date = sy-datum      exceptions    not_Authorized = 1                    invalid_Flight = 2                    no_Free_Seats =  3                    others =         9.     cl_Abap_Unit_Assert=>assert_Subrc( act = sy-subrc exp = 3 msg = 'no seats' ).  endmethod.  method book_Available_Flight.    constants:  c_Count_Bookings type i value 10.    data:       next_Id type s_Book_Id.    th_Dummy_Repository=>flight-seatsmax = c_Count_Bookings + 10.    th_Dummy_Repository=>flight-seatsocc = c_Count_Bookings.    th_Dummy_Repository=>total_Of_Bookings = c_Count_Bookings.    call function 'RS_AU_SAMPLE_PROPOSE_BOOKING'      exporting     i_Carr_Id =     ''                    i_Conn_Id =     0                    i_Flight_Date = sy-datum      importing     e_Booking_Id = next_Id      exceptions    not_Authorized = 1                    invalid_Flight = 2                    no_Free_Seats =  3                    others =         9.      cl_Abap_Unit_Assert=>assert_Subrc( act = sy-subrc exp = 0 msg = 'return code').      cl_Abap_Unit_Assert=>assert_Equals(  act = next_Id exp = c_Count_Bookings + 1 msg = 'proposed id' ).  endmethod.
endclass.