Extended Notifications: Performance on SWWLOGHIST

Rick Bakker rbakker at gmail.com
Wed Jan 20 16:06:57 EST 2010


Hello,

The lengthy first-time run of SWN_SELSEN is a one-off, so I usually
just let it run.

I'm pretty sure the timestamp is stored in table SWN_TIMESTAMPS. You
could always change this manually if you really want to, though of
course it's not recommended.
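
If you do change it manually, look at what is actually stored first.
A minimal sketch, assuming only that SWN_TIMESTAMPS can be read with a
plain SELECT; its exact field layout is release-dependent, so check the
structure in SE11 (or browse it with SE16) before any update:

* Sketch: count the rows of SWN_TIMESTAMPS before touching anything.
* Reads whole rows only, since the field layout varies by release.
DATA: lt_ts    TYPE STANDARD TABLE OF swn_timestamps,
      lv_lines TYPE i.

SELECT * FROM swn_timestamps INTO TABLE lt_ts.

DESCRIBE TABLE lt_ts LINES lv_lines.
WRITE: / 'Entries in SWN_TIMESTAMPS:', lv_lines.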

Note that the timestamps are stored as UTC timestamps; the display
just adds grouping separators. For example:

20,100,109,030,010.8190000  (i.e. 20100109030010.8190000)
= 2010-01-09 03:00:10 GMT/UTC
= translate to your own time zone
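
For the translation you can let ABAP do the work. A minimal sketch,
using the example timestamp above and the logon time zone sy-zonlo:

* Sketch: convert a stored UTC timestamp to local date and time.
DATA: lv_ts   TYPE timestampl VALUE '20100109030010.8190000',
      lv_date TYPE d,
      lv_time TYPE t.

CONVERT TIME STAMP lv_ts TIME ZONE sy-zonlo
        INTO DATE lv_date TIME lv_time.

WRITE: / 'Local date:', lv_date, 'local time:', lv_time.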

It certainly would be nice if SAP changed this so that the database
query would be more selective on the (usually) small number of tasks
that it's looking for.
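
For illustration, a more selective shape of the statement quoted below
would drive the join from SWWWIHEAD, where the task restriction lives,
instead of entering via SWWLOGHIST by timestamp alone. This is only a
sketch of the idea, not SAP's code; the declarations stand in for the
ranges and timestamp used in the real method:

TYPE-POOLS swfco.

DATA: lt_swwloghist TYPE STANDARD TABLE OF swwloghist,
      lv_timestamp  TYPE swwloghist-timestamp,
      lr_task       TYPE RANGE OF swwwihead-wi_rh_task,
      lr_method     TYPE RANGE OF swwloghist-method.

* Let the (usually selective) task filter cut the result set first,
* then join the history table; lr_method is used as a ranges table
* instead of FOR ALL ENTRIES.
SELECT hist~wi_id hist~method hist~timestamp
       FROM swwwihead AS head
       INNER JOIN swwloghist AS hist
         ON hist~wi_id = head~wi_id
       INTO CORRESPONDING FIELDS OF TABLE lt_swwloghist
       WHERE head~wi_rh_task IN lr_task
       AND   head~wi_type    EQ swfco_wi_normal
       AND   hist~timestamp  GE lv_timestamp
       AND   hist~method     IN lr_method.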

regards
Rick Bakker
Hanabi Technology



On Wed, Jan 20, 2010 at 6:19 AM, Florin Wach <florin.wach at gmx.net> wrote:
> Follow-up: I have checked SAP Note 1105696, which seemed promising for this issue; however, that particular object method is not included ... and the note merely changes the statement from an even worse one to the very one I am already having problems with.
>
>
> -------- Original Message --------
>> Date: Wed, 20 Jan 2010 17:11:30 +0100
>> From: "Florin Wach" <florin.wach at gmx.net>
>> To: sap-wug at mit.edu
>> Subject: Extended Notifications: Performance on SWWLOGHIST
>
>> Hi dear wuggers,
>>
>> I'm currently running the initial (first) run of SWN_SELSEN in the
>> production system. A filter is active, restricting the notifications
>> to a handful of task IDs, and relatively few users receive the
>> notifications.
>>
>> The run has now been going for 3 hours, which I can trace to a large,
>> time-consuming number of sequential reads on table SWWLOGHIST, issued
>> from ABAP class CL_SWF_RUN_GET_WI_DLT_REQUEST.
>>
>> The culprit is probably its method GET_DELTA_ENTRIES which, on the
>> very first run, naturally picks up a huge number of work items.
>>
>> Here is the statement that sticks out:
>>
>> SELECT hist~wi_id hist~method hist~timestamp
>>        FROM swwloghist AS hist
>>        INNER JOIN swwwihead AS head ON hist~client = head~client
>>                                    AND hist~wi_id  = head~wi_id
>>        INTO CORRESPONDING FIELDS OF TABLE lt_swwloghist
>>        FOR ALL ENTRIES IN lr_method
>>        WHERE hist~timestamp  GE me->im_timestamp
>>        AND   hist~method     EQ lr_method-low
>>        AND   head~wi_type    EQ swfco_wi_normal
>>        AND   head~wi_rh_task IN lr_task.
>>
>> There is a secondary database index SWWLOGHIST~002 on
>> [ CLIENT, TIMESTAMP, METHOD ].
>>
>> The join itself seems to be okay, since the tables match on the
>> primary key WI_ID. Perhaps WI_ID should be added to the database
>> index, which would let the database access the joined table without
>> having to read the full record set at that stage.
>> Furthermore, SWWWIHEAD has to be read for every entry found in
>> SWWLOGHIST.
>> Since FOR ALL ENTRIES was used, a combined WHERE clause is generated,
>> grouping about 5 to 10 selection clauses into one statement. The
>> number of SELECTs therefore grows with the number of work items in
>> the system, since on the initial run they all fall within the
>> requested me->im_timestamp (which is zero at that time).
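>>
>> Written out by hand, one such generated batch is equivalent to OR-ing
>> the driver rows together, roughly like this (a sketch only; 'CREATED'
>> and 'COMPLETED' are made-up stand-ins for the real method values):
>>
>> DATA: lt_hist TYPE STANDARD TABLE OF swwloghist,
>>       lv_ts   TYPE swwloghist-timestamp.
>>
>> SELECT * FROM swwloghist INTO TABLE lt_hist
>>        WHERE ( timestamp GE lv_ts AND method EQ 'CREATED' )
>>        OR    ( timestamp GE lv_ts AND method EQ 'COMPLETED' ).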
>>
>> ** Well, has anybody else come across this performance issue?
>> ** Does anyone know a trick to set the timestamp manually, so that I
>> can reduce the number of SELECTs for the initial load?
>>
>> We have 3,559,095 work items currently in the system.
>>
>> Any ideas, even the most absurd ones, are highly appreciated!
>>
>> With the very best wishes,
>> Florin