Parallel processing of Workflow Deadline Monitoring in 4.6C

Mike Gambier madgambler at hotmail.com
Mon Nov 27 05:14:04 EST 2006


Just noticed that our 4.6C code also calls SWW_WIM_RFC_DESTINATION_GET but 
this just fetches the single entry from the config table that is populated 
by SWU3.

So again, this is the destination for the whole of Workflow and not just for 
deadlines, so we don't really want to change that!

What we'll probably do is replace this FM call in the code with our own to 
return a server group instead and switch to aRFC.
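
Roughly what we have in mind, purely as a sketch (Z_DEADLINE_RFC_GROUP_GET 
and the 'DEADLINE_GRP' server group are made-up names, and the parameters 
are trimmed down from the 6.2 call Alon quoted below, so our 4.6C signature 
will probably differ):

* Sketch only: our own lookup instead of SWW_WIM_RFC_DESTINATION_GET,
* returning an RFC server group rather than a single destination.
  data: lv_group(20) type c,
        lv_task(32)  type c.

  call function 'Z_DEADLINE_RFC_GROUP_GET'   " made-up wrapper FM
    importing
      ev_group = lv_group.                   " e.g. 'DEADLINE_GRP'

  lv_task = ls_swwwidh-wi_id.

* Switch from tRFC to aRFC so each call is load-balanced across the group
  call function ls_swwwidh-wi_action
    starting new task lv_task
    destination in group lv_group
    exporting
      checked_wi            = ls_swwwidh-wi_id
    exceptions
      communication_failure = 1
      system_failure        = 2
      resource_failure      = 3.

A nice side effect of the aRFC route is the RESOURCE_FAILURE exception: when 
no dialog work process is free in the group the program can back off and 
retry, rather than piling entries into ARFCSSTATE.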

MGT

>From: "Mike Gambier" <madgambler at hotmail.com>
>Reply-To: "SAP Workflow Users' Group" <sap-wug at mit.edu>
>To: sap-wug at mit.edu
>Subject: RE: Parallel processing of Workflow Deadline Monitoring in 4.6C
>Date: Mon, 27 Nov 2006 10:03:40 +0000
>
>Hi Mike (and thanks Kjetil),
>
>Appreciate the response and initially that was our first approach too.
>Unfortunately this didn't quite cut it. All we found was that, on the whole,
>fewer dialog processes were hogged by the job on average (though sometimes
>no reduction was noticeable at all), but obviously this happened more often.
>
>We conducted a spike analysis and displaced some deadline settings in our
>WF definitions to avoid the usual 'one minute past midnight' mad rush. We
>had a few, of course, but on the whole it's the sheer volume of steps
>that's causing us grief, not the timing of them.
>
>On some days we have 30,000 WF instances (sometimes for a single WF
>definition, sometimes not) hitting deadlines quite legitimately. A side
>effect of billing 16 MILLION customers in SAP, you see.
>
>It's interesting to note that from version 6.2 onwards (maybe earlier?) SAP
>decided to introduce load balancing into RSWWDHEX using FM
>SWW_WIM_RFC_DESTINATION_GET, which can return a destination dynamically that
>can apparently also be a server group (thanks Alon).
>
>So they must have decided to enable parallel processing at some point.
>
>Mike GT
>
> >From: "Mike Pokraka" <asap at workflowconnections.com>
> >Reply-To: "SAP Workflow Users' Group" <sap-wug at mit.edu>
> >To: "SAP Workflow Users' Group" <sap-wug at mit.edu>
> >Subject: RE: Parallel processing of Workflow Deadline Monitoring in 4.6C
> >Date: Mon, 27 Nov 2006 09:29:42 -0000 (UTC)
> >
> >Hi Mike,
> >
> >I would kinda agree with Kjetil here that a more frequent schedule may
> >help you out. A side effect of having it run more often is that the data
> >is more likely to stay buffered in the system, so each run is faster in
> >addition to having fewer items to process.
> >
> >Also, have you done a performance trace on the offending select? The
> >table should be indexed, and this should be a very efficient job,
> >particularly because it's designed to run frequently.
> >
> >Perhaps you could also look at your WF design; you must be talking about
> >5,000-10,000 deadlines EXPIRING per hour on a moderately sized system to
> >hit that sort of problem. Or is the problem with a lot of deadlines being
> >hit all at once? Maybe shift a 'days' deadline into an expression and add
> >a different number of minutes to different steps in the WF builder to
> >stagger the deadlines being hit.
> >
> >Cheers,
> >Mike
> >
> >
> >
> >On Mon, November 27, 2006 06:34, Kjetil Kilhavn wrote:
> > > First a general observation/recommendation: why do you copy the
> > > standard program instead of modifying it? All you achieve is to *not*
> > > get SAP's standard enhancements of the program when you apply support
> > > packages. We've modified RSWUWFML2 quite seriously with our own
> > > enhancements (additional selection screen options), and since we
> > > modified instead of copying it, when SAP had a fix for a logon language
> > > issue it came in without us having to create a new copy of the program.
> > > I know SPAU lists are a pain in the neck, but there are advantages too,
> > > such as becoming aware of changes in the standard programs which may
> > > allow you to remove your modifications.
> > >
> > > The only real advantage I see in copying is that you avoid the danger
> > > of someone resetting an object to standard (which just happened with
> > > one modification in our upgrade from 4.6C to 2005).
> > >
> > > Anyway, back to the issue at hand. Have you tried rescheduling the job
> > > to run more frequently? It sounds strange that this should help, but it
> > > should be worth a try. We run it every three minutes. You should run it
> > > non-scheduled first though, to clear the queue.
> > > --
> > > Kjetil Kilhavn, Statoil OFT GBS BAS DEV SAP
> > >
> > >
> > >> -----Original Message-----
> > >> From: sap-wug-bounces at mit.edu
> > >> [mailto:sap-wug-bounces at mit.edu] On Behalf Of Mike Gambier
> > >> Sent: Sunday, November 26, 2006 12:23 AM
> > >> To: sap-wug at mit.edu
> > >> Subject: RE: Parallel processing of Workflow Deadline
> > >> Monitoring in 4.6C
> > >>
> > >> Hi Dude, that's a variation on what we're planning to do.
> > >>
> > >> Our options 1 and 2 would insert a constant in the FM call
> > >> that would be our dedicated deadline server or our new
> > >> 'deadline' server group.
> > >>
> > >> The variable you mention in 6.2 would make sense if we were
> > >> intentionally choosing a target server for some tasks and not
> > >> for others I suppose. Or perhaps using some kind of
> > >> rotational logic to shift the load as we saw fit.
> > >>
> > >> Perhaps if you copied in the FM that sets the variable we
> > >> could see how SAP has enhanced the process? Have they added
> > >> parallel processing for Deadlines in 6.2 maybe?
> > >>
> > >> MGT
> > >>
> > >> >From: "Alon Raskin" <araskin at 3i-consulting.com>
> > >> >Reply-To: "SAP Workflow Users' Group" <sap-wug at mit.edu>
> > >> >To: "SAP Workflow Users' Group" <sap-wug at mit.edu>
> > >> >Subject: RE: Parallel processing of Workflow Deadline
> > >> Monitoring in 4.6C
> > >> >Date: Sat, 25 Nov 2006 09:31:44 -0500
> > >> >
> > >> >
> > >> >
> > >> >Mike,
> > >> >
> > >> >Would this work?
> > >> >
> > >> >I am looking at a 6.2 system but I would assume that this would be
> > >> >applicable to your 4.6c system.
> > >> >
> > >> >If you look at the code of RSWWDHEX you will see that, in order to
> > >> >restart the work item, it makes a dynamic function module call. In my
> > >> >6.2 system this looks something like:
> > >> >
> > >> >             call function ls_swwwidh-wi_action
> > >> >               in background task
> > >> >               as separate unit
> > >> >               destination lv_wim_rfc_dest
> > >> >               exporting
> > >> >                 checked_wi     = lv_wi_handle->m_sww_wihead-wi_id
> > >> >                 wi_dh_stat     = lv_wi_handle->m_sww_wihead-wi_dh_stat
> > >> >                 restricted_log = lv_wi_handle->m_sww_wihead-wi_restlog
> > >> >                 creator        = lv_wi_handle->m_sww_wihead-wi_creator
> > >> >                 language       = lv_wi_handle->m_sww_wihead-wi_lang
> > >> >                 properties     = ls_sww_wihext.
> > >> >
> > >> >As you can see, the 'destination' is specified in variable
> > >> >lv_wim_rfc_dest. This variable is populated by a call to FM
> > >> >SWW_WIM_RFC_DESTINATION_GET. I assume that you would simply update this
> > >> >call so that the dynamic FM call is executed using the logon group that
> > >> >you specify. That way RSWWDHEX will execute on one server but the
> > >> >workflow execution will occur in the logon group that you specify.
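> > >> >
> > >> >Purely as a sketch (the 'WORKFLOW_DL_GROUP' destination is a made-up
> > >> >name and would have to be created in SM59 with load balancing against
> > >> >your logon group), the change could be as small as:
> > >> >
> > >> >* Sketch only: bypass whatever SWW_WIM_RFC_DESTINATION_GET returns and
> > >> >* use a load-balanced SM59 destination for the dynamic call instead.
> > >> >  lv_wim_rfc_dest = 'WORKFLOW_DL_GROUP'.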
> > >> >
> > >> >Alon
> > >> >
> > >> >-----Original Message-----
> > >> >From: sap-wug-bounces at mit.edu [mailto:sap-wug-bounces at mit.edu] On
> > >> >Behalf Of Mike Gambier
> > >> >Sent: 24 November 2006 11:12
> > >> >To: sap-wug at mit.edu
> > >> >Subject: Parallel processing of Workflow Deadline Monitoring in 4.6C
> > >> >
> > >> >Hello fellow WUGgers,
> > >> >
> > >> >We are faced with a bit of a new dilemma regarding WF Deadlines and
> > >> >it presents some difficult choices.
> > >> >
> > >> >Our old dilemma was this: we used to run RSWWDHEX every 15 minutes to
> > >> >pick up our steps that had passed their deadline entries (SWWWIDH /
> > >> >SWWWIDEADL), until this started to time out because the SELECT
> > >> >statement pulled too many entries back (simply because we have so many
> > >> >Workflows running). We also had an issue with the standard program
> > >> >respawning itself whilst its predecessor job was still running, which
> > >> >caused us a bit of grief. This last bit has since been resolved, we
> > >> >hear, in later SAP versions.
> > >> >
> > >> >So, to fix these issues we cloned the program and built in a
> > >> MAX HITS
> > >> >parameter to reduce the number of deadlines it processed per run and
> > >> >added a self-terminate subroutine to ensure no two jobs ran
> > >> >concurrently.
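> > >> >
> > >> >For what it's worth, the self-terminate check is nothing clever; a
> > >> >rough sketch of the idea (names here are illustrative only) is a form
> > >> >that looks for another active job of the same name in TBTCO and bows
> > >> >out if it finds one:
> > >> >
> > >> >    form check_no_concurrent_run.
> > >> >      data: lv_jobname  like tbtco-jobname,
> > >> >            lv_jobcount like tbtco-jobcount,
> > >> >            lv_count    type i.
> > >> >
> > >> >*     Which background job are we running under?
> > >> >      call function 'GET_JOB_RUNTIME_INFO'
> > >> >        importing
> > >> >          jobname         = lv_jobname
> > >> >          jobcount        = lv_jobcount
> > >> >        exceptions
> > >> >          no_runtime_info = 1
> > >> >          others          = 2.
> > >> >      check sy-subrc = 0.
> > >> >
> > >> >*     Any other active ('R') job with the same name?
> > >> >      select count(*) from tbtco into lv_count
> > >> >             where jobname  =  lv_jobname
> > >> >             and   jobcount <> lv_jobcount
> > >> >             and   status   =  'R'.
> > >> >
> > >> >      if lv_count > 0.
> > >> >        leave program.           " predecessor still running
> > >> >      endif.
> > >> >    endform.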
> > >> >
> > >> >But, even after these changes we are faced with a NEW
> > >> dilemma with WF
> > >> >Deadline Monitoring. Namely it has a nasty habit of loading
> > >> up whatever
> > >> >server the job is run on to progress the deadline! This manifests
> > >> >itself in dialog process 'hogging' or excessive local tRFC
> > >> entries in
> > >> >ARFCSSTATE where it can't get hold of a dialog process to
> > >> use on that
> > >> >particular server (which can happen a lot if we have other
> > >> heavy jobs
> > >> >running there). The load then shifts to RSARFCEX which then
> > >> struggles
> > >> >with the load as everything is processed locally on whatever
> > >> server it
> > >> >is run on.
> > >> >
> > >> >Unlike the Event Queue, there is no standard ready-made parallel
> > >> >processing option for Deadlines that we know of, at least not in 4.6C.
> > >> >So we're thinking of choosing one of these options:
> > >> >
> > >> >1. Amend our Deadline Monitoring program (will require a mod to SAP
> > >> >code as well) to redirect the first RFC to a new custom destination
> > >> >that can be processed separately from 'normal' Workflow tRFCs, e.g.
> > >> >'WORKFLOW_DL_0100' instead of 'WORKFLOW_LOCAL_0100' (a rough sketch of
> > >> >what we mean follows after option 4 below). The new destination would
> > >> >be set up to point to a completely different server than the one the
> > >> >Deadline Monitor job is currently running on. This won't diminish the
> > >> >load on the server where dialog processes are available, but at least
> > >> >it will shift the load on RSARFCEX when it runs. Obviously we would
> > >> >have to add a new RSARFCEX run with this new destination to our
> > >> >schedule.
> > >> >
> > >> >2. Same as 1 (mod required) but the new destination will point to a
> > >> >server group destination (rather than a single server) to spread the
> > >> >load across multiple servers when the tRFCs are converted into qRFCs.
> > >> >Has the added benefit of reusing the qRFC queue (and its standard
> > >> >config settings and transactions) to buffer the start of each new
> > >> >deadline being processed. Once a deadline step is executed, any tRFCs
> > >> >that result will be appended as WORKFLOW_LOCAL_0100 as normal because
> > >> >they will result from subsequent calls that will not be affected by
> > >> >our mod setting. The end result should be that the START of each
> > >> >deadline process chain is distributed across multiple servers (and
> > >> >therefore will spread the demand for dialog processes accordingly),
> > >> >but any tRFCs that result will end up being chucked back into the
> > >> >'local' pot. Unfortunately this would mean that our version of
> > >> >SWWDHEX would pass the baton on to RSQOWKEX (the outbound queue batch
> > >> >job) to actually progress the deadline, i.e. do any real work. We
> > >> >would therefore have two batch jobs to watch and a noticeable delay
> > >> >between deadlines being selected and deadlines actually being
> > >> >progressed. Whether we can live with this we just don't know. The
> > >> >issue of different deadlines for the same Workflow being progressed
> > >> >on different servers is also a concern, but since we limit the number
> > >> >of deadlines we process per run anyway, that is something we already
> > >> >suffer from at the moment.
> > >> >
> > >> >3. Dynamic destination determination (OSS Note 888279) applied to
> > >> >all Workflow steps, not just deadlines. Scary stuff. Breaks the
> > >> >concept of a single server 'owning' a deadline process chain in its
> > >> >entirety. Considering the volumes of Workflow we have, we're
> > >> >uncertain as to what impact this will have system-wide.
> > >> >
> > >> >4. Redesign Deadline Monitoring to use the same persistence approach
> > >> >as Event Delivery and have a deadline queue. Complete overhaul using
> > >> >SWEQUEUE etc. as a guide. Would be a lovely project to do but honestly
> > >> >we can't really justify the database costs, code changes and testing.
> > >> >
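> > >> >As a rough sketch of the redirection in options 1 and 2 (the variable
> > >> >names are borrowed from the 6.2 code Alon posted, so our 4.6C
> > >> >equivalent will differ slightly; for option 2 the same made-up
> > >> >destination would simply be defined against a server group):
> > >> >
> > >> >*  Sketch only: same dynamic call as today, but against a separate
> > >> >*  'WORKFLOW_DL_0100' destination (to be created in SM59) so the
> > >> >*  resulting tRFCs can be processed away from the monitoring server.
> > >> >   call function ls_swwwidh-wi_action
> > >> >     in background task
> > >> >     as separate unit
> > >> >     destination 'WORKFLOW_DL_0100'
> > >> >     exporting
> > >> >       checked_wi = ls_swwwidh-wi_id.
> > >> >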
> > >> >We are currently favouring option 2 as a realistic way forward as it
> > >> >seems to offer the simplest way of shifting the load around to prevent
> > >> >a single server from being hammered. It has risks and would require
> > >> >careful monitoring of the qRFC queues, but it seems a safer bet than
> > >> >overloading another single server (option 1), splitting up a single
> > >> >deadline chain across multiple servers (option 3), or costing the
> > >> >earth and becoming unsupportable (option 4).
> > >> >
> > >> >Has anyone out there implemented option 3, the OSS Note?
> > >> We'd love to
> > >> >know...
> > >> >
> > >> >Or, if you have any alternative suggestions we'd be
> > >> interested to hear
> > >> >them
> > >> >:)
> > >> >
> > >> >Regards,
> > >> >
> > >> >Mike GT
> > >
> > >
> > >
> > >
> >
> >
>
>
>_______________________________________________
>SAP-WUG mailing list
>SAP-WUG at mit.edu
>http://mailman.mit.edu/mailman/listinfo/sap-wug





More information about the SAP-WUG mailing list