Parallel processing of Workflow Deadline Monitoring in 4.6C

Mike Gambier madgambler at hotmail.com
Sat Nov 25 18:23:20 EST 2006


Hi Dude, that's a variation on what we're planning to do.

Our options 1 and 2 would insert a constant into the FM call, naming either 
our dedicated deadline server or our new 'deadline' server group.

The variable you mention in 6.2 would make sense if we were intentionally 
choosing a target server for some tasks and not for others, I suppose. Or 
perhaps using some kind of rotational logic to shift the load as we saw fit.
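
For example, crude rotational logic keyed on the work item number could look 
something like this in our cloned monitor (the destination names are invented 
and lv_wim_rfc_dest is borrowed from your 6.2 snippet, so treat it purely as 
a sketch):

             data: lt_dests like rfcdes-rfcdest occurs 3 with header line,
                   lv_wi    type p,
                   lv_index type i.

*            invented destinations, each pointing at a different
*            server or server group in SM59
             append 'WORKFLOW_DL_0100' to lt_dests.
             append 'WORKFLOW_DL_0200' to lt_dests.
             append 'WORKFLOW_DL_0300' to lt_dests.

*            spread the load by work item number - no state to
*            remember between runs
             lv_wi    = ls_swwwidh-wi_id.
             lv_index = ( lv_wi mod 3 ) + 1.
             read table lt_dests index lv_index.
             lv_wim_rfc_dest = lt_dests.

That would scatter the deadline starts across three destinations without the 
job having to remember anything from one run to the next.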

Perhaps if you copied in the code of the FM that sets the variable, we could 
see how SAP has enhanced the process? Have they added parallel processing for 
Deadlines in 6.2, maybe?

MGT

>From: "Alon Raskin" <araskin at 3i-consulting.com>
>Reply-To: "SAP Workflow Users' Group" <sap-wug at mit.edu>
>To: "SAP Workflow Users' Group" <sap-wug at mit.edu>
>Subject: RE: Parallel processing of Workflow Deadline Monitoring in 4.6C
>Date: Sat, 25 Nov 2006 09:31:44 -0500
>
>
>
>Mike,
>
>Would this work?
>
>I am looking at a 6.2 system but I would assume that this would be
>applicable to your 4.6c system.
>
>If you look at the code of RSWWDHEX you will see that, in order to
>restart the work item, it makes a dynamic function module call. In my
>6.2 system this looks something like:
>
>             call function ls_swwwidh-wi_action
>               in background task
>               as separate unit
>               destination lv_wim_rfc_dest
>               exporting
>                 checked_wi     = lv_wi_handle->m_sww_wihead-wi_id
>                 wi_dh_stat     = lv_wi_handle->m_sww_wihead-wi_dh_stat
>                 restricted_log = lv_wi_handle->m_sww_wihead-wi_restlog
>                 creator        = lv_wi_handle->m_sww_wihead-wi_creator
>                 language       = lv_wi_handle->m_sww_wihead-wi_lang
>                 properties     = ls_sww_wihext.
>
>As you can see, the 'destination' is specified in the variable
>lv_wim_rfc_dest. This variable is populated by a call to the FM
>SWW_WIM_RFC_DESTINATION_GET. I assume you would simply update this call
>to ensure that the FM returns a destination for the logon group you
>specify. That way RSWWDHEX will execute on one server but the workflow
>execution will occur in the logon group that you specify.
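>
>Something along these lines, perhaps (I haven't checked the exact
>parameter names of SWW_WIM_RFC_DESTINATION_GET, so treat the importing
>parameter and the destination name below as placeholders):
>
>             call function 'SWW_WIM_RFC_DESTINATION_GET'
>               importing
>                 rfc_destination = lv_wim_rfc_dest. " param name assumed
>
>*            override the default with a destination tied to your
>*            logon group (maintained in SM59)
>             if lv_wim_rfc_dest = 'WORKFLOW_LOCAL_0100'.
>               lv_wim_rfc_dest = 'WORKFLOW_DL_0100'. " placeholder name
>             endif.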
>
>Alon
>
>-----Original Message-----
>From: sap-wug-bounces at mit.edu [mailto:sap-wug-bounces at mit.edu] On Behalf
>Of Mike Gambier
>Sent: 24 November 2006 11:12
>To: sap-wug at mit.edu
>Subject: Parallel processing of Workflow Deadline Monitoring in 4.6C
>
>Hello fellow WUGgers,
>
>We are faced with a bit of a new dilemma regarding WF Deadlines and seem
>to
>be faced with some difficult choices.
>
>Our old dilemma was this: we used to run RSWWDHEX every 15 minutes to
>pick up our steps that had passed their deadline entries (SWWWIDH /
>SWWWIDEADL), until this started to time out because the SELECT statement
>pulled too many entries back (simply because we have so many Workflows
>running). We also had an issue with the standard program respawning
>itself whilst its predecessor job was still running, which caused us a
>bit of grief. We hear this last bit has since been resolved in later SAP
>versions.
>
>So, to fix these issues we cloned the program and built in a MAX HITS
>parameter to reduce the number of deadlines it processed per run and
>added a
>self-terminate subroutine to ensure no two jobs ran concurrently.
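>
>(One simple way to implement such a check is to count active instances
>of the job in TBTCO; the job name below is invented:)
>
>*            abort if another instance of this job is already active
>             data: lv_active type i.
>
>             select count(*) from tbtco into lv_active
>               where jobname = 'Z_WF_DEADLINE_MON'
>                 and status  = 'R'.
>
>             if lv_active > 1.  " this run plus at least one other
>               leave program.
>             endif.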
>
>But, even after these changes we are faced with a NEW dilemma with WF
>Deadline Monitoring. Namely, it has a nasty habit of loading up whatever
>server the job is run on to progress the deadline! This manifests itself
>in dialog process 'hogging', or in excessive local tRFC entries in
>ARFCSSTATE when it can't get hold of a dialog process to use on that
>particular server (which can happen a lot if we have other heavy jobs
>running there). The load then shifts to RSARFCEX, which then struggles
>because everything is processed locally on whatever server it is run on.
>
>Unlike the Event Queue, there is no standard ready-made parallel
>processing option for Deadlines that we know of, at least not in 4.6C.
>So we're thinking of choosing one of these options:
>
>1. Amend our Deadline Monitoring program (will require a mod to SAP code
>as well) to redirect the first RFC to a new custom destination that can
>be processed separately from 'normal' Workflow tRFCs, e.g.
>'WORKFLOW_DL_0100' instead of 'WORKFLOW_LOCAL_0100'. The new destination
>would be set up to point to a completely different server than the one
>the Deadline Monitor job is currently running on. This won't diminish
>the load on the server where dialog processes are available, but at
>least it will shift the load when RSARFCEX runs. Obviously we would have
>to add a new run of RSARFCEX with this new destination to our schedule.
>
>2. Same as 1 (mod required), but the new destination will point to a
>server group destination (rather than a single server) to spread the
>load across multiple servers when the tRFCs are converted into qRFCs.
>This has the added benefit of reusing the qRFC queue (and its standard
>config settings and transactions) to buffer the start of each new
>deadline being processed. Once a deadline step is executed, any tRFCs
>that result will be appended as WORKFLOW_LOCAL_0100 as normal, because
>they will result from subsequent calls that will not be affected by our
>mod. The end result should be that the START of each deadline process
>chain is distributed across multiple servers (and therefore the demand
>for dialog processes is spread accordingly), but any tRFCs that result
>will end up being chucked back into the 'local' pot. Unfortunately this
>would mean that our version of RSWWDHEX would pass the baton on to
>RSQOWKEX (the outbound queue batch job) to actually progress the
>deadline, i.e. do any real work. We would therefore have two batch jobs
>to watch, and a noticeable delay between deadlines being selected and
>deadlines actually being progressed. Whether we can live with this we
>just don't know. The issue of different deadlines for the same Workflow
>being progressed on different servers is also a concern, but since we
>limit the number of deadlines we process per run anyway, that is
>something we already suffer from at the moment.
>
>3. Dynamic destination determination (OSS Note 888279) applied to all
>Workflow steps, not just deadlines. Scary stuff. This breaks the concept
>of a single server 'owning' a deadline process chain in its entirety.
>Considering the volumes of Workflow we have, we're uncertain what impact
>this will have system-wide.
>
>4. Redesign Deadline Monitoring to use the same persistence approach as
>Event Delivery and have a deadline queue. A complete overhaul using
>SWEQUEUE etc. as a guide. It would be a lovely project to do, but
>honestly we can't really justify the database costs, code changes and
>testing.
>
>We are currently favouring option 2 as a realistic way forward, as it
>seems to offer the simplest way of shifting the load around to prevent a
>single server from being hammered. It has risks and would require
>careful monitoring of the qRFC queues, but it seems a safer bet than
>overloading another single server (option 1), splitting up a single
>deadline chain across multiple servers (option 3) or costing the earth
>and becoming unsupportable (option 4).
>
>Has anyone out there implemented option 3, the OSS Note? We'd love to
>know...
>
>Or, if you have any alternative suggestions we'd be interested to hear
>them
>:)
>
>Regards,
>
>Mike GT
>
>_______________________________________________
>SAP-WUG mailing list
>SAP-WUG at mit.edu
>http://mailman.mit.edu/mailman/listinfo/sap-wug
>
