ATeam Chronicles

Configure Adapter Threads in Oracle SOA 11G


Introduction

This article summarizes how to configure multiple threads for inbound (polling) adapters in Oracle SOA 11g.

Main Article

In an earlier posting, I mentioned that you can configure multiple threads for inbound (polling) adapters in Oracle SOA. However, the ways to configure multiple threads vary between adapters and product versions, and the information is scattered across multiple pieces of documentation. Therefore, it is worth consolidating them here. The following describes how to configure adapter threads in Oracle SOA 11G:

1. JMS Adapter

  • Property Name: adapter.jms.receive.threads
  • Configuration File: adapter binding at composite.xml
  • Documentation: http://docs.oracle.com/cd/E21764_01/core.1111/e10108/adapters.htm#BABCGCEC
  • Example:
    <service name="dequeue" ui:wsdlLocation="dequeue.wsdl">
      <interface.wsdl interface="http://xmlns.oracle.com/pcbpel/adapter/jms/textmessageusingqueues/textmessageusingqueues/dequeue%2F#wsdl.interface(Consume_Message_ptt)"/>
      <binding.jca config="dequeue_jms.jca">
        <property name="adapter.jms.receive.threads" type="xs:string" many="false">10</property>
      </binding.jca>
    </service>

2. AQ Adapter

  • Property Name: adapter.aq.dequeue.threads
  • Configuration File: composite.xml
  • Documentation: http://docs.oracle.com/cd/E21764_01/core.1111/e10108/adapters.htm#BABDEBEE
  • Example:
    <service name="dequeue" ui:wsdlLocation="dequeue.wsdl">
      <interface.wsdl interface="http://xmlns.oracle.com/pcbpel/adapter/aq/raw/raw/dequeue/#wsdl.interface(Dequeue_ptt)"/>
      <binding.jca config="dequeue_aq.jca">
        <property name="adapter.aq.dequeue.threads" type="xs:string" many="false">10</property>
      </binding.jca>
    </service>

3. MQ Adapter

  • Property Name: InboundThreadCount
  • Configuration File: *.jca file
  • Documentation: http://docs.oracle.com/cd/E21764_01/core.1111/e10108/adapters.htm#BABDEBEE
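  • Example (a hedged sketch only, since the documentation excerpt above carries no sample: InboundThreadCount is the documented property, while the activation-spec class name and queue settings are illustrative placeholders modeled on the activation-spec format used by the database adapter example below):

```xml
<!-- Sketch only: InboundThreadCount is the documented property; the
     className and other property values are illustrative placeholders -->
<activation-spec className="oracle.tip.adapter.mq.inbound.MessageActivationSpecImpl">
  <property name="QueueName" value="INBOUND_QUEUE"/>
  <property name="InboundThreadCount" value="5"/>
</activation-spec>
```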

4. Database Adapter

It takes multiple steps to configure database adapter threads.

Step 1: Configure distributed polling. The query in the polling database adapter needs to use distributed polling in order to avoid data duplication. Please follow the two best practices in the documentation to establish the right kind of distributed polling.

Step 2. Set activationInstances as an adapter binding property in composite.xml (SOA 11G) to achieve multiple threads in the database adapter.

Alternatively, you can set NumberOfThreads in the *.jca file (SOA 11.1.1.3 onward). Technically, the activationInstances and NumberOfThreads properties work differently, in that NumberOfThreads works in the scope of each activation agent. Before SOA 11.1.1.3, NumberOfThreads is NOT supported in a clustered environment or when activationInstances>1. From SOA 11.1.1.3 onward, you can use either activationInstances or NumberOfThreads to achieve the multi-threading effect. But if for some reason you set both, the total number of concurrent threads will be activationInstances x NumberOfThreads. For example, if you set activationInstances=2 and NumberOfThreads=5, there are 5 threads running within each of the 2 activation instances, for a total of 10 concurrent threads.
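As a sketch, the two alternatives could look like the following (service and file names are illustrative; the binding-property pattern mirrors the JMS/AQ examples above, and the activation-spec pattern mirrors the one shown in Step 3):

```xml
<!-- composite.xml: activationInstances as an adapter binding property -->
<binding.jca config="poll_db.jca">
  <property name="activationInstances" type="xs:string" many="false">2</property>
</binding.jca>

<!-- poll_db.jca (SOA 11.1.1.3 onward): NumberOfThreads in the activation spec -->
<activation-spec className="oracle.tip.adapter.db.DBActivationSpec">
  <property name="NumberOfThreads" value="5"/>
</activation-spec>
```

With both set as above, you would end up with 2 x 5 = 10 concurrent threads.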

Step 3. Tune MaxTransactionSize and MaxRaiseSize to throttle the incoming messages alongside activationInstances/NumberOfThreads. These two properties are configured either through the DbAdapter wizard in JDeveloper, or manually, directly in the *.jca file.

<endpoint-activation portType="poll_ptt" operation="receive">
  <activation-spec className="oracle.tip.adapter.db.DBActivationSpec">
    ...
    <property name="PollingStrategy" value="LogicalDeletePollingStrategy"/>
    <property name="MaxRaiseSize" value="5"/>
    <property name="MaxTransactionSize" value="10"/>
    ...
  </activation-spec>
</endpoint-activation>

5. File/FTP Adapter

The File/FTP adapter’s threading model is a bit more complex. In essence, there is a separation between the poller thread and the processor threads, except in the “Single Threaded Model” (by comparison, the JMS/AQ adapters always use the same thread to poll and process). There is always only one poller thread, while there can be multiple processor threads. Please go through the documentation thoroughly so that you can choose a threading model appropriate to your application.

[Image: default threading model]
Step 1: Choose a threading model

Step 2: Configure threads depending on the threading model you choose

If you choose the Default Threading Model, you can set the count of global processor threads through the oracle.tip.adapter.file.numProcessorThreads property in the pc.properties file. pc.properties is read from the classpath, so you could, for example, copy it to some directory under ORACLE_HOME and then reference that directory in the WLS classpath in setDomainEnv.sh. However, the Partitioned Threading Model is recommended over the Default Threading Model (see below) if you do have a need to define processor threads.
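The classpath setup above could be sketched as follows. The directory name and ORACLE_HOME value are hypothetical; only the oracle.tip.adapter.file.numProcessorThreads property name comes from the documentation, and the thread count of 10 is illustrative:

```shell
# Hypothetical paths -- adjust ORACLE_HOME and the directory name to your install.
ORACLE_HOME=/tmp/oracle_home
mkdir -p "$ORACLE_HOME/adapter_config"

# Put a copy of pc.properties in that directory with the documented property set
# (here written from scratch for illustration; normally you would copy and edit
# the shipped pc.properties):
echo "oracle.tip.adapter.file.numProcessorThreads=10" > "$ORACLE_HOME/adapter_config/pc.properties"

# Then reference the directory in setDomainEnv.sh, e.g.:
#   PRE_CLASSPATH="$ORACLE_HOME/adapter_config:$PRE_CLASSPATH"
```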

If you choose the Single Threaded Model, set SingleThreadModel=true in the *.jca file. As the name implies, you don’t need to worry about any thread counts.

<activation-spec className="oracle.tip.adapter.file.inbound.FileActivationSpec">
  <property .../>
  <property name="SingleThreadModel" value="true"/>
  <property .../>
</activation-spec>

If you choose the Partitioned Threading Model, you can set the count of processor threads per adapter in the *.jca file:

<activation-spec className="oracle.tip.adapter.file.inbound.FileActivationSpec">
  <property .../>
  <property name="ThreadCount" value="4"/>
  <property .../>
</activation-spec>

Please note the value of the ThreadCount is closely related to the kind of threading model you choose:

  • If the ThreadCount property is set to 0, then the threading behavior is like that of the single threaded model.
  • If the ThreadCount property is set to -1, then the global thread pool is used, as in the default threading model.
  • The maximum value for the ThreadCount property is 40.

Configure Adapter Threads in Oracle SOA 10G


Introduction

In a separate posting, I mentioned that you can configure multiple threads for inbound (polling) adapters in Oracle SOA. However, the ways to configure multiple threads vary between adapters and product versions, and the information is scattered across multiple pieces of documentation. Hence it is worth consolidating them here.

Main Article

This post is for Oracle SOA 10.1.3.x. I have another blog post for configuring adapter threads in Oracle SOA 11G.

1. JMS Adapter

  • For BPEL: Set ‘adapter.jms.receive.threads’ as activation agent properties in bpel.xml
<activationAgents>
  <activationAgent className="…" partnerLink="MsgQueuePL">
    ... <property name="adapter.jms.receive.threads">5</property>
  </activationAgent>
</activationAgents>
  • For OESB 10.1.3.3 onward: Set it as an endpoint property via JDeveloper (10.1.3.3) or directly in the *.esbsvc file
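As a sketch, the endpoint property in the *.esbsvc file could look like this (the structure mirrors the ESB endpointProperties example shown for the AQ adapter below; the thread count is illustrative):

```xml
<endpointProperties>
  <property name="adapter.jms.receive.threads" value="5"/>
</endpointProperties>
```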

[Images: OESBAdapterThreads, OESBAdapterThreadsFile]

2. AQ Adapter

This blog summarizes the different kinds of settings for AQ adapter threads very well. Let me copy and paste it below:

#### FOR ESB/BPEL 10.1.3.4.x
<property name="adapter.aq.dequeue.threads">NO_OF_THREADS</property>
#### FOR BPEL 10.1.3.3.x
<property name="activationInstances">NO_OF_THREADS</property>
#### FOR ESB 10.1.3.3.x
<endpointProperties>
  <property name="numberOfAqMessageListeners" value="NO_OF_THREADS"/>
</endpointProperties>

3. MQ Adapter

Oracle Support Tech Note “How to limit number of threads for reading messages from a queue” (Doc ID 1144847.1) describes the details of how to set up the threads for the inbound MQ Adapter. Copying it here:

There is a parameter called “InboundThreadCount” which is valid for 11g; it has also been tested on SOA 10.1.3.5 and confirmed to be working on 10.1.3.4.

To set the parameter, add the following to the inbound MQ Adapter configuration:

<jca:operation
    ActivationSpec="oracle.tip.adapter.mq.inbound.SyncReqResActivationSpecImpl"
    MessageType="REQUEST"
    QueueName="INBOUND_QUEUE"
    Priority="AS_Q_DEF"
    Persistence="AS_Q_DEF"
    InboundThreadCount="1"  <==== This parameter
    Expiry="NEVER"
    OpaqueSchema="true" >
</jca:operation>

4. Database Adapter

It takes multiple steps to configure database adapter threads.

Step 1: Configure distributed polling. The query in the polling database adapter needs to use distributed polling in order to avoid data duplication.

To set usesSkipLocking in SOA 10.1.3.x, you must first declare the property in ra.xml, then set the value in oc4j-ra.xml. No re-packaging or redeployment of DbAdapter.rar is needed.

Step 2. Set activationInstances as an activation agent property in bpel.xml to achieve multiple threads in the database adapter.

Note: There is another property called NumberOfThreads. This property is NOT supported in a clustered environment or when activationInstances>1 in SOA 10.1.3.x, or even in versions prior to SOA 11.1.1.3.
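Step 2 could be sketched in bpel.xml as follows (the partnerLink name is illustrative, and the activationAgent structure follows the JMS example above; verify the className against your own wizard-generated bpel.xml):

```xml
<activationAgents>
  <activationAgent className="oracle.tip.adapter.fw.agent.jca.JCAActivationAgent"
                   partnerLink="PollDbPL">
    <property name="activationInstances">5</property>
  </activationAgent>
</activationAgents>
```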

Step 3. Tune MaxTransactionSize and MaxRaiseSize to throttle the incoming messages alongside activationInstances/NumberOfThreads. These two properties are configured either through the DbAdapter wizard in JDeveloper, or manually, directly in the WSDL file of the DbAdapter.

5. File/FTP Adapter

The File/FTP adapter has a separate poller thread and processor threads (by comparison, the JMS/AQ adapters always use the same thread to poll and process). There is always only one poller thread, while there can be multiple processor threads. In SOA 10.1.3.x, the processor threads are globally shared among File and FTP adapter instances, while in 11G you have the option to configure a private processor thread pool per adapter *.jca file.

[Image: default threading model]

In SOA 10.1.3.x, the configuration files in which you set the File/FTP adapter processor threads are:
[SOA_HOME]\bpel\system\services\config\pc.properties
[SOA_HOME]\integration\esb\config\pc.properties (need to rename from pc.properties.esb)

The property name is:
oracle.tip.adapter.file.numProcessorThreads=4

If BPEL and ESB are co-located on the same OC4J container, the pc.properties for BPEL takes precedence over that of ESB. In such cases, the values set in SOA_HOME\bpel\system\services\config\pc.properties will suffice.

Writing a Human Task UI in .Net (C#/ASP.NET) or in fact anything other than ADF


Introduction

As you know, you can create the user interfaces for your human tasks using ADF.  JDeveloper allows you to auto-generate a human task user interface (form), and it also includes a wizard that gives you a bit more control over what is produced.  Now ADF is a fine framework, but some people already have a pretty heavy investment in some other framework, and a lot of these people would really like to be able to use their framework of choice to build their human task user interfaces.  And the good news is, they can!  And it’s not that hard to do, once you know how :-)

Main Article

In this article, we will look at how you build a human task user interface using C# and ASP.NET.  This human task UI will show up right there in BPM workspace, just like the ADF ones do.  Here’s what it will look like when it is done:

[Image: dotnet-tf1]

Those red arrows won’t be there!  They are just there to show you where it is.  That part, where you normally see the ADF task form, is a .Net application.

Now, as I said, you can use any framework to do this – as long as it is capable of calling Java APIs or web services and reading some data from the HTTP Request object’s Query String.

In this post we will build a UI that is specific to the task in question – like the ADF ones that you generate are.  But it would not be a long way from here to building a generator so that you could auto-generate .Net user interfaces just like you can for ADF.

What you need

To follow along this post you are going to need a couple of things:

  • JDeveloper with the SOA and BPM extensions installed, at least version 11.1.1.5 plus the Feature Pack patch,
  • A BPM server, the same version as JDeveloper,
  • A copy of Visual Studio with the C# and web applications options installed.  I used Visual Studio 2010 Professional, but the free Visual Studio Express editions will also work if you don’t own a copy of Visual Studio.  Just make sure you get one with web applications and C# included, and
  • either a lot of time and patience to type boring boilerplate code, or a copy of AutoMapper from here.  I recommend you take the AutoMapper option…

If you want to get a feel for calling the BPM/HWF APIs/web services from .Net, you might want to review this post.

Creating the composite

First thing we are going to want is a composite to play with.  We can just make a really simple one with just a human task in it.  That will be enough to do what we want to do here.  In fact, something as simple as this will do admirably:

[Image: dotnet-tf2]

 To create this, open up JDeveloper and create a new BPM Application by selecting New from the Application menu.  In the wizard choose BPM Application in the Application Template section and give your application a name.  I called mine DotNetTest.  Then click on Next.  Give your project a name, I used the same name.  Then click on Next.  Choose Composite with BPMN Process and click on Finish.

In the next dialog, give your process a name, I called mine DotNetTest too.  Then take the defaults and continue.

Drag a User Task into your process from the component palette and drop it on the line between the Start and End nodes, as shown in the diagram above.  Notice that the line turns blue when you hover above it in the right spot.  If you don’t see the component palette, you can open it from the View menu.

Apologies that my names of things in this sample are not super imaginative…

Now we need to create some data definitions.  Open the BPM Project Navigator.  If you don’t see it, you can open it from the View menu.  Expand out your project and the Business Catalog.  Right click on the Business Catalog and choose New then Module from the popup menu.  Call the module Data.

Then right click on your new module and select New then Business Object from the popup menu.  Name your new business object.  I called mine BusinessObject1 (told you they were not very imaginative).  Then click on the green plus icon in the Attributes section to add two new attributes.  Make them both of type String.  I called mine attribute1 and attribute2. Go ahead and save your work (select Save All from the File menu.)

Now return to your process.  Click on the background of the process to make sure the structure pane shows the structure of the process.  It should look a bit like this:

[Image: dotnet-tf3]

Now right click on the Process Data Objects in the structure pane and select New from the popup menu.  Give it a name, I chose dataObject1, and choose <Component> for the Type.  Then click on the little magnifying glass icon and choose the business object you just defined.

[Image: dotnet-tf4]

Go ahead and create a second variable (process data object) of type String called dataObject2.

Now let’s set up our data in this process.  Right click on the Start node and open its Properties from the popup menu.  Go to the Implementation tab and add an argument by clicking on the little green plus icon in the Arguments Definition section. I called mine argument1.  Set the type to your business object (Data.BusinessObject1 if you used the same names as me.)

Now click on the Data Associations link.  Map your argument1 into your dataObject1 as shown in the image below.  Then right click on dataObject2 and choose Expression from the popup menu.  Enter “hello” (with the quotes) as the expression.  Your data associations should now look a little like this:

[Image: dotnet-tf5]

That takes care of our inputs.  When you start the process you will type in the two strings that get put into dataObject1 and dataObject2 will get set to “hello.”

Now, let’s set up the human task.  To make it interesting we are going to allow editing of some data but not of others.

Open the human task properties and go to the Implementation tab.  Click on the green plus icon to create a new human task.  Give it a name, I called mine Humantask1.  Click on the little magnifying glass icon next to Outcomes and set the outcomes to just one option – OK.  Then click on the plus icon next to Parameters to open the Browse Data Objects window that you see on the right hand side of the image below.  Drag your two data objects into the Parameters area as indicated by the red arrow.  Tick the box to make only dataObject2 editable.  Then click on OK to complete the human task definition.  We will take the defaults for everything else.

[Image: dotnet-tf6]

Now click on the Data Associations link.  Set up the input and output mappings as shown:

[Image: dotnet-tf7]

Right, now we are ready to go ahead and deploy our composite to our server instance.  You can do this back in the Application Navigator using the Deploy option in the popup menu on the project.

[Image: dotnet-tf8]

Follow through the wizard, I assume that you know how to do this by now if you are a regular reader.  If not, you can go ahead and take the defaults on this one.

When the deployment is finished, go to Enterprise Manager (at http://yourserver:7001/em) and login as an administrative user (like weblogic) and then navigate into the SOA folder and you should see your shiny new composite there.  Something like this:

[Image: dotnet-tf9]

Click on it to open the composite page, then click on the Test button and go ahead and launch a couple of instances.  We will use them later.

While we are here, let’s tell the runtime that we are planning to use our own user interface for this task.  We have not created it yet, but we have a pretty good idea what the details will be.  Go back to the composite’s main page.  Down at the bottom you should see a list of the components in the composite, including your Humantask1.  Click on that to bring up the settings for that component.  Then open the Administration tab in these settings.

[Image: dotnet-tf10]

Click on the green plus icon next to Add URI.  Provide the values you see in the image above.  We are just going to use the built in test environment in Visual Studio in this post.  No need to install IIS for this.  Of course in real life you would install it and deploy your web applications to IIS.  Make sure you click on the Apply button when you are done to save your changes.

The .Net Application

Now, let’s get to work on the fun part!

Open Visual Studio and start a new project by choosing New then Project from the File menu.  In the New Project dialog box, open the Visual C# folder and select the Web category.  Make sure you have .Net Framework 4 selected (you will have installed this with Visual Studio most likely – if not, stop and go install it now) and choose the ASP.NET Web Application template.  Give your project a name, I called mine WebApplication2.  Click on OK to create your application.

[Image: dotnet-tf11]

We need to tell our project about AutoMapper. First of all, go extract the AutoMapper.zip into your project directory, e.g. c:\users\mark\documents\visual studio 2010\Projects\WebApplication2\WebApplication2.  This will give you three files – AutoMapper.dll, AutoMapper.pdb and AutoMapper.xml.  For a discussion of what AutoMapper is and why we want it – see this post.

In the Solution Explorer, right click on the References folder and choose Add Reference… from the popup menu.

[Image: dotnet-tf12]

In the Add Reference window, go to the Browse tab.  You should see the AutoMapper.dll that you just unzipped right there in your project directory.  Select it and click on OK to add a reference to your project.

[Image: dotnet-tf13]

Now, we need to add references to the services that we will be using.  We need to get the WSDL addresses for the TaskService and the TaskQueryService.  You can work these out using the following examples:

http://yourserver:8001/integration/services/TaskQueryService/TaskQueryService?wsdl

http://yourserver:8001/integration/services/TaskService/TaskServicePort?wsdl

To add the references, right click on the Service References folder and select Add Service Reference… from the popup menu.

[Image: dotnet-tf14]

Add the two web services (one at a time).  Enter the WSDL URL in the Address field, then click on the Go button.  You will see a description of the services available as shown in the image below.  Enter a name for the service in the Namespace field and then click on the OK button.  I called mine TaskService and TaskQueryService.

[Image: dotnet-tf15]

Now, because these two services use WS-Security, we need to tell .Net to use WS-Security.  This is done by editing the web.config file.  You should see it right there in the Solution Explorer, go ahead and open it and scroll down to the bottom.  Here is the part we are interested in:

<bindings>
      <basicHttpBinding>
        <binding name="TaskQueryServiceSOAPBinding" closeTimeout="00:01:00"
          openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00"
          allowCookies="false" bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard"
          maxBufferSize="65536" maxBufferPoolSize="524288" maxReceivedMessageSize="65536"
          messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered"
          useDefaultWebProxy="true">
          <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384"
            maxBytesPerRead="4096" maxNameTableCharCount="16384" />
          <security mode="None">
            <transport clientCredentialType="None" proxyCredentialType="None"
              realm="" />
            <message clientCredentialType="UserName" algorithmSuite="Default" />
          </security>
        </binding>
        <binding name="TaskServiceSOAPBinding" closeTimeout="00:01:00"
          openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00"
          allowCookies="false" bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard"
          maxBufferSize="65536" maxBufferPoolSize="524288" maxReceivedMessageSize="65536"
          messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered"
          useDefaultWebProxy="true">
          <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384"
            maxBytesPerRead="4096" maxNameTableCharCount="16384" />
          <security mode="None">
            <transport clientCredentialType="None" proxyCredentialType="None"
              realm="" />
            <message clientCredentialType="UserName" algorithmSuite="Default" />
          </security>
        </binding>
      </basicHttpBinding>
    </bindings>
    <client>
      <endpoint address="http://ps5.mark.oracle.com:8001/integration/services/TaskQueryService/TaskQueryService2/*"
        binding="basicHttpBinding" bindingConfiguration="TaskQueryServiceSOAPBinding"
        contract="TaskQueryService.TaskQueryService" name="TaskQueryServicePortSAML" />
      <endpoint address="http://ps5.mark.oracle.com:8001/integration/services/TaskQueryService/TaskQueryService"
        binding="basicHttpBinding" bindingConfiguration="TaskQueryServiceSOAPBinding"
        contract="TaskQueryService.TaskQueryService" name="TaskQueryServicePort" />
      <endpoint address="http://ps5.mark.oracle.com:8001/integration/services/TaskService/TaskServicePort"
        binding="basicHttpBinding" bindingConfiguration="TaskServiceSOAPBinding"
        contract="TaskService.TaskService" name="TaskServicePort" />
      <endpoint address="http://ps5.mark.oracle.com:8001/integration/services/TaskService/TaskServicePortSAML/*"
        binding="basicHttpBinding" bindingConfiguration="TaskServiceSOAPBinding"
        contract="TaskService.TaskService" name="TaskServicePortSAML" />
    </client>

For each of the two bindings you will need to update the security section as shown above to use UserName credentials and the Default algorithm for message security.  Not transport, message – make sure you get the right one!

Also, while we are here, you might want to note a couple of things in the client section.  First, this is where you will go and change the endpoint addresses if you want to use a different server later on.  Second, notice that there are two endpoints for each service.  One is SAML and one is not.  We are going to use the ones that are not SAML in this example.  Make a note of the value in the name attribute for each endpoint.  We will need those later.

Ok, that takes care of our services.  Now let’s do the user interface.  We will start by customising the template (Site.Master) first.  You don’t strictly need to do this, but it is best to get rid of some of that extra stuff that might cause confusion.  Here is the template I used:

<%@ Master Language="C#" AutoEventWireup="true" CodeBehind="Site.master.cs" Inherits="WebApplication2.SiteMaster" %>

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
<head runat="server">
    <title></title>
    <link href="~/Styles/Site.css" rel="stylesheet" type="text/css" />
    <asp:ContentPlaceHolder ID="HeadContent" runat="server">
    </asp:ContentPlaceHolder>
</head>
<body>
    <form runat="server">
    <div class="page">
        <div class="header">
            <div class="title">
                <h1>
                    Sample C#/ASP.NET Task form
                </h1>
            </div>
            <div class="loginDisplay">
                        [ <a href="#" id="HeadLoginView_HeadLoginStatus">Sample</a> ]
            </div>
        </div>
        <div class="main">
            <asp:ContentPlaceHolder ID="MainContent" runat="server"/>
        </div>
        <div class="clear">
        </div>
    </div>
    <div class="footer">
    </div>
    </form>
</body>
</html>

Note that we are running this at the server, not the client, for those who know enough about .Net to care about the difference :)

Great, now let’s set up our main page.  Open up the Default.aspx page.  Here is how we want it to look:

[Image: dotnet-tf16]

If you want to, you can go and drag and drop everything into place and edit the properties.  But in the interest of making this easier for you, and to make sure the names of the UI components match the sample code below, it would be better to copy the code below into the source view.

<%@ Page Title="Home Page" Language="C#" MasterPageFile="~/Site.master" AutoEventWireup="true"
    CodeBehind="Default.aspx.cs" Inherits="WebApplication2.QueryStringRecipient" %>

<asp:Content ID="HeaderContent" runat="server" ContentPlaceHolderID="HeadContent">
</asp:Content>
<asp:Content ID="BodyContent" runat="server" ContentPlaceHolderID="MainContent">
    <h2>
        Task Details
    </h2>
<table>
  <tr><td>Task Title:</td><td><asp:TextBox ID="TextBox1" runat="server"
          Enabled="False"></asp:TextBox></td></tr>
  <tr><td>Task State:</td><td><asp:TextBox ID="TextBox2" runat="server"
          Enabled="False"></asp:TextBox></td></tr>
  <tr><td>Task Number:</td><td><asp:TextBox ID="TextBox3" runat="server"
          Enabled="False"></asp:TextBox></td></tr>
</table>
    <h2>
        Payload
    </h2>
<table>
  <tr><td>Updatable Payload Data:</td><td>
      <asp:TextBox ID="TextBox4" runat="server" AutoPostBack="True"></asp:TextBox></td></tr>
  <tr><td>Read Only Payload Data:</td><td><asp:TextBox ID="TextBox5" runat="server"
          Enabled="False"></asp:TextBox></td></tr>
</table>
    <h2>
        Actions
    </h2>
    <asp:Button ID="Button1" runat="server" Text="OK" onclick="Button1_Click"  />
</asp:Content>

Great, that’s the UI taken care of.  Now let’s put in the code behind it.  This is where the really interesting stuff happens :)

Go ahead and open up your Default.aspx.cs file and put this code into it:

// Copyright 2012 Oracle Corporation.
// All Rights Reserved.
//
// Provided on an 'as is' basis, without warranties or conditions of any kind,
// either express or implied, including, without limitation, any warranties or
// conditions of title, non-infringement, merchantability, or fitness for a
// particular purpose. You are solely responsible for determining the
// appropriateness of using and assume any risks. You may not redistribute.

using System;
using System.Collections;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Web;
using System.Web.SessionState;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;

namespace WebApplication2
{

    public partial class QueryStringRecipient : System.Web.UI.Page
    {
        protected System.Web.UI.WebControls.Label lblInfo;

        private string taskId;
        private string ctxId;
        private TaskQueryService.workflowContextType ctx;
        private TaskQueryService.TaskQueryServiceClient tqs;
        private TaskQueryService.task task;

        private void Page_Load(object sender, System.EventArgs e)
        {

                // setup the automapper
                setupAutoMapper();

                // BPM will pass us the taskID and the ctx token
                // need to read these out of the http request's query string
                taskId = Request.QueryString["bpmWorklistTaskId"];
                ctxId = Request.QueryString["bpmWorklistContext"];
                //System.Diagnostics.Debug.WriteLine("Task ID:\n" + taskId);
                //System.Diagnostics.Debug.WriteLine("Context:\n" + ctxId);

                // if this is running outside of the worklist, just exit
                if (taskId == null)
                {
                    // looks like we dont have a task
                    return;
                }

                // set up the BPM context
                ctx = new TaskQueryService.workflowContextType();
                ctx.token = ctxId;

                // get the TQS
                tqs = new TaskQueryService.TaskQueryServiceClient("TaskQueryServicePort");

                // set up the request to get the task
                TaskQueryService.taskDetailsByIdRequestType getTaskRequest = new TaskQueryService.taskDetailsByIdRequestType();
                getTaskRequest.workflowContext = ctx;
                getTaskRequest.taskId = taskId;

                // get the task
                task = tqs.getTaskDetailsById(getTaskRequest);
                //System.Diagnostics.Debug.WriteLine("task title:\n" + task.title);

                // populate the UI with task details
                TextBox1.Text = task.title;
                TextBox2.Text = task.systemAttributes.state.ToString();
                TextBox3.Text = task.systemAttributes.taskNumber.ToString();

                if (!Page.IsPostBack)
                {
                    // populate the UI with current payload data
                    System.Xml.XmlNode[] payload = (System.Xml.XmlNode[])task.payload;
                    TextBox4.Text = payload.ElementAt(0).ChildNodes.Item(1).InnerText;
                    TextBox5.Text = payload.ElementAt(1).ChildNodes.Item(0).InnerText;
                }

        }

        #region Web Form Designer generated code
        override protected void OnInit(EventArgs e)
        {
            //
            // CODEGEN: This call is required by the ASP.NET Web Form Designer.
            //
            InitializeComponent();
            base.OnInit(e);
        }

        /// <summary>
        /// Required method for Designer support - do not modify
        /// the contents of this method with the code editor.
        /// </summary>
        private void InitializeComponent()
        {
            this.Load += new System.EventHandler(this.Page_Load);

        }
        #endregion

        protected void Button1_Click(object sender, EventArgs e)
        {
            System.Diagnostics.Debug.WriteLine("PRESSED OK");

            // update the task payload from the UI
            System.Xml.XmlNode[] payload = (System.Xml.XmlNode[])task.payload;
            payload.ElementAt(0).ChildNodes.Item(1).InnerText = TextBox4.Text;
            task.payload = payload;

            // get the TS
            TaskService.TaskServiceClient ts = new TaskService.TaskServiceClient("TaskServicePort");

            // update task
            TaskService.taskServiceContextTaskBaseType updateTaskRequest = new TaskService.taskServiceContextTaskBaseType();
            updateTaskRequest.workflowContext = AutoMapper.Mapper.Map<TaskQueryService.workflowContextType, TaskService.workflowContextType>(ctx);
            updateTaskRequest.task = AutoMapper.Mapper.Map<TaskQueryService.task, TaskService.task>(task);
            TaskService.task updatedTask = ts.updateTask(updateTaskRequest);

            // complete task
            TaskService.updateTaskOutcomeType updateTaskOutcomeRequest = new TaskService.updateTaskOutcomeType();
            updateTaskOutcomeRequest.workflowContext = AutoMapper.Mapper.Map<TaskQueryService.workflowContextType, TaskService.workflowContextType>(ctx);
            updateTaskOutcomeRequest.outcome = "OK";
            updateTaskOutcomeRequest.Item = updatedTask;
            ts.updateTaskOutcome(updateTaskOutcomeRequest);

            // redirect to empty page
            Response.Redirect("/Empty.htm");
        }

        private void setupAutoMapper()
        {
            // set up the automapper
            AutoMapper.Mapper.CreateMap<TaskQueryService.workflowContextType, TaskService.workflowContextType>();
            AutoMapper.Mapper.CreateMap<TaskQueryService.credentialType, TaskService.credentialType>();

            AutoMapper.Mapper.CreateMap<TaskQueryService.task, TaskService.task>();
            AutoMapper.Mapper.CreateMap<TaskQueryService.attachmentType, TaskService.attachmentType>();
            AutoMapper.Mapper.CreateMap<TaskQueryService.callbackType, TaskService.callbackType>();
            AutoMapper.Mapper.CreateMap<TaskQueryService.customAttributesType, TaskService.customAttributesType>();
            AutoMapper.Mapper.CreateMap<TaskQueryService.documentType, TaskService.documentType>();
            AutoMapper.Mapper.CreateMap<TaskQueryService.EvidenceType, TaskService.EvidenceType>();
            AutoMapper.Mapper.CreateMap<TaskQueryService.processType, TaskService.processType>();
            AutoMapper.Mapper.CreateMap<TaskQueryService.commentType, TaskService.commentType>();
            AutoMapper.Mapper.CreateMap<TaskQueryService.identityType, TaskService.identityType>();
            AutoMapper.Mapper.CreateMap<TaskQueryService.ucmMetadataItemType, TaskService.ucmMetadataItemType>();
            AutoMapper.Mapper.CreateMap<TaskQueryService.systemAttributesType, TaskService.systemAttributesType>();
            AutoMapper.Mapper.CreateMap<TaskQueryService.actionType, TaskService.actionType>();
            AutoMapper.Mapper.CreateMap<TaskQueryService.displayInfoType, TaskService.displayInfoType>();
            AutoMapper.Mapper.CreateMap<TaskQueryService.shortHistoryTaskType, TaskService.shortHistoryTaskType>();
            AutoMapper.Mapper.CreateMap<TaskQueryService.assignmentContextType, TaskService.assignmentContextType>();
            AutoMapper.Mapper.CreateMap<TaskQueryService.assignmentContextTypeValueType, TaskService.assignmentContextTypeValueType>();
            AutoMapper.Mapper.CreateMap<TaskQueryService.collectionTargetType, TaskService.collectionTargetType>();
            AutoMapper.Mapper.CreateMap<TaskQueryService.collectionTargetActionType, TaskService.collectionTargetActionType>();
            AutoMapper.Mapper.CreateMap<TaskQueryService.preActionUserStepType, TaskService.preActionUserStepType>();
            AutoMapper.Mapper.CreateMap<TaskQueryService.systemMessageAttributesType, TaskService.systemMessageAttributesType>();
            AutoMapper.Mapper.CreateMap<TaskQueryService.flexfieldMappingType, TaskService.flexfieldMappingType>();
            AutoMapper.Mapper.CreateMap<TaskQueryService.scaType, TaskService.scaType>();
            AutoMapper.Mapper.CreateMap<TaskQueryService.UpdatableEvidenceAttributesType, TaskService.UpdatableEvidenceAttributesType>();

            // check automapper config is valid
            AutoMapper.Mapper.AssertConfigurationIsValid();
        }

    }
}

Now let’s walk through and understand what is happening here. First we declare some variables that we will use.  Notice that some of these come from the namespaces we just created with our web service references.  We are going to be using the TaskQueryServiceClient and the workflowContextType (the credentials that we use to authenticate to the BPM server).  We also will be using the task object.  We define these at the class level because we want them to be available for the whole lifecycle of the page.

private string taskId;
private string ctxId;
private TaskQueryService.workflowContextType ctx;
private TaskQueryService.TaskQueryServiceClient tqs;
private TaskQueryService.task task;

As for the class, we are declaring a partial class QueryStringRecipient that extends System.Web.UI.Page.  We say partial class because the rest of this class is defined elsewhere (in the designer-generated code) and we are just adding some extra logic to it.  Extending System.Web.UI.Page is what gives us access to the Query String from the HTTP Request.

public partial class QueryStringRecipient : System.Web.UI.Page

Now let’s take a look at the Page_Load method.  This gets run (as you might guess) when the page is loaded.  Every time the page is loaded – even on post backs.  So you have to be careful not to overwrite data from the user in here.  This method is probably doing more work than it needs to on each page load the way I have it implemented – I guess you can optimise it some more :)   The first thing we need to do is set up the AutoMapper.  This is done by calling a convenience method, setupAutoMapper(), which lives down at the bottom of the source file to hide all that ugliness.  As I mentioned earlier, why and how we are using AutoMapper is discussed over here.

       private void Page_Load(object sender, System.EventArgs e)
        {

                // setup the automapper
                setupAutoMapper();

Next, we need to read the data that BPM sends us in the Query String.  When we configure a URI for a human task, like we did earlier in this post, BPM will append some data into the Query String for us.  This is the data we want to read now:

  • bpmWorklistTaskId is the taskId for the particular task instance we are interested in, and
  • bpmWorklistContext is the BPM workflow context (security token) for the currently logged on user (logged on to BPM Workspace, that is).

With these two pieces of information, we are able to do everything we need to do to that task – get its details, payload, take an action on it (system or custom actions), update it, etc.  Here is the code to grab these from the Query String:

                // BPM will pass us the taskID and the ctx token
                // need to read these out of the http request's query string
                taskId = Request.QueryString["bpmWorklistTaskId"];
                ctxId = Request.QueryString["bpmWorklistContext"];

Now we are ready to start talking to BPM.  We should first check that we actually got a task ID in the previous step, otherwise, we probably got called from outside of BPM Workspace, so we should just stop.

Then we can create our workflowContextType.  To use the details that BPM sent us, all we need to do is put them into the token property of this type, as you see below.  Then we are ready to create our TaskQueryServiceClient, note that we pass in to the constructor the name of the port we want – you wrote that down earlier, didn’t you? :)

                // if this is running outside of the worklist, just exit
                if (taskId == null)
                {
                    // looks like we dont have a task
                    return;
                }

                // set up the BPM context
                ctx = new TaskQueryService.workflowContextType();
                ctx.token = ctxId;

                // get the TQS
                tqs = new TaskQueryService.TaskQueryServiceClient("TaskQueryServicePort");

Next, we want to retrieve the task.  We do this by calling the getTaskDetailsById() method on the TaskQueryServiceClient.  First, we need to set up our inputs.  This is done by creating a TaskQueryService.taskDetailsByIdRequestType and setting its workflowContext and taskId properties using the values we retrieved from the Query String earlier.  Then we can call the method.  We get back a task object.

                // set up the request to get the task
                TaskQueryService.taskDetailsByIdRequestType getTaskRequest = new TaskQueryService.taskDetailsByIdRequestType();
                getTaskRequest.workflowContext = ctx;
                getTaskRequest.taskId = taskId;

                // get the task
                task = tqs.getTaskDetailsById(getTaskRequest);

Now, let’s read the data out of the task and populate our UI components.  Here we set the various text boxes to the task metadata we have chosen to show – title, status and task number.

Then we set the other group of text boxes (the ones for the payload) to those parts of the payload that we are interested in.  Of course, we don’t need to display everything, just the fields we are interested in.  In this example, we are going to take one of the two String fields from our dataObject1 and the String dataObject2.  You can see that we access the payload through a System.Xml.XmlNode[] – we can just cast the payload property of the task to this type and then we can easily read the payload data.  You can see from the code here that this is specific to the payload type – so in this case we do need to know the structure of the payload.  We could of course be a bit smarter and introspect the payload to find the data we want, but for now, hardcoding it will serve our purpose here.

Notice that we only want to populate the payload UI fields from the task payload the first time we load the page.  That is why we check if (!Page.IsPostBack) – otherwise any changes that the user had made would be overwritten when they post back those very changes.

                // populate the UI with task details
                TextBox1.Text = task.title;
                TextBox2.Text = task.systemAttributes.state.ToString();
                TextBox3.Text = task.systemAttributes.taskNumber;

                if (!Page.IsPostBack)
                {
                    // populate the UI with current payload data
                    System.Xml.XmlNode[] payload = (System.Xml.XmlNode[])task.payload;
                    TextBox4.Text = payload.ElementAt(0).ChildNodes.Item(1).InnerText;
                    TextBox5.Text = payload.ElementAt(1).ChildNodes.Item(0).InnerText;
                }

That completes the Page_Load method.  Now, let’s take a look at what happens when the user clicks on the OK button on our page.  That button represents the OK (custom) action (or outcome) for that task.

The first thing we want to do is update the payload with the data that the user has entered in the form (if any).  Note that only one of the two payload fields that we are displaying is editable – you may remember when we created our task that we set only one of the two parameters to be editable.  That is why we are only taking the value from one of the TextBox components and updating the payload.  If you tried to update the non-updatable field, you would get an exception (as you might expect).  Updating the payload is pretty much the reverse of reading the payload:

        protected void Button1_Click(object sender, EventArgs e)
        {
            System.Diagnostics.Debug.WriteLine("PRESSED OK");

            // update the task payload from the UI
            System.Xml.XmlNode[] payload = (System.Xml.XmlNode[])task.payload;
            payload.ElementAt(0).ChildNodes.Item(1).InnerText = TextBox4.Text;
            task.payload = payload;

Now that we have updated the payload in our local copy of the task, we need to tell BPM to update the ‘real’ task on the server.  We do this by calling the updateTask() method on the TaskService.  Just like we did earlier for the TaskQueryService, we create an instance of the TaskService and pass in the name of the port we want to use.

Then we create the input for the updateTask() method, which is the oddly named TaskService.taskServiceContextTaskBaseType.  We can then populate it with the workflowContext we got from the Query String, and our newly updated task object.

Notice how we use the AutoMapper to convert between types in the two different namespaces created for our services.

The updateTask() method returns to us a new task object, which represents the newly updated task on the server.  We will need to use this new object to take any further actions on this task; our old task object is no longer of any use to us.

            // get the TS
            TaskService.TaskServiceClient ts = new TaskService.TaskServiceClient("TaskServicePort");

            // update task
            TaskService.taskServiceContextTaskBaseType updateTaskRequest = new TaskService.taskServiceContextTaskBaseType();
            updateTaskRequest.workflowContext = AutoMapper.Mapper.Map<TaskQueryService.workflowContextType, TaskService.workflowContextType>(ctx);
            updateTaskRequest.task = AutoMapper.Mapper.Map<TaskQueryService.task, TaskService.task>(task);
            TaskService.task updatedTask = ts.updateTask(updateTaskRequest);

Finally, we can complete the task by setting the outcome to “OK” using the updateTaskOutcome() method on the TaskService.  I won’t go through all the details, but you can see we create the input, populate it and then call the method.

Once this is done, we can redirect the browser to an empty page – just like the auto-generated ADF task forms do – so we maintain the normal user experience in the BPM Workspace.  This is done with the Response.Redirect(“/Empty.htm”) call on the last line.

            // complete task
            TaskService.updateTaskOutcomeType updateTaskOutcomeRequest = new TaskService.updateTaskOutcomeType();
            updateTaskOutcomeRequest.workflowContext = AutoMapper.Mapper.Map<TaskQueryService.workflowContextType, TaskService.workflowContextType>(ctx);
            updateTaskOutcomeRequest.outcome = "OK";
            updateTaskOutcomeRequest.Item = updatedTask;
            ts.updateTaskOutcome(updateTaskOutcomeRequest);

            // redirect to empty page
            Response.Redirect("/Empty.htm");

Now obviously we need to have such a page, so go ahead and create a new HTML page called Empty.htm and put the following code into it:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title></title>
</head>
<body>

</body>
</html>

Ok, so that completes our .Net project.  Go ahead and run it using the ‘big green arrow’ icon.  This will start up the embedded .Net server and run the project.  It will probably open a browser window and show you the page.  Just close this, we don’t need it.

dotnet-tf17

Now, go and log in to BPM Workspace as an administrative user (like weblogic) and click on the Administration link in the top right corner.  Select the role this task is assigned to, mine is DotNetTest.Role1 and then click on the little ‘new’ icon (highlighted in the image below) to assign some users to this role.

dotnet-tf18

In the popup box, search for weblogic and then move it across from Available to Selected.  Then click on OK and then on Apply to save this new mapping.  Now these tasks can be routed to the weblogic user.

dotnet-tf19

Log out of the BPM Workspace and back in again so that it will refresh the mappings.  You should now see the DotNetTest tasks that you created earlier in Enterprise Manager sitting in weblogic’s queue.  Click on one of them and your shiny new .Net task form will load.  It should look a little like this:

dotnet-tf1d715

Try it out!  You can update the payload data, and then click on the OK button to action the task.  After you do that, go take a look at the instance flow trace in Enterprise Manager and you can verify that the payload data was in fact changed.

So there you have it, a .Net task form fully integrated into the BPM Workspace application.  Enjoy!

Thanks go to Carlos Casares for giving me the incentive to write this in the first place and to my reader in Saint Petersburg for giving me the incentive to publish it.

Choosing BPMN or BPEL to model your processes


With Dave Shaffer

Introduction

Recently, I had a discussion with several colleagues and one of our large customers about when to use BPMN and when to use BPEL to model business processes.  I have discussed this topic before in this post but this conversation opened up an interesting new angle on the topic, which I wanted to share with you all.

Main Article

First of all, let me be clear about the scenario here.  This is a large customer who is going to make extensive use of both BPMN and BPEL in their BPM/SOA environment.  They are not trying to decide which one to use exclusively – it is given that both BPEL and BPMN will be used extensively.  So the question here is about how to come up with a consistent approach to deciding which to use when – what to do with BPMN and what to do with BPEL.

The new perspective in this conversation was about how to choose the modeling language based on the fault/exception handling requirements of the process.  If you are not familiar with the options available for fault handling in BPMN and in BPEL, you should quickly review these sources:

You will notice that there are some differences in the capabilities of the products today.  If you look back over the last few releases of BPM, you will see that there has been a significant investment in adding more fault management capabilities for BPMN processes, and it would be reasonable to assume that this will continue.

But even while the Oracle platform holds the promise of equally rich fault handling in either BPEL or BPMN, one can make a case that the fault handling capabilities in BPEL are especially suitable for system-to-system integration, particularly when you need to use a distributed transaction or when compensation is required.  Besides, this kind of highly technical fault handling is probably best left to the more technical kinds of processes and people.  And finally, the directed graph nature and ‘alter flow’ capability in BPMN can make it more difficult (or potentially impossible) to employ the same techniques.

However, the fault handling in BPMN is rather well suited to ‘business’ faults – not enough stock to fill an order, credit check failed, order line contains a discontinued product, order cancelled, things like that.

So this leads to the following suggested approach for this customer use-case:

  • Write top level (i.e. true ‘business’ processes) in BPMN,
  • Do not perform any kind of system interaction in these processes – don’t use adapters, call web services, etc.,
  • Use activities or embedded sub-processes with boundary events and event sub-processes to handle all business faults that may occur,
  • Make sure to have a ‘catch all’ event sub-process to handle any failures that are not specifically handled,
  • Theoretically there should never be a system fault in these BPMN processes,
  • Whenever there is a need to do some actual work, delegate this to BPEL, i.e. use a service activity with implementation type ‘service call’ to have BPEL go do the work,
  • Make the BPEL processes atomic, so that they can easily be retried, rolled back, etc.,
  • Use the fault management framework to control the handling of faults in the BPEL processes, and
  • Keep BPEL ‘worker’ processes in separate composites from BPMN ‘business’ processes.

This may not be perfect, but we think it offers a new, and very relevant, real-world perspective on how to decide which modeling notation is right for your processes.  We are certainly interested to hear your thoughts and comments.

For a start, here are Dave’s thoughts on this idea:

First I would just like to reinforce the constraints around the scenario that this advice applies to because I think customers would come up with different approaches, and Oracle would have different best practices, when the question is asked around a greenfield project as to when to use BPEL and when to use BPMN.  In those scenarios, the advice would not be to wrap each system call in BPEL, since the unique value proposition of the Oracle platform is that the same system integration capabilities are available natively in both BPEL and BPMN.  However, I think the guidelines above make sense for a customer who will be mixing and matching BPEL and BPMN throughout their processes and applications and where it is assumed that most projects will include both.  In this case, it becomes important to strive for consistent guidelines as to where to draw the line for what to do in the BPEL part of a process and what to do in the BPMN part.  In this case, the fault handling capabilities are indeed richer in BPEL today vs what is possible in BPMN and even though they may be equivalent in the future, there are certainly many kinds of fault handling logic, compensation, etc. that will be deeper than the business would want to go.  The division of labor described above by Mark then results in not just using the best tool for the job, but also leverages the heterogeneous BPEL/BPMN architecture at this customer to make the BPMN “business view” of the process as clean as possible.

Unit Testing Asynchronous BPEL Processes Using soapUI


Introduction

This is a topic I have been interested in for a while.  I have seen it done by some of my colleagues, especially in AIA environments, and I have been waiting to get an opportunity to work on it and document it.

Main article

But I recently found a great article from Daniel Amadei here.  I strongly encourage you to take a look at it if you are at all interested in test automation and/or continuous integration.

I am planning to build this kind of testing into my continuous integration project here.

SOA 11g & SAP – Single Channel/Program ID for Multiple IDOCs


Introduction

When faced with integrating SOA Suite 11g with SAP R/3, the recommended approach is to use the Oracle Application Adapter for SAP R/3 (SAP JCo 3.0).  This adapter has been used in many enterprises very successfully for integrating with SAP outbound (calls from the adapter to SAP) and inbound (events from SAP to the adapter).  However, with any type of product there are edge cases where the features may fall short or require some creativity to work around them.  One of those edge cases for the SAP adapter is the ability to leverage a feature of SAP where multiple IDOCs can share the same Program ID.

Main Article

The standard case for SAP to send an IDOC to the SAP adapter is via a Channel configuration done in the Application Explorer (a utility provided with the Oracle Application Adapters).  If you notice when you are generating the artifacts for JDeveloper via Application Explorer, the window clearly states in red “* You must create a separate channel for each inbound service”:

SingleChannel_04

 

The reason for the channel definition in the Application Explorer is to provide a correlation between the adapter and a Program ID defined in the SAP system.  Basically, when an IDOC is released it is associated with a Program ID that is used to locate a hostname and port (i.e., channel) to send the IDOC to.  When the adapter receives the IDOC, it will then determine which partnerlink to send the IDOC to resulting in some SOA component instance being created.  One of the limitations of the adapter is that when you export the artifacts from the Application Explorer for different IDOCs but select the same channel, all partnerlinks that are created from the generated artifacts will receive copies of all IDOCs that flow on the same channel/Program ID.  For example, let’s say artifacts for DEBMAS06 and MATMAS05 are generated using the same channel and two partnerlinks are created for DEBMAS06 and MATMAS05. Now if two BPEL processes are created and one is wired to DEBMAS06 partnerlink and the other is wired to MATMAS05 partnerlink, both BPEL processes will get instantiated regardless of the IDOC that has been released from SAP on the common channel/Program ID.

This fan-out behavior of the adapter presents issues for companies that have a large number of IDOCs flowing and don’t want the administrative hassles of defining a channel/Program ID for every IDOC type.  Luckily, there is hope for this scenario, but it requires the introduction of either a Mediator component or Oracle Service Bus (OSB).  This write-up will focus on the Mediator solution; details of the OSB solution can be found at: [link to Single Channel/Program ID with Multiple IDOCs via OSB is coming soon :)]

The basics behind controlling the fan-out of IDOCs from the adapter are fairly straightforward:

  1. Use the generated artifacts from the Application Explorer for one and only one of the IDOCs that will flow on the common channel to create the partnerlink (see Configuring a Mediator Inbound Process).

SingleChannel_05

 

  2. Create a Mediator Component using the generated WSDL specified in the partnerlink.

SingleChannel_06

 

  3. Wire the adapter partnerlink to the Mediator and from the Mediator to the components that are capable of handling the various IDOCs that will be flowing down the common channel.

SingleChannel_07

 

It is important to remember that every IDOC type on the channel will flow to the configured IDOC partnerlink regardless of the associated WSDL, XSD, and JCA file. The BPEL components in the diagram have been created with a one-way interface that accepts a specific IDOC type (i.e., the XSD generated from the Application Explorer for each IDOC).

  4. Add a filter expression for each static routing in the Mediator based on the XML document root element name (e.g., name($in.event_DEBMAS06/*) = 'DEBMAS06').

SingleChannel_08

 

  5. Save and deploy your composite application.

The filter expression for each static routing in the Mediator looks a bit strange because it appears to be evaluating the payload of the IDOC that was used to create the partnerlink (e.g., name($in.event_DEBMAS06/*) = 'MATMAS05').  The filter expression evaluates against the root element of the payload; the name() function retrieves the root element name, and that is compared to a string containing the name of the IDOC.  Each IDOC from the adapter contains a root element with the name of the IDOC type, therefore we can route the document accordingly.  As new IDOCs are added to the channel/Program ID, all that is required is a new static routing in the Mediator based on the IDOC name.
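To see why this routing works, it helps to look at the value the Mediator filter is actually comparing.  The short Java sketch below (class and sample IDOC names are illustrative, not part of the adapter) extracts the root element name of an XML payload – the same value that an expression like name($in.event_DEBMAS06/*) = 'DEBMAS06' tests:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class IdocRootName {

    // Return the local name of the XML root element -- the same value the
    // Mediator filter's name() function evaluates to for an inbound IDOC.
    static String rootName(String xml) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true); // payloads may carry namespaces
        Document doc = dbf.newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        return doc.getDocumentElement().getLocalName();
    }

    public static void main(String[] args) throws Exception {
        // A DEBMAS06 document matches the DEBMAS06 routing rule; a MATMAS05
        // document does not, even though both arrive on the same partnerlink.
        System.out.println(rootName("<DEBMAS06><IDOC/></DEBMAS06>")); // DEBMAS06
        System.out.println(rootName("<MATMAS05><IDOC/></MATMAS05>")); // MATMAS05
    }
}
```

This is only a reasoning aid – in the composite itself the comparison is done declaratively by the Mediator filter, not in Java.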

Encapsulating OIM API’s in a Web Service for OIM Custom SOA Composites


Introduction

This document describes how to encapsulate OIM API calls in a Web Service for use in a custom SOA composite to be included as an approval process in a request template.

We always recommend that customers follow this approach when trying to invoke OIM’s APIs inside SOA composites used as approval processes, for the following reasons:

  • A web service implementation allows the instantiation of all related APIs once at service startup, as opposed to getting a remote reference to each required API interface.  This improves performance and reduces the memory footprint of the composite compared with instantiating these APIs in Embedded Java Tasks.
  • This paradigm allows the implementation of HA for the Web Service encapsulating the API calls and provides the ability to deploy the web service on a separate server from the SOA and OIM servers if so desired.  This increases the robustness and reliability of the solution.
  • According to BPEL’s documentation, Embedded Java Tasks should only be used for quick utility logic; no business logic should be included in these tasks.  For details refer to http://docs.oracle.com/cd/E15586_01/integration.1111/e10224/bp_java.htm#BABHJHBG, section 13.2.3, How to Embed Java Code Snippets into a BPEL Process with the bpelx:exec Tag.  The reason is that all memory required for objects instantiated within the embedded Java code adds to the memory space of the composite instance itself, which is kept for the life of the composite instance.  This means that if a composite has an asynchronous BPEL process (which is definitely the case for OIM’s Approval Process composites), which can cause the BPEL process instance to remain there for days or weeks, memory problems may start to arise.

Main Article

The assumption here is that JDeveloper is going to be used to edit the SOA composite and there are no other tools suitable for this purpose. JDeveloper is also a good tool to create the Web Service wrapping the OIM API calls. All that is needed is to create a POJO (Plain Old Java Object) and convert it to a Web Service, and then deploy it to an application server (Weblogic in this case); all of which can be accomplished with JDeveloper.

Please refer to JDeveloper 11g documentation for information on how to create a Web Service out of a POJO since this is out of scope for this document. Once the web service is created and deployed one can obtain the WSDL from the Web Logic Admin console. Just access the deployments and drill down to the Test Client of the web service. The WSDL will be available from the Test client window or from the table showing the testing points in the Weblogic Admin Console. All that is needed is to copy the URL for the WSDL and paste it in the proper text box when configuring the Web Service reference in the composite.

Once the Web Service reference is configured in the Composite, it can be linked to the BPEL process inside the composite. All we need to do is to connect the icon representing the BPEL process with the Web Service reference by stretching an arrow connecting the two of them. Consult the SOA Composite Editor documentation from JDeveloper’s 11g users guide. To invoke methods on the newly wired in Web Service an Invoke Task must be included for each method to be called. The Invoke Task allows you to define the following elements:

  • An input variable that will include the input values for the specific method call taken from the WSDL of the Web Service.
  • An output variable that will receive the returning data from the invocation of the Web Service method formatted as specified by the WSDL of the Web Service.

Before an invocation there is typically an Assign Task that populates the input parameters of a Web Service call by copying values from other variables or assigning literal values to the input parameters in the Input Variable.  Inserting the Invoke Task before the Assign Task allows you to create the Input and Output Variables: the Input Variable is then populated by the Assign Task, and the Output Variable receives the output data from the Web Service method call.  The values in the Output Variable can then be used anywhere else in the composite and can be transferred using other Assign Tasks within the BPEL Process flow.

Summary

SOA Suite allows the execution of embedded Java logic within composites.  OIM Java APIs are not a good candidate to be included in Embedded Java Tasks, especially if the composites are meant to serve as approval processes that can potentially keep instances alive for a long time.  The recommended approach is to encapsulate the OIM APIs in Web Services with a SOAP interface, then invoke operations on the OIM API wrapping Web Service and just manipulate the results.  This brings other benefits from the architecture design perspective, and from the performance and memory footprint standpoint it helps prevent Out of Memory issues.

A Universal JMX Client for Weblogic –Part 1: Monitoring BPEL Thread Pools in SOA 11g


Introduction

Monitoring and optimizing BPEL Thread Pool utilization (and other metrics) is one of the key activities in performance tuning of BPEL/SOA based integrations. Although the EM console provides basic monitoring of the BPEL engine statistics, it is limited in its update interval and level of detail, and it cannot record or display historic data. Of course you can set up Grid Control 11g with its repository, but in many cases this is too complex to set up just for monitoring during performance and load testing.

Main Article

So, the idea came to create a tool which can easily record these statistics and export them to MS Excel or OpenOffice to create charts for the thread pool utilization over a time period (for example a whole load test execution).

All values of WLS or the SOA engine can be queried using the JMX MBean framework. I have designed JMXClient so that the MBeans to query can be configured in a property file (beans.properties). In this first release, JMXClient connects to only one managed server of WLS to record and export data. This means that if you have a WLS cluster, you need to start multiple JMXClients to record the values of each node. (A later release could be optimized to query all nodes automatically.)

JMXClient can be used by downloading from the project page at Sourceforge. (including JDeveloper 11.1.1.6 project and sources)

After that you need to configure

  1. your connection properties, JAVA_HOME and WLS_HOME of your WLS managed server of SOA in jmxclient.bat (or jmxclient.sh)
  2. the MBean names, WLS server name and the attributes to record in classes/beans.properties (you can find the MBean names in the System MBean Browser in EM)

The syntax in jmxclient.bat is

java -cp classes;%INCLUDE_LIBS% jmxclient.JMXClient <server> <port> <user> <password> -monitor <interval>

For example

java -cp classes;%INCLUDE_LIBS% jmxclient.JMXClient 192.168.56.101 7001 weblogic welcome1 -monitor 1000

“1000” specifies the interval in milliseconds between recordings.
Then you can run it with

jmxclient > out.txt

Then simply import this text file into Excel or OpenOffice using a comma “,” as the delimiter and create a line chart using row 2 as the titles and rows 3 onward as the data.

Let me first show a couple of results using JMXClient using the properties to record the BPEL thread pool statistics:

The following chart shows a scenario where the invoke thread pool is much too low (20) so that the queue of scheduled invocations waiting for a free thread is growing rapidly:

ujc1

The second example shows a scenario where invoke and callback threads are within normal limits:

ujc2

In the next posts I will show how to use JMXClient to record the BPEL process execution times or the number of messages in the AIA 11g JMS queues, simply by exchanging the beans.properties file!

Update: The post mentioned above for recording BPEL process execution times can be found here.

Have fun,
Stefan

DISCLAIMER: JMXClient is provided for free use “as is” without any support or warranty. Please provide enhancements or modifications you make yourself.

PS: for the experts: the format of the beans.properties file:

Every line contains 3 items separated by semicolon:

  1. the name of the MBean to query
  2. the attribute to query
  3. the title string which should be displayed for the column

Example for the bpel thread pools:

oracle.dms:Location=AdminServer,name=/soainfra/engines/bpel/requests/invoke,type=soainfra_bpel_requests;active_maxValue;Invoke Active Max
 oracle.dms:Location=AdminServer,name=/soainfra/engines/bpel/requests/invoke,type=soainfra_bpel_requests;scheduled_maxValue;Invoke Scheduled Max
 oracle.dms:Location=AdminServer,name=/soainfra/engines/bpel/requests/invoke,type=soainfra_bpel_requests;scheduled_value;Invoke Scheduled Current
 oracle.dms:Location=AdminServer,name=/soainfra/engines/bpel/requests/invoke,type=soainfra_bpel_requests;active_value;Invoke Active Value
 oracle.dms:Location=AdminServer,name=/soainfra/engines/bpel/requests/invoke,type=soainfra_bpel_requests;threadCount_value;Invoke Threads Value
 oracle.dms:Location=AdminServer,name=/soainfra/engines/bpel/requests/invoke,type=soainfra_bpel_requests;threadCount_maxValue;Invoke Threads Max
 oracle.dms:Location=AdminServer,name=/soainfra/engines/bpel/requests/system,type=soainfra_bpel_requests;active_maxValue;System Active Max
 oracle.dms:Location=AdminServer,name=/soainfra/engines/bpel/requests/system,type=soainfra_bpel_requests;scheduled_maxValue;System Scheduled Max
 oracle.dms:Location=AdminServer,name=/soainfra/engines/bpel/requests/system,type=soainfra_bpel_requests;scheduled_value;System Scheduled Current
 oracle.dms:Location=AdminServer,name=/soainfra/engines/bpel/requests/system,type=soainfra_bpel_requests;active_value;System Active Value
 oracle.dms:Location=AdminServer,name=/soainfra/engines/bpel/requests/system,type=soainfra_bpel_requests;threadCount_value;System Threads Value
 oracle.dms:Location=AdminServer,name=/soainfra/engines/bpel/requests/system,type=soainfra_bpel_requests;threadCount_maxValue;System Threads Max
 oracle.dms:Location=AdminServer,name=/soainfra/engines/bpel/requests/engine,type=soainfra_bpel_requests;active_maxValue;Engine Active Max
 oracle.dms:Location=AdminServer,name=/soainfra/engines/bpel/requests/engine,type=soainfra_bpel_requests;scheduled_maxValue;Engine Scheduled Max
 oracle.dms:Location=AdminServer,name=/soainfra/engines/bpel/requests/engine,type=soainfra_bpel_requests;scheduled_value;Engine Scheduled Current
 oracle.dms:Location=AdminServer,name=/soainfra/engines/bpel/requests/engine,type=soainfra_bpel_requests;active_value;Engine Active Value
 oracle.dms:Location=AdminServer,name=/soainfra/engines/bpel/requests/engine,type=soainfra_bpel_requests;threadCount_value;Engine Threads Value
 oracle.dms:Location=AdminServer,name=/soainfra/engines/bpel/requests/engine,type=soainfra_bpel_requests;threadCount_maxValue;Engine Threads Max
 oracle.dms:Location=AdminServer,name=/soainfra/engines/bpel/requests/audit,type=soainfra_bpel_requests;active_maxValue;Audit Active Max
 oracle.dms:Location=AdminServer,name=/soainfra/engines/bpel/requests/audit,type=soainfra_bpel_requests;scheduled_maxValue;Audit Scheduled Max
 oracle.dms:Location=AdminServer,name=/soainfra/engines/bpel/requests/audit,type=soainfra_bpel_requests;scheduled_value;Audit Scheduled Current
 oracle.dms:Location=AdminServer,name=/soainfra/engines/bpel/requests/audit,type=soainfra_bpel_requests;active_value;Audit Active Value
 oracle.dms:Location=AdminServer,name=/soainfra/engines/bpel/requests/audit,type=soainfra_bpel_requests;threadCount_value;Audit Threads Value
 oracle.dms:Location=AdminServer,name=/soainfra/engines/bpel/requests/audit,type=soainfra_bpel_requests;threadCount_maxValue;Audit Threads Max

New BPEL Thread Pool in SOA 11g for Non-Blocking Invoke Activities from 11.1.1.6 (PS5)


Up to release 11.1.1.5 there were 4 thread pools in Oracle SOA Suite 11g to control parallelism of execution:

  • Invoke Thread Pool (for asynchronous invocations)
  • Engine Thread Pool (i.e. for callback execution)
  • System Thread Pool
  • Audit Thread Pool

Starting with 11.1.1.6 there is one (still undocumented) new thread pool introduced for non-blocking invoke activities.

Here is a view of the System MBean Browser:

image

The MBean name is: 
oracle.dms:Location=soa_server1,name=/soainfra/engines/bpel/requests/non-block-invoke,type=soainfra_bpel_requests

You can change a synchronous invoke activity from a blocking call to non-blocking by using the partnerlink level property:

image
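The screenshot above shows the property set at partnerlink level in JDeveloper. In composite.xml the setting looks roughly like the following sketch (the component and partnerlink names are illustrative, not from a real composite):

```xml
<component name="MyBPELProcess">
  <implementation.bpel src="MyBPELProcess.bpel"/>
  <!-- Illustrative sketch: the property key follows the pattern
       partnerLink.<partnerLinkName>.nonBlockingInvoke -->
  <property name="partnerLink.MyServicePL.nonBlockingInvoke">true</property>
</component>
```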

This thread pool is configured in SOA-Administration –> BPEL Service Engine Properties under “More BPEL Configuration Properties...” with the property DispatcherNonBlockInvokeThreads:

image

Be aware that the default is only 2 – so this can become a bottleneck in high-load scenarios if left unchanged. This is especially true if multiple partnerlinks use non-blocking calls, because all of them share this thread pool.

Have fun, Stefan

AIA/SOA Tips & Tricks (4): How to Save AIA/BPEL 11g Execution Time Statistics Programmatically in a File


Accessing and saving statistics is quite different in SOA 11g – it is done through JMX MBeans and no longer by calling a BPEL API.

The following example shows how to retrieve the execution time statistics for all BPEL components deployed to one SOA server.

The example output is:

FOUND 15
Time    BPEL Name    Count    Min    Avg    Max
11:48:19    ProcessFOBillingAccountListRespOSMCFSCommsJMSProducer    6    326    2568.6666666666665    3068
11:48:19    UpdateSalesOrderSiebelCommsProvABCSImplProcess    6    1482    1821.5    2236
11:48:19    CommsProcessFulfillmentOrderBillingAccountListEBF    6    16590    22458.5    29167
11:48:19    ProcessFulfillmentOrderBillingResponseOSMCFSCommsJMSProducer    6    28    166.5    842
11:48:19    AIAAsyncErrorHandlingBPELProcess    4    1459    1758.5    2065
11:48:19    ProcessFulfillmentOrderBillingBRMCommsProvABCSImplProcess    6    1805    2462.8333333333335    4031
11:48:19    QueryCustomerPartyListSiebelProvABCSImplV2    10    640    2639.8    11079
11:48:19    AIASessionPoolManager    20    13    96.0    1344
11:48:19    ProcessSalesOrderFulfillmentOSMCFSCommsJMSProducer    10    94    562.9    1930
11:48:19    ProcessFulfillmentOrderBillingBRMCommsAddSubProcessProcess    6    773    1211.0    1577
11:48:19    SyncCustomerPartyListBRMCommsProvABCSImpl    10    323    2956.0    4045
11:48:19    TestOrderOrchestrationEBF    6    39979    46680.166666666664    52206
11:48:19    ProcessSalesOrderFulfillmentSiebelCommsReqABCSImplProcess    10    1125    2247.1    6522
11:48:19    CommsProcessBillingAccountListEBF    10    7342    12365.5    22876
11:48:19    AIAReadJMSNotificationProcess    4    9    54.5    124

You can easily paste the output in Excel to display charts like:

image

image

You can also retrieve the statistics periodically to determine whether there is any performance degradation of some BPEL processes over time.

Let's see how the JMX API is used to achieve this:

First we need to establish a connection to the MBean server – for this we use the same method as we did in our JMXClient:

public static void initConnection(String hostname, String portString,
                                  String username,
                                  String password) throws IOException,
                                                          MalformedURLException {
    String protocol = "iiop";

    Integer portInteger = Integer.valueOf(portString);
    int port = portInteger.intValue();
    String jndiroot = "/jndi/";
    String mserver = "weblogic.management.mbeanservers.domainruntime";

    JMXServiceURL serviceURL =
        new JMXServiceURL(protocol, hostname, port, jndiroot + mserver);

    Hashtable h = new Hashtable();
    h.put(Context.SECURITY_PRINCIPAL, username);
    h.put(Context.SECURITY_CREDENTIALS, password);
    h.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES,
          "weblogic.management.remote");
    // Wait timeout 60 seconds
    h.put("jmx.remote.x.request.waiting.timeout", new Long(60000));
    connector = JMXConnectorFactory.connect(serviceURL, h);
    connection = connector.getMBeanServerConnection();
}

After that we retrieve all Mbeans which have the same pattern:

String mBeanName =
    "oracle.dms:Location=" + servername + ",soainfra_composite_label=*,type=soainfra_component,soainfra_component_type=bpel,soainfra_composite=*,soainfra_composite_revision=*,soainfra_domain=default,name=*";

Set<ObjectInstance> mbeans =
    connection.queryMBeans(new ObjectName(mBeanName), null);
System.out.println("FOUND " + mbeans.size());

This matches the display in Enterprise Manager “System MBean Browser”:

EM2

Now, we can query each MBean for the attributes

  • Name
  • successfulInstanceProcessingTime_completed
  • successfulInstanceProcessingTime_minTime
  • successfulInstanceProcessingTime_avg
  • successfulInstanceProcessingTime_maxTime
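Reading those attributes is a loop over the MBeans returned by queryMBeans. A minimal sketch follows (the class and method names are my own, and the output formatting simply mirrors the tab-separated listing above):

```java
import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectInstance;
import javax.management.ObjectName;

public class BpelStats {

    // Attribute names exposed by the soainfra_component MBeans, as listed above.
    static final String[] ATTRS = {
        "successfulInstanceProcessingTime_completed",
        "successfulInstanceProcessingTime_minTime",
        "successfulInstanceProcessingTime_avg",
        "successfulInstanceProcessingTime_maxTime"
    };

    // Joins one output row; tab-separated so it pastes straight into Excel.
    static String formatRow(Object name, Object[] values) {
        StringBuilder sb = new StringBuilder(String.valueOf(name));
        for (Object v : values) {
            sb.append('\t').append(v);
        }
        return sb.toString();
    }

    // Queries each matched MBean for its name and timing attributes
    // and prints one row per BPEL component.
    static void printStats(MBeanServerConnection connection,
                           Set<ObjectInstance> mbeans) throws Exception {
        for (ObjectInstance mbean : mbeans) {
            ObjectName on = mbean.getObjectName();
            Object name = connection.getAttribute(on, "Name");
            Object[] values = new Object[ATTRS.length];
            for (int i = 0; i < ATTRS.length; i++) {
                values[i] = connection.getAttribute(on, ATTRS[i]);
            }
            System.out.println(formatRow(name, values));
        }
    }
}
```

Calling printStats with the connection and mbeans set from the snippets above produces the tab-separated listing shown earlier.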

That’s it!

You can find the complete JDeveloper project here.

The same statistics can of course be retrieved as well programmatically for composites (services) and references.

New whitepaper “SOA 11g – The Influence of the Audit Level on Performance and Data Growth”


Introduction

I have created a new whitepaper comparing the effect of different Audit Level settings in SOA/AIA 11g:

Main Article

SOA 11g – The Influence of the Audit Level on Performance and Data Growth – A comparison using AIA 11.1 and 11.2 COMMS Order-to-Bill PIPs.
Please download from here.

SOA Suite


Oracle SOA Suite is a key component of Oracle Fusion Middleware and provides an integrated and comprehensive set of tools to build, deploy and manage Service-Oriented Architectures (SOA). The components of the suite benefit from common capabilities including consistent tooling, a single deployment and management model, end-to-end security and unified metadata management.

You can read more about SOA Suite by accessing Oracle Technology Network.

 

Index of SOA Suite articles

  • SOA Suite
  • Adapters
  • B2B
  • BAM
  • BPEL
  • Business Rules
  • Human Workflow
    • Index and Navigation Pages

    • Index of SOA Suite articles (02/26/2013 - Pete Farkas)
    • SOA Suite (01/07/2013 - Pete Farkas)
    • Most Recent Articles

      (up to 30)

    • Human Workflow in 11g (01/08/2010 - Mark Nelson)
  • SOA for Healthcare
  • How to Recover Initial Messages (Payload) from SOA Audit for Mediator and BPEL components


    Introduction

    In Fusion Applications, the status of a SOA composite instance is either running, completed, faulted or stale. Composite instances become stale immediately (irrespective of their current status) when the respective composite is redeployed with the same version. The messages (payload) are stored in SOA audit tables until they are purged. Users can go through Enterprise Manager and view the audit trail and messages of each composite instance, which is useful for debugging. However, there are situations where you want to re-submit the initiation of SOA composite instances in bulk, for the following reasons:

    • The composite was redeployed with the same version number that resulted in all respective instances (completed successfully, faulted or in-flight) becoming stale (“Staled” status)
    • Instances failed because down-stream applications failed and the respective composite did not have an ability to capture the initial message in persistence storage to retry later

    In these cases, it may be necessary to capture the initial message (payload) of many instances in bulk and resubmit them. This can be managed programmatically through the SOA Facade API. The Facade API is part of Oracle SOA Suite's Infrastructure Management Java API, exposing operations and attributes of composites, components, services, references and so on. As long as instances are not purged, a developer can leverage the SOA Facade API to retrieve the initial messages of either Mediator or BPEL components programmatically. The captured messages can be resubmitted immediately or stored in persistent storage, such as a file, JMS or a database, for later submission. There are several possible approaches, but this post creates a SOA composite that retrieves the initial message of Mediator or BPEL components. The sample provides the framework, and you can tailor it to your requirements.

    Main Article

    SOA Facade API

    Please refer to this for complete SOA Facade API documentation. The SOA audit trails and messages work internally as follows:

    • The “Audit Level” should be either Production or Development to capture the initial payload
    • The “Audit Trail Threshold” determines the location of the initial payload.  If the threshold is exceeded, the View XML link is shown in the audit trail instead of the payload. The default value is 50,000 bytes. These large payloads are stored in a separate database table: audit_details.

    Please refer to the following document for more details on these properties.

    Since the SOA composite we are developing will be deployed in the same respective SOA Server, you do not require user credentials to create the locator object. This is all you need:

    Locator locator = LocatorFactory.createLocator();

    Please see the SOA Facade API document for more information on the Locator class.

    Once the Locator object is created, you can lookup composites and apply various filters to narrow down the search to respective components. This is all explained in detail with examples in the SOA Facade document. Here, we focus on how to retrieve the initial messages of the Mediator and BPEL components to resubmit them.

    How to retrieve initial payload from BPEL?

    In BPEL, the initial payload is either embedded in the audit trail or linked from it; this is controlled by the audit trail threshold value. If the payload size exceeds the threshold, the audit trail contains a link instead. This is the main method to get the audit trail:

    auditTrailXml = (String)compInst.getAuditTrail();
    /* The "compInst" is a ComponentInstance that is derived from: */
    Component lookupComponent = (Component)locator.lookupComponent(componentName);
    ComponentInstanceFilter compInstFilter = new ComponentInstanceFilter();
    compInstFilter.setId(componentId);

     

    If the payload size exceeds the audit threshold value, then the actual payload is an XML document stored in the "audit_details" table. The following Facade API call retrieves it:

    auditDetailXml = (String)locator.executeComponentInstanceMethod(componentType + ":" + componentId, auditMethod, new String[]{auditId});

    The "auditId" for BPEL is always "0".

     

    How to retrieve initial payload from Mediator

    The initial payload in Mediator is never embedded in the audit trail. It is always linked, and the retrieval syntax is similar to BPEL (when the payload size exceeds the audit threshold value). However, the "auditId" must be parsed out of the Mediator audit trail before the initial payload can be fetched. This is the code snippet to get the "auditId" from the Mediator audit trail:

    if (componentType.equals("mediator")) {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        DocumentBuilder db = dbf.newDocumentBuilder();
        Document document = db.parse(new InputSource(new StringReader(auditTrailXml)));
        NodeList nodeList = document.getElementsByTagName("event");
        String attribute = nodeList.item(0).getAttributes().getNamedItem("auditId").getNodeValue();
        addAuditTrailEntry("The Audit is: " + attribute);
        auditId = attribute;
        auditMethod = "getAuditMessage";
    }
    
    /* Once you have the "auditID" from above code, the syntax to get the initial payload is the same as in BPEL.*/
    auditDetailXml = (String)locator.executeComponentInstanceMethod(componentType +":"+ componentId, auditMethod, new String[]{auditId});

     

    Complete Java embedded code in BPEL

    try { 
    String componentInstanceID = new Long(getInstanceId()).toString();    
    addAuditTrailEntry("This Run time Component Instance ID is "+componentInstanceID);  
    
    XMLElement compositeNameVar = (XMLElement) getVariableData("inputVariable", "payload", "/client:process/client:compositeName");
    String compositeName = compositeNameVar.getTextContent();  
    
    XMLElement compositeIdVar = (XMLElement) getVariableData("inputVariable", "payload", "/client:process/client:compositeId");
    String compositeId = compositeIdVar.getTextContent();  
    
    XMLElement componentTypeVar = (XMLElement) getVariableData("inputVariable", "payload", "/client:process/client:componentType");
    String componentType = componentTypeVar.getTextContent();  
    
    XMLElement componentNameVar = (XMLElement) getVariableData("inputVariable", "payload", "/client:process/client:componentName");
    String componentName = componentNameVar.getTextContent();  
    
    XMLElement componentIdVar = (XMLElement) getVariableData("inputVariable", "payload", "/client:process/client:componentId");
    String componentId = componentIdVar.getTextContent();  
    
    String auditDetailXml = "null";
    String auditTrailXml = "null";
    String auditMethod = "getAuditDetails";
    String auditId = "0";
    
    addAuditTrailEntry("The lookup Composite Instance Name is "+compositeName);  
    addAuditTrailEntry("The lookup Composite Instance ID is "+compositeId);  
    addAuditTrailEntry("The lookup Component Instance Name is "+componentName);
    addAuditTrailEntry("The lookup Component Instance Type is " + componentType);
    addAuditTrailEntry("The lookup Component Instance ID is "+componentId);  
    
    Locator locator = LocatorFactory.createLocator();  
    Composite composite = (Composite)locator.lookupComposite(compositeName);  
    Component lookupComponent = (Component)locator.lookupComponent(componentName);  
    
    ComponentInstanceFilter compInstFilter = new ComponentInstanceFilter();  
    
    compInstFilter.setId(componentId);
    
    List<ComponentInstance> compInstances = lookupComponent.getInstances(compInstFilter);  
    if (compInstances != null) {  
        addAuditTrailEntry("====Audit Trail of Instance===");  
        for (ComponentInstance compInst : compInstances) {  
            String compositeInstanceId = compInst.getCompositeInstanceId(); 
            String componentStatus = compInst.getStatus(); 
            addAuditTrailEntry("Composite Instance ID is "+compositeInstanceId);  
            addAuditTrailEntry("Component Status is "+componentStatus);  
    
            addAuditTrailEntry("Get Audit Trail");
            auditTrailXml = (String)compInst.getAuditTrail();
    
            if (componentType.equals("mediator")) {
                DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
                DocumentBuilder db = dbf.newDocumentBuilder();
                Document document = db.parse(new InputSource(new StringReader(auditTrailXml)));
                NodeList nodeList = document.getElementsByTagName("event");
                String attribute = nodeList.item(0).getAttributes().getNamedItem("auditId").getNodeValue();
                addAuditTrailEntry("The Audit is: " + attribute);
    
                auditId = attribute;
                auditMethod="getAuditMessage";
                }
    
            addAuditTrailEntry("Received Audit Trail");
    
            addAuditTrailEntry("Get Audit Details of: "+ componentType +":"+ componentId + "for auditId: " + auditId);
    
            try {
                auditDetailXml = (String)locator.executeComponentInstanceMethod(componentType +":"+ componentId, auditMethod, new String[]{auditId});
            } catch (Exception e) { 
            addAuditTrailEntry("Exception in getting audit details:" + e);
            }
    
            addAuditTrailEntry("Received Audit Details");
    
            setVariableData("auditTrailString", "payload", "/client:AuditTrailString/client:auditTrail", auditTrailXml);
            setVariableData("auditDetailString", "payload", "/client:AuditDetailString/client:auditDetail", auditDetailXml);
    
            addAuditTrailEntry("BPEL Variables set");
        }  
    } 
    
    } catch (Exception e) { 
        addAuditTrailEntry("Exception in getting Audit Trails and Details"); 
    }
    
    The schema of the request payload for the above composite is:
    
    <element name="process">
        <complexType>
            <sequence>
                <element name="compositeName" type="string"/>
                <element name="compositeId" type="string"/>
                <element name="componentType" type="string"/>
                <element name="componentName" type="string"/>
                <element name="componentId" type="string"/>
            </sequence>
        </complexType>
    </element>
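A request instance conforming to that schema would look roughly like this (the values and the namespace URI are illustrative; substitute the client namespace of your own composite):

```xml
<!-- Illustrative sample request; namespace and values are placeholders -->
<process xmlns="http://example.com/your-client-namespace">
    <compositeName>DummyComposite</compositeName>
    <compositeId>10001</compositeId>
    <componentType>bpel</componentType>
    <componentName>DummyBPELProcess</componentName>
    <componentId>10001</componentId>
</process>
```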

    Sample Code

    Please get the complete JDeveloper project as follows:

    1. DummySOAApplication to retrieve initial payload of Mediator and BPEL components

    2. The SOA Audit Trail Composite “SOAAuditTrails” that contains the logic to get initial payload of “Dummy Composite”

    3. Sample Payload "SOA_audit_payload"

     

     

    Unable to start SOA-INFRA if the immediate and deferred audit policy "isActive" parameters were set to the same value


    In PS3 (11.1.1.4) and PS4 (11.1.1.5), the SOA-INFRA application will not be able to start up when you set both the immediate and deferred audit policy MBean attributes to active. This is a known bug (13384305); there is a patch for PS5 (11.1.1.6) that resolves the issue, and a cumulative patch (18254378) for PS4. If you need a quick workaround to start the SOA-INFRA application before the patch is fully tested, this blog describes how to find the MBean configuration in the MDS schema and change the value so that the SOA-INFRA application can start up.

    When you encounter this issue, the WebLogic console displays the server status as "RUNNING", but SOA-INFRA does not show up in the EM Console. In the SOA server log file, you will see the following exception:

    [/WEB-INF/fabric-config-core.xml]: Cannot resolve reference to bean
      'DOStoreFactory' while setting bean property 'DOStoreFactory'; nested
      exception is org.springframework.beans.factory.BeanCreationException: Error
      creating bean with name 'DOStoreFactory' defined in ServletContext resource
      [/WEB-INF/fabric-config-core.xml]: Invocation of init method failed; nested
      exception is java.lang.NullPointerException
      at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveReference(BeanDefinitionValueResolver.java:275)
      at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:104)
      at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1245)
      at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1010)
      at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:472)

    The audit policy attribute settings are stored in the MDS schema. There are 3 tables we can use to fix this problem: MDS_ATTRIBUTES, MDS_COMPONENTS and MDS_PARTITIONS. The MDS_COMPONENTS table stores the version information; the latest version has the highest value in the COMP_CONTENTID column.

    select * from mds_components

    UnableToStartSOA-1

    The MDS_ATTRIBUTES stores the attribute values of the MBean configuration properties, in this case “audit-config”. As we are only interested in audit-config settings related to the SOA-INFRA application for this issue, we can find the correct partition id for the SOA-INFRA application in MDS_PARTITIONS table (see below):

    select * from mds_partitions

    UnableToStartSOA-2

     

     

     

    To find the latest audit-config attribute values, run the following SQL statement to retrieve the latest audit policy configuration:

    SELECT *
    FROM MDS_ATTRIBUTES
    WHERE ATT_CONTENTID =
          (SELECT MAX(COMP_CONTENTID)
           FROM MDS_COMPONENTS
           WHERE COMP_LOCALNAME = 'audit-config')
      AND MDS_ATTRIBUTES.ATT_PARTITION_ID =
          (SELECT PARTITION_ID
           FROM MDS_PARTITIONS
           WHERE PARTITION_NAME = 'soa-infra');

    The default configuration is shown below:

    UnableToStartSOA-3
    To work around the issue, ensure that one of the isActive attribute values (either immediate or deferred) is set to "true" and the other is set to "false"; then you will be able to start the SOA-INFRA application.
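As a sketch, the workaround update might look like the following. This assumes the isActive flags are stored as rows in MDS_ATTRIBUTES with ATT_LOCALNAME = 'isActive' and the value held in the ATT_VALUE column; these column names are an assumption, so verify them and the rows returned by the SELECT above against your own MDS schema, and back up the tables before changing anything:

```sql
-- Hypothetical sketch: flip one of the two isActive rows to 'false'.
-- Inspect the SELECT output first to identify which row belongs to the
-- immediate policy and which to the deferred policy, and refine the
-- WHERE clause accordingly instead of relying on ROWNUM.
UPDATE MDS_ATTRIBUTES
SET ATT_VALUE = 'false'
WHERE ATT_LOCALNAME = 'isActive'
  AND ATT_CONTENTID =
      (SELECT MAX(COMP_CONTENTID) FROM MDS_COMPONENTS
       WHERE COMP_LOCALNAME = 'audit-config')
  AND ATT_PARTITION_ID =
      (SELECT PARTITION_ID FROM MDS_PARTITIONS
       WHERE PARTITION_NAME = 'soa-infra')
  AND ROWNUM = 1;
COMMIT;
```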

     

     


    Resequencer Health Check


    11g Resequencer Health Check

    In this blog we will look at a few useful queries to monitor and diagnose the health of Resequencer components running in a typical SOA/AIA environment.

    The first query gives a snapshot of the current count of Resequencer messages in their various states and group statuses.

    Query1: Check current health of resequencers

    select to_char(sysdate,'YYYY-MM-DD HH24:MI:SS') time, gs.status group_status, m.status msg_status, count(1), gs.component_dn
    from mediator_group_status gs, mediator_resequencer_message m
    where m.group_id = gs.group_id
    and gs.status <= 3
    and gs.component_status != 1
    group by gs.status, m.status, gs.component_dn
    order by gs.component_dn, gs.status;

    The table below lists a representative sample output of the above query from a running SOA environment containing Resequencers, collected at 12:04:50.

    Query 1 sample output

    For our analysis, let us collect the same data again after a few seconds

    2

    Refer to the appendix for a quick glossary of Resequencer group_status and message_status values.

    Let us dive a bit deeper into each of the above state combinations, their counts and what they imply.

    1. GRP_STATUS/MSG_STATUS = 0/0 – READY

    These show the messages which are ready for processing and eligible to be Locked and processed by the resequencer.  For a healthy system, this number would be quite low as the messages will be locked and processed continuously by the resequencer.  When the messages arriving into the system have stopped, this count should drop to zero.

    A high count for this combination would suggest that not enough groups are being locked by the resequencer for the rate at which messages are arriving for processing.  The Mediator property – “Resequencer Maximum Groups Locked” should be adequately increased to lock groups at a higher rate.

    Refer here to see how this property can be changed from EM Console

    2. GRP_STATUS=0/MSG_STATUS=2 – PROCESSED

    This count indicates the number of processed messages and will grow over time. A very high count (like > 1 million in the above example) indicates that a Resequencer purge is due and should be run soon to delete the processed messages.

     

    3. GRP_STATUS=0/MSG_STATUS=5 – ABORTED

    This count shows the number of messages that have been manually aborted by the administrator. Refer here for how Resequencer messages can be aborted using the SOA EM Console.

    4. GRP_STATUS=1/MSG_STATUS=0 – LOCKED

    This combination shows messages within groups that are currently being processed. In a healthy system this number stays low, as messages belonging to locked groups are processed continuously by the Resequencer worker threads. When messages stop arriving into the system, this count should drop to zero.

    A high count for this combination suggests that not enough worker threads are available to process messages at the rate at which groups are locked for processing. The Mediator property “Resequencer Worker Threads” should be adequately increased to boost the message processing rate.

    Refer here to see how this property can be changed from EM Console

     

    5. GRP_STATUS=1/MSG_STATUS=2 – LOCKED

    The count for this combination shows the number of processed messages in locked groups. This is a transient state; once all messages of a locked group are processed, these counts move to GRP_STATUS=0/MSG_STATUS=2.

     

    6. GRP_STATUS=3 – ERRORED

    These are messages in errored groups, which failed processing due to various errors. They need to be manually recovered from the EM Console or the AIA Resubmission tool. If the messages can be recovered and processed successfully, they transition to GRP_STATUS=0/MSG_STATUS=2. If the errors are non-recoverable, the messages can be aborted from the EM Console and move to GRP_STATUS=0/MSG_STATUS=5.

    Refer to my earlier blog here for details on recovery of resequencer errors.
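The status combinations described above can be tallied in a single pass with a count query written in the style of Query3 below. This is a sketch only, reusing the same tables and join condition as Query3:

```sql
-- Sketch: count messages per group status / message status combination
select gs.status grp_status, m.status msg_status, count(1)
from   mediator_group_status gs, mediator_resequencer_message m
where  m.group_id = gs.group_id
group by gs.status, m.status
order by gs.status, m.status;
```

Mapping each (grp_status, msg_status) row of the output against the combinations above gives a quick health snapshot of the resequencer backlog.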

     

    Query2: Check ContainerID health

    select * from MEDIATOR_CONTAINERID_LEASE;

    The table below shows sample output for the above query from a 2-node clustered SOA installation.

    [Sample output: container IDs with their last lease renewal times for the two cluster nodes]

     

     

    It shows the time when both nodes last renewed their Mediator containerIds. These containerId renewals serve as heartbeats for the Mediator/Resequencer. They are vital in maintaining the load balance of messages among the nodes and the failover of groups/messages that were allocated to expired nodes.

    
    

    Query3: Load Balance between cluster nodes

    select to_char(sysdate,'YYYY-MM-DD HH24:MI:SS') time, gs.container_id container, gs.status group_status, m.status msg_status, count(1)
    from mediator_group_status gs, mediator_resequencer_message m
    where m.group_id = gs.group_id
    and   gs.status  in (0,1)
    and component_status!=1 
    group by  gs.container_id, gs.status, m.status
    order by gs.container_id, gs.status;
    
    

    The above query can be used to monitor the load balance of messages between nodes of a cluster. The sample output below is from a 2-node clustered SOA environment.

    [Sample output: message counts per container_id, group_status, and msg_status]

    This sample output shows that the ready and locked messages are roughly evenly distributed across the cluster. If major skew is observed for a specific container, then further analysis may be required. Thread dumps and diagnostic logs of the slower node may indicate the cause of the skew.

     

    Appendix:

    The table below lists the important status values of the MEDIATOR_GROUP_STATUS and MEDIATOR_RESEQUENCER_MESSAGE tables and how the values can be interpreted.

    [Reference tables: status values for MEDIATOR_GROUP_STATUS and MEDIATOR_RESEQUENCER_MESSAGE]

    White Paper on Message Sequencing Patterns using Oracle Mediator Resequencer


    One of the consequences of Asynchronous SOA-based integration patterns is that it does not guarantee that messages will reach their destination in the same sequence as initiated at the source.

    Ever faced an integration scenario where

    – an update order is processed in the integration layer before the create order?

    – the target system cannot process two orders for the same customer?

    Common fixes used in the field include

    – Singleton BPEL implementations, singleton JCA adapters, custom sequencing logic using tables, etc.

    These common ‘fixes’ often result in performance bottlenecks, since all messages are usually funneled through a single-threaded component. These approaches also become unreliable and counter-productive when used in clustered deployments, and error scenarios can cause unexpected behavior.

    To address the sequencing requirement without these shortcomings, Oracle SOA Suite provides the Mediator Resequencer component that allows you to build/rebuild a sequence from an out-of-sequence set of input messages. The Resequencer enforces sequential processing of related messages and performs parallel processing of unrelated messages, thereby maintaining performance.

    The white paper below covers common use cases for the Resequencer, Resequencer modes, best practices, configurations, error handling, HA, and failover.

    Oracle Mediator Resequencer.pdf

    Custom Message Data Encryption of Payload in SOA 11g


    Introduction

    This article explains how to encrypt sensitive data (such as SSN, credit card number, etc.) in the incoming payload and decrypt the data back to clear text (or original form) in the outgoing message. The purpose is to hide the sensitive data in the payload, in the audit trail, console, and logs.

    Main Article

    Oracle provides Oracle Web Services Manager (OWSM) message protection, but it encrypts the entire payload. However, OWSM gives us the capability to create our own custom policies and custom assertions. The framework is implemented in Java and allows us to write custom assertions which can be attached to a policy to encrypt and decrypt message data. These policies must be attached to the SOA composites in order for the policy assertion to execute.

    Step by step guide:

    1. Create a custom Java encryptor class

    This is the Java implementation class for encrypting the data in incoming messages. It must extend oracle.wsm.policyengine.impl.AssertionExecutor and must have the method execute:

     public IResult execute(IContext iContext)

    This method is invoked by the policy framework. The execute method retrieves the XML nodes that require encryption from the SOAP message, encrypts their values, and sets each node value to the encrypted value.

    2. Create a custom Java decryptor class

    This is the Java implementation class for decrypting the data in outgoing messages. It must extend oracle.wsm.policyengine.impl.AssertionExecutor and must have the method execute:

     public IResult execute(IContext iContext)

    This method is invoked by the policy framework. The execute method retrieves the XML nodes that require decryption from the SOAP message, decrypts their values, and sets each node value to the decrypted value.
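Outside the OWSM-specific wiring, the core symmetric encrypt/decrypt logic that such executor classes typically delegate to can be sketched as below. This is a minimal, self-contained illustration, not the OWSM API: the class name, the AES/ECB mode, and the key derivation from the policy property are assumptions made for brevity. Production code should use an authenticated cipher mode and a proper key store.

```java
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.Base64;

// Hypothetical helper: the kind of logic a custom assertion executor could
// delegate to when encrypting/decrypting a node value. Not the OWSM API.
public class PayloadCrypto {
    private final SecretKeySpec key;

    public PayloadCrypto(String secret) {
        // Derive a 128-bit AES key from the configured policy property
        // (e.g. "MySecretKey"). A real implementation would use a key store.
        byte[] raw = Arrays.copyOf(secret.getBytes(StandardCharsets.UTF_8), 16);
        this.key = new SecretKeySpec(raw, "AES");
    }

    public String encrypt(String clearText) {
        try {
            Cipher c = Cipher.getInstance("AES/ECB/PKCS5Padding"); // ECB for brevity only
            c.init(Cipher.ENCRYPT_MODE, key);
            byte[] ct = c.doFinal(clearText.getBytes(StandardCharsets.UTF_8));
            return Base64.getEncoder().encodeToString(ct); // written back into the node
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public String decrypt(String cipherText) {
        try {
            Cipher c = Cipher.getInstance("AES/ECB/PKCS5Padding");
            c.init(Cipher.DECRYPT_MODE, key);
            byte[] pt = c.doFinal(Base64.getDecoder().decode(cipherText));
            return new String(pt, StandardCharsets.UTF_8);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        PayloadCrypto crypto = new PayloadCrypto("MySecretKey");
        String hidden = crypto.encrypt("123-45-6789"); // what the audit trail would show
        System.out.println(crypto.decrypt(hidden));    // prints 123-45-6789
    }
}
```

The encryptor's execute method would call encrypt on each sensitive node value, and the decryptor's execute method would call decrypt, as described in the two steps above.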

    3. Compile and build the Java encryptor and decryptor into a jar file

    Required libraries are:

    $ORACLE_COMMON_HOME\modules\oracle.wsm.common_11.1.1\wsm-policy-core.jar

    $ORACLE_COMMON_HOME\modules\oracle.wsm.agent.common_11.1.1\wsm-agent-core.jar

    $ORACLE_COMMON_HOME\modules\oracle.osdt_11.1.1\osdt_wss.jar

    $ORACLE_COMMON_HOME\modules\oracle.osdt_11.1.1\osdt_core.jar

    4. Copy the jar file to $SOA_HOME\soa\modules\oracle.soa.ext_11.1.1

    5. Run ant in $SOA_HOME\soa\modules\oracle.soa.ext_11.1.1

    6. Restart SOA server

    7. Create a custom encryption assertion template

    This custom assertion template calls the custom Java encryptor class which encrypts the message data.

    When this assertion is attached to a policy that is attached to a SOA composite web service, OWSM applies the policy enforcement whenever a request is made to that service, and the execute method of the custom encryptor Java class is invoked.

    <orawsp:AssertionTemplate xmlns:orawsp="http://schemas.oracle.com/ws/2006/01/policy"
                              orawsp:Id="soa_encryption_template"
                              orawsp:attachTo="generic" orawsp:category="security"
                              orawsp:description="Custom Encryption of payload"
                              orawsp:displayName="Custom Encryption"
                              orawsp:name="custom/soa_encryption"
                              xmlns:custom="http://schemas.oracle.com/ws/soa/custom">
      <custom:custom-executor orawsp:Enforced="true" orawsp:Silent="false"
                       orawsp:category="security/custom"
                       orawsp:name="WSSecurity_Custom_Assertion">
        <orawsp:bindings>
          <orawsp:Implementation>fully qualified Java class name that will be called by this assertion </orawsp:Implementation>
          <orawsp:Config orawsp:configType="declarative" orawsp:name="encrypt_soa">
            <orawsp:PropertySet orawsp:name="encrypt">
              <orawsp:Property orawsp:contentType="constant"
                               orawsp:name="encryption_key" orawsp:type="string">
                <orawsp:Value>MySecretKey</orawsp:Value>
              </orawsp:Property>
            </orawsp:PropertySet>
          </orawsp:Config>
        </orawsp:bindings>
      </custom:custom-executor>
    </orawsp:AssertionTemplate>

    8. Use Enterprise Manager (EM) to import the custom encryption assertion template into the WebLogic domain Web Services Policies

    9. Create an assertion using the encryption assertion template that was imported

    10. Create custom decryption assertion template

    This custom assertion template calls the custom Java decryptor class which decrypts the message data.

    When this assertion is attached to a policy that is attached to a SOA composite web service, OWSM applies the policy enforcement whenever a request is made to that service, and the execute method of the custom outbound decryptor is invoked.

    <orawsp:AssertionTemplate xmlns:orawsp="http://schemas.oracle.com/ws/2006/01/policy"
                              orawsp:Id="soa_decryption_template"
                              orawsp:attachTo="binding.client" orawsp:category="security"
                              orawsp:description="Custom Decryption of payload"
                              orawsp:displayName="Custom Decryption"
                              orawsp:name="custom/soa_decryption"
                              xmlns:custom="http://schemas.oracle.com/ws/soa/custom">
      <custom:custom-executor orawsp:Enforced="true" orawsp:Silent="false"
                       orawsp:category="security/custom"
                       orawsp:name="WSSecurity Custom Assertion">
        <orawsp:bindings>
          <orawsp:Implementation>fully qualified Java class name that will be called by this assertion</orawsp:Implementation>
          <orawsp:Config orawsp:configType="declarative" orawsp:name="encrypt_soa">
            <orawsp:PropertySet orawsp:name="decrypt">
              <orawsp:Property orawsp:contentType="constant"
                               orawsp:name="decryption_key" orawsp:type="string">
                <orawsp:Value>MySecretKey</orawsp:Value>
              </orawsp:Property>
            </orawsp:PropertySet>
          </orawsp:Config>
        </orawsp:bindings>
      </custom:custom-executor>
    </orawsp:AssertionTemplate>

    11. Create an assertion using the decryption assertion template that was imported

    12. In Enterprise Manager (EM), export custom encryption policy to a file and save it to $JDEV_USER_DIR/system11.1.1.x.x.x.x/DefaultDomain/oracle/store/gmds/owsm/policies/oracle

    13. In Enterprise Manager (EM), export custom decryption policy to a file and save it to $JDEV_USER_DIR/system11.1.1.x.x.x.x/DefaultDomain/oracle/store/gmds/owsm/policies/oracle

    14. In JDeveloper, attach the custom encryption policy to the SOA composite inbound services that require message data encryption

    15. In JDeveloper, attach the custom decryption policy to the SOA composite outbound services whose message data is encrypted but needs to be decrypted in the outbound message

    16. Compile and deploy the SOA composite

    11g Mediator – Diagnosing Resequencer Issues


    In a previous blog post, we saw a few useful tips to help us quickly monitor the health of resequencer components in a SOA system at runtime. In this blog post, let us explore some tips for diagnosing Mediator Resequencer issues. Along the way, we will also learn some key points to consider for integration systems that run Mediator Resequencer composites.

    Please refer to the Resequencer White paper for a review of the basic concepts of resequencing and the interplay of various subsystems involved in the execution of Resequencer Mediator composites.

    Context

    In this blog post we will refer to the AIA Communications O2C Pre-Built Integration pack (aka O2C PIPs) as an example for understanding some issues that can arise at runtime with resequencer systems, and how we can diagnose the cause of such issues. The O2C PIP uses resequencing-enabled flows; one such flow is the UpdateSalesOrder flow between OSM and Siebel, used to process the OSM status of Sales Orders in proper time sequence within the Siebel system.

    Isolate the server within the SOA cluster

    Often the resequencer health check queries point to an issue occurring on only one server within the SOA cluster. While the database queries mentioned here give us the containerId of the specific server, they do not reveal the server name, because the Mediator uses a GUID to track a runtime server.

    Trace log messages generated by the Mediator can help us correlate this GUID to an individual server running in the cluster at runtime. The oracle.soa.mediator.dispatch runtime logger can be set to TRACE:32 level from the FMW EM console.

    Enabling this logger for just a few minutes will suffice, and one can see messages such as the one below in the SOA servers’ diagnostic logs, once every lease refresh cycle. The default refresh cycles are 60s apart.


    [APP: soa-infra] [SRC_METHOD: renewContainerIdLease] Renew container id [34DB0F60899911E39F24117FE503A156] at database time :2014-01-31 06:11:18.913


    This implies that the server which logged the above message is running with a containerId of 34DB0F60899911E39F24117FE503A156.
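As an illustration, a small helper (hypothetical, not part of the product) could pull the containerId out of collected renewContainerIdLease trace lines, making it easy to map each server's diagnostic log to its GUID:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical helper to correlate containerIds from the
// renewContainerIdLease trace messages found in diagnostic logs.
public class ContainerIdExtractor {
    private static final Pattern LEASE_LINE =
            Pattern.compile("Renew container id \\[([0-9A-F]+)\\]");

    public static String extract(String logLine) {
        Matcher m = LEASE_LINE.matcher(logLine);
        return m.find() ? m.group(1) : null; // null when not a lease renewal line
    }

    public static void main(String[] args) {
        String line = "[APP: soa-infra] [SRC_METHOD: renewContainerIdLease] "
                + "Renew container id [34DB0F60899911E39F24117FE503A156] "
                + "at database time :2014-01-31 06:11:18.913";
        System.out.println(extract(line)); // prints 34DB0F60899911E39F24117FE503A156
    }
}
```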

    Locker Thread Analysis

    When one observes excessive messages piling up with a status of GRP_STATUS=READY and MSG_STATUS=READY, it usually indicates that the locker thread is not locking the groups fast enough to keep up with the incoming messages. This could be due to the Resequencer Locker thread being stuck or performing poorly; for instance, the locker thread could be stuck executing updates against the MEDIATOR_GROUP_STATUS table.

    It is generally useful to isolate the server which is creating the backlog using the health check queries, and then identify the server name using the logger trace statements described in the previous section. A few thread dumps of this server could then throw more light on the actual issue affecting the locker thread.

    Usually thread dumps show a stack such as the one below for a resequencer Locker thread.

    "Workmanager: , Version: 0, Scheduled=false, Started=false, Wait time: 0 ms
    " id=330 idx=0x1b0 tid=28794 prio=10 alive, sleeping, native_waiting, daemon
        at java/lang/Thread.sleep(J)V(Native Method)
        at oracle/tip/mediator/common/listener/DBLocker.enqueueLockedMessages(DBLocker.java:213)
        at oracle/tip/mediator/common/listener/DBLocker.run(DBLocker.java:84)
        at oracle/integration/platform/blocks/executor/WorkManagerExecutor$1.run(WorkManagerExecutor.java:120)
        at weblogic/work/j2ee/J2EEWorkManager$WorkWithListener.run(J2EEWorkManager.java:184)
        at weblogic/work/DaemonWorkThread.run(DaemonWorkThread.java:30)
        at jrockit/vm/RNI.c2java(JJJJJ)V(Native Method)

    In the above thread, the Locker is enqueuing messages from locked groups into the in-memory queue for processing by the worker threads.

    At times of trouble, the Locker thread may be seen stuck doing database updates. If this is seen across thread dumps with no progress made by the thread, it could point to a database issue which needs to be attended to.

    A poor performance of the locker query on the database side will adversely impact the Resequencer performance and hence decrease the throughput of the integration flow that uses Resequencers.

    Recollect that the Locker thread continuously runs an update query attempting to lock eligible groups. Below is a sample FIFO Resequencer Locker query as seen in database AWR reports.

    update mediator_group_status a set a.status=7 where id in ( select id from (select distinct b.id, b.lock_time from 
    mediator_group_status b, mediator_resequencer_message c where b.id=c.owner_id and b.RESEQUENCER_TYPE='FIFO' and 
    b.status=0 and b.CONTAINER_ID=:1 and c.status=0 and b.component_status!=:2 ORDER BY b.lock_time) d where rownum<=:3 )

    Database AWR reports are also very useful for checking the average elapsed time and other performance indicators for the locker query.

    Huge data volumes due to the lack of a proper purging strategy for the Mediator tables are a common reason for deteriorated locker query performance. Regular data purging, partitioning, statistics gathering, and creation of the required indexes on MEDIATOR_GROUP_STATUS will usually ensure good locker query performance.

    Note that there is only one Resequencer Locker thread running per server at runtime. Any database issue that impacts the locker thread will impair all the Mediator composites that use the same resequencing strategy. The Mediator Resequencer uses the database for storage and retrieval of messages to implement the reordering and sequencing logic; hence, proper and timely maintenance of the SOA database goes a long way in ensuring good performance.

    Worker Thread Analysis

    Recollect that Worker threads are responsible for processing messages in order. There are multiple worker threads per server to parallel-process multiple groups, while ensuring that each group is exclusively processed by only one worker thread to preserve the desired sequence. Hence, the number of worker threads configured in Mediator properties (from FMW EM console) is a key parameter for optimum performance.

    The sample snippets below from server thread dumps show Resequencer Worker threads. The first stack shows a worker thread waiting for messages to arrive on the internal queue. As the Locker thread locks new eligible groups, such available worker threads process the messages belonging to the locked groups.

    Idle Worker Thread:
    "Workmanager: , Version: 0, Scheduled=false, Started=false, Wait time: 0 ms
    " id=208 idx=0x32c tid=26068 prio=10 alive, parked, native_blocked, daemon
        at jrockit/vm/Locks.park0(J)V(Native Method)
        at jrockit/vm/Locks.park(Locks.java:2230)
        at jrockit/proxy/sun/misc/Unsafe.park(Unsafe.java:616)[inlined]
        at java/util/concurrent/locks/LockSupport.parkNanos(LockSupport.java:196)[inlined]
        at java/util/concurrent/locks/AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)[optimized]
        at java/util/concurrent/LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)[optimized]
        at oracle/tip/mediator/common/listener/AbstractWorker.run(AbstractWorker.java:63)
        at oracle/integration/platform/blocks/executor/WorkManagerExecutor$1.run(WorkManagerExecutor.java:120)
        at weblogic/work/j2ee/J2EEWorkManager$WorkWithListener.run(J2EEWorkManager.java:184)
        at weblogic/work/DaemonWorkThread.run(DaemonWorkThread.java:30)
        at jrockit/vm/RNI.c2java(JJJJJ)V(Native Method)
        -- end of trace

    The next partial stack shows a worker thread which is processing a message from a group that has been locked by the Locker.

    Busy Worker Thread:
    ….
        at oracle/tip/mediator/service/BaseActionHandler.requestProcess(BaseActionHandler.java:75)[inlined]
        at oracle/tip/mediator/service/OneWayActionHandler.process(OneWayActionHandler.java:47)[optimized]
        at oracle/tip/mediator/service/ActionProcessor.onMessage(ActionProcessor.java:64)[optimized]
        at oracle/tip/mediator/dispatch/MessageDispatcher.executeCase(MessageDispatcher.java:137)[optimized]
        at oracle/tip/mediator/dispatch/InitialMessageDispatcher.processCase(InitialMessageDispatcher.java:500)[optimized]
        at oracle/tip/mediator/dispatch/InitialMessageDispatcher.processCases(InitialMessageDispatcher.java:398)[optimized]
        at oracle/tip/mediator/dispatch/InitialMessageDispatcher.processNormalCases(InitialMessageDispatcher.java:279)[inlined]
        at oracle/tip/mediator/dispatch/resequencer/ResequencerMessageDispatcher.processCases(ResequencerMessageDispatcher.java:27)[inlined]
        at oracle/tip/mediator/dispatch/InitialMessageDispatcher.dispatch(InitialMessageDispatcher.java:151)[inlined]
        at oracle/tip/mediator/dispatch/resequencer/ResequencerMessageHandler.handleMessage(ResequencerMessageHandler.java:22)[optimized]
        at oracle/tip/mediator/resequencer/ResequencerDBWorker.handleMessage(ResequencerDBWorker.java:178)[inlined]
        at oracle/tip/mediator/resequencer/ResequencerDBWorker.process(ResequencerDBWorker.java:343)[optimized]
        at oracle/tip/mediator/common/listener/AbstractWorker.run(AbstractWorker.java:81)
        at oracle/integration/platform/blocks/executor/WorkManagerExecutor$1.run(WorkManagerExecutor.java:120)
        at weblogic/work/j2ee/J2EEWorkManager$WorkWithListener.run(J2EEWorkManager.java:184)
        at weblogic/work/DaemonWorkThread.run(DaemonWorkThread.java:30)
        at jrockit/vm/RNI.c2java(JJJJJ)V(Native Method)
        -- end of trace

    It should be noted that all further processing of the message, until the next transaction boundary, happens in the context of this worker thread. For example, the diagram below shows the O2C UpdateSalesOrder integration flow from a threads perspective. Here, the BPEL ABCS processing, the calls to the AIA SessionPoolManager, and the synchronous invoke to the Siebel web service all happen in the resequencer worker thread.

    Now consider the example thread stack shown below, seen in a server thread dump. It shows a worker thread engaged in HTTP communication with an external system.

    Stuck Worker Thread:
    "Workmanager: , Version: 0, Scheduled=false, Started=false, Wait time: 0 ms
     " id=299 idx=0x174 tid=72518 prio=10 alive, in native, daemon
      at jrockit/net/SocketNativeIO.readBytesPinned(Ljava/io/FileDescriptor;[BIII)I(Native Method)
      at jrockit/net/SocketNativeIO.socketRead(SocketNativeIO.java:32)[inlined]
      at java/net/SocketInputStream.socketRead0(Ljava/io/FileDescriptor;[BIII)I(SocketInputStream.java)[inlined]
      at java/net/SocketInputStream.read(SocketInputStream.java:129)[optimized]
      at HTTPClient/BufferedInputStream.fillBuff(BufferedInputStream.java:206)
      at HTTPClient/BufferedInputStream.read(BufferedInputStream.java:126)[optimized]
      at HTTPClient/StreamDemultiplexor.read(StreamDemultiplexor.java:356)[optimized]
      ^-- Holding lock: HTTPClient/StreamDemultiplexor@0x1758a7ae0[recursive]
      at HTTPClient/RespInputStream.read(RespInputStream.java:151)[optimized]
    ….
    ….
      at oracle/tip/mediator/dispatch/resequencer/ResequencerMessageDispatcher.processCases(ResequencerMessageDispatcher.java:27)
      at oracle/tip/mediator/dispatch/InitialMessageDispatcher.dispatch(InitialMessageDispatcher.java:151)[optimized]
      at oracle/tip/mediator/dispatch/resequencer/ResequencerMessageHandler.handleMessage(ResequencerMessageHandler.java:22)
      at oracle/tip/mediator/resequencer/ResequencerDBWorker.handleMessage(ResequencerDBWorker.java:178)[inlined]
      at oracle/tip/mediator/resequencer/ResequencerDBWorker.process(ResequencerDBWorker.java:343)[optimized]
      at oracle/tip/mediator/common/listener/AbstractWorker.run(AbstractWorker.java:81)
      at oracle/integration/platform/blocks/executor/WorkManagerExecutor$1.run(WorkManagerExecutor.java:120)
      at weblogic/work/j2ee/J2EEWorkManager$WorkWithListener.run(J2EEWorkManager.java:184)
      at weblogic/work/DaemonWorkThread.run(DaemonWorkThread.java:30)
      at jrockit/vm/RNI.c2java(JJJJJ)V(Native Method)
      -- end of trace

    If this thread remains at the same position across thread dumps spanning a few minutes, it indicates that the worker thread is blocked on the external web service application. If such external system issues block a significant number of the available worker threads, the overall throughput of the system suffers: fewer workers remain to process all the groups being locked by the Locker thread, across all composites that use resequencers. When the rate of incoming messages during such a period is high, this issue shows up as a huge backlog of messages with status GRP_STATUS=LOCKED and MSG_STATUS=READY in the resequencer health check query.

    Note that the JTA timeout will not abort these ‘busy’ threads. Such threads may eventually return after the JTA transaction has rolled back or, depending on how sockets are handled by the external system, may not return at all.

    For such integration flows, it is advisable to configure HTTP connect and read timeouts for web service calls in the composite’s reference properties. This ensures that worker threads are not held up by external issues, affecting the processing of other components that rely on worker threads.
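As a sketch, these timeouts can be set as properties on the reference binding in composite.xml. The oracle.webservices.httpConnTimeout and oracle.webservices.httpReadTimeout property names are standard Oracle web service binding properties; the reference name, port, and millisecond values below are illustrative assumptions:

```xml
<!-- composite.xml: reference to the external web service (names and values illustrative) -->
<reference name="TargetServiceRef">
  <binding.ws port="http://example.com/target#wsdl.endpoint(TargetService/TargetServicePort)">
    <!-- fail fast instead of holding a resequencer worker thread indefinitely -->
    <property name="oracle.webservices.httpConnTimeout" type="xs:string" many="false">30000</property>
    <property name="oracle.webservices.httpReadTimeout" type="xs:string" many="false">120000</property>
  </binding.ws>
</reference>
```

With these in place, a hung external endpoint surfaces as a timeout fault rather than a permanently blocked worker thread.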

    A Few More Loggers

    The below loggers can be enabled for trace logging to gather diagnostic information on specific parts of the Mediator/resequencer.

    - Logger oracle.soa.mediator.dispatch for initial message storage, group creation, lease renewal, and node failover

    - Loggers oracle.soa.mediator.resequencer and oracle.soa.mediator.common.listener for the Resequencer Locker, Resequencer Worker, and Load Balancer

    Conclusion

    We have explored how problems at various layers can manifest at the resequencer in an integration system, and how the cause of these issues can be diagnosed.

    We have seen

    – Useful pointers in diagnosing resequencer issues and where to look for relevant information

    – How a good SOA database maintenance strategy is important for resequencer health

    – How timeout considerations play a role in resequencer performance

     

    -Shreeni

    Submitting an ESS Job Request from BPEL in SOA 12c


    Introduction

    SOA Suite 12c added a new component: Oracle Enterprise Scheduler Service (ESS). ESS provides the ability to run different job types distributed across the nodes of an Oracle WebLogic Server cluster. Oracle Enterprise Scheduler runs these jobs securely, with high availability, scalability, and load balancing, and provides monitoring and management through Fusion Middleware Control. ESS was previously available as part of the Fusion Applications product offering; now it is available in SOA Suite 12c. In this blog, I will demonstrate how to use a new Oracle extension, “Schedule Job”, in JDeveloper 12c to submit an ESS job request from a BPEL process.

     

    Set up a scheduled job in Enterprise Scheduler Service

    1. Create a SOA composite with a simple synchronous BPEL process, HelloWorld.
    2. Deploy HelloWorld to WebLogic.
    3. Log on to Fusion Middleware Enterprise Manager.
    4. Go to Scheduling Services -> ESSAPP -> Job Metadata -> Job Definitions. This takes you to the Job Definitions page.


     

    5. Click the “Create” button; this takes you to the Create Job Definition page. Enter:

    Name: HelloWorldJob

    Display Name: Hello World Job

    Description: Hello World Job

    Job Type: SyncWebserviceJobType

    Then click “Select Web Service…”. It pops up a window for the web service.


    6. On the “Select Web Service” page, select Web Service Type, Port Type, Operation, and Payload. Click “Ok” to finish creating job definition.


    Secure the Oracle Enterprise Scheduler Web Service

    The ESS job cannot be run as an anonymous user, so you need to attach an OWSM security policy to the ESS web service:

    1. In Fusion Middleware Enterprise Manager, go to Scheduling Services -> ESSAPP, right click, select “Web Services”.


    2. In Web Service Details, click on the link “ScheduleServiceImplPort”.


    3. Open tab “WSM Policies” and click on “Attach/Detach”.


    4. In “Available Policies”, select “oracle/wss_username_token_service_policy”, click the “Attach” button to attach the policy, and then click “Ok” to finish the policy attachment.


    5. You should see the policy attached and enabled.


    Create a SOA Composite to Submit a HelloWorldJob

    1. Create a new SOA Application/Project with an asynchronous BPEL (2.0) process, InvokeEssJobDemo, in JDeveloper 12c.

    2. Create a SOA_MDS connection.


    3. Enter the SOA MDS database connection details and test the connection successfully.


    4. Add a Schedule Job from Oracle Extensions to InvokeEssJobDemo BPEL process.


    5. Double click the newly added Schedule Job activity. This brings up the Edit Schedule Job window.

    6. Enter Name “ScheduleJobHelloWorld”, then click “Select Job” button.


    7. This brings up the Enterprise Scheduler Browser. Select the MDS Connection and navigate down the ESS Metadata to find and select “HelloWorldJob”.


    8. To keep it simple, we did not create a job schedule, so there is no job schedule to choose. If you have job schedules defined and would like to use them, you can choose a schedule from the MDS connection.

    9. Set Start Time as current date time, and click OK.


    10. You may see a pop-up message.


    11. Click “Yes” to continue. In the next several steps, we will fix this by replacing the WSDL URL with a concrete binding on the reference.

    12. In EM, go to Scheduling Services -> Web Services.


    13. Click on link “SchedulerServiceImplPort”


    14. Click on link “WSDL Document SchedulerServiceImplPort”.


    15. A new browser window opens displaying the ESSWebService WSDL. The WSDL URL is in the browser address bar.


    16. Update EssService WSDL URL.


    17. You need to attach a WSM security policy to the EssService request.


    18. Add Security Policy: oracle/wss_username_token_client_policy.


    19. Setting up the credential store for the policy framework is beyond the scope of this blog. As a shortcut, we will set the default weblogic user and password as binding properties on the EssService reference binding to which the security policy is attached.
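A sketch of those binding properties in composite.xml is shown below. The oracle.webservices.auth.* property names are standard Oracle web service binding properties; the port value and credentials are placeholders, and hard-coding a password like this is only acceptable as a demo shortcut:

```xml
<!-- composite.xml: EssService reference binding with basic credentials (placeholders) -->
<reference name="EssService">
  <binding.ws port="http://xmlns.oracle.com/scheduler#wsdl.endpoint(ESSWebService/ESSWebServicePort)">
    <property name="oracle.webservices.auth.username" type="xs:string" many="false">weblogic</property>
    <property name="oracle.webservices.auth.password" type="xs:string" many="false">welcome1</property>
  </binding.ws>
</reference>
```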


     

    20. Build and deploy InvokeEssJobDemo.

    21. Test InvokeEssJobDemo web service.


    22. It should show that the web service invocation was successful.


    23. Launch the flow trace. We can see that Job 601 was submitted successfully.


    24. Go to ESSAPP -> Job Requests -> Search Job Requests. Find Job 601. The job was executed successfully.


     

    Summary

    In this blog, we demonstrated how to set up a SOA web service ESS job and how to invoke the ESS web service to submit a job request from a BPEL process in SOA Suite 12c.

     
