Asynchronous processing

SCA Import and Export protocol bindings

When we discussed SCA Import and SCA Export components, we saw that protocol bindings are associated with them. These protocol bindings describe how inbound communication (for exports) and outbound communication (for imports) is achieved. There are a variety of options for protocol bindings, and the following sections drill down into each one. The choices include:

SCA Bindings

SCA bindings allow a module to expose itself using SCA itself. This allows another SCA module to call it with as optimal a communication path as possible.

Web Services Bindings


The SCA bindings for SOAP/HTTP leverage the underlying WAS Web services capabilities.


SOAP over JMS is the idea that a service caller and service provider will interact with each other by the service caller building a SOAP request message and depositing that message on a JMS queue. The service provider will be monitoring that queue and, when a message arrives on it, the provider will retrieve the message and process it, sending back a response as needed. The use of JMS as opposed to the much more common HTTP protocol is the consumer's choice.

To illustrate how SOAP/JMS works, we will start with a simple story. Consider the following simple SCA module that shows an unbound SCA Export connected to a component. The component simply logs the fact that it has been called. The interface is a simple one-way operation with a simple Business Object as input.

When we select Generate Binding for Export1 and then select Web Services Binding, as shown next:

We have the opportunity to build a SOAP/JMS inbound listener.

The next page of the wizard looks as follows:

As we can see, this is a pretty sparse wizard. When the binding has been built, the result looks like a standard Web Services binding:

However, if we look at the binding details we find that the transport type is defined as SOAP/JMS.

With SOAP/HTTP transport of Web Services, the Address corresponds to a standard URL, however for SOAP/JMS, the format is different. Examining the format, we find it to be:

jms:/queue?destination=<jndi queue>&connectionFactory=<jndi connectionFactory>&targetService=<service port>

We see that there are two properties that pertain directly to JMS. One is the JNDI name of a connection factory for forming connections to the JMS provider and the other is a JNDI name for a JMS queue resource that will be monitored for incoming SOAP messages.
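Assembling these pieces, a complete SOAP/JMS address might look like the following sketch. The JNDI names and the target service port shown here are hypothetical placeholders, not values produced by the tooling:

```
jms:/queue?destination=jms/MyExportQueue&connectionFactory=jms/MyExportCF&targetService=Export1_SOAPJMSPort
```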

Experimentation has thrown up some useful nuggets of information:

By using an Import component and writing a test message to a JMS queue, we can see what an example JMS message contains. We find that the JMS message type is a BytesMessage and a typical content is as shown next:

<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenc="" xmlns:soapenv="" xmlns:xsd="" xmlns:xsi="">
  <soapenv:Header/>
  <soapenv:Body>
    <i1:operation1 xmlns:i1="http://SOAPJMS/I1">
      <input1>
        <f1>1</f1>
        <f2>2</f2>
        <f3>3</f3>
      </input1>
    </i1:operation1>
  </soapenv:Body>
</soapenv:Envelope>

Looking at the JMS headers on the message, we find a header called targetService. It appears that this is used to correlate the message with the service associated with that message.

For an Export component, it appears that the JNDI connectionFactory and JNDI queue name resources are created automatically as part of the application deployment. This is rather unusual as other bindings for SCA components seem to offer us the choice of binding to administratively defined resources as opposed to the deployment creating them for us.

For example, when we deploy a module with an SCA Export with SOAP/JMS bindings, the following resources are found to have been automatically created:

Notice that we are going against the SCA System Bus. It is not clear if we can change these resource definitions and have them made permanent.


JMS Bindings

SCA modules support JMS for both inbound and outbound requests. A JMS Import looks as follows on the SCA assembly diagram:

If the interface is defined as one-way, the message will be put to the queue and that is the end of the story. However, if the interface is defined as two-way, the message will be put to the queue and the caller will then wait for a response on the reply queue.

When configuring a JMS Import, we provide settings for the JMS Connection factory, the JMS Send Destination and for two-way requests, the Receive Destination.

When using a JMS Import called by a mediation flow where the mediation flow expects a reply, make sure that the Asynchronous invocation quality of service is set to "Call" as opposed to the default of "Commit". With "Commit", the message is not actually sent until the mediation flow's transaction commits; with "Call", the message is sent at the point of invocation.


HTTP Bindings

SCA modules support HTTP imports and exports. This means that they can listen for incoming REST requests or send outgoing REST requests. When placed on an assembly diagram, they look as follows:

This component is affected by the SMO HTTP Schema. On an incoming HTTP request, the URL property contains the URL that the client supplied to get here. For an outgoing HTTP request, the DynamicOverrideURL property can be populated to supply a new target URL. However, experience suggests that this is deprecated in favor of /headers/SMOHeader/Target/address.

When sending or receiving REST requests, it can be useful to examine the data being transmitted. A tool that I recommend for this purpose is Fiddler. Not only can this be used to examine browser-originated traffic, but it also has the ability to decode SSL traffic, which means that if we are communicating with a remote HTTPS system, we can also examine that data.

For an HTTP import, we can define proxies in the HTTP Proxy tab:

This means that all HTTP traffic for the component will be routed through Fiddler and Fiddler can then do its job to decode and report upon the traffic.

When making REST requests to SSL protected services, we must specify additional SSL information.


WebSphere MQ Bindings

SCA supports WebSphere MQ as a source and target of messages. This means that we can have an SCA Export bound to an MQ queue such that when a message arrives on that queue, an SCA module will be triggered. Conversely, an SCA Import can be bound to an MQ queue such that when the Import is reached, a new message is delivered to a queue.

The message sent via an Import has to be serialized into a physical data stream in order to be placed on the queue. For an Export, the incoming message has to be parsed from its physical format and a Business Object built.

When working with an Import component, the MQ bindings can be selected from the list of available bindings:

The selection of a binding for an Import brings up a wizard page shown next:

In order to connect to an MQ queue manager we must supply some connection details. SCA can connect to a queue manager using either client or bindings mode (these are MQ terms). In client mode, a network connection is made to a remote queue manager. In bindings mode, the queue manager must be co-located on the same machine as IBPM.

MQ message correlation

When sending a request MQ message through an SCA Import component where a response is expected, it is common to use an MQ Correlation ID. This allows the request and response messages to be correlated with each other. In the Properties > Binding > Message Configuration section of an MQ bound SCA Import, there are settings that describe how the request message and response message should be related.

There are three choices:

Correlation ID copy from Request Message ID – The matching response message will have a Correlation ID value equal to the Message ID in the request message.

Response Message ID copy from Request Message ID – The matching response message will have a Message ID value equal to the Message ID in the request message.

Correlation ID copy from Request Correlation ID – The matching response message will have a Correlation ID value equal to the Correlation ID in the request message.

When a message is sent by SCA, a new Message ID is generated. For some MQ applications, the Correlation ID in the request message is used for the matching response message. This can pose a problem as the default Correlation ID sent by an SCA Import is a value of all zeros … which is useless for a correlation value. Although MQ has an option at the MQ layer to generate a unique Correlation ID, that ability is not surfaced in the WPS product. To achieve what we want, we need a different technique.

First, we need to ensure that the SMO has an MQHeader in its structure. We do this by using the MQHeaderSetter primitive, setting a dummy value in it.

Next comes a CustomMediation which allows us to code some logic in Java. Here is an example for setting the Correlation ID to a unique value:

MQMD myMQMD;
UUID uuid = UUID.randomUUID();
myMQMD = smo.getHeaders().getMQHeader().getMd();
ByteArrayOutputStream bos = new ByteArrayOutputStream();
DataOutputStream dos = new DataOutputStream(bos);
try {
    dos.writeLong(uuid.getLeastSignificantBits());
    dos.writeLong(uuid.getMostSignificantBits());
    dos.writeLong((long) 0);
    dos.flush();
} catch (Exception e) {
    e.printStackTrace();
}
byte[] data = bos.toByteArray();
myMQMD.setCorrelId(data);

Note that the MQMD class used here is supplied by the product runtime libraries.

What this code does is get the MQMD from the SMO and then turn an instance of a Java UUID into an array of bytes. These bytes are then set as the Correlation ID in the MQMD in the SMO MQHeader. Note: It is very important that the data array for the Correlation ID be 24 bytes and not less.
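To see the byte layout in isolation, the same technique can be lifted into a standalone class. The class name below is mine and the MQMD interaction is omitted; the sketch only demonstrates producing the 24-byte array:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.UUID;

public class CorrelationIdBuilder {

    // Build a 24-byte Correlation ID from a UUID. MQ Correlation IDs are
    // 24 bytes long; a UUID supplies 16 bytes so the remaining 8 bytes
    // are zero padding.
    public static byte[] build(UUID uuid) {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream dos = new DataOutputStream(bos);
        try {
            dos.writeLong(uuid.getLeastSignificantBits()); // 8 bytes
            dos.writeLong(uuid.getMostSignificantBits());  // 8 bytes
            dos.writeLong(0L);                             // 8 bytes of padding
            dos.flush();
        } catch (IOException e) {
            // cannot happen for an in-memory stream
            throw new IllegalStateException(e);
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) {
        byte[] correlId = build(UUID.randomUUID());
        System.out.println("Correlation ID length: " + correlId.length); // 24
    }
}
```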

Testing MQ bindings

The IBM Support Pac known as IH03 provides a wealth of MQ testing tools that can be used to great effect for WPS testing. Included in that package is the RFHUTIL GUI tool for examining messages.

For testing request/response handling, the utility known as mqreply may be used. Its general syntax is not clear from the documentation but the following has been shown to work well:

mqreply -f parms.txt -r data.txt -m <queue manager name> -q <request queue name>

The parms.txt file may be empty.

The data.txt file contains the data to be sent back in the reply. The request message's replyToQ will be used.

MQ messages and XML

When a message is put to a queue, the message is placed there as a sequence of bytes. It is not uncommon to place a message on the queue which is a string representation of an XML document. Conversely, we also want the ability to retrieve messages from a queue which are also XML based. It is here that the XML Data Handler comes into play.


EJB SCA Bindings

Within an SCA module we can couple together the relationship between a wide variety of components. This can include stateless session EJBs.

Within the ID tooling, we can open an Assembly Diagram and then drag/drop an EJB onto the canvas, which will create an Import that is bound to the EJB.

The icon for the import also shows that it is an EJB binding:

The source of the drag for the drag/drop is the EJB icon in the Project Explorer view in the J2EE perspective. This is usually found under:

EJB Projects > EJB Project Name > Deployment Descriptor > Session Beans > Bean Name

Once imported, an interface that can be used within SCA is available. Unfortunately, the generated interface is a pure Java interface and cannot be directly used in the majority of cases because WSDL is the norm.

Auto-generated WSDL to Java Bridge Component

When dropping the EJB onto the canvas you'll be asked if you want to have a facade map component (Bridge) generated. This will generate a WSDL version of the interface along with a Java component that maps the WSDL operations to Java methods.

Currently, if you have any overloaded methods on your EJB, the generator will not work and you will have to perform the steps manually, as in version 6.0.1.

In 6.0.1, you must manually construct a WSDL version of the interface and create a Java Component that has the WSDL interface and the Java reference.

One way to achieve this is to manually create an Interface with the same operations as the EJB. The resulting interface will be WSDL typed. Next create a Java Component on the assembly diagram and give the component the newly created WSDL typed interface and also give it a reference of the interface for the EJB. Generate the code for the Java Component and, for each operation contained within the Java code, invoke the EJB operation.

The generated code assists with this technique. If the reference on the Java Component is called XYZReference then a method is created in the Java Code called:

public XYZInterface locateService_XYZReference();

where the return type is a Java Interface that corresponds to the operations available on the EJB itself.
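A minimal sketch of this bridging pattern, with hypothetical stand-ins for the generated interface and component (the real locateService_XYZReference() body is generated by WID and resolves the SCA reference), might look like:

```java
// Hypothetical stand-in for the Java interface generated from the EJB.
interface XYZInterface {
    String operation1(String input);
}

// Sketch of the bridge component: each WSDL-typed operation simply
// locates the EJB reference and delegates to the matching EJB method.
public class BridgeComponent {

    // In a real generated component this method resolves the SCA reference
    // named XYZReference; here it returns a dummy implementation.
    public XYZInterface locateService_XYZReference() {
        return input -> "handled:" + input;
    }

    // WSDL-typed operation delegating to the EJB.
    public String operation1(String input) {
        return locateService_XYZReference().operation1(input);
    }

    public static void main(String[] args) {
        System.out.println(new BridgeComponent().operation1("test")); // handled:test
    }
}
```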


DataHandlers

For many of the SCA Imports and Exports, incoming data has to be parsed to construct Business Objects while outbound data has to be serialized to a stream of bytes. The Import and Export SCA components, during their operation, invoke the services of a logical function called a DataHandler. A DataHandler is responsible for either parsing an input stream to construct a Business Object or serializing a Business Object into a data stream. Because there can be many different physical data structures (XML, comma-delimited, fixed-width, etc.), one DataHandler is not sufficient for all uses. To resolve this problem, IBM has created a Java interface that describes the purpose of a DataHandler. During development, a developer defines which DataHandler to use for the task at hand. A DataHandler is no more and no less than a Java class implemented to conform to an IBM described Java interface that provides the DataHandler contract.

IBM supplies a set of pre-built DataHandlers that apply to many common formats.

|Name|Class|
|Atom feed Data Handler|unknown|
|Delimited Data Handler||
|FixedWidth Data Handler||
|JSON Data Handler||
|WTX Invoker Data Handler||
|WTX MapSelection Data Handler||
|XML Data Handler||

XML Data Handler

Unlike other DataHandlers, the XML DataHandler holds a special place in SCA and needs a little more explanation than some of the others. Let us start with the basics of the XML DataHandler.

Like other DataHandlers, the XML DataHandler is responsible for taking raw data and creating a Business Object and for taking a Business Object and creating raw data. In the case of the XML DataHandler, the raw data's encoding is the industry standard known as XML. XML continually seems to hold a special place in the hearts and minds of customers and programmers. Many people make a big thing of data being in XML format and think that is the end of the task. In reality, an interesting thing is found. When an XML document is received by an application, almost invariably the first thing that happens is the parsing of that XML document into a machine internal representation ... whether that is a DOM tree, Java objects or something else ... the XML physical encoding is removed. Similarly, when generating an XML document, the generator usually holds the data that will eventually become an XML document in some internal format. With this in mind, what we find is that XML is actually used as an encoding between transmission at one end and reception at the other. If the sender and receiver are the same, then XML becomes a common storage encoding for data.

As mentioned, the SCA XML DataHandler is responsible for taking in an XML document and building a Business Object and for taking a Business Object and building an XML document. But here comes an interesting concept. Business Objects, as we know, are created from templates. An instance of a Business Object can't simply contain any fields that it chooses, instead those fields have to conform to the rules laid down by the designer of that Business Object. The Business Object designer uses an ID supplied tool called the Business Object editor to create and edit the Business Object template definitions. We also find that Business Object templates are created for us when we perform tasks such as importing WSDL definitions or running adapter creation wizards. We are comfortable with this model.

If we ask ourselves, what is the nature of a Business Object template, we will quickly find that it is a named container that is able to describe a set of fields.

We also find that the container has a namespace associated with it that allows it to be uniquely identified even if there are other Business Object containers having the same name. If we now ask "Is there an existing encoding language that can be used to describe a container with fields and namespaces?" ... a language that can be used to describe Business Objects ... we find that such a language already exists and there is no need for IBM to create a new one. The name of that language is XML Schema Definition (XSD).

Although originally designed for describing the validity of well formed XML documents, XSD can serve the double duty of being used as a description language for what is to be logically considered a correct instance of a Business Object. With reference to the image of the Customer Business Object above, the following is an example of XSD that is an equivalent encoding of the Business Object:

<?xml version="1.0" encoding="UTF-8"?>
<xsd:schema xmlns:xsd="" targetNamespace="http://Kolban_Module">
  <xsd:complexType name="Customer">
    <xsd:sequence>
      <xsd:element minOccurs="0" name="customerID" type="xsd:string" />
      <xsd:element minOccurs="0" name="name" type="xsd:string" />
      <xsd:element minOccurs="0" name="age" type="xsd:int" />
      <xsd:element minOccurs="0" name="balance" type="xsd:double" />
    </xsd:sequence>
  </xsd:complexType>
</xsd:schema>

Very quickly we can map the constructs in XSD to the equivalent ideas in the Business Object. Thankfully, the Business Object editor and the rest of WID and WPS usage allow us to hide the "gorpy" details of the XSD encoding behind the nice, high-level representation of the Business Object, allowing us to consider the Business Object at a much higher and more abstract level.

Under the covers, ID and SCA actually use XSD as the encoding/description of Business Objects. In theory, some other encoding could have been used but IBM development leveraged the existing XSD description language to good effect. The fact that XSD is mechanically used to describe the encoding of Business Objects is mostly hidden from the end users through the Business Object editor, BPEL editor, BO mapper and other techniques ... but the fact that XSD still remains under the covers is a truism.

Now let us turn our attention back to the original discussion, which is that of the XML DataHandler. Our goal is to turn an XML document into a Business Object and turn a Business Object into an XML document. Since a Business Object's description is encoded via XSD, can this then be used to describe an XML document? The answer is yes and this is in fact exactly how the XML DataHandler works. The core principle here is that EVERY valid XML document can be described via an XSD. So if we want to produce or consume a valid XML document, then there has to be an XSD that can be used to describe it. Take care here to realize that this is qualified by a valid XML document. If the data to be produced or consumed is not a real XML document but is instead something that may at first glance "look" like XML, it may not be describable by an XSD.

The XML DataHandler can take an instance of a Business Object and using its XSD template of that Business Object can produce an XML encoding of the content of an instance of that Business Object.

Conversely, the XML DataHandler can take an XML document, and using the XSD template for the desired Business Object, create an instance of that Business Object from the XML document.

Default XML Data Handler serialization

Let us now look at default XML Data Handler serialization. We will do this with an example. Consider the following interface. This interface shows a one-way operation that takes a Business Object as a parameter.

With the following Business Object definition.

If we use the out of the box XML Data Handler, we see the following generated:

<?xml version="1.0" encoding="UTF-8"?>
<way:operation1 xmlns:way="http://MQTests/OneWay">
  <input1>
    <a>aVal</a>
    <b>bVal</b>
    <c>cVal</c>
  </input1>
</way:operation1>

It is obviously XML but where did the encoding come from? The answer is that WPS encoded the request in a form that WPS will be able to decode using the same DataHandler. Notice that the encoding is of the form:

<Operation Name>
  <Parameter Name>
    <Parameter values …>
    <Parameter values …>
  </Parameter Name>
  <Parameter Name>
    <Parameter values …>
    <Parameter values …>
  </Parameter Name>
</Operation Name>

Again, this encoding is great assuming that the receiver also wants to apply this encoding. But what if the receiver already exists and expects different data?

The properties for the XML Data Handler allow us to provide an alternate Document root name. In this example, we set the value to XYZ. Re-running the previous test, we now find that the output is as follows:

<?xml version="1.0" encoding="UTF-8"?>
<XYZ xsi:type="bo1:BO1" xmlns:xsi="" xmlns:bo1="http://BO1">
  <a>aVal</a>
  <b>bVal</b>
  <c>cVal</c>
</XYZ>

As we can see, this is quite radically different. Now we have a general format of:

<Document Root>
  <Parameter Values ...>
  <Parameter Values ...>
</Document Root>

One special characteristic of note here is the introduction of the xsi:type attribute on the document root element. This attribute is key to SCA operation. It allows SCA to know which type of Business Object to create when presented with such an instance of an XML document.

Unfortunately, things change again if there is more than one parameter to the request. Examining the following interface, we see that it now has two formal parameters.

The generated XML looks as follows:

<?xml version="1.0" encoding="UTF-8"?>
<MyRoot xsi:type="way2:operation1_._type" xmlns:xsi="" xmlns:way2="http://MQTests/OneWay2">
  <input1>
    <a>a</a>
    <b>b</b>
    <c>c</c>
  </input1>
  <input2>
    <x>x</x>
    <y>y</y>
    <z>z</z>
  </input2>
</MyRoot>

As we can see, the parameter names have now been introduced into the output as children of the document root.

XML Documents that are different from Business Objects

One potential problem with the scheme outlined above is the notion that the XML document provided may not match the Business Object we want to use. Looking again at the Business Object template that we may want to use in our solution:

If we have an XML document that is coming into us looking like:

<AccountHolder id='12345'>
  <DateOfBirth>1964-07-11</DateOfBirth>
  <FirstName>Neil</FirstName>
  <LastName>Kolban</LastName>
  <Balance>1234.56</Balance>
  <LastTransaction>2008-07-21</LastTransaction>
</AccountHolder>

We can quickly see that this doesn't match the Business Object. What we have to do in this case is to find or create an XSD that describes the XML document. When we add this XSD to WID, it "appears" as a new Business Object (because to WID, a BO is described under the covers by an XSD). Now we have two Business Object definitions: one which matches the XML document and one which is the template for the Business Object we want to use in our solution. What we can now do is employ a Mediation Flow to map from our business Business Object to our data Business Object when generating XML, and from our data Business Object to our business Business Object when processing incoming XML.

Fixed Width DataHandler

IBPM provides a processor for "Fixed Width" data. It is supplied in two forms. One is a DataBinding for MQ or HTTP and the other is a DataHandler implementation.

The first step required is the creation of a DataBinding Resource Configuration.

When the Fixed Width handler is specified, a configuration panel is displayed to input the properties for the parsing and construction.

This panel is also later editable when looking at the Binding Resources for the defined Data Binding.

Like other DataBinding/DataHandler functions, the purpose of the FixedWidth processor is twofold. One part is to take raw physical data as input and populate a Business Object, while the second part is to take a Business Object and serialize it to a physical representation. For the Fixed Width handler, the format of the data is expected to conform to the notion of a series of fields where each field is of some pre-defined (or fixed) size (or width).

Consider a logical purchase order whose Business Object might look as follows:

There are many ways that this data could be physically represented outside of the context of WPS or WESB but in this article, we care about Fixed Width representation. Fixed Width declares that each field in the data is contained within a fixed width of data.

For example, a Purchase Order that has the following:

|Field|Value|Size|
|name|John Doe|15|
|amount|3|3|
|cost|1234.99|8|
|item|Flat Screen TV|15|

May have a physical representation of:



John Doe~~~~~~~~~3~1234.99Flat Screen TV~
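A sketch of the padding logic, assuming a '~' pad character and the widths from the table above (the class is illustrative and not the product's Fixed Width handler; the product also supports options such as alignment that are not modeled here):

```java
public class FixedWidthSketch {

    // Pad (or reject) a single field value to its fixed width.
    static String pad(String value, int width, char padChar) {
        if (value.length() > width) {
            // Mirrors the CWLAP0300E behavior: the token may not be truncated.
            throw new IllegalArgumentException(
                "token of length " + value.length() + " exceeds field width " + width);
        }
        StringBuilder sb = new StringBuilder(value);
        while (sb.length() < width) {
            sb.append(padChar);
        }
        return sb.toString();
    }

    // Serialize a set of values against a parallel array of field widths.
    static String serialize(String[] values, int[] widths, char padChar) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < values.length; i++) {
            sb.append(pad(values[i], widths[i], padChar));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String[] values = {"John Doe", "3", "1234.99", "Flat Screen TV"};
        int[] widths = {15, 3, 8, 15};
        String record = serialize(values, widths, '~');
        System.out.println(record);
        System.out.println("record length: " + record.length()); // 15+3+8+15 = 41
    }
}
```

The exact output may differ from the fragment shown above depending on how the pad character and alignment are configured in the real handler.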

When we look at this format, we can see that there are "rules" that govern it. These rules have to be configured against the Fixed Width data handler. If the data does not conform to the configured rules, errors such as the following may be seen:

commonj.connector.runtime.DataHandlerException: The input data has more elements than <x> which is the number of entries in the field width property

commonj.connector.runtime.DataHandlerException: CWLAP0300E: The 16 token is larger than the 15 field width; the token length cannot be truncated.




Custom DataHandlers

The DataHandler is a protocol neutral transformation scheme. It can be called by a DataBinding.

Let us consider the idea of character delimited data as an example. Imagine the physical data "A,B,C". This logically represents three data fields delimited by commas. Now let us think about how this data may arrive at IBPM. It could come in through a JMS queue, an MQ queue, an HTTP connection or be read from a flat file. These are only some of the physical deliveries; there could easily be others and even more to come. Each one of these physical deliveries has its own associated DataBinding concept. Each DataBinding implementation concerns itself with converting the physical data to the logical data. But ... and here is the key point ... each DataBinding type is specific to the physical transport. There is a DataBinding type for MQ, a DataBinding type for JMS, a DataBinding type for HTTP and so on. The DataBinding is responsible both for the physical format of the data and for creating/parsing Business Objects. A DataBinding for MQ has to concern itself with MQMD headers, a DataBinding for HTTP has to concern itself with HTTP headers. But if we pause for a second, we realize that there is commonality between all these DataBindings. Each one is still responsible for parsing physical data to construct a BO and constructing physical data from a BO, and this responsibility is independent of the protocol specific nature.

This is where the concept of the DataHandler comes into play. A DataHandler takes a stream of physical data and converts that to a Business Object and conversely can take a Business Object and convert that into a stream of data. Although this superficially sounds just like a DataBinding, the DataHandler does NOT see any of the protocol specific header or transport information. An instance of a DataHandler is protocol agnostic. What this means is that if we have a DataBinding for a specific protocol, that DataBinding can invoke a protocol neutral DataHandler to perform the core of the transformation work. And this is where the benefit lies ... the same DataHandler can be used by different DataBindings. So a DataHandler that knows how to work with delimited data could be used by an MQ DataBinding, a JMS DataBinding, an HTTP DataBinding and so on.

The DataHandler technology is part of the commonj.connector.runtime story and has a defined Interface called DataHandler.

The signature of the Interface looks as follows:

package commonj.connector.runtime;

public interface DataHandler extends commonj.connector.runtime.BindingContext {
    public Object transform(Object source, Class targetClass, Object options)
        throws DataHandlerException;
    public void transformInto(Object source, Object target, Object options)
        throws DataHandlerException;
}

Both of these methods are called to transform data from one format to another. They differ in that the first one is expected to create and return an object while the second is supplied an object that is to be populated. When a DataHandler is called to externalize data from IBPM, the source will usually be a DataObject. When IBPM is receiving data from an external source, then the target will usually be a DataObject.

The source parameter is the source of the data to be transformed. It is commonly one of the following:

The targetClass parameter describes the nature of the target data. It is an instance of this class that should be returned from the function call.
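To make the contract concrete, here is a self-contained sketch of a transform in the spirit of the DataHandler interface, specialized for comma-delimited data. The class name, the use of a List<String> in place of a Business Object, and treating the options parameter as the delimiter are all illustrative assumptions:

```java
import java.util.Arrays;
import java.util.List;

public class DelimitedDataHandlerSketch {

    // transform: parse a delimited String into a List (standing in for a
    // Business Object), or serialize a List back into a delimited String.
    @SuppressWarnings("unchecked")
    public static Object transform(Object source, Class<?> targetClass, Object options) {
        String delimiter = options == null ? "," : options.toString();
        if (targetClass == List.class) {
            // Inbound direction: physical data -> logical structure.
            String data = (String) source;
            return Arrays.asList(data.split(java.util.regex.Pattern.quote(delimiter)));
        }
        if (targetClass == String.class) {
            // Outbound direction: logical structure -> physical data.
            return String.join(delimiter, (List<String>) source);
        }
        throw new IllegalArgumentException("unsupported target class: " + targetClass);
    }

    public static void main(String[] args) {
        List<String> fields = (List<String>) transform("A,B,C", List.class, ",");
        System.out.println(fields);                               // [A, B, C]
        System.out.println(transform(fields, String.class, ",")); // A,B,C
    }
}
```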

The DataHandlers supplied by IBM can be found in a JAR shipped with the product. These are useful to look at with a decompiler to see how IBM implemented certain functions.

During development, it is a good idea to provide some debugging of the parameters passed into the transform and transformInto methods. For example:

System.out.println("transformInto called: source is " + source.getClass().toString()
    + ", target is " + target.getClass().toString()
    + " and options value is " + options);

Even if a custom DataHandler has no custom properties of its own, it appears that a properties class as described in the following section is required for it to be seen/used as part of the IBM Integration Designer tooling.

DataHandler/DataBinding properties

But what of the configuration information for the DataHandler? This comes in two parts. One part is the build time and the other is the runtime. At build time, the properties need a way to be entered into the tooling. The description of the available properties is achieved through the creation of a JavaBean that is a companion class to the Data Handler implementation.

A JavaBean is created by the DataHandler developer that is highly name sensitive. If the Java class that implements the DataHandler is called com.sample.MyDataHandler then the Java Bean must be called com.sample.MyDataHandlerProperties. Any Java Bean properties exposed become parameters to the configuration of the Data Handler.


public class MyDataHandlerProperties implements Serializable {
    private String xyz;

    public String getXyz() {
        return xyz;
    }

    public void setXyz(String xyz) {
        this.xyz = xyz;
    }
}

The supported property types are:

Of these types, all but BindingTypeBeanProperty are self explanatory. The BindingTypeBeanProperty needs some clarification. The purpose of this property is to be able to select a DataBinding, DataHandler or FunctionSelector. When this property is defined in the JavaBean properties for the DataHandler, DataBinding or FunctionSelector, WID provides a mechanism to allow user selection. Consider the following code fragment:

public class MyMQDataBindingProperties {
    private BindingTypeBeanProperty dataHandler;

    public MyMQDataBindingProperties() {
        dataHandler = new BindingTypeBeanProperty();
        dataHandler.setTags(new String[] {BindingTags.BINDING_KIND_DATAHANDLER});
    }

    public BindingTypeBeanProperty getDataHandler() {
        return dataHandler;
    }

    public void setDataHandler(BindingTypeBeanProperty dataHandler) {
        this.dataHandler = dataHandler;
    }
}

When a DataBinding of this type is created, a Property page will be displayed as follows:

From here, a DataHandler can be selected from a selection window. At runtime, the DataBinding implementation can query its dataHandler property to determine the DataHandler configuration selected by the user. The BindingTypeBeanProperty has a method on it called getValue that returns a QName. It is this QName that names the DataHandler to be used.

The next question is how the DataBinding, DataHandler or FunctionSelector actually obtains the Java Bean that contains the properties during runtime execution. The answer to this will be fully explained in the BindingContext section, but for now assume that there is a getter that can be used to retrieve the configured Java Bean at run time.

The presentation of the properties in the WID tooling is generated by introspection of the JavaBean for the properties. If the setting of the properties in the WID UI needs to be more advanced, this can be achieved by providing another helper class. This class is given the name <<BaseClass>>Configuration. For example, if the Java class that implements the DataHandler is called com.sample.MyDataHandler, then the configuration class must be called com.sample.MyDataHandlerConfiguration. This class must implement the BindingConfigurationEdit interface.


The DataHandler interface itself extends BindingContext which is an interface with the following signature:

public void setBindingContext(Map context)

A common implementation of this method saves the supplied map for later use. It is interesting to note that while the DataHandler interface extends BindingContext, neither DataBinding nor FunctionSelector do.
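As a sketch of that common implementation (the interface and class here are simplified stand-ins invented for illustration, not the actual product classes, which live in the WPS runtime and use a raw Map in the signature), a handler typically just stores the supplied map and reads entries back by key later:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for the product's BindingContext contract.
interface SimpleBindingContext {
    void setBindingContext(Map<String, Object> context);
}

class SketchDataHandler implements SimpleBindingContext {
    private Map<String, Object> bindingContext = new HashMap<>();

    // Common implementation: save the map supplied for later use.
    public void setBindingContext(Map<String, Object> context) {
        this.bindingContext = context;
    }

    // Later, individual context entries are read back by key.
    public Object getContextValue(String key) {
        return bindingContext.get(key);
    }
}
```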

The binding context map provides some runtime properties to the DataBinding, DataHandler or FunctionSelector. These properties can be retrieved from the map using architected keys into the map. The keys of interest to us are:

The BindingContext.BINDING_CONFIGURATION key returns the corresponding Java Bean that contains the properties.

For example:

MyMQDataBindingProperties myProperties;
myProperties = (MyMQDataBindingProperties) context.get(BindingContext.BINDING_CONFIGURATION);

could be used to retrieve the properties Java Bean.

The BindingContext.EXPECTED_TYPE key will be discussed in the context of creating Business Objects.

Creating a Business Object

When a DataHandler needs to create a Business Object, we seem to have a problem. A Business Object is characterized by the pair of name and namespace, together called a QName. In order to create a new Business Object, we need to be able to obtain the expected QName. This is where one of the context mapping values comes into play. The map value for the key BindingContext.EXPECTED_TYPE returns a QName instance of the expected Business Object type.

QName expectedType = (QName)context.get(BindingContext.EXPECTED_TYPE);
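Since creating a Business Object takes a namespace and a type name (see the BOFactory discussion later in this chapter), the expected-type QName is typically split apart before use. A minimal sketch follows; the helper class and method names are mine, and only the standard javax.xml.namespace.QName accessors are assumed:

```java
import javax.xml.namespace.QName;

class ExpectedTypeHelper {
    // Decompose the expected-type QName into the (namespace, name) pair
    // that a factory such as BOFactory.create(namespace, name) expects.
    // In product code this would be followed by something like:
    //   DataObject bo = boFactory.create(args[0], args[1]);
    static String[] toFactoryArgs(QName expectedType) {
        return new String[] { expectedType.getNamespaceURI(), expectedType.getLocalPart() };
    }
}
```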

The Binding Registry

When a DataHandler needs to be used, it has to be created. The architected mechanism is a lookup in something called the "Binding Registry". The lookup key for a DataHandler is a QName which matches the DataHandler defined in WID.

QName qName = new QName("http://lib", "MyDataHandler");
BindingRegistry bindingRegistry = BindingRegistryFactory.getInstance();
DataHandler dataHandler = (DataHandler) bindingRegistry.locateBinding(qName, bindingContext);

It is not clear what it means to pass a bindingContext to the locateBinding method; experience has shown that passing null is sufficient. This causes the setBindingContext() method on the DataHandler instance to be called with a BindingContext that contains the necessary information, including the DataHandler properties object. The Binding Registry can be used within a Java Component or Java Snippet to obtain a DataHandler outside the context of a DataBinding. This can be extremely useful, as DataHandlers can thus be invoked during other forms of processing.

Calling a DataHandler from a DataBinding

A DataBinding is a concrete mapping that is protocol/technology specific. When it is invoked, it has to transform data. This is where a DataHandler can come into play. As illustrated in the Binding Registry section, a DataHandler object can be retrieved from the registry by name. Once the DataHandler has been obtained, either its transform or transformInto methods can be called to achieve the actual transformation. Although the name of the DataHandler can be hard coded into the DataBinding implementation, this is probably not a good idea. Instead, the selection of the DataHandler to be used can be supplied on the properties bean of the DataBinding. If a DataBinding implementation class is called MyDataBinding then the properties bean will be called MyDataBindingProperties. Refer back to the DataBinding properties section. If a property of type BindingTypeBeanProperty is created on the properties bean then selection of a DataHandler can be made much easier through the use of a search dialog. The result from the BindingTypeBeanProperty will be a QName that can be supplied to the Binding Registry to retrieve the desired DataHandler.
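The delegation pattern can be sketched as follows. The interfaces and registry here are simplified stand-ins written for illustration (not the real com.ibm.* types, whose signatures differ); what they show is the shape of the pattern: the DataBinding resolves its configured DataHandler by QName and delegates the payload transformation to it.

```java
import java.util.HashMap;
import java.util.Map;
import javax.xml.namespace.QName;

// Stand-in for the DataHandler transform contract.
interface DataHandlerStub {
    Object transform(Object source);
}

// Stand-in registry; mirrors BindingRegistry.locateBinding(qName, ...) in spirit.
class RegistryStub {
    private final Map<QName, DataHandlerStub> handlers = new HashMap<>();

    void register(QName name, DataHandlerStub handler) {
        handlers.put(name, handler);
    }

    DataHandlerStub locateBinding(QName name) {
        return handlers.get(name);
    }
}

class DataBindingSketch {
    private final RegistryStub registry;
    private final QName handlerName; // normally read from the properties bean's BindingTypeBeanProperty

    DataBindingSketch(RegistryStub registry, QName handlerName) {
        this.registry = registry;
        this.handlerName = handlerName;
    }

    // The binding's transformation step delegates to the configured DataHandler.
    Object transformPayload(Object raw) {
        return registry.locateBinding(handlerName).transform(raw);
    }
}
```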

More details on DataHandlers, DataBinding and FunctionSelectors can be found in the EMD 1.1 specification.

Creating a UI Configuration class

When a DataHandler is configured in ID, its properties are normally displayed on a best-effort basis. This can be dramatically improved by creating a companion class for the DataHandler that provides instructions to the ID tooling on how to display and enter the configuration properties.

Let us start with a review of the goal: to display configuration information for the user of the DataHandler. The configuration for the DataHandler is held in a JavaBean of type <DataHandler>Properties. Each property exposed by the bean should be modifiable by the end user.

A Java class called PropertyDescriptor describes each property. So if the DataHandler is to show three properties, there will be three PropertyDescriptor instances. PropertyDescriptors are owned by another Java class called a PropertyGroup. It is an instance of a PropertyGroup that the <DataHandlerName>Configuration class returns when its createProperties() method is invoked.

The PropertyDescriptor looks as follows:

The PropertyGroup looks as follows:

A PropertyGroup inherits from a PropertyDescriptor.

getProperties() returns an array of PropertyDescriptor. This is the set of properties to be shown to the end user.

getProperty(String) returns a single PropertyDescriptor that is keyed by the name of the property.

The PropertyType looks as follows:

The name of this class must be <DataHandlerName>Configuration.

It must implement the Java interfaces EditableType and BindingConfigurationEdit. Implementing these interfaces requires the following methods to be provided:

PropertyGroup createProperties()

void synchronizeFromBeanToPropertyGroup(...)

void synchronizeFromPropertyGroupToBean(...)

EditableType getEditableType()

boolean isOptional()

void setType(...)

Let us take these apart. The first method of interest is createProperties(), which returns a PropertyGroup object. It is the responsibility of the DataHandler provider to implement this. The PropertyGroup interface itself has a number of methods that must be implemented. These are:

To explain these, getProperties() returns an array of PropertyDescriptor objects. Each property descriptor is a property to be shown to the user. Here is an example implementation of this method:

public class XSLTDataHandlerConfiguration implements EditableType, BindingConfigurationEdit {
    ...
    public PropertyGroup createProperties() {
        return new MyPropertyGroup();
    }
    ...
}

public class MyPropertyGroup implements PropertyGroup {
    ...
    public PropertyDescriptor[] getProperties() {
        PropertyDescriptor[] pd = new PropertyDescriptor[1];
        pd[0] = new FileProperty1();
        return pd;
    }
    ...
}

Each of the properties returned in the getProperties() array describes a visual property that is shown to the user. IBM supplies some helpers for these.

Property Types – TableProperty

This property adds some new methods that need to be implemented:

Property Types – FileProperty

The FileProperty PropertyType.getType() must be a

HTTP DataBinding

An HTTP DataBinding implements the HTTPStreamDataBinding interface. Creating an instance of this interface will result in a number of methods that need to be implemented, and also an implied life cycle. For inbound processing (data coming into IBPM) … the following methods are executed:



IBM provides a number of utility functions for programming with DataObjects.

Creating an SCA Business Object

A Business Object represents a structured piece of data. The Business Object can be created in an SCA application through a Java class called the BOFactory. This factory must be retrieved from a named SCA location:

BOFactory boFactory = (BOFactory)ServiceManager.INSTANCE.locateService("com/ibm/websphere/bo/BOFactory");

The BOFactory is part of the com.ibm.websphere.bo package and is fully documented along with the other related classes. At the simplest level, to create a new Business Object, we can use:

DataObject dataObj = boFactory.create("targetNameSpace", "complexTypeName");

As you will notice, the DataObject is created from the factory through the combination of target name-space and type name. The BOFactory appears to scan its current class path looking for XML or XSD documents that match these requirements and, from these values, then creates the associated DataObject. What this means is that if you need to create a DataObject in an arbitrary class, then the BO (.xsd file describing it) needs to be included as a dependency in the project in which the calling class is contained.

Here is an example. If you have a Library called "Lib" that contains a BO definition called "Customer" in name-space "http://mybiz", then if you want to create a DataObject instance of that type you would need to call:

boFactory.create("http://mybiz", "Customer");

If the code that creates the BO is in a Module, then "Lib" must be flagged on that Module as a dependency.

If the code that creates the BO is a JavaEE App, then the JAR file representing "Lib" must be added as a dependency to the JavaEE Deployment Descriptor.

Converting to/from XML

IBPM provides a utility class to convert a DataObject to/from a specific formatted XML document. The class implementing this function is called BOXMLSerializer. An instance of this class can be retrieved through

BOXMLSerializer mySerializer = (BOXMLSerializer) ServiceManager.INSTANCE.locateService("com/ibm/websphere/bo/BOXMLSerializer");

Some of the methods of this class expect a rootElementName. To determine the root element name of the current DataObject, use the following:

String rootElementName = dataObject.getType().getName();


Some of the methods of this class expect a namespace. To determine the namespace of the current DataObject, use the following:

String targetNamespace = dataObject.getType().getURI();


The following example illustrates turning a BO into an XML document:

BOXMLSerializer mySerializer = (BOXMLSerializer) ServiceManager.INSTANCE.locateService("com/ibm/websphere/bo/BOXMLSerializer");
String rootElementName = dataObject.getType().getName();
String targetNamespace = dataObject.getType().getURI();
ByteArrayOutputStream baos = new ByteArrayOutputStream();
mySerializer.writeDataObject(dataObject, targetNamespace, rootElementName, baos);
String xmlText = baos.toString();

To convert an XML string to a DataObject, the following code may be utilized:

String xmlText = "<XML to become a BO>";
…
BOXMLSerializer mySerializer = (BOXMLSerializer) ServiceManager.INSTANCE.locateService("com/ibm/websphere/bo/BOXMLSerializer");
ByteArrayInputStream bais = new ByteArrayInputStream(xmlText.getBytes());
BOXMLDocument document = mySerializer.readXMLDocument(bais);
DataObject dataObject = document.getDataObject();

XML documents with no namespace

It is common to find XML documents that have no namespace information associated with them. In order to be able to support this kind of document, the corresponding Business Object should also be set to have no namespace associated with it.

When IBPM is supplied an XML document to be turned into a Business Object, it examines that document and derives a fully qualified name of the format {Namespace}Name. This is then used to find the corresponding Business Object definition.

When an XML document with no namespace is presented, the result is {}Name. In order to find a matching Business Object definition, the BO must also appear as {}Name, and this is achieved by nulling out the namespace for the BO. This will result in an ID-based warning, but it may be safely ignored.
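The lookup key format can be sketched as a simple string build. The helper below is illustrative only (the product performs this matching internally); it shows how a null or empty namespace yields the {}Name form described above:

```java
class BOLookupKey {
    // Build the {Namespace}Name lookup key used to match an XML document
    // to a Business Object definition; a null or empty namespace
    // produces the {}Name form.
    static String qualifiedName(String namespace, String localName) {
        return "{" + (namespace == null ? "" : namespace) + "}" + localName;
    }
}
```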

Here is an example. Consider a Business Object called BO1 (no namespace) that has three fields. Converting this to its XML representation results in:

<?xml version="1.0" encoding="UTF-8"?>
<BO1 xsi:type="BO1" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <f1>1</f1>
  <f2>2</f2>
  <f3>3</f3>
</BO1>
