BPM is composed of a number of functional components. These components and how they interact with each other constitute the architecture of BPM.
A Process Application is the container for a solution. You can loosely think of it as a project. The Process Application is initially created through the Process Center console. It is given a name and a tag called an acronym. The acronym must be unique and can be no more than seven characters in length.
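As a rough illustration of the acronym constraint just described, a check might look like the following sketch. This is illustrative only; Process Center itself enforces uniqueness and the seven-character limit, and the function name here is invented.

```javascript
// Sketch of the acronym rules described above: the acronym must be
// unique among existing Process Applications and no more than seven
// characters long. (Illustrative only; Process Center enforces this.)
function isValidAcronym(acronym, existingAcronyms) {
  if (acronym.length === 0 || acronym.length > 7) return false;
  return !existingAcronyms.includes(acronym);
}

console.log(isValidAcronym("HRAPP", ["CLAIMS"]));  // true
console.log(isValidAcronym("TOOLONGACR", []));     // false: longer than 7 characters
console.log(isValidAcronym("CLAIMS", ["CLAIMS"])); // false: not unique
```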
Once the Process Application container has been created, artifacts can then be further created within it using the BPM Process Designer (PD) tooling. The Process Application and its artifact contents are stored within a repository hosted and managed by a component called the Process Center.
Process Applications can be created from the Process Center console either from its web page interface or from within BPM PD. The main Process Apps page has a button to create a new Process Application.
The creation of a new Process Application opens a dialog that prompts for a name for the new application as well as its acronym value.
The "state" of a Process Application at a given point in time can be recorded in a "snapshot". This state consists of all the artifacts and their content at the time the snapshot was taken. Changes made to the Process Application after a snapshot was taken are not reflected in that snapshot. Additional snapshots can be taken at any time.
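The snapshot semantics can be sketched as a point-in-time deep copy: later edits to the live application leave earlier snapshots untouched. The object shapes below are invented purely for illustration.

```javascript
// Sketch of snapshot semantics: a snapshot captures the Process
// Application's artifacts at a point in time, so later edits to the
// live application do not alter snapshots taken earlier.
const processApp = { name: "Claims", artifacts: { bpd: "Approve Claim v1" } };

// Take a snapshot (a deep copy of the current state).
const snapshot1 = JSON.parse(JSON.stringify(processApp));

// Continue editing the live Process Application afterwards.
processApp.artifacts.bpd = "Approve Claim v2";

console.log(snapshot1.artifacts.bpd);  // "Approve Claim v1" - unchanged
console.log(processApp.artifacts.bpd); // "Approve Claim v2"
```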
Process Applications commonly contain one or more Business Process Definitions (BPDs). These can be thought of as models of the process that will eventually be executable. In this book I switch between the phrases BPDs and BPMN processes as I see these as synonyms for each other.
A BPD in a Process Application reflects a template of a process as opposed to an instance of a running process. When a process is started, a new instance of the process is created from the template. An actual process instance can be thought of as having a current state and can only be in one state at a given point in time. The potential states are:
Active – The process instance is active (running).
Completed – The process instance has completed.
DidNotStart – The process instance did not start.
Failed – The process instance has failed.
Suspended – The process instance has been suspended and can be resumed.
Terminated – The process instance was explicitly terminated prior to completion.
In addition to the process's execution state, there is also the concept of the state of any variables that may have been created.
When a process instance is created, it is given a unique integer process id value. This value is unique amongst all the process instances within the environment. Some customers have used this value as a key in third-party databases and have had problems when migrating processes to a new version of the product, where the instance ids are reset. The ids remain unique within the environment, just not globally unique across time. It is simply a bad idea to use the process id as any kind of identifier that must remain unique for all time into the future.
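The state model above can be sketched as a small class: an instance carries exactly one state at a time, drawn from the fixed set just listed. The class and its method are invented for illustration and are not a product API.

```javascript
// Sketch of the process instance states listed above. An instance is
// in exactly one state at any given point in time.
const STATES = ["Active", "Completed", "DidNotStart",
                "Failed", "Suspended", "Terminated"];

class ProcessInstance {
  constructor(id) {
    this.id = id;          // unique within one environment, NOT across time
    this.state = "Active"; // a newly started instance is running
  }
  setState(next) {
    if (!STATES.includes(next)) throw new Error("Unknown state: " + next);
    this.state = next;     // only one current state is ever held
  }
}

const inst = new ProcessInstance(1001);
inst.setState("Suspended");
console.log(inst.state); // "Suspended"
inst.setState("Terminated");
console.log(inst.state); // "Terminated"
```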
Archiving Process Applications
Once created in PD, the definition of a Process Application is usually never deleted. This may seem odd at first but, if we think about it, the only resource consumed by a non-deployed Process Application is some disk space within the back-end Repository database, which is usually relatively small. In addition, Process Applications are extremely coarse-grained concepts and there should be no need to create very many of them. A Process Application can be hidden from view by flagging it as archived. Archiving a Process Application removes it from the Process Apps list.
Even though a Process Application has been archived, it still remains within the Repository. A filter on the Process Center console can be used to view archived applications and restore them to their visible state.
When a Process App has been archived, it is eligible for deletion. Deleting a Process App is not recoverable. Once deleted, it is gone forever. In versions of the product prior to 7.5.1, this capability was simply not present and Process Apps and Toolkits remained forever.
Process Application state management
A Process Application can be manipulated or worked with as follows:
Cloned – The Process Application is duplicated, with the new one being given a new name
Archived – Archiving an application simply means "hiding" it from normal view
Imported – Bring in a previously exported Process Application from a .twx file (Note: the file suffix TWX used to mean "TeamWorks Export" from when the product was historically called TeamWorks)
Exported – Export a Process Application to a .twx file. A .twx file is the file format used to store the content of a Process Application. The .twx file can be transferred to other systems running Process Center for subsequent import.
Changing Process Application settings
Some core settings of a Process Application can be changed within the Process Designer. From the primary pull-down menu, an entry for "Process App Settings" can be chosen:
After selecting this, an editor is shown in which some key settings can be changed.
These include the name of the Process Application, its Acronym and the textual description.
Similar to Process Applications, a Toolkit can also be thought of as a container for artifacts used in solutions. Unlike a Process Application, a Toolkit does not result in a deployable application. Instead, the contents of the Toolkit can be "included" or "used" by one or more Process Applications.
When Process Center is installed and configured, an IBPM-supplied Toolkit called "System Data" is automatically imported into the repository. See System Data Toolkit.
This toolkit is marked as read-only and is implicitly a dependency of all other Process Applications and Toolkits. It is the System Data toolkit that contains the core definitions for data structures and other items common across all Process Applications.
Toolkits have their own tab in the Process Center console. From there, new Toolkits can be created, exported and otherwise managed in a similar fashion to Process Applications.
Just like Process Applications, Toolkits can have Snapshots taken of them, allowing all the artifacts in a Toolkit to be treated as a specific version.
To add a Toolkit as a dependency to a Process Application, the Toolkit must first have a snapshot associated with it. This is because the dependency added to the Process Application is not just the Toolkit's name but is instead a specific snapshot of that Toolkit. Once a snapshot of the Toolkit has been taken, a dependency can be added in the Designer view of PD by clicking the icon to the side of the Toolkits entry. A list of potential Toolkits and their associated snapshots is shown for toolkit selection.
Smart filtering is available in this list but is interestingly keyed off snapshot names and not toolkit names.
The following diagram summarizes the story of process apps and their relationship to toolkits. A Process Application can have a dependency (shown by the arrow) on a Toolkit. Multiple Process Applications can have dependencies on the same Toolkit and Toolkits can have dependencies on each other.
The nesting of toolkits opens up some interesting semantic questions. For example, if Toolkit A has a dependency on Toolkit B then if Process Application C depends on Toolkit A does that also mean that the artifacts defined in Toolkit B are visible to Process Application C? The answer appears to be no. If Process Application C wanted to utilize artifacts in Toolkit B it would have to have a dependency on that toolkit to enable access to those artifacts directly.
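The non-transitive visibility rule just described can be sketched as follows. The container names mirror the example in the text; the lookup function is invented for illustration.

```javascript
// Sketch of the visibility rule described above: toolkit dependencies
// are NOT transitive. A Process App sees only the artifacts of the
// toolkits it declares directly.
const dependencies = {
  "Process App C": ["Toolkit A"],
  "Toolkit A": ["Toolkit B"],
  "Toolkit B": []
};

function visibleToolkits(container) {
  // Only direct dependencies are visible; no recursion into nested toolkits.
  return dependencies[container] || [];
}

console.log(visibleToolkits("Process App C")); // ["Toolkit A"] - B is not visible

// To use Toolkit B's artifacts, C must declare a direct dependency on it:
dependencies["Process App C"].push("Toolkit B");
console.log(visibleToolkits("Process App C")); // ["Toolkit A", "Toolkit B"]
```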
Tips on good toolkits
When building solutions using IBM BPM, each separate solution is contained within its own Process Application (Process App). The Process App is the literal unit of deployment to a Process Server for execution. This works great up until the second separate solution is embarked upon. At that moment, one will realize that there are artifacts that were built in the first solution that can be reused in the second solution. One can always copy them from the first to the second but that will result in literal duplication. If improvements are made to one copy, they would either have to be redone in the second or else the two copies would drift out of synchronization with each other. A better notion would be the ability to have a single definition of an artifact and have it “leveraged” by both solutions. This results in a set of artifacts that are common between the applications.
IBM BPM supports exactly this concept through the notion of a “toolkit”. A toolkit is a named collection of artifacts that differs from a process application in one crucial way which is that a toolkit is not a deployable entity. When a toolkit is created, other process apps (and indeed other toolkits) can declare that they have a dependency on the toolkit. Once done, the BPM designer of the solution can think of the artifacts contained within the dependent toolkit as being fully available within the solution itself. It is as though (but not actually) the artifacts had been copied into the solution at hand.
As with any human endeavour, factoring artifacts into a toolkit can be done well or poorly. We will now consider some suggestions on what is thought to work well and what isn’t.
When one creates a new artifact, it is tempting to put it directly into a toolkit. This is rarely the right thing to do. Instead, give careful thought to what you are saying when you do so. By putting an artifact into a toolkit you are saying “This is a very important item that others will want to reuse”. That isn’t always the case. If you are tempted to create a new toolkit artifact, pause and go talk to others. Ask them if they would reuse it in the form you are suggesting. If you yourself don’t have any plans to reuse it, the chances are high that neither will others.
Generic vs specific
Once you have decided that an artifact is a good candidate for a toolkit, next consider how generic versus how specific it should be. Generic artifacts, by definition, are more consumable by others. Specific artifacts do have their place, but consider creating two artifacts: one a generic instance and the other a more specific instance building on the former. They can even be placed in two distinct toolkits with appropriate dependencies between them.
Document the hell out of it
A peeve of mine is that talented and skilled individuals expend their brain power on activities and build wonderful creations but then neglect to document how to use them or how to maintain them. The result is a toolkit containing something that would appear to be what I need but with no instructions on how to use it. At best it is simply frustrating; at worst it causes the creation to be ignored after time is wasted trying to decode it. Artifacts placed in a toolkit must be very well documented. This includes (at a minimum) the reason for their existence, how to use them, any setup, any prerequisites, any cautions and who to contact for questions or problems. An artifact in a toolkit without documentation is worse than useless and a pox be on the head of the author.
Toolkit contents developed by one team may be leveraged by many others. This means that not only must the usage documentation be present, but the artifacts themselves may need to be examined and, in future, maintained by others beyond those who created them in the first place. Litter any code with comments and use the documentation areas provided by Process Designer.
Meaningful and consistent names
Artifacts in toolkits, like artifacts anywhere, have names associated with them. These artifacts should be named following your established naming conventions. It is poor show to start using artifacts such as business object definitions whose names start with lower-case characters (e.g. “address”) when your other business objects start with upper-case characters (e.g. “Customer”). Ensure that the names for terms are consistent throughout the environment. Creating services which expect a formal parameter called “customer” where other services expect a formal parameter called “client” just becomes confusing for no good reason.
Collaborate on toolkits
The IBM BPM product is designed for collaboration and toolkits are no exception. Don’t treat a toolkit as your own personal sandbox of artifacts that you will force others to leverage. A toolkit should not be considered a collection of “your” artifacts. Rather, a toolkit should be a collection of “common function” artifacts. What this means in practice is to be prepared to share the creation of a toolkit with your colleagues. Building BPM solutions is not a solo activity. When you feel that a new artifact would be beneficial to be contained within a toolkit, consider that to be an important decision. Talk it through with your team and ensure that all are in agreement.
Version toolkits like you would version solutions
Just because artifacts are contained within a toolkit doesn’t mean that you are free to release new snapshots any more frequently than you would release solutions. In fact, it is often the case that you should be much more conservative in releasing toolkit versions than solution versions. The creation of a new version of a toolkit that a solution depends on may require the re-release of the solution version even though nothing at all has changed in the solution itself.
Provide a web based catalog
Sadly, IBM BPM does not natively integrate with any repositories to provide catalogs or searching for existing artifacts. The concept of “loosely coupled” function and reusability existed immediately after the first two cavemen programmers wrote their first lines of code, but finding those artifacts remains one of the biggest challenges. After having built and documented the toolkit, publicize it within the catalog of other toolkits in your environment. You will have to choose for yourself how this is achieved. Suggestions include wikis and other web-based communal tools, but any indexable and searchable content management system may also be applied. At a minimum, create a Word document and send it around, but communal repositories are by far the best. What you want to avoid is the situation where someone comes to you and says that they wrote their own toolkit simply because they didn’t know yours existed or what it contained. Obviously, you won’t create a new toolkit in ignorance because you will already have performed all the necessary research to ensure that nothing similar already exists.
A task is a piece of work to be performed by a human being. Typically, this is achieved by the creation of an activity in the BPD which is then associated with a Human Service. When a process reaches a Task, that branch of the process pauses or suspends until the task has been completed. A task has a state associated with it. A task can only be in one potential state at any given point in time. The states associated with a task are:
Associated with a task is the concept of a priority which indicates how important this task is relative to other available tasks. The Task Priority could be used by user interfaces to determine a display or sort order for tasks and show the tasks with the highest importance first. The choices available for the task priority are:
- Normal (default)
Within a Human Service, the current task can be found from the variable:
This object contains a field called priority, which is the priority of this task. Changing this value results in the task's priority being changed.
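In a Human Service server script this amounts to a one-line assignment on the current-task object (in my experience reached via `tw.system.currentTask`, though verify the variable name in your product version). The sketch below mocks the `tw` namespace so it can run outside the product; only the `priority` field behaviour is the point.

```javascript
// Mocked sketch: inside a Human Service, assigning to the priority
// field of the current task object changes that task's priority.
// The "tw" object below is a stand-in for the product-provided namespace.
const tw = {
  system: {
    currentTask: { priority: "Normal" } // Normal is the default priority
  }
};

// Raising the priority of the current task:
tw.system.currentTask.priority = "High";
console.log(tw.system.currentTask.priority); // "High"
```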
At a high level, BPM comprises a number of coarse-grained components. Taken together, these are the BPM product. Each component serves a unique and distinct purpose and is employed at a different stage in the development or operation of a BPM solution. Breaking BPM down into these constituent components both aids in understanding the product and provides a practical differentiation between the phases and pieces of its operation.
Component – Process Server
Process Servers are the components/engines which run the business processes described by BPDs. During development, the processes run on an instance of a Process Server located at the Process Center. When it comes time to put the applications that have been built into test and production environments, they will be installed on other Process Server instances.
A Process Server is implemented by a WebSphere Application Server (WAS) with the IBPM product integrated within it. The IBPM run-time consists of IBM written Java product code engineered to conform to and utilize the Java EE framework.
For users of IBPM Advanced, the Process Server also contains a large set of additional functions including Service Component Architecture (SCA), BPEL processes and mediations (to name but a few).
Component – Process Center
One of the core concepts of BPM is that there is only ever one definition of the solution you are building. This may sound obvious but, comparing BPM to other products, we find that those other products have different "representations" and "copies" of the solution being built depending on what is being done. For example, some products store modeling data in one format/location, process development data in a different format/location and monitoring data in yet another format/location. The result of this is that a plethora of different data structures and sources exist with little interoperability between them. A change in a model, for example, may need to be manually reflected as a change in the implementation of the process. Because the different tools and products don't use the same underlying data, conversions from one product to another must happen, and mistakes and misinterpretations can easily occur. The result can be a complex mess.
BPM on the other hand utilizes a concept that is called the "shared model". In simple terms this means that no matter what is being done within the overall solution, there is only one common repository and a single representation of that solution. Because of this, it is impossible to get two phases of the same solution out of synch with each other.
Another way of saying this is that BPM PD does not maintain the artifacts of a solution on the user's workstation. Instead, they are retrieved from Process Center into BPM PD for editing and, when the edit has completed, the changes are saved back to Process Center. Contrast this with other products where, when a developer makes a change, the change is made locally on the user's workstation and no-one else sees it. In order for others to see it, copies of the artifacts are passed around, resulting in potential inconsistencies.
The Process Center repository is implemented as tables within a database (commonly DB2). The content of these tables are opaque and access to the repository information is achieved through the tools and web pages. One should never attempt to access these tables directly through database tools.
The Process Center is actually comprised of three components: the Process Center repository, which is responsible for managing the solution's artifacts, plus an instance of a Process Server and a Performance Data Warehouse, both used for unit testing.
The Process Center can be accessed either through BPM PD or through a web-based interface. Here is a screen shot of the web access:
and here is the same interface in the BPM PD desktop client tooling:
As can be seen, they are virtually identical from an appearance standpoint, providing a consistent view of the models.
Starting Process Center
The Process Center component runs within a WebSphere Application Server (WAS) server, so starting Process Center is actually the task of starting a WAS server which has been configured to host Process Center. During installation of BPM, a WAS server that runs Process Center is registered with the Windows Services system. An icon is added to the start menu to launch Process Center/WAS:
The Process Center server starts up quietly. A recommendation is to use a tail tool applied to the WAS console log to follow the start-up information until start-up has completed. The console log file can be found at:
This is the default WAS log. The location where the log file is written can be changed through the WAS admin console but it is recommended to leave it as the default unless there is a compelling reason to change.
Component – Process Designer
The IBM Business Process Manager Process Designer (PD) is the development-time tooling used to design, model and build processes. For the longest time it was implemented under the covers using the open source technology called Eclipse but had a very non-Eclipse-like skin applied over it. Unless you knew otherwise, you would never know that it was Eclipse based. This was both good and bad. It was good in that it built upon the proven robustness and maturity of Eclipse and leveraged a whole host of trusted functions under the covers. The downside was that because it didn't "feel" like Eclipse, it lost one of the major strengths of an Eclipse-hosted application, which is the consistency and familiarity of the framework. The decision to hide the Eclipse nature of PD was extremely deliberate. It was felt that the visual complexity of Eclipse was too overwhelming for business users. As such, even though it was just mentioned that under the covers PD is Eclipse based, you should now put that out of your mind as it will serve no further purpose.
With the adoption of browsers as the new desktop environment and the move to cloud-based computing, installation of thick applications on the desktop has fallen out of fashion. IBM embarked on a re-implementation of Process Designer to make it 100% browser hosted. What this in effect means is that a user who wishes to author a BPM process points their browser at Process Center, and the web pages shown provide the development environment. The look and feel of the browser-based editor was modeled after the Eclipse Process Designer. This has meant that there have been two Process Designer implementations: the classic Eclipse-based Process Designer and the newer web-based Process Designer. The latter is now simply called Web Process Designer. IBM's intent was to slowly and carefully migrate users from Eclipse Process Designer to Web Process Designer, rolling out functions over time in the browser-based tool. With the 8.6.0 release, Web Process Designer has become the default and IBM encourages all users to utilize it as opposed to Eclipse Process Designer.
At a high level BPM PD allows us to describe business processes using the BPMN notation. Processes are "drawn" visually on the screen canvas and the technical skills needed to achieve this task are as low as can possibly be made.
Play back sessions
One of the key strengths of BPM is the ability to incrementally build out and demonstrate the business processes being constructed. Rather than having to complete a phase (such as modeling) before seeing how it "feels" during further development, a concept called a "play back" can almost immediately be applied. A play back is the real-time execution of the process without having to explicitly compile and deploy a solution. Think of a play back as the ability to quickly build a "skeleton" of a business process and run it to see what it looks like. The real-time nature of change and play back allows us to enter a step, test it, realize that something is missing, change the process and re-test all without missing a beat or having to "flip" from one development tool or environment to another.
Launching Process Designer
From the Process Center list of applications and toolkits, we will see an option to "Open in Designer":
If we click that link, the process app will be opened in the process designer editor.
Working in the Designer view
The majority of the time spent in BPM PD will be spent in the Designer view. This is where the bulk of the description of the process is performed. The Designer view is shown in the following screen shot:
The Designer view contains a number of screen "areas" that change based upon what task is being performed. In summary, the view is built from four major areas.
In the top left we have a list of all the artifacts of the solution. In the top right, we have an editor for the current artifact being worked upon. In the bottom left we have a history of the changes and snapshots available to us. In the bottom right we have some tab sheets, the most important of which is the one called Properties.
The two left windows can be hidden or shown by using the icon in the bottom left of the window:
This is a toggle button: one click hides the windows and a second click shows them again. Use this button to gain more visual space if the resolution of the screen is low or you wish to see more of your project while editing.
Working with the Library
Within BPM, you may be working with a large number and a wide variety of artifacts. Managing these artifacts so that you can easily locate them can be a challenge. Classic user interface designs have provided folders in a tree-structured diagram. IBPM provides an alternative to this model. Although definitely non-standard, it manages to achieve a high degree of ease of use.
When working in the Designer mode, you can see the area called the "Library". The library is the list/catalog of all artifacts that you are working with or have visibility to.
The major (top level) entries are:
- The current Process Application
- Blueworks Live defined processes
- Smart Folders
The Library contains high level categories and within each category are the artifacts associated with those categories.
Here is a break-down of the items in the library:
Items defined in the library are local to just that Process Application. To move or copy items from one Process Application to another, the context menu of the item to be moved or copied can be used. It contains menu entries for both Move and Copy. The target is a different Process App or Toolkit.
Adding managed files
On occasion, files that originated outside of IBPM PD may need to be included in the solution. IBPM PD provides the ability to include such files for packaging within a Process Application. Examples of files you may wish to package include image files, HTML files and JAR files.
There are three “types” of files that can be defined as server managed:
Web File – Web-based artifacts such as CSS style sheets, images and other web-loaded assets
Server File – Server files such as Java JAR files and JavaScript files
Design File – Product-specific files such as XSLT style sheets used in Coach transformation
After a file has been added, its properties are shown:
tw.system.model.findManagedFileByPath("<file name>", TWManagedFile.Types.Web);
TWManagedFile has a number of properties including one called "url". This property contains the web URL that can be used to access the file over a network.
If the managed file is a ZIP, appending the name of the file in the ZIP to the URL can be used to retrieve the specific file.
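The lookup and URL usage can be sketched as follows. Here `tw` and `TWManagedFile` are mocks standing in for the product-provided objects, and the URL shape returned is invented purely for illustration; only the call pattern and the `url` property mirror the text above.

```javascript
// Mocked sketch of resolving a managed file's URL. In the product,
// tw.system.model.findManagedFileByPath returns a TWManagedFile whose
// "url" property holds the web URL; everything below is a stand-in.
const TWManagedFile = { Types: { Web: "Web" } };
const tw = {
  system: {
    model: {
      findManagedFileByPath: function (name, type) {
        // Pretend repository lookup; the URL shape is invented.
        return { name: name, type: type, url: "/webasset/" + name };
      }
    }
  }
};

const file = tw.system.model.findManagedFileByPath("logo.png", TWManagedFile.Types.Web);
console.log(file.url); // illustrative URL for the managed file

// If the managed file were a ZIP, the inner file's name is appended
// to the URL to retrieve that specific file:
const zip = tw.system.model.findManagedFileByPath("assets.zip", TWManagedFile.Types.Web);
console.log(zip.url + "/images/icon.png");
```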
When working with artifacts such as BPDs, services, and more, it can sometimes be a challenge to find the parts you are looking for. PD provides an elegant solution to this problem through the technique known as tagging. A tag is a keyword that you make up. Think of it as the name of a collection or set. Each artifact that you care about can then be tagged with this keyword. Once tagged, you can then ask BPM PD to show you the tags of the artifacts or use this attribute in a search or sort filter.
Here is an example. Looking at the BPDs in a Process Application, we may initially see the following list:
In the top left of the list there is a pull-down that allows us to choose how the elements in the list are categorized. By default, they are categorized by type. We can change this to Tag and now the elements are categorized by their tags. Initially, none of the artifacts have any tags associated with them and we see that they all belong to the category called "No Tags".
By right-clicking on an entry, we expose its context menu. In that menu there is an entry called Tags. This is where we can associate the entry with one or more tag values. If we want to create a new tag, we can select the option to add one.
Here we see the creation of a new tag:
After having tagged an entry, when the list is shown again, we see that it is part of the grouping for the tag value:
An artifact can be tagged with multiple tags in which case it will appear in the list multiple times, once for each tag.
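The grouping behaviour just described can be sketched as follows: an artifact carrying several tags appears once under each tag, and untagged artifacts fall into "No Tags". The artifact names and the grouping function are invented for illustration.

```javascript
// Sketch of categorizing artifacts by tag: a multi-tagged artifact
// appears once under each of its tags; untagged artifacts are grouped
// under "No Tags", as described above.
const artifacts = [
  { name: "Approve Claim", tags: ["ToDo", "My PoC"] },
  { name: "Customer", tags: [] }
];

function groupByTag(items) {
  const groups = {};
  for (const item of items) {
    const tags = item.tags.length ? item.tags : ["No Tags"];
    for (const tag of tags) {
      (groups[tag] = groups[tag] || []).push(item.name);
    }
  }
  return groups;
}

console.log(groupByTag(artifacts));
// "Approve Claim" appears under both "ToDo" and "My PoC";
// "Customer" appears under "No Tags".
```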
To remove a tag from an artifact, again go to its tag settings, where there will be a check mark for each tag associated with the artifact. Un-checking a tag will remove the artifact from that tag set:
Smart folders are a convenient way of organizing your work and seeing only those artifacts that you need to see at a particular time. By default, four folders are pre-created for you:
Favorites – Artifacts tagged as favorites
Changed today – Artifacts that were changed today
Changed this week – Artifacts that were changed this week
Validation errors – Artifacts that contain validation errors
To add an artifact as a favorite, bring up the context menu of the artifact that you wish to flag and select Favorite from the menu. A star icon will appear next to the artifacts you have marked as favorites.
In addition to these pre-defined folders, you can create your own custom folders. By clicking the add button to the right of the Smart Folders label, you can create a new folder.
You give the folder a name and then provide one or more rules that describe the types of artifacts that the folder should contain. The available rule types include:
- Currently Changed
- Currently Changed by other
- Currently Open
- Last Modified By
- Last Modified Date
- Last Modified by me
- Validation Error
- Validation Warning
For example, to create a folder that shows all the artifacts that are tagged "My PoC", you might use:
One particularly good use of smart folders and tagging is to create a tag called "ToDo" which is used to identify artifacts that still need work before the solution can be deployed. As a solution is built, it is common to build a number of artifacts as placeholders for further implementation. Remembering to work on these is the challenge. If a tag called "ToDo" is created and a smart folder defined which shows these items, one can immediately see if there is work outstanding.
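The "ToDo" smart-folder idea above amounts to a predicate over artifacts, with the folder showing whatever currently matches. The sketch below models that; the artifact names and helper functions are invented for illustration.

```javascript
// Sketch of the "ToDo" smart-folder idea: a rule is a predicate over
// artifacts, and the folder lists whatever currently matches it.
const artifacts = [
  { name: "Send Notification", tags: ["ToDo"] }, // placeholder, needs work
  { name: "Approve Claim", tags: [] }            // finished artifact
];

// Rule: artifact is tagged "ToDo".
const toDoRule = (artifact) => artifact.tags.includes("ToDo");

function smartFolder(items, rule) {
  return items.filter(rule).map((a) => a.name);
}

// Outstanding work is immediately visible:
console.log(smartFolder(artifacts, toDoRule)); // ["Send Notification"]
```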
Whenever changes made to a Process Application are saved, the solution as a whole undergoes an automated correctness validation. This will attempt to catch development-time errors that may have been introduced. Validation errors may include such things as missing artifacts, duplicate named service components or no longer supported component types.
The Smart Folders section contains a visual indication of the number (if any) of validation errors detected.
Selecting this folder will show a list of artifacts that are flagged as containing errors. When an artifact that contains an error is opened, the Validation Errors tab will show the details of those errors:
Validation only occurs when artifacts are saved to the Process Center. All validation errors should be corrected before attempting to deploy a Process Application.
As you start to build out larger and larger solutions you will find that you have relationships between re-usable components. For example, you will find that you create Business Object definitions in a toolkit that are then leveraged in other process apps and toolkits. You will also create services that are invoked from a variety of processes. This results in references between components. For example, Process A uses Service B which accepts as input a business object of type C. What we need is a mechanism whereby we can determine which artifacts are referenced by which other artifacts. This forms the basis for minimal impact analysis. For example, if you determine that you wish to replace a data type, you would have to examine your solution to find all the places that data type was referenced. Walking through each of your applications and simply "looking" isn't going to be acceptable.
Within Process Designer, when we look at any individual artifact, we will find a "References" icon that looks as follows:
(Note: I fail to see how this icon suggests references, but that's OK.)
When we click this icon, we are presented with a new panel between the artifact list and the diagram:
What this panel initially shows is the set of other artifacts that are referenced by the current one. For example, if we are looking at a process diagram, we might see the business objects and services used by that process. This information is shown in the "References" section. However, a second section called "Referenced By" is also extremely useful. It shows all the other artifacts that hold a reference to the currently selected artifact; from it, we can quickly find all the places where our current artifact is used. "Referenced By" has two scopes. The first is "Local Scope", which looks for use within the current Process App. If we see a globe icon, we can click it to toggle to "Global Scope", which shows all references to our artifact across all the Process App and toolkit models contained within Process Center.
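The relationship between "References" and "Referenced By" is an index inversion: given the forward references each artifact holds, invert the map to answer "who uses me?". A minimal sketch, using the hypothetical Process A / Service B / Business Object C names from earlier (this models the idea, not the product's internal storage):

```python
# "References" vs "Referenced By" as an index inversion -- illustrative only.
from collections import defaultdict

# Forward references: artifact -> artifacts it uses (the "References" view).
references = {
    "Process A": ["Service B"],
    "Service B": ["Business Object C"],
    "Process X": ["Business Object C"],
}

# Invert the map to build the "Referenced By" view for impact analysis.
referenced_by = defaultdict(list)
for src, targets in references.items():
    for dst in targets:
        referenced_by[dst].append(src)

# Everywhere Business Object C is used -- change it, and these are impacted.
print(referenced_by["Business Object C"])  # ['Service B', 'Process X']
```
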
Working as part of a development team
It is extremely common for a team of people to be working on a single Process Application solution at the same time. Some may be developers, some may be analysts, and some may perform other roles. Since each BPM user connects PD to the same Process Center repository, we need some form of shared notification to ensure that users do not step on each other's work and are notified when changes are complete.
BPM PD provides an elegant solution to this problem.
When multiple users open the same Process Application at the same time, the list of users that have that Process Application open is shown at the bottom of PD.
The following illustrates that there are two other users who also have this same Process Application open in their BPM PD tools.
Clicking on a user brings up details of what that user has open for editing or has recently worked on:
It is permissible for multiple users to open the same artifact at the same time, in which case a notification that another user has it open will be shown:
If another user makes a change to the artifact, it becomes read-only for everyone else until the changes are completed or discarded. The indication that it is being edited by another user is immediate:
When the changes are completed, the artifact immediately becomes unlocked for the others, and the changes are reflected in the open editors of all the other users.
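This behavior can be modeled as a per-artifact write lock: the first editor's change makes the artifact read-only to everyone else until the edit completes. The class and method names below are invented for illustration; the real mechanism lives inside Process Center:

```python
# Minimal model of per-artifact edit locking -- not the actual BPM mechanism.
class Artifact:
    def __init__(self, name):
        self.name = name
        self.editor = None          # user currently holding the edit lock

    def begin_edit(self, user):
        """Return True if the user may edit; False means read-only."""
        if self.editor not in (None, user):
            return False            # someone else is editing: read-only
        self.editor = user
        return True

    def finish_edit(self):
        self.editor = None          # lock released; others see the updates

bpd = Artifact("Approve Claim BPD")
assert bpd.begin_edit("alice")      # alice acquires the lock
assert not bpd.begin_edit("bob")    # bob sees the artifact as read-only
bpd.finish_edit()                   # alice completes her changes
assert bpd.begin_edit("bob")        # immediately unlocked for bob
```
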
Component – Performance Data Warehouse
The Performance Data Warehouse is a database responsible for collecting and managing data originated by Process Server instances. Think of it as the repository of the system's history, used for reporting on the outcome of processes. Reporting data is sourced from the single Performance Data Warehouse database, which may be associated with multiple Process Server instances. This architecture allows information from multiple servers to be aggregated. Since generating reporting data can be computationally expensive, separating the Process Servers from the Performance Data Warehouse also allows reports to be generated without impacting the operation of running processes.
Component – Process Center Console
The Process Center Console provides a web-based interface for managing the projects maintained by the Process Center. This capability is also available within the BPM PD thick-client tool.
Component – Process Portal
The BPM Process Portal provides the primary interface for end users to start process instances and to see work tasks awaiting their attention.
Component – Process Admin Console
The Process Admin Console is a web-based interface for system administrators. It provides a wealth of functions for system operations.