Orchestration Configuration
About Orchestration Workflow Development
An orchestration is a group of analytics to be run as a single unit. An orchestration workflow defines the order for executing the individual analytics. The runtime uses an orchestration workflow file to trigger the analytic execution request.
The orchestration workflow is defined in a BPMN file, which is an XML file conforming to the BPMN 2.0 standard. Create the orchestration workflow file in your preferred tool for editing XML files. Sample orchestration BPMN-compliant workflow files are available at https://github.com/PredixDev/predix-analytics-sample/tree/master/orchestrations.
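For orientation, a minimal single-step skeleton in generic BPMN 2.0 might look like the following. All ids and names here are placeholders, and the runtime expects additional attributes on its tasks, so treat the linked sample files as the authoritative reference rather than this sketch.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<definitions xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL"
             targetNamespace="http://www.example.com/orchestrations">
  <process id="sampleOrchestration" isExecutable="true">
    <startEvent id="start"/>
    <sequenceFlow id="flow1" sourceRef="start" targetRef="analyticStep1"/>
    <!-- One service task per analytic in the orchestration -->
    <serviceTask id="analyticStep1" name="first-analytic"/>
    <sequenceFlow id="flow2" sourceRef="analyticStep1" targetRef="end"/>
    <endEvent id="end"/>
  </process>
</definitions>
```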
Task Roadmaps: Developing an Orchestration Workflow
A task roadmap provides the recommended order of steps to complete a process and serves as a planning guide. Use the following roadmaps to develop an orchestration workflow file.
How the data will be accessed by the analytic determines which process to follow when developing the orchestration workflow. Choose from the following options.
- Task Roadmap: Running an Orchestration Using Predix Time Series Tags: Use when time series tag ids will be provided in the orchestration request so that the runtime can pass time series data to the analytics.
- Task Roadmap: Running an Orchestration with Single Asset: Use when time series tag ids will be resolved at runtime using asset tag queries that retrieve the time series tag ids from Predix Asset.
- Task Roadmap: Running an Orchestration for Asset Group: Use when running multiple orchestration instances, one for each asset in a set of assets.
- Task Roadmap: Running an Orchestration Using an External Data Source: Use when running an orchestration request where data from an external source (non-Predix Time Series data) will be passed to the analytics.
Task Roadmap: Running an Orchestration Using Predix Time Series Tags
When developing an orchestration workflow designed to run analytics that will use time series data, proceed as follows. The orchestration workflow will communicate with the Predix Time Series service to retrieve and store data for use by the referenced analytics.
Task | Information
---|---
1. Understand how to find the URI values and to configure the REST headers for each API | See
2. Create the orchestration workflow files | See About Orchestration Workflow Development
3. Validate the structure of the orchestration workflow file and health of referenced analytics | See Validating an Orchestration Workflow File
4. If using trained analytics: | See
5. Create the port-to-field maps for the analytics in the orchestration | See Creating a Port-To-Field Map
6. Create the orchestration configuration entry and upload the BPMN and port-to-field artifacts | See Uploading an Orchestration Configuration Entry
7. Deploy the orchestration workflow file | See Deploying an Orchestration Workflow File
8. Run the orchestration with the tag id map in the request | See Running an Orchestration Using Predix Time Series Tags
9. Verify the orchestration execution results status | See Retrieving an Orchestration Execution Status
Task Roadmap: Running an Orchestration with Single Asset
When developing an orchestration workflow designed to run analytics that will use asset model and time series data, proceed as follows. The orchestration will automatically communicate with both the Predix Asset service to retrieve tags, and the Predix Time Series service to retrieve and store data corresponding to these tags.
This task roadmap provides the steps to run an orchestration for a single asset.
Task | Information
---|---
1. Understand how to find the URI values and to configure the REST headers for each API | See
2. Create the orchestration workflow files | See About Orchestration Workflow Development
3. Validate the structure of the orchestration workflow file and health of referenced analytics | See Validating an Orchestration Workflow File
4. If using trained analytics: | See
5. Create the port-to-field maps for the analytics in the orchestration | See Creating a Port-To-Field Map
6. Create the orchestration configuration entry and upload the BPMN and port-to-field artifacts | See Uploading an Orchestration Configuration Entry
7. Create the default tag query | See Creating the Default Tag Queries
8. (Optional) Create a custom tag query in the port-to-field maps | See About Asset Model Data Handling for Orchestration
9. Deploy the orchestration workflow file | See Deploying an Orchestration Workflow File
10. Run the orchestration with the orchestration configuration and asset id | See Running an Orchestration for a Single Asset
11. Verify the orchestration execution results status | See Retrieving an Orchestration Execution Status
Task Roadmap: Running an Orchestration for Asset Group
When developing an orchestration workflow designed to run analytics that will use asset model and time series data, proceed as follows. The orchestration will automatically communicate with both the Predix Asset service to retrieve tags, and the Predix Time Series service to retrieve and store data corresponding to these tags.
This task roadmap provides the steps to run an orchestration for an asset group.
Task | Information
---|---
1. Understand how to find the URI values and to configure the REST headers for each API | See
2. Create the orchestration workflow files | See About Orchestration Workflow Development
3. Validate the structure of the orchestration file and health of referenced analytics | See Validating an Orchestration Workflow File
4. If using trained analytics: | See
5. Create the port-to-field maps for the analytics in the orchestration | See Creating a Port-To-Field Map
6. Create the orchestration configuration entry and upload the BPMN and port-to-field artifacts | See Uploading an Orchestration Configuration Entry
7. Create the default tag query | See Creating the Default Tag Queries
8. (Optional) Create a custom tag query in the port-to-field maps | See About Asset Model Data Handling for Orchestration
9. Deploy the orchestration workflow file | See Deploying an Orchestration Workflow File
10. Run the orchestration with the orchestration configuration and asset group query | See Running an Orchestration for an Asset Group
11. Verify the orchestration execution results status | See Retrieving an Orchestration Execution Status
Task Roadmap: Running an Orchestration Using an External Data Source
When developing an orchestration workflow designed to run analytics that will use data from an external source (non-Predix Time Series data), proceed as follows. You will build a Custom Data Connector service to read data from and write data to the external source.
The Analytics Framework will call your data connector service to read the analytic input data and write the analytic output data to the data source.
Task | Information
---|---
1. Understand how to find the URI values and to configure the REST headers for each API | See
2. Create the orchestration workflow files | See About Orchestration Workflow Development
3. Validate the structure of the orchestration file and health of referenced analytics | See Validating an Orchestration Workflow File
4. If using trained analytics: | See
5. Create the port-to-field maps for the analytics in the orchestration. When defining the dataSourceId field, provide a value of your choosing to identify the external data source the analytics will rely upon. | See Creating a Port-To-Field Map
6. Create the orchestration configuration entry and upload the BPMN and port-to-field artifacts | See Uploading an Orchestration Configuration Entry
7. Develop your Custom Data Connector service and deploy it to Predix cloud. A Java-based reference implementation with PostgreSQL DB support is available; use it as your starting point. | See
8. Deploy the orchestration workflow file | See Deploying an Orchestration Workflow File
9. Run the orchestration for analytics using an external data source | See Running an Orchestration Using an External Data Source
10. Verify the orchestration execution results status | See Retrieving an Orchestration Execution Status
Validating an Orchestration Workflow File
Before You Begin
Before you can validate an orchestration workflow file, all referenced analytics must be:
- Hosted in the Analytics Catalog, and
- Deployed to Cloud Foundry, and
- Running
For more information, see Adding and Validating an Analytic Using REST APIs.
To validate the structure of an orchestration workflow file and the health of all referenced analytics, issue the following REST API request. The request is a multipart/form-data type (with an attribute named 'file') and must contain the orchestration workflow file to be validated.
POST <execution_uri>/api/v2/execution/validation
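As an illustration of the multipart request described above, the following Python sketch builds a multipart/form-data body with the single part named 'file'. The boundary handling is standard HTTP; the surrounding deployment details (auth headers, hostnames) vary by environment and are only noted in comments.

```python
import io
import uuid

def build_multipart(filename: str, payload: bytes):
    """Build a multipart/form-data body containing one part named 'file',
    as the validation endpoint expects. Returns (body, content_type)."""
    boundary = uuid.uuid4().hex
    body = io.BytesIO()
    body.write(f"--{boundary}\r\n".encode())
    body.write(
        f'Content-Disposition: form-data; name="file"; filename="{filename}"\r\n'
        "Content-Type: application/octet-stream\r\n\r\n".encode()
    )
    body.write(payload)  # the BPMN workflow file contents
    body.write(f"\r\n--{boundary}--\r\n".encode())
    return body.getvalue(), f"multipart/form-data; boundary={boundary}"

# The resulting body and content type would be POSTed to
# <execution_uri>/api/v2/execution/validation along with your auth headers.
```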
The following is a sample response with a valid BPMN XML file and a valid analytic reference.
{
"analyticValidationStatusMap": {
"<analytic_uri>/api/v1/analytic/execution": "200",
... (one entry for each analytic in the bpmn) ...
"<analytic_uri>/api/v1/analytic/execution": "200"
},
"id": "<request id>"
}
The following is a sample response with a valid BPMN XML file and an invalid analytic reference (e.g., to a nonexistent analytic). The 404 error indicates that the runtime could not find that deployed analytic. Check that the analytic id in the URI is correct.
{
"analyticValidationStatusMap": {
"<analytic_uri>/api/v1/analytic/execution": "200",
"<analytic_uri>/api/v1/analytic/execution": "404"
},
"id": "896f736c-63e2-11e6-9ca9-0aaf9ee07fcb"
}
The following is a sample response with an invalid BPMN XML file.
{
"code": "ORCH001",
"detail": "Failed to deploy orchestration workflow to runtime engine: [Error reading XML]",
"severity": "3",
"message": "Failed to deploy orchestration workflow to runtime engine.",
"parameters": [
"Error reading XML"
]
}
About Analytics Using a Trained Model
Analytics that use a trained model (commonly known as trained analytics, machine trained analytics, or machine learned analytics) are supported as follows.
About Deploying Trained Models as a Configuration Entry
At execution time, the runtime passes a deployed trained model to the analytic as part of the run() request. When deploying a trained model as a configuration entry in the runtime, you will follow this process.
- Upload each trained model as an Orchestration Configuration Entry. The upload will include:
- model file
- model name
- model version
- model key, identifying the asset(s) to which the model file applies
- For each use of a trained model, create a port-to-field map that maps inputs to the analytic (from supported data sources) and maps the models from configuration entries. References to models in the port-to-field map specify the model name only; they do not specify the asset context or asset id.
- When the orchestration is run:
- The run request will include an asset context.
- When the runtime sees a reference to trained models in the port-to-field map, it retrieves the trained models for the current asset context and passes them to the analytic along with the input data. Refer to the following section for more information about how trained models are selected at runtime for the current asset context.
How the Trained Models are Selected at Runtime
The trained model is loaded at runtime using the following rules. This example describes how the process works. Assume the models in the orchestration configuration database are as summarized in the following table.
Model Key | Model Name | Model Version | Model File |
---|---|---|---|
GE90x-1 | Model-1 | V1 | File1.model |
GE90x-1 | Model-1 | V2 | File2.model |
GE90x-1 | Model-2 | V1 | File3.model |
Group-1 | Model-3 | V3 | File4.model |
Group-2 | Model-1 | V1 | File5.model |
GE90x-2 | Model-5 | V3 | File6.model |
Group-2 | Model-6 | V1 | File7.model |
GE90x-3 | Model-3 | V3 | File8.model |
The scenarios, and the models expected to be retrieved in each, are summarized in the following table.
Scenario | Model File | Description
---|---|---
Orchestration execution with asset id “GE90x-1”, with the orchestration step's port-to-field map referencing model “Model-1”, version “v1”. | File1.model |
Orchestration execution with asset id “GE90x-1”, with the orchestration step's port-to-field map referencing model “Model-1”, version “UNKNOWN”. | none | Performs an exact match for the model version and retrieves nothing. Not included as part of inputModels.
Orchestration execution with asset id “GE90x-NOTDEFINED” and model group key “Group-2”, with the orchestration step using dependent model “Model-1”, version “v1”. | File5.model | When an orchestration execution is triggered with both an asset id and a model group key and no model is found for the asset id, the model group key is used as the model key to retrieve the model.
Orchestration execution with an asset group (pulling GE90x-3, GE90x-6) and model group key “Group-1”. Step model “Model-3”, version “V3”. | | For all assets in the asset group, tries to find models using the asset id as the model key and the model name/version defined in the step.
Orchestration execution with an asset group (“GE90x-1”, “GE90x-2”, and “GE90x-5”) and model group key “Group-1”. Step model “Model”, version “V3”. | | The asset group retrieves actual assets “GE90x-1”, “GE90x-2”, and “GE90x-5”. For each asset id context, because of model name startsWith support, different models will be retrieved for each asset.
Orchestration execution with tag-map and model group key “Group-2”. Step model “Model-1”, version “V1”. | File5.model | Provided tag-map fields are only for Predix Time Series fields.
Orchestration execution with tag-map and model group key “Group-2”. Step model “Model”, version “V1”. | | Using the startsWith wildcard search, two models will be retrieved for this group, “Model-1” and “Model-6”.
Orchestration execution without any model group key. | Asset id (model key) level models only. | Will not default to model group key level models if asset id level models are missing.
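The selection rules above can be approximated in a short Python sketch. This is an illustration, not the runtime's implementation: model names match on startsWith, versions match exactly (versions are lowercased here to keep the comparison unambiguous), and the asset id is tried as the model key before falling back to the model group key.

```python
def select_models(catalog, name, version, asset_id=None, group_key=None):
    """Approximate the documented lookup: try the asset id as the model key;
    if nothing matches and a model group key was supplied, retry with the
    group key. Names match on startsWith; versions match exactly."""
    def match(key):
        return [m["file"] for m in catalog
                if m["key"] == key
                and m["name"].startswith(name)
                and m["version"] == version]
    found = match(asset_id) if asset_id is not None else []
    if not found and group_key is not None:
        found = match(group_key)
    return found

# Subset of the configuration table above (versions lowercased for the example):
CATALOG = [
    {"key": "GE90x-1", "name": "Model-1", "version": "v1", "file": "File1.model"},
    {"key": "GE90x-1", "name": "Model-1", "version": "v2", "file": "File2.model"},
    {"key": "Group-2", "name": "Model-1", "version": "v1", "file": "File5.model"},
    {"key": "Group-2", "name": "Model-6", "version": "v1", "file": "File7.model"},
]
```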
Uploading an Orchestration Configuration Entry
The orchestration configuration service is a repository for storing orchestration flows and port-to-field maps. An orchestration configuration entry stores configuration attributes.
About This Task
The high level steps to create and upload an orchestration configuration entry are:
- Create an orchestration configuration entry with the attributes
- Upload the orchestration workflow file
- Upload the port-to-field map files corresponding to each orchestration step defined in the orchestration workflow
Procedure
Uploading the Trained Analytic Model as a Configuration Artifact
The steps to deploy a trained model as a configuration artifact are provided here. Trained models are uploaded against a single asset id, or against an asset group key that identifies a group of assets. The orchestration configuration service is the repository for storing trained models.
To upload the trained analytic model, issue the following REST API request.
POST <config_uri>/api/v2/config/orchestrations/models
The request is a multipart/form-data type with the following parts. The modelKey, modelName, and modelVersion combination must be unique.

Name | Description | Required/Optional | Comments
---|---|---|---
file | Model file | Required |
modelKey | Model key | Required |
modelName | Model name | Required | Length cannot exceed 255 characters
modelVersion | Model version | Required | Length cannot exceed 255 characters
description | Model description | Optional |
The following is a sample upload artifact response.
{
"id": "aa69522f-3b9f-4538-b089-79d8517e622d",
"fileName": "separator.png",
"description": "sample model",
"modelKey": "modelkey-1",
"modelName": "modelname-1",
"modelVersion": "v1",
"md5": "87dc5c965839396cae70b74b7e4df928",
"createdTimestamp": "2016-07-22T17:56:04+00:00",
"updatedTimestamp": "2016-07-22T17:56:04+00:00"
}
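The documented constraints (255-character limits, and uniqueness of the modelKey/modelName/modelVersion combination) can be pre-checked client-side before uploading. A hedged sketch only; the service remains the authority on these rules:

```python
def validate_model_entry(existing, new):
    """Client-side checks mirroring the documented upload constraints:
    modelName/modelVersion length <= 255 characters, and the
    (modelKey, modelName, modelVersion) combination must be unique."""
    errors = []
    if len(new["modelName"]) > 255:
        errors.append("modelName exceeds 255 characters")
    if len(new["modelVersion"]) > 255:
        errors.append("modelVersion exceeds 255 characters")
    key = (new["modelKey"], new["modelName"], new["modelVersion"])
    if any((e["modelKey"], e["modelName"], e["modelVersion"]) == key
           for e in existing):
        errors.append("modelKey/modelName/modelVersion combination already exists")
    return errors
```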
Creating the Default Tag Queries
You can specify a default tag query to be used for all port definitions when running an analytic, as follows.
Procedure
Downloading Artifacts Attached to an Orchestration Configuration Entry
You can download one or more orchestration configuration artifacts as follows.
Procedure
Retrieving Orchestration Configuration Entries
You can retrieve orchestration configuration entries using pagination, sorting, and filtering criteria as follows.
Issue the following REST API request.
GET <config_uri>/api/v2/config/orchestrations
The following sample request retrieves entries starting at the first page (page 0 in a zero-based page index), displaying 10 entries per page.
GET <config_uri>/api/v2/config/orchestrations?page=0&size=10
The following is a sample response for retrieving orchestration configuration entries.
{
"orchestrationEntries": [
{
"name": "Test workflow",
"id": "36cc09bf-7d44-4eac-9518-c738527abe03",
"description": "Test workflow containing one Python analytic",
"author": "Predix Analytics team",
"createdTimestamp": "2016-02-25T06:27:55+00:00",
"updatedTimestamp": "2016-02-25T06:27:55+00:00"
},
{
"name": "Test workflow1",
"id": "233c0d71-da6d-4761-ae97-f10d1c45d5dd",
"description": "Test workflow containing one Python analytic",
"author": "Predix Analytics team",
"createdTimestamp": "2016-02-25T06:30:46+00:00",
"updatedTimestamp": "2016-02-25T06:30:46+00:00"
}
],
"totalElements": 2,
"totalPages": 1,
"currentPageSize": 2,
"maximumPageSize": 10,
"currentPageNumber": 0
}
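Given totalElements and a page size from a first response, a client can enumerate the page URLs needed to fetch every entry. A small sketch; the host below stands in for a real `<config_uri>` value:

```python
import math

def page_urls(config_uri, total_elements, size):
    """Enumerate the paged GET URLs needed to fetch every orchestration
    configuration entry, using the zero-based page index."""
    pages = max(1, math.ceil(total_elements / size))
    return [f"{config_uri}/api/v2/config/orchestrations?page={p}&size={size}"
            for p in range(pages)]
```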
Deleting an Orchestration Configuration Entry
To delete an orchestration configuration entry and all the artifacts attached to it, issue the following REST API request.
DELETE <config_uri>/api/v2/config/orchestrations/{id}
To delete a single artifact attached to an orchestration configuration entry, issue the following REST API request.
DELETE <config_uri>/api/v2/config/orchestrations/artifacts/{id}