Script block applications

This section shows how the script block may be applied, with examples that illustrate its usefulness.

Return to configuring the Script block


Setting up a switch

The following example shows how to set up a simple switch.

In this script the value of Value_1 is monitored and used to determine the value to assign to Switch.

Return to top

diagram showing the output of the Set switch script block
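
The script in the diagram is not reproduced here, but based on the description above it might look like the following minimal sketch (the threshold value of 1 and the use of a simple comparison are assumptions for illustration):

// Set Switch according to the monitored value of Value_1
// (the threshold of 1 is an assumed example value)
if Value_1 >= 1 then
   Switch := 1
else
   Switch := 0
endif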

Return to top

Initializing variables on the first execute

When a block with a script executes for the first time, the output fields and state variables all have 0 values, bad qualities, and timestamps set to the first execute time. If these variables are not initialised, they generate bad quality results in every expression where they are used. To prevent this, the variables must be initialised on the first execute of the block. The script is shown below.

diagram showing the Initialise script  
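
As a minimal sketch of such an initialisation (assuming one output field, Output, and one state variable, State_1, with Value_1 an input field):

// Initialise output fields and state variables on the first execute
if firstexecute then
   Output := Value_1
   State_1 := 0
endif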

Return to top

Explanation

On the first execute of the block the 'firstexecute' constant will have a value of true. That will cause the statements in the 'then' part to execute. These statements must set the initial values of the output fields and state fields. The expressions in these statements may use any input field because the input fields will have valid values.

On every execute after the first one, 'firstexecute' will be false and the statements in the 'then' part will not execute.

Calculating an average

Local variable average

In this script we wish to calculate the average value (Average) of the good quality input fields. There are three input fields: Value_1, Value_2 and Value_3. We count the number of input fields with good quality (count) and sum the values of those fields (sum). Finally, the average is calculated by dividing the sum by the count. The script is shown below.

diagram of the local variable average script block
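
Based on the description above, the script in the diagram might be sketched as follows (the individual quality checks and the guard against dividing by a zero count are written out as assumptions):

// Average of the good quality input fields
count := 0
sum := 0.0
if Value_1.quality = qualitygood then
   count := count + 1
   sum := sum + Value_1
endif
if Value_2.quality = qualitygood then
   count := count + 1
   sum := sum + Value_2
endif
if Value_3.quality = qualitygood then
   count := count + 1
   sum := sum + Value_3
endif
if count > 0 then
   Average := sum / count
endif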

Return to top

Count and sum need not be outputs from the block, because these are values which must be calculated every time the script executes; they may therefore remain local variables (see the note below).

Note: Local variables may be used for storing temporary results. A local variable is introduced into the script where an assignment is made to it for the first time; this happens on lines 2 and 3 of the script. Each time the script executes, local variables are reinitialised; unlike output fields and state variables, their previous values are not kept in memory.

Output field variable counter

In this script we wish to set up a counter (Counter) that counts the number of executions that occur until a threshold value (Value_1) is reached, after which the counter is set back to 0. Counter must be an output field or a state variable to enable this. The script keeps the last value of its output fields and state variables in memory between consecutive executions; unless a value is changed in the script, the output fields and state variables keep their values on the next execute.

diagram of the output field counter script block
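
A sketch of such a counter script might look as follows (the initialisation on the first execute is an assumption, following the pattern shown earlier in this topic):

// Count executions until the threshold Value_1 is reached
if firstexecute then
   Counter := 0
endif
if Counter >= Value_1 then
   Counter := 0
else
   Counter := Counter + 1
endif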

Return to top

A sample of the input and output of this script block is shown below.

 

Example of the input to and output from the above script block

Return to top

Changing data types and scaling data

In certain cases the data type of a tag obtained from the SCADA/DCS/historian/PLC needs to be changed. For example, you may want to use a tag defined as an integer on the SCADA/DCS/historian/PLC as an input to the nonlinear model block, which cannot accept integer fields as inputs. In the script block it is possible to define the data types of the output fields: the script block tries to guess the data type of each field, and it is up to you to change this if necessary. In addition, you may want to change the units of a particular tag or scale it. The scaling factor may be calculated or defined in the script block, or obtained from an external source.

Assume that we've typed the following script into the script pane. Note that we have 4 input fields to this block; Value_1 (integer), Value_2 (double), Value_3 (integer) and External_Factor (double).

diagram showing the script that will help illustrate data types and data scaling
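
The script in the diagram is likely similar to the following sketch, reconstructed from the explanation that follows (the exact ordering of the statements is an assumption):

// Scaling and data types
Internal_Factor := 1.2
Scaled_Value_1 := Internal_Factor * Value_1
Scaled_Value_2 := External_Factor * Value_2
Value_4 := Value_1 + Value_3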

Return to top

If the 'Show Variables' button is pressed to show the script variables page, the following will be displayed.

diagram showing the script variables

Return to top

Note that the script guesses the data types of the various fields calculated in the script.  At this stage the fields are all still local variables.

Internal_Factor is a value (1.2) that was assigned to the variable simply for use in the script; it is therefore not necessary to make it an output field, and it may remain a local variable. The value 1.2 will be assigned to it every time the script executes.

The data type of Scaled_Value_1 is double due to the fact that it is the product of a double (Internal_Factor) and an integer (Value_1).

The data type of Scaled_Value_2 is double due to the fact that it is the product of two doubles (External_Factor and Value_2).  

Note that Value_4 has integer as its data type because Value_1 and Value_3 are both integers and were summed.

To add Scaled_Value_1, Scaled_Value_2 and Value_4 as output fields, check the box in the Output Field column next to each name and click the OK button. Scaled_Value_1, Scaled_Value_2 and Value_4 will now appear in the fields list as output fields.

Now assume we wish to use Value_4 as an input to a nonlinear model; its data type must therefore be changed to double. Select Value_4 in the fields list, select double from the Type dropdown box, and click the Set Type button. The output fields list will show the following.

diagram showing the changed data type

Return to top

Connection Monitor application

The following script is used to determine whether a break in communication has occurred between Architect and the SCADA/DCS/historian/PLC.

Diagram showing the Connection_Monitor script block

Return to top

Explanation

For this application it is required that a tag be read from and written to the SCADA/DCS/historian/PLC (MONITOR_READ and MONITOR_WRITE in the above script).

MONITOR_READ is an input to the Connection_Monitor script block; it is the value of the MONITOR tag read from the SCADA/DCS/historian/PLC for the current execution. The value of MONITOR_WRITE is dictated by MONITOR_READ. MONITOR_WRITE is an output from the script and is written to the MONITOR tag on the SCADA/DCS/historian/PLC. The script also generates a field that indicates whether the connection is still OK (Connection_OK).

If the connection remains OK, the MONITOR tag (on the SCADA/DCS/historian/PLC) will toggle between 0 and 1 between consecutive executions.  Thus, if the connection is OK and the value read from the SCADA/DCS/historian/PLC is 0, the value written to the SCADA/DCS/historian/PLC will be 1.  The value of MONITOR_READ on the current execution (i.e. 0) is saved for use on the following execution (saved as MONITOR_READ_prev).  On the next execution the script will check if the value that it reads from the SCADA/DCS/historian/PLC (which should be 1) is equal to the value that it read from the system 1 execution ago (i.e. MONITOR_READ_prev, which was set equal to 0).  If an error occurred this will not be the case and the Connection_OK tag will be set to 0.

diagram showing the relationship between the Monitor tags (Monitor_Read is the previous Monitor_Write), if the connection remains OK

As soon as an error occurs in the SCADA/DCS/historian/PLC connection, the value of MONITOR read from the system will be equal to the previous value of MONITOR (since MONITOR_WRITE will not have updated the tag OR MONITOR_READ could not be read).
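
Based on the explanation above, the Connection_Monitor script might be sketched as follows (the handling of the first execute is an assumption):

// Connection monitor
if firstexecute then
   Connection_OK := 1
else
   // If the value read equals the value read one execution ago,
   // the MONITOR tag did not toggle: flag a connection problem
   if MONITOR_READ = MONITOR_READ_prev then
      Connection_OK := 0
   else
      Connection_OK := 1
   endif
endif
// Toggle the value written back to the MONITOR tag
if MONITOR_READ = 0 then
   MONITOR_WRITE := 1
else
   MONITOR_WRITE := 0
endif
// Save the current read value for the next execution
MONITOR_READ_prev := MONITOR_READ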

Return to top

Examples:

1. Rate Of Change (ROC) Limiter:

Concept:

  • Limit signal rate of increase to maximum increase rate

  • Limit signal rate of decrease to maximum decrease rate

Script Block:

  • One input: Input

  • One output: Output

Operation:

  • Output normally follows input signal on every execute

  • Calculate the required ROC (ROC_Req) at which the output would have to change to follow the input.

  • If ROC_Req is outside the ROC limits, limit the ROC of the output to the maximum allowable rate.

    Note: The function is state-based.

Return to top

Problem: state-based function initialisation

  • Initial value of Output is bad

  • Result: first value of ROC_Req will be bad

Solution step 1:

  • Handle bad quality ROC_Req

Solution step 2:

  • Initialise Output on first execution

  • Note: To make the ROC Limiter more re-usable/generic, do not assume a fixed sampling interval: the input sampling frequency may not be once per minute and may even be irregular.

Calculate true ROC:

  • ROC := (Input – Output) / Delta_T

  • Delta_T := Input.timestamp – Output.timestamp

  • What is the time unit of ROC, e.g. m/s or m/ms?

  • Scale ROC to use a user-defined time unit

  • Check for Delta_T = 0!

Return to top

ROC Limiter Script Explanation

//**********************************************

// ROC Limiter (NOT THE FILTER)

//**********************************************

First the parameters are initialised by setting maximum and minimum limits for the ROC and a time unit of one minute.

// Parameters

ROC_Max :=  0.05 // per ROC_Time_Unit

ROC_Min := -0.05 // per ROC_Time_Unit

ROC_Time_Unit := 1:00

Then, on the first execute, Output is initialised to be equal to the input variable or tag.

// Initialisation

if firstexecute then

   Output := Input

endif

Next, the elapsed time (Delta_T) is calculated from the difference between the timestamp field properties of the Input and Output variables. Dividing this difference by the time unit chosen for this ROC limiter normalises the value, scaling the ROC to the user-defined time unit.

// Calculate required ROC in specified units of measure

Delta_T := (Input.timestamp - Output.timestamp) / ROC_Time_Unit

If Delta_T is zero, set the required ROC to zero as well; otherwise, calculate the required rate of change on every execute in the user-selected time unit of one minute.

if Delta_T = 0 then

   ROC_Req := 0.0

else

   ROC_Req := (Input - Output) / Delta_T

endif

Check the quality field property of the required ROC. If the quality is bad, use the Input as the Output, because the quality should remain bad. If the quality is good, the ROC calculation can be used: the signal's rate of increase is limited to the maximum increase rate, and its rate of decrease to the maximum decrease rate; otherwise the Output simply follows the Input.

if ROC_Req.quality = qualitybad then

   Output := Input

else

   // Do rate-limited increase/decrease

   if ( ROC_Req > ROC_Max ) then

      Output := Output + ( ROC_Max * Delta_T )

   elseif ( ROC_Req < ROC_Min ) then

      Output := Output + ( ROC_Min * Delta_T )

   else

     // Follow input

     Output := Input

   endif

endif

//**********************************************

Return to top

2. Time Shifting of model target for predictive models:

Predictive control models allow the operator to take preventative measures in a timely fashion, before a process reaches a critical state. This is possible because such a model can give an indication of certain process conditions before they actually occur.

  • Apply time-shifting when changing historical timestamps for predictive modeling

  • Done in the Script block

    • Can only manipulate timestamps “back in time”

    • Cannot move a timestamp forward past “execute time” (where execute time = now)

    • Historical model output data must be prepared for training a predictive model

    • Data must be shifted in time relative to the inputs

The timestamp attribute of the model output data is changed for training purposes. Use the script block to alter the timestamp of:

  • Historical data for training purposes (i.e. on a batch basis)

  • Real-time data for future training purposes (i.e. on a continuous basis)

 

Time shifting fits a model for predictive control.

Return to top

By shifting the model output back one time unit, a better predictive relationship is found between the inputs and the model output. This gives the user the opportunity to use the model inputs with timestamp = now to predict the next output.

Time Shifting Script Explanation

Shift the model output back in time by one minute by subtracting one minute from the model output’s timestamp field. Remember that the field properties can be set in one line of code, as done below, and that the field script format is: “variable := field(variable.value, variable.quality, variable.timestamp)”. (Remember VQT for fields.)

//**********************************************

// Time-shifting of model target for predictive models

//**********************************************

TS_t4 := 0::00:01:00

Model_Output_t1 := field(Model_Output.value, Model_Output.quality, Model_Output.timestamp - TS_t4 )

//**********************************************

It is of paramount importance that this predictive model is tested and validated. When choosing a time-shift length, the time-shifted model output must display a strong relationship with the model inputs. Different time shifts can be tested to probe for a predictive model; only one will give the best result, if any is plausible.

Return to top

Basic Soft Sensor

A Soft Sensor provides a measurement for a process variable where the field instrument is often offline (the soft sensor provides a reconstructed value for the field instrument). Soft Sensors are also used to provide higher frequency values for process variables that are typically only measured at low frequencies (such as laboratory analysis values). The Soft Sensor fills in the “gaps” between the physical measurements. The performance of a Soft Sensor is validated by assessing (1) the degree of correlation (R2) and (2) the size of the error between the physical instrument data and the soft sensor data.

Applications

  • Back-up sensor

  • Sensor drift

  • Increased Sampling Frequency

  • Predictions

  • Process Performance Indicators

  • Intelligent Alarming

  • Preventative Maintenance

Real-time analyses

  • Provide online sample estimations between offline laboratory analysis

  • Increase sampling frequency of on-stream analysers

More frequent measurements

  • Improve feedback control

  • Quicker response times (within residence time of process unit/reactor)

Return to top

Four cases for which a Soft Sensor must cater

For case 1 it can be seen above that the soft sensor and hardware sensor are well correlated, with essentially no error present; the hardware sensor’s data is thus still valid. In case 2 the sensors start out well correlated, but become badly correlated as the error (difference) between them increases. This is called (hardware) sensor drift, and in such a case the soft sensor data will be used.

Return to top

When case 3 occurs the hardware sensor has to be trusted, because there is no correlation between the soft sensor and the physical measurement. In case 4 the soft sensor starts out badly correlated and cannot be trusted; after a period, its outputs become strongly correlated with the physical measurement. In such a case there is still no sense in using the soft sensor data, and the hardware sensor’s data has to be accepted as correct. Thus, only in case 2 will the soft sensor be used (instead of the physical measurement).

The objective of the Soft Sensor script is to detect all four of these cases, in order to either accept the Soft Sensor output value (case 2) or reject it (the other three cases).

Soft Sensor Script Explanation

//**********************************************

// Reconstruction script for basic soft sensor

//**********************************************

Set the range of allowable error by initialising minimum and maximum error limits.

// Error Limits

//////////////////////////////////////

ErrorLow := -10

ErrorHigh := 10

Set the lower limit of the correlation parameter (R2). The higher this correlation, the better the fit between the hardware and soft sensor data.

// ModelCoefficient_R2 limit

//////////////////////////////////////

ModelCoefficient_R2 := 0.9

Initialise outputs on each execute, so that the output will follow the measured input variable of the hardware sensor by default. In this example ‘Flow_rate’ is the tag for the soft sensor’s data. ‘Target’ is the hardware sensor tag and the model output variable is ‘Output’.  A flag is assigned to help with the logic of the script.

// Flow rate by default

//////////////////////////////////////

Output := Target

Reconstruction_Flag := 0

Continue with the checks below only if the modelled value has good quality; otherwise the Output simply stays with the default hardware sensor value.

// If the modeled Flow rate quality is good

if ( Flow_rate.quality = qualitygood ) then

Check if the hardware sensor’s quality is bad or outside of the error range allowed.

// If the measured Flow Rate quality is bad or the Error is outside the allowed Error limits

  if ( Target.quality = qualitybad ) or (( Error.quality = qualitygood ) and (( Error > ErrorHigh ) or ( Error < ErrorLow ))) then

Now check if the model target and output are closely correlated…

// If the model target and model output have been closely correlated

if ( Moving_R2 >= ModelCoefficient_R2 ) then

Thus, if the soft sensor is well correlated and the hardware sensor is out of error limits or of bad quality, then use the soft sensor’s (Flow_rate) data.

    // Use the model output as final output

    Output := Flow_rate

    Reconstruction_Flag := 1

endif

  endif

endif

//**********************************************

The Reconstruction_Flag is an indicator for which sensor’s data is being used to model the output. (1 = Soft sensor data, 0 = Hardware Sensor)

Return to top

Division by zero

If there is a division by zero in a script, the blueprint will stop execution.
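
To prevent this, guard any division with a zero check, as is done with Delta_T in the ROC Limiter example above. For example (Numerator, Denominator and Result are hypothetical names):

// Guard against division by zero
if Denominator = 0 then
   Result := 0.0
else
   Result := Numerator / Denominator
endif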

Return to top



CSense 2023 - Last updated: June 24, 2025