Using workflow parameters

Slimani Member
edited December 2023 in General

Hi dear community,

I'm using Aperture version 2.8.8.

My use case is calling a workflow through REST, and I need to provide some input data (also via REST) so that the workflow executes with that data.

I'm thinking of passing the data as workflow parameters; inside my workflow I would take those values, add them to a dataset, and then run my data quality checks.

We need a solution that is 100% REST (no shared files or data drop zone).

How are the parameters that we create and add to a workflow actually used? I have only found their names in the export file.

If you have any other solution, that would be very helpful.



  • Henry Simms (Administrator)

    Hi @Slimani

    The values passed in workflow parameters can be used in functions, so can be picked up in (at least) Transform, Validate, and Filter / Split steps. Parameters can also be passed through events to notifications. Here's a simple example:

    1. Create a workflow with 3 parameters, which are configurable at runtime:

    2. In this example, I'm just going to take the values of those parameters and, at runtime, populate a dataset with them in Data Studio. So first I create the dataset with an empty row that will hold the values:

    3. Then, in a Transform step, I use the Constant Value function and select the workflow parameter whose value I want to populate. I do this for each column:

    4. Then I execute the workflow via the REST API with the param values defined:

    5. We will see the values populated in a dataset (snapshot) taken when the workflow executed:
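    Step 4 above can be sketched in code. Note this is only an illustration: the base URL, endpoint path (`/workflows/execute`), and payload field names below are assumptions, not the documented Data Studio REST API contract, so check the API reference for your version before using it.

```python
# Hypothetical sketch of executing a workflow via REST with parameter values.
# The endpoint path and JSON shape are ASSUMPTIONS for illustration only.
import json
from urllib import request

BASE_URL = "https://datastudio.example.com/api"  # hypothetical host


def build_execution_payload(workflow_name, params):
    """Assemble a JSON body pairing each workflow parameter with its value."""
    return {
        "workflowName": workflow_name,
        "parameters": [{"name": k, "value": v} for k, v in params.items()],
    }


def execute_workflow(api_key, workflow_name, params):
    """POST the execution request (endpoint path is an assumption)."""
    body = json.dumps(build_execution_payload(workflow_name, params)).encode()
    req = request.Request(
        f"{BASE_URL}/workflows/execute",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.load(resp)


# Build the body for a workflow with 3 runtime parameters, as in the example.
payload = build_execution_payload(
    "Quality checks", {"param1": "a", "param2": "b", "param3": "c"}
)
print(len(payload["parameters"]))  # prints 3
```

    The point is simply that each runtime parameter travels as a name/value pair in the request body, so the caller never has to share a file or drop zone.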

    The typical use case for workflow parameters is to pass in values that affect runtime behaviour, such as true/false flags, values to be filtered in or out, counts of rows to sample, etc. It may make sense to pass all your input data into a workflow this way if you only have small volumes and few fields. An alternative would be to make the source of your workflow a refreshable source (possibly a dataset that itself consumes a REST API / OData feed) and refresh it on execution.