Important: This documentation covers Yarn 1 (Classic).
For Yarn 2+ docs and migration guide, see yarnpkg.com.

Package detail

zeebe-node

camunda-community-hub · 17.1k · Apache-2.0 · deprecated · 8.3.2 · TypeScript support: included

This package is deprecated. Please use the official SDK package @camunda8/sdk. See: https://github.com/camunda/camunda-8-js-sdk

The Node.js client library for the Zeebe Workflow Automation Engine.

zeebe, zeebe.io, microservices, orchestration, bpmn, conductor, Camunda, Netflix, cloud, automation, process, workflow, Uber, Cadence

readme

Zeebe Node.js Client

Compatible with: Camunda Platform 8 Community Extension Lifecycle License

This is a Node.js gRPC client for Zeebe, the workflow engine in Camunda Platform 8. It is written in TypeScript and transpiled to JavaScript in the dist directory.

Comprehensive API documentation is available online.

See CHANGELOG.md to see what has changed with each release.

Get a hosted instance of Zeebe on Camunda Cloud.

Table of Contents

Quick Start

Connection Behaviour

Connecting to a Broker

Job Workers

Client Commands

Other Concerns

Programming with Safety

Development of the Library itself

Quick Start

Install

Add the Library to your Project

npm i zeebe-node

For Zeebe broker versions prior to 1.0.0:

npm i zeebe-node@0

The documentation for the pre-1.0.0 version of the library is available here.

Get Broker Topology

const ZB = require('zeebe-node')

void (async () => {
    const zbc = new ZB.ZBClient()
    const topology = await zbc.topology()
    console.log(JSON.stringify(topology, null, 2))
})()

Deploy a process

const ZB = require('zeebe-node')
const fs = require('fs')

void (async () => {
    const zbc = new ZB.ZBClient() // localhost:26500 || ZEEBE_GATEWAY_ADDRESS

    const res = await zbc.deployProcess('./domain-mutation.bpmn')
    console.log(res)

    // Deploy multiple with an array of filepaths
    await zbc.deployProcess(['./wf1.bpmn', './wf2.bpmn'])

    const buffer = fs.readFileSync('./wf3.bpmn')

    // Deploy from an in-memory buffer
    await zbc.deployProcess({ definition: buffer, name: 'wf3.bpmn' })
})()

Start and service a process

This code demonstrates how to deploy a Zeebe process, create a process instance, and handle a service task using the Zeebe Node.js client. The 'get-customer-record' service task worker checks for the presence of a customerId variable, simulates fetching a customer record from a database, and completes the task with a customerRecordExists variable.

// Import the Zeebe Node.js client and the 'fs' module
const ZB = require('zeebe-node');
const fs = require('fs');

// Instantiate a Zeebe client with default localhost settings or environment variables
const zbc = new ZB.ZBClient();

// Create a Zeebe worker to handle the 'get-customer-record' service task
const worker = zbc.createWorker({
    // Define the task type that this worker will process
    taskType: 'get-customer-record',
    // Define the task handler to process incoming jobs
    taskHandler: job => {
        // Log the job variables for debugging purposes
        console.log(job.variables);

        // Check if the customerId variable is missing and return an error if so
        if (!job.variables.customerId) {
            return job.error('NO_CUSTID', 'Missing customerId in process variables');
        }

        // Add logic to retrieve the customer record from the database here
        // ...

        // Complete the job with the 'customerRecordExists' variable set to true
        return job.complete({
            customerRecordExists: true
        });
    }
});

// Define an async main function to deploy a process, create a process instance, and log the outcome
async function main() {
    // Deploy the 'new-customer.bpmn' process
    const res = await zbc.deployProcess('./new-customer.bpmn');
    // Log the deployment result
    console.log('Deployed process:', JSON.stringify(res, null, 2));

    // Create a process instance of the 'new-customer-process' process, with a customerId variable set
    // 'createProcessInstanceWithResult' awaits the outcome
    const outcome = await zbc.createProcessInstanceWithResult({
        bpmnProcessId: 'new-customer-process',
        variables: { customerId: 457 }
    });
    // Log the process outcome
    console.log('Process outcome', JSON.stringify(outcome, null, 2));
}

// Call the main function to execute the script
main();

Versioning

To make it easy to match the client library to the Zeebe server, the version numbers are mapped so that the major and minor versions match the server release. Patch versions are independent and indicate client updates.

NPM Package version 0.26.x supports Zeebe 0.22.x to 0.26.x.

NPM Package version 1.x supports Zeebe 1.x. It uses the C-based gRPC library by default.

NPM Package version 2.x supports Zeebe 1.x, and requires Node >= 16.6.1, >=14.17.5, or >=12.22.5. It removes the C-based gRPC library and uses the pure JS implementation.

Compatible Node Versions

Version 1.x of the package: Node versions <=16.x. Version 1.x uses the C-based gRPC library and does not work with Node 17. The C-based gRPC library is deprecated and no longer being maintained.

Version 2.x and later of the package: Node versions 12.22.5+, 14.17.5+, or 16.6.1+. Version 2.x uses the pure JS implementation of the gRPC library, and requires a fix to the nghttp2 library in Node (See #201).

Breaking changes in Zeebe 8.1.0

All deprecated APIs are removed in the 8.1.0 package version. If your code relies on deprecated methods and method signatures, you need to use a package version prior to 8.1.0 or update your application code.

Breaking changes in Zeebe 1.0.0

For Zeebe brokers prior to 1.0.0, use the 0.26.z version of zeebe-node. This README documents the Zeebe 1.0.0 API. The previous API is documented here.

Zeebe 1.0.0 contains a number of breaking changes, including the gRPC protocol and the API surface area. You must use a 1.x.y version of the client library with Zeebe 1.0.0 and later.

The pre-1.0.0 API of the Node client has been deprecated, but not removed. This means that your pre-1.0.0 applications should still work, just by changing the version of zeebe-node in the package.json.

gRPC Implementation

From version 2.x, the Zeebe Node client uses the pure JS gRPC client implementation.

For version 1.x, the Zeebe Node client uses the C gRPC client implementation grpc-node by default. The C-based gRPC implementation is deprecated and is not being maintained.

Type difference from other Zeebe clients

Protobuf fields of type int64 are serialised as type string in the Node library. These fields are serialised as numbers (long) in the Go and Java client. See grpc/#7229 for why the Node library serialises them as string. The Process instance key, and other fields that are of type long in other client libraries, are type string in this library. Fields of type int32 are serialised as type number in the Node library.
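
For example, here is a quick check of the serialised types, assuming a deployed 'test-process' model:

const { ZBClient } = require('zeebe-node')

const zbc = new ZBClient()

;(async () => {
    const res = await zbc.createProcessInstance({
        bpmnProcessId: 'test-process',
        variables: {},
    })
    console.log(typeof res.processInstanceKey) // "string" - an int64 field
    console.log(typeof res.version) // "number" - an int32 field
})()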

A note on representing timeout durations

All timeouts are ultimately communicated in milliseconds. They can be specified using the primitive type number, and this is always a number of milliseconds.

All timeouts in the client library can also, optionally, be specified by a time value that encodes the units, using the typed-durations package. You can specify durations for timeouts like this:

const { Duration } = require('zeebe-node')

const timeoutS = Duration.seconds.of(30) // 30s timeout
const timeoutMs = Duration.milliseconds.of(30000) // 30s timeout in milliseconds

Using the value types makes your code more semantically specific.

There are five timeouts to take into account.

The first is the job timeout. This is the amount of time that the broker allocates exclusive responsibility for a job to a worker instance. By default, this is 60 seconds. This is the default value set by this client library. See "Job Workers".

The second is the requestTimeout. Whenever the client library sends a gRPC command to the broker, it has an explicit or implied requestTimeout. This is the amount of time that the gRPC gateway will wait for a response from the broker cluster before returning a gRPC error 4 (DEADLINE_EXCEEDED) response.

If no requestTimeout is specified, then the configured timeout of the broker gateway is used. Out of the box, this is 15 seconds by default.

The most significant use of the requestTimeout is when using the createProcessInstanceWithResult command. If your process will take longer than 15 seconds to complete, you should specify a requestTimeout. See "Start a Process Instance and await the Process Outcome".

The third is the longpoll duration. This is the amount of time that the job worker holds a long poll request to activate jobs open.

The fourth is the maximum back-off delay in client-side gRPC command retries. See "Client-side gRPC retry in ZBClient".

Finally, the connectionTolerance option for ZBClient can also take a typed duration. This value is used to buffer reporting connection errors while establishing a connection - for example with Camunda SaaS, which requires a token exchange as part of the connection process.

Connection Behaviour

Client-side gRPC retry in ZBClient

If a gRPC command method in the ZBClient fails - such as ZBClient.deployProcess() or ZBClient.topology() - the underlying gRPC library will throw an exception.

If no workers have been started, this can be fatal to the process if it is not handled by the application logic. This is especially an issue when a worker container starts before the Zeebe gRPC gateway is available to service requests, and can be inconsistent as this is a race condition.

To mitigate this, the Node client implements some client-side gRPC operation retry logic by default. This behaviour can be configured - or disabled - via options in the client constructor.

  • Operations retry, but only for gRPC error codes 8 and 14 - indicating resource exhaustion (8) or transient network failure (14). Resource exhaustion occurs when the broker starts backpressure due to latency because of load. Network failure can be caused by passing in an unresolvable gateway address (14: DNS Resolution failed), or by the gateway not being ready yet (14: UNAVAILABLE: failed to connect to all addresses).
  • Operations that fail for other reasons, such as deploying an invalid bpmn file or cancelling a process that does not exist, do not retry.
  • Retry is enabled by default, and can be disabled by passing { retry: false } to the client constructor.
  • Values for retry, maxRetries and maxRetryTimeout can be configured via the environment variables ZEEBE_CLIENT_RETRY, ZEEBE_CLIENT_MAX_RETRIES and ZEEBE_CLIENT_MAX_RETRY_TIMEOUT respectively.
  • maxRetries and maxRetryTimeout are also configurable through the constructor options, or through environment variables. By default, if not supplied, the values are:
const { ZBClient, Duration } = require('zeebe-node')

const zbc = new ZBClient(gatewayAddress, {
    retry: true,
    maxRetries: -1, // infinite retries
    maxRetryTimeout: Duration.seconds.of(5)
})

The environment variables are:

ZEEBE_CLIENT_MAX_RETRIES
ZEEBE_CLIENT_RETRY
ZEEBE_CLIENT_MAX_RETRY_TIMEOUT

Retry is provided by promise-retry, with a simple exponential (^2) back-off strategy.

Additionally, the gRPC Client will continually reconnect when in a failed state, such as when the gateway goes away due to pod rescheduling on Kubernetes.

Eager Connection

The ZBClient eagerly connects to the broker by issuing a topology command in the constructor. This allows an onReady event to be emitted. You can disable this (for example, for testing without a broker) by either passing eagerConnection: false to the client constructor options, or setting the environment variable ZEEBE_NODE_EAGER_CONNECTION to false.

onReady(), onConnectionError(), and connected

The client has a connected property that can be examined to determine if it has a gRPC connection to the gateway.
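
For example, you can surface this property in a liveness check (a minimal sketch; the health-check framing is illustrative):

const { ZBClient } = require('zeebe-node')

const zbc = new ZBClient()

// e.g. wire this into an HTTP health-check endpoint
function isZeebeConnected() {
    return zbc.connected === true
}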

The client and the worker can take an optional onReady() and onConnectionError() handler in their constructors, like this:

const { ZBClient, Duration } = require('zeebe-node')

const zbc = new ZBClient({
    onReady: () => console.log(`Connected!`),
    onConnectionError: () => console.log(`Disconnected!`)
})

const zbWorker = zbc.createWorker({
    taskType: 'demo-service',
    taskHandler: handler,
    onReady: () => console.log(`Worker connected!`),
    onConnectionError: () => console.log(`Worker disconnected!`)
})

These handlers are called whenever the gRPC channel is established or lost. As the gRPC channel will often "jitter" when it is lost (rapidly emitting READY and ERROR events at the transport layer), there is a connectionTolerance property that determines how long the connection must be in a connected or failed state before the handler is called. By default this is 3000ms.

You can specify another value either in the constructor or via an environment variable.

To specify it via an environment variable, set ZEEBE_CONNECTION_TOLERANCE to a number of milliseconds.

To set it via the constructor, specify a value for connectionTolerance like this:

const { ZBClient, Duration } = require('zeebe-node')

const zbc = new ZBClient({
    onReady: () => console.log(`Connected!`),
    onConnectionError: () => console.log(`Disconnected!`),
    connectionTolerance: 5000 // milliseconds
})

const zbWorker = zbc.createWorker({
    taskType: 'demo-service',
    taskHandler: handler,
    onReady: () => console.log(`Worker connected!`),
    onConnectionError: () => console.log(`Worker disconnected!`),
    connectionTolerance: Duration.seconds.of(3.5) // 3500 milliseconds
})

As well as the callback handlers, the client and workers extend the EventEmitter class, and you can attach listeners to them for the 'ready' and 'connectionError' events:

const { ZBClient, Duration } = require('zeebe-node')

const zbc = new ZBClient()

const zbWorker = zbc.createWorker({
    taskType: 'demo-service',
    taskHandler: handler,
    connectionTolerance: Duration.seconds.of(3.5)
})

zbWorker.on('ready', () => console.log(`Worker connected!`))
zbWorker.on('connectionError', () => console.log(`Worker disconnected!`))

Initial Connection Tolerance

Some broker connections can initially emit error messages - for example: when connecting to Camunda SaaS, during TLS negotiation and OAuth authentication, the eager commands used to detect connection status will fail, and the library will report connection errors.

Since this is expected behaviour - a characteristic of that particular connection - the library has a configurable "initial connection tolerance". This is a number of milliseconds representing the expected window in which these errors will occur on initial connection.

If the library detects that you are connecting to Camunda SaaS, it sets this window to five seconds (5000 milliseconds). In some environments and under some conditions this may not be sufficient.

You can set an explicit value for this using the environment variable ZEEBE_INITIAL_CONNECTION_TOLERANCE, set to a number of milliseconds.

The effect of this setting is to suppress connection errors during this window, and only report them if the connection did not succeed by the end of the window.
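
For example, to widen the window to ten seconds:

export ZEEBE_INITIAL_CONNECTION_TOLERANCE=10000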

Connecting to a Broker

TLS

The Node client does not use TLS by default.

Enable a secure connection by setting useTLS: true:

const { ZBClient } = require('zeebe-node')

const zbc = new ZBClient(tlsSecuredGatewayAddress, {
    useTLS: true,
})

Via environment variable:

ZEEBE_SECURE_CONNECTION=true

Using a Self-signed Certificate

You can use a self-signed SSL certificate with the Zeebe client. You need to provide the root certificates, the private key and the SSL cert chain as Buffers. You can pass them into the ZBClient constructor:

const fs = require('fs')
const { ZBClient } = require('zeebe-node')

const rootCerts = fs.readFileSync('/path/to/rootCerts')
const privateKey = fs.readFileSync('/path/to/privateKey')
const certChain = fs.readFileSync('/path/to/certChain')

const zbc = new ZBClient({
    useTLS: true,
    customSSL: {
        rootCerts,
        privateKey,
        certChain
    }
})

Or you can put the file paths into the environment in the following variables:

ZEEBE_CLIENT_SSL_ROOT_CERTS_PATH
ZEEBE_CLIENT_SSL_PRIVATE_KEY_PATH
ZEEBE_CLIENT_SSL_CERT_CHAIN_PATH

Enable TLS

ZEEBE_SECURE_CONNECTION=true

In this case, they will be passed to the constructor automatically.

OAuth

In case you need to connect to a secured endpoint with OAuth, you can pass in OAuth credentials. This will enable TLS (unless you explicitly disable it with useTLS: false), and handle the OAuth flow to get / renew a JWT:

const fs = require('fs')
const { ZBClient } = require('zeebe-node')

const zbc = new ZBClient("my-secure-broker.io:443", {
    oAuth: {
        url: "https://your-auth-endpoint/oauth/token",
        audience: "my-secure-broker.io",
        scope: "myScope",
        clientId: "myClientId",
        clientSecret: "randomClientSecret",
        customRootCert: fs.readFileSync('./my_CA.pem'),
        cacheOnDisk: true
    }
})

The cacheOnDisk option will cache the token on disk in $HOME/.camunda, which can be useful in development if you are restarting the service frequently, or are running in a serverless environment, like AWS Lambda.

If the cache directory is not writable, the ZBClient constructor will throw an exception. This is considered fatal, as it can lead to denial of service or hefty bills if you think caching is on when it is not.

The customRootCert argument is optional. It can be used to provide a custom TLS certificate as a Buffer, which will be used while obtaining the OAuth token from the specified URL. If not provided, the CAs provided by Mozilla will be used.

Basic Auth

If you put a proxy in front of the broker with basic auth, you can pass in a username and password:

const { ZBClient } = require('zeebe-node')

const zbc = new ZBClient("my-broker-with-basic-auth.io:443", {
    basicAuth: {
        username: "user1",
        password: "secret",
    },
    useTLS: true
})

Basic Auth will also work without TLS.

Camunda 8 SaaS

Camunda 8 SaaS is a hosted SaaS instance of Zeebe. The easiest way to connect is to use the Zero-conf constructor with the Client Credentials from the Camunda SaaS console as environment variables.

You can also connect to Camunda SaaS by using the camundaCloud configuration option, using the clusterId, clientSecret, and clientId from the Camunda SaaS Console, like this:

const { ZBClient } = require('zeebe-node')

const zbc = new ZBClient({
    camundaCloud: {
        clientId,
        clientSecret,
        clusterId,
        clusterRegion, // optional, defaults to bru-2
    },
})

That's it! Under the hood, the client lib will construct the OAuth configuration for Camunda SaaS and set the gateway address and port for you.

We recommend the Zero-conf constructor with the configuration passed in via environment variables. This allows you to run your application against different environments via configuration.
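
For example (a sketch - copy the exact values for your cluster from the Camunda SaaS console):

export ZEEBE_ADDRESS='<clusterId>.bru-2.zeebe.camunda.io:443'
export ZEEBE_CLIENT_ID='<clientId>'
export ZEEBE_CLIENT_SECRET='<clientSecret>'
export ZEEBE_AUTHORIZATION_SERVER_URL='https://login.cloud.camunda.io/oauth/token'
export ZEEBE_TOKEN_AUDIENCE='zeebe.camunda.io'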

Zero-Conf constructor

The ZBClient has a 0-parameter constructor that takes the config from the environment. This is useful for injecting secrets into your app via the environment, and switching between development and production environments with no change to code.

To use the zero-conf constructor, you create the client like this:

const { ZBClient } = require('zeebe-node')

const zbc = new ZBClient()

With no relevant environment variables set, it will default to localhost on the default port with no TLS.

The following environment variable configurations are possible with the Zero-conf constructor:

From 8.3.0, multi-tenancy:

ZEEBE_TENANT_ID

Camunda SaaS:

ZEEBE_ADDRESS
ZEEBE_CLIENT_SECRET
ZEEBE_CLIENT_ID
ZEEBE_TOKEN_AUDIENCE
ZEEBE_AUTHORIZATION_SERVER_URL

Self-hosted or local broker (no TLS or OAuth):

ZEEBE_ADDRESS

Self-hosted with self-signed SSL certificate:

ZEEBE_CLIENT_SSL_ROOT_CERTS_PATH
ZEEBE_CLIENT_SSL_PRIVATE_KEY_PATH
ZEEBE_CLIENT_SSL_CERT_CHAIN_PATH
ZEEBE_SECURE_CONNECTION=true

Self-hosted or local broker with OAuth + TLS:

ZEEBE_CLIENT_ID
ZEEBE_CLIENT_SECRET
ZEEBE_TOKEN_AUDIENCE
ZEEBE_TOKEN_SCOPE
ZEEBE_AUTHORIZATION_SERVER_URL
ZEEBE_ADDRESS

Multi-tenant self-hosted or local broker with OAuth and no TLS:

ZEEBE_TENANT_ID='<default>'
ZEEBE_SECURE_CONNECTION=false
ZEEBE_ADDRESS='localhost:26500'
ZEEBE_CLIENT_ID='zeebe'
ZEEBE_CLIENT_SECRET='zecret'
ZEEBE_AUTHORIZATION_SERVER_URL='http://localhost:18080/auth/realms/camunda-platform/protocol/openid-connect/token'
ZEEBE_TOKEN_AUDIENCE='zeebe.camunda.io'
ZEEBE_TOKEN_SCOPE='not needed'
CAMUNDA_CREDENTIALS_SCOPES='Zeebe'
CAMUNDA_OAUTH_URL='http://localhost:18080/auth/realms/camunda-platform/protocol/openid-connect/token'

Basic Auth:

ZEEBE_BASIC_AUTH_PASSWORD
ZEEBE_BASIC_AUTH_USERNAME

Job Workers

Types of Job Workers

There are two different types of job worker provided by the Zeebe Node client:

  • The ZBWorker - this worker operates on individual jobs.
  • The ZBBatchWorker - this worker batches jobs on the client, to allow you to batch operations that pool resources. (This worker was introduced in 0.23.0 of the client).

Much of the information in the following ZBWorker section applies also to the ZBBatchWorker. The ZBBatchWorker section covers the features that differ from the ZBWorker.

The ZBWorker Job Worker

The ZBWorker takes a job handler function that is invoked for each job. It is invoked as soon as the worker retrieves a job from the broker. The worker can retrieve any number of jobs in a response from the broker, and the handler is invoked for each one, independently.

The simplest signature for a worker takes a string task type, and a job handler function.

The job handler receives the job object, which has methods that it can use to complete or fail the job, and a reference to the worker itself, which you can use to log using the worker's configured logger (See Logging).

Note: the job handler's second argument is deprecated, and remains for backward compatibility - it is a complete function. In the 1.0 version of the API, the completion methods are available on the job object itself.

const ZB = require('zeebe-node')

const zbc = new ZB.ZBClient()

const zbWorker = zbc.createWorker({
    taskType: 'demo-service',
    taskHandler: handler,
})

function handler(job) {
    zbWorker.log('Task variables', job.variables)

    // Task worker business logic goes here
    const updateToBrokerVariables = {
        updatedProperty: 'newValue',
    }

    return job.complete(updateToBrokerVariables)
}

Here is an example job:


{ key: '578',
  type: 'demo-service',
  jobHeaders:
   { processInstanceKey: '574',
     bpmnProcessId: 'test-process',
     processDefinitionVersion: 1,
     processKey: '3',
     elementId: 'ServiceTask_0xdwuw7',
     elementInstanceKey: '577' },
  customHeaders: '{}',
  worker: 'test-worker',
  retries: 3,
  deadline: '1546915422636',
  variables: { testData: 'something' } }

The worker can be configured with options. To do this, you should use the object parameter constructor.

Shown below are the defaults that apply if you don't supply them:

const { ZBClient, Duration } = require('zeebe-node')

const zbc = new ZBClient()

const zbWorker = zbc.createWorker({
    taskType: 'demo-service',
    taskHandler: handler,
    // the number of simultaneous tasks this worker can handle
    maxJobsToActivate: 32,
    // the amount of time the broker should allow this worker to complete a task
    timeout: Duration.seconds.of(30),
    // One of 'DEBUG', 'INFO', 'NONE'
    loglevel: 'INFO',
    // Called when the connection to the broker cannot be established, or fails
    onConnectionError: () => zbWorker.log('Disconnected'),
    // Called when the connection to the broker is (re-)established
    onReady: () => zbWorker.log('Connected.')
})

Unhandled Exceptions in Task Handlers

Note: this behaviour is for the ZBWorker only. The ZBBatchWorker does not manage this.

When a task handler throws an unhandled exception, the library will fail the job. Zeebe will then retry the job according to the retry settings of the task. Sometimes you want to halt the entire process so you can investigate. To have the library cancel the process on an unhandled exception, pass in {failProcessOnException: true} to the createWorker call:

import { ZBClient } from 'zeebe-node'

const zbc = new ZBClient()

zbc.createWorker({
    taskType: 'console-log',
    taskHandler: maybeFaultyHandler,
    failProcessOnException: true,
})

Completing tasks with success, failure, error, or forwarded

To complete a task, the job object that the task worker handler function receives has complete, fail, and error methods.

Call job.complete() passing in an optional plain old JavaScript object (POJO) - a key:value map. These are variable:value pairs that will be used to update the process state in the broker. They will be merged with existing values. You can set an existing key to null or undefined, but there is no way to delete a key.
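
For example (the variable names are illustrative):

job.complete({
    status: 'VALIDATED',
    legacyField: null, // set an existing key to null - keys cannot be deleted
})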

Call job.fail() to fail the task. You must pass in a string message describing the failure. The client library decrements the retry count, and the broker handles the retry logic. If the failure is a hard failure and should cause an incident to be raised in Operate, then pass in 0 for the optional second parameter, retries:

job.fail('This is a critical failure and will raise an incident', 0)

From version 8.0.0 of the package, used with an 8.0.0 Zeebe broker, you can specify an optional backoff before the reactivation of the job, like this:

job.fail({
    errorMessage: 'Triggering a retry with a two second back-off',
    retryBackOff: 2000,
    retries: 1,
})

Call job.error() to trigger a BPMN error throw event. You must pass in a string error code, and you can pass an optional error message as the second parameter. If no BPMN error catch event exists for the error code, an incident will be raised.

job.error('RECORD_NOT_FOUND', 'Could not find the customer in the database')

From 8.2.5 of the client, you can update the variables in the workflow when you throw a BPMN error in a worker:

job.error({
    errorCode: 'RECORD_NOT_FOUND',
    errorMessage: 'Could not find the customer in the database',
    variables: {
        someVariable: 'someValue'
    }
})

Call job.forward() to release worker capacity to handle another job, without completing the job in any way with the Zeebe broker. This method supports the decoupled job completion pattern. In this pattern, the worker forwards the job to another system - a lambda or a RabbitMQ queue. Some other process is ultimately responsible for completing the job.

Working with Process Variables and Custom Headers

Process variables are available in your worker job handler callback as job.variables, and any custom headers are available as job.customHeaders.

These are read-only JavaScript objects in the Zeebe Node client. However, they are not stored that way in the broker.

Both process variables and custom headers are stored in the broker as a dictionary of named strings. That means that the variables and custom headers are JSON.parsed in the Node client when it fetches the job, and any update passed to the complete() method is JSON.stringified.

If you pass in a circular JSON structure to complete() - like, for example the response object from an HTTP call - it will throw, as this cannot be serialised to a string.

To update a key deep in the object structure of a process variable, you can use the deepmerge utility:

const merge = require('deepmerge')
import { ZBClient } from 'zeebe-node'

const zbc = new ZBClient()

zbc.createWorker({
    taskType: 'some-task',
    taskHandler: job => {
        const { people } = job.variables
        // update bob's age, keeping all his other properties the same
        return job.complete({ people: merge(people, { bob: { age: 23 } }) })
    }
})

When setting custom headers in BPMN tasks, while designing your model, you can put stringified JSON as the value for a custom header, and it will show up in the client as a JavaScript object.
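
For example, if a task in your model has a custom header config with the string value {"retries": 3}, the worker receives it as an object (a sketch - the header name and task type are illustrative):

zbc.createWorker({
    taskType: 'configured-task',
    taskHandler: job => {
        // The stringified JSON header value arrives as a JavaScript object
        console.log(job.customHeaders.config.retries) // 3
        return job.complete()
    },
})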

Process variables and custom headers are untyped in the Zeebe broker, however the Node client in TypeScript mode provides the option to type them to provide safety. You can type your worker as any to turn that off:

// No type checking - totally dynamic and unchecked
zbc.createWorker<any>({
    taskType: 'yolo-jobs',
    taskHandler: (job) => {
        console.log(`Look ma - ${job.variables?.anything?.goes?.toUpperCase()}`)
        job.complete({what: job.variables.could.possibly.go.wrong})
    }
})

See the section Writing Strongly-typed Job Workers for more details.

Constraining the Variables Fetched by the Worker

Sometimes you only need a few specific process variables to service a job. One way you can achieve constraint on the process variables received by a worker is by using input variable mappings on the task in the model.

You can also use the fetchVariable parameter when creating a worker. Pass an array of strings, containing the names of the variables to fetch, to the fetchVariable parameter when creating a worker. Here is an example, in JavaScript:

zbc.createWorker({
    taskType: 'process-favorite-albums',
    taskHandler: job => {
        const { name, albums } = job.variables
        console.log(`${name} has the following albums: ${albums.join(', ')}`)
        job.complete()
    },
    fetchVariable: ['name', 'albums'],
})

If you are using TypeScript, you can supply an interface describing the process variables, and parameterize the worker:

interface Variables {
    name: string
    albums: string[]
}

zbc.createWorker<Variables>({
    taskType: 'process-favorite-albums',
    taskHandler: (job) => {
        const { name, albums = [] } = job.variables
        console.log(`${name} has the following albums: ${albums?.join?.(', ')}`)
        job.complete()
    },
    fetchVariable: ['name', 'albums'],
})

This parameterization does two things:

  • It informs the worker about the expected types of the variables. For example, if albums is a string, calling join on it will fail at runtime. Providing the type allows the compiler to reason about the valid methods that can be applied to the variables.
  • It allows the type-checker to pick up spelling errors in the strings in fetchVariable, by comparing them with the Variables typing.

Note that this does not protect you against run-time exceptions where your typings are incorrect, or the payload simply does not match the definition that you provided.

See the section Writing Strongly-typed Job Workers for more details on run-time safety.

You can turn off the type-safety by typing the worker as any:

zbc.createWorker<any>({
    taskType: 'process-favorite-albums',
    taskHandler: (job) => {
        const { name, albums = [] } = job.variables
        // TS 3.7 safe access to .join _and_ safe call, to prevent run-time exceptions
        console.log(`${name} has the following albums: ${albums?.join?.(', ')}`)
        job.complete()
    },
    fetchVariable: ['name', 'albums'],
})

The "Decoupled Job Completion" pattern

The Decoupled Job Completion pattern uses a Zeebe Job Worker to activate jobs from the broker, and some other asynchronous (remote) system to do the work.

You might activate jobs and then send them to a RabbitMQ queue, or to an AWS lambda. In this case, this worker may have no job outcome to report back to the broker; success or failure will be the responsibility of another part of your distributed system.

The first thing you should do is ensure that you activate the job with a timeout that allows sufficient time for the complete execution of your system. Your worker will not complete the job itself, but the timeout it requests tells the broker how long the expected loop will take to close.

Next, call job.forward() in your job worker handler. This has no side-effect with the broker - so nothing is communicated to Zeebe. The job is still out there with your worker as far as Zeebe is concerned. What this call does is release worker capacity to request more jobs.

If you are using the Zeebe Node library in the remote system, or if the remote system eventually reports back to you (perhaps over a different RabbitMQ queue), you can use the ZBClient methods completeJob(), failJob(), and throwError() to report the outcome back to the broker.

You need at least the job.key, to be able to correlate the result back to Zeebe. Presumably you also want the information from the remote system about the outcome, and any updated variables.

Here is an example:

  • You have a COBOL system that runs a database.
  • Somebody wrote an adapter for this COBOL database. It executes commands over SSH.
  • The adapter is accessible via a RabbitMQ "request" queue, which takes a command and a correlation id, so that its response can be correlated to this request.
  • The adapter sends back the COBOL database system response on a RabbitMQ "response" queue, with the correlation id.
  • It typically takes 15 seconds for the round-trip through RabbitMQ to the COBOL database and back.

You want to put this system into a Zeebe-orchestrated BPMN model as a task.

Rather than injecting a RabbitMQ listener into the job handler, you can "fire and forget" the request using the decoupled job completion pattern.

Here is how you do it:

  • Your worker gets the job from Zeebe.
  • Your worker makes the command and sends it down the RabbitMQ "request" queue, with the job key (job.key) as the correlation id.
  • Your worker calls job.forward()

Here is what that looks like in code:

import { RabbitMQSender } from './lib/my-awesome-rabbitmq-api'
import { ZBClient, Duration } from 'zeebe-node'

const zbc = new ZBClient()

const cobolWorker = zbc.createWorker({
    taskType: 'cobol-insert',
    timeout: Duration.seconds.of(20), // allow 5s over the expected 15s
    taskHandler: job => {
        const { key, variables } = job
        const request = {
            correlationId: key,
            command: `INSERT ${variables.customer} INTO CUSTOMERS`
        }
        RabbitMQSender.send({
            channel: 'COBOL_REQ',
            request
        })
        // Call forward() to release worker capacity
        return job.forward()
    }
})

Now for the response part:

  • Another part of your system listens to the RabbitMQ response queue.
  • It gets a response back from the COBOL adapter.
  • It examines the response, then sends the appropriate outcome to Zeebe, using the jobKey that has been attached as the correlationId
import { RabbitMQListener } from './lib/my-awesome-rabbitmq-api'
import { ZBClient } from 'zeebe-node'

const zbc = new ZBClient()

RabbitMQListener.listen({
    channel: 'COBOL_RES',
    handler: message => {
        const { outcome, correlationId } = message
        if (outcome.SUCCESS) {
            zbc.completeJob({
                jobKey: correlationId,
                variables: {}
            })
        }
        if (outcome.ERROR) {
            zbc.throwError({
                jobKey: correlationId,
                errorCode: "5",
                errorMessage: "The COBOL Database reported an error. Boo!"
            })
        }
    }
})

See also the section "Publish a Message", for a pattern that you can use when it is not possible to attach the job key to the round trip data response.

The ZBBatchWorker Job Worker

The ZBBatchWorker Job Worker batches jobs before calling the job handler. Its fundamental differences from the ZBWorker are:

  • Its job handler receives an array of one or more jobs.
  • The handler is not invoked immediately, but rather when enough jobs are batched, or a job in the batch is at risk of being timed out by the Zeebe broker.

You can use the batch worker if you have tasks that benefit from processing together, but are not related in the BPMN model.

An example would be a high volume of jobs that require calls to an external system, where you have to pay per call to that system. In that case, you may want to batch up jobs, make one call to the external system, then update all the jobs and send them on their way.

The batch worker works on a first-of batch size or batch timeout basis.

You must configure both jobBatchMinSize and jobBatchMaxTime. Whichever condition is met first will trigger the processing of the jobs:

  • Enough jobs are available to the worker to satisfy the minimum job batch size;
  • The batch has been building for the maximum amount of time - "we're doing this now, before the earliest jobs in the batch time out on the broker".

Be sure to specify a timeout for your worker that is jobBatchMaxTime plus the expected latency of the external call plus your processing time and network latency, to avoid the broker timing out your batch worker's lock and making the jobs available to another worker. That would defeat the whole purpose.

Here is an example of using the ZBBatchWorker:

import { API } from './lib/my-awesome-external-api'
import { ZBClient, BatchedJob, Duration } from 'zeebe-node'

const zbc = new ZBClient()

// Helper function to find a job in the batch by its key
const findJobByKey = jobs => key => jobs.find(job => job.key === key)

const handler = async (jobs: BatchedJob[]) => {
    console.log("Let's do this!")
    // Construct some hypothetical payload with correlation ids and requests
    const req = jobs.map(job => ({ id: job.key, data: job.variables.request }))
    // An uncaught exception will not be managed by the library
    try {
        // Our API wrapper turns that into a request, and returns
        // an array of results with ids
        const outcomes = await API.post(req)
        // Construct a find function for these jobs
        const getJob = findJobByKey(jobs)
        // Iterate over the results and call the complete method on the corresponding job,
        // passing in the correlated outcome of the API call
        outcomes.forEach(res => getJob(res.id)?.complete(res.data))
    } catch (e) {
        jobs.forEach(job => job.fail(e.message))
    }
}

const batchWorker = zbc.createBatchWorker({
    taskType: 'get-data-from-external-api',
    taskHandler: handler,
    jobBatchMinSize: 10, // at least 10 at a time
    jobBatchMaxTime: 60, // or every 60 seconds, whichever comes first
    timeout: Duration.seconds.of(80) // 80 second timeout gives us at least 20 seconds to process
})

See this blog post for some more details on the implementation.

Long polling

From Zeebe 0.21 onward, long polling is supported for clients, and is used by default. Rather than polling continuously for work and getting nothing back, a client can poll once and leave the request open until work appears. This reduces network traffic and CPU utilization in the server. Every JobActivation Request is appended to the event log, so continuous polling can significantly impact broker performance, especially when an exporter is loaded (see here).

Long polling sends the ActivateJobs command to the broker, and waits for up to the long poll interval for jobs to be available, rather than returning immediately with an empty response if no jobs are available at that moment.

The default long poll duration is 30s.

To use a different long polling duration, pass in a long poll timeout in milliseconds to the client. All workers created with that client will use it. Alternatively, set a period per-worker.

Long polling for workers is configured in the ZBClient like this:

const { ZBClient, Duration } = require('zeebe-node')

const zbc = new ZBClient('serverAddress', {
    longPoll: Duration.minutes.of(10), // Ten minutes - inherited by workers
})

const longPollingWorker = zbc.createWorker({
    taskType: 'task-type',
    taskHandler: handler,
    longPoll: Duration.minutes.of(2), // override client, poll 2m
})

Poll Interval

The poll interval is a timer that fires on the configured interval and sends an ActivateJobs command if no pending command is currently active. By default, this is set to 300ms. This guarantees that there will be a minimum of 300ms between ActivateJobs commands, which prevents flooding the broker.

Too many ActivateJobs requests in a given period can cause broker backpressure to kick in, and the gateway to return a gRPC error code 8 (RESOURCE_EXHAUSTED).

You can configure this with the pollInterval option in the client constructor, in which case all workers inherit it as their default. You can also override this by specifying a value in the createWorker call:

const { ZBClient, Duration } = require('zeebe-node')

const zbc = new ZBClient({
    pollInterval: Duration.milliseconds.of(500),
})

const worker = zbc.createWorker({
    taskType: 'send-email',
    taskHandler: sendEmailWorkerHandler,
    pollInterval: Duration.milliseconds.of(750),
})

Client Commands

Deploy Process Models and Decision Tables

From version 8 of Zeebe, deployProcess is deprecated in favor of deployResource, which allows you to deploy both process models and DMN tables.

You can deploy a resource as a buffer, or by passing a filename - in which case the client library will load the file into a buffer for you.

Deploy Process Model

By passing a filename, and allowing the client library to load the file into a buffer:

async function deploy() {
    const zbc = new ZBClient()
    const result = await zbc.deployResource({
        processFilename: `./src/__tests__/testdata/Client-DeployWorkflow.bpmn`,
    })
}

By passing a buffer, and a name:

async function deploy() {
    const zbc = new ZBClient()
    const process = fs.readFileSync(
        `./src/__tests__/testdata/Client-DeployWorkflow.bpmn`
    )
    const result = await zbc.deployResource({
        process,
        name: `Client-DeployWorkflow.bpmn`,
    })
}

Deploy DMN Table

By passing a filename, and allowing the client library to load the file into a buffer:

async function deploy() {
    const zbc = new ZBClient()
    const result = await zbc.deployResource({
        decisionFilename: `./src/__tests__/testdata/quarantine-duration.dmn`,
    })
}

By passing a buffer, and a name:

async function deploy() {
    const zbc = new ZBClient()
    const decision = fs.readFileSync(
        `./src/__tests__/testdata/quarantine-duration.dmn`
    )
    const result = await zbc.deployResource({
        decision,
        name: `quarantine-duration.dmn`,
    })
}

Deploy Form

From 8.3.1, you can deploy a form to the Zeebe broker:

async function deploy() {
    const zbc = new ZBClient()
    const form = fs.readFileSync(
        './src/__tests__/testdata/form_1.form'
    )
    const result = await zbc.deployResource({
        form,
        name: 'form_1.form',
    })
}

Start a Process Instance

const ZB = require('zeebe-node')

;(async () => {
    const zbc = new ZB.ZBClient('localhost:26500')
    const result = await zbc.createProcessInstance({
        bpmnProcessId: 'test-process',
        variables: {
            testData: 'something'
        }
    })
    console.log(result)
})()

Example output:


{ processKey: '3',
  bpmnProcessId: 'test-process',
  version: 1,
  processInstanceKey: '569' }

Start a Process Instance of a specific version of a Process definition

From version 0.22 of the client onward:

const ZB = require('zeebe-node')

;(async () => {
    const zbc = new ZB.ZBClient('localhost:26500')
    const result = await zbc.createProcessInstance({
        bpmnProcessId: 'test-process',
        variables: {
            testData: 'something',
        },
        version: 5,
    })
    console.log(result)
})()

Start a Process Instance and await the Process Outcome

From version 0.22 of the broker and client, you can await the outcome of a process end-to-end execution:

async function getOutcome() {
    const result = await zbc.createProcessInstanceWithResult({
        bpmnProcessId: processId,
        variables: {
            sourceValue: 5
        }
    })
    return result
}

Be aware that by default, this will throw an exception if the process takes longer than 15 seconds to complete.

To override the gateway's default timeout for a process that needs more time to complete:

const { ZBClient, Duration } = require('zeebe-node')

const zbc = new ZBClient()

const result = await zbc.createProcessInstanceWithResult({
    bpmnProcessId: processId,
    variables: {
        sourceValue: 5,
        otherValue: 'rome',
    },
    requestTimeout: Duration.seconds.of(25),
    // also works supplying a number of milliseconds
    // requestTimeout: 25000
})

Publish a Message

You can publish a message to the Zeebe broker that will be correlated with a running process instance:

const { ZBClient, Duration } = require('zeebe-node')
const uuid = require('uuid')

const zbc = new ZBClient()

zbc.publishMessage({
    correlationKey: 'value-to-correlate-with-process-variable',
    messageId: uuid.v4(),
    name: 'message-name',
    variables: { valueToAddToProcessVariables: 'here', status: 'PROCESSED' },
    timeToLive: Duration.seconds.of(10), // seconds
})

When would you do this? Well, the sky is not even the limit when it comes to thinking creatively about building a system with Zeebe - and here's one concrete example to get you thinking:

Recall the example of the remote COBOL database in the section "The "Decoupled Job Completion" pattern". We're writing code to allow that system to participate in a BPMN-modelled process orchestrated by Zeebe.

But what happens if the adapter for that system has been written in such a way that there is no opportunity to attach metadata to it? In that case we have no opportunity to attach a job key. Maybe you send the fixed data for the command, and you have to correlate the response based on those fields.

Another example: think of a system that emits events, and has no knowledge of a running process. An example from one system that I orchestrate with Zeebe is Minecraft. A logged-in user in the game performs some action, and code in the game emits an event. I can catch that event in my Node-based application, but I have no knowledge of which running process to target - and the event was not generated from a BPMN task providing a worker with the complete context of a process.

In these two cases, I can publish a message to Zeebe, and let the broker figure out which processes are:

  • Sitting at an intermediate message catch event waiting for this message; or
  • In a sub-process that has a boundary event that will be triggered by this message; or
  • Would be started by a message start event, on receiving this message.

The Zeebe broker correlates a message to a running process instance not on the job key - but on the value of one of the process variables (for intermediate message events) and the message name (for all message events, including start messages).

So the response from your COBOL database system, sans job key, is sent back to Zeebe from the RabbitMQListener not via completeJob(), but with publishMessage(), and the value of the payload is used to figure out which process it is for.

In the case of the Minecraft event, a message is published to Zeebe with the Minecraft username, and that is used by Zeebe to determine which processes are running for that user and are interested in that event.

See the article "Zeebe Message Correlation" for a complete example with code.

Publish a Start Message

You can also publish a message targeting a Message Start Event. In this case, the correlation key is optional, and all Message Start events that match the name property will receive the message.

You can use the publishStartMessage() method to publish a message with no correlation key (it will be set to a random uuid in the background):

const { ZBClient, Duration } = require('zeebe-node')
const uuid = require('uuid')

const zbc = new ZBClient('localhost:26500')
zbc.publishStartMessage({
    messageId: uuid.v4(),
    name: 'message-name',
    variables: { initialProcessVariable: 'here' },
    timeToLive: Duration.seconds.of(10), // seconds
})

Both normal messages and start messages can be published idempotently by setting both the messageId and the correlationKey. They will only ever be correlated once. See: A message can be published idempotent.
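
For example (the ids here are illustrative):

zbc.publishMessage({
    // A deterministic messageId plus a correlationKey makes the publish idempotent:
    // the broker will only ever correlate this message once
    messageId: 'order-12345-payment-received',
    correlationKey: 'order-12345',
    name: 'PAYMENT_RECEIVED',
    variables: { paid: true },
    timeToLive: Duration.seconds.of(60),
})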

Activate Jobs

If you have some use case that doesn't fit the existing workers, you can write your own custom workers using the ZBClient.activateJobs() method. It takes an ActivateJobsRequest object, and returns a stream for that call.

Attach a listener to the stream's 'data' event, and it will be called with an ActivateJobsResponse object if there are jobs to work on.

To complete these jobs, use the ZBClient methods completeJob(), failJob(), and throwError().

For more details, read the source code of the library, particularly the ZBWorkerBase class. This is an advanced use case, and the existing code in the library is the best documentation.
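
As a starting point, here is a rough sketch of a single activation request. The field names follow the gRPC ActivateJobsRequest message; verify the exact signature against the library source before relying on it:

const { ZBClient } = require('zeebe-node')

const zbc = new ZBClient()

;(async () => {
    // await tolerates either a direct stream or a Promise of one
    const stream = await zbc.activateJobs({
        type: 'custom-task', // the task type to activate
        worker: 'my-custom-worker', // the worker name reported to the broker
        timeout: 60000, // job lock duration, in milliseconds
        maxJobsToActivate: 10,
        requestTimeout: 30000, // long poll duration, in milliseconds
    })
    stream.on('data', res => {
        // res.jobs is the array of activated jobs
        res.jobs.forEach(job =>
            zbc.completeJob({ jobKey: job.key, variables: {} })
        )
    })
})()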

Other Concerns

Graceful Shutdown

To drain workers, call the close() method of the ZBClient. This causes all workers using that client to stop polling for jobs, and returns a Promise that resolves when all active jobs have either finished or timed out.

console.log('Closing client...')
zbc.close().then(() => console.log('All workers closed'))
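
A common pattern is to drain on SIGTERM (a sketch, assuming a constructed zbc):

process.on('SIGTERM', async () => {
    console.log('Draining workers...')
    await zbc.close() // resolves when active jobs have finished or timed out
    process.exit(0)
})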

Logging

Control the log output for the client library by setting the ZBClient log level. Valid log levels are NONE (suppress all logging), ERROR (log only exceptions), INFO (general logging), or DEBUG (verbose logging). You can set this in the client constructor:

const zbc = new ZBClient('localhost', { loglevel: 'DEBUG' })

And also via the environment:

ZEEBE_NODE_LOG_LEVEL='ERROR' node start.js

By default the library uses console.info and console.error for logging. You can also pass in a custom logger, such as pino:

const logger = require('pino')()
const zbc = new ZBClient({ stdout: logger })

From version v0.23.0-alpha.1, the library logs human-readable logs by default, using the ZBSimpleLogger. If you want structured logs as stringified JSON, pass in ZBJsonLogger to the constructor stdout option, like this:

const { ZBJsonLogger, ZBClient } = require('zeebe-node')
const zbc = new ZBClient({ stdout: ZBJsonLogger })

You can also control this via environment variables:

export ZEEBE_NODE_LOG_TYPE=SIMPLE  # Simple Logger (default)
export ZEEBE_NODE_LOG_TYPE=JSON  # JSON Logger

Generating TypeScript constants for BPMN Models

Message names and Task Types are untyped magic strings. You can generate type information to avoid some classes of errors.

0.22.0-alpha.5 and above

Install the package globally:

npm i -g zeebe-node

Now you have the command zeebe-node <filename> that parses a BPMN file and emits type definitions.

All versions

The BpmnParser class provides a static method generateConstantsForBpmnFiles(). This method takes a filepath and returns TypeScript definitions that you can use to avoid typos in your code, and to reason about the completeness of your task worker coverage.

const ZB = require('zeebe-node')
;(async () => {
    console.log(await ZB.BpmnParser.generateConstantsForBpmnFiles(processFile))
})()

This will produce output similar to:

// Autogenerated constants for msg-start.bpmn

export enum TaskType {
    CONSOLE_LOG = "console-log"
}

export enum MessageName {
    MSG_EMIT_FRAME = "MSG-EMIT_FRAME",
    MSG_START_JOB = "MSG-START_JOB"
}

Generating code from a BPMN Model file

You can scaffold your worker code from a BPMN file with the zeebe-node command. To use this command, install the package globally with:

npm i -g zeebe-node

Pass in the path to the BPMN file, and it will output a file to implement it:

zeebe-node my-model.bpmn

Writing Strongly-typed Job Workers

You can provide interfaces to get design-time type safety and intellisense on the process variables passed to a worker job handler, the custom headers that it will receive, and the variables that it will pass back to Zeebe in the job.complete() call:

interface InputVariables {
    name: string,
    age: number,
    preferences: {
        beverage: 'Coffee' | 'Tea' | 'Beer' | 'Water',
        color: string
    }
}

interface OutputVariables {
    suggestedGift: string
}

interface CustomHeaders {
    occasion: 'Birthday' | 'Christmas' | 'Hannukah' | 'Diwali'
}

const giftSuggester = zbc.createWorker<
    InputVariables,
    CustomHeaders,
    OutputVariables>
    ('get-gift-suggestion', (job) => {
        const suggestedGift = `${job.customHeaders.occasion} ${job.variables.preferences.beverage}`
        job.complete({ suggestedGift })
})

If you decouple the declaration of the job handler from the createWorker call, you will need to explicitly specify its type, like this:

import { ZBWorkerTaskHandler } from 'zeebe-node'

const getGiftSuggestion: ZBWorkerTaskHandler<InputVariables, CustomHeaders, OutputVariables> = (job) => {
    const suggestedGift = `${job.customHeaders.occasion} ${job.variables.preferences.beverage}`
    return job.complete({ suggestedGift })
}

const giftSuggester = zbc.createWorker({
    taskType: 'get-gift-suggestion',
    taskHandler: getGiftSuggestion
})

Run-time Type Safety

The parameterization of the client and workers helps to catch errors in code, and if your interface definitions are good, can go a long way to making sure that your workers and client emit the correct payloads and have a strong expectation about what they will receive, but it does not give you any run-time safety.

Your type definition may be incorrect, or the variables or custom headers may simply not be there at run-time, as there is no type checking in the broker, and other factors are involved, such as tasks with input and output mappings, and data added to the process variables by REST calls and other workers.

You should consider:

  • Writing interface definitions for your payloads to get design-time assist for protection against spelling errors as you demarshal and update variables.
  • Testing for the existence of variables and properties on payloads, and writing defensive pathways to deal with missing properties. If you mark everything as optional in your interfaces, the type-checker will force you to write that code.
  • Surfacing code exceptions operationally to detect and diagnose mismatched expectations.
  • If you want to validate inputs and outputs to your system at runtime, you can use io-ts, as shown in the sketch after this list. Once data goes into that, it either exits through an exception handler, or is guaranteed to have the shape of the defined codec at run-time.
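
Here is a minimal sketch of the io-ts approach, reusing the task type from the examples above (io-ts decode returns an fp-ts Either):

const t = require('io-ts')
const { isLeft } = require('fp-ts/Either')

// Codec describing the variables this worker expects
const InputVariables = t.type({
    name: t.string,
    albums: t.array(t.string),
})

zbc.createWorker({
    taskType: 'process-favorite-albums',
    taskHandler: job => {
        const decoded = InputVariables.decode(job.variables)
        if (isLeft(decoded)) {
            // Payload does not match the codec - fail with zero retries to raise an incident
            return job.fail('Malformed payload: missing or mistyped variables', 0)
        }
        const { name, albums } = decoded.right // guaranteed shape at run-time
        console.log(`${name} has ${albums.length} albums`)
        return job.complete()
    },
})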

As with everything, it is a balancing act / trade-off between correctness, safety, and speed. You do not want to lock everything down while you are still exploring.

I recommend the following scale, to match the maturity of your system:

  • Start with <any> typing for the workers; then
  • Develop interfaces to describe the DTOs represented in your process variables;
  • Use optional types on those interfaces to check your defensive programming structures;
  • Lock down the run-time behaviour with io-ts as the boundary validator.

You may choose to start with the DTOs. Anyway, there are options.

Developing Zeebe Node

The source is written in TypeScript in src, and compiled to ES6 in the dist directory.

To build:

npm run build

To start a watcher to build the source and API docs while you are developing:

npm run dev

Tests

Tests are written in Jest, and live in the src/__tests__ directory. To run the unit tests:

npm t

Integration tests are in the src/__tests__/integration directory.

They require a Zeebe broker to run. You can start a dockerised broker:

cd docker
docker-compose up

And then run them manually:

npm run test:integration

For the failure test, you need to run Operate and manually verify that an incident has been raised.

Writing Tests

Zeebe is inherently stateful, so integration tests need to be carefully isolated so that workers from one test do not service tasks in another test. Jest runs tests in a random order, so intermittent failures are the outcome of tests that mutate shared state.

The tests use a templating function to replace the process id, task types and message names in the bpmn model to produce distinct, isolated namespaces for each test and each test run.
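
The idea, as a minimal sketch (the helper name and ids are illustrative, not the actual test utility):

const fs = require('fs')

// Rewrite ids in the BPMN XML so each test run gets its own namespace
function createUniqueBpmn(filepath, runId) {
    return fs
        .readFileSync(filepath, 'utf8')
        .replace(/test-process/g, `test-process-${runId}`)
        .replace(/console-log/g, `console-log-${runId}`)
}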

Contributors

  • Josh Wulf
  • Colin Raddatz
  • Jarred Filmer
  • Timothy Colbert
  • Olivier Albertini
  • Patrick Dehn

changelog

8.3.2

New Features

New shiny stuff

  • Added support for providing a value for a scope field in the OAuth request. This can be set with environment variable ZEEBE_TOKEN_SCOPE, or by passing a scope field as part of the oAuth config options for a ZBClient. This is needed to support OIDC / EntraID. Thanks to @nikku for the implementation. See PR #363 for more details.

8.3.1

New Features

New shiny stuff

  • You can now deploy forms to the Zeebe broker using ZBClient.deployResource(). See #332 for more details.

8.3.0

Breaking changes

Changes in APIs or behaviour that may affect existing applications that use zeebe-node.

  • Several method signatures for CreateProcessInstance and CreateProcessInstanceWithResult have been removed, leaving only the method that takes an object parameter. See #330 for more details.

Known Issues

Things that don't work or don't work as expected, and which will be addressed in a future release

  • The onConnectionError event fires correctly for Camunda SaaS, but fires a false positive when connecting to a Self-Managed instance. See #340 for more details.

New Features

New shiny stuff.

  • Camunda Platform 8.3.0 introduces multi-tenancy. To support this, the Node.js client adds an optional tenantId parameter to DeployResource, DeployProcess, CreateProcessInstance, CreateProcessInstanceWithResult, and PublishMessage. You can also specify a tenantId in the ZBClient constructor or via the environment variable ZEEBE_TENANT_ID. If you specify it via the environment or the constructor, it is transparently added to all method invocations (see the sketch after this list). See #330 for more details.
  • @grpc/grpc-js has been updated to 1.9.7, and @grpc/proto-loader has been updated to 0.7.10.
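
A minimal sketch of the multi-tenancy support described above, with placeholder tenant and process ids:

import { ZBClient } from 'zeebe-node'

// A default tenant for all commands; setting ZEEBE_TENANT_ID has the same effect
const zbc = new ZBClient({ tenantId: 'green' })

void (async () => {
    // Or specify the tenant on an individual command
    await zbc.createProcessInstance({
        bpmnProcessId: 'order-process',
        variables: {},
        tenantId: 'green',
    })
})()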

Fixes

Things that were broken and are now fixed.

  • An error message "Grpc Stream Error: 16 UNAUTHENTICATED: Failed to parse bearer token, see cause for details" would be logged intermittently. This was because under particular conditions an expired token cached on disk could be used for API calls. To prevent this, the disk-cached token is evicted at the same time as the in-memory token. See #336 for more details.
  • The onReady and onConnection event tests now pass for Camunda SaaS. The onReady event fires correctly for Self-Managed started with docker-compose. See #215 and #340 for more details.

Version 8.2.5

New Features

New shiny stuff.

  • Throwing a BPMN Error, either from the ZBClient or in the job handler of a ZBWorker, previously accepted only an error message and an error code. The gRPC API for ThrowError accepts a variables field, but the Node client did not allow you to set variables along with the error code and message. The Node client now accepts an object signature for job.error that includes a variables field, as does ZBClient.throwError, allowing you to set variables when throwing a BPMN error. See #323, the README file, and the Client API documentation for more details.
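
A sketch of the new object signature inside a worker. The task type, error code, and variables are illustrative, and the field names mirror the gRPC ThrowError request:

import { ZBClient } from 'zeebe-node'

const zbc = new ZBClient()

zbc.createWorker({
    taskType: 'get-customer-record',
    taskHandler: job =>
        job.error({
            errorCode: 'NO_CUSTID',
            errorMessage: 'Missing customerId in process variables',
            // New in 8.2.5: variables can accompany the BPMN Error
            variables: { lookupAttempted: true },
        }),
})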

Chores

Things that shouldn't have a visible impact.

  • Unit tests used a unique process model for each test run. As a result, the number of deployed process models in a cluster increased over time, until a SaaS cluster would eventually fail due to the sharding of ElasticSearch. Unit tests have been refactored to reuse process models. This has no impact on end-users, but for developers it means that you can use the same cluster for repeated unit test runs.

Version 8.2.4

Fixes

Things that were broken and are now fixed.

  • Custom root certificates were not being passed to the Camunda SaaS OAuth provider. This caused a failure to connect when an SSL terminating firewall that uses a custom root certificate sits between the client and Camunda SaaS. Custom root certificates are now passed to the Camunda SaaS OAuth provider, and are used when making the connection. Thanks to @nikku for reporting this and providing the patch. See #319 for more details.

Version 8.2.3

Fixes

Things that were broken and are now fixed.

  • The object signature for job.fail() did not correctly apply an explicit value for retries. As a result, job retries would decrement automatically if this signature and option were used. The value is now correctly parsed and applied, and job retry count can be explicitly set in the job.fail() command with the object signature. Thanks to @patozgg for reporting this. See #316 for more details.

Version 8.2.2

Chores

Things that shouldn't have a visible impact.

  • Updated uuid dependency from v3 to v7. This avoids a warning message at install time that "versions prior to 7 may use Math.random()".

Version 8.2.1

New Features

New shiny stuff.

  • Add ZBClient.broadcastSignal, enabling the client to broadcast a signal. See #312 for more details.

Fixes

Things that were broken and are now fixed.

  • Previously, the timeToLive property of ZBClient.publishMessage was required, although it was documented as optional. In this release, both timeToLive and variables have been made optional. If no value is supplied for timeToLive, it defaults to 0. Thanks to @nhomble for raising this issue. See #311 for more details.
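
A sketch with a placeholder message name and correlation key:

import { ZBClient } from 'zeebe-node'

const zbc = new ZBClient()

void (async () => {
    // timeToLive (now defaulting to 0) and variables can both be omitted
    await zbc.publishMessage({
        name: 'ORDER_PAID',
        correlationKey: 'order-12345',
    })
})()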

Version 8.2.0

New Features

New shiny stuff.

  • Add ZBClient.evaluateDecision, enabling a DMN table to be evaluated on a Zeebe 8.2 and later broker. See #296 for more details.

Version 8.1.8

Fixes

Things that were broken and are now fixed.

  • The OAuth token was being evicted from the in-memory cache immediately, resulting in the file cache being used for every request. This release correctly sets the expiry time for the in-memory token cache. See #307 for more details. Thanks to @walliee for the fix.

Version 8.1.7

Fixes

Things that were broken and are now fixed.

  • With cacheOnDisk disabled, the OAuthProvider could cause excessive requests to the token endpoint, leading to blacklisting and denial-of-service. This version makes several adjustments to mitigate this: it caches the token in memory, reuses a single inflight request to the token endpoint, and backs off the token endpoint on a request failure. See #301 for more details. Thanks to @nhomble for raising this issue.

Version 8.1.6

Chores

Things that shouldn't have a visible impact.

Version 8.1.5

New Features

New shiny stuff.

  • The ZBClient now implements the modifyProcessInstance API, introduced in Zeebe 8.1. This allows you to modify a running process instance, moving execution tokens and changing variables. This can be used, for example, to migrate a running process instance to a new version of the process model. See #294 for more details.
  • The ZBClient createProcessInstance method now allows you to specify startInstructions (introduced in Zeebe 8.1), allowing you to start a new process instance from an arbitrary point. Along with modifyProcessInstance, this is a powerful primitive for building migration functionality. See #295 for more details.
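
A rough sketch of both features together. The process and element ids are placeholders, and the request shapes mirror the underlying gRPC API, so check them against the typed client before relying on them:

import { ZBClient } from 'zeebe-node'

const zbc = new ZBClient()

void (async () => {
    // Start a new instance from an arbitrary element, not the start event
    const instance = await zbc.createProcessInstance({
        bpmnProcessId: 'order-process',
        variables: {},
        startInstructions: [{ elementId: 'Activity_ShipOrder' }],
    })

    // Move an execution token to another element in the running instance
    await zbc.modifyProcessInstance({
        processInstanceKey: instance.processInstanceKey,
        activateInstructions: [{ elementId: 'Activity_NotifyCustomer' }],
    })
})()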

Version 8.1.4

Fixes

Things that were broken and are now fixed.

  • The @grpc dependencies are now pinned to a specific version - 1.8.7 for grpc-js and 0.7.4 for proto-loader. This is to avoid broken upstream dependencies impacting installs. Previously, with the dependency unpinned, an install on different days could result in a library that worked, or did not work, depending on the state of the upstream libraries. Now, the same dependencies are installed every time, resulting in a consistent experience. Thanks to @nikku and @barmac from the Camunda Modeler team for identifying this. See #290 for more context.
  • The docker subdirectory is back, with a docker-compose.yml file to start a local broker for testing purposes. See #289 for more details.

New Features

New shiny stuff.

  • A custom SSL certificate can now be used for the OAuth endpoint. The got library used for the token exchange needs the certificate explicitly, and it can now be passed in as a customRootCert property of the oAuth property in the ZBClient constructor. Thanks to luca-waldmann-cimt for the feature. See #284 for more details.

Version 8.1.2

Fixes

Things that were broken and are now fixed.

  • In 8.1.1, the update to the version of got introduced a regression that broke the OAuth token request with certain gateway configurations. This is now reverted, and a test has been introduced to ensure this regression does not happen again. See #280 for more details.

New Features

New shiny stuff.

  • Applications can now extend the user agent identifier by setting a value for the environment variable ZEEBE_CLIENT_CUSTOM_AGENT_STRING. This will be appended to the standard user agent string. See #279 for more details.

Version 8.1.1

Chores

Things that shouldn't have a visible impact.

Version 8.1

Breaking changes

Changes in APIs or behaviour that may affect existing applications that use zeebe-node.

  • Remove all deprecated APIs. All methods and interfaces that were marked as deprecated in the 1.3.0 release have been removed. All support for application code using the pre-1.0 Zeebe API is now gone. You will need to update your application code to refactor the deprecated methods and interfaces, or stay on version 8.0.3 of the package.

Fixes

Things that were broken and are now fixed.

  • Previously, the connectionTolerance option to createWorker did not take a MaybeTimeDuration, requiring users to provide a number (the value unit is milliseconds). The signature has now been fixed, and connectionTolerance can now take a number or a typed Duration. See #260 for more details. Thanks to @dancrumb for reporting this.
  • Previously, the autogenerated code for a BPMN model used the deprecated worker constructor and did not return the job acknowledgement token. It now uses the object constructor and correctly returns the job acknowledgement token. See #257 for more details. Thanks to @megankirkbride for reporting this issue.
  • Previously, the OAuth token request sent by the library used JSON encoding. This worked with Camunda SaaS, but would fail with Keycloak in self-managed. The library now correctly encodes the request as x-www-form-urlencoded. See #272 for more details. Thanks to @AdrianErnstLGLN for reporting this issue and providing a patch.

Version 8.0.3

Fixes

Things that were broken and are now fixed.

  • Previously, the fetchVariable option passed to createWorker had no effect. All variables were always fetched by workers. This option setting is now respected, allowing you to constrain the variables fetched by workers. See #264 for details. Thanks to @Veckatimest for reporting this.

Version 8.0.2

Fixes

Things that were broken and are now fixed.

  • Custom SSL certificates configured via environment variables now work correctly. See PR #263 for the details. Thanks to @barmac for the PR.

Version 8.0.0

Version 8.0.0 is the release to support Camunda Platform 8. The semver change does not denote a breaking API change. It's a product marketing alignment, rather than a technical semver change.

New Features

New shiny stuff.

  • Zeebe 8.0.0 and later support an optional retry backoff for failed jobs. This is a communication to the broker about how long it should delay before making the job available for activation again. This is implemented as a new interface for job.fail. See #248 for more details.
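
A sketch of the new signature inside a task handler. The task type and values are illustrative, and the field names follow the gRPC FailJob request:

import { ZBClient } from 'zeebe-node'

const zbc = new ZBClient()

zbc.createWorker({
    taskType: 'collect-payment',
    taskHandler: job =>
        job.fail({
            errorMessage: 'Payment gateway unavailable',
            retries: job.retries - 1,
            retryBackOff: 2000, // milliseconds before the job may be activated again
        }),
})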

Version 2.4.0

Breaking changes

Changes in APIs or behaviour that may affect existing applications that use zeebe-node.

  • The C-based gRPC implementation has been removed in this release. It is unmaintained, and does not build with Node 17. The Zeebe Node client now uses the pure JS gRPC implementation and requires Node version 12.22.5+, 14.17.5+, or 16.6.1+. See #201 and #247 for more details.

Known Issues

Things that don't work or don't work as expected, and which will be addressed in a future release

  • The onConnectionError and onReady events do not work as expected. Applications that rely on these should not upgrade until this is fixed. See #215.

Version 1.3.5

Fixes

Things that were broken and are now fixed.

  • Incident resolution via ZBClient.resolveIncident() now works. Thanks to mrsateeshp for the Pull Request. See #242 for more details.
  • Auth token retries now have an automatic back-off to avoid saturating the endpoint and getting blacklisted if invalid credentials are supplied. See #244 for more details.

Version 1.3.3

Breaking changes

Changes in APIs or behaviour that may affect existing applications that use zeebe-node.

  • Previously, you could pass an entire URL to the clusterId field in the camundaCloud option in the ZBClient constructor. The library would parse this and extract the cluster ID. With the changes to support multiple regions, this no longer works. From version 1.4.0, you must pass in only the cluster Id, not the complete URL. See #232.

New Features

New shiny stuff.

  • With Camunda Cloud 1.1, the DNS schema for the hosted service has been upgraded to include regions. To support this, the camundaCloud object in the ZBClient constructor now has an optional clusterRegion field. When no value is specified it defaults to bru-2 (Belgium). See #232.
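
Configuration might look like the following sketch, with placeholder credentials and an illustrative region code:

import { ZBClient } from 'zeebe-node'

const zbc = new ZBClient({
    camundaCloud: {
        clusterId: 'my-cluster-id',
        clusterRegion: 'usa-1', // omit to default to bru-2
        clientId: 'myClientId',
        clientSecret: 'mySecret',
    },
})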

Chores

Things that shouldn't have a visible impact.

  • Package dependencies have been updated to pass Snyk vulnerability scanning and npm audit.
  • Husky has been updated to version 7.

Version 1.3.2

Fixes

Things that were broken and are now fixed.

  • Setting maxRetries and maxRetryTimeout in the ZBClient constructor had no effect. Only setting the environment variables ZEEBE_CLIENT_MAX_RETRIES and ZEEBE_CLIENT_MAX_RETRY_TIMEOUT had an effect. Now, the constructor options take effect. The constructor options will be overridden by the environment variables if those are set. See #228.

Version 1.3.1

Fixes

Things that were broken and are now fixed.

  • The user agent was added to requests for an OAuth token, but not for gRPC calls. It is now set in the gRPC call metadata for all gRPC calls. Thanks to @zelldon for opening the issue and helping track it down. See #225.

Version 1.3.0

Note on Version Number

Versions 1.0 - 1.2 were released two years ago, under the old numbering scheme. Version 1.3.0 is the Node client release that supports Camunda Cloud 1.0 and Zeebe 1.0.

Known Issues

Things that don't work or don't work as expected, and which will be addressed in a future release

  • onReady and onConnectionError events are not firing reliably. At the moment, the onConnectionError is called even when a gateway is present and accessible, and onReady is not called. See #215.
  • The TLS connection does not work with self-managed Zeebe brokers secured with TLS. See #218 and #219.
  • An exception in the gRPC layer can cause an application to exit. The workaround for this at the moment is to add a handler on the process for uncaught exceptions. See #217.

Breaking changes

Changes in APIs or behaviour that may affect existing applications that use zeebe-node.

  • The Zeebe API has changed in 1.0.0 and uses a gRPC protocol that is incompatible with pre-1.0.0 brokers. The 1.0.0 package will not work with a pre-1.0.0 broker. Nor will a pre-1.0.0 version of zeebe-node work with a 1.0.0 broker. See #208.
  • The worker task handler has a new type signature: job => Promise<JOB_ACTION_ACKNOWLEDGEMENT>. This means that all code branches in the worker handler must return a complete method call (deprecated), or one of the new job.complete, job.fail, job.error, job.forward, or job.cancelWorkflowInstance methods. This signature means that the type system can now do an exhaustiveness check to detect code paths that will always time out in the worker. See #210.
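
A sketch of a handler under the new signature; the task type and branching logic are illustrative:

import { ZBClient } from 'zeebe-node'

const zbc = new ZBClient()

zbc.createWorker({
    taskType: 'charge-card',
    taskHandler: async job => {
        try {
            // ... do the work ...
            return job.complete()
        } catch (e) {
            // Every branch must return an acknowledgement, so the type-checker
            // can flag code paths that would otherwise silently time out
            return job.fail(`Could not charge card: ${e}`)
        }
    },
})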

Deprecations

Things that are deprecated and will be removed in a future release. Existing code will work for now, but should be migrated at some point. New code should not use these features.

  • The previous methods with the word workflow in them (e.g.: deployWorkflow, startWorkflowInstance) are deprecated. In the 1.0.0 package they transparently call the new methods with process in them (e.g.: deployProcess, startProcessInstance), so existing code does not need to be rewritten. However, new code should not use these deprecated methods. These methods are scheduled to be removed in whichever comes first: the 1.2.0 release, or three months from the release of the 1.0.0 release. See #208.
  • The complete parameter in the worker task handler callback is deprecated, and will be removed in a future release. Use the new methods on the job object instead.
  • The non-object constructors for createWorker are deprecated, and will be removed in a future release. Use the object constructor instead.

New Features

New shiny stuff.

  • The worker task handler now has a new signature: job => Promise<JOB_ACTION_ACKNOWLEDGEMENT>. The complete parameter is deprecated, and the job object now has the methods job.complete, job.fail, job.error, job.forward. See #209.
  • The job object has a new method job.cancelWorkflowInstance. This allows you to cancel a workflow from within a worker, and return a Promise<JOB_ACTION_ACKNOWLEDGEMENT> in the worker handler. See #211.
  • Attempting to call two outcome methods on a job (for example: job.complete() and job.fail(), or the deprecated complete.success() and complete.error()) will now log an error to the console, alerting you to the behaviour and identifying the task type of the worker. See #213.

Version 0.26.0

New Features

New shiny stuff.

  • Upgraded the grpc, @grpc/grpc-js and @grpc/proto-loader dependencies to the latest releases.

Breaking Changes

Changes in APIs or behaviour that may affect existing applications that use zeebe-node.

  • The type of the complete.success parameter is changed from Partial<T> to T. This gives you an exhaustive check on this function in a typed worker. If you use the type parameters on createWorker and your code relies on the previous optional nature of the payload fields, you will need to change the type signature in your code. See #198.

Fixes

Things that were broken and are now fixed.

  • A broken link in the README TOC is fixed. Thanks to @nwittstruck for the PR! See #200.

Version 0.25.1

New Features

New shiny stuff.

  • The library now supports connecting to a gateway that has a self-signed certificate. See the TLS section of the README for details on configuration. See #160.
  • Client-side retries are now configurable via the environment variables ZEEBE_CLIENT_MAX_RETRIES, ZEEBE_CLIENT_RETRY, and ZEEBE_CLIENT_MAX_RETRY_TIMEOUT. Thanks to @jaykanth6 for the implementation.
  • The Generic types used for parameterising the Client and Worker have been renamed to improve the intellisense. Previously, the WorkflowVariables, CustomHeaders, and OutputVariables type parameters were aliased to KeyedObject. In VSCode, these all displayed in intellisense as KeyedObject, losing the semantics of each parameter. They now display in intellisense with the type parameter name.

Version 0.25.0

Fixes

Things that were broken and are now fixed.

  • Workers would intermittently throw an unhandled exception, and in some cases disconnect from Camunda Cloud. This was caused by network errors throwing an error event on the stream after the end event was emitted and all listeners were removed. The error event listener is no longer removed when the end event is received, and the worker no longer throws an unhandled exception. See #99.

Version 0.24.2

Fixes

Things that were broken and are now fixed.

  • The example code in the example directory has been updated to remove a deprecated method. See #185.
  • A race condition in the ZBBatchWorker that could cause jobs to be lost under specific and rare conditions has been fixed. See #177.
  • The onConnectionError event is now debounced. See #161.

Version 0.24.0

Fixes

Things that were broken and are now fixed.

  • The segfault-handler package dependency broke cross-architecture builds. This required users to change their build chain and caused issues with AWS lambda deployment. It was added to assist in debugging the pure JS implementation of gRPC. In this release it has been removed. See #173.

Version 0.23.3

Breaking Changes

Changes in APIs or behaviour that may affect existing applications that use zeebe-node.

  • This version goes back to the C-based gRPC implementation. We found several issues with the pure JS gRPC implementation and the nghttp2 implementation in Node. The issues differ between Node versions, and are challenging to isolate, as they occur in the Node engine itself. By default, in this version, the Zeebe Node client uses the C-based gRPC client. If you want to participate in testing the pure JS client (bug reports welcome!), you can activate the pure JS gRPC client by setting ZEEBE_NODE_PUREJS=true.
  • Prior to this release, the default value for maxRetries was 50 (about 2 minutes). This caused workers started more than 2 minutes before the broker to abandon connection attempts and fail to connect. With this release, retries are infinite by default.

Version 0.23.2

Known Issues

Things that don't work or don't work as expected, and which will be addressed in a future release

  • Node 12 has issues with the new pure JS implementation. We don't have a compatibility matrix yet, but Node 14 works.
  • The onConnectionError event of the ZBClient and ZBWorker/ZBBatchWorker is not debounced, and may be called multiple times in succession when the channel jitters, or the broker is not available. See #161.

Fixes

Things that were broken and are now fixed.

  • The client's gRPC channel would not reconnect if a Zeebe broker in Docker is restarted. The @grpc/grpc-js package is updated to 1.0.4 to bring in the fix for @grpc/grpc-js #1411. This enables the client to reliably reconnect to brokers that are restarted in Docker or rescheduled in Kubernetes.

Version 0.23.1

Known Issues

Things that don't work or don't work as expected, and which will be addressed in a future release

  • The onConnectionError event of the ZBClient and ZBWorker/ZBBatchWorker is not debounced, and may be called multiple times in succession when the channel jitters, or the broker is not available. See #161.

Fixes

Things that were broken and are now fixed.

  • The dist directory is now in the published package. Thanks to @lwille for the PR that fixed the build. See #163.

Version 0.23.0

Known Issues

Things that don't work or don't work as expected, and which will be addressed in a future release

  • There is no dist directory in this release. See #163, and do not use this release.
  • The onConnectionError event of the ZBClient and ZBWorker/ZBBatchWorker is not debounced, and may be called multiple times in succession when the channel jitters, or the broker is not available. See #161.

Breaking Changes

Changes in APIs or behaviour that may affect existing applications that use zeebe-node.

  • The job.variables and job.customHeaders in the worker job handler are now typed as read-only structures. This will only be a breaking change if your code relies on mutating these data structures. See the section "Working with Workflow Variables and Custom Headers" in the README for an explanation on doing deep key updates on the job variables.
  • The ZBClient no longer eagerly connects to the broker by default. Previously, it did this by issuing a topology command in the constructor, which allowed an onReady event to be emitted without any command being issued by your code. You can re-enable the eager connection behaviour by passing eagerConnection: true to the client constructor options, or by setting the environment variable ZEEBE_NODE_EAGER_CONNECTION to true. See #151.
  • The library now logs with the simplified ZBSimpleLogger by default, for friendly human-readable logs. This will only be a breaking change if you currently rely on the structured log output. To get the previous structured log behaviour, pass in stdout: ZBJsonLogger to the ZBClient constructor options, or set the environment variable ZEEBE_NODE_LOG_TYPE to JSON. Refer to the "Logging" section in the README.

New Features

New shiny stuff.

  • The underlying gRPC implementation has been switched to the pure JS @grpc/grpc-js. This means no more dependency on node-gyp or binary rebuilds for Docker containers / Electron; and a slim-down in the installed package size from 50MB to 27MB. All tests pass, including some new ones (for example: the worker keeps working when the broker goes away and comes back). The JS gRPC implementation may have effects on the behaviour of the client that are not covered in the unit and integration tests. Please open a GitHub issue if you encounter something.
  • Timeouts can now be expressed with units using the typed-duration package, which is included in and re-exported by the library (see the sketch after this list). See the README section "A note on representing timeout durations".
  • There is a new ZBBatchWorker. This allows you to batch jobs that are unrelated in a BPMN model, but are related with respect to some (for example: rate-limited) external system. See the README for details. Thanks to Jimmy Beaudoin (@jbeaudoin11) for the suggestion, and helping with the design. Ref: #134.
  • ZBClient.createWorker has two new, additional method signatures. The first is a single object parameter signature. This is the preferred signature if you are passing in configuration options. The second signature is a version of the original that elides the id for the worker. With this, you can create a worker with just a task type and a job handler; a UUID is assigned as the worker id. This is the equivalent of passing in null as the first parameter to the original signature. The previous method signature still works, allowing you to specify an id if you want. See this article for details.
  • There is now a ZBLogMessage interface to help you implement a custom logger. See #127. For an example of a custom logger, see the Zeebe GitHub Action implementation.
  • There is a new custom logger implementation, ZBSimpleLogger, that produces flat string output. If you are not interested in structured logs for analysis, this log is easier for humans to read.
  • ZBClient now contains an activateJobs method. This effectively exposes the entire Zeebe gRPC API, and allows you to write applications in the completely unmanaged style of the Java and Go libraries, if you have some radically different idea about application patterns.
  • The Grpc layer has been refactored to implement the idea of "connection characteristics". When connecting to Camunda Cloud, which uses TLS and OAuth, the library would emit errors every time. The refactor allows these connection errors to be correctly interpreted as expected behaviour of the "connection characteristics". You can also set an explicit initial connection tolerance in milliseconds for any broker connection with the environment variable ZEEBE_INITIAL_CONNECTION_TOLERANCE. See this article, issue #133, and the README section "Initial Connection Tolerance" for more details.
  • The connection tolerance for transient drop-outs before reporting a connection error is now configurable via the environment variable ZEEBE_CONNECTION_TOLERANCE, as well as the previous constructor argument connectionTolerance.
  • The Node client now emits a client-agent header to facilitate debugging on Camunda Cloud. See #155.
  • The integration tests have been refactored to allow them to run against Camunda Cloud. This required dealing with a Zeebe broker in an unknown state, so all tests now template unique process ids, unique task types, and unique message names to avoid previous test run state in the cluster interfering with subsequent test runs.
  • I've started documenting the internal operation of the client in BPMN diagrams. These can be found in the design directory.
  • The README now contains a section "Writing Strongly-typed Job Workers", on writing typed workers in TypeScript.
  • The README also has a shiny TOC. It has grown in size such that one is needed.
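
For example, a sketch using the 0.23-era worker options (the task type is illustrative, and Duration is re-exported by the package):

import { ZBClient, Duration } from 'zeebe-node'

const zbc = new ZBClient()

zbc.createWorker({
    taskType: 'get-customer-record',
    taskHandler: (job, complete) => complete.success(),
    // A typed unit instead of a bare millisecond count
    timeout: Duration.seconds.of(60),
})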

Fixes

Things that were broken and are now fixed.

  • An unmaintained package in the dependency tree of kafka-node (and arguably a bug in NPM's de-duping algorithm) caused zeebe-node to break by installing the wrong version of the long dependency, unless the two packages were installed in a specific order. We've explicitly added long to the dependencies of zeebe-node to address this, and reported it to kafka-node. Thanks to @need4eat for discovering this and helping to track down the cause. See #124.
  • Prior to 0.23.0 of the zeebe-node client, a worker would not reconnect if the broker was restarted, and would throw gRPC channel errors until the application was restarted. A stalled retry timer has been added to the worker. The worker will now automatically reconnect when the broker is available, if it goes away and comes back. See #99, #145, and #152.

Version 0.22.1

Breaking Changes

Changes in APIs or behaviour that may affect existing applications that use zeebe-node.

  • The default job activation timeout for the ZBWorker has been changed from 1s to 60s.
  • The signature for specifying a workflow definition version in createWorkflowInstance has changed. See the README for the new signature.
  • If the oAuth cacheOnDisk is true and the directory $HOME/.camunda is not writable, then the ZBClient constructor will now throw to prevent unbounded token requests. Thanks to GitHub user MainAero for reporting this. See #110.
  • Change default long poll for workers to 30s. See #101.
  • The ZBClient no longer bubbles up gRPC status from its workers. See #109 and this comment.
  • Remove pollMode (it's now always long-poll), and add pollInterval in ZBLogger messages.

New Features

New shiny stuff.

  • You can now throw a BPMN Error in your process from a worker using the complete.error(errorCode: string, errorMessage?: string) method, or from the client using the ZBClient.throwError(throwErrorRequest: ThrowErrorRequest) method.
  • If you install the package globally with npm i -g zeebe-node, you get the command zeebe-node <filename> that parses a BPMN file and emits type definitions.
  • The oAuth token cache directory is now configurable via the ZBClient constructor parameter oAuth.cacheDir or the environment variable ZEEBE_TOKEN_CACHE_DIR.
  • Add support for Basic Auth. See this commit and the README for details.
  • Awaitable workflow outcome. With a 0.22 broker, the client can initiate a workflow and receive the outcome of the workflow in the broker response. See zeebe/#2896 and this blog post.
  • Support ZEEBE_SECURE_CONNECTION environment variable to enable TLS. See #111.
  • ZBClient and ZBWorker now extend EventEmitter and emit ready and connectionError events from their gRPC client channels. This is in addition to the existing callback handlers. See #108.
  • ZBClient now has a completeJob method that allows you to complete a job "manually", outside a worker. This allows you to decouple your job implementation from the service worker across a memory boundary - for example, in another AWS Lambda. Thanks to GitHub user MainAero for this. See #112.
  • The ZBLogger class is now available for you to instantiate a logger for application-level logging.

Fixes

Things that were broken and now are not.

  • Respect ZEEBE_AUTHORIZATION_SERVER_URL setting from the environment.
  • Correctly log task type from gRPC client in ZBLogger. See #98.
  • A message with no name would break BpmnParser.generateConstantsForBpmnFiles. Now it handles this correctly. Thanks to T.V. Vignesh for reporting this. See #106.
  • The onReady handler was not called for workers on initial start. Now it is. Thanks to Patrick Dehn for reporting this. See #97.

Chores

Internal house-keeping with no end-user impact.

  • Upgrade TypeScript to 3.7.
  • Upgrade Prettier to 1.19.1.

Version 0.21.3

  • Feature: Enable gRPC heartbeat. The gRPC heartbeat is intended to stop proxies from terminating the gRPC connection. See #101.
  • Feature: gRPC channel logging now displays which worker the channel is for, or if it is for the ZBClient. See #98.
  • Chore: Upgrade grpc dependency from 1.22.0 to 1.23.4.
  • Security: Upgraded typedoc dev dependency to 0.15.0, removing 8487 known vulnerabilities. Note that this package is used to build documentation and not installed in applications that depend on zeebe-node.

Version 0.21.2

  • Fix: ZBClient.close() and ZBWorker.close() now return an awaitable Promise that guarantees the underlying gRPC channel is closed. It takes at least two seconds after jobs are drained to close the gRPC connection. When the close promise resolves, the gRPC channel is closed. Note that ZBClient.close() closes all workers created from that client.
  • Fix: Workers would stall for 180 seconds if they received a 504: Gateway Unavailable error on the HTTP2 transport. This was incorrectly treated as a gRPC channel failure. The code now checks the state of the gRPC channel when a transport error is thrown, rather than assuming it has failed. See #96.
  • Feature: Log messages now include a context property with the stack frame that generated the log message.

Version 0.21.1

  • Feature: ZBClient.deployWorkflow() now accepts an object containing a buffer. (Thanks Patrick Dehn!)
  • Fix: Pass stdout to ZBLogger and GRPCClient. (Thanks Patrick Dehn!)

Version 0.21.0

  • Long-polling is now the default.
  • connected property added to ZBClient.
  • onConnectionError(), onReady(), and connectionTolerance added to ZBClient and ZBWorker.
  • gRPC retry on gRPC Error 8 (RESOURCE_EXHAUSTED) due to Broker Backpressure.
  • Deprecate ZB_NODE_LOG_LEVEL, add ZEEBE_NODE_LOG_LEVEL.

Version 0.20.6

  • BREAKING CHANGE: Remove complete() in job handler callback. Use complete.success().
  • Inject stdout to logger in GRPC client. Fixes #74.

Version 0.20.5

  • Add support for the Zeebe service on Camunda Cloud.

Version 0.20.1

  • Add long polling support. See #64.

Version 0.20

  • Add TLS support (Thanks Colin from the Camunda Cloud Team!).
  • Remove node-grpc-client dependency.
  • Change versioning to match Broker versioning (Thanks Tim Colbert!).

Version 2.4.0

  • Update for Zeebe 0.18.
  • Remove ZBClient.listWorkflows and ZBClient.getWorkflow - the broker no longer provides a query API.
  • Remove the {redeploy: boolean} option from the ZBClient.deployWorkflow method. This option relied on listWorkflows; its behaviour will be the default in a future release of Zeebe. See zeebe/#1159.
  • Add client-side retry logic. Retries ZBClient gRPC command methods on failure due to gRPC error code 14 (Transient Network Error). See #41.

Version 1.2.0

  • Integration tests in CI.
  • Fixed a bug with cancelWorkflowInstance.
  • Workers can now be configured to fail a workflow instance on an unhandled exception in the task handler.
  • Logging levels NONE | ERROR | INFO | DEBUG are configurable in the ZBClient.
  • Custom logging enabled by injecting Pino or compatible logger.