
Introduction to Azure Durable Functions



Durable Functions is an extension of Azure Functions and WebJobs: an open-source extension that you plug into your functions, built on top of the open-source Durable Task Framework. The extension takes care of managing state, checkpoints, and replays for you.


With Azure Durable Functions, you can write stateful workflows in a new function type called an orchestrator function. This orchestrator function gives you more control over building workflows than, for instance, a designer in Microsoft Flow or Logic Apps.

With Durable Functions, a developer writes code: there are no designers and no JSON DSL to learn. It is strictly code, and developers like that. You can therefore build durable functions in the tools you already know, such as Visual Studio or Visual Studio Code. Other benefits of using Azure Durable Functions are:

  • You can call other functions synchronously and asynchronously, and you can save the output from called functions to local variables.
  • The state is managed for you; whenever a VM or process recycles, you will not lose the state of your workflow.

Key Concepts

The key concepts around Azure Durable Functions are:

  • The Orchestrator Client,
  • The Orchestrator Function,
  • The Activity Function,
  • Bindings,
  • And Checkpoints and replays

1. Orchestrator Client

The orchestrator client is responsible for starting, stopping, and monitoring orchestrator functions. You can, for example, have an HTTP-triggered function as the orchestrator client, which accepts an HTTP request and starts an orchestrator function, or a Service Bus-triggered function, which listens to a queue and starts an orchestrator function for each message.


2. Orchestrator Function

In the introduction, we already mentioned the orchestrator function. This function is the heart of a durable function solution: in the orchestrator function, you write your workflow in code. The workflow can consist of code statements that call other functions, such as activity functions or other orchestrator functions (sub-orchestrations), that wait for external events to occur (callbacks or human interaction), or that wait for timers. An orchestrator function itself is triggered by an orchestration trigger. This trigger supports starting new orchestrator function instances and resuming existing instances that are “awaiting” a task.
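As a minimal sketch of waiting for events and timers, the hypothetical orchestrator below waits for an external “ApprovalEvent” but gives up after 24 hours via a durable timer. The event name, the input, and the workflow itself are assumptions for illustration, not part of the scenario in this article:

```csharp
#r "Microsoft.Azure.WebJobs.Extensions.DurableTask"

using System.Threading;

// Hypothetical approval workflow: wait for a human decision, time out after 24 hours.
public static async Task<string> Run(DurableOrchestrationContext context)
{
    using (var cts = new CancellationTokenSource())
    {
        // Durable timer; note the replay-safe context.CurrentUtcDateTime, not DateTime.UtcNow.
        DateTime deadline = context.CurrentUtcDateTime.AddHours(24);
        Task timeout = context.CreateTimer(deadline, cts.Token);

        // External event, raised by a client via RaiseEventAsync(instanceId, "ApprovalEvent", ...).
        Task<bool> approval = context.WaitForExternalEvent<bool>("ApprovalEvent");

        Task winner = await Task.WhenAny(approval, timeout);
        if (winner == approval)
        {
            cts.Cancel(); // cancel the pending timer so the instance can complete
            return approval.Result ? "Approved" : "Rejected";
        }
        return "TimedOut";
    }
}
```

Cancelling the timer matters: an orchestration instance does not complete while a durable timer is still pending.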


You start an orchestration with a so-called orchestrator client, a function that in turn can be triggered by a message in a queue, an HTTP request, or any other trigger mechanism you are familiar with from functions. Every instance of an orchestrator function has an instance identifier, which can be auto-generated or user-generated. Using the DurableFunctionsHubHistory table of the storage account attached to the Function App, you can, for example, examine an orchestration run for a given instance identifier.


3. Activity Function

From an orchestrator function, you can call so-called activity functions. These activity functions are the basic unit of work in a durable function solution. Each activity function executes one task, which can be anything you want. You can write these activity functions in any language supported by Azure Functions. Note that the orchestrator function itself only supports C#, and that the Durable Functions extension guarantees at-least-once execution of activity functions.
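Because activity calls can fail transiently, the extension also provides built-in retry support. A minimal sketch, where “FlakyActivity” is a hypothetical activity name:

```csharp
#r "Microsoft.Azure.WebJobs.Extensions.DurableTask"

public static async Task<string> Run(DurableOrchestrationContext context)
{
    // Retry up to 3 times, starting with a 5-second back-off between attempts.
    var retryOptions = new RetryOptions(
        firstRetryInterval: TimeSpan.FromSeconds(5),
        maxNumberOfAttempts: 3);

    // "FlakyActivity" is an assumed activity function name.
    return await context.CallActivityWithRetryAsync<string>(
        "FlakyActivity", retryOptions, "some input");
}
```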


The Orchestrator client starts an orchestration function, which calls one or more activity functions, or other sub-orchestrations, and so on.



Sub-orchestrations can be useful when you want to organize a complex orchestration, or when an orchestration repeats itself hundreds or thousands of times.
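A hedged sketch of a sub-orchestration: a parent orchestrator delegates each item of work to a child orchestrator function. The “GetDeviceIds” and “ProvisionDevice” function names are assumptions for illustration:

```csharp
#r "Microsoft.Azure.WebJobs.Extensions.DurableTask"

// Hypothetical parent orchestrator that provisions many devices,
// delegating each one to a sub-orchestration.
public static async Task Run(DurableOrchestrationContext context)
{
    string[] deviceIds = await context.CallActivityAsync<string[]>("GetDeviceIds", null);

    foreach (string deviceId in deviceIds)
    {
        // Each call runs a full orchestrator function as a child of this one;
        // the child gets its own instance id and its own history.
        await context.CallSubOrchestratorAsync("ProvisionDevice", deviceId);
    }
}
```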


4. Bindings

With the Durable Functions extension of Azure Functions, you get additional bindings to control the execution of orchestrator, activity, and orchestrator client functions. The first is the orchestration trigger binding mentioned with the orchestrator function. This trigger polls a series of queues in the storage account you assign to the Function App hosting your durable function solution(s). These queues are internal implementation details of the extension; hence, they are not explicitly configured in the binding properties.

The orchestration trigger supports both inputs and outputs. An orchestrator function takes a DurableOrchestrationContext as input, and its return value serves as the output. Note that outputs must be JSON-serializable; if a function returns Task or void, a null value is saved as the output.

Another binding is the activity trigger binding for activity functions called by the orchestrator function. Like the orchestration trigger, this trigger binding polls a series of queues in the storage account you assign to the Function App hosting your durable function solution(s). The activity trigger also supports both inputs and outputs: as input, the trigger supports a DurableActivityContext, and the return values of the activity functions serve as outputs.

Lastly, there is a binding for the orchestrator client function. This binding enables you to interact with orchestrator functions: it allows you to start, query, and stop orchestration instances, and to send events to an orchestration instance while it is running.
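A minimal sketch of these management operations, assuming a hypothetical function that receives an instance id and decides what to do with the running instance:

```csharp
#r "Microsoft.Azure.WebJobs.Extensions.DurableTask"

// Hypothetical management function: query a running instance,
// then either signal it or terminate it.
public static async Task Run(string instanceId, DurableOrchestrationClient client)
{
    DurableOrchestrationStatus status = await client.GetStatusAsync(instanceId);

    if (status != null && status.RuntimeStatus == OrchestrationRuntimeStatus.Running)
    {
        // Raise an event the orchestrator may be waiting for...
        await client.RaiseEventAsync(instanceId, "ApprovalEvent", true);

        // ...or stop the instance altogether:
        // await client.TerminateAsync(instanceId, "Cancelled by operator");
    }
}
```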

5. Checkpoints and replays

One of the crucial aspects of durable functions is reliability. The functions (client, orchestrator, activity) run on different VMs in a data center, and the underlying network infrastructure for these VMs might not always be 100% reliable. However, the Durable Functions extension (Durable Task Framework) ensures at-least-once execution of functions by using storage queues to drive function invocation and by periodically checkpointing execution history into storage tables. In short, durable functions leverage the storage account attached to the Function App. By capturing the execution history, an orchestrator function can be replayed automatically, rebuilding its in-memory state.

Every execution of an orchestration function leads to the generation of checkpoints. The durable functions extension generates and manages these checkpoints. Each checkpoint consists of the following:

  • The execution history is saved to the DurableFunctionsHubHistory storage table.
  • Messages are enqueued for the functions the orchestrator wants to invoke.
  • Messages are enqueued for the orchestrator itself (for example, durable timer messages).

Durable Function Use Case

A durable function is a good fit when you need to write a complex orchestration, for instance when you chain functions together to support a multi-step process you wish to automate. Before Durable Functions, you would create a set of queues: function 1 (F1) drops messages into a queue, a second function (F2) picks them up and drops results into another queue, where a third function (F3) picks them up, and so on.


You can build such a solution, but managing all the queues adds a lot of overhead. How do you keep track of the relationships between all these functions? And when you want to add error handling, the solution becomes even more complicated because of compensating actions and state persistence.

Durable Functions Solution

Durable Functions orchestrators, however, support the error-handling constructs you are used to as a developer, such as try-catch blocks, and they self-document the relationships between activity functions. Below you can see what the code for this scenario looks like with an orchestrator function. Note that the activity functions' return values are stored as local state and maintained for you as part of the execution history.


The orchestration trigger provides a context object, which allows you to call other functions. You call these other functions by name, provide data as input, and then wait for the results. Hence, the orchestrator function offers a way to call functions sequentially.

How it works

You call the activity function asynchronously: an actual message is dropped into a queue, and some VM somewhere will pick up that message, execute the activity, process it, and return a value. The output comes back through another queue. You can view this as a typical scale-out scenario involving queues, but you do not see those queues, nor do you have to manage them; the extension does that for you. Under the hood, it is a distributed architecture consisting of VMs and queues, and the VM where the orchestrator function runs might not be the same as the VM where the activity functions run.


The await statements are asynchronous, so the orchestrator function can go to sleep. When an activity function you call does CPU-intensive processing, the orchestrator sleeps while the activity runs, and hence no billing (consumption) occurs for the orchestrator in that period. Think of it as an orchestration dehydrating in BizTalk. The orchestrator wakes up when one of the activity functions sends a notification that the work is complete.

The platform also persists state in case a VM goes down: when the VM on which one of the activity functions was running needs to recycle, the orchestrator function will resume where it left off. This is what the context provides. The extension drops messages into Azure Storage queues and writes history events to a history table, thereby keeping track of all the state. Under the hood, it leverages the storage account you create when provisioning the Function App, meaning it is isolated from others and managed for you.

Example implementation

You can try the function-chaining scenario out with the templates available in the second version of Azure Functions. When you provision a Function App, the default runtime is V2, and this second version provides you with templates for Azure Durable Functions.


You can choose the Durable Functions HTTP starter template as the orchestrator client, the starting point for a solution that chains functions together. You can use the following implementation code:

#r "Microsoft.Azure.WebJobs.Extensions.DurableTask"
#r "Newtonsoft.Json"

using System.Net;

public static async Task<HttpResponseMessage> Run(
    HttpRequestMessage req,
    DurableOrchestrationClient starter,
    string functionName,
    ILogger log)
{
    // Function input comes from the request content.
    dynamic eventData = await req.Content.ReadAsAsync<object>();

    // Pass the function name as part of the route.
    string instanceId = await starter.StartNewAsync(functionName, eventData);

    log.LogInformation($"Started orchestration with ID = '{instanceId}'.");

    return starter.CreateCheckStatusResponse(req, instanceId);
}

Subsequently, you choose the Durable Functions orchestrator template for your orchestrator function, and paste in the following code:

#r "Microsoft.Azure.WebJobs.Extensions.DurableTask"

public static async Task<List<string>> Run(DurableOrchestrationContext context)
{
    var outputs = new List<string>();

    outputs.Add(await context.CallActivityAsync<string>("Hello", "Tokyo"));
    outputs.Add(await context.CallActivityAsync<string>("Hello", "Seattle"));
    outputs.Add(await context.CallActivityAsync<string>("Hello", "London"));

    return outputs;
}


Finally, you choose the Durable Functions activity template for your activity function, and paste in the following code:

#r "Microsoft.Azure.WebJobs.Extensions.DurableTask"

public static string Run(string name)
{
    return $"Hello {name}!";
}


In the orchestrator client function, you click Get Function URL, which you can use in Postman. Next, you create a POST request to that endpoint and click Send. The request is sent to the orchestrator client, which starts the orchestrator function; the orchestrator function in turn calls the activity functions.

POST /api/orchestrators/DurableFunctionsOrchestratorSample HTTP/1.1
Cache-Control: no-cache
Postman-Token: c4681306-ca94-25c4-84fb-a39daa439cbf
Content-Type: multipart/form-data; boundary=----WebKitFormBoundary7MA4YWxkTrZu0gW

From the response, you can select the StatusQueryGetUri endpoint. Through this endpoint, you can get the status of your orchestrator function:


{
    "instanceId": "c6903f4143494e0cb88d654c36e05416",
    "runtimeStatus": "Completed",
    "input": null,
    "customStatus": null,
    "output": [
        "Hello Tokyo!",
        "Hello Seattle!",
        "Hello London!"
    ],
    "createdTime": "2018-10-29T19:58:19Z",
    "lastUpdatedTime": "2018-10-29T19:58:29Z"
}


Through the Azure Storage Explorer, you can examine the history and instance tables.


The other endpoints in the response from calling the orchestrator client are:

  • SendEventPostUri: The “raise event” URL of the orchestration instance.
  • TerminatePostUri: The “terminate” URL of the orchestration instance.
  • RewindPostUri: The “rewind” URL of the orchestration instance.

Wrap up

Azure Durable Functions give you more control over running a workflow. Instead of using a designer in Microsoft Flow or Logic Apps, you create your workflow with code alone. The underlying Durable Task Framework handles reliability, state management, and tracking for you. This framework leverages the Azure Storage account and its capabilities by provisioning and managing queues and tables, and Azure Durable Functions abstracts away these intermediate queues and storage.

In this blog post, we discussed how Azure Durable Functions can be of value when chaining functions together. Instead of explicitly using queues, the chaining is done in code, and the extension (Durable Task Framework) takes care of the plumbing: queues, state, and replays. Moreover, Durable Functions supports other scenarios too, such as:

  • fan-out/fan-in, to run multiple functions in parallel and aggregate their results (MapReduce-style tasks),
  • monitoring of a process, by polling continuously until certain conditions are met,
  • and processes involving human interaction.
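As a brief preview of the fan-out/fan-in pattern, an orchestrator can schedule activities without awaiting each one, then await them all together. The “GetWorkItems” and “ProcessItem” activity names below are assumptions for illustration:

```csharp
#r "Microsoft.Azure.WebJobs.Extensions.DurableTask"

public static async Task<int> Run(DurableOrchestrationContext context)
{
    string[] workItems = await context.CallActivityAsync<string[]>("GetWorkItems", null);

    // Fan out: schedule one activity per work item without awaiting each call,
    // so they run in parallel on whatever VMs pick up the queue messages.
    var tasks = new List<Task<int>>();
    foreach (string item in workItems)
    {
        tasks.Add(context.CallActivityAsync<int>("ProcessItem", item));
    }

    // Fan in: wait for all activities to finish, then aggregate the results.
    int[] results = await Task.WhenAll(tasks);
    return results.Sum();
}
```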

In upcoming blog posts, we will discuss these scenarios with Azure Durable Functions.

This article was published on Nov 5, 2018.
