Scheduling & Cron

Implement recurring jobs and scheduled tasks using durable workflow patterns.

Workflows naturally support scheduling through sleep(). Unlike traditional cron systems that require external infrastructure, workflow-based scheduling is durable - if the server restarts, the schedule resumes without missing a beat. And because sleep() suspends the workflow without consuming compute resources, a workflow sleeping for one minute costs the same as one sleeping for a month.

Recurring Execution Patterns

There are three patterns for recurring jobs, each suited to different lifetimes and workloads.

Loop with Steps

The simplest pattern is a while(true) loop that runs a job and sleeps between iterations. Use this for short-lived or bounded work that eventually exits — polling that resolves, health checks with a failure threshold, or jobs with a known end condition.

workflows/recurring-job.ts
import { sleep } from "workflow";
declare function runJob(): Promise<{ success: boolean }>; // @setup

export async function recurringJobWorkflow() {
  "use workflow";

  while (true) { 
    // runJob can be a step function or an entire composed workflow
    // See: /docs/foundations/common-patterns#workflow-composition
    const result = await runJob(); 

    if (!result.success) {
      break;
    }

    await sleep("1 hour"); 
  }
}

Every iteration appends to the workflow's event log. For workflows that run indefinitely, the log grows without bound and replay performance degrades over time. For truly infinite cron jobs, use daisy-chaining instead.

Dispatcher Loop

A long-running dispatcher loop uses a step function to call start(), kicking off background child workflows without awaiting their completion. Each child workflow runs with its own event log, so the dispatcher's log stays small.

workflows/cron-dispatcher.ts
import { sleep } from "workflow";
declare function triggerDailyReport(): Promise<string>; // @setup
declare function triggerCleanup(): Promise<string>; // @setup

export async function cronDispatcherWorkflow() {
  "use workflow";

  let iteration = 0;

  while (true) {
    // Run daily report every iteration
    await triggerDailyReport();

    // Run cleanup every 7 iterations (weekly), skipping the first run
    iteration++;
    if (iteration % 7 === 0) { 
      await triggerCleanup(); 
    }

    await sleep("1 day");
  }
}
workflows/steps.ts
import { start } from "workflow/api";

export async function triggerDailyReport() {
  "use step";

  // stepId is stable across retries — use it if the child workflow
  // needs deduplication (e.g., as an idempotency key for external APIs)
  const run = await start(dailyReportWorkflow, []); 
  return run.runId;
}

export async function triggerCleanup() {
  "use step";

  // stepId is stable across retries — use it if the child workflow
  // needs deduplication (e.g., as an idempotency key for external APIs)
  const run = await start(cleanupWorkflow, []);
  return run.runId;
}

// These are independent workflows started by the dispatcher
export async function dailyReportWorkflow() {
  "use workflow";
  // Generate and send daily report
}

export async function cleanupWorkflow() {
  "use workflow";
  // Clean up old data
}

Each child workflow runs independently. If a daily report fails, it does not affect the dispatcher or the cleanup workflow. You can monitor each run separately using its runId.
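The iteration-modulo trick generalizes to any mix of intervals. A minimal pure sketch of the dispatch arithmetic (JobSpec and dueJobs are illustrative names, not part of the workflow API):

```typescript
interface JobSpec {
  name: string;
  everyNIterations: number; // run when the iteration count is a multiple of this
}

// Returns the names of the jobs due on a given 1-based iteration.
export function dueJobs(iteration: number, specs: JobSpec[]): string[] {
  return specs
    .filter((spec) => iteration % spec.everyNIterations === 0)
    .map((spec) => spec.name);
}
```

Inside the dispatcher loop, you would call the matching trigger step for each name returned, keeping the schedule definition in one place.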

Daisy-Chaining

The recommended pattern for cron jobs that run indefinitely. Instead of looping, the workflow executes its logic, sleeps for the desired duration, and, as its final action, triggers a brand-new execution of itself before exiting. This resets the event log with each run, keeping replay performance constant no matter how long the job has been running.

workflows/daisy-chain.ts
import { sleep } from "workflow";
declare function runJob(): Promise<void>; // @setup
declare function startNextRun(): Promise<string>; // @setup

export async function cronWorkflow() {
  "use workflow";

  await runJob();
  await sleep("1 hour");

  // Start the next execution before exiting — event log resets
  await startNextRun(); 
}
workflows/steps.ts
import { start } from "workflow/api";

export async function startNextRun() {
  "use step";

  const run = await start(cronWorkflow, []);
  return run.runId;
}

sleep() consumes no compute resources while waiting. A workflow sleeping for one hour is effectively free until it wakes up.

For more on triggering workflows from within workflows, see Workflow Composition.

Scheduled Tasks at Specific Times

Pass a Date object to sleep() to wait until a specific point in time. This is useful for tasks that must run at exact times rather than fixed intervals.

workflows/scheduled-task.ts
import { sleep } from "workflow";
declare function sendReport(period: string): Promise<void>; // @setup

export async function endOfMonthReportWorkflow(year: number, monthNumber: number) {
  "use workflow";

  // monthNumber is 1-12, but the Date constructor takes a 0-based month
  // index, so passing monthNumber directly yields the first instant of the
  // *following* month, which is exactly when the report month ends.
  // e.g. monthNumber = 6 (June): new Date(2025, 6, 1) = July 1st 2025
  const reportDate = new Date(year, monthNumber, 1, 0, 0, 0);
  await sleep(reportDate); 

  await sendReport(`${year}-${String(monthNumber).padStart(2, "0")}`);
}

Date constructors are deterministic inside workflow functions - the framework ensures consistent values across replays.
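The same Date-based sleep supports "every day at 9:00" style schedules. A minimal sketch of the date arithmetic, assuming the server's local time zone (nextOccurrence is an illustrative helper, not part of the workflow API):

```typescript
// Computes the next occurrence of a given local time (hour:minute)
// relative to `now`. Pass the result to sleep() to wake at that moment.
export function nextOccurrence(now: Date, hour: number, minute = 0): Date {
  const next = new Date(
    now.getFullYear(),
    now.getMonth(),
    now.getDate(),
    hour,
    minute,
    0,
    0,
  );
  if (next <= now) {
    // That time has already passed today; schedule for tomorrow.
    next.setDate(next.getDate() + 1);
  }
  return next;
}
```

A daily workflow would call `await sleep(nextOccurrence(new Date(), 9))` at the top of each run, then do its work.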

Health Checks and Keep-Alive

Periodic health monitoring is a natural fit for durable workflows. The workflow checks a service, takes action if something is wrong, and sleeps before checking again.

workflows/health-check.ts
import { sleep } from "workflow";
declare function checkServiceHealth(serviceUrl: string): Promise<{ healthy: boolean; latencyMs: number }>; // @setup
declare function sendAlert(serviceUrl: string, latencyMs: number): Promise<void>; // @setup
declare function restartService(serviceUrl: string): Promise<void>; // @setup

export async function healthCheckWorkflow(serviceUrl: string) {
  "use workflow";

  let consecutiveFailures = 0;

  while (true) {
    const status = await checkServiceHealth(serviceUrl);

    if (!status.healthy) {
      consecutiveFailures++;

      await sendAlert(serviceUrl, status.latencyMs);

      if (consecutiveFailures >= 10) { 
        throw new Error(`${serviceUrl} unrecoverable after 10 consecutive failures`); 
      }

      if (consecutiveFailures >= 3) { 
        await restartService(serviceUrl); 
        consecutiveFailures = 0; 
      }
    } else {
      consecutiveFailures = 0;
    }

    await sleep("30 seconds");
  }
}

The workflow tracks consecutiveFailures across iterations. After three consecutive failures, it escalates to a service restart and resets the counter to give the service a recovery window. After ten consecutive failures, a standard error permanently stops the workflow to prevent infinite restart loops. All of this state survives server restarts because the workflow is durable.
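The escalation thresholds can be factored into a small pure function, which keeps the loop body readable and makes the policy unit-testable on its own (escalate and HealthAction are illustrative names, not part of the workflow API):

```typescript
type HealthAction = "none" | "restart" | "fail";

// Mirrors the policy above: restart after 3 consecutive failures,
// give up permanently after 10.
export function escalate(consecutiveFailures: number): HealthAction {
  if (consecutiveFailures >= 10) return "fail";
  if (consecutiveFailures >= 3) return "restart";
  return "none";
}
```

The workflow loop would switch on the returned action: throw on "fail", call the restart step and reset the counter on "restart", and continue otherwise.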

FatalError is designed for use inside "use step" functions to stop step retries. At the workflow level, a standard throw new Error(...) produces the same workflow failure state.

Graceful Shutdown

To stop a recurring workflow from the outside, use a hook. Attach the listener once before the loop — when external code sends data to the hook, the loop breaks on the next iteration.

workflows/stoppable-job.ts
import { sleep, createHook } from "workflow";
declare function runJob(): Promise<void>; // @setup

export async function stoppableJobWorkflow() {
  "use workflow";

  const stopHook = createHook<{ reason: string }>({ 
    token: "stop-recurring-job", 
  }); 

  let stopped = false;
  stopHook.then(() => { stopped = true; }); 

  while (true) {
    if (stopped) { break; } 

    await runJob();
    await sleep("1 hour");
  }
}

To stop the workflow, call resumeHook() with the custom token from any external context:

app/api/stop-job/route.ts
import { resumeHook } from "workflow/api";

export async function POST() {
  await resumeHook("stop-recurring-job", { reason: "Manual shutdown" });
  return Response.json({ stopped: true });
}

The custom token "stop-recurring-job" lets external code address the hook without needing to know the workflow's run ID. See Custom Tokens for Deterministic Hooks for more on this pattern.
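If several instances of the same recurring job run at once, a single shared token becomes ambiguous; deriving a distinct token per instance keeps each one individually addressable. A minimal sketch (stopToken is an illustrative helper and the naming scheme is an assumption, not part of the workflow API):

```typescript
// Derives a deterministic hook token per job instance, so external
// code can address one specific recurring workflow without its runId.
export function stopToken(jobName: string): string {
  return `stop-${jobName}`;
}
```

Pass `stopToken(jobName)` as the token option to createHook inside the workflow, and the same value to resumeHook from the route handler.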

If you don't need to pass data when stopping the workflow, you can cancel a run directly using the Run object from start(): await run.cancel(). The hook-based approach above is more flexible since it lets the workflow receive a reason or other data before stopping.