Try / Catch Node
The Try / Catch node adds error handling and retry logic to your workflows. It catches errors from upstream nodes and can automatically retry the failed section before routing to an error path.
How It Works
Place the Try / Catch node after a sequence of steps you want to protect. When the flow reaches this node:
- If all upstream steps completed successfully, the flow continues through the SUCCESS output.
- If an error occurred, the node retries the failed section based on your retry configuration.
- If all retries are exhausted, the flow routes through the ERROR output.
Start → API Call → Process Data → [Try / Catch]
├── SUCCESS → Save Result → End
└── ERROR → Send Alert → End
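In behavior, the node acts like a retry loop around the protected steps. Here is a minimal Python sketch of that loop; the function name, parameters, and port labels are illustrative, not the engine's actual API:

```python
import time

def run_protected(step, max_retries=3, delay_s=2.0):
    """Run `step`; on failure, retry up to `max_retries` times,
    then route to the ERROR path. Illustrative sketch only."""
    last_error = None
    for attempt in range(1 + max_retries):   # first run + retries
        try:
            return ("SUCCESS", step())       # continue through SUCCESS
        except Exception as exc:
            last_error = exc
            if attempt < max_retries:
                time.sleep(delay_s)          # fixed delay between attempts
    return ("ERROR", last_error)             # retries exhausted
```

Note that with `max_retries=0` the loop runs once and any error goes straight to the ERROR path, matching the catch-only configuration described below.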
Configuration
Max Retries
The maximum number of retry attempts before giving up. Set to 0 to disable retries and only catch errors.
| Value | Behavior |
|---|---|
| 0 | No retries - errors go directly to the ERROR output |
| 1–3 | Good for transient failures (network timeouts, rate limits) |
| 5+ | Use with caution - consider the total execution time |
Backoff Type
Controls how the delay between retries changes over time.
- Fixed - Same delay every time. Example with a 2s delay:
2s → 2s → 2s → 2s
- Exponential - Delay doubles with each attempt. Example with a 1s delay:
1s → 2s → 4s → 8s
Exponential backoff is recommended when calling external APIs, as it gives overloaded services progressively more time to recover.
Retry Delay
The base wait time between retry attempts, expressed in the selected delay unit (milliseconds, seconds, or minutes).
For exponential backoff, this is the initial delay - subsequent delays are calculated as delay × 2^(attempt - 1).
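The two backoff modes reduce to one small formula. A short Python sketch (the function name is illustrative):

```python
def retry_delay(base, attempt, backoff="fixed"):
    """Delay before retry number `attempt` (1-based), in the base unit."""
    if backoff == "exponential":
        return base * 2 ** (attempt - 1)  # delay doubles each attempt
    return base                           # fixed: same delay every time

# Fixed with a 2s base:        2, 2, 2, 2
# Exponential with a 1s base:  1, 2, 4, 8
```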
On Retries Exhausted
What happens when all retry attempts have failed:
- Continue to error path - Routes the flow through the ERROR output port. Use this when you want to handle the failure gracefully (log it, send a notification, return a fallback value).
- Stop flow with error - Terminates the entire workflow execution with an error status. Use this when the failure is unrecoverable.
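The difference between the two options is only what happens after the last failed attempt. Sketched in Python (the function and mode names are hypothetical):

```python
def on_retries_exhausted(last_error, mode="continue"):
    """Decide what happens once every retry has failed. Illustrative only."""
    if mode == "continue":
        # "Continue to error path": hand the error to the ERROR branch
        return ("ERROR", last_error)
    # "Stop flow with error": abort the entire workflow execution
    raise last_error
```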
Outputs
| Port | Description |
|---|---|
| SUCCESS | The flow continues here when no error occurred or a retry succeeded |
| ERROR | The flow continues here when all retries are exhausted (only if "Continue to error path" is selected) |
Examples
Retry a flaky API call
HTTP Request → [Try / Catch: 3 retries, 2s fixed delay]
├── SUCCESS → Parse Response
└── ERROR → Return Default Value
Exponential backoff for rate-limited APIs
HTTP Request → [Try / Catch: 5 retries, 1s exponential]
├── SUCCESS → Store Data
└── ERROR → Log Error → Notify Team
Catch-only (no retry)
Code Node → [Try / Catch: 0 retries]
├── SUCCESS → Continue
└── ERROR → Send Error Report
Things to Keep in Mind
- Nodes upstream of the Try / Catch will re-execute on retry. If a node sends an email or writes to a database, that side effect will happen again. Make sure retried operations are safe to repeat (idempotent).
- Total execution time adds up. 5 retries with a 1s exponential backoff means waiting up to 1 + 2 + 4 + 8 + 16 = 31 seconds before the error path triggers.
- The retry count and last error message are available as variables in downstream nodes, so you can include them in error reports or logs.
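The worst-case wait before the error path fires is just the sum of the per-attempt delays, which is easy to compute up front when sizing a retry configuration (variable names here are illustrative):

```python
base_s = 1   # initial delay in seconds
retries = 5  # max retry attempts
# Exponential backoff: delay before retry n is base_s * 2**(n - 1)
waits = [base_s * 2 ** (n - 1) for n in range(1, retries + 1)]  # [1, 2, 4, 8, 16]
total_wait_s = sum(waits)  # 31 seconds of delay before the ERROR path triggers
```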