Error Handling
Production workflows need to handle failures gracefully. VelaFlows provides three error-handling mechanisms: Try-Catch for wrapping risky operations, Retry for transient failures, and On-Error for defining global error responses.
Try-Catch Node
Wraps a group of nodes and catches any errors they produce. If an error occurs, execution jumps to the catch branch instead of failing the workflow.
Configuration
| Field | Description |
|---|---|
| Try body | The nodes to execute (connected as the “try” output) |
| Catch body | The nodes to execute if an error occurs (connected as the “catch” output) |
| Error categories | Optional regex-based categorization of error messages |
Error Categories
You can define regex patterns to categorize errors for targeted handling:
| Category | Pattern Example | Description |
|---|---|---|
| `network` | `ECONNREFUSED\|timeout\|ETIMEDOUT` | Network and connectivity errors |
| `auth` | `401\|403\|unauthorized\|forbidden` | Authentication and permission errors |
| `validation` | `422\|validation\|invalid` | Input validation errors |
| `rate_limit` | `429\|rate.limit\|too.many` | Rate limiting errors |
| `not_found` | `404\|not.found` | Resource not found errors |
Available in Catch
Inside the catch branch, these variables are available:
- `{{error.message}}` — The error message string
- `{{error.category}}` — The matched error category (if configured)
- `{{error.nodeType}}` — The type of node that failed
- `{{error.nodeId}}` — The ID of the node that failed
Example
Try-Catch
Try:
--> HTTP Request: POST https://external-api.com/data
--> Update Lead with response data
Catch:
--> Condition: {{error.category}} === "rate_limit"
True --> Delay: 60 seconds --> Retry the request
False --> Add Note: "API call failed: {{error.message}}"
--> Notify team via Slack
Retry Node
Automatically retries a failed operation with configurable backoff. Useful for transient errors like network timeouts and rate limits.
Configuration
| Field | Description |
|---|---|
| Max attempts | Maximum number of retry attempts (e.g., 3) |
| Backoff strategy | fixed, linear, or exponential |
| Base delay | Starting delay between retries in seconds (e.g., 5) |
| Error categories | Which error categories to retry on (uses same regex-based categorization as Try-Catch) |
Backoff Strategies
| Strategy | Delay Pattern (base=5s) | Use Case |
|---|---|---|
| Fixed | 5s, 5s, 5s | Simple retry with consistent spacing |
| Linear | 5s, 10s, 15s | Gradually increasing backoff |
| Exponential | 5s, 10s, 20s | Aggressive backoff for rate-limited APIs |
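The three delay patterns in the table reduce to a small formula. A sketch (the strategy names and base of 5 seconds follow the table; the function itself is illustrative):

```python
def backoff_delay(strategy: str, base: float, attempt: int) -> float:
    """Delay in seconds before retry number `attempt` (1-based)."""
    if strategy == "fixed":
        return base                       # 5s, 5s, 5s
    if strategy == "linear":
        return base * attempt             # 5s, 10s, 15s
    if strategy == "exponential":
        return base * 2 ** (attempt - 1)  # 5s, 10s, 20s
    raise ValueError(f"unknown strategy: {strategy}")

print([backoff_delay("exponential", 5, n) for n in (1, 2, 3)])  # [5, 10, 20]
```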
Example
Retry (max: 3, exponential, base: 5s)
--> HTTP Request: POST https://payment-api.com/charge
If the HTTP request fails:
- Wait 5 seconds, retry
- If still failing, wait 10 seconds, retry
- If still failing, wait 20 seconds, retry
- If still failing, propagate the error to the parent (Try-Catch or On-Error)
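The sequence above is the classic retry-then-propagate loop. A minimal sketch, assuming exponential backoff and a hypothetical `operation` standing in for the HTTP request node:

```python
import time

def retry(operation, max_attempts=3, base=5.0):
    """Run `operation`, retrying with exponential backoff on failure.

    After max_attempts failures, the last error is re-raised, mirroring
    how a Retry node propagates the error to its parent (Try-Catch or
    On-Error).
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts:
                raise  # retries exhausted: propagate to the parent
            time.sleep(base * 2 ** (attempt - 1))  # 5s, 10s, 20s
```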
On-Error Node
Defines a global error handler for a section of the workflow. When any node in its scope fails (and is not caught by a Try-Catch), the On-Error action executes.
Configuration
| Field | Description |
|---|---|
| Action | What to do on error |
Actions
| Action | Description |
|---|---|
log | Log the error to the execution history |
notify | Send a notification to configured channels |
stop | Stop the workflow execution immediately |
escalate | Escalate to a team lead or manager |
Example
On-Error (action: notify)
--> Enrich Person (Apollo)
--> Validate Email (ZeroBounce)
--> Update Lead
If any of these nodes fail, the team is notified with the error details, and the workflow stops.
Combining Error Handlers
For robust workflows, combine all three:
On-Error (action: escalate) <-- Global fallback
--> Try-Catch <-- Catch specific errors
Try:
--> Retry (3x, exponential) <-- Handle transient failures
--> HTTP Request
--> Process Response
Catch:
--> Log Error
--> Send fallback message to customer
Execution flow on failure:
- HTTP Request fails: the Retry node retries up to 3 times
- All retries are exhausted: the error propagates to the Try-Catch
- The catch branch executes: logs the error and sends the fallback message
- If the catch branch itself fails: On-Error escalates to the team
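In host-language terms, this layering behaves like nested exception handlers. A rough analogy, not VelaFlows internals; all names here are hypothetical stubs, and the external call is hard-coded to fail so the flow reaches the catch branch:

```python
events = []  # records what each layer did, for illustration

def http_request():
    raise ConnectionError("ECONNREFUSED")  # the external call keeps failing

def retry(operation, max_attempts=3):
    """Retry node: re-run the operation, then propagate the last error."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception:
            events.append(f"attempt {attempt} failed")
            if attempt == max_attempts:
                raise  # exhausted: hand the error to the Try-Catch

def run_workflow():
    try:                                # Try-Catch node
        retry(http_request)             # Retry node (3x) inside the try body
    except Exception as error:          # catch branch
        events.append(f"logged: {error}")
        events.append("fallback message sent")

try:                                    # On-Error scope: global safety net
    run_workflow()
except Exception:
    events.append("escalated to team")  # only reached if the catch branch fails

print(events)
```

Here the catch branch succeeds, so the On-Error layer never fires; it exists for the case where even the fallback path raises.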
Best Practices
- Always wrap external API calls in Try-Catch or Retry nodes. External services are inherently unreliable.
- Use Retry for transient errors like network timeouts, 429 rate limits, and 503 service unavailable.
- Use Try-Catch for business logic errors like missing data, invalid responses, or expected failure scenarios.
- Use On-Error as a safety net at the top level to ensure no failure goes unnoticed.
- Log errors even when handling them gracefully. The execution log is invaluable for debugging.