Operations·6 min read

Rate Limits, Quotas & API Errors: n8n Troubleshooting

429 errors are among the most common workflow failures. Understanding rate limits, quotas, and how to build resilient workflows saves hours of debugging.

December 2, 2025

Your workflow worked yesterday. Today it's failing with "429 Too Many Requests."

Rate limits are one of the most common—and most misunderstood—causes of n8n workflow failures. This guide covers what they are, why they happen, and how to handle them effectively.

Understanding rate limits

APIs implement rate limits to protect their infrastructure. If any user could make unlimited requests, a few heavy users would degrade the service for everyone.

Rate limits typically work like this:

  • You're allowed X requests per time window (e.g., 60 requests per minute)
  • Exceed that limit and subsequent requests are rejected
  • After the time window resets, you can make requests again

Different APIs have different limits, and those limits vary by plan tier. Free plans have strict limits. Enterprise plans have generous limits. Know what you're working with.

How rate limits appear in n8n

When you hit a rate limit, you'll typically see:

HTTP 429 status code. This is the standard "too many requests" response.

Error messages mentioning:
- "Rate limit exceeded"
- "Too many requests"
- "Quota exceeded"
- "Slow down"
- "Request throttled"

Retry-After headers. Some APIs tell you exactly how long to wait before retrying. n8n doesn't always surface this clearly, but it's in the response.
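
If you want to honor that header, you can read it on an error path. Here's a minimal Code node sketch, assuming the upstream HTTP Request node is configured to return the full response (status code and headers) and to continue on error; the exact field names depend on your node's settings:

```js
// Runs in a Code node ("Run Once for All Items").
// Assumes the upstream HTTP Request node returns the full response, so
// statusCode and headers are present on each item; adjust the field
// names to match your node's actual output.
const results = [];

for (const item of $input.all()) {
  const status = item.json.statusCode;
  const headers = item.json.headers || {};

  // Retry-After is usually seconds; fall back to 60 if it is missing
  // or expressed as an HTTP date.
  const waitSeconds = Number(headers['retry-after']) || 60;

  results.push({
    json: {
      rateLimited: status === 429,
      waitSeconds,
    },
  });
}

return results;
```

You can then feed waitSeconds into a Wait node before looping back to the request.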

The workflow execution fails, and if you have error handling, it routes to your error path. If you don't have error handling, the execution just stops.

Common rate limit scenarios

Scenario 1: Burst during batch processing

You're syncing 500 records from one system to another. The workflow triggers for each record, and 500 parallel requests hit the API. Instant rate limit.

Solution: Add delays between items. Use n8n's "Split in Batches" node with a wait between batches. Process 10 at a time with a 2-second delay, and you'll stay under most limits.

Scenario 2: Polling too frequently

Your workflow polls an API every minute to check for new data, and each poll pulls several pages of results. One workflow stays comfortably under the API's limit of 30 requests per minute. Then you add a second polling workflow against the same key, then a third, and the overlapping bursts push you over.

Solution: Audit all workflows hitting the same API. Consolidate polling where possible. Increase poll intervals if real-time isn't required.

Scenario 3: Shared rate limits across clients

You're using one API key across multiple client workflows. Each client's workflows are under the limit individually, but combined they exceed it.

Solution: Use separate API keys per client where possible. This isolates rate limits and prevents one client's usage from affecting others.

Scenario 4: Tiered limits you didn't know about

The API has 60 requests/minute, but also 1,000 requests/hour, and 10,000 requests/day. You stayed under the minute limit but hit the daily cap.

Solution: Read the API documentation carefully. Track usage at all limit tiers, not just the most obvious one.

Building rate-limit-resilient workflows

Add retry logic

n8n has built-in retry options on many nodes:

  1. Open the node settings
  2. Look for "Retry on Fail" or similar
  3. Configure retry count and delay

For API calls, 3 retries with a short delay between attempts handles most transient rate limits. Note that the built-in option waits a fixed interval between tries; if you want increasing delays, implement the backoff yourself (see the next section).

Implement backoff

If simple retries aren't enough, use exponential backoff:

  • First retry: wait 1 second
  • Second retry: wait 2 seconds
  • Third retry: wait 4 seconds
  • And so on

This gives the rate limit time to reset while avoiding hammering a busy API.
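
If you want this pattern inside a single Code node, here's a minimal sketch. It assumes the Code node's this.helpers.httpRequest helper and a placeholder URL, and the 429 check may need adjusting to whatever error shape your request method actually throws:

```js
// Retry a request with exponential backoff: 1s, 2s, 4s, 8s, ...
async function withBackoff(makeRequest, maxRetries = 4, baseDelayMs = 1000) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await makeRequest();
    } catch (error) {
      // The exact error shape depends on the HTTP client; adjust this check.
      const isRateLimit = error.statusCode === 429 || error.httpCode === '429';
      if (!isRateLimit || attempt >= maxRetries) throw error;

      const delayMs = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}

// Placeholder URL; swap in your real endpoint and authentication.
const data = await withBackoff(() =>
  this.helpers.httpRequest({
    url: 'https://api.example.com/records',
    json: true,
  })
);

return [{ json: { data } }];
```

In practice you may also want to add random jitter to the delay so that parallel executions don't all retry at the same moment.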

Batch and delay

For bulk operations:

  1. Split items into smaller batches (10-50 items)
  2. Process each batch with a workflow
  3. Wait between batches (calculated to stay under rate limit)

If the API allows 60 requests/minute, processing a batch of 10 every 15 seconds works out to about 40 requests per minute and leaves comfortable headroom.
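
If you'd rather calculate the wait than guess it, here's a small Code node sketch with assumed numbers; feed waitSeconds into the Wait node between batches:

```js
// Work out how long to wait between batches, with headroom.
// The values here are assumptions; replace them with your API's real limits.
const requestsPerMinuteLimit = 60; // from the API docs
const batchSize = 10;              // one request per item in this example
const safetyFactor = 0.7;          // only use 70% of the allowed rate

const allowedPerMinute = requestsPerMinuteLimit * safetyFactor; // 42 requests/min
const batchesPerMinute = allowedPerMinute / batchSize;          // 4.2 batches/min
const waitSeconds = Math.ceil(60 / batchesPerMinute);           // 15 seconds

return [{ json: { batchSize, waitSeconds } }];
```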

Cache where possible

If you're repeatedly fetching the same data, cache it:

  • Store results in a database or file
  • Check cache before calling API
  • Refresh cache on a schedule, not on every request

This dramatically reduces API calls for read-heavy workflows.
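
One lightweight way to do this in n8n is workflow static data, which persists between production executions. A minimal Code node sketch, with a placeholder URL and an example one-hour refresh window:

```js
// Simple cache using n8n's workflow static data.
// Note: static data only persists across production executions of an active
// workflow (not manual test runs) and is meant for small amounts of data.
const cache = $getWorkflowStaticData('global');
const maxAgeMs = 60 * 60 * 1000; // refresh at most once per hour (example value)

const isFresh = cache.fetchedAt && Date.now() - cache.fetchedAt < maxAgeMs;

if (!isFresh) {
  // Only hit the API when the cache is stale.
  const response = await this.helpers.httpRequest({
    url: 'https://api.example.com/reference-data', // placeholder URL
    json: true,
  });
  cache.data = response;
  cache.fetchedAt = Date.now();
}

return [{ json: { fromCache: Boolean(isFresh), data: cache.data } }];
```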

Use webhooks instead of polling

Many APIs support webhooks—they call you when something changes instead of you repeatedly asking.

Polling: 60 requests/hour regardless of activity
Webhooks: Requests only when events occur

If the API supports it, webhooks are almost always better.

Quotas vs. rate limits

Rate limits reset on short time windows (minutes, hours). Quotas are usually monthly caps.

"You have 10,000 API calls per month" is a quota. Once you hit it, you're done until next month—or you pay for more.

Quota management is different from rate limit handling:

  • Track usage proactively. Don't discover you're out of quota when a workflow fails.
  • Set alerts. Notify yourself at 80% quota consumption (see the sketch after this list).
  • Plan for overages. Know what happens when you exceed. Some APIs charge per-request overage. Others hard-block.
  • Upgrade before you hit limits. If usage is trending up, upgrade preemptively.
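
To make the alerting point concrete, here's a minimal Code node sketch that flags when usage crosses 80%. The quota figure and the callsUsedThisMonth field are assumptions; pull the real numbers from your API's usage endpoint, its rate-limit headers, or a counter you maintain yourself:

```js
// Flag when monthly quota consumption crosses 80%.
// Both values below are assumptions; wire in your real usage source.
const monthlyQuota = 10000; // calls included in your plan
const used = $input.first().json.callsUsedThisMonth; // assumed field from an earlier node

const usagePercent = Math.round((used / monthlyQuota) * 100);

return [{
  json: {
    used,
    monthlyQuota,
    usagePercent,
    alert: usagePercent >= 80, // route to Slack/email via an IF node when true
  },
}];
```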

Debugging rate limit issues

When you're hitting rate limits and don't know why:

Check the API dashboard. Most APIs show your usage and remaining quota. This tells you whether you're actually hitting limits or if the error is something else.

Audit all workflows. Multiple workflows might be hitting the same API. The culprit might not be the one that's failing.

Check for loops. A workflow that triggers itself can burn through rate limits instantly.

Review recent changes. Did someone increase polling frequency? Add a new integration? Change the scope of a sync?

Test in isolation. Temporarily disable other workflows hitting the same API. If the problem goes away, you've found a shared limit issue.

Communicating with clients

When rate limits affect a client:

Explain simply. "The service limits how many requests we can make per minute. We exceeded that limit due to increased usage."

Describe the fix. "We've added delays to spread requests out over time, which avoids hitting the limit."

Set expectations. "If your data volume continues to grow, we may need to upgrade to a higher API tier."

Don't make it sound like a catastrophic failure—rate limits are normal operational concerns. Professional handling is expected.

Building for the future

APIs tend to get stricter with rate limits over time, not looser. Build with this in mind:

  • Always include retry logic, even if you've never hit limits
  • Track API usage across workflows, not just individually
  • Prefer webhooks and event-driven architectures over polling
  • Budget for higher API tiers as clients scale

Rate limits are a fact of life when working with external APIs. The difference between a fragile workflow and a resilient one is how you handle them.

Last updated on January 31, 2026
