Limits

Kyuda imposes limits on source and pipeline execution, the events you send to Kyuda, and other properties. You'll receive an error if you exceed any of these limits.

Some of these limits apply only on the free tier. For example, Kyuda limits the daily number of invocations and execution time you can use on the free tier. On paid tiers, you can run an unlimited number of invocations for any amount of execution time.

Other limits apply to both the free and paid tiers, but many can be raised upon request.

These limits are subject to change at any time.

Number of Sources

You can run an unlimited number of sources, as long as each operates under the other limits.

Number of Pipelines

You can run an unlimited number of pipelines, as long as each operates under the other limits.

Daily and Monthly Invocations

Daily and Monthly Compute Time

HTTP Triggers

HTTP Request Body Size

By default, the body of HTTP requests sent to a source or workflow is limited to 512KB.

Your endpoint will issue a 413 Payload Too Large status code when the body of your request exceeds 512KB.

Kyuda supports a way to bypass this limit: you can upload multiple large files, such as images and videos up to 5TB, by sending them as multipart/form-data.
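
For example, here is a minimal sketch of a client that sends a large file as multipart/form-data rather than as a raw request body. The endpoint URL and field names are hypothetical placeholders; substitute your own endpoint.

```python
import requests

# Hypothetical endpoint URL for illustration only.
ENDPOINT = "https://example.kyuda.example/your-endpoint-id"

# Sending the file as multipart/form-data avoids the 512KB raw-body limit,
# which would otherwise cause a 413 Payload Too Large response.
with open("video.mp4", "rb") as f:
    response = requests.post(
        ENDPOINT,
        files={"video": ("video.mp4", f, "video/mp4")},
        data={"description": "Raw footage"},  # ordinary form fields, if needed
    )

response.raise_for_status()
print(response.status_code, response.text)
```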

Queries Per Second

The rate of HTTP requests sent to an endpoint is typically quantified in queries per second, or QPS, where a query refers to a single HTTP request.

You can send an average of 10 requests per second to your HTTP trigger. Any requests that exceed that threshold may trigger rate limiting. If you're rate limited, we'll return a 429 Too Many Requests response. If you control the application sending requests, you should retry the request with exponential backoff or a similar technique.
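
Below is a minimal sketch of a client-side retry loop with exponential backoff and jitter, assuming a hypothetical endpoint URL. The retry counts and delays are illustrative, not prescribed values.

```python
import random
import time

import requests

# Hypothetical endpoint URL for illustration only.
ENDPOINT = "https://example.kyuda.example/your-endpoint-id"


def post_with_backoff(payload, max_retries=5):
    """POST to the HTTP trigger, retrying with exponential backoff on 429 responses."""
    for attempt in range(max_retries):
        response = requests.post(ENDPOINT, json=payload)
        if response.status_code != 429:
            return response
        # Wait 1s, 2s, 4s, ... plus a little jitter before retrying.
        time.sleep(2 ** attempt + random.random())
    raise RuntimeError("Still rate limited after retries")


resp = post_with_backoff({"event": "example"})
print(resp.status_code)
```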

Email Triggers

Currently, most of the limits that apply to HTTP triggers also apply to email triggers.

The only limit that differs between email and HTTP triggers is the payload size: the total size of an email sent to a workflow (its body, headers, and attachments) is limited to 30MB.

Memory

By default, workflows run with 256MB of memory. You can modify a pipeline's memory in your pipeline's settings, up to 10GB.

Increasing your pipeline's memory gives you a proportional increase in CPU. If your pipeline is limited by memory or compute, raising its memory can reduce its overall runtime and improve performance.
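
If you're unsure whether a step is memory-bound, one option is to log peak memory usage from within your code and compare it against the configured limit. The sketch below assumes a Linux runtime, where ru_maxrss is reported in kilobytes.

```python
import resource

# Peak memory used by this process so far. On Linux, ru_maxrss is in
# kilobytes; convert to MB to compare against the pipeline's memory
# setting (256MB by default).
peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print(f"Peak memory so far: {peak_kb / 1024:.1f} MB")
```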

Disk

Your code, or a third-party library, may need access to disk during the execution of your pipeline or event source. You have access to 2GB of disk in the /tmp directory.
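
The sketch below shows one way to use /tmp for intermediate files and to check remaining space before writing anything large. The file name is a placeholder.

```python
import os
import shutil

TMP_DIR = "/tmp"

# Write an intermediate file to /tmp, the writable disk location
# available during execution.
path = os.path.join(TMP_DIR, "intermediate.csv")
with open(path, "w") as f:
    f.write("id,value\n1,42\n")

# Check remaining space before writing anything large; total capacity is 2GB.
usage = shutil.disk_usage(TMP_DIR)
print(f"Free space in /tmp: {usage.free / 1024**3:.2f} GB")

# Clean up when done to avoid filling the 2GB allocation.
os.remove(path)
```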

Pipelines

Time per Execution

Event and Execution History

Logs, Steps and Exports
