Compute resources are the fundamental units that represent the processing power and memory available to your projects. The primary compute resources are vCPU and RAM.

This documentation outlines the key aspects of compute resources in the context of the Nhost Cloud Platform.

To further improve availability and fault tolerance, check out Service Replicas.

Shared Compute

In a shared model, compute resources are shared among users. This is fine if your services mostly run at low to medium load, only burst for brief periods, and can tolerate occasional drops in performance. It is important to understand that the availability of CPU time is not guaranteed.

Free Plan

Projects on the Free Plan have a total of 2 shared vCPUs and 1 GiB of RAM, spread across services as follows:

Service    CPU (vCPU)   Memory (MiB)
Postgres   0.5          256
Hasura     0.5          384
Auth       0.5          256
Storage    0.5          128
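These allocations add up to the plan totals: 4 × 0.5 = 2 shared vCPUs and 256 + 384 + 256 + 128 = 1024 MiB, i.e. 1 GiB of RAM.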

Pro Plan

Projects on the Pro Plan have a total of 2 shared vCPUs and 2 GiB of RAM, spread across services as follows:

Service    CPU (vCPU)   Memory (MiB)
Postgres   0.5          512
Hasura     0.5          768
Auth       0.5          384
Storage    0.5          384
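As on the Free Plan, the allocations add up to the plan totals: 4 × 0.5 = 2 shared vCPUs and 512 + 768 + 384 + 384 = 2048 MiB, i.e. 2 GiB of RAM.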

Dedicated Compute

For production workloads where low latency is essential or consistent performance is non-negotiable, we strongly recommend using dedicated compute resources.

Dedicated compute resources are only available on the Pro Plan.

To set up dedicated resources for your project, you can use either the Dashboard or the Config:

nhost/nhost.toml
[hasura.resources.compute]
cpu = 500
memory = 1024

[auth.resources.compute]
cpu = 500
memory = 1024

[postgres.resources.compute]
cpu = 500
memory = 1024

[storage.resources.compute]
cpu = 500
memory = 1024
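In the example above, every service receives the same allocation. The values are consistent with the figures used elsewhere on this page: cpu appears to be expressed in milli-vCPU (500 = 0.5 vCPU) and memory in MiB (1024 = 1 GiB). As a minimal sketch under that assumption, and using purely illustrative values, a memory-hungry Postgres could be given a larger allocation than the other services:

nhost/nhost.toml
[postgres.resources.compute]
# illustrative values; assumes cpu is in milli-vCPU and memory in MiB
cpu = 1000    # 1 dedicated vCPU
memory = 2048 # 2 GiB of RAM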

Bursting with Dedicated Compute

When using dedicated compute, we allow your application to use more than its allotted CPU resources if those resources are available. This means what we are calling dedicated compute is, in fact, guaranteed compute. For instance:

[Figure: resource allocation]

In the graph above we can see three applications assigned to the same node, each with its own dedicated compute (the solid blocks). However, any application is allowed to use the non-solid region of compute as long as the other projects are not using it:

[Figure: resource utilization]

Above we can see three different scenarios:

  • In scenario A the green application is barely using its allotted CPU, so the other applications can borrow it if they need it.
  • Similarly, in scenario B the green application can borrow resources from the other applications if it needs them and they aren't using them.
  • If all applications need all of their resources at once, none can borrow from the others, as resources are guaranteed per application.

This borrowing of resources is convenient for short, unexpected bursts. However, since borrowed resources are not guaranteed, you shouldn't rely on them for sustained usage.
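For example, a service configured with cpu = 500 (0.5 dedicated vCPU) may temporarily use more than half a vCPU while the other projects on the same node are idle, but only the 0.5 vCPU it was allocated remains guaranteed once those neighbours become busy.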

Disk Performance

By default, disks are provisioned with a capacity of 3000 IOPS and 125 Mbps of throughput. If you need higher performance, don't hesitate to contact us.