Ensuring a healthy and performant PostgreSQL service is crucial, as it directly affects the response time and stability of your backend. Since Postgres sits at the center of your backend, prioritize its optimization and maintenance to achieve the performance and reliability you need.
If your Postgres service is not meeting your performance expectations, you can explore the following options:
Consider upgrading your dedicated compute resources to provide more processing power and memory to the Postgres server.
Fine-tune the configuration parameters of Postgres to optimize its performance. Adjust settings such as effective_cache_size to better align with your workload and server resources.
Identify and analyze slow-performing queries using tools like query logs or query monitoring extensions. Optimize or rewrite these queries to improve their efficiency.
Evaluate the usage of indexes in your database. Identify queries that could benefit from additional indexes and strategically add them to improve query performance.
Increase the disk size to improve disk throughput. Keep in mind that increasing the disk size isn’t reversible, and increasing the memory of the service may yield better results. A larger disk mostly helps when your data is very volatile and the Postgres cache can’t work effectively. Only increase disk size for performance reasons if your reads and writes are very high and adding memory hasn’t helped.
By implementing these steps, you can effectively address performance concerns and keep your Postgres service fast and reliable.
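To decide between the memory and disk options above, one useful signal is the buffer cache hit ratio, which Postgres tracks in pg_stat_database as blks_hit and blks_read. The helper below is a minimal Python sketch of that calculation; the function name and the ~0.99 threshold mentioned in the comment are illustrative rules of thumb, not fixed Postgres values.

```python
def cache_hit_ratio(blks_hit: int, blks_read: int) -> float:
    """Fraction of block requests served from Postgres' shared buffers.

    blks_hit / blks_read come from pg_stat_database, e.g.:
      SELECT blks_hit, blks_read FROM pg_stat_database
      WHERE datname = current_database();

    A ratio well below ~0.99 on a read-heavy workload suggests that
    adding memory is likely to help more than adding disk.
    """
    total = blks_hit + blks_read
    if total == 0:
        return 1.0  # no traffic yet; nothing to conclude
    return blks_hit / total

# Example: 990,000 buffer hits vs 10,000 disk reads
print(cache_hit_ratio(990_000, 10_000))  # 0.99
```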
Upgrade to our latest Postgres image
Before trying anything else, upgrade to our latest Postgres image. You can find our available images in our dashboard, under your database settings.
Upgrade to dedicated compute
Increasing CPU and memory is the simplest way to address performance issues. You can read more about compute resources here.
Fine-tune configuration parameters
When optimizing your Postgres setup, you can consider adjusting various Postgres settings. You can find a list of these parameters here. Keep in mind that the optimal values for these parameters will depend on factors such as available resources, workload, and data distribution.
To help you get started, you can use pgtune as a reference tool. Pgtune can generate recommended configuration settings based on your system specifications. By providing information about your system, it can suggest parameter values that may be a good starting point for optimization.
However, it’s important to note that the generated settings from pgtune are not guaranteed to be the best for your specific environment. It’s always recommended to review and customize the suggested settings based on your particular requirements, performance testing, and ongoing monitoring of your Postgres database.
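As a rough illustration of the kind of starting points pgtune produces, the sketch below applies two widely cited rules of thumb: shared_buffers around 25% of RAM and effective_cache_size around 75%. These are not pgtune's exact formulas, which also account for workload type and connection count; treat the output purely as a first guess to validate with your own testing.

```python
def suggest_memory_settings(total_ram_mb: int) -> dict:
    """Rough starting values based on common rules of thumb, not
    pgtune's exact output: shared_buffers ~25% of total RAM,
    effective_cache_size ~75%. Always validate against your workload.
    """
    return {
        "shared_buffers": f"{total_ram_mb // 4}MB",
        "effective_cache_size": f"{total_ram_mb * 3 // 4}MB",
    }

# Example: a server with 8 GB of RAM
print(suggest_memory_settings(8192))
# {'shared_buffers': '2048MB', 'effective_cache_size': '6144MB'}
```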
Identifying slow queries
Monitoring slow queries is a highly effective method for tackling performance issues. Several tools leverage pg_stat_statements, a PostgreSQL extension, to provide constant monitoring. You can employ these tools to identify and address slow queries in real-time.
First, make sure the extension pg_stat_statements is enabled.
Click on this one-click install link
Select your project:
Replace the placeholders with your Postgres password, your subdomain, and a user and password to protect your pghero service. Then click on create.
After confirming the service, copy the URL:
Now open the link you just copied to access pghero:
When you create a new service, it can take a few minutes for the DNS (Domain Name System) to propagate. If your browser displays an error stating that it couldn’t find the server or website, simply wait for a couple of minutes and then try again.
After successfully setting up pghero, it will begin displaying slow queries, suggesting index proposals, and offering other valuable information. Utilize this data to enhance your service’s performance.
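Once pg_stat_statements is collecting data, ranking its rows by total execution time is the usual way tools like pghero surface slow queries. The sketch below mimics that ranking on invented sample rows; the column name total_exec_time follows PostgreSQL 13 and later (older versions call it total_time).

```python
def top_slow_queries(rows, limit=5):
    """Rank query statistics by cumulative execution time.

    `rows` mimics pg_stat_statements output: dicts with the columns
    query, calls and total_exec_time (named total_time before
    PostgreSQL 13). The slowest queries overall come first.
    """
    return sorted(rows, key=lambda r: r["total_exec_time"], reverse=True)[:limit]

# Invented sample data for illustration
sample = [
    {"query": "SELECT * FROM orders WHERE user_id = $1",
     "calls": 50_000, "total_exec_time": 420_000.0},
    {"query": "SELECT 1",
     "calls": 1_000_000, "total_exec_time": 9_000.0},
    {"query": "UPDATE stock SET qty = qty - $1 WHERE id = $2",
     "calls": 20_000, "total_exec_time": 180_000.0},
]

for r in top_slow_queries(sample, limit=2):
    print(r["query"])
```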
Detect missing indexes
Indexes can significantly speed up data retrieval. However, they introduce additional overhead on writes (INSERT, UPDATE, DELETE), so understanding your workload is crucial before deciding to add an index.
There are tools that can help you analyze your workload and detect missing indexes. For example, to run dexter:
- Enable hypopg
- Execute the command `docker run --rm -it ankane/dexter [POSTGRES_CONN_STRING] --pg-stat-statements`
```
$ docker run --rm -it ankane/dexter [POSTGRES_CONN_STRING] --pg-stat-statements
Processing 1631 new query fingerprints
No new indexes found
```
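To illustrate the general idea behind tools like dexter, here is a deliberately naive sketch: it proposes single-column indexes for filtered columns that have no index yet. This is not dexter's actual algorithm (dexter validates candidates with hypopg and EXPLAIN before recommending anything), and the table and column names are made up for the example.

```python
def suggest_indexes(filter_columns, existing_indexes):
    """Naive sketch: propose a single-column index for each column that
    appears in WHERE clauses but is not covered by an existing index.
    Real tools such as dexter test candidates with hypopg/EXPLAIN
    instead of guessing from column names alone.
    """
    indexed = {col for idx in existing_indexes for col in idx}
    return [f"CREATE INDEX ON {table} ({col});"
            for table, col in filter_columns if col not in indexed]

# Hypothetical workload: queries filter orders by user_id and id,
# but only id (the primary key) is indexed so far.
stmts = suggest_indexes(
    filter_columns=[("orders", "user_id"), ("orders", "id")],
    existing_indexes=[("id",)],
)
print(stmts)  # ['CREATE INDEX ON orders (user_id);']
```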