Your Database Is Probably Your Bottleneck
Specialised database performance tuning for PostgreSQL, MySQL, and MongoDB. We analyse query plans, redesign indexes, tune connection pooling, and reduce your database P99 latency without a schema rewrite.
You might be experiencing...
Database performance tuning is frequently the highest-return engineering work available in a distributed system. The database is the stateful component that cannot be horizontally scaled as freely as application servers, and query performance problems compound non-linearly under load. A query that runs in 50ms with 10 concurrent users may take 3 seconds with 500 concurrent users due to lock contention, buffer pool pressure, and connection saturation.
The starting point is always pg_stat_statements (or its equivalent): a ranked view of query execution frequency and total time that reveals which queries matter most. A query that runs in 200ms but executes 50,000 times per minute contributes 10,000 seconds of database time per minute — optimising it from 200ms to 20ms has a larger impact than fixing a 2-second query that runs 100 times per day.
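As a sketch of what that ranked view looks like, here is the kind of query we start from (column names follow PostgreSQL 13+, where `total_exec_time` replaced `total_time`; on older versions the names differ):

```sql
-- Top 10 queries by total execution time. Mean latency alone is
-- misleading: a fast query executed constantly can dominate total
-- database time, exactly as in the 200ms x 50,000/min example above.
SELECT
    queryid,
    calls,
    round(total_exec_time::numeric, 1) AS total_ms,
    round(mean_exec_time::numeric, 2)  AS mean_ms,
    round((100 * total_exec_time
           / sum(total_exec_time) OVER ())::numeric, 1) AS pct_of_total,
    left(query, 80)                    AS query_head
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```

Sorting by `total_exec_time` rather than `mean_exec_time` is what surfaces the high-frequency queries that actually dominate database time.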
Index design is where most of the latency reduction comes from. Composite indexes, partial indexes, and covering indexes can convert a sequential scan of millions of rows into an index lookup returning results in single-digit milliseconds. We back every index change with a before/after query plan comparison, and we validate it under realistic concurrency, because an index that helps at 10 connections may behave differently at 500.
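To make the three index types concrete, here is an illustrative sketch against a hypothetical `orders` table (the table and column names are examples, not a client schema):

```sql
-- Composite index: serves WHERE tenant_id = ? AND created_at > ?
-- (equality column first, range column second).
CREATE INDEX idx_orders_tenant_created
    ON orders (tenant_id, created_at);

-- Partial index: covers only the rows the hot query touches,
-- so it stays small and cheap to maintain.
CREATE INDEX idx_orders_pending
    ON orders (created_at)
    WHERE status = 'pending';

-- Covering index (PostgreSQL 11+): INCLUDE carries extra columns
-- so an index-only scan can skip heap fetches entirely.
CREATE INDEX idx_orders_tenant_covering
    ON orders (tenant_id, created_at)
    INCLUDE (total_amount);
```

Column order in the composite index matters: leading with the equality predicate and following with the range predicate keeps the scan contiguous in the index.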
Engagement Phases
Query & Schema Analysis
We analyse pg_stat_statements (PostgreSQL) or Performance Schema (MySQL) to identify your top 50 slowest and most-executed queries. We run EXPLAIN ANALYZE on each and identify sequential scans on large tables, unused indexes, inefficient join orders, and lock contention patterns.
Index & Configuration Tuning
We design and implement index changes: composite indexes for multi-column WHERE clauses, partial indexes for filtered queries, covering indexes to eliminate heap fetches. We tune database configuration parameters (work_mem, shared_buffers, max_connections) and configure or optimise PgBouncer or ProxySQL for connection pooling.
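For the configuration parameters named above, a minimal illustration of how such changes are applied in PostgreSQL follows. The values are placeholders only; appropriate settings depend on available RAM, workload shape, and connection count, and we validate every change under load:

```sql
-- Illustrative starting points, not recommendations.
ALTER SYSTEM SET shared_buffers = '8GB';   -- often ~25% of RAM on a dedicated host
ALTER SYSTEM SET work_mem = '32MB';        -- per sort/hash node, per query: multiplies fast
ALTER SYSTEM SET max_connections = 200;    -- keep modest; pool with PgBouncer instead

SELECT pg_reload_conf();  -- picks up reloadable settings;
                          -- shared_buffers still requires a restart
```

Note that `work_mem` is allocated per sort or hash operation, not per connection, which is why raising it without a connection pooler in front can exhaust memory under concurrency.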
Validation & Monitoring Setup
We validate every change with before/after EXPLAIN ANALYZE output and run a load test to confirm latency and CPU improvements under realistic concurrency. We configure pgBadger or PMM dashboards for ongoing slow query monitoring.
Deliverables
Before & After
| Metric | Before | After |
|---|---|---|
| Query P99 latency | 340 ms | 42 ms |
| Database CPU utilisation | 82% | 34% |
| Connection pool efficiency | 40% | 95% |
Tools We Use
Frequently Asked Questions
Do index changes require downtime?
In PostgreSQL, we use CREATE INDEX CONCURRENTLY so index creation does not lock the table. In MySQL, most ALTER TABLE operations for index changes are online in InnoDB. We always test on a replica first and validate the query plan improvement before applying it to the primary.
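A brief sketch of the PostgreSQL path, using a hypothetical index on a `users` table:

```sql
-- Builds without holding a lock that blocks writes. Cannot run inside
-- a transaction block, and a failed build leaves behind an INVALID
-- index that must be dropped before retrying.
CREATE INDEX CONCURRENTLY idx_users_email_lower
    ON users (lower(email));

-- Confirm the build completed successfully before relying on it.
SELECT indexrelid::regclass AS index_name, indisvalid
FROM pg_index
WHERE indexrelid = 'idx_users_email_lower'::regclass;
```

The validity check matters because a concurrent build that fails partway does not roll itself back; the leftover invalid index still incurs write overhead until it is dropped.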
What about MongoDB — do you cover NoSQL databases?
Yes. For MongoDB we analyse the query profiler output, design compound indexes for frequently executed aggregation pipelines, identify collection scans on large collections, and tune the WiredTiger cache configuration. The methodology differs from relational databases but the impact is comparable.
When is the right answer 'add more hardware' rather than tuning?
Hardware scaling is the right answer when queries are already optimal and the workload genuinely requires more compute — typically for analytical queries on large datasets. For OLTP workloads, query and index tuning almost always produces larger gains than hardware at a fraction of the cost. We make that determination in the Day 1 analysis and will tell you honestly if scaling is the better path.
Your P99 Deserves Better
Book a free 30-minute performance scope call with our engineers. We review your latency profile, identify the most impactful optimisation target, and scope a sprint to fix it.
Talk to an Expert