PostgreSQL Performance Optimization in Supabase
Performance optimization is crucial for any database-driven application. In this guide, we'll explore practical techniques to optimize your Supabase PostgreSQL database for speed and efficiency.
Understanding Query Performance
Before optimizing, you need to understand what's slow. PostgreSQL's EXPLAIN command is your best friend:
```sql
EXPLAIN ANALYZE
SELECT * FROM orders
WHERE user_id = '123'
AND created_at > NOW() - INTERVAL '30 days';
```
Look for:

- Sequential scans (Seq Scan) on large tables
- High execution times
- Large row counts being processed
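On a large table with no index on `user_id`, the plan for the query above will contain a node like the following (the costs, timings, and row counts here are purely illustrative):

```
Seq Scan on orders  (cost=0.00..25000.00 rows=4800 width=72) (actual time=0.020..310.480 rows=4650 loops=1)
  Filter: ((user_id = '123') AND (created_at > (now() - '30 days'::interval)))
  Rows Removed by Filter: 995350
Execution Time: 315.201 ms
```

The telltale signs are the `Seq Scan` node itself and the large `Rows Removed by Filter` count: PostgreSQL is reading the whole table and discarding almost all of it.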
Index Strategy
Indexes are the foundation of query performance. Here's how to use them effectively:
Basic Indexes
```sql
-- Single column index
CREATE INDEX idx_orders_user_id ON orders(user_id);

-- Composite index for multi-column queries
CREATE INDEX idx_orders_user_date ON orders(user_id, created_at);

-- Partial index for specific conditions
CREATE INDEX idx_active_orders ON orders(user_id) WHERE status = 'active';
```
Index Types
Choose the right index type:
```sql
-- B-tree (default, good for equality and range queries)
CREATE INDEX idx_users_email ON users(email);

-- GIN for full-text search
CREATE INDEX idx_posts_content ON posts USING GIN(to_tsvector('english', content));

-- GiST for geometric data
CREATE INDEX idx_locations ON stores USING GIST(location);
```
Query Optimization
Use SELECT Wisely
Avoid SELECT * in production:
```sql
-- Bad
SELECT * FROM users WHERE id = '123';

-- Good
SELECT id, name, email FROM users WHERE id = '123';
```
JOIN Optimization
PostgreSQL's planner reorders joins on its own, so hand-ordering them rarely helps. Focus instead on filtering early and making sure the join columns are indexed:
```sql
SELECT o.id, o.total, u.name
FROM orders o
INNER JOIN users u ON o.user_id = u.id
WHERE o.created_at > NOW() - INTERVAL '7 days'
AND u.status = 'active';
```
Avoid N+1 Queries
Use joins or batching instead of looping:
```js
// Bad: N+1 queries
const orders = await supabase.from('orders').select('*')
for (const order of orders.data) {
  const user = await supabase
    .from('users')
    .select('*')
    .eq('id', order.user_id)
    .single()
}

// Good: single query with a join
const { data } = await supabase
  .from('orders')
  .select(`
    *,
    users (
      id,
      name,
      email
    )
  `)
```
Connection Pooling
Supabase uses connection pooling, but you should understand the limits:
- Use connection pooling for serverless functions
- Set appropriate timeout values
- Monitor connection usage
```js
// Configure client for optimal pooling
const supabase = createClient(url, key, {
  db: {
    schema: 'public',
  },
  auth: {
    persistSession: false,
  },
})
```
Materialized Views
For expensive queries that don't need real-time data:
```sql
-- Create materialized view
CREATE MATERIALIZED VIEW user_statistics AS
SELECT
  user_id,
  COUNT(*) AS total_orders,
  SUM(total) AS total_spent,
  MAX(created_at) AS last_order_date
FROM orders
GROUP BY user_id;

-- Refresh periodically
REFRESH MATERIALIZED VIEW user_statistics;
```
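One caveat: a plain `REFRESH` locks the view against reads while it rebuilds. If you add a unique index, PostgreSQL lets you refresh concurrently so readers are never blocked (at the cost of a slower refresh):

```sql
-- A unique index on the view is required for CONCURRENTLY
CREATE UNIQUE INDEX idx_user_statistics_user_id ON user_statistics(user_id);

-- Readers can keep querying the view while this runs
REFRESH MATERIALIZED VIEW CONCURRENTLY user_statistics;
```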
Partitioning Large Tables
For tables with millions of rows, consider partitioning:
```sql
-- Partition by date range
CREATE TABLE orders_2024_01 PARTITION OF orders
  FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');

CREATE TABLE orders_2024_02 PARTITION OF orders
  FOR VALUES FROM ('2024-02-01') TO ('2024-03-01');
```
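Note that partitions can only be attached to a table that was declared as partitioned; you can't partition an existing regular table in place. If `orders` were being created from scratch, the parent would look roughly like this (the column list is illustrative):

```sql
CREATE TABLE orders (
  id BIGINT GENERATED ALWAYS AS IDENTITY,
  user_id UUID NOT NULL,
  total NUMERIC NOT NULL,
  status TEXT,
  created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
  -- the partition key must be part of any primary key or unique constraint
  PRIMARY KEY (id, created_at)
) PARTITION BY RANGE (created_at);
```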
Monitoring and Maintenance
Track Slow Queries
Enable query logging in your Supabase dashboard and monitor slow queries regularly.
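If the pg_stat_statements extension is enabled (it is available on Supabase projects), you can also pull the slowest statements directly with SQL. A sketch, assuming PostgreSQL 13+ column names:

```sql
-- Top 10 statements by average execution time
SELECT query, calls, mean_exec_time, total_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;
```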
Regular Maintenance
```sql
-- Analyze tables for the query planner
ANALYZE users;

-- Vacuum to reclaim space
VACUUM ANALYZE orders;
```
Caching Strategies
Implement caching for frequently accessed data:
```js
// Use SWR or React Query for client-side caching.
// Include userId in the cache key so each user gets its own entry.
const { data, error } = useSWR(
  ['user-profile', userId],
  async () => {
    const { data } = await supabase
      .from('users')
      .select('*')
      .eq('id', userId)
      .single()
    return data
  },
  { revalidateOnFocus: false }
)
```
Best Practices
- Create indexes on foreign keys
- Use appropriate data types
- Normalize data to reduce redundancy
- Denormalize strategically for read-heavy operations
- Monitor query performance regularly
- Use database functions for complex operations
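As a sketch of that last point, a computation that would otherwise require several round trips can live in a database function and be called once from the client via `supabase.rpc()`. The function name and logic here are hypothetical:

```sql
-- Hypothetical example: compute a user's lifetime spend in one round trip
CREATE OR REPLACE FUNCTION user_lifetime_spend(p_user_id UUID)
RETURNS NUMERIC
LANGUAGE sql STABLE AS $$
  SELECT COALESCE(SUM(total), 0) FROM orders WHERE user_id = p_user_id;
$$;
```

From the client: `const { data } = await supabase.rpc('user_lifetime_spend', { p_user_id: userId })`.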
Conclusion
Performance optimization is an ongoing process. Start with proper indexing, monitor your queries, and iterate based on real-world usage patterns. With these techniques, you can build Supabase applications that scale to millions of users.