Recently at work (on my last day), I dealt with a SQL query that had grown pretty complex: new features kept making the business logic more involved. The query ended up quite lengthy, but it was still the best way to meet the new requirements. We were iterating over rows in a PostgreSQL database in batches, across 20 parallel goroutines, until every row had been processed.
We had a latent issue: despite updating our indexing strategy to match the new query, the two weren’t working together as well as they should have. The job was too slow, the query was hammering the database CPU, and we couldn’t parallelize beyond what we were already doing.
The query was composed of multiple subqueries unioned together to cover all the conditions for fetching rows under multiple parameters. Each of those subqueries contained OR clauses, and that was the one detail making index usage suboptimal. When I changed the indexing strategy and rewrote the query, I didn’t realize that although index scans were happening, extra filtering was still occurring on top of them because of those OR clauses. We were scanning through 500k+ rows per subquery only to limit the results to around 150. That’s the kind of detail that’s easy to miss in an EXPLAIN ANALYZE output, partly because it requires knowing exactly what to look for, and most people simply don’t.
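To make the shape of the problem concrete, here is a minimal sketch of one such subquery. The orders table, its columns, and the index are hypothetical stand-ins, not the actual schema:

```sql
-- Assume a composite index on (tenant_id, status).
-- The index scan can match tenant_id = 42, but the OR spanning two
-- different columns can't be satisfied by the index alone, so Postgres
-- scans a wide index range and filters the rows afterwards.
SELECT id
FROM   orders
WHERE  tenant_id = 42
  AND  (status = 'pending' OR retry_at <= now())
LIMIT  150;
```

In EXPLAIN ANALYZE, a plan like this typically still shows an index scan node, but with a Filter: line underneath and a large "Rows Removed by Filter" count. That count is the tell-tale sign that the index alone isn’t doing the work.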
In the end, breaking the OR conditions into smaller, separate subqueries was enough to make index usage optimal. Pure index scans, no additional filtering. Query time dropped by approximately 10x, and database CPU usage fell dramatically.
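Sticking with the same hypothetical schema, the rewrite splits the OR into two subqueries unioned together, each able to match its own index exactly:

```sql
-- Assume two composite indexes: (tenant_id, status) and
-- (tenant_id, retry_at). Each branch now matches one index exactly,
-- so both run as pure index scans with no post-scan filtering;
-- UNION deduplicates rows that satisfy both branches.
SELECT id FROM (
    SELECT id FROM orders
    WHERE  tenant_id = 42 AND status = 'pending'
    UNION
    SELECT id FROM orders
    WHERE  tenant_id = 42 AND retry_at <= now()
) AS candidates
LIMIT  150;
```

The trade-off is a slightly more verbose query and, in a setup like this one, an extra index to maintain, which is cheap next to scanning half a million rows per subquery.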
It’s the type of issue that reminds you that sometimes only a small change is needed to get great results. You just need to be curious, rigorous, and analytical enough to find it.

