Why does it take over 41 seconds to read a table with fewer than 3 million rows? Are the rows that large? Is the table bloated? What is the size of the table as measured with pg_relation_size() and pg_table_size()?
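For reference, a query along these lines reports those sizes (table names taken from the rest of the thread; pg_total_relation_size() added to also count the indexes):

```sql
SELECT relname,
       pg_size_pretty(pg_relation_size(oid))       AS heap_size,   -- main data files only
       pg_size_pretty(pg_table_size(oid))          AS table_size,  -- heap + TOAST + maps
       pg_size_pretty(pg_total_relation_size(oid)) AS total_size   -- including indexes
FROM pg_class
WHERE relname IN ('tenders', 'items');
```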
There's one JSON column in each table with a couple of fields, and a column with long text values in Items.
check_postgres gave a bloat score of 1.4 to tenders and 1.9 to items. I also had a duplicate index on transaction_id (one hand-made, the other from the unique constraint) and other text-column indexes with bloat scores of 0.3-0.5. After VACUUM FULL ANALYZE, the sizes are greatly reduced, especially for Items:
There were a couple of mass deletions, which probably caused the bloat. Autovacuum is running with default settings, but I guess it doesn't take care of reclaiming that space. Still, query performance seems about the same.
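That matches how plain (auto)vacuum works: it only marks the deleted rows' space as reusable, it doesn't shrink the files, which is why the one-off VACUUM FULL was needed here. A minimal sketch of the cleanup, plus making autovacuum kick in sooner on these tables after big deletes (the index name and the threshold values are assumptions, not tuned recommendations):

```sql
-- Drop the hand-made index that duplicated the unique constraint's index
-- (index name is hypothetical)
DROP INDEX IF EXISTS tenders_transaction_id_idx;

-- One-off rewrite that returns dead space to the OS;
-- note it holds an ACCESS EXCLUSIVE lock on each table while it runs
VACUUM FULL ANALYZE tenders;
VACUUM FULL ANALYZE items;

-- Make autovacuum trigger sooner after large deletes on these tables
ALTER TABLE tenders SET (autovacuum_vacuum_scale_factor = 0.02,
                         autovacuum_vacuum_threshold    = 1000);
ALTER TABLE items   SET (autovacuum_vacuum_scale_factor = 0.02,
                         autovacuum_vacuum_threshold    = 1000);
```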
The planner is now using an Index Scan for Colombia without the subselect hack, but the subselect version still takes ~200 ms less on average, so I might as well keep using it.
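For context, the kind of subselect trick I mean is along these lines (the country column name is an assumption): hiding the literal behind a scalar subquery keeps the planner from using the per-value statistics, which is what used to be needed to get the Index Scan.

```sql
-- Plain filter: the planner estimates rows from the column statistics,
-- and now picks the Index Scan on its own
SELECT *
FROM tenders
WHERE country = 'Colombia';

-- Subselect variant: the value is opaque to the planner, so it falls back
-- to a generic selectivity estimate; still ~200 ms faster on average here
SELECT *
FROM tenders
WHERE country = (SELECT 'Colombia');
```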
The row estimate is still over 1M, so I still can't use it, but at least getting the exact count across all countries now takes less than 10 seconds.
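For the record, I'm reading "the row estimate" as the statistics-based figure that could stand in for a real count; a sketch of both, with the table name assumed:

```sql
-- Statistics-based estimate: essentially free, but still too far from the
-- real number to replace the exact count
SELECT reltuples::bigint AS estimated_rows
FROM pg_class
WHERE relname = 'tenders';   -- table name assumed

-- Exact count across all countries; after the cleanup this finishes in under 10 seconds
SELECT count(*) FROM tenders;
```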