Thread: Optimizing the Query
I have a table with columns doc (varchar), sentenceId (int) and bow (varchar[]). The bow column holds the bag-of-words representation of each sentence; in total there are 4623 rows. The table (nlptable) looks like this (for 10-element vectors):
  doc   | sentenceId |          bow
--------+------------+-----------------------
 corpus |          1 | {1,0,0,1,0,0,0,1,0,1}
 corpus |          2 | {0,1,1,1,0,1,0,0,0,0}
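For reference, a minimal DDL sketch of the table as described above (column names and types taken from the description, no constraints):

-- sketch only: each bow element is the varchar '0' or '1',
-- and the toy example stores 10 elements per row
create table nlptable (
    doc        varchar,
    sentenceid int,
    bow        varchar[]
);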
The query I run (it compares the bag-of-words representations of two sentences):
select a.doc, a.sentenceid, b.doc, b.sentenceid,
       cardinality(array(select unnest(array_positions(a.bow, '1'))
                         intersect
                         select unnest(array_positions(b.bow, '1'))))::float
           / cardinality(a.bow)::float
from
    nlptable a, nlptable b
where
    a.sentenceid < b.sentenceid;
The problem is that for 10-element vectors (the 10 most common words) the execution time is about 3 minutes, for 100-element vectors about 25 minutes, and for 500-element vectors about 80 minutes. I have to run the calculation for the 10,000 most common words, which will probably take a whole day. The query is too slow and I want to optimize it, but I have no idea how. Is there an option to use some boolean type instead of varchar[] to decrease the size of the data?
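For example, would something like a fixed-length bit string work? This is just a sketch of what I mean, not tested; bow_bits is a hypothetical column I would add, and bit_count() needs PostgreSQL 14 or newer:

-- sketch only: assumes every bow element is '0' or '1' and all rows
-- have the same vector length (10 in the toy example)
alter table nlptable add column bow_bits bit(10);
update nlptable set bow_bits = array_to_string(bow, '')::bit(10);

-- same pairwise overlap, but via bitwise AND; on versions before 14,
-- counting the '1' characters of (a.bow_bits & b.bow_bits)::text would
-- have to replace bit_count()
select a.doc, a.sentenceid, b.doc, b.sentenceid,
       bit_count(a.bow_bits & b.bow_bits)::float / length(a.bow_bits)
from
    nlptable a, nlptable b
where
    a.sentenceid < b.sentenceid;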
I am relatively new to Postgres, so I have no idea about optimization. I have also heard that arrays in Postgres are heavy to work with. Because of this I looked for alternative ways to store a bag of words in a table, but could not find anything other than arrays.
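Would something along these lines make sense instead? Again just a sketch of the idea, assuming the intarray extension is available and using a hypothetical bow_idx column that stores only the positions of the 1s:

-- sketch only: sparse representation, keeping just the word indexes
-- that occur in each sentence
create extension if not exists intarray;
alter table nlptable add column bow_idx int[];
update nlptable set bow_idx = array_positions(bow, '1');

-- icount() and & come from intarray; the denominator is still the
-- length of the dense vector
select a.doc, a.sentenceid, b.doc, b.sentenceid,
       icount(a.bow_idx & b.bow_idx)::float / cardinality(a.bow)
from
    nlptable a, nlptable b
where
    a.sentenceid < b.sentenceid;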