Thread: TODO item: Implement Boyer-Moore searching in LIKE queries
Hi pgsql-hackers,
Following the patch to implement strpos with Boyer-Moore-Horspool,
it was proposed we bring BMH to LIKE as well.
Original thread:
https://www.postgresql.org/message-id/flat/27645.1220635769%40sss.pgh.pa.us#27645.1220635769@sss.pgh.pa.us
I'm a first-time hacker and I found this task on the TODO list. It seemed
interesting and isolated enough, but after looking at the code in like_match.c,
it's clearly a non-trivial task.
How desirable is this feature? To begin answering that question,
I did some initial benchmarking with an English text corpus to find how much
faster BMH (Boyer-Moore-Horspool) would be compared to LIKE, and the results
were as follows: BMH can be up to 20% faster than LIKE, but for short
substrings, it's roughly comparable or slower.
Here are the results visualized:
http://ctrl-c.club/~ksikka/pg/like-strpos-short-1469975400.png
http://ctrl-c.club/~ksikka/pg/like-strpos-long-1469975400.png
Data attached, and description of the benchmark below.
I'd love to hear your thoughts:
- is the benchmark valid? am I missing anything?
- what conclusions can we draw from the results?
- what action should we take from here?
- is this the right mailing list for this discussion?
Thank you!
- Karan, pg community newbie
--- Benchmark Details ---
The easiest way to approximate the potential speedup from BMH is to compare
the performance of the following queries:
1. SELECT count(*) FROM test_table WHERE text LIKE '%substring%';
2. SELECT count(*) FROM test_table WHERE strpos(text, 'substring') > 0;
We expect the strpos query to outperform the LIKE query since strpos is
implemented with BMH.
I loaded up the database with chunks of English text from the Bible, a commonly
used corpus for simple text analysis. The exact procedure is described in more
detail at the end of the email. I ran a few queries, using short substrings and
long substrings, the choice of which is discussed in more detail below.
## Choice of test data
BMH is known to be particularly fast on English text, so I loaded the test table
with 5k-character chunks of text from the Bible. BMH is expected to be slower
for small substrings where the overhead of creating a skip table may not be
justified. For larger substrings, BMH may outperform LIKE due to the skip table.
In the best case, when a text character does not appear in the substring at all,
the search window can jump ahead by the full length of the substring, skipping
what would otherwise be a lot of comparison work.
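To make that mechanic concrete, here is a minimal from-scratch sketch of Horspool's skip-table search. This is purely illustrative, written by me for this email; it is not PostgreSQL's actual strpos code, and the function name is my own.

```c
#include <stddef.h>

/*
 * Hypothetical illustration of Boyer-Moore-Horspool (BMH) -- a sketch,
 * not PostgreSQL's actual implementation. Returns a pointer to the
 * first occurrence of needle in haystack, or NULL if there is none.
 */
static const char *
bmh_search(const char *haystack, int hlen, const char *needle, int nlen)
{
    int skip[256];
    int i;

    if (nlen == 0)
        return haystack;
    if (nlen > hlen)
        return NULL;

    /* By default, a mismatch lets the window jump the full needle length. */
    for (i = 0; i < 256; i++)
        skip[i] = nlen;
    /* Bytes that occur in the needle (except its last byte) permit only
     * a shorter jump, so we never skip past a potential match. */
    for (i = 0; i < nlen - 1; i++)
        skip[(unsigned char) needle[i]] = nlen - 1 - i;

    i = 0;
    while (i <= hlen - nlen)
    {
        int j = nlen - 1;

        /* Compare from the right end of the needle. */
        while (j >= 0 && haystack[i + j] == needle[j])
            j--;
        if (j < 0)
            return haystack + i;    /* full match */
        /* Jump by the skip value of the haystack byte aligned with
         * the needle's last position. */
        i += skip[(unsigned char) haystack[i + nlen - 1]];
    }
    return NULL;
}
```

The skip-table construction is the per-call setup cost discussed below: it is O(256 + nlen), which is why short needles may not repay it.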
I chose small (< 15 character) substrings to be the most popular bigrams and
trigrams in the text. I chose long (10-250 character) substrings at random, and
took varied-length prefixes and suffixes to see how length and position affected algorithm
performance. The reason that prefixes were not sufficient is that BMH compares
from the right end of the substring, unlike the LIKE algorithm. The full
strings are included in the attached Excel file.
## Database setup
Download the corpus (englishTexts) from
Create the table:
CREATE TABLE test_table (text text);
Generate the rows for insertion (generates chunks of 5k characters):
python gen_rows.py path/to/englishTexts/bible.txt > path/to/output/file
Load the table:
COPY test_table (text) FROM '/absolute/path/to/previous/output/file' WITH (ENCODING 'UTF8');
I ran the COPY command 21 times to inflate the table and make performance
differences more pronounced.
Queries were timed with psql's \timing feature. The mean of five queries was
reported.
## Hardware
Apple MacBook Air (Mid 2012)
CPU: 1.8 GHz Intel Core i5
RAM: 4 GB 1600 MHz DDR3
On Mon, Aug 1, 2016 at 1:19 PM, Karan Sikka <karanssikka@gmail.com> wrote:
> Following the patch to implement strpos with Boyer-Moore-Horspool,
> it was proposed we bring BMH to LIKE as well.
> [...]
> Here are the results visualized:
>
> http://ctrl-c.club/~ksikka/pg/like-strpos-short-1469975400.png
> http://ctrl-c.club/~ksikka/pg/like-strpos-long-1469975400.png

Based on these results, this looks to me like a pretty unexciting
thing upon which to spend time.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Mon, Aug 1, 2016 at 6:13 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> Based on these results, this looks to me like a pretty unexciting
> thing upon which to spend time.

I agree, should we remove it from the TODO list?
On Mon, Aug 1, 2016 at 7:47 PM, Karan Sikka <karanssikka@gmail.com> wrote:
> I agree, should we remove it from the TODO list?

If nobody objects, sure.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Robert Haas wrote:
> On Mon, Aug 1, 2016 at 1:19 PM, Karan Sikka <karanssikka@gmail.com> wrote:
> > Following the patch to implement strpos with Boyer-Moore-Horspool,
> > it was proposed we bring BMH to LIKE as well.
> > [...]
>
> Based on these results, this looks to me like a pretty unexciting
> thing upon which to spend time.

Uh, a 20% difference is "unexciting" for you?  I think it's interesting.
Now, really, users shouldn't be running LIKE on constant strings all the
time but rather use some sort of indexed search, but once in a while
there is a need to run some custom query and you need to LIKE-scan a
large portion of a table.  For those cases an algorithm that performs
20% better is surely welcome.

I wouldn't be so quick to dismiss this.

Of course, it needs to work in all cases, or failing that, be able to
fall back to the original code if it cannot support some corner case.

--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Yeah, you make a fair point.

I was hoping more people would chime in with strong opinions to make the
decision easier, but perhaps the lack of noise indicates this is a
low-priority feature. I will ask around in my coworker circles to see what
they think; could you do the same?

Just for the record, I'm leaning to the side of "not worth it". My thoughts
are, if you are at a scale where you care about this 20% speedup, you would
want to go all the way to an indexed structure, because the gains you would
realize would exceed 20%, and 20% may not be a sufficient speedup anyway.
On Tue, Aug 2, 2016 at 1:56 PM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:
> Uh, a 20% difference is "unexciting" for you?  I think it's interesting.
> Now, really, users shouldn't be running LIKE on constant strings all the
> time but rather use some sort of indexed search, but once in a while
> there is a need to run some custom query and you need to LIKE-scan a
> large portion of a table.  For those cases an algorithm that performs
> 20% better is surely welcome.

Sure, but an algorithm that performs 20% faster in the best case and
worse in some other cases is not the same thing as a 20% across-the-board
performance improvement.  I guess if we had a way of deciding which
algorithm to use in particular cases it might make sense.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Karan Sikka <karanssikka@gmail.com> writes:
> Just for the record, I'm leaning to the side of "not worth it". My
> thoughts are, if you are at a scale where you care about this 20%
> speedup, you would want to go all the way to an indexed structure,
> because the gains you would realize would exceed 20%, and 20% may not be
> a sufficient speedup anyway.

If I'm reading your test case correctly, 20% is actually a rather
impressive number, because it's 20% *overall* gain on queries that will
also involve TOAST fetch and decompress on the source data.  (Decompress
definitely, and I'm guessing those 5K strings don't compress well enough
to avoid getting pushed out-of-line; though it might be worth repeating
the test with chunks of 10K or 20K to be sure.)  So the percentage
improvement in the LIKE test proper must have been a lot more than that.

However, I'm dubious that LIKE patterns with long fixed substrings are a
common use-case, so I'm afraid that this might be quite a lot of work
for something that won't much benefit most users.  I'm also worried that
the setup costs might be enough to make it a net loss in many cases.
There are probably ways to amortize the setup costs, since typical
scenarios involve the same LIKE pattern across many rows, but
implementing that would add even more work.

(Having said that, I've had a bee in my bonnet for a long time about
removing per-row setup cost for repetitive regex matches, and whatever
infrastructure that needs would work for this too.  And for strpos'
B-M-H setup, looks like.  So this might be something to look into with a
suitably wide view of what the problem is.)

Not sure what advice to give you here.  I think this is in the grey zone
where it's hard to be sure whether it's worth putting work into.

			regards, tom lane
> Having said that, I've had a bee in my bonnet for a long time about
> removing per-row setup cost for repetitive regex matches, and
> whatever infrastructure that needs would work for this too.
What are the per-row setup costs for regex matches? I looked at
`regexp.c` and saw:
```
/*
* We cache precompiled regular expressions using a "self organizing list"
* structure, in which recently-used items tend to be near the front.
* Whenever we use an entry, it's moved up to the front of the list.
* Over time, an item's average position corresponds to its frequency of use.
 * ...
 */
```
What proverbial bee did you have in your bonnet about the current regex
implementation?
Which functions other than `strpos` and `LIKE` would benefit from a similar
cache, or perhaps a query-scoped cache?
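To check my own understanding, here's a minimal sketch of what I take that "self organizing list" to mean: a move-to-front linked list, so frequently used patterns are found after few comparisons. This is hypothetical illustrative C of my own; the names (`cached_re`, `re_cache_lookup`) are made up and not the actual regexp.c structures.

```c
#include <stddef.h>
#include <string.h>

/*
 * Hypothetical sketch of a "self organizing list" cache -- illustrative
 * only, not the actual regexp.c code. On a hit, the entry is moved to
 * the front of a singly linked list, so its position over time tracks
 * its frequency of use.
 */
typedef struct cached_re
{
    struct cached_re *next;
    const char *pattern;
    /* the compiled regex would live here in the real thing */
} cached_re;

static void
re_cache_insert(cached_re **head, cached_re *entry)
{
    entry->next = *head;
    *head = entry;
}

static cached_re *
re_cache_lookup(cached_re **head, const char *pattern)
{
    cached_re *cur = *head;
    cached_re *prev = NULL;

    while (cur != NULL)
    {
        if (strcmp(cur->pattern, pattern) == 0)
        {
            if (prev != NULL)
            {
                /* unlink the entry and move it to the front */
                prev->next = cur->next;
                cur->next = *head;
                *head = cur;
            }
            return cur;
        }
        prev = cur;
        cur = cur->next;
    }
    return NULL;    /* miss: the caller would compile and insert */
}
```

If that's roughly right, then every row evaluated against a pattern still pays at least a lookup and string compare, which I guess is the per-row cost you'd want to eliminate.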
In the meantime I'll look at other TODOs that catch my interest.
Feel free to point me in the direction of one that you think is
both desirable and easy enough for a beginner.
Thanks!
Karan
Karan Sikka <karanssikka@gmail.com> writes:
>> Having said that, I've had a bee in my bonnet for a long time about
>> removing per-row setup cost for repetitive regex matches, and
>> whatever infrastructure that needs would work for this too.

> What are the per-row setup costs for regex matches?

Well, they're pretty darn high if you have more active regexps than will
fit in that cache, and even if you don't, the cache lookup seems a bit
inefficient.  What I'd really like to do is get rid of that cache in
favor of having a way to treat a precompiled regexp as a constant.

I think this is probably possible via inventing a "regexp" datatype,
which we make the declared RHS input type for the ~ operator, and give
it an implicit cast from text so that existing queries don't break.  The
compiled regexp tree structure contains pointers so it could never go to
disk, but now that we have the "expanded datum" infrastructure you could
imagine that the on-disk representation is the same as text but we
support adding a compiled tree to it in-memory.

Or maybe we just need a smarter cache mechanism in regexp.c.  A cache
like that might be the only way to deal with a query using variable
patterns (e.g., pattern argument coming from a table column).  But it
seems like basically the wrong approach for the common case of a
constant pattern.

			regards, tom lane