Re: One source of constant annoyance identified - Mailing list pgsql-general

From | Markus Wollny
---|---
Subject | Re: One source of constant annoyance identified
Date |
Msg-id | 2266D0630E43BB4290742247C8910575014CE2C6@dozer.computec.de
In response to | One source of constant annoyance identified ("Markus Wollny" <Markus.Wollny@computec.de>)
Responses | Re: One source of constant annoyance identified
List | pgsql-general
Hello!

Thank you very much for your efforts - we appreciate that very much :)

> -----Original Message-----
> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]
> Sent: Wednesday, July 3, 2002 00:28
> To: Markus Wollny
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] One source of constant annoyance identified
>
> "Markus Wollny" <Markus.Wollny@computec.de> writes:
> > Sorry I took so long - I attached the schema as asked.
>
> Thanks. But I'm still unable to reproduce the memory bloat you see on
> SELECTs. This seems very peculiar. You said you were running SuSE 7.3
> --- how recent is that? Which glibc version is it running? (I've been
> reduced to speculating about memory leakage inside libc, which is a
> pretty long shot but...)

I agree - it is a long shot: SuSE 7.3 ships with kernel 2.4.10 and glibc 2.2.4; it was released in October 2001. I tried SuSE 8.0, released in late April this year, but I was more than unhappy with some of its new "features" - and it seems that hardly anybody has switched to 8.0 for server use yet. SuSE 7.3 (probably by far the most popular distribution in Germany) is generally considered quite stable and current enough for servers. Would it really be worth the hassle of updating glibc to 2.2.5?

> > I took a quick look at top: Even this humble query causes memory- and
> > processor-load like a giant: 266M RAM, 38.3% processor time, 26.4%
> > memory usage. Okay, it's calling the trigger for each row which in
> > turn inserts some new tuples into ct_com_board_fti, but is it
> > expected to cause so much load?
>
> Wouldn't surprise me. Since you're using an AFTER trigger, the pending
> trigger events have to be saved up for commit time, so the list of
> pending events is going to grow quite large.

Okay, so there are indeed situations in which this kind of backend size can be considered normal. That's some sort of relief :)

> (How many rows do you have in ct_com_board_message, anyway? How many
> did that query try to update?) This however does not explain your
> problem with SELECT, since selects don't fire triggers.

Currently there are 362,154 rows in ct_com_board_message and 85,101 rows in ct_com_user. I don't know whether that counts as a lot; we expect it to grow at an accelerating rate over the coming months, so more than a million rows within the next 12 months or so seem likely. We scarcely use any database-specific features like triggers as yet; it's 99.99% plain SQL - tables, indexes, data, selects, inserts, updates, deletes, no "fancy stuff". We'll have to get to know PostgreSQL better before using anything more specific; we never needed to dig too deep into Oracle either, which was an advantage when porting the whole thing over to PostgreSQL, but it seems the latter calls for a bit more attention and know-how...

> Could I see the output of EXPLAIN for that problem SELECT on your
> machine?
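Before I get to the EXPLAIN output, one more thought on the AFTER-trigger events: if it really is the list of pending events that makes the backend balloon during mass updates, could we simply split such updates into smaller transactions, so that each commit disposes of its pending events before the list grows too large? A rough sketch of what I mean (the SET clause and the 50,000-id batch size are placeholders, not our real statement):

begin;
-- hypothetical batch; substitute the real SET clause of the mass update
update CT_COM_BOARD_MESSAGE
   set TITLE = TITLE
 where MESSAGE_ID between 1 and 50000;
commit;

begin;
update CT_COM_BOARD_MESSAGE
   set TITLE = TITLE
 where MESSAGE_ID between 50001 and 100000;
commit;

-- ...and so on up to max(MESSAGE_ID)

The fti trigger would still fire once per updated row, but the queue of pending events could never grow beyond one batch.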
We are currently working on a new version of the search query I sent you before, but here's another one that causes similar symptoms - extra long running times and, most probably, huge backends (it's 10:20 am, far from peak time, and we already have ~280MB of swap in use and six backends >100MB). This query builds the thread overview for one board, which holds 41,624 messages, 2,971 of which have FATHER_ID=0, i.e. are thread-starters:

select MESSAGE.BOARD_ID
     , MESSAGE.THREAD_ID
     , MESSAGE.FATHER_ID
     , MESSAGE.MESSAGE_ID
     , MESSAGE.USER_ID
     , MESSAGE.USER_LOGIN as LOGIN
     , MESSAGE.USER_STATUS as STATUS
     , MESSAGE.USER_RIGHTS as RIGHTS
     , MESSAGE.TITLE
     , MESSAGE.COUNT_REPLY as COUNT_REPLY
     , to_char(MESSAGE.LAST_REPLY,'DD.MM.YY hh24:mi') as LAST_REPLY
     , round((date_part('epoch',CURRENT_TIMESTAMP)
             -date_part('epoch',MESSAGE.LAST_REPLY))/60) as diff_posting
     , to_char(MESSAGE.CREATED,'DD.MM.YY hh24:mi') as DATUM
     , (select count(*) from CT_COM_USER_THREAD_FOLLOW
         where USER_ID = '295798'
           and THREAD_ID = MESSAGE.THREAD_ID) as TFUID
  from CT_COM_BOARD_MESSAGE MESSAGE
 where (0=0)
   and MESSAGE.BOARD_ID = 10
   and MESSAGE.FATHER_ID = 0
   and MESSAGE.STATE_ID = 0
 order by MESSAGE.LAST_REPLY desc

Sort (cost=30695.27..30695.27 rows=7693 width=154) (actual time=9745.94..9751.58 rows=4663 loops=1)
  ->  Index Scan using idx_bm_show_topics on ct_com_board_message message (cost=0.00..30198.72 rows=7693 width=154) (actual time=111.56..9549.99 rows=4663 loops=1)
        SubPlan
          ->  Aggregate (cost=5.83..5.83 rows=1 width=0) (actual time=0.91..0.91 rows=1 loops=4663)
                ->  Index Scan using idx_user_thread_follow on ct_com_user_thread_follow (cost=0.00..5.83 rows=1 width=0) (actual time=0.88..0.88 rows=0 loops=4663)
Total runtime: 9835.57 msec

I do hope you can make anything of this...

Regards,

   Markus
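PS: One thing I've been wondering about myself: the plan shows the TFUID subplan executing once per result row (loops=4663). If a user can follow a given thread at most once - which I believe is how CT_COM_USER_THREAD_FOLLOW is meant to work, but treat that as an assumption - then the count(*) can only ever be 0 or 1, and the subselect could be replaced by an outer join, along these lines:

select MESSAGE.BOARD_ID
     , MESSAGE.THREAD_ID
     , MESSAGE.FATHER_ID
     , MESSAGE.MESSAGE_ID
     , MESSAGE.USER_ID
     , MESSAGE.USER_LOGIN as LOGIN
     , MESSAGE.USER_STATUS as STATUS
     , MESSAGE.USER_RIGHTS as RIGHTS
     , MESSAGE.TITLE
     , MESSAGE.COUNT_REPLY as COUNT_REPLY
     , to_char(MESSAGE.LAST_REPLY,'DD.MM.YY hh24:mi') as LAST_REPLY
     , round((date_part('epoch',CURRENT_TIMESTAMP)
             -date_part('epoch',MESSAGE.LAST_REPLY))/60) as diff_posting
     , to_char(MESSAGE.CREATED,'DD.MM.YY hh24:mi') as DATUM
     -- 0/1 flag instead of the per-row count(*) subselect
     , case when TF.THREAD_ID is null then 0 else 1 end as TFUID
  from CT_COM_BOARD_MESSAGE MESSAGE
  left join CT_COM_USER_THREAD_FOLLOW TF
         on TF.THREAD_ID = MESSAGE.THREAD_ID
        and TF.USER_ID = '295798'
 where MESSAGE.BOARD_ID = 10
   and MESSAGE.FATHER_ID = 0
   and MESSAGE.STATE_ID = 0
 order by MESSAGE.LAST_REPLY desc;

That way the follow-lookup becomes part of the join instead of a subplan that runs 4,663 times; whether the planner actually does better with it here, I can't say yet.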