Re: performance-test farm - Mailing list pgsql-hackers
From: Tomas Vondra
Subject: Re: performance-test farm
Msg-id: 4F553C43.9070507@fuzzy.cz
In response to: Re: performance-test farm (Greg Smith <greg@2ndquadrant.com>)
Responses: Re: performance-test farm; Re: performance-test farm
List: pgsql-hackers
On 12.5.2011 08:54, Greg Smith wrote:
> Tomas Vondra wrote:
>
> The idea is that buildfarm systems that are known to have (a) reasonable
> hardware and (b) no other concurrent work going on could also do
> performance tests. The main benefit of this approach is that it avoids
> duplicating all of the system management and source code building work
> needed for any sort of thing like this; just leverage the buildfarm
> parts where they solve similar enough problems. Someone has actually
> done all that already; source code was last sync'd to the buildfarm
> master at the end of March: https://github.com/greg2ndQuadrant/client-code
>
> By far the #1 thing needed to move this forward from where it's stuck
> now is someone willing to dig into the web application side of this.
> We're collecting useful data. It now needs to be uploaded to the
> server, saved, and then reports of what happened generated. Eventually,
> graphs of performance results over time will be straightforward to
> generate. But the whole idea requires that someone else (not Andrew,
> who has enough to do) sits down and figures out how to extend the web
> UI with these new elements.

Hi,

I'd like to revive this thread. A few days ago we finally got our buildfarm member working (it's called "magpie") - it spends ~2h a day chewing on buildfarm tasks, so we can use the other 22h to do some useful work.

I suppose most of you are busy with 9.2 features, but I'm not, so I'd like to spend my time on this. Having had to set up the buildfarm member, I'm now somewhat aware of how the buildfarm works. I've checked the PGBuildFarm/server-code and greg2ndQuadrant/client-code repositories, and while I'm certainly not a Perl whiz, I believe I can tweak them to handle performance-related results too.

What is the current state of this effort? Is there someone else working on it?
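As a rough illustration of the kind of result collection the client would need to do (the actual client code is Perl; this is just a Python sketch, and `parse_tps` is a hypothetical helper, not part of either repository), the performance run would essentially boil down to invoking pgbench and scraping the tps figure out of its output:

```python
import re

def parse_tps(pgbench_output: str) -> float:
    """Extract the transactions-per-second figure from pgbench output.

    pgbench prints a summary line of the form
    'tps = 1234.567890 (including connections establishing)'.
    """
    m = re.search(r"tps = ([0-9.]+)", pgbench_output)
    if m is None:
        raise ValueError("no tps line found in pgbench output")
    return float(m.group(1))

# Example with a captured output line rather than a live pgbench run:
sample = "tps = 1234.567890 (including connections establishing)"
print(parse_tps(sample))
```

The parsed number, together with the environment data (settings, hardware), is what would then be uploaded to the server alongside the usual buildfarm status report.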
If not, I propose this (for starters):

* add a new page "Performance results" to the menu, with a list of members that uploaded performance results
* for each member, show a list of tests along with a running average for each test, the last result, and an indicator of whether it improved, got worse, or stayed the same
* for each member/test, display a history of runs along with a simple graph

I'm not quite sure how to define which members will run the performance tests - I see two options:

* for each member, add a "run performance tests" flag so that we can choose which members are supposed to be safe, OR
* run the tests on all members (if enabled in build-farm.conf) and then decide which results are relevant based on data describing the environment (collected when running the tests)

I'm also wondering whether:

* using the buildfarm infrastructure is the right thing to do, if it can provide some 'advanced features' (see below)
* we should use the current buildfarm members (although maybe not all of them)
* it can handle one member running the tests with different settings (various shared_buffers/work_mem sizes, number of clients, etc.) and various hw configurations (for example, magpie contains a regular SATA drive as well as an SSD - it would be nice to run two sets of tests, one for the spinner, one for the SSD)
* it can handle 'pushing' a list of commits to test (instead of just testing HEAD), so that we can ask the members to run the tests for particular commits in the past (I consider this a very handy feature)

regards
Tomas
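The improved/worse/same indicator from the first bullet list could be computed very simply; here is a minimal sketch (names and the 5% tolerance are my assumptions, not anything in the buildfarm code) comparing the latest result against the running average of earlier runs:

```python
def classify(history, latest, tolerance=0.05):
    """Compare the latest result against the running average of prior runs.

    history   - list of earlier results (e.g. tps values) for one member/test
    latest    - the most recent result
    tolerance - relative band within which a result counts as 'same'

    Returns 'improved', 'worse', or 'same'.
    """
    if not history:
        return "same"  # nothing to compare against yet
    avg = sum(history) / len(history)
    delta = (latest - avg) / avg
    if delta > tolerance:
        return "improved"
    if delta < -tolerance:
        return "worse"
    return "same"

# Average of the history is 1000 tps; 1100 is 10% above, outside the 5% band:
print(classify([1000, 1020, 980], 1100))
```

Noisy benchmarks would probably need a smarter test (more history, variance-aware thresholds), but something of this shape would be enough for a first version of the page.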