Thread: Moving from 32 to 64 bit builds on Solaris
I've got an install of Postgres 8.2.3 on a Sun box that's ticking over nicely -- I'm pretty happy with it and how it's performing. It's a 32 bit build, and the machine I'm running it on has a lot of extra memory.

I assume I'll have to do a 64 bit build to use more than a few gig of shared buffers. If I do that, though, am I going to have to do a database dump and reload, or will the disk files be compatible, so it's just a matter of shutting down the 32 bit server and firing up the 64 bit one?

--
				Dan

--------------------------------------it's like this-------------------
Dan Sugalski                          even samurai
dan@sidhe.org                         have teddy bears and even
                                      teddy bears get drunk
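For context, a 64-bit build of Postgres on Solaris is usually just a matter of pointing configure at a 64-bit compiler. A sketch follows; the install prefix, compiler paths, and flags are illustrative and vary by compiler release:

    # With gcc (SPARC or x86-64); prefix is illustrative
    ./configure CC="gcc -m64" --prefix=/opt/pgsql64

    # Or with the Sun Studio compiler on SPARC (path/flag illustrative)
    ./configure CC="/opt/SUNWspro/bin/cc -xarch=v9" --prefix=/opt/pgsql64

    gmake && gmake install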
Dan Sugalski <dan@sidhe.org> writes:
> I assume I'll have to do a 64 bit build to use more than a few gig of
> shared buffers. If I do that, though, am I going to have to do a
> database dump and reload,

Yes, most likely, because you'll have changed MAXALIGN and therefore the
data alignment rules.

You should first ask yourself whether you will get any performance
benefit from having "more than a few gig of shared buffers". If anyone
has proven such a benefit I haven't seen it.

			regards, tom lane
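Since the on-disk format can't be assumed compatible across the 32-/64-bit switch, the safe path is the usual dump and reload into a freshly initialized 64-bit cluster. A sketch -- paths, ports, and install locations are illustrative, not from the thread:

    # 1. Dump everything from the running 32-bit server
    pg_dumpall -p 5432 > /backup/cluster.sql

    # 2. Initialize and start a new cluster with the 64-bit binaries
    /opt/pgsql64/bin/initdb -D /data/pgsql64
    /opt/pgsql64/bin/pg_ctl -D /data/pgsql64 -l /data/pgsql64/log -o "-p 5433" start

    # 3. Reload the dump into the 64-bit cluster
    /opt/pgsql64/bin/psql -p 5433 -f /backup/cluster.sql postgres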
At 1:01 AM -0500 3/10/07, Tom Lane wrote:
>Dan Sugalski <dan@sidhe.org> writes:
>> I assume I'll have to do a 64 bit build to use more than a few gig of
>> shared buffers. If I do that, though, am I going to have to do a
>> database dump and reload,
>
>Yes, most likely, because you'll have changed MAXALIGN and therefore the
>data alignment rules.
>
>You should first ask yourself whether you will get any performance
>benefit from having "more than a few gig of shared buffers". If anyone
>has proven such a benefit I haven't seen it.

Possibly it won't. The machine the DB is on sees heavy access to large files, to the point where parts of the database may get flushed out of the OS buffer cache. I was working on the (possibly deeply flawed) assumption that I'd be better off if more of the database was guaranteed pinned in memory in Postgres' buffer cache -- it wouldn't necessarily make the peak performance better, but it would make average performance better, since I wouldn't sometimes have to hit disk to read in things that had been evicted from the OS cache.

--
				Dan

--------------------------------------it's like this-------------------
Dan Sugalski                          even samurai
dan@sidhe.org                         have teddy bears and even
                                      teddy bears get drunk
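For reference, pushing shared_buffers into the multi-gigabyte range on Solaris also means raising the System V shared memory limits before the 64-bit server will start. A sketch, with values that are purely illustrative:

    # postgresql.conf (64-bit build; value illustrative):
    #   shared_buffers = 4GB

    # Older Solaris releases: raise the SysV limit in /etc/system (reboot required)
    set shmsys:shminfo_shmmax=0x140000000    # ~5 GB, illustrative

    # Solaris 10: use resource controls instead, e.g. for the postgres user's project
    projmod -s -K "project.max-shm-memory=(priv,5G,deny)" user.postgres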
On Sat, Mar 10, 2007 at 08:30:20AM -0500, Dan Sugalski wrote:
> Possibly it won't. The machine the DB is on sees heavy access to
> large files, to the point where parts of the database may get flushed
> out of the OS buffer cache. I was working on the (possibly deeply
> flawed) assumption that I'd be better off if more of the database was
> guaranteed pinned in memory in Postgres' buffer cache -- it wouldn't

Err, is shared memory actually guaranteed to be pinned in memory? I mean, on Linux it's not, since that would be a form of DoS attack: pin all memory by allocating it as SHM. Now, Solaris may do this differently, but if shared memory can be swapped out, then having lots of it definitely isn't a benefit.

Have a nice day,
--
Martijn van Oosterhout <kleptog@svana.org>   http://svana.org/kleptog/
> From each according to his ability. To each according to his ability
> to litigate.
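Whether the shared memory segment actually stays resident is something that can be checked empirically on Solaris with the stock observability tools; a sketch (nothing here is from the thread itself):

    # Watch the page scanner: a sustained nonzero "sr" column means the
    # system is under enough pressure to be reclaiming pages.
    vmstat 5

    # Compare resident (RSS) vs. virtual (SIZE) size of the backends;
    # a large gap can indicate parts of the address space are paged out.
    prstat -s rss -p `pgrep -d, postgres`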
At 7:47 PM +0100 3/10/07, Martijn van Oosterhout wrote:
>On Sat, Mar 10, 2007 at 08:30:20AM -0500, Dan Sugalski wrote:
>> Possibly it won't. The machine the DB is on sees heavy access to
>> large files, to the point where parts of the database may get flushed
>> out of the OS buffer cache. I was working on the (possibly deeply
>> flawed) assumption that I'd be better off if more of the database was
>> guaranteed pinned in memory in Postgres' buffer cache -- it wouldn't
>
>Err, is shared memory actually guaranteed to be pinned in memory? I
>mean, on Linux it's not, since that would be a form of DoS attack: pin
>all memory by allocating it as SHM.

I'm not worried about swapping -- the box has more than enough memory that it's not going to swap. What I am looking at is response times after a relatively long (20-30 minute) period of inactivity.

When the database is being accessed and the data files are either in shared buffers or the OS file cache, performance is really snappy. When the database has been ignored for a while and other things are going on on the box, postgres' data files get expunged from the OS caches, and access times for the first few queries go from milliseconds to seconds. No surprise, certainly, HD access times being what they are.

I've got enough memory on the box that I can reasonably map four or five gig into postgres and just leave it there. It'll probably not slow down anything else significantly, but if it means the difference between having to re-read a page from disk and just hitting shared buffers, then it's worth it in response time.

--
				Dan

--------------------------------------it's like this-------------------
Dan Sugalski                          even samurai
dan@sidhe.org                         have teddy bears and even
                                      teddy bears get drunk
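A cheaper way to attack the cold-cache-after-idle problem, rather than pinning gigabytes in shared_buffers, is simply to touch the hot tables on a schedule so the OS cache never goes completely cold. A rough sketch -- the database name, table name, paths, and schedule are invented for illustration, and 8.2 has no built-in prewarm facility, so a plain sequential scan does the work:

    # crontab entry for the postgres user: re-read the hot table every
    # 15 minutes so it stays in the OS file cache (classic Solaris cron
    # wants explicit minute lists, not */15)
    0,15,30,45 * * * * /opt/pgsql64/bin/psql -d mydb -c "SELECT count(*) FROM hot_table" >/dev/null 2>&1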