Re: UPDATE - Mailing list pgsql-general

From c k
Subject Re: UPDATE
Msg-id d8e7a1e30902190531x551b9142mdfee37677ff7488c@mail.gmail.com
In response to UPDATE  (c k <shreeseva.learning@gmail.com>)
Responses Re: UPDATE
List pgsql-general
1.5 GB RAM, Pentium dual core, single 80 GB disk, Windows XP SP3, PostgreSQL 8.3.6
The table is as follows:
-- Table: accgroups

-- DROP TABLE accgroups;

CREATE TABLE accgroups
(
  accgroupid serial NOT NULL,
  accgroupidname character varying(150) NOT NULL DEFAULT ''::character varying,
  accgroupname character varying,
  createdby integer DEFAULT 0,
  createdtimestamp timestamp without time zone DEFAULT ('now'::text)::timestamp without time zone,
  locked smallint,
  lastmodifiedby integer DEFAULT 0,
  lastmodifiedtimestamp timestamp without time zone,
  remark character varying(255) DEFAULT NULL::character varying,
  cobranchid integer DEFAULT 0,
.
.
.
.
  againstid integer DEFAULT 0
)
WITH (OIDS=FALSE);
This table currently has 165,000+ rows.
The query is fairly simple:
update accgroups set cobranchid=2 where cobranchid=1;
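For reference, the actual timing and row counts for a statement like this can be captured with EXPLAIN ANALYZE (a sketch; note that EXPLAIN ANALYZE really executes the UPDATE, so it should be wrapped in a transaction and rolled back if the change isn't wanted yet):

```sql
BEGIN;
EXPLAIN ANALYZE
UPDATE accgroups SET cobranchid = 2 WHERE cobranchid = 1;
ROLLBACK;
```

If most of the 165,000 rows match `cobranchid = 1`, a sequential scan is expected and an index on `cobranchid` won't help much; the cost then usually comes from rewriting the rows and maintaining the table's other indexes.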
Thanks
CPKulkarni


On Thu, Feb 19, 2009 at 6:54 PM, Richard Huxton <dev@archonet.com> wrote:
> What can be done for such updates to make them faster?

You're going to have to provide some sort of information before anyone
can help you.

You might want to start with basic hardware details (RAM, number of
disks, etc.), O.S. version, PostgreSQL version, any basic configuration
changes you've made, sample queries that are slow along with EXPLAIN
ANALYSE output (if it's not just a blanket update), table definitions...
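Most of that information can be pulled straight from the server; a sketch of the relevant commands in psql (the `\d` meta-command is psql-specific):

```sql
-- Server version
SELECT version();
-- A couple of settings that commonly affect bulk UPDATE speed
SHOW shared_buffers;
SHOW checkpoint_segments;
-- Table definition, including indexes and triggers
\d accgroups
```

Non-default settings can also be listed in one go with `SELECT name, setting FROM pg_settings WHERE source <> 'default';`.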

--
 Richard Huxton
 Archonet Ltd
