Thread: function accessing other db
Hello everyone,

We have to copy some data from one table to another table in another database.

Is there any way to do this from inside a custom plpgsql function, with no
external applications, interfaces or storage?

Any help is appreciated.
On Thu, Feb 12, 2004 at 11:43:11 -0200,
  "Marcus Andree S. Magalhaes" <marcus.magalhaes@vlinfo.com.br> wrote:
>
> Hello everyone,
>
> We have to copy some data from one table to another table in another database.
>
> Is there any way to do this from inside a custom plpgsql function, with no
> external applications, interfaces or storage?

You can do something like:

INSERT INTO tablea (col1, col2, col3) SELECT col1, col2, col3 FROM tableb;
This would work if both tables (tablea and tableb) reside on the same
database. What to do if, say, tablea lives on database 'db1' and tableb
resides on another database, 'db2'?

Thanks for your message.

> On Thu, Feb 12, 2004 at 11:43:11 -0200,
>   "Marcus Andree S. Magalhaes" <marcus.magalhaes@vlinfo.com.br> wrote:
>>
>> Hello everyone,
>>
>> We have to copy some data from one table to another table in another
>> database.
>>
>> Is there any way to do this from inside a custom plpgsql function,
>> with no external applications, interfaces or storage?
>
> You can do something like:
>
> INSERT INTO tablea (col1, col2, col3) SELECT col1, col2, col3 FROM
> tableb;
Thanks. It seems to be exactly what I need!

> Then the dblink contrib package might be of help. I haven't used it, so
> I don't know for sure that it will solve your problem.
On Thu, Feb 12, 2004 at 13:41:21 -0200,
  "Marcus Andree S. Magalhaes" <marcus.magalhaes@vlinfo.com.br> wrote:
>
> This would work if both tables (tablea and tableb) reside on the
> same database. What to do if, say, tablea lives on database 'db1' and
> tableb resides on another database, 'db2'?

Then the dblink contrib package might be of help. I haven't used it, so
I don't know for sure that it will solve your problem.
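[For reference, a dblink-based copy might look like the sketch below. The
table and column names come from the examples earlier in the thread, and
the column types and connection string are assumptions for illustration;
the contrib module's functions must be installed in db1 first.]

```sql
-- Copy rows from tableb in db2 into tablea in the current database (db1).
-- dblink(connstr, sql) returns SETOF record, so the result must be given
-- a column definition list in the alias.
INSERT INTO tablea (col1, col2, col3)
SELECT col1, col2, col3
FROM dblink('dbname=db2',
            'SELECT col1, col2, col3 FROM tableb')
     AS t(col1 integer, col2 text, col3 text);
```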
I am having problems accessing the group; what is going on here?

On 12 Feb 2004, at 16:29, Marcus Andree S. Magalhaes wrote:

> Thanks. It seems to be exactly what I need!
>
>> Then the dblink contrib package might be of help. I haven't used it,
>> so I don't know for sure that it will solve your problem.
Hi,

Marcus Andree S. Magalhaes wrote:
> Hello everyone,
>
> We have to copy some data from one table to another table in another database.
>
> Is there any way to do this from inside a custom plpgsql function, with no
> external applications, interfaces or storage?
>
> Any help is appreciated.

Use dblink from contrib.

C.
Hi,

I'm reading the docs about the extended query protocol and portals. Is
it possible to use portals as a kind of concurrent cursors over a single
TCP/IP connection?

Or should you execute and close portals one after another, and never
execute the next portal before the previous one is closed? (Like
regular transactions.)

Thanks,
Marc
> Is it possible to use portals as a kind of concurrent cursors over a
> single TCP/IP connection?
>
> Or should you execute and close portals one after another, and never
> execute the next portal before the previous one is closed? (Like
> regular transactions.)

Mmmmmm... Sync doesn't let you say which portal you want to sync, so I
guess that's it for concurrent-cursorish behavior.

Does anybody know a way to implement concurrent-cursorish behavior over
a single TCP/IP connection?

Thanks,
Marc
"M. Bastin" <marcbastin@mindspring.com> writes: > The only thing that bothers me is the Sync command that would put an > end to all portals at once isn't it? Not if you are inside a transaction block (ie, have issued a BEGIN command). A possibly more serious issue is that an error detected in any one of the portals aborts the transaction and thus you lose all the portals. We may eventually have a solution for that involving subtransactions. regards, tom lane
>"M. Bastin" <marcbastin@mindspring.com> writes: >> I'm reading the docs about the extended query language and the >> portals. Is it possible to use portals as a kind of concurrent >> cursors over a single TCP/IP connection? >> Or should you execute and close portals one after another and never >> execute the next portal before the previous one is closed? (Like >> regular transactions.) > >You can certainly have multiple portals open and fetch from different >ones in turn; not clear if that's what you meant. Yes, I was thinking about a threaded client that would be able to fetch from several portals (quasi) simultaneously and repeatedly (but with other parameters) over 1 TCP/IP connection. The only thing that bothers me is the Sync command that would put an end to all portals at once isn't it? The docs seem to say you need to issue a sync after each cycle. There doesn't seem to be a possibility to selectively keep some portals open this way. That's what I make from reading the docs, I haven't had the chance to actually experiment with it yet. (I'm in the process of implementing the frontend-backend protocol v3 for my IDE. Only after that can I start experimenting.) Marc
"M. Bastin" <marcbastin@mindspring.com> writes: > I'm reading the docs about the extended query language and the > portals. Is it possible to use portals as a kind of concurrent > cursors over a single TCP/IP connection? > Or should you execute and close portals one after another and never > execute the next portal before the previous one is closed? (Like > regular transactions.) You can certainly have multiple portals open and fetch from different ones in turn; not clear if that's what you meant. A portal is more like a SQL cursor than like a transaction. regards, tom lane
This is a "best practice" question. Imagine you have 2 computers. One is a web server and the other runs PostgreSQL. Multiple browsers from all over the net connect to the web server, but to minimize load, the web server itself opens only 1 TCP/IP connection with PostgreSQL. (This is a general question, it doesn't have to be a web server. It could be any kind of "middleman".) How would all the browsers be best served simultaneously? The solution I think about is to use the extended query protocol, open an infinitely lasting transaction, and have a portal per browser. That would work to read data until an error occurs and the transaction would roll back and then *all* browsers' sessions would have to be reset. (Therefore to write data I'd have to cheat and still open a second TCP/IP connection over which each insert or update gets committed instantly.) Is there a better way to implement concurrent users over 1 single TCP/IP session? Thanks, Marc At 1:34 PM -0400 7/1/04, Tom Lane wrote: > > The only thing that bothers me is the Sync command that would put an >> end to all portals at once isn't it? > >Not if you are inside a transaction block (ie, have issued a BEGIN >command). > >A possibly more serious issue is that an error detected in any one of >the portals aborts the transaction and thus you lose all the portals. >We may eventually have a solution for that involving subtransactions. > > regards, tom lane