Re: Authentication prompt for mbox downloads - Mailing list pgsql-www
| From | Magnus Hagander |
|---|---|
| Subject | Re: Authentication prompt for mbox downloads |
| Date | |
| Msg-id | CABUevEx8qg65Q+XBt16qzAijio=O6-1QBFks8xxmgMwR6wzAfg@mail.gmail.com |
| In response to | Re: Authentication prompt for mbox downloads (Stephen Frost <sfrost@snowman.net>) |
| Responses | Re: Authentication prompt for mbox downloads |
| List | pgsql-www |
On Sun, Mar 29, 2020 at 9:35 PM Stephen Frost <sfrost@snowman.net> wrote:
>
> Greetings,
>
> * Stefan Kaltenbrunner (stefan@kaltenbrunner.cc) wrote:
> > On 3/29/20 9:24 PM, Magnus Hagander wrote:
> > > On Sun, Mar 29, 2020 at 9:08 PM Stephen Frost <sfrost@snowman.net> wrote:
> > >> * Stefan Kaltenbrunner (stefan@kaltenbrunner.cc) wrote:
> > >>> On 3/26/20 3:44 PM, Magnus Hagander wrote:
> > >>>> We could put it just to the right of the download link though? There
> > >>>> seems to be enough space for that with a large margin at least on
> > >>>> desktops.
> > >>>
> > >>> what about doing something completely different - like hiding those
> > >>> links behind community auth and call it a day (especially with so few users)?
> > >>>
> > >>> I don't see a problem with requiring community auth for downloading those
> > >>> links as well as raw messages..
> > >>
> > >> I guess the question there is- how hard would it be for someone to
> > >> script the community auth process, so that they can automatically
> > >> download the mboxes each month, if they wish to..?
> > >
> > > Complicated enough that a process like that is going to be *really* annoying.
>
> Hrmpf, really?

It does become a multi-step process. You have to go fetch the login page, then
submit your login credentials and get a cookie. Then you have to go to the
download page, follow the redirect from there up to the login system, provide
the cookie, follow the next redirect over to the receive page, get the cookie
from there, and follow the next redirect back to the download URL with this new
cookie.

So yeah, doable, but not exactly convenient. And if you want to automate this,
it now means you are storing your community auth password in a script
somewhere, and not just a throwaway password.
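To make that concrete, here is a rough sketch of what scripting that dance with Python's requests library might look like. The URLs and form field names below are hypothetical placeholders rather than the real ones, and a real script would also have to extract and resubmit the CSRF token from the login form:

```python
# Rough sketch of scripting the community-auth dance. All URLs and form field
# names are hypothetical placeholders; a real script would also need to handle
# the CSRF token on the login form.
import requests

session = requests.Session()

# Step 1: fetch the login page to pick up the initial session cookie.
session.get("https://www.postgresql.org/account/login/")  # hypothetical URL

# Step 2: submit the (real, non-throwaway) community auth credentials.
session.post(
    "https://www.postgresql.org/account/login/",  # hypothetical URL
    data={"username": "someuser", "password": "my-community-auth-password"},
)

# Step 3: request the mbox. The session follows the redirect chain
# (download page -> login system -> receive page -> back to the download URL)
# and carries the cookies handed out along the way.
resp = session.get(
    "https://www.postgresql.org/list/pgsql-www/mbox/pgsql-www.202003"  # hypothetical URL
)
resp.raise_for_status()

with open("pgsql-www.202003.mbox", "wb") as f:
    f.write(resp.content)
```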
> > if we are just talking about providing a monthly mbox file to download
> > we could probably just provide a completely different way to provide
> > that data that does not involve dynamically generating it in real-time...
>
> Being able to download just a single thread as an mbox is really awful
> handy.

And regardless of where we provide them, we still have the same problem. The
*reason* we have that password there is to prevent automated scrapers from
grabbing all that data. And that *does* work, if we look at the logs of
downloads of them. Yes, custom-made scrapers that want to grab that data (which
is what it's designed for) sometimes do, but we get *very* few downloads from
automated scrapers and crawlers.

> > >> I'm guessing it's probably not that hard, but that's the one thing that
> > >> comes to mind regarding this proposal.. Otherwise I am generally in
> > >> support of requiring community auth for this instead of using the basic
> > >> auth method we have today.
> > >
> > > One of the original use cases for "download thread as mbox" was you, I
> > > believe, which was basically about opening a mailbox directly from
> > > inside mutt or something? Isn't that also going to be a lot more
> > > annoying with something like this?
> >
> > I don't understand that argument - so mutt directly does http and http
> > basic auth, or what is the use case? If it is just clicking in your
> > browser and having that link fed into mutt by means of mime-type or file
> > extension detection, that would just work as before...
>
> The use-case is something with wget/curl to actually pull down the mbox
> of the month or of the thread. If it's actually hard to script one of
> those to work with community auth then that would definitely be an issue
> for at least some of the users of it.

Once you've reached the point of scripting it, though, you have already read
the docs :) (and in fact the code behind it all)

> I suppose there's another thought in here around having different
> end-points to the archives, et al. That is- keep the existing
> end-points with basic auth but don't link to that from the web page
> archive and instead have it linked from some other page that talks about
> how to use basic auth to pull down the mboxes. Then, on the web
> archive, have a link to an end-point that uses community auth.

We could take it one more step and just make it an intermediate page. That is,
the link that now goes to the mbox would go to an *individual* page for that
mbox file that says "here are the credentials, now press this big button below
and input those credentials". That button would point directly to the
predictable URL for that mbox, meaning that those who automate this somewhere
can just go directly to that URL (see the sketch below), whereas those who are
manually downloading it for the first time get a page with instructions.

I think the only downside in the process there is for those who manually
download it regularly and *do* remember the instructions -- they have to click
twice. But it's still just clicks, and very straightforward clicks.

--
Magnus Hagander
Me: https://www.hagander.net/
Work: https://www.redpill-linpro.com/
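For comparison, the basic-auth path against such a predictable URL stays a single request once you know the credentials. A minimal sketch, again with a hypothetical URL pattern and placeholder credentials rather than the real ones published on the archives page:

```python
# Minimal sketch: fetch a monthly mbox from a predictable URL protected by
# HTTP basic auth. The URL pattern and credentials are placeholders, not the
# real ones.
import requests

resp = requests.get(
    "https://www.postgresql.org/list/pgsql-www/mbox/pgsql-www.202003",  # hypothetical URL pattern
    auth=("some-user", "the-published-password"),  # placeholder credentials
)
resp.raise_for_status()

with open("pgsql-www.202003.mbox", "wb") as f:
    f.write(resp.content)
```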