[Wylug-help] Fw: Automate Web logins

Idris Fulat idrisfulat at hotmail.com
Sat Jun 26 18:03:34 BST 2004



Thanks for the response Smylers, but I'm beginning to think that this may be a bit out of my league. I am a total Linux newbie and too used to GUI-oriented stuff, and I haven't done anything with Perl so it is a bit alien. Your wget option seems easy enough and I'll Google into it, as the web page is basic, i.e. username, password and submit. Unfortunately I can't fix a time to get my teeth in as I have a lot of other tasks to fulfill at work, so I play around with these things in 'spare' time. We have 3 Linux kiosks for public net access and want to automate the login process. Like I said, I'll look into what you mentioned next week and take it from there. If I don't try I won't learn.
I may also nudge Dave to see what he has. :)
Thanks.


Idris Fulat writes:

> Hi, I have a little VB app that fills out a web page with login
> details then logs in. This process is repeated via the task scheduler
> every ten mins, so you are never logged out of the net as it times out
> every ten mins. I would like to replicate this MS process on some
> machines running Debian. I have, however, not the first clue as to how
> to go about doing it, except I'm sure a simple script would suffice.

Nobody else seems to've responded to this from Wednesday, so I'll offer
what I have.

Firstly, the 'every 10 minutes' bit is easy: cron can run any command at
pretty much any time interval you want, just by dropping a file into
/etc/cron.d/.
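
For example (the filename and the path of the script it runs are only
illustrative, not anything your site dictates), a file such as
/etc/cron.d/kiosk-login could contain:

  # min hour dom month dow user command
  */10  *    *   *     *   root /usr/local/bin/kiosk-login

The */10 in the minutes field means 'every ten minutes', and files in
/etc/cron.d/ take a user field (here root) before the command to run.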

One way of programmatically pretending to be a human browsing a website
is to write a Perl script using the WWW::Mechanize module, which does
the awkward bits: you just have to provide the specifics for your
particular site.  The HTTP::Recorder module can even be set up as a
local web-proxy to write a WWW::Mechanize script for you as you just
browse the site in question!  However, if you haven't encountered any
Perl before then doing even this is far from trivial.
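
Purely as a sketch of what such a script looks like -- the field names
'user' and 'password', and the guess that the log-in form is the first
one on the page, are just illustrations rather than details of your
site -- a minimal WWW::Mechanize log-in might be:

  #!/usr/bin/perl
  use strict;
  use warnings;
  use WWW::Mechanize;

  # Fetch the log-in page and submit the first form on it:
  my $mech = WWW::Mechanize->new();
  $mech->get('http://www.example.com/login.cgi');
  $mech->submit_form(
    form_number => 1,
    fields      => { user => 'Aardvark', password => 'blue' },
  );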

wget is a command-line program for grabbing a webpage and saving it to a
file.  Depending on the complexity of the site, it may be that making a
single HTTP get request is sufficient to log you in, perhaps something
like:

  http://www.example.com/login.cgi?user=Aardvark&password=blue
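
If the site does accept that, then the cron job need be little more than
a single wget command, something like this (the -q and -O options just
discard the downloaded page rather than saving it):

  wget -q -O /dev/null \
    'http://www.example.com/login.cgi?user=Aardvark&password=blue'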

The site's HTML log-in form almost certainly uses post rather than get
to send the data, but the script processing the submission may well
accept either; you can work out what the get equivalent is either by
reading the HTML source of the form, or using 'Mozilla Firefox' with the
Live HTTP Headers plug-in.
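
If it turns out that the script really does insist on post, recent
versions of wget can send that too, via the --post-data option -- again
using the made-up field names from the example above:

  wget -q -O /dev/null --post-data='user=Aardvark&password=blue' \
    http://www.example.com/login.cgi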

> Any detailed help appreciated.

I wouldn't describe what I've written above as "detailed", but this help
list is a free service, so the quality of answers is dependent on what
respondents feel like offering rather than what those seeking help ask
for!

More to the point, I don't know what you already know -- you may well be
_au fait_ with cron or Live HTTP Headers or whatever, and it'd be a
complete waste of my time to think up and type out detailed instructions
on exactly how to use them.  But please don't let that put you off
asking follow-up questions: if you have specific questions about any of
the above then I, and I'm sure others on the list, will try to help.

Smylers


_______________________________________________
Wylug-help mailing list
Wylug-help at wylug.org.uk
http://list.wylug.org.uk/mailman/listinfo/wylug-help