

Stateful programmatic web browsing in Python, after Andy Lester’s Perl module WWW::Mechanize.


The examples below are written for a website that does not exist, so they cannot be run. There are also some working examples that you can run.

import re
import mechanize

br = mechanize.Browser()
br.open("")
# follow second link with element text matching regular expression
response1 = br.follow_link(text_regex=r"cheese\s*shop", nr=1)
assert br.viewing_html()
print br.title()
print response1.geturl()
print response1.info()  # headers
print response1.read()  # body

# Select the form to work with before accessing its fields
br.select_form(name="order")
# Browser passes through unknown attributes (including methods)
# to the selected HTMLForm.
br["cheeses"] = ["mozzarella", "caerphilly"]  # (the method here is __setitem__)
# Submit current form. Browser calls .close() on the current response on
# navigation, so this closes response1
response2 = br.submit()

# print currently selected form (don't call .submit() on this, use br.submit())
print br.form

response3 = br.back() # back to cheese shop (same data as response1)
# the history mechanism returns cached response objects
# we can still use the response, even though it was .close()d
response3.get_data() # like .seek(0) followed by .read()
response4 = br.reload() # fetches from server

for form in br.forms():
    print form
# .links() optionally accepts the keyword args of .follow_/.find_link()
for link in br.links(url_regex=""):
    print link
    br.follow_link(link)  # takes EITHER Link instance OR keyword args

You may control the browser’s policy by using the methods of mechanize.Browser’s base class, mechanize.UserAgentBase. For example:

br = mechanize.Browser()
# Explicitly configure proxies (Browser will attempt to set good defaults).
# Note the userinfo ("joe:password@") and port number (":3128") are optional.
br.set_proxies({"http": "",
                "ftp": "",
                })
# Add HTTP Basic/Digest auth username and password for HTTP proxy access.
# (equivalent to using "joe:password@..." form above)
br.add_proxy_password("joe", "password")
# Add HTTP Basic/Digest auth username and password for website access.
br.add_password("", "joe", "password")
# Don't handle HTTP-EQUIV headers (HTTP headers embedded in HTML).
br.set_handle_equiv(False)
# Ignore robots.txt. Do not do this without thought and consideration.
br.set_handle_robots(False)
# Don't add Referer (sic) header
br.set_handle_referer(False)
# Don't handle Refresh redirections
br.set_handle_refresh(False)
# Don't handle cookies
br.set_cookiejar(None)
# Supply your own mechanize.CookieJar (NOTE: cookie handling is ON by
# default: no need to do this unless you have some reason to use a
# particular cookiejar)
cj = mechanize.CookieJar()
br.set_cookiejar(cj)
# Log information about HTTP redirects and Refreshes.
br.set_debug_redirects(True)
# Log HTTP response bodies (ie. the HTML, most of the time).
br.set_debug_responses(True)
# Print HTTP headers.
br.set_debug_http(True)

# To make sure you're seeing all debug output:
import logging
import sys
logger = logging.getLogger("mechanize")
logger.addHandler(logging.StreamHandler(sys.stdout))
logger.setLevel(logging.INFO)

# Sometimes it's useful to process bad headers or bad HTML:
response = br.response()  # this is a copy of response
headers = response.info()  # currently, this is a mimetools.Message
headers["Content-type"] = "text/html; charset=utf-8"
response.set_data(response.get_data().replace("<!---", "<!--"))
br.set_response(response)

mechanize exports the complete interface of urllib2:

import mechanize
response = mechanize.urlopen("")

When using mechanize, anything you would normally import from urllib2 should be imported from mechanize instead.


Much of the code was originally derived from the work of the following people:

  • Gisle Aas — libwww-perl

  • Jeremy Hylton (and many others) — urllib2

  • Andy Lester — WWW::Mechanize

  • Johnny Lee (coincidentally-named) — MSIE CookieJar Perl code from which mechanize’s support for that is derived.


  • Gary Poster and Benji York at Zope Corporation — contributed significant changes to the HTML forms code

  • Ronald Tschalar — provided help with Netscape cookies

Thanks also to the many people who have contributed bug reports and patches.

See also

There are several wrappers around mechanize designed for functional testing of web applications.

See the FAQ page for other links to related software.

I prefer questions and comments to be sent to the mailing list rather than direct to me.

John J. Lee, April 2010.