
Intro
Dear Friend! Welcome to our user-agents library! Here you can find information about your own user agent, decode a manually entered user agent string, and browse a library of user agents with descriptions and explanations - all completely free.
Help us make it better - send us your feedback.

Enjoy, with best wishes - the support team



Updated BCP library [News 2015-10-18]
We updated the BCP library for better user agent decoding, so our tool is now more up to date.

Most popular bot user-agent of last month [News 2009-12-01]
We found that the user-agent most used by unfriendly bots during the last month was "Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US) AppleWebKit/532.0 (KHTML, like Gecko) Chrome/3.0.195.27 Safari/532.0". A bot tried to scrape the My-Addr Project from more than 1000 unique IP addresses; as a result, all the IPs were blocked and the attempts stopped.

IE6 no longer supported by a large part of sites [News 2009-09-05]
Today LiveJournal became incompatible with IE6: when opening the settings page, it shows the error "page uploading is broken"; one possible reason may be JScript ads. A typical user agent string is Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.2; WOW64; SV1). With Firefox everything works perfectly, so let's use Firefox :).
Example of an FF3 user-agent string - Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.13) Gecko/2009080315 Ubuntu/9.04 (jaunty) Firefox/3.0.13 GTB5.
Example of an FF3.5 user-agent - Mozilla/5.0 (Windows; U; Windows NT 5.2; en-US; rv:1.9.1.2) Gecko/20090729 Firefox/3.5.2.
IE6 is now the "browser" of bots and crawlers.

What is user agent

A user agent (UA) is the client application used with a network protocol (for example, HTTP); in most cases the HTTP_USER_AGENT value is associated with the World Wide Web.
Web user agents range from web browsers and e-mail clients to search engine crawlers ("spiders"), as well as mobile phones, screen readers and braille browsers used by people with disabilities, and scripts that send manually set UA data.
When Internet users visit a web site, a text string is generally sent to identify the user agent to the server.
Keep in mind that the user agent data sent may not be genuine: a client application can send whatever it wants, and nobody verifies it.
This forms part of the HTTP request, prefixed with User-Agent: (case does not matter), and typically includes information such as the application name, version, host operating system, and language. Bots, such as web crawlers, often also include a URL and/or e-mail address so that the webmaster can contact the operator of the bot. Many web browsers support replacing the UA string; for example, Konqueror can send the UA string of Google Bot.
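As a sketch of how a client sets this header, here is a short Python example (the tool name and URL in the UA string are hypothetical) that builds an HTTP request carrying a custom User-Agent without actually sending it:

```python
import urllib.request

# Hypothetical example: build a request that claims to be "MyTool/1.0".
# Any script can claim to be any browser -- the server cannot verify it.
req = urllib.request.Request(
    "http://example.com/",
    headers={"User-Agent": "Mozilla/5.0 (compatible; MyTool/1.0; +http://example.com/bot)"},
)

# urllib stores header names in capitalized form internally.
print(req.get_header("User-agent"))  # prints the UA string set above
```

Calling urllib.request.urlopen(req) would then send this header with the request, exactly as a browser does.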


Different web browsers (Internet Explorer, Firefox, Opera, Safari, etc.) therefore identify themselves with different user agent strings.
Search engines often use the UA to identify themselves, for example Google [Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)] and Yahoo [Mozilla/5.0 (compatible; Yahoo! Slurp/3.0; http://help.yahoo.com/help/us/ysearch/slurp)].
This is how a web site can tell whether a visitor is a human or a bot.
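A minimal sketch of such a check in Python - a best-effort heuristic that matches the UA string against known crawler tokens (the token list here is illustrative, not exhaustive; and as noted above, the string can be forged):

```python
# Illustrative token list -- real sites use much larger, maintained lists.
KNOWN_BOT_TOKENS = ("googlebot", "yahoo! slurp", "bingbot", "spider", "crawler")

def looks_like_bot(user_agent: str) -> bool:
    """Return True if the UA string contains a known crawler token."""
    ua = user_agent.lower()
    return any(token in ua for token in KNOWN_BOT_TOKENS)

print(looks_like_bot("Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"))  # True
print(looks_like_bot("Mozilla/5.0 (Windows; U; Windows NT 5.2; en-US; rv:1.9.1.2) Gecko/20090729 Firefox/3.5.2"))  # False
```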

User agents consist of 6 parts

  • Application name and version
  • Browser type[2]
  • The operating system[3]
  • Any extensions installed with the browser/system[4]
  • "compatible" shows that this browser works correctly and is compatible with [2], [3], [4]
  • Some installed software and the OS version.
Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; .NET CLR 1.1.4322) SeoTool/1.0023
  • Mozilla/4.0 - application name and version
  • compatible - the "compatible" flag
  • MSIE 6.0 - browser type
  • Windows NT 5.1 - operating system and OS version
  • .NET CLR 1.1.4322 - extensions installed with the browser/system
  • SeoTool/1.0023 - installed software
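The example above can also be split programmatically. A rough Python sketch, assuming the classic "Product/Version (comment; ...) Extra/Version" shape (real user agent strings vary widely and often need a dedicated parser):

```python
import re

UA = "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; .NET CLR 1.1.4322) SeoTool/1.0023"

# Capture: app name/version, the parenthesized comment, trailing software.
match = re.match(r"(?P<app>\S+/\S+) \((?P<comment>[^)]*)\)(?: (?P<soft>.*))?", UA)

app = match.group("app")                    # application name and version
tokens = [t.strip() for t in match.group("comment").split(";")]
compatible = tokens[0] == "compatible"      # the "compatible" flag
browser, os_name = tokens[1], tokens[2]     # browser type and operating system
extensions = tokens[3:]                     # installed extensions
installed = match.group("soft")             # extra installed software

print(app, compatible, browser, os_name, extensions, installed)
```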

A user agent string is the string that the browser/application sends during a request (usually just called the "user agent"). The user-agent string is one of the criteria by which crawlers can be excluded from certain pages or parts of a website using the "Robots Exclusion Standard" (robots.txt).
This allows webmasters who feel that certain parts of their website should not be included in the data gathered by a particular crawler, or that a particular crawler is using too much bandwidth, to request that the crawler not visit those pages.
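A short sketch of this mechanism, using a hypothetical robots.txt that bars a crawler called "BadBot" from /private/ while allowing everyone else; Python's urllib.robotparser evaluates the rules the same way a well-behaved crawler would:

```python
import urllib.robotparser

# Hypothetical robots.txt: "BadBot" may not visit /private/,
# all other user agents may fetch everything.
ROBOTS_TXT = """\
User-agent: BadBot
Disallow: /private/

User-agent: *
Disallow:
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

print(rp.can_fetch("BadBot", "http://example.com/private/page.html"))     # False
print(rp.can_fetch("Googlebot", "http://example.com/private/page.html"))  # True
```

Note that robots.txt is only a request: compliant crawlers check it before fetching, but nothing physically stops a bot that ignores it.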