:mod:`urllib.robotparser` ---  Parser for robots.txt
====================================================

.. module:: urllib.robotparser
   :synopsis: Load a robots.txt file and answer questions about
              fetchability of other URLs.

.. sectionauthor:: Skip Montanaro <skip@pobox.com>

**Source code:** :source:`Lib/urllib/robotparser.py`

.. index::
   single: WWW
   single: World Wide Web
   single: URL
   single: robots.txt

--------------

This module provides a single class, :class:`RobotFileParser`, which answers
questions about whether or not a particular user agent can fetch a URL on the
Web site that published the :file:`robots.txt` file.  For more details on the
structure of :file:`robots.txt` files, see http://www.robotstxt.org/orig.html.


.. class:: RobotFileParser(url='')

   This class provides methods to read, parse and answer questions about the
   :file:`robots.txt` file at *url*.

   .. method:: set_url(url)

      Sets the URL referring to a :file:`robots.txt` file.

   .. method:: read()

      Reads the :file:`robots.txt` URL and feeds it to the parser.

   .. method:: parse(lines)

      Parses the lines argument.

   .. method:: can_fetch(useragent, url)

      Returns ``True`` if the *useragent* is allowed to fetch the *url*
      according to the rules contained in the parsed :file:`robots.txt`
      file.

   .. method:: mtime()

      Returns the time the ``robots.txt`` file was last fetched.  This is
      useful for long-running web spiders that need to check for new
      ``robots.txt`` files periodically.

   .. method:: modified()

      Sets the time the ``robots.txt`` file was last fetched to the current
      time.

   .. method:: crawl_delay(useragent)

      Returns the value of the ``Crawl-delay`` parameter from ``robots.txt``
      for the *useragent* in question.  If there is no such parameter or it
      doesn't apply to the *useragent* specified or the ``robots.txt`` entry
      for this parameter has invalid syntax, return ``None``.

      .. versionadded:: 3.6

   .. method:: request_rate(useragent)

      Returns the contents of the ``Request-rate`` parameter from
      ``robots.txt`` in the form of a :func:`~collections.namedtuple`
      ``(requests, seconds)``.  If there is no such parameter or it doesn't
      apply to the *useragent* specified or the ``robots.txt`` entry for this
      parameter has invalid syntax, return ``None``.

      .. versionadded:: 3.6


The following example demonstrates basic use of the :class:`RobotFileParser`
class::

   >>> import urllib.robotparser
   >>> rp = urllib.robotparser.RobotFileParser()
   >>> rp.set_url("http://www.musi-cal.com/robots.txt")
   >>> rp.read()
   >>> rrate = rp.request_rate("*")
   >>> rrate.requests
   3
   >>> rrate.seconds
   20
   >>> rp.crawl_delay("*")
   6
   >>> rp.can_fetch("*", "http://www.musi-cal.com/cgi-bin/search?city=San+Francisco")
   False
   >>> rp.can_fetch("*", "http://www.musi-cal.com/")
   True
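A long-running spider can combine :meth:`mtime` and :meth:`modified` to
refresh a cached ``robots.txt`` periodically, as mentioned above.  The
sketch below shows one possible arrangement; the site URL, the user agent
string and the refresh interval are placeholders, not values prescribed by
the module::

   import time
   import urllib.robotparser

   REFRESH_SECONDS = 3600            # arbitrary staleness threshold
   AGENT = "example-spider"          # hypothetical user agent string

   rp = urllib.robotparser.RobotFileParser()
   rp.set_url("http://www.example.com/robots.txt")   # hypothetical site

   def polite_fetch_allowed(url):
       """Re-read robots.txt when stale, then check *url* for AGENT."""
       if time.time() - rp.mtime() > REFRESH_SECONDS:
           rp.read()        # fetch and parse robots.txt again
           rp.modified()    # record the fetch time for the next check
       return rp.can_fetch(AGENT, url)

   if polite_fetch_allowed("http://www.example.com/some/page.html"):
       delay = rp.crawl_delay(AGENT)
       if delay is not None:
           time.sleep(delay)         # honour Crawl-delay before fetching
       # ... fetch the page here ...

Note that :meth:`read` does not update the timestamp itself, so the sketch
calls :meth:`modified` explicitly after each fetch; :meth:`mtime` returns
``0`` before the first call, which conveniently forces an initial read.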