
bopoda/robots-txt-parser

Latest stable version: v2.5.0

Composer install command:

composer require bopoda/robots-txt-parser

Package Description

PHP class for parsing robots.txt files according to the Google and Yandex specifications.

README

RobotsTxtParser — PHP class for parsing all the directives of robots.txt files.

RobotsTxtValidator — PHP class for checking whether a URL is allowed or disallowed according to robots.txt rules.

Try the RobotsTxtParser demo online on live domains.

Parsing is carried out according to the Google and Yandex specifications.

Latest improvements:

  1. Parse the Clean-param directive according to the clean-param syntax.
  2. Delete comments (everything following the '#' character, up to the first line break, is disregarded).
  3. Improved parsing of the Host directive: as an intersectional directive it applies to the user-agent '*'; if there are multiple Host directives, search engines take the value of the first one (see the sketch after this list).
  4. Unused methods were removed from the class, refactoring was done, and the visibility of the class properties was corrected.
  5. Added more test cases, including test cases for all of the new functionality.
  6. Added the RobotsTxtValidator class to check whether a URL is allowed to be crawled.
  7. With version 2.0, the speed of RobotsTxtParser was significantly improved.
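
The effect of points 2 and 3 can be observed directly in the parsed rules. A minimal sketch (the input is made up for illustration, and the array shape assumed here matches the usage example further below):

$parser = new RobotsTxtParser("
	User-agent: *
	Disallow: /private # this comment is stripped before parsing
	Host: example.com
	Host: example2.com
");

$rules = $parser->getRules();
var_dump($rules['*']['disallow']); // contains "/private" only, the comment is gone
var_dump($rules['*']['host']);     // "example.com", only the first Host is kept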

Supported Directives:

  • DIRECTIVE_ALLOW = 'allow';
  • DIRECTIVE_DISALLOW = 'disallow';
  • DIRECTIVE_HOST = 'host';
  • DIRECTIVE_SITEMAP = 'sitemap';
  • DIRECTIVE_USERAGENT = 'user-agent';
  • DIRECTIVE_CRAWL_DELAY = 'crawl-delay';
  • DIRECTIVE_CLEAN_PARAM = 'clean-param';
  • DIRECTIVE_NOINDEX = 'noindex';

Installation

Install the latest version with

composer require bopoda/robots-txt-parser

Run tests

Run the PHPUnit tests using the command:

php vendor/bin/phpunit

Usage example

You can start the parser by getting the content of a robots.txt file from a website:

$parser = new RobotsTxtParser(file_get_contents('http://example.com/robots.txt'));
var_dump($parser->getRules());
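
Note that file_get_contents() returns false when the request fails; a small defensive sketch (assuming an empty string simply yields an empty rule set):

$content = @file_get_contents('http://example.com/robots.txt');
if ($content === false) {
    // Fetch failed: fall back to an empty robots.txt, which leaves everything allowed.
    $content = '';
}
$parser = new RobotsTxtParser($content);
var_dump($parser->getRules());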

Or simply use the contents of the file as input (i.e. when the content is already cached):

$parser = new RobotsTxtParser("
	User-Agent: *
	Disallow: /ajax
	Disallow: /search
	Clean-param: param1 /path/file.php

	User-agent: Yahoo
	Disallow: /

	Host: example.com
	Host: example2.com
");
var_dump($parser->getRules());

This will output:

array(2) {
  ["*"]=>
  array(3) {
    ["disallow"]=>
    array(2) {
      [0]=>
      string(5) "/ajax"
      [1]=>
      string(7) "/search"
    }
    ["clean-param"]=>
    array(1) {
      [0]=>
      string(21) "param1 /path/file.php"
    }
    ["host"]=>
    string(11) "example.com"
  }
  ["yahoo"]=>
  array(1) {
    ["disallow"]=>
    array(1) {
      [0]=>
      string(1) "/"
    }
  }
}
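
Given this array shape, individual rules can be read by user agent and directive name; a small sketch using the keys from the output above:

$rules = $parser->getRules();

// Disallowed paths for the default '*' group (empty array if none are set).
$disallowedPaths = $rules['*']['disallow'] ?? [];

// The preferred host is stored as a plain string under the '*' group.
$host = $rules['*']['host'] ?? null;

foreach ($disallowedPaths as $path) {
    echo $path, PHP_EOL; // prints /ajax and /search
}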

In order to validate a URL, use the RobotsTxtValidator class:

$parser = new RobotsTxtParser(file_get_contents('http://example.com/robots.txt'));
$validator = new RobotsTxtValidator($parser->getRules());

$url = '/';
$userAgent = 'MyAwesomeBot';

if ($validator->isUrlAllow($url, $userAgent)) {
    // Crawl the site URL and do nice stuff
}
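
The same validator can also be used to filter a list of candidate URLs for one user agent; a brief sketch built only on the isUrlAllow() call shown above (the URLs are placeholders):

$urls = ['/', '/ajax/live', '/search?q=test', '/blog/post-1'];
$userAgent = 'MyAwesomeBot';

$allowedUrls = array_filter($urls, function ($url) use ($validator, $userAgent) {
    return $validator->isUrlAllow($url, $userAgent);
});

// $allowedUrls keeps only the URLs this user agent may crawl.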

Contribution

Feel free to create a PR in this repository. Please follow the PSR coding style.

See the list of contributors who participated in this project.

Final Notes:

Please use version 2.0+, which works by the same rules but performs significantly better.
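
To stay on the 2.x line explicitly, the version constraint can be given to Composer (a sketch; adjust the constraint to your needs):

composer require bopoda/robots-txt-parser:^2.0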

Statistics

  • Total downloads: 243.74k
  • Monthly downloads: 0
  • Daily downloads: 0
  • Favorites: 48
  • Hits: 1
  • Dependent projects: 2
  • Suggesters: 0

GitHub Info

  • Stars: 47
  • Watchers: 4
  • Forks: 17
  • Language: PHP

Other Info

  • License: MIT
  • Last updated: 2019-01-02