
giauphan/crawl-blog-data

Latest stable version: v1.1

Composer install command:

composer require giauphan/crawl-blog-data

Package Summary

This powerful web scraping tool is designed to gather data from blogs and websites with ease, providing you with valuable insights and information.

README

Overview

Welcome to the Crawler Blog for Laravel repository! This robust web scraping tool is crafted to effortlessly gather data from blogs and websites, delivering valuable insights and information. Whether you're a content creator, market researcher, or e-commerce entrepreneur, this Laravel-based crawler provides an ideal solution for your data extraction needs.

Features

  • Web Scraping: Extract data from various blogs and websites, including blog posts, product descriptions, prices, and customer reviews.

Installation

Follow these steps to get the Crawler Blog for Laravel up and running:

  1. Install the package via Composer:
    composer require giauphan/crawl-blog-data -W

Laravel 10.x

You need to register the service provider in your config/app.php file:

'providers' => [
    Giauphan\CrawlBlog\CrawlBlogDataServiceProvider::class,
],

You need to add commands to your app/Console/Kernel.php file:

protected function commands(): void
{
    $this->load(__DIR__.'/Commands');
    $this->load(__DIR__.'/../CrawlBlog');

    require base_path('routes/console.php');
}

Laravel 11.x

You need to register the command in your bootstrap/app.php file:

use App\CrawlBlog\CrawlExample;

->withCommands([
    CrawlExample::class,
])
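For context, `->withCommands()` chains onto Laravel 11's application builder. A minimal bootstrap/app.php for a default Laravel 11 skeleton looks roughly like this (a sketch; your routing and middleware options may differ):

```php
<?php

use App\CrawlBlog\CrawlExample;
use Illuminate\Foundation\Application;

return Application::configure(basePath: dirname(__DIR__))
    ->withRouting(
        web: __DIR__.'/../routes/web.php',
        commands: __DIR__.'/../routes/console.php',
        health: '/up',
    )
    ->withCommands([
        // Register the generated crawl command with the console kernel.
        CrawlExample::class,
    ])
    ->create();
```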

You can publish and run the migrations with:

php artisan vendor:publish --provider="Giauphan\CrawlBlog\CrawlBlogDataServiceProvider" --tag="migrations"
php artisan migrate

You can publish the config file with:

php artisan vendor:publish --provider="Giauphan\CrawlBlog\CrawlBlogDataServiceProvider" --tag="command"
  2. Configuration:
    • Update the .env file to configure the database settings.
    • Adjust the CrawlBlogData.php file to customize scraping behavior based on your requirements.
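The database settings referenced above are the standard Laravel .env keys. A minimal sketch, where the database name and credentials are placeholders you should replace with your own:

```dotenv
DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=crawl_blog
DB_USERNAME=root
DB_PASSWORD=secret
```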

You can generate a new settings class using this artisan command.

 php artisan make:crawl-blog CrawlExample
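The shape of the generated class isn't documented here, but conceptually a crawl settings class is an artisan console command. A hypothetical sketch, in which the signature, arguments, and `handle()` body are illustrative assumptions rather than the package's actual API:

```php
<?php

namespace App\CrawlBlog;

use Illuminate\Console\Command;

class CrawlExample extends Command
{
    // Mirrors the documented invocation: url, category name, language, post limit.
    protected $signature = 'crawl:CrawlExample {url} {category_name} {lang} {limitblog}';

    protected $description = 'Crawl blog posts from the given URL';

    public function handle(): int
    {
        $url = $this->argument('url');
        $limit = (int) $this->argument('limitblog');

        // Hypothetical: fetch and persist up to $limit posts from $url here.
        $this->info("Crawling up to {$limit} posts from {$url}...");

        return self::SUCCESS;
    }
}
```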
  3. Executing the Crawler: Run the crawler from the command line:
    php artisan crawl:CrawlExample url category_name lang limitblog
    This initiates the web scraping process, and the extracted data is saved to the configured database tables.

Contributions

We welcome contributions from the community! If you encounter bugs, have feature requests, or want to enhance the crawler, please submit issues or pull requests on GitHub.

License

The Crawler Blog for Laravel is open-source software licensed under the MIT License. Feel free to use, modify, and distribute it following the license terms.

Contact

For inquiries or support, contact us at Giauphan012@gmail.com.

Thank you for using the Crawler Blog for Laravel! Happy scraping!

Statistics

  • Total downloads: 1.26k
  • Monthly downloads: 0
  • Daily downloads: 0
  • Favorites: 1
  • Views: 0
  • Dependents: 0
  • Suggesters: 0

GitHub Info

  • Stars: 1
  • Watchers: 1
  • Forks: 0
  • Language: PHP

Other Info

  • License: MIT
  • Last updated: 2024-01-16