lzhx00/laravel-llm-client
Latest stable version: v2.0.1
Composer install command:
composer require lzhx00/laravel-llm-client
Package description
Laravel LLM Client Package - A unified interface for multiple LLM providers
README
A Laravel package providing a unified, chainable interface for multiple LLM (Large Language Model) providers: OpenAI, Anthropic (Claude), Gemini, and Ollama.
Requirements
- Laravel 10.x, 11.x, or 12.x (Laravel 9 and earlier are not supported)
- PHP 8.1 or higher
Tested on Laravel 12.x. Other versions may work, but are not officially tested.
Installation
composer require lzhx00/laravel-llm-client
Laravel will auto-discover and register the package.
If you have disabled auto-discovery, add the following to config/app.php:
'providers' => [
    // ...
    Lzhx00\LLMClient\LLMClientServiceProvider::class,
],

'aliases' => [
    // ...
    'LLMClient' => Lzhx00\LLMClient\Facades\LLMClient::class,
],
Configuration
Publish the config file (optional, for customization):
php artisan vendor:publish --tag=llm-client-config
Set your API keys and provider settings in .env or config/llm.php:
LLM_DEFAULT_PROVIDER=openai
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=AIza...
OLLAMA_BASE_URL=http://localhost:11434
config/llm.php Example
Each provider has its own default_model, embedding_model, and options.
return [
    'default' => env('LLM_DEFAULT_PROVIDER', 'openai'),

    'providers' => [
        'openai' => [
            'api_key' => env('OPENAI_API_KEY'),
            'default_model' => 'gpt-3.5-turbo',
            'embedding_model' => 'text-embedding-3-small',
            'options' => [
                'temperature' => 0.7,
                // ...other OpenAI-specific options
            ],
        ],

        'ollama' => [
            'base_url' => env('OLLAMA_BASE_URL', 'http://localhost:11434'),
            'default_model' => 'llama3',
            'embedding_model' => 'nomic-embed-text',
            'options' => [
                'temperature' => 0.5,
                // ...other Ollama-specific options
            ],
        ],

        // ...other providers
    ],
];
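Once the config file is published, its values are reachable through Laravel's standard config() helper. A small sketch (the keys shown mirror the array above; nothing package-specific is assumed beyond it):

```php
// Which provider is active, falling back to the configured default.
$provider = config('llm.default'); // e.g. 'openai'

// Look up that provider's default chat model from config/llm.php.
$model = config("llm.providers.{$provider}.default_model");
```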
Usage
Basic Text Generation
$response = LLMClient::generate('Say hello in English.');
Specify Provider
$response = LLMClient::use('ollama')->generate('Say hello in English.');
Chainable Model/Options (Recommended)
$response = LLMClient::model('llama3')->with(['temperature' => 0.5])->generate('Say hello.');
- model() only affects chat/completion calls.
- embedModel() only affects embed().
- with() sets provider-specific options (everything except the model and embedding model).
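Putting the methods above together, a hedged sketch (assuming the chainable calls compose with use() as the earlier examples suggest):

```php
// Chat/completion: model() picks the chat model, with() passes options.
$answer = LLMClient::use('ollama')
    ->model('llama3')              // affects chat/completion only
    ->with(['temperature' => 0.2]) // provider-specific options
    ->generate('Summarize Laravel in one sentence.');

// Embeddings: embedModel() is the embedding counterpart of model().
$vector = LLMClient::use('ollama')
    ->embedModel('nomic-embed-text')
    ->embed('Summarize Laravel in one sentence.');
```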
Embeddings
$vector = LLMClient::use('ollama')->embed('hello world');

// Specify the embedding model
$vector = LLMClient::embedModel('nomic-embed-text')->embed('hello world');
Streaming Response
LLMClient::generateStream('Tell me a joke.', [], function ($chunk) {
    echo $chunk;
});
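If you also need the complete text after streaming finishes, one approach (a sketch, assuming generateStream only delivers chunks to the callback and does not return the full response itself) is to accumulate the chunks in a closure:

```php
$full = '';

LLMClient::generateStream('Tell me a joke.', [], function ($chunk) use (&$full) {
    echo $chunk;     // render each chunk as it arrives
    $full .= $chunk; // keep the complete response for later use
});
```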
List Models
$models = LLMClient::use('gemini')->models();
Supported Providers
- OpenAI (ChatGPT)
- Anthropic (Claude)
- Gemini (Google)
- Ollama
⚠️ Note: Only the Ollama provider has been fully tested.
The other providers are implemented from their official API documentation but have not been tested with real API keys.
📄 License
MIT License
Statistics
- Total downloads: 3
- Monthly downloads: 0
- Daily downloads: 0
- Favorites: 0
- Views: 0
- Dependent projects: 0
- Recommendations: 0
Other information
- License: MIT
- Last updated: 2025-07-09