arturas88/anyllm
Latest stable version: v1.5.0
Composer install command:
composer require arturas88/anyllm
Package description
Production-ready PHP library for LLMs - OpenAI, Anthropic, Google, Mistral, xAI, OpenRouter, Ollama. One interface, all providers, zero vendor lock-in.
README
One interface, all providers. Zero vendor lock-in.
Features • Installation • Quick Start • Documentation • Examples
🎉 Production-Ready Enterprise-Grade LLM Library
A comprehensive, battle-tested PHP 8.2+ library providing a unified interface for interacting with multiple LLM providers. Built for production with persistence, logging, rate limiting, caching, metrics, middleware, and more.
Latest: All Priority 2 features complete! Embeddings, Middleware, Advanced Streaming, Metrics & Monitoring. ✅
✨ Features
🔥 Core Capabilities
- 7 LLM Providers - OpenAI, Anthropic, Google, Mistral, xAI, OpenRouter, Ollama
- Unified Interface - Same code works across all providers
- Streaming Support - Real-time response streaming with pause/resume/cancel
- Structured Output - Type-safe JSON with automatic hydration
- Tool/Function Calling - Let LLMs use your PHP functions
- Agents - Autonomous multi-iteration problem solving with tool integration
- Workflows - Multi-step orchestration with variable interpolation and human-in-the-loop
- Embeddings & RAG - Semantic search and retrieval-augmented generation
- Multi-Modal - Text, images, files, audio (provider-dependent)
🚀 Production-Ready Infrastructure
- 💾 Persistence - Database, Redis, File storage for conversations
- 📊 Logging - Database and File logging with analytics dashboard
- 🚦 Rate Limiting - Memory, Redis, Database rate limiters (distributed)
- ⚡ Caching - Redis, Memcached, Database, File, Array caches
- 🔄 Retry Logic - Exponential backoff on failures (see the sketch after this list)
- 📈 Metrics - Prometheus export, dashboards, real-time monitoring
- 🔧 Middleware - Intercept and transform requests/responses
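As a rough illustration of what exponential backoff does to retry timing, here is a minimal sketch; the one-second base delay and the 30-second cap are assumptions for illustration, not the library's actual retry internals:

// Hypothetical backoff schedule: the delay doubles per attempt, capped at 30s.
// AnyLLM's real retry logic may use different constants or add jitter.
function backoffDelays(int $maxRetries, float $baseSeconds = 1.0, float $capSeconds = 30.0): array
{
    $delays = [];
    for ($attempt = 0; $attempt < $maxRetries; $attempt++) {
        $delays[] = min($capSeconds, $baseSeconds * (2 ** $attempt));
    }
    return $delays;
}

print_r(backoffDelays(5)); // [1, 2, 4, 8, 16] seconds between attempts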
🎯 Developer Experience
- Token Counter - Estimate costs before making calls
- Config Validator - Catch errors before API calls
- Prompt Templates - Reusable templates with variables
- Agents & Workflows - Autonomous AI and multi-step pipelines
- Testing Support - Built-in fakes for unit tests
- Comprehensive Docs - Examples for every feature
📦 Installation
Prerequisites
- PHP 8.2+
- Composer
- An API key from your preferred provider (get an OpenAI key at https://platform.openai.com/api-keys)
Step 1: Install Dependencies
composer require arturas88/anyllm
composer require guzzlehttp/guzzle
Step 2: Configure API Key
Get your API key: https://platform.openai.com/api-keys
Then set it up:
# Option 1: Create .env file (recommended)
cp .env.example .env
# Edit .env and replace 'sk-your-openai-api-key-here' with your actual key
nano .env  # or use your preferred editor

# Option 2: Export directly (temporary, for current session only)
export OPENAI_API_KEY=sk-your-actual-key-here
Important: Make sure to use your actual API key, not the placeholder!
Step 3: Test It Works
composer test
You should see:
✅ Tests: 3, Assertions: 5
Run Your First Example
php examples/basic-usage.php
Expected output:
=== Example 1: Simple Text Generation ===
Response: PHP is a server-side scripting language...
Tokens used: 37
=== Example 2: Chat Conversation ===
Assistant: PHP 8.2 introduced several new features...
🚀 Quick Start
1. Basic Chat
<?php

use AnyLLM\AnyLLM;
use AnyLLM\Enums\Provider;

// Create provider
$llm = AnyLLM::provider(Provider::OpenAI)
    ->apiKey($_ENV['OPENAI_API_KEY'])
    ->model('gpt-4o')
    ->build();

// Simple chat
$response = $llm->generateText('gpt-4o', 'Explain PHP in simple terms');
echo $response->text;
2. Conversation with History
use AnyLLM\Messages\UserMessage;

$messages = [
    new UserMessage('What is PHP?'),
];

$response = $llm->chat('gpt-4o', $messages);
echo $response->content();
3. With Retry & Caching
use AnyLLM\Support\FileCache;

$cache = new FileCache();

// Enable retry logic
$llm->withRetry(maxRetries: 5);

// Cache expensive calls
$result = $cache->remember('llm:question:1', function () use ($llm) {
    return $llm->generateText('gpt-4o', 'What is AI?');
}, 3600);
4. Structured Output
use AnyLLM\StructuredOutput\Attributes\Description;

class Person
{
    #[Description('Full name')]
    public string $name;

    #[Description('Age in years')]
    public int $age;
}

$response = $llm->generateObject(
    model: 'gpt-4o',
    prompt: 'Extract person from: John Doe is 30 years old',
    schema: Person::class
);

echo $response->object->name; // "John Doe"
echo $response->object->age;  // 30
5. Tool Calling
use AnyLLM\Tools\Tool;

$weatherTool = Tool::fromCallable(
    name: 'get_weather',
    description: 'Get current weather',
    callable: function (string $location): string {
        return "Sunny, 72°F in {$location}";
    }
);

$response = $llm->chat('gpt-4o', [
    new UserMessage('What\'s the weather in Paris?')
], tools: [$weatherTool]);
6. Agents & Workflows
Agents - Autonomous Problem Solving
use AnyLLM\Agents\Agent;
use AnyLLM\Tools\Tool;

// Create an agent with a system prompt
$agent = Agent::create(
    provider: $llm,
    model: 'gpt-4o-mini',
    systemPrompt: 'You are a helpful assistant that solves problems step by step.',
);

// Simple agent execution
$result = $agent->run('What are the main benefits of using PHP?');
echo $result->content;
echo "Iterations: {$result->iterations}";

// Agent with tools
$calculator = Tool::fromCallable(
    name: 'calculator',
    description: 'Perform calculations',
    callable: function (string $expression): array {
        // Demo only: eval() is dangerous with untrusted input;
        // use a proper expression parser in production.
        return ['result' => eval("return {$expression};")];
    }
);

$agentWithTools = Agent::create($llm, 'gpt-4o-mini')
    ->withTools($calculator)
    ->withMaxIterations(10);

$result = $agentWithTools->run('Calculate 25 * 4 + 100');
echo $result->content;

foreach ($result->toolExecutions as $execution) {
    echo "Tool: {$execution->name}, Result: " . json_encode($execution->result);
}
Human-in-the-Loop Agents
// Request approval before tool execution
$agent = Agent::create($llm, 'gpt-4o-mini')
    ->withTools($databaseTool)
    ->withBeforeToolExecution(function (string $toolName, array $arguments): bool {
        // Request human approval
        echo "Tool {$toolName} will be called with: " . json_encode($arguments);
        return true; // or false to skip
    })
    ->withAfterToolExecution(function (ToolExecution $execution): mixed {
        // Review/modify tool result
        return $execution->result; // or modified result
    })
    ->withBeforeFinalResponse(function (string $content, array $messages, array $toolExecutions): ?string {
        // Review/modify final response
        return $content; // or modified content
    });
Workflows - Multi-Step Orchestration
use AnyLLM\Agents\Workflow\Workflow;
use AnyLLM\StructuredOutput\Schema;
use AnyLLM\StructuredOutput\Attributes\{Description, ArrayOf};

// Simple workflow with variable interpolation
$workflow = Workflow::create(
    provider: $llm,
    defaultModel: 'gpt-4o-mini',
)
    ->addStep(
        name: 'analyze',
        prompt: 'Analyze this text: {{input}}',
    )
    ->addStep(
        name: 'summarize',
        prompt: 'Summarize the analysis: {{analyze}}',
    )
    ->addStep(
        name: 'recommend',
        prompt: 'Provide recommendations based on: {{summarize}}',
    );

$result = $workflow->run(['input' => 'Your text here']);
echo $result->finalOutput;

foreach ($result->stepResults as $stepName => $stepResult) {
    echo "{$stepName}: {$stepResult->output}\n";
}
Workflows with Structured Output
class ProductAnalysis
{
    #[Description('Product name')]
    public string $productName;

    #[Description('Key features')]
    #[ArrayOf('string')]
    public array $features;
}

$workflow = Workflow::create($llm, 'gpt-4o-mini')
    ->addStep(
        name: 'analyze_product',
        prompt: 'Analyze: {{product_description}}',
        outputSchema: Schema::fromClass(ProductAnalysis::class),
    )
    ->addStep(
        name: 'create_marketing_plan',
        prompt: 'Create marketing plan for: {{analyze_product.productName}}',
    );

$result = $workflow->run(['product_description' => 'A mobile app...']);

$analysis = $result->stepResults['analyze_product']->output;
echo $analysis->productName;
Human-in-the-Loop Workflows
$workflow = Workflow::create($llm, 'gpt-4o-mini')
    ->addStep(name: 'draft', prompt: 'Create draft: {{topic}}')
    ->addStep(name: 'review', prompt: 'Review: {{draft}}')
    ->withBeforeStep(function (string $stepName, string $prompt, WorkflowContext $context): bool {
        // Request approval before executing step
        if ($stepName === 'review') {
            echo "Approve review step? (yes/no): ";
            return true; // or false to skip
        }
        return true;
    })
    ->withAfterStep(function (string $stepName, StepResult $result, WorkflowContext $context): ?StepResult {
        // Review/modify step result
        if ($stepName === 'draft') {
            // Allow modifications
            return $result; // or modified StepResult
        }
        return null; // Use original
    });
7. Production Features
Persistence
use AnyLLM\Conversations\ConversationManager;
use AnyLLM\Conversations\Repository\ConversationRepositoryFactory;

// Database persistence
$repository = ConversationRepositoryFactory::create('database', [
    'driver' => 'mysql',
    'host' => 'localhost',
    'database' => 'anyllm',
    'username' => 'root',
    'password' => '',
]);

$manager = new ConversationManager($llm, $repository);

// Conversations persist forever
$conversation = $manager->create('user-123', 'Support Chat');
$manager->addMessage($conversation, 'user', 'Hello!');
$response = $manager->chat($conversation, $llm);
Logging
use AnyLLM\Logging\LoggerFactory;

$logger = LoggerFactory::create('database', $dbConfig);

// All requests are logged.
// Query logs, generate analytics, track costs:
$stats = $logger->analyze('openai');
Rate Limiting
use AnyLLM\Support\RateLimit\RateLimiterFactory;

$limiter = RateLimiterFactory::create('redis', $redisConfig);

// Prevent abuse
$limiter->attempt(
    key: "user:{$userId}",
    callback: fn() => $llm->chat($messages),
    maxAttempts: 10,
    decaySeconds: 60
);
Caching
use AnyLLM\Support\Cache\CacheFactory;

$cache = CacheFactory::create('redis', $redisConfig);

// Massive cost savings
$response = $cache->remember('prompt:' . md5($prompt), function () use ($llm, $prompt) {
    return $llm->generateText('gpt-4o', $prompt);
}, 3600);
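One design note on the cache key: if the same prompt can go to different models or with different options, bake those into the key so a hit never returns a response generated under another configuration. A small sketch extending the remember() call above (the key layout is our own convention, not something the library mandates):

// Distinguish model and generation options in the key, not just the prompt.
$model   = 'gpt-4o';
$options = ['temperature' => 0.2]; // any options that change the output

$cacheKey = 'llm:' . $model . ':' . md5($prompt . json_encode($options));

$response = $cache->remember($cacheKey, function () use ($llm, $model, $prompt) {
    return $llm->generateText($model, $prompt);
}, 3600);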
8. Embeddings & Semantic Search
$texts = [
    'The cat sat on the mat',
    'A feline rested on a rug',
];

$embeddings = $llm->embed('text-embedding-3-small', $texts);

// Check similarity
$similarity = $embeddings->similarity(0, 1);
echo "Similarity: " . number_format($similarity, 4); // 0.9234

// Or find similar texts
$queryEmbedding = $llm->embed('text-embedding-3-small', 'What is AI?');
$results = \AnyLLM\Support\VectorMath::kNearest(
    $queryEmbedding->getEmbedding(0),
    $embeddings->embeddings,
    k: 3
);
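For intuition, similarity scores like the one above are typically cosine similarity: the dot product of two embedding vectors divided by the product of their magnitudes, so semantically close texts score near 1.0. A self-contained sketch in plain PHP (shown for clarity; whether AnyLLM's similarity() uses exactly this metric is an assumption here):

// Cosine similarity between two equal-length embedding vectors.
function cosineSimilarity(array $a, array $b): float
{
    $dot = $normA = $normB = 0.0;
    foreach ($a as $i => $value) {
        $dot   += $value * $b[$i];
        $normA += $value * $value;
        $normB += $b[$i] * $b[$i];
    }
    return $dot / (sqrt($normA) * sqrt($normB));
}

echo cosineSimilarity([1.0, 0.0, 1.0], [0.5, 0.5, 1.0]); // ~0.866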
Middleware Pipeline
use AnyLLM\Middleware\MiddlewarePipeline;
use AnyLLM\Middleware\CachingMiddleware;
use AnyLLM\Middleware\LoggingMiddleware;
use AnyLLM\Middleware\RateLimitMiddleware;

$pipeline = new MiddlewarePipeline([
    new RateLimitMiddleware($limiter),
    new CachingMiddleware($cache),
    new LoggingMiddleware($logger),
]);

// All requests go through middleware automatically
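The architecture overview below also mentions custom middleware support. As a sketch of the onion pattern such pipelines conventionally use, here is a hypothetical timing middleware; the handle($request, $next) signature is an assumption for illustration, not AnyLLM's documented interface:

// Hypothetical middleware: work before the call, delegate via $next,
// then inspect the result on the way back out.
class TimingMiddleware
{
    public function handle(array $request, callable $next): mixed
    {
        $start = microtime(true);

        $response = $next($request); // hand off to the next middleware/provider

        $elapsedMs = (microtime(true) - $start) * 1000;
        error_log(sprintf('LLM call took %.1f ms', $elapsedMs));

        return $response;
    }
}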
Streaming with Controls
use AnyLLM\Streaming\StreamController;

$controller = new StreamController();

$controller
    ->onChunk(fn($content) => print($content))
    ->onProgress(fn($progress) => updateUI($progress))
    ->onComplete(fn($content, $tokens) => error_log("Done: {$tokens} tokens"));

// Supports pause/resume/cancel
$controller->pause();
$controller->resume();
$controller->cancel();
Metrics Dashboard
use AnyLLM\Metrics\MetricsCollector;
use AnyLLM\Metrics\MetricsDashboard;

$metrics = new MetricsCollector();

// Track everything
$metrics->recordRequest('openai', 'gpt-4o', 'chat');
$metrics->recordLatency('openai', 'gpt-4o', 1234);
$metrics->recordTokens('openai', 'gpt-4o', 150);
$metrics->recordCost('openai', 'gpt-4o', 0.0075);

// View real-time dashboard
$dashboard = new MetricsDashboard($metrics);
echo $dashboard->render(); // or renderHtml()

// Export for Prometheus/Grafana
echo $metrics->exportPrometheus();
🎯 Supported Providers
| Provider | Chat | Streaming | Tools | Embeddings | Images | Audio | Vision | Local |
|---|---|---|---|---|---|---|---|---|
| OpenAI | ✅ | ✅ | ✅ | ✅ | ✅ DALL-E | ✅ Whisper | ✅ | ❌ |
| Anthropic | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ Claude | ❌ |
| Google | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ Gemini | ❌ |
| Mistral | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ Pixtral | ❌ |
| xAI (Grok) | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | ❌ |
| OpenRouter | ✅ | ✅ | Varies | Varies | Varies | Varies | Varies | ❌ |
| Ollama | ✅ | ✅ | Varies | Varies | ❌ | ❌ | Varies | ✅ |
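Because the interface is unified, switching providers mostly means changing the builder's provider and model, as in this sketch based on the Quick Start builder (the Provider::Anthropic enum case name and the Claude model ID are assumptions; check the Provider enum and your provider's model list):

use AnyLLM\AnyLLM;
use AnyLLM\Enums\Provider;

// Same calling code as the OpenAI example, different backend.
// The enum case and model name below are assumed for illustration.
$claude = AnyLLM::provider(Provider::Anthropic)
    ->apiKey($_ENV['ANTHROPIC_API_KEY'])
    ->model('claude-3-5-sonnet-latest')
    ->build();

$response = $claude->generateText('claude-3-5-sonnet-latest', 'Explain PHP in simple terms');
echo $response->text;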
📚 Documentation
- Changelog - Version history
- Contributing - How to contribute
- Troubleshooting - Common issues & solutions 🔧
- Database Schema - Database structure reference
💡 Examples
Comprehensive examples for every feature:
php examples/basic-usage.php                 # Getting started
php examples/all-providers-demo.php          # All 7 providers
php examples/embeddings-example.php          # Embeddings & RAG
php examples/middleware-example.php          # Middleware system
php examples/streaming-advanced.php          # Advanced streaming
php examples/metrics-example.php             # Metrics & monitoring
php examples/conversation-persistence.php    # Persistent conversations
php examples/logging-example.php             # Logging & analytics
php examples/rate-limiting-example.php       # Rate limiting
php examples/cache-drivers-example.php       # Caching strategies
php examples/quick-wins-demo.php             # All quick wins
php examples/agent-basic.php                 # Basic agent usage
php examples/agent-with-tools.php            # Agents with tool calling
php examples/agent-human-in-loop.php         # Human-in-the-loop agents
php examples/workflow-basic.php              # Basic workflows
php examples/workflow-advanced.php           # Advanced workflow patterns
php examples/workflow-human-in-loop.php      # Human-in-the-loop workflows
php examples/async-promise-example.php       # Async/promise support
php examples/batch-processing-example.php    # Batch processing
php examples/content-moderation-example.php  # Content moderation
🛠️ Quick Commands
# Run tests
composer test

# Check code style
composer cs-fix -- --dry-run

# Run static analysis
composer phpstan

# View test coverage
composer test-coverage
open coverage/index.html

# Run specific example
php examples/embeddings-example.php
php examples/streaming-advanced.php
php examples/middleware-example.php
🏗️ Architecture
AnyLLM
├── 7 Provider Implementations
├── Unified Interface & Message System
├── Structured Output & Tool Calling
├── Agents & Workflows
├── Embeddings & Vector Math
│
├── Infrastructure
│ ├── Persistence (DB/Redis/File)
│ ├── Logging (DB/File + Analytics)
│ ├── Rate Limiting (Memory/Redis/DB)
│ ├── Caching (Redis/Memcached/DB/File)
│ └── Metrics (Prometheus/JSON export)
│
├── Middleware System
│ ├── Request/Response Interception
│ ├── Built-in Middleware
│ └── Custom Middleware Support
│
├── Advanced Streaming
│ ├── Pause/Resume/Cancel
│ ├── Real-time Token Counting
│ └── SSE Formatting
│
└── Developer Tools
├── Token Counter
├── Config Validator
├── Prompt Templates
└── Testing Support
⚡ Performance
With Caching (Redis)
- 200-600x faster than API calls (arithmetic worked out below)
- 100% cost savings on cache hits
- Perfect for: Identical prompts, embeddings, common queries
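Those multipliers fall straight out of the benchmark figures later in this README (1-3 s per API call vs ~5 ms per Redis cache hit):

// Speedup = API latency / cache-hit latency, using the benchmark numbers below.
$apiLatencyMs = [1000, 3000]; // 1-3 seconds per provider round trip
$cacheHitMs   = 5;            // ~5 ms per Redis cache hit

echo $apiLatencyMs[0] / $cacheHitMs, "\n"; // 200x at the fast end
echo $apiLatencyMs[1] / $cacheHitMs, "\n"; // 600x at the slow end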
With Conversation Summarization
- 60-80% token reduction after threshold
- Massive cost savings on long conversations
- Context preserved with intelligent summaries
With Rate Limiting
- Prevents runaway costs
- Protects against abuse
- Multi-tier subscription support
🔧 Configuration
Create config/any-llm.php:
return [
    'providers' => [
        'openai' => [
            'api_key' => env('OPENAI_API_KEY'),
            'default_model' => 'gpt-4o',
        ],
    ],
    'conversations' => [
        'repository' => 'database', // or 'redis', 'file'
        'driver' => 'sqlite', // or 'mysql', 'pgsql'
        'database' => __DIR__ . '/database/anyllm.sqlite', // SQLite path
        'auto_summarize' => true,
    ],
    'logging' => [
        'driver' => 'database', // or 'file'
        'database_driver' => 'sqlite', // or 'mysql', 'pgsql'
        'enabled' => true,
    ],
    'cache' => [
        'driver' => 'database', // or 'redis', 'memcached', 'file'
        'database_driver' => 'sqlite', // for database cache
        'default_ttl' => 3600,
    ],
    'rate_limiting' => [
        'driver' => 'database', // or 'redis', 'memory'
        'database_driver' => 'sqlite', // for database rate limiter
        'max_attempts' => 100,
        'decay_seconds' => 60,
    ],
];
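Note that env() is a Laravel-style helper, not a PHP built-in. If you use this config file outside a framework that provides it, a minimal stand-in looks like this (our own sketch, not shipped by the library):

// Minimal env() fallback: read from the process environment,
// returning a default when the variable is unset.
if (!function_exists('env')) {
    function env(string $key, mixed $default = null): mixed
    {
        $value = getenv($key);
        return $value === false ? $default : $value;
    }
}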
🗄️ Database Setup
SQLite (Recommended for Local Development)
Perfect for local development - no server setup required:
use AnyLLM\Conversations\Repository\ConversationRepositoryFactory;

$repository = ConversationRepositoryFactory::create('database', [
    'driver' => 'sqlite',
    'database' => __DIR__ . '/database/anyllm.sqlite',
]);
MySQL/PostgreSQL (Production)
# Run migrations
php artisan migrate

# Or run SQL directly
mysql database < database/migrations/create_llm_logs_table.php
mysql database < database/migrations/create_llm_conversations_table.php
mysql database < database/migrations/create_llm_messages_table.php
Supported Databases:
- ✅ SQLite - Perfect for local development (zero setup)
- ✅ MySQL - Production-ready, high performance
- ✅ PostgreSQL - Advanced features, JSON support
🎯 Best Practices
1. Always Use Retry Logic
$llm->withRetry(maxRetries: 5)->chat($messages);
2. Cache Expensive Calls
$cache->remember($key, fn() => $llm->embed($texts), 3600);
3. Monitor Usage
$logger->analyze(); // Get cost and usage stats
4. Set Rate Limits
$limiter->attempt($userKey, $callback, maxAttempts: 10);
5. Use Conversations for Chat
$manager->chat($conversation, $llm); // Auto-saves, auto-summarizes
🧪 Testing
use AnyLLM\Testing\FakeProvider;

$fake = (new FakeProvider())
    ->willReturn('Hello!');

$response = $fake->generateText(
    model: 'fake-model',
    prompt: 'Say hello',
);

// Make assertions
$fake->assertCalled('generateText');
$fake->assertCalledTimes('generateText', 1);
🔧 Common Issues & Troubleshooting
Issue 1: "OPENAI_API_KEY not set"
Solution: Make sure you created the .env file with your API key:
cat .env # Should show: OPENAI_API_KEY=sk-...
Issue 2: "cURL error 77: certificate file"
This happens with some local PHP environments (like Laravel Herd).
Solution: Set the certificate path or disable SSL verification (development only):
# Option 1: Download certificates
curl https://curl.se/ca/cacert.pem -o cacert.pem
export CURL_CA_BUNDLE=$(pwd)/cacert.pem

# Option 2: Disable SSL verification (NOT for production!)
export CURLOPT_SSL_VERIFYPEER=false
Issue 3: "Class not found"
Solution: Regenerate the autoloader:
composer dump-autoload
Issue 4: Example returns empty response
Possible causes:
- Invalid API key
- Rate limit exceeded
- Model doesn't support the feature
- Network connectivity issue
Debug:
# Test basic connectivity
php -r "require 'bootstrap.php'; \$llm = AnyLLM\AnyLLM::openai(apiKey: \$_ENV['OPENAI_API_KEY']); var_dump(\$llm);"
API Key Issues
use AnyLLM\Config\ConfigValidator;

// Validate API key
ConfigValidator::requireApiKey($apiKey, 'OpenAI');
Rate Limits
// Check remaining attempts
$remaining = $limiter->remaining($key, $maxAttempts);
Token Issues
use AnyLLM\Support\TokenCounter;

// Estimate tokens before calling
$tokens = TokenCounter::estimate($text, 'gpt-4');
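A token estimate becomes a cost estimate once multiplied by a per-token price. A quick sketch (the rate below is an illustrative assumption; look up your provider's current pricing):

use AnyLLM\Support\TokenCounter;

// Pre-flight cost estimate. $2.50 per 1M input tokens is an assumed
// example rate, not a quoted price; substitute your provider's pricing.
$pricePerMillionTokens = 2.50;

$tokens = TokenCounter::estimate($text, 'gpt-4');
$estimatedCost = ($tokens / 1_000_000) * $pricePerMillionTokens;

printf("~%d tokens, ~$%.6f per call\n", $tokens, $estimatedCost);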
Debugging
Debug HTTP requests and responses to see exactly what's being sent to and received from LLM providers:
// Enable debugging (logs to stdout by default)
$llm = $llm->withDebugging();

// With custom logger
$llm = $llm->withDebugging(function (string $type, array $data) {
    // $type is 'REQUEST', 'RESPONSE', or 'CHUNK'
    error_log(json_encode(['type' => $type, 'data' => $data]));
});

// Show full base64 content (default: truncated for readability)
$llm = $llm->withDebugging(showFullBase64: true);

// Disable debugging
$llm = $llm->withoutDebugging();

// Enable via config
$llm = AnyLLM::provider(Provider::OpenAI)
    ->apiKey($key)
    ->option('debug', true)
    ->build();
The debug output shows:
- REQUEST: Full request payload (method, endpoint, data)
- RESPONSE: Complete response from the API
- CHUNK: Streaming response chunks
Base64-encoded content (images, PDFs, etc.) is automatically truncated in logs for readability, but you can show full content with showFullBase64: true.
🚢 Production Checklist
- Configure API keys securely
- Set up database (run migrations)
- Configure Redis for caching & rate limiting
- Enable logging (database or file)
- Set appropriate rate limits
- Enable retry logic
- Configure conversation summarization
- Set up metrics monitoring
- Test in staging environment
- Review security (API key storage)
📊 Benchmarks
- API Response: 1-3 seconds (provider-dependent)
- Cache Hit: 5-10ms (200-600x faster!)
- Token Estimation: ~0.1ms per 1000 chars
- Database Query: 10-50ms
- Redis Cache: 1-5ms
🤝 Contributing
Contributions welcome! The library is designed for easy extension:
- Fork the repository
- Create a feature branch
- Add tests for new features
- Submit a pull request
📝 License
MIT License - see LICENSE file for details.
🙏 Credits
Built with ❤️ using PHP 8.2+, Guzzle, and modern PHP best practices.
Ready for production use!
Start building amazing LLM-powered applications today.
Statistics
- Total downloads: 7
- Monthly downloads: 0
- Daily downloads: 0
- Favorites: 0
- Views: 0
- Dependents: 0
- Suggesters: 0
Other information
- License: MIT
- Last updated: 2025-12-01