rahasistiyak/laravel-ai-integration
Latest stable version: 2.0.0
Comprehensive AI integration package for Laravel
The Ultimate AI Integration Package for Laravel
Enterprise-grade, multi-provider AI SDK with caching, cost tracking, and production-ready features
Laravel AI Integration provides a unified, elegant API to interact with multiple AI providers including OpenAI, Anthropic (Claude), Google (Gemini), Ollama, and Groq. Built specifically for Laravel 11+, it abstracts provider complexity while offering powerful features like streaming, function calling, embeddings, and more.
Installation • Usage • Features • FAQ • Examples
✨ Features
- 🎯 5 AI Providers: OpenAI, Anthropic (Claude), Google (Gemini), Ollama, Groq
- 💬 Chat Completion: Standard and streaming responses
- 🧠 Embeddings: Generate vector embeddings for semantic search
- 🖼️ Image Generation: DALL-E and compatible APIs
- 🛠️ Function Calling: Tool/function use support
- 🔄 Streaming: Real-time SSE streaming for chat
- 💾 Response Caching: Intelligent caching with Redis/database support (v2.0)
- 💰 Cost Tracking: Token counting and cost calculation (v2.0)
- 🔁 Retry Logic: Exponential backoff with circuit breaker (v2.0)
- 📝 Prompt Templates: Reusable prompt system (v2.0)
- 🎨 Eloquent Integration: Traits for AI-powered models
- ⚡ Task Abstraction: Pre-built tasks for common operations
- 💻 Artisan Commands: CLI for code generation, cache management, usage stats
- 📦 Jobs: Queue support for background processing
📦 Installation
Install via Composer:

```bash
composer require rahasistiyak/laravel-ai-integration
```

Publish the configuration file:

```bash
php artisan vendor:publish --tag=ai-config
```
⚙️ Configuration
Environment Variables
Add your API keys to .env:
```env
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=...
GROQ_API_KEY=...
OLLAMA_BASE_URL=http://localhost:11434
AI_DEFAULT_PROVIDER=openai

# Optional: Enable Caching & Tracking (v2.0)
AI_CACHE_ENABLED=true
AI_TRACKING_ENABLED=true
```
Provider Configuration
Edit config/ai.php to customize provider settings:
```php
return [
    'default' => env('AI_DEFAULT_PROVIDER', 'openai'),

    'providers' => [
        'openai' => [
            'driver' => 'openai',
            'api_key' => env('OPENAI_API_KEY'),
            'base_url' => env('OPENAI_BASE_URL', 'https://api.openai.com/v1'),
            'timeout' => 30,
            'models' => [
                'chat' => ['gpt-4', 'gpt-3.5-turbo'],
                'embedding' => ['text-embedding-ada-002'],
            ],
        ],

        // Additional providers...
    ],
];
```
🚀 Usage
Basic Chat
```php
use Rahasistiyak\LaravelAiIntegration\Facades\AI;

$response = AI::chat()
    ->messages([
        ['role' => 'user', 'content' => 'Explain quantum computing in simple terms'],
    ])
    ->get();

echo $response->content();
```
That's it! You're now using AI in Laravel, with caching and cost tracking available once enabled via the v2.0 environment variables.
📦 Supported Providers
| Provider | Chat | Streaming | Embeddings | Images | Function Calling |
|---|---|---|---|---|---|
| OpenAI | ✅ | ✅ | ✅ | ✅ | ✅ |
| Anthropic (Claude) | ✅ | ✅ | ❌ | ❌ | ✅ |
| Google (Gemini) | ✅ | ❌ | ✅ | ❌ | ❌ |
| Groq | ✅ | ✅ | ❌ | ❌ | ❌ |
| Ollama | ✅ | ❌ | ✅ | ❌ | ❌ |
💡 Examples
Basic Chat
```php
$response = AI::chat()
    ->messages([
        ['role' => 'system', 'content' => 'You are a helpful assistant'],
        ['role' => 'user', 'content' => 'Explain Laravel in one sentence'],
    ])
    ->get();

echo $response->content(); // "Laravel is a modern PHP framework..."
```
Streaming Responses
Stream responses in real-time:
```php
AI::chat()
    ->messages([
        ['role' => 'user', 'content' => 'Write a short story about AI'],
    ])
    ->stream(function ($chunk) {
        echo $chunk; // Output each chunk as it arrives
    });
```
Using Different Providers
```php
// Use Anthropic (Claude)
$response = AI::driver('anthropic')->chat([
    ['role' => 'user', 'content' => 'Hello Claude!'],
]);

// Use Google Gemini
$response = AI::driver('google')->chat([
    ['role' => 'user', 'content' => 'Hello Gemini!'],
]);

// Use Groq
$response = AI::driver('groq')->chat([
    ['role' => 'user', 'content' => 'Hello Groq!'],
]);

// Use local Ollama
$response = AI::driver('ollama')->chat([
    ['role' => 'user', 'content' => 'Hello Llama!'],
]);
```
Embeddings
Generate vector embeddings for semantic search:
```php
$embedding = AI::embed()->generate('Your text here');
// Returns: [0.0123, -0.0234, 0.0156, ...]
```
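Once embeddings are generated, semantic search typically ranks documents by cosine similarity between vectors. The helper below is not part of this package; it is a framework-free sketch of how that comparison works:

```php
<?php
// Illustrative only: cosine similarity between two embedding vectors
// of equal length. Scores range from -1 to 1; higher means more similar.
function cosineSimilarity(array $a, array $b): float
{
    $dot = 0.0;
    $normA = 0.0;
    $normB = 0.0;

    foreach ($a as $i => $value) {
        $dot   += $value * $b[$i];
        $normA += $value ** 2;
        $normB += $b[$i] ** 2;
    }

    return $dot / (sqrt($normA) * sqrt($normB));
}

// Identical vectors score 1.0; orthogonal vectors score 0.0.
echo cosineSimilarity([1.0, 0.0], [1.0, 0.0]), PHP_EOL;
echo cosineSimilarity([1.0, 0.0], [0.0, 1.0]), PHP_EOL;
```

In practice you would store each document's embedding (e.g. in a vector column or pgvector), embed the query, and return the highest-scoring rows.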
Eloquent Model Integration
Add AI capabilities to your models:
```php
use Rahasistiyak\LaravelAiIntegration\Traits\HasAiEmbeddings;

class Article extends Model
{
    use HasAiEmbeddings;
}

// Generate embeddings
$article = Article::find(1);
$embedding = $article->generateEmbedding();
```
Function Calling / Tools
Use function calling for structured outputs:
```php
$response = AI::chat()
    ->withTools([
        [
            'type' => 'function',
            'function' => [
                'name' => 'get_weather',
                'description' => 'Get the current weather for a location',
                'parameters' => [
                    'type' => 'object',
                    'properties' => [
                        'location' => [
                            'type' => 'string',
                            'description' => 'City name',
                        ],
                        'unit' => [
                            'type' => 'string',
                            'enum' => ['celsius', 'fahrenheit'],
                        ],
                    ],
                    'required' => ['location'],
                ],
            ],
        ],
    ])
    ->messages([
        ['role' => 'user', 'content' => "What's the weather in Tokyo?"],
    ])
    ->get();
```
Task Abstraction
Use pre-built tasks for common operations:
```php
// Text classification
$category = AI::task()->classify(
    'This new GPU delivers incredible performance for AI workloads',
    ['Technology', 'Fashion', 'Sports', 'Politics']
);
// Returns: "Technology"
```
Image Generation
```php
$image = AI::image()->generate('A futuristic city at sunset', [
    'size' => '1024x1024',
    'quality' => 'hd',
]);
// Returns: ['url' => 'https://...']
```
Console Commands
Generate code via Artisan:
```bash
php artisan ai:generate-code "Create a UserObserver that logs model events" --language=php
```
Background Jobs
Process AI tasks in the background:
```php
use Rahasistiyak\LaravelAiIntegration\Jobs\ProcessAiTask;

ProcessAiTask::dispatch('classify', $text, [
    'labels' => ['Positive', 'Negative', 'Neutral'],
]);
```
🛠️ Advanced Features
Custom Model Selection
Use the model() method on the chat builder to specify a different model for the request:
```php
// Use a specific model with the default provider
AI::chat()
    ->model('gpt-4-turbo')
    ->messages([...])
    ->get();
```
Custom Parameters
```php
AI::chat()
    ->withParameters([
        'temperature' => 0.9,
        'max_tokens' => 500,
        'top_p' => 0.95,
    ])
    ->messages([...])
    ->get();
```
Fluent API Chaining
```php
$response = AI::chat()
    ->model('gpt-4')
    ->withParameters(['temperature' => 0.7])
    ->withTools([...])
    ->messages([...])
    ->get();
```
Prompt Templates (v2.0)
```php
use Rahasistiyak\LaravelAiIntegration\Support\PromptTemplate;

$prompt = PromptTemplate::load('classification')
    ->with(['text' => $userInput, 'categories' => 'Tech, Sports'])
    ->toMessages();

$response = AI::chat()->messages($prompt)->get();
```
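Conceptually, a prompt template is just a string with named placeholders that get substituted before the messages are sent. The package's `PromptTemplate` class is richer than this, but the core idea can be sketched without the framework:

```php
<?php
// Hypothetical sketch of placeholder substitution, the idea behind
// prompt templates. renderTemplate() is illustrative, not package API.
function renderTemplate(string $template, array $vars): string
{
    foreach ($vars as $key => $value) {
        $template = str_replace('{' . $key . '}', $value, $template);
    }

    return $template;
}

echo renderTemplate(
    'Classify this text into one of: {categories}. Text: {text}',
    ['categories' => 'Tech, Sports', 'text' => 'Laravel 11 released'],
), PHP_EOL;
```

Keeping templates in files (as `PromptTemplate::load()` suggests) lets you version and reuse prompts instead of scattering string literals through controllers.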
🎁 v2.0 New Features
Response Caching
Identical requests are served from cache instead of hitting the provider again, which can substantially reduce API costs:
```php
// First call - hits the API
$response = AI::chat()->messages([...])->get();

// Second identical call - served instantly from cache
$cached = AI::chat()->messages([...])->get();
```
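For this to work, identical requests must map to the same cache entry, which is usually done by hashing the request's defining fields. The package's actual key format is internal; the function below is only a sketch of the idea, under that assumption:

```php
<?php
// Hypothetical sketch: derive a deterministic cache key from the
// provider, model, messages, and parameters. Not the package's
// actual implementation.
function aiCacheKey(string $provider, string $model, array $messages, array $params = []): string
{
    $payload = json_encode([
        'provider' => $provider,
        'model'    => $model,
        'messages' => $messages,
        'params'   => $params,
    ]);

    return 'ai:' . sha1($payload);
}

$key1 = aiCacheKey('openai', 'gpt-4', [['role' => 'user', 'content' => 'Hi']]);
$key2 = aiCacheKey('openai', 'gpt-4', [['role' => 'user', 'content' => 'Hi']]);

// Identical requests produce identical keys, so the second call can be
// answered from cache; any change to the messages yields a new key.
var_dump($key1 === $key2);
```

Note that including sampling parameters (like `temperature`) in the key matters: the same prompt with different parameters should not share a cached response.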
Cost Tracking
Track usage and costs:
```bash
php artisan ai:usage --provider=openai
```
Circuit Breaker
Automatic retries with exponential backoff, combined with a circuit breaker, keep transient provider failures from cascading through your application.
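For context, exponential backoff doubles the wait between retries up to a cap, so a flapping provider is not hammered with immediate retries. A framework-free sketch of the delay schedule (illustrative, not the package's internals; real implementations usually also add random jitter):

```php
<?php
// Illustrative exponential backoff: the delay doubles with each
// attempt and is capped at $maxMs. Attempt numbering starts at 0.
function backoffDelayMs(int $attempt, int $baseMs = 100, int $maxMs = 10000): int
{
    return (int) min($maxMs, $baseMs * (2 ** $attempt));
}

foreach ([0, 1, 2, 3] as $attempt) {
    echo backoffDelayMs($attempt), PHP_EOL; // 100, 200, 400, 800
}
```

A circuit breaker adds a second layer on top: after enough consecutive failures it stops calling the provider entirely for a cooldown period, then probes with a single test request before resuming traffic.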
📝 License
This package is open-source software licensed under the MIT License.
🙏 Credits
- Author: Rahasistiyak
- Package: rahasistiyak/laravel-ai-integration
Made with ❤️ for the Laravel community
Additional information
- License: MIT
- Last updated: 2025-12-28