camh/laravel-ollama
Latest stable version: v1.1.0
Composer install command:
composer require camh/laravel-ollama
Package description
Laravel client for Ollama (generate, chat, embeddings) with PII masking & JSON helpers.
README
A Laravel client for Ollama (local LLM): text generation, chat, embeddings, PII masking, and JSON guardrails.
Features
- Text Generation: Single-prompt completions with local LLMs.
- Chat: Multi-turn conversations with context.
- Embeddings: Generate vector embeddings for text.
- PII Masking: Automatically mask sensitive info (emails, tokens, etc.).
- JSON Guardrails: Repair/validate JSON output from models.
- Artisan Commands: Manage models, chat, and more from CLI.
Installation
composer require camh/laravel-ollama
php artisan vendor:publish --tag=config --provider="Camh\\Ollama\\OllamaServiceProvider"
Configuration
Add to your .env:
OLLAMA_BASE=http://127.0.0.1:11434
OLLAMA_GEN_MODEL=llama3.1
OLLAMA_EMBED_MODEL=nomic-embed-text
OLLAMA_TIMEOUT=60
OLLAMA_RETRIES=1
Edit config/ollama.php for advanced options:
- base: Ollama server URL
- default_model: Default model for generation
- embed_model: Model for embeddings
- timeout: Request timeout (seconds)
- retries: Number of retries
- mask_pii: Enable/disable PII masking
- pii_patterns: Custom regex patterns for PII
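For reference, a minimal config/ollama.php consistent with the options above might look like the following sketch. The key names come from the list above; the defaults and the sample regex are assumptions, so check the published config file:

<?php

// config/ollama.php -- sketch only; defaults are assumptions.
return [
    'base' => env('OLLAMA_BASE', 'http://127.0.0.1:11434'),
    'default_model' => env('OLLAMA_GEN_MODEL', 'llama3.1'),
    'embed_model' => env('OLLAMA_EMBED_MODEL', 'nomic-embed-text'),
    'timeout' => env('OLLAMA_TIMEOUT', 60),
    'retries' => env('OLLAMA_RETRIES', 1),
    'mask_pii' => true,
    'pii_patterns' => [
        // Hypothetical custom pattern: mask bearer tokens.
        '/Bearer\s+[A-Za-z0-9\-\._~\+\/]+=*/',
    ],
];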
Usage
You can interact with the package through the Ollama facade.
Generate Text
To generate a simple text completion:
use Camh\Ollama\Facades\Ollama;

$response = Ollama::generate('Why is the sky blue?');
echo $response;
You can also pass additional options, such as the model or temperature:
$response = Ollama::generate('Why is the sky blue?', [
    'model' => 'llama3.1',
    'options' => [
        'temperature' => 0.7,
    ],
]);
Chat Completions
For chat-based interactions, provide a history of messages:
$messages = [
    ['role' => 'system', 'content' => 'You are a helpful assistant.'],
    ['role' => 'user', 'content' => 'Why is the sky blue?'],
];

$response = Ollama::chat($messages);
echo $response; // Outputs the assistant's message content
Embeddings
To create embeddings for a given input:
$embedding = Ollama::embed('Hello world'); // Returns an array of floats
$embeddings = Ollama::embed(['Hello', 'world']); // Returns an array of embedding arrays
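A common use for embeddings is semantic similarity. As a minimal sketch (cosineSimilarity is a hypothetical helper, not part of this package):

use Camh\Ollama\Facades\Ollama;

// Hypothetical helper: cosine similarity between two equal-length vectors.
function cosineSimilarity(array $a, array $b): float
{
    $dot = 0.0;
    $normA = 0.0;
    $normB = 0.0;
    foreach ($a as $i => $value) {
        $dot += $value * $b[$i];
        $normA += $value ** 2;
        $normB += $b[$i] ** 2;
    }

    return $dot / (sqrt($normA) * sqrt($normB));
}

[$first, $second] = Ollama::embed(['The sky is blue.', 'Why is the sky blue?']);
echo cosineSimilarity($first, $second); // Closer to 1.0 means more similar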
Streaming Responses
For real-time responses, use the stream method with a callback:
Ollama::stream('Tell me a long story.', function ($chunk) {
    echo $chunk;
});
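As a sketch, assuming Ollama::stream can be called from inside a route closure, chunks can be forwarded to the browser with Laravel's streamed response:

use Camh\Ollama\Facades\Ollama;
use Illuminate\Support\Facades\Route;

// Sketch: relay model output to the client as it arrives.
Route::get('/story', function () {
    return response()->stream(function () {
        Ollama::stream('Tell me a long story.', function ($chunk) {
            echo $chunk;
            if (ob_get_level() > 0) {
                ob_flush();
            }
            flush();
        });
    }, 200, ['Content-Type' => 'text/plain']);
});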
JSON Mode
To receive a response in JSON format (auto-repairs invalid JSON):
$response = Ollama::json('Create a user profile for John Doe.'); // Returns an array
You can also provide a JSON schema for structured output:
$schema = [
    'type' => 'object',
    'properties' => [
        'name' => ['type' => 'string'],
        'email' => ['type' => 'string', 'format' => 'email'],
    ],
    'required' => ['name', 'email'],
];

$response = Ollama::json('Create a user profile for John Doe.', [], $schema);
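Because model output is untrusted even after repair, it can be worth re-checking the decoded array in application code; a minimal sketch using Laravel's Validator:

use Camh\Ollama\Facades\Ollama;
use Illuminate\Support\Facades\Validator;

$profile = Ollama::json('Create a user profile for John Doe.');

// Re-validate the decoded array before using it.
$validator = Validator::make($profile, [
    'name' => ['required', 'string'],
    'email' => ['required', 'email'],
]);

if ($validator->fails()) {
    // Handle invalid model output (retry, log, or fall back).
}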
Persistent, Rich Conversations
You can manage multi-turn, persistent conversations using the Conversation class:
use Camh\Ollama\Support\Conversation;
use Camh\Ollama\Facades\Ollama;

// Start or load a conversation for a user/session
$conversation = Conversation::load('user123') ?? new Conversation('user123', 'You are a helpful assistant.');

// Add a user message and get the assistant's reply
$reply = Ollama::conversation($conversation, 'Why is the sky blue?');
echo $reply;

// Continue the conversation
$reply2 = Ollama::conversation($conversation, 'What about sunsets?');
echo $reply2;

// Save the conversation (automatically done after each reply)
$conversation->save();

// Retrieve full history
$history = $conversation->getMessages();

// Clear the conversation
$conversation->clear();
$conversation->save();
- Each conversation is identified by a unique ID (e.g., user/session ID).
- Conversations are automatically persisted in cache and can be loaded later.
- You can add metadata, limit history, or extend the Conversation class for more features (see the sketch below).
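As one illustration of that extension point, a subclass could carry per-conversation metadata. This is a hypothetical sketch; the only base-class API assumed is the constructor shown above:

use Camh\Ollama\Support\Conversation;

// Hypothetical subclass: attach arbitrary metadata to a conversation.
class TaggedConversation extends Conversation
{
    /** @var array<string, mixed> */
    protected array $metadata = [];

    public function setMeta(string $key, mixed $value): void
    {
        $this->metadata[$key] = $value;
    }

    public function getMeta(string $key): mixed
    {
        return $this->metadata[$key] ?? null;
    }
}

$conversation = new TaggedConversation('user123', 'You are a helpful assistant.');
$conversation->setMeta('locale', 'en');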
Artisan Commands
php artisan ollama:tags              # List available models/tags
php artisan ollama:pull llama3.1     # Download a model
php artisan ollama:chat "Say hello"  # Chat with a model
php artisan ollama:show llama3.1     # Show model details
php artisan ollama:delete llama3.1   # Delete a model
php artisan ollama:copy src dest     # Copy a model
php artisan ollama:create name       # Create a new model
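These commands can also be invoked from application code, for example to pre-pull a model during deployment. A sketch, assuming the pull command takes the model name as its argument (the 'model' argument key is an assumption):

use Illuminate\Support\Facades\Artisan;

// Sketch: pull a model programmatically, e.g. from a deploy hook.
$exitCode = Artisan::call('ollama:pull', ['model' => 'llama3.1']);

if ($exitCode !== 0) {
    // Pull failed; inspect Artisan::output() for details.
}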
Testing
composer test
Contributing
Pull requests and issues are welcome! See CONTRIBUTING.md, if present, for guidelines.
License
MIT
Statistics
- Total downloads: 16
- Monthly downloads: 0
- Daily downloads: 0
- Favers: 1
- Page views: 0
- Dependents: 0
- Suggesters: 0
Other information
- License: MIT
- Last updated: 2025-09-24