datlechin/flarum-ai
Latest stable version: v0.1.1
Composer install command:
composer require datlechin/flarum-ai
Package description
Drop-in AI integration for Flarum. Text generation, vector search, content filtering. OpenAI, Anthropic, Gemini, and custom provider support.
README
AI integration framework for Flarum with text generation, embeddings, and moderation. Multi-provider support (OpenAI, Gemini, Anthropic) with extensible architecture.
Features
- 🤖 Text generation with streaming support
- 🔍 Vector embeddings for semantic search
- 🛡️ AI-powered content moderation
- 🔌 Multi-provider architecture (OpenAI, Gemini, Anthropic)
- ⚡ SSE streaming for real-time responses
- 🔧 Extensible provider system
Installation
```sh
composer require datlechin/flarum-ai
```
Configuration
1. Navigate to Admin Panel → Extensions → AI
2. Select your LLM provider (OpenAI, Gemini, or Anthropic)
3. Enter your API key
4. Configure model settings
Developer Usage
Text Generation
```php
use Datlechin\Ai\Providers\HttpProviderFactory;

// Get the provider instance
$provider = app(HttpProviderFactory::class)->createLlmProvider();

// Generate text
$messages = [
    ['role' => 'system', 'content' => 'You are a helpful assistant.'],
    ['role' => 'user', 'content' => 'Hello!'],
];

$result = $provider->complete($messages);
echo $result['content'];
```
Streaming Text Generation
```php
// Stream responses in real-time
foreach ($provider->stream($messages) as $chunk) {
    echo $chunk; // Output each chunk as it arrives
}
```
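Since `stream()` returns a generator, its chunks can be relayed to the browser as Server-Sent Events. A minimal sketch of such a relay loop, assuming a plain PHP response context (the extension's actual SSE controller may wire this up differently):

```php
use Datlechin\Ai\Providers\HttpProviderFactory;

$provider = app(HttpProviderFactory::class)->createLlmProvider();

// SSE response headers
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');

foreach ($provider->stream($messages) as $chunk) {
    // Each SSE message is a "data:" line followed by a blank line
    echo 'data: ' . json_encode(['content' => $chunk]) . "\n\n";
    flush(); // Push the chunk to the client immediately
}
```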
Embeddings
```php
use Datlechin\Ai\Providers\HttpProviderFactory;

// Get the embeddings provider
$provider = app(HttpProviderFactory::class)->createEmbeddingsProvider();

// Generate embeddings
$text = "This is some text to embed";
$embedding = $provider->embed($text);

// Returns an array of floats (the vector representation)
print_r($embedding);
```
Content Moderation
```php
use Datlechin\Ai\Providers\HttpProviderFactory;

// Get the moderation provider
$provider = app(HttpProviderFactory::class)->createModerationProvider();

// Check content
$result = $provider->moderate("Content to check");

if ($result['flagged']) {
    // Handle flagged content
    print_r($result['categories']);
}
```
Creating Custom Providers
Implement the provider interfaces:
```php
namespace MyExtension\Providers;

use Datlechin\Ai\Providers\Contracts\LlmProviderInterface;

class CustomLlmProvider implements LlmProviderInterface
{
    public function complete(array $messages, array $options = []): array
    {
        // Your implementation
        return [
            'content' => 'Generated text',
            'usage' => ['tokens' => 100],
        ];
    }

    public function stream(array $messages, array $options = []): \Generator
    {
        // Yield chunks
        yield "chunk1";
        yield "chunk2";
    }

    public function getName(): string
    {
        return 'custom';
    }

    public function getModel(): string
    {
        return 'custom-model';
    }
}
```
Register in extend.php:
```php
use Datlechin\Ai\Providers\ProviderCatalog;
use Flarum\Extend;
use MyExtension\Providers\CustomLlmProvider;

return [
    (new Extend\ServiceProvider())
        ->register(function ($container) {
            $catalog = $container->make(ProviderCatalog::class);
            $catalog->register('custom', CustomLlmProvider::class);
        }),
];
```
Available Providers
OpenAI
- Models: GPT-4, GPT-4 Turbo, GPT-3.5 Turbo
- Supports: Text generation, embeddings, moderation
- Streaming: ✅
Google Gemini
- Models: Gemini Pro, Gemini Flash
- Supports: Text generation, embeddings
- Streaming: ✅
Anthropic
- Models: Claude 3.5 Sonnet, Claude 3.5 Haiku, Claude 3 Opus
- Supports: Text generation
- Streaming: ✅
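Because capabilities differ per provider (for example, Anthropic above lists text generation only), extension code may want to guard capability-specific calls. A sketch using `getName()` from the provider interface; the name strings `'openai'` and `'gemini'` are assumptions, not confirmed by this README:

```php
use Datlechin\Ai\Providers\HttpProviderFactory;

$provider = app(HttpProviderFactory::class)->createLlmProvider();

// Assumed name strings -- verify against the actual provider implementations
$embeddingCapable = ['openai', 'gemini'];

if (in_array($provider->getName(), $embeddingCapable, true)) {
    $embeddings = app(HttpProviderFactory::class)->createEmbeddingsProvider();
    // ...safe to call $embeddings->embed(...)
}
```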
Events
Listen to AI events in your extensions:
```php
use Datlechin\Ai\Events\TextGenerated;
use Flarum\Extend;

return [
    (new Extend\Event())
        ->listen(TextGenerated::class, function (TextGenerated $event) {
            // $event->content
            // $event->provider
            // $event->model
        }),
];
```
Available events:
- `TextGenerationStarted`
- `TextGenerated`
- `EmbeddingsStarted`
- `EmbeddingsGenerated`
- `ModerationStarted`
- `ModerationCompleted`
- `ProviderInitialized`
- `ProviderFailed`
API Endpoints
Generate Text
```http
POST /api/ai/generate
Content-Type: application/json

{
  "messages": [
    {"role": "system", "content": "You are helpful"},
    {"role": "user", "content": "Hello"}
  ],
  "stream": true
}
```
Generate Embeddings
```http
POST /api/ai/embeddings
Content-Type: application/json

{
  "text": "Text to embed"
}
```
Moderate Content
```http
POST /api/ai/moderate
Content-Type: application/json

{
  "content": "Content to check"
}
```
Extension Examples
Text Summarization
```php
$provider = app(HttpProviderFactory::class)->createLlmProvider();

$messages = [
    ['role' => 'system', 'content' => 'Summarize the following text concisely.'],
    ['role' => 'user', 'content' => $longText],
];

$summary = $provider->complete($messages);
```
Semantic Search
```php
$embeddingsProvider = app(HttpProviderFactory::class)->createEmbeddingsProvider();

// Embed the search query
$queryVector = $embeddingsProvider->embed($searchQuery);

// Search in the database (using vector similarity)
$results = DB::table('ai_embeddings')
    ->selectRaw('*, vector_distance(embedding, ?) as distance', [$queryVector])
    ->orderBy('distance')
    ->limit(10)
    ->get();
```
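If your database does not provide a `vector_distance` function, similarity can also be computed in application code. A self-contained sketch using cosine similarity (the helper name and the `$rows` ranking below are illustrative, not part of the extension's API):

```php
/**
 * Cosine similarity between two equal-length float vectors.
 * Returns a value in [-1, 1]; higher means more similar.
 */
function cosineSimilarity(array $a, array $b): float
{
    $dot = 0.0;
    $normA = 0.0;
    $normB = 0.0;

    foreach ($a as $i => $value) {
        $dot   += $value * $b[$i];
        $normA += $value * $value;
        $normB += $b[$i] * $b[$i];
    }

    return $dot / (sqrt($normA) * sqrt($normB));
}

// Rank stored embeddings against the query vector in PHP
usort($rows, fn ($x, $y) =>
    cosineSimilarity($queryVector, $y->embedding)
    <=> cosineSimilarity($queryVector, $x->embedding)
);
```

Note that sorting in PHP scales linearly with the number of stored embeddings, so a database-side vector index is preferable for large forums.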
Content Filtering
```php
$moderationProvider = app(HttpProviderFactory::class)->createModerationProvider();

$result = $moderationProvider->moderate($userContent);

if ($result['flagged']) {
    // Auto-hide or flag for review
    $post->hide();
}
```
Configuration Options
Settings available in admin panel:
- `datlechin-ai.provider` - Selected provider (openai, gemini)
- `datlechin-ai.api_key` - API key for the provider
- `datlechin-ai.models.selected.text` - Text generation model
- `datlechin-ai.models.selected.embeddings` - Embeddings model
- `datlechin-ai.models.selected.moderation` - Moderation model
Access in code:
```php
$provider = $settings->get('datlechin-ai.provider');
$model = $settings->get('datlechin-ai.models.selected.text');
```
Requirements
- Flarum 1.2+
- PHP 8.1+
- Composer 2.0+
- Valid API key for chosen provider
Sponsor
If you find this extension helpful, you can support ongoing development through GitHub Sponsors.
Statistics
- Total downloads: 147
- Monthly downloads: 0
- Daily downloads: 0
- Favorites: 5
- Hits: 0
- Dependents: 1
- Suggesters: 0
Other information
- License: MIT
- Last updated: 2025-10-11