# Jira Webhook LLM

## Langfuse Integration

### Overview

The application integrates with Langfuse for observability and analytics of LLM usage and webhook events. This integration provides detailed tracking of:

- Webhook events
- LLM model usage
- Error tracking
- Performance metrics

### Configuration

Langfuse configuration is managed through both `application.yml` and environment variables.

#### application.yml

```yaml
langfuse:
  enabled: true
  public_key: "pk-lf-..."
  secret_key: "sk-lf-..."
  host: "https://cloud.langfuse.com"
```

#### Environment Variables

```bash
LANGFUSE_ENABLED=true
LANGFUSE_PUBLIC_KEY="pk-lf-..."
LANGFUSE_SECRET_KEY="sk-lf-..."
LANGFUSE_HOST="https://cloud.langfuse.com"
```
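
How the application consumes these variables is not shown here; as a minimal sketch, parsing them could look like the following. The helper name `load_langfuse_config` and the boolean-parsing rule are assumptions for illustration, not the application's actual code.

```python
import os

def load_langfuse_config(env=os.environ):
    """Read Langfuse settings from the environment (hypothetical helper).

    Only the environment-variable side is shown; merging with
    application.yml is left out.
    """
    return {
        # Assumed rule: anything other than "true" (case-insensitive)
        # counts as disabled.
        "enabled": env.get("LANGFUSE_ENABLED", "false").lower() == "true",
        "public_key": env.get("LANGFUSE_PUBLIC_KEY", ""),
        "secret_key": env.get("LANGFUSE_SECRET_KEY", ""),
        "host": env.get("LANGFUSE_HOST", "https://cloud.langfuse.com"),
    }
```

Defaulting `host` to the Langfuse cloud endpoint mirrors the example values above; a self-hosted deployment would override it.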

### Enable/Disable

To disable Langfuse integration, either:

1. Set `langfuse.enabled: false` in `application.yml`, or
2. Set `LANGFUSE_ENABLED=false` in your environment.

### Tracking Details

The following events are tracked:

1. Webhook events
   - Input payload
   - Timestamps
   - Issue metadata
2. LLM processing
   - Model used
   - Input/output
   - Processing time
3. Errors
   - Webhook processing errors
   - LLM processing errors
   - Validation errors
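
As an illustration of the fields above, a tracked webhook event might be assembled like this. The function name and record shape are hypothetical; the real trace schema is defined by the application and the Langfuse SDK.

```python
from datetime import datetime, timezone

def build_webhook_trace(payload, issue_key, model=None):
    """Assemble the fields tracked for one webhook event
    (illustrative shape only)."""
    return {
        "name": "jira-webhook",
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input": payload,                      # raw webhook payload
        "metadata": {"issue_key": issue_key},  # issue metadata
        "model": model,                        # filled in after LLM processing
    }
```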

### Viewing Data

Visit your Langfuse dashboard to view the collected metrics and traces.

## Deployment Guide

### Redis Configuration

The application requires Redis for caching and queue management. Configure Redis in `application.yml`:

```yaml
redis:
  host: "localhost"
  port: 6379
  password: ""
  db: 0
```

Environment variables can also be used:

```bash
REDIS_HOST="localhost"
REDIS_PORT=6379
REDIS_PASSWORD=""
REDIS_DB=0
```
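
These four settings combine into a standard `redis://` connection URL. The helper below is an illustrative sketch (the app may pass the settings to its Redis client individually instead):

```python
import os

def redis_url_from_env(env=os.environ):
    """Build a redis:// URL from the variables above (hypothetical helper)."""
    host = env.get("REDIS_HOST", "localhost")
    port = int(env.get("REDIS_PORT", "6379"))
    db = int(env.get("REDIS_DB", "0"))
    password = env.get("REDIS_PASSWORD", "")
    # An empty password means no auth section in the URL.
    auth = f":{password}@" if password else ""
    return f"redis://{auth}{host}:{port}/{db}"
```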

### Worker Process Management

The application uses Celery for background task processing. Configure workers in `application.yml`:

```yaml
celery:
  workers: 4
  concurrency: 2
  max_tasks_per_child: 100
```

Start workers with:

```bash
celery -A jira-webhook-llm worker --loglevel=info
```
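
The `concurrency` and `max_tasks_per_child` settings map onto standard Celery worker flags (`--concurrency`, `--max-tasks-per-child`); `workers: 4` would correspond to launching four such processes under a supervisor. A sketch of that mapping, assuming a hypothetical `celery_worker_argv` helper:

```python
def celery_worker_argv(settings):
    """Translate the `celery:` block above into a `celery worker`
    command line (sketch; flag names are standard Celery options)."""
    return [
        "celery", "-A", "jira-webhook-llm", "worker",
        "--loglevel=info",
        f"--concurrency={settings['concurrency']}",
        f"--max-tasks-per-child={settings['max_tasks_per_child']}",
    ]
```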

### Monitoring Setup

The application exposes a Prometheus metrics endpoint at `/metrics`. To set up monitoring:

1. Add a Prometheus scrape config:

   ```yaml
   scrape_configs:
     - job_name: 'jira-webhook-llm'
       static_configs:
         - targets: ['localhost:8000']
   ```

2. Set up a Grafana dashboard using the provided template.

### Rate Limiting

Rate limiting is configured in `application.yml`:

```yaml
rate_limiting:
  enabled: true
  requests_per_minute: 60
  burst_limit: 100
```
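
The two numbers describe token-bucket semantics: a steady refill rate of `requests_per_minute`, with short bursts allowed up to `burst_limit`. A minimal sketch of that behavior (the application's actual limiter implementation may differ):

```python
class TokenBucket:
    """Token-bucket sketch of the settings above (illustrative only)."""

    def __init__(self, requests_per_minute=60, burst_limit=100):
        self.rate = requests_per_minute / 60.0  # tokens added per second
        self.capacity = burst_limit             # maximum burst size
        self.tokens = float(burst_limit)        # start full
        self.last = 0.0

    def allow(self, now):
        """Return True if a request at time `now` (seconds) is admitted."""
        # Refill based on elapsed time, capped at the burst capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

With `requests_per_minute: 60`, one token is refilled per second, so a drained bucket admits roughly one request per second until traffic slows down.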

### Health Check Endpoint

The application provides a health check endpoint at `/health` that returns:

```json
{
  "status": "OK",
  "timestamp": "2025-07-14T01:59:42Z",
  "components": {
    "database": "OK",
    "redis": "OK",
    "celery": "OK"
  }
}
```
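
A natural way to compose this payload is to derive the overall status from the per-component checks. This is a sketch under assumptions: the helper name and the `"DEGRADED"` failure label are illustrative, and the real handler would probe the live services.

```python
from datetime import datetime, timezone

def health_response(components):
    """Compose the /health payload: overall status is OK only when every
    component reports OK (hypothetical helper)."""
    status = "OK" if all(v == "OK" for v in components.values()) else "DEGRADED"
    return {
        "status": status,
        # Match the ISO-8601 UTC format shown in the example above.
        "timestamp": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
        "components": components,
    }
```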

### System Requirements

Minimum system requirements:

- Python 3.9+
- Redis 6.0+
- 2 CPU cores
- 4 GB RAM
- 10 GB disk space

Required Python packages are listed in `requirements.txt`.