2025-07-14 12:47:20 +02:00

Jira Webhook LLM

Langfuse Integration

Overview

The application integrates with Langfuse for observability and analytics of LLM usage and webhook events. This integration provides detailed tracking of:

  • Webhook events
  • LLM model usage
  • Error tracking
  • Performance metrics

Configuration

Langfuse configuration is managed through both application.yml and environment variables.

application.yml

langfuse:
  enabled: true
  public_key: "pk-lf-..."
  secret_key: "sk-lf-..."
  host: "https://cloud.langfuse.com"

Environment Variables

LANGFUSE_ENABLED=true
LANGFUSE_PUBLIC_KEY="pk-lf-..."
LANGFUSE_SECRET_KEY="sk-lf-..."
LANGFUSE_HOST="https://cloud.langfuse.com"
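The settings above can be loaded in Python with a small stdlib-only helper. This is a minimal sketch: the `LangfuseSettings` name is hypothetical, and it assumes environment variables take precedence over the application.yml defaults.

```python
import os
from dataclasses import dataclass

@dataclass
class LangfuseSettings:
    """Hypothetical settings holder; field names mirror the variables above."""
    enabled: bool
    public_key: str
    secret_key: str
    host: str

def load_langfuse_settings() -> LangfuseSettings:
    # Environment variables override application.yml defaults (assumed precedence).
    return LangfuseSettings(
        enabled=os.getenv("LANGFUSE_ENABLED", "true").lower() == "true",
        public_key=os.getenv("LANGFUSE_PUBLIC_KEY", ""),
        secret_key=os.getenv("LANGFUSE_SECRET_KEY", ""),
        host=os.getenv("LANGFUSE_HOST", "https://cloud.langfuse.com"),
    )
```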

Enable/Disable

To disable Langfuse integration:

  1. Set langfuse.enabled: false in application.yml
  2. Or set LANGFUSE_ENABLED=false in your environment

Tracking Details

The following events are tracked:

  1. Webhook events
    • Input payload
    • Timestamps
    • Issue metadata
  2. LLM processing
    • Model used
    • Input/output
    • Processing time
  3. Errors
    • Webhook processing errors
    • LLM processing errors
    • Validation errors
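A tracked webhook event bundles the fields listed above into one record. The sketch below shows that shape in plain Python; the field names are illustrative, not the Langfuse wire format.

```python
from datetime import datetime, timezone
from typing import Any

def build_webhook_trace(payload: dict, issue_key: str) -> dict:
    """Assemble input payload, timestamp, and issue metadata into one
    trace record (illustrative shape only)."""
    return {
        "name": "jira-webhook",
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input": payload,
        "metadata": {"issue_key": issue_key},
    }
```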

Viewing Data

Visit your Langfuse dashboard to view the collected metrics and traces.

Deployment Guide

Redis Configuration

The application requires Redis for caching and queue management. Configure Redis in application.yml:

redis:
  host: "localhost"
  port: 6379
  password: ""
  db: 0

Environment variables can also be used:

REDIS_HOST="localhost"
REDIS_PORT=6379
REDIS_PASSWORD=""
REDIS_DB=0
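The four variables above combine into a standard `redis://` connection URL. A minimal sketch, assuming the conventional `redis://[:password@]host:port/db` form:

```python
import os

def redis_url_from_env() -> str:
    """Build a redis:// connection URL from the environment variables above."""
    host = os.getenv("REDIS_HOST", "localhost")
    port = os.getenv("REDIS_PORT", "6379")
    password = os.getenv("REDIS_PASSWORD", "")
    db = os.getenv("REDIS_DB", "0")
    # Only include the auth segment when a password is actually set.
    auth = f":{password}@" if password else ""
    return f"redis://{auth}{host}:{port}/{db}"
```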

Worker Process Management

The application uses Celery for background task processing. Configure workers in application.yml:

celery:
  workers: 4
  concurrency: 2
  max_tasks_per_child: 100

Start workers with:

celery -A jira-webhook-llm worker --loglevel=info
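The `concurrency` and `max_tasks_per_child` settings from application.yml map directly onto the standard `celery worker` flags `--concurrency` and `--max-tasks-per-child`. A sketch of rendering the full invocation from the config (the `worker_command` helper is hypothetical):

```python
def worker_command(app: str, config: dict) -> list:
    """Render the celery worker invocation from the application.yml settings."""
    return [
        "celery", "-A", app, "worker",
        "--loglevel=info",
        f"--concurrency={config['concurrency']}",
        f"--max-tasks-per-child={config['max_tasks_per_child']}",
    ]
```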

Monitoring Setup

The application provides Prometheus metrics endpoint at /metrics. Configure monitoring:

  1. Add Prometheus scrape config:

scrape_configs:
  - job_name: 'jira-webhook-llm'
    static_configs:
      - targets: ['localhost:8000']

  2. Set up Grafana dashboard using the provided template

Rate Limiting

Rate limiting is configured in application.yml:

rate_limiting:
  enabled: true
  requests_per_minute: 60
  burst_limit: 100
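These two settings describe a token-bucket limiter: a steady refill rate of `requests_per_minute` with short bursts allowed up to `burst_limit`. A minimal sketch of that algorithm (the class name and in-process design are illustrative; the application's actual limiter may live in middleware or Redis):

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: refills at requests_per_minute,
    allows bursts up to burst_limit tokens."""

    def __init__(self, requests_per_minute: int = 60, burst_limit: int = 100):
        self.rate = requests_per_minute / 60.0   # tokens added per second
        self.capacity = burst_limit
        self.tokens = float(burst_limit)         # start full
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill for the elapsed time, capped at the burst capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```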

Health Check Endpoint

The application provides a health check endpoint at /health that returns:

{
  "status": "OK",
  "timestamp": "2025-07-14T01:59:42Z",
  "components": {
    "database": "OK",
    "redis": "OK",
    "celery": "OK"
  }
}
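The response above can be produced by running one check per component and aggregating. A sketch, assuming the overall status is OK only when every component reports OK (the "FAIL" value and per-check callables are illustrative, not the application's actual API):

```python
from datetime import datetime, timezone

def health_check(checks: dict) -> dict:
    """Run each component check (name -> zero-arg callable returning bool)
    and aggregate into the /health response shape shown above."""
    components = {name: ("OK" if check() else "FAIL") for name, check in checks.items()}
    status = "OK" if all(v == "OK" for v in components.values()) else "FAIL"
    return {
        "status": status,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "components": components,
    }
```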

System Requirements

Minimum system requirements:

  • Python 3.9+
  • Redis 6.0+
  • 2 CPU cores
  • 4GB RAM
  • 10GB disk space

Required Python packages are listed in requirements.txt.
