# Jira Webhook LLM

## Langfuse Integration

### Overview
The application integrates with Langfuse for observability and analytics of LLM usage and webhook events. This integration provides detailed tracking of:
- Webhook events
- LLM model usage
- Error tracking
- Performance metrics
### Configuration

Langfuse configuration is managed through both `application.yml` and environment variables.

#### application.yml

```yaml
langfuse:
  enabled: true
  public_key: "pk-lf-..."
  secret_key: "sk-lf-..."
  host: "https://cloud.langfuse.com"
```

#### Environment Variables

```bash
LANGFUSE_ENABLED=true
LANGFUSE_PUBLIC_KEY="pk-lf-..."
LANGFUSE_SECRET_KEY="sk-lf-..."
LANGFUSE_HOST="https://cloud.langfuse.com"
```
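One plausible way the two configuration sources combine is for environment variables to override the `application.yml` values. This is a sketch under that assumption; `langfuse_setting` and `YAML_DEFAULTS` are illustrative names, not part of this codebase:

```python
import os

# Illustrative defaults mirroring the langfuse: section of application.yml.
YAML_DEFAULTS = {
    "enabled": "true",
    "public_key": "",
    "secret_key": "",
    "host": "https://cloud.langfuse.com",
}

def langfuse_setting(key: str) -> str:
    """Return a Langfuse setting, letting a LANGFUSE_* environment
    variable override the application.yml default for the same key."""
    return os.environ.get(f"LANGFUSE_{key.upper()}", YAML_DEFAULTS[key])

# Falls back to the yml default unless LANGFUSE_HOST is set in the environment.
print(langfuse_setting("host"))
```

With this precedence, a deployment can keep shared defaults in `application.yml` and inject per-environment secrets via `LANGFUSE_SECRET_KEY` without editing the file.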
### Enable/Disable

To disable Langfuse integration:

- Set `langfuse.enabled: false` in `application.yml`
- Or set `LANGFUSE_ENABLED=false` in your environment
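Because `LANGFUSE_ENABLED` arrives as a string, the application has to coerce it to a boolean before deciding whether to initialize the Langfuse client. A minimal sketch of that guard (the `is_langfuse_enabled` helper is hypothetical, not an actual function in this repository):

```python
import os

def is_langfuse_enabled() -> bool:
    """Treat LANGFUSE_ENABLED as a boolean flag; any value other than a
    recognized truthy string disables the integration."""
    raw = os.environ.get("LANGFUSE_ENABLED", "true")
    return raw.strip().lower() in {"1", "true", "yes", "on"}
```

When this returns `False`, the application can skip creating the Langfuse client entirely, so no traces or metrics are sent.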
### Tracking Details

The following events are tracked:

- Webhook events
  - Input payload
  - Timestamps
  - Issue metadata
- LLM processing
  - Model used
  - Input/output
  - Processing time
- Errors
  - Webhook processing errors
  - LLM processing errors
  - Validation errors
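The fields above can be pictured as one trace record per webhook. The following is an illustrative sketch only; the field names and the `build_trace_record` helper are assumptions for this example, and the real schema is whatever the Langfuse SDK sends:

```python
from datetime import datetime, timezone
from typing import Any, Optional

def build_trace_record(
    payload: dict[str, Any],
    model: str,
    output: Any,
    duration_ms: float,
    error: Optional[str] = None,
) -> dict[str, Any]:
    """Assemble the kinds of fields the integration tracks for one
    webhook: input payload, timestamp, issue metadata, model used,
    output, processing time, and any error."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input": payload,
        # Issue metadata pulled from a Jira-style payload shape (assumed).
        "issue_key": payload.get("issue", {}).get("key"),
        "model": model,
        "output": output,
        "processing_time_ms": duration_ms,
        # Webhook, LLM, and validation errors would all surface here.
        "error": error,
    }
```

A successful run leaves `error` as `None`; failures populate it so error traces remain searchable alongside normal ones in the dashboard.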
### Viewing Data
Visit your Langfuse dashboard to view the collected metrics and traces.