jira-webhook-llm/config/application.yml
Refactor Jira Webhook LLM integration
- Simplified the FastAPI application structure and improved error handling with middleware (sketched after this list).
- Introduced a retry decorator for asynchronous functions to improve reliability against transient failures (see the sketch after this list).
- Modularized the LLM initialization and prompt loading into separate functions for better maintainability.
- Updated the Pydantic models for the Jira webhook payload and analysis flags to ensure proper validation and structure (illustrated after this list).
- Implemented a structured logging configuration for better traceability and debugging (see the example after this list).
- Added comprehensive unit tests for prompt loading, response validation, and webhook handling.
- Established a CI/CD pipeline with GitHub Actions for automated testing and coverage reporting.
- Enhanced the prompt template for LLM analysis to include specific instructions for handling escalations.
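
The error-handling middleware might look like the following minimal sketch; the handler name and response shape are illustrative assumptions, not the project's actual code:

import logging

from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse

app = FastAPI()
logger = logging.getLogger(__name__)

@app.middleware("http")
async def catch_unhandled_errors(request: Request, call_next):
    # Hypothetical catch-all: log the failure and return a generic 500
    # instead of letting the exception propagate to the client.
    try:
        return await call_next(request)
    except Exception:
        logger.exception("Unhandled error for %s %s", request.method, request.url.path)
        return JSONResponse(status_code=500, content={"detail": "Internal server error"})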
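
A minimal sketch of what the async retry decorator could look like, assuming exponential backoff; the name async_retry and its parameters are illustrative, not the repository's actual API:

import asyncio
import functools
import logging

logger = logging.getLogger(__name__)

def async_retry(attempts: int = 3, base_delay: float = 1.0):
    # Hypothetical decorator: retries a coroutine with exponential backoff.
    def decorator(func):
        @functools.wraps(func)
        async def wrapper(*args, **kwargs):
            for attempt in range(1, attempts + 1):
                try:
                    return await func(*args, **kwargs)
                except Exception as exc:
                    if attempt == attempts:
                        raise  # out of attempts: surface the last error
                    delay = base_delay * 2 ** (attempt - 1)
                    logger.warning("Attempt %d/%d failed (%s); retrying in %.1fs",
                                   attempt, attempts, exc, delay)
                    await asyncio.sleep(delay)
        return wrapper
    return decorator

It would be applied as, for example, @async_retry(attempts=3) on the coroutine that calls the LLM backend.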
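
A sketch of what the payload and flag models could look like; the field names below are assumptions based on Jira's webhook format and the commit description, not the project's actual schema:

from typing import Optional

from pydantic import BaseModel

class JiraIssueFields(BaseModel):
    summary: str
    description: Optional[str] = None

class JiraIssue(BaseModel):
    key: str
    fields: JiraIssueFields

class JiraWebhookPayload(BaseModel):
    # "webhookEvent" mirrors Jira's usual payload key; treat it as an assumption here.
    webhookEvent: str
    issue: JiraIssue

class AnalysisFlags(BaseModel):
    # Hypothetical flags the LLM analysis might emit.
    needs_escalation: bool = False
    sentiment: Optional[str] = None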
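
One way to get structured (JSON) logs with only the standard library; the formatter below is a sketch, not necessarily what the project uses:

import json
import logging

class JsonFormatter(logging.Formatter):
    # Hypothetical formatter: emit each record as a single JSON object.
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "time": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

def configure_logging(level: int = logging.INFO) -> None:
    handler = logging.StreamHandler()
    handler.setFormatter(JsonFormatter())
    logging.basicConfig(level=level, handlers=[handler], force=True)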

# Default application configuration
llm:
  # The mode to run the application in.
  # Can be 'openai' or 'ollama'.
  # This can be overridden by the LLM_MODE environment variable.
  mode: ollama

  # Settings for OpenAI-compatible APIs (like OpenRouter)
  openai:
    # It's HIGHLY recommended to set this via an environment variable
    # instead of saving it in this file.
    # Can be overridden by OPENAI_API_KEY
    api_key: "sk-or-v1-..."
    # Can be overridden by OPENAI_API_BASE_URL
    api_base_url: "https://openrouter.ai/api/v1"
    # Can be overridden by OPENAI_MODEL
    model: "deepseek/deepseek-chat:free"

  # Settings for Ollama
  ollama:
    # Can be overridden by OLLAMA_BASE_URL
    base_url: "http://192.168.0.122:11434"
    # base_url: "https://api-amer-sandbox-gbl-mdm-hub.pfizer.com/ollama"
    # Can be overridden by OLLAMA_MODEL
    model: "phi4-mini:latest"
    # model: "qwen3:1.7b"
    # model: "mollm:360m"