jira-webhook-llm/config/application.yml

# Default application configuration
llm:
  # The mode to run the application in.
  # Can be 'openai' or 'ollama'.
  # This can be overridden by the LLM_MODE environment variable.
  mode: ollama

  # Settings for OpenAI-compatible APIs (like OpenRouter)
  openai:
    # It's HIGHLY recommended to set this via an environment variable
    # instead of saving it in this file.
    # Can be overridden by OPENAI_API_KEY
    api_key: ""  # leave empty here; set OPENAI_API_KEY instead of committing a real key
    # Can be overridden by OPENAI_API_BASE_URL
    api_base_url: "https://openrouter.ai/api/v1"
    # Can be overridden by OPENAI_MODEL
    model: "deepseek/deepseek-chat:free"

  # Settings for Ollama
  ollama:
    # Can be overridden by OLLAMA_BASE_URL
    base_url: "http://localhost:11434"
    # Can be overridden by OLLAMA_MODEL
    # model: "phi4-mini:latest"
    # model: "qwen3:1.7b"
    model: "smollm:360m"
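
A minimal sketch of how this file might be loaded with the documented environment-variable overrides applied, assuming PyYAML is installed; load_config, the module name config_loader, and the default path are illustrative, not the project's actual loader:

# config_loader.py -- illustrative sketch, not the project's actual loader.
# Assumes PyYAML (pip install pyyaml); the env var names match the
# comments in application.yml above.
import os
import yaml

def load_config(path="config/application.yml"):
    with open(path) as f:
        config = yaml.safe_load(f)
    llm = config["llm"]
    # Each setting falls back to the YAML value when its env var is unset.
    llm["mode"] = os.getenv("LLM_MODE", llm["mode"])
    openai = llm["openai"]
    openai["api_key"] = os.getenv("OPENAI_API_KEY", openai.get("api_key", ""))
    openai["api_base_url"] = os.getenv("OPENAI_API_BASE_URL", openai["api_base_url"])
    openai["model"] = os.getenv("OPENAI_MODEL", openai["model"])
    ollama = llm["ollama"]
    ollama["base_url"] = os.getenv("OLLAMA_BASE_URL", ollama["base_url"])
    ollama["model"] = os.getenv("OLLAMA_MODEL", ollama["model"])
    return config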
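
And a minimal sketch of how the llm.mode key might select between the two backends, assuming the langchain-openai and langchain-ollama packages; build_llm is a hypothetical helper, not the project's actual code:

# llm_factory.py -- illustrative sketch of the mode switch described above.
from langchain_ollama import ChatOllama
from langchain_openai import ChatOpenAI

def build_llm(config):
    llm_cfg = config["llm"]
    if llm_cfg["mode"] == "openai":
        # Any OpenAI-compatible endpoint works, e.g. OpenRouter as configured above.
        return ChatOpenAI(
            api_key=llm_cfg["openai"]["api_key"],
            base_url=llm_cfg["openai"]["api_base_url"],
            model=llm_cfg["openai"]["model"],
        )
    if llm_cfg["mode"] == "ollama":
        return ChatOllama(
            base_url=llm_cfg["ollama"]["base_url"],
            model=llm_cfg["ollama"]["model"],
        )
    raise ValueError(f"Unsupported llm.mode: {llm_cfg['mode']!r}")

Usage would then be something like llm = build_llm(load_config()), after which the returned chat model can be invoked directly or wired into a LangChain chain.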