jira-webhook-llm/docker-compose.yml
Refactor Jira Webhook LLM integration
- Simplified the FastAPI application structure and improved error handling with middleware.
- Introduced a retry decorator for asynchronous functions to enhance reliability (a sketch follows this list).
- Modularized the LLM initialization and prompt loading into separate functions for better maintainability.
- Updated Pydantic models for the Jira webhook payload and analysis flags to ensure proper validation and structure (a model sketch follows this list).
- Implemented a structured logging configuration for better traceability and debugging (a logging sketch follows this list).
- Added comprehensive unit tests for prompt loading, response validation, and webhook handling.
- Established a CI/CD pipeline with GitHub Actions for automated testing and coverage reporting.
- Enhanced the prompt template for LLM analysis to include specific instructions for handling escalations.
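
The retry decorator mentioned above could look roughly like the following; the name async_retry, its defaults, and the call site are assumptions, not the repository's actual code.

# Hypothetical async retry decorator with exponential backoff (names assumed)
import asyncio
import functools
import logging

logger = logging.getLogger(__name__)

def async_retry(attempts: int = 3, delay: float = 1.0, backoff: float = 2.0):
    def decorator(func):
        @functools.wraps(func)
        async def wrapper(*args, **kwargs):
            wait = delay
            for attempt in range(1, attempts + 1):
                try:
                    return await func(*args, **kwargs)
                except Exception as exc:
                    if attempt == attempts:
                        raise
                    logger.warning("Attempt %d/%d failed: %s; retrying in %.1fs",
                                   attempt, attempts, exc, wait)
                    await asyncio.sleep(wait)
                    wait *= backoff
        return wrapper
    return decorator

@async_retry(attempts=3, delay=0.5)
async def call_llm(prompt: str) -> str:
    ...  # call the Ollama / OpenAI backend here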
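
A sketch of the Pydantic models is shown below; the field names are assumptions based on a typical Jira webhook payload, not the repository's actual schema.

# Hypothetical Pydantic models for the webhook payload and analysis flags
from typing import Optional
from pydantic import BaseModel

class JiraIssueFields(BaseModel):
    summary: str
    description: Optional[str] = None
    priority: Optional[str] = None

class JiraWebhookPayload(BaseModel):
    issue_key: str          # e.g. "PROJ-123"
    fields: JiraIssueFields

class AnalysisFlags(BaseModel):
    needs_escalation: bool = False
    sentiment: Optional[str] = None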
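
The structured logging setup could be as simple as a JSON formatter on the root logger; this is a sketch, and the real configuration may use a different library or field set.

# Hypothetical JSON-line logging setup (field names assumed)
import json
import logging

class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        payload = {
            "time": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        return json.dumps(payload)

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logging.basicConfig(level=logging.INFO, handlers=[handler])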

name: jira-webhook-stack
services:
  ollama-jira:
    image: artifactory.pfizer.com/mdmhub-docker-dev/mdmtools/ollama/ollama-preloaded:0.0.1
    ports:
      - "11434:11434"
    restart: unless-stopped
  # Service for your FastAPI application
  jira-webhook-llm:
    image: artifactory.pfizer.com/mdmhub-docker-dev/mdmtools/ollama/jira-webhook-llm:0.1.8
    ports:
      - "8000:8000"
    environment:
      # Set the LLM mode to 'ollama' or 'openai'
      LLM_MODE: ollama
      # Base URL for the LLM backend. This points to an external Ollama gateway;
      # to use the ollama-jira service in this stack instead, the service name is
      # the hostname on the Compose network, e.g. "http://ollama-jira:11434"
      OLLAMA_BASE_URL: "https://api-amer-sandbox-gbl-mdm-hub.pfizer.com/ollama"
      # Specify the model to use
      OLLAMA_MODEL: phi4-mini:latest
    # Start the Ollama service before the app; note that the short form of
    # depends_on only waits for the container to start, not for it to be healthy
    # (a healthcheck plus condition: service_healthy would be needed for that)
    depends_on:
      - ollama-jira
    restart: unless-stopped
    # Command to run your FastAPI application using Uvicorn
    # --host 0.0.0.0 is crucial for the app to be accessible from outside the container
    # add --reload for local development only; leave it out in production
    command: uvicorn jira-webhook-llm:app --host 0.0.0.0 --port 8000
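
The three environment variables set above (LLM_MODE, OLLAMA_BASE_URL, OLLAMA_MODEL) are consumed by the application at startup; below is a minimal sketch of how they might be read, with defaults chosen here for illustration only.

# Hypothetical settings loading; defaults are illustrative, not the app's actual values
import os

LLM_MODE = os.getenv("LLM_MODE", "ollama")          # "ollama" or "openai"
OLLAMA_BASE_URL = os.getenv("OLLAMA_BASE_URL", "http://ollama-jira:11434")
OLLAMA_MODEL = os.getenv("OLLAMA_MODEL", "phi4-mini:latest")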