jira-webhook-llm/Dockerfile
Ireneusz Bachanowicz 0c468c0a69 feat: Implement Jira Webhook Handler with LLM Integration
- Added FastAPI application to handle Jira webhooks.
- Created Pydantic models for Jira payload and LLM output.
- Integrated LangChain with OpenAI and Ollama for LLM processing.
- Set up Langfuse for tracing and monitoring.
- Implemented analysis logic for Jira tickets, including sentiment analysis and label suggestions.
- Added test endpoint for LLM integration.
- Updated requirements.txt to include necessary dependencies and versions.
2025-07-13 11:44:19 +02:00


# syntax=docker/dockerfile:1.4
# --- Stage 1: Build Dependencies ---
# Using a specific, stable Python version on Alpine for a small final image.
FROM python:3.10-alpine3.18 AS builder
WORKDIR /app
# Install build dependencies for Python packages.
RUN apk add --no-cache --virtual .build-deps \
    build-base \
    gcc \
    musl-dev \
    python3-dev \
    linux-headers
# Copy only the requirements file first to leverage Docker's build cache.
COPY requirements.txt .
# Install Python dependencies.
RUN pip install --no-cache-dir -r requirements.txt
# Remove build dependencies. Note: because this runs in a separate RUN layer,
# it does not shrink the builder image itself, but that is fine here -- the
# runtime stage below copies only the installed packages, and the rest of the
# builder stage is discarded.
RUN apk del .build-deps
# --- Stage 2: Runtime Environment ---
# Start fresh with a lean Alpine Python image.
FROM python:3.10-alpine3.18
WORKDIR /app
# Copy installed Python packages from the builder stage.
COPY --from=builder /usr/local/lib/python3.10/site-packages /usr/local/lib/python3.10/site-packages
COPY --from=builder /usr/local/bin /usr/local/bin
# Set environment variables for Python.
ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1
# Copy the configuration directory first.
# If only code changes, this layer remains cached.
COPY config ./config
# Copy your application source code.
COPY jira-webhook-llm.py .
COPY config.py .
# Expose the port your application listens on.
EXPOSE 8000
# Define the command to run your application.
CMD ["uvicorn", "jira-webhook-llm:app", "--host", "0.0.0.0", "--port", "8000"]
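
For reference, the image can be built and run with standard Docker commands. This is a minimal sketch: the image tag `jira-webhook-llm` is an arbitrary choice, and the `.env` file is a placeholder for whatever LLM/Langfuse credentials the application's config expects.

```shell
# Build the image from the repository root (where this Dockerfile lives).
docker build -t jira-webhook-llm .

# Run it, publishing the exposed port 8000 on the host.
# Credentials are assumed to live in a local .env file; adjust as needed.
docker run --rm -p 8000:8000 --env-file .env jira-webhook-llm
```

Because `requirements.txt` and the `config` directory are copied in their own layers before the application code, rebuilds after code-only changes reuse the cached dependency layers and complete much faster.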