jira-webhook-llm/llm/models.py
Ireneusz Bachanowicz 2763b40b60
Refactor Jira Webhook LLM integration
- Simplified the FastAPI application structure and improved error handling with middleware.
- Introduced a retry decorator for asynchronous functions to enhance reliability (an illustrative sketch follows this changelog).
- Modularized the LLM initialization and prompt loading into separate functions for better maintainability.
- Updated Pydantic models for Jira webhook payload and analysis flags to ensure proper validation and structure.
- Implemented a structured logging configuration for better traceability and debugging.
- Added comprehensive unit tests for prompt loading, response validation, and webhook handling.
- Established a CI/CD pipeline with GitHub Actions for automated testing and coverage reporting.
- Enhanced the prompt template for LLM analysis to include specific instructions for handling escalations.
2025-07-13 13:19:10 +02:00
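
The retry decorator mentioned in the changelog lives in the application module, not in models.py, and its implementation is not shown on this page. A minimal sketch of what such an async retry decorator might look like; the names `async_retry`, `retries`, `delay`, and `backoff` are assumptions for illustration, not the project's actual API:

import asyncio
import functools
import logging

logger = logging.getLogger(__name__)

def async_retry(retries: int = 3, delay: float = 1.0, backoff: float = 2.0):
    """Hypothetical retry decorator for async callables (names are illustrative only)."""
    def decorator(func):
        @functools.wraps(func)
        async def wrapper(*args, **kwargs):
            attempt, wait = 0, delay
            while True:
                try:
                    return await func(*args, **kwargs)
                except Exception as exc:
                    attempt += 1
                    if attempt >= retries:
                        raise
                    logger.warning("Retry %d/%d after error: %s", attempt, retries, exc)
                    await asyncio.sleep(wait)
                    wait *= backoff
        return wrapper
    return decorator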


from typing import Optional, List, Union

from pydantic import BaseModel, ConfigDict, field_validator, Field


class JiraWebhookPayload(BaseModel):
    """Incoming Jira webhook payload; accepts camelCase aliases alongside field names."""

    model_config = ConfigDict(
        alias_generator=lambda x: ''.join(
            word.capitalize() if i > 0 else word for i, word in enumerate(x.split('_'))
        ),
        populate_by_name=True,
    )

    issueKey: str
    summary: str
    description: Optional[str] = None
    comment: Optional[str] = None
    labels: Optional[Union[List[str], str]] = []
    status: Optional[str] = None
    assignee: Optional[str] = None
    updated: Optional[str] = None

    @field_validator('labels', mode='before')
    @classmethod
    def convert_labels_to_list(cls, v):
        # Jira may send a single label as a bare string; normalize it to a list.
        if isinstance(v, str):
            return [v]
        return v or []


class AnalysisFlags(BaseModel):
    hasMultipleEscalations: bool = Field(description="Is there evidence of multiple escalation attempts?")
    customerSentiment: Optional[str] = Field(description="Overall customer sentiment (e.g., 'neutral', 'frustrated', 'calm').")
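
A short usage sketch (not part of models.py) showing how these models behave: the alias generator together with `populate_by_name=True` accepts camelCase keys, and the `labels` validator coerces a bare string into a list. The sample values and the `llm.models` import path are illustrative assumptions based on the file location:

from llm.models import JiraWebhookPayload, AnalysisFlags  # assumed import path

payload = JiraWebhookPayload.model_validate({
    "issueKey": "PROJ-123",                      # illustrative issue key
    "summary": "Customer reports repeated outage",
    "labels": "escalation",                      # single string coerced to ["escalation"]
})
print(payload.labels)      # ['escalation']

flags = AnalysisFlags(
    hasMultipleEscalations=True,
    customerSentiment="frustrated",
)
print(flags.model_dump())  # {'hasMultipleEscalations': True, 'customerSentiment': 'frustrated'}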