Refactor project structure and documentation for CV Optimization Platform

- Updated .cursorrules with clearer project organization
- Removed setup-project.sh script and consolidated project documentation
- Simplified and restructured project rules, overview, and resources
- Added comprehensive task list with detailed development roadmap
- Cleaned up and standardized markdown files in .cursor/rules directory
Ireneusz Bachanowicz 2025-02-26 00:25:02 +01:00
parent ab139e84af
commit c5202ca4c5
14 changed files with 421 additions and 290 deletions

View File

@ -0,0 +1,34 @@
---
description: Defines the coding conventions and style guidelines to be followed throughout the project.
globs:
---
# Coding Conventions and Style Guide
## General Principles
- **Consistency:** Maintain a consistent coding style across the entire project.
- **Readability:** Write code that is easy to understand and maintain by others (and your future self).
- **Simplicity:** Keep code as simple as possible while achieving the desired functionality. Avoid unnecessary complexity.
## Language Specific Conventions
### JavaScript (React Frontend)
- **Style Guide:** Largely follow the [Airbnb JavaScript Style Guide](https://github.com/airbnb/javascript).
- **React Component Structure:** Functional components preferred. Use hooks for state and lifecycle management.
- **Naming Conventions:** `camelCase` for variables and functions, `PascalCase` for React components.
- **File Structure:** Organize components into logical directories based on features or UI sections.
- **Comments:** Use comments to explain complex logic or non-obvious code sections.
### Python/Node.js (Backend - Adjust based on your choice)
- **Node.js Standard Style:** (If using Node.js backend - [https://standardjs.com/](https://standardjs.com/))
- **Error Handling:** Use try-catch blocks (JavaScript) for robust error handling.
- **Logging:** Use a logging library (e.g., `winston` or `pino` in Node.js) for structured logging.
- **API Endpoint Design:** RESTful API principles. Clear and consistent endpoint naming.
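The structured-logging bullet above can be sketched without any dependency. This is a stdlib-only illustration of the JSON-line output shape that libraries like `winston` or `pino` produce; the `createLogger` name and record fields are assumptions, not project code.

```javascript
// Minimal structured (JSON-line) logger sketch, mimicking the record shape
// of pino/winston. Each call writes one JSON object per line.
function createLogger(stream = process.stdout) {
  const write = (level, msg, extra = {}) =>
    stream.write(JSON.stringify({ level, time: Date.now(), msg, ...extra }) + "\n");
  return {
    info: (msg, extra) => write("info", msg, extra),
    warn: (msg, extra) => write("warn", msg, extra),
    error: (msg, extra) => write("error", msg, extra),
  };
}

const log = createLogger();
log.info("upload received", { file: "cv.pdf", sizeBytes: 24576 });
```

In practice you would swap this for a real logging library, but the call sites (`log.info(msg, context)`) stay the same.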
## Tailwind CSS Styling
- **Utility-First Approach:** Primarily use Tailwind utility classes for styling.
- **Component-Specific Styles (when needed):** Create custom CSS classes or components for reusable styles beyond utility classes.
- **Responsiveness:** Utilize Tailwind's responsive modifiers (`sm:`, `md:`, `lg:`, etc.) for responsive design.
## Git Conventions
- **Branching Strategy:** Feature branches for new features, `main` branch for stable releases.
- **Commit Messages:** Use clear and concise commit messages following conventional commits format (e.g., `feat: Add user authentication`, `fix: Resolve file upload bug`).
- **Pull Requests:** Use pull requests for code reviews and merging changes into `main`.

View File

@ -1,13 +0,0 @@
**CV Platform Development Protocol**
1. Security First:
- Always sanitize user input using DOMPurify
- Encrypt resume files with AES-256-GCM during storage
2. LLM Implementation Rules:
- Use OpenAI text-embedding-3-small for semantic analysis
- Maintain a temperature of 0.7 to balance creativity and accuracy
- Verify ATS keyword suggestions against JobCopilot's latest data
3. Task Management:
- Reference .notes/task_list.md before making changes
- Create atomic Git commits per feature using Conventional Commits

View File

@ -0,0 +1,37 @@
---
description: Outlines the deployment plan for the CV Optimization Platform, including environments and steps.
globs:
---
# Deployment Plan
## Environments
- **Development Environment:** Local development environment on your machine.
- **Staging Environment:** (Optional but recommended for more complex projects) A staging environment that mirrors production for testing before live deployment. Could be a separate branch or deployment on Vercel/Render.
- **Production Environment:** Live website accessible to users.
## Deployment Steps (MVP Launch)
### Frontend (Vercel)
1. **Connect GitHub Repository to Vercel:** In Vercel, create a new project and connect it to your GitHub repository.
2. **Configure Build Settings:** Vercel will usually auto-detect Next.js settings. Verify build command and output directory are correct.
3. **Set Environment Variables:** Add any necessary environment variables (e.g., API endpoint URLs, API keys - *be careful not to expose secrets in frontend code directly, ideally pass through backend or use secure environment variable management*).
4. **Deploy:** Trigger deployment from Vercel dashboard or by pushing to your `main` branch (if configured for automatic deployments).
5. **Domain Setup (Optional):** Configure a custom domain name if desired.
### Backend (Render/Heroku)
1. **Create Render/Heroku Account and Project:** Create an account and a new project for your backend application on Render or Heroku.
2. **Connect GitHub Repository:** Connect your backend GitHub repository to the Render/Heroku project.
3. **Configure Deployment Settings:**
- **Build Command:** (e.g., `pip install -r requirements.txt` for Python, `npm install` for Node.js).
- **Start Command:** (e.g., `gunicorn app:app` for Flask, `uvicorn app:app --host 0.0.0.0` for FastAPI, `node server.js` for Node.js). Reserve uvicorn's `--reload` flag for local development; do not use it in production.
- **Environment Variables:** Set necessary environment variables, especially API keys, database connection strings (if applicable). Use secure environment variable management provided by Render/Heroku.
4. **Database Setup (if using external DB):** Set up a database service (e.g., Render PostgreSQL, Heroku Postgres, MongoDB Atlas) and configure connection details.
5. **Deploy:** Trigger deployment from Render/Heroku dashboard or by pushing to your `main` branch.
6. **Health Checks:** Configure health check endpoints for monitoring application status.
## Post-Deployment
- **Monitoring:** Set up monitoring tools (e.g., Vercel Analytics, Render/Heroku logs, dedicated monitoring services) to track application performance and errors.
- **Logging:** Review logs regularly to identify and address issues.
- **Scaling (Future):** Plan for scaling backend resources as user load increases.
**Note:** This is a basic deployment plan for MVP. It will need to be refined as the project evolves and more complex features are added. Consider CI/CD pipelines for automated deployments in the future.

.cursor/rules/features.md Normal file
View File

@ -0,0 +1,47 @@
---
description: Lists the features and functionalities of the CV Optimization Platform, categorized by priority.
globs:
---
# Features and Functionalities
## Core Features (MVP)
- **CV Upload:**
- Support for .pdf, .doc, .docx, and .txt file formats.
- Drag-and-drop and file selection upload methods.
- File size limits and type validation.
- **AI-Powered CV Analysis:**
- Integration with OpenAI API (or chosen LLM).
- Analysis of CV content for:
- Grammar and spelling errors.
- Clarity and conciseness.
- Impact and effectiveness of language.
- Keyword optimization for ATS.
- **Suggestion Display:**
- Side-by-side comparison of original and improved CV sections.
- Clear and actionable suggestions for improvement.
- Ability for users to accept, reject, or modify suggestions.
- **Basic User Interface:**
- Simple and intuitive design for ease of use.
- Responsive design for different screen sizes.
## Future Features (Post-MVP)
- **User Accounts:**
- Registration and login functionality.
- Secure storage of user CVs.
- User profiles to manage CVs.
- **ATS Compliance Checking (Advanced):**
- Simulate ATS parsing and scoring.
- Provide specific feedback on ATS compatibility.
- **Job Description Tailoring:**
- Option to upload a job description.
- AI analysis to tailor the CV to the specific job description.
- **Cover Letter Generation:**
- AI-powered cover letter generation based on CV and job description.
- **Multiple CV Versions:**
- Allow users to create and manage multiple versions of their CV.
- **Integration with Job Boards:**
- (Future consideration) Integration with job boards for direct application.
- **Premium Features (Monetization):**
- Advanced ATS checking.
- Job description tailoring.
- Priority support.

View File

@ -1,66 +1,24 @@
 ---
-description:
+description: Provides a high-level overview of the CV Optimization Platform project, its goals, and target users.
 globs:
 ---
-# CV Optimization Platform
-## Table of Contents
-1. [Key Features](#key-features)
-2. [Tech Stack](#tech-stack)
-3. [Core Functionalities](#core-functionalities)
-4. [Documentation](#documentation)
-5. [Additional Requirements](#additional-requirements)
-6. [Current File Structure](#current-file-structure)
-## Key Features
-- AI-powered resume analysis
-- ATS compliance checking
-- Real-time editing suggestions
-- GDPR-compliant storage
-## Tech Stack
-- React.js frontend
-- Node.js/Express.js backend
-- MongoDB database (initially using simple file storage)
-- OpenAI/Claude LLM integration
-## Core Functionalities
-- User authentication
-- File upload and storage
-- AI analysis and suggestions
-- ATS compliance checking
-- Real-time editing suggestions
-## Documentation
-- OpenAI API documentation
-- Claude API documentation
-- MongoDB documentation
-- React.js documentation
-## Additional Requirements
-- Use Tailwind CSS for styling
-- Implement responsive design
-- Ensure high performance and scalability
-- Follow best practices for security and error handling
-- Use Next.js for server-side rendering and routing
-- Implement proper error handling and user feedback
-- Ensure cross-browser compatibility
-- Optimize performance with code splitting and lazy loading
-- Implement caching where appropriate
-## Current File Structure
-my-app/
-├── app/
-├── components/
-├── public/
-├── styles/
-├── .env
-├── .gitignore
-├── package.json
-├── tailwind.config.ts
-├── tsconfig.json
-├── README.md
-./notes/
-├── resume_resources.md
-├── project_overview.md
-├── development_tasks.md
+# CV Optimization Platform Project Overview
+## Project Goal
+To develop a web platform that helps users improve their CVs using AI-powered analysis, increasing their chances of landing better jobs by optimizing for both human reviewers and automated Applicant Tracking Systems (ATS).
+## Target Audience
+Job seekers who want to:
+- Improve the quality and effectiveness of their CVs.
+- Optimize their CVs for Applicant Tracking Systems (ATS).
+- Increase their chances of getting noticed by recruiters and hiring managers.
+- Save time and effort in CV writing.
+## Key Success Metrics
+- User satisfaction (measured through feedback and surveys).
+- Number of CVs analyzed and improved.
+- User engagement (frequency of use, feature adoption).
+- Positive impact on users' job search outcomes (ideally tracked through user testimonials or optional success stories).
+## Project Status
+Currently in the initial development phase, focusing on core functionalities (CV upload, AI analysis, suggestion display).

View File

@ -0,0 +1,26 @@
---
description: Outlines the development requirements, including non-functional requirements, best practices, and coding guidelines.
globs:
---
# Development Requirements and Best Practices
## Non-Functional Requirements
- **Performance:** Ensure fast loading times and responsive UI. Optimize backend for efficient processing.
- **Scalability:** Design the system to handle increasing user load and data volume.
- **Security:** Implement robust security measures to protect user data and prevent vulnerabilities. GDPR compliance for user data handling.
- **Reliability:** Ensure the application is stable and handles errors gracefully.
- **Accessibility:** Design with accessibility in mind (WCAG guidelines).
- **Cross-Browser Compatibility:** Test and ensure compatibility across major browsers (Chrome, Firefox, Safari, Edge).
- **Responsiveness:** Application should be fully responsive and work well on different devices (desktops, tablets, mobile).
## Development Best Practices
- **Clean Code:** Write readable, maintainable, and well-documented code.
- **Version Control:** Use Git for version control and collaborate effectively. Follow Git best practices (branching, pull requests).
- **Testing:** Implement unit tests, integration tests, and end-to-end tests to ensure code quality and prevent regressions.
- **Error Handling:** Implement comprehensive error handling throughout the application. Provide informative error messages to users.
- **Logging:** Implement logging for debugging and monitoring purposes.
- **Code Reviews:** Conduct code reviews to ensure code quality and knowledge sharing.
- **Performance Optimization:** Implement code splitting, lazy loading, and caching where appropriate to optimize performance.
- **Security Best Practices:** Follow security best practices for web development, including input validation, output encoding, and protection against common vulnerabilities (OWASP guidelines).
- **Tailwind CSS Conventions:** Follow Tailwind CSS best practices for styling and maintainability.
- **React Best Practices:** Adhere to React best practices for component design, state management, and performance.
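The output-encoding point under Security Best Practices can be made concrete with a tiny helper. This is an illustrative sketch only; in the real codebase a vetted library (e.g., DOMPurify, as the platform rules elsewhere require) should handle sanitization.

```javascript
// Escape the five HTML-significant characters so user-supplied CV text
// cannot inject markup when rendered into the page.
function escapeHtml(s) {
  return s.replace(/[&<>"']/g, (c) => ({
    "&": "&amp;",
    "<": "&lt;",
    ">": "&gt;",
    '"': "&quot;",
    "'": "&#39;",
  }[c]));
}
```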

View File

@ -1,35 +1,22 @@
-# Resume Resources
-## CV Templates
-- [React Resume Template](https://github.com/tbakerx/react-resume-template?tab=readme-ov-file)
-- [React Resume](https://reactresume.com/)
-## LinkedIn Resume Builder
-- [Professional Resume Builder](https://lnkd.in/gPbPiF3Y)
-## Resume Writing Tools
-1. **Looking for Resume**: Build resumes easily with professional templates.
-2. **ResumeWorded**: Instantly score and improve resumes and LinkedIn profiles.
-3. **ResumeGenius**: Resources from application to job offer.
-4. **Resume.io**: AI-powered resume analysis and improvement suggestions.
-5. **Enhancv**: Professional templates approved by recruiters in minutes.
-6. **Rezi**: AI-based CV builder with ATS optimization and keyword scanner.
-7. **Zety**: Create polished CVs in minutes with AI-powered tools.
-8. **CV Compiler**: Compare your CV with job descriptions for personalized feedback.
-9. **Visual CV**: Create professional CVs with AI-powered tools and templates.
-10. **Jobscan**: Optimize resumes for applicant tracking systems (ATS).
-11. **Novoresume**: Easy-to-use resume builder with modern templates.
-12. **Hiration**: Get tailored resume templates and career advice.
-13. **Kickresume**: Create resumes, cover letters, and websites effortlessly.
-14. **FlowCV**: Free tool to build visually appealing and structured resumes.
-## Remote Job Websites
-1. **We Work Remotely**: The largest remote community with 4.5M visitors.
-2. **Remote.co**: Offers remote roles in various fields.
-3. **FlexJobs**: Curates high-quality, vetted remote job opportunities.
-4. **Remotive**: Features remote jobs in various sectors.
-5. **Jobspresso**: Curated remote jobs in tech, marketing, design, and writing.
-## Networking
-- Join me on Telegram: [Unlock Career Secrets](https://t.me/hrswatisharma)
-- Follow HR Swati Sharma for more valuable content.
+---
+description: A collection of external resources for resume building, including templates, writing tools, and job websites.
+globs:
+---
+# Resume Building Resources
+## Resume Templates
+- [Resume.com Templates](https://www.resume.com/templates/)
+- [Canva Resume Templates](https://www.canva.com/resumes/templates/)
+- [Google Docs Resume Templates](https://www.google.com/docs/about/) (Search for "resume" in template gallery)
+## Resume Writing Tools & Guides
+- [Grammarly](https://www.grammarly.com/) (Grammar and spell checker)
+- [Jobscan](https://www.jobscan.co/) (ATS optimization tool - *Consider as competitor research*)
+- [Resume Worded](https://resumeworded.com/) (Resume feedback and improvement tools - *Consider as competitor research*)
+- [Indeed Career Guide - Resume Writing](https://www.indeed.com/career-advice/resumes-cover-letters)
+## Job Search Websites (for inspiration and job description examples)
+- [Indeed](https://www.indeed.com/)
+- [LinkedIn Jobs](https://www.linkedin.com/jobs/)
+- [Glassdoor](https://www.glassdoor.com/Job/jobs.htm)
+- [Monster](https://www.monster.com/)

View File

@ -0,0 +1,193 @@
---
description: Decomposed task list for the CV Optimization Platform, organized by component and functionality. Follow this plan from top to bottom.
globs:
---
# Decomposed Development Task List
**Project Goal:** Develop a CV Optimization Platform MVP with core features: CV upload, AI-powered analysis, and basic suggestion display.
**Development Flow:** Follow the components and tasks in the order listed below for a logical development progression.
---
## 1. Frontend Development (React.js with Next.js)
**Functionality:** User Interface and User Interaction
### 1.1. UI Structure and Basic Layout
- **Tasks:**
- [ ] 1.1.1. Set up basic Next.js project structure (if not already done).
- [ ] 1.1.2. Create homepage (`pages/index.js`) with a clear heading and project description.
- [ ] 1.1.3. Design basic layout using Tailwind CSS for overall page structure (header, main content area).
- [ ] 1.1.4. Create a placeholder area for CV upload section.
- [ ] 1.1.5. Create a placeholder area for displaying original CV text.
- [ ] 1.1.6. Create a placeholder area for displaying improved CV text/suggestions.
### 1.2. File Upload Functionality
- **Tasks:**
- [ ] 1.2.1. Install `react-dropzone` library.
- [ ] 1.2.2. Create a File Upload Component (`components/FileUpload.js`).
- [ ] 1.2.3. Implement drag-and-drop functionality using `react-dropzone`.
- [ ] 1.2.4. Implement file selection button.
- [ ] 1.2.5. Display selected file name to the user.
- [ ] 1.2.6. Implement basic client-side file type validation (.pdf, .doc, .docx, .txt).
- [ ] 1.2.7. Implement basic client-side file size validation.
- [ ] 1.2.8. Display user-friendly error messages for file validation failures.
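Tasks 1.2.6-1.2.8 can be sketched as one pure helper that the upload component calls before sending anything to the backend. The names (`validateCvFile`) and the 5 MB cap are assumptions for illustration, not decisions from the plan.

```javascript
// Client-side checks for tasks 1.2.6-1.2.7: extension whitelist and size cap.
// Returning { ok, error } keeps the error message reusable in the UI (1.2.8).
const ALLOWED_EXTENSIONS = [".pdf", ".doc", ".docx", ".txt"];
const MAX_SIZE_BYTES = 5 * 1024 * 1024; // assumed 5 MB limit

function validateCvFile(name, sizeBytes) {
  const dot = name.lastIndexOf(".");
  const ext = dot === -1 ? "" : name.slice(dot).toLowerCase();
  if (!ALLOWED_EXTENSIONS.includes(ext)) {
    return { ok: false, error: `Unsupported file type: ${ext || "(none)"}` };
  }
  if (sizeBytes > MAX_SIZE_BYTES) {
    return { ok: false, error: "File is larger than 5 MB." };
  }
  return { ok: true };
}
```

The same function can be reused server-side for task 2.2.3, since client-side validation alone is bypassable.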
### 1.3. Displaying Original CV Text
- **Tasks:**
- [ ] 1.3.1. Create a component to display CV text (`components/CVDisplay.js`).
- [ ] 1.3.2. Fetch uploaded file content from backend API (after backend `/upload` endpoint is ready - *note dependency*).
- [ ] 1.3.3. Display the raw text content of the uploaded CV in the `CVDisplay` component.
- [ ] 1.3.4. Implement basic formatting for text display (preserve line breaks, basic text styling).
### 1.4. Displaying Improved CV Text/Suggestions (Placeholder for now)
- **Tasks:**
- [ ] 1.4.1. Create a placeholder component for improved CV display (`components/ImprovedCVDisplay.js`).
- [ ] 1.4.2. For now, display static placeholder text in `ImprovedCVDisplay` (e.g., "Improved CV Suggestions will appear here").
- [ ] 1.4.3. Plan layout for side-by-side comparison or tabbed view for original vs. improved CV (implementation later).
### 1.5. Basic Frontend Styling and Responsiveness
- **Tasks:**
- [ ] 1.5.1. Apply Tailwind CSS styling to all frontend components for a consistent look and feel.
- [ ] 1.5.2. Ensure basic responsiveness for different screen sizes (desktop, mobile).
- [ ] 1.5.3. Refine UI elements for better user experience (padding, margins, font sizes, colors).
---
## 2. Backend Development (Node.js/Express.js OR Python/Flask/FastAPI - *Choose one and proceed*)
**Functionality:** API Endpoints, Logic, LLM Integration, Document Processing
### 2.1. Backend Project Setup and API Framework
- **Tasks (Choose Node.js OR Python path):**
**Node.js/Express.js Path:**
- [ ] 2.1.1. Set up Node.js backend project (using `npm init`).
- [ ] 2.1.2. Install Express.js and required packages (`express`, `dotenv`, etc.).
- [ ] 2.1.3. Create a basic Express.js server (`server.js`) with a "Hello World" endpoint to test.
- [ ] 2.1.4. Configure `.env` file for environment variables.
**Python/Flask/FastAPI Path:**
- [ ] 2.1.1. Set up Python backend project and virtual environment.
- [ ] 2.1.2. Install Flask/FastAPI and required packages (`flask` or `fastapi`, `uvicorn`, `python-dotenv`, etc.).
- [ ] 2.1.3. Create a basic Flask/FastAPI app (`app.py`) with a "Hello World" endpoint to test.
- [ ] 2.1.4. Configure `.env` file for environment variables.
### 2.2. `/upload` API Endpoint - File Handling
- **Tasks (Adjust based on Node.js or Python):**
- [ ] 2.2.1. Create `/upload` API endpoint in backend.
- [ ] 2.2.2. Implement file upload handling using appropriate middleware/libraries (e.g., `multer` in Node.js, Flask/FastAPI file handling).
- [ ] 2.2.3. Implement server-side file type validation (double-check file types).
- [ ] 2.2.4. Implement server-side file size limits.
- [ ] 2.2.5. Save uploaded files temporarily to a server directory (for processing).
- [ ] 2.2.6. Return success response to frontend with temporary file path or identifier.
- [ ] 2.2.7. Implement error handling for file upload failures and return appropriate error responses.
### 2.3. Document Text Extraction Functionality
- **Tasks (Adjust based on Node.js or Python and chosen libraries):**
- [ ] 2.3.1. Install document processing libraries (e.g., `pdf-parse`, `docx-parser` for Node.js OR `PyPDF2`, `python-docx` for Python).
- [ ] 2.3.2. Create functions to extract text from:
- [ ] 2.3.2.a. PDF files
- [ ] 2.3.2.b. DOCX files
- [ ] 2.3.2.c. DOC files (if supporting)
- [ ] 2.3.2.d. TXT files
- [ ] 2.3.3. Integrate text extraction functions into `/upload` endpoint after file saving.
- [ ] 2.3.4. After successful text extraction, return the extracted text in the API response to the frontend (instead of just file path).
- [ ] 2.3.5. Implement error handling for text extraction failures.
### 2.4. `/analyze` API Endpoint - LLM Integration (Initial Setup)
- **Tasks (Adjust based on Node.js or Python and OpenAI API):**
- [ ] 2.4.1. Create `/analyze` API endpoint in backend.
- [ ] 2.4.2. Install OpenAI API library (or chosen LLM library).
- [ ] 2.4.3. Configure OpenAI API key in `.env` file and load it securely.
- [ ] 2.4.4. In `/analyze` endpoint, receive CV text from frontend request.
- [ ] 2.4.5. Construct a basic prompt for the LLM for general CV improvement (use prompt examples from previous discussions).
- [ ] 2.4.6. Send the CV text and prompt to the OpenAI API.
- [ ] 2.4.7. Receive the LLM response.
- [ ] 2.4.8. For now, simply return the raw LLM response text to the frontend in the API response.
- [ ] 2.4.9. Implement basic error handling for OpenAI API calls (API connection errors, rate limits, etc.).
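Task 2.4.5 (prompt construction) can be isolated in a pure function so it is testable without calling the API. The model name, temperature, and prompt wording below are placeholder assumptions, not decisions from this plan.

```javascript
// Build the chat-completion payload the /analyze endpoint would POST to the
// OpenAI API (task 2.4.6). Keeping this pure makes prompt changes reviewable.
function buildAnalyzePayload(cvText) {
  return {
    model: "gpt-4o-mini", // assumed model choice
    temperature: 0.7,
    messages: [
      {
        role: "system",
        content:
          "You are an expert CV reviewer. Improve grammar, clarity, impact, " +
          "and ATS keyword usage. Return the revised CV with brief notes.",
      },
      { role: "user", content: `Please review and improve this CV:\n\n${cvText}` },
    ],
  };
}
```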
---
## 3. Integration and Testing
**Functionality:** Connecting Frontend and Backend, End-to-End Testing
### 3.1. Connect Frontend File Upload to Backend `/upload` Endpoint
- **Tasks:**
- [ ] 3.1.1. In Frontend `FileUpload` component, modify file upload logic to send files to the backend `/upload` endpoint using `fetch` or `axios`.
- [ ] 3.1.2. Handle successful `/upload` response and store the returned CV text (or file identifier if needed) in frontend state.
- [ ] 3.1.3. Display the received CV text in the `CVDisplay` component.
- [ ] 3.1.4. Handle error responses from `/upload` endpoint and display user-friendly error messages.
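Task 3.1.1 amounts to packaging the dropped file as multipart form data and POSTing it. A sketch with assumed names; the `"cv"` field matches the `files.cv` key the upload handler in this commit reads.

```javascript
// Wrap the selected file in a FormData body under the "cv" field,
// which the backend /upload handler reads as files.cv.
function buildUploadForm(file) {
  const formData = new FormData();
  formData.append("cv", file);
  return formData;
}

// POST the file and surface non-2xx responses as errors (task 3.1.4).
// Defined but not invoked here; the endpoint path is an assumption.
async function uploadCv(file, endpoint = "/api/upload") {
  const res = await fetch(endpoint, { method: "POST", body: buildUploadForm(file) });
  if (!res.ok) throw new Error(`Upload failed with status ${res.status}`);
  return res.json();
}
```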
### 3.2. Connect Frontend "Process CV" Button to Backend `/analyze` Endpoint
- **Tasks:**
- [ ] 3.2.1. Add a "Process CV" button to the frontend UI (below the uploaded CV display area).
- [ ] 3.2.2. When "Process CV" button is clicked, trigger an API call from frontend to backend `/analyze` endpoint.
- [ ] 3.2.3. Send the CV text (currently displayed in `CVDisplay`) to the `/analyze` endpoint in the request body.
- [ ] 3.2.4. Handle successful `/analyze` response and store the returned improved CV text (LLM response) in frontend state.
- [ ] 3.2.5. Display the received improved CV text in the `ImprovedCVDisplay` component (replace placeholder text).
- [ ] 3.2.6. Handle error responses from `/analyze` endpoint and display user-friendly error messages.
- [ ] 3.2.7. Implement loading state/spinner to indicate processing while waiting for API responses.
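For tasks 3.2.2-3.2.4, splitting request construction from the network call keeps the JSON shape testable. The `cvText`/`improvedText` field names and the endpoint path are assumptions about the API contract, not fixed by this plan.

```javascript
// Build the JSON request options for the /analyze call (task 3.2.3).
function buildAnalyzeRequest(cvText) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ cvText }),
  };
}

// Send the displayed CV text to the backend and unwrap the improved text
// (task 3.2.4). Defined but not invoked here.
async function analyzeCv(cvText, endpoint = "/api/analyze") {
  const res = await fetch(endpoint, buildAnalyzeRequest(cvText));
  if (!res.ok) throw new Error(`Analysis failed with status ${res.status}`);
  const data = await res.json();
  return data.improvedText;
}
```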
### 3.3. End-to-End Testing and Refinement
- **Tasks:**
- [ ] 3.3.1. Test the entire flow: Upload CV -> Display Original -> Click "Process CV" -> Display Improved CV.
- [ ] 3.3.2. Test with different file types (.pdf, .docx, .txt).
- [ ] 3.3.3. Test with various CV content examples.
- [ ] 3.3.4. Test error handling scenarios (invalid file types, large files, API errors, etc.).
- [ ] 3.3.5. Refine UI and user flow based on testing.
- [ ] 3.3.6. Basic code cleanup and commenting.
---
## 4. Deployment (Initial MVP Deployment)
**Functionality:** Deploying Frontend and Backend to hosting platforms.
### 4.1. Frontend Deployment to Vercel
- **Tasks:**
- [ ] 4.1.1. Create a Vercel account (if not already done).
- [ ] 4.1.2. Connect your frontend GitHub repository to Vercel.
- [ ] 4.1.3. Configure Vercel deployment settings (if needed, usually auto-detects Next.js).
- [ ] 4.1.4. Deploy frontend to Vercel.
- [ ] 4.1.5. Verify frontend is accessible on Vercel URL.
### 4.2. Backend Deployment to Render/Heroku (Choose one)
- **Tasks (Choose Render or Heroku):**
**Render Deployment:**
- [ ] 4.2.1. Create a Render account (if not already done).
- [ ] 4.2.2. Create a new Web Service on Render.
- [ ] 4.2.3. Connect your backend GitHub repository to Render.
- [ ] 4.2.4. Configure Render deployment settings (build command, start command, environment variables - especially OpenAI API key).
- [ ] 4.2.5. Deploy backend to Render.
- [ ] 4.2.6. Verify backend API is accessible on Render URL (e.g., test "Hello World" endpoint).
**Heroku Deployment:**
- [ ] 4.2.1. Create a Heroku account (if not already done).
- [ ] 4.2.2. Create a new Heroku app.
- [ ] 4.2.3. Connect your backend GitHub repository to Heroku.
- [ ] 4.2.4. Configure Heroku deployment settings (buildpacks, environment variables - especially OpenAI API key).
- [ ] 4.2.5. Deploy backend to Heroku.
- [ ] 4.2.6. Verify backend API is accessible on Heroku URL (e.g., test "Hello World" endpoint).
### 4.3. Configure Frontend to Connect to Deployed Backend
- **Tasks:**
- [ ] 4.3.1. In Frontend code, update API endpoint URLs to point to your deployed backend URL (on Render/Heroku).
- [ ] 4.3.2. Re-deploy frontend to Vercel with updated backend URL.
- [ ] 4.3.3. Test the entire application flow on the deployed environment (Vercel frontend connected to Render/Heroku backend).
---
**Next Steps (Post MVP - Not part of this decomposed list yet, for future planning):**
- Implement User Accounts
- Enhance LLM Prompting and Analysis (ATS Optimization, Job Description Tailoring)
- Implement Suggestion Editing and User Feedback
- Advanced UI/UX improvements
- Monetization Strategy Implementation
- ... and so on
**Follow this decomposed task list sequentially. Start with "1. Frontend Development - 1.1. UI Structure and Basic Layout" and work your way down. This provides a clear path to build your MVP.**

View File

@ -0,0 +1,30 @@
---
description: Details the technology stack chosen for the CV Optimization Platform.
globs:
---
# Technology Stack
## Frontend
- **Framework:** React.js (using Next.js for server-side rendering and routing)
- **Styling:** Tailwind CSS
- **State Management:** (Initially simple React state, consider Context API or Zustand if complexity grows)
- **UI Library:** (Consider a component library like Material UI or Ant Design if needed for more complex UI elements later)
## Backend
- **Language:** Node.js (with Express.js framework) or Python (with Flask/FastAPI - *Decision needed based on your preference and libraries for LLM interaction*)
- **API Framework:** Express.js (Node.js) or Flask/FastAPI (Python)
- **Document Processing Libraries:**
- Python: `PyPDF2`, `python-docx` (if using Python backend)
- Node.js: `pdf-parse`, `docx-parser` (if using Node.js backend)
## Database
- **Initial Stage:** File-based storage (for simplicity in MVP) - *Consider moving to MongoDB or PostgreSQL for user accounts and scalable data storage later.*
- **Future Scalable Database:** MongoDB (NoSQL) or PostgreSQL (Relational) - *To be decided based on data structure and scalability needs.*
## LLM Integration
- **Primary LLM Provider:** OpenAI API (GPT models) - for initial development and ease of use.
- **Potential Alternative:** Hugging Face Transformers - for exploring open-source models and potential cost optimization in the future.
## Deployment
- **Frontend Hosting:** Vercel (for Next.js applications)
- **Backend Hosting:** Render or Heroku (or AWS/Google Cloud/Azure for more control and scalability)

View File

@ -1,22 +1,13 @@
-All files are located in the my-app folder.
-All cursor rules are located in the .cursor folder.
-## Cursor Rules Overview
-### CV Platform Development Protocol
-- **File**: `.cursor/rules/cv-platform-rules.mdc`
-- **Description**: Contains development protocols for the CV platform, including security measures, LLM implementation rules, and task management guidelines.
-### Project Overview
-- **File**: `.cursor/rules/project_overview.mdc`
-- **Description**: Provides a comprehensive overview of the CV optimization platform, including key features, tech stack, core functionalities, and additional requirements.
-### Resume Resources
-- **File**: `.cursor/rules/resume_resources.md`
-- **Description**: A collection of resources for resume building, including templates, writing tools, and job websites.
-### Task List
-- **File**: `.cursor/rules/task_list.md`
-- **Description**: Contains a list of tasks for the project, organized by priority and current sprint.
+All files are located in the `my-app` folder.
+All cursor rules are located in the `.cursor/rules` folder.
+## Cursor Project Rules
+- **Project Overview**: `.cursor/rules/project_overview.md` - High-level project description, goals, and target audience.
+- **Tech Stack**: `.cursor/rules/tech_stack.md` - Detailed technology stack being used for the project.
+- **Features & Functionalities**: `.cursor/rules/features.md` - List of key features and functionalities to be implemented.
+- **Development Requirements**: `.cursor/rules/requirements.md` - Non-functional requirements, best practices, and coding guidelines.
+- **Task List**: `.cursor/rules/task_list.md` - Current task list with status and priority.
+- **Resume Resources**: `.cursor/rules/resume_resources.md` - External resources and helpful links.
+- **Coding Conventions**: `.cursor/rules/coding_conventions.md` - Specific coding style and conventions to follow.
+- **Deployment Plan**: `.cursor/rules/deployment_plan.md` - Outline for deployment process and environment.

View File

@ -18,6 +18,7 @@ if (!fs.existsSync(uploadDir)) {
 }
 export default async function handler(req: NextApiRequest, res: NextApiResponse) {
+  console.log("Received request for file upload"); // Debug log
   const form = new formidable.IncomingForm();
   form.uploadDir = uploadDir; // Set the upload directory
   form.keepExtensions = true; // Keep file extensions
@ -30,10 +31,12 @@ export default async function handler(req: NextApiRequest, res: NextApiResponse)
     const file = files.cv; // Access the uploaded file
     if (!file) {
+      console.warn("No file uploaded."); // Warning log
       return res.status(400).json({ error: "No file uploaded." });
     }
     const newFilePath = path.join(uploadDir, file.originalFilename || file.newFilename);
+    console.log(`Moving file to: ${newFilePath}`); // Debug log
     // Move the file to the desired location
     fs.rename(file.filepath, newFilePath, (err) => {
@ -41,6 +44,7 @@ export default async function handler(req: NextApiRequest, res: NextApiResponse)
console.error("Error saving the file:", err); // Log the error console.error("Error saving the file:", err); // Log the error
return res.status(500).json({ error: "Error saving the file." }); return res.status(500).json({ error: "Error saving the file." });
} }
console.log("File uploaded successfully!"); // Debug log
res.status(200).json({ message: "File uploaded successfully!" }); res.status(200).json({ message: "File uploaded successfully!" });
}); });
}); });
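Beyond the debug logging added above, two details of this handler are worth noting: with `formidable` pinned at `^3.5.2` in package.json, `files.cv` can be a `File[]` rather than a single `File`, and joining the client-supplied `originalFilename` straight into `uploadDir` permits path traversal (the project's own rules say to sanitize user input). A minimal sketch of the normalization and sanitization the handler could apply — `pickUpload` and `safeDestination` are illustrative names, not code from this commit:

```typescript
import * as path from "path";

// formidable v3 can return File[] for a field where v2 returned a single File;
// only the fields this handler reads are modeled here.
type UploadedFile = {
  filepath: string;
  originalFilename?: string | null;
  newFilename: string;
};

// Normalize the v2/v3 shapes of `files.cv` to a single file (or undefined).
function pickUpload(
  field: UploadedFile | UploadedFile[] | undefined
): UploadedFile | undefined {
  return Array.isArray(field) ? field[0] : field;
}

// basename() drops directory components, so a client-supplied name like
// "../../etc/passwd" cannot escape uploadDir.
function safeDestination(uploadDir: string, file: UploadedFile): string {
  const name = path.basename(file.originalFilename || file.newFilename);
  return path.join(uploadDir, name);
}
```

In the handler this would read `const file = pickUpload(files.cv)` followed by `fs.rename(file.filepath, safeDestination(uploadDir, file), ...)`, leaving the response logic unchanged.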

View File

@@ -6,7 +6,8 @@
     "dev": "next dev --turbopack",
     "build": "next build",
     "start": "next start",
-    "lint": "next lint"
+    "lint": "next lint",
+    "debug": "NODE_DEBUG=next node server.js"
   },
   "dependencies": {
     "formidable": "^3.5.2",

View File

@@ -1,13 +0,0 @@
#!/bin/zsh
# Install core dependencies
brew install poppler
npm install -D eslint @types/node
# Configure Cline models
curl -O https://raw.githubusercontent.com/instructa/ai-prompts/main/cline-defaults.json
# Setup Python virtualenv
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

View File

@@ -1,151 +0,0 @@
#!/bin/bash
# Create directory structure
mkdir -p \
.cursor/rules \
.cursor/config \
.cursor/prompts \
.notes/meeting_notes \
scripts \
utils
# Create core configuration files
cat > .cursor/rules/cv-platform-rules.mdc <<EOF
**CV Platform Development Protocol**
1. Security First:
- Always sanitize user input using DOMPurify
- Encrypt resume files with AES-256-GSM during storage
2. LLM Implementation Rules:
- Use OpenAI text-embedding-3-small for semantic analysis
- Maintain 0.7 temperature for balance between creativity/accuracy
- Verify ATS keyword suggestions against JobCopilot's latest data
3. Task Management:
- Reference .notes/task_list.md before making changes
- Create atomic Git commits per feature using Conventional Commits
EOF
cat > .cursor/config/llm-params.json <<EOF
{
"defaultModel": "claude-3.5-sonnet-20240620",
"fallbackModel": "gpt-4-turbo-2024-04-09",
"temperature": {
"codeGeneration": 0.3,
"errorDebugging": 0.7
},
"contextWindow": 16000
}
EOF
# Create task tracking system
cat > .notes/task_list.md <<EOF
## P-0: Critical Path (Current Sprint)
- [ ] Implement PDF parser using PyPDF2 and @breezypdf/pdf-extract
- [ ] Create ATS keyword mapping system (Due: 2025-03-01)
## P-1: Near-Term Backlog
- [ ] Design premium subscription flow
- [ ] Research GDPR-compliant storage solutions
## P-X: Innovation Pipeline
- [ ] Experiment with Claude 3 Opus for cover letter generation
EOF
# Create prompt engineering files
cat > .cursor/prompts/cv-analysis.md <<EOF
# CV Enhancement Workflow
1. **Structural Analysis**
- Identify missing sections using industry benchmarks
- Check chronological consistency of employment history
2. **ATS Optimization**
- Cross-reference with 50+ tracking systems
- Generate keyword gap report
3. **LLM Enhancement**
- Rewrite summaries using power verbs
- Convert responsibilities to measurable achievements
EOF
cat > .cursor/prompts/error-handling.md <<EOF
**Debugging Process**
1. Reproduce the error in isolation
2. Analyze stack trace with @backend/logger.js
3. Propose 3 potential solutions with pros/cons
4. Implement safest option with rollback plan
EOF
# Create development workflow files
cat > scripts/setup-environment.sh <<EOF
#!/bin/zsh
# Install core dependencies
brew install poppler
npm install -D eslint @types/node
# Configure Cline models
curl -O https://raw.githubusercontent.com/instructa/ai-prompts/main/cline-defaults.json
# Setup Python virtualenv
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
EOF
chmod +x scripts/setup-environment.sh
# Create sample code files
cat > utils/resume_analysis.py <<EOF
from openai import OpenAI
from pdfminer.high_level import extract_text
client = OpenAI()
def analyze_resume(file_path):
text = extract_text(file_path)
response = client.chat.completions.create(
model="gpt-4-turbo",
messages=[{
"role": "system",
"content": "Analyze resume for:\\n1. Missing ATS keywords\\n2. Skill gaps\\n3. Achievement opportunities"
},
{"role": "user", "content": text}]
)
return response.choices[0].message.content
EOF
cat > utils/pdf.worker.js <<EOF
const { PDFDocument } = require('pdf-lib');
self.addEventListener('message', async (e) => {
const pdfDoc = await PDFDocument.load(e.data);
const pages = pdfDoc.getPages();
const textContent = pages.map(p => p.getTextContent());
self.postMessage(textContent);
});
EOF
# Create project documentation
cat > .notes/project_overview.md <<EOF
# CV Optimization Platform
## Key Features
- AI-powered resume analysis
- ATS compliance checking
- Real-time editing suggestions
- GDPR-compliant storage
## Tech Stack
- React.js frontend
- Node.js/Express.js backend
- MongoDB database
- OpenAI/Cline LLM integration
EOF
echo "Project structure created successfully!"
echo "Next steps:"
echo "1. Run 'chmod +x scripts/*.sh'"
echo "2. Run 'scripts/setup-environment.sh'"
echo "3. Install VS Code/Cursor extensions from .vscode/extensions.json"
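The deleted setup script's `cv-analysis.md` prompt calls for a "keyword gap report", but no implementation ships with this commit. For reference, the idea reduces to a set difference between job-posting and resume tokens — a self-contained sketch, with `tokens`/`keywordGaps` and the naive tokenizer being assumptions rather than project code:

```typescript
// Naive tokenizer: lowercase, split on non-word characters.
function tokens(text: string): Set<string> {
  return new Set(text.toLowerCase().split(/\W+/).filter(Boolean));
}

// Keywords from the job posting that never appear in the resume text,
// sorted for a stable report.
function keywordGaps(resume: string, jobPosting: string): string[] {
  const have = tokens(resume);
  return Array.from(tokens(jobPosting))
    .filter((k) => !have.has(k))
    .sort();
}
```

For example, `keywordGaps("Built React dashboards", "React TypeScript dashboards CI")` returns `["ci", "typescript"]`. A production version would need stemming and multi-word phrase handling, which this sketch deliberately omits.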