# MCP Server - AI-Powered Code Editor

A comprehensive server that automatically clones Gitea repositories, analyzes code with AI models (Gemini/OpenAI), applies intelligent code changes, and commits them back to the repository.

## 🚀 Features

- **Repository Management**: Clone repositories from Gitea with authentication
- **AI-Powered Analysis**: Use Gemini CLI or OpenAI to analyze and edit code
- **Model Selection**: Choose specific AI models (e.g., gemini-1.5-pro, gpt-4)
- **Real-time Progress Tracking**: Web interface with live status updates
- **Modern UI**: Beautiful, responsive frontend with progress indicators
- **Background Processing**: Asynchronous task processing with status monitoring
- **Comprehensive Logging**: Full logging to both console and file
- **Docker Support**: Easy deployment with Docker and docker-compose

## 📋 Prerequisites

- Python 3.8+
- Git
- API keys for AI models (Gemini or OpenAI)

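You can quickly confirm the local prerequisites are available:

```bash
# Check that Python 3.8+ and Git are installed
python3 --version
git --version
```
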
## 🛠️ Installation

### Option 1: Docker (Recommended)

1. **Clone the repository**
   ```bash
   git clone <your-repo-url>
   cd mcp-server
   ```

2. **Build and run with Docker Compose**
   ```bash
   docker-compose up --build
   ```

3. **Or build and run manually**
   ```bash
   docker build -t mcp-server .
   docker run -p 8000:8000 mcp-server
   ```

### Option 2: Local Installation

1. **Clone the repository**
   ```bash
   git clone <your-repo-url>
   cd mcp-server
   ```

2. **Install Python dependencies**
   ```bash
   pip install -r requirements.txt
   ```

3. **Install Gemini CLI (if using Gemini)**
   ```bash
   # Download from GitHub releases
   curl -L https://github.com/google/generative-ai-go/releases/latest/download/gemini-linux-amd64 -o /usr/local/bin/gemini
   chmod +x /usr/local/bin/gemini
   ```

4. **Start the server**
   ```bash
   python main.py
   # or
   python start.py
   ```

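Once the server is running (via Docker or locally), you can confirm it is up with the health check endpoint (see API Endpoints below):

```bash
# Should return a success response if the server is running
curl http://localhost:8000/health
```
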
## 🚀 Usage

### Using the Web Interface

1. Open your browser and navigate to `http://localhost:8000`
2. Fill in the repository details:
   - **Gitea Repository URL**: Your repository URL (e.g., `http://157.66.191.31:3000/user/repo.git`)
   - **Gitea Token**: Your Gitea access token (get from Settings → Applications → Generate new token)
   - **AI Model**: Choose between Gemini CLI or OpenAI
   - **Model Name**: Specify the exact model (e.g., `gemini-1.5-pro`, `gpt-4`)
   - **API Key**: Your AI model API key
   - **Prompt**: Describe what changes you want to make to the code
3. Click "Process Repository" and monitor the progress

### API Endpoints

- `GET /` - Web interface
- `POST /process` - Start repository processing
- `GET /status/{task_id}` - Get processing status
- `GET /health` - Health check

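For scripted use, the same flow can be driven over HTTP. The sketch below uses `curl`; the JSON field names are an assumption based on the web form fields above, so check the request model in `main.py` for the authoritative schema:

```bash
# Start processing (field names are illustrative -- see main.py for the real request schema)
curl -X POST http://localhost:8000/process \
  -H "Content-Type: application/json" \
  -d '{
        "repo_url": "http://157.66.191.31:3000/user/repo.git",
        "gitea_token": "<your-gitea-token>",
        "ai_model": "gemini",
        "model_name": "gemini-1.5-pro",
        "api_key": "<your-ai-api-key>",
        "prompt": "Add error handling to all API endpoints"
      }'

# Poll the status of a task using the task_id returned by /process
curl http://localhost:8000/status/<task_id>
```
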
## 🔧 Configuration

### Environment Variables

| Variable | Description | Default |
|----------|-------------|---------|
| `HOST`   | Server host | `0.0.0.0` |
| `PORT`   | Server port | `8000` |

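For example, to run the server on a different port (assuming the container respects the same variables as the local process):

```bash
# Local run on a different port
export HOST=0.0.0.0
export PORT=9000
python main.py

# Or pass the same variables to the Docker container
docker run -e HOST=0.0.0.0 -e PORT=9000 -p 9000:9000 mcp-server
```
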
### Supported AI Models

**Gemini Models:**
- `gemini-1.5-pro` (recommended)
- `gemini-1.5-flash`
- `gemini-1.0-pro`

**OpenAI Models:**
- `gpt-4`
- `gpt-4-turbo`
- `gpt-3.5-turbo`

### Supported File Types

The system analyzes and can modify:
- Python (`.py`)
- JavaScript (`.js`, `.jsx`)
- TypeScript (`.ts`, `.tsx`)
- HTML (`.html`)
- CSS (`.css`)
- JSON (`.json`)
- Markdown (`.md`)

## 📁 Project Structure

```
mcp-server/
├── main.py               # FastAPI application
├── requirements.txt      # Python dependencies
├── Dockerfile            # Docker configuration
├── docker-compose.yml    # Docker Compose configuration
├── README.md             # This file
├── templates/
│   └── index.html        # Frontend template
├── static/
│   ├── style.css         # Frontend styles
│   └── script.js         # Frontend JavaScript
└── logs/                 # Log files (created by Docker)
```

## 🔄 How It Works

1. **Repository Cloning**: Authenticates with Gitea and clones the repository
2. **AI Analysis**: Sends code and prompt to the selected AI model
3. **Code Modification**: Applies AI-suggested changes to the codebase
4. **Commit & Push**: Commits changes and pushes back to Gitea

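The Git side of this flow is roughly equivalent to the commands below, shown only as an illustration of what the server automates; the actual implementation lives in `main.py`, and the token-as-password URL is one common way to authenticate HTTP clones against Gitea:

```bash
# 1. Clone using the Gitea token for authentication (illustrative)
git clone http://<username>:<gitea-token>@157.66.191.31:3000/user/repo.git workdir
cd workdir

# 2-3. The AI model is prompted and its suggested changes are written into the working tree

# 4. Commit and push the modified files back to Gitea
git add -A
git commit -m "Apply AI-suggested changes"
git push origin HEAD
```
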
## 🎯 Example Prompts

- "Add error handling to all API endpoints"
- "Optimize database queries for better performance"
- "Add comprehensive logging throughout the application"
- "Refactor the authentication system to use JWT tokens"
- "Add unit tests for all utility functions"

## 📊 Logging

The server provides comprehensive logging:
- **Console Output**: Real-time logs in the terminal
- **File Logging**: Logs saved to `mcp_server.log`
- **Task-specific Logging**: Each task has detailed logging with its task ID

### Viewing Logs

**Docker:**
```bash
# View container logs
docker logs <container_id>

# Follow logs in real-time
docker logs -f <container_id>
```

**Local:**
```bash
# View log file
tail -f mcp_server.log
```

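Because each task logs with its task ID, you can also filter the log file down to a single run (use the ID reported by the web interface or the `/process` response):

```bash
# Show only the log lines for one task
grep "<task_id>" mcp_server.log
```
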
## 🔒 Security Considerations

- API keys are sent from the frontend and are not stored
- Use HTTPS in production
- Implement proper authentication for the web interface
- Regularly update dependencies
- Monitor API usage and costs

## 🐛 Troubleshooting

### Common Issues

1. **Repository cloning fails**
   - Verify the Gitea token is valid and has repository access (see the token check below)
   - Check the repository URL format
   - Ensure the repository exists and is accessible
   - Make sure the token has appropriate permissions (read/write)

2. **AI model errors**
   - Verify API keys are correct
   - Check model name spelling
   - Ensure internet connectivity

3. **Gemini CLI not found**
   - Install Gemini CLI: `curl -L https://github.com/google/generative-ai-go/releases/latest/download/gemini-linux-amd64 -o /usr/local/bin/gemini && chmod +x /usr/local/bin/gemini`

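A quick way to verify a Gitea token before retrying is to call the Gitea API with it (replace the host with your Gitea instance); a valid token returns your user profile:

```bash
# Returns your Gitea user profile if the token is valid
curl -H "Authorization: token <your-gitea-token>" http://157.66.191.31:3000/api/v1/user
```
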
### Logs

Check the logs for detailed error messages and processing status:
- **Frontend**: Real-time logs in the web interface
- **Backend**: Console and file logs with detailed information

## 🤝 Contributing

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests if applicable
5. Submit a pull request

## 📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

## 🆘 Support

For issues and questions:
1. Check the troubleshooting section
2. Review the logs in the web interface and console
3. Create an issue in the repository

---

**Note**: This tool modifies code automatically. Always review changes before deploying to production environments.