first push

Philipp
2025-11-28 12:50:27 +01:00
parent 471ea10341
commit 5220ffbe46
84 changed files with 1857 additions and 1527 deletions

1
.gitignore vendored

@@ -2,3 +2,4 @@
backend/uploads/*.pth
*.pth
backend/node_modules/
backend/.venv/

40
backend/.gitignore vendored Normal file

@@ -0,0 +1,40 @@
# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
venv/
env/
ENV/
*.egg-info/
dist/
build/
# Flask
instance/
.webassets-cache
# Database
*.db
*.sqlite
# Environment
.env
.flaskenv
# IDE
.vscode/
.idea/
*.swp
*.swo
# OS
.DS_Store
Thumbs.db
# Logs
*.log
# Uploads
uploads/*.pth


@@ -0,0 +1,148 @@
# Backend Conversion Summary
## ✅ Conversion Complete
Your Node.js backend has been successfully converted to Python using Flask and SQLAlchemy.
## 📁 New Python Files Created
### Core Application
- **app.py** - Main Flask application (replaces server.js)
- **start.py** - Startup script for easy launching
- **requirements.txt** - Python dependencies (replaces package.json)
### Database Layer
- **database/database.py** - SQLAlchemy database configuration (replaces database.js)
### Models (Sequelize → SQLAlchemy)
- **models/TrainingProject.py**
- **models/TrainingProjectDetails.py**
- **models/training.py**
- **models/LabelStudioProject.py**
- **models/Images.py**
- **models/Annotation.py**
- **models/__init__.py**
### API Routes
- **routes/api.py** - All API endpoints converted to Flask blueprints (replaces api.js)
- **routes/__init__.py**
### Services
- **services/fetch_labelstudio.py** - Label Studio API integration
- **services/seed_label_studio.py** - Database seeding logic
- **services/generate_json_yolox.py** - COCO JSON generation
- **services/generate_yolox_exp.py** - YOLOX experiment file generation
- **services/push_yolox_exp.py** - Save training settings to DB
- **services/__init__.py**
### Documentation
- **README.md** - Comprehensive documentation
- **QUICKSTART.md** - Quick setup guide
- **.gitignore** - Python-specific ignore patterns
## 🔄 Key Changes
### Technology Stack
| Component | Node.js | Python |
|-----------|---------|--------|
| Framework | Express.js | Flask |
| ORM | Sequelize | SQLAlchemy |
| HTTP Client | node-fetch | requests |
| Package Manager | npm | pip |
| Runtime | Node.js | Python 3.8+ |
### API Compatibility
✅ All endpoints preserved with same URLs
✅ Request/response formats maintained
✅ Same database schema
✅ Same business logic
### Converted Features
- ✅ Training project management
- ✅ Label Studio integration
- ✅ YOLOX configuration and training
- ✅ File upload handling
- ✅ Image and annotation management
- ✅ COCO JSON generation
- ✅ Training logs
## 🚀 Getting Started
1. **Install dependencies:**
```bash
cd backend
python -m venv venv
.\venv\Scripts\Activate.ps1 # Windows
pip install -r requirements.txt
```
2. **Run the server:**
```bash
python start.py
```
3. **Server runs at:** `http://0.0.0.0:3000`
## 📦 Dependencies Installed
- Flask 3.0.0 - Web framework
- Flask-CORS 4.0.0 - Cross-origin resource sharing
- Flask-SQLAlchemy 3.1.1 - ORM integration
- SQLAlchemy 2.0.23 - Database ORM
- PyMySQL 1.1.0 - MySQL driver
- requests 2.31.0 - HTTP client
- Pillow 10.1.0 - Image processing
## ⚠️ Important Notes
1. **Virtual Environment**: Always activate the virtual environment before running
2. **Database**: MySQL must be running with the `myapp` database created
3. **Credentials**: Update database credentials in `app.py` if needed
4. **Python Version**: Requires Python 3.8 or higher
## 🧪 Testing
Test the conversion:
```bash
# Get all training projects
curl http://localhost:3000/api/training-projects
# Get Label Studio projects
curl http://localhost:3000/api/label-studio-projects
```
## 📝 Original Files
Your original Node.js files remain untouched:
- server.js
- package.json
- routes/api.js
- models/*.js (JavaScript)
- services/*.js (JavaScript)
You can keep them as backup or remove them once you verify the Python version works correctly.
## 🔍 What to Verify
1. ✅ Database connection works
2. ✅ All API endpoints respond correctly
3. ✅ File uploads work
4. ✅ Label Studio integration works
5. ✅ YOLOX training can be triggered
6. ✅ COCO JSON generation works
## 🐛 Troubleshooting
See **QUICKSTART.md** for common issues and solutions.
## 📚 Further Documentation
- **README.md** - Complete project documentation
- **QUICKSTART.md** - Setup guide
- **API Documentation** - All endpoints documented in README.md
---
**Conversion completed successfully!** 🎉
Your backend is now running on Python with Flask and SQLAlchemy.

113
backend/QUICKSTART.md Normal file

@@ -0,0 +1,113 @@
# Quick Start Guide - Python Backend
## Step-by-Step Setup
### 1. Install Python
Make sure you have Python 3.8 or higher installed:
```bash
python --version
```
### 2. Create Virtual Environment
```bash
cd backend
python -m venv venv
```
### 3. Activate Virtual Environment
**Windows:**
```powershell
.\venv\Scripts\Activate.ps1
```
**Linux/Mac:**
```bash
source venv/bin/activate
```
### 4. Install Dependencies
```bash
pip install -r requirements.txt
```
### 5. Verify Database Connection
Make sure MySQL is running and the database `myapp` exists:
```sql
CREATE DATABASE IF NOT EXISTS myapp;
```
### 6. Run the Server
```bash
python start.py
```
Or:
```bash
python app.py
```
The server should now be running at `http://0.0.0.0:3000`
## Testing the API
Test if the server is working:
```bash
curl http://localhost:3000/api/training-projects
```
## Common Issues
### ModuleNotFoundError
If you get import errors, make sure you've activated the virtual environment and installed all dependencies.
### Database Connection Error
Check that:
- MySQL is running
- Database credentials in `app.py` are correct
- Database `myapp` exists
### Port Already in Use
If port 3000 is already in use, modify the port in `app.py`:
```python
app.run(host='0.0.0.0', port=3001, debug=True)
```
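Before editing the port, it can help to confirm that 3000 really is taken. A small stdlib-only helper (not part of the backend) for that check:

```python
import socket

def port_is_free(port, host="127.0.0.1"):
    """Return True if nothing is currently listening on (host, port)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        # connect_ex returns 0 when a connection succeeds, i.e. the port is busy
        return s.connect_ex((host, port)) != 0

free = port_is_free(3000)
```

If `port_is_free(3000)` returns `True`, the "address already in use" error likely comes from a stale server process rather than another application.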
## What Changed from Node.js
1. **Server Framework**: Express.js → Flask
2. **ORM**: Sequelize → SQLAlchemy
3. **HTTP Client**: node-fetch → requests
4. **Package Manager**: npm → pip
5. **Dependencies**: package.json → requirements.txt
6. **Startup**: `node server.js` → `python app.py`
## Next Steps
1. Test all API endpoints
2. Update frontend to point to the new Python backend (if needed)
3. Migrate any remaining Node.js-specific logic
4. Test file uploads and downloads
5. Test YOLOX training functionality
## File Structure Comparison
**Before (Node.js):**
```
backend/
├── server.js
├── package.json
├── routes/api.js
├── models/*.js
└── services/*.js
```
**After (Python):**
```
backend/
├── app.py
├── requirements.txt
├── routes/api.py
├── models/*.py
└── services/*.py
```

107
backend/README.md Normal file

@@ -0,0 +1,107 @@
# Python Backend for COCO Tool
This is the converted Python backend using Flask and SQLAlchemy.
## Setup
1. Create a virtual environment (recommended):
```bash
python -m venv venv
```
2. Activate the virtual environment:
- Windows: `venv\Scripts\activate`
- Linux/Mac: `source venv/bin/activate`
3. Install dependencies:
```bash
pip install -r requirements.txt
```
## Running the Server
### Option 1: Using start.py
```bash
python start.py
```
### Option 2: Using Flask directly
```bash
python app.py
```
### Option 3: Using Flask CLI
```bash
flask --app app run --host=0.0.0.0 --port=3000
```
The server will start on `http://0.0.0.0:3000`
## Database Configuration
The database configuration is in `database/database.py`. Default settings:
- Host: localhost
- Database: myapp
- User: root
- Password: root
Modify `app.py` to change these settings.
## Project Structure
```
backend/
├── app.py # Main Flask application
├── start.py # Startup script
├── requirements.txt # Python dependencies
├── database/
│ └── database.py # Database configuration
├── models/ # SQLAlchemy models
│ ├── __init__.py
│ ├── Annotation.py
│ ├── Images.py
│ ├── LabelStudioProject.py
│ ├── training.py
│ ├── TrainingProject.py
│ └── TrainingProjectDetails.py
├── routes/
│ └── api.py # API endpoints
└── services/ # Business logic
├── fetch_labelstudio.py
├── generate_json_yolox.py
├── generate_yolox_exp.py
├── push_yolox_exp.py
└── seed_label_studio.py
```
## API Endpoints
All endpoints are prefixed with `/api`:
- `GET /api/seed` - Seed database from Label Studio
- `POST /api/generate-yolox-json` - Generate YOLOX training files
- `POST /api/start-yolox-training` - Start YOLOX training
- `GET /api/training-log` - Get training logs
- `GET/POST /api/training-projects` - Manage training projects
- `GET /api/label-studio-projects` - Get Label Studio projects
- `GET/POST/PUT /api/training-project-details` - Manage project details
- `POST /api/yolox-settings` - Save YOLOX settings
- `GET/DELETE /api/trainings` - Manage trainings
- `DELETE /api/training-projects/:id` - Delete training project
## Migration Notes
This is a direct conversion from Node.js/Express to Python/Flask:
- Express → Flask
- Sequelize ORM → SQLAlchemy ORM
- node-fetch → requests library
- Async routes maintained where needed
- All file paths and logic preserved from original
## Differences from Node.js Version
1. Python uses async/await differently - some routes may need adjustments
2. File handling uses Python's built-in open() instead of fs module
3. Subprocess calls use Python's subprocess module
4. JSON handling uses Python's json module
5. Path operations use os.path instead of Node's path module
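As a quick illustration of points 2, 4, and 5, here is a stdlib-only sketch; the file name and settings dict are made up for the example:

```python
import json
import os
import tempfile

# fs.writeFileSync / fs.readFileSync -> built-in open()
# JSON.stringify / JSON.parse        -> json.dumps / json.loads
# path.join                          -> os.path.join
settings = {"exp_name": "demo", "max_epoch": 300}

with tempfile.TemporaryDirectory() as tmp:
    cfg_path = os.path.join(tmp, "settings.json")  # path.join equivalent
    with open(cfg_path, "w") as f:                 # fs.writeFileSync equivalent
        f.write(json.dumps(settings))
    with open(cfg_path) as f:                      # fs.readFileSync equivalent
        loaded = json.loads(f.read())

print(loaded == settings)  # True
```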

43
backend/app.py Normal file

@@ -0,0 +1,43 @@
from flask import Flask, send_from_directory
from flask_cors import CORS
import os
from database.database import db, init_db

app = Flask(__name__, static_folder='..', static_url_path='')
CORS(app)

# Configure database
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql+pymysql://root:root@localhost/myapp'
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False

# Initialize database
db.init_app(app)

# Import and register blueprints
from routes.api import api_bp
app.register_blueprint(api_bp, url_prefix='/api')

# Serve static files (HTML, CSS, JS)
@app.route('/')
def index():
    return send_from_directory('..', 'index.html')

@app.route('/<path:path>')
def serve_static(path):
    return send_from_directory('..', path)

# Initialize DB and start server
if __name__ == '__main__':
    with app.app_context():
        try:
            # Test database connection
            db.engine.connect()
            print('DB connection established.')
            # Create tables if they don't exist
            db.create_all()
            # Start server
            app.run(host='0.0.0.0', port=3000, debug=True)
        except Exception as err:
            print(f'Failed to start: {err}')

14
backend/check_db.py Normal file

@@ -0,0 +1,14 @@
import pymysql

conn = pymysql.connect(host='localhost', user='root', password='root', database='myapp')
cursor = conn.cursor()
cursor.execute('DESCRIBE image')
rows = cursor.fetchall()
print("Current 'image' table structure:")
print("-" * 60)
for row in rows:
    print(f"Field: {row[0]:<15} Type: {row[1]:<15} Null: {row[2]}")
print("-" * 60)
conn.close()
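check_db.py needs a running MySQL server. The same kind of schema inspection can be tried offline with Python's built-in sqlite3, where `PRAGMA table_info` plays the role of `DESCRIBE`; the in-memory table below mirrors the columns of the `Image` model and is a stand-in, not the real database:

```python
import sqlite3

# In-memory stand-in for the MySQL `image` table, so the same kind of
# inspection can be tried without a running MySQL server.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE image ("
    " image_id INTEGER PRIMARY KEY,"
    " image_path TEXT NOT NULL,"
    " project_id INTEGER NOT NULL,"
    " width REAL, height REAL)"
)
# sqlite3's PRAGMA table_info returns (cid, name, type, notnull, dflt, pk) per column
columns = [row[1] for row in conn.execute("PRAGMA table_info(image)")]
print(columns)
conn.close()
```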


@@ -0,0 +1,4 @@
# Database module
from database.database import db
__all__ = ['db']


@@ -0,0 +1,9 @@
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()

def init_db(app):
    """Initialize database with app context"""
    db.init_app(app)
    with app.app_context():
        db.create_all()


@@ -0,0 +1,12 @@
-- Migration: Add width and height columns to image table
-- Date: 2025-11-27
USE myapp;
-- Add width and height columns to image table
ALTER TABLE `image`
ADD COLUMN `width` FLOAT NULL AFTER `image_path`,
ADD COLUMN `height` FLOAT NULL AFTER `width`;
-- Verify the changes
DESCRIBE `image`;


@@ -0,0 +1,23 @@
from database.database import db

class Annotation(db.Model):
    __tablename__ = 'annotation'
    annotation_id = db.Column(db.Integer, primary_key=True, autoincrement=True)
    image_id = db.Column(db.Integer, nullable=False)
    x = db.Column(db.Float, nullable=False)
    y = db.Column(db.Float, nullable=False)
    height = db.Column(db.Float, nullable=False)
    width = db.Column(db.Float, nullable=False)
    Label = db.Column(db.String(255), nullable=False)

    def to_dict(self):
        return {
            'annotation_id': self.annotation_id,
            'image_id': self.image_id,
            'x': self.x,
            'y': self.y,
            'height': self.height,
            'width': self.width,
            'Label': self.Label
        }
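The x/y/width/height columns line up with COCO's `bbox` convention, which `generate_json_yolox.py` targets. A hypothetical helper (not part of the codebase) showing how a `to_dict()` payload maps onto a COCO annotation entry:

```python
def annotation_to_coco(ann_dict, coco_id, category_id):
    """Map an Annotation.to_dict() payload onto a COCO-style annotation.

    Hypothetical helper for illustration; COCO stores bbox as
    [x, y, width, height] with area = width * height.
    """
    w, h = ann_dict["width"], ann_dict["height"]
    return {
        "id": coco_id,
        "image_id": ann_dict["image_id"],
        "category_id": category_id,
        "bbox": [ann_dict["x"], ann_dict["y"], w, h],
        "area": w * h,
        "iscrowd": 0,
    }

entry = annotation_to_coco(
    {"annotation_id": 1, "image_id": 7, "x": 10.0, "y": 20.0,
     "width": 30.0, "height": 40.0, "Label": "defect"},
    coco_id=1, category_id=0,
)
```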

19
backend/models/Images.py Normal file

@@ -0,0 +1,19 @@
from database.database import db

class Image(db.Model):
    __tablename__ = 'image'
    image_id = db.Column(db.Integer, primary_key=True, autoincrement=True)
    image_path = db.Column(db.String(500), nullable=False)
    project_id = db.Column(db.Integer, nullable=False)
    width = db.Column(db.Float)
    height = db.Column(db.Float)

    def to_dict(self):
        return {
            'image_id': self.image_id,
            'image_path': self.image_path,
            'project_id': self.project_id,
            'width': self.width,
            'height': self.height
        }


@@ -0,0 +1,13 @@
from database.database import db

class LabelStudioProject(db.Model):
    __tablename__ = 'label_studio_project'
    project_id = db.Column(db.Integer, primary_key=True, unique=True)
    title = db.Column(db.String(255), nullable=False)

    def to_dict(self):
        return {
            'project_id': self.project_id,
            'title': self.title
        }


@@ -0,0 +1,28 @@
from database.database import db

class TrainingProject(db.Model):
    __tablename__ = 'training_project'
    project_id = db.Column(db.Integer, primary_key=True, unique=True, autoincrement=True)
    title = db.Column(db.String(255), nullable=False)
    description = db.Column(db.String(500))
    classes = db.Column(db.JSON, nullable=False)
    project_image = db.Column(db.LargeBinary)
    project_image_type = db.Column(db.String(100))

    def to_dict(self):
        result = {
            'project_id': self.project_id,
            'title': self.title,
            'description': self.description,
            'classes': self.classes,
            'project_image_type': self.project_image_type
        }
        if self.project_image:
            import base64
            base64_data = base64.b64encode(self.project_image).decode('utf-8')
            mime_type = self.project_image_type or 'image/png'
            result['project_image'] = f'data:{mime_type};base64,{base64_data}'
        else:
            result['project_image'] = None
        return result
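The data-URI branch of `to_dict()` can be exercised on its own; a minimal stdlib sketch of the same encoding (the sample bytes are made up):

```python
import base64

def to_data_uri(image_bytes, mime_type=None):
    """Rebuild the data-URI string that to_dict() above produces."""
    b64 = base64.b64encode(image_bytes).decode("utf-8")
    return f"data:{mime_type or 'image/png'};base64,{b64}"

# A few fake PNG header bytes stand in for a stored project_image blob
uri = to_data_uri(b"\x89PNG\r\n")
```

A string of this shape can be assigned directly to an `<img src=...>` attribute in the frontend, which is why the model serializes the blob this way.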


@@ -0,0 +1,19 @@
from database.database import db

class TrainingProjectDetails(db.Model):
    __tablename__ = 'training_project_details'
    id = db.Column(db.Integer, primary_key=True, unique=True, autoincrement=True)
    project_id = db.Column(db.Integer, nullable=False, unique=True)
    annotation_projects = db.Column(db.JSON, nullable=False)
    class_map = db.Column(db.JSON)
    description = db.Column(db.JSON)

    def to_dict(self):
        return {
            'id': self.id,
            'project_id': self.project_id,
            'annotation_projects': self.annotation_projects,
            'class_map': self.class_map,
            'description': self.description
        }


@@ -0,0 +1,16 @@
# Import all models to ensure they are registered with SQLAlchemy
from models.TrainingProject import TrainingProject
from models.TrainingProjectDetails import TrainingProjectDetails
from models.training import Training
from models.LabelStudioProject import LabelStudioProject
from models.Images import Image
from models.Annotation import Annotation

__all__ = [
    'TrainingProject',
    'TrainingProjectDetails',
    'Training',
    'LabelStudioProject',
    'Image',
    'Annotation'
]


@@ -0,0 +1,92 @@
from database.database import db

class Training(db.Model):
    __tablename__ = 'training'
    id = db.Column(db.Integer, primary_key=True, autoincrement=True, unique=True)
    exp_name = db.Column(db.String(255))
    max_epoch = db.Column(db.Integer)
    depth = db.Column(db.Float)
    width = db.Column(db.Float)
    activation = db.Column(db.String(255))
    warmup_epochs = db.Column(db.Integer)
    warmup_lr = db.Column(db.Float)
    basic_lr_per_img = db.Column(db.Float)
    scheduler = db.Column(db.String(255))
    no_aug_epochs = db.Column(db.Integer)
    min_lr_ratio = db.Column(db.Float)
    ema = db.Column(db.Boolean)
    weight_decay = db.Column(db.Float)
    momentum = db.Column(db.Float)
    input_size = db.Column(db.JSON)
    print_interval = db.Column(db.Integer)
    eval_interval = db.Column(db.Integer)
    save_history_ckpt = db.Column(db.Boolean)
    test_size = db.Column(db.JSON)
    test_conf = db.Column(db.Float)
    nms_thre = db.Column(db.Float)
    multiscale_range = db.Column(db.Integer)
    enable_mixup = db.Column(db.Boolean)
    mosaic_prob = db.Column(db.Float)
    mixup_prob = db.Column(db.Float)
    hsv_prob = db.Column(db.Float)
    flip_prob = db.Column(db.Float)
    degrees = db.Column(db.Float)
    mosaic_scale = db.Column(db.JSON)
    mixup_scale = db.Column(db.JSON)
    translate = db.Column(db.Float)
    shear = db.Column(db.Float)
    training_name = db.Column(db.String(255))
    project_details_id = db.Column(db.Integer, nullable=False)
    seed = db.Column(db.Integer)
    train = db.Column(db.Integer)
    valid = db.Column(db.Integer)
    test = db.Column(db.Integer)
    selected_model = db.Column(db.String(255))
    transfer_learning = db.Column(db.String(255))
    model_upload = db.Column(db.LargeBinary)

    def to_dict(self):
        return {
            'id': self.id,
            'exp_name': self.exp_name,
            'max_epoch': self.max_epoch,
            'depth': self.depth,
            'width': self.width,
            'activation': self.activation,
            'warmup_epochs': self.warmup_epochs,
            'warmup_lr': self.warmup_lr,
            'basic_lr_per_img': self.basic_lr_per_img,
            'scheduler': self.scheduler,
            'no_aug_epochs': self.no_aug_epochs,
            'min_lr_ratio': self.min_lr_ratio,
            'ema': self.ema,
            'weight_decay': self.weight_decay,
            'momentum': self.momentum,
            'input_size': self.input_size,
            'print_interval': self.print_interval,
            'eval_interval': self.eval_interval,
            'save_history_ckpt': self.save_history_ckpt,
            'test_size': self.test_size,
            'test_conf': self.test_conf,
            'nms_thre': self.nms_thre,
            'multiscale_range': self.multiscale_range,
            'enable_mixup': self.enable_mixup,
            'mosaic_prob': self.mosaic_prob,
            'mixup_prob': self.mixup_prob,
            'hsv_prob': self.hsv_prob,
            'flip_prob': self.flip_prob,
            'degrees': self.degrees,
            'mosaic_scale': self.mosaic_scale,
            'mixup_scale': self.mixup_scale,
            'translate': self.translate,
            'shear': self.shear,
            'training_name': self.training_name,
            'project_details_id': self.project_details_id,
            'seed': self.seed,
            'train': self.train,
            'valid': self.valid,
            'test': self.test,
            'selected_model': self.selected_model,
            'transfer_learning': self.transfer_learning
        }


@@ -1,15 +0,0 @@
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Copyright (c) Megvii, Inc. and its affiliates.
import os

from yolox.exp import Exp as MyExp

class Exp(MyExp):
    def __init__(self):
        super(Exp, self).__init__()
        self.depth = 1.33
        self.width = 1.25
        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]


@@ -1,15 +0,0 @@
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Copyright (c) Megvii, Inc. and its affiliates.
import os

from yolox.exp import Exp as MyExp

class Exp(MyExp):
    def __init__(self):
        super(Exp, self).__init__()
        self.depth = 1.33
        self.width = 1.25
        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]


@@ -1,20 +0,0 @@
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Copyright (c) Megvii, Inc. and its affiliates.
import os

from yolox.exp import Exp as MyExp

class Exp(MyExp):
    def __init__(self):
        super(Exp, self).__init__()
        self.depth = 0.33
        self.width = 0.375
        self.input_size = (416, 416)
        self.mosaic_scale = (0.5, 1.5)
        self.random_size = (10, 20)
        self.test_size = (416, 416)
        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]
        self.enable_mixup = False
self.enable_mixup = False


@@ -1,15 +0,0 @@
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Copyright (c) Megvii, Inc. and its affiliates.
import os

from yolox.exp import Exp as MyExp

class Exp(MyExp):
    def __init__(self):
        super(Exp, self).__init__()
        self.depth = 1.33
        self.width = 1.25
        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]


@@ -1,20 +0,0 @@
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Copyright (c) Megvii, Inc. and its affiliates.
import os

from yolox.exp import Exp as MyExp

class Exp(MyExp):
    def __init__(self):
        super(Exp, self).__init__()
        self.depth = 0.33
        self.width = 0.375
        self.input_size = (416, 416)
        self.mosaic_scale = (0.5, 1.5)
        self.random_size = (10, 20)
        self.test_size = (416, 416)
        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]
        self.enable_mixup = False


@@ -1,20 +0,0 @@
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Copyright (c) Megvii, Inc. and its affiliates.
import os

from yolox.exp import Exp as MyExp

class Exp(MyExp):
    def __init__(self):
        super(Exp, self).__init__()
        self.depth = 0.33
        self.width = 0.375
        self.input_size = (416, 416)
        self.mosaic_scale = (0.5, 1.5)
        self.random_size = (10, 20)
        self.test_size = (416, 416)
        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]
        self.enable_mixup = False


@@ -1,15 +0,0 @@
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Copyright (c) Megvii, Inc. and its affiliates.
import os

from yolox.exp import Exp as MyExp

class Exp(MyExp):
    def __init__(self):
        super(Exp, self).__init__()
        self.depth = 0.33
        self.width = 0.50
        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]


@@ -1,15 +0,0 @@
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Copyright (c) Megvii, Inc. and its affiliates.
import os

from yolox.exp import Exp as MyExp

class Exp(MyExp):
    def __init__(self):
        super(Exp, self).__init__()
        self.depth = 0.33
        self.width = 0.50
        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]


@@ -1,15 +0,0 @@
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Copyright (c) Megvii, Inc. and its affiliates.
import os

from yolox.exp import Exp as MyExp

class Exp(MyExp):
    def __init__(self):
        super(Exp, self).__init__()
        self.depth = 0.33
        self.width = 0.50
        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]


@@ -1,15 +0,0 @@
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Copyright (c) Megvii, Inc. and its affiliates.
import os

from yolox.exp import Exp as MyExp

class Exp(MyExp):
    def __init__(self):
        super(Exp, self).__init__()
        self.depth = 0.33
        self.width = 0.50
        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]


@@ -1,22 +0,0 @@
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Copyright (c) Megvii, Inc. and its affiliates.
import os

from yolox.exp import Exp as MyExp

class Exp(MyExp):
    def __init__(self):
        super(Exp, self).__init__()
        self.data_dir = "/home/kitraining/To_Annotate/"
        self.train_ann = "coco_project_37_train.json"
        self.val_ann = "coco_project_37_valid.json"
        self.test_ann = "coco_project_37_test.json"
        self.num_classes = 1
        self.pretrained_ckpt = r'/home/kitraining/Yolox/YOLOX-main/pretrained/YOLOX-Tiny.pth'
        self.depth = 1.0
        self.width = 1.0
        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]
        self.enable_mixup = False


@@ -1,292 +0,0 @@
import os
import random

import torch
import torch.distributed as dist
import torch.nn as nn

# Dynamically import BaseExp from fixed path
import importlib.util
import sys

base_exp_path = '/home/kitraining/Yolox/YOLOX-main/yolox/exp/base_exp.py'
spec = importlib.util.spec_from_file_location('base_exp', base_exp_path)
base_exp = importlib.util.module_from_spec(spec)
sys.modules['base_exp'] = base_exp
spec.loader.exec_module(base_exp)
BaseExp = base_exp.BaseExp

__all__ = ["Exp", "check_exp_value"]


class Exp(BaseExp):
    def __init__(self):
        super().__init__()
        self.seed = None
        self.data_dir = r'/home/kitraining/To_Annotate/'
        self.train_ann = 'coco_project_37_train.json'
        self.val_ann = 'coco_project_37_valid.json'
        self.test_ann = 'coco_project_37_test.json'
        self.num_classes = 80
        self.pretrained_ckpt = r'/home/kitraining/Yolox/YOLOX-main/pretrained/YOLOX-l.pth'
        self.depth = 1.00
        self.width = 1.00
        self.act = 'silu'
        self.data_num_workers = 4
        self.input_size = (640, 640)
        self.multiscale_range = 5
        self.mosaic_prob = 1.0
        self.mixup_prob = 1.0
        self.hsv_prob = 1.0
        self.flip_prob = 0.5
        self.degrees = (10.0, 10.0)
        self.translate = (0.1, 0.1)
        self.mosaic_scale = (0.1, 2)
        self.enable_mixup = True
        self.mixup_scale = (0.5, 1.5)
        self.shear = (2.0, 2.0)
        self.warmup_epochs = 5
        self.max_epoch = 300
        self.warmup_lr = 0
        self.min_lr_ratio = 0.05
        self.basic_lr_per_img = 0.01 / 64.0
        self.scheduler = 'yoloxwarmcos'
        self.no_aug_epochs = 15
        self.ema = True
        self.weight_decay = 5e-4
        self.momentum = 0.9
        self.print_interval = 10
        self.eval_interval = 10
        self.save_history_ckpt = True
        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split('.')[0]
        self.test_size = (640, 640)
        self.test_conf = 0.01
        self.nmsthre = 0.65
        self.exp_name = 'custom_exp123'
        self.max_epoch = 300
        self.depth = 1
        self.width = 1
        self.activation = 'silu'
        self.warmup_epochs = 5
        self.warmup_lr = 0
        self.scheduler = 'yoloxwarmcos'
        self.no_aug_epochs = 15
        self.min_lr_ratio = 0.05
        self.ema = True
        self.weight_decay = 0.0005
        self.momentum = 0.9
        self.input_size = (640, 640)
        self.print_interval = 10
        self.eval_interval = 10
        self.save_history_ckpt = True
        self.test_size = (640, 640)
        self.test_conf = 0.01
        self.multiscale_range = 5
        self.enable_mixup = True
        self.mosaic_prob = 1
        self.mixup_prob = 1
        self.hsv_prob = 1
        self.flip_prob = 0.5
        self.degrees = (10, 10)
        self.mosaic_scale = (0.1, 2)
        self.mixup_scale = (0.5, 1.5)
        self.translate = (0.1, 0.1)
        self.shear = (2, 2)
        self.project_details_id = 37
        self.selected_model = 'YOLOX-l'
        self.transfer_learning = 'coco'

    def get_model(self):
        from yolox.models import YOLOX, YOLOPAFPN, YOLOXHead

        def init_yolo(M):
            for m in M.modules():
                if isinstance(m, nn.BatchNorm2d):
                    m.eps = 1e-3
                    m.momentum = 0.03

        if getattr(self, 'model', None) is None:
            in_channels = [256, 512, 1024]
            backbone = YOLOPAFPN(self.depth, self.width, in_channels=in_channels, act=self.act)
            head = YOLOXHead(self.num_classes, self.width, in_channels=in_channels, act=self.act)
            self.model = YOLOX(backbone, head)

        self.model.apply(init_yolo)
        self.model.head.initialize_biases(1e-2)
        self.model.train()
        return self.model

    def get_dataset(self, cache=False, cache_type='ram'):
        from yolox.data import COCODataset, TrainTransform
        return COCODataset(
            data_dir=self.data_dir,
            json_file=self.train_ann,
            img_size=self.input_size,
            preproc=TrainTransform(
                max_labels=50,
                flip_prob=self.flip_prob,
                hsv_prob=self.hsv_prob
            ),
            cache=cache,
            cache_type=cache_type,
        )

    def get_data_loader(self, batch_size, is_distributed, no_aug=False, cache_img=None):
        from yolox.data import (
            TrainTransform,
            YoloBatchSampler,
            DataLoader,
            InfiniteSampler,
            MosaicDetection,
            worker_init_reset_seed,
        )
        from yolox.utils import wait_for_the_master

        if self.dataset is None:
            with wait_for_the_master():
                assert cache_img is None, 'cache_img must be None if you did not create self.dataset before launch'
                self.dataset = self.get_dataset(cache=False, cache_type=cache_img)

        self.dataset = MosaicDetection(
            dataset=self.dataset,
            mosaic=not no_aug,
            img_size=self.input_size,
            preproc=TrainTransform(
                max_labels=120,
                flip_prob=self.flip_prob,
                hsv_prob=self.hsv_prob),
            degrees=self.degrees,
            translate=self.translate,
            mosaic_scale=self.mosaic_scale,
            mixup_scale=self.mixup_scale,
            shear=self.shear,
            enable_mixup=self.enable_mixup,
            mosaic_prob=self.mosaic_prob,
            mixup_prob=self.mixup_prob,
        )

        if is_distributed:
            batch_size = batch_size // dist.get_world_size()

        sampler = InfiniteSampler(len(self.dataset), seed=self.seed if self.seed else 0)
        batch_sampler = YoloBatchSampler(
            sampler=sampler,
            batch_size=batch_size,
            drop_last=False,
            mosaic=not no_aug,
        )

        dataloader_kwargs = {'num_workers': self.data_num_workers, 'pin_memory': True}
        dataloader_kwargs['batch_sampler'] = batch_sampler
        dataloader_kwargs['worker_init_fn'] = worker_init_reset_seed
        train_loader = DataLoader(self.dataset, **dataloader_kwargs)
        return train_loader

    def random_resize(self, data_loader, epoch, rank, is_distributed):
        tensor = torch.LongTensor(2).cuda()
        if rank == 0:
            size_factor = self.input_size[1] * 1.0 / self.input_size[0]
            if not hasattr(self, 'random_size'):
                min_size = int(self.input_size[0] / 32) - self.multiscale_range
                max_size = int(self.input_size[0] / 32) + self.multiscale_range
                self.random_size = (min_size, max_size)
            size = random.randint(*self.random_size)
            size = (int(32 * size), 32 * int(size * size_factor))
            tensor[0] = size[0]
            tensor[1] = size[1]
        if is_distributed:
            dist.barrier()
            dist.broadcast(tensor, 0)
        input_size = (tensor[0].item(), tensor[1].item())
        return input_size

    def preprocess(self, inputs, targets, tsize):
        scale_y = tsize[0] / self.input_size[0]
        scale_x = tsize[1] / self.input_size[1]
        if scale_x != 1 or scale_y != 1:
            inputs = nn.functional.interpolate(
                inputs, size=tsize, mode='bilinear', align_corners=False
            )
            targets[..., 1::2] = targets[..., 1::2] * scale_x
            targets[..., 2::2] = targets[..., 2::2] * scale_y
        return inputs, targets

    def get_optimizer(self, batch_size):
        if 'optimizer' not in self.__dict__:
            if self.warmup_epochs > 0:
                lr = self.warmup_lr
            else:
                lr = self.basic_lr_per_img * batch_size

            pg0, pg1, pg2 = [], [], []
            for k, v in self.model.named_modules():
                if hasattr(v, 'bias') and isinstance(v.bias, nn.Parameter):
                    pg2.append(v.bias)
                if isinstance(v, nn.BatchNorm2d) or 'bn' in k:
                    pg0.append(v.weight)
                elif hasattr(v, 'weight') and isinstance(v.weight, nn.Parameter):
                    pg1.append(v.weight)

            optimizer = torch.optim.SGD(
                pg0, lr=lr, momentum=self.momentum, nesterov=True
            )
            optimizer.add_param_group({'params': pg1, 'weight_decay': self.weight_decay})
            optimizer.add_param_group({'params': pg2})
            self.optimizer = optimizer
        return self.optimizer

    def get_lr_scheduler(self, lr, iters_per_epoch):
        from yolox.utils import LRScheduler
        scheduler = LRScheduler(
            self.scheduler,
            lr,
            iters_per_epoch,
            self.max_epoch,
            warmup_epochs=self.warmup_epochs,
            warmup_lr_start=self.warmup_lr,
            no_aug_epochs=self.no_aug_epochs,
            min_lr_ratio=self.min_lr_ratio,
        )
        return scheduler

    def get_eval_dataset(self, **kwargs):
        from yolox.data import COCODataset, ValTransform
        testdev = kwargs.get('testdev', False)
        legacy = kwargs.get('legacy', False)
        return COCODataset(
            data_dir=self.data_dir,
            json_file=self.val_ann if not testdev else self.test_ann,
            name='' if not testdev else 'test2017',
            img_size=self.test_size,
            preproc=ValTransform(legacy=legacy),
        )

    def get_eval_loader(self, batch_size, is_distributed, **kwargs):
        valdataset = self.get_eval_dataset(**kwargs)
        if is_distributed:
            batch_size = batch_size // dist.get_world_size()
            sampler = torch.utils.data.distributed.DistributedSampler(
                valdataset, shuffle=False
            )
        else:
            sampler = torch.utils.data.SequentialSampler(valdataset)
        dataloader_kwargs = {
            'num_workers': self.data_num_workers,
            'pin_memory': True,
            'sampler': sampler,
        }
        dataloader_kwargs['batch_size'] = batch_size
        val_loader = torch.utils.data.DataLoader(valdataset, **dataloader_kwargs)
        return val_loader

    def get_evaluator(self, batch_size, is_distributed, testdev=False, legacy=False):
        from yolox.evaluators import COCOEvaluator
        return COCOEvaluator(
            dataloader=self.get_eval_loader(batch_size, is_distributed,
                                            testdev=testdev, legacy=legacy),
            img_size=self.test_size,
            confthre=self.test_conf,
            nmsthre=self.nmsthre,
            num_classes=self.num_classes,
            testdev=testdev,
        )

    def get_trainer(self, args):
        from yolox.core import Trainer
        trainer = Trainer(self, args)
        return trainer

    def eval(self, model, evaluator, is_distributed, half=False, return_outputs=False):
        return evaluator.evaluate(model, is_distributed, half, return_outputs=return_outputs)


def check_exp_value(exp):
    h, w = exp.input_size
    assert h % 32 == 0 and w % 32 == 0, 'input size must be multiples of 32'
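The `random_resize`/`check_exp_value` logic above pins every candidate input size to a multiple of 32. The range of sizes it samples from can be sketched in isolation (a stdlib-only illustration, not code from the repo):

```python
import random

def candidate_sizes(input_size=(640, 640), multiscale_range=5):
    """Enumerate every (h, w) pair that random_resize above can sample."""
    size_factor = input_size[1] / input_size[0]
    base = int(input_size[0] / 32)  # 640 / 32 -> 20
    sizes = []
    for s in range(base - multiscale_range, base + multiscale_range + 1):
        # each candidate stays a multiple of 32, as check_exp_value requires
        sizes.append((32 * s, 32 * int(s * size_factor)))
    return sizes

sizes = candidate_sizes()
picked = random.choice(sizes)  # what random_resize effectively does each interval
```

With the defaults this yields 11 candidates from (480, 480) up to (800, 800), which is why `multiscale_range = 5` gives a ±160 px jitter around the 640 px training size.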


@@ -1,20 +0,0 @@
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Copyright (c) Megvii, Inc. and its affiliates.
import os

from yolox.exp import Exp as MyExp

class Exp(MyExp):
    def __init__(self):
        super(Exp, self).__init__()
        self.data_dir = "/home/kitraining/To_Annotate/"
        self.train_ann = "coco_project_38_train.json"
        self.val_ann = "coco_project_38_valid.json"
        self.test_ann = "coco_project_38_test.json"
        self.depth = 0.33
        self.width = 0.50
        self.num_classes = 1
        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]


@@ -1,292 +0,0 @@
import os
import random

import torch
import torch.distributed as dist
import torch.nn as nn

# Dynamically import BaseExp from fixed path
import importlib.util
import sys

base_exp_path = '/home/kitraining/Yolox/YOLOX-main/yolox/exp/base_exp.py'
spec = importlib.util.spec_from_file_location('base_exp', base_exp_path)
base_exp = importlib.util.module_from_spec(spec)
sys.modules['base_exp'] = base_exp
spec.loader.exec_module(base_exp)
BaseExp = base_exp.BaseExp

__all__ = ["Exp", "check_exp_value"]


class Exp(BaseExp):
    def __init__(self):
        super().__init__()
        self.seed = None
        self.data_dir = r'/home/kitraining/To_Annotate/'
        self.train_ann = 'coco_project_38_train.json'
        self.val_ann = 'coco_project_38_valid.json'
        self.test_ann = 'coco_project_38_test.json'
        self.num_classes = 80
        self.pretrained_ckpt = r'/home/kitraining/Yolox/YOLOX-main/pretrained/YOLOX-s.pth'
        self.depth = 1.00
        self.width = 1.00
        self.act = 'silu'
        self.data_num_workers = 4
        self.input_size = (640, 640)
        self.multiscale_range = 5
        self.mosaic_prob = 1.0
        self.mixup_prob = 1.0
        self.hsv_prob = 1.0
        self.flip_prob = 0.5
        self.degrees = (10.0, 10.0)
        self.translate = (0.1, 0.1)
        self.mosaic_scale = (0.1, 2)
        self.enable_mixup = True
        self.mixup_scale = (0.5, 1.5)
        self.shear = (2.0, 2.0)
        self.warmup_epochs = 5
        self.max_epoch = 300
        self.warmup_lr = 0
        self.min_lr_ratio = 0.05
        self.basic_lr_per_img = 0.01 / 64.0
        self.scheduler = 'yoloxwarmcos'
        self.no_aug_epochs = 15
        self.ema = True
        self.weight_decay = 5e-4
        self.momentum = 0.9
        self.print_interval = 10
        self.eval_interval = 10
        self.save_history_ckpt = True
        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split('.')[0]
        self.test_size = (640, 640)
        self.test_conf = 0.01
        self.nmsthre = 0.65
        self.exp_name = 'lalalalal'
        self.max_epoch = 300
        self.depth = 1
        self.width = 1
        self.activation = 'silu'
        self.warmup_epochs = 5
        self.warmup_lr = 0
        self.scheduler = 'yoloxwarmcos'
        self.no_aug_epochs = 15
        self.min_lr_ratio = 0.05
        self.ema = True
        self.weight_decay = 0.0005
        self.momentum = 0.9
        self.input_size = (640, 640)
        self.print_interval = 10
        self.eval_interval = 10
        self.save_history_ckpt = True
        self.test_size = (640, 640)
        self.test_conf = 0.01
        self.multiscale_range = 5
        self.enable_mixup = True
        self.mosaic_prob = 1
        self.mixup_prob = 1
self.hsv_prob = 1
self.flip_prob = 0.5
self.degrees = (10, 10)
self.mosaic_scale = (0.1, 2)
self.mixup_scale = (0.5, 1.5)
self.translate = (0.1, 0.1)
self.shear = (2, 2)
self.project_details_id = 38
self.selected_model = 'YOLOX-s'
self.transfer_learning = 'coco'
def get_model(self):
from yolox.models import YOLOX, YOLOPAFPN, YOLOXHead
def init_yolo(M):
for m in M.modules():
if isinstance(m, nn.BatchNorm2d):
m.eps = 1e-3
m.momentum = 0.03
if getattr(self, 'model', None) is None:
in_channels = [256, 512, 1024]
backbone = YOLOPAFPN(self.depth, self.width, in_channels=in_channels, act=self.act)
head = YOLOXHead(self.num_classes, self.width, in_channels=in_channels, act=self.act)
self.model = YOLOX(backbone, head)
self.model.apply(init_yolo)
self.model.head.initialize_biases(1e-2)
self.model.train()
return self.model
def get_dataset(self, cache=False, cache_type='ram'):
from yolox.data import COCODataset, TrainTransform
return COCODataset(
data_dir=self.data_dir,
json_file=self.train_ann,
img_size=self.input_size,
preproc=TrainTransform(
max_labels=50,
flip_prob=self.flip_prob,
hsv_prob=self.hsv_prob
),
cache=cache,
cache_type=cache_type,
)
def get_data_loader(self, batch_size, is_distributed, no_aug=False, cache_img=None):
from yolox.data import (
TrainTransform,
YoloBatchSampler,
DataLoader,
InfiniteSampler,
MosaicDetection,
worker_init_reset_seed,
)
from yolox.utils import wait_for_the_master
if self.dataset is None:
with wait_for_the_master():
assert cache_img is None, 'cache_img must be None if you did not create self.dataset before launch'
self.dataset = self.get_dataset(cache=False, cache_type=cache_img)
self.dataset = MosaicDetection(
dataset=self.dataset,
mosaic=not no_aug,
img_size=self.input_size,
preproc=TrainTransform(
max_labels=120,
flip_prob=self.flip_prob,
hsv_prob=self.hsv_prob),
degrees=self.degrees,
translate=self.translate,
mosaic_scale=self.mosaic_scale,
mixup_scale=self.mixup_scale,
shear=self.shear,
enable_mixup=self.enable_mixup,
mosaic_prob=self.mosaic_prob,
mixup_prob=self.mixup_prob,
)
if is_distributed:
batch_size = batch_size // dist.get_world_size()
sampler = InfiniteSampler(len(self.dataset), seed=self.seed if self.seed else 0)
batch_sampler = YoloBatchSampler(
sampler=sampler,
batch_size=batch_size,
drop_last=False,
mosaic=not no_aug,
)
dataloader_kwargs = {'num_workers': self.data_num_workers, 'pin_memory': True}
dataloader_kwargs['batch_sampler'] = batch_sampler
dataloader_kwargs['worker_init_fn'] = worker_init_reset_seed
train_loader = DataLoader(self.dataset, **dataloader_kwargs)
return train_loader
def random_resize(self, data_loader, epoch, rank, is_distributed):
tensor = torch.LongTensor(2).cuda()
if rank == 0:
size_factor = self.input_size[1] * 1.0 / self.input_size[0]
if not hasattr(self, 'random_size'):
min_size = int(self.input_size[0] / 32) - self.multiscale_range
max_size = int(self.input_size[0] / 32) + self.multiscale_range
self.random_size = (min_size, max_size)
size = random.randint(*self.random_size)
size = (int(32 * size), 32 * int(size * size_factor))
tensor[0] = size[0]
tensor[1] = size[1]
if is_distributed:
dist.barrier()
dist.broadcast(tensor, 0)
input_size = (tensor[0].item(), tensor[1].item())
return input_size
def preprocess(self, inputs, targets, tsize):
scale_y = tsize[0] / self.input_size[0]
scale_x = tsize[1] / self.input_size[1]
if scale_x != 1 or scale_y != 1:
inputs = nn.functional.interpolate(
inputs, size=tsize, mode='bilinear', align_corners=False
)
targets[..., 1::2] = targets[..., 1::2] * scale_x
targets[..., 2::2] = targets[..., 2::2] * scale_y
return inputs, targets
def get_optimizer(self, batch_size):
if 'optimizer' not in self.__dict__:
if self.warmup_epochs > 0:
lr = self.warmup_lr
else:
lr = self.basic_lr_per_img * batch_size
pg0, pg1, pg2 = [], [], []
for k, v in self.model.named_modules():
if hasattr(v, 'bias') and isinstance(v.bias, nn.Parameter):
pg2.append(v.bias)
if isinstance(v, nn.BatchNorm2d) or 'bn' in k:
pg0.append(v.weight)
elif hasattr(v, 'weight') and isinstance(v.weight, nn.Parameter):
pg1.append(v.weight)
optimizer = torch.optim.SGD(
pg0, lr=lr, momentum=self.momentum, nesterov=True
)
optimizer.add_param_group({'params': pg1, 'weight_decay': self.weight_decay})
optimizer.add_param_group({'params': pg2})
self.optimizer = optimizer
return self.optimizer
def get_lr_scheduler(self, lr, iters_per_epoch):
from yolox.utils import LRScheduler
scheduler = LRScheduler(
self.scheduler,
lr,
iters_per_epoch,
self.max_epoch,
warmup_epochs=self.warmup_epochs,
warmup_lr_start=self.warmup_lr,
no_aug_epochs=self.no_aug_epochs,
min_lr_ratio=self.min_lr_ratio,
)
return scheduler
def get_eval_dataset(self, **kwargs):
from yolox.data import COCODataset, ValTransform
testdev = kwargs.get('testdev', False)
legacy = kwargs.get('legacy', False)
return COCODataset(
data_dir=self.data_dir,
json_file=self.val_ann if not testdev else self.test_ann,
name='' if not testdev else 'test2017',
img_size=self.test_size,
preproc=ValTransform(legacy=legacy),
)
def get_eval_loader(self, batch_size, is_distributed, **kwargs):
valdataset = self.get_eval_dataset(**kwargs)
if is_distributed:
batch_size = batch_size // dist.get_world_size()
sampler = torch.utils.data.distributed.DistributedSampler(
valdataset, shuffle=False
)
else:
sampler = torch.utils.data.SequentialSampler(valdataset)
dataloader_kwargs = {
'num_workers': self.data_num_workers,
'pin_memory': True,
'sampler': sampler,
}
dataloader_kwargs['batch_size'] = batch_size
val_loader = torch.utils.data.DataLoader(valdataset, **dataloader_kwargs)
return val_loader
def get_evaluator(self, batch_size, is_distributed, testdev=False, legacy=False):
from yolox.evaluators import COCOEvaluator
return COCOEvaluator(
dataloader=self.get_eval_loader(batch_size, is_distributed,
testdev=testdev, legacy=legacy),
img_size=self.test_size,
confthre=self.test_conf,
nmsthre=self.nmsthre,
num_classes=self.num_classes,
testdev=testdev,
)
def get_trainer(self, args):
from yolox.core import Trainer
trainer = Trainer(self, args)
return trainer
def eval(self, model, evaluator, is_distributed, half=False, return_outputs=False):
return evaluator.evaluate(model, is_distributed, half, return_outputs=return_outputs)
def check_exp_value(exp):
h, w = exp.input_size
assert h % 32 == 0 and w % 32 == 0, 'input size must be multiples of 32'


@@ -1,27 +0,0 @@
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Copyright (c) Megvii, Inc. and its affiliates.
import os
from yolox.exp import Exp as MyExp
class Exp(MyExp):
def __init__(self):
super(Exp, self).__init__()
self.data_dir = "/home/kitraining/To_Annotate/"
self.train_ann = "coco_project_39_train.json"
self.val_ann = "coco_project_39_valid.json"
self.test_ann = "coco_project_39_test.json"
self.num_classes = 80
self.pretrained_ckpt = r'/home/kitraining/Yolox/YOLOX-main/pretrained/YOLOX-Tiny.pth'
self.depth = 0.33
self.width = 0.375
self.input_size = (416, 416)
self.mosaic_scale = (0.5, 1.5)
self.random_size = (10, 20)
self.test_size = (416, 416)
self.enable_mixup = False
self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]


@@ -1,15 +0,0 @@
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Copyright (c) Megvii, Inc. and its affiliates.
import os
from yolox.exp import Exp as MyExp
class Exp(MyExp):
def __init__(self):
super(Exp, self).__init__()
self.depth = 1.33
self.width = 1.25
self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]


@@ -1,26 +0,0 @@
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Copyright (c) Megvii, Inc. and its affiliates.
import os
from yolox.exp import Exp as MyExp
class Exp(MyExp):
def __init__(self):
super(Exp, self).__init__()
self.data_dir = "/home/kitraining/To_Annotate/"
self.train_ann = "coco_project_40_train.json"
self.val_ann = "coco_project_40_valid.json"
self.test_ann = "coco_project_40_test.json"
self.num_classes = 80
self.pretrained_ckpt = r'/home/kitraining/Yolox/YOLOX-main/pretrained/YOLOX-x.pth'
self.depth = 1.33
self.width = 1.25
self.input_size = (640, 640)
self.mosaic_scale = (0.1, 2)
self.random_size = (10, 20)
self.test_size = (640, 640)
self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]
self.enable_mixup = False


@@ -1,15 +0,0 @@
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Copyright (c) Megvii, Inc. and its affiliates.
import os
from yolox.exp import Exp as MyExp
class Exp(MyExp):
def __init__(self):
super(Exp, self).__init__()
self.depth = 1.33
self.width = 1.25
self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]


@@ -1,15 +0,0 @@
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Copyright (c) Megvii, Inc. and its affiliates.
import os
from yolox.exp import Exp as MyExp
class Exp(MyExp):
def __init__(self):
super(Exp, self).__init__()
self.depth = 1.33
self.width = 1.25
self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]


@@ -1,26 +0,0 @@
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Copyright (c) Megvii, Inc. and its affiliates.
import os
from yolox.exp import Exp as MyExp
class Exp(MyExp):
def __init__(self):
super(Exp, self).__init__()
self.data_dir = "/home/kitraining/To_Annotate/"
self.train_ann = "coco_project_41_train.json"
self.val_ann = "coco_project_41_valid.json"
self.test_ann = "coco_project_41_test.json"
self.num_classes = 1
self.pretrained_ckpt = r'/home/kitraining/Yolox/YOLOX-main/pretrained/YOLOX-l.pth'
self.depth = 1
self.width = 1
self.input_size = (640, 640)
self.mosaic_scale = (0.1, 2)
self.random_size = (10, 20)
self.test_size = (640, 640)
self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]
self.enable_mixup = False


@@ -1,25 +0,0 @@
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Copyright (c) Megvii, Inc. and its affiliates.
import os
from yolox.exp import Exp as MyExp
class Exp(MyExp):
def __init__(self):
super(Exp, self).__init__()
self.data_dir = "/home/kitraining/To_Annotate/"
self.train_ann = "coco_project_42_train.json"
self.val_ann = "coco_project_42_valid.json"
self.test_ann = "coco_project_42_test.json"
self.depth = 0.33
self.width = 0.375
self.input_size = (416, 416)
self.mosaic_scale = (0.5, 1.5)
self.random_size = (10, 20)
self.test_size = (416, 416)
self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]
self.enable_mixup = False


@@ -1,26 +0,0 @@
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Copyright (c) Megvii, Inc. and its affiliates.
import os
from yolox.exp import Exp as MyExp
class Exp(MyExp):
def __init__(self):
super(Exp, self).__init__()
self.data_dir = "/home/kitraining/To_Annotate/"
self.train_ann = "coco_project_42_train.json"
self.val_ann = "coco_project_42_valid.json"
self.test_ann = "coco_project_42_test.json"
self.num_classes = 4
self.pretrained_ckpt = r'/home/kitraining/Yolox/YOLOX-main/pretrained/YOLOX-s.pth'
self.depth = 1
self.width = 1
self.input_size = (640, 640)
self.mosaic_scale = (0.1, 2)
self.random_size = (10, 20)
self.test_size = (640, 640)
self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]
self.enable_mixup = False


@@ -1,19 +0,0 @@
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Copyright (c) Megvii, Inc. and its affiliates.
import os
from yolox.exp import Exp as MyExp
class Exp(MyExp):
def __init__(self):
super(Exp, self).__init__()
self.data_dir = "/home/kitraining/To_Annotate/"
self.train_ann = "coco_project_43_train.json"
self.val_ann = "coco_project_43_valid.json"
self.test_ann = "coco_project_43_test.json"
self.depth = 1.33
self.width = 1.25
self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]


@@ -1,26 +0,0 @@
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Copyright (c) Megvii, Inc. and its affiliates.
import os
from yolox.exp import Exp as MyExp
class Exp(MyExp):
def __init__(self):
super(Exp, self).__init__()
self.data_dir = "/home/kitraining/To_Annotate/"
self.train_ann = "coco_project_43_train.json"
self.val_ann = "coco_project_43_valid.json"
self.test_ann = "coco_project_43_test.json"
self.num_classes = 1
self.pretrained_ckpt = r'/home/kitraining/Yolox/YOLOX-main/pretrained/YOLOX-Tiny.pth'
self.depth = 1
self.width = 1
self.input_size = (640, 640)
self.mosaic_scale = (0.1, 2)
self.random_size = (10, 20)
self.test_size = (640, 640)
self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]
self.enable_mixup = False


@@ -1,25 +0,0 @@
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Copyright (c) Megvii, Inc. and its affiliates.
import os
from yolox.exp import Exp as MyExp
class Exp(MyExp):
def __init__(self):
super(Exp, self).__init__()
self.data_dir = "/home/kitraining/To_Annotate/"
self.train_ann = "coco_project_48_train.json"
self.val_ann = "coco_project_48_valid.json"
self.test_ann = "coco_project_48_test.json"
self.pretrained_ckpt = r'/home/kitraining/Yolox/YOLOX-main/pretrained/tiny_persondetector/best_ckpt.pth'
self.num_classes = 4
self.depth = 0.33
self.width = 0.375
self.input_size = (416, 416)
self.mosaic_scale = (0.5, 1.5)
self.random_size = (10, 20)
self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]
self.enable_mixup = False


@@ -1,26 +0,0 @@
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Copyright (c) Megvii, Inc. and its affiliates.
import os
from yolox.exp import Exp as MyExp
class Exp(MyExp):
def __init__(self):
super(Exp, self).__init__()
self.data_dir = "/home/kitraining/To_Annotate/"
self.train_ann = "coco_project_44_train.json"
self.val_ann = "coco_project_44_valid.json"
self.test_ann = "coco_project_44_test.json"
self.num_classes = 2
self.pretrained_ckpt = r'/home/kitraining/Yolox/YOLOX-main/pretrained/YOLOX-x.pth'
self.depth = 1.33
self.width = 1.25
self.input_size = (640, 640)
self.mosaic_scale = (0.1, 2)
self.random_size = (10, 20)
self.test_size = (640, 640)
self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]
self.enable_mixup = False


@@ -1,26 +0,0 @@
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Copyright (c) Megvii, Inc. and its affiliates.
import os
from yolox.exp import Exp as MyExp
class Exp(MyExp):
def __init__(self):
super(Exp, self).__init__()
self.data_dir = "/home/kitraining/To_Annotate/"
self.train_ann = "coco_project_46_train.json"
self.val_ann = "coco_project_46_valid.json"
self.test_ann = "coco_project_46_test.json"
self.num_classes = 4
self.pretrained_ckpt = r'/home/kitraining/Yolox/YOLOX-main/pretrained/YOLOX-Tiny.pth'
self.depth = 0.33
self.width = 0.375
self.input_size = (416, 416)
self.mosaic_scale = (0.5, 1.5)
self.random_size = (10, 20)
self.test_size = (416, 416)
self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]
self.enable_mixup = True


@@ -1,26 +0,0 @@
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Copyright (c) Megvii, Inc. and its affiliates.
import os
from yolox.exp import Exp as MyExp
class Exp(MyExp):
def __init__(self):
super(Exp, self).__init__()
self.data_dir = "/home/kitraining/To_Annotate/"
self.train_ann = "coco_project_46_train.json"
self.val_ann = "coco_project_46_valid.json"
self.test_ann = "coco_project_46_test.json"
self.num_classes = 80
self.pretrained_ckpt = r'/home/kitraining/Yolox/YOLOX-main/pretrained/YOLOX-Tiny.pth'
self.depth = 1
self.width = 1
self.input_size = (640, 640)
self.mosaic_scale = (0.1, 2)
self.random_size = (10, 20)
self.test_size = (640, 640)
self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]
self.enable_mixup = False


@@ -1,26 +0,0 @@
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Copyright (c) Megvii, Inc. and its affiliates.
import os
from yolox.exp import Exp as MyExp
class Exp(MyExp):
def __init__(self):
super(Exp, self).__init__()
self.data_dir = "/home/kitraining/To_Annotate/"
self.train_ann = "coco_project_47_train.json"
self.val_ann = "coco_project_47_valid.json"
self.test_ann = "coco_project_47_test.json"
self.num_classes = 2
self.pretrained_ckpt = r'/home/kitraining/Yolox/YOLOX-main/pretrained/YOLOX-Tiny.pth'
self.depth = 0.33
self.width = 0.375
self.input_size = (416, 416)
self.mosaic_scale = (0.5, 1.5)
self.random_size = (10, 20)
self.test_size = (416, 416)
self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]
self.enable_mixup = True


@@ -1,26 +0,0 @@
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Copyright (c) Megvii, Inc. and its affiliates.
import os
from yolox.exp import Exp as MyExp
class Exp(MyExp):
def __init__(self):
super(Exp, self).__init__()
self.data_dir = "/home/kitraining/To_Annotate/"
self.train_ann = "coco_project_47_train.json"
self.val_ann = "coco_project_47_valid.json"
self.test_ann = "coco_project_47_test.json"
self.num_classes = 4
self.pretrained_ckpt = r'/home/kitraining/Yolox/YOLOX-main/pretrained/YOLOX-x.pth'
self.depth = 1
self.width = 1
self.input_size = (640, 640)
self.mosaic_scale = (0.1, 2)
self.random_size = (10, 20)
self.test_size = (640, 640)
self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]
self.enable_mixup = False


@@ -1,22 +0,0 @@
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Copyright (c) Megvii, Inc. and its affiliates.
import os
from yolox.exp import Exp as MyExp
class Exp(MyExp):
def __init__(self):
super(Exp, self).__init__()
self.data_dir = "/home/kitraining/To_Annotate/"
self.train_ann = "coco_project_48_train.json"
self.val_ann = "coco_project_48_valid.json"
self.test_ann = "coco_project_48_test.json"
self.num_classes = 2
self.pretrained_ckpt = r'/home/kitraining/Yolox/YOLOX-main/pretrained/YOLOX-s.pth'
self.depth = 0.33
self.width = 0.50
self.act = "relu"
self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]


@@ -1,25 +0,0 @@
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Copyright (c) Megvii, Inc. and its affiliates.
import os
from yolox.exp import Exp as MyExp
class Exp(MyExp):
def __init__(self):
super(Exp, self).__init__()
self.data_dir = "/home/kitraining/To_Annotate/"
self.train_ann = "coco_project_48_train.json"
self.val_ann = "coco_project_48_valid.json"
self.test_ann = "coco_project_48_test.json"
self.num_classes = 4
self.depth = 1
self.width = 1
self.input_size = (640, 640)
self.mosaic_scale = (0.1, 2)
self.random_size = (10, 20)
self.test_size = (640, 640)
self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]
self.enable_mixup = False


@@ -1,28 +0,0 @@
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Copyright (c) Megvii, Inc. and its affiliates.
import os
from yolox.exp import Exp as MyExp
class Exp(MyExp):
def __init__(self):
super(Exp, self).__init__()
self.data_dir = "/home/kitraining/To_Annotate/"
self.train_ann = "coco_project_49_train.json"
self.val_ann = "coco_project_49_valid.json"
self.test_ann = "coco_project_49_test.json"
self.num_classes = 2
self.pretrained_ckpt = r'/home/kitraining/Yolox/YOLOX-main/pretrained/YOLOX_s.pth'
self.depth = 0.33
self.width = 0.50
self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]
self.enable_mixup = True
# -------------- training config --------------------- #
self.warmup_epochs = 5
self.max_epoch = 100
self.act = "silu"


@@ -1,26 +0,0 @@
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Copyright (c) Megvii, Inc. and its affiliates.
import os
from yolox.exp import Exp as MyExp
class Exp(MyExp):
def __init__(self):
super(Exp, self).__init__()
self.data_dir = "/home/kitraining/To_Annotate/"
self.train_ann = "coco_project_49_train.json"
self.val_ann = "coco_project_49_valid.json"
self.test_ann = "coco_project_49_test.json"
self.num_classes = 4
self.pretrained_ckpt = r'/home/kitraining/Yolox/YOLOX-main/pretrained/YOLOX-Tiny.pth'
self.depth = 1
self.width = 1
self.input_size = (640, 640)
self.mosaic_scale = (0.1, 2)
self.random_size = (10, 20)
self.test_size = (640, 640)
self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]
self.enable_mixup = False


@@ -1 +0,0 @@
python tools/demo.py video -f /home/kitraining/coco_tool/backend/project_50/50/exp.py -c ./YOLOX_outputs/exp/best_ckpt.pth --path /home/kitraining/Videos/test_1.mkv --conf 0.25 --nms 0.45 --tsize 640 --save_result --device gpu


@@ -1,57 +0,0 @@
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Copyright (c) Megvii, Inc. and its affiliates.
import os
from yolox.exp import Exp as MyExp
class Exp(MyExp):
def __init__(self):
super(Exp, self).__init__()
self.data_dir = "/home/kitraining/To_Annotate/"
self.train_ann = "coco_project_50_train.json"
self.val_ann = "coco_project_50_valid.json"
self.test_ann = "coco_project_50_test.json"
self.num_classes = 2
self.pretrained_ckpt = r'/home/kitraining/Yolox/YOLOX-main/pretrained/YOLOX_s.pth'
self.depth = 0.33
self.width = 0.50
self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]
# -------------- training config --------------------- #
self.warmup_epochs = 15 # More warmup
self.max_epoch = 250 # more epochs
self.act = "silu"  # Activation function
# Thresholds
self.test_conf = 0.01 # Low, to catch more of the second class
self.nmsthre = 0.7
# Intensified data augmentation to improve generalization
self.enable_mixup = True
self.mixup_prob = 0.9 # mixup
self.mosaic_prob = 0.9 # mosaic
self.degrees = 30.0 # Rotation
self.translate = 0.4 # Translation
self.scale = (0.2, 2.0) # Scaling
self.shear = 10.0 # Shear
self.flip_prob = 0.8
self.hsv_prob = 1.0
# Learning rate
self.basic_lr_per_img = 0.001 / 64.0 # Lower LR to avoid divergence
self.scheduler = "yoloxwarmcos"
# Loss weights
self.cls_loss_weight = 8.0 # More weight to the classification loss
self.obj_loss_weight = 1.0
self.reg_loss_weight = 0.5
# Larger input size for better detection of small objects such as babies
self.input_size = (832, 832)
self.test_size = (832, 832)
# Batch size
self.batch_size = 5 # Reduce if you have memory issues


@@ -1,26 +0,0 @@
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Copyright (c) Megvii, Inc. and its affiliates.
import os
from yolox.exp import Exp as MyExp
class Exp(MyExp):
def __init__(self):
super(Exp, self).__init__()
self.data_dir = "/home/kitraining/To_Annotate/"
self.train_ann = "coco_project_50_train.json"
self.val_ann = "coco_project_50_valid.json"
self.test_ann = "coco_project_50_test.json"
self.num_classes = 2
self.pretrained_ckpt = r'/home/kitraining/Yolox/YOLOX-main/pretrained/YOLOX-s.pth'
self.depth = 1
self.width = 1
self.input_size = (640, 640)
self.mosaic_scale = (0.1, 2)
self.random_size = (10, 20)
self.test_size = (640, 640)
self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]
self.enable_mixup = False


@@ -1,58 +0,0 @@
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Copyright (c) Megvii, Inc. and its affiliates.
import os
from yolox.exp import Exp as MyExp
class Exp(MyExp):
def __init__(self):
super(Exp, self).__init__()
self.data_dir = "/home/kitraining/To_Annotate/"
self.train_ann = "coco_project_53_train.json"
self.val_ann = "coco_project_53_valid.json"
self.test_ann = "coco_project_53_test.json"
self.num_classes = 3
self.pretrained_ckpt = r'/home/kitraining/Yolox/YOLOX-main/YOLOX_outputs/exp_Topview_4/best_ckpt.pth'
self.depth = 0.33
self.width = 0.50
self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]
# -------------- training config --------------------- #
self.warmup_epochs = 15 # More warmup
self.max_epoch = 250 # more epochs
self.act = "silu"  # Activation function
# Thresholds
self.test_conf = 0.01 # Low, to catch more of the second class
self.nmsthre = 0.7
# Intensified data augmentation to improve generalization
self.enable_mixup = True
self.mixup_prob = 0.9 # mixup
self.mosaic_prob = 0.9 # mosaic
self.degrees = 30.0 # Rotation
self.translate = 0.4 # Translation
self.scale = (0.2, 2.0) # Scaling
self.shear = 10.0 # Shear
self.flip_prob = 0.8
self.hsv_prob = 1.0
# Learning rate
self.basic_lr_per_img = 0.001 / 64.0 # Lower LR to avoid divergence
self.scheduler = "yoloxwarmcos"
# Loss weights
self.cls_loss_weight = 8.0 # More weight to the classification loss
self.obj_loss_weight = 1.0
self.reg_loss_weight = 0.5
# Larger input size for better detection of small objects such as babies
self.input_size = (832, 832)
self.test_size = (832, 832)
# Batch size
self.batch_size = 5 # Reduce if you have memory issues
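Note that `basic_lr_per_img` in these experiment files is a per-image rate: the base `Exp.get_optimizer` shown earlier multiplies it by the batch size to obtain the actual SGD learning rate. A quick sketch of that relationship, using the values set in the file above:

```python
# Reproduces the post-warmup lr computation from Exp.get_optimizer:
#   lr = self.basic_lr_per_img * batch_size
basic_lr_per_img = 0.001 / 64.0  # value set in the exp file above
batch_size = 5                   # value set in the exp file above

lr = basic_lr_per_img * batch_size
print(lr)  # 7.8125e-05
```

During warmup (`warmup_epochs > 0`), `get_optimizer` starts from `warmup_lr` instead, and the `yoloxwarmcos` scheduler ramps up from there.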


@@ -1,25 +0,0 @@
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Copyright (c) Megvii, Inc. and its affiliates.
import os
from yolox.exp import Exp as MyExp
class Exp(MyExp):
def __init__(self):
super(Exp, self).__init__()
self.data_dir = "/home/kitraining/To_Annotate/"
self.train_ann = "coco_project_53_train.json"
self.val_ann = "coco_project_53_valid.json"
self.test_ann = "coco_project_53_test.json"
self.num_classes = 3
self.depth = 1
self.width = 1
self.input_size = (640, 640)
self.mosaic_scale = (0.1, 2)
self.random_size = (10, 20)
self.test_size = (640, 640)
self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]
self.enable_mixup = False


@@ -1,67 +0,0 @@
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Copyright (c) Megvii, Inc. and its affiliates.
import os
from yolox.exp import Exp as MyExp
class Exp(MyExp):
def __init__(self):
super(Exp, self).__init__()
self.data_dir = "/home/kitraining/To_Annotate/"
self.train_ann = "coco_project_54_train.json"
self.val_ann = "coco_project_54_valid.json"
self.test_ann = "coco_project_54_test.json"
self.num_classes = 3
self.pretrained_ckpt = r'/home/kitraining/Yolox/YOLOX-main/pretrained/YOLOX_s.pth'
self.depth = 0.33
self.width = 0.50
self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]
# -------------- training config --------------------- #
self.use_focal_loss = True # Focal Loss for better class imbalance handling
self.focal_loss_alpha = 0.25
self.focal_loss_gamma = 1.5
self.warmup_epochs = 20 # More warmup
self.max_epoch = 150 # More epochs for better convergence
self.act = "silu" # Activation function
self.no_aug_epochs = 30 # No augmentation for last epochs to stabilize training
self.class_weights = [1.0, 1.0, 1.0] # Weights for each class to handle imbalance
# Thresholds
self.test_conf = 0.15 # Low, to catch more of the second class
self.nmsthre = 0.5 # IoU threshold for NMS
# Intensified data augmentation to improve generalization
self.enable_mixup = True
self.mixup_prob = 0.7 # mixup
self.mosaic_prob = 0.8 # mosaic
self.degrees = 20.0 # Rotation
self.translate = 0.2 # Translation
self.scale = (0.5, 1.5) # Scaling
self.shear = 5.0 # Shear
self.flip_prob = 0.8
self.hsv_prob = 1.0
# Learning rate
self.basic_lr_per_img = 0.001 / 64.0 # Lower LR to avoid divergence
self.scheduler = "yoloxwarmcos"
self.min_lr_ratio = 0.01
# Loss weights
self.cls_loss_weight = 8.0 # More weight to the classification loss
self.obj_loss_weight = 1.0
self.reg_loss_weight = 1.0
# Larger input size for better detection of small objects such as babies
self.input_size = (832, 832)
self.test_size = (832, 832)
# Batch size
self.batch_size = 5 # Reduce if you have memory issues


@@ -1,26 +0,0 @@
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Copyright (c) Megvii, Inc. and its affiliates.
import os
from yolox.exp import Exp as MyExp
class Exp(MyExp):
def __init__(self):
super(Exp, self).__init__()
self.data_dir = "/home/kitraining/To_Annotate/"
self.train_ann = "coco_project_54_train.json"
self.val_ann = "coco_project_54_valid.json"
self.test_ann = "coco_project_54_test.json"
self.num_classes = 2
self.pretrained_ckpt = r'/home/kitraining/Yolox/YOLOX-main/pretrained/YOLOX-s.pth'
self.depth = 1
self.width = 1
self.input_size = (640, 640)
self.mosaic_scale = (0.1, 2)
self.random_size = (10, 20)
self.test_size = (640, 640)
self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]
self.enable_mixup = False

backend/requirements.txt Normal file

@@ -0,0 +1,8 @@
Flask==3.0.0
Flask-CORS==4.0.0
Flask-SQLAlchemy==3.1.1
SQLAlchemy==2.0.23
PyMySQL==1.1.0
python-dotenv==1.0.0
requests==2.31.0
Pillow==10.1.0


@@ -0,0 +1 @@
# Routes module

backend/routes/api.py Normal file

@@ -0,0 +1,531 @@
from flask import Blueprint, request, jsonify, send_file
from werkzeug.utils import secure_filename
import os
import json
import subprocess
from database.database import db
from models.TrainingProject import TrainingProject
from models.TrainingProjectDetails import TrainingProjectDetails
from models.training import Training
from models.LabelStudioProject import LabelStudioProject
from models.Images import Image
from models.Annotation import Annotation
api_bp = Blueprint('api', __name__)
# Global update status (similar to Node.js version)
update_status = {"running": False}
@api_bp.route('/seed', methods=['GET'])
def seed():
"""Trigger seeding from Label Studio"""
from services.seed_label_studio import seed_label_studio
result = seed_label_studio()
return jsonify(result)
@api_bp.route('/generate-yolox-json', methods=['POST'])
def generate_yolox_json():
"""Generate YOLOX JSON and exp.py for a project"""
try:
data = request.get_json()
project_id = data.get('project_id')
if not project_id:
return jsonify({'message': 'Missing project_id in request body'}), 400
# Find all TrainingProjectDetails for this project
details_rows = TrainingProjectDetails.query.filter_by(project_id=project_id).all()
if not details_rows:
return jsonify({'message': f'No TrainingProjectDetails found for project {project_id}'}), 404
# Get project name
training_project = TrainingProject.query.get(project_id)
project_name = training_project.title.replace(' ', '_') if training_project and training_project.title else f'project_{project_id}'
from services.generate_json_yolox import generate_training_json
from services.generate_yolox_exp import save_yolox_exp
# For each details row, generate coco.jsons and exp.py
for details in details_rows:
details_id = details.id
generate_training_json(details_id)
# Find all trainings for this details row
trainings = Training.query.filter_by(project_details_id=details_id).all()
if not trainings:
continue
# Create output directory
out_dir = os.path.join(os.path.dirname(__file__), '..', project_name, str(details_id))
os.makedirs(out_dir, exist_ok=True)
# Save exp.py for each training
for training in trainings:
exp_file_path = os.path.join(out_dir, 'exp.py')
save_yolox_exp(training.id, exp_file_path)
return jsonify({'message': f'YOLOX JSON and exp.py generated for project {project_id}'})
except Exception as err:
print(f'Error generating YOLOX JSON: {err}')
return jsonify({'message': 'Failed to generate YOLOX JSON', 'error': str(err)}), 500
@api_bp.route('/start-yolox-training', methods=['POST'])
def start_yolox_training():
"""Start YOLOX training"""
try:
data = request.get_json()
project_id = data.get('project_id')
training_id = data.get('training_id')
# Get project name
        training_project = TrainingProject.query.get(project_id)
        project_name = training_project.title.replace(' ', '_') if training_project and training_project.title else f'project_{project_id}'
# Look up training row
training_row = Training.query.get(training_id)
if not training_row:
training_row = Training.query.filter_by(project_details_id=training_id).first()
if not training_row:
return jsonify({'error': f'Training row not found for id or project_details_id {training_id}'}), 404
project_details_id = training_row.project_details_id
# Path to exp.py
out_dir = os.path.join(os.path.dirname(__file__), '..', project_name, str(project_details_id))
exp_src = os.path.join(out_dir, 'exp.py')
if not os.path.exists(exp_src):
return jsonify({'error': f'exp.py not found at {exp_src}'}), 500
# YOLOX configuration
yolox_main_dir = '/home/kitraining/Yolox/YOLOX-main'
yolox_venv = '/home/kitraining/Yolox/yolox_venv/bin/activate'
# Determine model argument
model_arg = ''
cmd = ''
if (training_row.transfer_learning and
isinstance(training_row.transfer_learning, str) and
training_row.transfer_learning.lower() == 'coco'):
model_arg = f' -c /home/kitraining/Yolox/YOLOX-main/pretrained/{training_row.selected_model}'
cmd = f'bash -c \'source {yolox_venv} && python tools/train.py -f {exp_src} -d 1 -b 8 --fp16 -o {model_arg}.pth --cache\''
        elif (training_row.selected_model and
              training_row.selected_model.lower() == 'coco' and
              not training_row.transfer_learning):
model_arg = f' -c /pretrained/{training_row.selected_model}'
cmd = f'bash -c \'source {yolox_venv} && python tools/train.py -f {exp_src} -d 1 -b 8 --fp16 -o {model_arg}.pth --cache\''
else:
cmd = f'bash -c \'source {yolox_venv} && python tools/train.py -f {exp_src} -d 1 -b 8 --fp16 --cache\''
print(cmd)
# Start training in background
subprocess.Popen(cmd, shell=True, cwd=yolox_main_dir)
return jsonify({'message': 'Training started'})
except Exception as err:
return jsonify({'error': 'Failed to start training', 'details': str(err)}), 500
@api_bp.route('/training-log', methods=['GET'])
def training_log():
"""Get YOLOX training log"""
try:
project_id = request.args.get('project_id')
training_id = request.args.get('training_id')
        training_project = TrainingProject.query.get(project_id)
        project_name = training_project.title.replace(' ', '_') if training_project and training_project.title else f'project_{project_id}'
out_dir = os.path.join(os.path.dirname(__file__), '..', project_name, str(training_id))
log_path = os.path.join(out_dir, 'training.log')
if not os.path.exists(log_path):
return jsonify({'error': 'Log not found'}), 404
with open(log_path, 'r') as f:
log_data = f.read()
return jsonify({'log': log_data})
except Exception as err:
return jsonify({'error': 'Failed to fetch log', 'details': str(err)}), 500
@api_bp.route('/training-projects', methods=['POST'])
def create_training_project():
"""Create a new training project"""
try:
title = request.form.get('title')
description = request.form.get('description')
classes = json.loads(request.form.get('classes', '[]'))
project_image = None
project_image_type = None
if 'project_image' in request.files:
file = request.files['project_image']
project_image = file.read()
project_image_type = file.content_type
project = TrainingProject(
title=title,
description=description,
classes=classes,
project_image=project_image,
project_image_type=project_image_type
)
db.session.add(project)
db.session.commit()
return jsonify({'message': 'Project created!'})
except Exception as error:
print(f'Error creating project: {error}')
db.session.rollback()
return jsonify({'message': 'Failed to create project', 'error': str(error)}), 500
@api_bp.route('/training-projects', methods=['GET'])
def get_training_projects():
"""Get all training projects"""
try:
projects = TrainingProject.query.all()
serialized = [project.to_dict() for project in projects]
return jsonify(serialized)
except Exception as error:
return jsonify({'message': 'Failed to fetch projects', 'error': str(error)}), 500
@api_bp.route('/update-status', methods=['GET'])
def get_update_status():
    """Get update status (reads the seeding service's live flag)"""
    from services.seed_label_studio import update_status as seeding_status
    return jsonify(seeding_status)
@api_bp.route('/label-studio-projects', methods=['GET'])
def get_label_studio_projects():
"""Get all Label Studio projects with annotation counts"""
try:
from sqlalchemy import func
# Get all projects
label_studio_projects = LabelStudioProject.query.all()
# Get annotation counts in one query using SQL aggregation
annotation_counts_query = db.session.query(
Image.project_id,
Annotation.Label,
func.count(Annotation.annotation_id).label('count')
).join(
Annotation, Image.image_id == Annotation.image_id
).group_by(
Image.project_id, Annotation.Label
).all()
# Organize counts by project_id
counts_by_project = {}
for project_id, label, count in annotation_counts_query:
if project_id not in counts_by_project:
counts_by_project[project_id] = {}
counts_by_project[project_id][label] = count
# Build result
projects_with_counts = []
for project in label_studio_projects:
project_dict = project.to_dict()
project_dict['annotationCounts'] = counts_by_project.get(project.project_id, {})
projects_with_counts.append(project_dict)
return jsonify(projects_with_counts)
except Exception as error:
return jsonify({'message': 'Failed to fetch projects', 'error': str(error)}), 500
@api_bp.route('/training-project-details', methods=['POST'])
def create_training_project_details():
"""Create TrainingProjectDetails"""
try:
data = request.get_json()
project_id = data.get('project_id')
annotation_projects = data.get('annotation_projects')
class_map = data.get('class_map')
description = data.get('description')
if not project_id or annotation_projects is None:
return jsonify({'message': 'Missing required fields'}), 400
details = TrainingProjectDetails(
project_id=project_id,
annotation_projects=annotation_projects,
class_map=class_map,
description=description
)
db.session.add(details)
db.session.commit()
return jsonify({'message': 'TrainingProjectDetails created', 'details': details.to_dict()})
except Exception as error:
db.session.rollback()
return jsonify({'message': 'Failed to create TrainingProjectDetails', 'error': str(error)}), 500
@api_bp.route('/training-project-details', methods=['GET'])
def get_training_project_details():
"""Get all TrainingProjectDetails"""
try:
details = TrainingProjectDetails.query.all()
return jsonify([d.to_dict() for d in details])
except Exception as error:
return jsonify({'message': 'Failed to fetch TrainingProjectDetails', 'error': str(error)}), 500
@api_bp.route('/training-project-details', methods=['PUT'])
def update_training_project_details():
"""Update class_map and description in TrainingProjectDetails"""
try:
data = request.get_json()
project_id = data.get('project_id')
class_map = data.get('class_map')
description = data.get('description')
if not project_id or not class_map or not description:
return jsonify({'message': 'Missing required fields'}), 400
details = TrainingProjectDetails.query.filter_by(project_id=project_id).first()
if not details:
return jsonify({'message': 'TrainingProjectDetails not found'}), 404
details.class_map = class_map
details.description = description
db.session.commit()
return jsonify({'message': 'Class map and description updated', 'details': details.to_dict()})
except Exception as error:
db.session.rollback()
return jsonify({'message': 'Failed to update class map or description', 'error': str(error)}), 500
@api_bp.route('/yolox-settings', methods=['POST'])
def yolox_settings():
"""Receive YOLOX settings and save to DB"""
try:
settings = request.form.to_dict()
print('--- YOLOX settings received ---')
print('settings:', settings)
# Map select_model to selected_model if present
if 'select_model' in settings and 'selected_model' not in settings:
settings['selected_model'] = settings['select_model']
del settings['select_model']
# Lookup or create project_details_id
if not settings.get('project_id') or not settings['project_id'].isdigit():
raise ValueError('Missing or invalid project_id in request')
project_id = int(settings['project_id'])
details = TrainingProjectDetails.query.filter_by(project_id=project_id).first()
if not details:
details = TrainingProjectDetails(
project_id=project_id,
annotation_projects=[],
class_map=None,
description=None
)
db.session.add(details)
db.session.commit()
settings['project_details_id'] = details.id
# Map 'act' to 'activation'
if 'act' in settings:
settings['activation'] = settings['act']
del settings['act']
# Type conversions
numeric_fields = [
'max_epoch', 'depth', 'width', 'warmup_epochs', 'warmup_lr',
'no_aug_epochs', 'min_lr_ratio', 'weight_decay', 'momentum',
'print_interval', 'eval_interval', 'test_conf', 'nmsthre',
'multiscale_range', 'degrees', 'translate', 'shear',
'train', 'valid', 'test'
]
        for field in numeric_fields:
            if field in settings and settings[field] != '':
                settings[field] = float(settings[field])
# Boolean conversions
boolean_fields = ['ema', 'enable_mixup', 'save_history_ckpt']
for field in boolean_fields:
if field in settings:
if isinstance(settings[field], str):
settings[field] = settings[field].lower() == 'true'
else:
settings[field] = bool(settings[field])
# Array conversions
array_fields = ['mosaic_scale', 'mixup_scale', 'scale']
for field in array_fields:
if field in settings and isinstance(settings[field], str):
settings[field] = [float(x.strip()) for x in settings[field].split(',') if x.strip()]
# Trim string fields
for key in settings:
if isinstance(settings[key], str):
settings[key] = settings[key].strip()
# Default for transfer_learning
if 'transfer_learning' not in settings:
settings['transfer_learning'] = False
# Convert empty seed to None
if 'seed' in settings and (settings['seed'] == '' or settings['seed'] is None):
settings['seed'] = None
# Validate required fields
required_fields = [
'project_details_id', 'exp_name', 'max_epoch', 'depth', 'width',
'activation', 'train', 'valid', 'test', 'selected_model', 'transfer_learning'
]
for field in required_fields:
if field not in settings or settings[field] in [None, '']:
raise ValueError(f'Missing required field: {field}')
print('Received YOLOX settings:', settings)
# Handle uploaded model file
if 'ckpt_upload' in request.files:
file = request.files['ckpt_upload']
upload_dir = os.path.join(os.path.dirname(__file__), '..', 'uploads')
os.makedirs(upload_dir, exist_ok=True)
            filename = secure_filename(file.filename) if file.filename else f'uploaded_model_{project_id}.pth'
file_path = os.path.join(upload_dir, filename)
file.save(file_path)
settings['model_upload'] = file_path
# Save to DB
from services.push_yolox_exp import push_yolox_exp_to_db
training = push_yolox_exp_to_db(settings)
return jsonify({'message': 'YOLOX settings saved to DB', 'training': training.to_dict()})
except Exception as error:
print(f'Error in /api/yolox-settings: {error}')
db.session.rollback()
return jsonify({'message': 'Failed to save YOLOX settings', 'error': str(error)}), 500
@api_bp.route('/yolox-settings/upload', methods=['POST'])
def yolox_settings_upload():
"""Upload binary model file"""
try:
project_id = request.args.get('project_id')
if not project_id:
return jsonify({'message': 'Missing project_id in query'}), 400
# Save file to disk
upload_dir = os.path.join(os.path.dirname(__file__), '..', 'uploads')
os.makedirs(upload_dir, exist_ok=True)
        filename = secure_filename(request.headers.get('x-upload-filename', '')) or f'uploaded_model_{project_id}.pth'
file_path = os.path.join(upload_dir, filename)
# Read binary data
with open(file_path, 'wb') as f:
f.write(request.data)
# Update latest training row
details = TrainingProjectDetails.query.filter_by(project_id=project_id).first()
if not details:
return jsonify({'message': 'No TrainingProjectDetails found for project_id'}), 404
training = Training.query.filter_by(project_details_id=details.id).order_by(Training.id.desc()).first()
if not training:
return jsonify({'message': 'No training found for project_id'}), 404
training.model_upload = file_path
db.session.commit()
return jsonify({
'message': 'Model file uploaded and saved to disk',
'filename': filename,
'trainingId': training.id
})
except Exception as error:
print(f'Error in /api/yolox-settings/upload: {error}')
db.session.rollback()
return jsonify({'message': 'Failed to upload model file', 'error': str(error)}), 500
@api_bp.route('/trainings', methods=['GET'])
def get_trainings():
"""Get all trainings (optionally filtered by project_id)"""
try:
project_id = request.args.get('project_id')
if project_id:
# Find all details rows for this project
details_rows = TrainingProjectDetails.query.filter_by(project_id=project_id).all()
if not details_rows:
return jsonify([])
# Get all trainings linked to any details row for this project
details_ids = [d.id for d in details_rows]
trainings = Training.query.filter(Training.project_details_id.in_(details_ids)).all()
return jsonify([t.to_dict() for t in trainings])
else:
# Return all trainings
trainings = Training.query.all()
return jsonify([t.to_dict() for t in trainings])
except Exception as error:
return jsonify({'message': 'Failed to fetch trainings', 'error': str(error)}), 500
@api_bp.route('/trainings/<int:id>', methods=['DELETE'])
def delete_training(id):
"""Delete a training by id"""
try:
training = Training.query.get(id)
if training:
db.session.delete(training)
db.session.commit()
return jsonify({'message': 'Training deleted'})
else:
return jsonify({'message': 'Training not found'}), 404
except Exception as error:
db.session.rollback()
return jsonify({'message': 'Failed to delete training', 'error': str(error)}), 500
@api_bp.route('/training-projects/<int:id>', methods=['DELETE'])
def delete_training_project(id):
"""Delete a training project and all related entries"""
try:
# Find details rows for this project
details_rows = TrainingProjectDetails.query.filter_by(project_id=id).all()
details_ids = [d.id for d in details_rows]
# Delete all trainings linked to these details
if details_ids:
Training.query.filter(Training.project_details_id.in_(details_ids)).delete(synchronize_session=False)
TrainingProjectDetails.query.filter_by(project_id=id).delete()
# Delete the project itself
project = TrainingProject.query.get(id)
if project:
db.session.delete(project)
db.session.commit()
return jsonify({'message': 'Training project and all related entries deleted'})
else:
return jsonify({'message': 'Training project not found'}), 404
except Exception as error:
db.session.rollback()
return jsonify({'message': 'Failed to delete training project', 'error': str(error)}), 500
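The `/api/yolox-settings` route above coerces raw form strings into numbers, booleans, and lists before persisting them. The same conversions can be exercised in isolation; `normalize_settings` below is an illustrative helper mirroring that logic, not part of the route:

```python
def normalize_settings(raw: dict) -> dict:
    """Coerce raw form strings the way /api/yolox-settings does (sketch)."""
    settings = dict(raw)
    numeric_fields = ['max_epoch', 'depth', 'width', 'warmup_epochs']
    boolean_fields = ['ema', 'enable_mixup', 'save_history_ckpt']
    array_fields = ['mosaic_scale', 'mixup_scale', 'scale']
    for field in numeric_fields:
        if field in settings and settings[field] != '':
            settings[field] = float(settings[field])
    for field in boolean_fields:
        if field in settings and isinstance(settings[field], str):
            settings[field] = settings[field].lower() == 'true'
    for field in array_fields:
        if field in settings and isinstance(settings[field], str):
            settings[field] = [float(x.strip()) for x in settings[field].split(',') if x.strip()]
    # Trim leftover string fields
    return {k: v.strip() if isinstance(v, str) else v for k, v in settings.items()}

normalized = normalize_settings({
    'max_epoch': '300', 'ema': 'True', 'mosaic_scale': '0.1, 2', 'exp_name': ' demo '
})
print(normalized)  # {'max_epoch': 300.0, 'ema': True, 'mosaic_scale': [0.1, 2.0], 'exp_name': 'demo'}
```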

backend/services/__init__.py
@@ -0,0 +1 @@
# Services module

backend/services/fetch_labelstudio.py
@@ -0,0 +1,85 @@
import requests
import time
# Label Studio connection settings; consider loading the token from an environment variable
API_URL = 'http://192.168.1.19:8080/api'
API_TOKEN = 'c1cef980b7c73004f4ee880a42839313b863869f'
def fetch_label_studio_project(project_id):
"""Fetch Label Studio project annotations"""
export_url = f'{API_URL}/projects/{project_id}/export?exportType=JSON_MIN'
headers = {'Authorization': f'Token {API_TOKEN}'}
# Trigger export
res = requests.get(export_url, headers=headers)
if not res.ok:
error_text = res.text if res.text else ''
print(f'Failed to trigger export: {res.status_code} {res.reason} - {error_text}')
raise Exception(f'Failed to trigger export: {res.status_code} {res.reason}')
data = res.json()
# If data is an array, it's ready
if isinstance(data, list):
return data
# If not, poll for the export file
file_url = data.get('download_url') or data.get('url')
tries = 0
while not file_url and tries < 20:
time.sleep(2)
res = requests.get(export_url, headers=headers)
if not res.ok:
error_text = res.text if res.text else ''
print(f'Failed to poll export: {res.status_code} {res.reason} - {error_text}')
raise Exception(f'Failed to poll export: {res.status_code} {res.reason}')
data = res.json()
file_url = data.get('download_url') or data.get('url')
tries += 1
if not file_url:
raise Exception('Label Studio export did not become ready')
# Download the export file
full_url = file_url if file_url.startswith('http') else f"{API_URL.replace('/api', '')}{file_url}"
res = requests.get(full_url, headers=headers)
if not res.ok:
error_text = res.text if res.text else ''
print(f'Failed to download export: {res.status_code} {res.reason} - {error_text}')
raise Exception(f'Failed to download export: {res.status_code} {res.reason}')
return res.json()
def fetch_project_ids_and_titles():
"""Fetch all Label Studio project IDs and titles"""
try:
response = requests.get(
f'{API_URL}/projects/',
headers={
'Authorization': f'Token {API_TOKEN}',
'Content-Type': 'application/json'
}
)
if not response.ok:
error_text = response.text if response.text else ''
print(f'Failed to fetch projects: {response.status_code} {response.reason} - {error_text}')
raise Exception(f'HTTP error! status: {response.status_code}')
data = response.json()
if 'results' not in data or not isinstance(data['results'], list):
raise Exception('API response does not contain results array')
# Extract id and title from each project
projects = [
{'id': project['id'], 'title': project['title']}
for project in data['results']
]
print(projects)
return projects
except Exception as error:
print(f'Failed to fetch projects: {error}')
return []
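The export endpoint above is polled until a download URL appears (up to 20 tries, sleeping 2 s between requests). The retry pattern can be sketched generically; `poll_until` and the canned responses below are illustrative only (the sleep is omitted here):

```python
import itertools

def poll_until(fetch, extract, max_tries=20):
    """Call fetch() repeatedly until extract() yields a truthy value."""
    for _ in range(max_tries):
        value = extract(fetch())
        if value:
            return value
    raise RuntimeError('export did not become ready')

# Fake responses: two "not ready" payloads, then one carrying a download URL
responses = itertools.chain(
    [{}, {}],
    itertools.repeat({'download_url': '/export/123.json'}),
)
url = poll_until(lambda: next(responses),
                 lambda d: d.get('download_url') or d.get('url'))
print(url)  # /export/123.json
```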

backend/services/generate_json_yolox.py
@@ -0,0 +1,179 @@
import json
import os
import math
from models.TrainingProject import TrainingProject
from models.TrainingProjectDetails import TrainingProjectDetails
from models.Images import Image
from models.Annotation import Annotation
def generate_training_json(training_id):
"""Generate COCO JSON for training, validation, and test sets"""
# training_id is now project_details_id
training_project_details = TrainingProjectDetails.query.get(training_id)
if not training_project_details:
raise Exception(f'No TrainingProjectDetails found for project_details_id {training_id}')
details_obj = training_project_details.to_dict()
# Get parent project for name
training_project = TrainingProject.query.get(details_obj['project_id'])
# Get split percentages (default values if not set)
train_percent = details_obj.get('train_percent', 85)
valid_percent = details_obj.get('valid_percent', 10)
test_percent = details_obj.get('test_percent', 5)
coco_images = []
coco_annotations = []
coco_categories = []
category_map = {}
category_id = 0
image_id = 0
annotation_id = 0
    for cls in (details_obj['class_map'] or []):
asg_map = []
list_asg = cls[1]
for asg in list_asg:
asg_map.append({'original': asg[0], 'mapped': asg[1]})
# Build category list and mapping
if asg[1] and asg[1] not in category_map:
category_map[asg[1]] = category_id
coco_categories.append({'id': category_id, 'name': asg[1], 'supercategory': ''})
category_id += 1
# Get images for this project
images = Image.query.filter_by(project_id=cls[0]).all()
for image in images:
image_id += 1
file_name = image.image_path
# Clean up file path
if '%20' in file_name:
file_name = file_name.replace('%20', ' ')
if file_name and file_name.startswith('/data/local-files/?d='):
file_name = file_name.replace('/data/local-files/?d=', '')
file_name = file_name.replace('/home/kitraining/home/kitraining/', '')
if file_name and file_name.startswith('home/kitraining/To_Annotate/'):
file_name = file_name.replace('home/kitraining/To_Annotate/', '')
# Get annotations for this image
annotations = Annotation.query.filter_by(image_id=image.image_id).all()
coco_images.append({
'id': image_id,
'file_name': file_name,
'width': image.width or 0,
'height': image.height or 0
})
for annotation in annotations:
# Translate class name using asg_map
mapped_class = annotation.Label
for map_entry in asg_map:
if annotation.Label == map_entry['original']:
mapped_class = map_entry['mapped']
break
# Only add annotation if mapped_class is valid
if mapped_class and mapped_class in category_map:
annotation_id += 1
area = 0
if annotation.width and annotation.height:
area = annotation.width * annotation.height
coco_annotations.append({
'id': annotation_id,
'image_id': image_id,
'category_id': category_map[mapped_class],
'bbox': [annotation.x, annotation.y, annotation.width, annotation.height],
'area': area,
'iscrowd': 0
})
# Shuffle images for random split using seed
def seeded_random(seed):
x = math.sin(seed) * 10000
return x - math.floor(x)
def shuffle(array, seed):
for i in range(len(array) - 1, 0, -1):
j = int(seeded_random(seed + i) * (i + 1))
array[i], array[j] = array[j], array[i]
# Use seed from details_obj if present, else default to 42
split_seed = details_obj.get('seed', 42)
if split_seed is not None:
split_seed = int(split_seed)
else:
split_seed = 42
shuffle(coco_images, split_seed)
# Split images
total_images = len(coco_images)
train_count = int(total_images * train_percent / 100)
valid_count = int(total_images * valid_percent / 100)
    # the remaining images form the test split
train_images = coco_images[0:train_count]
valid_images = coco_images[train_count:train_count + valid_count]
test_images = coco_images[train_count + valid_count:]
# Helper to get image ids for each split
train_image_ids = {img['id'] for img in train_images}
valid_image_ids = {img['id'] for img in valid_images}
test_image_ids = {img['id'] for img in test_images}
# Split annotations
train_annotations = [ann for ann in coco_annotations if ann['image_id'] in train_image_ids]
valid_annotations = [ann for ann in coco_annotations if ann['image_id'] in valid_image_ids]
test_annotations = [ann for ann in coco_annotations if ann['image_id'] in test_image_ids]
# Build final COCO JSONs
def build_coco_json(images, annotations, categories):
return {
'images': images,
'annotations': annotations,
'categories': categories
}
train_json = build_coco_json(train_images, train_annotations, coco_categories)
valid_json = build_coco_json(valid_images, valid_annotations, coco_categories)
test_json = build_coco_json(test_images, test_annotations, coco_categories)
# Create output directory
project_name = training_project.title.replace(' ', '_') if training_project and training_project.title else f'project_{details_obj["project_id"]}'
annotations_dir = '/home/kitraining/To_Annotate/annotations'
os.makedirs(annotations_dir, exist_ok=True)
# Write to files
train_path = f'{annotations_dir}/coco_project_{training_id}_train.json'
valid_path = f'{annotations_dir}/coco_project_{training_id}_valid.json'
test_path = f'{annotations_dir}/coco_project_{training_id}_test.json'
with open(train_path, 'w') as f:
json.dump(train_json, f, indent=2)
with open(valid_path, 'w') as f:
json.dump(valid_json, f, indent=2)
with open(test_path, 'w') as f:
json.dump(test_json, f, indent=2)
print(f'COCO JSON splits written to {annotations_dir} for trainingId {training_id}')
# Also generate inference exp.py
from services.generate_yolox_exp import generate_yolox_inference_exp
project_folder = os.path.join(os.path.dirname(__file__), '..', project_name, str(training_id))
os.makedirs(project_folder, exist_ok=True)
inference_exp_path = os.path.join(project_folder, 'exp_infer.py')
try:
exp_content = generate_yolox_inference_exp(training_id)
with open(inference_exp_path, 'w') as f:
f.write(exp_content)
print(f'Inference exp.py written to {inference_exp_path}')
except Exception as err:
print(f'Failed to generate inference exp.py: {err}')
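The sin-based `seeded_random`/`shuffle` pair above (a port of the JavaScript original) makes the train/valid/test split reproducible: the same seed always produces the same ordering. A standalone sketch:

```python
import math

def seeded_random(seed):
    # Deterministic pseudo-random value in [0, 1) derived from the seed
    x = math.sin(seed) * 10000
    return x - math.floor(x)

def shuffle(array, seed):
    # Fisher-Yates shuffle driven by the seeded generator
    for i in range(len(array) - 1, 0, -1):
        j = int(seeded_random(seed + i) * (i + 1))
        array[i], array[j] = array[j], array[i]

a = list(range(10))
b = list(range(10))
shuffle(a, 42)
shuffle(b, 42)
print(a == b)  # True: identical seeds give identical orderings

c = list(range(10))
shuffle(c, 7)  # a different seed, still a permutation of the input
```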

backend/services/generate_yolox_exp.py
@@ -0,0 +1,152 @@
import os
import shutil
from models.training import Training
from models.TrainingProject import TrainingProject
def generate_yolox_exp(training_id):
"""Generate YOLOX exp.py file"""
# Fetch training row from DB
training = Training.query.get(training_id)
if not training:
training = Training.query.filter_by(project_details_id=training_id).first()
if not training:
raise Exception(f'Training not found for trainingId or project_details_id: {training_id}')
# If transfer_learning is 'coco', copy default exp.py
if training.transfer_learning == 'coco':
selected_model = training.selected_model.lower().replace('-', '_')
exp_source_path = f'/home/kitraining/Yolox/YOLOX-main/exps/default/{selected_model}.py'
if not os.path.exists(exp_source_path):
raise Exception(f'Default exp.py not found for model: {selected_model} at {exp_source_path}')
        # Copy to the project folder (derive the folder from the parent project's title)
        from models.TrainingProjectDetails import TrainingProjectDetails
        project_details_id = training.project_details_id
        details = TrainingProjectDetails.query.get(project_details_id)
        parent = TrainingProject.query.get(details.project_id) if details else None
        project_name = parent.title.replace(' ', '_') if parent and parent.title else f'project_{project_details_id}'
        project_folder = os.path.join(os.path.dirname(__file__), '..', project_name, str(project_details_id))
os.makedirs(project_folder, exist_ok=True)
exp_dest_path = os.path.join(project_folder, 'exp.py')
shutil.copyfile(exp_source_path, exp_dest_path)
return {'type': 'default', 'expPath': exp_dest_path}
# If transfer_learning is 'sketch', generate custom exp.py
if training.transfer_learning == 'sketch':
exp_content = generate_yolox_inference_exp(training_id)
return {'type': 'custom', 'expContent': exp_content}
raise Exception(f'Unknown transfer_learning type: {training.transfer_learning}')
def save_yolox_exp(training_id, out_path):
"""Save YOLOX exp.py to specified path"""
exp_result = generate_yolox_exp(training_id)
if exp_result['type'] == 'custom' and 'expContent' in exp_result:
with open(out_path, 'w') as f:
f.write(exp_result['expContent'])
return out_path
elif exp_result['type'] == 'default' and 'expPath' in exp_result:
# Optionally copy the file if outPath is different
if exp_result['expPath'] != out_path:
shutil.copyfile(exp_result['expPath'], out_path)
return out_path
else:
raise Exception('Unknown expResult type or missing content')
def generate_yolox_inference_exp(training_id, options=None):
"""Generate inference exp.py using DB values"""
if options is None:
options = {}
training = Training.query.get(training_id)
if not training:
training = Training.query.filter_by(project_details_id=training_id).first()
if not training:
raise Exception(f'Training not found for trainingId or project_details_id: {training_id}')
    # Annotation file names always use the project_details_id (matching generate_training_json)
    project_details_id = training.project_details_id
    data_dir = options.get('data_dir', '/home/kitraining/To_Annotate/')
    train_ann = options.get('train_ann', f'coco_project_{project_details_id}_train.json')
    val_ann = options.get('val_ann', f'coco_project_{project_details_id}_valid.json')
    test_ann = options.get('test_ann', f'coco_project_{project_details_id}_test.json')
# Get num_classes from TrainingProject.classes JSON
num_classes = 80
try:
        from models.TrainingProjectDetails import TrainingProjectDetails
        details = TrainingProjectDetails.query.get(project_details_id)
        training_project = TrainingProject.query.get(details.project_id) if details else None
if training_project and training_project.classes:
classes_arr = training_project.classes
if isinstance(classes_arr, str):
import json
classes_arr = json.loads(classes_arr)
if isinstance(classes_arr, list):
num_classes = len([c for c in classes_arr if c not in [None, '']])
elif isinstance(classes_arr, dict):
num_classes = len([k for k, v in classes_arr.items() if v not in [None, '']])
except Exception as e:
print(f'Could not determine num_classes from TrainingProject.classes: {e}')
depth = options.get('depth', training.depth or 1.00)
width = options.get('width', training.width or 1.00)
input_size = options.get('input_size', training.input_size or [640, 640])
mosaic_scale = options.get('mosaic_scale', training.mosaic_scale or [0.1, 2])
random_size = options.get('random_size', [10, 20])
test_size = options.get('test_size', training.test_size or [640, 640])
exp_name = options.get('exp_name', 'inference_exp')
enable_mixup = options.get('enable_mixup', False)
# Build exp content
exp_content = f'''#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Copyright (c) Megvii, Inc. and its affiliates.
import os
from yolox.exp import Exp as MyExp
class Exp(MyExp):
def __init__(self):
super(Exp, self).__init__()
self.data_dir = "{data_dir}"
self.train_ann = "{train_ann}"
self.val_ann = "{val_ann}"
        self.test_ann = "{test_ann}"
self.num_classes = {num_classes}
'''
# Set pretrained_ckpt if transfer_learning is 'coco'
if training.transfer_learning and isinstance(training.transfer_learning, str) and training.transfer_learning.lower() == 'coco':
yolox_base_dir = '/home/kitraining/Yolox/YOLOX-main'
selected_model = training.selected_model.replace('.pth', '') if training.selected_model else ''
if selected_model:
exp_content += f" self.pretrained_ckpt = r'{yolox_base_dir}/pretrained/{selected_model}.pth'\n"
# Format arrays
input_size_str = ', '.join(map(str, input_size)) if isinstance(input_size, list) else str(input_size)
mosaic_scale_str = ', '.join(map(str, mosaic_scale)) if isinstance(mosaic_scale, list) else str(mosaic_scale)
random_size_str = ', '.join(map(str, random_size)) if isinstance(random_size, list) else str(random_size)
test_size_str = ', '.join(map(str, test_size)) if isinstance(test_size, list) else str(test_size)
exp_content += f''' self.depth = {depth}
self.width = {width}
self.input_size = ({input_size_str})
self.mosaic_scale = ({mosaic_scale_str})
self.random_size = ({random_size_str})
self.test_size = ({test_size_str})
self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]
self.enable_mixup = {str(enable_mixup)}
'''
return exp_content
def save_yolox_inference_exp(training_id, out_path, options=None):
"""Save inference exp.py to custom path"""
exp_content = generate_yolox_inference_exp(training_id, options)
with open(out_path, 'w') as f:
f.write(exp_content)
return out_path
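Because `generate_yolox_inference_exp` builds Python source by string interpolation, an indentation slip silently produces a broken exp file. A cheap guard is to `compile()` the text before writing it; the snippet below applies that check to a template shaped like the generated output (all values are placeholders, and the `MyExp` base class is omitted to keep it self-contained):

```python
template = '''import os

class Exp:
    def __init__(self):
        self.data_dir = "/home/kitraining/To_Annotate/"
        self.train_ann = "coco_project_1_train.json"
        self.num_classes = 3
        self.depth = 1.0
        self.width = 1.0
'''

# compile() raises SyntaxError if the generated code is malformed
compile(template, 'exp.py', 'exec')

# Executing the template confirms the class is usable
namespace = {}
exec(template, namespace)
exp = namespace['Exp']()
print(exp.num_classes)  # 3
```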

backend/services/push_yolox_exp.py
@@ -0,0 +1,36 @@
from models.training import Training
from models.TrainingProjectDetails import TrainingProjectDetails
from database.database import db
def push_yolox_exp_to_db(settings):
"""Save YOLOX settings to database"""
normalized = dict(settings)
# Map 'act' from frontend to 'activation' for DB
if 'act' in normalized:
normalized['activation'] = normalized['act']
del normalized['act']
# Convert 'on'/'off' to boolean for save_history_ckpt
if isinstance(normalized.get('save_history_ckpt'), str):
normalized['save_history_ckpt'] = normalized['save_history_ckpt'] == 'on'
# Convert comma-separated strings to arrays
for key in ['input_size', 'test_size', 'mosaic_scale', 'mixup_scale']:
if isinstance(normalized.get(key), str):
arr = [float(v.strip()) for v in normalized[key].split(',')]
normalized[key] = arr[0] if len(arr) == 1 else arr
# Find TrainingProjectDetails for this project
details = TrainingProjectDetails.query.filter_by(project_id=normalized['project_id']).first()
if not details:
raise Exception(f'TrainingProjectDetails not found for project_id {normalized["project_id"]}')
    normalized['project_details_id'] = details.id
    # project_id is not a Training column (the link goes via project_details_id), so drop it
    normalized.pop('project_id', None)
    # Create DB row
    training = Training(**normalized)
db.session.add(training)
db.session.commit()
return training
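`push_yolox_exp_to_db` collapses a one-element comma list to a scalar ('640' becomes 640.0, '640,640' becomes [640.0, 640.0]). That handling, sketched in isolation (`coerce_size` is an illustrative helper, not part of the module):

```python
def coerce_size(value):
    """Mirror the comma-list handling in push_yolox_exp_to_db (sketch)."""
    if not isinstance(value, str):
        return value  # already a number or list
    arr = [float(v.strip()) for v in value.split(',')]
    return arr[0] if len(arr) == 1 else arr

print(coerce_size('640'))       # 640.0
print(coerce_size('640, 640'))  # [640.0, 640.0]
```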

backend/services/seed_label_studio.py
@@ -0,0 +1,149 @@
from database.database import db
from models.LabelStudioProject import LabelStudioProject
from models.Images import Image
from models.Annotation import Annotation
from services.fetch_labelstudio import fetch_label_studio_project, fetch_project_ids_and_titles
update_status = {"running": False}
def seed_label_studio():
"""Seed database with Label Studio project data"""
update_status["running"] = True
print('Seeding started')
try:
projects = fetch_project_ids_and_titles()
for project in projects:
print(f"Processing project {project['id']} ({project['title']})")
# Upsert project in DB
existing_project = LabelStudioProject.query.filter_by(project_id=project['id']).first()
if existing_project:
existing_project.title = project['title']
else:
new_project = LabelStudioProject(project_id=project['id'], title=project['title'])
db.session.add(new_project)
db.session.commit()
# Fetch project data (annotations array)
data = fetch_label_studio_project(project['id'])
if not isinstance(data, list) or len(data) == 0:
print(f"No annotation data for project {project['id']}")
continue
# Remove old images and annotations for this project
old_images = Image.query.filter_by(project_id=project['id']).all()
old_image_ids = [img.image_id for img in old_images]
if old_image_ids:
Annotation.query.filter(Annotation.image_id.in_(old_image_ids)).delete(synchronize_session=False)
Image.query.filter_by(project_id=project['id']).delete()
db.session.commit()
print(f"Deleted {len(old_image_ids)} old images and their annotations for project {project['id']}")
            # Prepare bulk-insert lists
images_bulk = []
anns_bulk = []
for ann in data:
# Extract width/height
width = None
height = None
if isinstance(ann.get('label_rectangles'), list) and len(ann['label_rectangles']) > 0:
width = ann['label_rectangles'][0].get('original_width')
height = ann['label_rectangles'][0].get('original_height')
elif isinstance(ann.get('label'), list) and len(ann['label']) > 0:
if ann['label'][0].get('original_width') and ann['label'][0].get('original_height'):
width = ann['label'][0]['original_width']
height = ann['label'][0]['original_height']
# Only process if width and height are valid
if width and height:
image_data = {
'project_id': project['id'],
'image_path': ann.get('image'),
'width': width,
'height': height
}
images_bulk.append(image_data)
# Handle multiple annotations per image
if isinstance(ann.get('label_rectangles'), list):
for ann_detail in ann['label_rectangles']:
# Get label safely
rectanglelabels = ann_detail.get('rectanglelabels', [])
if isinstance(rectanglelabels, list) and len(rectanglelabels) > 0:
label = rectanglelabels[0]
elif isinstance(rectanglelabels, str):
label = rectanglelabels
else:
label = 'unknown'
ann_data = {
'image_path': ann.get('image'),
'x': (ann_detail['x'] * width) / 100,
'y': (ann_detail['y'] * height) / 100,
'width': (ann_detail['width'] * width) / 100,
'height': (ann_detail['height'] * height) / 100,
'Label': label
}
anns_bulk.append(ann_data)
elif isinstance(ann.get('label'), list):
for ann_detail in ann['label']:
# Get label safely
rectanglelabels = ann_detail.get('rectanglelabels', [])
if isinstance(rectanglelabels, list) and len(rectanglelabels) > 0:
label = rectanglelabels[0]
elif isinstance(rectanglelabels, str):
label = rectanglelabels
else:
label = 'unknown'
ann_data = {
'image_path': ann.get('image'),
'x': (ann_detail['x'] * width) / 100,
'y': (ann_detail['y'] * height) / 100,
'width': (ann_detail['width'] * width) / 100,
'height': (ann_detail['height'] * height) / 100,
'Label': label
}
anns_bulk.append(ann_data)
# Insert images and get generated IDs
inserted_images = []
for img_data in images_bulk:
new_image = Image(**img_data)
db.session.add(new_image)
db.session.flush() # Flush to get the ID
inserted_images.append(new_image)
db.session.commit()
# Map image_path -> image_id
image_map = {img.image_path: img.image_id for img in inserted_images}
# Assign correct image_id to each annotation
for ann_data in anns_bulk:
ann_data['image_id'] = image_map.get(ann_data['image_path'])
del ann_data['image_path']
# Insert annotations
for ann_data in anns_bulk:
new_annotation = Annotation(**ann_data)
db.session.add(new_annotation)
db.session.commit()
print(f"Inserted {len(images_bulk)} images and {len(anns_bulk)} annotations for project {project['id']}")
print('Seeding done')
return {'success': True, 'message': 'Data inserted successfully!'}
except Exception as error:
print(f'Error inserting data: {error}')
db.session.rollback()
return {'success': False, 'message': str(error)}
finally:
update_status["running"] = False
        print("update_status['running'] set to False")
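Label Studio exports rectangle coordinates as percentages of the original image, which is why the seeding code multiplies by `width`/`height` and divides by 100. The conversion can be checked in isolation (the helper name here is illustrative, not part of the service):

```python
def percent_box_to_pixels(box: dict, img_w: int, img_h: int) -> dict:
    """Convert a Label Studio percent-based rectangle to pixel units."""
    return {
        'x': box['x'] * img_w / 100,
        'y': box['y'] * img_h / 100,
        'width': box['width'] * img_w / 100,
        'height': box['height'] * img_h / 100,
    }

# A 50%-wide box on a 1920x1080 image spans 960 pixels
print(percent_box_to_pixels({'x': 10, 'y': 20, 'width': 50, 'height': 25}, 1920, 1080))
# → {'x': 192.0, 'y': 216.0, 'width': 960.0, 'height': 270.0}
```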

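The seeding code relies on `db.session.flush()` to obtain autogenerated image IDs before committing, so annotations can be linked by `image_path`. A minimal sketch of that flush-then-map pattern, using plain SQLAlchemy on an in-memory SQLite database (the real service uses Flask-SQLAlchemy, and this `Image` model is a stripped-down stand-in):

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Image(Base):
    __tablename__ = 'images'
    image_id = Column(Integer, primary_key=True)
    image_path = Column(String)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

with Session(engine) as session:
    imgs = [Image(image_path=p) for p in ['a.jpg', 'b.jpg']]
    session.add_all(imgs)
    session.flush()  # assigns autoincrement primary keys without committing
    image_map = {img.image_path: img.image_id for img in imgs}
    session.commit()

print(image_map)  # e.g. {'a.jpg': 1, 'b.jpg': 2}
```

Flushing inside the open transaction keeps the whole per-project seed atomic: if a later insert fails, the rollback also discards the images that were flushed.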
14
backend/start.py Normal file
View File

@@ -0,0 +1,14 @@
#!/usr/bin/env python3
"""
Start the Flask backend server
"""
import sys
import os
# Add the backend directory to Python path
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
from app import app
if __name__ == '__main__':
app.run(host='0.0.0.0', port=3000, debug=True)