@mytec: iter3.2.5 gpu polish start

This commit is contained in:
2026-02-03 12:33:52 +02:00
parent 20d19d09ae
commit a61753c642
10 changed files with 1125 additions and 2 deletions


@@ -44,7 +44,8 @@
"Bash(sort:*)",
"Bash(journalctl:*)",
"Bash(pkill:*)",
"Bash(pip3 list:*)"
"Bash(pip3 list:*)",
"Bash(chmod:*)"
]
}
}


@@ -0,0 +1,504 @@
# RFCP — Iteration 3.5.2: Native Backend + GPU Fix + UI Polish
## Overview
Fix critical architecture issues: the GPU indicator dropdown renders behind the sidebar, GPU acceleration
is not working (CuPy installed in the wrong Python environment), and prepare a path to removing the WSL2
dependency for end users. Plus UI polish items carried over from 3.5.1.
**Priority:** GPU fixes first, then UI polish, then native Windows exploration.
---
## CRITICAL CONTEXT
### Current Architecture Problem
```
RFCP.exe (Electron, Windows)
  └── launches backend via WSL2:
      python3 -m uvicorn app.main:app --host 0.0.0.0 --port 8090
      └── /usr/bin/python3 (WSL2 system Python 3.12)
          └── NO venv, NO CuPy installed

User installed CuPy in Windows Python → backend doesn't see it.
User installed CuPy in WSL system Python → needs --break-system-packages
```
### GPU Hardware (Confirmed Working)
```
nvidia-smi output (from WSL2):
NVIDIA GeForce RTX 4060 Laptop GPU
Driver: 581.42 (Windows) / 580.95.02 (WSL2)
CUDA: 13.0
VRAM: 8188 MiB
GPU passthrough: WORKING ✅
```
### Files to Reference
```
backend/app/services/gpu_backend.py — GPUManager class
backend/app/api/routes/gpu.py — GPU API endpoints
frontend/src/components/ui/GPUIndicator.tsx — GPU badge/dropdown
desktop/ — Electron app source
installer/ — Build scripts
```
---
## Task 1: Fix GPU Indicator Dropdown Z-Index (Priority 1 — 10 min)
### Problem
GPU dropdown WORKS (opens on click, shows diagnostics, install hints) but renders
BEHIND the right sidebar panel. The sidebar (Sites, Coverage Settings) has higher
z-index than the GPU dropdown, so the dropdown is invisible/hidden underneath.
See screenshots: dropdown is partially visible only when sidebar is made very narrow.
It shows: "COMPUTE DEVICES", "CPU (NumPy)", install hints, "Run Diagnostics",
and even diagnostics JSON — all working but hidden behind sidebar.
### Root Cause
GPUIndicator dropdown z-index is lower than the right sidebar panel z-index.
### Solution
In `GPUIndicator.tsx` — find the dropdown container div and set z-index
higher than the sidebar:
```tsx
{isOpen && (
  <div
    className="absolute top-full mt-1 bg-dark-surface border border-dark-border
               rounded-lg shadow-2xl p-3 min-w-[300px]"
    style={{ zIndex: 9999 }} // MUST be above sidebar (which is ~z-50 or z-auto)
  >
    ...
  </div>
)}
```
**Key requirements:**
1. `z-index: 9999` (or at minimum higher than sidebar)
2. Position: dropdown should open to the LEFT (toward center of screen)
to avoid being cut off by right edge
3. `right-0` on the absolute positioning (anchored to right edge of badge)
**Alternative approach** — use Tailwind z-index:
```tsx
className="absolute top-full right-0 mt-1 z-[9999] ..."
```
**Also check:** The parent container of GPUIndicator might need `position: relative`
for absolute positioning to work correctly against the right sidebar.
### Testing
- [ ] Click "CPU" badge → dropdown appears ABOVE the sidebar
- [ ] Full dropdown visible: devices, install hints, diagnostics
- [ ] Dropdown doesn't get cut off on right side
- [ ] Click outside → dropdown closes
- [ ] Dropdown works at any window width
---
## Task 2: Install CuPy in WSL Backend (Priority 1 — 10 min)
### Problem
CuPy installed in Windows Python, but backend runs in WSL2 system Python.
### Solution
Add a startup check in the backend that detects missing GPU packages
and provides clear instructions. Also, the Electron app should try to
install dependencies on first launch.
**Step 1: Backend startup GPU check**
In `backend/app/main.py`, add on startup:
```python
@app.on_event("startup")
async def check_gpu_availability():
    """Log GPU status on startup for debugging."""
    import logging
    logger = logging.getLogger("rfcp.gpu")

    # Check CuPy
    try:
        import cupy as cp
        device_count = cp.cuda.runtime.getDeviceCount()
        if device_count > 0:
            props = cp.cuda.runtime.getDeviceProperties(0)
            name = props["name"].decode() if isinstance(props["name"], bytes) else props["name"]
            mem = props["totalGlobalMem"] // 1024 // 1024
            logger.info(f"✅ GPU detected: {name} ({mem} MB VRAM)")
            logger.info(f"   CuPy {cp.__version__}, CUDA devices: {device_count}")
        else:
            logger.warning("⚠️ CuPy installed but no CUDA devices found")
    except ImportError:
        logger.warning("⚠️ CuPy not installed — GPU acceleration disabled")
        logger.warning("   Install: pip install cupy-cuda12x --break-system-packages")
    except Exception as e:
        logger.warning(f"⚠️ CuPy error: {e}")

    # Check PyOpenCL
    try:
        import pyopencl as cl
        for p in cl.get_platforms():
            for d in p.get_devices():
                logger.info(f"✅ OpenCL device: {d.name.strip()}")
    except ImportError:
        logger.info("   PyOpenCL not installed (optional)")
    except Exception:
        pass
```
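The probe order implied above (CUDA first, then OpenCL, then CPU) can be captured in one small helper. A minimal sketch; `select_backend` is a hypothetical name, not necessarily what the GPUManager in `gpu_backend.py` exposes:

```python
def select_backend() -> str:
    """Pick the best available compute backend: CUDA, then OpenCL, then CPU.

    Hypothetical helper; RFCP's actual GPUManager may differ.
    """
    try:
        import cupy as cp
        if cp.cuda.runtime.getDeviceCount() > 0:
            return "cuda"
    except Exception:
        pass  # CuPy missing, or no CUDA device visible
    try:
        import pyopencl as cl
        if any(p.get_devices() for p in cl.get_platforms()):
            return "opencl"
    except Exception:
        pass  # PyOpenCL missing, or no OpenCL platform
    return "cpu"  # NumPy fallback always works
```

Because each probe swallows its own failure, the function degrades gracefully on machines with no GPU stack at all.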
**Step 2: GPU diagnostics endpoint enhancement**
Enhance `/api/gpu/diagnostics` to return install commands:
```python
import platform
import subprocess
import sys

@router.get("/diagnostics")
async def gpu_diagnostics():
    diagnostics = {
        "python": sys.version,
        "platform": platform.platform(),
        "executable": sys.executable,
        "is_wsl": "microsoft" in platform.release().lower(),
        "cuda_available": False,
        "opencl_available": False,
        "install_hint": "",
        "devices": []
    }

    # Check nvidia-smi
    try:
        result = subprocess.run(
            ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
            capture_output=True, text=True, timeout=5
        )
        if result.returncode == 0:
            diagnostics["nvidia_smi"] = result.stdout.strip()
    except (FileNotFoundError, subprocess.SubprocessError):
        diagnostics["nvidia_smi"] = "not found"

    # Check CuPy
    try:
        import cupy
        diagnostics["cupy_version"] = cupy.__version__
        diagnostics["cuda_available"] = True
        count = cupy.cuda.runtime.getDeviceCount()
        for i in range(count):
            props = cupy.cuda.runtime.getDeviceProperties(i)
            name = props["name"].decode() if isinstance(props["name"], bytes) else props["name"]
            diagnostics["devices"].append({
                "id": i,
                "name": name,
                "memory_mb": props["totalGlobalMem"] // 1024 // 1024,
                "backend": "CUDA"
            })
    except ImportError:
        if diagnostics.get("is_wsl"):
            diagnostics["install_hint"] = "pip3 install cupy-cuda12x --break-system-packages"
        else:
            diagnostics["install_hint"] = "pip install cupy-cuda12x"
    return diagnostics
```
**Step 3: Frontend shows diagnostics clearly**
In GPUIndicator dropdown, show:
```
⚠ No GPU detected
Your system: WSL2 + NVIDIA RTX 4060

To enable GPU acceleration:

┌─────────────────────────────────┐
│ pip3 install cupy-cuda12x       │
│   --break-system-packages       │
└─────────────────────────────────┘

Then restart RFCP.

[Copy Command]  [Run Diagnostics]
```
### Testing
- [ ] Backend startup logs GPU status
- [ ] /api/gpu/diagnostics returns WSL detection + install hint
- [ ] Frontend shows clear install instructions
- [ ] After installing CuPy in WSL + restart → GPU appears in list
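The WSL-aware hint selection used by the diagnostics endpoint is easy to test in isolation. A standalone sketch; `build_install_hint` is an illustrative helper, not an existing RFCP function:

```python
import platform

def build_install_hint(is_wsl=None) -> str:
    """Return the CuPy install command appropriate for this environment.

    Mirrors the hint logic in /api/gpu/diagnostics (sketch, not RFCP code).
    """
    if is_wsl is None:
        # WSL kernels report e.g. "5.15.167.4-microsoft-standard-WSL2"
        is_wsl = "microsoft" in platform.release().lower()
    if is_wsl:
        # WSL system Python is externally managed (PEP 668), hence the flag
        return "pip3 install cupy-cuda12x --break-system-packages"
    return "pip install cupy-cuda12x"
```

Passing `is_wsl` explicitly makes both branches testable on any machine.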
---
## Task 3: Terrain Profile Click Fix (Priority 2 — 5 min)
### Problem
Clicking "Terrain Profile" button in ruler measurement also adds a point on the map.
### Solution
In the Terrain Profile button handler:
```tsx
const handleTerrainProfile = (e: React.MouseEvent) => {
  e.stopPropagation();
  e.preventDefault();
  // ... open terrain profile
};
```
Also check if the button is rendered inside a map click handler area —
may need `L.DomEvent.disableClickPropagation(container)` on the parent.
### Testing
- [ ] Click "Terrain Profile" → opens profile, NO new ruler point added
- [ ] Map click still works normally when not clicking the button
---
## Task 4: Coverage Boundary — Real Contour Shape (Priority 2 — 45 min)
### Problem
Current boundary is a rough circle/ellipse. Should follow actual coverage contour.
### Approaches
**Option A: Shapely Alpha Shape (recommended)**
```python
# backend/app/services/boundary_service.py
from shapely.geometry import MultiPoint

def calculate_coverage_boundary(points: list, threshold_dbm: float = -100) -> list:
    """Calculate concave hull of coverage area above threshold."""
    # Filter points above threshold
    valid = [(p['lon'], p['lat']) for p in points if p['rsrp'] >= threshold_dbm]
    if len(valid) < 3:
        return []
    mp = MultiPoint(valid)
    # Use convex hull as fallback, prefer concave
    try:
        # Shapely 2.0+ has concave_hull
        from shapely import concave_hull
        hull = concave_hull(mp, ratio=0.3)
    except ImportError:
        # Fallback to convex hull
        hull = mp.convex_hull
    # Simplify to reduce points (0.001 deg ≈ 100 m)
    simplified = hull.simplify(0.001, preserve_topology=True)
    # Extract coordinates
    if simplified.geom_type == 'Polygon':
        coords = list(simplified.exterior.coords)
        return [{'lat': c[1], 'lon': c[0]} for c in coords]
    return []
```
**Option B: Grid-based contour (simpler)**
```python
def grid_contour_boundary(points: list, threshold_dbm: float, resolution: float):
    """Find boundary by detecting edge cells in grid."""
    # Create binary grid: 1 = above threshold, 0 = below
    # Find cells where 1 is adjacent to 0 = boundary
    # Convert cell coords back to lat/lon
    # Return ordered boundary points
```
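For reference, the Option B pseudocode can be fleshed out without any dependency. A pure-Python sketch under the assumption that each point is a dict with `lat`, `lon`, and `rsrp` keys; note it returns unordered edge-cell centers, while the real implementation would still need to order them into a ring:

```python
def grid_contour_boundary(points, threshold_dbm, resolution):
    """Return centers of coverage-edge cells (sketch of Option B).

    A cell is an edge cell if it is above threshold but at least one
    4-neighbor is below threshold or has no sample. Cells are snapped
    to a lat/lon grid of `resolution` degrees.
    """
    covered = set()
    for p in points:
        if p["rsrp"] >= threshold_dbm:
            cell = (round(p["lat"] / resolution), round(p["lon"] / resolution))
            covered.add(cell)
    edge = []
    for (i, j) in covered:
        neighbors = [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]
        if any(n not in covered for n in neighbors):
            edge.append({"lat": i * resolution, "lon": j * resolution})
    return edge
```

On a fully covered 3×3 grid, only the interior cell is dropped and the eight border cells remain.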
### API Endpoint
```python
# Add to coverage calculation response
@router.post("/coverage/calculate")
async def calculate_coverage(...):
    result = coverage_service.calculate(...)
    # Calculate boundary
    if result.points:
        result.boundary = calculate_coverage_boundary(
            result.points,
            threshold_dbm=settings.min_signal
        )
    return result
```
### Frontend
```tsx
// CoverageBoundary.tsx — use the boundary returned by the server
// instead of calculating an alpha shape on the frontend
const CoverageBoundary = ({ points, boundary }) => {
  // If the server returned a boundary, use it
  if (boundary && boundary.length > 0) {
    return <Polygon positions={boundary.map(p => [p.lat, p.lon])} />;
  }
  // Fall back to the current convex hull implementation
  return <CurrentImplementation points={points} />;
};
```
### Dependencies
Need `shapely` installed:
```
pip install shapely # or pip3 install shapely --break-system-packages
```
Check if already in requirements.txt.
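If shapely turns out to be unavailable at runtime, even the convex-hull fallback can be done dependency-free. A monotone-chain sketch (`convex_hull` here is an illustrative helper, not RFCP code):

```python
def convex_hull(pts):
    """Andrew's monotone chain; pts is a list of (lon, lat) tuples.

    Returns hull vertices in counter-clockwise order. Dependency-free
    fallback for when shapely is not installed (sketch only).
    """
    pts = sorted(set(pts))
    if len(pts) < 3:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Drop the last point of each half (it repeats the other half's start)
    return lower[:-1] + upper[:-1]
```

For a unit square with an interior point, the hull keeps exactly the four corners.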
### Testing
- [ ] 5km calculation → boundary follows actual coverage shape
- [ ] 10km calculation → boundary is irregular (terrain-dependent)
- [ ] Toggle boundary on/off works
- [ ] Boundary doesn't crash with < 3 points
---
## Task 5: Results Popup Enhancement (Priority 3 — 15 min)
### Problem
Calculation complete toast/popup doesn't show which models were used.
### Solution
Enhance the toast message after calculation:
```tsx
// Current:
toast.success(`Calculated ${points} points in ${time}s`);

// Enhanced:
const modelCount = result.modelsUsed?.length ?? 0;
const freq = sites[0]?.frequency ?? 0;
const presetName = settings.preset ?? 'custom';
toast.success(
  `${points} pts • ${time}s • ${presetName} • ${freq} MHz • ${modelCount} models`,
  { duration: 5000 }
);
```
### Testing
- [ ] After calculation, toast shows: points, time, preset, frequency, model count
---
## Task 6: Native Windows Backend (Priority 3 — Research/Plan)
### Problem
Current setup REQUIRES WSL2. Users without WSL2 can't use RFCP at all.
### Current Flow
```
RFCP.exe (Electron)
  → detects WSL2
  → launches: wsl python3 -m uvicorn ...
  → backend runs in WSL2 Linux
```
### Target Flow
```
RFCP.exe (Electron)
  → Option A: embedded Python (Windows native)
  → Option B: detect system Python (Windows)
  → Option C: keep WSL2 but with fallback
```
### Research Tasks (don't implement yet, just investigate)
1. **Check how Electron currently launches the backend:**
   ```bash
   # Look at desktop/ directory
   cat desktop/src/main.ts  # or main.js
   # Find where it spawns python/uvicorn
   ```
2. **Check if Windows Python works for the backend:**
   ```powershell
   # In Windows PowerShell:
   cd D:\root\rfcp\backend
   python -m uvicorn app.main:app --host 0.0.0.0 --port 8090
   # Does it start? What errors?
   ```
3. **Evaluate embedded Python options:**
   - python-embedded (official, ~30 MB)
   - PyInstaller (bundle backend as .exe)
   - cx_Freeze
   - Nuitka (compile Python to C)
4. **Document findings** — create a brief report:
   ```
   RFCP-Native-Backend-Research.md
   - Current architecture (WSL2 dependency)
   - Windows Python compatibility test results
   - Recommended approach
   - Migration steps
   - Timeline estimate
   ```
### Goal
User downloads RFCP.exe → installs → clicks icon → everything works.
No WSL2. No manual pip install. GPU auto-detected.
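Research item 2 (does a native Windows Python work?) can start from a small stdlib-only probe. A sketch; `probe_windows_python` is a hypothetical name, and the version gate mirrors the installer's `check_python`:

```python
import shutil
import subprocess

def probe_windows_python() -> dict:
    """Report whether a Python suitable for the backend is on PATH.

    Research aid for Task 6; checks existence and version only, not
    whether the backend's dependencies import cleanly.
    """
    exe = shutil.which("python") or shutil.which("python3")
    report = {"found": exe is not None, "path": exe, "version": None, "ok": False}
    if exe:
        out = subprocess.run([exe, "--version"], capture_output=True, text=True, timeout=10)
        report["version"] = out.stdout.strip() or out.stderr.strip()
        try:
            # e.g. "Python 3.12.1" -> (3, 12)
            major, minor = report["version"].split()[1].split(".")[:2]
            report["ok"] = (int(major), int(minor)) >= (3, 10)
        except (IndexError, ValueError):
            pass  # unparseable version string: leave ok=False
    return report
```

Running this from PowerShell on a target machine answers "found?" and "new enough?" before any backend start is attempted.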
---
## Implementation Order
### Priority 1 (30 min total)
1. **Task 1:** Fix GPU dropdown z-index — make it visible above the sidebar
2. **Task 2:** GPU diagnostics + install instructions in UI
3. **Task 3:** Terrain Profile click propagation fix
### Priority 2 (1 hour)
4. **Task 4:** Coverage boundary real contour (shapely)
5. **Task 5:** Results popup enhancement
### Priority 3 (Research only)
6. **Task 6:** Investigate native Windows backend — report only, no implementation
---
## Build & Deploy
```bash
# After implementation:
cd /mnt/d/root/rfcp/frontend
npx tsc --noEmit # TypeScript check
npm run build # Production build
# Rebuild Electron:
cd /mnt/d/root/rfcp/installer
bash build-win.sh
# Test:
# Install new .exe and verify GPU indicator works
```
---
## Success Criteria
- [ ] GPU dropdown opens when clicking badge
- [ ] Dropdown shows device list or install instructions
- [ ] After `pip3 install cupy-cuda12x --break-system-packages` in WSL + restart → GPU visible
- [ ] Terrain Profile click doesn't add ruler points
- [ ] Coverage boundary follows actual signal contour
- [ ] Results toast shows model count and frequency
- [ ] Native Windows backend research document created

RFCP.bat Normal file

@@ -0,0 +1,23 @@
@echo off
title RFCP - RF Coverage Planner
cd /d "%~dp0"
REM Check if backend exists
if not exist "backend\app\main.py" (
    echo ERROR: RFCP backend not found.
    echo Run install.bat first or check your installation.
    pause
    exit /b 1
)
echo ============================================
echo RFCP - RF Coverage Planner
echo ============================================
echo.
echo Starting backend server...
echo Open http://localhost:8090 in your browser
echo Press Ctrl+C to stop
echo.
cd backend
python -m uvicorn app.main:app --host 0.0.0.0 --port 8090


@@ -0,0 +1,8 @@
# Development and testing dependencies
# Install with: pip install -r requirements-dev.txt
pytest>=7.0.0
pytest-asyncio>=0.21.0
httpx>=0.27.0
ruff>=0.1.0
mypy>=1.7.0


@@ -0,0 +1,10 @@
# NVIDIA GPU acceleration via CuPy
# Install with: pip install -r requirements-gpu-nvidia.txt
#
# Choose ONE based on your CUDA version:
# - cupy-cuda12x for CUDA 12.x (RTX 30xx, 40xx, newer)
# - cupy-cuda11x for CUDA 11.x (older cards)
#
# CuPy bundles CUDA runtime (~700 MB) - no separate CUDA install needed
cupy-cuda12x>=13.0.0


@@ -0,0 +1,14 @@
# Intel/AMD GPU acceleration via PyOpenCL
# Install with: pip install -r requirements-gpu-opencl.txt
#
# Works with:
# - Intel UHD/Iris Graphics (integrated)
# - AMD Radeon (discrete)
# - NVIDIA GPUs (alternative to CUDA)
#
# Requires OpenCL runtime:
# - Intel: Intel GPU Computing Runtime
# - AMD: AMD Adrenalin driver (includes OpenCL)
# - NVIDIA: NVIDIA driver (includes OpenCL)
pyopencl>=2023.1


@@ -4,7 +4,7 @@
*/
import { useSitesStore } from '@/store/sites.ts';
import { COMMON_FREQUENCIES, FREQUENCY_GROUPS } from '@/constants/frequencies.ts';
import { COMMON_FREQUENCIES } from '@/constants/frequencies.ts';
const QUICK_BANDS = [
{ freq: 70, label: '70', color: 'text-indigo-400' },

install.bat Normal file

@@ -0,0 +1,41 @@
@echo off
title RFCP - First Time Setup
echo ============================================
echo RFCP - RF Coverage Planner - Setup
echo ============================================
echo.
REM Check if Python exists
python --version >nul 2>&1
if errorlevel 1 (
    echo ERROR: Python not found!
    echo.
    echo Please install Python 3.10+ from:
    echo https://www.python.org/downloads/
    echo.
    echo Make sure to check "Add Python to PATH" during installation.
    echo.
    pause
    exit /b 1
)
echo Python found:
python --version
echo.
REM Change to script directory
cd /d "%~dp0"
REM Run installer
echo Running RFCP installer...
echo.
python install_rfcp.py
echo.
echo ============================================
echo Setup complete!
echo.
echo To start RFCP, run: RFCP.bat
echo Then open: http://localhost:8090
echo ============================================
pause

install_rfcp.py Normal file

@@ -0,0 +1,498 @@
#!/usr/bin/env python3
"""
RFCP Installer — Detects hardware, installs dependencies, sets up GPU acceleration.

Usage:
    python install_rfcp.py

The installer handles:
- Python dependency installation
- GPU detection (NVIDIA/Intel/AMD)
- GPU acceleration setup (CuPy for CUDA, PyOpenCL for Intel/AMD)
- Frontend build (if Node.js available)
- Verification of installation
"""
import subprocess
import sys
import platform
import os


def print_header(text: str):
    """Print section header."""
    print(f"\n{'=' * 60}")
    print(f"  {text}")
    print('=' * 60)


def print_step(text: str):
    """Print step indicator."""
    print(f"\n>>> {text}")
def check_python() -> bool:
    """Verify Python 3.10+ is available."""
    version = sys.version_info
    if (version.major, version.minor) < (3, 10):
        print(f"[X] Python 3.10+ required, found {version.major}.{version.minor}")
        return False
    print(f"[OK] Python {version.major}.{version.minor}.{version.micro}")
    return True
def check_node() -> bool:
    """Verify Node.js 18+ is available."""
    try:
        result = subprocess.run(
            ["node", "--version"],
            capture_output=True,
            text=True,
            timeout=10
        )
        version = result.stdout.strip().lstrip('v')
        major = int(version.split('.')[0])
        if major < 18:
            print(f"[!] Node.js 18+ recommended, found {version}")
            return False
        print(f"[OK] Node.js {version}")
        return True
    except FileNotFoundError:
        print("[!] Node.js not found (frontend build will be skipped)")
        return False
    except Exception as e:
        print(f"[!] Node.js check failed: {e}")
        return False
def detect_gpu() -> dict:
    """Detect available GPU hardware."""
    gpus = {
        "nvidia": False,
        "nvidia_name": "",
        "nvidia_memory_mb": 0,
        "intel": False,
        "intel_name": "",
        "amd": False,
        "amd_name": ""
    }

    # Check NVIDIA via nvidia-smi
    try:
        result = subprocess.run(
            ["nvidia-smi", "--query-gpu=name,driver_version,memory.total",
             "--format=csv,noheader"],
            capture_output=True,
            text=True,
            timeout=10
        )
        if result.returncode == 0 and result.stdout.strip():
            info = result.stdout.strip()
            parts = info.split(",")
            gpus["nvidia"] = True
            gpus["nvidia_name"] = parts[0].strip()
            if len(parts) >= 3:
                mem_str = parts[2].strip().replace(" MiB", "").replace(" MB", "")
                try:
                    gpus["nvidia_memory_mb"] = int(mem_str)
                except ValueError:
                    pass
            print(f"[OK] NVIDIA GPU: {gpus['nvidia_name']}")
    except FileNotFoundError:
        pass
    except subprocess.TimeoutExpired:
        print("[!] nvidia-smi timed out")
    except Exception as e:
        print(f"[!] NVIDIA detection error: {e}")

    # Check Intel/AMD via WMI (Windows) or lspci (Linux)
    if platform.system() == "Windows":
        try:
            result = subprocess.run(
                ["wmic", "path", "win32_videocontroller", "get",
                 "name", "/format:csv"],
                capture_output=True,
                text=True,
                timeout=10
            )
            for line in result.stdout.strip().split('\n'):
                line_lower = line.lower()
                if 'intel' in line_lower and ('uhd' in line_lower or 'iris' in line_lower or 'hd graphics' in line_lower):
                    gpus["intel"] = True
                    # Extract name from CSV
                    for part in line.split(','):
                        if 'Intel' in part:
                            gpus["intel_name"] = part.strip()
                            break
                    if gpus["intel_name"]:
                        print(f"[OK] Intel GPU: {gpus['intel_name']}")
                elif 'amd' in line_lower or 'radeon' in line_lower:
                    gpus["amd"] = True
                    for part in line.split(','):
                        if 'AMD' in part or 'Radeon' in part:
                            gpus["amd_name"] = part.strip()
                            break
                    if gpus["amd_name"]:
                        print(f"[OK] AMD GPU: {gpus['amd_name']}")
        except Exception:
            pass
    else:
        # Linux: use lspci
        try:
            result = subprocess.run(
                ["lspci"],
                capture_output=True,
                text=True,
                timeout=10
            )
            for line in result.stdout.split('\n'):
                if 'VGA' in line or 'Display' in line or '3D' in line:
                    if 'Intel' in line:
                        gpus["intel"] = True
                        gpus["intel_name"] = line.split(':')[-1].strip() if ':' in line else "Intel GPU"
                        print(f"[OK] Intel GPU: {gpus['intel_name']}")
                    elif 'AMD' in line or 'Radeon' in line:
                        gpus["amd"] = True
                        gpus["amd_name"] = line.split(':')[-1].strip() if ':' in line else "AMD GPU"
                        print(f"[OK] AMD GPU: {gpus['amd_name']}")
        except Exception:
            pass

    if not gpus["nvidia"] and not gpus["intel"] and not gpus["amd"]:
        print("[i] No GPU detected - will use CPU (NumPy)")
    return gpus
def install_core_dependencies() -> bool:
    """Install core Python dependencies."""
    print_step("Installing core dependencies...")
    req_file = os.path.join(os.path.dirname(__file__), "backend", "requirements.txt")
    if not os.path.exists(req_file):
        print(f"[X] requirements.txt not found at {req_file}")
        return False
    try:
        subprocess.run(
            [sys.executable, "-m", "pip", "install", "-r", req_file,
             "--quiet", "--no-warn-script-location"],
            check=True,
            timeout=600
        )
        print("[OK] Core dependencies installed")
        return True
    except subprocess.CalledProcessError as e:
        print(f"[X] pip install failed: {e}")
        return False
    except subprocess.TimeoutExpired:
        print("[X] pip install timed out (10 min)")
        return False
def install_gpu_dependencies(gpus: dict) -> bool:
    """Install GPU-specific dependencies based on detected hardware."""
    print_step("Setting up GPU acceleration...")
    gpu_installed = False

    # NVIDIA - install CuPy (includes CUDA runtime)
    if gpus["nvidia"]:
        print(f"  Installing CuPy for {gpus['nvidia_name']}...")
        try:
            # Try CUDA 12 first (newer cards, RTX 30xx/40xx)
            subprocess.run(
                [sys.executable, "-m", "pip", "install", "cupy-cuda12x",
                 "--quiet", "--no-warn-script-location"],
                check=True,
                timeout=600
            )
            print("  [OK] CuPy (CUDA 12) installed")
            gpu_installed = True
        except (subprocess.CalledProcessError, subprocess.TimeoutExpired):
            try:
                # Fall back to CUDA 11 (older cards)
                print("  [!] CUDA 12 failed, trying CUDA 11...")
                subprocess.run(
                    [sys.executable, "-m", "pip", "install", "cupy-cuda11x",
                     "--quiet", "--no-warn-script-location"],
                    check=True,
                    timeout=600
                )
                print("  [OK] CuPy (CUDA 11) installed")
                gpu_installed = True
            except Exception as e:
                print(f"  [X] CuPy installation failed: {e}")
                print("      Manual install: pip install cupy-cuda12x")

    # Intel/AMD - install PyOpenCL
    if gpus["intel"] or gpus["amd"]:
        gpu_name = gpus["intel_name"] or gpus["amd_name"]
        print(f"  Installing PyOpenCL for {gpu_name}...")
        try:
            subprocess.run(
                [sys.executable, "-m", "pip", "install", "pyopencl",
                 "--quiet", "--no-warn-script-location"],
                check=True,
                timeout=300
            )
            print("  [OK] PyOpenCL installed")
            gpu_installed = True
        except Exception as e:
            print(f"  [X] PyOpenCL installation failed: {e}")
            print("      Manual install: pip install pyopencl")

    if not gpu_installed and not gpus["nvidia"] and not gpus["intel"] and not gpus["amd"]:
        print("  [i] No GPU acceleration - using CPU (NumPy)")
        print("      This is fine! GPU just makes large calculations faster.")
    return gpu_installed
def install_frontend(has_node: bool) -> bool:
    """Install frontend dependencies and build."""
    if not has_node:
        print_step("Skipping frontend build (Node.js not available)")
        return False
    print_step("Setting up frontend...")
    frontend_dir = os.path.join(os.path.dirname(__file__), "frontend")
    if not os.path.exists(os.path.join(frontend_dir, "package.json")):
        print("[!] Frontend directory not found")
        return False
    try:
        print("  Installing npm packages...")
        subprocess.run(
            ["npm", "install"],
            cwd=frontend_dir,
            check=True,
            timeout=300,
            capture_output=True
        )
        print("  Building frontend...")
        subprocess.run(
            ["npm", "run", "build"],
            cwd=frontend_dir,
            check=True,
            timeout=300,
            capture_output=True
        )
        print("[OK] Frontend built")
        return True
    except subprocess.CalledProcessError as e:
        print(f"[X] Frontend build failed: {e}")
        return False
    except subprocess.TimeoutExpired:
        print("[X] Frontend build timed out")
        return False
def create_launcher() -> bool:
    """Create launcher scripts."""
    print_step("Creating launcher scripts...")
    base_dir = os.path.dirname(os.path.abspath(__file__))
    if platform.system() == "Windows":
        # Create RFCP.bat
        launcher_path = os.path.join(base_dir, "RFCP.bat")
        with open(launcher_path, 'w') as f:
            f.write('@echo off\n')
            f.write('title RFCP - RF Coverage Planner\n')
            f.write(f'cd /d "{base_dir}"\n')
            f.write('echo Starting RFCP...\n')
            f.write('echo Open http://localhost:8090 in your browser\n')
            f.write('echo Press Ctrl+C to stop\n')
            f.write('echo.\n')
            f.write('cd backend\n')
            f.write(f'"{sys.executable}" -m uvicorn app.main:app --host 0.0.0.0 --port 8090\n')
        print("  [OK] Created: RFCP.bat")

        # Create install.bat for first-time setup
        install_bat_path = os.path.join(base_dir, "install.bat")
        with open(install_bat_path, 'w') as f:
            f.write('@echo off\n')
            f.write('title RFCP - First Time Setup\n')
            f.write('echo ============================================\n')
            f.write('echo RFCP - RF Coverage Planner - Setup\n')
            f.write('echo ============================================\n')
            f.write('echo.\n')
            f.write('python --version >nul 2>&1\n')
            f.write('if errorlevel 1 (\n')
            f.write('    echo ERROR: Python not found!\n')
            f.write('    echo Please install Python 3.10+ from python.org\n')
            f.write('    pause\n')
            f.write('    exit /b 1\n')
            f.write(')\n')
            f.write(f'cd /d "{base_dir}"\n')
            f.write('python install_rfcp.py\n')
            f.write('echo.\n')
            f.write('echo Setup complete! Run RFCP.bat to start.\n')
            f.write('pause\n')
        print("  [OK] Created: install.bat")
    else:
        # Linux/macOS
        launcher_path = os.path.join(base_dir, "rfcp.sh")
        with open(launcher_path, 'w') as f:
            f.write('#!/bin/bash\n')
            f.write(f'cd "{base_dir}"\n')
            f.write('echo "Starting RFCP..."\n')
            f.write('echo "Open http://localhost:8090 in your browser"\n')
            f.write('echo "Press Ctrl+C to stop"\n')
            f.write('cd backend\n')
            f.write(f'{sys.executable} -m uvicorn app.main:app --host 0.0.0.0 --port 8090\n')
        os.chmod(launcher_path, 0o755)
        print("  [OK] Created: rfcp.sh")
    return True
def verify_installation() -> bool:
    """Run quick verification tests."""
    print_step("Verifying installation...")
    checks = []
    critical_fail = False

    # Check core imports
    try:
        import numpy as np
        checks.append(f"[OK] NumPy {np.__version__}")
    except ImportError:
        checks.append("[X] NumPy missing")
        critical_fail = True
    try:
        import scipy
        checks.append(f"[OK] SciPy {scipy.__version__}")
    except ImportError:
        checks.append("[X] SciPy missing")
        critical_fail = True
    try:
        import fastapi
        checks.append(f"[OK] FastAPI {fastapi.__version__}")
    except ImportError:
        checks.append("[X] FastAPI missing")
        critical_fail = True
    try:
        import uvicorn
        checks.append(f"[OK] Uvicorn {uvicorn.__version__}")
    except ImportError:
        checks.append("[X] Uvicorn missing")
        critical_fail = True

    # Check GPU acceleration
    try:
        import cupy as cp
        device_count = cp.cuda.runtime.getDeviceCount()
        if device_count > 0:
            props = cp.cuda.runtime.getDeviceProperties(0)
            name = props["name"]
            if isinstance(name, bytes):
                name = name.decode()
            mem_mb = props["totalGlobalMem"] // (1024 * 1024)
            checks.append(f"[OK] CuPy (CUDA) -> {name} ({mem_mb} MB)")
        else:
            checks.append("[i] CuPy installed but no CUDA devices found")
    except ImportError:
        checks.append("[i] CuPy not available (NVIDIA GPU acceleration disabled)")
    except Exception as e:
        checks.append(f"[!] CuPy error: {e}")
    try:
        import pyopencl as cl
        devices = []
        for p in cl.get_platforms():
            for d in p.get_devices():
                devices.append(d.name.strip())
        if devices:
            checks.append(f"[OK] PyOpenCL -> {', '.join(devices[:2])}")
        else:
            checks.append("[i] PyOpenCL installed but no devices found")
    except ImportError:
        checks.append("[i] PyOpenCL not available (Intel/AMD GPU acceleration disabled)")
    except Exception as e:
        checks.append(f"[!] PyOpenCL error: {e}")

    for check in checks:
        print(f"  {check}")
    return not critical_fail
def main():
    """Main installer entry point."""
    print_header("RFCP - RF Coverage Planner - Installer")

    # Step 1: Check prerequisites
    print_step("Checking prerequisites...")
    if not check_python():
        print("\n[X] Python 3.10+ is required. Please install from python.org")
        sys.exit(1)
    has_node = check_node()

    # Step 2: Detect GPU
    print_step("Detecting GPU hardware...")
    gpus = detect_gpu()

    # Step 3: Install core dependencies
    if not install_core_dependencies():
        print("\n[X] Core dependency installation failed")
        sys.exit(1)

    # Step 4: Install GPU dependencies
    install_gpu_dependencies(gpus)

    # Step 5: Frontend (optional)
    install_frontend(has_node)

    # Step 6: Create launcher
    create_launcher()

    # Step 7: Verify
    success = verify_installation()

    # Summary
    print_header("Installation Summary")
    if success:
        print("  [OK] RFCP installed successfully!")
        print()
        print("  To start RFCP:")
        if platform.system() == "Windows":
            print("    Double-click RFCP.bat")
            print("    Or run (from backend/): python -m uvicorn app.main:app --port 8090")
        else:
            print("    Run: ./rfcp.sh")
            print("    Or (from backend/): python -m uvicorn app.main:app --port 8090")
        print()
        print("  Then open: http://localhost:8090")
        print()
        # GPU summary
        if gpus["nvidia"]:
            print(f"  GPU: {gpus['nvidia_name']} (CUDA)")
        elif gpus["intel"]:
            print(f"  GPU: {gpus['intel_name']} (OpenCL)")
        elif gpus["amd"]:
            print(f"  GPU: {gpus['amd_name']} (OpenCL)")
        else:
            print("  Mode: CPU only (NumPy)")
    else:
        print("  [!] Installation completed with errors")
        print("      Some features may not work correctly")
    print()
    print('=' * 60)


if __name__ == "__main__":
    main()

rfcp.sh Normal file

@@ -0,0 +1,24 @@
#!/bin/bash
# RFCP - RF Coverage Planner - Launcher
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
cd "$SCRIPT_DIR"
# Check if backend exists
if [ ! -f "backend/app/main.py" ]; then
    echo "ERROR: RFCP backend not found."
    echo "Run: python install_rfcp.py"
    exit 1
fi
echo "============================================"
echo " RFCP - RF Coverage Planner"
echo "============================================"
echo ""
echo "Starting backend server..."
echo "Open http://localhost:8090 in your browser"
echo "Press Ctrl+C to stop"
echo ""
cd backend
python3 -m uvicorn app.main:app --host 0.0.0.0 --port 8090