Compare commits
2 commits: a61753c642...6dcc5a19b9

| SHA1 |
|---|
| 6dcc5a19b9 |
| 6cd9d869cc |
```diff
@@ -45,7 +45,8 @@
       "Bash(journalctl:*)",
       "Bash(pkill:*)",
       "Bash(pip3 list:*)",
-      "Bash(chmod:*)"
+      "Bash(chmod:*)",
+      "Bash(pyinstaller:*)"
     ]
   }
 }
```
.gitignore — vendored (+4 lines)

```diff
@@ -24,3 +24,7 @@ installer/dist/
 __pycache__/
 *.pyc
 nul
+
+# PyInstaller build artifacts
+backend/build/
+backend/dist/
```
RFCP-3.6.0-GPU-Build-Task.md — new file (+130 lines)

@@ -0,0 +1,130 @@

# RFCP 3.6.0 — Production GPU Build (Claude Code Task)

## Goal

Build `rfcp-server.exe` (PyInstaller) with CuPy GPU support so production RFCP detects the NVIDIA GPU without a manual `pip install`.

Currently the production exe shows "CPU (NumPy)" because CuPy is not bundled.

## Current Environment (CONFIRMED WORKING)

```
Windows 10 (10.0.26200)
Python 3.11.8 (C:\Python311)
NVIDIA GeForce RTX 4060 Laptop GPU (8 GB VRAM)
CUDA Toolkit 13.1 (C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v13.1)
CUDA_PATH = C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v13.1

Packages:
  cupy-cuda13x 13.6.0   ← NOT cuda12x!
  numpy        1.26.4
  scipy        1.17.0
  fastrlock    0.8.3
  pyinstaller  6.18.0

GPU compute verified:
  python -c "import cupy; a = cupy.array([1,2,3]); print(a.sum())"  → 6 ✅
```

## What We Already Tried (And Why It Failed)

### Attempt 1: ONEFILE spec with collect_all('cupy')
- `collect_all('cupy')` returns 1882 datas, **0 binaries** — the CuPy pip package doesn't bundle CUDA DLLs on Windows
- CUDA DLLs come from two separate sources:
  - **nvidia pip packages** (14 DLLs in `C:\Python311\Lib\site-packages\nvidia\*/bin/`)
  - **CUDA Toolkit** (13 DLLs in `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v13.1\bin\x64\`)
- We manually collected these 27 DLLs in the spec
- Build succeeded (3 GB exe!) but crashed on launch:

```
[PYI-10456:ERROR] Failed to extract cufft64_12.dll: decompression resulted in return code -1!
```

- Root cause: `cufft64_12.dll` is 297 MB — PyInstaller's zlib decompression fails on it during extraction in ONEFILE mode

### Attempt 2: ONEDIR
We were about to try ONEDIR but haven't built it yet.

### Key Insight: Duplicate DLLs from two sources
- The nvidia pip packages ship CUDA 12.x DLLs (`cublas64_12.dll` etc.)
- CUDA Toolkit 13.1 ships CUDA 13.x DLLs (`cublas64_13.dll` etc.)
- cupy-cuda13x needs the 13.x versions; the 12.x DLLs from pip may conflict.

## What Needs To Happen

1. **Build rfcp-server as ONEDIR** (a folder with exe + DLLs, not a single exe)
   - This avoids the decompression crash with large CUDA DLLs
   - Output: `backend/dist/rfcp-server/rfcp-server.exe` with all DLLs alongside

2. **Include ONLY the correct CUDA DLLs**
   - Prefer the CUDA Toolkit 13.1 DLLs (they match cupy-cuda13x)
   - The nvidia pip packages carry cuda12x DLLs — they may cause version conflicts
   - Key DLLs needed: cublas, cusparse, cusolver, curand, cufft, nvrtc, cudart

3. **Exclude bloat** — the previous build pulled in tensorflow, grpc, opentelemetry, etc., making it 3 GB. The real size should be ~600-800 MB.

4. **Test the built exe** — run it standalone and verify:
   - `curl http://localhost:8090/api/health` returns `"build": "gpu"`
   - `curl http://localhost:8090/api/gpu/status` returns `"available": true`
   - Or at minimum: the exe starts without errors and CuPy imports successfully

5. **Update Electron integration** if needed:
   - The current Electron code expects a single `rfcp-server.exe` file
   - With ONEDIR it's a folder: `rfcp-server/rfcp-server.exe`
   - File: `desktop/main.js` or `desktop/src/main.ts` — look for where it spawns the backend
   - The path needs to change from `resources/backend/rfcp-server.exe` to `resources/backend/rfcp-server/rfcp-server.exe`
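
Step 2's DLL filtering could be sketched as a small helper for the spec's `binaries` list. This is an illustrative sketch, not the project's spec code: the function names are ours, and the only project facts used are the library list and the `64_12`/`64_13` naming from the notes above.

```python
import os
import re

# The libraries from "Key DLLs needed" above.
WANTED = ("cublas", "cusparse", "cusolver", "curand", "cufft", "nvrtc", "cudart")

def select_cuda13_dlls(filenames):
    """Keep DLLs belonging to a wanted CUDA library; drop 12.x duplicates."""
    picked = []
    for name in filenames:
        base = name.lower()
        if not base.endswith(".dll"):
            continue
        if not any(base.startswith(w) for w in WANTED):
            continue
        # Reject explicit 12.x builds (e.g. cublas64_12.dll); keep 13.x
        # and unversioned names.
        if re.search(r"64_12\.dll$", base):
            continue
        picked.append(name)
    return picked

def toolkit_binaries(bin_dir):
    """Build PyInstaller (src, dest) binary tuples for the selected DLLs."""
    names = select_cuda13_dlls(os.listdir(bin_dir)) if os.path.isdir(bin_dir) else []
    return [(os.path.join(bin_dir, n), ".") for n in names]
```

In the spec, `toolkit_binaries(r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v13.1\bin\x64")` would feed `Analysis(binaries=...)` instead of a hand-maintained list.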

## File Locations

```
D:\root\rfcp\
├── backend\
│   ├── run_server.py               ← PyInstaller entry point
│   ├── app\
│   │   ├── main.py                 ← FastAPI app
│   │   ├── services\
│   │   │   ├── gpu_backend.py      ← GPU detection (CuPy/NumPy fallback)
│   │   │   └── coverage_service.py ← Uses get_array_module()
│   │   └── api\routes\gpu.py       ← /api/gpu/status, /api/gpu/diagnostics
│   ├── dist\                       ← PyInstaller output goes here
│   └── build\                      ← PyInstaller build cache
├── installer\
│   ├── rfcp-server-gpu.spec        ← GPU spec (needs fixing)
│   ├── rfcp-server.spec            ← CPU spec (working, don't touch)
│   ├── rfcp.ico                    ← Icon (exists)
│   └── build-gpu.bat               ← Build script
├── desktop\
│   ├── main.js or src/main.ts      ← Electron main process
│   └── resources\backend\          ← Where the production exe lives
└── frontend\                       ← React frontend (no changes needed)
```

## Existing CPU spec for reference

The working CPU-only spec is at `installer/rfcp-server.spec`. Use it as the base and ADD CuPy + CUDA on top. Don't reinvent the wheel.

## Build Command

```powershell
cd D:\root\rfcp\backend
pyinstaller ..\installer\rfcp-server-gpu.spec --clean --noconfirm
```

## Success Criteria

- [ ] `dist/rfcp-server/rfcp-server.exe` starts without errors
- [ ] CuPy imports successfully inside the exe (no missing-DLL errors)
- [ ] `/api/gpu/status` returns `"available": true, "device": "RTX 4060"`
- [ ] Total folder size < 1 GB (ideally 600-800 MB)
- [ ] No tensorflow/grpc/opentelemetry bloat
- [ ] Electron can find and launch the backend (path updated if needed)
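
The first three criteria could be automated with a short probe script. A sketch under stated assumptions: the endpoint paths and field names come from this task, while the helper functions themselves are ours.

```python
import json
from urllib.request import urlopen

def evaluate(health: dict, status: dict) -> bool:
    """Offline check of the two JSON payloads against the success criteria."""
    return health.get("build") == "gpu" and status.get("available") is True

def check_gpu_build(base: str = "http://localhost:8090") -> bool:
    """Fetch /api/health and /api/gpu/status from a running exe and evaluate."""
    with urlopen(f"{base}/api/health") as r:
        health = json.load(r)
    with urlopen(f"{base}/api/gpu/status") as r:
        status = json.load(r)
    return evaluate(health, status)
```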

## Important Notes

- Do NOT use cupy-cuda12x — we migrated to cupy-cuda13x
- Do NOT try ONEFILE mode — cufft64_12.dll (297 MB) crashes decompression
- The nvidia pip packages (nvidia-cublas-cu12, etc.) are still installed but may conflict with CUDA Toolkit 13.1 — prefer the Toolkit DLLs
- `collect_all('cupy')` yields 0 binaries on Windows — the DLLs must be specified manually
- gpu_backend.py already handles CuPy's absence gracefully (falls back to NumPy)
RFCP-3.7.0-GPU-Coverage-Task.md — new file (+133 lines)

@@ -0,0 +1,133 @@

# RFCP 3.7.0 — GPU-Accelerated Coverage Calculations

## Context

Iteration 3.6.0 is complete: cupy-cuda13x works in the production PyInstaller build, the RTX 4060 is detected, and the ONEDIR build ships the CUDA DLLs. BUT coverage calculations still run on the CPU, because coverage_service.py uses `import numpy as np` directly instead of the GPU backend.

The GPU infrastructure is ready:
- `app/services/gpu_backend.py` has `GPUManager.get_array_module()` → returns cupy or numpy
- `/api/gpu/status` confirms `"active_backend": "cuda"`
- CuPy is imported and the GPU is detected in the frozen exe

## Goal

Replace direct `np.` calls in coverage_service.py with `xp = gpu_manager.get_array_module()` so calculations run on the GPU when available, with automatic NumPy fallback.

## Files to Modify

### `app/services/coverage_service.py`

**Line 7**: `import numpy as np` — keep this, but also import gpu_manager.

Add near the top:
```python
from app.services.gpu_backend import gpu_manager
```

**Key sections to GPU-accelerate** (highest impact first):

#### 1. Grid array creation (lines 549-550, 922-923)
```python
# BEFORE:
grid_lats = np.array([lat for lat, lon in grid])
grid_lons = np.array([lon for lat, lon in grid])

# AFTER:
xp = gpu_manager.get_array_module()
grid_lats = xp.array([lat for lat, lon in grid])
grid_lons = xp.array([lon for lat, lon in grid])
```
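
One transfer detail worth noting (our suggestion, not part of the task): when `xp` is CuPy, building a device array straight from a Python list is slower than building a host NumPy array first and handing it to `xp.asarray`, which turns the conversion into a single host-to-device copy.

```python
import numpy as np

def to_device_pair(grid, xp):
    """Build host-side NumPy arrays once, then let xp.asarray take over.
    With xp = numpy this is effectively a no-op; with xp = cupy it is one
    explicit host-to-device transfer per array."""
    lats = np.asarray([lat for lat, lon in grid], dtype=np.float64)
    lons = np.asarray([lon for lat, lon in grid], dtype=np.float64)
    return xp.asarray(lats), xp.asarray(lons)

# Falls back cleanly when CuPy is unavailable:
grid = [(50.45, 30.52), (50.46, 30.53)]
lats, lons = to_device_pair(grid, np)
```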

#### 2. Trig calculations (lines 468, 1031, 1408-1415, 1442)
These use np.cos, np.radians, np.sin, np.degrees, np.arctan2 — all have CuPy equivalents.
```python
# BEFORE:
lon_delta = settings.radius / (111000 * np.cos(np.radians(center_lat)))
cos_lat = np.cos(np.radians(center_lat))

# AFTER:
xp = gpu_manager.get_array_module()
lon_delta = settings.radius / (111000 * float(xp.cos(xp.radians(center_lat))))
cos_lat = float(xp.cos(xp.radians(center_lat)))
```

#### 3. The heavy calculation loop — `_run_point_loop` (line 1070) and `_calculate_point_sync` (line 1112)
This is where 90% of the time is spent. It currently processes points one by one. The GPU win comes from vectorizing the path-loss calculation across ALL grid points at once.

**Strategy**: Instead of looping through points, create arrays of all distances/angles and compute path loss for all points in one vectorized operation.

#### 4. `_calculate_bearing` (line 1402) — already vectorizable
```python
# All np.* functions here have direct CuPy equivalents
# Just replace np → xp
```
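
As a hedged sketch of what a vectorized bearing could look like: the code below uses the standard great-circle initial-bearing formula, which is not necessarily the exact math in `_calculate_bearing`; the function name and signature are ours.

```python
import numpy as np

def bearing_batch(lat1, lon1, lat2s, lon2s, xp=np):
    """Initial great-circle bearing (degrees, 0-360) from one site to many
    points at once. Works with xp = numpy or xp = cupy."""
    phi1 = xp.radians(lat1)
    phi2 = xp.radians(xp.asarray(lat2s))
    dlon = xp.radians(xp.asarray(lon2s)) - xp.radians(lon1)
    y = xp.sin(dlon) * xp.cos(phi2)
    x = xp.cos(phi1) * xp.sin(phi2) - xp.sin(phi1) * xp.cos(phi2) * xp.cos(dlon)
    return (xp.degrees(xp.arctan2(y, x)) + 360.0) % 360.0
```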

## Important Rules

1. **Always get xp at function scope**, not module scope:
```python
def my_function(self, ...):
    xp = gpu_manager.get_array_module()
    # use xp instead of np
```

2. **Convert GPU arrays back to CPU** before returning to non-GPU code:
```python
if hasattr(result, 'get'):  # CuPy array
    result = result.get()   # → numpy array
```

3. **Keep np for small/scalar operations** — GPU overhead isn't worth it for single values. Only use xp for array operations on 100+ elements.

4. **Don't break the fallback** — if CuPy isn't available, `get_array_module()` returns numpy, so `xp.array()` etc. work identically.

5. **Test both paths** — run with the GPU and verify the same results as on CPU.
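
Rules 1, 2, and 4 combine into one small pattern. The sketch below uses a stand-in for the real `gpu_manager` (the stub class and `mean_rsrp` are ours, for illustration only):

```python
import numpy as np

class _StubGPUManager:
    """Stand-in for app.services.gpu_backend.gpu_manager: per rule 4,
    get_array_module() simply returns numpy when CuPy is unusable."""
    def get_array_module(self):
        try:
            import cupy as cp
            cp.cuda.runtime.getDeviceCount()  # raises if no CUDA device
            return cp
        except Exception:
            return np

gpu_manager = _StubGPUManager()

def mean_rsrp(rsrp_values):
    # Rule 1: fetch xp at function scope.
    xp = gpu_manager.get_array_module()
    result = xp.mean(xp.asarray(rsrp_values))
    # Rule 2: hand a plain CPU value back to non-GPU code.
    if hasattr(result, "get"):
        result = result.get()
    return float(result)
```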

## Testing

After the changes:
```powershell
# Rebuild
cd D:\root\rfcp\backend
pyinstaller ..\installer\rfcp-server-gpu.spec --noconfirm

# Run
.\dist\rfcp-server\rfcp-server.exe

# Test a calculation via the frontend — watch Task Manager GPU utilization
# Should see GPU Compute spike during the coverage calculation
# Time should be significantly faster than 10s for 1254 points
```

Compare before/after:
- Current (CPU): ~10s for 1254 points, 5 km radius
- Expected (GPU): 1-3s for the same calculation

Also test the GPU diagnostics:
```
curl http://localhost:8888/api/gpu/diagnostics
```

## What NOT to Change

- Don't modify gpu_backend.py — it's working correctly
- Don't change the API endpoints or response format
- Don't remove the NumPy import — keep it for non-array operations
- Don't change the propagation-model math — only the array operations
- Don't change `_filter_buildings_to_bbox` or the OSM functions — they use lists, not arrays

## Success Criteria

- [ ] Coverage calculation uses the GPU (visible in Task Manager)
- [ ] Calculation time reduced for 1000+ point grids
- [ ] CPU fallback still works (test by setting active_backend to cpu via the API)
- [ ] Same coverage results (the heatmap should look identical)
- [ ] No regression in tiled processing mode
RFCP-3.8.0-Vectorize-Coverage-Task.md — new file (+181 lines)

@@ -0,0 +1,181 @@

# RFCP 3.8.0 — Vectorize Per-Point Coverage Calculations

## Context

Iteration 3.7.0 added GPU precompute for distances + base path loss (Phase 2.5). But Phase 3 (the per-point loop) still runs on the CPU, one point at a time across workers. This is where 95% of the time goes on the Full preset (195s for 6,642 points).

Current pipeline:
```
Phase 2.5 (GPU, 0.01s): distances + base path_loss → precomputed arrays
Phase 3   (CPU, 195s):  per-point terrain_loss, building_loss, reflections, vegetation
```

Goal: Vectorize the heavy per-point calculations so the GPU handles them in bulk.

## Architecture

The key insight: `_calculate_point_sync` (line ~1127) does these steps per point:

1. **Terrain LOS check** — get the elevation profile between site and point, check clearance
2. **Diffraction loss** — knife-edge, based on Fresnel-zone clearance
3. **Building obstruction** — find buildings between site and point, calculate penetration loss
4. **Materials penalty** — add loss based on the building material type
5. **Dominant path analysis** — LOS vs reflection vs diffraction
6. **Street canyon** — check whether the point is in an urban canyon
7. **Reflections** — find reflection paths off buildings (most expensive!)
8. **Vegetation loss** — check vegetation between site and point
9. **Final RSRP** — tx_power - path_loss - terrain_loss - building_loss - veg_loss + gains

## Strategy: Vectorize in Stages

NOT everything can be vectorized equally. Prioritize by time spent:

### Stage 1: Terrain LOS + Diffraction (HIGH IMPACT)
Currently: for each point, sample ~50-100 elevation values along the radial path, find the minimum clearance, and compute knife-edge diffraction.

**Vectorize**: Create 2D elevation profiles for ALL points at once.
- All points share the same site location
- For N points, create N terrain profiles (each with M samples)
- Compute Fresnel clearance for all profiles, vectorized
- Compute diffraction loss, vectorized

```python
# Instead of per-point:
for point in grid:
    profile = get_terrain_profile(site, point, num_samples=50)
    clearance = min_clearance(profile)
    loss = diffraction_loss(clearance, freq)

# Vectorized:
xp = gpu_manager.get_array_module()
# all_profiles shape: (N_points, M_samples)
all_profiles = get_terrain_profiles_batch(site, all_points, num_samples=50)
all_clearances = compute_clearances_batch(all_profiles, site_elev, point_elevs, distances)
all_terrain_loss = diffraction_loss_batch(all_clearances, freq)
```
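
`diffraction_loss_batch` above is only named, not defined. A minimal vectorized sketch is shown below using the standard ITU-R P.526 single-knife-edge approximation; this is our choice of formula for illustration, and the service's actual model may differ (note it takes the Fresnel-Kirchhoff parameter v rather than clearance and frequency directly).

```python
import numpy as np

def diffraction_loss_batch(v, xp=np):
    """Knife-edge diffraction loss in dB for an array of Fresnel-Kirchhoff
    parameters v, via the ITU-R P.526 approximation:
        J(v) = 6.9 + 20*log10(sqrt((v - 0.1)**2 + 1) + v - 0.1)   for v > -0.78
    and 0 dB otherwise. Vectorized over all points at once."""
    v = xp.asarray(v, dtype=float)
    t = v - 0.1
    # sqrt(t**2 + 1) > |t|, so the log argument is always positive.
    loss = 6.9 + 20.0 * xp.log10(xp.sqrt(t * t + 1.0) + t)
    return xp.where(v > -0.78, loss, 0.0)
```

At v = 0 (grazing incidence) this gives roughly 6 dB, the textbook knife-edge value.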

### Stage 2: Building Obstruction (HIGH IMPACT)
Currently: for each point, find nearby buildings and check whether they obstruct the path.

**Vectorize**: Use spatial indexing, but batch the geometry checks.
- Pre-compute building bounding boxes as GPU arrays
- For each point, the ray-building intersection can be done as a matrix operation
- Building penetration loss is a simple lookup after intersection

NOTE: This is harder to vectorize because each point has a different number of nearby buildings. Options:
a) Pad to the maximum buildings per point (wastes memory, but simple)
b) Use a sparse representation
c) Keep per-point queries but use the GPU for the geometry math

Recommend option (c) initially — keep the spatial query on the CPU but move the trig/geometry calculations to the GPU.

### Stage 3: Reflections (MEDIUM IMPACT, only on the Full preset)
Currently: for each point with buildings, compute reflection paths. This is the most complex calculation and the hardest to vectorize.

**Approach**: Keep reflections per-point for now, but optimize the inner math with vectorized operations.

### Stage 4: Vegetation Loss (LOW IMPACT)
Simple lookup — not worth the GPU overhead.

## Implementation Plan

### Step 1: Batch terrain profiling
Add a new method to coverage_service.py:
```python
def _batch_terrain_profiles(self, site_lat, site_lon, site_elev,
                            grid_lats, grid_lons, grid_elevs,
                            distances, frequency, num_samples=50):
    """Compute terrain LOS and diffraction loss for all points at once."""
    xp = gpu_manager.get_array_module()
    N = len(grid_lats)

    # Interpolate terrain profiles for all points
    # Each profile: site → point, num_samples elevation values
    # Use terrain tile data directly

    # Compute Fresnel zone clearance for each profile
    # Compute knife-edge diffraction loss

    return terrain_losses  # shape (N,)
```

### Step 2: Batch building check
Add a method:
```python
def _batch_building_obstruction(self, site_lat, site_lon,
                                grid_lats, grid_lons,
                                distances, buildings_spatial_index,
                                all_buildings):
    """Compute building loss for all points at once."""
    # For each point, query the spatial index (CPU)
    # Batch the geometry intersection math (GPU)
    # Return losses

    return building_losses  # shape (N,)
```

### Step 3: Replace _run_point_loop
Instead of ProcessPool workers, do:
```python
# In calculate_coverage, after Phase 2.5:
terrain_losses = self._batch_terrain_profiles(...)
building_losses = self._batch_building_obstruction(...)

# Final RSRP is now fully vectorized:
rsrp = tx_power - precomputed_path_loss - terrain_losses - building_losses - veg_losses
# + antenna_gains + reflection_gains
```
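
The final assembly in Step 3 can be exercised end-to-end with synthetic arrays. The function and its inputs below are illustrative (names mirror the step above; all numbers are made up), showing how one broadcasted expression replaces the per-point loop:

```python
import numpy as np

def assemble_rsrp(tx_power, path_loss, terrain_loss, building_loss, veg_loss,
                  antenna_gain=0.0, xp=np):
    """Fully vectorized final-RSRP step from the plan above."""
    return (tx_power
            - xp.asarray(path_loss)
            - xp.asarray(terrain_loss)
            - xp.asarray(building_loss)
            - xp.asarray(veg_loss)
            + antenna_gain)

# Synthetic 4-point grid:
rsrp = assemble_rsrp(43.0,
                     path_loss=[100.0, 110.0, 120.0, 130.0],
                     terrain_loss=[0.0, 5.0, 0.0, 12.0],
                     building_loss=[0.0, 0.0, 8.0, 8.0],
                     veg_loss=[0.0, 1.0, 1.0, 1.0],
                     antenna_gain=15.0)
# rsrp[0] = 43 - 100 + 15 = -42 dBm
```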

### Step 4: Keep the worker fallback
If the GPU is not available, or for very complex calculations (reflections), fall back to the existing per-point ProcessPool approach.

## Important Notes

1. **GPU code only in the main process** — learned in 3.7.0: never import gpu_manager in workers
2. **Terrain data access** — terrain tiles are in memory; batch profiles need efficient sampling
3. **CuPy ↔ NumPy bridge** — use `cupy.asnumpy()` or `.get()` to convert back to CPU
4. **Memory** — 6,642 points × 50 terrain samples = 332,100 floats ≈ 2.5 MB on the GPU — no problem
5. **Accuracy** — results must match the existing per-point calculation within 1 dB
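
Note 3's bridge can be wrapped in one helper that is safe on both backends (the helper name is ours, a sketch):

```python
import numpy as np

def to_cpu(arr):
    """Bridge from note 3: CuPy arrays expose .get() (equivalent to
    cupy.asnumpy); NumPy arrays and plain sequences pass through."""
    return arr.get() if hasattr(arr, "get") else np.asarray(arr)
```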

## Testing

```powershell
cd D:\root\rfcp\backend
pyinstaller ..\installer\rfcp-server-gpu.spec --noconfirm
.\dist\rfcp-server\rfcp-server.exe
```

Compare the Full preset:
- Before (3.7.0): ~195s for 6,642 points
- Target (3.8.0): <30s for the same calculation
- Stretch goal: <10s

Verify accuracy:
- Run the same location with the GPU and CPU backends
- Compare RSRP values — they should be within 1 dB
- Coverage percentages (Excellent/Good/Fair/Weak) should be very close

## What NOT to Change

- Don't modify the propagation-model math (Okumura-Hata, COST-231, free-space formulas)
- Don't change API endpoints or the response format
- Don't remove the ProcessPool fallback — keep it for CPU-only mode
- Don't change OSM fetching or caching
- Don't modify the frontend

## Success Criteria

- [ ] Full preset completes in <30s (was 195s)
- [ ] Standard preset completes in <5s (was 7.2s)
- [ ] No CuPy errors in worker processes
- [ ] CPU fallback still works
- [ ] Results match within 1 dB
- [ ] GPU utilization visible in Task Manager during calculation
RFCP-Iteration-3.6.0-Production-GPU-Build.md — new file (+556 lines)

@@ -0,0 +1,556 @@

# RFCP — Iteration 3.6.0: Production GPU Build

## Overview

Enable GPU acceleration in the production PyInstaller build. Currently production runs CPU-only (NumPy) because CuPy is not included in rfcp-server.exe.

**Goal:** A user with an NVIDIA GPU installs RFCP → the GPU is detected automatically → coverage calculations use CUDA acceleration. No manual pip install required.

**Context from the diagnostics screenshot:**
```json
{
  "python_executable": "C:\\Users\\Administrator\\AppData\\Local\\Programs\\RFCP\\resources\\backend\\rfcp-server.exe",
  "platform": "Windows-10-10.0.26288-SP0",
  "is_wsl": false,
  "numpy": { "version": "1.26.4" },
  "cuda": {
    "error": "CuPy not installed",
    "install_hint": "pip install cupy-cuda12x"
  }
}
```

**Architecture:** Production uses the PyInstaller-bundled rfcp-server.exe (self-contained). CuPy is not included → the GPU is not available to end users.

---

## Strategy: Two-Tier Build

Instead of one massive binary, produce two builds:

```
RFCP-Setup-{version}.exe      (~150 MB) — CPU-only, works everywhere
RFCP-Setup-{version}-GPU.exe  (~700 MB) — includes CuPy + CUDA runtime
```

**Why not dynamic loading?**
PyInstaller bundles everything at build time. CuPy can't be pip-installed into a frozen exe at runtime. The options are:

1. **Bundle CuPy in PyInstaller** ← cleanest, what we'll do
2. Side-load CuPy DLLs (fragile, version-sensitive)
3. Hybrid: unfrozen Python + CuPy installed separately (defeats the purpose of an exe)

---

## Task 1: PyInstaller Spec with CuPy (Priority 1 — 30 min)

### File: `installer/rfcp-server-gpu.spec`

Create a separate .spec file that includes CuPy:

```python
# rfcp-server-gpu.spec — GPU-enabled build
import os
import sys
from PyInstaller.utils.hooks import collect_all, collect_dynamic_libs

backend_path = os.path.abspath(os.path.join(os.path.dirname(SPEC), '..', 'backend'))

# Collect CuPy and its CUDA dependencies
cupy_datas, cupy_binaries, cupy_hiddenimports = collect_all('cupy')
# Also collect cupy_backends
cupyb_datas, cupyb_binaries, cupyb_hiddenimports = collect_all('cupy_backends')

# CUDA runtime libraries that CuPy needs
cuda_binaries = collect_dynamic_libs('cupy')

a = Analysis(
    [os.path.join(backend_path, 'run_server.py')],
    pathex=[backend_path],
    binaries=cupy_binaries + cupyb_binaries + cuda_binaries,
    datas=[
        (os.path.join(backend_path, 'data', 'terrain'), 'data/terrain'),
    ] + cupy_datas + cupyb_datas,
    hiddenimports=[
        # Existing imports from rfcp-server.spec
        'uvicorn.logging',
        'uvicorn.loops',
        'uvicorn.loops.auto',
        'uvicorn.protocols',
        'uvicorn.protocols.http',
        'uvicorn.protocols.http.auto',
        'uvicorn.protocols.websockets',
        'uvicorn.protocols.websockets.auto',
        'uvicorn.lifespan',
        'uvicorn.lifespan.on',
        'motor',
        'pymongo',
        'numpy',
        'scipy',
        'shapely',
        'shapely.geometry',
        'shapely.ops',
        # CuPy-specific
        'cupy',
        'cupy.cuda',
        'cupy.cuda.runtime',
        'cupy.cuda.driver',
        'cupy.cuda.memory',
        'cupy.cuda.stream',
        'cupy._core',
        'cupy._core.core',
        'cupy._core._routines_math',
        'cupy.fft',
        'cupy.linalg',
        'fastrlock',
    ] + cupy_hiddenimports + cupyb_hiddenimports,
    hookspath=[],
    hooksconfig={},
    runtime_hooks=[],
    excludes=[],
    noarchive=False,
)

pyz = PYZ(a.pure)

exe = EXE(
    pyz,
    a.scripts,
    a.binaries,
    a.datas,
    [],
    name='rfcp-server',
    debug=False,
    bootloader_ignore_signals=False,
    strip=False,
    upx=False,  # Don't compress CUDA libs — they need fast loading
    console=True,
    icon=os.path.join(os.path.dirname(SPEC), 'rfcp.ico'),
)
```

### Key Points:
- `collect_all('cupy')` grabs all CuPy submodules + CUDA DLLs
- `fastrlock` is a CuPy dependency (it must be in hiddenimports)
- `upx=False` — don't compress CUDA binaries (it breaks them)
- One-file mode (`a.binaries + a.datas` in EXE) for a single exe

---

## Task 2: Build Script for GPU Variant (Priority 1 — 15 min)

### File: `installer/build-gpu.bat` (Windows)

```batch
@echo off
echo ========================================
echo  RFCP GPU Build — rfcp-server-gpu.exe
echo ========================================

REM Ensure CuPy is installed in the build environment
echo Checking CuPy installation...
python -c "import cupy; print(f'CuPy {cupy.__version__} with CUDA {cupy.cuda.runtime.runtimeGetVersion()}')"
if errorlevel 1 (
    echo ERROR: CuPy not installed. Run: pip install cupy-cuda12x
    exit /b 1
)

REM Build with the GPU spec
echo Building rfcp-server with GPU support...
cd /d %~dp0\..\backend
pyinstaller ..\installer\rfcp-server-gpu.spec --clean --noconfirm

echo.
echo Build complete! Output: dist\rfcp-server.exe
echo Size:
dir dist\rfcp-server.exe

REM Optional: copy to Electron resources
if exist "..\desktop\resources" (
    copy /y dist\rfcp-server.exe ..\desktop\resources\rfcp-server.exe
    echo Copied to desktop\resources\
)

pause
```

### File: `installer/build-gpu.sh` (WSL/Linux)

```bash
#!/bin/bash
set -e

echo "========================================"
echo " RFCP GPU Build — rfcp-server (GPU)"
echo "========================================"

# Check CuPy
python3 -c "import cupy; print(f'CuPy {cupy.__version__}')" 2>/dev/null || {
    echo "ERROR: CuPy not installed. Run: pip install cupy-cuda12x"
    exit 1
}

cd "$(dirname "$0")/../backend"
pyinstaller ../installer/rfcp-server-gpu.spec --clean --noconfirm

echo ""
echo "Build complete!"
ls -lh dist/rfcp-server*
```

---

## Task 3: GPU Backend — Graceful CuPy Detection (Priority 1 — 15 min)

### File: `backend/app/services/gpu_backend.py`

The existing gpu_backend.py should already handle CuPy's absence gracefully. Verify and fix if needed:

```python
# gpu_backend.py — must work in BOTH CPU and GPU builds

import numpy as np

# Try importing CuPy — this is the key detection
_cupy_available = False
_gpu_device_name = None
_gpu_memory_mb = 0

try:
    import cupy as cp
    # Verify we can actually use it (not just import it)
    device = cp.cuda.Device(0)
    _gpu_device_name = f'CUDA Device {device.id}'
    # Try to get the real name via the runtime
    try:
        props = cp.cuda.runtime.getDeviceProperties(0)
        _gpu_device_name = props.get('name', _gpu_device_name)
        if isinstance(_gpu_device_name, bytes):
            _gpu_device_name = _gpu_device_name.decode('utf-8').strip('\x00')
    except Exception:
        pass
    _gpu_memory_mb = device.mem_info[1] // (1024 * 1024)
    _cupy_available = True
except ImportError:
    cp = None  # CuPy not installed (CPU build)
except Exception as e:
    cp = None  # CuPy installed but CUDA not available
    print(f"[GPU] CuPy found but CUDA unavailable: {e}")


def is_gpu_available() -> bool:
    return _cupy_available


def get_gpu_info() -> dict:
    if _cupy_available:
        return {
            "available": True,
            "backend": "CuPy (CUDA)",
            "device": _gpu_device_name,
            "memory_mb": _gpu_memory_mb,
        }
    return {
        "available": False,
        "backend": "NumPy (CPU)",
        "device": "CPU",
        "memory_mb": 0,
    }


def get_array_module():
    """Return cupy if available, otherwise numpy."""
    if _cupy_available:
        return cp
    return np
```

### Usage in coverage_service.py:

```python
from app.services.gpu_backend import get_array_module, is_gpu_available

xp = get_array_module()  # cupy or numpy — same API

# All calculations use xp instead of np:
distances = xp.sqrt(dx**2 + dy**2)
path_loss = 20 * xp.log10(distances) + 20 * xp.log10(freq_mhz) - 27.55

# If using cupy, results need to come back to the CPU for JSON serialization:
if is_gpu_available():
    results = xp.asnumpy(path_loss)
else:
    results = path_loss
```
|
||||||
|
|
||||||
|
---

## Task 4: GPU Status in Frontend Header (Priority 2 — 10 min)

### Update GPUIndicator.tsx

When GPU is detected, the badge should clearly show it:

```
CPU build:    [⚙ CPU]        (gray badge)
GPU detected: [⚡ RTX 4060]   (green badge)
```

The existing GPUIndicator already does this. Just verify:
1. Badge color changes from gray → green when GPU available
2. Dropdown shows "Active: GPU (CUDA)" not just "CPU (NumPy)"
3. No install hints shown when CuPy IS available

---

## Task 5: Build Environment Setup (Priority 1 — Manual by Олег)

### Prerequisites for GPU build:

```powershell
# 1. Install CuPy in Windows Python (NOT WSL)
# CUDA Toolkit 13.1 is installed, so use the cuda13x wheel (not cuda12x)
pip install cupy-cuda13x

# 2. Verify CuPy works
python -c "import cupy; print(cupy.cuda.runtime.runtimeGetVersion())"
# Should print: 13010 (CUDA 13.1) or similar

# 3. Install PyInstaller if not present
pip install pyinstaller

# 4. Verify fastrlock (CuPy dependency)
pip install fastrlock
```

### Build commands:

```powershell
# CPU-only build (existing)
cd D:\root\rfcp\backend
pyinstaller ..\installer\rfcp-server.spec --clean --noconfirm

# GPU build (new)
cd D:\root\rfcp\backend
pyinstaller ..\installer\rfcp-server-gpu.spec --clean --noconfirm
```

### Expected output sizes:

```
rfcp-server.exe (CPU): ~80 MB
rfcp-server.exe (GPU): ~600-800 MB (CuPy bundles CUDA runtime libs)
```

---

## Task 6: Electron — Detect Build Variant (Priority 2 — 10 min)

### File: `desktop/main.js` or `desktop/src/main.ts`

Add version detection so the UI knows which build it's running:

```javascript
// After backend starts, check GPU status
async function checkBackendCapabilities() {
  try {
    const response = await fetch('http://127.0.0.1:8090/api/gpu/status');
    const data = await response.json();

    // Send to renderer
    mainWindow.webContents.send('gpu-status', data);

    if (data.available) {
      console.log(`[RFCP] GPU: ${data.device} (${data.memory_mb} MB)`);
    } else {
      console.log('[RFCP] Running in CPU mode');
    }
  } catch (e) {
    console.log('[RFCP] Backend not ready for GPU check');
  }
}
```

---

## Task 7: About / Version Info (Priority 3 — 5 min)

### Add build info to `/api/health` response:

```python
@app.get("/api/health")
async def health():
    gpu_info = get_gpu_info()
    return {
        "status": "ok",
        "version": "3.6.0",
        "build": "gpu" if gpu_info["available"] else "cpu",
        "gpu": gpu_info,
        "python": sys.version,
        "platform": platform.platform(),
    }
```

---

## Build & Test Procedure

### Step 1: Setup Build Environment

```powershell
# Windows PowerShell (NOT WSL)
cd D:\root\rfcp

# Verify Python environment
python --version         # Should be 3.11.x
pip list | findstr cupy  # Should show cupy-cuda13x

# If CuPy not installed:
pip install cupy-cuda13x fastrlock
```

### Step 2: Build GPU Variant

```powershell
cd D:\root\rfcp\backend
pyinstaller ..\installer\rfcp-server-gpu.spec --clean --noconfirm
```

### Step 3: Test Standalone

```powershell
# Run the built exe directly
.\dist\rfcp-server.exe

# In another terminal:
curl http://localhost:8090/api/health
curl http://localhost:8090/api/gpu/status
curl http://localhost:8090/api/gpu/diagnostics
```

### Step 4: Verify GPU Detection

Expected `/api/gpu/status` response:

```json
{
  "available": true,
  "backend": "CuPy (CUDA)",
  "device": "NVIDIA GeForce RTX 4060 Laptop GPU",
  "memory_mb": 8188
}
```
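
To automate this check from a script (a sketch; `summarize_gpu_status` and `check_server` are hypothetical helpers, and `check_server` assumes the exe from Step 3 is listening on port 8090):

```python
import json
from urllib.request import urlopen  # stdlib only, no extra deps


def summarize_gpu_status(payload: dict) -> str:
    """Turn the /api/gpu/status JSON payload into a one-line summary."""
    if payload.get("available"):
        return (f"GPU: {payload.get('device')} "
                f"({payload.get('memory_mb')} MB) via {payload.get('backend')}")
    return "CPU mode (NumPy)"


def check_server(url: str = "http://localhost:8090/api/gpu/status") -> str:
    """Fetch the status endpoint and summarize it (server must be running)."""
    with urlopen(url) as resp:
        return summarize_gpu_status(json.load(resp))
```

Run `check_server()` after starting the built exe; a one-line "GPU: …" or "CPU mode" answer is easier to eyeball than raw JSON.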

### Step 5: Run Coverage Calculation

- Place a site on map
- Calculate coverage (10km, 200m resolution)
- Check logs for: `[GPU] Using CUDA: RTX 4060 (8188 MB)`
- Compare performance: should be 5-10x faster than CPU

### Step 6: Full Electron Build

```powershell
# Copy GPU server to Electron resources
copy backend\dist\rfcp-server.exe desktop\resources\

# Build Electron installer
cd installer
.\build-win.sh  # or equivalent Windows script
```

---

## Risk Assessment

### Size Concern
CuPy bundles the CUDA runtime (~500 MB). Total GPU installer: ~700-800 MB.
**Mitigation:** This is acceptable for a professional RF planning tool.
AutoCAD is 7 GB. QGIS is 1.5 GB. Atoll is 3 GB+.

### CUDA Version Compatibility
cupy-cuda13x requires a CUDA 13.x-compatible driver.
RTX 4060 with Driver 581.42 → CUDA 13.0 → compatible ✅
**Mitigation:** gpu_backend.py already falls back to NumPy gracefully.

### PyInstaller + CuPy Issues
Known issues:
- CuPy ships many .so/.dll files that PyInstaller might miss
- `collect_all('cupy')` should catch them, but test thoroughly
- If DLLs are missing → add them manually to the `binaries` list

**Mitigation:** Test the standalone exe on a clean machine (no Python installed).
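
For reference, the `collect_all('cupy')` pattern in a spec file typically looks like this (an illustrative fragment, not the final `rfcp-server-gpu.spec`; the entry-point path and extra hidden imports are assumptions):

```python
# rfcp-server-gpu.spec (fragment, illustrative)
from PyInstaller.utils.hooks import collect_all

# collect_all returns (datas, binaries, hiddenimports) for the package,
# which pulls in CuPy's bundled CUDA runtime DLLs/.so files
cupy_datas, cupy_binaries, cupy_hidden = collect_all('cupy')

a = Analysis(
    ['app/main.py'],             # assumed entry point
    binaries=cupy_binaries,      # append missing CUDA DLLs here manually if needed
    datas=cupy_datas,
    hiddenimports=cupy_hidden + ['fastrlock', 'fastrlock.rlock'],
)
```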

### Antivirus False Positives
Larger exe = more AV suspicion. PyInstaller exes already trigger some AV.
**Mitigation:** Code-sign the exe (future task), submit to AV vendors for whitelisting.

---

## Success Criteria

- [ ] `rfcp-server-gpu.spec` created and builds successfully
- [ ] Built exe detects RTX 4060 on startup
- [ ] `/api/gpu/status` returns `"available": true`
- [ ] Coverage calculation uses CuPy (check logs)
- [ ] GPU badge shows "⚡ RTX 4060" (green) in header
- [ ] Fallback to NumPy works if CUDA unavailable
- [ ] CPU-only spec (`rfcp-server.spec`) still builds and works
- [ ] Build time < 10 minutes
- [ ] GPU exe size < 1 GB

---

## Commit Message

```
feat(build): add GPU-enabled PyInstaller build with CuPy + CUDA

- New rfcp-server-gpu.spec with CuPy/CUDA collection
- Build scripts: build-gpu.bat, build-gpu.sh
- Graceful GPU detection in gpu_backend.py
- Two-tier build: CPU (~80MB) and GPU (~700MB) variants
- Auto-detection: RTX 4060 → CuPy acceleration
- Fallback: no CUDA → NumPy (CPU mode)

Iteration 3.6.0 — Production GPU Build
```

---

## Files Summary

### New Files:

| File | Purpose |
|------|---------|
| `installer/rfcp-server-gpu.spec` | PyInstaller config with CuPy |
| `installer/build-gpu.bat` | Windows GPU build script |
| `installer/build-gpu.sh` | Linux/WSL GPU build script |

### Modified Files:

| File | Changes |
|------|---------|
| `backend/app/services/gpu_backend.py` | Verify graceful detection |
| `backend/app/main.py` | Health endpoint with build info |
| `desktop/main.js` or `main.ts` | GPU status check after backend start |
| `frontend/src/components/ui/GPUIndicator.tsx` | Verify badge shows GPU |

### No Changes Needed:

| File | Reason |
|------|--------|
| `installer/rfcp-server.spec` | CPU build stays as-is |
| `backend/app/services/coverage_service.py` | Already uses get_array_module() |
| `installer/build-win.sh` | Existing CPU build unchanged |

---

## Timeline

| Phase | Task | Time |
|-------|------|------|
| **P1** | Create rfcp-server-gpu.spec | 30 min |
| **P1** | Build scripts | 15 min |
| **P1** | Verify gpu_backend.py | 15 min |
| **P2** | Frontend badge verification | 10 min |
| **P2** | Electron GPU status | 10 min |
| **P3** | Health endpoint update | 5 min |
| **Test** | Build + test standalone | 20 min |
| **Test** | Full Electron build | 15 min |
| | **Total** | **~2 hours** |

**Claude Code estimated time: 10-15 min** (spec + scripts + backend changes)
**Manual testing by Олег: 30-45 min** (building + verifying)

---

**File:** `UMTC-Wiki-MEGA-TASK.md` (new file, 352 lines)

# UMTC Wiki v2.0 — MEGA TASK: Integration & Polish

Read UMTC-Wiki-v2.0-REFACTOR.md and UMTC-Wiki-v2.0-ROADMAP.md for full context.

This is a comprehensive task covering all remaining fixes and integration work.
Take your time, think hard, work through each section systematically.
Report after completing each major section.

---

## SECTION A: Fix Critical Tauri 404 Bug

The sidebar loads the full content tree correctly, but clicking ANY article shows 404.

### Debug steps:

1. In `frontend/src/lib/api.ts` — find where getPage is called with a slug.
   Add `console.log('[WIKI] getPage called with slug:', slug)`

2. In `frontend/src/lib/utils/backend.ts` — in the tauriGetPage function.
   Add `console.log('[WIKI] Tauri invoke get_page with:', slug)`

3. In `desktop/src-tauri/src/commands/content.rs` — in the get_page handler.
   Add `eprintln!("[WIKI] get_page received slug: {}", slug)`
   Add `eprintln!("[WIKI] trying path: {:?}", resolved_path)`

4. Check the Sidebar.svelte component — what href/slug does it generate when the user clicks?
   The web version uses `/api/pages/{slug}` — in desktop mode it should invoke with just the slug part.

5. Common mismatches to check:
   - Leading slash: sidebar sends `/lte/bbu` but Rust expects `lte/bbu`
   - File extension: Rust looks for `lte/bbu.md` but the file is `lte/bbu/index.md`
   - URL encoding: Ukrainian characters in slugs
   - The SvelteKit catch-all route `[...slug]` may pass the slug differently

6. Fix the mismatch. Test navigation to at least 10 different pages including:
   - Root sections (lte/, ran/, mikrotik/)
   - Nested pages (lte/bbu, ran/srsenb-config)
   - Glossary terms (glossary/prb)
   - Deep nesting if any
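
The normalization the handler should end up doing can be sketched language-agnostically (Python here; the real fix lives in the Rust `get_page` handler, and the `/api/pages` prefix handling and candidate-path order are assumptions to verify against the actual code):

```python
from urllib.parse import unquote


def normalize_slug(raw: str) -> str:
    """Decode %XX escapes, drop any /api/pages prefix and outer slashes."""
    slug = unquote(raw)  # handles Ukrainian characters in slugs
    if slug.startswith("/api/pages/"):
        slug = slug[len("/api/pages/"):]
    return slug.strip("/")


def candidate_paths(slug: str) -> list[str]:
    """Both file layouts the handler should try, in order."""
    return [f"{slug}.md", f"{slug}/index.md"]
```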

---

## SECTION B: Fix Web Deployment

The web version must keep working. Test and fix:

1. Check that `backend/content.py` imports work:
   - `from wiki_frontmatter import ArticleFrontmatter`
   - `from wikilinks import WikiLinksExtension`
   - `from backlinks import BacklinksIndex`
   - `from admonitions import AdmonitionsExtension`

   If any import fails, fix the module.

2. Add the admonitions extension to the markdown pipeline in content.py
   (wikilinks was already integrated, verify admonitions too)

3. Make sure the backlinks API endpoint in main.py works:
   - GET /api/pages/{slug:path}/backlinks
   - Should return { "slug": "...", "backlinks": [...], "count": N }

4. Add grade/status/category to the page API response:
   - GET /api/pages/{slug} should now include grade, status, category fields

5. Create a simple test script `scripts/test_web.py`:

```python
# Test that backend starts and key endpoints work
import requests

BASE = "http://localhost:8000"

# Test navigation
r = requests.get(f"{BASE}/api/navigation")
assert r.status_code == 200
nav = r.json()
print(f"Navigation: {len(nav)} sections")

# Test page load
r = requests.get(f"{BASE}/api/pages/index")
assert r.status_code == 200
print(f"Home page: {r.json().get('title', 'OK')}")

# Test search
r = requests.get(f"{BASE}/api/search?q=LTE")
assert r.status_code == 200
print(f"Search 'LTE': {len(r.json())} results")

# Test backlinks
r = requests.get(f"{BASE}/api/pages/glossary/enb/backlinks")
print(f"Backlinks for eNB: {r.status_code}")

print("\nAll tests passed!")
```

---

## SECTION C: Frontend Wiki Components — Full Integration

### C.1: Article Grade Badge on Pages

In the wiki page view (`frontend/src/routes/[...slug]/+page.svelte` or equivalent):
- Import ArticleGrade component
- Display the grade badge next to the page title
- The grade comes from the page API response (field: `grade`)
- If no grade, don't show badge
- Style: small badge inline with title, not a separate block

### C.2: Breadcrumbs Component

Create/update `frontend/src/lib/components/wiki/Breadcrumbs.svelte`:

```svelte
<!-- Example: Головна > LTE > BBU Setup -->
<nav class="breadcrumbs">
  <a href="/">Головна</a>
  <span class="separator">/</span>
  <a href="/lte">LTE</a>
  <span class="separator">/</span>
  <span class="current">BBU Setup</span>
</nav>
```

- Generate from current page slug
- Each segment is a link except the last
- Use titles from navigation tree if available, otherwise humanize slug
- Works in both web and desktop mode
- Integrate into the page layout — show above article title
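
The generation logic can be sketched as follows (Python pseudocode for the Svelte component; a `titles` lookup keyed by slug path is an assumption about how the navigation tree is exposed):

```python
def humanize(segment: str) -> str:
    """Fallback title when the nav tree has none: 'srsenb-config' -> 'Srsenb Config'."""
    return segment.replace("-", " ").replace("_", " ").title()


def breadcrumbs(slug: str, titles: dict[str, str]) -> list[dict]:
    """Build [{label, href}] from a slug like 'lte/bbu'; the last item has no href."""
    crumbs = [{"label": "Головна", "href": "/"}]
    parts = [p for p in slug.split("/") if p]
    for i, part in enumerate(parts):
        path = "/".join(parts[: i + 1])
        label = titles.get(path, humanize(part))
        is_last = i == len(parts) - 1
        crumbs.append({"label": label, "href": None if is_last else f"/{path}"})
    return crumbs
```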

### C.3: Admonition CSS

Add styles for admonition boxes to the global CSS or a component:

```css
.admonition {
  border-left: 4px solid;
  border-radius: 4px;
  padding: 12px 16px;
  margin: 16px 0;
}
.admonition-note { border-color: #3b82f6; background: rgba(59,130,246,0.1); }
.admonition-warning { border-color: #f59e0b; background: rgba(245,158,11,0.1); }
.admonition-tip { border-color: #10b981; background: rgba(16,185,129,0.1); }
.admonition-danger { border-color: #ef4444; background: rgba(239,68,68,0.1); }

/* Dark mode */
:global(.dark) .admonition-note { background: rgba(59,130,246,0.15); }
:global(.dark) .admonition-warning { background: rgba(245,158,11,0.15); }
:global(.dark) .admonition-tip { background: rgba(16,185,129,0.15); }
:global(.dark) .admonition-danger { background: rgba(239,68,68,0.15); }

.admonition-title {
  font-weight: 600;
  margin-bottom: 4px;
}
.admonition-icon {
  margin-right: 8px;
}
```

### C.4: Wiki-Link CSS

Add styles for wiki-links:

```css
.wiki-link {
  color: #3b82f6;
  text-decoration: none;
  border-bottom: 1px dotted #3b82f6;
}
.wiki-link:hover {
  border-bottom-style: solid;
}
.red-link {
  color: #ef4444;
  border-bottom-color: #ef4444;
}
.red-link:hover::after {
  content: " (сторінку не знайдено)";
  font-size: 0.75em;
  color: #9ca3af;
}
```

### C.5: Backlinks Panel Integration

In the page view, after the article content:
- Show BacklinksPanel component
- Pass current page slug
- Works in both web (API) and desktop (Tauri IPC)
- Only show if there are backlinks (don't show empty panel)

### C.6: Table of Contents (sidebar)

If the page has headings, generate a table of contents:
- Extract h2/h3 from rendered HTML or use TOC data from backend
- Show as a floating sidebar on wide screens (>1200px)
- Collapsible on smaller screens
- Highlight current section on scroll (intersection observer)
- Works in both modes

---

## SECTION D: Search Integration for Desktop

1. Test Tantivy search in Tauri:
   - The search command should be wired to the Search component
   - Type in search bar → results appear
   - Cyrillic text should work (test: "мережа", "антена", "LTE")

2. If search doesn't work, debug:
   - Is the search index built on startup? Check Rust logs
   - Are content files found? Check content path resolution
   - Is the query reaching the search command?

3. Search results should show:
   - Page title
   - Brief excerpt (first 150 chars of content)
   - Click navigates to page

4. Keyboard shortcut: Ctrl+K should focus the search bar (already exists in web, verify in desktop)
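
The excerpt rule in item 3 can be sketched as a pure function (an illustrative sketch; it assumes the content has already been stripped to plain text, and the word-boundary cut is an assumption beyond the bare 150-char rule):

```python
def make_excerpt(content: str, limit: int = 150) -> str:
    """First `limit` chars of content, cut at a word boundary, with ellipsis."""
    text = " ".join(content.split())  # collapse newlines and repeated whitespace
    if len(text) <= limit:
        return text
    cut = text[:limit].rsplit(" ", 1)[0]  # avoid cutting mid-word
    return cut + "…"
```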

---

## SECTION E: Content Quality Pass

### E.1: Content Audit Script

Create `scripts/analyze_content.py`:
- Scan all .md files in content/
- For each file report: has_frontmatter, word_count, has_code_blocks, grade, broken_wiki_links
- Summary: total articles, by grade, articles needing work
- Print actionable output
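
The per-file checks can be sketched as a single function (a minimal sketch; it assumes frontmatter is a leading `---` block with a `grade:` key and that wiki-links use the `[[target]]` / `[[target|label]]` syntax):

```python
import re


def audit_article(text: str, known_slugs: set[str]) -> dict:
    """Report the metrics listed above for one markdown article."""
    has_frontmatter = text.startswith("---")
    grade = None
    if has_frontmatter:
        m = re.search(r"^grade:\s*(\S+)", text, re.MULTILINE)
        grade = m.group(1) if m else None
    # Strip the frontmatter block before counting words and links
    body = re.sub(r"\A---.*?---", "", text, count=1, flags=re.DOTALL)
    links = re.findall(r"\[\[([^\]|]+)(?:\|[^\]]*)?\]\]", body)
    return {
        "has_frontmatter": has_frontmatter,
        "word_count": len(body.split()),
        "has_code_blocks": "```" in body,
        "grade": grade,
        "broken_wiki_links": [l for l in links if l.strip() not in known_slugs],
    }
```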

### E.2: Add More Glossary Terms (20 more)

Create glossary entries with proper frontmatter (grade: B, category: glossary):

**Radio/RF terms:**
- SGW (Serving Gateway)
- PGW (PDN Gateway)
- HSS (Home Subscriber Server)
- RSRP (Reference Signal Received Power)
- RSRQ (Reference Signal Received Quality)
- SINR (Signal to Interference plus Noise Ratio)
- EARFCN (E-UTRA Absolute Radio Frequency Channel Number)
- OFDM (Orthogonal Frequency Division Multiplexing)
- MIMO (Multiple Input Multiple Output)
- QoS (Quality of Service)

**Infrastructure terms:**
- WireGuard
- MikroTik
- Mesh Network
- VLAN (Virtual LAN)
- BGP (Border Gateway Protocol)
- mTLS (Mutual TLS)
- Caddy (Web Server)

**Protocol terms:**
- S1AP (S1 Application Protocol)
- GTP (GPRS Tunnelling Protocol)
- SCTP (Stream Control Transmission Protocol)

Each glossary term should:
- Have title in English with Ukrainian description
- Use [[wiki-links]] to cross-reference other terms
- Include: what it is, why it matters for UMTC, key parameters
- Be 100-300 words
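
A template for one entry (illustrative; the frontmatter field names follow the grade/status/category scheme used elsewhere in this task, and the `glossary/...` link slugs are assumptions to match against the real content tree):

```markdown
---
title: "RSRP (Reference Signal Received Power)"
grade: B
status: published
category: glossary
---

Середня потужність ресурсних елементів опорного сигналу LTE, яку вимірює UE.
Ключова метрика покриття: використовується разом з [[glossary/rsrq]] та
[[glossary/sinr]] для оцінки якості сигналу.
```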

### E.3: Upgrade 5 Key Articles to Grade B

Pick the 5 most important articles and upgrade them:
- Add proper frontmatter with all fields
- Add :::note and :::warning admonitions where useful
- Add [[wiki-links]] to glossary terms
- Add "Див. також" (See also) section with related articles
- Verify technical accuracy
- Set grade: B

Good candidates:
- Main LTE overview
- srsENB configuration
- WireGuard setup
- Open5GS overview
- MikroTik basics

---

## SECTION F: Desktop App Polish

### F.1: Window Title

Show current page title in the window title bar:
`UMTC Wiki — {Page Title}`

### F.2: Keyboard Navigation

- Arrow keys in sidebar to navigate
- Enter to open selected item
- Backspace to go back
- Ctrl+K for search (verify)

### F.3: Error Handling

- If page not found, show a friendly Ukrainian message instead of generic 404
- If content directory is missing, show setup instructions
- If search index fails to build, log error but don't crash

### F.4: About Dialog

Add a simple about/info accessible from a gear icon or Help menu:
- UMTC Wiki v2.0
- Built with Tauri + SvelteKit + Rust
- Content articles count
- "Офлайн документація для УМТЗ" (offline documentation for UMTC)

---

## SECTION G: Production Builds

### G.1: Web Docker Build Test

Update docker-compose.yml if needed to include new backend modules.
Make sure Dockerfile copies:
- backend/wiki_frontmatter.py
- backend/wikilinks.py
- backend/backlinks.py
- backend/admonitions.py
- All content/ files

### G.2: Tauri Production Build

Run `npx tauri build` and fix any remaining compilation errors.
Report the output binary size and location.

---

## Order of Operations

Do these in order — each section builds on the previous:

1. **SECTION A** — Fix 404 bug (CRITICAL, everything depends on this)
2. **SECTION B** — Verify web backend
3. **SECTION C** — Frontend components
4. **SECTION D** — Search
5. **SECTION E** — Content
6. **SECTION F** — Desktop polish
7. **SECTION G** — Production builds

Report after each section with:
- What was done
- What files were changed
- Any issues found
- Ready for next section?

Think hard about edge cases. Don't break existing functionality.
Good luck! 🚀

---

```diff
@@ -14,6 +14,7 @@ from app.services.coverage_service import (
     select_propagation_model,
 )
 from app.services.parallel_coverage_service import CancellationToken
+from app.services.boundary_service import calculate_coverage_boundary
 
 router = APIRouter()
 
@@ -24,6 +25,12 @@ class CoverageRequest(BaseModel):
     settings: CoverageSettings = CoverageSettings()
 
 
+class BoundaryPoint(BaseModel):
+    """Single boundary coordinate"""
+    lat: float
+    lon: float
+
+
 class CoverageResponse(BaseModel):
     """Coverage calculation response"""
     points: List[CoveragePoint]
@@ -32,6 +39,7 @@ class CoverageResponse(BaseModel):
     stats: dict
     computation_time: float  # seconds
     models_used: List[str]  # which models were active
+    boundary: Optional[List[BoundaryPoint]] = None  # coverage boundary polygon
 
 
 @router.post("/calculate")
@@ -131,13 +139,24 @@ async def calculate_coverage(request: CoverageRequest) -> CoverageResponse:
         "points_with_atmospheric_loss": sum(1 for p in points if p.atmospheric_loss > 0),
     }
 
+    # Calculate coverage boundary
+    boundary = None
+    if points:
+        boundary_coords = calculate_coverage_boundary(
+            [p.model_dump() for p in points],
+            threshold_dbm=request.settings.min_signal,
+        )
+        if boundary_coords:
+            boundary = [BoundaryPoint(**c) for c in boundary_coords]
+
     return CoverageResponse(
         points=points,
         count=len(points),
         settings=effective_settings,
         stats=stats,
         computation_time=round(computation_time, 2),
-        models_used=models_used
+        models_used=models_used,
+        boundary=boundary,
     )
```

---

```diff
@@ -1,12 +1,29 @@
+import sys
+import platform
+
 from fastapi import APIRouter, Depends
 from app.api.deps import get_db
+from app.services.gpu_backend import gpu_manager
 
 router = APIRouter()
 
 
 @router.get("/")
 async def health_check():
-    return {"status": "ok", "service": "rfcp-backend", "version": "1.1.0"}
+    gpu_info = gpu_manager.get_status()
+    return {
+        "status": "ok",
+        "service": "rfcp-backend",
+        "version": "3.6.0",
+        "build": "gpu" if gpu_info.get("gpu_available") else "cpu",
+        "gpu": {
+            "available": gpu_info.get("gpu_available", False),
+            "backend": gpu_info.get("active_backend", "cpu"),
+            "device": gpu_info.get("active_device", {}).get("name") if gpu_info.get("active_device") else "CPU",
+        },
+        "python": sys.version.split()[0],
+        "platform": platform.system(),
+    }
 
 
 @router.get("/db")
```

---

```diff
@@ -1,4 +1,6 @@
 from contextlib import asynccontextmanager
+import logging
+import platform
 
 from fastapi import FastAPI, WebSocket
 from fastapi.middleware.cors import CORSMiddleware
@@ -7,9 +9,54 @@ from app.core.database import connect_to_mongo, close_mongo_connection
 from app.api.routes import health, projects, terrain, coverage, regions, system, gpu
 from app.api.websocket import websocket_endpoint
 
+logger = logging.getLogger("rfcp.startup")
+
+
+def check_gpu_availability():
+    """Log GPU status on startup for debugging."""
+    is_wsl = "microsoft" in platform.release().lower()
+    env_note = " (WSL2)" if is_wsl else ""
+
+    # Check CuPy / CUDA
+    try:
+        import cupy as cp
+        device_count = cp.cuda.runtime.getDeviceCount()
+        if device_count > 0:
+            props = cp.cuda.runtime.getDeviceProperties(0)
+            name = props["name"]
+            if isinstance(name, bytes):
+                name = name.decode()
+            mem_mb = props["totalGlobalMem"] // (1024 * 1024)
+            logger.info(f"GPU detected{env_note}: {name} ({mem_mb} MB VRAM)")
+            logger.info(f"CuPy {cp.__version__}, CUDA devices: {device_count}")
+        else:
+            logger.warning(f"CuPy installed but no CUDA devices found{env_note}")
+    except ImportError as e:
+        logger.warning(f"CuPy FAILED{env_note}: {e}")
+        if is_wsl:
+            logger.warning("Install: pip3 install cupy-cuda12x --break-system-packages")
+        else:
+            logger.warning("Install: pip install cupy-cuda12x")
+    except Exception as e:
+        logger.warning(f"CuPy error{env_note}: {e}")
+
+    # Check PyOpenCL
+    try:
+        import pyopencl as cl
+        platforms = cl.get_platforms()
+        for p in platforms:
+            for d in p.get_devices():
+                logger.info(f"OpenCL device: {d.name.strip()}")
+    except ImportError:
+        logger.debug("PyOpenCL not installed (optional)")
+    except Exception:
+        pass
+
+
 @asynccontextmanager
 async def lifespan(app: FastAPI):
+    # Log GPU status on startup
+    check_gpu_availability()
     await connect_to_mongo()
     yield
     await close_mongo_connection()
```

---

**File:** `backend/app/services/boundary_service.py` (new file, 122 lines)
"""
Coverage boundary calculation service.

Computes concave hull (alpha shape) from coverage points to generate
a realistic boundary that follows actual coverage contour.
"""

import logging
from typing import Optional

logger = logging.getLogger(__name__)


def calculate_coverage_boundary(
    points: list[dict],
    threshold_dbm: float = -100,
    simplify_tolerance: float = 0.001,
) -> list[dict]:
    """
    Calculate coverage boundary as concave hull of points above threshold.

    Args:
        points: List of coverage points with 'lat', 'lon', 'rsrp' keys
        threshold_dbm: RSRP threshold - points below this are excluded
        simplify_tolerance: Simplification tolerance in degrees (~100m per 0.001)

    Returns:
        List of {'lat': float, 'lon': float} coordinates forming boundary polygon.
        Empty list if boundary cannot be computed.
    """
    try:
        from shapely.geometry import MultiPoint
        from shapely import concave_hull
    except ImportError:
        logger.warning("Shapely not installed - boundary calculation disabled")
        return []

    # Filter points above threshold
    valid_coords = [
        (p['lon'], p['lat'])  # Shapely uses (x, y) = (lon, lat)
        for p in points
        if p.get('rsrp', -999) >= threshold_dbm
    ]

    if len(valid_coords) < 3:
        logger.debug(f"Not enough points for boundary: {len(valid_coords)}")
        return []

    try:
        # Create MultiPoint geometry
        mp = MultiPoint(valid_coords)

        # Compute concave hull (alpha shape)
        # ratio: 0 = convex hull, 1 = very tight fit
        # 0.3-0.5 gives good balance between detail and smoothness
        hull = concave_hull(mp, ratio=0.3)

        if hull.is_empty:
            logger.debug("Concave hull is empty")
            return []

        # Simplify to reduce points (0.001 deg ≈ 100m)
        if simplify_tolerance > 0:
            hull = hull.simplify(simplify_tolerance, preserve_topology=True)

        # Extract coordinates based on geometry type
        if hull.geom_type == 'Polygon':
            coords = list(hull.exterior.coords)
            return [{'lat': c[1], 'lon': c[0]} for c in coords]

        elif hull.geom_type == 'MultiPolygon':
            # Return largest polygon's exterior
            largest = max(hull.geoms, key=lambda g: g.area)
            coords = list(largest.exterior.coords)
            return [{'lat': c[1], 'lon': c[0]} for c in coords]

        elif hull.geom_type == 'GeometryCollection':
            # Find polygons in collection
            polygons = [g for g in hull.geoms if g.geom_type == 'Polygon']
            if polygons:
                largest = max(polygons, key=lambda g: g.area)
                coords = list(largest.exterior.coords)
                return [{'lat': c[1], 'lon': c[0]} for c in coords]

        logger.debug(f"Unexpected hull geometry type: {hull.geom_type}")
        return []

    except Exception as e:
        logger.warning(f"Boundary calculation error: {e}")
        return []


def calculate_multi_site_boundaries(
    points: list[dict],
    threshold_dbm: float = -100,
) -> dict[str, list[dict]]:
    """
    Calculate separate boundaries for each site's coverage area.

    Args:
        points: Coverage points with 'lat', 'lon', 'rsrp', 'site_id' keys
        threshold_dbm: RSRP threshold

    Returns:
        Dict mapping site_id to boundary coordinates list.
    """
    # Group points by site_id
    by_site: dict[str, list[dict]] = {}
    for p in points:
        site_id = p.get('site_id', 'default')
        if site_id not in by_site:
            by_site[site_id] = []
        by_site[site_id].append(p)

    # Calculate boundary for each site
    boundaries = {}
    for site_id, site_points in by_site.items():
        boundary = calculate_coverage_boundary(site_points, threshold_dbm)
        if boundary:
            boundaries[site_id] = boundary

    return boundaries
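For a quick feel of the boundary contract (per-site lists of `{'lat', 'lon'}` dicts), here is a stdlib-only sketch that mirrors the grouping and threshold filtering above, substituting a convex hull (monotone chain) for shapely's `concave_hull`; the function names and test data are illustrative, not the shipped service:

```python
def convex_hull(coords: list[tuple[float, float]]) -> list[tuple[float, float]]:
    """Andrew's monotone chain; stands in for shapely's concave_hull(ratio=0)."""
    pts = sorted(set(coords))
    if len(pts) < 3:
        return []

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]


def boundaries_by_site(points: list[dict], threshold_dbm: float = -100) -> dict[str, list[dict]]:
    """Group by site_id, drop points below threshold, hull what remains."""
    by_site: dict[str, list[dict]] = {}
    for p in points:
        by_site.setdefault(p.get('site_id', 'default'), []).append(p)
    out = {}
    for site_id, site_points in by_site.items():
        coords = [(p['lon'], p['lat']) for p in site_points
                  if p.get('rsrp', -999) >= threshold_dbm]
        hull = convex_hull(coords)
        if hull:
            out[site_id] = [{'lat': y, 'lon': x} for x, y in hull]
    return out


# Square of strong points plus one weak interior point that must be filtered out
pts = [
    {'lat': 0.0, 'lon': 0.0, 'rsrp': -80, 'site_id': 'A'},
    {'lat': 0.0, 'lon': 1.0, 'rsrp': -85, 'site_id': 'A'},
    {'lat': 1.0, 'lon': 0.0, 'rsrp': -90, 'site_id': 'A'},
    {'lat': 1.0, 'lon': 1.0, 'rsrp': -95, 'site_id': 'A'},
    {'lat': 0.5, 'lon': 0.5, 'rsrp': -120, 'site_id': 'A'},  # below -100 dBm
]
b = boundaries_by_site(pts)
```

The convex hull is only a stand-in here: the point of shapely's `concave_hull(ratio=0.3)` in the real service is precisely that sector and wedge coverage shapes are not convex.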
@@ -62,6 +62,9 @@ from app.services.parallel_coverage_service import (
     calculate_coverage_parallel, get_cpu_count, get_parallel_backend,
     CancellationToken,
 )
+# NOTE: gpu_manager and gpu_service are imported INSIDE functions that need them,
+# NOT at module level. This prevents worker processes from initializing CuPy/CUDA
+# which causes cudaErrorInsufficientDriver errors in child processes.
 
 # ── New propagation models (Phase 3.0) ──
 from app.propagation.base import PropagationModel, PropagationInput, PropagationOutput
@@ -546,8 +549,11 @@ class CoverageService:
         from app.services.gpu_service import gpu_service
 
         t_gpu = time.time()
-        grid_lats = np.array([lat for lat, lon in grid])
-        grid_lons = np.array([lon for lat, lon in grid])
+        # Import GPU modules here (main process only) to avoid CUDA context issues in workers
+        from app.services.gpu_backend import gpu_manager
+        xp = gpu_manager.get_array_module()
+        grid_lats = xp.array([lat for lat, lon in grid], dtype=xp.float64)
+        grid_lons = xp.array([lon for lat, lon in grid], dtype=xp.float64)
 
         pre_distances = gpu_service.precompute_distances(
             grid_lats, grid_lons, site.lat, site.lon
@@ -556,6 +562,9 @@ class CoverageService:
             pre_distances, site.frequency, site.height,
             environment=getattr(settings, 'environment', 'urban'),
         )
+        gpu_time = time.time() - t_gpu
+        backend_name = "GPU (CUDA)" if gpu_manager.gpu_available else "CPU (NumPy)"
+        _clog(f"Precomputed {len(grid)} distances+path_loss on {backend_name} in {gpu_time:.2f}s")
 
         # Build lookup dict for point loop
         precomputed = {}
@@ -918,9 +927,12 @@ class CoverageService:
             await asyncio.sleep(0)
 
             from app.services.gpu_service import gpu_service
+            from app.services.gpu_backend import gpu_manager
 
-            grid_lats = np.array([lat for lat, _lon in tile_grid])
-            grid_lons = np.array([_lon for _lat, _lon in tile_grid])
+            t_gpu = time.time()
+            xp = gpu_manager.get_array_module()
+            grid_lats = xp.array([lat for lat, _lon in tile_grid], dtype=xp.float64)
+            grid_lons = xp.array([_lon for _lat, _lon in tile_grid], dtype=xp.float64)
 
             pre_distances = gpu_service.precompute_distances(
                 grid_lats, grid_lons, site.lat, site.lon,
@@ -929,6 +941,9 @@ class CoverageService:
                 pre_distances, site.frequency, site.height,
                 environment=getattr(settings, 'environment', 'urban'),
             )
+            gpu_time = time.time() - t_gpu
+            backend_name = "GPU (CUDA)" if gpu_manager.gpu_available else "CPU (NumPy)"
+            _clog(f"Tile {tile_idx+1}: precomputed {len(tile_grid)} pts on {backend_name} in {gpu_time:.2f}s")
 
             precomputed = {}
             for i, (lat, lon) in enumerate(tile_grid):
@@ -1405,14 +1420,18 @@ class CoverageService:
         lat2: float, lon2: float
     ) -> float:
         """Calculate bearing from point 1 to point 2 (degrees)"""
-        lat1, lon1, lat2, lon2 = map(np.radians, [lat1, lon1, lat2, lon2])
+        # Use math for scalar operations (faster than numpy/cupy for single values)
+        lat1_r = math.radians(lat1)
+        lon1_r = math.radians(lon1)
+        lat2_r = math.radians(lat2)
+        lon2_r = math.radians(lon2)
 
-        dlon = lon2 - lon1
+        dlon = lon2_r - lon1_r
 
-        x = np.sin(dlon) * np.cos(lat2)
-        y = np.cos(lat1) * np.sin(lat2) - np.sin(lat1) * np.cos(lat2) * np.cos(dlon)
+        x = math.sin(dlon) * math.cos(lat2_r)
+        y = math.cos(lat1_r) * math.sin(lat2_r) - math.sin(lat1_r) * math.cos(lat2_r) * math.cos(dlon)
 
-        bearing = np.degrees(np.arctan2(x, y))
+        bearing = math.degrees(math.atan2(x, y))
 
         return (bearing + 360) % 360
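The scalar bearing rewrite can be spot-checked against known compass headings; a minimal standalone sketch of the same formula:

```python
import math


def bearing_deg(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Initial great-circle bearing from point 1 toward point 2, in [0, 360)."""
    lat1_r, lat2_r = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(lat2_r)
    y = (math.cos(lat1_r) * math.sin(lat2_r)
         - math.sin(lat1_r) * math.cos(lat2_r) * math.cos(dlon))
    return (math.degrees(math.atan2(x, y)) + 360) % 360


north = bearing_deg(0, 0, 1, 0)  # due north from the equator, ≈ 0
east = bearing_deg(0, 0, 0, 1)   # due east from the equator, ≈ 90
```

For scalar inputs this avoids the per-call array allocation that `np.radians`/`np.arctan2` incur, which is the stated motivation for the change.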
@@ -171,17 +171,34 @@ class GPUManager:
         """Full diagnostic info for troubleshooting GPU detection."""
         import sys
         import platform
+        import subprocess
+
+        is_wsl = "microsoft" in platform.release().lower()
 
         diag = {
             "python_version": sys.version,
+            "python_executable": sys.executable,
             "platform": platform.platform(),
+            "is_wsl": is_wsl,
             "numpy": {"version": np.__version__},
             "cuda": {},
             "opencl": {},
+            "nvidia_smi": None,
             "detected_devices": len(self._devices),
             "active_backend": self._active_backend.value,
         }
+
+        # Check nvidia-smi (works even without CuPy)
+        try:
+            result = subprocess.run(
+                ["nvidia-smi", "--query-gpu=name,memory.total,driver_version", "--format=csv,noheader"],
+                capture_output=True, text=True, timeout=5
+            )
+            if result.returncode == 0 and result.stdout.strip():
+                diag["nvidia_smi"] = result.stdout.strip()
+        except Exception:
+            diag["nvidia_smi"] = "not found or error"
 
         # Check CuPy/CUDA
         try:
             import cupy as cp
@@ -200,6 +217,9 @@ class GPUManager:
             }
         except ImportError:
             diag["cuda"]["error"] = "CuPy not installed"
+            if is_wsl:
+                diag["cuda"]["install_hint"] = "pip3 install cupy-cuda12x --break-system-packages"
+            else:
+                diag["cuda"]["install_hint"] = "pip install cupy-cuda12x"
         except Exception as e:
             diag["cuda"]["error"] = str(e)
@@ -221,6 +241,9 @@ class GPUManager:
                 diag["opencl"]["platforms"].append(platform_info)
         except ImportError:
             diag["opencl"]["error"] = "PyOpenCL not installed"
+            if is_wsl:
+                diag["opencl"]["install_hint"] = "pip3 install pyopencl --break-system-packages"
+            else:
+                diag["opencl"]["install_hint"] = "pip install pyopencl"
         except Exception as e:
             diag["opencl"]["error"] = str(e)
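The `nvidia-smi` probe added to the diagnostics can also run as a standalone snippet; a sketch that degrades gracefully when the binary is missing (the helper name is illustrative):

```python
# Probe for NVIDIA hardware via nvidia-smi, independent of CuPy. Returns the
# CSV summary line(s) on success, or None when the tool is absent or fails.
import subprocess
from typing import Optional


def probe_nvidia_smi(timeout: float = 5.0) -> Optional[str]:
    try:
        result = subprocess.run(
            ["nvidia-smi", "--query-gpu=name,memory.total,driver_version",
             "--format=csv,noheader"],
            capture_output=True, text=True, timeout=timeout,
        )
    except (OSError, subprocess.TimeoutExpired):
        return None  # binary missing or hung - treat as "no NVIDIA GPU"
    out = result.stdout.strip()
    return out if result.returncode == 0 and out else None


info = probe_nvidia_smi()
```

Catching `OSError` covers the `FileNotFoundError` raised when `nvidia-smi` is not on `PATH`, so the probe never crashes a machine without NVIDIA drivers.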
@@ -7,6 +7,7 @@ pymongo==4.6.1
 pydantic-settings==2.1.0
 numpy==1.26.4
 scipy==1.12.0
+shapely>=2.0.0
 requests==2.31.0
 httpx==0.27.0
 aiosqlite>=0.19.0
@@ -52,9 +52,11 @@ const getLogPath = () => {
 const getBackendExePath = () => {
   const exeName = process.platform === 'win32' ? 'rfcp-server.exe' : 'rfcp-server';
   if (isDev) {
-    return path.join(__dirname, '..', 'backend', exeName);
+    // Dev: use the ONEDIR build output
+    return path.join(__dirname, '..', 'backend', 'dist', 'rfcp-server', exeName);
   }
-  return getResourcePath('backend', exeName);
+  // Production: ONEDIR structure - backend/rfcp-server/rfcp-server.exe
+  return getResourcePath('backend', 'rfcp-server', exeName);
 };
 
 /** Frontend index.html path (production only) */
docs/RFCP-Native-Backend-Research.md (new file, 233 lines)
@@ -0,0 +1,233 @@
# RFCP Native Backend Research

## Executive Summary

**Finding:** The production Electron app already supports native Windows operation without WSL2.

The production build uses PyInstaller to bundle the Python backend as a standalone Windows executable (`rfcp-server.exe`). WSL2 is only used during development. No migration is needed for end users.

---

## Current Architecture

### Development Mode

```
RFCP (Electron dev)
└── Spawns: python -m uvicorn app.main:app --host 127.0.0.1 --port 8090
    └── Uses system Python (Windows or WSL2)
    └── Requires venv with dependencies
```

### Production Mode (Already Implemented)

```
RFCP.exe (Electron packaged)
└── Spawns: rfcp-server.exe (bundled PyInstaller binary)
    └── Self-contained Python + all dependencies
    └── No WSL2 required
    └── No system Python required
```

---

## Evidence from Codebase

### desktop/main.js (Lines 120-145)

```javascript
function startBackend() {
  // Production: use bundled executable
  if (isProduction) {
    const serverPath = path.join(process.resourcesPath, 'rfcp-server.exe');
    if (fs.existsSync(serverPath)) {
      backendProcess = spawn(serverPath, [], { ... });
      return;
    }
  }

  // Development: use system Python
  backendProcess = spawn('python', ['-m', 'uvicorn', 'app.main:app', ...]);
}
```

### installer/rfcp-server.spec (PyInstaller Config)

```python
# Key configuration
a = Analysis(
    ['run_server.py'],
    pathex=[backend_path],
    binaries=[],
    datas=[
        ('data/terrain', 'data/terrain'),  # Terrain data bundled
    ],
    hiddenimports=[
        'uvicorn.logging', 'uvicorn.loops', 'uvicorn.protocols',
        'motor', 'pymongo', 'numpy', 'scipy', 'shapely',
        # Full list of dependencies
    ],
)

exe = EXE(
    pyz,
    a.scripts,
    name='rfcp-server',
    console=True,  # Shows console for debugging
    icon='rfcp.ico',
)
```
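For the GPU build goal stated at the top of this task, the spec above would additionally need CuPy's compiled kernels and CUDA DLLs collected into the bundle. A hypothetical spec fragment (untested here; the extra module names are assumptions) using PyInstaller's `collect_all` hook helper:

```python
# Hypothetical extension of installer/rfcp-server.spec: pull CuPy's data files,
# compiled extensions, and hidden submodules into the Analysis inputs.
from PyInstaller.utils.hooks import collect_all

cupy_datas, cupy_binaries, cupy_hiddenimports = collect_all('cupy')

a = Analysis(
    ['run_server.py'],
    binaries=cupy_binaries,
    datas=[('data/terrain', 'data/terrain')] + cupy_datas,
    hiddenimports=['uvicorn.logging', 'motor', 'numpy', 'scipy', 'shapely']
                  + cupy_hiddenimports
                  + ['cupy_backends', 'fastrlock'],  # assumed extras for cupy-cuda13x
)
```

Whether this alone is sufficient depends on how CuPy locates the CUDA runtime at extraction time, which is exactly what the earlier ONEFILE attempt ran into.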
---

## GPU Acceleration in Production

### Current Status
The PyInstaller bundle **does not include CuPy** by default because:
1. CuPy requires CUDA runtime (large, ~500MB)
2. Not all users have NVIDIA GPUs
3. Binary would be too large for distribution

### Solution Options

**Option A: Ship CPU-only (Current)**
- Production build uses NumPy (CPU) for calculations
- GPU acceleration available only in dev mode or manual install
- Smallest download size (~100MB)

**Option B: Separate GPU Installer**
- Main installer: CPU-only (~100MB)
- Optional GPU addon: Downloads CuPy + CUDA runtime (~600MB)
- Implemented via install_rfcp.py dependency installer

**Option C: CUDA Toolkit Detection**
- Detect if CUDA is already installed on user's system
- If yes, attempt to load CuPy dynamically
- Graceful fallback to NumPy if not available
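Option C can be sketched in a few lines; this is an illustrative probe, not the shipped detection logic — the `getDeviceCount()` call fails fast when the driver or GPU is absent:

```python
# Runtime backend selection: use CuPy only when it imports AND a CUDA device
# is actually reachable; otherwise fall back to CPU-side arithmetic.
def get_array_backend():
    try:
        import cupy as xp  # present only if the user installed cupy-cuda12x/13x
        xp.cuda.runtime.getDeviceCount()  # raises if driver/GPU is missing
        return "gpu", xp
    except Exception:
        return "cpu", None


backend, xp = get_array_backend()
total = int(xp.asarray([1, 2, 3]).sum()) if xp is not None else sum([1, 2, 3])
```

Because the fallback path touches no GPU library at all, the same binary runs unchanged on machines without NVIDIA hardware.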
### Recommendation
Keep Option A (CPU-only production) with Option B available for power users:
1. Default production build works everywhere
2. Users with NVIDIA GPUs can run `install_rfcp.py` to enable GPU acceleration
3. No WSL2 required for either path

---

## Terrain Data Handling

### Current Implementation
Terrain data (SRTM .hgt files) is bundled inside the PyInstaller executable:

```python
datas=[
    ('data/terrain', 'data/terrain'),
]
```

### Considerations
- Bundled terrain data increases exe size significantly
- Alternative: Download terrain on first use (like current region download system)
- For initial release, bundling common regions is acceptable

---

## Database (MongoDB)

### Production Architecture
The Electron app embeds MongoDB or requires MongoDB to be installed separately.

Options:
1. **Embedded MongoDB** - Ships mongod.exe with the app
2. **MongoDB Atlas** - Cloud database (requires internet)
3. **SQLite** - Switch to file-based database (significant refactor)
4. **In-memory + file persistence** - No MongoDB required (significant refactor)

Current implementation uses Motor (async MongoDB driver). For true standalone operation, consider SQLite migration in a future iteration.
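For scale, the SQLite option needs no external service at all; an illustrative stdlib sketch with a hypothetical `sites` schema (the real refactor would use aiosqlite for async access):

```python
# File-based persistence sketch: one SQLite file instead of a mongod process.
import sqlite3

conn = sqlite3.connect(":memory:")  # a real app would pass a file path
conn.execute("""
    CREATE TABLE IF NOT EXISTS sites (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        lat REAL, lon REAL,
        frequency REAL, height REAL
    )
""")
conn.execute(
    "INSERT INTO sites (name, lat, lon, frequency, height) VALUES (?, ?, ?, ?, ?)",
    ("Site A", 52.52, 13.40, 3500.0, 30.0),
)
conn.commit()
rows = conn.execute("SELECT name, frequency FROM sites").fetchall()
```

`aiosqlite` is already pinned in requirements.txt, so the async driver for this path would add no new dependency.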
---

## Build Process

### Current Build Commands

```bash
# Build backend executable
cd /mnt/d/root/rfcp/backend
pyinstaller ../installer/rfcp-server.spec

# Build Electron app with bundled backend
cd /mnt/d/root/rfcp/installer
./build-win.sh
```

### Output
- `rfcp-server.exe` - Standalone backend (~80MB)
- `RFCP-Setup-{version}.exe` - Full installer with Electron + backend (~150MB)

---

## Testing Native Build

### Test Procedure
1. Build `rfcp-server.exe` via PyInstaller
2. Run directly: `./rfcp-server.exe`
3. Verify API responds: `curl http://localhost:8090/api/health`
4. Verify coverage calculation works
5. Check GPU detection in logs
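Step 3 can be scripted instead of typed as curl; a sketch that returns `False` rather than raising when the server is not running (the URL mirrors the procedure above):

```python
# Health probe for the bundled backend: True on HTTP 200, False if the
# server is down, unreachable, or slow to answer.
import urllib.error
import urllib.request


def backend_healthy(url: str = "http://127.0.0.1:8090/api/health",
                    timeout: float = 2.0) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False


ok = backend_healthy()
```

Polling this in a loop is also how an installer smoke test could wait out the slow first-launch extraction noted under Known Issues below.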
### Known Issues
1. **First launch slow**: PyInstaller extracts on first run (~5-10 seconds)
2. **Antivirus false positives**: Some AV flags PyInstaller executables
3. **Console window**: Shows black console (use `console=False` for windowless)

---

## Conclusions

### No Migration Needed
The production Electron app already works without WSL2. The current architecture is:
- ✅ Native Windows executable
- ✅ No Python installation required
- ✅ No WSL2 required
- ✅ Self-contained dependencies

### Development vs Production

| Aspect | Development | Production |
|--------|-------------|------------|
| Python | System Python / venv | Bundled via PyInstaller |
| WSL2 | Optional (for testing) | Not required |
| GPU | CuPy if installed | CPU-only (NumPy) |
| MongoDB | Local instance | Embedded or Atlas |
| Terrain | Local data/ folder | Bundled in exe |

### Remaining Work
1. **GPU for production**: Implement optional GPU addon installer
2. **Smaller package**: On-demand terrain download instead of bundling
3. **Database portability**: Consider SQLite migration for offline-first
4. **Installer polish**: Signed executables, auto-update support

---

## Appendix: Full PyInstaller Hidden Imports

From `installer/rfcp-server.spec`:

```python
hiddenimports=[
    'uvicorn.logging',
    'uvicorn.loops',
    'uvicorn.loops.auto',
    'uvicorn.protocols',
    'uvicorn.protocols.http',
    'uvicorn.protocols.http.auto',
    'uvicorn.protocols.websockets',
    'uvicorn.protocols.websockets.auto',
    'uvicorn.lifespan',
    'uvicorn.lifespan.on',
    'motor',
    'pymongo',
    'numpy',
    'scipy',
    'shapely',
    'shapely.geometry',
    'shapely.ops',
    # ... additional imports
]
```
@@ -444,11 +444,14 @@ export default function App() {
         );
       } else {
         const timeStr = result.calculationTime.toFixed(1);
+        const firstSite = sites.find((s) => s.visible);
+        const freqStr = firstSite ? ` \u2022 ${firstSite.frequency} MHz` : '';
+        const presetStr = settings.preset ? ` \u2022 ${settings.preset}` : '';
         const modelsStr = result.modelsUsed?.length
-          ? ` • ${result.modelsUsed.length} models`
+          ? ` \u2022 ${result.modelsUsed.length} models`
           : '';
         addToast(
-          `Calculated ${result.totalPoints.toLocaleString()} points in ${timeStr}s${modelsStr}`,
+          `${result.totalPoints.toLocaleString()} pts \u2022 ${timeStr}s${presetStr}${freqStr}${modelsStr}`,
           'success'
         );
       }
@@ -481,7 +484,7 @@ export default function App() {
   return (
     <div className="h-screen w-screen flex flex-col bg-gray-100 dark:bg-dark-bg">
       {/* Header */}
-      <header className="bg-slate-800 dark:bg-slate-900 text-white px-4 py-2 flex items-center justify-between flex-shrink-0 z-10">
+      <header className="bg-slate-800 dark:bg-slate-900 text-white px-4 py-2 flex items-center justify-between flex-shrink-0 z-[1010]">
         <div className="flex items-center gap-2">
           <span className="text-base font-bold">RFCP</span>
           <span className="text-xs text-slate-400 hidden sm:inline">
@@ -684,6 +687,7 @@ export default function App() {
             points={coverageResult.points.filter(p => p.rsrp >= settings.rsrpThreshold)}
             visible={showBoundary}
             resolution={settings.resolution}
+            boundary={coverageResult.boundary}
           />
         )}
       </>
@@ -1,8 +1,8 @@
 /**
  * Renders a dashed polyline around the coverage zone boundary.
  *
- * Uses @turf/concave to compute a concave hull (alpha shape) per site,
- * which correctly follows sector/wedge shapes — not just convex circles.
+ * Prefers server-computed boundary if available (shapely concave_hull).
+ * Falls back to client-side @turf/concave computation.
  *
  * Performance: ~20-50ms for 10k points (runs once per coverage change).
  */
@@ -12,7 +12,7 @@ import { useMap } from 'react-leaflet';
 import L from 'leaflet';
 import concave from '@turf/concave';
 import { featureCollection, point } from '@turf/helpers';
-import type { CoveragePoint } from '@/types/index.ts';
+import type { CoveragePoint, BoundaryPoint } from '@/types/index.ts';
 import { logger } from '@/utils/logger.ts';
 
 interface CoverageBoundaryProps {
@@ -21,6 +21,7 @@ interface CoverageBoundaryProps {
   resolution: number; // meters — controls concave hull detail
   color?: string;
   weight?: number;
+  boundary?: BoundaryPoint[]; // server-provided boundary (preferred)
 }
 
 export default function CoverageBoundary({
@@ -29,13 +30,25 @@ export default function CoverageBoundary({
   resolution,
   color = '#ffffff', // white — visible against red-to-blue gradient
   weight = 2,
+  boundary,
 }: CoverageBoundaryProps) {
   const map = useMap();
   const layerRef = useRef<L.LayerGroup | null>(null);
 
-  // Compute boundary paths grouped by site
+  // Compute boundary paths - prefer server boundary, fallback to client-side
   const boundaryPaths = useMemo(() => {
-    if (!visible || points.length === 0) return [];
+    if (!visible) return [];
+
+    // Use server-provided boundary if available
+    if (boundary && boundary.length >= 3) {
+      const serverPath: L.LatLngExpression[] = boundary.map(
+        (p) => [p.lat, p.lon] as L.LatLngExpression
+      );
+      return [serverPath];
+    }
+
+    // Fallback to client-side computation
+    if (points.length === 0) return [];
 
     // Group points by siteId (fallback to 'all' when siteId not available from API)
     const bySite = new Map<string, CoveragePoint[]>();
@@ -61,7 +74,7 @@ export default function CoverageBoundary({
     }
 
     return paths;
-  }, [points, visible, resolution]);
+  }, [points, visible, resolution, boundary]);
 
   // Render / cleanup polylines
   useEffect(() => {
@@ -121,6 +121,7 @@ export default function MeasurementTool({ enabled, onComplete, onProfileRequest
           <button
             onClick={(e) => {
               e.stopPropagation();
+              e.preventDefault();
               onProfileRequest(points[0], points[points.length - 1]);
             }}
             style={{
@@ -33,6 +33,13 @@ export default function GPUIndicator() {
     return () => document.removeEventListener('mousedown', handler);
   }, [open]);
 
+  // Auto-fetch diagnostics when dropdown opens and only CPU available
+  useEffect(() => {
+    if (open && status?.active_backend === 'cpu' && !diagnostics) {
+      api.getGPUDiagnostics().then(setDiagnostics).catch(() => {});
+    }
+  }, [open, status?.active_backend, diagnostics]);
+
   if (!status) return null;
 
   const isGPU = status.active_backend !== 'cpu';
@@ -119,15 +126,30 @@ export default function GPUIndicator() {
                 <div className="text-[10px] text-yellow-600 dark:text-yellow-400 mb-2">
                   No GPU detected. For faster calculations:
                 </div>
+                {diagnostics?.is_wsl ? (
+                  <div className="text-[10px] text-gray-500 dark:text-dark-muted space-y-1">
+                    <div className="text-[9px] text-gray-400 dark:text-dark-muted mb-1">WSL2 detected - use pip3:</div>
+                    <div className="bg-gray-100 dark:bg-dark-border px-2 py-1 rounded font-mono text-[9px] break-all">
+                      pip3 install cupy-cuda12x --break-system-packages
+                    </div>
+                    <div className="text-[9px] text-gray-400 dark:text-dark-muted mt-1">Then restart RFCP</div>
+                  </div>
+                ) : (
                 <div className="text-[10px] text-gray-500 dark:text-dark-muted space-y-0.5">
                   <div>NVIDIA: <code className="bg-gray-100 dark:bg-dark-border px-1 rounded">pip install cupy-cuda12x</code></div>
                   <div>Intel/AMD: <code className="bg-gray-100 dark:bg-dark-border px-1 rounded">pip install pyopencl</code></div>
                 </div>
+                )}
+                {typeof diagnostics?.nvidia_smi === 'string' && diagnostics.nvidia_smi !== 'not found or error' && (
+                  <div className="mt-2 text-[9px] text-green-600 dark:text-green-400">
+                    GPU hardware found: {diagnostics.nvidia_smi.split(',')[0]}
+                  </div>
+                )}
                 <button
                   onClick={handleRunDiagnostics}
                   className="mt-2 w-full text-[10px] text-blue-600 dark:text-blue-400 hover:underline text-left"
                 >
-                  Run Diagnostics
+                  {diagnostics ? 'Refresh Diagnostics' : 'Run Diagnostics'}
                 </button>
               </div>
             )}
@@ -75,6 +75,11 @@ export interface ApiCoverageStats {
   points_with_atmospheric_loss: number;
 }
 
+export interface ApiBoundaryPoint {
+  lat: number;
+  lon: number;
+}
+
 export interface CoverageResponse {
   points: ApiCoveragePoint[];
   count: number;
@@ -82,6 +87,7 @@ export interface CoverageResponse {
   stats: ApiCoverageStats;
   computation_time: number;
   models_used: string[];
+  boundary?: ApiBoundaryPoint[];
 }
 
 export interface Preset {
@@ -98,6 +98,7 @@ function responseToResult(response: CoverageResponse, settings: CoverageSettings
     settings: settings,
     stats: response.stats as CoverageApiStats,
     modelsUsed: response.models_used,
+    boundary: response.boundary,
   };
 }
 
@@ -217,6 +218,12 @@ export const useCoverageStore = create<CoverageState>((set, get) => ({
   setError: (error) => set({ error }),
 
   calculateCoverage: async () => {
+    // Guard against duplicate calculations
+    if (get().isCalculating) {
+      console.warn('[Coverage] Calculation already in progress, ignoring duplicate request');
+      return;
+    }
+
     const { settings } = get();
     const sites = useSitesStore.getState().sites;
 
@@ -251,11 +258,14 @@
         addToast('No coverage points. Try increasing radius.', 'warning');
       } else {
         const timeStr = result.calculationTime.toFixed(1);
+        const firstSite = useSitesStore.getState().sites.find((s) => s.visible);
+        const freqStr = firstSite ? ` \u2022 ${firstSite.frequency} MHz` : '';
+        const presetStr = settings.preset ? ` \u2022 ${settings.preset}` : '';
         const modelsStr = result.modelsUsed?.length
           ? ` \u2022 ${result.modelsUsed.length} models`
           : '';
         addToast(
-          `Calculated ${result.totalPoints.toLocaleString()} points in ${timeStr}s${modelsStr}`,
+          `${result.totalPoints.toLocaleString()} pts \u2022 ${timeStr}s${presetStr}${freqStr}${modelsStr}`,
           'success'
         );
       }
@@ -15,6 +15,11 @@ export interface CoveragePoint {
   atmospheric_loss?: number; // dB atmospheric absorption
 }
 
+export interface BoundaryPoint {
+  lat: number;
+  lon: number;
+}
+
 export interface CoverageResult {
   points: CoveragePoint[];
   calculationTime: number; // seconds (was ms for browser calc)
@@ -23,6 +28,7 @@ export interface CoverageResult {
   // API-provided fields
   stats?: CoverageApiStats;
   modelsUsed?: string[];
+  boundary?: BoundaryPoint[]; // server-computed coverage boundary
 }
 
 export interface CoverageApiStats {
@@ -5,5 +5,6 @@ export type {
   CoverageSettings,
   CoverageApiStats,
   GridPoint,
+  BoundaryPoint,
 } from './coverage.ts';
 export type { FrequencyBand } from './frequency.ts';
installer/build-gpu.bat (new file, 70 lines):
@echo off
echo ========================================
echo RFCP GPU Build — ONEDIR mode
echo CuPy-cuda13x + CUDA Toolkit 13.x
echo ========================================
echo.

REM ── Check CuPy ──
echo [1/5] Checking CuPy installation...
python -c "import cupy; print(f' CuPy {cupy.__version__}')" 2>nul
if errorlevel 1 (
    echo ERROR: CuPy not installed.
    echo Run: pip install cupy-cuda13x
    exit /b 1
)

REM ── Check CUDA compute ──
echo [2/5] Testing GPU compute...
python -c "import cupy; a = cupy.array([1,2,3]); assert a.sum() == 6; print(' GPU compute: OK')" 2>nul
if errorlevel 1 (
    echo ERROR: CuPy installed but GPU compute failed.
    echo Check: CUDA Toolkit installed? nvidia-smi works?
    exit /b 1
)

REM ── Check CUDA_PATH ──
echo [3/5] Checking CUDA Toolkit...
if defined CUDA_PATH (
    echo CUDA_PATH: %CUDA_PATH%
) else (
    echo WARNING: CUDA_PATH not set
)

REM ── Check nvidia pip DLLs ──
echo [4/5] Checking nvidia pip packages...
python -c "import nvidia; import os; base=os.path.dirname(nvidia.__file__); dlls=[f for d in os.listdir(base) if os.path.isdir(os.path.join(base,d,'bin')) for f in os.listdir(os.path.join(base,d,'bin')) if f.endswith('.dll')]; print(f' nvidia pip DLLs: {len(dlls)}')" 2>nul
if errorlevel 1 (
    echo No nvidia pip packages ^(will use CUDA Toolkit^)
)

REM ── Build ──
echo.
echo [5/5] Building rfcp-server (ONEDIR mode)...
echo This may take 3-5 minutes...
echo.

cd /d "%~dp0\..\backend"
pyinstaller "..\installer\rfcp-server-gpu.spec" --clean --noconfirm

echo.
echo ========================================
if exist "dist\rfcp-server\rfcp-server.exe" (
    echo BUILD COMPLETE! ^(ONEDIR mode^)
    echo.
    echo Output: backend\dist\rfcp-server\
    dir /b dist\rfcp-server\*.exe dist\rfcp-server\*.dll 2>nul | find /c /v "" > nul
    echo.
    echo Test commands:
    echo   cd dist\rfcp-server
    echo   rfcp-server.exe
    echo   curl http://localhost:8090/api/health
    echo   curl http://localhost:8090/api/gpu/status
    echo ========================================
) else (
    echo BUILD FAILED — check errors above
    echo ========================================
    exit /b 1
)

pause
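The inline Python in step [4/5] is hard to read compressed into one line. Unrolled, the same count looks like this; a synthetic directory tree stands in for the `nvidia` namespace package, since no nvidia pip wheels may be installed on the build machine:

```python
import os
import tempfile

def count_nvidia_pip_dlls(base: str) -> int:
    """Expanded form of the step [4/5] one-liner: count DLLs under <base>/<pkg>/bin/."""
    dlls = []
    for d in os.listdir(base):
        bin_dir = os.path.join(base, d, 'bin')
        if os.path.isdir(bin_dir):
            dlls += [f for f in os.listdir(bin_dir) if f.endswith('.dll')]
    return len(dlls)

# Simulate one nvidia wheel (hypothetical name) shipping two DLLs
with tempfile.TemporaryDirectory() as base:
    os.makedirs(os.path.join(base, 'cuda_runtime', 'bin'))
    for name in ('cudart64_13.dll', 'cudart32_13.dll'):
        open(os.path.join(base, 'cuda_runtime', 'bin', name), 'w').close()
    n = count_nvidia_pip_dlls(base)
print(n)  # 2
```

The real check passes `os.path.dirname(nvidia.__file__)` as `base`; a failure there just means the build falls back to CUDA Toolkit DLLs.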
installer/build-gpu.sh (new file, 84 lines):
#!/bin/bash
set -e

echo "========================================"
echo " RFCP GPU Build — ONEDIR mode"
echo " CuPy-cuda13x + CUDA Toolkit 13.x"
echo "========================================"
echo ""

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
BACKEND_DIR="$SCRIPT_DIR/../backend"

# Check backend exists
if [ ! -f "$BACKEND_DIR/run_server.py" ]; then
    echo "ERROR: Backend not found at $BACKEND_DIR"
    exit 1
fi

# Check Python
echo "[1/5] Checking Python..."
python3 --version || { echo "ERROR: Python3 not found"; exit 1; }

# Check CuPy
echo ""
echo "[2/5] Checking CuPy installation..."
if ! python3 -c "import cupy; print(f' CuPy {cupy.__version__}')" 2>/dev/null; then
    echo "ERROR: CuPy not installed"
    echo ""
    echo "Install CuPy:"
    echo "  pip3 install cupy-cuda13x"
    echo "  # or for WSL2:"
    echo "  pip3 install cupy-cuda13x --break-system-packages"
    exit 1
fi

# Check GPU compute
echo ""
echo "[3/5] Testing GPU compute..."
if python3 -c "import cupy; a = cupy.array([1,2,3]); assert a.sum() == 6; print(' GPU compute: OK')" 2>/dev/null; then
    :
else
    echo "WARNING: GPU compute test failed (may still work)"
fi

# Check CUDA
echo ""
echo "[4/5] Checking CUDA..."
if [ -n "$CUDA_PATH" ]; then
    echo "  CUDA_PATH: $CUDA_PATH"
else
    echo "  CUDA_PATH not set (relying on nvidia pip packages)"
fi

# Check nvidia pip packages
echo ""
echo "[5/5] Checking nvidia pip packages..."
python3 -c "import nvidia; print(' nvidia packages found')" 2>/dev/null || echo "  No nvidia pip packages"

# Build
echo ""
echo "Building rfcp-server (ONEDIR mode)..."
echo ""

cd "$BACKEND_DIR"
pyinstaller "$SCRIPT_DIR/rfcp-server-gpu.spec" --clean --noconfirm

echo ""
echo "========================================"
if [ -f "dist/rfcp-server/rfcp-server" ] || [ -f "dist/rfcp-server/rfcp-server.exe" ]; then
    echo " BUILD COMPLETE! (ONEDIR mode)"
    echo ""
    echo " Output: backend/dist/rfcp-server/"
    ls -lh dist/rfcp-server/ | head -20
    echo ""
    echo " Test:"
    echo "   cd dist/rfcp-server"
    echo "   ./rfcp-server"
    echo "   curl http://localhost:8090/api/health"
    echo "========================================"
else
    echo " BUILD FAILED — check errors above"
    echo "========================================"
    exit 1
fi
@@ -3,6 +3,7 @@ set -e
 
 echo "========================================="
 echo " RFCP Desktop Build (Windows)"
+echo " GPU-enabled ONEDIR build"
 echo "========================================="
 
 cd "$(dirname "$0")/.."
@@ -14,15 +15,30 @@ npm ci
 npm run build
 cd ..
 
-# 2. Build backend with PyInstaller
+# 2. Build backend with PyInstaller (GPU ONEDIR mode)
-echo "[2/4] Building backend..."
+echo "[2/4] Building backend (GPU)..."
 cd backend
+
+# Check CuPy is available
+if ! python -c "import cupy" 2>/dev/null; then
+    echo "WARNING: CuPy not installed - GPU acceleration will not be available"
+    echo "  Install with: pip install cupy-cuda13x"
+fi
+
 python -m pip install -r requirements.txt
 python -m pip install pyinstaller
-cd ../installer
-python -m PyInstaller rfcp-server.spec --clean --noconfirm
+# Build using GPU spec (ONEDIR output)
+python -m PyInstaller ../installer/rfcp-server-gpu.spec --clean --noconfirm
+
+# Copy ONEDIR folder to desktop staging area
+# Result: desktop/backend-dist/win/rfcp-server/rfcp-server.exe + _internal/
 mkdir -p ../desktop/backend-dist/win
-cp dist/rfcp-server.exe ../desktop/backend-dist/win/
+rm -rf ../desktop/backend-dist/win/rfcp-server  # Clean old build
+cp -r dist/rfcp-server ../desktop/backend-dist/win/rfcp-server
+
+echo "  Backend copied to: desktop/backend-dist/win/rfcp-server/"
+ls -la ../desktop/backend-dist/win/rfcp-server/*.exe 2>/dev/null || true
 cd ..
 
 # 3. Build Electron app
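The staging step above depends on PyInstaller's ONEDIR layout: the executable at the top level with an `_internal/` folder of DLLs and data beside it. A minimal sanity check of that layout could be sketched like this (the directory names match the build output above; the check itself is illustrative, not part of the repo):

```python
import os
import tempfile

def verify_onedir(root: str) -> bool:
    """Check the ONEDIR layout this build expects:
    rfcp-server.exe at the top level, _internal/ beside it."""
    return (os.path.isfile(os.path.join(root, 'rfcp-server.exe'))
            and os.path.isdir(os.path.join(root, '_internal')))

# Simulate backend/dist/rfcp-server/ in a temp dir
with tempfile.TemporaryDirectory() as root:
    os.makedirs(os.path.join(root, '_internal'))
    open(os.path.join(root, 'rfcp-server.exe'), 'w').close()
    ok = verify_onedir(root)
print(ok)  # True
```

Running a check like this before the Electron packaging step would fail fast if the spec accidentally regresses to ONEFILE output.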
installer/rfcp-server-gpu.spec (new file, 305 lines):
# rfcp-server-gpu.spec — GPU-enabled build (CuPy + CUDA 13.x)
# RFCP Iteration 3.6.0
#
# Mode: ONEDIR (directory output, not single exe)
# This is better for CUDA — DLLs load directly without temp extraction
#
# Requirements:
#   pip install cupy-cuda13x fastrlock pyinstaller
#   CUDA Toolkit 13.x installed (winget install Nvidia.CUDA)
#
# Build:
#   cd backend && pyinstaller ../installer/rfcp-server-gpu.spec --clean --noconfirm
#
# Output:
#   backend/dist/rfcp-server/rfcp-server.exe (+ DLLs in same folder)

import os
import sys
import glob
from PyInstaller.utils.hooks import collect_all, collect_dynamic_libs

backend_path = os.path.abspath(os.path.join(os.path.dirname(SPEC), '..', 'backend'))
print(f"[GPU SPEC] Backend path: {backend_path}")

# ═══════════════════════════════════════════
# Collect CuPy packages
# ═══════════════════════════════════════════
cupy_datas = []
cupy_binaries = []
cupy_hiddenimports = []
cupyb_datas = []
cupyb_binaries = []
cupyb_hiddenimports = []

try:
    cupy_datas, cupy_binaries, cupy_hiddenimports = collect_all('cupy')
    cupyb_datas, cupyb_binaries, cupyb_hiddenimports = collect_all('cupy_backends')
    print(f"[GPU SPEC] CuPy: {len(cupy_binaries)} binaries, {len(cupy_datas)} data files")
except Exception as e:
    print(f"[GPU SPEC] WARNING: CuPy collection failed: {e}")

# NOTE: nvidia pip packages REMOVED - they have cuda12 DLLs that conflict with cupy-cuda13x
# We use CUDA Toolkit 13.x DLLs only

# ═══════════════════════════════════════════
# Collect CUDA Toolkit DLLs (system install)
# ═══════════════════════════════════════════
# Installed via: winget install Nvidia.CUDA
cuda_toolkit_binaries = []
cuda_path = os.environ.get('CUDA_PATH', '')

if cuda_path:
    # Scan BOTH bin\ and bin\x64\ directories
    cuda_bin_dirs = [
        os.path.join(cuda_path, 'bin'),
        os.path.join(cuda_path, 'bin', 'x64'),
    ]

    # Only essential CUDA runtime DLLs (exclude NPP, nvjpeg, nvblas, nvfatbin)
    cuda_dll_patterns = [
        'cublas64_*.dll',
        'cublasLt64_*.dll',
        'cudart64_*.dll',
        'cufft64_*.dll',
        'cufftw64_*.dll',
        'curand64_*.dll',
        'cusolver64_*.dll',
        'cusolverMg64_*.dll',
        'cusparse64_*.dll',
        'nvrtc64_*.dll',
        'nvrtc-builtins64_*.dll',
        'nvJitLink_*.dll',
        'nvjitlink_*.dll',
    ]

    collected_dlls = set()  # Avoid duplicates
    for cuda_bin in cuda_bin_dirs:
        if os.path.isdir(cuda_bin):
            for pattern in cuda_dll_patterns:
                for dll in glob.glob(os.path.join(cuda_bin, pattern)):
                    dll_name = os.path.basename(dll)
                    if dll_name not in collected_dlls:
                        cuda_toolkit_binaries.append((dll, '.'))
                        collected_dlls.add(dll_name)
            print(f"[GPU SPEC] Scanned: {cuda_bin}")

    print(f"[GPU SPEC] CUDA Toolkit ({cuda_path}): {len(cuda_toolkit_binaries)} DLLs")
    for dll, _ in cuda_toolkit_binaries:
        print(f"[GPU SPEC]   {os.path.basename(dll)}")
else:
    print("[GPU SPEC] ERROR: CUDA_PATH not set!")
    print("[GPU SPEC] Install: winget install Nvidia.CUDA")

# All GPU binaries (CUDA Toolkit only, no nvidia pip packages)
all_gpu_binaries = cuda_toolkit_binaries

if len(all_gpu_binaries) == 0:
    print("[GPU SPEC] ⚠ NO CUDA DLLs FOUND!")
    print("[GPU SPEC] Install CUDA Toolkit: winget install Nvidia.CUDA")
else:
    print(f"[GPU SPEC] ✅ Total GPU DLLs: {len(all_gpu_binaries)}")
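The sweep above deduplicates by DLL basename because `bin\` and `bin\x64\` can ship the same file, and PyInstaller must not receive the same target name twice. The core logic can be exercised without a CUDA install by pointing it at a synthetic directory tree (all paths here are made up for illustration):

```python
import glob
import os
import tempfile

def collect_cuda_dlls(bin_dirs, patterns):
    """Glob each pattern in each bin dir; dedupe by basename so the
    bin/ and bin/x64/ copies of a DLL are only bundled once."""
    binaries, seen = [], set()
    for bin_dir in bin_dirs:
        if not os.path.isdir(bin_dir):
            continue
        for pattern in patterns:
            for dll in glob.glob(os.path.join(bin_dir, pattern)):
                name = os.path.basename(dll)
                if name not in seen:
                    binaries.append((dll, '.'))  # PyInstaller (src, dest) tuple
                    seen.add(name)
    return binaries

# Simulate a CUDA_PATH that has the same DLL in both bin/ and bin/x64/
with tempfile.TemporaryDirectory() as root:
    for sub in ('bin', os.path.join('bin', 'x64')):
        os.makedirs(os.path.join(root, sub))
        open(os.path.join(root, sub, 'cudart64_13.dll'), 'w').close()
    dlls = collect_cuda_dlls(
        [os.path.join(root, 'bin'), os.path.join(root, 'bin', 'x64')],
        ['cudart64_*.dll'],
    )
print(len(dlls))  # 1 — the x64 duplicate is skipped
```

Destination `'.'` drops every DLL at the top of the bundle, next to the CuPy extension modules that load them.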
# ═══════════════════════════════════════════
# Collect fastrlock (CuPy dependency)
# ═══════════════════════════════════════════
fl_datas = []
fl_binaries = []
fl_hiddenimports = []
try:
    fl_datas, fl_binaries, fl_hiddenimports = collect_all('fastrlock')
    print(f"[GPU SPEC] fastrlock: {len(fl_binaries)} binaries")
except Exception:
    print("[GPU SPEC] fastrlock not found (optional)")

# ═══════════════════════════════════════════
# PyInstaller Analysis
# ═══════════════════════════════════════════

a = Analysis(
    [os.path.join(backend_path, 'run_server.py')],
    pathex=[backend_path],
    binaries=(
        cupy_binaries + cupyb_binaries +
        fl_binaries + all_gpu_binaries
    ),
    datas=[
        # Include app/ source code
        (os.path.join(backend_path, 'app'), 'app'),
    ] + cupy_datas + cupyb_datas + fl_datas,
    hiddenimports=[
        # ── Uvicorn internals ──
        'uvicorn.logging',
        'uvicorn.loops',
        'uvicorn.loops.auto',
        'uvicorn.loops.asyncio',
        'uvicorn.protocols',
        'uvicorn.protocols.http',
        'uvicorn.protocols.http.auto',
        'uvicorn.protocols.http.h11_impl',
        'uvicorn.protocols.http.httptools_impl',
        'uvicorn.protocols.websockets',
        'uvicorn.protocols.websockets.auto',
        'uvicorn.protocols.websockets.wsproto_impl',
        'uvicorn.lifespan',
        'uvicorn.lifespan.on',
        'uvicorn.lifespan.off',
        # ── FastAPI / Starlette ──
        'fastapi',
        'fastapi.middleware',
        'fastapi.middleware.cors',
        'fastapi.routing',
        'fastapi.responses',
        'fastapi.exceptions',
        'starlette',
        'starlette.routing',
        'starlette.middleware',
        'starlette.middleware.cors',
        'starlette.responses',
        'starlette.requests',
        'starlette.concurrency',
        'starlette.formparsers',
        'starlette.staticfiles',
        # ── Pydantic ──
        'pydantic',
        'pydantic.fields',
        'pydantic_settings',
        'pydantic_core',
        # ── HTTP / networking ──
        'httpx',
        'httpcore',
        'h11',
        'httptools',
        'anyio',
        'anyio._backends',
        'anyio._backends._asyncio',
        'sniffio',
        # ── MongoDB (motor/pymongo) ──
        'motor',
        'motor.motor_asyncio',
        'pymongo',
        'pymongo.errors',
        'pymongo.collection',
        'pymongo.database',
        'pymongo.mongo_client',
        # ── Async I/O ──
        'aiofiles',
        'aiofiles.os',
        'aiofiles.ospath',
        # ── Scientific ──
        'numpy',
        'numpy.core',
        'scipy',
        'scipy.special',
        'scipy.interpolate',
        'shapely',
        'shapely.geometry',
        'shapely.ops',
        # ── Multipart ──
        'multipart',
        'python_multipart',
        # ── Encoding ──
        'email.mime',
        'email.mime.multipart',
        # ── Multiprocessing ──
        'multiprocessing',
        'multiprocessing.pool',
        'multiprocessing.queues',
        'concurrent.futures',
        # ── CuPy + CUDA ──
        'cupy',
        'cupy.cuda',
        'cupy.cuda.runtime',
        'cupy.cuda.driver',
        'cupy.cuda.memory',
        'cupy.cuda.stream',
        'cupy.cuda.device',
        'cupy._core',
        'cupy._core.core',
        'cupy._core._routines_math',
        'cupy._core._routines_logic',
        'cupy._core._routines_manipulation',
        'cupy._core._routines_sorting',
        'cupy._core._routines_statistics',
        'cupy._core._cub_reduction',
        'cupy.fft',
        'cupy.linalg',
        'cupy.random',
        'cupy_backends',
        'cupy_backends.cuda',
        'cupy_backends.cuda.api',
        'cupy_backends.cuda.libs',
        'fastrlock',
        'fastrlock.rlock',
    ] + cupy_hiddenimports + cupyb_hiddenimports + fl_hiddenimports,
    hookspath=[],
    hooksconfig={},
    runtime_hooks=[os.path.join(os.path.dirname(SPEC), 'rthook_cuda_dlls.py')],
    # ── Exclude bloat ──
    excludes=[
        # GUI
        'tkinter',
        'matplotlib',
        'PIL',
        'IPython',
        # Data science bloat
        'pandas',
        'tensorflow',
        'torch',
        'keras',
        # Testing
        'pytest',
        # Jupyter
        'jupyter',
        'notebook',
        'ipykernel',
        # gRPC / telemetry (often pulled in by dependencies)
        'grpc',
        'grpcio',
        'google.protobuf',
        'opentelemetry',
        'opentelemetry.sdk',
        'opentelemetry.instrumentation',
        # Ray (too heavy, we use multiprocessing)
        'ray',
        # Other
        'cv2',
        'sklearn',
        'sympy',
    ],
    noarchive=False,
)

pyz = PYZ(a.pure)

# ═══════════════════════════════════════════
# ONEDIR mode: EXE + COLLECT
# ═══════════════════════════════════════════
# Creates: dist/rfcp-server/rfcp-server.exe + all DLLs in same folder
# Better for CUDA — no temp extraction needed

exe = EXE(
    pyz,
    a.scripts,
    [],  # No binaries/datas in EXE — they go in COLLECT
    exclude_binaries=True,  # ONEDIR mode
    name='rfcp-server',
    debug=False,
    bootloader_ignore_signals=False,
    strip=False,
    upx=False,  # Don't compress — CUDA libs need fast loading
    console=True,
    icon=os.path.join(os.path.dirname(SPEC), 'rfcp.ico') if os.path.exists(os.path.join(os.path.dirname(SPEC), 'rfcp.ico')) else None,
)

coll = COLLECT(
    exe,
    a.binaries,
    a.zipfiles,
    a.datas,
    strip=False,
    upx=False,
    upx_exclude=[],
    name='rfcp-server',
)
installer/rfcp.ico (new binary file, 116 KiB; binary not shown)
installer/rthook_cuda_dlls.py (new file, 24 lines):
# PyInstaller runtime hook for CUDA DLL loading
# Must run BEFORE any CuPy import
#
# Problem: Windows Python 3.8+ requires os.add_dll_directory() for DLL search
# PyInstaller ONEDIR mode puts DLLs in _internal/ which isn't in the search path

import os
import sys

if sys.platform == 'win32' and getattr(sys, 'frozen', False):
    # _MEIPASS points to _internal/ in ONEDIR mode
    base = getattr(sys, '_MEIPASS', None)
    if base and os.path.isdir(base):
        os.add_dll_directory(base)
        print(f"[CUDA DLL Hook] Added DLL directory: {base}")

    # Also add CUDA_PATH if available (fallback to system CUDA)
    cuda_path = os.environ.get('CUDA_PATH', '')
    if cuda_path:
        for subdir in ['bin', os.path.join('bin', 'x64')]:
            d = os.path.join(cuda_path, subdir)
            if os.path.isdir(d):
                os.add_dll_directory(d)
                print(f"[CUDA DLL Hook] Added CUDA_PATH: {d}")
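The hook's directory selection can be separated from the Windows-only `os.add_dll_directory` call, which makes the ordering testable on any platform. A sketch of that split (the paths below are hypothetical; the real hook also filters each candidate with `os.path.isdir` before registering it):

```python
import os

def cuda_dll_dirs(environ, frozen_base=None):
    """Return the directories a frozen app should register for CUDA DLL
    lookup, in priority order: the ONEDIR _internal/ folder first, then
    CUDA_PATH\\bin and CUDA_PATH\\bin\\x64 as a system fallback."""
    dirs = []
    if frozen_base:
        dirs.append(frozen_base)
    cuda_path = environ.get('CUDA_PATH', '')
    if cuda_path:
        for subdir in ('bin', os.path.join('bin', 'x64')):
            dirs.append(os.path.join(cuda_path, subdir))
    return dirs

dirs = cuda_dll_dirs({'CUDA_PATH': '/opt/cuda'}, frozen_base='/app/_internal')
print(dirs)
```

Bundled DLLs coming first matters: if a stale system CUDA 12 install were searched before `_internal/`, CuPy could bind the wrong runtime.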
rfcp-gpu-preflight.bat (new file, 64 lines):
@echo off
echo ========================================
echo RFCP GPU Build — Pre-flight Check
echo ========================================
echo.

echo [1] Python version:
python --version

echo.
echo [2] CuPy status:
python -c "import cupy; print(f' CuPy {cupy.__version__}')"
python -c "import cupy; d=cupy.cuda.Device(0); print(f' Device: {d.id}'); print(f' Memory: {d.mem_info[1]//1024//1024} MB')"

echo.
echo [3] CUDA runtime version:
python -c "import cupy; v=cupy.cuda.runtime.runtimeGetVersion(); print(f' CUDA Runtime: {v}')"

echo.
echo [4] CUDA_PATH environment:
if defined CUDA_PATH (
    echo CUDA_PATH = %CUDA_PATH%
) else (
    echo WARNING: CUDA_PATH not set!
    echo.
    echo Checking common locations...
    if exist "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA" (
        for /d %%i in ("C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v*") do (
            echo Found: %%i
            echo.
            echo To fix, run:
            echo   setx CUDA_PATH "%%i"
            echo Then restart terminal.
        )
    ) else (
        echo No CUDA Toolkit found in default location.
        echo CuPy bundles its own CUDA runtime, so this may be OK.
        echo But PyInstaller build might need it.
    )
)

echo.
echo [5] nvidia-smi:
nvidia-smi --query-gpu=name,driver_version,memory.total --format=csv,noheader 2>nul
if errorlevel 1 echo nvidia-smi not found in PATH

echo.
echo [6] CuPy CUDA libs location:
python -c "import cupy; import os; print(f' {os.path.dirname(cupy.__file__)}')"
python -c "import cupy._core.core" 2>nul && echo cupy._core.core: OK || echo cupy._core.core: FAILED

echo.
echo [7] fastrlock:
python -c "import fastrlock; print(f' fastrlock {fastrlock.__version__}')"

echo.
echo [8] PyInstaller:
python -c "import PyInstaller; print(f' PyInstaller {PyInstaller.__version__}')" 2>nul || echo PyInstaller NOT installed! Run: pip install pyinstaller

echo.
echo ========================================
echo Pre-flight complete
echo ========================================
pause
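Step [3] prints the raw integer from `runtimeGetVersion()`. CUDA encodes versions as major * 1000 + minor * 10, so the 13.1 toolkit in this environment reports 13010. A tiny decoder for reading those values:

```python
def decode_cuda_version(v: int) -> str:
    """Decode cudaRuntimeGetVersion()'s integer: major * 1000 + minor * 10."""
    return f"{v // 1000}.{(v % 1000) // 10}"

print(decode_cuda_version(13010))  # 13.1
print(decode_cuda_version(12040))  # 12.4
```

A quick way to confirm the exe bundled the cuda13 runtime rather than a leftover cuda12 DLL: a 12.x decode from a machine that only has CUDA 13 installed means the wrong `cudart64_*.dll` was picked up.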