15 Commits

Author SHA1 Message Date
833dead43c @mytec: stack done, rust next 2026-02-07 12:56:25 +02:00
1d8375af02 @mytec: 10km grad works 2026-02-07 01:14:01 +02:00
acfd9b8f7b @mytec: WebGL works 2026-02-06 22:17:24 +02:00
81e078e92a @mytec: iter3.10 start, baseline rc ready 2026-02-04 15:56:09 +02:00
e392b449cc @mytec: 3.8.0a done 2026-02-04 00:50:52 +02:00
6dcc5a19b9 @mytec: 3.8.0 start, stable w/0 ref+ 2026-02-03 23:24:12 +02:00
6cd9d869cc @mytec: iter3.7.0 start, gpu calc int 2026-02-03 22:41:08 +02:00
a61753c642 @mytec: iter3.2.5 gpu polish start 2026-02-03 12:33:52 +02:00
20d19d09ae @mytec: iter3.5.1 ready for testing 2026-02-03 12:04:36 +02:00
255b91f257 @mytec iter3.5.1 start 2026-02-03 10:51:26 +02:00
3b36535d4e @mytec: iter3.5.0 ready for testing 2026-02-03 10:32:38 +02:00
f46bf16428 @mytec: 3.5.0 cont 2026-02-03 02:53:46 +02:00
57106df5ae @mytec: iter3.4.0 ready for testing 2026-02-02 21:58:03 +02:00
867ee3d0f4 @mytec: iter3.4.0 start 2026-02-02 21:30:00 +02:00
7f0b4d2269 @mytec: before 3.3.0 refactor2 2026-02-02 13:48:30 +02:00
109 changed files with 19161 additions and 613 deletions


@@ -30,7 +30,26 @@
"Bash(pip3 install numpy)",
"Bash(echo:*)",
"Bash(find:*)",
"Bash(node -c:*)"
"Bash(node -c:*)",
"Bash(curl:*)",
"Bash(head -3 python3 -c \"import numpy; print\\(numpy.__file__\\)\")",
"Bash(pip3 install:*)",
"Bash(apt list:*)",
"Bash(dpkg:*)",
"Bash(sudo apt-get install:*)",
"Bash(docker:*)",
"Bash(~/.local/bin/pip install:*)",
"Bash(pgrep:*)",
"Bash(kill:*)",
"Bash(sort:*)",
"Bash(journalctl:*)",
"Bash(pkill:*)",
"Bash(pip3 list:*)",
"Bash(chmod:*)",
"Bash(pyinstaller:*)",
"Bash(npm i:*)",
"Bash(npm uninstall:*)",
"Bash(npm rebuild:*)"
]
}
}

.gitignore (vendored): 4 lines changed

@@ -24,3 +24,7 @@ installer/dist/
__pycache__/
*.pyc
nul
# PyInstaller build artifacts
backend/build/
backend/dist/

RFCP-RUST-MIGRATION-PLAN.md (new file, 1513 lines): diff suppressed because it is too large


@@ -1,233 +0,0 @@
# RFCP Development Session Summary
## Date: February 1, 2026
## Status: Phase 3.0 Complete, Performance Optimization Ongoing
---
## 🎯 Project Overview
**RFCP (Radio Frequency Coverage Planning)** — a desktop application for tactical LTE network planning, part of the UMTC (Ukrainian Military Tactical Communications) project.
**Tech Stack:**
- Backend: Python/FastAPI + NumPy + ProcessPoolExecutor
- Frontend: React + TypeScript + Vite
- Desktop: Electron
- Build: PyInstaller (backend), electron-builder (desktop)
**Goal:** Calculate RF coverage maps with terrain, building, and vegetation analysis.
---
## ✅ What Works (Phase 3.0 Achievements)
### Performance
| Preset | Before | After | Status |
|--------|--------|-------|--------|
| Standard (100-200m res) | 38s | **~5s** | ✅ EXCELLENT |
| Detailed (300m, 5km) | timeout | timeout | ❌ Still broken |
### Architecture (48 new files, 82 tests)
- ✅ Modular propagation models (8 models: FreeSpace, Okumura-Hata, COST-231, ITU-R P.1546, etc.)
- ✅ SharedMemoryManager for terrain data (zero-copy, 25 MB)
- ✅ Building filtering (351k → 27k bbox → 15k cap)
- ✅ WebSocket progress streaming (backend works)
- ✅ Clean model selection by frequency/environment
- ✅ Worker cleanup on shutdown
- ✅ Overpass API retry with failover (3 attempts, mirror endpoint)
### New Files Structure
```
backend/app/
├── propagation/ # 8 model files
├── geometry/ # 5 files (haversine, intersection, reflection, diffraction, los)
├── core/ # 4 files (engine, grid, calculator, result)
├── parallel/ # 3 files (manager, worker, pool)
├── services/ # cache.py, osm_client.py
├── utils/ # logging.py, progress.py, units.py
└── api/websocket.py
frontend/src/
├── hooks/useWebSocket.ts
├── services/websocket.ts
└── components/FrequencyBandPanel.tsx
```
---
## ❌ Current Blockers
### 1. Detailed Preset Timeout (CRITICAL)
**Symptom:** 300s timeout, only 194/868 points calculated
**Latest test results:**
```
[DOMINANT_PATH_VEC] Point #1: buildings=30, walls=214, dist=4887m
302.8ms/point × 868 points = 262 seconds
```
**Root Cause Analysis:**
- Early return fix (Claude Code) was for `buildings=[]` case
- But in reality, buildings ARE present (15,000 after cap)
- Each point finds 17-30 nearby buildings
- Each building has 100-295 wall segments
- **dominant_path_service** geometry calculations are expensive
**The real problem is NOT "buildings=0 is slow"**
**The real problem IS "dominant_path with buildings is inherently slow"**
**Potential solutions:**
1. Simplify building geometry (reduce wall count)
2. Use spatial indexing more aggressively
3. Skip dominant_path for distant points (>3km?)
4. Reduce building query radius
5. Use simpler path loss model when buildings present
6. GPU acceleration (CuPy) for geometry
### 2. Progress Bar Stuck at "Initializing 5%"
**Symptom:** UI shows "Initializing 5%" forever
**Fix attempted:** `await asyncio.sleep(0)` after progress_fn() — not working
**Likely cause:** Frontend WebSocket connection or state update issue
### 3. App Close Broken
**Symptom:** Clicking X kills backend but frontend stays open
**Partial fix:** Worker cleanup works, but Electron window doesn't close
### 4. Memory Not Released
**Symptom:** 1328 MB not freed after calculation
```
Before: 3904 MB free
After: 2576 MB free
```
---
## 📊 Performance Analysis
### Why Detailed is slow (the math):
```
Points: 868
Buildings nearby per point: ~25 average
Walls per building: ~150 average
Wall intersection checks: 868 × 25 × 150 = 3,255,000
At 0.1ms per check = 325 seconds
```
### Why Standard is fast:
- Lower resolution = fewer points (~200 vs 868)
- Likely skips some detailed calculations
- Buildings still processed but fewer points to check
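One way to attack the ~3.2M wall checks estimated above (solution 2 in the blockers list, spatial indexing) is a cheap bounding-box reject before any exact intersection test. A minimal sketch with illustrative numbers, not the project's actual code:

```python
# Back-of-envelope from the analysis above
points, buildings, walls = 868, 25, 150
checks = points * buildings * walls          # 3,255,000 intersection tests

def segment_bbox_overlap(ax, ay, bx, by, wall_bbox):
    """Cheap reject: skip the exact wall-intersection test unless the
    TX->RX segment's bounding box overlaps the wall's bounding box."""
    min_x, min_y, max_x, max_y = wall_bbox
    return not (max(ax, bx) < min_x or min(ax, bx) > max_x or
                max(ay, by) < min_y or min(ay, by) > max_y)
```

A reject like this usually eliminates most wall candidates before the expensive geometry ever runs, which is where the 0.1 ms/check cost lives.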
---
## 🔧 Key Files to Review
### Backend (performance critical)
```
backend/app/services/
├── dominant_path_service.py # THE BOTTLENECK
├── coverage_service.py # Orchestration, progress
├── parallel_coverage_service.py # Worker management
└── buildings_service.py # OSM fetch, caching
```
### Frontend (UI bugs)
```
frontend/src/
├── App.tsx # Progress display
├── store/coverage.ts # WebSocket state
└── services/websocket.ts # WS connection
```
### Desktop (close bug)
```
desktop/main.js # Electron lifecycle
```
---
## 🎯 Recommended Next Steps
### Priority 1: Fix Detailed Performance
**Option A: Aggressive spatial filtering**
```python
# In dominant_path_service.py
# Only check buildings within line-of-sight corridor
# Not all buildings within radius
```
**Option B: LOD (Level of Detail)**
```python
# Distance > 2km: skip dominant path entirely
# Distance 1-2km: simplified model
# Distance < 1km: full calculation
```
**Option C: Building simplification**
```python
# Reduce wall count per building
# Merge adjacent buildings
# Use bounding boxes instead of polygons for far buildings
```
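Option B can be sketched as a simple dispatch; thresholds and model names here are hypothetical, not the project's actual API:

```python
def pick_path_model(distance_m: float) -> str:
    """LOD dispatch: trade accuracy for speed as distance grows.
    Thresholds and names are illustrative only."""
    if distance_m > 2_000:
        return "free_space"       # skip dominant path entirely
    if distance_m > 1_000:
        return "simplified"       # cheaper empirical model, no per-wall geometry
    return "dominant_path"        # full calculation near the site
```

The 302.8 ms/point sample above was at dist=4887 m, which a dispatch like this would route away from dominant_path entirely.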
### Priority 2: Fix UI Bugs
- Debug WebSocket in browser DevTools
- Check Electron close handler
### Priority 3: Memory
- Explicit cleanup after calculation
- Check for leaked references
---
## 📝 Session Timeline
1. **Phase 2.4-2.5.1** — Vectorization attempt (didn't help)
2. **Decision** — Full Phase 3.0 architecture refactor
3. **Architecture Doc** — 1719 lines specification
4. **Claude Code Round 1** — 48 files, 82 tests (35 min)
5. **Integration Round** — WebSocket, progress, model selection (20 min)
6. **Bug Fix Round** — Memory, workers, app close (15 min)
7. **Claude Code Fix** — Dominant path early return, Overpass retry, progress (13 min)
8. **Current** — Still timeout, need different approach
---
## 💡 Key Insights
1. **Vectorization alone doesn't help** — problem is algorithmic, not just numpy
2. **SharedMemory works** — terrain in shared memory is efficient
3. **Building count matters** — 351k→15k filtering helps but not enough
4. **dominant_path is the bottleneck** — consistently 200-300ms/point
5. **Standard preset proves architecture works** — fast when less work needed
---
## 🔗 Related Documents
- `/mnt/project/RFCP-Phase-3.0-Architecture-Refactor.md` — Full architecture spec
- `/mnt/project/SESSION-2025-01-30-Iteration-10_1-Complete.md` — Previous session
- `/mnt/transcripts/2026-02-01-19-06-32-phase-3.0-refactor-implementation-results.txt` — Detailed transcript
---
## 🎮 Side Project
During this session, also designed **DF Diplomacy Expanded** mod:
- Design doc: `DF-Diplomacy-Expanded-Design-Doc.md` (1202 lines)
- MVP: War score, peace negotiation, tribute, reputation
- Motto: *"Losing is fun, but sometimes you want to lose diplomatically."*
---
*"Standard preset works beautifully. Detailed preset needs love. The architecture is solid — now we optimize."*

RFCP.bat (new file, 23 lines)

@@ -0,0 +1,23 @@
@echo off
title RFCP - RF Coverage Planner
cd /d "%~dp0"
REM Check if backend exists
if not exist "backend\app\main.py" (
echo ERROR: RFCP backend not found.
echo Run install.bat first or check your installation.
pause
exit /b 1
)
echo ============================================
echo RFCP - RF Coverage Planner
echo ============================================
echo.
echo Starting backend server...
echo Open http://localhost:8090 in your browser
echo Press Ctrl+C to stop
echo.
cd backend
python -m uvicorn app.main:app --host 0.0.0.0 --port 8090


@@ -14,6 +14,7 @@ from app.services.coverage_service import (
select_propagation_model,
)
from app.services.parallel_coverage_service import CancellationToken
from app.services.boundary_service import calculate_coverage_boundary
router = APIRouter()
@@ -24,6 +25,12 @@ class CoverageRequest(BaseModel):
settings: CoverageSettings = CoverageSettings()
class BoundaryPoint(BaseModel):
"""Single boundary coordinate"""
lat: float
lon: float
class CoverageResponse(BaseModel):
"""Coverage calculation response"""
points: List[CoveragePoint]
@@ -32,6 +39,7 @@ class CoverageResponse(BaseModel):
stats: dict
computation_time: float # seconds
models_used: List[str] # which models were active
boundary: Optional[List[BoundaryPoint]] = None # coverage boundary polygon
@router.post("/calculate")
@@ -69,8 +77,16 @@ async def calculate_coverage(request: CoverageRequest) -> CoverageResponse:
start_time = time.time()
cancel_token = CancellationToken()
# Dynamic timeout based on radius (large radius needs more time for tiled processing)
radius_m = request.settings.radius
if radius_m > 30_000:
calc_timeout = 600.0 # 10 min for 30-50km
elif radius_m > 10_000:
calc_timeout = 480.0 # 8 min for 10-30km
else:
calc_timeout = 300.0 # 5 min for ≤10km
try:
# Calculate with dynamic timeout (see tiers above)
if len(request.sites) == 1:
points = await asyncio.wait_for(
coverage_service.calculate_coverage(
@@ -78,7 +94,7 @@ async def calculate_coverage(request: CoverageRequest) -> CoverageResponse:
request.settings,
cancel_token,
),
timeout=300.0
timeout=calc_timeout,
)
else:
points = await asyncio.wait_for(
@@ -87,14 +103,15 @@ async def calculate_coverage(request: CoverageRequest) -> CoverageResponse:
request.settings,
cancel_token,
),
timeout=300.0
timeout=calc_timeout,
)
except asyncio.TimeoutError:
cancel_token.cancel()
# Force cleanup orphaned worker processes
from app.services.parallel_coverage_service import _kill_worker_processes
killed = _kill_worker_processes()
detail = f"Calculation timeout (5 min). Cleaned up {killed} workers." if killed else "Calculation timeout (5 min) — try smaller radius or lower resolution"
timeout_min = int(calc_timeout / 60)
detail = f"Calculation timeout ({timeout_min} min). Cleaned up {killed} workers." if killed else f"Calculation timeout ({timeout_min} min) — try smaller radius or lower resolution"
raise HTTPException(408, detail)
except asyncio.CancelledError:
cancel_token.cancel()
@@ -122,13 +139,24 @@ async def calculate_coverage(request: CoverageRequest) -> CoverageResponse:
"points_with_atmospheric_loss": sum(1 for p in points if p.atmospheric_loss > 0),
}
# Calculate coverage boundary
boundary = None
if points:
boundary_coords = calculate_coverage_boundary(
[p.model_dump() for p in points],
threshold_dbm=request.settings.min_signal,
)
if boundary_coords:
boundary = [BoundaryPoint(**c) for c in boundary_coords]
return CoverageResponse(
points=points,
count=len(points),
settings=effective_settings,
stats=stats,
computation_time=round(computation_time, 2),
models_used=models_used
models_used=models_used,
boundary=boundary,
)
@@ -240,6 +268,358 @@ async def get_buildings(
}
@router.post("/link-budget")
async def calculate_link_budget(request: dict):
"""Calculate point-to-point link budget.
Body: {
"tx_lat": 48.46, "tx_lon": 35.04,
"tx_power_dbm": 43, "tx_gain_dbi": 18, "tx_cable_loss_db": 2,
"tx_height_m": 30,
"rx_lat": 48.50, "rx_lon": 35.10,
"rx_gain_dbi": 0, "rx_cable_loss_db": 0, "rx_sensitivity_dbm": -100,
"rx_height_m": 1.5,
"frequency_mhz": 1800
}
"""
import math
from app.services.terrain_service import terrain_service
# Extract parameters with defaults
tx_lat = request.get("tx_lat", 48.46)
tx_lon = request.get("tx_lon", 35.04)
tx_power_dbm = request.get("tx_power_dbm", 43)
tx_gain_dbi = request.get("tx_gain_dbi", 18)
tx_cable_loss_db = request.get("tx_cable_loss_db", 2)
tx_height_m = request.get("tx_height_m", 30)
rx_lat = request.get("rx_lat", 48.50)
rx_lon = request.get("rx_lon", 35.10)
rx_gain_dbi = request.get("rx_gain_dbi", 0)
rx_cable_loss_db = request.get("rx_cable_loss_db", 0)
rx_sensitivity_dbm = request.get("rx_sensitivity_dbm", -100)
rx_height_m = request.get("rx_height_m", 1.5)
freq = request.get("frequency_mhz", 1800)
# Calculate distance
distance_m = terrain_service.haversine_distance(tx_lat, tx_lon, rx_lat, rx_lon)
distance_km = distance_m / 1000
# Get elevations
tx_elev = await terrain_service.get_elevation(tx_lat, tx_lon)
rx_elev = await terrain_service.get_elevation(rx_lat, rx_lon)
# EIRP
eirp_dbm = tx_power_dbm + tx_gain_dbi - tx_cable_loss_db
# Free space path loss
if distance_km > 0:
fspl_db = 20 * math.log10(distance_km) + 20 * math.log10(freq) + 32.45
else:
fspl_db = 0
# Terrain profile for LOS check
profile = await terrain_service.get_elevation_profile(
tx_lat, tx_lon, rx_lat, rx_lon, num_points=100
)
# LOS check - does terrain block line of sight?
tx_total_height = tx_elev + tx_height_m
rx_total_height = rx_elev + rx_height_m
terrain_loss_db = 0.0
los_clear = True
obstructions = []
for i, point in enumerate(profile):
if i == 0 or i == len(profile) - 1:
continue
# Linear interpolation of LOS line at this point
fraction = i / (len(profile) - 1)
los_height = tx_total_height + fraction * (rx_total_height - tx_total_height)
if point["elevation"] > los_height:
los_clear = False
obstruction_height = point["elevation"] - los_height
obstructions.append({
"distance_m": point["distance"],
"height_above_los_m": round(obstruction_height, 1),
})
# Knife-edge diffraction estimate: ~6dB per major obstruction
terrain_loss_db += min(6.0, obstruction_height * 0.3)
# Cap terrain loss at reasonable max
terrain_loss_db = min(terrain_loss_db, 40.0)
total_path_loss = fspl_db + terrain_loss_db
# Received power
rx_power_dbm = eirp_dbm - total_path_loss + rx_gain_dbi - rx_cable_loss_db
# Link margin
margin_db = rx_power_dbm - rx_sensitivity_dbm
return {
"distance_km": round(distance_km, 2),
"distance_m": round(distance_m, 1),
"tx_elevation_m": round(tx_elev, 1),
"rx_elevation_m": round(rx_elev, 1),
"eirp_dbm": round(eirp_dbm, 1),
"fspl_db": round(fspl_db, 1),
"terrain_loss_db": round(terrain_loss_db, 1),
"total_path_loss_db": round(total_path_loss, 1),
"los_clear": los_clear,
"obstructions": obstructions,
"rx_power_dbm": round(rx_power_dbm, 1),
"margin_db": round(margin_db, 1),
"status": "OK" if margin_db >= 0 else "FAIL",
"link_budget": {
"tx_power_dbm": tx_power_dbm,
"tx_gain_dbi": tx_gain_dbi,
"tx_cable_loss_db": tx_cable_loss_db,
"rx_gain_dbi": rx_gain_dbi,
"rx_cable_loss_db": rx_cable_loss_db,
"rx_sensitivity_dbm": rx_sensitivity_dbm,
},
}
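A quick sanity check of the FSPL term used in this endpoint (distance in km, frequency in MHz); the helper below just restates the formula with illustrative values:

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss; same formula as the endpoint above."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.45

# 5 km at 1800 MHz comes out to roughly 111.5 dB
```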
@router.post("/fresnel-profile")
async def fresnel_profile(request: dict):
"""Calculate terrain profile with Fresnel zone boundaries.
Body: {
"tx_lat": 48.46, "tx_lon": 35.04, "tx_height_m": 30,
"rx_lat": 48.50, "rx_lon": 35.10, "rx_height_m": 1.5,
"frequency_mhz": 1800,
"num_points": 100
}
"""
import math
from app.services.terrain_service import terrain_service
tx_lat = request.get("tx_lat", 48.46)
tx_lon = request.get("tx_lon", 35.04)
rx_lat = request.get("rx_lat", 48.50)
rx_lon = request.get("rx_lon", 35.10)
tx_height = request.get("tx_height_m", 30)
rx_height = request.get("rx_height_m", 1.5)
freq = request.get("frequency_mhz", 1800)
num_points = request.get("num_points", 100)
# Get terrain profile
profile = await terrain_service.get_elevation_profile(
tx_lat, tx_lon, rx_lat, rx_lon, num_points
)
if not profile:
return {"error": "Could not generate terrain profile"}
total_distance = profile[-1]["distance"] if profile else 0
# Get endpoint elevations
tx_elev = profile[0]["elevation"]
rx_elev = profile[-1]["elevation"]
tx_total = tx_elev + tx_height
rx_total = rx_elev + rx_height
wavelength = 300.0 / freq # meters
# Calculate Fresnel zone at each profile point
fresnel_data = []
los_blocked = False
fresnel_blocked = False
worst_clearance = float('inf')
fresnel_intrusion_count = 0
for i, point in enumerate(profile):
d1 = point["distance"] # distance from tx
d2 = total_distance - d1 # distance to rx
# LOS height at this point (linear interpolation)
if total_distance > 0:
fraction = d1 / total_distance
else:
fraction = 0
los_height = tx_total + fraction * (rx_total - tx_total)
# First Fresnel zone radius
if d1 > 0 and d2 > 0 and total_distance > 0:
f1_radius = math.sqrt((1 * wavelength * d1 * d2) / total_distance)
else:
f1_radius = 0
# Fresnel zone boundaries (height above sea level)
fresnel_top = los_height + f1_radius
fresnel_bottom = los_height - f1_radius
# Clearance: how much space between terrain and Fresnel bottom
clearance = fresnel_bottom - point["elevation"]
if clearance < worst_clearance:
worst_clearance = clearance
if point["elevation"] > los_height:
los_blocked = True
if point["elevation"] > fresnel_bottom:
fresnel_blocked = True
fresnel_intrusion_count += 1
fresnel_data.append({
"distance": round(point["distance"], 1),
"lat": point["lat"],
"lon": point["lon"],
"terrain_elevation": round(point["elevation"], 1),
"los_height": round(los_height, 1),
"fresnel_top": round(fresnel_top, 1),
"fresnel_bottom": round(fresnel_bottom, 1),
"f1_radius": round(f1_radius, 1),
"clearance": round(clearance, 1),
})
# Calculate Fresnel clearance percentage
fresnel_clear_pct = round(100 * (1 - fresnel_intrusion_count / len(profile)), 1) if profile else 100
# Estimate additional loss due to Fresnel obstruction
if los_blocked:
estimated_loss_db = 10 + abs(worst_clearance) * 0.5 # rough estimate
elif fresnel_blocked:
estimated_loss_db = 3 + (100 - fresnel_clear_pct) * 0.06 # 3-6 dB typical
else:
estimated_loss_db = 0
return {
"profile": fresnel_data,
"total_distance_m": round(total_distance, 1),
"tx_elevation": round(tx_elev, 1),
"rx_elevation": round(rx_elev, 1),
"frequency_mhz": freq,
"wavelength_m": round(wavelength, 4),
"los_clear": not los_blocked,
"fresnel_clear": not fresnel_blocked,
"fresnel_clear_pct": fresnel_clear_pct,
"worst_clearance_m": round(worst_clearance, 1),
"estimated_loss_db": round(estimated_loss_db, 1),
"recommendation": (
"Clear — excellent link" if not fresnel_blocked
else "Fresnel zone partially blocked — expect 3-6 dB additional loss"
if not los_blocked
else "LOS blocked — significant diffraction loss expected"
),
}
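Worked example for the first Fresnel zone radius computed in the loop above (values are illustrative, not from the session):

```python
import math

def f1_radius_m(d1_m: float, d2_m: float, freq_mhz: float) -> float:
    """First Fresnel zone radius; mirrors the formula in the endpoint above."""
    wavelength = 300.0 / freq_mhz
    return math.sqrt(wavelength * d1_m * d2_m / (d1_m + d2_m))

# Midpoint of a 5 km link at 1800 MHz needs about 14.4 m of clearance
```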
@router.post("/interference")
async def calculate_interference(request: CoverageRequest):
"""Calculate C/I (carrier-to-interference) ratio for multi-site scenario.
Uses the same request format as /calculate but returns interference analysis
instead of raw coverage. Requires 2+ sites to be meaningful.
Returns for each grid point:
- C/I ratio (carrier to interference) in dB
- Best server index
- Best server RSRP
"""
import numpy as np
from app.services.gpu_service import gpu_service
if len(request.sites) < 2:
raise HTTPException(400, "At least 2 sites required for interference analysis")
if len(request.sites) > 10:
raise HTTPException(400, "Maximum 10 sites per request")
# First calculate coverage for all sites
start_time = time.time()
cancel_token = CancellationToken()
try:
# Calculate coverage for each site individually
site_results = []
for site in request.sites:
points = await asyncio.wait_for(
coverage_service.calculate_coverage(
site,
request.settings,
cancel_token,
),
timeout=120.0, # 2 min per site
)
site_results.append(points)
except asyncio.TimeoutError:
cancel_token.cancel()
raise HTTPException(408, "Calculation timeout")
computation_time = time.time() - start_time
# Build coordinate -> RSRP mapping for each site
# We need to align the grids (same points for all sites)
coord_set = set()
for points in site_results:
for p in points:
coord_set.add((round(p.lat, 6), round(p.lon, 6)))
coord_list = sorted(coord_set)
# Build RSRP arrays aligned to coord_list
rsrp_grids = []
frequencies = []
for idx, (site, points) in enumerate(zip(request.sites, site_results)):
# Map coordinates to RSRP
point_map = {(round(p.lat, 6), round(p.lon, 6)): p.rsrp for p in points}
rsrp_array = np.array([
point_map.get(coord, -150) # -150 dBm = no coverage
for coord in coord_list
], dtype=np.float64)
rsrp_grids.append(rsrp_array)
frequencies.append(site.frequency)
# Calculate C/I using GPU service
ci_ratio, best_server_idx, best_rsrp = gpu_service.calculate_interference_vectorized(
rsrp_grids, frequencies
)
# Build result points with C/I data
ci_points = []
for i, (lat, lon) in enumerate(coord_list):
ci_points.append({
"lat": lat,
"lon": lon,
"ci_ratio_db": round(float(ci_ratio[i]), 1),
"best_server_idx": int(best_server_idx[i]),
"best_server_rsrp": round(float(best_rsrp[i]), 1),
})
# Calculate statistics
ci_values = [p["ci_ratio_db"] for p in ci_points]
stats = {
"min_ci_db": round(min(ci_values), 1) if ci_values else 0,
"max_ci_db": round(max(ci_values), 1) if ci_values else 0,
"avg_ci_db": round(sum(ci_values) / len(ci_values), 1) if ci_values else 0,
"good_coverage_pct": round(100 * sum(1 for c in ci_values if c >= 10) / len(ci_values), 1) if ci_values else 0,
"marginal_coverage_pct": round(100 * sum(1 for c in ci_values if 0 <= c < 10) / len(ci_values), 1) if ci_values else 0,
"interference_dominant_pct": round(100 * sum(1 for c in ci_values if c < 0) / len(ci_values), 1) if ci_values else 0,
}
# Check for frequency groups
unique_freqs = set(frequencies)
freq_groups = {}
for freq in unique_freqs:
freq_groups[freq] = sum(1 for f in frequencies if f == freq)
return {
"points": ci_points,
"count": len(ci_points),
"stats": stats,
"computation_time": round(computation_time, 2),
"sites": [{"name": s.name, "frequency_mhz": s.frequency} for s in request.sites],
"frequency_groups": freq_groups,
"warning": None if any(c > 1 for c in freq_groups.values()) else "All sites on different frequencies - no co-channel interference",
}
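The C/I math itself is delegated to `gpu_service.calculate_interference_vectorized`, which is not shown in this diff. A plain NumPy sketch of the usual co-channel computation (assuming all sites share a frequency and interference sums in linear power) would look like:

```python
import numpy as np

def ci_db(rsrp_grids: list[np.ndarray]) -> tuple[np.ndarray, np.ndarray]:
    """Per-point carrier-to-interference ratio for same-frequency sites.
    rsrp_grids: one dBm array per site, aligned to the same coordinates."""
    rsrp = np.stack(rsrp_grids)               # (n_sites, n_points)
    best_idx = rsrp.argmax(axis=0)            # best server per point
    carrier_dbm = rsrp.max(axis=0)
    linear_mw = 10.0 ** (rsrp / 10.0)         # dBm -> mW
    interf_mw = linear_mw.sum(axis=0) - 10.0 ** (carrier_dbm / 10.0)
    interf_mw = np.maximum(interf_mw, 1e-15)  # floor to avoid log(0)
    return carrier_dbm - 10.0 * np.log10(interf_mw), best_idx
```

For a point where the best server is at -80 dBm and one interferer at -90 dBm, this yields a C/I of 10 dB, matching the "good coverage" threshold used in the stats above.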
def _get_active_models(settings: CoverageSettings) -> List[str]:
"""Determine which propagation models are active"""
models = [] # Base propagation model added by caller via select_propagation_model()


@@ -0,0 +1,41 @@
"""GPU management API endpoints."""
from fastapi import APIRouter, HTTPException
from pydantic import BaseModel
from app.services.gpu_backend import gpu_manager
router = APIRouter()
class SetDeviceRequest(BaseModel):
backend: str
index: int = 0
@router.get("/status")
async def gpu_status():
"""Return GPU manager status: active backend, device, available devices."""
return gpu_manager.get_status()
@router.get("/devices")
async def gpu_devices():
"""Return list of available compute devices."""
return {"devices": gpu_manager.get_devices()}
@router.post("/set")
async def gpu_set_device(request: SetDeviceRequest):
"""Switch active compute device."""
try:
result = gpu_manager.set_device(request.backend, request.index)
return {"status": "ok", **result}
except ValueError as e:
raise HTTPException(status_code=400, detail=str(e))
@router.get("/diagnostics")
async def gpu_diagnostics():
"""Full GPU diagnostic info for troubleshooting detection issues."""
return gpu_manager.get_diagnostics()


@@ -1,12 +1,29 @@
import sys
import platform
from fastapi import APIRouter, Depends
from app.api.deps import get_db
from app.services.gpu_backend import gpu_manager
router = APIRouter()
@router.get("/")
async def health_check():
return {"status": "ok", "service": "rfcp-backend", "version": "1.1.0"}
gpu_info = gpu_manager.get_status()
return {
"status": "ok",
"service": "rfcp-backend",
"version": "3.6.0",
"build": "gpu" if gpu_info.get("gpu_available") else "cpu",
"gpu": {
"available": gpu_info.get("gpu_available", False),
"backend": gpu_info.get("active_backend", "cpu"),
"device": gpu_info.get("active_device", {}).get("name") if gpu_info.get("active_device") else "CPU",
},
"python": sys.version.split()[0],
"platform": platform.system(),
}
@router.get("/db")


@@ -180,3 +180,93 @@ async def get_terrain_file(region: str):
if os.path.exists(terrain_path):
return FileResponse(terrain_path)
raise HTTPException(status_code=404, detail=f"Region '{region}' not found")
@router.get("/status")
async def terrain_status():
"""Return terrain data availability info."""
cached_tiles = terrain_service.get_cached_tiles()
cache_size = terrain_service.get_cache_size_mb()
# Categorize by resolution based on file size
srtm1_tiles = []
srtm3_tiles = []
for t in cached_tiles:
tile_path = terrain_service.terrain_path / f"{t}.hgt"
try:
if tile_path.stat().st_size == 3601 * 3601 * 2:
srtm1_tiles.append(t)
else:
srtm3_tiles.append(t)
except Exception:
pass
return {
"total_tiles": len(cached_tiles),
"srtm1": {
"count": len(srtm1_tiles),
"resolution_m": 30,
"tiles": sorted(srtm1_tiles),
},
"srtm3": {
"count": len(srtm3_tiles),
"resolution_m": 90,
"tiles": sorted(srtm3_tiles),
},
"cache_size_mb": round(cache_size, 1),
"memory_cached": len(terrain_service._tile_cache),
"terra_server": "https://terra.eliah.one",
}
@router.post("/download")
async def terrain_download(request: dict):
"""Pre-download tiles for a region.
Body: {"center_lat": 48.46, "center_lon": 35.04, "radius_km": 50}
Or: {"tiles": ["N48E034", "N48E035", "N47E034", "N47E035"]}
"""
if "tiles" in request:
tile_list = request["tiles"]
else:
center_lat = request.get("center_lat", 48.46)
center_lon = request.get("center_lon", 35.04)
radius_km = request.get("radius_km", 50)
tile_list = terrain_service.get_required_tiles(center_lat, center_lon, radius_km)
missing = [t for t in tile_list if not terrain_service.get_tile_path(t).exists()]
if not missing:
return {"status": "ok", "message": "All tiles already cached", "count": len(tile_list)}
# Download missing tiles
downloaded = []
failed = []
for tile_name in missing:
success = await terrain_service.download_tile(tile_name)
if success:
downloaded.append(tile_name)
else:
failed.append(tile_name)
return {
"status": "ok",
"required": len(tile_list),
"already_cached": len(tile_list) - len(missing),
"downloaded": downloaded,
"failed": failed,
}
@router.get("/index")
async def terrain_index():
"""Fetch tile index from terra server."""
import httpx
try:
async with httpx.AsyncClient(timeout=10.0) as client:
resp = await client.get("https://terra.eliah.one/api/index")
if resp.status_code == 200:
return resp.json()
except Exception:
pass
return {"error": "Could not reach terra.eliah.one", "offline": True}


@@ -8,7 +8,6 @@ progress updates during computation phases.
import time
import asyncio
import logging
import threading
from typing import Optional
from fastapi import WebSocket, WebSocketDisconnect
@@ -51,7 +50,7 @@ class ConnectionManager:
"data": result,
})
except Exception as e:
logger.debug(f"[WS] send_result failed: {e}")
logger.warning(f"[WS] send_result failed: {e}")
async def send_error(self, ws: WebSocket, calc_id: str, error: str):
try:
@@ -61,7 +60,24 @@ class ConnectionManager:
"message": error,
})
except Exception as e:
logger.debug(f"[WS] send_error failed: {e}")
logger.warning(f"[WS] send_error failed: {e}")
async def send_partial_results(
self, ws: WebSocket, calc_id: str,
points: list, tile_idx: int, total_tiles: int,
):
"""Send per-tile partial results for progressive rendering."""
try:
await ws.send_json({
"type": "partial_results",
"calculation_id": calc_id,
"points": [p.model_dump() for p in points],
"tile": tile_idx,
"total_tiles": total_tiles,
"progress": (tile_idx + 1) / total_tiles,
})
except Exception as e:
logger.debug(f"[WS] send_partial_results failed: {e}")
ws_manager = ConnectionManager()
@@ -74,14 +90,32 @@ async def _run_calculation(ws: WebSocket, calc_id: str, data: dict):
# Shared progress state — written by worker threads, polled by event loop.
# Python GIL makes dict value assignment atomic for simple types.
_progress = {"phase": "Initializing", "pct": 0.05, "seq": 0}
_progress = {"phase": "Initializing", "pct": 0.0, "seq": 0}
_done = False
# Get event loop for cross-thread scheduling of WS sends.
loop = asyncio.get_running_loop()
_last_direct_pct = 0.0
_last_direct_phase = ""
def sync_progress_fn(phase: str, pct: float, _eta: Optional[float] = None):
"""Thread-safe progress callback — just updates a shared dict."""
"""Thread-safe progress callback — updates dict AND schedules direct WS send."""
nonlocal _last_direct_pct, _last_direct_phase
_progress["phase"] = phase
_progress["pct"] = pct
_progress["seq"] += 1
# Schedule direct WS send via event loop (works from any thread).
# Throttle: only send on phase change or >=2% progress.
if phase != _last_direct_phase or pct - _last_direct_pct >= 0.02:
_last_direct_pct = pct
_last_direct_phase = phase
try:
loop.call_soon_threadsafe(
asyncio.ensure_future,
ws_manager.send_progress(ws, calc_id, phase, pct),
)
except RuntimeError:
pass # Event loop closed
try:
sites_data = data.get("sites", [])
@@ -116,24 +150,45 @@ async def _run_calculation(ws: WebSocket, calc_id: str, data: dict):
if primary_model.name not in models_used:
models_used.insert(0, primary_model.name)
await ws_manager.send_progress(ws, calc_id, "Initializing", 0.05)
await ws_manager.send_progress(ws, calc_id, "Initializing", 0.02)
# ── Progress poller: reads shared dict and sends WS updates ──
# ── Tile callback for progressive results (large radius) ──
async def _tile_callback(tile_points, tile_idx, total_tiles):
await ws_manager.send_partial_results(
ws, calc_id, tile_points, tile_idx, total_tiles,
)
# ── Backup progress poller: catches anything call_soon_threadsafe missed ──
async def progress_poller():
last_sent_seq = 0
last_sent_pct = 0.0
last_sent_phase = "Initializing"
while not _done:
await asyncio.sleep(0.3)
await asyncio.sleep(0.5)
seq = _progress["seq"]
pct = _progress["pct"]
phase = _progress["phase"]
if seq != last_sent_seq and (pct - last_sent_pct >= 0.01 or phase != "Calculating coverage"):
# Send on any phase change OR >=3% progress (primary sends handle fine-grained)
if seq != last_sent_seq and (
phase != last_sent_phase
or pct - last_sent_pct >= 0.03
):
await ws_manager.send_progress(ws, calc_id, phase, pct)
last_sent_seq = seq
last_sent_pct = pct
last_sent_phase = phase
poller_task = asyncio.create_task(progress_poller())
# Dynamic timeout based on radius
radius_m = settings.radius
if radius_m > 30_000:
calc_timeout = 600.0 # 10 min for 30-50km
elif radius_m > 10_000:
calc_timeout = 480.0 # 8 min for 10-30km
else:
calc_timeout = 300.0 # 5 min for ≤10km
# Run calculation with timeout
start_time = time.time()
try:
@@ -142,15 +197,18 @@ async def _run_calculation(ws: WebSocket, calc_id: str, data: dict):
coverage_service.calculate_coverage(
sites[0], settings, cancel_token,
progress_fn=sync_progress_fn,
tile_callback=_tile_callback,
),
timeout=300.0,
timeout=calc_timeout,
)
else:
points = await asyncio.wait_for(
coverage_service.calculate_multi_site_coverage(
sites, settings, cancel_token,
progress_fn=sync_progress_fn,
tile_callback=_tile_callback,
),
timeout=300.0,
timeout=calc_timeout,
)
except asyncio.TimeoutError:
cancel_token.cancel()
@@ -158,7 +216,8 @@ async def _run_calculation(ws: WebSocket, calc_id: str, data: dict):
await poller_task
from app.services.parallel_coverage_service import _kill_worker_processes
_kill_worker_processes()
timeout_min = int(calc_timeout / 60)
await ws_manager.send_error(ws, calc_id, f"Calculation timeout ({timeout_min} min)")
return
except asyncio.CancelledError:
cancel_token.cancel()
@@ -170,7 +229,6 @@ async def _run_calculation(ws: WebSocket, calc_id: str, data: dict):
# Stop poller and send final progress
_done = True
await poller_task
await ws_manager.send_progress(ws, calc_id, "Finalizing", 0.98)
computation_time = time.time() - start_time
@@ -201,7 +259,10 @@ async def _run_calculation(ws: WebSocket, calc_id: str, data: dict):
"models_used": models_used,
}
# Send "Complete" before result so frontend shows 100%
await ws_manager.send_progress(ws, calc_id, "Complete", 1.0)
await ws_manager.send_result(ws, calc_id, result)
logger.info(f"[WS] calc={calc_id} done: {len(points)} pts, {computation_time:.1f}s")
except Exception as e:
logger.error(f"[WS] Calculation error: {e}", exc_info=True)

View File

@@ -1,15 +1,62 @@
from contextlib import asynccontextmanager
import logging
import platform
from fastapi import FastAPI, WebSocket
from fastapi.middleware.cors import CORSMiddleware
from app.core.database import connect_to_mongo, close_mongo_connection
from app.api.routes import health, projects, terrain, coverage, regions, system, gpu
from app.api.websocket import websocket_endpoint
logger = logging.getLogger("rfcp.startup")
def check_gpu_availability():
"""Log GPU status on startup for debugging."""
is_wsl = "microsoft" in platform.release().lower()
env_note = " (WSL2)" if is_wsl else ""
# Check CuPy / CUDA
try:
import cupy as cp
device_count = cp.cuda.runtime.getDeviceCount()
if device_count > 0:
props = cp.cuda.runtime.getDeviceProperties(0)
name = props["name"]
if isinstance(name, bytes):
name = name.decode()
mem_mb = props["totalGlobalMem"] // (1024 * 1024)
logger.info(f"GPU detected{env_note}: {name} ({mem_mb} MB VRAM)")
logger.info(f"CuPy {cp.__version__}, CUDA devices: {device_count}")
else:
logger.warning(f"CuPy installed but no CUDA devices found{env_note}")
except ImportError as e:
logger.warning(f"CuPy FAILED {env_note}: {e}")
if is_wsl:
logger.warning("Install: pip3 install cupy-cuda12x --break-system-packages")
else:
logger.warning("Install: pip install cupy-cuda12x")
except Exception as e:
logger.warning(f"CuPy error{env_note}: {e}")
# Check PyOpenCL
try:
import pyopencl as cl
platforms = cl.get_platforms()
for p in platforms:
for d in p.get_devices():
logger.info(f"OpenCL device: {d.name.strip()}")
except ImportError:
logger.debug("PyOpenCL not installed (optional)")
except Exception:
pass
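The startup check follows a "probe, then report" pattern for optional GPU backends. A self-contained sketch of the same pattern (the `gpu_summary` helper is illustrative, not part of the codebase) that degrades cleanly whether or not CuPy is installed:

```python
def gpu_summary() -> str:
    """Probe the optional CuPy backend and report a one-line status."""
    try:
        import cupy as cp  # optional dependency; may be absent
        n = cp.cuda.runtime.getDeviceCount()
        return f"cupy {cp.__version__}, {n} CUDA device(s)"
    except ImportError:
        return "cupy not installed (CPU/NumPy fallback)"
    except Exception as e:  # driver/runtime errors, e.g. under WSL2
        return f"cupy present but unusable: {e}"

print(gpu_summary())
```

Distinguishing `ImportError` from other exceptions matters: a missing package and a broken CUDA driver call for different remedies, as the log hints above show.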
@asynccontextmanager
async def lifespan(app: FastAPI):
# Log GPU status on startup
check_gpu_availability()
await connect_to_mongo()
yield
await close_mongo_connection()
@@ -38,6 +85,7 @@ app.include_router(terrain.router, prefix="/api/terrain", tags=["terrain"])
app.include_router(coverage.router, prefix="/api/coverage", tags=["coverage"])
app.include_router(regions.router, prefix="/api/regions", tags=["regions"])
app.include_router(system.router, prefix="/api/system", tags=["system"])
app.include_router(gpu.router, prefix="/api/gpu", tags=["gpu"])
# WebSocket endpoint for real-time coverage with progress
app.websocket("/ws")(websocket_endpoint)

View File

@@ -0,0 +1,122 @@
"""
Coverage boundary calculation service.
Computes concave hull (alpha shape) from coverage points to generate
a realistic boundary that follows actual coverage contour.
"""
import logging
from typing import Optional
logger = logging.getLogger(__name__)
def calculate_coverage_boundary(
points: list[dict],
threshold_dbm: float = -100,
simplify_tolerance: float = 0.001,
) -> list[dict]:
"""
Calculate coverage boundary as concave hull of points above threshold.
Args:
points: List of coverage points with 'lat', 'lon', 'rsrp' keys
threshold_dbm: RSRP threshold - points below this are excluded
simplify_tolerance: Simplification tolerance in degrees (~100m per 0.001)
Returns:
List of {'lat': float, 'lon': float} coordinates forming boundary polygon.
Empty list if boundary cannot be computed.
"""
try:
from shapely.geometry import MultiPoint
from shapely import concave_hull
except ImportError:
logger.warning("Shapely not installed - boundary calculation disabled")
return []
# Filter points above threshold
valid_coords = [
(p['lon'], p['lat']) # Shapely uses (x, y) = (lon, lat)
for p in points
if p.get('rsrp', -999) >= threshold_dbm
]
if len(valid_coords) < 3:
logger.debug(f"Not enough points for boundary: {len(valid_coords)}")
return []
try:
# Create MultiPoint geometry
mp = MultiPoint(valid_coords)
# Compute concave hull (alpha shape)
# ratio: 0 = convex hull, 1 = very tight fit
# 0.3-0.5 gives good balance between detail and smoothness
hull = concave_hull(mp, ratio=0.3)
if hull.is_empty:
logger.debug("Concave hull is empty")
return []
# Simplify to reduce points (0.001 deg ≈ 100m)
if simplify_tolerance > 0:
hull = hull.simplify(simplify_tolerance, preserve_topology=True)
# Extract coordinates based on geometry type
if hull.geom_type == 'Polygon':
coords = list(hull.exterior.coords)
return [{'lat': c[1], 'lon': c[0]} for c in coords]
elif hull.geom_type == 'MultiPolygon':
# Return largest polygon's exterior
largest = max(hull.geoms, key=lambda g: g.area)
coords = list(largest.exterior.coords)
return [{'lat': c[1], 'lon': c[0]} for c in coords]
elif hull.geom_type == 'GeometryCollection':
# Find polygons in collection
polygons = [g for g in hull.geoms if g.geom_type == 'Polygon']
if polygons:
largest = max(polygons, key=lambda g: g.area)
coords = list(largest.exterior.coords)
return [{'lat': c[1], 'lon': c[0]} for c in coords]
logger.debug(f"Unexpected hull geometry type: {hull.geom_type}")
return []
except Exception as e:
logger.warning(f"Boundary calculation error: {e}")
return []
def calculate_multi_site_boundaries(
points: list[dict],
threshold_dbm: float = -100,
) -> dict[str, list[dict]]:
"""
Calculate separate boundaries for each site's coverage area.
Args:
points: Coverage points with 'lat', 'lon', 'rsrp', 'site_id' keys
threshold_dbm: RSRP threshold
Returns:
Dict mapping site_id to boundary coordinates list.
"""
# Group points by site_id
by_site: dict[str, list[dict]] = {}
for p in points:
site_id = p.get('site_id', 'default')
if site_id not in by_site:
by_site[site_id] = []
by_site[site_id].append(p)
# Calculate boundary for each site
boundaries = {}
for site_id, site_points in by_site.items():
boundary = calculate_coverage_boundary(site_points, threshold_dbm)
if boundary:
boundaries[site_id] = boundary
return boundaries
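A minimal standalone sketch of the Shapely calls the boundary service relies on (assuming Shapely ≥ 2.0; the L-shaped point cluster is synthetic, standing in for points that passed the RSRP threshold filter):

```python
from shapely.geometry import MultiPoint
from shapely import concave_hull  # requires shapely >= 2.0

# Synthetic "coverage" cluster: an L-shaped grid of (lon, lat) points.
coords = [(x * 0.001, y * 0.001)
          for x in range(20) for y in range(20)
          if x < 10 or y < 10]

# ratio=0.3 balances detail vs smoothness, as in the service above.
hull = concave_hull(MultiPoint(coords), ratio=0.3)
hull = hull.simplify(0.001, preserve_topology=True)

# Exterior ring -> list of {'lat', 'lon'} dicts (Shapely is (x, y) = (lon, lat)).
boundary = [{'lat': c[1], 'lon': c[0]} for c in hull.exterior.coords]
```

The exterior ring is closed (first coordinate repeats as the last), which is what map libraries expect for a polygon overlay.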

View File

@@ -0,0 +1,241 @@
"""
SQLite cache for OSM data — buildings, vegetation, water, streets.
Replaces in-memory caching for large-area calculations. Instead of holding
hundreds of thousands of buildings in RAM, data is stored on disk in SQLite
and queried per-tile using spatial bbox queries.
Location: ~/.rfcp/osm_cache.db
"""
import json
import time
import sqlite3
from pathlib import Path
from typing import List, Dict, Optional
def _default_db_path() -> str:
"""Get default database path at ~/.rfcp/osm_cache.db."""
cache_dir = Path.home() / '.rfcp'
cache_dir.mkdir(parents=True, exist_ok=True)
return str(cache_dir / 'osm_cache.db')
class OSMCacheDB:
"""SQLite-backed cache for OSM feature data with bbox queries.
Stores buildings and vegetation as JSON blobs with bounding-box
columns for fast spatial queries. Cache freshness is tracked
per 1-degree cell (matching the OSM grid fetch pattern).
"""
def __init__(self, db_path: Optional[str] = None):
if db_path is None:
db_path = _default_db_path()
self.db_path = db_path
self._conn: Optional[sqlite3.Connection] = None
@property
def conn(self) -> sqlite3.Connection:
"""Lazy connection with WAL mode for concurrent reads."""
if self._conn is None:
self._conn = sqlite3.connect(self.db_path, check_same_thread=False)
self._conn.execute("PRAGMA journal_mode=WAL")
self._conn.execute("PRAGMA synchronous=NORMAL")
self._init_tables()
return self._conn
def _init_tables(self):
assert self._conn is not None
self._conn.executescript("""
CREATE TABLE IF NOT EXISTS buildings (
id INTEGER PRIMARY KEY AUTOINCREMENT,
osm_id INTEGER,
min_lat REAL NOT NULL,
min_lon REAL NOT NULL,
max_lat REAL NOT NULL,
max_lon REAL NOT NULL,
height REAL DEFAULT 10.0,
data TEXT NOT NULL,
cell_key TEXT NOT NULL
);
CREATE INDEX IF NOT EXISTS idx_bld_cell ON buildings(cell_key);
CREATE INDEX IF NOT EXISTS idx_bld_bbox
ON buildings(min_lat, max_lat, min_lon, max_lon);
CREATE TABLE IF NOT EXISTS vegetation (
id INTEGER PRIMARY KEY AUTOINCREMENT,
osm_id INTEGER,
min_lat REAL NOT NULL,
min_lon REAL NOT NULL,
max_lat REAL NOT NULL,
max_lon REAL NOT NULL,
data TEXT NOT NULL,
cell_key TEXT NOT NULL
);
CREATE INDEX IF NOT EXISTS idx_veg_cell ON vegetation(cell_key);
CREATE INDEX IF NOT EXISTS idx_veg_bbox
ON vegetation(min_lat, max_lat, min_lon, max_lon);
CREATE TABLE IF NOT EXISTS cache_meta (
cell_key TEXT NOT NULL,
data_type TEXT NOT NULL,
fetched_at REAL NOT NULL,
item_count INTEGER DEFAULT 0,
PRIMARY KEY (cell_key, data_type)
);
""")
self._conn.commit()
# ── Cell key helpers ──
@staticmethod
def cell_key(min_lat: float, min_lon: float, max_lat: float, max_lon: float) -> str:
"""Generate cell key from bbox (matches 1-degree grid alignment)."""
return f"{min_lat:.0f},{min_lon:.0f},{max_lat:.0f},{max_lon:.0f}"
def is_cell_cached(
self, cell_key: str, data_type: str, max_age_hours: float = 24.0
) -> bool:
"""Check if cell data is cached and fresh."""
cursor = self.conn.execute(
"SELECT fetched_at FROM cache_meta "
"WHERE cell_key = ? AND data_type = ?",
(cell_key, data_type),
)
row = cursor.fetchone()
if row is None:
return False
age_hours = (time.time() - row[0]) / 3600
return age_hours < max_age_hours
def mark_cell_cached(self, cell_key: str, data_type: str, item_count: int):
"""Record that a cell has been fetched."""
self.conn.execute(
"INSERT OR REPLACE INTO cache_meta "
"(cell_key, data_type, fetched_at, item_count) VALUES (?, ?, ?, ?)",
(cell_key, data_type, time.time(), item_count),
)
self.conn.commit()
# ── Buildings ──
def insert_buildings_bulk(self, buildings_data: List[Dict], cell_key: str):
"""Bulk insert serialised building dicts for a cell.
Each dict must have 'geometry' (list of [lon, lat]) and 'id'.
"""
rows = []
for b in buildings_data:
geom = b.get('geometry', [])
if not geom:
continue
lats = [p[1] for p in geom]
lons = [p[0] for p in geom]
rows.append((
b.get('id', 0),
min(lats), min(lons), max(lats), max(lons),
b.get('height', 10.0),
json.dumps(b),
cell_key,
))
if rows:
self.conn.executemany(
"INSERT INTO buildings "
"(osm_id, min_lat, min_lon, max_lat, max_lon, height, data, cell_key) "
"VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
rows,
)
self.conn.commit()
def query_buildings_bbox(
self,
min_lat: float, max_lat: float,
min_lon: float, max_lon: float,
limit: int = 20000,
) -> List[Dict]:
"""Query buildings whose bbox overlaps the given bbox."""
cursor = self.conn.execute(
"SELECT data FROM buildings "
"WHERE max_lat >= ? AND min_lat <= ? "
"AND max_lon >= ? AND min_lon <= ? "
"LIMIT ?",
(min_lat, max_lat, min_lon, max_lon, limit),
)
return [json.loads(row[0]) for row in cursor]
# ── Vegetation ──
def insert_vegetation_bulk(self, veg_data: List[Dict], cell_key: str):
"""Bulk insert serialised vegetation dicts for a cell."""
rows = []
for v in veg_data:
geom = v.get('geometry', [])
if not geom:
continue
lats = [p[1] for p in geom]
lons = [p[0] for p in geom]
rows.append((
v.get('id', 0),
min(lats), min(lons), max(lats), max(lons),
json.dumps(v),
cell_key,
))
if rows:
self.conn.executemany(
"INSERT INTO vegetation "
"(osm_id, min_lat, min_lon, max_lat, max_lon, data, cell_key) "
"VALUES (?, ?, ?, ?, ?, ?, ?)",
rows,
)
self.conn.commit()
def query_vegetation_bbox(
self,
min_lat: float, max_lat: float,
min_lon: float, max_lon: float,
limit: int = 10000,
) -> List[Dict]:
"""Query vegetation whose bbox overlaps the given bbox."""
cursor = self.conn.execute(
"SELECT data FROM vegetation "
"WHERE max_lat >= ? AND min_lat <= ? "
"AND max_lon >= ? AND min_lon <= ? "
"LIMIT ?",
(min_lat, max_lat, min_lon, max_lon, limit),
)
return [json.loads(row[0]) for row in cursor]
# ── Housekeeping ──
def close(self):
"""Close the database connection."""
if self._conn:
self._conn.close()
self._conn = None
def get_stats(self) -> Dict[str, int]:
"""Get cache statistics."""
stats: Dict[str, int] = {}
for table in ('buildings', 'vegetation'):
cursor = self.conn.execute(f"SELECT COUNT(*) FROM {table}") # noqa: S608
stats[table] = cursor.fetchone()[0]
cursor = self.conn.execute("SELECT COUNT(*) FROM cache_meta")
stats['cached_cells'] = cursor.fetchone()[0]
return stats
# ── Singleton ──
_cache_db: Optional[OSMCacheDB] = None
def get_osm_cache_db() -> OSMCacheDB:
"""Get or create the singleton OSM cache database."""
global _cache_db
if _cache_db is None:
_cache_db = OSMCacheDB()
return _cache_db
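The bbox-overlap predicate used by `query_buildings_bbox` is worth seeing in isolation: two ranges `[min, max]` overlap iff each range's max is ≥ the other's min. A standalone sketch against an in-memory SQLite database (no `~/.rfcp` path needed; the building values are illustrative):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE buildings (
    osm_id INTEGER, min_lat REAL, min_lon REAL,
    max_lat REAL, max_lon REAL, height REAL, data TEXT, cell_key TEXT)""")

def insert_building(b, cell_key):
    # Derive the bbox columns from the geometry, as insert_buildings_bulk does.
    lats = [p[1] for p in b["geometry"]]
    lons = [p[0] for p in b["geometry"]]
    conn.execute(
        "INSERT INTO buildings VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
        (b["id"], min(lats), min(lons), max(lats), max(lons),
         b.get("height", 10.0), json.dumps(b), cell_key))

insert_building({"id": 1, "geometry": [[34.78, 32.08], [34.79, 32.09]]}, "32,34,33,35")
insert_building({"id": 2, "geometry": [[35.50, 33.00], [35.51, 33.01]]}, "33,35,34,36")

# Query bbox: lat [32.0, 32.5], lon [34.5, 35.0] — only building 1 overlaps.
rows = conn.execute(
    "SELECT data FROM buildings WHERE max_lat >= ? AND min_lat <= ? "
    "AND max_lon >= ? AND min_lon <= ?",
    (32.0, 32.5, 34.5, 35.0)).fetchall()
hits = [json.loads(r[0])["id"] for r in rows]
```

The composite index on `(min_lat, max_lat, min_lon, max_lon)` makes this predicate cheap even with hundreds of thousands of rows, which is the whole point of moving the cache out of RAM.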

View File

@@ -1,3 +1,4 @@
import gc
import math
import os
import sys
@@ -61,6 +62,9 @@ from app.services.parallel_coverage_service import (
calculate_coverage_parallel, get_cpu_count, get_parallel_backend,
CancellationToken,
)
# NOTE: gpu_manager and gpu_service are imported INSIDE functions that need them,
# NOT at module level. This prevents worker processes from initializing CuPy/CUDA
# which causes cudaErrorInsufficientDriver errors in child processes.
# ── New propagation models (Phase 3.0) ──
from app.propagation.base import PropagationModel, PropagationInput, PropagationOutput
@@ -122,13 +126,14 @@ def _filter_buildings_to_bbox(
max_lat: float, max_lon: float,
site_lat: float, site_lon: float,
log_fn=None,
max_buildings: int = MAX_BUILDINGS_FOR_WORKERS,
) -> list:
"""Filter buildings to coverage bbox and cap at max_buildings.
Returns buildings sorted by distance to site (nearest first) so the
cap preserves buildings most likely to affect coverage.
"""
if not buildings or len(buildings) <= max_buildings:
return buildings
original = len(buildings)
@@ -148,7 +153,7 @@ def _filter_buildings_to_bbox(
log_fn(f"Building bbox filter: {original} -> {len(filtered)}")
# If still too many, sort by centroid distance and cap
if len(filtered) > max_buildings:
def _centroid_dist(b):
lats = [p[1] for p in b.geometry]
lons = [p[0] for p in b.geometry]
@@ -157,7 +162,7 @@ def _filter_buildings_to_bbox(
return (clat - site_lat) ** 2 + (clon - site_lon) ** 2
filtered.sort(key=_centroid_dist)
filtered = filtered[:max_buildings]
if log_fn:
log_fn(f"Building distance cap: -> {len(filtered)} (nearest to site)")
@@ -245,6 +250,9 @@ class CoverageSettings(BaseModel):
temperature_c: float = 15.0
humidity_percent: float = 50.0
# Fading margin (dB) — additional safety loss subtracted from RSRP
fading_margin: float = 0.0
# Preset
preset: Optional[str] = None # fast, standard, detailed, full
@@ -426,18 +434,28 @@ class CoverageService:
settings: CoverageSettings,
cancel_token: Optional[CancellationToken] = None,
progress_fn: Optional[Callable[[str, float], None]] = None,
tile_callback: Optional[Callable] = None,
) -> List[CoveragePoint]:
"""
Calculate coverage grid for a single site
Returns list of CoveragePoint with RSRP values.
progress_fn(phase, pct): optional callback for progress updates (0.0-1.0).
tile_callback(points, tile_idx, total_tiles): optional callback for per-tile
partial results when using tiled processing (radius > 10km).
"""
calc_start = time.time()
# Apply preset if specified
settings = apply_preset(settings)
# ── Tiled processing for large radius ──
from app.services.tile_processor import TILE_THRESHOLD_M
if settings.radius > TILE_THRESHOLD_M:
return await self.calculate_coverage_tiled(
site, settings, cancel_token, progress_fn, tile_callback
)
points = []
# Generate grid
@@ -485,7 +503,16 @@ class CoverageService:
)
streets = _filter_osm_list_to_bbox(streets, min_lat, min_lon, max_lat, max_lon)
water_bodies = _filter_osm_list_to_bbox(water_bodies, min_lat, min_lon, max_lat, max_lon)
# Cap vegetation at 5000 — each area requires O(samples × areas)
# point-in-polygon checks per grid point. 20k+ areas with dominant
# path enabled causes OOM via worker memory explosion.
vegetation_areas = _filter_osm_list_to_bbox(
vegetation_areas, min_lat, min_lon, max_lat, max_lon,
max_count=5000,
)
_clog(f"Filtered OSM data: {len(buildings)} bldgs, {len(streets)} streets, "
f"{len(water_bodies)} water, {len(vegetation_areas)} veg")
# Build spatial index for buildings
spatial_idx: Optional[SpatialIndex] = None
@@ -499,19 +526,33 @@ class CoverageService:
progress_fn("Loading terrain", 0.25)
await asyncio.sleep(0)
t_terrain = time.time()
# Check for missing tiles before attempting download
radius_km = settings.radius / 1000.0
missing_tiles = self.terrain.get_missing_tiles(site.lat, site.lon, radius_km)
if missing_tiles:
_clog(f"⚠ Missing terrain tiles: {missing_tiles} - will attempt download")
tile_names = await self.terrain.ensure_tiles_for_bbox(
min_lat, min_lon, max_lat, max_lon
)
for tn in tile_names:
self.terrain._load_tile(tn)
# Check what actually loaded
loaded_tiles = [tn for tn in tile_names if tn in self.terrain._tile_cache]
failed_tiles = [tn for tn in tile_names if tn not in self.terrain._tile_cache]
if failed_tiles:
_clog(f"⚠ TERRAIN WARNING: Failed to load tiles {failed_tiles}. "
"Coverage accuracy reduced - using flat terrain for affected areas.")
site_elevation = self.terrain.get_elevation_sync(site.lat, site.lon)
point_elevations = {}
for lat, lon in grid:
point_elevations[(lat, lon)] = self.terrain.get_elevation_sync(lat, lon)
terrain_time = time.time() - t_terrain
_clog(f"Tiles: {len(loaded_tiles)}/{len(tile_names)} loaded, site elev: {site_elevation:.0f}m, "
f"pre-computed {len(grid)} elevations")
_clog(f"━━━ PHASE 2 done: {terrain_time:.1f}s ━━━")
@@ -522,8 +563,11 @@ class CoverageService:
from app.services.gpu_service import gpu_service
t_gpu = time.time()
# Import GPU modules here (main process only) to avoid CUDA context issues in workers
from app.services.gpu_backend import gpu_manager
xp = gpu_manager.get_array_module()
grid_lats = xp.array([lat for lat, lon in grid], dtype=xp.float64)
grid_lons = xp.array([lon for lat, lon in grid], dtype=xp.float64)
pre_distances = gpu_service.precompute_distances(
grid_lats, grid_lons, site.lat, site.lon
@@ -532,6 +576,9 @@ class CoverageService:
pre_distances, site.frequency, site.height,
environment=getattr(settings, 'environment', 'urban'),
)
gpu_time = time.time() - t_gpu
backend_name = "GPU (CUDA)" if gpu_manager.gpu_available else "CPU (NumPy)"
_clog(f"Precomputed {len(grid)} distances+path_loss on {backend_name} in {gpu_time:.2f}s")
# Build lookup dict for point loop
precomputed = {}
@@ -548,6 +595,60 @@ class CoverageService:
f"({len(grid)} points, model={selected_model.name}, freq={site.frequency}MHz, "
f"env={env}, backend={'GPU' if gpu_service.available else 'CPU/NumPy'}) ━━━")
# ━━━ PHASE 2.6: GPU-Vectorized Terrain LOS + Diffraction ━━━
# This replaces the per-point LOS calculation in workers
t_batch_terrain = time.time()
grid_elevs = np.array([point_elevations.get((lat, lon), 0.0) for lat, lon in grid])
if settings.use_terrain and gpu_service.available:
_clog("━━━ PHASE 2.6: Batch terrain LOS (GPU) ━━━")
has_los_arr, terrain_loss_arr = gpu_service.batch_terrain_los(
site.lat, site.lon, site.height, site_elevation,
grid_lats.get() if hasattr(grid_lats, 'get') else grid_lats,
grid_lons.get() if hasattr(grid_lons, 'get') else grid_lons,
grid_elevs,
pre_distances,
site.frequency,
self.terrain._tile_cache,
num_samples=30,
)
batch_terrain_time = time.time() - t_batch_terrain
blocked_count = np.sum(~has_los_arr)
_clog(f"━━━ PHASE 2.6 done: {batch_terrain_time:.2f}s "
f"({blocked_count}/{len(grid)} blocked by terrain) ━━━")
# Add terrain results to precomputed dict
for i, (lat, lon) in enumerate(grid):
if (lat, lon) in precomputed:
precomputed[(lat, lon)]['has_los'] = bool(has_los_arr[i])
precomputed[(lat, lon)]['terrain_loss'] = float(terrain_loss_arr[i])
else:
_clog("━━━ PHASE 2.6: Skipped (terrain disabled or no GPU) ━━━")
# Initialize with defaults
for lat, lon in grid:
if (lat, lon) in precomputed:
precomputed[(lat, lon)]['has_los'] = True
precomputed[(lat, lon)]['terrain_loss'] = 0.0
# ━━━ PHASE 2.7: GPU-Vectorized Antenna Pattern ━━━
if site.azimuth is not None and site.beamwidth and gpu_service.available:
t_batch_antenna = time.time()
antenna_loss_arr = gpu_service.batch_antenna_pattern(
site.lat, site.lon,
grid_lats.get() if hasattr(grid_lats, 'get') else grid_lats,
grid_lons.get() if hasattr(grid_lons, 'get') else grid_lons,
site.azimuth,
site.beamwidth,
)
for i, (lat, lon) in enumerate(grid):
if (lat, lon) in precomputed:
precomputed[(lat, lon)]['antenna_loss'] = float(antenna_loss_arr[i])
_clog(f"━━━ PHASE 2.7: Batch antenna pattern done: {time.time() - t_batch_antenna:.2f}s ━━━")
else:
for lat, lon in grid:
if (lat, lon) in precomputed:
precomputed[(lat, lon)]['antenna_loss'] = 0.0
# ━━━ PHASE 3: Point calculation ━━━
dominant_path_service._log_count = 0 # Reset diagnostic counter
t_points = time.time()
@@ -650,10 +751,15 @@ class CoverageService:
sites: List[SiteParams],
settings: CoverageSettings,
cancel_token: Optional[CancellationToken] = None,
progress_fn: Optional[Callable[[str, float], None]] = None,
tile_callback: Optional[Callable] = None,
) -> List[CoveragePoint]:
"""
Calculate combined coverage from multiple sites
Best server (strongest signal) wins at each point
progress_fn(phase, pct): optional callback for progress updates (0.0-1.0).
tile_callback: forwarded to calculate_coverage for progressive results.
"""
if not sites:
return []
@@ -661,10 +767,27 @@ class CoverageService:
# Apply preset once
settings = apply_preset(settings)
# Per-site progress tracking for averaged overall progress
num_sites = len(sites)
_site_progress = [0.0] * num_sites
def _make_site_progress(idx: int):
"""Create a progress_fn for one site that reports scaled overall progress."""
def _site_fn(phase: str, pct: float, _eta=None):
_site_progress[idx] = pct
if progress_fn:
overall = sum(_site_progress) / num_sites
progress_fn(f"Site {idx + 1}/{num_sites}: {phase}", overall)
return _site_fn
# Get all individual coverages
all_coverages = await asyncio.gather(*[
self.calculate_coverage(
site, settings, cancel_token,
progress_fn=_make_site_progress(i) if progress_fn else None,
tile_callback=tile_callback,
)
for i, site in enumerate(sites)
])
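The per-site progress averaging above reduces to a small closure pattern: each site reports its own 0–1 fraction, and the overall figure is the mean across sites. A self-contained sketch (names like `make_site_progress` are illustrative):

```python
def make_site_progress(idx, site_progress, report):
    """Build a per-site progress callback that reports averaged overall progress."""
    def site_fn(phase, pct):
        site_progress[idx] = pct
        report(f"Site {idx + 1}/{len(site_progress)}: {phase}",
               sum(site_progress) / len(site_progress))
    return site_fn

updates = []
progress = [0.0, 0.0]
fns = [make_site_progress(i, progress, lambda ph, p: updates.append((ph, p)))
       for i in range(2)]

fns[0]("Loading terrain", 0.5)   # overall = (0.5 + 0.0) / 2 = 0.25
fns[1]("Calculating", 1.0)       # overall = (0.5 + 1.0) / 2 = 0.75
```

Because `asyncio.gather` runs the sites concurrently, the averaged figure moves monotonically even when sites finish out of order.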
# Combine by best signal
@@ -679,6 +802,293 @@ class CoverageService:
return list(point_map.values())
async def calculate_coverage_tiled(
self,
site: SiteParams,
settings: CoverageSettings,
cancel_token: Optional[CancellationToken] = None,
progress_fn: Optional[Callable[[str, float], None]] = None,
tile_callback: Optional[Callable] = None,
) -> List[CoveragePoint]:
"""Tile-based coverage for large radius (>10km).
Splits the coverage area into 5km sub-tiles. Each tile loads its
own OSM data and terrain, processes its grid points, then frees
memory before moving to the next tile. This keeps peak RAM
bounded regardless of total coverage area.
tile_callback(points, tile_idx, total_tiles): async callback
invoked with partial results after each tile completes.
"""
from app.services.tile_processor import (
generate_tile_grid, partition_grid_to_tiles, get_adaptive_worker_count,
)
calc_start = time.time()
# NOTE: settings already has preset applied by calculate_coverage()
# Generate full adaptive grid (lightweight — just coordinate tuples)
grid = self._generate_grid(
site.lat, site.lon, settings.radius, settings.resolution,
)
_clog(f"Tiled mode: {len(grid)} total grid points, radius={settings.radius}m")
# Generate tiles and partition grid points
tiles = generate_tile_grid(site.lat, site.lon, settings.radius)
total_tiles = len(tiles)
tile_grids = partition_grid_to_tiles(grid, tiles)
_clog(f"Generated {total_tiles} tiles")
# Free full grid reference
del grid
# ── Pre-fetch buildings for inner zone (≤20km) ──
# This avoids re-reading the disk JSON cache (7-8s) per tile.
inner_radius_m = min(settings.radius, 20_000)
needs_osm = (settings.use_buildings
or getattr(settings, 'use_street_canyon', False)
or getattr(settings, 'use_water_reflection', False)
or getattr(settings, 'use_vegetation', False))
prefetched_buildings: List[Building] = []
prefetched_streets: list = []
prefetched_water: list = []
prefetched_vegetation: list = []
if needs_osm:
lat_delta = inner_radius_m / 111_320.0
lon_delta = inner_radius_m / (111_320.0 * max(math.cos(math.radians(site.lat)), 0.01))
inner_bbox = (
site.lat - lat_delta, site.lon - lon_delta,
site.lat + lat_delta, site.lon + lon_delta,
)
if progress_fn:
progress_fn("Pre-fetching map data", 0.02)
_clog(f"Pre-fetching OSM for inner zone ({inner_radius_m/1000:.0f}km)")
osm_prefetch = await self._fetch_osm_grid_aligned(
inner_bbox[0], inner_bbox[1], inner_bbox[2], inner_bbox[3],
settings,
)
prefetched_buildings = osm_prefetch.get("buildings", [])
prefetched_streets = osm_prefetch.get("streets", [])
prefetched_water = osm_prefetch.get("water_bodies", [])
prefetched_vegetation = osm_prefetch.get("vegetation_areas", [])
del osm_prefetch
_clog(f"Pre-fetched: {len(prefetched_buildings)} buildings, "
f"{len(prefetched_streets)} streets, "
f"{len(prefetched_water)} water, "
f"{len(prefetched_vegetation)} veg")
# Clear singleton memory cache — we hold our own reference
self.buildings._memory_cache.clear()
gc.collect()
site_elevation: Optional[float] = None
all_points: List[CoveragePoint] = []
# FSPL pre-check: compute minimum distance to each tile and estimate
# free-space signal. Skip tiles where even best-case FSPL < min_signal.
eirp_dbm = site.power + site.gain
min_signal = getattr(settings, 'min_signal', -130)
tiles_skipped_fspl = 0
for tile_idx, tile in enumerate(tiles):
if cancel_token and cancel_token.is_cancelled:
_clog("Tiled calculation cancelled")
break
tile_grid = tile_grids.get(tile.index, [])
if not tile_grid:
continue
tile_start = time.time()
min_lat, min_lon, max_lat, max_lon = tile.bbox
# Quick FSPL check: closest edge of tile to site
clamp_lat = max(min_lat, min(site.lat, max_lat))
clamp_lon = max(min_lon, min(site.lon, max_lon))
closest_dist = TerrainService.haversine_distance(
site.lat, site.lon, clamp_lat, clamp_lon,
)
if closest_dist > 500: # Skip check for tiles containing the site
fspl_db = 20 * math.log10(closest_dist) + 20 * math.log10(site.frequency * 1e6) - 147.55
best_rsrp = eirp_dbm - fspl_db
if best_rsrp < min_signal:
tiles_skipped_fspl += 1
continue
_clog(f"━━━ Tile {tile_idx + 1}/{total_tiles}: "
f"{len(tile_grid)} points ━━━")
# Per-tile progress mapped to overall progress range
def _tile_progress(phase: str, pct: float, _idx=tile_idx):
if progress_fn:
overall = (_idx + pct) / total_tiles
progress_fn(
f"Tile {_idx + 1}/{total_tiles}: {phase}", overall,
)
# ── 1. Filter pre-fetched OSM data for this tile ──
tile_center_lat = (min_lat + max_lat) / 2
tile_center_lon = (min_lon + max_lon) / 2
tile_dist_m = TerrainService.haversine_distance(
site.lat, site.lon, tile_center_lat, tile_center_lon,
)
skip_buildings = tile_dist_m > 20_000
_tile_progress("Filtering map data", 0.10)
await asyncio.sleep(0)
if skip_buildings:
buildings: list = []
streets: list = []
water_bodies: list = []
vegetation_areas: list = []
else:
# Fast in-memory filter from pre-fetched data (no disk I/O)
buildings = _filter_buildings_to_bbox(
prefetched_buildings, min_lat, min_lon, max_lat, max_lon,
site.lat, site.lon, _clog,
max_buildings=5000,
)
streets = _filter_osm_list_to_bbox(
prefetched_streets, min_lat, min_lon, max_lat, max_lon,
)
water_bodies = _filter_osm_list_to_bbox(
prefetched_water, min_lat, min_lon, max_lat, max_lon,
)
vegetation_areas = _filter_osm_list_to_bbox(
prefetched_vegetation, min_lat, min_lon, max_lat, max_lon,
max_count=5000,
)
spatial_idx: Optional[SpatialIndex] = None
if buildings:
cache_key = f"tile_{tile_idx}_{min_lat:.3f},{min_lon:.3f}"
spatial_idx = get_spatial_index(cache_key, buildings)
# ── 2. Pre-load terrain for this tile ──
_tile_progress("Loading terrain", 0.25)
await asyncio.sleep(0)
tile_names = await self.terrain.ensure_tiles_for_bbox(
min_lat, min_lon, max_lat, max_lon,
)
for tn in tile_names:
self.terrain._load_tile(tn)
if site_elevation is None:
site_elevation = self.terrain.get_elevation_sync(
site.lat, site.lon,
)
point_elevations = {}
for lat, lon in tile_grid:
point_elevations[(lat, lon)] = self.terrain.get_elevation_sync(
lat, lon,
)
# ── 3. Precompute distances / path loss ──
_tile_progress("Pre-computing propagation", 0.35)
await asyncio.sleep(0)
from app.services.gpu_service import gpu_service
from app.services.gpu_backend import gpu_manager
t_gpu = time.time()
xp = gpu_manager.get_array_module()
grid_lats = xp.array([lat for lat, _lon in tile_grid], dtype=xp.float64)
grid_lons = xp.array([_lon for _lat, _lon in tile_grid], dtype=xp.float64)
pre_distances = gpu_service.precompute_distances(
grid_lats, grid_lons, site.lat, site.lon,
)
pre_path_loss = gpu_service.precompute_path_loss(
pre_distances, site.frequency, site.height,
environment=getattr(settings, 'environment', 'urban'),
)
gpu_time = time.time() - t_gpu
backend_name = "GPU (CUDA)" if gpu_manager.gpu_available else "CPU (NumPy)"
_clog(f"Tile {tile_idx+1}: precomputed {len(tile_grid)} pts on {backend_name} in {gpu_time:.2f}s")
precomputed = {}
for i, (lat, lon) in enumerate(tile_grid):
precomputed[(lat, lon)] = {
'distance': float(pre_distances[i]),
'path_loss': float(pre_path_loss[i]),
}
# ── 4. Calculate points (parallel with adaptive workers) ──
_tile_progress("Calculating coverage", 0.40)
await asyncio.sleep(0)
num_workers = get_adaptive_worker_count(
settings.radius, get_cpu_count(),
)
use_parallel = len(tile_grid) > 100 and num_workers > 1
if use_parallel:
loop = asyncio.get_event_loop()
result_dicts, _timing = await loop.run_in_executor(
None,
lambda: calculate_coverage_parallel(
tile_grid, point_elevations,
site.model_dump(), settings.model_dump(),
self.terrain._tile_cache,
buildings, streets, water_bodies, vegetation_areas,
site_elevation, num_workers, _clog,
cancel_token=cancel_token,
precomputed=precomputed,
),
)
tile_points = [CoveragePoint(**d) for d in result_dicts]
else:
loop = asyncio.get_event_loop()
tile_points, _timing = await loop.run_in_executor(
None,
lambda: self._run_point_loop(
tile_grid, site, settings, buildings, streets,
spatial_idx, water_bodies, vegetation_areas,
site_elevation, point_elevations,
cancel_token=cancel_token,
precomputed=precomputed,
),
)
all_points.extend(tile_points)
# Send partial results via callback
if tile_callback and tile_points:
await tile_callback(tile_points, tile_idx, total_tiles)
tile_time = time.time() - tile_start
_clog(f"Tile {tile_idx + 1}/{total_tiles} done: "
f"{len(tile_points)} points in {tile_time:.1f}s")
# ── 5. Free per-tile memory ──
del buildings, streets, water_bodies, vegetation_areas
del spatial_idx, point_elevations, precomputed
del pre_distances, pre_path_loss, grid_lats, grid_lons
gc.collect()
# Free pre-fetched OSM data
del prefetched_buildings, prefetched_streets
del prefetched_water, prefetched_vegetation
gc.collect()
total_time = time.time() - calc_start
_clog(f"━━━ Tiled calculation complete: "
f"{len(all_points)} points in {total_time:.1f}s "
f"({tiles_skipped_fspl} tiles skipped by FSPL pre-check) ━━━")
if progress_fn:
progress_fn("Finalizing", 0.95)
await asyncio.sleep(0)
return all_points
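The completion log above counts tiles skipped by an FSPL pre-check. A minimal sketch of that idea (the helper name and parameters are illustrative, not the repo's actual API): if even free-space loss alone at the tile's nearest point puts RSRP below `min_signal`, no interior point can pass, so the tile is skipped whole.

```python
import math

def tile_skippable_by_fspl(tx_power_dbm, tx_gain_dbi, min_signal_dbm,
                           freq_mhz, nearest_dist_m):
    """Best-case pre-check: FSPL at the tile's nearest point with zero
    obstruction losses; if that already misses min_signal, skip the tile."""
    d_km = max(nearest_dist_m / 1000.0, 0.1)
    fspl = 20.0 * math.log10(d_km) + 20.0 * math.log10(freq_mhz) + 32.45
    return tx_power_dbm + tx_gain_dbi - fspl < min_signal_dbm

# Tile edge 10 km out at 3500 MHz, 43 dBm TX power + 17 dBi gain:
tile_skippable_by_fspl(43, 17, -110, 3500, 10_000)
```

With a -110 dBm threshold the tile survives; tightening the threshold to -60 dBm makes it skippable.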
# Adaptive resolution zone boundaries (meters)
_ADAPTIVE_ZONES = [
(0, 2000), # Inner: full user resolution
@@ -751,7 +1161,8 @@ class CoverageService:
points = []
timing = {"los": 0.0, "buildings": 0.0, "antenna": 0.0,
"dominant_path": 0.0, "street_canyon": 0.0,
"reflection": 0.0, "vegetation": 0.0}
"reflection": 0.0, "vegetation": 0.0,
"lod_none": 0, "lod_simplified": 0, "lod_full": 0}
total = len(grid)
log_interval = max(1, total // 20)
@@ -774,6 +1185,9 @@ class CoverageService:
timing,
precomputed_distance=pre.get('distance') if pre else None,
precomputed_path_loss=pre.get('path_loss') if pre else None,
precomputed_has_los=pre.get('has_los') if pre else None,
precomputed_terrain_loss=pre.get('terrain_loss') if pre else None,
precomputed_antenna_loss=pre.get('antenna_loss') if pre else None,
)
if point.rsrp >= settings.min_signal:
points.append(point)
@@ -796,6 +1210,9 @@ class CoverageService:
timing: dict,
precomputed_distance: Optional[float] = None,
precomputed_path_loss: Optional[float] = None,
precomputed_has_los: Optional[bool] = None,
precomputed_terrain_loss: Optional[float] = None,
precomputed_antenna_loss: Optional[float] = None,
) -> CoveragePoint:
"""Fully synchronous point calculation. All terrain tiles must be pre-loaded."""
@@ -822,29 +1239,37 @@ class CoverageService:
)
path_loss = model.calculate(prop_input).path_loss_db
# Antenna pattern
antenna_loss = 0.0
if site.azimuth is not None and site.beamwidth:
# Antenna pattern (use precomputed if available)
if precomputed_antenna_loss is not None:
antenna_loss = precomputed_antenna_loss
elif site.azimuth is not None and site.beamwidth:
t0 = time.time()
antenna_loss = self._antenna_pattern_loss(
site.lat, site.lon, lat, lon, site.azimuth, site.beamwidth
)
timing["antenna"] += time.time() - t0
else:
antenna_loss = 0.0
# Terrain LOS (sync)
terrain_loss = 0.0
has_los = True
if settings.use_terrain:
# Terrain LOS (use precomputed if available)
if precomputed_has_los is not None and precomputed_terrain_loss is not None:
has_los = precomputed_has_los
terrain_loss = precomputed_terrain_loss
elif settings.use_terrain:
t0 = time.time()
los_result = self.los.check_line_of_sight_sync(
site.lat, site.lon, site.height, lat, lon, 1.5
)
has_los = los_result["has_los"]
terrain_loss = 0.0
if not has_los:
terrain_loss = self._diffraction_loss(
los_result["clearance"], site.frequency
)
timing["los"] += time.time() - t0
else:
has_los = True
terrain_loss = 0.0
# Building loss (spatial index)
building_loss = 0.0
@@ -901,7 +1326,6 @@ class CoverageService:
# LOD_NONE: skip dominant path entirely for distant points (>3km)
if lod == LODLevel.NONE:
timing.setdefault("lod_none", 0)
timing["lod_none"] += 1
else:
t0 = time.time()
@@ -909,12 +1333,10 @@ class CoverageService:
# LOD_SIMPLIFIED: limit buildings for mid-range points (1.5-3km)
dp_buildings = nearby_buildings
if lod == LODLevel.SIMPLIFIED:
timing.setdefault("lod_simplified", 0)
timing["lod_simplified"] += 1
if len(nearby_buildings) > SIMPLIFIED_MAX_BUILDINGS:
dp_buildings = nearby_buildings[:SIMPLIFIED_MAX_BUILDINGS]
else:
timing.setdefault("lod_full", 0)
timing["lod_full"] += 1
# nearby_buildings already filtered via spatial index —
@@ -1040,7 +1462,8 @@ class CoverageService:
rsrp = (site.power + site.gain - path_loss - antenna_loss
- terrain_loss - building_loss - veg_loss
- rain_loss - indoor_loss - atmo_loss
+ reflection_gain)
+ reflection_gain
- settings.fading_margin)
return CoveragePoint(
lat=lat, lon=lon, rsrp=rsrp, distance=distance,
@@ -1079,14 +1502,18 @@ class CoverageService:
lat2: float, lon2: float
) -> float:
"""Calculate bearing from point 1 to point 2 (degrees)"""
lat1, lon1, lat2, lon2 = map(np.radians, [lat1, lon1, lat2, lon2])
# Use math for scalar operations (faster than numpy/cupy for single values)
lat1_r = math.radians(lat1)
lon1_r = math.radians(lon1)
lat2_r = math.radians(lat2)
lon2_r = math.radians(lon2)
dlon = lon2 - lon1
dlon = lon2_r - lon1_r
x = np.sin(dlon) * np.cos(lat2)
y = np.cos(lat1) * np.sin(lat2) - np.sin(lat1) * np.cos(lat2) * np.cos(dlon)
x = math.sin(dlon) * math.cos(lat2_r)
y = math.cos(lat1_r) * math.sin(lat2_r) - math.sin(lat1_r) * math.cos(lat2_r) * math.cos(dlon)
bearing = np.degrees(np.arctan2(x, y))
bearing = math.degrees(math.atan2(x, y))
return (bearing + 360) % 360
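The scalar rewrite above can be exercised stand-alone; this free-function mirror (illustrative, not the class method) shows why `math.*` is used: it skips numpy's per-call array dispatch for single values while producing the same geometry.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Scalar great-circle bearing in degrees, 0 = north, clockwise."""
    lat1_r, lon1_r = math.radians(lat1), math.radians(lon1)
    lat2_r, lon2_r = math.radians(lat2), math.radians(lon2)
    dlon = lon2_r - lon1_r
    x = math.sin(dlon) * math.cos(lat2_r)
    y = (math.cos(lat1_r) * math.sin(lat2_r)
         - math.sin(lat1_r) * math.cos(lat2_r) * math.cos(dlon))
    return (math.degrees(math.atan2(x, y)) + 360) % 360

bearing_deg(0.0, 0.0, 0.0, 1.0)  # due east from the origin
```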
@@ -1186,7 +1613,8 @@ class CoverageService:
)
rsrp = (site.power + site.gain - path_loss
- antenna_loss - terrain_loss)
- antenna_loss - terrain_loss
- settings.fading_margin)
if rsrp >= settings.min_signal:
points.append(CoveragePoint(

View File

@@ -0,0 +1,275 @@
"""
GPU Backend Manager — detects and manages compute backends.
Supports:
- CUDA via CuPy
- OpenCL via PyOpenCL (future)
- CPU via NumPy (always available)
Usage:
from app.services.gpu_backend import gpu_manager
xp = gpu_manager.get_array_module() # cupy or numpy
status = gpu_manager.get_status()
"""
import logging
from enum import Enum
from dataclasses import dataclass, field
from typing import Any, Optional
import numpy as np
logger = logging.getLogger(__name__)
class GPUBackend(str, Enum):
CUDA = "cuda"
OPENCL = "opencl"
CPU = "cpu"
@dataclass
class GPUDevice:
backend: GPUBackend
index: int
name: str
memory_mb: int
extra: dict = field(default_factory=dict)
class GPUManager:
"""Singleton GPU manager with device detection and selection."""
def __init__(self):
self._devices: list[GPUDevice] = []
self._active_backend: GPUBackend = GPUBackend.CPU
self._active_device: Optional[GPUDevice] = None
self._cupy = None
self._detect_devices()
def _detect_devices(self):
"""Probe available GPU backends."""
# Always add CPU
cpu_device = GPUDevice(
backend=GPUBackend.CPU,
index=0,
name="CPU (NumPy)",
memory_mb=0,
)
self._devices.append(cpu_device)
# Try CuPy / CUDA
try:
import cupy as cp
device_count = cp.cuda.runtime.getDeviceCount()
for i in range(device_count):
props = cp.cuda.runtime.getDeviceProperties(i)
name = props["name"]
if isinstance(name, bytes):
name = name.decode()
mem_mb = props["totalGlobalMem"] // (1024 * 1024)
cuda_ver = cp.cuda.runtime.runtimeGetVersion()
device = GPUDevice(
backend=GPUBackend.CUDA,
index=i,
name=str(name),
memory_mb=mem_mb,
extra={"cuda_version": cuda_ver},
)
self._devices.append(device)
logger.info(f"[GPU] CUDA device {i}: {name} ({mem_mb} MB)")
if device_count > 0:
self._cupy = cp
except ImportError:
logger.info("[GPU] CuPy not installed — CUDA unavailable")
except Exception as e:
logger.warning(f"[GPU] CuPy probe error: {e}")
# Try PyOpenCL (future — stub for detection only)
try:
import pyopencl as cl
platforms = cl.get_platforms()
for plat in platforms:
for dev in plat.get_devices():
mem_mb = dev.global_mem_size // (1024 * 1024)
device = GPUDevice(
backend=GPUBackend.OPENCL,
index=len([d for d in self._devices if d.backend == GPUBackend.OPENCL]),
name=dev.name.strip(),
memory_mb=mem_mb,
extra={"platform": plat.name.strip()},
)
self._devices.append(device)
logger.info(f"[GPU] OpenCL device: {device.name} ({mem_mb} MB)")
except ImportError:
pass
except Exception as e:
logger.debug(f"[GPU] OpenCL probe error: {e}")
# Auto-select best: prefer CUDA > OpenCL > CPU
cuda_devices = [d for d in self._devices if d.backend == GPUBackend.CUDA]
if cuda_devices:
self._active_backend = GPUBackend.CUDA
self._active_device = cuda_devices[0]
logger.info(f"[GPU] Active backend: CUDA — {self._active_device.name}")
else:
self._active_backend = GPUBackend.CPU
self._active_device = cpu_device
logger.info("[GPU] Active backend: CPU (NumPy)")
@property
def gpu_available(self) -> bool:
return self._active_backend != GPUBackend.CPU
def get_array_module(self) -> Any:
"""Return cupy (if CUDA active) or numpy."""
if self._active_backend == GPUBackend.CUDA and self._cupy is not None:
return self._cupy
return np
def to_cpu(self, arr: Any) -> np.ndarray:
"""Transfer array to CPU numpy."""
if hasattr(arr, 'get'):
return arr.get()
return np.asarray(arr)
def get_status(self) -> dict:
"""Full status dict for API."""
return {
"active_backend": self._active_backend.value,
"active_device": {
"backend": self._active_device.backend.value,
"index": self._active_device.index,
"name": self._active_device.name,
"memory_mb": self._active_device.memory_mb,
} if self._active_device else None,
"gpu_available": self.gpu_available,
"available_devices": [
{
"backend": d.backend.value,
"index": d.index,
"name": d.name,
"memory_mb": d.memory_mb,
}
for d in self._devices
],
}
def get_devices(self) -> list[dict]:
"""Device list for API."""
return [
{
"backend": d.backend.value,
"index": d.index,
"name": d.name,
"memory_mb": d.memory_mb,
}
for d in self._devices
]
def get_diagnostics(self) -> dict:
"""Full diagnostic info for troubleshooting GPU detection."""
import sys
import platform
import subprocess
is_wsl = "microsoft" in platform.release().lower()
diag = {
"python_version": sys.version,
"python_executable": sys.executable,
"platform": platform.platform(),
"is_wsl": is_wsl,
"numpy": {"version": np.__version__},
"cuda": {},
"opencl": {},
"nvidia_smi": None,
"detected_devices": len(self._devices),
"active_backend": self._active_backend.value,
}
# Check nvidia-smi (works even without CuPy)
try:
result = subprocess.run(
["nvidia-smi", "--query-gpu=name,memory.total,driver_version", "--format=csv,noheader"],
capture_output=True, text=True, timeout=5
)
if result.returncode == 0 and result.stdout.strip():
diag["nvidia_smi"] = result.stdout.strip()
except Exception:
diag["nvidia_smi"] = "not found or error"
# Check CuPy/CUDA
try:
import cupy as cp
diag["cuda"]["cupy_version"] = cp.__version__
diag["cuda"]["cuda_runtime_version"] = cp.cuda.runtime.runtimeGetVersion()
diag["cuda"]["device_count"] = cp.cuda.runtime.getDeviceCount()
for i in range(diag["cuda"]["device_count"]):
props = cp.cuda.runtime.getDeviceProperties(i)
name = props["name"]
if isinstance(name, bytes):
name = name.decode()
diag["cuda"][f"device_{i}"] = {
"name": str(name),
"memory_mb": props["totalGlobalMem"] // (1024 * 1024),
"compute_capability": f"{props['major']}.{props['minor']}",
}
except ImportError:
diag["cuda"]["error"] = "CuPy not installed"
if is_wsl:
diag["cuda"]["install_hint"] = "pip3 install cupy-cuda12x --break-system-packages"
else:
diag["cuda"]["install_hint"] = "pip install cupy-cuda12x"
except Exception as e:
diag["cuda"]["error"] = str(e)
# Check PyOpenCL
try:
import pyopencl as cl
diag["opencl"]["pyopencl_version"] = cl.VERSION_TEXT
diag["opencl"]["platforms"] = []
for p in cl.get_platforms():
platform_info = {"name": p.name.strip(), "devices": []}
for d in p.get_devices():
platform_info["devices"].append({
"name": d.name.strip(),
"type": cl.device_type.to_string(d.type),
"memory_mb": d.global_mem_size // (1024 * 1024),
"compute_units": d.max_compute_units,
})
diag["opencl"]["platforms"].append(platform_info)
except ImportError:
diag["opencl"]["error"] = "PyOpenCL not installed"
if is_wsl:
diag["opencl"]["install_hint"] = "pip3 install pyopencl --break-system-packages"
else:
diag["opencl"]["install_hint"] = "pip install pyopencl"
except Exception as e:
diag["opencl"]["error"] = str(e)
return diag
def set_device(self, backend: str, index: int = 0) -> dict:
"""Switch active compute device."""
target_backend = GPUBackend(backend)
candidates = [d for d in self._devices
if d.backend == target_backend and d.index == index]
if not candidates:
raise ValueError(f"No device found: backend={backend}, index={index}")
self._active_device = candidates[0]
self._active_backend = target_backend
if target_backend == GPUBackend.CUDA and self._cupy is not None:
self._cupy.cuda.Device(index).use()
logger.info(f"[GPU] Switched to: {self._active_device.name} ({target_backend.value})")
return {
"backend": self._active_backend.value,
"device": self._active_device.name,
}
# Singleton
gpu_manager = GPUManager()
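Consumers follow the pattern in the module docstring: ask the manager for an array module, compute, then transfer back. A minimal CPU-only stand-in (hypothetical class, for illustration only) shows that fallback path without CuPy installed:

```python
import numpy as np

class _CpuOnlyManager:
    """Hypothetical stand-in for GPUManager's CPU fallback path."""
    gpu_available = False

    def get_array_module(self):
        # The real manager returns cupy when a CUDA device was detected
        return np

    def to_cpu(self, arr):
        # cupy arrays expose .get(); numpy arrays pass through unchanged
        return arr.get() if hasattr(arr, "get") else np.asarray(arr)

mgr = _CpuOnlyManager()
xp = mgr.get_array_module()
arr = xp.asarray([1.0, 2.0]) * 2
result = mgr.to_cpu(arr)
```

The same caller code runs unmodified on GPU, since cupy mirrors the numpy API surface used here.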

View File

@@ -3,7 +3,7 @@ GPU-accelerated computation service using CuPy.
Falls back to NumPy when CuPy/CUDA is not available.
Provides vectorized batch operations for coverage calculation:
- Haversine distance (site → all grid points)
- Haversine distance (site -> all grid points)
- Okumura-Hata path loss (all distances at once)
Usage:
@@ -11,48 +11,29 @@ Usage:
"""
import numpy as np
from typing import Dict, Any, Optional
from typing import Dict, Any
# ── Try CuPy import ──
GPU_AVAILABLE = False
GPU_INFO: Optional[Dict[str, Any]] = None
cp = None
try:
import cupy as _cp
device_count = _cp.cuda.runtime.getDeviceCount()
if device_count > 0:
cp = _cp
GPU_AVAILABLE = True
props = _cp.cuda.runtime.getDeviceProperties(0)
GPU_INFO = {
"name": props["name"].decode() if isinstance(props["name"], bytes) else str(props["name"]),
"memory_mb": props["totalGlobalMem"] // (1024 * 1024),
"cuda_version": _cp.cuda.runtime.runtimeGetVersion(),
}
print(f"[GPU] CUDA available: {GPU_INFO['name']} ({GPU_INFO['memory_mb']} MB)", flush=True)
else:
print("[GPU] No CUDA devices found", flush=True)
except ImportError:
print("[GPU] CuPy not installed — using CPU/NumPy", flush=True)
print("[GPU] To enable GPU acceleration, install CuPy:", flush=True)
print("[GPU] For CUDA 12.x: pip install cupy-cuda12x", flush=True)
print("[GPU] For CUDA 11.x: pip install cupy-cuda11x", flush=True)
print("[GPU] Check CUDA version: nvidia-smi", flush=True)
except Exception as e:
print(f"[GPU] CuPy error: {e} — GPU acceleration disabled", flush=True)
from app.services.gpu_backend import gpu_manager
# Backward-compatible exports
GPU_AVAILABLE = gpu_manager.gpu_available
GPU_INFO: Dict[str, Any] | None = (
{
"name": gpu_manager._active_device.name,
"memory_mb": gpu_manager._active_device.memory_mb,
**gpu_manager._active_device.extra,
}
if gpu_manager.gpu_available and gpu_manager._active_device
else None
)
# Array module: cupy on GPU, numpy on CPU
xp = cp if GPU_AVAILABLE else np
xp = gpu_manager.get_array_module()
def _to_cpu(arr):
"""Transfer array to CPU numpy if on GPU."""
if GPU_AVAILABLE and hasattr(arr, 'get'):
return arr.get()
return np.asarray(arr)
return gpu_manager.to_cpu(arr)
class GPUService:
@@ -60,13 +41,13 @@ class GPUService:
@property
def available(self) -> bool:
return GPU_AVAILABLE
return gpu_manager.gpu_available
def get_info(self) -> Dict[str, Any]:
"""Return GPU info dict for system endpoint."""
if not GPU_AVAILABLE:
if not gpu_manager.gpu_available:
return {"available": False, "name": None, "memory_mb": None}
return {"available": True, **GPU_INFO}
return {"available": True, **(GPU_INFO or {})}
def precompute_distances(
self,
@@ -79,16 +60,17 @@ class GPUService:
Returns distances in meters as a CPU numpy array.
"""
lat1 = xp.radians(xp.asarray(grid_lats, dtype=xp.float64))
lon1 = xp.radians(xp.asarray(grid_lons, dtype=xp.float64))
lat2 = xp.radians(xp.float64(site_lat))
lon2 = xp.radians(xp.float64(site_lon))
_xp = gpu_manager.get_array_module()
lat1 = _xp.radians(_xp.asarray(grid_lats, dtype=_xp.float64))
lon1 = _xp.radians(_xp.asarray(grid_lons, dtype=_xp.float64))
lat2 = _xp.radians(_xp.float64(site_lat))
lon2 = _xp.radians(_xp.float64(site_lon))
dlat = lat2 - lat1
dlon = lon2 - lon1
a = xp.sin(dlat / 2) ** 2 + xp.cos(lat1) * xp.cos(lat2) * xp.sin(dlon / 2) ** 2
c = 2 * xp.arcsin(xp.sqrt(a))
a = _xp.sin(dlat / 2) ** 2 + _xp.cos(lat1) * _xp.cos(lat2) * _xp.sin(dlon / 2) ** 2
c = 2 * _xp.arcsin(_xp.sqrt(a))
distances = 6371000.0 * c
return _to_cpu(distances)
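A plain-NumPy reference of the same haversine (same Earth radius, same argument ordering) is useful for cross-checking GPU output; the function name here is illustrative:

```python
import numpy as np

def haversine_m(site_lat, site_lon, grid_lats, grid_lons):
    """CPU reference for precompute_distances, in meters."""
    lat1 = np.radians(np.asarray(grid_lats, dtype=np.float64))
    lon1 = np.radians(np.asarray(grid_lons, dtype=np.float64))
    lat2 = np.radians(np.float64(site_lat))
    lon2 = np.radians(np.float64(site_lon))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = np.sin(dlat / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlon / 2) ** 2
    return 6371000.0 * 2 * np.arcsin(np.sqrt(a))

# One degree of longitude on the equator is roughly 111.19 km:
haversine_m(0.0, 1.0, np.array([0.0]), np.array([0.0]))[0]
```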
@@ -108,40 +90,41 @@ class GPUService:
Returns path loss in dB as a CPU numpy array.
"""
d_arr = xp.asarray(distances, dtype=xp.float64)
d_km = xp.maximum(d_arr / 1000.0, 0.1)
_xp = gpu_manager.get_array_module()
d_arr = _xp.asarray(distances, dtype=_xp.float64)
d_km = _xp.maximum(d_arr / 1000.0, 0.1)
freq = float(frequency_mhz)
h_tx = max(float(tx_height), 1.0)
h_rx = max(float(rx_height), 1.0)
log_f = xp.log10(xp.float64(freq))
log_hb = xp.log10(xp.float64(max(h_tx, 1.0)))
log_f = _xp.log10(_xp.float64(freq))
log_hb = _xp.log10(_xp.float64(max(h_tx, 1.0)))
if freq > 2000:
# Free-Space Path Loss: FSPL = 20*log10(d_km) + 20*log10(f) + 32.45
L = 20.0 * xp.log10(d_km) + 20.0 * log_f + 32.45
L = 20.0 * _xp.log10(d_km) + 20.0 * log_f + 32.45
elif freq > 1500:
# COST-231 Hata: extends Okumura-Hata to 1500-2000 MHz
a_hm = (1.1 * log_f - 0.7) * h_rx - (1.56 * log_f - 0.8)
L = (46.3 + 33.9 * log_f - 13.82 * log_hb - a_hm
+ (44.9 - 6.55 * log_hb) * xp.log10(d_km))
+ (44.9 - 6.55 * log_hb) * _xp.log10(d_km))
if environment == "urban":
L += 3.0 # Metropolitan center correction
elif freq >= 150:
# Okumura-Hata: 150-1500 MHz
if environment == "urban" and freq >= 400:
a_hm = 3.2 * (xp.log10(11.75 * h_rx) ** 2) - 4.97
a_hm = 3.2 * (_xp.log10(11.75 * h_rx) ** 2) - 4.97
else:
a_hm = (1.1 * log_f - 0.7) * h_rx - (1.56 * log_f - 0.8)
L_urban = (69.55 + 26.16 * log_f - 13.82 * log_hb - a_hm
+ (44.9 - 6.55 * log_hb) * xp.log10(d_km))
+ (44.9 - 6.55 * log_hb) * _xp.log10(d_km))
if environment == "suburban":
L = L_urban - 2 * (xp.log10(freq / 28) ** 2) - 5.4
L = L_urban - 2 * (_xp.log10(freq / 28) ** 2) - 5.4
elif environment == "rural":
L = L_urban - 4.78 * (log_f ** 2) + 18.33 * log_f - 35.94
elif environment == "open":
@@ -152,10 +135,440 @@ class GPUService:
else:
# Very low frequency — Longley-Rice simplified (area mode)
# Use FSPL as baseline with terrain roughness correction
L = 20.0 * xp.log10(d_km) + 20.0 * log_f + 32.45 + 10.0
L = 20.0 * _xp.log10(d_km) + 20.0 * log_f + 32.45 + 10.0
return _to_cpu(L)
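The `freq > 2000` branch reduces to plain FSPL; a scalar version (illustrative helper) makes the constants easy to sanity-check, including the 0.1 km distance floor applied to `d_km` above:

```python
import math

def fspl_db(d_km, f_mhz):
    """Scalar free-space path loss matching the high-frequency branch."""
    return 20.0 * math.log10(max(d_km, 0.1)) + 20.0 * math.log10(f_mhz) + 32.45

fspl_db(1.0, 2400.0)  # 1 km at 2.4 GHz
```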
def batch_terrain_los(
self,
site_lat: float,
site_lon: float,
site_height: float,
site_elevation: float,
grid_lats: np.ndarray,
grid_lons: np.ndarray,
grid_elevations: np.ndarray,
distances: np.ndarray,
frequency_mhz: float,
terrain_cache: dict,
num_samples: int = 30,
) -> tuple[np.ndarray, np.ndarray]:
"""Batch compute terrain LOS and diffraction loss for all grid points.
This is the key GPU optimization — instead of sampling terrain profiles
one point at a time, we sample ALL profiles in parallel using vectorized
operations.
Args:
site_lat, site_lon: Site coordinates
site_height: Antenna height above ground (meters)
site_elevation: Ground elevation at site (meters)
grid_lats, grid_lons: All grid point coordinates
grid_elevations: Ground elevation at each grid point
distances: Pre-computed distances from site to each point (meters)
frequency_mhz: Frequency for diffraction calculation
terrain_cache: Dict[tile_name -> numpy array] from terrain_service
num_samples: Number of samples per terrain profile
Returns:
(has_los, terrain_loss) - both shape (N,)
has_los: boolean array, True if clear line of sight
terrain_loss: diffraction loss in dB (0 if has_los)
"""
_xp = gpu_manager.get_array_module()
N = len(grid_lats)
if N == 0:
return np.array([], dtype=bool), np.array([], dtype=np.float64)
# Convert inputs to GPU arrays
g_lats = _xp.asarray(grid_lats, dtype=_xp.float64)
g_lons = _xp.asarray(grid_lons, dtype=_xp.float64)
g_elevs = _xp.asarray(grid_elevations, dtype=_xp.float64)
g_dists = _xp.asarray(distances, dtype=_xp.float64)
# Heights
tx_total = float(site_elevation + site_height)
rx_height = 1.5 # Receiver height above ground
# Earth curvature constants
EARTH_RADIUS = 6371000.0
K_FACTOR = 4.0 / 3.0
effective_radius = K_FACTOR * EARTH_RADIUS
# Sample terrain profiles for all points at once
# Create sample positions: shape (N, num_samples)
t = _xp.linspace(0, 1, num_samples, dtype=_xp.float64) # (S,)
t = t.reshape(1, -1) # (1, S)
# Interpolate lat/lon for all sample points
# sample_lats[i, j] = site_lat + t[j] * (grid_lats[i] - site_lat)
dlat = g_lats.reshape(-1, 1) - site_lat # (N, 1)
dlon = g_lons.reshape(-1, 1) - site_lon # (N, 1)
sample_lats = site_lat + t * dlat # (N, S)
sample_lons = site_lon + t * dlon # (N, S)
# Sample distances along path: shape (N, S)
sample_dists = t * g_dists.reshape(-1, 1) # (N, S)
# Get terrain elevations for all samples
# This is the tricky part - we need to look up from the tile cache
# For GPU efficiency, we'll do this on CPU then transfer
sample_lats_cpu = _to_cpu(sample_lats).flatten()
sample_lons_cpu = _to_cpu(sample_lons).flatten()
# Batch elevation lookup from cache
sample_elevs_cpu = self._batch_elevation_lookup(
sample_lats_cpu, sample_lons_cpu, terrain_cache
)
sample_elevs = _xp.asarray(sample_elevs_cpu, dtype=_xp.float64).reshape(N, num_samples)
# Compute LOS line height at each sample point
# Linear interpolation from tx to rx
rx_total = g_elevs + rx_height # (N,)
los_heights = tx_total + t * (rx_total.reshape(-1, 1) - tx_total) # (N, S)
# Earth curvature correction at each sample
total_dist = g_dists.reshape(-1, 1) # (N, 1)
d = sample_dists # (N, S)
curvature = (d * (total_dist - d)) / (2 * effective_radius) # (N, S)
los_heights_corrected = los_heights - curvature # (N, S)
# Clearance at each sample point
clearances = los_heights_corrected - sample_elevs # (N, S)
# Minimum clearance per profile
min_clearances = _xp.min(clearances, axis=1) # (N,)
# Has LOS if minimum clearance > 0
has_los = min_clearances > 0 # (N,)
# Diffraction loss for points without LOS
# Using simplified ITU-R P.526 formula
terrain_loss = _xp.zeros(N, dtype=_xp.float64)
# Only compute diffraction where blocked
blocked_mask = ~has_los
blocked_clearances = min_clearances[blocked_mask]
if _xp.any(blocked_mask):
# v = |clearance| / 10 (simplified Fresnel parameter)
v = _xp.abs(blocked_clearances) / 10.0
# Diffraction loss formula from ITU-R P.526
loss = _xp.where(
v <= 0,
_xp.zeros_like(v),
_xp.where(
v < 2.4,
6.02 + 9.11 * v + 1.65 * v ** 2,
12.95 + 20 * _xp.log10(v)
)
)
# Cap at reasonable max
loss = _xp.minimum(loss, 40.0)
terrain_loss[blocked_mask] = loss
return _to_cpu(has_los).astype(bool), _to_cpu(terrain_loss)
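The blocked-path branch implements a simplified ITU-R P.526 knife-edge loss with `v = |clearance| / 10`. A scalar reference (illustrative helper) for cross-checking the vectorized version point by point:

```python
import math

def diffraction_loss_db(min_clearance_m):
    """Scalar mirror of the blocked-path branch: v = |clearance| / 10,
    simplified ITU-R P.526 piecewise loss, capped at 40 dB."""
    if min_clearance_m > 0:
        return 0.0  # clear line of sight, no diffraction loss
    v = abs(min_clearance_m) / 10.0
    if v <= 0:
        loss = 0.0
    elif v < 2.4:
        loss = 6.02 + 9.11 * v + 1.65 * v ** 2
    else:
        loss = 12.95 + 20 * math.log10(v)
    return min(loss, 40.0)

diffraction_loss_db(-10.0)  # terrain 10 m above the corrected LOS line
```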
def _batch_elevation_lookup(
self,
lats: np.ndarray,
lons: np.ndarray,
terrain_cache: dict,
) -> np.ndarray:
"""Look up elevations from cached terrain tiles with bilinear interpolation.
Vectorized implementation: processes per-tile (1-4 tiles) instead of
per-point (thousands of points). Uses bilinear interpolation for
sub-meter accuracy (vs 15m error with nearest-neighbor at 30m resolution).
Args:
lats, lons: Flattened arrays of coordinates
terrain_cache: Dict mapping tile_name -> numpy array
Returns:
elevations: Same shape as input lats
"""
elevations = np.zeros(len(lats), dtype=np.float64)
# Vectorized tile identification
lat_ints = np.floor(lats).astype(int)
lon_ints = np.floor(lons).astype(int)
# Process per tile (usually 1-4 tiles, not per point)
unique_tiles = set(zip(lat_ints, lon_ints))
for lat_int, lon_int in unique_tiles:
lat_letter = 'N' if lat_int >= 0 else 'S'
lon_letter = 'E' if lon_int >= 0 else 'W'
tile_name = f"{lat_letter}{abs(lat_int):02d}{lon_letter}{abs(lon_int):03d}"
tile = terrain_cache.get(tile_name)
if tile is None:
continue
# Mask for points in this tile
mask = (lat_ints == lat_int) & (lon_ints == lon_int)
tile_lats = lats[mask]
tile_lons = lons[mask]
size = tile.shape[0]
# Vectorized bilinear interpolation
lat_frac = tile_lats - lat_int
lon_frac = tile_lons - lon_int
row_exact = (1.0 - lat_frac) * (size - 1)
col_exact = lon_frac * (size - 1)
r0 = np.clip(row_exact.astype(int), 0, size - 2)
c0 = np.clip(col_exact.astype(int), 0, size - 2)
r1 = r0 + 1
c1 = c0 + 1
dr = row_exact - r0
dc = col_exact - c0
# Get four corner values for all points at once
z00 = tile[r0, c0].astype(np.float64)
z01 = tile[r0, c1].astype(np.float64)
z10 = tile[r1, c0].astype(np.float64)
z11 = tile[r1, c1].astype(np.float64)
# Bilinear interpolation (vectorized)
result = (z00 * (1 - dr) * (1 - dc) +
z01 * (1 - dr) * dc +
z10 * dr * (1 - dc) +
z11 * dr * dc)
# Handle void values (-32768) - set to 0
void_mask = (z00 == -32768) | (z01 == -32768) | (z10 == -32768) | (z11 == -32768)
result[void_mask] = 0.0
elevations[mask] = result
return elevations
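The per-tile vectorized lookup can be validated against a single-point version of the same bilinear scheme (illustrative helper; note the latitude flip, since row 0 is the tile's north edge):

```python
import numpy as np

def bilinear_scalar(tile, lat, lon, lat_int, lon_int):
    """Single-point mirror of the per-tile bilinear interpolation above."""
    size = tile.shape[0]
    row = (1.0 - (lat - lat_int)) * (size - 1)
    col = (lon - lon_int) * (size - 1)
    r0 = min(max(int(row), 0), size - 2)
    c0 = min(max(int(col), 0), size - 2)
    dr, dc = row - r0, col - c0
    return (tile[r0, c0] * (1 - dr) * (1 - dc)
            + tile[r0, c0 + 1] * (1 - dr) * dc
            + tile[r0 + 1, c0] * dr * (1 - dc)
            + tile[r0 + 1, c0 + 1] * dr * dc)

tile = np.array([[10.0, 20.0], [30.0, 40.0]])  # toy 2x2 "tile"
bilinear_scalar(tile, 0.5, 0.5, 0, 0)          # centre of the cell
```

At the cell centre the result is the mean of the four corners; at the tile's north-west corner it is `tile[0, 0]` exactly.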
def batch_antenna_pattern(
self,
site_lat: float,
site_lon: float,
grid_lats: np.ndarray,
grid_lons: np.ndarray,
azimuth: float,
beamwidth: float,
) -> np.ndarray:
"""Batch compute antenna pattern loss for all grid points.
Returns antenna_loss in dB, shape (N,)
"""
_xp = gpu_manager.get_array_module()
N = len(grid_lats)
if N == 0 or azimuth is None or not beamwidth:
return np.zeros(N, dtype=np.float64)
# Convert to radians
lat1 = _xp.radians(_xp.float64(site_lat))
lon1 = _xp.radians(_xp.float64(site_lon))
lat2 = _xp.radians(_xp.asarray(grid_lats, dtype=_xp.float64))
lon2 = _xp.radians(_xp.asarray(grid_lons, dtype=_xp.float64))
# Calculate bearing from site to each point
dlon = lon2 - lon1
x = _xp.sin(dlon) * _xp.cos(lat2)
y = _xp.cos(lat1) * _xp.sin(lat2) - _xp.sin(lat1) * _xp.cos(lat2) * _xp.cos(dlon)
bearings = (_xp.degrees(_xp.arctan2(x, y)) + 360) % 360
# Angle difference from antenna azimuth
angle_diff = _xp.abs(bearings - azimuth)
angle_diff = _xp.where(angle_diff > 180, 360 - angle_diff, angle_diff)
# Antenna pattern loss (simplified sector pattern)
half_bw = beamwidth / 2
in_main = angle_diff <= half_bw
loss_main = 3 * (angle_diff / half_bw) ** 2
loss_side = 3 + 12 * ((angle_diff - half_bw) / half_bw) ** 2
loss_side = _xp.minimum(loss_side, 25.0)
antenna_loss = _xp.where(in_main, loss_main, loss_side)
return _to_cpu(antenna_loss)
def batch_final_rsrp(
self,
tx_power: float,
tx_gain: float,
path_loss: np.ndarray,
terrain_loss: np.ndarray,
antenna_loss: np.ndarray,
building_loss: np.ndarray,
vegetation_loss: np.ndarray,
rain_loss: np.ndarray,
indoor_loss: np.ndarray,
atmospheric_loss: np.ndarray,
reflection_gain: np.ndarray,
fading_margin: float = 0.0,
) -> np.ndarray:
"""Vectorized final RSRP calculation.
RSRP = tx_power + tx_gain - path_loss - terrain_loss - antenna_loss
- building_loss - vegetation_loss - rain_loss - indoor_loss
- atmospheric_loss + reflection_gain - fading_margin
Returns RSRP in dBm, shape (N,)
"""
_xp = gpu_manager.get_array_module()
rsrp = (
float(tx_power) + float(tx_gain)
- _xp.asarray(path_loss, dtype=_xp.float64)
- _xp.asarray(terrain_loss, dtype=_xp.float64)
- _xp.asarray(antenna_loss, dtype=_xp.float64)
- _xp.asarray(building_loss, dtype=_xp.float64)
- _xp.asarray(vegetation_loss, dtype=_xp.float64)
- _xp.asarray(rain_loss, dtype=_xp.float64)
- _xp.asarray(indoor_loss, dtype=_xp.float64)
- _xp.asarray(atmospheric_loss, dtype=_xp.float64)
+ _xp.asarray(reflection_gain, dtype=_xp.float64)
- float(fading_margin)
)
return _to_cpu(rsrp)
def calculate_interference(
self,
rsrp_grids: list,
frequencies: list,
) -> tuple:
"""Calculate C/I (carrier-to-interference) ratio for multi-site scenarios.
For each grid point:
- C = signal strength from strongest (serving) cell
- I = sum of signal strengths from all other co-frequency cells
- C/I = C(dBm) - 10*log10(sum of linear interference powers)
Args:
rsrp_grids: List of RSRP arrays, one per site, shape (N,) each
frequencies: List of frequencies (MHz) for each site
Returns:
(ci_ratio, best_server_idx, best_rsrp)
ci_ratio: C/I in dB, shape (N,)
best_server_idx: Index of serving cell per point, shape (N,)
best_rsrp: RSRP of serving cell per point, shape (N,)
"""
_xp = gpu_manager.get_array_module()
if len(rsrp_grids) < 2:
# Single site - no interference, return infinity C/I
if rsrp_grids:
n_points = len(rsrp_grids[0])
return (
np.full(n_points, 50.0, dtype=np.float64), # 50 dB = effectively no interference
np.zeros(n_points, dtype=np.int32),
np.array(rsrp_grids[0], dtype=np.float64),
)
return np.array([]), np.array([]), np.array([])
# Stack RSRP grids: shape (num_sites, num_points)
rsrp_stack = _xp.stack([_xp.asarray(g, dtype=_xp.float64) for g in rsrp_grids], axis=0)
num_sites, num_points = rsrp_stack.shape
# Convert to linear power (mW)
rsrp_linear = _xp.power(10.0, rsrp_stack / 10.0)
# Best server per point
best_server_idx = _xp.argmax(rsrp_stack, axis=0)
best_rsrp = _xp.take_along_axis(rsrp_stack, best_server_idx[_xp.newaxis, :], axis=0)[0]
best_rsrp_linear = _xp.take_along_axis(rsrp_linear, best_server_idx[_xp.newaxis, :], axis=0)[0]
# Group sites by frequency for co-channel interference
# (freq_array is currently unused; the loop below reads `frequencies` directly)
freq_array = _xp.asarray(frequencies, dtype=_xp.float64)
# Calculate interference only from co-frequency sites
interference_linear = _xp.zeros(num_points, dtype=_xp.float64)
for point_idx in range(num_points):
serving_site = int(_to_cpu(best_server_idx[point_idx]))
serving_freq = frequencies[serving_site]
# Sum power from all other sites on same frequency
for site_idx in range(num_sites):
if site_idx != serving_site and frequencies[site_idx] == serving_freq:
interference_linear[point_idx] += rsrp_linear[site_idx, point_idx]
# C/I ratio in dB
# Avoid log10(0) with small epsilon
epsilon = 1e-30
ci_ratio = 10 * _xp.log10(best_rsrp_linear / (interference_linear + epsilon))
# Clip to reasonable range (-20 to 50 dB)
ci_ratio = _xp.clip(ci_ratio, -20, 50)
return (
_to_cpu(ci_ratio),
_to_cpu(best_server_idx).astype(np.int32),
_to_cpu(best_rsrp),
)
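For a single point the C/I reduces to serving power over the linear sum of the interferers, and with one interferer it is simply the dBm gap between the two signals. A scalar sketch (illustrative helper, all sites assumed co-channel):

```python
import numpy as np

def ci_db(rsrp_dbm_per_site):
    """Single-point C/I: strongest (serving) cell over the linear sum of
    the rest, with a small epsilon to avoid log10(0)."""
    p_lin = 10.0 ** (np.asarray(rsrp_dbm_per_site, dtype=np.float64) / 10.0)
    serving = int(np.argmax(p_lin))
    interference = p_lin.sum() - p_lin[serving]
    return 10.0 * np.log10(p_lin[serving] / (interference + 1e-30))

# One interferer: C/I is the dBm gap between serving and interfering cell
ci_db([-80.0, -95.0])
```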
def calculate_interference_vectorized(
self,
rsrp_grids: list,
frequencies: list,
) -> tuple:
"""Fully vectorized C/I calculation (faster for GPU).
Same as calculate_interference but avoids Python loops.
"""
_xp = gpu_manager.get_array_module()
if len(rsrp_grids) < 2:
if rsrp_grids:
n_points = len(rsrp_grids[0])
return (
np.full(n_points, 50.0, dtype=np.float64),
np.zeros(n_points, dtype=np.int32),
np.array(rsrp_grids[0], dtype=np.float64),
)
return np.array([]), np.array([]), np.array([])
# Stack RSRP grids: shape (num_sites, num_points)
rsrp_stack = _xp.stack([_xp.asarray(g, dtype=_xp.float64) for g in rsrp_grids], axis=0)
num_sites, num_points = rsrp_stack.shape
# Convert to linear power (mW)
rsrp_linear = _xp.power(10.0, rsrp_stack / 10.0)
# Best server per point
best_server_idx = _xp.argmax(rsrp_stack, axis=0)
best_rsrp = _xp.take_along_axis(rsrp_stack, best_server_idx[_xp.newaxis, :], axis=0)[0]
best_rsrp_linear = _xp.take_along_axis(rsrp_linear, best_server_idx[_xp.newaxis, :], axis=0)[0]
# Frequency match matrix (num_sites, num_sites). Currently unused: the
# simplified path below assumes all sites share one frequency.
freq_array = _xp.asarray(frequencies, dtype=_xp.float64)
freq_match = freq_array[:, _xp.newaxis] == freq_array[_xp.newaxis, :]
# Total power from all sites
total_power = _xp.sum(rsrp_linear, axis=0)
# For simplified calculation (all sites same frequency):
# Interference = total - serving
interference_linear = total_power - best_rsrp_linear
# C/I ratio in dB
epsilon = 1e-30
ci_ratio = 10 * _xp.log10(best_rsrp_linear / (interference_linear + epsilon))
# Clip to reasonable range
ci_ratio = _xp.clip(ci_ratio, -20, 50)
return (
_to_cpu(ci_ratio),
_to_cpu(best_server_idx).astype(np.int32),
_to_cpu(best_rsrp),
)
# Singleton
gpu_service = GPUService()

View File

@@ -164,11 +164,16 @@ except ImportError:
ray = None # type: ignore
# ── Worker-level spatial index cache (persists across tasks in same worker) ──
# ── Worker-level caches (persist across tasks in same worker process) ──
_worker_spatial_idx = None
_worker_cache_key: Optional[str] = None
# Shared-memory buildings/OSM — unpickled once per worker, cached by key
_worker_shared_buildings = None
_worker_shared_osm_data = None
_worker_shared_data_key: Optional[str] = None
def _ray_process_chunk_impl(chunk, terrain_cache, buildings, osm_data, config):
"""Implementation: process a chunk of (lat, lon, elevation) tuples.
@@ -205,6 +210,7 @@ def _ray_process_chunk_impl(chunk, terrain_cache, buildings, osm_data, config):
"los": 0.0, "buildings": 0.0, "antenna": 0.0,
"dominant_path": 0.0, "street_canyon": 0.0,
"reflection": 0.0, "vegetation": 0.0,
"lod_none": 0, "lod_simplified": 0, "lod_full": 0,
}
precomputed = config.get('precomputed')
@@ -220,6 +226,9 @@ def _ray_process_chunk_impl(chunk, terrain_cache, buildings, osm_data, config):
config['site_elevation'], point_elev, timing,
precomputed_distance=pre.get('distance') if pre else None,
precomputed_path_loss=pre.get('path_loss') if pre else None,
precomputed_has_los=pre.get('has_los') if pre else None,
precomputed_terrain_loss=pre.get('terrain_loss') if pre else None,
precomputed_antenna_loss=pre.get('antenna_loss') if pre else None,
)
if point.rsrp >= settings.min_signal:
results.append(point.model_dump())
@@ -238,9 +247,14 @@ if RAY_AVAILABLE:
def get_cpu_count() -> int:
"""Get number of usable CPU cores, capped at 14."""
"""Get number of usable CPU cores, capped at 6.
Each worker holds its own copy of buildings + OSM data + spatial index
(~200-400 MB per worker). Capping at 6 prevents OOM on systems with
8-16 GB RAM (especially WSL2 with limited memory allocation).
"""
try:
return min(mp.cpu_count() or 4, 14)
return min(mp.cpu_count() or 4, 6)
except Exception:
return 4
@@ -327,8 +341,25 @@ def calculate_coverage_parallel(
except Exception as e:
log_fn(f"Ray execution failed: {e} — falling back to sequential")
# Fallback: ProcessPoolExecutor with reduced workers to avoid MemoryError
pool_workers = min(num_workers, 6)
# Fallback: ProcessPoolExecutor (shared memory eliminates per-chunk pickle)
pool_workers = num_workers
# Scale workers down based on data volume to prevent OOM.
# Each worker unpickles + holds its own copy of buildings, OSM data, and
# spatial index. With large datasets the per-worker memory can exceed
# 300 MB, so reduce workers to keep total under ~2 GB.
data_items = len(buildings) + len(streets) + len(water_bodies) + len(vegetation_areas)
if data_items > 20000:
pool_workers = min(pool_workers, 2)
log_fn(f"Data volume high ({data_items} items) — capping workers at {pool_workers}")
elif data_items > 10000:
pool_workers = min(pool_workers, 3)
log_fn(f"Data volume moderate ({data_items} items) — capping workers at {pool_workers}")
elif data_items > 5000:
pool_workers = min(pool_workers, 4)
            log_fn(f"Data volume slightly elevated ({data_items} items) — capping workers at {pool_workers}")
log_fn(f"ProcessPool: {pool_workers} workers (cpu_count={num_workers}, data_items={data_items})")
if pool_workers > 1 and total_points > 100:
try:
return _calculate_with_process_pool(
@@ -338,6 +369,8 @@ def calculate_coverage_parallel(
pool_workers, log_fn, cancel_token, precomputed,
progress_fn,
)
except (MemoryError, OSError) as e:
log_fn(f"ProcessPool OOM/OS error: {e} — falling back to sequential")
except Exception as e:
log_fn(f"ProcessPool failed: {e} — falling back to sequential")
@@ -396,8 +429,8 @@ def _calculate_with_ray(
for lat, lon in grid
]
# ~4 chunks per worker for granular progress
chunk_size = max(1, len(items) // (num_workers * 4))
# Larger chunks to amortize IPC overhead (was num_workers*4)
chunk_size = max(1, min(400, len(items) // max(2, num_workers)))
chunks = [items[i:i + chunk_size] for i in range(0, len(items), chunk_size)]
log_fn(f"Submitting {len(chunks)} chunks of ~{chunk_size} points")
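The new chunk-sizing rule (`max(1, min(400, len(items) // max(2, num_workers)))`) can be exercised in isolation. A minimal sketch, with a hypothetical `make_chunks` helper:

```python
def make_chunks(items, num_workers, max_chunk=400):
    # Larger chunks amortize per-chunk IPC overhead; the 400-point cap
    # keeps progress reporting reasonably granular
    chunk_size = max(1, min(max_chunk, len(items) // max(2, num_workers)))
    return [items[i:i + chunk_size] for i in range(0, len(items), chunk_size)]

chunks = make_chunks(list(range(10_000)), num_workers=6)
print(len(chunks), len(chunks[0]))  # 25 chunks of 400 points
```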
@@ -489,6 +522,7 @@ def _pool_worker_process_chunk(args):
"los": 0.0, "buildings": 0.0, "antenna": 0.0,
"dominant_path": 0.0, "street_canyon": 0.0,
"reflection": 0.0, "vegetation": 0.0,
"lod_none": 0, "lod_simplified": 0, "lod_full": 0,
}
precomputed = config.get('precomputed')
@@ -504,6 +538,9 @@ def _pool_worker_process_chunk(args):
config['site_elevation'], point_elev, timing,
precomputed_distance=pre.get('distance') if pre else None,
precomputed_path_loss=pre.get('path_loss') if pre else None,
precomputed_has_los=pre.get('has_los') if pre else None,
precomputed_terrain_loss=pre.get('terrain_loss') if pre else None,
precomputed_antenna_loss=pre.get('antenna_loss') if pre else None,
)
if point.rsrp >= settings.min_signal:
results.append(point.model_dump())
@@ -542,6 +579,28 @@ def _store_terrain_in_shm(terrain_cache: Dict[str, np.ndarray], log_fn) -> Tuple
return blocks, refs
def _store_pickle_in_shm(data, label: str, log_fn) -> Tuple[Optional[Any], Optional[dict]]:
"""Pickle arbitrary data into a SharedMemory block.
Returns (shm_block, ref_dict) where ref_dict = {shm_name, size}.
On failure returns (None, None) and caller should fall back to pickle.
"""
import multiprocessing.shared_memory as shm_mod
import pickle
try:
blob = pickle.dumps(data, protocol=pickle.HIGHEST_PROTOCOL)
size = len(blob)
block = shm_mod.SharedMemory(create=True, size=size)
block.buf[:size] = blob
mb = size / (1024 * 1024)
log_fn(f"{label} in shared memory: {mb:.1f} MB")
return block, {'shm_name': block.name, 'size': size}
except Exception as e:
log_fn(f"Failed to store {label} in shm: {e}")
return None, None
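The pickle-once/read-many pattern that `_store_pickle_in_shm` implements can be shown end-to-end in a single process. In the real code the reader side runs inside a worker process and attaches by name; here both sides are inlined for illustration:

```python
import multiprocessing.shared_memory as shm_mod
import pickle

# Writer side: pickle once into a named shared-memory block
data = {"streets": [(48.2, 16.3), (48.3, 16.4)], "water_bodies": []}
blob = pickle.dumps(data, protocol=pickle.HIGHEST_PROTOCOL)
block = shm_mod.SharedMemory(create=True, size=len(blob))
block.buf[:len(blob)] = blob
ref = {"shm_name": block.name, "size": len(blob)}

# Reader side (normally a worker process): attach by name, unpickle once
reader = shm_mod.SharedMemory(name=ref["shm_name"])
restored = pickle.loads(bytes(reader.buf[:ref["size"]]))
print(restored == data)  # True

reader.close()
block.close()
block.unlink()  # the creator is responsible for freeing the block
```

Storing the size alongside the name matters because `SharedMemory` may round the allocation up to a page boundary, so the reader must slice the buffer to the original pickle length.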
def _pool_worker_shm_chunk(args):
"""Worker function that reads terrain from shared memory instead of pickle."""
import multiprocessing.shared_memory as shm_mod
@@ -585,6 +644,7 @@ def _pool_worker_shm_chunk(args):
"los": 0.0, "buildings": 0.0, "antenna": 0.0,
"dominant_path": 0.0, "street_canyon": 0.0,
"reflection": 0.0, "vegetation": 0.0,
"lod_none": 0, "lod_simplified": 0, "lod_full": 0,
}
precomputed = config.get('precomputed')
@@ -600,6 +660,9 @@ def _pool_worker_shm_chunk(args):
config['site_elevation'], point_elev, timing,
precomputed_distance=pre.get('distance') if pre else None,
precomputed_path_loss=pre.get('path_loss') if pre else None,
precomputed_has_los=pre.get('has_los') if pre else None,
precomputed_terrain_loss=pre.get('terrain_loss') if pre else None,
precomputed_antenna_loss=pre.get('antenna_loss') if pre else None,
)
if point.rsrp >= settings.min_signal:
results.append(point.model_dump())
@@ -607,6 +670,203 @@ def _pool_worker_shm_chunk(args):
return results
_worker_chunk_count: int = 0 # per-worker chunk counter
def _pool_worker_shm_shared(args):
"""Worker: terrain + buildings + OSM all via shared memory.
Per-chunk args are tiny (~8 KB): just point coords, shm refs, and config.
Buildings and OSM data are unpickled from shared memory ONCE per worker
and cached in module globals for subsequent chunks.
"""
import multiprocessing.shared_memory as shm_mod
import pickle
global _worker_chunk_count
_worker_chunk_count += 1
pid = os.getpid()
t_worker_start = time.perf_counter()
chunk, terrain_shm_refs, shared_data_refs, config = args
# ── Reconstruct terrain from shared memory ──
t0 = time.perf_counter()
terrain_cache = {}
for tile_name, ref in terrain_shm_refs.items():
try:
block = shm_mod.SharedMemory(name=ref['shm_name'])
terrain_cache[tile_name] = np.ndarray(
ref['shape'], dtype=ref['dtype'], buffer=block.buf,
)
except Exception:
pass
from app.services.terrain_service import terrain_service
terrain_service._tile_cache = terrain_cache
t_terrain_shm = time.perf_counter() - t0
# ── Read buildings + OSM from shared memory (cached per worker) ──
global _worker_shared_buildings, _worker_shared_osm_data, _worker_shared_data_key
global _worker_spatial_idx, _worker_cache_key
data_key = config.get('cache_key', '')
cached = (_worker_shared_data_key == data_key)
t_unpickle_bld = 0.0
t_unpickle_osm = 0.0
t_spatial = 0.0
if not cached:
# First chunk for this calculation — unpickle from shm
buildings_ref = shared_data_refs.get('buildings')
osm_ref = shared_data_refs.get('osm_data')
if buildings_ref:
try:
t0 = time.perf_counter()
blk = shm_mod.SharedMemory(name=buildings_ref['shm_name'])
_worker_shared_buildings = pickle.loads(bytes(blk.buf[:buildings_ref['size']]))
t_unpickle_bld = time.perf_counter() - t0
except Exception:
_worker_shared_buildings = []
else:
_worker_shared_buildings = []
if osm_ref:
try:
t0 = time.perf_counter()
blk = shm_mod.SharedMemory(name=osm_ref['shm_name'])
_worker_shared_osm_data = pickle.loads(bytes(blk.buf[:osm_ref['size']]))
t_unpickle_osm = time.perf_counter() - t0
except Exception:
_worker_shared_osm_data = {}
else:
_worker_shared_osm_data = {}
_worker_shared_data_key = data_key
# Rebuild spatial index for new data
t0 = time.perf_counter()
if _worker_shared_buildings:
from app.services.spatial_index import SpatialIndex
_worker_spatial_idx = SpatialIndex()
_worker_spatial_idx.build(_worker_shared_buildings)
else:
_worker_spatial_idx = None
_worker_cache_key = data_key
t_spatial = time.perf_counter() - t0
print(
f"[WORKER {pid}] Init: terrain_shm={t_terrain_shm*1000:.1f}ms "
f"unpickle_bld={t_unpickle_bld*1000:.1f}ms "
f"unpickle_osm={t_unpickle_osm*1000:.1f}ms "
f"spatial={t_spatial*1000:.1f}ms "
f"buildings={len(_worker_shared_buildings or [])} "
f"tiles={len(terrain_cache)}",
flush=True,
)
print(
f"[WORKER {pid}] Processing chunk {_worker_chunk_count}, "
f"cached={cached}, points={len(chunk)}",
flush=True,
)
buildings = _worker_shared_buildings or []
osm_data = _worker_shared_osm_data or {}
# ── Imports + object creation (timed) ──
t0 = time.perf_counter()
from app.services.coverage_service import CoverageService, SiteParams, CoverageSettings
t_import = time.perf_counter() - t0
t0 = time.perf_counter()
site = SiteParams(**config['site_dict'])
settings = CoverageSettings(**config['settings_dict'])
svc = CoverageService()
t_pydantic = time.perf_counter() - t0
timing = {
"los": 0.0, "buildings": 0.0, "antenna": 0.0,
"dominant_path": 0.0, "street_canyon": 0.0,
"reflection": 0.0, "vegetation": 0.0,
"lod_none": 0, "lod_simplified": 0, "lod_full": 0,
}
precomputed = config.get('precomputed')
streets = osm_data.get('streets', [])
water = osm_data.get('water_bodies', [])
veg = osm_data.get('vegetation_areas', [])
site_elev = config['site_elevation']
t_init_done = time.perf_counter()
init_ms = (t_init_done - t_worker_start) * 1000
# ── Process points with per-point profiling (first 3 only) ──
results = []
t_loop_start = time.perf_counter()
t_model_dump_total = 0.0
n_dumped = 0
for i, (lat, lon, point_elev) in enumerate(chunk):
pre = precomputed.get((lat, lon)) if precomputed else None
# Snapshot timing dict before call (for first 3 points)
if i < 3:
timing_before = {k: v for k, v in timing.items()}
t_pt = time.perf_counter()
point = svc._calculate_point_sync(
site, lat, lon, settings,
buildings, streets,
_worker_spatial_idx, water, veg,
site_elev, point_elev, timing,
precomputed_distance=pre.get('distance') if pre else None,
precomputed_path_loss=pre.get('path_loss') if pre else None,
precomputed_has_los=pre.get('has_los') if pre else None,
precomputed_terrain_loss=pre.get('terrain_loss') if pre else None,
precomputed_antenna_loss=pre.get('antenna_loss') if pre else None,
)
if i < 3:
t_pt_done = time.perf_counter()
pt_ms = (t_pt_done - t_pt) * 1000
deltas = {k: (timing[k] - timing_before.get(k, 0)) * 1000 for k in timing}
parts = " ".join(f"{k}={v:.2f}" for k, v in deltas.items() if v > 0.001)
print(
f"[WORKER {pid}] Point {i}: {pt_ms:.2f}ms "
f"rsrp={point.rsrp:.1f} dist={point.distance:.0f}m "
f"breakdown=[{parts}]",
flush=True,
)
if point.rsrp >= settings.min_signal:
t_md = time.perf_counter()
results.append(point.model_dump())
t_model_dump_total += time.perf_counter() - t_md
n_dumped += 1
t_loop_done = time.perf_counter()
loop_ms = (t_loop_done - t_loop_start) * 1000
total_ms = (t_loop_done - t_worker_start) * 1000
avg_pt = loop_ms / len(chunk) if chunk else 0
avg_dump = (t_model_dump_total * 1000 / n_dumped) if n_dumped else 0
print(
f"[WORKER {pid}] Chunk done: total={total_ms:.0f}ms "
f"init={init_ms:.0f}ms loop={loop_ms:.0f}ms "
f"avg_pt={avg_pt:.2f}ms model_dump={avg_dump:.2f}ms×{n_dumped} "
f"import={t_import*1000:.1f}ms pydantic={t_pydantic*1000:.1f}ms "
f"terrain_shm={t_terrain_shm*1000:.1f}ms "
f"results={len(results)}/{len(chunk)}",
flush=True,
)
return results
def _calculate_with_process_pool(
grid, point_elevations, site_dict, settings_dict,
terrain_cache, buildings, streets, water_bodies,
@@ -616,23 +876,28 @@ def _calculate_with_process_pool(
):
"""Execute using ProcessPoolExecutor.
Uses shared memory for terrain tiles (zero-copy numpy views) to reduce
memory usage compared to pickling full terrain arrays per worker.
Uses shared memory for terrain tiles (zero-copy numpy views), buildings,
and OSM data (pickle-once, read-many) to eliminate per-chunk serialization
overhead.
"""
from concurrent.futures import ProcessPoolExecutor, as_completed
total_points = len(grid)
# Estimate pickle size for building data and cap workers accordingly
building_count = len(buildings)
if building_count > 10000:
num_workers = min(num_workers, 3)
log_fn(f"Large building set ({building_count}) — reducing workers to {num_workers}")
elif building_count > 5000:
num_workers = min(num_workers, 4)
data_items = building_count + len(streets) + len(water_bodies) + len(vegetation_areas)
log_fn(f"ProcessPool mode: {total_points} points, {num_workers} workers, "
f"{building_count} buildings")
f"{building_count} buildings, {data_items} total OSM items")
# Log memory at start
try:
with open('/proc/self/status') as f:
for line in f:
if line.startswith('VmRSS:'):
log_fn(f"Memory before calculation: {line.strip()}")
break
except Exception:
pass
# Store terrain tiles in shared memory
shm_blocks = []
@@ -652,12 +917,31 @@ def _calculate_with_process_pool(
log_fn(f"Shared memory setup failed ({e}), using pickle fallback")
use_shm = False
# Store buildings + OSM data in shared memory (pickle once, read many)
shared_data_refs = {}
if use_shm:
bld_block, bld_ref = _store_pickle_in_shm(buildings, "Buildings", log_fn)
if bld_block:
shm_blocks.append(bld_block)
shared_data_refs['buildings'] = bld_ref
osm_data_dict = {
'streets': streets,
'water_bodies': water_bodies,
'vegetation_areas': vegetation_areas,
}
osm_block, osm_ref = _store_pickle_in_shm(osm_data_dict, "OSM data", log_fn)
if osm_block:
shm_blocks.append(osm_block)
shared_data_refs['osm_data'] = osm_ref
items = [
(lat, lon, point_elevations.get((lat, lon), 0.0))
for lat, lon in grid
]
chunk_size = max(1, len(items) // (num_workers * 2))
# Target larger chunks to amortize IPC overhead (was num_workers*2)
chunk_size = max(1, min(400, len(items) // max(2, num_workers)))
chunks = [items[i:i + chunk_size] for i in range(0, len(items), chunk_size)]
log_fn(f"Submitting {len(chunks)} chunks of ~{chunk_size} points")
@@ -685,8 +969,21 @@ def _calculate_with_process_pool(
pool = ProcessPoolExecutor(max_workers=num_workers, mp_context=ctx)
_set_active_pool(pool)
if use_shm:
# Shared memory path: pass shm refs instead of terrain data
if use_shm and shared_data_refs:
# Full shared memory path: terrain + buildings + OSM all via shm
worker_fn = _pool_worker_shm_shared
futures = {
pool.submit(
worker_fn,
(chunk, terrain_shm_refs, shared_data_refs, config),
): i
for i, chunk in enumerate(chunks)
}
elif use_shm and data_items <= 2000:
# Terrain-only shm — buildings/OSM pickled per chunk.
# Only safe for small datasets; large datasets would OOM from
# pickle copies (num_chunks × pickle_size).
log_fn(f"Terrain-only shm (small data: {data_items} items)")
worker_fn = _pool_worker_shm_chunk
futures = {
pool.submit(
@@ -695,8 +992,9 @@ def _calculate_with_process_pool(
): i
for i, chunk in enumerate(chunks)
}
else:
# Pickle fallback path
elif data_items <= 2000:
# Full pickle fallback — only safe for small datasets
log_fn(f"Full pickle path (small data: {data_items} items)")
futures = {
pool.submit(
_pool_worker_process_chunk,
@@ -704,6 +1002,14 @@ def _calculate_with_process_pool(
): i
for i, chunk in enumerate(chunks)
}
else:
# Large dataset + shared memory failed → per-chunk pickle would OOM.
# Bail out; caller will fall back to sequential.
log_fn(f"Shared memory failed for large dataset ({data_items} items) "
f"— skipping ProcessPool to avoid OOM")
raise MemoryError(
f"Cannot safely pickle {data_items} OSM items per chunk"
)
completed_chunks = 0
for future in as_completed(futures):
@@ -730,6 +1036,9 @@ def _calculate_with_process_pool(
if progress_fn:
progress_fn("Calculating coverage", 0.40 + 0.55 * (completed_chunks / len(chunks)))
except MemoryError:
raise # Propagate to caller for sequential fallback
except Exception as e:
log_fn(f"ProcessPool error: {e}")
@@ -748,8 +1057,22 @@ def _calculate_with_process_pool(
block.unlink()
except Exception:
pass
# Release large local references before GC
chunks = None # noqa: F841
items = None # noqa: F841
osm_data = None # noqa: F841
shared_data_refs = None # noqa: F841
# Force garbage collection to release memory from workers
gc.collect()
# Log memory after cleanup
try:
with open('/proc/self/status') as f:
for line in f:
if line.startswith('VmRSS:'):
log_fn(f"Memory after cleanup: {line.strip()}")
break
except Exception:
pass
calc_time = time.time() - t_calc
log_fn(f"ProcessPool done: {calc_time:.1f}s, {len(all_results)} results "
@@ -758,7 +1081,11 @@ def _calculate_with_process_pool(
timing = {
"parallel_total": calc_time,
"workers": num_workers,
"backend": "process_pool" + ("/shm" if use_shm else "/pickle"),
"backend": "process_pool" + (
"/shm_full" if (use_shm and shared_data_refs)
else "/shm_terrain" if use_shm
else "/pickle"
),
}
return all_results, timing
@@ -791,6 +1118,7 @@ def _calculate_sequential(
"los": 0.0, "buildings": 0.0, "antenna": 0.0,
"dominant_path": 0.0, "street_canyon": 0.0,
"reflection": 0.0, "vegetation": 0.0,
"lod_none": 0, "lod_simplified": 0, "lod_full": 0,
}
t0 = time.time()
@@ -818,6 +1146,9 @@ def _calculate_sequential(
site_elevation, point_elev, timing,
precomputed_distance=pre.get('distance') if pre else None,
precomputed_path_loss=pre.get('path_loss') if pre else None,
precomputed_has_los=pre.get('has_los') if pre else None,
precomputed_terrain_loss=pre.get('terrain_loss') if pre else None,
precomputed_antenna_loss=pre.get('antenna_loss') if pre else None,
)
if point.rsrp >= settings.min_signal:
results.append(point.model_dump())

View File

@@ -20,8 +20,24 @@ class TerrainService:
"""
SRTM_SOURCES = [
"https://elevation-tiles-prod.s3.amazonaws.com/skadi/{lat_dir}/{tile_name}.hgt.gz",
"https://s3.amazonaws.com/elevation-tiles-prod/skadi/{lat_dir}/{tile_name}.hgt.gz",
# Our tile server — SRTM1 (30m) preferred, uncompressed
{
"url": "https://terra.eliah.one/srtm1/{tile_name}.hgt",
"compressed": False,
"resolution": "srtm1",
},
# Our tile server — SRTM3 (90m) fallback
{
"url": "https://terra.eliah.one/srtm3/{tile_name}.hgt",
"compressed": False,
"resolution": "srtm3",
},
# Public AWS mirror — SRTM1, gzip compressed
{
"url": "https://elevation-tiles-prod.s3.amazonaws.com/skadi/{lat_dir}/{tile_name}.hgt.gz",
"compressed": True,
"resolution": "srtm1",
},
]
def __init__(self):
@@ -48,7 +64,7 @@ class TerrainService:
return self.terrain_path / f"{tile_name}.hgt"
async def download_tile(self, tile_name: str) -> bool:
"""Download SRTM tile if not cached locally"""
"""Download SRTM tile from configured sources, preferring highest resolution."""
tile_path = self.get_tile_path(tile_name)
if tile_path.exists():
@@ -56,37 +72,54 @@ class TerrainService:
lat_dir = tile_name[:3] # e.g., "N48"
async with httpx.AsyncClient(timeout=60.0) as client:
for source_url in self.SRTM_SOURCES:
url = source_url.format(lat_dir=lat_dir, tile_name=tile_name)
async with httpx.AsyncClient(timeout=60.0, follow_redirects=True) as client:
for source in self.SRTM_SOURCES:
url = source["url"].format(lat_dir=lat_dir, tile_name=tile_name)
try:
response = await client.get(url)
if response.status_code == 200:
data = response.content
if url.endswith('.gz'):
data = gzip.decompress(data)
elif url.endswith('.zip'):
with zipfile.ZipFile(io.BytesIO(data)) as zf:
for name in zf.namelist():
if name.endswith('.hgt'):
data = zf.read(name)
break
# Skip empty responses
if len(data) < 1000:
continue
if source["compressed"]:
if url.endswith('.gz'):
data = gzip.decompress(data)
elif url.endswith('.zip'):
with zipfile.ZipFile(io.BytesIO(data)) as zf:
for name in zf.namelist():
if name.endswith('.hgt'):
data = zf.read(name)
break
# Validate tile size (SRTM1: 25,934,402 bytes, SRTM3: 2,884,802 bytes)
if len(data) not in (3601 * 3601 * 2, 1201 * 1201 * 2):
print(f"[Terrain] Invalid tile size {len(data)} from {url}")
continue
tile_path.write_bytes(data)
print(f"[Terrain] Downloaded {tile_name} ({len(data)} bytes)")
res = source["resolution"]
size_mb = len(data) / 1048576
print(f"[Terrain] Downloaded {tile_name} ({res}, {size_mb:.1f} MB)")
return True
except Exception as e:
print(f"[Terrain] Failed from {url}: {e}")
continue
print(f"[Terrain] Could not download {tile_name}")
print(f"[Terrain] Could not download {tile_name} from any source")
return False
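The size check in the download path accepts exactly two byte counts, one per resolution. A standalone sketch of that validation (the constant and function names are illustrative, not from the service):

```python
# SRTM .hgt tiles are square grids of big-endian int16 samples
SRTM1_SIDE = 3601   # 1-arc-second (~30 m)
SRTM3_SIDE = 1201   # 3-arc-second (~90 m)

def valid_hgt_size(n_bytes):
    # Accept only the two well-known uncompressed tile sizes
    return n_bytes in (SRTM1_SIDE ** 2 * 2, SRTM3_SIDE ** 2 * 2)

print(SRTM1_SIDE ** 2 * 2)       # 25934402
print(valid_hgt_size(2884802))   # True (SRTM3)
print(valid_hgt_size(999))       # False (truncated download)
```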
def _load_tile(self, tile_name: str) -> Optional[np.ndarray]:
"""Load tile from disk into memory cache"""
"""Load tile from disk into memory cache using memory-mapped I/O.
Uses np.memmap so the OS pages data from disk on demand — near-zero
upfront RAM cost per tile (~25 MB savings each vs full load).
Falls back to np.frombuffer if memmap fails.
"""
# Check memory cache first
if tile_name in self._tile_cache:
return self._tile_cache[tile_name]
@@ -97,18 +130,26 @@ class TerrainService:
return None
try:
data = tile_path.read_bytes()
file_size = tile_path.stat().st_size
# SRTM HGT format: big-endian signed 16-bit integers
if len(data) == 3601 * 3601 * 2:
if file_size == 3601 * 3601 * 2:
size = 3601 # SRTM1 (30m)
elif len(data) == 1201 * 1201 * 2:
elif file_size == 1201 * 1201 * 2:
size = 1201 # SRTM3 (90m)
else:
print(f"[Terrain] Unknown tile size: {len(data)} bytes for {tile_name}")
print(f"[Terrain] Unknown tile size: {file_size} bytes for {tile_name}")
return None
tile = np.frombuffer(data, dtype='>i2').reshape((size, size))
# Memory-mapped loading — OS pages from disk, near-zero RAM
try:
tile = np.memmap(
tile_path, dtype='>i2', mode='r', shape=(size, size),
)
except Exception:
# Fallback: full load into RAM
data = tile_path.read_bytes()
tile = np.frombuffer(data, dtype='>i2').reshape((size, size))
# Manage memory cache with LRU eviction
if len(self._tile_cache) >= self._max_cache_tiles:
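The memmap-vs-frombuffer trade-off above can be demonstrated on a synthetic tile. A sketch under assumed conditions (temp directory, SRTM3-sized file; the tile name is hypothetical):

```python
import os
import tempfile
import numpy as np

# Synthetic SRTM3-sized tile (1201x1201 big-endian int16) in a temp dir
size = 1201
path = os.path.join(tempfile.mkdtemp(), "N48E014.hgt")
grid = np.zeros((size, size), dtype='>i2')
grid[0, :3] = [500, 510, 520]
grid.tofile(path)

# memmap: the OS pages rows in on demand — near-zero upfront RAM
tile = np.memmap(path, dtype='>i2', mode='r', shape=(size, size))
print(int(tile[0, 1]))  # 510

# Fallback equivalent: full read into RAM
full = np.frombuffer(open(path, 'rb').read(), dtype='>i2').reshape((size, size))
print(int(full[0, 2]))  # 520
```

Both paths yield identical values; the difference is purely when the bytes are brought into memory.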
@@ -136,56 +177,179 @@ class TerrainService:
return self._load_tile(tile_name)
def _bilinear_sample(self, tile: np.ndarray, lat: float, lon: float) -> float:
"""Sample elevation with bilinear interpolation for sub-meter accuracy.
SRTM1 at 30m means nearest-neighbor can have 15m positional error.
Bilinear interpolation reduces this to sub-meter accuracy.
"""
size = tile.shape[0]
# Tile southwest corner
lat_int = int(lat) if lat >= 0 else int(lat) - 1
lon_int = int(lon) if lon >= 0 else int(lon) - 1
# Fractional position within tile (0.0 to 1.0)
lat_frac = lat - lat_int # 0 = south edge, 1 = north edge
lon_frac = lon - lon_int # 0 = west edge, 1 = east edge
# Convert to row/col (note: rows go north to south!)
row_exact = (1.0 - lat_frac) * (size - 1) # 0 = north, size-1 = south
col_exact = lon_frac * (size - 1) # 0 = west, size-1 = east
# Four surrounding grid points
r0 = int(row_exact)
c0 = int(col_exact)
r1 = min(r0 + 1, size - 1)
c1 = min(c0 + 1, size - 1)
# Fractional position between grid points
dr = row_exact - r0
dc = col_exact - c0
# Get four corner values
z00 = tile[r0, c0]
z01 = tile[r0, c1]
z10 = tile[r1, c0]
z11 = tile[r1, c1]
# Handle void (-32768) values - fall back to nearest valid
void_val = -32768
corners = [(z00, r0, c0), (z01, r0, c1), (z10, r1, c0), (z11, r1, c1)]
if z00 == void_val or z01 == void_val or z10 == void_val or z11 == void_val:
valid = [(z, r, c) for z, r, c in corners if z != void_val]
if not valid:
return 0.0
# Return nearest valid value
return float(valid[0][0])
# Bilinear interpolation
elevation = (z00 * (1 - dr) * (1 - dc) +
z01 * (1 - dr) * dc +
z10 * dr * (1 - dc) +
z11 * dr * dc)
return float(elevation)
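The bilinear weighting used by `_bilinear_sample` can be verified numerically on a tiny grid. A self-contained sketch with the same four-corner formula (the `bilinear` helper is illustrative, operating directly on exact row/col coordinates):

```python
import numpy as np

def bilinear(tile, row_exact, col_exact):
    # Same four-corner weighting as _bilinear_sample
    size = tile.shape[0]
    r0 = min(int(row_exact), size - 2)
    c0 = min(int(col_exact), size - 2)
    dr, dc = row_exact - r0, col_exact - c0
    z00, z01 = tile[r0, c0], tile[r0, c0 + 1]
    z10, z11 = tile[r0 + 1, c0], tile[r0 + 1, c0 + 1]
    return float(z00 * (1 - dr) * (1 - dc) + z01 * (1 - dr) * dc
                 + z10 * dr * (1 - dc) + z11 * dr * dc)

tile = np.array([[100, 200], [300, 400]], dtype=np.float64)
print(bilinear(tile, 0.0, 0.0))  # 100.0 (exact grid point)
print(bilinear(tile, 0.5, 0.5))  # 250.0 (average of all four corners)
```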
async def get_elevation(self, lat: float, lon: float) -> float:
"""Get elevation at specific coordinate (meters above sea level)"""
"""Get elevation at specific coordinate with bilinear interpolation."""
tile_name = self.get_tile_name(lat, lon)
tile = await self.load_tile(tile_name)
if tile is None:
return 0.0
size = tile.shape[0]
# Calculate position within tile
lat_int = int(lat) if lat >= 0 else int(lat) - 1
lon_int = int(lon) if lon >= 0 else int(lon) - 1
lat_frac = lat - lat_int
lon_frac = lon - lon_int
# Row 0 = north edge, last row = south edge
row = int((1 - lat_frac) * (size - 1))
col = int(lon_frac * (size - 1))
row = max(0, min(row, size - 1))
col = max(0, min(col, size - 1))
elevation = tile[row, col]
# -32768 = void/no data
if elevation == -32768:
return 0.0
return float(elevation)
return self._bilinear_sample(tile, lat, lon)
def get_elevation_sync(self, lat: float, lon: float) -> float:
"""Sync elevation lookup from memory cache. Returns 0.0 if tile not loaded."""
"""Sync elevation lookup with bilinear interpolation. Returns 0.0 if tile not loaded."""
tile_name = self.get_tile_name(lat, lon)
tile = self._tile_cache.get(tile_name)
if tile is None:
return 0.0
size = tile.shape[0]
lat_int = int(lat) if lat >= 0 else int(lat) - 1
lon_int = int(lon) if lon >= 0 else int(lon) - 1
return self._bilinear_sample(tile, lat, lon)
row = int((1 - (lat - lat_int)) * (size - 1))
col = int((lon - lon_int) * (size - 1))
row = max(0, min(row, size - 1))
col = max(0, min(col, size - 1))
def get_elevations_batch(self, lats: np.ndarray, lons: np.ndarray) -> np.ndarray:
"""Vectorized elevation lookup with bilinear interpolation.
elevation = tile[row, col]
return 0.0 if elevation == -32768 else float(elevation)
Handles points spanning multiple tiles efficiently.
Groups points by tile, processes each tile with full NumPy vectorization.
Tiles must be pre-loaded into memory cache.
Args:
lats: Array of latitudes
lons: Array of longitudes
Returns:
Array of elevations (0.0 for missing tiles or void data)
"""
elevations = np.zeros(len(lats), dtype=np.float32)
# Compute tile indices for each point
lat_ints = np.floor(lats).astype(int)
lon_ints = np.floor(lons).astype(int)
# Group by tile using unique key
unique_tiles = set(zip(lat_ints, lon_ints))
for lat_int, lon_int in unique_tiles:
# Get tile name
lat_letter = 'N' if lat_int >= 0 else 'S'
lon_letter = 'E' if lon_int >= 0 else 'W'
tile_name = f"{lat_letter}{abs(lat_int):02d}{lon_letter}{abs(lon_int):03d}"
tile = self._tile_cache.get(tile_name)
if tile is None:
continue
# Mask for points in this tile
mask = (lat_ints == lat_int) & (lon_ints == lon_int)
tile_lats = lats[mask]
tile_lons = lons[mask]
size = tile.shape[0]
# Vectorized bilinear interpolation for all points in this tile
lat_frac = tile_lats - lat_int
lon_frac = tile_lons - lon_int
row_exact = (1.0 - lat_frac) * (size - 1)
col_exact = lon_frac * (size - 1)
r0 = np.clip(row_exact.astype(int), 0, size - 2)
c0 = np.clip(col_exact.astype(int), 0, size - 2)
r1 = r0 + 1
c1 = c0 + 1
dr = row_exact - r0
dc = col_exact - c0
# Get four corner values for all points at once
z00 = tile[r0, c0].astype(np.float32)
z01 = tile[r0, c1].astype(np.float32)
z10 = tile[r1, c0].astype(np.float32)
z11 = tile[r1, c1].astype(np.float32)
# Bilinear interpolation (vectorized)
result = (z00 * (1 - dr) * (1 - dc) +
z01 * (1 - dr) * dc +
z10 * dr * (1 - dc) +
z11 * dr * dc)
# Handle void values (-32768) - set to 0
void_mask = (z00 == -32768) | (z01 == -32768) | (z10 == -32768) | (z11 == -32768)
result[void_mask] = 0.0
elevations[mask] = result
return elevations
def get_required_tiles(self, center_lat: float, center_lon: float, radius_km: float) -> list:
"""Determine which tiles are needed for a coverage calculation."""
# Convert radius to degrees (approximate)
lat_delta = radius_km / 111.0 # ~111 km per degree latitude
lon_delta = radius_km / (111.0 * np.cos(np.radians(center_lat)))
min_lat = center_lat - lat_delta
max_lat = center_lat + lat_delta
min_lon = center_lon - lon_delta
max_lon = center_lon + lon_delta
tiles = []
for lat in range(int(np.floor(min_lat)), int(np.floor(max_lat)) + 1):
for lon in range(int(np.floor(min_lon)), int(np.floor(max_lon)) + 1):
lat_letter = 'N' if lat >= 0 else 'S'
lon_letter = 'E' if lon >= 0 else 'W'
tile_name = f"{lat_letter}{abs(lat):02d}{lon_letter}{abs(lon):03d}"
tiles.append(tile_name)
return tiles
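The naming scheme used here (and in `get_tile_name`) follows the SRTM convention of naming each tile after its southwest corner. A minimal sketch with a hypothetical `tile_name` helper:

```python
import math

def tile_name(lat, lon):
    # SRTM tiles are named after their southwest corner
    lat_i, lon_i = math.floor(lat), math.floor(lon)
    ns = 'N' if lat_i >= 0 else 'S'
    ew = 'E' if lon_i >= 0 else 'W'
    return f"{ns}{abs(lat_i):02d}{ew}{abs(lon_i):03d}"

print(tile_name(48.2, 16.4))    # N48E016
print(tile_name(-1.3, -47.9))   # S02W048 (floor of negatives rounds down)
```

Using `floor` rather than `int` truncation is what makes southern/western coordinates resolve to the correct tile.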
def get_missing_tiles(self, center_lat: float, center_lon: float, radius_km: float) -> list:
"""Check which needed tiles are not available locally."""
required = self.get_required_tiles(center_lat, center_lon, radius_km)
return [t for t in required if not self.get_tile_path(t).exists()]
async def get_elevation_profile(
self,
@@ -272,6 +436,38 @@ class TerrainService:
total = sum(f.stat().st_size for f in self.terrain_path.glob("*.hgt"))
return total / (1024 * 1024)
def evict_disk_cache(self, max_size_mb: float = 2048.0):
"""LRU eviction of .hgt files when disk cache exceeds max_size_mb.
Deletes the oldest-accessed files until total size is under the limit.
"""
hgt_files = list(self.terrain_path.glob("*.hgt"))
if not hgt_files:
return
total = sum(f.stat().st_size for f in hgt_files)
if total / (1024 * 1024) <= max_size_mb:
return
# Sort by access time (oldest first)
hgt_files.sort(key=lambda f: f.stat().st_atime)
evicted = 0
for f in hgt_files:
if total / (1024 * 1024) <= max_size_mb:
break
fsize = f.stat().st_size
# Remove from memory cache if loaded
stem = f.stem
self._tile_cache.pop(stem, None)
f.unlink()
total -= fsize
evicted += 1
if evicted:
print(f"[Terrain] Evicted {evicted} tiles, "
f"cache now {total / (1024 * 1024):.0f} MB")
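The access-time-ordered eviction loop above can be sketched independently. This is a simplified stand-in (hypothetical `evict_lru`, operating on throwaway files in a temp directory), not the service method itself:

```python
import pathlib
import tempfile

def evict_lru(dir_path, max_bytes):
    # Delete oldest-accessed .hgt files until total size fits the budget
    files = sorted(pathlib.Path(dir_path).glob("*.hgt"),
                   key=lambda f: f.stat().st_atime)
    total = sum(f.stat().st_size for f in files)
    for f in files:
        if total <= max_bytes:
            break
        total -= f.stat().st_size
        f.unlink()
    return total

d = tempfile.mkdtemp()
for name in ("N48E014", "N48E015", "N48E016"):
    (pathlib.Path(d) / f"{name}.hgt").write_bytes(b"\x00" * 10)
remaining = evict_lru(d, max_bytes=15)
print(remaining)  # 10 — two tiles evicted, one kept
```

Note that on filesystems mounted with `noatime`, access times may not reflect real usage, so atime-based LRU is only an approximation.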
@staticmethod
def haversine_distance(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
"""Calculate distance between two points in meters"""

View File

@@ -0,0 +1,142 @@
"""
Tile-based processing for large radius coverage calculations.
When radius > 10km, the coverage circle is split into 5km sub-tiles.
Each tile is processed independently — OSM data and terrain are loaded
per-tile and freed between tiles, keeping peak RAM usage bounded.
Usage:
from app.services.tile_processor import (
generate_tile_grid, partition_grid_to_tiles,
TILE_THRESHOLD_M, get_adaptive_worker_count,
)
if radius_m > TILE_THRESHOLD_M:
tiles = generate_tile_grid(center_lat, center_lon, radius_m)
tile_grids = partition_grid_to_tiles(grid, tiles)
"""
import math
from dataclasses import dataclass
from typing import List, Tuple, Dict
# Use tiled processing for radius above this threshold
TILE_THRESHOLD_M = 10000 # 10 km
# Default tile size — 5km balances overhead vs memory usage
DEFAULT_TILE_SIZE_M = 5000 # 5 km
@dataclass
class Tile:
"""A rectangular sub-tile of the coverage area."""
bbox: Tuple[float, float, float, float] # (min_lat, min_lon, max_lat, max_lon)
index: Tuple[int, int] # (row, col) in tile grid
def generate_tile_grid(
center_lat: float,
center_lon: float,
radius_m: float,
tile_size_m: float = DEFAULT_TILE_SIZE_M,
) -> List[Tile]:
"""Generate grid of tiles covering the coverage circle.
Only includes tiles that actually intersect the coverage circle.
Tiles are ordered row-by-row from SW to NE.
"""
cos_lat = math.cos(math.radians(center_lat))
# Full coverage area in degrees
lat_delta = radius_m / 111000
lon_delta = radius_m / (111000 * cos_lat)
# Number of tiles along each axis
n_tiles = max(1, math.ceil(radius_m * 2 / tile_size_m))
# Tile size in degrees
tile_lat = (2 * lat_delta) / n_tiles
tile_lon = (2 * lon_delta) / n_tiles
base_lat = center_lat - lat_delta
base_lon = center_lon - lon_delta
tiles = []
for row in range(n_tiles):
for col in range(n_tiles):
min_lat = base_lat + row * tile_lat
max_lat = base_lat + (row + 1) * tile_lat
min_lon = base_lon + col * tile_lon
max_lon = base_lon + (col + 1) * tile_lon
bbox = (min_lat, min_lon, max_lat, max_lon)
if _tile_intersects_circle(bbox, center_lat, center_lon, radius_m, cos_lat):
tiles.append(Tile(bbox=bbox, index=(row, col)))
return tiles
def _tile_intersects_circle(
bbox: Tuple[float, float, float, float],
center_lat: float,
center_lon: float,
radius_m: float,
cos_lat: float,
) -> bool:
"""Check if tile bbox intersects the coverage circle.
Uses fast equirectangular approximation — tiles are small (5km)
so full haversine is unnecessary for intersection testing.
"""
min_lat, min_lon, max_lat, max_lon = bbox
# Closest point on bbox to circle center
closest_lat = max(min_lat, min(center_lat, max_lat))
closest_lon = max(min_lon, min(center_lon, max_lon))
# Approximate distance in meters (equirectangular)
dlat = (closest_lat - center_lat) * 111000
dlon = (closest_lon - center_lon) * 111000 * cos_lat
dist_sq = dlat * dlat + dlon * dlon
return dist_sq <= radius_m * radius_m
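The closest-point test above is the standard rectangle-circle intersection check under an equirectangular approximation. A self-contained version for verification (standalone helper, not the module function):

```python
import math

def tile_intersects_circle(bbox, clat, clon, radius_m):
    # Clamp the circle center into the bbox to find the closest point,
    # then compare its equirectangular distance against the radius
    min_lat, min_lon, max_lat, max_lon = bbox
    cos_lat = math.cos(math.radians(clat))
    closest_lat = max(min_lat, min(clat, max_lat))
    closest_lon = max(min_lon, min(clon, max_lon))
    dlat = (closest_lat - clat) * 111000
    dlon = (closest_lon - clon) * 111000 * cos_lat
    return dlat * dlat + dlon * dlon <= radius_m * radius_m

# A tile containing the center always intersects
print(tile_intersects_circle((48.0, 14.0, 48.05, 14.07), 48.02, 14.03, 5000))  # True
# A tile a full degree of latitude away (~111 km) does not
print(tile_intersects_circle((49.0, 14.0, 49.05, 14.07), 48.02, 14.03, 5000))  # False
```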
def get_adaptive_worker_count(radius_m: float, base_workers: int) -> int:
"""Scale down workers for large calculations to prevent combined memory explosion.
Large radius = more buildings per tile = more memory per worker.
Reducing workers keeps total worker memory bounded.
"""
if radius_m > 30000:
return min(base_workers, 2)
elif radius_m > 20000:
return min(base_workers, 3)
elif radius_m > 10000:
return min(base_workers, 4)
return base_workers
def partition_grid_to_tiles(
grid: List[Tuple[float, float]],
tiles: List[Tile],
) -> Dict[Tuple[int, int], List[Tuple[float, float]]]:
"""Partition grid points into tiles by bbox containment.
Returns dict mapping tile index -> list of (lat, lon) points.
Points on tile boundaries are assigned to the first matching tile.
"""
tile_grids: Dict[Tuple[int, int], List[Tuple[float, float]]] = {
t.index: [] for t in tiles
}
for lat, lon in grid:
for tile in tiles:
min_lat, min_lon, max_lat, max_lon = tile.bbox
if min_lat <= lat <= max_lat and min_lon <= lon <= max_lon:
tile_grids[tile.index].append((lat, lon))
break
return tile_grids


@@ -21,6 +21,11 @@ class VegetationArea(BaseModel):
geometry: List[Tuple[float, float]] # [(lon, lat), ...]
vegetation_type: str # forest, wood, scrub, orchard
density: str # dense, sparse, mixed
# Bounding box for fast rejection (computed from geometry)
min_lat: float = 0.0
max_lat: float = 0.0
min_lon: float = 0.0
max_lon: float = 0.0
class VegetationCache:
@@ -127,7 +132,24 @@ class VegetationService:
cached = self.cache.get(min_lat, min_lon, max_lat, max_lon)
if cached is not None:
print(f"[Vegetation] Cache hit for bbox")
areas = [VegetationArea(**v) for v in cached]
areas = []
for v in cached:
area = VegetationArea(**v)
# Recompute bbox if missing (backward compat with old cache)
if area.min_lat == 0.0 and area.max_lat == 0.0 and area.geometry:
lons = [p[0] for p in area.geometry]
lats = [p[1] for p in area.geometry]
area = VegetationArea(
id=area.id,
geometry=area.geometry,
vegetation_type=area.vegetation_type,
density=area.density,
min_lat=min(lats),
max_lat=max(lats),
min_lon=min(lons),
max_lon=max(lons),
)
areas.append(area)
self._memory_cache[cache_key] = areas
return areas
@@ -205,11 +227,19 @@ class VegetationService:
leaf_type = tags.get("leaf_type", "mixed")
density = "dense" if leaf_type == "needleleaved" else "mixed"
# Compute bounding box from geometry (lon, lat tuples)
lons = [p[0] for p in geometry]
lats = [p[1] for p in geometry]
areas.append(VegetationArea(
id=element["id"],
geometry=geometry,
vegetation_type=veg_type,
density=density
density=density,
min_lat=min(lats),
max_lat=max(lats),
min_lon=min(lons),
max_lon=max(lons),
))
return areas
@@ -260,8 +290,12 @@ class VegetationService:
lat: float, lon: float,
areas: List[VegetationArea]
) -> Optional[VegetationArea]:
"""Check if point is in vegetation area"""
"""Check if point is in vegetation area (with bbox pre-filter)"""
for area in areas:
# Quick bbox reject - skips 95%+ of polygons
if not (area.min_lat <= lat <= area.max_lat and
area.min_lon <= lon <= area.max_lon):
continue
if self._point_in_polygon(lat, lon, area.geometry):
return area
return None


@@ -0,0 +1,8 @@
# Development and testing dependencies
# Install with: pip install -r requirements-dev.txt
pytest>=7.0.0
pytest-asyncio>=0.21.0
httpx>=0.27.0
ruff>=0.1.0
mypy>=1.7.0


@@ -0,0 +1,10 @@
# NVIDIA GPU acceleration via CuPy
# Install with: pip install -r requirements-gpu-nvidia.txt
#
# Choose ONE based on your CUDA version:
# - cupy-cuda12x for CUDA 12.x (RTX 30xx, 40xx, newer)
# - cupy-cuda11x for CUDA 11.x (older cards)
#
# CuPy bundles CUDA runtime (~700 MB) - no separate CUDA install needed
cupy-cuda12x>=13.0.0


@@ -0,0 +1,14 @@
# Intel/AMD GPU acceleration via PyOpenCL
# Install with: pip install -r requirements-gpu-opencl.txt
#
# Works with:
# - Intel UHD/Iris Graphics (integrated)
# - AMD Radeon (discrete)
# - NVIDIA GPUs (alternative to CUDA)
#
# Requires OpenCL runtime:
# - Intel: Intel GPU Computing Runtime
# - AMD: AMD Adrenalin driver (includes OpenCL)
# - NVIDIA: NVIDIA driver (includes OpenCL)
pyopencl>=2023.1


@@ -7,6 +7,7 @@ pymongo==4.6.1
pydantic-settings==2.1.0
numpy==1.26.4
scipy==1.12.0
shapely>=2.0.0
requests==2.31.0
httpx==0.27.0
aiosqlite>=0.19.0


@@ -29,7 +29,23 @@ if getattr(sys, 'frozen', False):
print(f"[RFCP] Frozen mode, base dir: {base_dir}", flush=True)
# Fix uvicorn TTY detection — redirect None streams to a log file
log_path = os.path.join(base_dir, 'rfcp-server.log')
# Use RFCP_LOG_PATH from Electron, or fallback to user-writable location
log_dir = os.environ.get('RFCP_LOG_PATH')
if not log_dir:
if sys.platform == 'win32':
appdata = os.environ.get('APPDATA', os.path.expanduser('~'))
log_dir = os.path.join(appdata, 'rfcp-desktop', 'logs')
else:
log_dir = os.path.join(os.path.expanduser('~'), '.rfcp', 'logs')
try:
os.makedirs(log_dir, exist_ok=True)
log_path = os.path.join(log_dir, 'rfcp-server.log')
except Exception:
# Fallback to temp directory if all else fails
import tempfile
log_path = os.path.join(tempfile.gettempdir(), 'rfcp-server.log')
log_file = open(log_path, 'w')
if sys.stdout is None:
sys.stdout = log_file


@@ -52,9 +52,11 @@ const getLogPath = () => {
const getBackendExePath = () => {
const exeName = process.platform === 'win32' ? 'rfcp-server.exe' : 'rfcp-server';
if (isDev) {
return path.join(__dirname, '..', 'backend', exeName);
// Dev: use the ONEDIR build output
return path.join(__dirname, '..', 'backend', 'dist', 'rfcp-server', exeName);
}
return getResourcePath('backend', exeName);
// Production: ONEDIR structure - backend/rfcp-server/rfcp-server.exe
return getResourcePath('backend', 'rfcp-server', exeName);
};
/** Frontend index.html path (production only) */


@@ -0,0 +1,233 @@
# RFCP Native Backend Research
## Executive Summary
**Finding:** The production Electron app already supports native Windows operation without WSL2.
The production build uses PyInstaller to bundle the Python backend as a standalone Windows executable (`rfcp-server.exe`). WSL2 is only used during development. No migration is needed for end users.
---
## Current Architecture
### Development Mode
```
RFCP (Electron dev)
└── Spawns: python -m uvicorn app.main:app --host 127.0.0.1 --port 8090
└── Uses system Python (Windows or WSL2)
└── Requires venv with dependencies
```
### Production Mode (Already Implemented)
```
RFCP.exe (Electron packaged)
└── Spawns: rfcp-server.exe (bundled PyInstaller binary)
└── Self-contained Python + all dependencies
└── No WSL2 required
└── No system Python required
```
---
## Evidence from Codebase
### desktop/main.js (Lines 120-145)
```javascript
function startBackend() {
// Production: use bundled executable
if (isProduction) {
const serverPath = path.join(process.resourcesPath, 'rfcp-server.exe');
if (fs.existsSync(serverPath)) {
backendProcess = spawn(serverPath, [], { ... });
return;
}
}
// Development: use system Python
backendProcess = spawn('python', ['-m', 'uvicorn', 'app.main:app', ...]);
}
```
### installer/rfcp-server.spec (PyInstaller Config)
```python
# Key configuration
a = Analysis(
['run_server.py'],
pathex=[backend_path],
binaries=[],
datas=[
('data/terrain', 'data/terrain'), # Terrain data bundled
],
hiddenimports=[
'uvicorn.logging', 'uvicorn.loops', 'uvicorn.protocols',
'motor', 'pymongo', 'numpy', 'scipy', 'shapely',
# Full list of dependencies
],
)
exe = EXE(
pyz,
a.scripts,
name='rfcp-server',
console=True, # Shows console for debugging
icon='rfcp.ico',
)
```
---
## GPU Acceleration in Production
### Current Status
The PyInstaller bundle **does not include CuPy** by default because:
1. CuPy requires CUDA runtime (large, ~500MB)
2. Not all users have NVIDIA GPUs
3. Binary would be too large for distribution
### Solution Options
**Option A: Ship CPU-only (Current)**
- Production build uses NumPy (CPU) for calculations
- GPU acceleration available only in dev mode or manual install
- Smallest download size (~100MB)
**Option B: Separate GPU Installer**
- Main installer: CPU-only (~100MB)
- Optional GPU addon: Downloads CuPy + CUDA runtime (~600MB)
- Implemented via install_rfcp.py dependency installer
**Option C: CUDA Toolkit Detection**
- Detect if CUDA is already installed on user's system
- If yes, attempt to load CuPy dynamically
- Graceful fallback to NumPy if not available
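A minimal sketch of this option, assuming only the two libraries themselves (`xp`, `GPU_AVAILABLE`, and `backend_name` are illustrative names, not existing app code):

```python
# Illustrative only: use CuPy when importable, otherwise fall back to NumPy.
try:
    import cupy as xp  # GPU-backed arrays (needs a working CUDA runtime)
    GPU_AVAILABLE = True
except ImportError:
    import numpy as xp  # CPU fallback with the same array API surface
    GPU_AVAILABLE = False

def backend_name() -> str:
    """Report which array backend was selected at import time."""
    return "cupy" if GPU_AVAILABLE else "numpy"
```

Code that sticks to the shared NumPy/CuPy API (`xp.log10`, `xp.clip`, ...) then runs unmodified on either backend.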
### Recommendation
Keep Option A (CPU-only production) with Option B available for power users:
1. Default production build works everywhere
2. Users with NVIDIA GPUs can run `install_rfcp.py` to enable GPU acceleration
3. No WSL2 required for either path
---
## Terrain Data Handling
### Current Implementation
Terrain data (SRTM .hgt files) is bundled inside the PyInstaller executable:
```python
datas=[
('data/terrain', 'data/terrain'),
]
```
### Considerations
- Bundled terrain data increases exe size significantly
- Alternative: Download terrain on first use (like current region download system)
- For initial release, bundling common regions is acceptable
---
## Database (MongoDB)
### Production Architecture
The Electron app must either embed MongoDB or rely on a separately installed MongoDB instance.
Options:
1. **Embedded MongoDB** - Ships mongod.exe with the app
2. **MongoDB Atlas** - Cloud database (requires internet)
3. **SQLite** - Switch to file-based database (significant refactor)
4. **In-memory + file persistence** - No MongoDB required (significant refactor)
The current implementation uses Motor (an async MongoDB driver). For true standalone operation, consider a SQLite migration in a future iteration.
---
## Build Process
### Current Build Commands
```bash
# Build backend executable
cd /mnt/d/root/rfcp/backend
pyinstaller ../installer/rfcp-server.spec
# Build Electron app with bundled backend
cd /mnt/d/root/rfcp/installer
./build-win.sh
```
### Output
- `rfcp-server.exe` - Standalone backend (~80MB)
- `RFCP-Setup-{version}.exe` - Full installer with Electron + backend (~150MB)
---
## Testing Native Build
### Test Procedure
1. Build `rfcp-server.exe` via PyInstaller
2. Run directly: `./rfcp-server.exe`
3. Verify API responds: `curl http://localhost:8090/api/health`
4. Verify coverage calculation works
5. Check GPU detection in logs
### Known Issues
1. **Slow startup**: onefile PyInstaller builds extract to a temp directory on launch (~5-10 seconds); the onedir layout avoids this
2. **Antivirus false positives**: Some AV flags PyInstaller executables
3. **Console window**: Shows black console (use `console=False` for windowless)
---
## Conclusions
### No Migration Needed
The production Electron app already works without WSL2. The current architecture is:
- ✅ Native Windows executable
- ✅ No Python installation required
- ✅ No WSL2 required
- ✅ Self-contained dependencies
### Development vs Production
| Aspect | Development | Production |
|--------|-------------|------------|
| Python | System Python / venv | Bundled via PyInstaller |
| WSL2 | Optional (for testing) | Not required |
| GPU | CuPy if installed | CPU-only (NumPy) |
| MongoDB | Local instance | Embedded or Atlas |
| Terrain | Local data/ folder | Bundled in exe |
### Remaining Work
1. **GPU for production**: Implement Optional GPU addon installer
2. **Smaller package**: On-demand terrain download instead of bundling
3. **Database portability**: Consider SQLite migration for offline-first
4. **Installer polish**: Signed executables, auto-update support
---
## Appendix: Full PyInstaller Hidden Imports
From `installer/rfcp-server.spec`:
```python
hiddenimports=[
'uvicorn.logging',
'uvicorn.loops',
'uvicorn.loops.auto',
'uvicorn.protocols',
'uvicorn.protocols.http',
'uvicorn.protocols.http.auto',
'uvicorn.protocols.websockets',
'uvicorn.protocols.websockets.auto',
'uvicorn.lifespan',
'uvicorn.lifespan.on',
'motor',
'pymongo',
'numpy',
'scipy',
'shapely',
'shapely.geometry',
'shapely.ops',
# ... additional imports
]
```


@@ -0,0 +1,463 @@
# RFCP — Iteration 3.10: Link Budget, Fresnel Zone & Interference Modeling
## Overview
Add three interconnected RF analysis features: link budget calculator panel, Fresnel zone visualization on terrain profiles, and basic interference (C/I) modeling for multi-site scenarios. These build on existing infrastructure — propagation models, terrain profiles, and multi-site coverage.
## Priority Order
1. Link Budget Calculator (simplest, standalone UI)
2. Fresnel Zone Visualization (extends terrain profile)
3. Interference Modeling (extends coverage engine)
---
## Feature 1: Link Budget Calculator
### Description
A panel/dialog that shows the complete RF link budget as a table — from transmitter to receiver. Uses existing propagation model values but presents them in the standard telecom link budget format.
### Implementation
**New component:** `frontend/src/components/panels/LinkBudgetPanel.tsx`
The panel should display a table with rows for each element in the link chain. It should use the currently selected site's parameters and a configurable receiver point (either clicked on map or manually entered coordinates).
**Link Budget Table Structure:**
```
TRANSMITTER
Tx Power (dBm) [from site config, e.g. 43 dBm]
Tx Antenna Gain (dBi) [from site config, e.g. 18 dBi]
Tx Cable/Connector Loss (dB) [new field, default 2 dB]
EIRP (dBm) = Tx Power + Gain - Cable Loss
PATH
Distance (km) [calculated from Tx to Rx point]
Free Space Path Loss (dB) [existing formula: 20log(d) + 20log(f) + 32.45]
Terrain Diffraction Loss (dB) [from terrain_los model if available]
Vegetation Loss (dB) [from vegetation model if available]
Atmospheric Loss (dB) [from atmospheric model if available]
Total Path Loss (dB) = sum of all path losses
RECEIVER
Rx Antenna Gain (dBi) [configurable, default 0 dBi for handset]
Rx Cable Loss (dB) [configurable, default 0 dB]
Rx Sensitivity (dBm) [configurable, default -100 dBm]
RESULT
Received Power (dBm) = EIRP - Total Path Loss + Rx Gain - Rx Cable
Link Margin (dB) = Received Power - Rx Sensitivity
Status = "OK" if margin >= 0 else "FAIL"
```
**Backend addition:** Add a new endpoint or extend existing coverage API.
**File:** `backend/app/api/routes/coverage.py` (or new `link_budget.py`)
```python
@router.post("/api/link-budget")
async def calculate_link_budget(request: dict):
"""Calculate point-to-point link budget.
Body: {
"site_id": "...", # or tx_lat/tx_lon/tx_params
"tx_lat": 48.46,
"tx_lon": 35.04,
"tx_power_dbm": 43,
"tx_gain_dbi": 18,
"tx_cable_loss_db": 2,
"tx_height_m": 30,
"rx_lat": 48.50,
"rx_lon": 35.10,
"rx_gain_dbi": 0,
"rx_cable_loss_db": 0,
"rx_sensitivity_dbm": -100,
"rx_height_m": 1.5,
"frequency_mhz": 1800
}
"""
    import math
    from app.services.terrain_service import terrain_service
# Calculate distance
distance_m = terrain_service.haversine_distance(
request["tx_lat"], request["tx_lon"],
request["rx_lat"], request["rx_lon"]
)
distance_km = distance_m / 1000
# Get elevations
tx_elev = await terrain_service.get_elevation(request["tx_lat"], request["tx_lon"])
rx_elev = await terrain_service.get_elevation(request["rx_lat"], request["rx_lon"])
# EIRP
eirp_dbm = request["tx_power_dbm"] + request["tx_gain_dbi"] - request["tx_cable_loss_db"]
# Free space path loss
freq = request["frequency_mhz"]
    fspl_db = (20 * math.log10(distance_km) + 20 * math.log10(freq) + 32.45) if distance_km > 0 else 0
# Terrain profile for LOS check
profile = await terrain_service.get_elevation_profile(
request["tx_lat"], request["tx_lon"],
request["rx_lat"], request["rx_lon"],
num_points=100
)
# Simple LOS check - does terrain block line of sight?
tx_total_height = tx_elev + request.get("tx_height_m", 30)
rx_total_height = rx_elev + request.get("rx_height_m", 1.5)
terrain_loss_db = 0
los_clear = True
for i, point in enumerate(profile):
if i == 0 or i == len(profile) - 1:
continue
# Linear interpolation of LOS line at this point
fraction = i / (len(profile) - 1)
los_height = tx_total_height + fraction * (rx_total_height - tx_total_height)
if point["elevation"] > los_height:
los_clear = False
# Simple knife-edge diffraction estimate
terrain_loss_db += 6 # ~6dB per obstruction (simplified)
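            # A fuller alternative to the flat 6 dB (assumption, not current code):
            # ITU-R P.526 single knife-edge loss, with h = obstruction height above the LOS line:
            #   v = h * sqrt(2 * (d1 + d2) / (wavelength * d1 * d2))
            #   J(v) ≈ 6.9 + 20 * log10(sqrt((v - 0.1)**2 + 1) + v - 0.1), valid for v > -0.78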
total_path_loss = fspl_db + terrain_loss_db
# Received power
rx_power_dbm = eirp_dbm - total_path_loss + request["rx_gain_dbi"] - request["rx_cable_loss_db"]
# Link margin
margin_db = rx_power_dbm - request["rx_sensitivity_dbm"]
return {
"distance_km": round(distance_km, 2),
"distance_m": round(distance_m, 1),
"tx_elevation_m": round(tx_elev, 1),
"rx_elevation_m": round(rx_elev, 1),
"eirp_dbm": round(eirp_dbm, 1),
"fspl_db": round(fspl_db, 1),
"terrain_loss_db": round(terrain_loss_db, 1),
"total_path_loss_db": round(total_path_loss, 1),
"los_clear": los_clear,
"rx_power_dbm": round(rx_power_dbm, 1),
"margin_db": round(margin_db, 1),
"status": "OK" if margin_db >= 0 else "FAIL",
"profile": profile,
}
```
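As a quick sanity check of the arithmetic above (a standalone sketch, not app code; numbers taken from the request example):

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    # Same free-space formula as the endpoint: 20log(d) + 20log(f) + 32.45
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.45

loss = fspl_db(10, 1800)               # ~117.6 dB for a 10 km, 1800 MHz path
eirp = 43 + 18 - 2                     # Tx power + antenna gain - cable loss = 59 dBm
margin = eirp - loss + 0 - 0 - (-100)  # + Rx gain - Rx cable - Rx sensitivity
# With zero terrain loss the margin lands around +41 dB
```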
### UI Requirements
- New panel accessible from sidebar or toolbar button (calculator icon)
- Click on map to set Rx point (with crosshair cursor)
- Auto-populates Tx params from selected site
- Shows result table with color coding (green margin = OK, red = FAIL)
- Optionally draws line on map from Tx to Rx
---
## Feature 2: Fresnel Zone Visualization
### Description
Draw Fresnel zone ellipse overlay on the Terrain Profile chart, showing where terrain intrudes into the first Fresnel zone. This is critical for understanding if a radio link will actually work — even if terrain doesn't block direct LOS, Fresnel zone obstruction causes significant signal loss.
### Implementation
**Modify:** The existing Terrain Profile component/chart
**Fresnel Zone Radius Formula:**
```python
import math
def fresnel_radius(n: int, frequency_mhz: float, d1_m: float, d2_m: float) -> float:
"""Calculate nth Fresnel zone radius at a point along the path.
Args:
n: Fresnel zone number (1 = first zone, most important)
frequency_mhz: Frequency in MHz
d1_m: Distance from transmitter to this point (meters)
d2_m: Distance from this point to receiver (meters)
Returns:
Radius of nth Fresnel zone in meters
"""
wavelength = 300.0 / frequency_mhz # meters
d_total = d1_m + d2_m
if d_total == 0:
return 0
radius = math.sqrt((n * wavelength * d1_m * d2_m) / d_total)
return radius
```
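For orientation, a standalone check of the zone size (the function restated so the snippet runs on its own): at the midpoint of a 10 km path at 1800 MHz the first-zone radius comes out near 20 m.

```python
import math

def fresnel_radius(n: int, frequency_mhz: float, d1_m: float, d2_m: float) -> float:
    # Same formula as above
    wavelength = 300.0 / frequency_mhz
    d_total = d1_m + d2_m
    if d_total == 0:
        return 0.0
    return math.sqrt((n * wavelength * d1_m * d2_m) / d_total)

r1_mid = fresnel_radius(1, 1800, 5000, 5000)   # ~20.4 m at the path midpoint
r1_near = fresnel_radius(1, 1800, 1000, 9000)  # narrower away from the midpoint
```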
**Backend endpoint:** `backend/app/api/routes/coverage.py`
```python
@router.post("/api/fresnel-profile")
async def fresnel_profile(request: dict):
"""Calculate terrain profile with Fresnel zone boundaries.
Body: {
"tx_lat": 48.46, "tx_lon": 35.04, "tx_height_m": 30,
"rx_lat": 48.50, "rx_lon": 35.10, "rx_height_m": 1.5,
"frequency_mhz": 1800,
"num_points": 100
}
"""
from app.services.terrain_service import terrain_service
tx_lat, tx_lon = request["tx_lat"], request["tx_lon"]
rx_lat, rx_lon = request["rx_lat"], request["rx_lon"]
tx_height = request.get("tx_height_m", 30)
rx_height = request.get("rx_height_m", 1.5)
freq = request.get("frequency_mhz", 1800)
num_points = request.get("num_points", 100)
# Get terrain profile
profile = await terrain_service.get_elevation_profile(
tx_lat, tx_lon, rx_lat, rx_lon, num_points
)
total_distance = profile[-1]["distance"] if profile else 0
# Get endpoint elevations
tx_elev = profile[0]["elevation"] if profile else 0
rx_elev = profile[-1]["elevation"] if profile else 0
tx_total = tx_elev + tx_height
rx_total = rx_elev + rx_height
wavelength = 300.0 / freq # meters
# Calculate Fresnel zone at each profile point
fresnel_data = []
los_blocked = False
fresnel_blocked = False
worst_clearance = float('inf')
for i, point in enumerate(profile):
d1 = point["distance"] # distance from tx
d2 = total_distance - d1 # distance to rx
# LOS height at this point (linear interpolation)
if total_distance > 0:
fraction = d1 / total_distance
else:
fraction = 0
los_height = tx_total + fraction * (rx_total - tx_total)
# First Fresnel zone radius
if d1 > 0 and d2 > 0 and total_distance > 0:
f1_radius = math.sqrt((1 * wavelength * d1 * d2) / total_distance)
else:
f1_radius = 0
# Fresnel zone boundaries (height above sea level)
fresnel_top = los_height + f1_radius
fresnel_bottom = los_height - f1_radius
# Clearance: how much space between terrain and Fresnel bottom
clearance = fresnel_bottom - point["elevation"]
if clearance < worst_clearance:
worst_clearance = clearance
if point["elevation"] > los_height:
los_blocked = True
if point["elevation"] > fresnel_bottom:
fresnel_blocked = True
fresnel_data.append({
"distance": point["distance"],
"lat": point["lat"],
"lon": point["lon"],
"terrain_elevation": point["elevation"],
"los_height": round(los_height, 1),
"fresnel_top": round(fresnel_top, 1),
"fresnel_bottom": round(fresnel_bottom, 1),
"f1_radius": round(f1_radius, 1),
"clearance": round(clearance, 1),
})
return {
"profile": fresnel_data,
"total_distance_m": round(total_distance, 1),
"tx_elevation": round(tx_elev, 1),
"rx_elevation": round(rx_elev, 1),
"frequency_mhz": freq,
"wavelength_m": round(wavelength, 4),
"los_clear": not los_blocked,
"fresnel_clear": not fresnel_blocked,
"worst_clearance_m": round(worst_clearance, 1),
"recommendation": (
"Clear — excellent link" if not fresnel_blocked
else "Fresnel zone partially blocked — expect 3-6 dB additional loss"
if not los_blocked
else "LOS blocked — significant diffraction loss expected"
),
}
```
### Frontend Visualization
On the existing Terrain Profile chart:
- Draw the LOS line (straight line from Tx to Rx) — this may already exist
- Draw first Fresnel zone as a **semi-transparent elliptical area** around the LOS line
- Upper boundary = `fresnel_top` series
- Lower boundary = `fresnel_bottom` series
- Color: light blue with ~20% opacity
- Where terrain intersects Fresnel zone, highlight in red/orange
- Show clearance info in the profile tooltip
- Add a summary badge: "LOS Clear ✓" / "Fresnel 60% Clear ⚠" / "LOS Blocked ✗"
---
## Feature 3: Interference Modeling (C/I Ratio)
### Description
Add carrier-to-interference ratio calculation to the coverage engine. For each grid point, calculate the C/I ratio: the signal from the serving cell vs the sum of signals from all other cells on the same frequency. Display as a separate heatmap layer.
### Implementation
**Backend changes:**
**File:** `backend/app/services/coverage_service.py` (or gpu_service.py)
Add C/I calculation after existing coverage computation:
```python
def calculate_interference(self, sites: list, coverage_results: dict) -> np.ndarray:
"""Calculate C/I ratio for each grid point.
For each point:
- C = signal strength from strongest (serving) cell
- I = sum of signal strengths from all other co-frequency cells
- C/I = C - 10*log10(sum of linear interference powers)
Returns array of C/I values in dB.
"""
# Get all RSRP grids (already calculated)
# For each point, find:
# 1. Best server (strongest signal) = C
# 2. Sum of all others on same frequency = I
# 3. C/I = C(dBm) - I(dBm)
# Group sites by frequency
freq_groups = {}
for site in sites:
freq = site.get("frequency_mhz", 1800)
if freq not in freq_groups:
freq_groups[freq] = []
freq_groups[freq].append(site)
# Only calculate interference for frequency groups with 2+ sites
# For single-site frequencies, C/I = infinity (no interference)
# The RSRP values are already in dBm, need to convert to linear for summing
# P_linear = 10^(P_dBm / 10)
# I_total_linear = sum(P_linear for all interferers)
# I_total_dBm = 10 * log10(I_total_linear)
# C/I = C_dBm - I_total_dBm
pass
```
**Key algorithm (for GPU pipeline in gpu_service.py):**
```python
# After computing RSRP for all sites at all grid points:
# rsrp_grid shape: (num_sites, num_points) in dBm
# Convert to linear (mW)
rsrp_linear = 10 ** (rsrp_grid / 10.0) # CuPy array
# For each point, best server
best_server_idx = cp.argmax(rsrp_grid, axis=0)
best_rsrp_linear = cp.take_along_axis(rsrp_linear, best_server_idx[cp.newaxis, :], axis=0)[0]
# Total power from all sites
total_power = cp.sum(rsrp_linear, axis=0)
# Interference = total - serving
interference_linear = total_power - best_rsrp_linear
# C/I ratio in dB
# Avoid log10(0) with small epsilon
epsilon = 1e-30
ci_ratio_db = 10 * cp.log10(best_rsrp_linear / (interference_linear + epsilon))
# Clip to reasonable range
ci_ratio_db = cp.clip(ci_ratio_db, -20, 50)
```
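The same pipeline expressed with NumPy and runnable as a standalone check (`cp` swapped for `np`; the two-site RSRP values are invented test data):

```python
import numpy as np

# Shape (num_sites, num_points), RSRP in dBm (invented test values)
rsrp_grid = np.array([
    [-70.0, -90.0, -100.0],   # site A
    [-100.0, -90.0, -70.0],   # site B
])

rsrp_linear = 10 ** (rsrp_grid / 10.0)                 # dBm -> linear mW
best_idx = np.argmax(rsrp_grid, axis=0)                # serving site per point
best_linear = np.take_along_axis(rsrp_linear, best_idx[np.newaxis, :], axis=0)[0]
interference = np.sum(rsrp_linear, axis=0) - best_linear
ci_db = np.clip(10 * np.log10(best_linear / (interference + 1e-30)), -20, 50)
# Points 0 and 2: 30 dB dominance -> C/I = 30 dB; point 1: equal signals -> 0 dB
```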
### Frontend Visualization
- Add a toggle in the coverage controls: "Show: Signal (RSRP) | Interference (C/I)"
- C/I heatmap uses different color scale:
- Dark red: < 0 dB (interference dominant — no service)
- Orange: 0-10 dB (marginal)
- Yellow: 10-20 dB (acceptable)
- Green: 20-30 dB (good)
- Blue: > 30 dB (excellent, minimal interference)
- The C/I map only makes sense with 2+ sites on same frequency
- Show warning if all sites are on different frequencies (no co-channel interference)
### API Response Extension
Add `ci_ratio` field to coverage calculation response alongside existing `rsrp` values.
---
## Testing Checklist
### Link Budget
- [ ] Panel opens from toolbar/sidebar
- [ ] Click on map sets Rx point
- [ ] Tx parameters auto-populate from selected site
- [ ] Link budget table shows all rows correctly
- [ ] Margin calculation is correct (manual verification)
- [ ] Color coding: green for positive margin, red for negative
- [ ] Line drawn on map from Tx to Rx
### Fresnel Zone
- [ ] Terrain profile shows Fresnel zone overlay
- [ ] Fresnel ellipse is widest at midpoint (correct shape)
- [ ] Red highlighting where terrain enters Fresnel zone
- [ ] Summary shows LOS/Fresnel status
- [ ] Works at different frequencies (zone size changes with frequency)
- [ ] Clearance values are reasonable (first Fresnel zone at 1800 MHz over a 10 km path = ~20 m radius at midpoint)
### Interference
- [ ] C/I toggle appears when 2+ sites exist
- [ ] C/I heatmap renders with correct color scale
- [ ] Single-site scenario shows "no interference" or infinite C/I
- [ ] Two sites on same frequency show interference zones between them
- [ ] C/I values are reasonable (> 20 dB near serving cell, < 10 dB at cell edge)
## Build & Deploy
```bash
cd D:\root\rfcp
# Backend — just restart uvicorn (Python, no build)
cd backend
python -m uvicorn app.main:app --host 0.0.0.0 --port 8000 --reload
# Frontend — rebuild if UI components changed
cd frontend
npm run build
# Full installer rebuild if needed
# (use existing build script)
```
## Commit Message
```
feat(rf): add link budget, Fresnel zone, and interference modeling
- Add /api/link-budget endpoint with full path analysis
- Add /api/fresnel-profile endpoint with zone clearance calculation
- Add C/I ratio computation to GPU coverage pipeline
- Add LinkBudgetPanel frontend component
- Add Fresnel zone overlay to terrain profile chart
- Add C/I heatmap toggle alongside RSRP display
- Group interference by frequency for co-channel analysis
```
## Success Criteria
1. Link budget shows correct margin for known test case (Dnipro, 10km, 1800MHz)
2. Fresnel zone visually shows ellipse on terrain profile
3. Two co-frequency sites show interference pattern between them
4. All three features work with existing terrain data (no new downloads needed)
5. GPU pipeline performance not significantly degraded by C/I calculation


@@ -0,0 +1,210 @@
# RFCP — Iteration 3.10.1: UI/UX Bugfixes
## Overview
Four bugs found during 3.10 testing. All are frontend issues, no backend changes needed.
---
## Bug 1: Ruler places point when clicking Terrain Profile button
**Problem:** When Ruler mode is active and user clicks "Terrain Profile" button in the measurement overlay, it also places a ruler point on the map underneath. The click event propagates to the map.
**Fix:** Stop event propagation on the Terrain Profile button click handler. The Terrain Profile button (and any overlay UI elements) should call `e.stopPropagation()` to prevent the click from reaching the map layer.
Also review: any other UI overlays that sit on top of the map (Link Budget panel, coverage controls, etc.) should also stop propagation to prevent accidental ruler/site placement.
**Files to check:**
- MeasurementTool component (Terrain Profile button handler)
- Any overlay/popup components that sit on top of the Leaflet map
---
## Bug 2: Cursor should be default arrow, not hand; Ruler snap to site
**Problem A:** The map cursor shows as a grab/hand icon. Should be default arrow cursor for normal mode. Hand cursor should only appear when dragging the map.
**Fix A:** Set Leaflet map cursor styles:
```css
/* Default cursor */
.leaflet-container {
cursor: default !important;
}
/* Grabbing only when dragging */
.leaflet-container.leaflet-drag-target {
cursor: grabbing !important;
}
/* Crosshair for ruler mode */
.leaflet-container.ruler-mode {
cursor: crosshair !important;
}
/* Crosshair for RX point placement mode */
.leaflet-container.rx-placement-mode {
cursor: crosshair !important;
}
```
Apply CSS classes to the map container based on current mode. Remove Leaflet's default grab cursor.
**Problem B:** When using the ruler, it should be possible to snap the ruler start/end point exactly to a site (tower) location. Currently you have to eyeball it.
**Fix B:** When in ruler mode and clicking near a site marker (within ~20px), snap the ruler point to the exact site coordinates. This gives precise distance measurements from tower to any point.
```typescript
// In ruler click handler:
const SNAP_DISTANCE_PX = 20;
function findNearestSite(clickLatLng: L.LatLng, map: L.Map): Site | null {
const clickPoint = map.latLngToContainerPoint(clickLatLng);
let nearest: Site | null = null;
let minDist = Infinity;
for (const site of sites) {
const sitePoint = map.latLngToContainerPoint(L.latLng(site.lat, site.lon));
const dist = clickPoint.distanceTo(sitePoint);
if (dist < SNAP_DISTANCE_PX && dist < minDist) {
minDist = dist;
nearest = site;
}
}
return nearest;
}
// When placing ruler point:
const snappedSite = findNearestSite(clickLatLng, map);
if (snappedSite) {
// Use exact site coordinates
rulerPoint = L.latLng(snappedSite.lat, snappedSite.lon);
} else {
rulerPoint = clickLatLng;
}
```
---
## Bug 3: Link Budget Calculator text invisible + RX point not placed on map
**Problem A:** Text in Link Budget Calculator panel is black on dark background — invisible. The input fields and labels need light text color for dark theme.
**Fix A:** Ensure all text in LinkBudgetPanel uses light colors:
```css
/* All text in the panel should be light */
color: #e2e8f0; /* or whatever the app's light text color is */
/* Input fields */
input {
color: #e2e8f0;
background: #1e293b; /* dark input background */
border: 1px solid #475569;
}
/* Labels */
label {
color: #94a3b8; /* slightly muted for labels */
}
/* Values/results */
.result-value {
color: #f1f5f9; /* bright white for important values */
}
```
Check whether the panel uses Tailwind classes — if so, ensure `text-slate-200` or similar is applied to the container. The panel likely inherits the wrong text color or has hardcoded dark text.
**Problem B:** When clicking "Click on Map to Set RX Point" and then clicking on the map, the RX marker does not appear on the map. The coordinates might update in the fields but there's no visual indicator.
**Fix B:** When RX point is set:
1. Place a visible marker on the map at the RX location (use a different icon than the TX site — e.g., a small circle or pin in a different color like orange or blue)
2. Draw a dashed line from the TX site to the RX marker
3. The marker should be draggable to adjust position
4. When Link Budget panel is closed, remove the RX marker and line
```typescript
// RX marker icon (different from site markers)
const rxIcon = L.divIcon({
className: 'rx-marker',
html: '<div style="width: 12px; height: 12px; background: #f97316; border: 2px solid white; border-radius: 50%;"></div>',
iconSize: [12, 12],
iconAnchor: [6, 6],
});
// Place marker
const rxMarker = L.marker([rxLat, rxLon], { icon: rxIcon, draggable: true }).addTo(map);
// Dashed line from TX to RX
const linkLine = L.polyline([[txLat, txLon], [rxLat, rxLon]], {
color: '#f97316',
weight: 2,
dashArray: '8, 4',
opacity: 0.8,
}).addTo(map);
// Update on drag
rxMarker.on('drag', (e) => {
const pos = e.target.getLatLng();
linkLine.setLatLngs([[txLat, txLon], [pos.lat, pos.lng]]);
// Update Link Budget panel coordinates
updateRxCoordinates(pos.lat, pos.lng);
});
```
---
## Bug 4: Elevation color opacity not working
**Problem:** The opacity control for elevation/terrain colors on the map is not functioning. Adjusting the opacity slider has no effect on the terrain overlay visibility.
**Fix:** Check how the elevation overlay is rendered:
1. If it's a tile layer (Leaflet tile overlay), use `layer.setOpacity(value)`
2. If it's the topo map layer, the opacity needs to be applied to the correct layer reference
3. If it's the coverage heatmap opacity that's broken, check the canvas renderer opacity
The "Elev" button on the right toolbar likely toggles an elevation visualization. Find where this layer is created and ensure:
```typescript
// When opacity slider changes:
elevationLayer.setOpacity(opacityValue);
// Or if it's a canvas overlay:
const canvas = document.querySelector<HTMLElement>('.elevation-overlay');
if (canvas) {
canvas.style.opacity = String(opacityValue);
}
```
Also check: there may be TWO opacity controls being conflated:
- Coverage heatmap opacity (the RSRP colors)
- Terrain/elevation color overlay opacity (the topo colors)
Make sure each slider controls the correct layer.
---
## Testing Checklist
- [ ] Click Terrain Profile button with Ruler active — NO extra ruler point placed
- [ ] Default cursor is arrow, not hand
- [ ] Cursor changes to crosshair in Ruler mode
- [ ] Cursor changes to crosshair in RX placement mode
- [ ] Ruler snaps to site when clicking near tower marker
- [ ] Link Budget panel text is readable (light on dark)
- [ ] Clicking map in RX mode places visible orange marker
- [ ] Dashed line drawn from TX to RX
- [ ] RX marker removed when panel closes
- [ ] Elevation opacity slider actually changes overlay transparency
## Commit Message
```
fix(ui): resolve ruler propagation, cursor, link budget visibility, elevation opacity
- Stop click propagation on Terrain Profile button (prevents ruler point)
- Change default cursor to arrow, crosshair for tool modes
- Add ruler snap-to-site (20px threshold)
- Fix Link Budget panel text colors for dark theme
- Add RX marker and dashed line on map
- Fix elevation overlay opacity control binding
```
# RFCP — Iteration 3.10.2: Tool Mode System & Click Fixes
## Root Cause
All click-related bugs share one root cause: multiple features compete for the same map click event. Ruler, RX point placement, site placement, and terrain profile all listen to map clicks simultaneously. There's no centralized "active tool" state that prevents conflicts.
## Solution: Active Tool Mode
Create a single source of truth for which tool is currently active. Only the active tool receives map click events.
### Tool Modes (mutually exclusive):
```typescript
type ActiveTool =
| 'none' // Default — pan/zoom only, no click actions
| 'ruler' // Distance measurement, click to add points
| 'rx-placement' // Link Budget RX point, single click
| 'site-placement' // Place new site on map
```
### Implementation
**1. Add to app store (Zustand):**
```typescript
// In the main store or a new toolStore:
interface ToolState {
activeTool: ActiveTool;
setActiveTool: (tool: ActiveTool) => void;
clearTool: () => void;
}
const useToolStore = create<ToolState>((set) => ({
activeTool: 'none',
setActiveTool: (tool) => set({ activeTool: tool }),
clearTool: () => set({ activeTool: 'none' }),
}));
```
**2. Map click handler — single entry point:**
Replace all individual map click listeners with ONE handler:
```typescript
// In the main Map component:
map.on('click', (e: L.LeafletMouseEvent) => {
const { activeTool } = useToolStore.getState();
switch (activeTool) {
case 'ruler':
handleRulerClick(e);
break;
case 'rx-placement':
handleRxPlacement(e);
break;
case 'site-placement':
handleSitePlacement(e);
break;
case 'none':
default:
// No action on map click — just pan/zoom
break;
}
});
```
**3. Cursor changes based on active tool:**
```typescript
useEffect(() => {
const container = map.getContainer();
// Remove all tool cursors
container.classList.remove('ruler-mode', 'rx-placement-mode', 'site-placement-mode');
switch (activeTool) {
case 'ruler':
container.classList.add('ruler-mode');
break;
case 'rx-placement':
container.classList.add('rx-placement-mode');
break;
case 'site-placement':
container.classList.add('site-placement-mode');
break;
default:
// Default cursor (arrow)
break;
}
}, [activeTool]);
```
**4. CSS for cursors:**
```css
.leaflet-container {
cursor: default !important;
}
.leaflet-container.leaflet-dragging {
cursor: grabbing !important;
}
.leaflet-container.ruler-mode {
cursor: crosshair !important;
}
.leaflet-container.rx-placement-mode {
cursor: crosshair !important;
}
.leaflet-container.site-placement-mode {
cursor: cell !important;
}
```
**5. UI buttons toggle tool mode:**
```typescript
// Ruler button:
const handleRulerToggle = () => {
if (activeTool === 'ruler') {
clearTool(); // Toggle off
} else {
setActiveTool('ruler'); // Activate ruler, deactivate others
}
};
// Link Budget "Click on Map to Set RX Point" button:
const handleRxModeToggle = () => {
if (activeTool === 'rx-placement') {
clearTool();
} else {
setActiveTool('rx-placement');
}
};
```
**6. Auto-deactivation:**
- RX placement: deactivate after single click (point is set)
- Ruler: stays active until toggled off or right-click finishes
- Site placement: deactivate after placing site
---
## Fix: Ruler Snap to Site
In the ruler click handler, check proximity to existing sites:
```typescript
function handleRulerClick(e: L.LeafletMouseEvent) {
const map = e.target;
const clickPoint = map.latLngToContainerPoint(e.latlng);
const SNAP_THRESHOLD_PX = 20;
// Check all site markers
let snappedLatLng = e.latlng;
let snapped = false;
for (const site of sites) {
const siteLatLng = L.latLng(site.lat, site.lon);
const sitePoint = map.latLngToContainerPoint(siteLatLng);
const pixelDist = clickPoint.distanceTo(sitePoint);
if (pixelDist < SNAP_THRESHOLD_PX) {
snappedLatLng = siteLatLng;
snapped = true;
break;
}
}
// Add ruler point at snapped or original location
addRulerPoint(snappedLatLng);
// Optional: visual feedback for snap
if (snapped) {
// Brief highlight on the site marker
}
}
```
---
## Fix: RX Point Placement + Visual Marker
When in 'rx-placement' mode and map is clicked:
```typescript
function handleRxPlacement(e: L.LeafletMouseEvent) {
const { lat, lng } = e.latlng;
// Update Link Budget panel coordinates
setRxCoordinates(lat, lng);
// Place visible marker on map
if (rxMarkerRef.current) {
rxMarkerRef.current.setLatLng([lat, lng]);
} else {
rxMarkerRef.current = L.marker([lat, lng], {
icon: L.divIcon({
className: 'rx-point-marker',
html: `<div style="
width: 14px; height: 14px;
background: #f97316;
border: 2px solid #fff;
border-radius: 50%;
box-shadow: 0 0 6px rgba(249,115,22,0.6);
"></div>`,
iconSize: [14, 14],
iconAnchor: [7, 7],
}),
draggable: true,
}).addTo(map);
    // Update coords and TX-RX line on drag
    rxMarkerRef.current.on('drag', (ev) => {
      const pos = ev.target.getLatLng();
      setRxCoordinates(pos.lat, pos.lng);
      const tx = getSelectedSite();
      if (tx && linkLineRef.current) {
        linkLineRef.current.setLatLngs([[tx.lat, tx.lon], [pos.lat, pos.lng]]);
      }
    });
}
// Draw dashed line from TX to RX
const selectedSite = getSelectedSite();
if (selectedSite && linkLineRef.current) {
linkLineRef.current.setLatLngs([[selectedSite.lat, selectedSite.lon], [lat, lng]]);
} else if (selectedSite) {
linkLineRef.current = L.polyline(
[[selectedSite.lat, selectedSite.lon], [lat, lng]],
{ color: '#f97316', weight: 2, dashArray: '8,4', opacity: 0.8 }
).addTo(map);
}
// Deactivate RX placement mode (single click action)
clearTool();
}
// Cleanup when Link Budget panel closes:
function cleanupRxMarker() {
if (rxMarkerRef.current) {
rxMarkerRef.current.remove();
rxMarkerRef.current = null;
}
if (linkLineRef.current) {
linkLineRef.current.remove();
linkLineRef.current = null;
}
}
```
---
## Fix: Terrain Profile Click-Through
The Terrain Profile popup and its "Terrain Profile" trigger button must stop event propagation:
```typescript
// On the Terrain Profile button in the measurement overlay:
<button
onClick={(e) => {
e.stopPropagation();
e.preventDefault();
showTerrainProfile();
}}
onMouseDown={(e) => e.stopPropagation()}
onPointerDown={(e) => e.stopPropagation()}
>
Terrain Profile
</button>
// On the Terrain Profile popup container:
<div
className="terrain-profile-popup"
onClick={(e) => e.stopPropagation()}
onMouseDown={(e) => e.stopPropagation()}
onPointerDown={(e) => e.stopPropagation()}
>
{/* ... chart content ... */}
</div>
```
Also ensure the popup/panel has `pointer-events: auto` and is positioned with a high z-index above the map.
With the tool mode system in place, this becomes less critical since clicking terrain profile UI won't trigger ruler (ruler mode would be separate), but stopping propagation is still good practice.
---
## Fix: Default Cursor (Not Hand)
Override Leaflet's default grab cursor:
```css
/* Global override in the app's main CSS */
.leaflet-container {
cursor: default !important;
}
/* Only show grab when actually dragging */
.leaflet-container.leaflet-dragging,
.leaflet-container:active {
cursor: grabbing !important;
}
/* Remove grab cursor from interactive layers too */
.leaflet-interactive {
cursor: default !important;
}
/* Tool-specific cursors applied via JS class toggle */
.leaflet-container.ruler-mode {
  cursor: crosshair !important;
}
.leaflet-container.rx-placement-mode {
  cursor: crosshair !important;
}
.leaflet-container.site-placement-mode {
  cursor: cell !important;
}
```
---
## Testing Checklist
- [ ] Only ONE tool can be active at a time
- [ ] Activating Ruler deactivates RX placement and vice versa
- [ ] Default cursor is arrow (not hand/grab)
- [ ] Cursor changes to crosshair when Ruler is active
- [ ] Cursor changes to crosshair when RX placement is active
- [ ] Cursor shows grabbing only when dragging map
- [ ] Clicking Terrain Profile button does NOT place ruler point
- [ ] Clicking any UI panel/popup does NOT place ruler point
- [ ] Ruler point snaps to site marker when clicking within 20px
- [ ] RX point click places orange marker on map
- [ ] Dashed orange line appears from TX site to RX marker
- [ ] RX marker is draggable (updates coordinates in panel)
- [ ] RX marker removed when Link Budget panel closes
- [ ] Right-click finishes ruler measurement
## Commit Message
```
fix(tools): implement active tool mode system, fix click conflicts
- Add ActiveTool state (none/ruler/rx-placement/site-placement)
- Single map click handler dispatches to active tool only
- Fix cursor: default arrow, crosshair for tools, grabbing for drag
- Add ruler snap-to-site (20px threshold)
- Add RX marker with draggable orange dot and dashed line
- Stop event propagation on all UI overlays above map
- Clean up markers when panels close
```
# RFCP — Iteration 3.10.3: Calculator Shortcut & Ruler Limit
## Two small UX changes, no backend.
---
## 1. Link Budget Calculator — Quick Access Button
Move calculator access to a visible toolbar button, not buried in Map Tools panel.
**Location:** Top-left corner of the map, below the zoom controls (+/- buttons). Similar to how Fit, Reset, Topo, Grid, Ruler, Elev buttons are in the top-right.
**Implementation:**
Add a button to the left toolbar (or create a small floating button group):
```typescript
// Top-left button, below zoom controls
<button
className="map-tool-btn"
onClick={() => setShowLinkBudget(!showLinkBudget)}
title="Link Budget Calculator"
>
{/* Calculator icon — use an emoji or SVG */}
🔗 {/* or a small "LB" text label, or a calculator SVG icon */}
</button>
```
**Styling:** Same visual style as the right-side tool buttons (Fit, Reset, Topo, Grid, Ruler, Elev) — dark rounded rectangle with light text/icon.
**Position options (pick one):**
- **Option A:** Add to the RIGHT toolbar stack below "Elev" button — keeps all tools together
- **Option B:** Floating button top-left below zoom — separate but prominent
- **Option C:** Add to the measurement overlay bar (near the ruler distance display)
Recommend **Option A** — add "LB" or calculator icon button to the right toolbar stack, below Elev. Consistent with existing UI pattern.
Also: Remove the "Hide Link Budget Calculator" button from Map Tools panel (or keep it as secondary toggle — but primary access should be the toolbar button).
---
## 2. Ruler — Maximum 2 Points Only
**Problem:** Ruler currently allows unlimited points, creating a web of measurement lines. For RF point-to-point measurement, only 2 points make sense: start and end.
**Fix:** Limit ruler to exactly 2 points. When both points are placed, the measurement is complete. To start a new measurement, clicking again replaces the first point and clears the old measurement.
```typescript
// In the map click handler for ruler mode:
function handleRulerClick(e: L.LeafletMouseEvent) {
  const snappedLatLng = e.latlng; // replace with the snap-to-site result from iteration 3.10.2
  const currentPoints = rulerPoints;
if (currentPoints.length === 0) {
// First point
setRulerPoints([snappedLatLng]);
} else if (currentPoints.length === 1) {
// Second point — measurement complete
setRulerPoints([currentPoints[0], snappedLatLng]);
// Optionally: auto-deactivate ruler mode after 2nd point
// clearTool(); // uncomment if you want one-shot behavior
} else {
// Already 2 points — start new measurement
// Replace: clear old points, start fresh with new first point
setRulerPoints([snappedLatLng]);
}
}
```
**Behavior:**
1. Click 1: Place start point (show marker)
2. Click 2: Place end point (show marker + line + distance label + Terrain Profile button)
3. Click 3: Clear previous, start new measurement from this click
4. Right-click or Escape: Cancel/clear ruler entirely
**Remove:**
- Remove "Right-click to finish" instruction (no longer needed — measurement auto-completes at 2 points)
- Remove multi-point polyline rendering (only single line between 2 points)
**Visual:**
- Show a single straight line between 2 points (green dashed, as current)
- Distance label at midpoint
- Terrain Profile button appears after 2nd point is placed
- Small circle markers at both endpoints
---
## Testing Checklist
- [ ] Calculator button visible in toolbar (right side, below Elev)
- [ ] Click calculator button opens/closes Link Budget panel
- [ ] Ruler allows exactly 2 points, no more
- [ ] Third click starts new measurement (replaces old)
- [ ] Escape clears ruler
- [ ] Distance + Terrain Profile button appears after 2nd point
- [ ] No multi-point web/polygon possible
- [ ] Ruler still snaps to site markers
## Commit Message
```
fix(ux): add calculator toolbar button, limit ruler to 2 points
- Add Link Budget Calculator button to right toolbar
- Limit ruler to exactly 2 points (point-to-point only)
- Third click starts new measurement, clears previous
- Remove multi-point polyline behavior
```
# RFCP — Iteration 3.10.4: Terrain Profile Click Fix & TX Height
## Two bugs remaining from previous iterations.
---
## Bug 1: Terrain Profile click still places ruler point
**Problem:** Clicking inside the Terrain Profile popup (chart area, close button, fresnel checkbox, anywhere in the popup) triggers the map click handler underneath, which places a ruler point or resets the measurement.
**Previous fix was incomplete** — stopPropagation was added to some elements but not the entire popup container and its backdrop.
**Fix:** The Terrain Profile popup needs a FULL click barrier. Every mouse event must be caught:
```typescript
// The OUTERMOST container of the Terrain Profile popup:
<div
className="terrain-profile-container"
onClick={(e) => { e.stopPropagation(); e.nativeEvent.stopImmediatePropagation(); }}
onMouseDown={(e) => { e.stopPropagation(); e.nativeEvent.stopImmediatePropagation(); }}
onMouseUp={(e) => { e.stopPropagation(); e.nativeEvent.stopImmediatePropagation(); }}
onPointerDown={(e) => { e.stopPropagation(); e.nativeEvent.stopImmediatePropagation(); }}
onPointerUp={(e) => { e.stopPropagation(); e.nativeEvent.stopImmediatePropagation(); }}
onDoubleClick={(e) => { e.stopPropagation(); e.nativeEvent.stopImmediatePropagation(); }}
>
{/* All terrain profile content */}
</div>
```
**IMPORTANT:** `stopPropagation()` alone may not be enough because Leaflet listens to DOM events directly, not React synthetic events. The fix MUST also call `e.nativeEvent.stopImmediatePropagation()` to prevent Leaflet's native DOM listener from firing. Leaflet also ships helpers for exactly this: `L.DomEvent.disableClickPropagation(el)` and `L.DomEvent.disableScrollPropagation(el)` detach an element from the map's event handling in one call.
**Alternative approach (more robust):** Add the popup OUTSIDE the Leaflet map container in the DOM tree. If the Terrain Profile div is a sibling or parent of the map div (not a child), Leaflet's event delegation won't catch clicks on it at all.
```tsx
// In the main layout:
<div className="app-layout">
<div id="map-container">
{/* Leaflet map renders here */}
</div>
{/* These are OUTSIDE the map container — Leaflet can't intercept */}
{showTerrainProfile && (
<TerrainProfile ... />
)}
{showLinkBudget && (
<LinkBudgetPanel ... />
)}
</div>
```
If moving outside the map container is too much refactoring, the stopImmediatePropagation approach should work. But check: is the TerrainProfile component rendered INSIDE a Leaflet pane or overlay? If so, moving it out is the correct fix.
**Also apply the same fix to:**
- Link Budget Calculator panel
- Any other floating panel/popup that sits over the map
---
## Bug 2: TX Height always shows 2m in Link Budget Calculator
**Problem:** The Link Budget Calculator TRANSMITTER section always shows `Height: 2m` regardless of the actual site configuration. It should read the height from the selected site's settings.
**Root cause:** The LinkBudgetPanel component likely reads `site.height` but the site object might store height in a different field name (e.g., `site.antennaHeight`, `site.towerHeight`, `site.params.height`, or per-sector height).
**Fix:** Find where site height is stored and pass the correct value:
```typescript
// In LinkBudgetPanel.tsx, find where TX height is set:
// WRONG (probably current):
const txHeight = site.height || 2; // Defaults to 2 if field is missing
// Check the actual site data structure. It might be:
const txHeight = site.antennaHeight
|| site.tower_height
|| site.params?.height
|| site.sectors?.[0]?.height // If height is per-sector
|| 30; // Default should be 30m for a typical cell tower, not 2m
// Or if height is stored in meters in a nested config:
const txHeight = selectedSite?.config?.height || selectedSite?.height || 30;
```
**Steps to debug:**
1. In the browser console (F12), find the selected site object
2. Check what field contains the height value
3. Update LinkBudgetPanel to read from the correct field
**Display fix:**
```typescript
// In the TRANSMITTER section of the panel:
<div className="param-row">
<span>Height:</span>
<span>{txHeight} m</span>
</div>
```
The height should also be EDITABLE in the link budget calculator (as an input field, not just display), since you might want to test "what if I put the antenna at 40m instead of 30m?" without changing the actual site config.
```typescript
// Make height an editable field with site value as default:
const [txHeightOverride, setTxHeightOverride] = useState<number | null>(null);
const txHeight = txHeightOverride ?? (site?.height || 30);
<div className="param-row">
<label>Height:</label>
<input
type="number"
value={txHeight}
    onChange={(e) => setTxHeightOverride(e.target.value === '' ? null : parseFloat(e.target.value))}
/> m
</div>
```
---
## Testing Checklist
- [ ] Click ANYWHERE inside Terrain Profile popup — NO ruler point placed
- [ ] Click Terrain Profile close button (X) — popup closes, no ruler point
- [ ] Click Fresnel Zone checkbox — toggles, no ruler point
- [ ] Click chart area — no ruler point
- [ ] Drag/scroll inside chart — no map pan/zoom
- [ ] TX Height in Link Budget shows actual site height (not 2m)
- [ ] TX Height is editable for what-if scenarios
- [ ] Changing TX height recalculates link budget
## Commit Message
```
fix(ui): block all click propagation from terrain profile, fix TX height
- Add stopImmediatePropagation on terrain profile container
- Prevent all mouse/pointer events from reaching Leaflet map
- Fix TX height reading from site config (was defaulting to 2m)
- Make TX height editable in link budget calculator
```
# RFCP 3.6.0 — Production GPU Build (Claude Code Task)
## Goal
Build `rfcp-server.exe` (PyInstaller) with CuPy GPU support so production RFCP
detects the NVIDIA GPU without manual `pip install`.
Currently production exe shows "CPU (NumPy)" because CuPy is not bundled.
## Current Environment (CONFIRMED WORKING)
```
Windows 10 (10.0.26200)
Python 3.11.8 (C:\Python311)
NVIDIA GeForce RTX 4060 Laptop GPU (8 GB VRAM)
CUDA Toolkit 13.1 (C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v13.1)
CUDA_PATH = C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v13.1
Packages:
cupy-cuda13x 13.6.0 ← NOT cuda12x!
numpy 1.26.4
scipy 1.17.0
fastrlock 0.8.3
pyinstaller 6.18.0
GPU compute verified:
python -c "import cupy; a = cupy.array([1,2,3]); print(a.sum())" → 6 ✅
```
## What We Already Tried (And Why It Failed)
### Attempt 1: ONEFILE spec with collect_all('cupy')
- `collect_all('cupy')` returns 1882 datas, **0 binaries** — CuPy pip doesn't bundle DLLs on Windows
- CUDA DLLs come from two separate sources:
- **nvidia pip packages** (14 DLLs in `C:\Python311\Lib\site-packages\nvidia\*/bin/`)
- **CUDA Toolkit** (13 DLLs in `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v13.1\bin\x64\`)
- We manually collected these 27 DLLs in the spec
- Build succeeded (3 GB exe!) but crashed on launch:
```
[PYI-10456:ERROR] Failed to extract cufft64_12.dll: decompression resulted in return code -1!
```
- Root cause: `cufft64_12.dll` is 297 MB — PyInstaller's zlib compression fails on it in ONEFILE mode
### Attempt 2: We were about to try ONEDIR but haven't built it yet
### Key Insight: Duplicate DLLs from two sources
nvidia pip packages have CUDA 12.x DLLs (cublas64_12.dll etc.)
CUDA Toolkit 13.1 has CUDA 13.x DLLs (cublas64_13.dll etc.)
CuPy-cuda13x needs the 13.x versions. The 12.x from pip may conflict.
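A hedged sketch of how the GPU spec could collect only the Toolkit 13.1 DLLs as binaries. The `CUDA_BIN` path and the `WANTED` prefix list are assumptions to adjust against the real `rfcp-server-gpu.spec`; the point is to bypass the cuda12x copies from the nvidia pip packages entirely.

```python
# rfcp-server-gpu.spec (fragment) -- hypothetical sketch, adapt to the real spec.
# Prefer CUDA Toolkit 13.1 DLLs over the cuda12x copies from nvidia pip packages.
import glob
import os

CUDA_BIN = r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v13.1\bin\x64"
WANTED = ("cublas", "cusparse", "cusolver", "curand", "cufft", "nvrtc", "cudart")

cuda_binaries = [
    (dll, ".")  # copy next to rfcp-server.exe in the ONEDIR output
    for dll in glob.glob(os.path.join(CUDA_BIN, "*.dll"))
    if os.path.basename(dll).lower().startswith(WANTED)
]
# Then pass into Analysis(..., binaries=cuda_binaries + existing_binaries, ...)
```

On a machine without the Toolkit the glob simply yields an empty list, so the spec still builds a CPU-only bundle.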
## What Needs To Happen
1. **Build rfcp-server as ONEDIR** (folder with exe + DLLs, not single exe)
- This avoids the decompression crash with large CUDA DLLs
- Output: `backend/dist/rfcp-server/rfcp-server.exe` + all DLLs alongside
2. **Include ONLY the correct CUDA DLLs**
- Prefer CUDA Toolkit 13.1 DLLs (match cupy-cuda13x)
- The nvidia pip packages have cuda12x DLLs — may cause version conflicts
- Key DLLs needed: cublas, cusparse, cusolver, curand, cufft, nvrtc, cudart
3. **Exclude bloat** — the previous build pulled in tensorflow, grpc, opentelemetry etc.
making it 3 GB. Real size should be ~600-800 MB.
4. **Test the built exe** — run it standalone and verify:
- `curl http://localhost:8090/api/health` returns `"build": "gpu"`
- `curl http://localhost:8090/api/gpu/status` returns `"available": true`
- Or at minimum: the exe starts without errors and CuPy imports successfully
5. **Update Electron integration** if needed:
- Current Electron expects a single `rfcp-server.exe` file
- With ONEDIR, it's a folder `rfcp-server/rfcp-server.exe`
- File: `desktop/main.js` or `desktop/src/main.ts` — look for where it spawns backend
- The path needs to change from `resources/backend/rfcp-server.exe`
to `resources/backend/rfcp-server/rfcp-server.exe`

## File Locations
```
D:\root\rfcp\
├── backend\
│ ├── run_server.py ← PyInstaller entry point
│ ├── app\
│ │ ├── main.py ← FastAPI app
│ │ ├── services\
│ │ │ ├── gpu_backend.py ← GPU detection (CuPy/NumPy fallback)
│ │ │ └── coverage_service.py ← Uses get_array_module()
│ │ └── api\routes\gpu.py ← /api/gpu/status, /api/gpu/diagnostics
│ ├── dist\ ← PyInstaller output goes here
│ └── build\ ← PyInstaller build cache
├── installer\
│ ├── rfcp-server-gpu.spec ← GPU spec (needs fixing)
│ ├── rfcp-server.spec ← CPU spec (working, don't touch)
│ ├── rfcp.ico ← Icon (exists)
│ └── build-gpu.bat ← Build script
├── desktop\
│ ├── main.js or src/main.ts ← Electron main process
│ └── resources\backend\ ← Where production exe lives
└── frontend\ ← React frontend (no changes needed)
```
## Existing CPU spec for reference
The working CPU-only spec is at `installer/rfcp-server.spec`. Use it as the base
and ADD CuPy + CUDA on top. Don't reinvent the wheel.
## Build Command
```powershell
cd D:\root\rfcp\backend
pyinstaller ..\installer\rfcp-server-gpu.spec --clean --noconfirm
```
## Success Criteria
- [ ] `dist/rfcp-server/rfcp-server.exe` starts without errors
- [ ] CuPy imports successfully inside the exe (no missing DLL errors)
- [ ] `/api/gpu/status` returns `"available": true, "device": "RTX 4060"`
- [ ] Total folder size < 1 GB (ideally 600-800 MB)
- [ ] No tensorflow/grpc/opentelemetry bloat
- [ ] Electron can find and launch the backend (path updated if needed)
## Important Notes
- Do NOT use cupy-cuda12x — we migrated to cupy-cuda13x
- Do NOT try ONEFILE mode — cufft64_12.dll (297 MB) crashes decompression
- The nvidia pip packages (nvidia-cublas-cu12, etc.) are still installed but may
conflict with CUDA Toolkit 13.1 — prefer Toolkit DLLs
- `collect_all('cupy')` gives 0 binaries on Windows — DLLs must be manually specified
- gpu_backend.py already handles CuPy absence gracefully (falls back to NumPy)
# RFCP 3.7.0 — GPU-Accelerated Coverage Calculations
## Context
Iteration 3.6.0 completed: CuPy-cuda13x works in production PyInstaller build,
RTX 4060 detected, ONEDIR build with CUDA DLLs. BUT coverage calculations still
run on CPU because coverage_service.py uses `import numpy as np` directly instead
of the GPU backend.
The GPU infrastructure is ready:
- `app/services/gpu_backend.py` has `GPUManager.get_array_module()` → returns cupy or numpy
- `/api/gpu/status` confirms `"active_backend": "cuda"`
- CuPy is imported and GPU detected in the frozen exe
## Goal
Replace direct `np.` calls in coverage_service.py with `xp = gpu_manager.get_array_module()`
so calculations run on GPU when available, with automatic NumPy fallback.
## Files to Modify
### `app/services/coverage_service.py`
**Line 7**: `import numpy as np` — keep this but also import gpu_manager
Add near top:
```python
from app.services.gpu_backend import gpu_manager
```
**Key sections to GPU-accelerate** (highest impact first):
#### 1. Grid array creation (lines 549-550, 922-923)
```python
# BEFORE:
grid_lats = np.array([lat for lat, lon in grid])
grid_lons = np.array([lon for lat, lon in grid])
# AFTER:
xp = gpu_manager.get_array_module()
grid_lats = xp.array([lat for lat, lon in grid])
grid_lons = xp.array([lon for lat, lon in grid])
```
#### 2. Trig calculations (line 468, 1031, 1408-1415, 1442)
These use np.cos, np.radians, np.sin, np.degrees, np.arctan2 — all have CuPy equivalents.
```python
# BEFORE:
lon_delta = settings.radius / (111000 * np.cos(np.radians(center_lat)))
cos_lat = np.cos(np.radians(center_lat))
# AFTER:
xp = gpu_manager.get_array_module()
lon_delta = settings.radius / (111000 * float(xp.cos(xp.radians(center_lat))))
cos_lat = float(xp.cos(xp.radians(center_lat)))
```
#### 3. The heavy calculation loop — `_run_point_loop` (line 1070) and `_calculate_point_sync` (line 1112)
This is where 90% of time is spent. Currently processes points one-by-one.
The GPU win comes from vectorizing the path loss calculation across ALL grid points at once.
**Strategy**: Instead of looping through points, create arrays of all distances/angles
and compute path loss for all points in one vectorized operation.
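As a minimal sketch of that strategy, with plain NumPy standing in for `xp` and free-space path loss standing in for the project's propagation model (`fspl_batch` is illustrative, not an existing function in coverage_service.py):

```python
import numpy as np  # stands in for xp = gpu_manager.get_array_module()

def fspl_batch(dist_km: np.ndarray, freq_mhz: float) -> np.ndarray:
    """Free-space path loss (dB) for ALL grid points in one vectorized call."""
    return 20.0 * np.log10(dist_km) + 20.0 * np.log10(freq_mhz) + 32.44

# One array op replaces the per-point loop body
dist_km = np.array([0.5, 1.0, 2.0, 5.0])
pl = fspl_batch(dist_km, 1000.0)  # pl[1] ≈ 92.44 dB at 1 km / 1000 MHz
```

The same shape applies to the real model: build the distance/angle arrays once, then every per-point formula becomes one array expression.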
#### 4. `_calculate_bearing` (line 1402) — already vectorizable
```python
# All np.* functions here have direct CuPy equivalents
# Just replace np → xp
```
## Important Rules
1. **Always get xp at function scope**, not module scope:
```python
def my_function(self, ...):
xp = gpu_manager.get_array_module()
# use xp instead of np
```
2. **Convert GPU arrays back to CPU** before returning to non-GPU code:
```python
if hasattr(result, 'get'): # CuPy array
result = result.get() # → numpy array
```
3. **Keep np for small/scalar operations** — GPU overhead isn't worth it for single values.
Only use xp for array operations on 100+ elements.
4. **Don't break the fallback** — if CuPy isn't available, `get_array_module()` returns numpy,
so `xp.array()` etc. work identically.
5. **Test both paths** — run with GPU and verify same results as CPU.
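The fallback contract these rules rely on can be illustrated with a minimal shim (names here are illustrative; the real logic lives in `gpu_backend.py`):

```python
# Minimal sketch of the CuPy-or-NumPy fallback pattern gpu_backend.py implements.
try:
    import cupy as _xp  # GPU path
except ImportError:
    import numpy as _xp  # CPU fallback: identical array API for these ops

def get_array_module():
    return _xp

xp = get_array_module()
total = float(xp.sum(xp.array([1.0, 2.0, 3.0])))  # works on either backend
```

Because both modules expose the same functions for these operations, code written against `xp` needs no branches, which is exactly why rule 4 holds.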
## Testing
After changes:
```powershell
# Rebuild
cd D:\root\rfcp\backend
pyinstaller ..\installer\rfcp-server-gpu.spec --noconfirm
# Run
.\dist\rfcp-server\rfcp-server.exe
# Test calculation via frontend — watch Task Manager GPU utilization
# Should see GPU Compute spike during coverage calculation
# Time should be significantly faster than 10s for 1254 points
```
Compare before/after:
- Current (CPU): ~10s for 1254 points, 5km radius
- Expected (GPU): 1-3s for same calculation
Also test GPU diagnostics:
```
curl http://localhost:8888/api/gpu/diagnostics
```
## What NOT to Change
- Don't modify gpu_backend.py — it's working correctly
- Don't change the API endpoints or response format
- Don't remove the NumPy import — keep it for non-array operations
- Don't change propagation model math — only the array operations
- Don't change _filter_buildings_to_bbox or OSM functions — they use lists not arrays
## Success Criteria
- [ ] Coverage calculation uses GPU (visible in Task Manager)
- [ ] Calculation time reduced for 1000+ point grids
- [ ] CPU fallback still works (test by setting active_backend to cpu via API)
- [ ] Same coverage results (heatmap should look identical)
- [ ] No regression in tiled processing mode
# RFCP 3.8.0 — Vectorize Per-Point Coverage Calculations
## Context
Iteration 3.7.0 added GPU precompute for distances + base path loss (Phase 2.5).
But Phase 3 (per-point loop) still runs on CPU, one point at a time across workers.
This is where 95% of time goes on Full preset (195s for 6,642 points).
Current pipeline:
```
Phase 2.5 (GPU, 0.01s): distances + base path_loss → precomputed arrays
Phase 3 (CPU, 195s): per-point terrain_loss, building_loss, reflections, vegetation
```
Goal: Vectorize the heavy per-point calculations so GPU handles them in bulk.
## Architecture
The key insight: `_calculate_point_sync` (line ~1127) does these steps per point:
1. **Terrain LOS check** — get elevation profile between site and point, check clearance
2. **Diffraction loss** — knife-edge based on Fresnel zone clearance
3. **Building obstruction** — find buildings between site and point, calculate penetration loss
4. **Materials penalty** — add loss based on building material type
5. **Dominant path analysis** — LOS vs reflection vs diffraction
6. **Street canyon** — check if point is in urban canyon
7. **Reflections** — find reflection paths off buildings (most expensive!)
8. **Vegetation loss** — check vegetation between site and point
9. **Final RSRP** — tx_power - path_loss - terrain_loss - building_loss - veg_loss + gains
## Strategy: Vectorize in Stages
NOT everything can be vectorized equally. Prioritize by time spent:
### Stage 1: Terrain LOS + Diffraction (HIGH IMPACT)
Currently: For each point, sample ~50-100 elevation values along radial path,
find min clearance, compute knife-edge diffraction.
**Vectorize**: Create 2D elevation profiles for ALL points at once.
- All points share the same site location
- For N points, create N terrain profiles (each M samples)
- Compute Fresnel clearance for all profiles vectorized
- Compute diffraction loss vectorized
```python
# Instead of per-point:
for point in grid:
profile = get_terrain_profile(site, point, num_samples=50)
clearance = min_clearance(profile)
loss = diffraction_loss(clearance, freq)
# Vectorized:
xp = gpu_manager.get_array_module()
# all_profiles shape: (N_points, M_samples)
all_profiles = get_terrain_profiles_batch(site, all_points, num_samples=50)
all_clearances = compute_clearances_batch(all_profiles, site_elev, point_elevs, distances)
all_terrain_loss = diffraction_loss_batch(all_clearances, freq)
```
### Stage 2: Building Obstruction (HIGH IMPACT)
Currently: For each point, find nearby buildings, check if they obstruct path.
**Vectorize**: Use spatial indexing but batch the geometry checks.
- Pre-compute building bounding boxes as GPU arrays
- For each point, ray-building intersection can be done as matrix operation
- Building penetration loss is simple lookup after intersection
NOTE: This is harder to vectorize because each point has different number of
nearby buildings. Options:
a) Pad to max buildings per point (wastes memory but simple)
b) Use sparse representation
c) Keep per-point but use GPU for the geometry math
Recommend option (c) initially — keep the spatial query on CPU but move
the trig/geometry calculations to GPU.
### Stage 3: Reflections (MEDIUM IMPACT, only on Full preset)
Currently: For each point with buildings, compute reflection paths.
This is the most complex calculation and hardest to vectorize.
**Approach**: Keep reflections per-point for now, but optimize the inner math
with vectorized operations.
### Stage 4: Vegetation Loss (LOW IMPACT)
Simple lookup — not worth GPU overhead.
## Implementation Plan
### Step 1: Batch terrain profiling
Add to coverage_service.py a new method:
```python
def _batch_terrain_profiles(self, site_lat, site_lon, site_elev,
grid_lats, grid_lons, grid_elevs,
distances, frequency, num_samples=50):
"""Compute terrain LOS and diffraction loss for all points at once."""
xp = gpu_manager.get_array_module()
N = len(grid_lats)
# Interpolate terrain profiles for all points
# Each profile: site → point, num_samples elevation values
# Use terrain tile data directly
# Compute Fresnel zone clearance for each profile
# Compute knife-edge diffraction loss
return terrain_losses # shape (N,)
```
### Step 2: Batch building check
Add method:
```python
def _batch_building_obstruction(self, site_lat, site_lon,
grid_lats, grid_lons,
distances, buildings_spatial_index,
all_buildings):
"""Compute building loss for all points at once."""
# For each point, query spatial index (CPU)
# Batch the geometry intersection math (GPU)
# Return losses
return building_losses # shape (N,)
```
### Step 3: Replace _run_point_loop
Instead of ProcessPool workers, do:
```python
# In calculate_coverage, after Phase 2.5:
terrain_losses = self._batch_terrain_profiles(...)
building_losses = self._batch_building_obstruction(...)
# Final RSRP is now fully vectorized:
rsrp = tx_power - precomputed_path_loss - terrain_losses - building_losses - veg_losses
# + antenna_gains + reflection_gains
```
### Step 4: Keep worker fallback
If GPU not available or for very complex calculations (reflections),
fall back to the existing per-point ProcessPool approach.
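The fallback policy in Step 4 amounts to a try/except dispatch; a generic sketch with illustrative names:

```python
import logging

log = logging.getLogger("coverage")

def with_fallback(primary, fallback):
    """Run `primary`; on any failure, log a warning and run `fallback`.

    Sketch of the Step 4 policy: the vectorized GPU path is `primary`,
    the existing per-point ProcessPool loop is `fallback`.
    """
    def run(*args, **kwargs):
        try:
            return primary(*args, **kwargs)
        except Exception as exc:
            log.warning("primary path failed (%s); using fallback", exc)
            return fallback(*args, **kwargs)
    return run
```

Wrapping the coverage loop this way keeps CPU-only mode working without branching logic scattered through the service.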
## Important Notes
1. **GPU code only in main process** — learned from 3.7.0, never import gpu_manager in workers
2. **Terrain data access** — terrain tiles are in memory, need efficient sampling for batch profiles
3. **CuPy ↔ NumPy bridge** — use `cupy.asnumpy()` (or the array's `.get()` method) to move results back to the CPU; plain NumPy arrays need no conversion
4. **Memory** — 6,642 points × 50 terrain samples = 332,100 floats ≈ 2.7 MB as float64 (1.3 MB as float32) on GPU, no problem
5. **Accuracy** — results must match existing per-point calculation within 1 dB
## Testing
```powershell
cd D:\root\rfcp\backend
pyinstaller ..\installer\rfcp-server-gpu.spec --noconfirm
.\dist\rfcp-server\rfcp-server.exe
```
Compare Full preset:
- Before (3.7.0): ~195s for 6,642 points
- Target (3.8.0): <30s for same calculation
- Stretch goal: <10s
Verify accuracy:
- Run same location with GPU and CPU backend
- Compare RSRP values — should be within 1 dB
- Coverage percentages (Excellent/Good/Fair/Weak) should be very close
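The accuracy check can be automated with a small comparison helper — a sketch, assuming both backends return RSRP arrays in the same point order:

```python
import numpy as np

def compare_rsrp(gpu_rsrp, cpu_rsrp, tol_db: float = 1.0) -> dict:
    """Compare GPU and CPU RSRP arrays against the 1 dB accuracy target."""
    diff = np.abs(np.asarray(gpu_rsrp, dtype=float) - np.asarray(cpu_rsrp, dtype=float))
    return {
        "max_diff_db": float(diff.max()),
        "mean_diff_db": float(diff.mean()),
        "within_tol": bool((diff <= tol_db).all()),
    }
```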
## What NOT to Change
- Don't modify propagation model math (Okumura-Hata, COST-231, Free-Space formulas)
- Don't change API endpoints or response format
- Don't remove the ProcessPool fallback — keep it for CPU-only mode
- Don't change OSM fetching or caching
- Don't modify the frontend
## Success Criteria
- [ ] Full preset completes in <30s (was 195s)
- [ ] Standard preset completes in <5s (was 7.2s)
- [ ] No CuPy errors in worker processes
- [ ] CPU fallback still works
- [ ] Results match within 1 dB accuracy
- [ ] GPU utilization visible in Task Manager during calculation

# RFCP 3.9.0 — SRTM1 Real Terrain Data Integration
## Context
RFCP currently downloads terrain tiles from an elevation API at runtime.
This works but has limitations:
- Requires internet connection
- Unknown data source quality
- No offline capability (critical for tactical/field use)
- No control over resolution or caching
Goal: Replace with SRTM1 (30m resolution) HGT files, offline-first architecture.
## SRTM1 Data Format
HGT files are dead simple:
- 1°×1° tiles, named by southwest corner: `N48E033.hgt`
- 3601×3601 grid of signed 16-bit integers (big-endian)
- Each value = elevation in meters
- File size: exactly 25,934,402 bytes (3601 × 3601 × 2)
- Row order: north to south (first row = northernmost)
- Column order: west to east
- Adjacent tiles overlap by 1 pixel on shared edges
- Void/no-data value: -32768
Compressed (.hgt.zip): ~10-15 MB per tile typically.
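Given the layout above, parsing a tile is one `frombuffer` call plus a size check; a minimal sketch:

```python
import numpy as np

def parse_hgt(buf: bytes) -> np.ndarray:
    """Parse raw SRTM1 .hgt bytes into a 3601x3601 elevation grid (meters).

    Big-endian signed 16-bit; row 0 = northernmost row, column 0 = west edge.
    """
    expected = 3601 * 3601 * 2  # 25,934,402 bytes
    if len(buf) != expected:
        raise ValueError(f"bad SRTM1 tile size: {len(buf)} (expected {expected})")
    return np.frombuffer(buf, dtype=">i2").reshape(3601, 3601)
```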
## Architecture
### Tile Storage Layout
```
{app_data}/terrain/
├── srtm1/ # 30m resolution tiles
│ ├── N48E033.hgt # Uncompressed for fast access
│ ├── N48E034.hgt
│ ├── N48E035.hgt
│ └── ...
├── tile_index.json # Metadata: available tiles, checksums, dates
└── downloads/ # Temporary download staging
```
On Windows, `{app_data}` = the application's data directory.
For PyInstaller exe: `data/terrain/` relative to exe location.
The path must be configurable (environment variable or config file).
### Tile Manager (new file: `terrain_manager.py`)
```python
class SRTMTileManager:
"""Manages SRTM1 HGT tile storage, loading, and caching."""
def __init__(self, terrain_dir: str):
self.terrain_dir = Path(terrain_dir)
self.srtm1_dir = self.terrain_dir / "srtm1"
self.srtm1_dir.mkdir(parents=True, exist_ok=True)
# In-memory cache: tile_name -> numpy array
self._tile_cache: Dict[str, np.ndarray] = {}
        self._max_cache_tiles = 16  # cached as float32: ~52 MB each, ~830 MB max
def get_tile_name(self, lat: float, lon: float) -> str:
"""Convert lat/lon to SRTM tile name."""
        # Floor gives the southwest corner (correct for negative coordinates too)
        lat_int = int(np.floor(lat))
        lon_int = int(np.floor(lon))
lat_prefix = "N" if lat_int >= 0 else "S"
lon_prefix = "E" if lon_int >= 0 else "W"
return f"{lat_prefix}{abs(lat_int):02d}{lon_prefix}{abs(lon_int):03d}"
    def get_required_tiles(self, center_lat, center_lon, radius_km) -> List[str]:
        """Determine which tiles are needed for a coverage calculation."""
        # Bounding box from center + radius (1° latitude ≈ 111 km)
        dlat = radius_km / 111.0
        dlon = radius_km / (111.0 * max(np.cos(np.radians(center_lat)), 0.01))
        tiles = []
        for lat in range(int(np.floor(center_lat - dlat)), int(np.floor(center_lat + dlat)) + 1):
            for lon in range(int(np.floor(center_lon - dlon)), int(np.floor(center_lon + dlon)) + 1):
                tiles.append(self.get_tile_name(lat + 0.5, lon + 0.5))
        return tiles
def has_tile(self, tile_name: str) -> bool:
"""Check if tile exists locally."""
return (self.srtm1_dir / f"{tile_name}.hgt").exists()
def load_tile(self, tile_name: str) -> Optional[np.ndarray]:
"""Load tile from disk into memory. Returns 3601x3601 int16 array."""
if tile_name in self._tile_cache:
return self._tile_cache[tile_name]
hgt_path = self.srtm1_dir / f"{tile_name}.hgt"
if not hgt_path.exists():
return None
# Read raw HGT: big-endian signed 16-bit
data = np.fromfile(str(hgt_path), dtype='>i2')
tile = data.reshape((3601, 3601))
# Replace void values
tile = tile.astype(np.float32)
tile[tile == -32768] = np.nan
# Cache management (LRU-style: evict oldest if full)
if len(self._tile_cache) >= self._max_cache_tiles:
oldest_key = next(iter(self._tile_cache))
del self._tile_cache[oldest_key]
self._tile_cache[tile_name] = tile
return tile
def get_elevation(self, lat: float, lon: float) -> Optional[float]:
"""Get elevation at a single point with bilinear interpolation."""
tile_name = self.get_tile_name(lat, lon)
tile = self.load_tile(tile_name)
if tile is None:
return None
return self._bilinear_sample(tile, lat, lon)
def get_elevations_batch(self, lats: np.ndarray, lons: np.ndarray) -> np.ndarray:
"""Get elevations for array of points. Vectorized."""
# Group points by tile
# Load needed tiles
# Vectorized bilinear interpolation per tile
# Return array of elevations
async def download_tile(self, tile_name: str) -> bool:
"""Download a single tile from remote source (if online)."""
# Try multiple sources in order:
# 1. Own server (future: UMTC sync endpoint)
# 2. srtm.fasma.org (no auth required)
# 3. viewfinderpanoramas.org (no auth, void-filled)
# Returns True if successful
def get_missing_tiles(self, center_lat, center_lon, radius_km) -> List[str]:
"""Check which needed tiles are not available locally."""
required = self.get_required_tiles(center_lat, center_lon, radius_km)
return [t for t in required if not self.has_tile(t)]
```
### Bilinear Interpolation (CRITICAL for accuracy)
Current system uses nearest-neighbor (pick closest grid cell).
SRTM1 at 30m means nearest-neighbor can have 15m positional error.
Bilinear interpolation reduces this to sub-meter accuracy.
```python
def _bilinear_sample(self, tile: np.ndarray, lat: float, lon: float) -> float:
"""Sample elevation with bilinear interpolation."""
    # Tile southwest corner (floor handles negatives and integer boundaries)
    lat_int = int(np.floor(lat))
    lon_int = int(np.floor(lon))
# Fractional position within tile (0.0 to 1.0)
lat_frac = lat - lat_int # 0 = south edge, 1 = north edge
lon_frac = lon - lon_int # 0 = west edge, 1 = east edge
# Convert to row/col (note: rows go north to south!)
row_exact = (1.0 - lat_frac) * 3600.0 # 0 = north, 3600 = south
col_exact = lon_frac * 3600.0 # 0 = west, 3600 = east
# Four surrounding grid points
r0 = int(row_exact)
c0 = int(col_exact)
r1 = min(r0 + 1, 3600)
c1 = min(c0 + 1, 3600)
# Fractional position between grid points
dr = row_exact - r0
dc = col_exact - c0
# Bilinear interpolation
z00 = tile[r0, c0]
z01 = tile[r0, c1]
z10 = tile[r1, c0]
z11 = tile[r1, c1]
# Handle NaN (void) values
if np.isnan(z00) or np.isnan(z01) or np.isnan(z10) or np.isnan(z11):
        # Fall back to the first valid corner value (voids are rare)
valid = [(z00, 0, 0), (z01, 0, 1), (z10, 1, 0), (z11, 1, 1)]
valid = [(z, r, c) for z, r, c in valid if not np.isnan(z)]
return valid[0][0] if valid else 0.0
elevation = (z00 * (1 - dr) * (1 - dc) +
z01 * (1 - dr) * dc +
z10 * dr * (1 - dc) +
z11 * dr * dc)
return float(elevation)
```
### Vectorized Batch Elevation (for GPU pipeline)
This replaces the current `_batch_elevation_lookup` in gpu_service.py.
Must handle multi-tile seamlessly.
```python
def get_elevations_batch(self, lats: np.ndarray, lons: np.ndarray) -> np.ndarray:
"""Vectorized elevation lookup with bilinear interpolation.
Handles points spanning multiple tiles efficiently.
Groups points by tile, processes each tile with full NumPy vectorization.
"""
elevations = np.zeros(len(lats), dtype=np.float32)
    # Southwest-corner tile index for each point (floor is correct for negatives)
    lat_ints = np.floor(lats).astype(int)
    lon_ints = np.floor(lons).astype(int)
# Group by tile
tile_keys = lat_ints * 1000 + lon_ints # unique key per tile
unique_keys = np.unique(tile_keys)
for key in unique_keys:
mask = tile_keys == key
        lat_int = int(key // 1000)
        lon_int = int(key % 1000)
        if lon_int > 500:  # decode negative longitudes (% is non-negative here)
            lon_int -= 1000
            lat_int += 1  # floor division shifted lat down one tile for negative lon
tile_name = self._make_tile_name(lat_int, lon_int)
tile = self.load_tile(tile_name)
if tile is None:
elevations[mask] = 0.0 # no data
continue
# Vectorized bilinear for all points in this tile
tile_lats = lats[mask]
tile_lons = lons[mask]
lat_frac = tile_lats - lat_int
lon_frac = tile_lons - lon_int
row_exact = (1.0 - lat_frac) * 3600.0
col_exact = lon_frac * 3600.0
r0 = np.clip(row_exact.astype(int), 0, 3599)
c0 = np.clip(col_exact.astype(int), 0, 3599)
r1 = np.clip(r0 + 1, 0, 3600)
c1 = np.clip(c0 + 1, 0, 3600)
dr = row_exact - r0
dc = col_exact - c0
z00 = tile[r0, c0]
z01 = tile[r0, c1]
z10 = tile[r1, c0]
z11 = tile[r1, c1]
result = (z00 * (1 - dr) * (1 - dc) +
z01 * (1 - dr) * dc +
z10 * dr * (1 - dc) +
z11 * dr * dc)
# Handle NaN voids
nan_mask = np.isnan(result)
if nan_mask.any():
result[nan_mask] = 0.0
elevations[mask] = result
return elevations
```
## Integration Points
### 1. Replace terrain_service.py elevation lookup
Current terrain service downloads elevation data from an API.
Replace with SRTMTileManager calls:
```python
# OLD:
elevation = await self.terrain_service.get_elevation(lat, lon)
# NEW:
elevation = self.tile_manager.get_elevation(lat, lon)
# Or for batch (GPU pipeline Phase 2.6):
elevations = self.tile_manager.get_elevations_batch(lats_array, lons_array)
```
### 2. Replace _batch_elevation_lookup in gpu_service.py
The vectorized elevation lookup in gpu_service.py currently loads tiles
and does nearest-neighbor sampling. Replace with tile_manager.get_elevations_batch()
which does bilinear interpolation.
### 3. Coverage service pre-check
Before starting calculation, check if all needed tiles are available:
```python
missing = self.tile_manager.get_missing_tiles(site_lat, site_lon, radius_km)
if missing:
if has_internet:
# Try to download missing tiles
for tile_name in missing:
await self.tile_manager.download_tile(tile_name)
else:
# Return warning to frontend
return {"warning": f"Missing terrain tiles: {missing}. Using flat terrain."}
```
### 4. Frontend notification
When tiles are missing, show a warning banner:
"⚠ Terrain data not available for this area. Coverage accuracy reduced."
When tiles are being downloaded:
"⬇ Downloading terrain data... (N48E033.hgt, 12.5 MB)"
### 5. Terrain Profile Viewer
The terrain profile viewer should use the same tile_manager
for consistent elevation data. With bilinear interpolation,
profiles will be much smoother and more accurate.
## Download Sources (Priority Order)
For auto-download when online:
1. **srtm.fasma.org** (no auth, direct HGT.zip download)
URL: `https://srtm.fasma.org/N48E033.SRTMGL1.hgt.zip`
- Free, no registration
- SRTM1 (30m) data
- May be slow or unreliable
2. **viewfinderpanoramas.org** (no auth, void-filled data)
URL: `http://viewfinderpanoramas.org/dem1/{region}/{tile}.hgt.zip`
- Free, no registration
- Void areas filled from topographic maps
- Better quality in mountainous areas
- File naming might differ by region
3. **Future: UMTC sync server**
URL: `https://rfcp.{your-domain}/api/terrain/tiles/{tile_name}.hgt`
- Self-hosted on your infrastructure
- Accessible via WireGuard mesh
- Can pre-populate with full Ukraine dataset
## Offline Bundle Strategy
For installer / field deployment:
### Option A: Region packs
Pre-package tiles by operational area:
- `terrain-dnipro.zip` — 4 tiles around Dnipro area (~100 MB)
- `terrain-ukraine-east.zip` — ~50 tiles, eastern Ukraine (~1.2 GB)
- `terrain-ukraine-full.zip` — ~171 tiles, all Ukraine (~4.3 GB)
### Option B: On-demand with cache
Ship empty, download tiles as needed on first calculation.
Cache permanently. Works well for development/testing.
### Option C: Live USB bundle
For tactical deployment, include full Ukraine terrain data
on the live USB alongside the application. 4.3 GB is acceptable
for a USB drive.
Recommend: **Option B for now** (development), **Option C for deployment**.
## File Changes
### New Files
- `backend/app/services/terrain_manager.py` — SRTMTileManager class
### Modified Files
- `backend/app/services/terrain_service.py` — Replace API calls with tile_manager
- `backend/app/services/gpu_service.py` — Replace _batch_elevation_lookup
- `backend/app/services/coverage_service.py` — Add missing tile pre-check
- `backend/app/main.py` — Initialize tile_manager on startup
### Config
- Add `TERRAIN_DIR` environment variable / config option
- Default: `./data/terrain` relative to backend exe
## Testing
```powershell
# Build and test
cd D:\root\rfcp\backend
pyinstaller ..\installer\rfcp-server-gpu.spec --noconfirm
.\dist\rfcp-server\rfcp-server.exe
```
### Test 1: First run (no tiles cached)
- Start app, trigger calculation
- Should attempt to download required tile(s)
- If online: downloads, caches, calculates
- If offline: warning, flat terrain fallback
### Test 2: Cached tiles
- Run same calculation again
- Tile loaded from disk cache, no download
- Should be fast (tile load from disk < 100ms)
### Test 3: Accuracy comparison
- Compare elevation at known points (e.g., Dnipro city center)
- Cross-reference with Google Earth elevation
- Expected accuracy: ±5m horizontal, ±16m vertical (SRTM spec)
### Test 4: Multi-tile calculation
- Set radius to 50km+ to span multiple tiles
- Verify seamless stitching at tile boundaries
- No elevation jumps or artifacts at edges
### Test 5: Terrain profile
- Draw terrain profile across tile boundary
- Should be smooth, no discontinuity
- Compare with Google Earth profile for same path
### Test 6: Performance
- Tile load time from disk: <100ms
- Batch elevation lookup (6000 points): <50ms
- Should not regress overall calculation time
- Memory: ~25 MB per tile on disk (int16); ~52 MB in RAM as float32, so 16 cached tiles ≈ 830 MB
## What NOT to Change
- Don't modify GPU pipeline architecture (Phase 2.5/2.6/2.7)
- Don't change propagation model math
- Don't change API endpoints or response format
- Don't change frontend map or heatmap rendering
- Don't change OSM building/vegetation fetching
- Don't change PyInstaller build process (just add data dir)
## Success Criteria
- [ ] SRTM1 tiles load correctly (3601×3601, 30m resolution)
- [ ] Bilinear interpolation working (smoother than nearest-neighbor)
- [ ] Offline mode works with pre-cached tiles
- [ ] Auto-download works when online
- [ ] Missing tile warning shown to user
- [ ] Multi-tile seamless stitching
- [ ] Terrain profile accuracy matches Google Earth within 20m
- [ ] No performance regression (calculation time same or faster)
- [ ] Tile cache directory configurable

# RFCP — Iteration 3.9.1: Terra Tile Server Integration
## Overview
Connect terrain_service.py to our SRTM tile server (terra.eliah.one) as primary download source, add terrain status API endpoint, and create a bulk pre-download utility. The `data/terrain/` directory already exists.
## Context
- terra.eliah.one is live and serving tiles via Caddy file_server
- SRTM3 (90m): 187 tiles, 515 MB — full Ukraine coverage (N44-N51, E018-E041)
- SRTM1 (30m): 160 tiles, 3.9 GB — same coverage area
- terrain_service.py already has bilinear interpolation (3.9.0)
- Backend runs on Windows with RTX 4060, tiles stored locally in `data/terrain/`
- Server is download source, NOT used during realtime calculations
## Changes Required
### 1. Update SRTM_SOURCES in terrain_service.py
**File:** `backend/app/services/terrain_service.py`
Replace current SRTM_SOURCES (lines 22-25):
```python
SRTM_SOURCES = [
"https://elevation-tiles-prod.s3.amazonaws.com/skadi/{lat_dir}/{tile_name}.hgt.gz",
"https://s3.amazonaws.com/elevation-tiles-prod/skadi/{lat_dir}/{tile_name}.hgt.gz",
]
```
With prioritized source list:
```python
SRTM_SOURCES = [
# Our tile server — SRTM1 (30m) preferred, uncompressed
{
"url": "https://terra.eliah.one/srtm1/{tile_name}.hgt",
"compressed": False,
"resolution": "srtm1",
},
# Our tile server — SRTM3 (90m) fallback
{
"url": "https://terra.eliah.one/srtm3/{tile_name}.hgt",
"compressed": False,
"resolution": "srtm3",
},
# Public AWS mirror — SRTM1, gzip compressed
{
"url": "https://elevation-tiles-prod.s3.amazonaws.com/skadi/{lat_dir}/{tile_name}.hgt.gz",
"compressed": True,
"resolution": "srtm1",
},
]
```
Update `download_tile()` to handle the new source format:
```python
async def download_tile(self, tile_name: str) -> bool:
"""Download SRTM tile from configured sources, preferring highest resolution."""
tile_path = self.get_tile_path(tile_name)
if tile_path.exists():
return True
lat_dir = tile_name[:3] # e.g., "N48"
async with httpx.AsyncClient(timeout=60.0, follow_redirects=True) as client:
for source in self.SRTM_SOURCES:
url = source["url"].format(lat_dir=lat_dir, tile_name=tile_name)
try:
response = await client.get(url)
if response.status_code == 200:
data = response.content
# Skip empty responses
if len(data) < 1000:
continue
if source["compressed"]:
if url.endswith('.gz'):
data = gzip.decompress(data)
elif url.endswith('.zip'):
with zipfile.ZipFile(io.BytesIO(data)) as zf:
for name in zf.namelist():
if name.endswith('.hgt'):
data = zf.read(name)
break
# Validate tile size
if len(data) not in (3601 * 3601 * 2, 1201 * 1201 * 2):
print(f"[Terrain] Invalid tile size {len(data)} from {url}")
continue
tile_path.write_bytes(data)
res = source["resolution"]
size_mb = len(data) / 1048576
print(f"[Terrain] Downloaded {tile_name} ({res}, {size_mb:.1f} MB)")
return True
except Exception as e:
print(f"[Terrain] Failed from {url}: {e}")
continue
print(f"[Terrain] Could not download {tile_name} from any source")
return False
```
### 2. Add Terrain Status API Endpoint
**File:** `backend/app/api/routes.py` (or wherever API routes are defined)
Add a new endpoint:
```python
@router.get("/api/terrain/status")
async def terrain_status():
"""Return terrain data availability info."""
from app.services.terrain_service import terrain_service
cached_tiles = terrain_service.get_cached_tiles()
cache_size = terrain_service.get_cache_size_mb()
# Categorize by resolution
srtm1_tiles = [t for t in cached_tiles
if (terrain_service.terrain_path / f"{t}.hgt").stat().st_size == 3601 * 3601 * 2]
srtm3_tiles = [t for t in cached_tiles if t not in srtm1_tiles]
return {
"total_tiles": len(cached_tiles),
"srtm1": {
"count": len(srtm1_tiles),
"resolution_m": 30,
"tiles": sorted(srtm1_tiles),
},
"srtm3": {
"count": len(srtm3_tiles),
"resolution_m": 90,
"tiles": sorted(srtm3_tiles),
},
"cache_size_mb": round(cache_size, 1),
"memory_cached": len(terrain_service._tile_cache),
"terra_server": "https://terra.eliah.one",
}
```
### 3. Add Bulk Pre-Download Endpoint
**File:** Same routes file
```python
@router.post("/api/terrain/download")
async def terrain_download(request: dict):
"""Pre-download tiles for a region.
Body: {"center_lat": 48.46, "center_lon": 35.04, "radius_km": 50}
Or: {"tiles": ["N48E034", "N48E035", "N47E034", "N47E035"]}
"""
from app.services.terrain_service import terrain_service
if "tiles" in request:
tile_list = request["tiles"]
else:
center_lat = request.get("center_lat", 48.46)
center_lon = request.get("center_lon", 35.04)
radius_km = request.get("radius_km", 50)
tile_list = terrain_service.get_required_tiles(center_lat, center_lon, radius_km)
missing = [t for t in tile_list if not terrain_service.get_tile_path(t).exists()]
if not missing:
return {"status": "ok", "message": "All tiles already cached", "count": len(tile_list)}
# Download missing tiles
downloaded = []
failed = []
for tile_name in missing:
success = await terrain_service.download_tile(tile_name)
if success:
downloaded.append(tile_name)
else:
failed.append(tile_name)
return {
"status": "ok",
"required": len(tile_list),
"already_cached": len(tile_list) - len(missing),
"downloaded": downloaded,
"failed": failed,
}
```
### 4. Add Tile Index Endpoint
**File:** Same routes file
```python
@router.get("/api/terrain/index")
async def terrain_index():
"""Fetch tile index from terra server."""
import httpx
try:
async with httpx.AsyncClient(timeout=10.0) as client:
resp = await client.get("https://terra.eliah.one/api/index")
if resp.status_code == 200:
return resp.json()
except Exception:
pass
return {"error": "Could not reach terra.eliah.one", "offline": True}
```
## Testing Checklist
- [ ] `GET /api/terrain/status` returns tile counts and sizes
- [ ] `POST /api/terrain/download {"center_lat": 48.46, "center_lon": 35.04, "radius_km": 10}` downloads missing tiles from terra.eliah.one
- [ ] Tiles downloaded from terra are valid HGT format (2,884,802 or 25,934,402 bytes)
- [ ] SRTM1 is preferred over SRTM3 when downloading
- [ ] Existing tiles are not re-downloaded
- [ ] Coverage calculation works with terrain data (test with Dnipro coordinates)
- [ ] `GET /api/terrain/index` returns terra server tile list
## Build & Deploy
```powershell
cd D:\root\rfcp\backend
# No build needed — Python backend, just restart
# Kill existing uvicorn and restart:
python -m uvicorn app.main:app --host 0.0.0.0 --port 8000 --reload
```
## Commit Message
```
feat(terrain): integrate terra.eliah.one tile server
- Add terra.eliah.one as primary SRTM source (SRTM1 30m preferred)
- Keep AWS S3 as fallback source
- Add /api/terrain/status endpoint (tile inventory)
- Add /api/terrain/download endpoint (bulk pre-download)
- Add /api/terrain/index endpoint (terra server index)
- Validate tile size before saving
- Add follow_redirects=True to httpx client
```
## Success Criteria
1. terrain_service downloads from terra.eliah.one first
2. /api/terrain/status shows correct tile counts by resolution
3. /api/terrain/download fetches tiles for any Ukrainian coordinate
4. Offline mode works — no downloads attempted if tiles exist locally
5. Coverage calculation uses real elevation data instead of flat terrain

# RFCP Dependencies & Installer Specification
## Overview
All dependencies needed for RFCP to work out of the box, including GPU acceleration.
The installer must handle everything — user should NOT need to run pip manually.
---
## Python Dependencies
### Core (MUST have)
```txt
# requirements.txt
# Web framework
fastapi>=0.104.0
uvicorn[standard]>=0.24.0
websockets>=12.0
# Scientific computing
numpy>=1.24.0
scipy>=1.11.0
# Geospatial
pyproj>=3.6.0 # coordinate transformations
shapely>=2.0.0 # geometry operations (boundary contours)
# Terrain data
rasterio>=1.3.0 # GeoTIFF reading (optional, for custom terrain)
# Note: SRTM .hgt files read with numpy directly
# OSM data
requests>=2.31.0 # HTTP client for OSM Overpass API
geopy>=2.4.0 # distance calculations
# Database
# sqlite3 is built-in Python — no install needed
# Utilities
orjson>=3.9.0 # fast JSON (optional, faster API responses)
pydantic>=2.0.0 # data validation (FastAPI dependency)
```
### GPU Acceleration (OPTIONAL — auto-detected)
```txt
# requirements-gpu-nvidia.txt
cupy-cuda12x>=12.0.0 # For CUDA 12.x (RTX 30xx, 40xx)
# OR
cupy-cuda11x>=11.0.0 # For CUDA 11.x (older cards)
# requirements-gpu-opencl.txt
pyopencl>=2023.1 # For ANY GPU (Intel, AMD, NVIDIA)
```
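Since both GPU stacks are optional, the backend must pick its array module at runtime. A minimal sketch of that selection — the repo's actual `gpu_manager` may differ (e.g. adding OpenCL and device caching):

```python
def get_array_module():
    """Return CuPy when a usable CUDA device exists, otherwise NumPy.

    Sketch only: importing cupy can succeed even when no driver is present,
    so we also probe the device count, which raises if CUDA is unusable.
    """
    try:
        import cupy as cp
        if cp.cuda.runtime.getDeviceCount() > 0:
            return cp
    except Exception:
        pass
    import numpy as np
    return np
```

Callers then write `xp = get_array_module()` and use `xp` for all array math, which is the pattern the GPU pipeline docs above rely on.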
### Development / Testing
```txt
# requirements-dev.txt
pytest>=7.0.0
pytest-asyncio>=0.21.0
httpx>=0.25.0 # async test client
```
---
## System Dependencies
### NVIDIA GPU Support
```
REQUIRED: NVIDIA Driver (comes with GPU)
REQUIRED: CUDA Toolkit 12.x (for CuPy)
Check if installed:
nvidia-smi → shows driver version
nvcc --version → shows CUDA toolkit version
If missing CUDA toolkit:
Download from: https://developer.nvidia.com/cuda-downloads
Select: Windows > x86_64 > 11/10 > exe (local)
Size: ~3 GB
Alternative: cupy auto-installs CUDA runtime!
pip install cupy-cuda12x
This bundles CUDA runtime (~700 MB) — no separate install needed
```
### Intel GPU Support (OpenCL)
```
REQUIRED: Intel GPU Driver (usually pre-installed)
REQUIRED: Intel OpenCL Runtime
Check if installed:
Open Device Manager → Display Adapters → Intel UHD/Iris
For OpenCL:
Download Intel GPU Computing Runtime:
https://github.com/intel/compute-runtime/releases
Or: Intel oneAPI Base Toolkit (includes OpenCL)
https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit-download.html
```
### AMD GPU Support (OpenCL)
```
REQUIRED: AMD Adrenalin Driver (includes OpenCL)
Download from: https://www.amd.com/en/support
```
---
## Node.js / Frontend Dependencies
### System Requirements
```
Node.js >= 18.0.0 (LTS recommended)
npm >= 9.0.0
Check:
node --version
npm --version
```
### Frontend packages (managed by npm)
```json
// package.json — key dependencies
{
"dependencies": {
"react": "^18.2.0",
"react-dom": "^18.2.0",
"leaflet": "^1.9.4",
"react-leaflet": "^4.2.0",
"recharts": "^2.8.0",
"zustand": "^4.4.0",
"lucide-react": "^0.294.0"
},
"devDependencies": {
"vite": "^5.0.0",
"typescript": "^5.3.0",
"tailwindcss": "^3.4.0",
"@types/leaflet": "^1.9.0"
}
}
```
---
## Installer Script
### Windows Installer (NSIS or Electron-Builder)
```python
# install_rfcp.py — Python-based installer/setup script
import subprocess
import sys
import platform
import os
import shutil
import json
def check_python():
"""Verify Python 3.10+ is available."""
version = sys.version_info
    if (version.major, version.minor) < (3, 10):
print(f"❌ Python 3.10+ required, found {version.major}.{version.minor}")
return False
print(f"✅ Python {version.major}.{version.minor}.{version.micro}")
return True
def check_node():
"""Verify Node.js 18+ is available."""
try:
result = subprocess.run(["node", "--version"], capture_output=True, text=True)
version = result.stdout.strip().lstrip('v')
major = int(version.split('.')[0])
if major < 18:
print(f"❌ Node.js 18+ required, found {version}")
return False
print(f"✅ Node.js {version}")
return True
except FileNotFoundError:
print("❌ Node.js not found")
return False
def detect_gpu():
"""Detect available GPU hardware."""
gpus = {
"nvidia": False,
"nvidia_name": "",
"intel": False,
"intel_name": "",
"amd": False,
"amd_name": ""
}
# Check NVIDIA
try:
result = subprocess.run(
["nvidia-smi", "--query-gpu=name,driver_version,memory.total",
"--format=csv,noheader"],
capture_output=True, text=True, timeout=5
)
if result.returncode == 0:
info = result.stdout.strip()
gpus["nvidia"] = True
gpus["nvidia_name"] = info.split(",")[0].strip()
print(f"✅ NVIDIA GPU: {info}")
except (FileNotFoundError, subprocess.TimeoutExpired):
print(" No NVIDIA GPU detected")
# Check Intel/AMD via WMI (Windows)
if platform.system() == "Windows":
try:
result = subprocess.run(
["wmic", "path", "win32_videocontroller", "get",
"name,adapterram,driverversion", "/format:csv"],
capture_output=True, text=True, timeout=5
)
for line in result.stdout.strip().split('\n'):
if 'Intel' in line:
gpus["intel"] = True
gpus["intel_name"] = [x for x in line.split(',') if 'Intel' in x][0]
print(f"✅ Intel GPU: {gpus['intel_name']}")
elif 'AMD' in line or 'Radeon' in line:
gpus["amd"] = True
gpus["amd_name"] = [x for x in line.split(',') if 'AMD' in x or 'Radeon' in x][0]
print(f"✅ AMD GPU: {gpus['amd_name']}")
except Exception:
pass
return gpus
def install_core_dependencies():
"""Install core Python dependencies."""
print("\n📦 Installing core dependencies...")
subprocess.run([
sys.executable, "-m", "pip", "install", "-r", "requirements.txt",
"--quiet", "--no-warn-script-location"
], check=True)
print("✅ Core dependencies installed")
def install_gpu_dependencies(gpus: dict):
"""Install GPU-specific dependencies based on detected hardware."""
print("\n🎮 Setting up GPU acceleration...")
gpu_installed = False
# NVIDIA — install CuPy (includes CUDA runtime)
if gpus["nvidia"]:
print(f" Installing CuPy for {gpus['nvidia_name']}...")
try:
# Try CUDA 12 first (newer cards)
subprocess.run([
sys.executable, "-m", "pip", "install", "cupy-cuda12x",
"--quiet", "--no-warn-script-location"
], check=True, timeout=300)
print(f" ✅ CuPy (CUDA 12) installed for {gpus['nvidia_name']}")
gpu_installed = True
except (subprocess.CalledProcessError, subprocess.TimeoutExpired):
try:
# Fallback to CUDA 11
subprocess.run([
sys.executable, "-m", "pip", "install", "cupy-cuda11x",
"--quiet", "--no-warn-script-location"
], check=True, timeout=300)
print(f" ✅ CuPy (CUDA 11) installed for {gpus['nvidia_name']}")
gpu_installed = True
except Exception as e:
print(f" ⚠️ CuPy installation failed: {e}")
print(f" 💡 Manual install: pip install cupy-cuda12x")
# Intel/AMD — install PyOpenCL
if gpus["intel"] or gpus["amd"]:
gpu_name = gpus["intel_name"] or gpus["amd_name"]
print(f" Installing PyOpenCL for {gpu_name}...")
try:
subprocess.run([
sys.executable, "-m", "pip", "install", "pyopencl",
"--quiet", "--no-warn-script-location"
], check=True, timeout=120)
print(f" ✅ PyOpenCL installed for {gpu_name}")
gpu_installed = True
except Exception as e:
print(f" ⚠️ PyOpenCL installation failed: {e}")
print(f" 💡 Manual install: pip install pyopencl")
if not gpu_installed:
print(" No GPU acceleration available — using CPU (NumPy)")
print(" 💡 This is fine! GPU just makes large calculations faster.")
return gpu_installed
def install_frontend():
"""Install frontend dependencies and build."""
print("\n🌐 Setting up frontend...")
frontend_dir = os.path.join(os.path.dirname(__file__), "frontend")
if os.path.exists(os.path.join(frontend_dir, "package.json")):
subprocess.run(["npm", "install"], cwd=frontend_dir, check=True)
subprocess.run(["npm", "run", "build"], cwd=frontend_dir, check=True)
print("✅ Frontend built")
else:
print("⚠️ Frontend directory not found")
def download_terrain_data():
"""Pre-download SRTM terrain tiles for Ukraine."""
print("\n🏔️ Checking terrain data...")
cache_dir = os.path.expanduser("~/.rfcp/terrain")
os.makedirs(cache_dir, exist_ok=True)
# Ukraine bounding box: lat 44-53, lon 22-41
# SRTM tiles needed for typical use
required_tiles = [
# Lviv oblast area (common test area)
"N49E025", "N49E024", "N49E026",
"N50E025", "N50E024", "N50E026",
# Dnipro area
"N48E034", "N48E035",
"N49E034", "N49E035",
]
existing = [f.replace(".hgt", "") for f in os.listdir(cache_dir) if f.endswith(".hgt")]
missing = [t for t in required_tiles if t not in existing]
if missing:
print(f" {len(missing)} terrain tiles needed (auto-download on first use)")
else:
print(f" ✅ {len(existing)} terrain tiles cached")
def create_launcher():
"""Create desktop shortcut / launcher script."""
print("\n🚀 Creating launcher...")
if platform.system() == "Windows":
# Create .bat launcher
launcher = os.path.join(os.path.dirname(__file__), "RFCP.bat")
with open(launcher, 'w') as f:
f.write('@echo off\n')
f.write('title RFCP - RF Coverage Planner\n')
f.write('echo Starting RFCP...\n')
f.write(f'cd /d "{os.path.dirname(__file__)}"\n')
f.write(f'"{sys.executable}" -m uvicorn backend.app.main:app --host 0.0.0.0 --port 8888\n')
print(f" ✅ Launcher created: {launcher}")
return True
def verify_installation():
"""Run quick verification tests."""
print("\n🔍 Verifying installation...")
checks = []
# Check core imports
try:
import numpy as np
checks.append(f"✅ NumPy {np.__version__}")
except ImportError:
checks.append("❌ NumPy missing")
try:
import scipy
checks.append(f"✅ SciPy {scipy.__version__}")
except ImportError:
checks.append("❌ SciPy missing")
try:
import fastapi
checks.append(f"✅ FastAPI {fastapi.__version__}")
except ImportError:
checks.append("❌ FastAPI missing")
try:
import shapely
checks.append(f"✅ Shapely {shapely.__version__}")
except ImportError:
checks.append("⚠️ Shapely missing (boundary features disabled)")
# Check GPU
try:
import cupy as cp
props = cp.cuda.runtime.getDeviceProperties(0)
name = props["name"].decode() if isinstance(props["name"], bytes) else props["name"]
mem_total = cp.cuda.Device(0).mem_info[1]
checks.append(f"✅ CuPy → {name} ({mem_total//1024//1024} MB)")
except ImportError:
checks.append(" CuPy not available")
except Exception as e:
checks.append(f"⚠️ CuPy error: {e}")
try:
import pyopencl as cl
devices = []
for p in cl.get_platforms():
for d in p.get_devices():
devices.append(d.name)
checks.append(f"✅ PyOpenCL → {', '.join(devices)}")
except ImportError:
checks.append(" PyOpenCL not available")
except Exception as e:
checks.append(f"⚠️ PyOpenCL error: {e}")
for check in checks:
print(f" {check}")
return all("❌" not in c for c in checks)
def main():
"""Main installer entry point."""
print("=" * 60)
print(" RFCP — RF Coverage Planner — Installer")
print("=" * 60)
print()
# Step 1: Check prerequisites
print("📋 Checking prerequisites...")
if not check_python():
sys.exit(1)
check_node()
# Step 2: Detect GPU
gpus = detect_gpu()
# Step 3: Install dependencies
install_core_dependencies()
install_gpu_dependencies(gpus)
# Step 4: Frontend
install_frontend()
# Step 5: Terrain data
download_terrain_data()
# Step 6: Launcher
create_launcher()
# Step 7: Verify
print()
success = verify_installation()
# Summary
print()
print("=" * 60)
if success:
print(" ✅ RFCP installed successfully!")
print()
print(" To start RFCP:")
print(" python -m uvicorn backend.app.main:app --port 8888")
print(" Then open: http://localhost:8888")
print()
if gpus["nvidia"]:
print(f" 🎮 GPU: {gpus['nvidia_name']} (CUDA)")
elif gpus["intel"] or gpus["amd"]:
gpu_name = gpus["intel_name"] or gpus["amd_name"]
print(f" 🎮 GPU: {gpu_name} (OpenCL)")
else:
print(" 💻 Mode: CPU only")
else:
print(" ⚠️ Installation completed with warnings")
print(" Some features may be limited")
print("=" * 60)
if __name__ == "__main__":
main()
```
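The `required_tiles` list in `download_terrain_data()` hard-codes tile names. SRTM tiles are named after the latitude/longitude of their south-west corner, so the list can be derived from a bounding box. A minimal sketch (the `tiles_for_bbox` name is illustrative, not part of the codebase):

```python
import math

def tiles_for_bbox(min_lat: float, min_lon: float,
                   max_lat: float, max_lon: float) -> list:
    """Return SRTM tile names (e.g. 'N49E025') covering a bounding box.

    Each SRTM .hgt tile is named after the lat/lon of its south-west
    corner: 2-digit latitude, 3-digit longitude, zero-padded.
    """
    tiles = []
    for lat in range(math.floor(min_lat), math.floor(max_lat) + 1):
        for lon in range(math.floor(min_lon), math.floor(max_lon) + 1):
            ns = "N" if lat >= 0 else "S"
            ew = "E" if lon >= 0 else "W"
            tiles.append(f"{ns}{abs(lat):02d}{ew}{abs(lon):03d}")
    return tiles
```

This would let the installer accept an arbitrary area of interest instead of a fixed tile list.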
---
## Electron-Builder / NSIS Packaging
### For .exe Installer
```yaml
# electron-builder.yml
appId: com.rfcp.coverage-planner
productName: "RFCP - RF Coverage Planner"
copyright: "RFCP 2026"
directories:
output: dist
buildResources: build
files:
- "backend/**/*"
- "frontend/dist/**/*"
- "requirements.txt"
- "install_rfcp.py"
- "!**/*.pyc"
- "!**/node_modules/**"
- "!**/venv/**"
extraResources:
- from: "python-embedded/"
to: "python/"
- from: "terrain-data/"
to: "terrain/"
win:
target:
- target: nsis
arch: [x64]
icon: "build/icon.ico"
nsis:
oneClick: false
allowToChangeInstallationDirectory: true
installerIcon: "build/icon.ico"
license: "LICENSE.md"
# Custom NSIS script for GPU detection
include: "build/gpu-detect.nsh"
# Install steps:
# 1. Extract files
# 2. Run install_rfcp.py (detects GPU, installs deps)
# 3. Create Start Menu shortcuts
# 4. Create Desktop shortcut
```
### Portable Version (.zip)
```
RFCP-Portable/
├── RFCP.bat # Main launcher
├── install.bat # First-time setup
├── backend/
│ ├── app/
│ │ ├── main.py
│ │ ├── api/
│ │ ├── services/
│ │ └── models/
│ └── requirements.txt
├── frontend/
│ └── dist/ # Pre-built frontend
├── python/ # Embedded Python (optional)
│ ├── python.exe
│ └── Lib/
├── terrain/ # Pre-cached .hgt files
│ ├── N49E025.hgt
│ └── ...
├── data/
│ ├── osm_cache.db # SQLite cache (created on first run)
│ └── config.json # User settings
└── README.md
```
### install.bat (First-Time Setup)
```batch
@echo off
title RFCP - First Time Setup
echo ============================================
echo RFCP - RF Coverage Planner - Setup
echo ============================================
echo.
REM Check if Python exists
python --version >nul 2>&1
if errorlevel 1 (
echo ERROR: Python not found!
echo Please install Python 3.10+ from python.org
pause
exit /b 1
)
REM Run installer
python install_rfcp.py
echo.
echo Setup complete! Run RFCP.bat to start.
pause
```
### RFCP.bat (Launcher)
```batch
@echo off
title RFCP - RF Coverage Planner
cd /d "%~dp0"
REM Check if installed
if not exist "backend\app\main.py" (
echo ERROR: RFCP not found. Run install.bat first.
pause
exit /b 1
)
echo Starting RFCP...
echo Open http://localhost:8888 in your browser
echo Press Ctrl+C to stop
echo.
python -m uvicorn backend.app.main:app --host 0.0.0.0 --port 8888
```
---
## Dependency Size Estimates
| Component | Size |
|-----------|------|
| Python (embedded) | ~30 MB |
| Core pip packages | ~80 MB |
| CuPy + CUDA runtime | ~700 MB |
| PyOpenCL | ~15 MB |
| Frontend (built) | ~5 MB |
| SRTM terrain (Ukraine) | ~300 MB |
| **Total (with CUDA)** | **~1.1 GB** |
| **Total (CPU only)** | **~415 MB** |
---
## Runtime Requirements
| Resource | Minimum | Recommended |
|----------|---------|-------------|
| RAM | 4 GB | 8+ GB |
| Disk | 500 MB | 2 GB (with terrain cache) |
| CPU | 4 cores | 8+ cores |
| GPU | - | NVIDIA GTX 1060+ / Intel UHD 630+ |
| OS | Windows 10 | Windows 10/11 64-bit |
| Python | 3.10 | 3.11+ |
| Node.js | 18 | 20 LTS |
---
## Auto-Update Mechanism (Future)
```python
# Check for updates on startup
async def check_for_updates():
    try:
        # httpx.get() is synchronous; async code needs AsyncClient
        async with httpx.AsyncClient(timeout=5) as client:
            response = await client.get(
                "https://api.github.com/repos/user/rfcp/releases/latest"
            )
        data = response.json()
        latest = data["tag_name"]
        current = get_current_version()
        if latest != current:
            return {
                "update_available": True,
                "current": current,
                "latest": latest,
                "download_url": data["assets"][0]["browser_download_url"]
            }
    except Exception:
        pass
    return {"update_available": False}
```
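The snippet treats any tag difference as an update, so a rollback or older tag would also trigger the prompt. A sketch of strict numeric comparison (helper names are illustrative):

```python
def parse_version(tag: str) -> tuple:
    """Parse a release tag like 'v3.10.5' into a comparable tuple."""
    return tuple(int(part) for part in tag.lstrip("v").split("."))

def is_newer(latest: str, current: str) -> bool:
    """True only when `latest` is strictly newer than `current`."""
    return parse_version(latest) > parse_version(current)
```

Tuple comparison handles multi-digit components correctly ("3.10.0" is newer than "3.9.9"), which plain string comparison would get wrong.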
@@ -0,0 +1,516 @@
# RFCP — Iteration 3.10.5: WebGL Smooth Coverage Interpolation
**Date:** February 6, 2026
**Priority:** P1 (Major Visual Improvement)
**Estimated Time:** 3-4 hours
**Author:** Claude (Opus 4.5) for Oleg @ UMTC
---
## Overview
Replace the current grid-based square coverage visualization with smooth WebGL-interpolated rendering. Currently coverage is displayed as discrete colored squares which looks "pixelated" and unrealistic. Professional RF tools like CloudRF use smooth gradients that interpolate between measurement points.
**Current State:** Grid squares at 50m/200m resolution → blocky appearance
**Target State:** Smooth bilinear/bicubic interpolation → professional gradient appearance
---
## Problem Description
### Current Implementation
- Coverage points are rendered as discrete squares on a Leaflet canvas layer
- Each grid point (lat, lon, rsrp) → one colored square
- Resolution determines square size (50m = small squares, 200m = large squares)
- Result: Looks like Minecraft, not like professional RF planning software
### Desired Outcome
- Smooth color transitions between coverage points
- GPU-accelerated rendering via WebGL
- No visible grid artifacts
- Performance maintained or improved (GPU does interpolation)
- Same data, better visualization
---
## Technical Approach
### Option A: WebGL Fragment Shader (RECOMMENDED)
Use a WebGL fragment shader that:
1. Receives coverage points as a texture or uniform array
2. For each screen pixel, finds nearest coverage points
3. Performs bilinear interpolation between them
4. Outputs smoothly interpolated color
**Pros:**
- Best visual quality
- GPU-accelerated (fast)
- Scales to any resolution
- Industry standard approach
**Cons:**
- More complex implementation
- Requires WebGL knowledge
### Option B: Canvas with Gaussian Blur
Apply Gaussian blur to the existing canvas after rendering squares.
**Pros:**
- Simple to implement
- Works with existing code
**Cons:**
- Blurs edges (coverage boundary becomes fuzzy)
- Not true interpolation
- Performance overhead
### Option C: Pre-interpolate on CPU
Generate more points by interpolating between existing ones before rendering.
**Pros:**
- Simpler rendering
- Works with existing canvas
**Cons:**
- Much slower (CPU-bound)
- Memory intensive
- Not scalable
**DECISION: Implement Option A (WebGL Fragment Shader)**
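For reference, the bilinear interpolation that Option A delegates to the GPU's texture filtering reduces to two linear blends; a minimal Python sketch of the per-pixel math:

```python
def bilerp(q00: float, q10: float, q01: float, q11: float,
           tx: float, ty: float) -> float:
    """Bilinear interpolation of four corner values of a unit cell.

    (tx, ty) in [0, 1] is the sample position; q00 is the value at
    (0, 0), q10 at (1, 0), q01 at (0, 1), q11 at (1, 1).
    """
    top = q00 * (1.0 - tx) + q10 * tx      # blend along x at y = 0
    bottom = q01 * (1.0 - tx) + q11 * tx   # blend along x at y = 1
    return top * (1.0 - ty) + bottom * ty  # blend along y
```

With `gl.LINEAR` filtering the hardware performs exactly this blend per fragment, which is why Option A costs essentially nothing extra on the GPU.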
---
## Implementation Plan
### Phase 1: WebGL Layer Setup
**File:** `frontend/src/components/map/WebGLCoverageLayer.tsx`
Create a new Leaflet layer that uses WebGL for rendering:
```typescript
import { useEffect, useRef } from 'react';
import { useMap } from 'react-leaflet';
import L from 'leaflet';
interface CoveragePoint {
lat: number;
lon: number;
rsrp: number;
}
interface WebGLCoverageLayerProps {
points: CoveragePoint[];
opacity: number;
minRsrp: number;
maxRsrp: number;
visible: boolean;
}
export default function WebGLCoverageLayer({
points,
opacity,
minRsrp,
maxRsrp,
visible
}: WebGLCoverageLayerProps) {
const map = useMap();
const canvasRef = useRef<HTMLCanvasElement | null>(null);
const glRef = useRef<WebGLRenderingContext | null>(null);
const programRef = useRef<WebGLProgram | null>(null);
useEffect(() => {
if (!visible || points.length === 0) return;
// Create canvas overlay
const canvas = document.createElement('canvas');
const container = map.getContainer();
canvas.width = container.clientWidth;
canvas.height = container.clientHeight;
canvas.style.position = 'absolute';
canvas.style.top = '0';
canvas.style.left = '0';
canvas.style.pointerEvents = 'none';
canvas.style.zIndex = '400'; // Above tiles, below markers
canvas.style.opacity = String(opacity);
container.appendChild(canvas);
canvasRef.current = canvas;
// Initialize WebGL
const gl = canvas.getContext('webgl') || canvas.getContext('experimental-webgl');
if (!gl) {
console.error('WebGL not supported, falling back to canvas');
return;
}
glRef.current = gl as WebGLRenderingContext;
// Setup shaders and render
initShaders(gl as WebGLRenderingContext);
render();
// Handle map move/zoom/resize
const onMove = () => render();
const onResize = () => {
  canvas.width = container.clientWidth;
  canvas.height = container.clientHeight;
  render();
};
map.on('move', onMove);
map.on('zoom', onMove);
map.on('resize', onResize);
return () => {
map.off('move', onMove);
map.off('zoom', onMove);
map.off('resize', onResize);
canvas.remove();
};
}, [points, visible, opacity, minRsrp, maxRsrp, map]);
// ... shader init and render functions
}
```
### Phase 2: WebGL Shaders
**Vertex Shader:**
```glsl
attribute vec2 a_position;
varying vec2 v_texCoord;
void main() {
gl_Position = vec4(a_position, 0.0, 1.0);
v_texCoord = (a_position + 1.0) / 2.0;
}
```
**Fragment Shader (Bilinear Interpolation):**
```glsl
precision mediump float;
uniform sampler2D u_coverageTexture;
uniform vec2 u_resolution;
uniform vec4 u_bounds; // minLat, minLon, maxLat, maxLon
uniform float u_minRsrp;
uniform float u_maxRsrp;
varying vec2 v_texCoord;
// RSRP to color gradient (matches existing palette)
vec3 rsrpToColor(float rsrp) {
float t = clamp((rsrp - u_minRsrp) / (u_maxRsrp - u_minRsrp), 0.0, 1.0);
// Color stops: red -> orange -> yellow -> green -> cyan -> blue
// Reversed: strong signal = green/cyan, weak = red/orange
if (t < 0.2) {
return mix(vec3(0.5, 0.0, 0.0), vec3(1.0, 0.0, 0.0), t / 0.2); // maroon -> red
} else if (t < 0.4) {
return mix(vec3(1.0, 0.0, 0.0), vec3(1.0, 0.5, 0.0), (t - 0.2) / 0.2); // red -> orange
} else if (t < 0.6) {
return mix(vec3(1.0, 0.5, 0.0), vec3(1.0, 1.0, 0.0), (t - 0.4) / 0.2); // orange -> yellow
} else if (t < 0.8) {
return mix(vec3(1.0, 1.0, 0.0), vec3(0.0, 1.0, 0.0), (t - 0.6) / 0.2); // yellow -> green
} else {
return mix(vec3(0.0, 1.0, 0.0), vec3(0.0, 1.0, 1.0), (t - 0.8) / 0.2); // green -> cyan
}
}
void main() {
// Convert screen coords to geographic coords
vec2 geoCoord = mix(u_bounds.xy, u_bounds.zw, v_texCoord);
// Sample coverage texture (contains RSRP values encoded as colors)
vec4 sample = texture2D(u_coverageTexture, v_texCoord);
// Decode RSRP from texture (R channel = normalized RSRP)
float rsrp = mix(u_minRsrp, u_maxRsrp, sample.r);
// Skip if no coverage (alpha = 0)
if (sample.a < 0.1) {
discard;
}
vec3 color = rsrpToColor(rsrp);
gl_FragColor = vec4(color, sample.a);
}
```
### Phase 3: Coverage Data → Texture
Convert coverage points array to a WebGL texture for GPU sampling:
```typescript
function createCoverageTexture(
  gl: WebGLRenderingContext,
  points: CoveragePoint[],
  bounds: L.LatLngBounds,
  minRsrp: number,
  maxRsrp: number,
  textureSize: number = 512
): WebGLTexture {
// Create a grid texture from sparse points
const data = new Uint8Array(textureSize * textureSize * 4);
const minLat = bounds.getSouth();
const maxLat = bounds.getNorth();
const minLon = bounds.getWest();
const maxLon = bounds.getEast();
// For each texture pixel, find nearest coverage point and interpolate
for (let y = 0; y < textureSize; y++) {
for (let x = 0; x < textureSize; x++) {
const lat = minLat + (maxLat - minLat) * (y / textureSize);
const lon = minLon + (maxLon - minLon) * (x / textureSize);
// Find nearest points and interpolate (IDW - Inverse Distance Weighting)
const { value, weight } = interpolateIDW(points, lat, lon, 4);
const idx = (y * textureSize + x) * 4;
if (weight > 0) {
// Encode normalized RSRP in R channel, weight in A channel
const normalized = (value - minRsrp) / (maxRsrp - minRsrp);
data[idx] = Math.floor(normalized * 255); // R = RSRP
data[idx + 1] = 0; // G = unused
data[idx + 2] = 0; // B = unused
data[idx + 3] = Math.floor(Math.min(weight, 1) * 255); // A = coverage mask
} else {
data[idx + 3] = 0; // No coverage
}
}
}
const texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, textureSize, textureSize, 0, gl.RGBA, gl.UNSIGNED_BYTE, data);
// Enable bilinear filtering for smooth interpolation
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
return texture!;
}
// Inverse Distance Weighting interpolation
function interpolateIDW(
points: CoveragePoint[],
lat: number,
lon: number,
k: number = 4,
power: number = 2
): { value: number; weight: number } {
// Find k nearest points
const distances = points.map((p, i) => ({
index: i,
dist: Math.sqrt(Math.pow(p.lat - lat, 2) + Math.pow(p.lon - lon, 2))
}));
distances.sort((a, b) => a.dist - b.dist);
const nearest = distances.slice(0, k);
// If very close to a point, use its value directly
if (nearest[0].dist < 0.0001) {
return { value: points[nearest[0].index].rsrp, weight: 1 };
}
// IDW formula: weighted average where weight = 1 / distance^power
let sumWeights = 0;
let sumValues = 0;
for (const n of nearest) {
const w = 1 / Math.pow(n.dist, power);
sumWeights += w;
sumValues += w * points[n.index].rsrp;
}
// Limit interpolation range (don't extrapolate too far from data)
const maxDist = nearest[nearest.length - 1].dist;
const coverage = maxDist < 0.01 ? 1 : Math.max(0, 1 - maxDist * 50);
return {
value: sumValues / sumWeights,
weight: coverage
};
}
```
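The `interpolateIDW` loop above scans every point per texel, which is O(points × texels) on the CPU. If the texture were instead precomputed in the backend, the same IDW can be vectorized with NumPy; a sketch under that assumption (function name illustrative):

```python
import numpy as np

def idw_grid(pt_lat, pt_lon, pt_val, grid_lat, grid_lon, k=4, power=2):
    """Interpolate scattered point values onto a regular grid with IDW.

    pt_lat/pt_lon/pt_val: 1-D arrays of N samples; grid_lat/grid_lon:
    1-D axis arrays. Returns shape (len(grid_lat), len(grid_lon)).
    """
    glat, glon = np.meshgrid(grid_lat, grid_lon, indexing="ij")
    # Distances from every grid cell to every sample: shape (H, W, N)
    d = np.sqrt((glat[..., None] - pt_lat) ** 2 +
                (glon[..., None] - pt_lon) ** 2)
    d = np.maximum(d, 1e-9)  # guard against division by zero
    k = min(k, len(pt_val))
    idx = np.argsort(d, axis=-1)[..., :k]   # k nearest samples per cell
    dk = np.take_along_axis(d, idx, axis=-1)
    vk = np.asarray(pt_val)[idx]
    w = 1.0 / dk ** power
    return (w * vk).sum(axis=-1) / w.sum(axis=-1)
```

For very large point sets a KD-tree (e.g. `scipy.spatial.cKDTree`) would avoid the full distance matrix; this dense version is fine for a few thousand points.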
### Phase 4: Integration with Existing Code
**Modify:** `frontend/src/components/map/MapView.tsx`
Add toggle between old canvas layer and new WebGL layer:
```typescript
import WebGLCoverageLayer from './WebGLCoverageLayer';
// In MapView component:
const [useWebGL, setUseWebGL] = useState(true);
// In render:
{useWebGL ? (
<WebGLCoverageLayer
points={coveragePoints}
opacity={heatmapOpacity}
minRsrp={-130}
maxRsrp={-50}
visible={showCoverage}
/>
) : (
<GeographicHeatmap ... /> // Existing canvas implementation
)}
```
**Add setting:** `frontend/src/components/panels/SettingsPanel.tsx`
```typescript
<div className="flex items-center justify-between">
<span>Smooth Coverage (WebGL)</span>
<Toggle
checked={useWebGL}
onChange={setUseWebGL}
/>
</div>
```
### Phase 5: Performance Optimizations
1. **Texture Caching:** Only regenerate texture when coverage data changes
2. **Resolution Scaling:** Use smaller texture on zoom out, larger on zoom in
3. **Frustum Culling:** Don't render points outside visible bounds
4. **Web Worker:** Move IDW interpolation to background thread
```typescript
// Memoize texture generation
const coverageTexture = useMemo(() => {
if (!gl || points.length === 0) return null;
return createCoverageTexture(gl, points, bounds, textureSize);
}, [points, bounds, textureSize]);
// Dynamic texture size based on zoom
const textureSize = useMemo(() => {
const zoom = map.getZoom();
if (zoom < 10) return 256;
if (zoom < 14) return 512;
return 1024;
}, [map.getZoom()]);
```
---
## Files to Create/Modify
| File | Action | Description |
|------|--------|-------------|
| `frontend/src/components/map/WebGLCoverageLayer.tsx` | CREATE | New WebGL rendering component |
| `frontend/src/components/map/shaders/coverage.vert` | CREATE | Vertex shader (optional, can inline) |
| `frontend/src/components/map/shaders/coverage.frag` | CREATE | Fragment shader (optional, can inline) |
| `frontend/src/components/map/MapView.tsx` | MODIFY | Add WebGL layer toggle |
| `frontend/src/store/settings.ts` | MODIFY | Add useWebGL setting |
| `frontend/src/components/panels/CoverageSettingsPanel.tsx` | MODIFY | Add WebGL toggle UI |
---
## Testing Checklist
### Visual Quality
- [ ] No visible grid squares at any zoom level
- [ ] Smooth color gradients between coverage points
- [ ] Coverage boundary is smooth, not jagged
- [ ] Colors match existing palette (weak = red, strong = cyan/green)
- [ ] Opacity control works correctly
### Performance
- [ ] 60 FPS during map pan/zoom
- [ ] Initial render < 500ms for 6000 points
- [ ] Memory usage reasonable (< 100MB for large coverage)
- [ ] No GPU memory leaks on repeated calculations
### Compatibility
- [ ] Works on systems without dedicated GPU (falls back gracefully)
- [ ] Works in Chrome, Firefox, Edge
- [ ] Works on both high-DPI and standard displays
### Integration
- [ ] Toggle between WebGL and canvas modes works
- [ ] Coverage data updates correctly after recalculation
- [ ] Settings persist across sessions
- [ ] No console errors or warnings
---
## Fallback Strategy
If WebGL fails to initialize:
1. Log warning to console
2. Fall back to existing canvas implementation
3. Show toast notification to user
```typescript
const gl = canvas.getContext('webgl');
if (!gl) {
console.warn('WebGL not available, using canvas fallback');
setUseWebGL(false);
toast.warning('WebGL not supported, using standard rendering');
return;
}
```
---
## Success Criteria
1. **Visual:** Coverage looks like CloudRF/professional tools — smooth gradients, no grid
2. **Performance:** Same or better than current canvas implementation
3. **Reliability:** Graceful fallback if WebGL unavailable
4. **UX:** User can toggle between modes in settings
---
## Additional Notes
### Color Gradient Reference
Current RSRP color mapping (from `colorGradient.ts`):
```
-130 dBm → Maroon (no service)
-110 dBm → Red (very weak)
-100 dBm → Orange (weak)
-85 dBm → Yellow (fair)
-70 dBm → Green (good)
-50 dBm → Cyan (excellent)
```
### Coordinate Systems
- **Geographic:** Latitude/Longitude (EPSG:4326)
- **Screen:** Pixels from top-left
- **WebGL:** Normalized device coordinates (-1 to 1)
- **Texture:** UV coordinates (0 to 1)
All conversions must account for Web Mercator projection distortion.
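As a reference for those conversions, a sketch of the standard WGS84 to Web Mercator projection into normalized [0, 1] tile coordinates; the non-linear latitude term is the distortion that a plain linear `mix` over the bounds ignores at large extents:

```python
import math

def latlon_to_mercator_uv(lat: float, lon: float) -> tuple:
    """Project WGS84 lat/lon to Web Mercator UV in [0, 1] (tile scheme).

    u grows eastward, v grows southward; a north-up texture lookup must
    therefore flip v.
    """
    u = (lon + 180.0) / 360.0
    lat_rad = math.radians(lat)
    v = (1.0 - math.log(math.tan(lat_rad) + 1.0 / math.cos(lat_rad)) / math.pi) / 2.0
    return u, v
```

For coverage areas under ~50 km the error from linear interpolation in latitude is small, but it is worth applying this mapping when generating the texture so it lines up with the Leaflet tiles.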
---
## References
- WebGL Fundamentals: https://webglfundamentals.org/
- Leaflet Custom Layers: https://leafletjs.com/examples/extending/extending-2-layers.html
- IDW Interpolation: https://en.wikipedia.org/wiki/Inverse_distance_weighting
- CloudRF visualization: https://cloudrf.com (for visual reference)
---
## Commit Message
```
feat(coverage): WebGL smooth interpolation rendering
- Add WebGLCoverageLayer with GPU-accelerated rendering
- Implement IDW interpolation for smooth gradients
- Add toggle between WebGL and canvas modes
- Graceful fallback for systems without WebGL support
Closes #coverage-interpolation
```
---
**Ready for Implementation!**
@@ -0,0 +1,439 @@
# RFCP Iteration 3.4.0 — Large Radius Support (20-50km)
## Goal
Enable 50km radius calculations without OOM by implementing memory-efficient processing patterns.
**Current limitation:** > 10-20km radius causes OOM (5+ GB RAM usage)
**Target:** 50km radius with < 4GB RAM peak
---
## Phase 1: Memory-Mapped Terrain
### 1.1 Terrain mmap Loading
Change terrain_service to use memory-mapped files instead of loading full arrays into RAM.
**File:** `backend/app/services/terrain_service.py`
```python
# Before (loads ~25 MB per tile into RAM):
terrain = np.fromfile(f, dtype='>i2').reshape((rows, cols))
# After (near-zero RAM, OS pages from disk):
terrain = np.memmap(f, dtype='>i2', mode='r', shape=(rows, cols))
```
**Expected impact:** 200-400 MB less peak RAM on multi-tile calculation areas
### 1.2 Terrain Disk Cache
- Save downloaded .hgt files to persistent disk cache
- Don't keep raw arrays in memory after initial processing
- Implement LRU eviction if cache exceeds 2GB
- Location: `~/.rfcp/terrain_cache/`
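The LRU eviction mentioned above could look like this sketch (assumes access times are reliable; on filesystems mounted with `noatime`, fall back to `st_mtime`):

```python
import os

def evict_lru(cache_dir: str, max_bytes: int = 2 * 1024**3) -> int:
    """Delete least-recently-accessed .hgt tiles until the cache fits.

    Returns the remaining cache size in bytes.
    """
    tiles = []
    for name in os.listdir(cache_dir):
        if name.endswith(".hgt"):
            path = os.path.join(cache_dir, name)
            st = os.stat(path)
            tiles.append((st.st_atime, st.st_size, path))
    total = sum(size for _, size, _ in tiles)
    for _, size, path in sorted(tiles):  # oldest access time first
        if total <= max_bytes:
            break
        os.remove(path)
        total -= size
    return total
```

Running this once at startup keeps the bookkeeping trivial; no index file is needed because the filesystem already tracks sizes and access times.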
---
## Phase 2: Tile-Based Processing
### 2.1 Split Large Calculations
If radius > 10km, split calculation area into 5km sub-tiles.
**File:** `backend/app/services/coverage_service.py` (or new `tile_processor.py`)
```python
import gc
from math import ceil, cos, radians

def calculate_coverage_tiled(site, radius_m, resolution_m, settings):
"""Tile-based calculation for large radius."""
# Small radius — use existing single-pass
if radius_m <= 10000:
return calculate_coverage_single(site, radius_m, resolution_m, settings)
# Large radius — split into tiles
TILE_SIZE = 5000 # 5km tiles
tiles = generate_tile_grid(site.lat, site.lon, radius_m, TILE_SIZE)
all_results = []
for i, tile in enumerate(tiles):
log(f"Processing tile {i+1}/{len(tiles)}: {tile.bbox}")
# Load data for this tile only
tile_terrain = load_terrain_for_bbox(tile.bbox)
tile_buildings = load_buildings_for_bbox(tile.bbox)
# Calculate coverage for tile
tile_points = generate_grid_for_tile(tile, resolution_m)
tile_results = calculate_points(tile_points, site, settings,
tile_terrain, tile_buildings)
all_results.extend(tile_results)
# Free memory
del tile_terrain, tile_buildings
gc.collect()
# Report progress
progress = (i + 1) / len(tiles) * 100
yield_progress(progress, f"Tile {i+1}/{len(tiles)}")
return merge_and_dedupe_results(all_results)
def generate_tile_grid(center_lat, center_lon, radius_m, tile_size_m):
"""Generate grid of tiles covering the calculation area."""
tiles = []
# Calculate bbox of full area
lat_delta = radius_m / 111000
lon_delta = radius_m / (111000 * cos(radians(center_lat)))
# Generate tile grid
n_tiles = ceil(radius_m * 2 / tile_size_m)
for i in range(n_tiles):
for j in range(n_tiles):
tile_bbox = calculate_tile_bbox(center_lat, center_lon,
i, j, n_tiles, tile_size_m)
# Only include tiles that intersect with coverage circle
if tile_intersects_circle(tile_bbox, center_lat, center_lon, radius_m):
tiles.append(Tile(bbox=tile_bbox, index=(i, j)))
return tiles
```
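`generate_tile_grid` calls `tile_intersects_circle`, which the plan leaves undefined. One possible implementation (an assumption, using the clamp-to-rectangle nearest-point test and the same equirectangular meter conversion as the deltas above):

```python
from math import cos, radians

def tile_intersects_circle(bbox, center_lat, center_lon, radius_m):
    """True if a (min_lat, min_lon, max_lat, max_lon) bbox intersects
    the coverage circle. Uses a local equirectangular approximation."""
    min_lat, min_lon, max_lat, max_lon = bbox
    # Nearest point on the rectangle to the circle center
    nearest_lat = max(min_lat, min(center_lat, max_lat))
    nearest_lon = max(min_lon, min(center_lon, max_lon))
    # Convert degree offsets to meters near the center latitude
    dy = (nearest_lat - center_lat) * 111_000
    dx = (nearest_lon - center_lon) * 111_000 * cos(radians(center_lat))
    return dx * dx + dy * dy <= radius_m * radius_m
```

The clamp trick turns the rectangle-circle test into a single point-in-circle check, so it is cheap enough to run for every candidate tile.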
### 2.2 Progressive Results via WebSocket
Send results per-tile as they complete, so user sees coverage growing.
**File:** `backend/app/api/websocket.py`
```python
async def calculate_coverage_ws(websocket, params):
for tile_results in calculate_coverage_tiled_generator(params):
# Send partial results
await websocket.send_json({
"type": "partial_results",
"points": tile_results.points,
"progress": tile_results.progress,
"tile": tile_results.tile_index,
"status": f"Tile {tile_results.tile_index} complete"
})
# Final message
await websocket.send_json({
"type": "complete",
"total_points": total_points,
"computation_time": elapsed
})
```
---
## Phase 3: SQLite Cache for OSM Data
### 3.1 Create Local Database
Replace in-memory OSM cache with SQLite database with spatial indexing.
**File:** `backend/app/services/cache_db.py` (NEW)
```python
import sqlite3
import json
import os

class OSMCacheDB:
    def __init__(self, db_path="~/.rfcp/osm_cache.db"):
        # sqlite3 does not expand "~", so resolve it explicitly
        path = os.path.expanduser(db_path)
        os.makedirs(os.path.dirname(path), exist_ok=True)
        self.conn = sqlite3.connect(path)
        self._init_tables()
def _init_tables(self):
self.conn.executescript("""
CREATE TABLE IF NOT EXISTS buildings (
id INTEGER PRIMARY KEY,
osm_id TEXT UNIQUE,
lat REAL NOT NULL,
lon REAL NOT NULL,
height REAL DEFAULT 10.0,
geometry TEXT, -- GeoJSON
cell_key TEXT, -- grid cell for batch loading
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
CREATE INDEX IF NOT EXISTS idx_buildings_lat ON buildings(lat);
CREATE INDEX IF NOT EXISTS idx_buildings_lon ON buildings(lon);
CREATE INDEX IF NOT EXISTS idx_buildings_cell ON buildings(cell_key);
CREATE TABLE IF NOT EXISTS vegetation (
id INTEGER PRIMARY KEY,
osm_id TEXT UNIQUE,
lat REAL NOT NULL,
lon REAL NOT NULL,
type TEXT,
geometry TEXT,
cell_key TEXT
);
CREATE INDEX IF NOT EXISTS idx_veg_lat ON vegetation(lat);
CREATE INDEX IF NOT EXISTS idx_veg_lon ON vegetation(lon);
-- Metadata for cache invalidation
CREATE TABLE IF NOT EXISTS cache_meta (
cell_key TEXT PRIMARY KEY,
data_type TEXT,
fetched_at TIMESTAMP,
item_count INTEGER
);
""")
self.conn.commit()
def query_buildings_bbox(self, min_lat, max_lat, min_lon, max_lon, limit=20000):
"""Query buildings within bounding box."""
cursor = self.conn.execute("""
SELECT osm_id, lat, lon, height, geometry
FROM buildings
WHERE lat BETWEEN ? AND ?
AND lon BETWEEN ? AND ?
LIMIT ?
""", (min_lat, max_lat, min_lon, max_lon, limit))
return [self._row_to_building(row) for row in cursor]
def insert_buildings(self, buildings, cell_key):
"""Bulk insert buildings from OSM fetch."""
self.conn.executemany("""
INSERT OR IGNORE INTO buildings
(osm_id, lat, lon, height, geometry, cell_key)
VALUES (?, ?, ?, ?, ?, ?)
""", [
(b['id'], b['lat'], b['lon'], b.get('height', 10),
json.dumps(b.get('geometry')), cell_key)
for b in buildings
])
self.conn.commit()
def is_cell_cached(self, cell_key, data_type, max_age_hours=24):
"""Check if cell data is cached and fresh."""
cursor = self.conn.execute("""
SELECT fetched_at FROM cache_meta
WHERE cell_key = ? AND data_type = ?
AND fetched_at > datetime('now', ?)
""", (cell_key, data_type, f'-{max_age_hours} hours'))
return cursor.fetchone() is not None
```
### 3.2 Update OSM Client
Modify OSM client to use SQLite cache.
**File:** `backend/app/services/osm_client.py`
```python
class OSMClient:
def __init__(self):
self.cache_db = OSMCacheDB()
def get_buildings(self, bbox, max_count=20000):
min_lat, min_lon, max_lat, max_lon = bbox
cell_key = self._bbox_to_cell_key(bbox)
# Check cache first
if self.cache_db.is_cell_cached(cell_key, 'buildings'):
return self.cache_db.query_buildings_bbox(
min_lat, max_lat, min_lon, max_lon, max_count
)
# Fetch from Overpass API
buildings = self._fetch_from_overpass(bbox, 'buildings')
# Store in cache
self.cache_db.insert_buildings(buildings, cell_key)
return buildings[:max_count]
```
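`_bbox_to_cell_key` is referenced but not defined in the plan. A sketch of one possible scheme, snapping the bbox origin to a fixed grid so nearby requests share a cache cell (the cell size is an assumption):

```python
def bbox_to_cell_key(bbox: tuple, cell_deg: float = 0.05) -> str:
    """Snap a bbox to a fixed grid and return a stable cache key.

    0.05 deg of latitude is about 5.5 km, roughly matching the 5 km
    processing tiles, so one Overpass fetch per cell is reused by all
    tiles that fall inside it.
    """
    min_lat, min_lon, _max_lat, _max_lon = bbox
    i = int(min_lat // cell_deg)
    j = int(min_lon // cell_deg)
    return f"cell_{i}_{j}"
```

Keying on the snapped south-west corner (rather than the raw bbox floats) is what makes cache hits possible across slightly different request bounds.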
---
## Phase 4: Worker Memory Optimization
### 4.1 Per-Tile Building Loading
Workers receive only tile bbox and query buildings themselves (or receive pre-filtered list).
```python
def _pool_worker_tiled(args):
"""Worker that loads buildings for its tile only."""
tile_bbox, terrain_shm_refs, config = args
# Load only buildings for this tile
cache_db = OSMCacheDB()
min_lat, min_lon, max_lat, max_lon = tile_bbox
buildings = cache_db.query_buildings_bbox(min_lat, max_lat, min_lon, max_lon, limit=5000)
# Much smaller memory footprint per worker
# ...rest of calculation
```
### 4.2 Adaptive Worker Count
Reduce workers for large radius to prevent combined memory explosion.
```python
def get_worker_count_for_radius(radius_m, base_workers):
"""Scale down workers for large calculations."""
if radius_m > 30000:
return min(base_workers, 2)
elif radius_m > 20000:
return min(base_workers, 3)
elif radius_m > 10000:
return min(base_workers, 4)
return base_workers
```
---
## Phase 5: Frontend Progressive Rendering
### 5.1 Accumulate Partial Results
**File:** `frontend/src/store/coverage.ts`
```typescript
interface CoverageState {
points: CoveragePoint[];
isCalculating: boolean;
progress: number;
// NEW:
partialResults: CoveragePoint[];
tilesCompleted: number;
totalTiles: number;
}
// Handle partial results
case 'partial_results':
set(state => ({
partialResults: [...state.partialResults, ...message.points],
progress: message.progress,
tilesCompleted: state.tilesCompleted + 1
}));
break;
case 'complete':
set(state => ({
points: state.partialResults, // Finalize
partialResults: [],
isCalculating: false
}));
break;
```
### 5.2 Incremental Heatmap Render
**File:** `frontend/src/components/map/CoverageHeatmap.tsx`
```typescript
function CoverageHeatmap() {
const { points, partialResults, isCalculating } = useCoverageStore();
// Show partial results while calculating
const displayPoints = isCalculating ? partialResults : points;
// Throttle re-renders during streaming (re-render at most every 500 ms)
const throttledPoints = useThrottle(displayPoints, 500);
return <HeatmapLayer points={throttledPoints} />;
}
```
---
## Implementation Order
### Priority 1 — Biggest Impact
1. **Tile-based processing** (Phase 2.1) — enables large radius
2. **SQLite cache** (Phase 3) — reduces memory, speeds up repeated calcs
### Priority 2 — Memory Reduction
3. **Terrain mmap** (Phase 1.1) — easy win, minimal code change
4. **Per-tile building loading** (Phase 4.1)
### Priority 3 — UX Improvement
5. **Progressive WebSocket** (Phase 2.2)
6. **Frontend streaming** (Phase 5)
### Priority 4 — Polish
7. **Terrain disk cache** (Phase 1.2)
8. **Adaptive worker count** (Phase 4.2)
---
## Success Criteria
| Radius | Max Time | Max RAM |
|--------|----------|---------|
| 20 km | < 3 min | < 3 GB |
| 30 km | < 5 min | < 3.5 GB |
| 50 km | < 10 min | < 4 GB |
- No OOM crashes at any radius up to 50km
- Progressive results visible within 30s of starting
- Cache reuse speeds up repeated calculations 5-10x
---
## Files to Modify
### Backend (Python)
| File | Changes |
|------|---------|
| `terrain_service.py` | mmap loading, disk cache |
| `coverage_service.py` | tile-based routing |
| `parallel_coverage_service.py` | adaptive workers |
| `osm_client.py` | SQLite integration |
| `websocket.py` | streaming results |
| **NEW** `tile_processor.py` | tile generation & processing |
| **NEW** `cache_db.py` | SQLite cache layer |
### Frontend (TypeScript)
| File | Changes |
|------|---------|
| `store/coverage.ts` | partial results handling |
| `CoverageHeatmap.tsx` | incremental rendering |
| `App.tsx` | progress for tiled calc |
---
## Testing
```bash
# Test 20km radius
curl -X POST http://localhost:8888/api/coverage/calculate \
-H "Content-Type: application/json" \
-d '{"radius": 20000, "resolution": 500, "preset": "standard"}'
# Monitor memory
watch -n 1 'ps aux | grep rfcp-server | awk "{print \$6/1024\" MB\"}"'
# Test 50km radius
curl -X POST http://localhost:8888/api/coverage/calculate \
-H "Content-Type: application/json" \
-d '{"radius": 50000, "resolution": 1000, "preset": "standard"}'
```
---
## Notes
- Tile size 5km is a balance — smaller = more overhead, larger = more memory
- SQLite R-tree extension would be faster but requires compilation
- For Rust version, all of this will be native and faster
---
*"Think in tiles, stream results, cache everything"* 🗺️

File diff suppressed because it is too large
@@ -0,0 +1,557 @@
# RFCP Iteration 3.5.1 — Bugfixes & Polish
## Overview
Focused bugfix and polish release addressing UI issues, coverage boundary accuracy, history improvements, and GPU indicator fixes discovered during 3.5.0 testing.
---
## 1. GPU — Detection Not Working + UI Overlap
### 1A. GPU Not Detected Despite Being Available
**Problem:** User has a laptop with DUAL GPUs (Intel integrated + NVIDIA discrete) but the app only shows "CPU (NumPy)". GPU acceleration is not working at all — no GPU option available in the device selector.
**Root cause investigation needed:**
1. Check if CuPy is actually installed in the Python environment
2. Check if CUDA toolkit is accessible from the app's runtime
3. Check if PyOpenCL is installed (fallback for Intel GPU)
4. The backend GPU detection may be failing silently
**Debug steps to add:**
```python
# backend/app/services/gpu_backend.py — improve detection with logging
import logging
logger = logging.getLogger(__name__)
@classmethod
def detect_backends(cls) -> list:
backends = []
# Check NVIDIA CUDA
try:
import cupy as cp
count = cp.cuda.runtime.getDeviceCount()
logger.info(f"CUDA detected: {count} device(s)")
for i in range(count):
device = cp.cuda.Device(i)
backends.append({...})
except ImportError:
logger.warning("CuPy not installed — run: pip install cupy-cuda12x")
except Exception as e:
logger.warning(f"CUDA detection failed: {e}")
# Check OpenCL (works with Intel, AMD, AND NVIDIA)
try:
import pyopencl as cl
platforms = cl.get_platforms()
logger.info(f"OpenCL detected: {len(platforms)} platform(s)")
for platform in platforms:
for device in platform.get_devices():
logger.info(f" OpenCL device: {device.name}")
backends.append({...})
except ImportError:
logger.warning("PyOpenCL not installed — run: pip install pyopencl")
except Exception as e:
logger.warning(f"OpenCL detection failed: {e}")
# Always log what was found
logger.info(f"Total compute backends: {len(backends)} "
f"({sum(1 for b in backends if b['type'] == 'cuda')} CUDA, "
f"{sum(1 for b in backends if b['type'] == 'opencl')} OpenCL)")
# CPU always available
backends.append({...cpu...})
return backends
```
**Installation check endpoint:**
```python
# backend/app/api/routes/gpu.py — add diagnostic endpoint
@router.get("/diagnostics")
async def gpu_diagnostics():
"""Full GPU diagnostic info for troubleshooting."""
diag = {
"python_version": sys.version,
"platform": platform.platform(),
"cuda": {},
"opencl": {},
"numpy": {}
}
# Check CuPy/CUDA
try:
import cupy
diag["cuda"]["cupy_version"] = cupy.__version__
diag["cuda"]["cuda_version"] = cupy.cuda.runtime.runtimeGetVersion()
diag["cuda"]["device_count"] = cupy.cuda.runtime.getDeviceCount()
for i in range(diag["cuda"]["device_count"]):
d = cupy.cuda.Device(i)
diag["cuda"][f"device_{i}"] = {
"name": d.name,
"compute_capability": d.compute_capability,
"total_memory_mb": d.mem_info[1] // 1024 // 1024
}
except ImportError:
diag["cuda"]["error"] = "CuPy not installed"
diag["cuda"]["install_hint"] = "pip install cupy-cuda12x --break-system-packages"
except Exception as e:
diag["cuda"]["error"] = str(e)
# Check PyOpenCL
try:
import pyopencl as cl
diag["opencl"]["pyopencl_version"] = cl.VERSION_TEXT
for p in cl.get_platforms():
platform_info = {"name": p.name, "devices": []}
for d in p.get_devices():
platform_info["devices"].append({
"name": d.name,
"type": cl.device_type.to_string(d.type),
"memory_mb": d.global_mem_size // 1024 // 1024,
"compute_units": d.max_compute_units
})
diag["opencl"][p.name] = platform_info
except ImportError:
diag["opencl"]["error"] = "PyOpenCL not installed"
diag["opencl"]["install_hint"] = "pip install pyopencl"
except Exception as e:
diag["opencl"]["error"] = str(e)
# Check NumPy
import numpy as np
diag["numpy"]["version"] = np.__version__
return diag
```
**Frontend — show diagnostic info:**
```typescript
// In GPUIndicator.tsx — when only CPU detected, show help
{devices.length === 1 && devices[0].type === 'cpu' && (
<div className="text-xs text-yellow-400 mt-2 p-2 bg-yellow-900/20 rounded">
No GPU detected.
<button
onClick={() => fetchDiagnostics()}
className="underline ml-1"
>
Run diagnostics
</button>
</div>
)}
```
**Auto-install hint in UI:**
```
⚠ No GPU detected
For NVIDIA GPU: pip install cupy-cuda12x
For Intel/AMD GPU: pip install pyopencl
[Run Diagnostics] [Install Guide]
```
**Dual GPU handling (Intel + NVIDIA laptop):**
```python
# When both Intel (OpenCL) and NVIDIA (CUDA) found:
# - List both in device selector
# - Default to NVIDIA CUDA (faster)
# - Allow user to switch
# - Intel iGPU via OpenCL is still ~3-5x faster than CPU
# Example device list for dual GPU laptop:
# 1. ⚡ NVIDIA GeForce RTX 4060 (CUDA) — 8 GB [DEFAULT]
# 2. ⚡ Intel UHD Graphics 770 (OpenCL) — shared memory
# 3. 💻 CPU (16 cores)
```
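
The default-selection rule above can be sketched as a simple priority sort. This assumes device dicts with a `type` field as in the `detect_backends()` sketch; the function name is an assumption:

```python
def pick_default_device(devices: list) -> dict:
    """Prefer CUDA, then OpenCL, then CPU when choosing the default compute device."""
    priority = {'cuda': 0, 'opencl': 1, 'cpu': 2}
    # Unknown backend types sort last
    return min(devices, key=lambda d: priority.get(d['type'], 99))
```

On the dual-GPU laptop above, this picks the RTX 4060 (CUDA) while still listing the Intel iGPU and CPU as selectable alternatives.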
### 1B. GPU Indicator UI — Fix Overlap with Fit Button
**Problem:** GPU device dropdown overlaps with the "Fit" button in top-right corner.
**Solution:**
- Keep compact "⚡ CPU" badge in header
- Dropdown opens to the LEFT or DOWNWARD, not overlapping map controls
- Proper z-index and positioning
- Shorter labels: "CPU" not "CPU (NumPy)"
**Files:**
- `frontend/src/components/ui/GPUIndicator.tsx`
- `backend/app/services/gpu_backend.py`
- `backend/app/api/routes/gpu.py`
---
## 2. Coverage Boundary — Improve Accuracy
**Problem:** Current boundary shows a rough circle/ellipse shape that doesn't follow actual coverage contour.
**Current behavior:** Boundary seems to be based on simple distance radius rather than actual RSRP threshold contour.
**Expected behavior:** Boundary should follow the actual -100 dBm (or configured threshold) contour line — an irregular shape that follows terrain, buildings, vegetation shadows.
**Solution:**
```python
# Backend approach: generate contour from the actual RSRP grid
from shapely import concave_hull  # Shapely 2.0+ (requires GEOS 3.11)
from shapely.geometry import MultiPoint

def calculate_coverage_boundary(points: list, threshold_dbm: float = -100) -> list:
    """
    Calculate the coverage boundary as a concave hull of points above threshold.
    Returns a list of [lat, lon] coordinates forming the boundary polygon.
    """
    # Filter points above threshold
    valid_points = [(p['lat'], p['lon']) for p in points if p['rsrp'] >= threshold_dbm]
    if len(valid_points) < 3:
        return []
    # Concave hull (alpha shape) follows the actual coverage shape
    # better than a convex hull
    multi_point = MultiPoint(valid_points)
    # Lower ratio = more detailed boundary (but slower)
    boundary = concave_hull(multi_point, ratio=0.3)
    if boundary.is_empty:
        return []
    # Simplify to reduce points (tolerance 0.001 deg ≈ 100 m)
    simplified = boundary.simplify(0.001)
    # Return as coordinate list
    coords = list(simplified.exterior.coords)
    return [[lat, lon] for lat, lon in coords]
```
```python
# Alternative: Grid-based contour approach
def calculate_boundary_from_grid(
grid_points: list,
threshold_dbm: float,
grid_resolution_m: float
) -> list:
"""
Create boundary by finding edge cells of coverage area.
More accurate than hull — follows actual coverage gaps.
"""
    import numpy as np
    from scipy.ndimage import binary_dilation
# Build 2D RSRP grid
lats = sorted(set(p['lat'] for p in grid_points))
lons = sorted(set(p['lon'] for p in grid_points))
grid = np.full((len(lats), len(lons)), np.nan)
lat_idx = {lat: i for i, lat in enumerate(lats)}
lon_idx = {lon: i for i, lon in enumerate(lons)}
for p in grid_points:
i = lat_idx[p['lat']]
j = lon_idx[p['lon']]
grid[i, j] = p['rsrp']
# Binary mask: above threshold
mask = grid >= threshold_dbm
# Find boundary: dilate - original = edge cells
dilated = binary_dilation(mask)
boundary_mask = dilated & ~mask
# Extract boundary coordinates
boundary_coords = []
for i in range(len(lats)):
for j in range(len(lons)):
if boundary_mask[i, j]:
boundary_coords.append([lats[i], lons[j]])
    # Order points into a polygon ring (greedy nearest-neighbour approximation)
if len(boundary_coords) > 2:
ordered = order_boundary_points(boundary_coords)
return ordered
return boundary_coords
```
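
`order_boundary_points` is referenced above but not shown. A minimal greedy nearest-neighbour sketch — adequate for roughly convex boundaries, and an assumption rather than the final implementation:

```python
def order_boundary_points(coords: list) -> list:
    """Order scattered boundary cells into a ring by walking to the nearest unvisited point."""
    if len(coords) < 3:
        return coords
    ordered = [coords[0]]
    remaining = coords[1:]
    while remaining:
        last = ordered[-1]
        # Pick the unvisited point closest to the current one (squared distance)
        nearest = min(
            remaining,
            key=lambda c: (c[0] - last[0]) ** 2 + (c[1] - last[1]) ** 2,
        )
        remaining.remove(nearest)
        ordered.append(nearest)
    return ordered
```

For highly concave coverage shapes this can produce self-intersections; a contour-tracing algorithm (e.g. marching squares) would be more robust.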
**Frontend changes:**
- Receive boundary polygon from backend (already calculated with results)
- Or calculate client-side from grid points
- Render as Leaflet polygon with dashed white stroke
- Should follow actual coverage shape, not circular approximation
**Files:**
- `backend/app/services/coverage_service.py` — add boundary calculation
- `frontend/src/components/map/CoverageBoundary.tsx` — render real contour
---
## 3. Session History — Show Propagation Parameters
**Problem:** History entries only show preset, points, radius, resolution. Missing propagation settings used.
**Solution:** Save full propagation config snapshot with each history entry.
```typescript
// frontend/src/store/calcHistory.ts
interface HistoryEntry {
id: string;
timestamp: Date;
computationTime: number;
preset: string;
radius: number;
resolution: number;
totalPoints: number;
// Coverage results
coverage: {
excellent: number; // percentage
good: number;
fair: number;
weak: number;
};
avgRsrp: number;
rangeMin: number;
rangeMax: number;
// NEW: Propagation parameters snapshot
propagation: {
modelsUsed: string[]; // ["Free-Space", "terrain_los", ...]
modelCount: number; // 12
frequency: number; // 2100 MHz
txPower: number; // 46 dBm
antennaGain: number; // 15 dBi
antennaHeight: number; // 10 m
// Environment
season: string; // "Winter (30%)"
temperature: string; // "15°C (mild)"
humidity: string; // "50% (normal)"
rainConditions: string; // "Light Rain"
indoorCoverage: string; // "Medium Building (brick)"
// Margins
fadingMargin: number; // 0 dB
// Atmospheric
atmosphericAbsorption: boolean;
};
// Site config
sites: number; // 2
sectors: number; // total sectors
}
```
**Display in History panel:**
```typescript
// Expanded history entry shows propagation details
<div className="history-entry-expanded">
{/* Existing: time, points, coverage bars */}
{/* NEW: Propagation summary (collapsed by default) */}
<details className="mt-2">
<summary className="text-xs text-gray-400 cursor-pointer hover:text-gray-300">
Propagation: {entry.propagation.modelCount} models, {entry.propagation.frequency} MHz
</summary>
<div className="mt-1 pl-3 text-xs text-gray-500 space-y-0.5">
<div>TX: {entry.propagation.txPower} dBm, Gain: {entry.propagation.antennaGain} dBi</div>
<div>Height: {entry.propagation.antennaHeight}m</div>
<div>Environment: {entry.propagation.season}, {entry.propagation.rainConditions}</div>
<div>Indoor: {entry.propagation.indoorCoverage}</div>
{entry.propagation.fadingMargin > 0 && (
<div>Fading margin: {entry.propagation.fadingMargin} dB</div>
)}
<div className="flex flex-wrap gap-1 mt-1">
{entry.propagation.modelsUsed.map(model => (
<span key={model} className="px-1 py-0.5 bg-slate-700 rounded text-[10px]">
{model}
</span>
))}
</div>
</div>
</details>
</div>
```
**Files:**
- `frontend/src/store/calcHistory.ts` — extend HistoryEntry type, save propagation
- `frontend/src/components/panels/HistoryPanel.tsx` — show expandable propagation details
- `backend/app/api/websocket.py` — include propagation config in result message
- `backend/app/services/coverage_service.py` — return config snapshot with results
---
## 4. Results Popup — Show Propagation Summary
**Problem:** Calculation Complete popup shows time, points, coverage bars — but not which models were used.
**Solution:** Add compact propagation info to results popup.
```typescript
// frontend/src/components/ui/ResultsPopup.tsx
// Add below coverage bars:
<div className="mt-2 text-xs text-gray-400">
<span>{result.modelsUsed?.length || 0} models</span>
  <span className="mx-1">•</span>
<span>{result.frequency} MHz</span>
{result.fadingMargin > 0 && (
<>
      <span className="mx-1">•</span>
<span>FM: {result.fadingMargin} dB</span>
</>
)}
{result.indoorCoverage && result.indoorCoverage !== 'none' && (
<>
      <span className="mx-1">•</span>
<span>Indoor: {result.indoorCoverage}</span>
</>
)}
</div>
```
**Files:**
- `frontend/src/components/ui/ResultsPopup.tsx`
---
## 5. Batch Frequency Change (from 3.5.0 backlog)
**Problem:** To compare coverage at different frequencies, user must edit each sector manually.
**Solution:** Quick-change buttons in toolbar or Coverage Settings.
```typescript
// frontend/src/components/panels/BatchOperations.tsx (NEW)
const QUICK_BANDS = [
{ freq: 700, label: '700', band: 'B28', color: 'text-red-400' },
{ freq: 800, label: '800', band: 'B20', color: 'text-orange-400' },
{ freq: 900, label: '900', band: 'B8', color: 'text-yellow-400' },
{ freq: 1800, label: '1800', band: 'B3', color: 'text-green-400' },
{ freq: 2100, label: '2100', band: 'B1', color: 'text-blue-400' },
{ freq: 2600, label: '2600', band: 'B7', color: 'text-purple-400' },
{ freq: 3500, label: '3500', band: 'n78', color: 'text-pink-400' },
];
export function BatchFrequencyChange() {
return (
<div className="p-3 border-t border-slate-700">
<h4 className="text-xs font-semibold text-gray-400 mb-2">
SET ALL SECTORS
</h4>
<div className="flex flex-wrap gap-1">
{QUICK_BANDS.map(b => (
<button
key={b.freq}
onClick={() => setAllSectorsFrequency(b.freq)}
className="px-2 py-1 text-xs bg-slate-700 hover:bg-slate-600 rounded"
title={`${b.band}${b.freq} MHz`}
>
<span className={b.color}>{b.label}</span>
</button>
))}
</div>
</div>
);
}
```
**Location:** Below site list, above Coverage Settings.
**Files:**
- `frontend/src/components/panels/BatchOperations.tsx` (NEW)
- `frontend/src/store/sites.ts` — add `setAllSectorsFrequency()` action
---
## 6. Minor UI Fixes
### 6.1 Terrain Profile — Click Propagation (verify fix)
- Verify that clicking "Terrain Profile" button no longer adds ruler point
- If still broken: ensure e.stopPropagation() AND e.preventDefault() on button
### 6.2 GPU Indicator — Shorter Label
- Current: "CPU (NumPy)" — too long
- Should be: "CPU" or "⚡ CPU"
- When GPU active: "⚡ RTX 4060" (short device name)
### 6.3 ~~Coordinate Display — Show Elevation~~ ✅ WORKS
- Elevation loads on hover with delay — NOT a bug
- Shows "Elev: 380m ASL" after holding cursor on map
- No fix needed
---
## Implementation Order
### Priority 1 — Quick Fixes (30 min)
- [ ] GPU indicator positioning (no overlap with Fit)
- [ ] GPU detection — install CuPy/PyOpenCL, diagnostics endpoint
- [ ] Terrain Profile click fix (verify)
### Priority 2 — History Enhancement (1 hour)
- [ ] Extend HistoryEntry with propagation params
- [ ] Save propagation snapshot on calculation complete
- [ ] Expandable propagation details in History panel
- [ ] Results popup — show model count + frequency
### Priority 3 — Coverage Boundary (1-2 hours)
- [ ] Implement contour-based boundary from actual RSRP grid
- [ ] Replace circular approximation with real coverage shape
- [ ] Test with multi-site calculations
- [ ] Smooth boundary line (simplify polygon)
### Priority 4 — Batch Frequency (30 min)
- [ ] BatchOperations component
- [ ] setAllSectorsFrequency store action
- [ ] Wire into sidebar panel
---
## Success Criteria
- [ ] GPU indicator does not overlap with any map controls
- [ ] Coverage boundary follows actual coverage shape (not circular)
- [ ] History entries show expandable propagation parameters
- [ ] Results popup shows model count and frequency
- [ ] Batch frequency change updates all sectors at once
- [ ] Terrain Profile button click doesn't add ruler point
- [ ] Elevation displays correctly in bottom-left
---
## Files Summary
### New Files
- `frontend/src/components/panels/BatchOperations.tsx`
### Modified Files
- `frontend/src/components/ui/GPUIndicator.tsx` — fix position/overlap
- `frontend/src/components/map/CoverageBoundary.tsx` — real contour
- `frontend/src/components/ui/ResultsPopup.tsx` — propagation info
- `frontend/src/store/calcHistory.ts` — extended HistoryEntry
- `frontend/src/components/panels/HistoryPanel.tsx` — expandable details
- `frontend/src/store/sites.ts` — batch frequency action
- `backend/app/services/coverage_service.py` — boundary calculation, config snapshot
- `backend/app/api/websocket.py` — include config in results
---
*"Polish makes the difference between a tool and a product"*

# RFCP — Iteration 3.5.2: Native Backend + GPU Fix + UI Polish
## Overview
Fix critical architecture issues: a broken GPU indicator dropdown, GPU acceleration not working
(CuPy installed in the wrong Python environment), and the groundwork for removing the WSL2
dependency for end users. Plus UI polish items carried over from 3.5.1.
**Priority:** GPU fixes first, then UI polish, then native Windows exploration.
---
## CRITICAL CONTEXT
### Current Architecture Problem
```
RFCP.exe (Electron, Windows)
└── launches backend via WSL2:
python3 -m uvicorn app.main:app --host 0.0.0.0 --port 8090
└── /usr/bin/python3 (WSL2 system Python 3.12)
└── NO venv, NO CuPy installed
User installed CuPy in Windows Python → backend doesn't see it.
User installed CuPy in WSL system Python → needs --break-system-packages
```
### GPU Hardware (Confirmed Working)
```
nvidia-smi output (from WSL2):
NVIDIA GeForce RTX 4060 Laptop GPU
Driver: 581.42 (Windows) / 580.95.02 (WSL2)
CUDA: 13.0
VRAM: 8188 MiB
GPU passthrough: WORKING ✅
```
### Files to Reference
```
backend/app/services/gpu_backend.py — GPUManager class
backend/app/api/routes/gpu.py — GPU API endpoints
frontend/src/components/ui/GPUIndicator.tsx — GPU badge/dropdown
desktop/ — Electron app source
installer/ — Build scripts
```
---
## Task 1: Fix GPU Indicator Dropdown Z-Index (Priority 1 — 10 min)
### Problem
GPU dropdown WORKS (opens on click, shows diagnostics, install hints) but renders
BEHIND the right sidebar panel. The sidebar (Sites, Coverage Settings) has higher
z-index than the GPU dropdown, so the dropdown is invisible/hidden underneath.
See screenshots: dropdown is partially visible only when sidebar is made very narrow.
It shows: "COMPUTE DEVICES", "CPU (NumPy)", install hints, "Run Diagnostics",
and even diagnostics JSON — all working but hidden behind sidebar.
### Root Cause
GPUIndicator dropdown z-index is lower than the right sidebar panel z-index.
### Solution
In `GPUIndicator.tsx` — find the dropdown container div and set z-index
higher than the sidebar:
```tsx
{isOpen && (
<div
className="absolute top-full mt-1 bg-dark-surface border border-dark-border
rounded-lg shadow-2xl p-3 min-w-[300px]"
style={{ zIndex: 9999 }} // MUST be above sidebar (which is ~z-50 or z-auto)
>
...
</div>
)}
```
**Key requirements:**
1. `z-index: 9999` (or at minimum higher than sidebar)
2. Position: dropdown should open to the LEFT (toward center of screen)
to avoid being cut off by right edge
3. `right-0` on the absolute positioning (anchored to right edge of badge)
**Alternative approach** — use Tailwind z-index:
```tsx
className="absolute top-full right-0 mt-1 z-[9999] ..."
```
**Also check:** The parent container of GPUIndicator might need `position: relative`
for absolute positioning to work correctly against the right sidebar.
### Testing
- [ ] Click "CPU" badge → dropdown appears ABOVE the sidebar
- [ ] Full dropdown visible: devices, install hints, diagnostics
- [ ] Dropdown doesn't get cut off on right side
- [ ] Click outside → dropdown closes
- [ ] Dropdown works at any window width
---
## Task 2: Install CuPy in WSL Backend (Priority 1 — 10 min)
### Problem
CuPy installed in Windows Python, but backend runs in WSL2 system Python.
### Solution
Add a startup check in the backend that detects missing GPU packages
and provides clear instructions. Also, the Electron app should try to
install dependencies on first launch.
**Step 1: Backend startup GPU check**
In `backend/app/main.py`, add on startup:
```python
@app.on_event("startup")
async def check_gpu_availability():
"""Log GPU status on startup for debugging."""
import logging
logger = logging.getLogger("rfcp.gpu")
# Check CuPy
try:
import cupy as cp
device_count = cp.cuda.runtime.getDeviceCount()
if device_count > 0:
name = cp.cuda.Device(0).name
mem = cp.cuda.Device(0).mem_info[1] // 1024 // 1024
logger.info(f"✅ GPU detected: {name} ({mem} MB VRAM)")
logger.info(f" CuPy {cp.__version__}, CUDA devices: {device_count}")
else:
logger.warning("⚠️ CuPy installed but no CUDA devices found")
except ImportError:
logger.warning("⚠️ CuPy not installed — GPU acceleration disabled")
logger.warning(" Install: pip install cupy-cuda12x --break-system-packages")
except Exception as e:
logger.warning(f"⚠️ CuPy error: {e}")
# Check PyOpenCL
try:
import pyopencl as cl
platforms = cl.get_platforms()
for p in platforms:
for d in p.get_devices():
logger.info(f"✅ OpenCL device: {d.name.strip()}")
except ImportError:
logger.info(" PyOpenCL not installed (optional)")
except Exception:
pass
```
**Step 2: GPU diagnostics endpoint enhancement**
Enhance `/api/gpu/diagnostics` to return install commands:
```python
@router.get("/diagnostics")
async def gpu_diagnostics():
import platform, sys
diagnostics = {
"python": sys.version,
"platform": platform.platform(),
"executable": sys.executable,
"is_wsl": "microsoft" in platform.release().lower(),
"cuda_available": False,
"opencl_available": False,
"install_hint": "",
"devices": []
}
# Check nvidia-smi
try:
import subprocess
result = subprocess.run(
["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
capture_output=True, text=True, timeout=5
)
if result.returncode == 0:
diagnostics["nvidia_smi"] = result.stdout.strip()
    except Exception:
diagnostics["nvidia_smi"] = "not found"
# Check CuPy
try:
import cupy
diagnostics["cupy_version"] = cupy.__version__
diagnostics["cuda_available"] = True
count = cupy.cuda.runtime.getDeviceCount()
for i in range(count):
d = cupy.cuda.Device(i)
diagnostics["devices"].append({
"id": i,
"name": d.name,
"memory_mb": d.mem_info[1] // 1024 // 1024,
"backend": "CUDA"
})
except ImportError:
if diagnostics.get("is_wsl"):
diagnostics["install_hint"] = "pip3 install cupy-cuda12x --break-system-packages"
else:
diagnostics["install_hint"] = "pip install cupy-cuda12x"
return diagnostics
```
**Step 3: Frontend shows diagnostics clearly**
In GPUIndicator dropdown, show:
```
⚠ No GPU detected
Your system: WSL2 + NVIDIA RTX 4060
To enable GPU acceleration:
┌─────────────────────────────────────────────┐
│ pip3 install cupy-cuda12x │
│ --break-system-packages │
└─────────────────────────────────────────────┘
Then restart RFCP.
[Copy Command] [Run Diagnostics]
```
### Testing
- [ ] Backend startup logs GPU status
- [ ] /api/gpu/diagnostics returns WSL detection + install hint
- [ ] Frontend shows clear install instructions
- [ ] After installing CuPy in WSL + restart → GPU appears in list
---
## Task 3: Terrain Profile Click Fix (Priority 2 — 5 min)
### Problem
Clicking "Terrain Profile" button in ruler measurement also adds a point on the map.
### Solution
In the Terrain Profile button handler:
```tsx
const handleTerrainProfile = (e: React.MouseEvent) => {
e.stopPropagation();
e.preventDefault();
// ... open terrain profile
};
```
Also check if the button is rendered inside a map click handler area —
may need `L.DomEvent.disableClickPropagation(container)` on the parent.
### Testing
- [ ] Click "Terrain Profile" → opens profile, NO new ruler point added
- [ ] Map click still works normally when not clicking the button
---
## Task 4: Coverage Boundary — Real Contour Shape (Priority 2 — 45 min)
### Problem
Current boundary is a rough circle/ellipse. Should follow actual coverage contour.
### Approaches
**Option A: Shapely Alpha Shape (recommended)**
```python
# backend/app/services/boundary_service.py
from shapely.geometry import MultiPoint
from shapely.ops import unary_union
import numpy as np
def calculate_coverage_boundary(points: list, threshold_dbm: float = -100) -> list:
"""Calculate concave hull of coverage area above threshold."""
# Filter points above threshold
valid = [(p['lon'], p['lat']) for p in points if p['rsrp'] >= threshold_dbm]
if len(valid) < 3:
return []
mp = MultiPoint(valid)
# Use convex hull first, then try concave
try:
# Shapely 2.0+ has concave_hull
from shapely import concave_hull
hull = concave_hull(mp, ratio=0.3)
except ImportError:
# Fallback to convex hull
hull = mp.convex_hull
# Simplify to reduce points (0.001 deg ≈ 100m)
simplified = hull.simplify(0.001, preserve_topology=True)
# Extract coordinates
if simplified.geom_type == 'Polygon':
coords = list(simplified.exterior.coords)
return [{'lat': c[1], 'lon': c[0]} for c in coords]
return []
```
**Option B: Grid-based contour (simpler)**
```python
def grid_contour_boundary(points: list, threshold_dbm: float, resolution: float):
"""Find boundary by detecting edge cells in grid."""
# Create binary grid: 1 = above threshold, 0 = below
# Find cells where 1 is adjacent to 0 = boundary
# Convert cell coords back to lat/lon
# Return ordered boundary points
```
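
Option B's edge detection needs no scipy — array shifts are enough. A hedged sketch using 4-neighbour connectivity (the function name is an assumption):

```python
import numpy as np


def edge_cells(mask: np.ndarray) -> list:
    """Return (row, col) indices of covered cells that touch an uncovered cell."""
    # Pad with False so cells on the array border count as boundary
    padded = np.pad(mask, 1, constant_values=False)
    up = padded[:-2, 1:-1]
    down = padded[2:, 1:-1]
    left = padded[1:-1, :-2]
    right = padded[1:-1, 2:]
    # Edge = inside coverage, but at least one 4-neighbour is outside
    edge = mask & ~(up & down & left & right)
    return list(zip(*np.nonzero(edge)))
```

The resulting cell indices still need ordering into a polygon ring before they can be rendered as a Leaflet boundary.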
### API Endpoint
```python
# Add to coverage calculation response
@router.post("/coverage/calculate")
async def calculate_coverage(...):
result = coverage_service.calculate(...)
# Calculate boundary
if result.points:
boundary = calculate_coverage_boundary(
result.points,
threshold_dbm=settings.min_signal
)
result.boundary = boundary
return result
```
### Frontend
```tsx
// CoverageBoundary.tsx — use returned boundary coords
// Instead of calculating alpha shape on frontend
const CoverageBoundary = ({ points, boundary }) => {
// If server returned boundary, use it
if (boundary && boundary.length > 0) {
return <Polygon positions={boundary.map(p => [p.lat, p.lon])} />;
}
// Fallback to current convex hull implementation
return <CurrentImplementation points={points} />;
};
```
### Dependencies
Need `shapely` installed:
```
pip install shapely # or pip3 install shapely --break-system-packages
```
Check if already in requirements.txt.
### Testing
- [ ] 5km calculation → boundary follows actual coverage shape
- [ ] 10km calculation → boundary is irregular (terrain-dependent)
- [ ] Toggle boundary on/off works
- [ ] Boundary doesn't crash with < 3 points
---
## Task 5: Results Popup Enhancement (Priority 3 — 15 min)
### Problem
Calculation complete toast/popup doesn't show which models were used.
### Solution
Enhance the toast message after calculation:
```tsx
// Current:
toast.success(`Calculated ${points} points in ${time}s`);
// Enhanced:
const modelCount = result.modelsUsed?.length ?? 0;
const freq = sites[0]?.frequency ?? 0;
const presetName = settings.preset ?? 'custom';
toast.success(
  `${points} pts • ${time}s • ${presetName} • ${freq} MHz • ${modelCount} models`,
{ duration: 5000 }
);
```
### Testing
- [ ] After calculation, toast shows: points, time, preset, frequency, model count
---
## Task 6: Native Windows Backend (Priority 3 — Research/Plan)
### Problem
Current setup REQUIRES WSL2. Users without WSL2 can't use RFCP at all.
### Current Flow
```
RFCP.exe (Electron)
→ detects WSL2
→ launches: wsl python3 -m uvicorn ...
→ backend runs in WSL2 Linux
```
### Target Flow
```
RFCP.exe (Electron)
→ Option A: embedded Python (Windows native)
→ Option B: detect system Python (Windows)
→ Option C: keep WSL2 but with fallback
```
### Research Tasks (don't implement yet, just investigate)
1. **Check how Electron currently launches backend:**
```bash
# Look at desktop/ directory
cat desktop/src/main.ts # or main.js
# Find where it spawns python/uvicorn
```
2. **Check if Windows Python works for backend:**
```powershell
# In Windows PowerShell:
cd D:\root\rfcp\backend
python -m uvicorn app.main:app --host 0.0.0.0 --port 8090
# Does it start? What errors?
```
3. **Evaluate embedded Python options:**
- python-embedded (official, ~30 MB)
- PyInstaller (bundle backend as .exe)
- cx_Freeze
- Nuitka (compile Python to C)
4. **Document findings** — create a brief report:
```
RFCP-Native-Backend-Research.md
- Current architecture (WSL2 dependency)
- Windows Python compatibility test results
- Recommended approach
- Migration steps
- Timeline estimate
```
### Goal
User downloads RFCP.exe → installs → clicks icon → everything works.
No WSL2. No manual pip install. GPU auto-detected.
---
## Implementation Order
### Priority 1 (30 min total)
1. **Task 1:** Fix GPU dropdown — make it clickable again
2. **Task 2:** GPU diagnostics + install instructions in UI
3. **Task 3:** Terrain Profile click propagation fix
### Priority 2 (1 hour)
4. **Task 4:** Coverage boundary real contour (shapely)
5. **Task 5:** Results popup enhancement
### Priority 3 (Research only)
6. **Task 6:** Investigate native Windows backend — report only, no implementation
---
## Build & Deploy
```bash
# After implementation:
cd /mnt/d/root/rfcp/frontend
npx tsc --noEmit # TypeScript check
npm run build # Production build
# Rebuild Electron:
cd /mnt/d/root/rfcp/installer
bash build-win.sh
# Test:
# Install new .exe and verify GPU indicator works
```
---
## Success Criteria
- [ ] GPU dropdown opens when clicking badge
- [ ] Dropdown shows device list or install instructions
- [ ] After `pip3 install cupy-cuda12x --break-system-packages` in WSL + restart → GPU visible
- [ ] Terrain Profile click doesn't add ruler points
- [ ] Coverage boundary follows actual signal contour
- [ ] Results toast shows model count and frequency
- [ ] Native Windows backend research document created

# RFCP — Iteration 3.6.0: Production GPU Build
## Overview
Enable GPU acceleration in the production PyInstaller build. Currently production
runs CPU-only (NumPy) because CuPy is not included in rfcp-server.exe.
**Goal:** User with NVIDIA GPU installs RFCP → GPU detected automatically →
coverage calculations use CUDA acceleration. No manual pip install required.
**Context from diagnostics screenshot:**
```json
{
"python_executable": "C:\\Users\\Administrator\\AppData\\Local\\Programs\\RFCP\\resources\\backend\\rfcp-server.exe",
"platform": "Windows-10-10.0.26288-SP0",
"is_wsl": false,
"numpy": { "version": "1.26.4" },
"cuda": {
"error": "CuPy not installed",
"install_hint": "pip install cupy-cuda12x"
}
}
```
**Architecture:** Production uses PyInstaller-bundled rfcp-server.exe (self-contained).
CuPy not included → GPU not available for end users.
---
## Strategy: Two-Tier Build
Instead of one massive binary, produce two builds:
```
RFCP-Setup-{version}.exe (~150 MB) — CPU-only, works everywhere
RFCP-Setup-{version}-GPU.exe (~700 MB) — includes CuPy + CUDA runtime
```
**Why not dynamic loading?**
PyInstaller bundles everything at build time. CuPy can't be pip-installed
into a frozen exe at runtime. Options are:
1. **Bundle CuPy in PyInstaller** ← cleanest, what we'll do
2. Side-load CuPy DLLs (fragile, version-sensitive)
3. Hybrid: unfrozen Python + CuPy installed separately (defeats purpose of exe)
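
The constraint behind option 1 can be checked at runtime: PyInstaller marks frozen executables, so the backend can report which kind of build it is. A small sketch (the hint strings are illustrative):

```python
import sys


def is_frozen_build() -> bool:
    """PyInstaller sets sys.frozen on bundled executables."""
    return bool(getattr(sys, "frozen", False))


def gpu_install_hint() -> str:
    """Explain why GPU may be unavailable, depending on build type (illustrative wording)."""
    if is_frozen_build():
        return "Frozen build: CuPy must be bundled at build time (use the GPU installer)."
    return "Dev environment: pip install cupy-cuda12x"
```

This lets the diagnostics endpoint distinguish "install CuPy" (dev) from "download the GPU installer" (production) instead of giving end users a pip command that cannot work on a frozen exe.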
---
## Task 1: PyInstaller Spec with CuPy (Priority 1 — 30 min)
### File: `installer/rfcp-server-gpu.spec`
Create a separate .spec file that includes CuPy:
```python
# rfcp-server-gpu.spec — GPU-enabled build
import os
import sys
from PyInstaller.utils.hooks import collect_all, collect_dynamic_libs
backend_path = os.path.abspath(os.path.join(os.path.dirname(SPEC), '..', 'backend'))
# Collect CuPy and its CUDA dependencies
cupy_datas, cupy_binaries, cupy_hiddenimports = collect_all('cupy')
# Also collect cupy_backends
cupyb_datas, cupyb_binaries, cupyb_hiddenimports = collect_all('cupy_backends')
# CUDA runtime libraries that CuPy needs
cuda_binaries = collect_dynamic_libs('cupy')
a = Analysis(
[os.path.join(backend_path, 'run_server.py')],
pathex=[backend_path],
binaries=cupy_binaries + cupyb_binaries + cuda_binaries,
datas=[
(os.path.join(backend_path, 'data', 'terrain'), 'data/terrain'),
] + cupy_datas + cupyb_datas,
hiddenimports=[
# Existing imports from rfcp-server.spec
'uvicorn.logging',
'uvicorn.loops',
'uvicorn.loops.auto',
'uvicorn.protocols',
'uvicorn.protocols.http',
'uvicorn.protocols.http.auto',
'uvicorn.protocols.websockets',
'uvicorn.protocols.websockets.auto',
'uvicorn.lifespan',
'uvicorn.lifespan.on',
'motor',
'pymongo',
'numpy',
'scipy',
'shapely',
'shapely.geometry',
'shapely.ops',
# CuPy-specific
'cupy',
'cupy.cuda',
'cupy.cuda.runtime',
'cupy.cuda.driver',
'cupy.cuda.memory',
'cupy.cuda.stream',
'cupy._core',
'cupy._core.core',
'cupy._core._routines_math',
'cupy.fft',
'cupy.linalg',
'fastrlock',
] + cupy_hiddenimports + cupyb_hiddenimports,
hookspath=[],
hooksconfig={},
runtime_hooks=[],
excludes=[],
noarchive=False,
)
pyz = PYZ(a.pure)
exe = EXE(
pyz,
a.scripts,
a.binaries,
a.datas,
[],
name='rfcp-server',
debug=False,
bootloader_ignore_signals=False,
strip=False,
upx=False, # Don't compress CUDA libs — they need fast loading
console=True,
icon=os.path.join(os.path.dirname(SPEC), 'rfcp.ico'),
)
```
### Key Points:
- `collect_all('cupy')` grabs all CuPy submodules + CUDA DLLs
- `fastrlock` is a CuPy dependency (must be in hiddenimports)
- `upx=False` — don't compress CUDA binaries (breaks them)
- One-file mode (`a.binaries + a.datas` in EXE) for single exe
---
## Task 2: Build Script for GPU Variant (Priority 1 — 15 min)
### File: `installer/build-gpu.bat` (Windows)
```batch
@echo off
echo ========================================
echo RFCP GPU Build — rfcp-server-gpu.exe
echo ========================================
REM Ensure CuPy is installed in build environment
echo Checking CuPy installation...
python -c "import cupy; print(f'CuPy {cupy.__version__} with CUDA {cupy.cuda.runtime.runtimeGetVersion()}')"
if errorlevel 1 (
echo ERROR: CuPy not installed. Run: pip install cupy-cuda12x
exit /b 1
)
REM Build with GPU spec
echo Building rfcp-server with GPU support...
cd /d %~dp0\..\backend
pyinstaller ..\installer\rfcp-server-gpu.spec --clean --noconfirm
echo.
echo Build complete! Output: dist\rfcp-server.exe
echo Size:
dir dist\rfcp-server.exe
REM Optional: copy to Electron resources
if exist "..\desktop\resources" (
copy /y dist\rfcp-server.exe ..\desktop\resources\rfcp-server.exe
echo Copied to desktop\resources\
)
pause
```
### File: `installer/build-gpu.sh` (WSL/Linux)
```bash
#!/bin/bash
set -e
echo "========================================"
echo " RFCP GPU Build — rfcp-server (GPU)"
echo "========================================"
# Check CuPy
python3 -c "import cupy; print(f'CuPy {cupy.__version__}')" 2>/dev/null || {
echo "ERROR: CuPy not installed. Run: pip install cupy-cuda12x"
exit 1
}
cd "$(dirname "$0")/../backend"
pyinstaller ../installer/rfcp-server-gpu.spec --clean --noconfirm
echo ""
echo "Build complete!"
ls -lh dist/rfcp-server*
```
---
## Task 3: GPU Backend — Graceful CuPy Detection (Priority 1 — 15 min)
### File: `backend/app/services/gpu_backend.py`
The existing gpu_backend.py should already handle CuPy absence gracefully.
Verify and fix if needed:
```python
# gpu_backend.py — must work in BOTH CPU and GPU builds
import numpy as np
# Try importing CuPy — this is the key detection
_cupy_available = False
_gpu_device_name = None
_gpu_memory_mb = 0
try:
import cupy as cp
# Verify we can actually use it (not just import)
    device = cp.cuda.Device(0)
    # Device.attributes has no 'name' key, so this yields the fallback label;
    # the real device name is fetched below via getDeviceProperties.
    _gpu_device_name = device.attributes.get('name', f'CUDA Device {device.id}')
# Try to get name via runtime
try:
props = cp.cuda.runtime.getDeviceProperties(0)
_gpu_device_name = props.get('name', _gpu_device_name)
if isinstance(_gpu_device_name, bytes):
_gpu_device_name = _gpu_device_name.decode('utf-8').strip('\x00')
except Exception:
pass
_gpu_memory_mb = device.mem_info[1] // (1024 * 1024)
_cupy_available = True
except ImportError:
cp = None # CuPy not installed (CPU build)
except Exception as e:
cp = None # CuPy installed but CUDA not available
print(f"[GPU] CuPy found but CUDA unavailable: {e}")
def is_gpu_available() -> bool:
return _cupy_available
def get_gpu_info() -> dict:
if _cupy_available:
return {
"available": True,
"backend": "CuPy (CUDA)",
"device": _gpu_device_name,
"memory_mb": _gpu_memory_mb,
}
return {
"available": False,
"backend": "NumPy (CPU)",
"device": "CPU",
"memory_mb": 0,
}
def get_array_module():
"""Return cupy if available, otherwise numpy."""
if _cupy_available:
return cp
return np
```
### Usage in coverage_service.py:
```python
from app.services.gpu_backend import get_array_module, is_gpu_available
xp = get_array_module() # cupy or numpy — same API
# All calculations use xp instead of np:
distances = xp.sqrt(dx**2 + dy**2)
path_loss = 20 * xp.log10(distances) + 20 * xp.log10(freq_mhz) - 27.55
# If using cupy, results need to come back to CPU for JSON serialization:
if is_gpu_available():
results = xp.asnumpy(path_loss)
else:
results = path_loss
```
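The `xp` indirection is easy to exercise on the CPU alone. A minimal sketch (NumPy only, with `get_array_module()` stubbed to skip the CuPy probe) of the FSPL expression used above:

```python
import numpy as np

def get_array_module():
    # CPU-only stub of gpu_backend.get_array_module(): the CuPy probe is
    # skipped here, so this always returns NumPy.
    return np

xp = get_array_module()

# Free-space path loss, same expression as above:
# FSPL(dB) = 20*log10(d_m) + 20*log10(f_MHz) - 27.55
distances = xp.array([100.0, 1000.0, 10000.0])  # meters
freq_mhz = 900.0
path_loss = 20 * xp.log10(distances) + 20 * xp.log10(freq_mhz) - 27.55
print(path_loss.round(2))
```

With CuPy present, `xp` becomes `cupy` and the same expression runs on the GPU; only the final `asnumpy` copy differs.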
---
## Task 4: GPU Status in Frontend Header (Priority 2 — 10 min)
### Update GPUIndicator.tsx
When GPU is detected, the badge should clearly show it:
```
CPU build: [⚙ CPU] (gray badge)
GPU detected: [⚡ RTX 4060] (green badge)
```
The existing GPUIndicator already does this. Just verify:
1. Badge color changes from gray → green when GPU available
2. Dropdown shows "Active: GPU (CUDA)" not just "CPU (NumPy)"
3. No install hints shown when CuPy IS available
---
## Task 5: Build Environment Setup (Priority 1 — Manual by Олег)
### Prerequisites for GPU build:
```powershell
# 1. Install CuPy in Windows Python (NOT WSL)
pip install cupy-cuda12x
# 2. Verify CuPy works
python -c "import cupy; print(cupy.cuda.runtime.runtimeGetVersion())"
# Should print: 12000 or similar
# 3. Install PyInstaller if not present
pip install pyinstaller
# 4. Verify fastrlock (CuPy dependency)
pip install fastrlock
```
### Build commands:
```powershell
# CPU-only build (existing)
cd D:\root\rfcp\backend
pyinstaller ..\installer\rfcp-server.spec --clean --noconfirm
# GPU build (new)
cd D:\root\rfcp\backend
pyinstaller ..\installer\rfcp-server-gpu.spec --clean --noconfirm
```
### Expected output sizes:
```
rfcp-server.exe (CPU): ~80 MB
rfcp-server.exe (GPU): ~600-800 MB (CuPy bundles CUDA runtime libs)
```
---
## Task 6: Electron — Detect Build Variant (Priority 2 — 10 min)
### File: `desktop/main.js` or `desktop/src/main.ts`
Add version detection so UI knows which build it's running:
```javascript
// After backend starts, check GPU status
async function checkBackendCapabilities() {
try {
const response = await fetch('http://127.0.0.1:8090/api/gpu/status');
const data = await response.json();
// Send to renderer
mainWindow.webContents.send('gpu-status', data);
if (data.available) {
console.log(`[RFCP] GPU: ${data.device} (${data.memory_mb} MB)`);
} else {
console.log('[RFCP] Running in CPU mode');
}
} catch (e) {
console.log('[RFCP] Backend not ready for GPU check');
}
}
```
---
## Task 7: About / Version Info (Priority 3 — 5 min)
### Add build info to `/api/health` response:
```python
@app.get("/api/health")
async def health():
gpu_info = get_gpu_info()
return {
"status": "ok",
"version": "3.6.0",
"build": "gpu" if gpu_info["available"] else "cpu",
"gpu": gpu_info,
"python": sys.version,
"platform": platform.platform(),
}
```
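The payload logic is framework-free and can be checked without FastAPI; a sketch with a hypothetical `build_health()` helper and a stubbed `get_gpu_info()` that simulates the CPU build:

```python
import platform
import sys

def get_gpu_info() -> dict:
    # Stub: pretend CuPy/CUDA was not detected (CPU build).
    return {"available": False, "backend": "NumPy (CPU)", "device": "CPU", "memory_mb": 0}

def build_health() -> dict:
    # Hypothetical helper mirroring the endpoint body above.
    gpu_info = get_gpu_info()
    return {
        "status": "ok",
        "version": "3.6.0",
        "build": "gpu" if gpu_info["available"] else "cpu",
        "gpu": gpu_info,
        "python": sys.version,
        "platform": platform.platform(),
    }

health = build_health()
print(health["build"])
```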
---
## Build & Test Procedure
### Step 1: Setup Build Environment
```powershell
# Windows PowerShell (NOT WSL)
cd D:\root\rfcp
# Verify Python environment
python --version # Should be 3.11.x
pip list | findstr cupy # Should show cupy-cuda12x
# If CuPy not installed:
pip install cupy-cuda12x fastrlock
```
### Step 2: Build GPU Variant
```powershell
cd D:\root\rfcp\backend
pyinstaller ..\installer\rfcp-server-gpu.spec --clean --noconfirm
```
### Step 3: Test Standalone
```powershell
# Run the built exe directly
.\dist\rfcp-server.exe
# In another terminal:
curl http://localhost:8090/api/health
curl http://localhost:8090/api/gpu/status
curl http://localhost:8090/api/gpu/diagnostics
```
### Step 4: Verify GPU Detection
Expected `/api/gpu/status` response:
```json
{
"available": true,
"backend": "CuPy (CUDA)",
"device": "NVIDIA GeForce RTX 4060 Laptop GPU",
"memory_mb": 8188
}
```
### Step 5: Run Coverage Calculation
- Place a site on map
- Calculate coverage (10km, 200m resolution)
- Check logs for: `[GPU] Using CUDA: RTX 4060 (8188 MB)`
- Compare performance: should be 5-10x faster than CPU
### Step 6: Full Electron Build
```powershell
# Copy GPU server to Electron resources
copy backend\dist\rfcp-server.exe desktop\resources\
# Build Electron installer
cd installer
.\build-win.sh # or equivalent Windows script
```
---
## Risk Assessment
### Size Concern
CuPy bundles CUDA runtime (~500MB). Total GPU installer ~700-800MB.
**Mitigation:** This is acceptable for a professional RF planning tool.
AutoCAD is 7GB. QGIS is 1.5GB. Atoll is 3GB+.
### CUDA Version Compatibility
CuPy-cuda12x requires CUDA 12.x compatible driver.
RTX 4060 with Driver 581.42 → CUDA 13.0 → backward compatible ✅
**Mitigation:** gpu_backend.py already falls back to NumPy gracefully.
### PyInstaller + CuPy Issues
Known issues:
- CuPy uses many .so/.dll files that PyInstaller might miss
- `collect_all('cupy')` should catch them, but test thoroughly
- If missing DLLs → add them manually to `binaries` list
**Mitigation:** Test the standalone exe on a clean machine (no Python installed).
### Antivirus False Positives
Larger exe = more AV suspicion. PyInstaller exes already trigger some AV.
**Mitigation:** Code-sign the exe (future task), submit to AV vendors for whitelisting.
---
## Success Criteria
- [ ] `rfcp-server-gpu.spec` created and builds successfully
- [ ] Built exe detects RTX 4060 on startup
- [ ] `/api/gpu/status` returns `"available": true`
- [ ] Coverage calculation uses CuPy (check logs)
- [ ] GPU badge shows "⚡ RTX 4060" (green) in header
- [ ] Fallback to NumPy works if CUDA unavailable
- [ ] CPU-only spec (`rfcp-server.spec`) still builds and works
- [ ] Build time < 10 minutes
- [ ] GPU exe size < 1 GB
---
## Commit Message
```
feat(build): add GPU-enabled PyInstaller build with CuPy + CUDA
- New rfcp-server-gpu.spec with CuPy/CUDA collection
- Build scripts: build-gpu.bat, build-gpu.sh
- Graceful GPU detection in gpu_backend.py
- Two-tier build: CPU (~80MB) and GPU (~700MB) variants
- Auto-detection: RTX 4060 → CuPy acceleration
- Fallback: no CUDA → NumPy (CPU mode)
Iteration 3.6.0 — Production GPU Build
```
---
## Files Summary
### New Files:
| File | Purpose |
|------|---------|
| `installer/rfcp-server-gpu.spec` | PyInstaller config with CuPy |
| `installer/build-gpu.bat` | Windows GPU build script |
| `installer/build-gpu.sh` | Linux/WSL GPU build script |
### Modified Files:
| File | Changes |
|------|---------|
| `backend/app/services/gpu_backend.py` | Verify graceful detection |
| `backend/app/main.py` | Health endpoint with build info |
| `desktop/main.js` or `main.ts` | GPU status check after backend start |
| `frontend/src/components/ui/GPUIndicator.tsx` | Verify badge shows GPU |
### No Changes Needed:
| File | Reason |
|------|--------|
| `installer/rfcp-server.spec` | CPU build stays as-is |
| `backend/app/services/coverage_service.py` | Already uses get_array_module() |
| `installer/build-win.sh` | Existing CPU build unchanged |
---
## Timeline
| Phase | Task | Time |
|-------|------|------|
| **P1** | Create rfcp-server-gpu.spec | 30 min |
| **P1** | Build scripts | 15 min |
| **P1** | Verify gpu_backend.py | 15 min |
| **P2** | Frontend badge verification | 10 min |
| **P2** | Electron GPU status | 10 min |
| **P3** | Health endpoint update | 5 min |
| **Test** | Build + test standalone | 20 min |
| **Test** | Full Electron build | 15 min |
| | **Total** | **~2 hours** |
**Claude Code estimated time: 10-15 min** (spec + scripts + backend changes)
**Manual testing by Олег: 30-45 min** (building + verifying)
# RFCP Project Roadmap — Updated February 4, 2026
**Project:** RFCP (RF Coverage Planning) for UMTC
**Developer:** Олег + Claude
**Started:** January 30, 2026
**Current Version:** 3.8.0 (GPU Acceleration Complete)
---
## ✅ Completed Milestones
### Phase 1: Frontend (January 2026)
- ✅ React + TypeScript + Vite + Leaflet
- ✅ Multi-site RF coverage planning
- ✅ Multi-sector sites (Alpha/Beta/Gamma)
- ✅ Geographic-scale canvas heatmap
- ✅ Keyboard shortcuts + delete confirmation
- ✅ NumberInput components with sliders
- ✅ TypeScript strict mode, ESLint clean
- ✅ Production build: 536KB / 163KB gzipped
### Phase 2: Backend Architecture (February 1, 2026)
- ✅ Python FastAPI + NumPy + ProcessPoolExecutor
- ✅ 8 propagation models (FreeSpace, Okumura-Hata, COST-231, ITU-R P.1546, etc.)
- ✅ Modular geometry engine (haversine, intersection, reflection, diffraction, LOS)
- ✅ SharedMemoryManager for terrain data (zero-copy, 25 MB)
- ✅ Building filtering (351k → 27k bbox → 15k cap)
- ✅ Overpass API with retry + mirror failover
- ✅ WebSocket progress streaming
### Phase 3: Performance (February 2-3, 2026)
- ✅ LOD (Level of Detail) optimization
- ✅ Spatial indexing for buildings (R-tree)
- ✅ Dominant path simplification for distant points
- ✅ OOM fix + memory management
- ✅ CloudRF-style color gradient
- ✅ Results popup + session history
- ✅ Terrain profile viewer
### Phase 4: GPU Acceleration (February 3-4, 2026) ⭐
- ✅ CuPy + CUDA backend (RTX 4060)
- ✅ CUDA Toolkit 13.1 + cupy-cuda13x setup
- ✅ Phase 2.5: Vectorized distances + path_loss (0.006s)
- ✅ Phase 2.6: Vectorized terrain LOS + diffraction (0.04s)
- ✅ Phase 2.7: Vectorized antenna pattern loss
- ✅ Vegetation bbox pre-filter (100x+ speedup)
- ✅ Worker process isolation (no CUDA in workers)
- ✅ PyInstaller ONEDIR GPU build (1.2 GB installer)
- ✅ **Full preset: 195s → 11.2s (17.4x speedup)**
### Supporting Work
- ✅ RF Radio Theory wiki article (comprehensive)
- ✅ Propagation model research (CloudRF, SPLAT!, Signal Server)
- ✅ RFCP Method collaboration framework documented
---
## 📊 Current Performance
| Preset | Points | Resolution | Time (cached) | Time (cold) |
|--------|--------|-----------|---------------|-------------|
| Standard | 1,975 | 200m | **2.3s** | ~12s |
| Full | 6,640 | 50m | **11.2s** | ~20s |
| 50km radius | 4,966 | adaptive | ~410s | ~420s |
**Hardware:** Windows 11, RTX 4060 Laptop GPU, 6-core CPU
---
## 🔜 Next: Phase 5 — Data & Accuracy
### 5.1 SRTM Terrain Integration
**Priority:** HIGH
**Status:** Not started
Current terrain: Single HGT tile download per calculation
Target: Pre-cached SRTM/ASTER DEM tiles with proper interpolation
- [ ] SRTM tile manager (auto-download, cache)
- [ ] Bilinear interpolation for elevation sampling
- [ ] Multi-tile coverage for large radius
- [ ] Terrain profile accuracy validation
- [ ] Compare with current terrain data quality
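As a reference for the interpolation checklist item, a minimal bilinear sampling sketch over an in-memory NumPy tile (the `sample_elevation()` helper is hypothetical, not the planned tile-manager API):

```python
import numpy as np

def sample_elevation(dem: np.ndarray, row: float, col: float) -> float:
    """Bilinear interpolation at a fractional (row, col) inside a DEM tile."""
    r0, c0 = int(row), int(col)
    r1 = min(r0 + 1, dem.shape[0] - 1)
    c1 = min(c0 + 1, dem.shape[1] - 1)
    fr, fc = row - r0, col - c0
    top = dem[r0, c0] * (1 - fc) + dem[r0, c1] * fc  # blend along columns
    bot = dem[r1, c0] * (1 - fc) + dem[r1, c1] * fc
    return float(top * (1 - fr) + bot * fr)          # blend along rows

dem = np.array([[100.0, 200.0],
                [300.0, 400.0]])
print(sample_elevation(dem, 0.5, 0.5))  # midpoint of the four corners
```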
### 5.2 Project Persistence
**Priority:** MEDIUM
- [ ] Save/load projects (JSON or SQLite)
- [ ] Site configurations persistence
- [ ] Coverage results caching
- [ ] Session history persistence across restarts
- [ ] Export coverage report (PDF/PNG)
### 5.3 Accuracy Validation
**Priority:** MEDIUM
- [ ] Compare with known coverage maps
- [ ] Field measurements with real equipment
- [ ] Calibrate propagation models per environment
- [ ] Antenna pattern library (real equipment specs)
---
## 🔮 Future Phases
### Phase 6: Multi-Station & Dashboard
- [ ] Multi-station view (aggregate coverage)
- [ ] Station discovery via WireGuard mesh
- [ ] Coverage gap analysis
- [ ] Interference modeling between stations
- [ ] Handover zone visualization
### Phase 7: Hardware Integration
- [ ] LimeSDR Mini 2.0 testing
- [ ] Real RF attach validation
- [ ] sysmoISIM-SJA2 SIM integration
- [ ] ZTE B8200 base station testing
- [ ] INFOZAHYST Plastun SDR (if accessible)
### Phase 8: Advanced Features
- [ ] 3D visualization mode
- [ ] Link budget analysis view
- [ ] Frequency planning tool
- [ ] Indoor coverage modeling
- [ ] Time-series analysis (seasonal vegetation)
- [ ] Offline mode (embedded terrain DB)
### Phase 9: Distribution
- [ ] Auto-updater (electron-updater)
- [ ] Live USB distribution for field deployment
- [ ] Standalone offline package
- [ ] User documentation / help system
---
## 🏛️ Architecture Overview
```
RFCP Application (Electron)
├── Frontend (React + TypeScript + Vite)
│ ├── Leaflet map with custom canvas heatmap
│ ├── Zustand state management
│ └── WebSocket for progress streaming
├── Backend (Python FastAPI)
│ ├── Coverage Engine
│ │ ├── Grid generator (adaptive zones)
│ │ ├── GPU pipeline (CuPy/CUDA) — main process
│ │ │ ├── Phase 2.5: distances + path_loss
│ │ │ ├── Phase 2.6: terrain LOS + diffraction
│ │ │ └── Phase 2.7: antenna pattern
│ │ └── CPU workers (ProcessPool) — 3-6 workers
│ │ ├── Building obstruction (spatial index)
│ │ ├── Reflections (ray-building intersection)
│ │ └── Vegetation loss (bbox pre-filter)
│ │
│ ├── Propagation Models (8 models)
│ │ ├── Free-Space Path Loss
│ │ ├── Okumura-Hata (150-1500 MHz)
│ │ ├── COST-231-Hata (1500-2000 MHz)
│ │ ├── ITU-R P.1546
│ │ └── ... 4 more
│ │
│ ├── OSM Services
│ │ ├── Buildings (Overpass API + cache)
│ │ ├── Vegetation (bbox pre-filter)
│ │ ├── Water bodies
│ │ └── Streets
│ │
│ └── Terrain Service
│ ├── HGT tile download + cache
│ ├── Elevation sampling
│ └── Line-of-sight checking
└── Desktop (Electron)
├── Backend process management
└── NSIS installer (1.2 GB with CUDA)
```
---
## 📈 Development Timeline
```
Jan 30, 2026 Phase 1: Frontend complete (10 iterations)
Feb 01, 2026 Phase 2: Backend architecture (48 files, 82 tests)
Feb 02, 2026 Phase 3: LOD + performance optimization
Feb 03, 2026 Phase 3.5-3.6: GPU setup + CUDA build
Feb 04, 2026 Phase 3.7-3.8: GPU vectorization complete ⭐
─────────────────────────────────────────
Full preset: 195s → 11.2s (17.4x speedup)
Standard: 38s → 2.3s (16.5x speedup)
```
**Total development time:** ~5 days intensive
**Total iterations:** 3.8.0 (20+ sub-iterations)
**Architecture:** Battle-tested, production-ready
---
## 🧰 Tech Stack
| Component | Technology | Version |
|-----------|-----------|---------|
| Frontend | React + TypeScript | 18 |
| Build | Vite | 5.x |
| Map | Leaflet | 1.9 |
| State | Zustand | 4.x |
| Backend | Python FastAPI | 3.12 |
| GPU | CuPy + CUDA | 13.x |
| Parallel | ProcessPoolExecutor | stdlib |
| Terrain | NumPy (HGT tiles) | 1.26 |
| Desktop | Electron | 28.x |
| Installer | NSIS (via electron-builder) | - |
| Build (BE) | PyInstaller | 6.x |
---
*"11.2 seconds. Full preset. 6,640 points. GPU acceleration complete."*
*— February 4, 2026*
# RFCP: WebGL Radial Gradients Coverage Layer
## Goal
Rework the WebGL coverage layer from the texture-based approach to **radial gradients**: the same technique the Canvas GeographicHeatmap uses, but on the GPU.
## Why radial gradients are better for visualization
**Texture-based (current):**
- Each point = 1 pixel in the grid
- Nearest-neighbor fill → blocky squares
- Even with smoothstep, the grid structure shows through
- ✅ Good for: terrain detail, exact values
- ❌ Bad for: attractive visualization
**Radial gradients (Canvas heatmap):**
- Each point = a circle with radial falloff
- Smooth blending between points
- Natural-looking coverage
- ✅ Good for: attractive visualization, presentations
- ❌ Bad for: exact values (blending distorts them)
## WebGL Radial Gradients Architecture
### Approach: multi-pass additive blending
```
Pass 1-N: for each point (or batch of points)
├── Draw a full-screen quad
├── Fragment shader: radial falloff from the point center
├── Output: (weight * value, weight, 0, 1)
└── Blending: GL_ONE, GL_ONE (additive)
Final Pass:
├── Read the accumulated texture
├── Normalize: value = R / G (weighted average)
└── Apply colormap
```
### Alternative: single-pass with a texture atlas
Instead of N passes, encode all the points into a texture and iterate over them in a single fragment shader:
```glsl
// Fragment shader
uniform sampler2D u_points; // point texture: (lat, lon, rsrp, radius)
uniform int u_pointCount;
void main() {
vec2 worldPos = getWorldPosition(v_uv);
float totalWeight = 0.0;
float totalValue = 0.0;
for (int i = 0; i < MAX_POINTS; i++) {
if (i >= u_pointCount) break;
vec4 point = texelFetch(u_points, ivec2(i, 0), 0);
vec2 pointPos = point.xy;
float rsrp = point.z;
float radius = point.w;
float dist = distance(worldPos, pointPos);
float weight = smoothstep(radius, 0.0, dist);
totalWeight += weight;
totalValue += weight * rsrp;
}
if (totalWeight < 0.001) discard;
float avgRsrp = totalValue / totalWeight;
vec3 color = rsrpToColor(avgRsrp);
gl_FragColor = vec4(color, smoothstep(0.0, 0.1, totalWeight));
}
```
**Problem:** a loop over 6,675 points in every fragment is far too slow.
### Recommended approach: batched additive blending
```
1. Create an offscreen framebuffer (float texture)
2. For each point (or batch of 100-500):
   - Draw a quad sized to the point's radius
   - Additive blend: (weight * rsrp, weight)
3. Final pass: normalize + colormap
```
This is how the Mapbox heatmap works.
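The accumulate-then-normalize math behind these passes can be verified on the CPU. A plain Python sketch of the (weight·value, weight) accumulation and the final R/G division, using illustrative numbers:

```python
# Two splats from overlapping points land on the same pixel.
# The accumulation texture stores R = sum(weight * value), G = sum(weight).
splats = [(0.8, -70.0), (0.4, -100.0)]  # (weight, rsrp) pairs, illustrative

r = sum(w * v for w, v in splats)  # additive blending, R channel
g = sum(w for w, _ in splats)      # additive blending, G channel

avg_rsrp = r / g  # final pass: weighted average
print(avg_rsrp)
```

The nearer, heavier splat dominates the blend, which is exactly the smooth falloff the Canvas heatmap produces.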
---
## Implementation
### Step 1: Create the offscreen framebuffer
```typescript
// Accumulation texture (RG float for weighted sum)
const accumTexture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, accumTexture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RG32F, width, height, 0, gl.RG, gl.FLOAT, null);
const framebuffer = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, accumTexture, 0);
```
**Note:** a float framebuffer requires the `EXT_color_buffer_float` extension.
### Step 2: Point rendering shader
**Vertex shader:**
```glsl
attribute vec2 a_position; // quad vertices
attribute vec2 a_pointCenter; // point lat/lon (instanced)
attribute float a_pointRsrp; // point RSRP (instanced)
attribute float a_pointRadius; // point radius in pixels (instanced)
uniform mat4 u_matrix; // world to clip transform
varying vec2 v_localPos; // position relative to point center
varying float v_rsrp;
void main() {
// Expand quad around point center
vec2 worldPos = a_pointCenter + a_position * a_pointRadius;
gl_Position = u_matrix * vec4(worldPos, 0.0, 1.0);
v_localPos = a_position; // -1 to 1
v_rsrp = a_pointRsrp;
}
```
**Fragment shader:**
```glsl
precision highp float;
varying vec2 v_localPos;
varying float v_rsrp;
void main() {
// Radial distance from center (0 at center, 1 at edge)
float dist = length(v_localPos);
// Discard outside circle
if (dist > 1.0) discard;
// Radial falloff (smooth at edges)
float weight = 1.0 - smoothstep(0.0, 1.0, dist);
// Or gaussian: weight = exp(-dist * dist * 2.0);
// Output: (weight * normalized_rsrp, weight)
float normalizedRsrp = (v_rsrp + 130.0) / 80.0; // -130 to -50 → 0 to 1
gl_FragColor = vec4(weight * normalizedRsrp, weight, 0.0, 1.0);
}
```
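The radial falloff used above is easy to sanity-check outside the shader; a Python re-implementation of GLSL `smoothstep` and the `1.0 - smoothstep(0.0, 1.0, dist)` weight:

```python
def smoothstep(edge0: float, edge1: float, x: float) -> float:
    # GLSL smoothstep: clamp to [0, 1], then cubic Hermite 3t^2 - 2t^3.
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

def falloff(dist: float) -> float:
    # Weight used in the fragment shader: 1 at the center, 0 at the rim.
    return 1.0 - smoothstep(0.0, 1.0, dist)

print(falloff(0.0), falloff(0.5), falloff(1.0))
```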
### Step 3: Final compositing shader
```glsl
precision highp float;
uniform sampler2D u_accumTexture;
varying vec2 v_uv;
vec3 rsrpToColor(float t) {
// t: 0 = weak (red), 1 = strong (cyan)
if (t < 0.25) return mix(vec3(1.0, 0.0, 0.0), vec3(1.0, 0.5, 0.0), t / 0.25);
if (t < 0.5) return mix(vec3(1.0, 0.5, 0.0), vec3(1.0, 1.0, 0.0), (t - 0.25) / 0.25);
if (t < 0.75) return mix(vec3(1.0, 1.0, 0.0), vec3(0.0, 1.0, 0.0), (t - 0.5) / 0.25);
return mix(vec3(0.0, 1.0, 0.0), vec3(0.0, 1.0, 1.0), (t - 0.75) / 0.25);
}
void main() {
vec2 accum = texture2D(u_accumTexture, v_uv).rg;
float totalValue = accum.r;
float totalWeight = accum.g;
// No coverage
if (totalWeight < 0.001) discard;
// Weighted average RSRP
float avgRsrp = totalValue / totalWeight;
// Color mapping
vec3 color = rsrpToColor(avgRsrp);
// Alpha based on weight (fade at edges)
float alpha = smoothstep(0.0, 0.1, totalWeight) * 0.85;
gl_FragColor = vec4(color, alpha);
}
```
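The five-stop colormap is plain piecewise linear interpolation; a Python mirror of `rsrpToColor` for quick verification:

```python
def mix(a, b, t):
    # GLSL mix(): componentwise linear interpolation.
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def rsrp_to_color(t: float):
    # Stops: red -> orange -> yellow -> green -> cyan (weak to strong).
    t = max(0.0, min(1.0, t))
    if t < 0.25:
        return mix((1, 0, 0), (1, 0.5, 0), t / 0.25)
    if t < 0.5:
        return mix((1, 0.5, 0), (1, 1, 0), (t - 0.25) / 0.25)
    if t < 0.75:
        return mix((1, 1, 0), (0, 1, 0), (t - 0.5) / 0.25)
    return mix((0, 1, 0), (0, 1, 1), (t - 0.75) / 0.25)

print(rsrp_to_color(0.0), rsrp_to_color(1.0))
```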
### Step 4: Rendering loop
```typescript
function render() {
const canvas = canvasRef.current;
const gl = glRef.current;
// 1. Position canvas over map
const nw = map.latLngToLayerPoint([bounds.maxLat, bounds.minLon]);
const se = map.latLngToLayerPoint([bounds.minLat, bounds.maxLon]);
canvas.style.transform = `translate(${nw.x}px, ${nw.y}px)`;
canvas.style.width = `${se.x - nw.x}px`;
canvas.style.height = `${se.y - nw.y}px`;
// 2. Clear accumulation buffer
gl.bindFramebuffer(gl.FRAMEBUFFER, accumFramebuffer);
gl.clearColor(0, 0, 0, 0);
gl.clear(gl.COLOR_BUFFER_BIT);
// 3. Render points with additive blending
gl.useProgram(pointProgram);
gl.enable(gl.BLEND);
gl.blendFunc(gl.ONE, gl.ONE); // Additive
// Set uniforms (matrix, etc.)
const matrix = calculateWorldToClipMatrix(bounds, canvas.width, canvas.height);
gl.uniformMatrix4fv(u_matrix, false, matrix);
// Draw all points (instanced if supported, or batched)
drawPoints(gl, points);
// 4. Final composite pass
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
gl.useProgram(compositeProgram);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA); // Normal blend
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, accumTexture);
drawFullscreenQuad(gl);
}
```
---
## Optimizations
### 1. Instanced rendering (if supported)
```typescript
const ext = gl.getExtension('ANGLE_instanced_arrays');
if (ext) {
// Use instanced rendering - draw all points in one call
ext.drawArraysInstancedANGLE(gl.TRIANGLE_STRIP, 0, 4, points.length);
}
```
### 2. Spatial culling
Draw only the points that fall inside the viewport:
```typescript
const visiblePoints = points.filter(p => {
const screenPos = map.latLngToContainerPoint([p.lat, p.lon]);
return screenPos.x > -radius && screenPos.x < canvas.width + radius &&
screenPos.y > -radius && screenPos.y < canvas.height + radius;
});
```
### 3. Dynamic radius based on zoom
```typescript
const zoom = map.getZoom();
const metersPerPixel = 40075016.686 * Math.cos(centerLat * Math.PI / 180) / Math.pow(2, zoom + 8);
const radiusPixels = (settings.resolution * 1.5) / metersPerPixel;
```
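The Web Mercator ground-resolution formula above can be checked numerically (at the equator and zoom 0, the full circumference maps onto 256 pixels):

```python
import math

def meters_per_pixel(lat_deg: float, zoom: float) -> float:
    # Web Mercator ground resolution at a given latitude and zoom level.
    return 40075016.686 * math.cos(math.radians(lat_deg)) / 2 ** (zoom + 8)

print(meters_per_pixel(0.0, 0))  # ~156543 m/px at the equator, zoom 0
```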
### 4. Resolution scaling
At low zoom levels, render into a smaller framebuffer and upscale:
```typescript
const scale = zoom < 10 ? 0.5 : zoom < 12 ? 0.75 : 1.0;
const fbWidth = Math.round(canvas.width * scale);
const fbHeight = Math.round(canvas.height * scale);
```
---
## Comparison with the current texture-based approach
| Aspect | Texture-based | Radial gradients |
|--------|---------------|------------------|
| Visualization | Blocky | Smooth |
| Terrain detail | Good | Less accurate |
| Performance | Fast (1 draw call) | Slower (N points) |
| Memory | Texture size | Framebuffer + points |
| Code complexity | Medium | High |
---
## Implementation Checklist
### Phase 1: Basic setup
- [ ] Create a new file `WebGLRadialCoverageLayer.tsx`
- [ ] Set up the WebGL context with float extensions
- [ ] Create the accumulation framebuffer
- [ ] Basic vertex/fragment shaders for points
### Phase 2: Point rendering
- [ ] Implement point quad rendering
- [ ] Radial falloff function
- [ ] Additive blending
- [ ] Test with a few points
### Phase 3: Compositing
- [ ] Final pass shader
- [ ] Weighted average calculation
- [ ] Color mapping
- [ ] Alpha/transparency
### Phase 4: Integration
- [ ] Map positioning (as in the current WebGL layer)
- [ ] Map event listeners (move/zoom)
- [ ] Opacity control
- [ ] Toggle in the UI
### Phase 5: Optimization
- [ ] Instanced rendering
- [ ] Spatial culling
- [ ] Dynamic radius
- [ ] Resolution scaling
---
## Fallback
If the WebGL radial approach does not work (older GPU, missing extensions):
- Fall back to the Canvas GeographicHeatmap
- Or to the current texture-based WebGL layer
---
## References
1. [Mapbox GL Heatmap implementation](https://github.com/mapbox/mapbox-gl-js/blob/main/src/render/draw_heatmap.js)
2. [deck.gl HeatmapLayer](https://deck.gl/docs/api-reference/aggregation-layers/heatmap-layer)
3. [WebGL additive blending](https://webglfundamentals.org/webgl/lessons/webgl-text-texture.html)
# RFCP v3.10.5: WebGL Smooth Coverage Implementation
## Problem Context
**Current state:**
- The backend returns a grid of points with lat/lon/RSRP (50m = 6,675 pts, 200m = 1,975 pts)
- WebGL texture-based rendering: points → texture → GL_LINEAR → colormap
- **Problem:** visible grid squares/pixelation, especially when zoomed in or on sparse grids (200m)
**Cause:**
- `GL_LINEAR` gives only C0 continuity (values match at cell edges, but derivatives do not)
- This creates visible "seams" between cells
## Solution from Research
### Key insight
**Catmull-Rom spline interpolation** gives C1 continuity (smooth derivatives) AND passes through the exact data values (unlike a B-spline, which blurs peaks).
**9-tap Catmull-Rom** instead of a plain `texture2D()`:
- 9 texture fetches instead of 1
- ~0.32ms vs ~0.30ms on a GTX 980 at 1920×1080
- For our ~80×85 texture it is practically free
### Critical rule
**Interpolate RAW RSRP values BEFORE applying the colormap!**
- ❌ Wrong: texture → colormap → interpolate (muddy colors)
- ✅ Right: texture → interpolate → colormap (clean gradients)
---
## Stage 1: Quick Fix (30 minutes)
### Smoothstep coordinate remapping
The fastest way to remove grid edges is a single change in the shader:
```glsl
// INSTEAD OF:
vec4 texColor = texture2D(u_texture, v_uv);
// USE:
vec4 textureSmooth(sampler2D tex, vec2 uv, vec2 texSize) {
vec2 p = uv * texSize + 0.5;
vec2 i = floor(p);
vec2 f = p - i;
f = f * f * f * (f * (f * 6.0 - 15.0) + 10.0); // quintic hermite
return texture2D(tex, (i + f - 0.5) / texSize);
}
// In main():
vec4 texColor = textureSmooth(u_texture, v_uv, u_textureSize);
```
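The quintic Hermite remap can be verified outside the shader: it fixes both endpoints and passes through 0.5 at mid-cell, which is what removes the visible cell edges. A direct Python port:

```python
def quintic(f: float) -> float:
    # 6f^5 - 15f^4 + 10f^3: zero 1st and 2nd derivatives at f=0 and f=1,
    # so neighboring cells join without visible seams.
    return f * f * f * (f * (f * 6.0 - 15.0) + 10.0)

print(quintic(0.0), quintic(0.5), quintic(1.0))
```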
**What this gives:**
- C2 continuity with a single texture read
- Removes visible grid edges
- Minimal positional bias
**A new uniform must be added:**
```javascript
const textureSizeLocation = gl.getUniformLocation(program, 'u_textureSize');
gl.uniform2f(textureSizeLocation, textureWidth, textureHeight);
```
---
## Stage 2: Production Implementation (1-2 hours)
### 9-tap Catmull-Rom Shader
```glsl
precision highp float;
uniform sampler2D u_texture;
uniform vec2 u_textureSize;
uniform float u_opacity;
varying vec2 v_uv;
// Catmull-Rom 9-tap interpolation
// Source: TheRealMJP's gist (108 GitHub stars)
vec4 SampleTextureCatmullRom(sampler2D tex, vec2 uv, vec2 texSize) {
vec2 samplePos = uv * texSize;
vec2 texPos1 = floor(samplePos - 0.5) + 0.5;
vec2 f = samplePos - texPos1;
// Catmull-Rom weights
vec2 w0 = f * (-0.5 + f * (1.0 - 0.5 * f));
vec2 w1 = 1.0 + f * f * (-2.5 + 1.5 * f);
vec2 w2 = f * (0.5 + f * (2.0 - 1.5 * f));
vec2 w3 = f * f * (-0.5 + 0.5 * f);
// Combine weights for optimized sampling
vec2 w12 = w1 + w2;
vec2 offset12 = w2 / (w1 + w2);
// Compute texture coordinates
vec2 texPos0 = (texPos1 - 1.0) / texSize;
vec2 texPos3 = (texPos1 + 2.0) / texSize;
vec2 texPos12 = (texPos1 + offset12) / texSize;
// 9 texture fetches (optimized from 16)
vec4 result = vec4(0.0);
result += texture2D(tex, vec2(texPos0.x, texPos0.y)) * w0.x * w0.y;
result += texture2D(tex, vec2(texPos12.x, texPos0.y)) * w12.x * w0.y;
result += texture2D(tex, vec2(texPos3.x, texPos0.y)) * w3.x * w0.y;
result += texture2D(tex, vec2(texPos0.x, texPos12.y)) * w0.x * w12.y;
result += texture2D(tex, vec2(texPos12.x, texPos12.y)) * w12.x * w12.y;
result += texture2D(tex, vec2(texPos3.x, texPos12.y)) * w3.x * w12.y;
result += texture2D(tex, vec2(texPos0.x, texPos3.y)) * w0.x * w3.y;
result += texture2D(tex, vec2(texPos12.x, texPos3.y)) * w12.x * w3.y;
result += texture2D(tex, vec2(texPos3.x, texPos3.y)) * w3.x * w3.y;
return result;
}
// RSRP to color mapping (red -> orange -> yellow -> green -> cyan, weak to strong)
vec3 rsrpToColor(float rsrp) {
// rsrp: normalized 0.0 (weak, -110dBm) to 1.0 (strong, -50dBm)
// Color stops: red -> orange -> yellow -> green -> cyan
vec3 c0 = vec3(1.0, 0.0, 0.0); // red (weak)
vec3 c1 = vec3(1.0, 0.5, 0.0); // orange
vec3 c2 = vec3(1.0, 1.0, 0.0); // yellow
vec3 c3 = vec3(0.0, 1.0, 0.0); // green
vec3 c4 = vec3(0.0, 1.0, 1.0); // cyan (strong)
float t = clamp(rsrp, 0.0, 1.0);
if (t < 0.25) {
return mix(c0, c1, t / 0.25);
} else if (t < 0.5) {
return mix(c1, c2, (t - 0.25) / 0.25);
} else if (t < 0.75) {
return mix(c2, c3, (t - 0.5) / 0.25);
} else {
return mix(c3, c4, (t - 0.75) / 0.25);
}
}
void main() {
// 1. Sample with Catmull-Rom interpolation (RAW value)
vec4 texColor = SampleTextureCatmullRom(u_texture, v_uv, u_textureSize);
float rsrpNormalized = texColor.r;
// 2. Discard if no coverage (validity check)
if (rsrpNormalized < 0.01) {
discard;
}
// 3. Apply colormap AFTER interpolation
vec3 color = rsrpToColor(rsrpNormalized);
// 4. Smooth boundary fading (optional)
float boundaryAlpha = smoothstep(0.01, 0.05, rsrpNormalized);
gl_FragColor = vec4(color, boundaryAlpha * u_opacity);
}
```
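Two properties make this kernel safe for RSRP data: the four weights always sum to 1, and at `f = 0` only `w1` survives, so the surface passes through the exact grid values. A Python check of the same weight polynomials:

```python
def catmull_rom_weights(f: float):
    # Same polynomials as in the shader above.
    w0 = f * (-0.5 + f * (1.0 - 0.5 * f))
    w1 = 1.0 + f * f * (-2.5 + 1.5 * f)
    w2 = f * (0.5 + f * (2.0 - 1.5 * f))
    w3 = f * f * (-0.5 + 0.5 * f)
    return w0, w1, w2, w3

# Partition of unity: the weights sum to 1 for any fractional offset.
for f in (0.0, 0.25, 0.5, 0.9):
    assert abs(sum(catmull_rom_weights(f)) - 1.0) < 1e-12

# At f = 0 only w1 survives, so interpolation passes through grid values.
print(catmull_rom_weights(0.0))
```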
### JavaScript changes
```javascript
// 1. Vertex shader (unchanged)
const vertexShaderSource = `
attribute vec2 a_position;
attribute vec2 a_texCoord;
varying vec2 v_uv;
void main() {
gl_Position = vec4(a_position, 0.0, 1.0);
v_uv = a_texCoord;
}
`;
// 2. When creating the texture, store its dimensions
const textureWidth = gridWidth;
const textureHeight = gridHeight;
// 3. Pass the uniform
const textureSizeLocation = gl.getUniformLocation(program, 'u_textureSize');
if (textureSizeLocation) {
gl.uniform2f(textureSizeLocation, textureWidth, textureHeight);
} else {
console.error('[WebGL] u_textureSize uniform NOT FOUND!');
}
// 4. Texture filtering: LINEAR can stay as the fallback
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
```
---
## Етап 3: Texture Data Format
### Current format (verify)
```javascript
// Normalized RSRP value (0-255 mapped to 0.0-1.0 in shader)
const normalized = (rsrp - minRsrp) / (maxRsrp - minRsrp);
const value = Math.round(normalized * 255);
// Store in R channel
textureData[idx] = value; // R = normalized RSRP
textureData[idx + 1] = value; // G (could be used as a validity mask)
textureData[idx + 2] = value; // B
textureData[idx + 3] = 255; // A = fully opaque
```
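With the 8-bit R channel, the -110…-50 dBm span quantizes into steps of 60/255 ≈ 0.24 dB. A sketch of the encode/decode pair (range values taken from the comments above; function names are illustrative):

```python
def encode_rsrp(rsrp_dbm, min_rsrp=-110.0, max_rsrp=-50.0):
    """Map RSRP in dBm to the texture's 0..255 R channel."""
    t = (rsrp_dbm - min_rsrp) / (max_rsrp - min_rsrp)
    return round(max(0.0, min(1.0, t)) * 255)

def decode_rsrp(value, min_rsrp=-110.0, max_rsrp=-50.0):
    """Inverse mapping, as the shader sees it after the 0..1 normalization."""
    return min_rsrp + (value / 255.0) * (max_rsrp - min_rsrp)
```

The round-trip error is at most half a step (~0.12 dB); if that matters, the float-texture alternative below avoids it entirely.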
### Alternative: Float texture (better precision)
```javascript
// If the browser supports OES_texture_float
const ext = gl.getExtension('OES_texture_float');
if (ext) {
const floatData = new Float32Array(width * height);
for (const point of points) {
const normalized = (point.rsrp - minRsrp) / (maxRsrp - minRsrp);
floatData[gridY * width + gridX] = normalized;
}
gl.texImage2D(gl.TEXTURE_2D, 0, gl.LUMINANCE, width, height, 0,
gl.LUMINANCE, gl.FLOAT, floatData);
}
```
---
## Чеклист імплементації
### Phase 1: Quick Test (Smoothstep)
- [ ] Додати `u_textureSize` uniform
- [ ] Замінити `texture2D()` на `textureSmooth()`
- [ ] Test at 50m and 200m
- [ ] Test zoom in/out
### Phase 2: Production (Catmull-Rom)
- [ ] Імплементувати `SampleTextureCatmullRom()`
- [ ] Оновити colormap function
- [ ] Додати boundary fading
- [ ] Test edge cases (texture edges)
- [ ] Performance benchmark
### Phase 3: Polish
- [ ] Remove old CSS blur workarounds
- [ ] Remove cellSize multiplication (not needed with Catmull-Rom)
- [ ] Cleanup debug logs
- [ ] Update version to v3.10.5
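The property Phase 2 relies on: Catmull-Rom passes exactly through the sample values while staying C1-continuous. A 1D sketch of the spline the 9-tap sampler decomposes into (illustrative, not the shader code):

```python
def catmull_rom(p0, p1, p2, p3, t):
    """1D Catmull-Rom: interpolates between p1 (t=0) and p2 (t=1);
    p0 and p3 only shape the tangents."""
    return 0.5 * (2.0 * p1
                  + (-p0 + p2) * t
                  + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t * t
                  + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t * t * t)
```

For linear input data the spline reproduces the line exactly, so flat RSRP gradients stay flat after interpolation.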
---
## Expected result
**Before (GL_LINEAR):**
```
┌───┬───┬───┐
│ A │ B │ C │ ← Visible seams between cells
├───┼───┼───┤ C0 continuity
│ D │ E │ F │
└───┴───┴───┘
```
**After (Catmull-Rom):**
```
╭───────────────╮
│ ░░░▒▒▓▓██ │ ← Smooth gradient
│ ░░░▒▒▓▓██▓▓ │ C1 continuity
│ ░░▒▒▓▓██ │ Exact values at grid points
╰───────────────╯
```
---
## References
1. [TheRealMJP's 9-tap Catmull-Rom HLSL](https://gist.github.com/TheRealMJP/c83b8c0f46b63f3a88a5986f4fa982b1)
2. [Inigo Quilez - Better Texture Filtering](https://iquilezles.org/articles/texture/)
3. [2D Catmull-Rom in 4 samples - Shadertoy](https://www.shadertoy.com/view/4tyGDD)
4. [mapbox-gl-interpolate-heatmap](https://github.com/vinayakkulkarni/mapbox-gl-interpolate-heatmap)
5. [NVIDIA GPU Gems 2 - Fast Third-Order Texture Filtering](https://developer.nvidia.com/gpugems/gpugems2/part-iii-high-quality-rendering/chapter-20-fast-third-order-texture-filtering)

View File

@@ -0,0 +1,149 @@
# RFCP Session Summary — February 4, 2026
## GPU Acceleration Complete: 195s → 11.2s (17.4x Speedup)
---
## 🎯 Session Goal
Complete GPU acceleration pipeline and optimize Full preset performance.
## 📊 Results
### Performance Achievement
| Metric | Before (3.7.0) | After (3.8.0) | Improvement |
|--------|----------------|---------------|-------------|
| **Full preset** (6640 pts, 50m) | 195s | **11.2s** | **17.4x** |
| **Standard preset** (1975 pts, 200m) | 7.2s | **2.3s** (cached) | **3.1x** |
| Phase 2.5 (distances+path_loss) | 0.33s | **0.006s** | 55x |
| Phase 2.6 (terrain LOS) | 7.29s | **0.04s** | 182x |
| Per-point (workers) | 1.1ms | **0.1ms** | 11x |
### GPU Pipeline (Final Architecture)
```
Phase 1: OSM data fetch (Overpass API) ~6-10s (network)
Phase 2: Terrain tile download + cache ~4s first / 0s cached
Phase 2.5: GPU — distances + base path_loss 0.006s ⚡
Phase 2.6: GPU — terrain LOS + diffraction loss 0.04s ⚡
Phase 2.7: GPU — antenna pattern loss ~0s ⚡
Phase 3: CPU workers — buildings + vegetation ~2s
─────────────────────────────────────────────────
TOTAL (cached): ~2.3s (Standard)
TOTAL (cached): ~11.2s (Full)
```
---
## 🔧 Changes Made (Iterations 3.7.0 → 3.8.0)
### Iteration 3.7.0 — GPU Precompute Foundation
- Added `gpu_manager` import to `coverage_service.py`
- Grid arrays created on GPU (CuPy)
- GPU precompute for distances + path_loss (vectorized)
- Fixed critical bug: CuPy worker process crashes (CUDA context sharing)
- Solution: GPU only in main process, workers use precomputed CPU values
- Fixed frontend duplicate calculation guard
### Iteration 3.8.0 — Full Vectorization
- **Phase 2.6**: `batch_terrain_los()` in `gpu_service.py`
- Vectorized terrain profile sampling for ALL points simultaneously
- Earth curvature correction vectorized
- Fresnel clearance + diffraction loss vectorized
- **Phase 2.7**: `batch_antenna_pattern()` in `gpu_service.py`
- Workers receive precomputed `has_los`, `terrain_loss`, `antenna_loss`
- Workers only compute buildings + reflections + vegetation
### Critical Fix: `_batch_elevation_lookup` Vectorization
- **Before**: Python `for` loop over 59,250 coordinates (7.29s)
- **After**: Vectorized NumPy tile indexing, loop only over tiles (0.04s)
- **Impact**: 182x speedup on Phase 2.6 alone
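The pattern behind that fix, as a standalone sketch (tile layout and names are illustrative, not the actual `_batch_elevation_lookup` signature): convert all coordinates to row/column indices with array math, then read every elevation in a single fancy-indexing operation.

```python
import numpy as np

def batch_lookup(lats, lons, tile, lat0, lon0, step):
    """Vectorized nearest-sample lookup into one tile whose row 0 is the
    north edge. Replaces a Python loop over every coordinate pair."""
    last = tile.shape[0] - 1
    col = np.clip(np.round((lons - lon0) / step).astype(np.int64), 0, last)
    row = np.clip(last - np.round((lats - lat0) / step).astype(np.int64), 0, last)
    return tile[row, col]   # one indexing op for all points
```

The loop that remains in the real code iterates only over tiles, not over the tens of thousands of coordinates.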
### Critical Fix: Vegetation Bbox Pre-filter
- **Before**: Each sample point checked ALL 683 vegetation polygons
- **After**: Bounding box pre-filter skips 95%+ of polygons
- **Impact**: Full preset 156s → 11.2s
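The idea behind the pre-filter, sketched standalone (the real `_point_in_vegetation` internals differ, and in practice the boxes are computed once per polygon and cached, not per query): reject polygons by bounding box before any exact point-in-polygon work.

```python
def bbox_candidates(px, py, polygons):
    """Return only polygons whose axis-aligned bounding box contains (px, py);
    the expensive exact point-in-polygon test then runs on these survivors."""
    survivors = []
    for poly in polygons:                      # poly: list of (x, y) vertices
        xs = [v[0] for v in poly]
        ys = [v[1] for v in poly]
        if min(xs) <= px <= max(xs) and min(ys) <= py <= max(ys):
            survivors.append(poly)
    return survivors
```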
---
## 📁 Files Modified
### Backend
- `app/services/coverage_service.py` — precomputed values passthrough
- `app/services/parallel_coverage_service.py` — 5 worker functions updated
- `app/services/gpu_service.py` — batch_terrain_los, batch_antenna_pattern, batch_final_rsrp
- `app/services/vegetation_service.py` — bbox pre-filter on _point_in_vegetation
### Build
- PyInstaller ONEDIR build: 1.6 GB dist → 1.2 GB NSIS installer
- CUDA DLLs bundled (cublas, cusparse, curand, etc.)
- Runtime hook for DLL directory setup
---
## 🏗️ Architecture (Final State)
```
Main Process (asyncio event loop)
├── Phase 2.5: GPU precompute
│ └── CuPy arrays: distances, path_loss (vectorized)
├── Phase 2.6: GPU terrain LOS
│ └── Batch elevation lookup (vectorized NumPy)
│ └── Earth curvature + Fresnel (CuPy)
│ └── Diffraction loss (CuPy)
├── Phase 2.7: GPU antenna pattern
│ └── Bearing + pattern loss (CuPy)
└── Phase 3: CPU ProcessPool (3 workers)
└── Receive precomputed dict per point
└── Skip terrain/antenna (already computed)
└── Only: buildings + reflections + vegetation
└── Pure NumPy + CPU
```
**Key Rule**: GPU (CuPy) code ONLY in main process. Workers never import gpu_manager.
---
## 🎮 Side Activity: Dwarf Fortress Gamelog Analysis
Analyzed 102,669-line gamelog from fort "Lashderush (Prophethandle)":
- 8-9 years, 23 migrant waves, 1,943 masterpieces
- 51,599 combat actions, only 4 deaths (weredeer outbreak)
- Top crafter: Momuz Nëkorlibash (201 masterpieces)
- Sole survivor transforms between dwarf/weredeer
---
## 🔮 Next Steps
### Immediate
- [x] ~~GPU acceleration~~ ✅ COMPLETE
- [ ] SRTM terrain data integration (higher accuracy than current tiles)
- [ ] Session history persistence across app restarts
### Short Term
- [ ] Multi-station dashboard
- [ ] Project export/import (JSON)
- [ ] Link budget analysis view
### Medium Term
- [ ] LimeSDR hardware integration testing
- [ ] Real RF validation against field measurements
- [ ] 3D visualization mode
---
## 💡 Key Learnings
1. **Python for-loops are the enemy** — `_batch_elevation_lookup` went from 7.3s to 0.04s by replacing `enumerate(zip())` with NumPy indexing
2. **Spatial pre-filtering is massive** — vegetation bbox check eliminated 95%+ of polygon tests
3. **GPU context can't be shared across processes** — spawn mode creates new CUDA contexts that OOM
4. **Vectorize in main, distribute to workers** — best pattern for GPU + multiprocessing
5. **Profile before optimizing** — Phase 2.6 bottleneck was invisible until measured
---
*Session duration: ~4 hours*
*Lines of code changed: ~300*
*Performance gain: 17.4x*
*Feeling: 🚀*

View File

@@ -0,0 +1,260 @@
# RFCP Session 2026-02-04 — Complete Development Log
**Session:** February 4, 2026 (afternoon/evening)
**Duration:** ~6 hours active development
**Iterations completed:** 3.9.0 → 3.9.1 → 3.10.0 → 3.10.1 → 3.10.2 → 3.10.3 → 3.10.4 (pending)
---
## What Was Done This Session
### Infrastructure: terra.eliah.one Tile Server ✅
- **DNS:** terra.eliah.one → 2.56.207.143 (VPS A, Hayhost)
- **Caddy:** File server with browse at /opt/terra/tiles/
- **SRTM3 (90m):** 187 tiles, 514.5 MB — full Ukraine (N44-N51, E018-E041)
- **SRTM1 (30m):** 160 tiles, 3,957.3 MB — full Ukraine (N44-N51, E022-E041)
- **Sources:** viewfinderpanoramas.org (SRTM3, void-filled), AWS S3 elevation-tiles-prod (SRTM1)
- **Index:** /api/index → tile_index.json (version 2, dual dataset)
- **Public access verified:** https://terra.eliah.one/srtm1/ and /srtm3/
### Iteration 3.9.1: Terra Integration ✅
- terrain_service.py updated with prioritized SRTM sources:
1. terra.eliah.one/srtm1/ (30m, preferred)
2. terra.eliah.one/srtm3/ (90m, fallback)
3. AWS S3 skadi mirror (public fallback)
- New endpoints: /api/terrain/status, /api/terrain/download, /api/terrain/index
- Auto-downloads tiles on first use, cached permanently on disk
- 173 tiles loaded (4,278.6 MB) confirmed in Data Cache panel
### Iteration 3.10.0: Link Budget + Fresnel Zone + Interference ✅
- **Link Budget Calculator:** Full TX→RX path analysis panel
- EIRP calculation, FSPL, terrain loss, received power, link margin
- RX point placement on map (orange marker, dashed line)
- ✓ LINK OK / ✗ FAIL status with margin display
- **Fresnel Zone Visualization:** On Terrain Profile chart
- First Fresnel zone ellipse overlay (semi-transparent)
- Red highlighting where terrain intrudes zone
- Frequency-aware (zone size changes with MHz)
- Clearance calculation with recommendation text
- **Interference Modeling (C/I):** Backend ready
- Carrier-to-interference ratio per grid point
- Co-frequency site grouping
- GPU-accelerated (CuPy vectorized)
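The math these panels implement can be sketched in a few lines (a hedged approximation of the panel logic, not the backend's actual functions; names are illustrative). FSPL uses the standard 32.44 dB km/MHz form, and the Fresnel radius is the first-zone formula the overlay draws:

```python
import math

def fspl_db(dist_m, freq_mhz):
    """Free-space path loss; distance in meters, frequency in MHz."""
    return 20 * math.log10(dist_m / 1000.0) + 20 * math.log10(freq_mhz) + 32.44

def link_margin_db(tx_power_dbm, tx_gain_dbi, rx_gain_dbi,
                   freq_mhz, dist_m, rx_sensitivity_dbm, extra_loss_db=0.0):
    """EIRP minus losses minus sensitivity; positive margin = LINK OK."""
    eirp = tx_power_dbm + tx_gain_dbi
    rx_power = eirp - fspl_db(dist_m, freq_mhz) - extra_loss_db + rx_gain_dbi
    return rx_power - rx_sensitivity_dbm

def fresnel_radius_m(freq_mhz, d1_m, d2_m):
    """First Fresnel zone radius at a point d1 from TX and d2 from RX."""
    wavelength = 300.0 / freq_mhz      # meters
    return math.sqrt(wavelength * d1_m * d2_m / (d1_m + d2_m))
```

At 900 MHz over a 10 km link the mid-path first-zone radius is roughly 29 m, which is why even modest terrain intrudes easily.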
### Iteration 3.10.1: UI Bugfixes (partial) ✅
- Elevation opacity control
- Data Cache panel with region downloads
- Various dark theme text fixes
### Iteration 3.10.2: Tool Mode System ✅
- **ActiveTool state:** 'none' | 'ruler' | 'rx-placement' | 'site-placement'
- Single map click handler dispatches to active tool
- Cursor management (default/crosshair/cell per tool)
- Ruler snap-to-site (20px threshold)
- Event propagation fixes (partial — terrain profile still leaks)
### Iteration 3.10.3: Calculator Button + Ruler Limit ✅
- Calculator button added to right toolbar
- Ruler limited to 2 points max (point-to-point only)
- Third click starts new measurement
### Iteration 3.10.4: Pending Fixes 🔧
- Terrain Profile click-through (needs stopImmediatePropagation on native event)
- TX Height hardcoded to 2m in Link Budget (should read from site config)
---
## Current State — What Works
### Core Features ✅
- Multi-site RF coverage planning with multi-sector antennas
- GPU-accelerated coverage calculation (RTX 4060, CuPy/CUDA)
- 10 propagation models (Free-Space, terrain_los, buildings, materials, dominant_path, street_canyon, reflections, water_reflection, vegetation, atmospheric)
- Performance: 11.2s Full preset (17.4x speedup from v3.8.0)
- Geographic-scale heatmap with Leaflet tile rendering
### Terrain Integration ✅
- SRTM elevation data (30m and 90m resolution)
- Bilinear interpolation for sub-pixel accuracy
- Memory-mapped I/O with LRU cache (20 tiles)
- Auto-detection SRTM1 vs SRTM3 by file size
- Terrain-aware coverage calculation (Line of Sight, terrain loss)
- Terrain Profile viewer with elevation chart
### Analysis Tools ✅
- **Link Budget Calculator** — point-to-point path analysis
- **Fresnel Zone Visualization** — on terrain profile chart
- **Ruler/Distance Measurement** — 2-point with snap-to-site
- **Terrain Profile** — elevation cross-section between 2 points
- **Coverage Statistics** — Excellent/Good/Fair/Weak breakdown
- **Session History** — compare calculation runs
### Data Management ✅
- Export: CSV, GeoJSON coverage data
- Import/Export: Site configurations (JSON)
- Data Cache: Regional tile pre-download (Ukraine, Eastern Ukraine, Donbas, Central, Western, Kyiv)
- 173 terrain tiles (4.3 GB) cached locally
### Infrastructure ✅
- Frontend: React 18 + TypeScript + Vite + Leaflet
- Backend: Python FastAPI + CuPy GPU pipeline
- Tile Server: terra.eliah.one (Caddy file_server)
- Packaging: PyInstaller + Electron (Windows installer)
- Desktop app: RFCP - RF Coverage Planner (native window)
---
## Known Bugs (for 3.10.4+)
| # | Bug | Severity | Root Cause |
|---|-----|----------|------------|
| 1 | Terrain Profile click places ruler point | Medium | stopPropagation not blocking Leaflet's native DOM listener. Need `e.nativeEvent.stopImmediatePropagation()` or move popup outside Leaflet container |
| 2 | TX Height shows 2m in Link Budget | Low | Hardcoded default, not reading from site config field |
| 3 | Cursor still shows hand in some cases | Low | Leaflet default grab cursor not fully overridden |
| 4 | Elevation Colors opacity slider | Low | May need correct layer reference binding |
---
## Roadmap — Updated February 4, 2026
### ✅ COMPLETED (Iterations 1-3.10.3)
**Phase 1: Foundation** (Dec 2024)
- React + TypeScript + Vite + Leaflet setup
- Basic site management, coverage calculation
**Phase 2: Core Features** (Jan 2025, Iterations 1-10.1)
- Multi-site, multi-sector, geographic heatmap
- Coverage statistics, keyboard shortcuts
- Code audit, production polish
**Phase 3: GPU Acceleration** (Feb 2-3, 2026, Iterations 3.1-3.8)
- CuPy/CUDA pipeline: 195s → 11.2s (17.4x)
- PyInstaller build with CUDA bundling
- Windows native backend (no WSL2)
**Phase 4: Terrain Integration** (Feb 4, 2026, Iterations 3.9-3.10)
- SRTM tile server (terra.eliah.one)
- 347 tiles, 4.5 GB, full Ukraine coverage
- Terrain-aware propagation, terrain profiles
- Link budget calculator, Fresnel zones
- Tool mode system, interference modeling
### 🔧 REMAINING ON CURRENT STACK
**3.10.4: Final Bugfixes** (1-2 hours)
- Terrain Profile click propagation fix
- TX Height from site config
- Cursor cleanup
- Elevation opacity fix
**3.11: Polish & QA** (optional, 2-3 hours)
- Interference C/I heatmap toggle on frontend
- Coverage comparison mode (before/after)
- Keyboard shortcuts help modal (?)
- Settings persistence (localStorage)
- Input validation improvements
**3.12: Offline Package** (optional, 2-3 hours)
- SRTM3 tiles bundled in installer (~180 MB gzipped)
- SRTM1 as optional "HD Terrain Pack" download
- First-run extraction to data/terrain/
- Full offline operation without internet
### 🔮 FUTURE (New Stack — When Inspired)
**Stack Migration: Tauri + SvelteKit + Rust**
- Native performance without Electron overhead
- Rust backend replacing Python FastAPI
- GPU compute via wgpu or Vulkan
- Smaller installer (<100 MB vs current ~1.6 GB)
- Already tested Tauri for UMTC Wiki project
**Advanced RF Features:**
- 3D terrain visualization (Three.js or WebGPU)
- Drive test data import and comparison
- Multiple frequency band planning
- Custom propagation model editor
- Real-time collaboration (via Matrix?)
**Field Deployment:**
- Live USB with BitLocker encryption
- Offline-first with full Ukraine terrain
- Integration with UMTC tactical mesh
- LoRa/IoT device position planning
---
## Tech Specs Quick Reference
### Backend
```
Location: D:\root\rfcp\backend
Framework: FastAPI + Uvicorn
GPU: CuPy + CUDA (RTX 4060)
Python: 3.x with numpy, scipy, httpx
Build: PyInstaller ONEDIR (~1.6 GB with CUDA)
Start: python -m uvicorn app.main:app --host 0.0.0.0 --port 8000
```
### Frontend
```
Location: D:\root\rfcp\frontend
Framework: React 18 + TypeScript + Vite
Map: Leaflet + custom geographic heatmap
State: Zustand
Build: npm run build → dist/
Bundle: 163KB gzipped
```
### Tile Server
```
Domain: terra.eliah.one
Server: VPS A (2.56.207.143), Caddy file_server
Path: /opt/terra/tiles/srtm1/ and /opt/terra/tiles/srtm3/
Index: /api/index → tile_index.json
Health: /health → "ok"
Tiles: 187 SRTM3 (515 MB) + 160 SRTM1 (3.9 GB)
```
### Key Files
```
terrain_service.py — SRTM tile loading, bilinear interpolation, elevation profiles
gpu_service.py — CuPy/CUDA coverage calculation pipeline
coverage_service.py — Propagation models, coverage orchestration
routes/terrain.py — /api/terrain/status, /download, /index
routes/coverage.py — /api/link-budget, /api/fresnel-profile
frontend/src/store/tools.ts — ActiveTool state management
frontend/src/components/panels/LinkBudgetPanel.tsx
frontend/src/components/map/TerrainProfile.tsx
frontend/src/components/map/MeasurementTool.tsx
```
---
## Performance Benchmarks
| Preset | Resolution | Points | Time | GPU |
|--------|-----------|--------|------|-----|
| Standard | 200m | 1,975 | 7.4s | ✅ |
| Full | 50m | 6,639-6,662 | 11.2-11.7s | ✅ |
| 50km radius | 200m | 4,966 | ~30s | ✅ |
**GPU:** NVIDIA RTX 4060 (CUDA)
**Speedup:** 17.4x vs CPU-only (v3.7.0 baseline)
---
## Session Notes
A productive session. In ~6 hours:
- Stood up a tile server from scratch (terra.eliah.one)
- 347 terrain data tiles covering all of Ukraine
- Integrated terrain into the backend (auto-download, status API)
- Added Link Budget Calculator, Fresnel Zone, Interference modeling
- Introduced the Tool Mode System to resolve click conflicts
- Fixed a pile of UX bugs
The product is close to completion on the current stack. Core functionality works; what remains is polish bugs and optional features. The Tauri+SvelteKit+Rust refactor can wait for inspiration, it is not urgent.
Half Sword is downloaded and waiting. 🗡️

View File

@@ -0,0 +1,193 @@
# RFCP v3.10.5 Session Summary - 2026-02-06
## What we did today
### 1. WebGL Texture-Based Coverage (DONE ✅)
**Problem:** The Canvas heatmap was blocky; we wanted smooth interpolation.
**Solution:** Texture-based WebGL with a smoothstep shader + nearest-neighbor fill.
**File:** `frontend/src/components/map/WebGLCoverageLayer.tsx`
**How it works:**
1. Create a texture where each pixel = RSRP value
2. Nearest-neighbor fill to close gaps (circular coverage → rectangular texture)
3. Smoothstep shader for C1-continuous interpolation
4. Colormap applied AFTER interpolation
**Status:** Works, but still blocky on zoom in because of the nearest-neighbor fill.
---
### 2. WebGL Radial Gradients Coverage (IN PROGRESS 🔄)
**Goal:** Smooth, good-looking gradients like the Canvas heatmap, but GPU-accelerated.
**File:** `frontend/src/components/map/WebGLRadialCoverageLayer.tsx`
**How it works:**
1. Each point = a quad with Gaussian radial falloff
2. Additive blending into a float framebuffer: (weight × rsrp, weight)
3. Final composite pass: normalize (R/G) + colormap
**Current status:**
- ✅ Framebuffer is created correctly
- ✅ Points render (framebuffer contains data)
- ✅ Composite pass works (final pixel has color)
- ✅ 50m shows beautiful smooth gradients!
- ✅ 200m now renders too (after the radius fix)
- ⚠️ Coverage radius incomplete (clipped before reaching 10km)
- ⚠️ Dark ring at the periphery (falloff too steep?)
- ⚠️ Selector dropdown is gray on white (CSS issue)
---
### 3. Coverage Renderer Selector (DONE ✅)
**File:** `frontend/src/store/settings.ts`
**Added:** `coverageRenderer: 'radial' | 'texture' | 'canvas'`
**UI:** Dropdown in the Coverage Settings panel
**Fallback chain:**
- Radial fails → Texture
- Texture fails → Canvas
---
## Remaining Work (Next Session)
### Priority 1: Fix Radial Coverage Radius
**Symptom:** Coverage does not span the full 10km; it is clipped earlier.
**Possible causes:**
1. Canvas bounds do not include padding for the point radius
2. Points near the edge have gradients that extend past the canvas
3. Normalized coordinate calculation is wrong at the edges
**Debug:**
```javascript
// Compare canvas bounds vs actual coverage extent
console.log('Canvas bounds:', bounds);
console.log('Points extent:', {
minLat: Math.min(...points.map(p => p.lat)),
maxLat: Math.max(...points.map(p => p.lat)),
// ...
});
```
**Fix approach:**
1. Add padding equal to the point radius to the canvas bounds
2. Or clip points that extend past the bounds
---
### Priority 2: Fix Dark Ring on Periphery
**Symptom:** Dark ring at the edge of the coverage area.
**Cause:** Peripheral points have fewer neighbors → lower total weight → darker color after normalization.
**Fix options:**
1. Increase the radius multiplier (3.0× instead of 2.5×)
2. Or add edge detection and boost alpha there
3. Or apply a minimum-weight threshold before normalization
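Option 3 in miniature (a hedged sketch; the real composite runs in the fragment shader): normalize value = R/G only where the accumulated weight clears a floor, discarding starved edge pixels instead of rendering them dark.

```python
def composite(acc_value, acc_weight, min_weight=0.05):
    """Final-pass normalization: weighted average where coverage is
    sufficient, None (discard) where too few points contributed."""
    if acc_weight < min_weight:
        return None
    return acc_value / acc_weight
```

A pixel touched only by one faint Gaussian tail (weight 0.02) is discarded rather than drawn as a dark fringe.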
---
### Priority 3: Fix Selector Dropdown Styling
**Symptom:** Gray text on a white background (hard to read).
**Fix:** Update the CSS classes for the dropdown in App.tsx.
---
### Priority 4: Performance Testing
Test with large point counts:
- 10,000+ points
- 50,000+ points
- Measure frame time
If it is slow, implement instanced rendering.
---
## Files Changed Today
```
frontend/src/components/map/
├── WebGLCoverageLayer.tsx # Texture-based (updated with NN fill)
├── WebGLRadialCoverageLayer.tsx # NEW - Radial gradients
└── GeographicHeatmap.tsx # Canvas fallback (unchanged)
frontend/src/store/
└── settings.ts # Added coverageRenderer option
frontend/src/
└── App.tsx # Integrated renderer selector
```
---
## Console Debug Commands
```javascript
// Check which renderer is active
document.querySelectorAll('canvas').forEach(c =>
console.log(c.className, c.width, c.height)
);
// Check WebGL errors
const canvas = document.querySelector('.webgl-radial-coverage');
const gl = canvas?.getContext('webgl');
console.log('WebGL error:', gl?.getError());
// Read center pixel (capture the result so it can be inspected)
const px = new Uint8Array(4);
gl?.readPixels(canvas.width / 2, canvas.height / 2, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, px);
console.log('Center pixel:', px);
```
---
## Key Insights Learned
1. **Texture-based vs Radial:** Texture good for terrain detail accuracy, Radial good for beautiful visualization.
2. **Float framebuffer:** Need `EXT_color_buffer_float` extension. Fallback: use RGBA8 with encoding.
3. **Additive blending:** `gl.blendFunc(gl.ONE, gl.ONE)` for accumulation, then `gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA)` for final composite.
4. **Weighted average in shader:** Store (weight × value, weight), then normalize: value = R / G.
5. **Radius scaling:** Higher resolution = more points = smaller radius. Lower resolution = fewer points = larger radius to compensate.
---
## Git Status
- ✅ Pushed working WebGL texture-based coverage
- 🔄 WebGL radial in progress (functional but incomplete)
---
## Next Session Start Point
1. Open the RFCP project
2. `npm run dev` in frontend
3. Test radial coverage at 50m and 200m
4. Fix radius issue (Priority 1)
5. Fix dark ring (Priority 2)
6. Polish UI (Priority 3)
---
## Session Stats
- **Duration:** ~6 hours
- **Iterations:** 15+ fix attempts
- **Final result:** Working radial gradients renderer (90% complete)
- **Key breakthrough:** Discovering framebuffer had data but composite pass wasn't reading it

View File

@@ -0,0 +1,723 @@
# RFCP - Iteration 3.3.0: Performance Architecture Refactor
## Overview
Major refactoring based on research into professional RF tools (Signal-Server, SPLAT!, CloudRF SLEIPNIR, Sionna RT).
**Root cause identified:** Pickle serialization overhead dominates computation time.
- DP_TIMING shows: 0.6-0.9ms per point (actual calculation)
- Real throughput: 258ms per point
- **99% of time is IPC overhead, not calculation!**
**Target:** Reduce 5km Detailed from timeout (300s) to <30s
---
## Part 1: Eliminate Pickle Overhead (CRITICAL)
### 1.1 Shared Memory for Buildings
Currently terrain is in shared memory, but **15,000 buildings are pickled for every chunk**.
**File:** `backend/app/services/parallel_coverage_service.py`
```python
import os
from multiprocessing import shared_memory
import numpy as np
def buildings_to_shared_memory(buildings: list) -> tuple:
"""
Convert buildings to numpy arrays and store in shared memory.
Returns: dict mapping 'buildings'/'vertices' to (shm_name, shape, dtype) for reconstruction in workers
"""
# Extract building data into numpy arrays
# For each building we need: lat, lon, height, num_vertices, vertices_flat
# Simplified: store as structured array
building_data = []
all_vertices = []
vertex_offsets = [0]
for b in buildings:
coords = extract_coords(b)
height = b.get('properties', {}).get('height', 10.0)
building_data.append({
'lat': np.mean([c[1] for c in coords]),
'lon': np.mean([c[0] for c in coords]),
'height': height,
'vertex_start': len(all_vertices),
'vertex_count': len(coords)
})
all_vertices.extend(coords)
vertex_offsets.append(len(all_vertices))
# Create numpy arrays
buildings_arr = np.array([
(b['lat'], b['lon'], b['height'], b['vertex_start'], b['vertex_count'])
for b in building_data
], dtype=[
('lat', 'f8'), ('lon', 'f8'), ('height', 'f4'),
('vertex_start', 'i4'), ('vertex_count', 'i2')
])
vertices_arr = np.array(all_vertices, dtype=[('lon', 'f8'), ('lat', 'f8')])
# Store in shared memory
shm_buildings = shared_memory.SharedMemory(
create=True,
size=buildings_arr.nbytes,
name=f"rfcp_buildings_{os.getpid()}"
)
shm_vertices = shared_memory.SharedMemory(
create=True,
size=vertices_arr.nbytes,
name=f"rfcp_vertices_{os.getpid()}"
)
# Copy data
np.ndarray(buildings_arr.shape, dtype=buildings_arr.dtype,
buffer=shm_buildings.buf)[:] = buildings_arr
np.ndarray(vertices_arr.shape, dtype=vertices_arr.dtype,
buffer=shm_vertices.buf)[:] = vertices_arr
return {
'buildings': (shm_buildings.name, buildings_arr.shape, buildings_arr.dtype),
'vertices': (shm_vertices.name, vertices_arr.shape, vertices_arr.dtype)
}
def buildings_from_shared_memory(shm_info: dict) -> tuple:
"""Reconstruct buildings arrays from shared memory in worker."""
shm_b = shared_memory.SharedMemory(name=shm_info['buildings'][0])
shm_v = shared_memory.SharedMemory(name=shm_info['vertices'][0])
buildings = np.ndarray(
shm_info['buildings'][1],
dtype=shm_info['buildings'][2],
buffer=shm_b.buf
)
vertices = np.ndarray(
shm_info['vertices'][1],
dtype=shm_info['vertices'][2],
buffer=shm_v.buf
)
return buildings, vertices, shm_b, shm_v
```
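A self-contained round trip of the same pattern (single process, for illustration only; in the real flow the segment name, shape, and dtype travel to the worker, which attaches by name):

```python
import numpy as np
from multiprocessing import shared_memory

# Structured array -> shared memory -> zero-copy view, then clean up.
src = np.array([(50.45, 30.52, 12.0)],
               dtype=[('lat', 'f8'), ('lon', 'f8'), ('height', 'f4')])
shm = shared_memory.SharedMemory(create=True, size=src.nbytes)
view = np.ndarray(src.shape, dtype=src.dtype, buffer=shm.buf)
view[:] = src                       # copy into shared memory
height = float(view['height'][0])   # a worker would read it the same way
del view                            # release buffer views before closing
shm.close()
shm.unlink()
```

Note the `del view` before `close()`: a live NumPy view exports the buffer and makes `close()` raise `BufferError`.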
### 1.2 Increase Batch Size
**Current:** 7 chunks of ~144 points = high IPC overhead per point
**Target:** 2-3 chunks of ~300-400 points = amortize IPC cost
```python
# In parallel_coverage_service.py
def calculate_optimal_chunk_size(total_points: int, num_workers: int) -> int:
"""
Calculate chunk size to minimize IPC overhead.
Rule: computation_time should be 10-100x serialization_time
For RF calculations: ~1ms compute, ~50ms serialize
So batch at least 500 points to make compute dominate.
"""
min_chunk = 300 # Minimum to amortize IPC
max_chunk = 1000 # Maximum for memory
ideal_chunks = max(2, num_workers)  # At least one chunk per worker (min 2 total)
ideal_size = total_points // ideal_chunks
return max(min_chunk, min(max_chunk, ideal_size))
```
### 1.3 Pre-build Spatial Index Once
Currently spatial index may be rebuilt per-chunk. Build once and share reference.
```python
from collections import defaultdict

class SharedSpatialIndex:
"""Spatial index that can be shared across processes via shared memory."""
def __init__(self, buildings_shm_info: dict):
self.buildings, self.vertices, _, _ = buildings_from_shared_memory(buildings_shm_info)
self._build_grid()
def _build_grid(self):
"""Build simple grid-based spatial index."""
# Grid cells of ~100m
self.cell_size = 0.001 # ~111m in degrees
self.grid = defaultdict(list)
for i, b in enumerate(self.buildings):
cell_x = int(b['lon'] / self.cell_size)
cell_y = int(b['lat'] / self.cell_size)
self.grid[(cell_x, cell_y)].append(i)
def query_radius(self, lat: float, lon: float, radius_m: float) -> list:
"""Get building indices within radius."""
radius_deg = radius_m / 111000
cells_to_check = int(radius_deg / self.cell_size) + 1
center_x = int(lon / self.cell_size)
center_y = int(lat / self.cell_size)
result = []
for dx in range(-cells_to_check, cells_to_check + 1):
for dy in range(-cells_to_check, cells_to_check + 1):
result.extend(self.grid.get((center_x + dx, center_y + dy), []))
return result
```
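`query_radius` above returns coarse candidates, not exact hits; the caller still applies a precise distance test. The two-stage pattern standalone (cell math mirrors the index above; names are illustrative):

```python
from collections import defaultdict
import math

CELL = 0.001  # ~111 m in degrees, same as the index above

def build_grid(points):
    """Stage 0: bucket point indices into grid cells."""
    grid = defaultdict(list)
    for i, (lat, lon) in enumerate(points):
        grid[(int(lon / CELL), int(lat / CELL))].append(i)
    return grid

def query(points, grid, lat, lon, radius_m):
    """Stage 1: collect cell candidates. Stage 2: exact distance filter."""
    n = int((radius_m / 111000.0) / CELL) + 1
    cx, cy = int(lon / CELL), int(lat / CELL)
    hits = []
    for dx in range(-n, n + 1):
        for dy in range(-n, n + 1):
            for i in grid.get((cx + dx, cy + dy), []):
                plat, plon = points[i]
                d_m = math.hypot((plat - lat) * 111000.0,
                                 (plon - lon) * 111000.0 * math.cos(math.radians(lat)))
                if d_m <= radius_m:
                    hits.append(i)
    return hits
```

The coarse stage over-returns by design; correctness comes from the exact filter, speed from scanning only nearby cells.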
---
## Part 2: Radial Calculation Pattern (Signal-Server style)
Instead of grid, calculate along radial spokes for faster coverage estimation.
### 2.1 Radial Engine
**File:** `backend/app/services/radial_coverage_service.py` (NEW)
```python
"""
Radial coverage calculation engine inspired by Signal-Server/SPLAT!
Instead of calculating every grid point independently:
1. Cast rays from TX in all directions (0-360°)
2. Sample terrain along each ray (profile)
3. Apply propagation model to profile
4. Interpolate between rays for final grid
This is 10-50x faster because:
- Terrain profiles are linear (cache-friendly)
- No building geometry per-point (use clutter model)
- Embarrassingly parallel by azimuth
"""
import numpy as np
from concurrent.futures import ThreadPoolExecutor
import math
class RadialCoverageEngine:
def __init__(self, terrain_service, propagation_model):
self.terrain = terrain_service
self.model = propagation_model
def calculate_coverage(
self,
tx_lat: float, tx_lon: float, tx_height: float,
radius_m: float,
frequency_mhz: float,
tx_power_dbm: float,
num_radials: int = 360, # 1° resolution
samples_per_radial: int = 100,
num_threads: int = 8
) -> dict:
"""
Calculate coverage using radial ray-casting.
Returns dict with 'radials' (raw data) and 'grid' (interpolated).
"""
# Pre-load terrain tiles
self._preload_terrain(tx_lat, tx_lon, radius_m)
# Calculate radials in parallel (by azimuth sectors)
sector_size = num_radials // num_threads
with ThreadPoolExecutor(max_workers=num_threads) as executor:
futures = []
for i in range(num_threads):
start_az = i * sector_size
end_az = (i + 1) * sector_size if i < num_threads - 1 else num_radials
futures.append(executor.submit(
self._calculate_sector,
tx_lat, tx_lon, tx_height,
radius_m, frequency_mhz, tx_power_dbm,
start_az, end_az, samples_per_radial
))
# Collect results
all_radials = []
for f in futures:
all_radials.extend(f.result())
return {
'radials': all_radials,
'center': (tx_lat, tx_lon),
'radius': radius_m,
'num_radials': num_radials
}
def _calculate_sector(
self,
tx_lat, tx_lon, tx_height,
radius_m, frequency_mhz, tx_power_dbm,
start_az, end_az, samples_per_radial
) -> list:
"""Calculate radials for one azimuth sector."""
results = []
for az in range(start_az, end_az):
radial = self._calculate_radial(
tx_lat, tx_lon, tx_height,
radius_m, frequency_mhz, tx_power_dbm,
az, samples_per_radial
)
results.append(radial)
return results
def _calculate_radial(
self,
tx_lat, tx_lon, tx_height,
radius_m, frequency_mhz, tx_power_dbm,
azimuth_deg, num_samples
) -> dict:
"""
Calculate signal strength along one radial.
Uses terrain profile + Longley-Rice style calculation.
"""
az_rad = math.radians(azimuth_deg)
cos_lat = math.cos(math.radians(tx_lat))
# Sample points along radial
distances = np.linspace(100, radius_m, num_samples)
# Calculate lat/lon for each sample
lat_offsets = (distances / 111000) * math.cos(az_rad)
lon_offsets = (distances / (111000 * cos_lat)) * math.sin(az_rad)
lats = tx_lat + lat_offsets
lons = tx_lon + lon_offsets
# Get terrain profile
elevations = np.array([
self.terrain.get_elevation_sync(lat, lon)
for lat, lon in zip(lats, lons)
])
tx_elevation = self.terrain.get_elevation_sync(tx_lat, tx_lon)
# Calculate path loss for each point
rsrp_values = []
los_flags = []
for i, (dist, elev) in enumerate(zip(distances, elevations)):
# Simple LOS check using terrain profile up to this point
profile = elevations[:i+1]
has_los = self._check_los_profile(
tx_elevation + tx_height,
elev + 1.5, # RX height
profile,
distances[:i+1]
)
# Path loss (using configured model)
path_loss = self.model.calculate_path_loss(
frequency_mhz, dist, tx_height, 1.5,
has_los=has_los
)
# Add diffraction loss if NLOS
if not has_los:
diff_loss = self._calculate_diffraction_loss(
tx_elevation + tx_height,
elev + 1.5,
profile,
distances[:i+1],
frequency_mhz
)
path_loss += diff_loss
rsrp = tx_power_dbm - path_loss
rsrp_values.append(rsrp)
los_flags.append(has_los)
return {
'azimuth': azimuth_deg,
'distances': distances.tolist(),
'lats': lats.tolist(),
'lons': lons.tolist(),
'rsrp': rsrp_values,
'has_los': los_flags
}
def _check_los_profile(self, tx_h, rx_h, profile, distances) -> bool:
"""Check LOS using terrain profile (Fresnel zone clearance)."""
if len(profile) < 2:
return True
total_dist = distances[-1]
# Line from TX to RX
for i in range(1, len(profile) - 1):
d = distances[i]
# Expected height on LOS line
expected_h = tx_h + (rx_h - tx_h) * (d / total_dist)
# Actual terrain height
actual_h = profile[i]
if actual_h > expected_h - 0.6: # Small clearance margin
return False
return True
def _calculate_diffraction_loss(self, tx_h, rx_h, profile, distances, freq_mhz) -> float:
"""Calculate diffraction loss using Deygout method."""
# Find main obstacle
max_v = -999
max_idx = -1
total_dist = distances[-1]
wavelength = 300 / freq_mhz # meters
for i in range(1, len(profile) - 1):
d1 = distances[i]
d2 = total_dist - d1
# Height of LOS line at this point
los_h = tx_h + (rx_h - tx_h) * (d1 / total_dist)
# Obstacle height above LOS
h = profile[i] - los_h
if h > 0:
# Fresnel parameter
v = h * math.sqrt(2 * (d1 + d2) / (wavelength * d1 * d2))
if v > max_v:
max_v = v
max_idx = i
if max_v < -0.78:
return 0.0
# Knife-edge diffraction loss (ITU-R P.526)
if max_v < 0:
loss = 6.02 + 9.11 * max_v - 1.27 * max_v * max_v
elif max_v < 2.4:
loss = 6.02 + 9.11 * max_v + 1.65 * max_v * max_v
else:
loss = 12.953 + 20 * math.log10(max_v)
return max(0, loss)
```
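The equirectangular offsets used in the radial sampler above (111 km per degree of latitude, longitude scaled by cos(lat)) can be sanity-checked in isolation. This is a standalone sketch, not part of the engine:

```python
import math

def radial_point(tx_lat: float, tx_lon: float, azimuth_deg: float, dist_m: float):
    """Mirror the offset geometry used in the radial sampler (fine for short ranges)."""
    az = math.radians(azimuth_deg)
    lat = tx_lat + (dist_m / 111_000) * math.cos(az)
    lon = tx_lon + (dist_m / (111_000 * math.cos(math.radians(tx_lat)))) * math.sin(az)
    return lat, lon

# 10 km due north from the equator: ~0.09 degrees of latitude, longitude unchanged.
lat, lon = radial_point(0.0, 30.0, 0.0, 10_000)
print(round(lat, 4), round(lon, 4))  # 0.0901 30.0
```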
---
## Part 3: Propagation Model Updates
### 3.1 Add Longley-Rice ITM Support
**File:** `backend/app/services/propagation_models/itm_model.py` (NEW)
```python
"""
Longley-Rice Irregular Terrain Model (ITM)
Best for: VHF/UHF terrain-based propagation (20 MHz - 20 GHz)
Based on: itmlogic Python package
Key parameters:
- Earth dielectric constant (eps): 4-81 (15 typical for ground)
- Ground conductivity (sgm): 0.001-5.0 S/m
- Atmospheric refractivity (ens): 250-400 N-units (301 typical)
- Climate: 1=Equatorial, 2=Continental Subtropical, etc.
"""
import numpy as np

try:
from itmlogic import itmlogic_p2p
HAS_ITMLOGIC = True
except ImportError:
HAS_ITMLOGIC = False
from .base_model import BasePropagationModel, PropagationInput, PropagationResult
class LongleyRiceModel(BasePropagationModel):
"""Longley-Rice ITM propagation model."""
name = "Longley-Rice-ITM"
frequency_range = (20, 20000) # MHz
distance_range = (1000, 2000000) # meters
# Default ITM parameters
DEFAULT_PARAMS = {
'eps': 15.0, # Earth dielectric constant
'sgm': 0.005, # Ground conductivity (S/m)
'ens': 301.0, # Atmospheric refractivity (N-units)
'pol': 0, # Polarization: 0=horizontal, 1=vertical
'mdvar': 12, # Mode of variability
'klim': 5, # Climate: 5=Continental Temperate
}
# Ground parameters by type
GROUND_PARAMS = {
'average': {'eps': 15.0, 'sgm': 0.005},
'poor': {'eps': 4.0, 'sgm': 0.001},
'good': {'eps': 25.0, 'sgm': 0.020},
'fresh_water': {'eps': 81.0, 'sgm': 0.010},
'sea_water': {'eps': 81.0, 'sgm': 5.0},
'forest': {'eps': 12.0, 'sgm': 0.003},
}
def __init__(self, ground_type: str = 'average', climate: int = 5):
if not HAS_ITMLOGIC:
raise ImportError("itmlogic package required: pip install itmlogic")
self.params = self.DEFAULT_PARAMS.copy()
if ground_type in self.GROUND_PARAMS:
self.params.update(self.GROUND_PARAMS[ground_type])
self.params['klim'] = climate
def calculate(self, input: PropagationInput) -> PropagationResult:
"""Calculate path loss using ITM point-to-point mode."""
# ITM requires terrain profile
if not hasattr(input, 'terrain_profile') or input.terrain_profile is None:
# Fallback to free-space if no terrain
return self._free_space_fallback(input)
result = itmlogic_p2p(
input.terrain_profile, # Elevation samples
input.frequency_mhz,
input.tx_height_m,
input.rx_height_m,
self.params['eps'],
self.params['sgm'],
self.params['ens'],
self.params['pol'],
self.params['mdvar'],
self.params['klim']
)
return PropagationResult(
path_loss_db=result['dbloss'],
model_name=self.name,
details={
'mode': result.get('propmode', 'unknown'),
'variability': result.get('var', 0),
}
)
def _free_space_fallback(self, input: PropagationInput) -> PropagationResult:
"""Free-space path loss when no terrain available."""
fspl = 20 * np.log10(input.distance_m) + 20 * np.log10(input.frequency_mhz) - 27.55
return PropagationResult(
path_loss_db=fspl,
model_name=f"{self.name} (FSPL fallback)",
details={'mode': 'free_space'}
)
```
### 3.2 Add VHF/UHF Model Selection
**File:** `backend/app/services/propagation_models/model_selector.py`
```python
"""
Automatic propagation model selection based on frequency and environment.
"""
def select_model_for_frequency(
frequency_mhz: float,
environment: str = 'urban',
has_terrain: bool = True
) -> BasePropagationModel:
"""
Select appropriate propagation model.
Frequency bands:
- VHF: 30-300 MHz (tactical radios, FM broadcast)
- UHF: 300-3000 MHz (tactical radios, TV, early cellular)
- Cellular: 700-2600 MHz (LTE bands)
- mmWave: 24-100 GHz (5G)
Decision tree:
1. VHF/UHF with terrain → Longley-Rice ITM
2. Urban cellular → COST-231 Hata
3. Suburban/rural cellular → Okumura-Hata
4. mmWave → 3GPP 38.901
"""
# VHF (30-300 MHz)
if 30 <= frequency_mhz <= 300:
if has_terrain:
return LongleyRiceModel(ground_type='average', climate=5)
else:
return FreeSpaceModel() # Fallback
# UHF (300-1000 MHz)
elif 300 < frequency_mhz <= 1000:
if has_terrain:
return LongleyRiceModel(ground_type='average', climate=5)
else:
return OkumuraHataModel(environment=environment)
# Cellular (1000-2600 MHz)
elif 1000 < frequency_mhz <= 2600:
if environment == 'urban':
return Cost231HataModel()
else:
return OkumuraHataModel(environment=environment)
# Higher frequencies
else:
return FreeSpaceModel() # Or implement 3GPP 38.901
# Frequency band constants for UI
FREQUENCY_BANDS = {
'VHF_LOW': (30, 88, "VHF Low (30-88 MHz) - Military/Public Safety"),
'VHF_HIGH': (136, 174, "VHF High (136-174 MHz) - Marine/Aviation"),
'UHF_LOW': (400, 512, "UHF (400-512 MHz) - Public Safety/Tactical"),
'UHF_TV': (470, 862, "UHF TV (470-862 MHz)"),
'LTE_700': (700, 800, "LTE Band 28/20 (700-800 MHz)"),
'LTE_900': (880, 960, "LTE Band 8 (900 MHz)"),
'LTE_1800': (1710, 1880, "LTE Band 3 (1800 MHz)"),
'LTE_2100': (1920, 2170, "LTE Band 1 (2100 MHz)"),
'LTE_2600': (2500, 2690, "LTE Band 7 (2600 MHz)"),
}
```
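Since the ranges above overlap (UHF TV overlaps LTE 700, for instance), a UI lookup should return every matching band rather than the first hit. A small sketch against a subset of the table:

```python
# Subset of the FREQUENCY_BANDS table above.
FREQUENCY_BANDS = {
    'UHF_LOW': (400, 512, "UHF (400-512 MHz) - Public Safety/Tactical"),
    'UHF_TV': (470, 862, "UHF TV (470-862 MHz)"),
    'LTE_700': (700, 800, "LTE Band 28/20 (700-800 MHz)"),
}

def bands_for_frequency(freq_mhz: float) -> list[str]:
    """Return every band key whose range contains freq_mhz."""
    return [key for key, (lo, hi, _label) in FREQUENCY_BANDS.items()
            if lo <= freq_mhz <= hi]

print(bands_for_frequency(480))  # ['UHF_LOW', 'UHF_TV']
print(bands_for_frequency(750))  # ['UHF_TV', 'LTE_700']
```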
---
## Part 4: Progress Bar Fix (WebSocket)
### 4.1 Proper Progress Streaming
The 5% bug persists because WebSocket progress messages aren't reaching the frontend.
**Debug approach:**
```python
# In coverage calculation, add explicit progress logging
async def calculate_with_progress(self, ...):
    total_chunks = len(chunk_results)
    for i, chunk_result in enumerate(chunk_results):
        progress = int((i + 1) / total_chunks * 100)
# Log to console AND send via WebSocket
logger.info(f"[PROGRESS] {progress}% - chunk {i+1}/{total_chunks}")
if progress_callback:
await progress_callback(progress, f"Calculating... {i+1}/{total_chunks}")
await asyncio.sleep(0) # Yield to event loop
```
**Frontend fix - check WebSocket subscription:**
```typescript
// In App.tsx or CoverageStore
useEffect(() => {
const ws = new WebSocket('ws://localhost:8888/ws/coverage');
ws.onmessage = (event) => {
const data = JSON.parse(event.data);
console.log('[WS] Received:', data); // DEBUG
if (data.type === 'progress') {
setProgress(data.progress);
setProgressStatus(data.status);
}
};
ws.onerror = (e) => console.error('[WS] Error:', e);
ws.onclose = () => console.log('[WS] Closed');
return () => ws.close();
}, []);
```
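For the two sides to line up, the backend callback must serialize exactly the field names the frontend handler reads (`type`, `progress`, `status`). A minimal sketch of that payload builder:

```python
import json

def progress_message(progress: int, status: str) -> str:
    """Build the JSON payload the frontend's onmessage handler expects."""
    return json.dumps({"type": "progress", "progress": progress, "status": status})

msg = progress_message(42, "Calculating... 5/12")
print(msg)  # {"type": "progress", "progress": 42, "status": "Calculating... 5/12"}
```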
---
## Part 5: Testing & Validation
### 5.1 Performance Benchmarks
After refactoring, expected performance:
| Scenario | Before | After | Speedup |
|----------|--------|-------|---------|
| 5km Standard | 5s | 3s | 1.7x |
| 5km Detailed | timeout | 25s | 12x |
| 10km Standard | 30s | 10s | 3x |
| 10km Detailed | timeout | 60s | 5x |
### 5.2 Test Commands
```powershell
# Quick test
cd D:\root\rfcp\installer
.\test-detailed-quick.bat
# Check for [PROGRESS] logs in output
# Check for [DP_TIMING] logs
# Verify shared memory cleanup
# Check Task Manager for memory after calculation
```
---
## Implementation Order
1. **Shared Memory for Buildings** (biggest impact) - Part 1.1
2. **Increase Batch Size** - Part 1.2
3. **Progress Bar Debug** - Part 4
4. **Radial Engine** (optional, for preview mode) - Part 2
5. **Longley-Rice ITM** (for VHF/UHF) - Part 3
---
## Dependencies to Add
```
# requirements.txt additions
itmlogic>=0.1.0 # Longley-Rice ITM implementation
```
---
## Commit Message
```
feat: Iteration 3.3.0 - Performance Architecture Refactor
Performance:
- Add shared memory for buildings (eliminate pickle overhead)
- Increase batch size to 300-500 points (amortize IPC)
- Add radial coverage engine (Signal-Server style)
Propagation Models:
- Add Longley-Rice ITM for VHF/UHF (20 MHz - 20 GHz)
- Add automatic model selection by frequency
- Add frequency band constants for UI
Bug Fixes:
- Debug and fix WebSocket progress (5% stuck bug)
Expected: 5km Detailed from timeout → ~25s (12x speedup)
```
---
## Notes for Claude Code
This is a significant refactoring. Approach step by step:
1. First implement shared memory for buildings
2. Test that alone - should see major speedup
3. Then increase batch size
4. Test again
5. Then tackle progress bar
6. Radial engine and ITM can be separate iterations if needed
The key insight: **99% of time is IPC overhead, not calculation**.
Fixing pickle serialization is the #1 priority.
---
*"Fast per-point means nothing if IPC eats your lunch"* 🍽️

View File

@@ -0,0 +1,191 @@
# RF Coverage Planning Software: Performance Optimization and Propagation Models
**The performance gap between fast per-point calculations (~1ms) and slow overall throughput (~258ms/point) is caused by pickle serialization overhead in Python multiprocessing**, which dominates actual compute time when processing small batches. The solution involves batching 1000+ points per IPC round-trip, using shared memory for terrain data, and leveraging GPU acceleration for workloads exceeding 10,000 points—achieving 10-50x speedups. Modern RF coverage tools like Signal-Server, SPLAT!, and Sionna RT demonstrate that combining radial segment parallelization, multi-resolution terrain tiling, and appropriate propagation model selection (Longley-Rice ITM for terrain-based VHF/UHF, COST-231 Hata for cellular) enables efficient large-area calculations while maintaining accuracy within 6-10 dB standard deviation.
---
## The multiprocessing bottleneck: why per-point speed deceives
The dramatic discrepancy between fast individual point calculations and slow aggregate throughput stems from a classic Python multiprocessing anti-pattern where **inter-process communication overhead dominates computation time**. When each worker processes a single point or small batch, the system spends more time serializing and deserializing data than performing actual RF calculations.
Python's multiprocessing uses pickle for IPC by default, requiring objects to be serialized twice per task (sending to worker and returning results). For RF calculations involving terrain data, DEM arrays, and GIS features, this serialization cost becomes catastrophic. Research shows that pickling a **40 MB dictionary four times per task can cause a 600% slowdown**. The situation worsens because spawning a subprocess takes approximately 50ms (50,000µs) compared to ~100µs for a thread—making process pool initialization per-request extremely expensive.
The solution architecture requires three fundamental changes. First, batch operations must amortize serialization costs by processing **1,000-10,000 points per IPC round-trip** rather than individual points. Second, shared memory (`multiprocessing.shared_memory` or `numpy.memmap`) should hold terrain data to eliminate pickle overhead entirely. Third, process pools must be pre-initialized at application startup rather than per-request:
```python
# Anti-pattern: Single-point processing (slow)
with Pool() as pool:
results = pool.map(calculate_point, points) # Each point pickled separately
# Optimal pattern: Batch processing with shared memory
from multiprocessing import shared_memory
shm = shared_memory.SharedMemory(create=True, size=terrain_data.nbytes)
chunk_size = 1000 # Process 1000 points per IPC round-trip
batches = [points[i:i+chunk_size] for i in range(0, len(points), chunk_size)]
```
The target metric is ensuring computation time exceeds serialization time by **10-100x**. For a 1ms per-point calculation, this means batching at least 100-1000 points to make serialization overhead negligible.
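The attach-by-name mechanics behind the shared-memory pattern can be sketched without a pool; real workers would receive only `shm.name` (a short string) over IPC and wrap the buffer in a NumPy array instead of bytes:

```python
from multiprocessing import shared_memory

# Parent process: write the terrain bytes into shared memory once.
payload = bytes(range(256)) * 4096          # stand-in for a DEM array's raw bytes (1 MB)
shm = shared_memory.SharedMemory(create=True, size=len(payload))
shm.buf[:len(payload)] = payload            # copied once at startup, never pickled again

# Worker side: attach by name -- only this short string crosses the IPC boundary.
attached = shared_memory.SharedMemory(name=shm.name)
ok = bytes(attached.buf[:len(payload)]) == payload
print(ok)  # True

attached.close()
shm.close()
shm.unlink()
```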
---
## Open-source RF tools reveal proven optimization architectures
**Signal-Server**, the C++14 multi-threaded engine that powered CloudRF from 2012-2016, demonstrates the foundational architecture for RF coverage calculations. Its primary improvement over the original SPLAT! was multi-threading through radial segment parallelization—splitting the circular coverage area so multiple threads process different azimuth ranges simultaneously. The implementation uses POSIX threads with configurable segment counts (must be even and greater than 4), processing up to 32 terrain tiles simultaneously with support for gzip/bzip2 compressed tiles for faster I/O.
Signal-Server supports 12 propagation models through a simple command-line parameter: ITM (Longley-Rice), line-of-sight, Hata, ECC33, SUI, COST-Hata, free-space, ITWOM, Ericsson, Plane Earth, Egli, and Soil models. The terrain tiling system uses SDF format converted from SRTM HGT files, supporting resolutions of 300/600/1200/3600 pixels per tile with automatic multi-tile loading based on calculation bounds.
**SPLAT!** (Signal Propagation, Loss, And Terrain), the foundational tool started in 1997, uses a radial ray-casting algorithm that projects rays from the transmitter in all azimuths (0-360°), samples terrain elevation along each path, and applies Longley-Rice ITM calculations to the terrain profile. Its Longley-Rice integration handles three prediction ranges (line-of-sight, diffraction, scatter) with terrain irregularity parameter Δh(d) computed from terrain samples. Key parameters include earth dielectric constant (5-80), ground conductivity (0.001-5.0 S/m), atmospheric refractivity (250-400 N-units), and climate zone selection.
**Sionna RT by NVIDIA** represents the state-of-the-art in GPU-accelerated RF simulation, using differentiable ray tracing built on TensorFlow, Mitsuba 3, and Dr.Jit. Its key innovation enables gradient computation through channel impulse responses with respect to material properties, antenna patterns, and transmitter/receiver positions—making it suitable for ML-integrated optimization. The path solver supports both Shooting and Bouncing Rays (SBR) and the Image Method, handling direct LOS paths, reflections, diffractions, and scattering patterns. Memory efficiency improvements in version 1.0 support scenes with 3D building models from OpenStreetMap, while configurable path loss thresholds and angular separation control enable scalable computation.
**CloudRF's SLEIPNIR engine** (replacing Signal-Server in 2019) achieves up to **10x faster** performance through multi-resolution modeling that seamlessly merges different resolution data sources, dual CPU/GPU engines (**78% speedup** with GPU for clutter calculations), and 1m LiDAR resolution support with global 10m land cover integration.
---
## VHF and UHF propagation models differ fundamentally from cellular bands
The **Longley-Rice Irregular Terrain Model (ITM)** serves as the most comprehensive model for terrain-based VHF/UHF propagation, predicting median attenuation over irregular terrain for frequencies from 20 MHz to 20 GHz across distances of 1-2000 km. The model handles five propagation mechanisms: free-space loss, terrain diffraction (multiple knife-edge), ground reflection, atmospheric refraction (4/3 Earth radius approximation), and tropospheric scatter beyond the horizon. Statistical variables include time, location, and situation variability ranging from 0.01 to 0.99, with typical accuracy of ±6-10 dB standard deviation for point-to-point mode.
Critical ITM parameters require careful selection based on environment:
| Ground Type | Permittivity | Conductivity (S/m) |
|------------|--------------|-------------------|
| Average Ground | 15 | 0.005 |
| Poor Ground | 4 | 0.001 |
| Good Ground | 25 | 0.020 |
| Fresh Water | 81 | 0.010 |
| Sea Water | 81 | 5.0 |
**ITU-R P.1546** provides empirical field-strength curves for 30 MHz to 4 GHz based on extensive Northern Hemisphere measurements, covering distances of 1-1000 km with time percentages of 1%, 10%, and 50%. The model uses reference frequencies of 100, 600, and 2000 MHz with interpolation for other frequencies, applying corrections for terrain clearance angle, receiving antenna height, clutter losses, and mixed land/sea paths.
For UHF and cellular bands, the **Okumura-Hata model** (150-1500 MHz, 1-20 km distance) and its **COST-231 extension** (1500-2000 MHz) provide rapid empirical calculations with 6-8 dB standard deviation in urban environments. The urban path loss formula is:
```
L_urban = 69.55 + 26.16*log10(f) - 13.82*log10(h_b) - a(h_m)
+ (44.9 - 6.55*log10(h_b))*log10(d)
```
Where `f` is in MHz, `h_b` (base station height) is in meters, `d` is in km, and `a(h_m)` is the mobile antenna correction factor varying by city size and frequency. Suburban and rural corrections reduce urban loss by 2*(log10(f/28))² + 5.4 dB and 4.78*(log10(f))² - 18.33*log10(f) + 40.94 dB respectively.
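A direct transcription of the urban formula, using the small/medium-city correction for a(h_m) as an illustrative assumption:

```python
import math

def hata_urban_loss_db(f_mhz: float, h_b: float, h_m: float, d_km: float) -> float:
    """Okumura-Hata urban path loss with the small/medium-city a(h_m) correction."""
    a_hm = (1.1 * math.log10(f_mhz) - 0.7) * h_m - (1.56 * math.log10(f_mhz) - 0.8)
    return (69.55 + 26.16 * math.log10(f_mhz) - 13.82 * math.log10(h_b) - a_hm
            + (44.9 - 6.55 * math.log10(h_b)) * math.log10(d_km))

# 900 MHz, 30 m base station, 1.5 m mobile, 5 km path:
print(round(hata_urban_loss_db(900, 30, 1.5, 5), 1))  # 151.0
```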
The key propagation differences across frequency bands are dramatic: **VHF wavelengths (1-10m) enable strong diffraction around obstacles but poor building penetration**, while **UHF (0.1-1m wavelength) provides better building penetration but weaker terrain following**. Cellular frequencies (1800+ MHz) have the highest free-space loss baseline, weakest diffraction, and moderate building penetration. Vegetation penetration follows the opposite pattern—VHF penetrates foliage better than higher frequencies where specific attenuation increases significantly.
---
## Terrain diffraction models handle mountainous areas differently
The **single knife-edge diffraction model** (ITU-R P.526) calculates the Fresnel parameter v and corresponding loss:
```python
v = h * sqrt(2 * (d1 + d2) / (wavelength * d1 * d2))
# For v > -0.78:
if v < 0:     loss = 6.02 + 9.11*v - 1.27*v**2
elif v < 2.4: loss = 6.02 + 9.11*v + 1.65*v**2
else:         loss = 12.953 + 20*log10(v)
```
For multiple obstacles, the **Deygout method** finds the main obstacle (highest Fresnel parameter v between transmitter and receiver), calculates its diffraction loss, then recursively finds secondary obstacles on each side. It provides better accuracy for **widely spaced obstacles** (2-4 ridges) but tends to overestimate for closely spaced obstacles. The **Epstein-Peterson method** calculates diffraction loss sequentially from transmitter to receiver, providing better accuracy for **closely spaced obstacles** but underestimating for widely separated ones.
The **Bullington equivalent single edge** method replaces all obstacles with one equivalent knife edge, providing the simplest and fastest calculation but often underestimating loss (too optimistic)—useful only for initial estimates. Professional tools like CloudRF implement **Delta-Bullington** as the default for its balance of accuracy and speed, with configurable options including Huygens (basic), sequential multi-obstacle, and Deygout 94 with combining factor.
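A minimal recursive Deygout sketch over a sampled profile, using the ITU-R P.526-style knife-edge approximation discussed above. This is illustrative only; per-edge losses are clamped at 0 dB:

```python
import math

def knife_edge_loss_db(v: float) -> float:
    """Piecewise knife-edge approximation, clamped to non-negative loss."""
    if v < -0.78:
        return 0.0
    if v < 0:
        return max(0.0, 6.02 + 9.11 * v - 1.27 * v * v)
    if v < 2.4:
        return 6.02 + 9.11 * v + 1.65 * v * v
    return 12.953 + 20 * math.log10(v)

def deygout_loss_db(dists, heights, d_tx, h_tx, d_rx, h_rx, wavelength):
    """Find the main edge between the endpoints, then recurse on each side."""
    best_v, best_i = -math.inf, None
    for i, d in enumerate(dists):
        if not d_tx < d < d_rx:
            continue
        d1, d2 = d - d_tx, d_rx - d
        los_h = h_tx + (h_rx - h_tx) * d1 / (d_rx - d_tx)
        v = (heights[i] - los_h) * math.sqrt(2 * (d1 + d2) / (wavelength * d1 * d2))
        if v > best_v:
            best_v, best_i = v, i
    if best_i is None or best_v <= -0.78:
        return 0.0
    return (knife_edge_loss_db(best_v)
            + deygout_loss_db(dists, heights, d_tx, h_tx,
                              dists[best_i], heights[best_i], wavelength)
            + deygout_loss_db(dists, heights, dists[best_i], heights[best_i],
                              d_rx, h_rx, wavelength))

# Single 20 m ridge at the midpoint of a 1 km path, 900 MHz, 10 m antennas:
print(round(deygout_loss_db([500.0], [20.0], 0.0, 10.0, 1000.0, 10.0, 300 / 900), 1))  # 24.1
```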
---
## GPU acceleration delivers 10-50x speedups for appropriate workloads
The RF calculations benefiting most from GPU acceleration are embarrassingly parallel operations: **ray tracing** (10-100x+ speedup with NVIDIA OptiX), **FFT operations** (cuFFT highly optimized), **viewshed/LOS calculations** (CloudRF reports **50x faster** than CPU), and **batch path loss calculations** for many points. Matrix operations in propagation models benefit from cuBLAS, while terrain correlation matrices and large array operations see significant acceleration.
**CuPy** provides a drop-in NumPy replacement for NVIDIA GPUs with 10-100x speedups for large arrays (>100,000 elements):
```python
import cupy as cp
terrain_gpu = cp.asarray(terrain_data)   # move terrain to the device once
points_gpu = cp.asarray(points)          # batch of sample points to evaluate
distances = cp.sqrt(cp.sum((points_gpu - tx_position)**2, axis=1))
path_loss = 20 * cp.log10(distances) + 20 * cp.log10(frequency_mhz) - 27.55
results = path_loss.get() # Transfer back to CPU
```
**Numba CUDA** enables writing custom GPU kernels in Python for complex propagation models requiring control flow:
```python
from numba import cuda
import math
@cuda.jit
def free_space_path_loss_kernel(distances, frequency, output):
idx = cuda.grid(1)
if idx < distances.shape[0]:
output[idx] = 20 * math.log10(distances[idx]) + 20 * math.log10(frequency) - 27.55
```
Minimum problem sizes for GPU benefit are: **10,000+ elements** for array operations, **1,024+ points** for FFT, **512x512+** for matrix multiply, and **5,000+ points** for path loss calculations. Memory transfer overhead (PCIe 3.0: ~8 GB/s practical) means the critical formula is `GPU_worthwhile = compute_time > (2 × transfer_time)`. For 100MB terrain data, transfer overhead is approximately 5-12ms.
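That break-even formula is trivial to encode as a dispatch heuristic; the PCIe bandwidth and the 2x factor are taken from the text, and the numbers are rough assumptions:

```python
def gpu_worthwhile(payload_bytes: float, compute_time_s: float,
                   pcie_bytes_per_s: float = 8e9) -> bool:
    """Offload only if compute time exceeds 2x the round-trip transfer time."""
    transfer_s = 2 * payload_bytes / pcie_bytes_per_s  # to device and back
    return compute_time_s > 2 * transfer_s

# 100 MB of terrain with 5 ms of compute: the round trip alone is ~25 ms -> stay on CPU.
print(gpu_worthwhile(100e6, 0.005))  # False
print(gpu_worthwhile(100e6, 0.5))    # True
```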
**AMD ROCm/HIP** provides cross-platform compatibility through CuPy (`pip install cupy-rocm-5-0`), with PyTorch and TensorFlow also offering official ROCm builds. **Intel integrated graphics** support via PyOpenCL achieves 2-10x speedups over CPU (3-6x slower than discrete GPUs), suitable for edge deployments with moderate workloads (10,000-100,000 points).
---
## Environment modeling requires frequency-dependent clutter coefficients
**ITU-R P.1812-6** defines default clutter heights and losses by environment type: dense urban (20-25m height, 15-25 dB loss), urban (15-20m, 10-20 dB), suburban (9-12m, 5-15 dB), rural (0-4m, 0-5 dB), and forest (15-20m, 10-25 dB). The **3GPP TR 38.901** path loss models define specific scenarios: UMa (Urban Macro) with 25m base station height, UMi (Urban Micro Street Canyon) with 10m base station, RMa (Rural Macro), and InF (Indoor Factory) variants.
For vegetation, **ITU-R P.833-10** specifies excess attenuation using `A_ev = A_m * (1 - exp(-d*γ/A_m))` where specific attenuation γ varies by frequency: **0.06 dB/m at 200 MHz**, **0.20 dB/m at 1 GHz**, and **0.60 dB/m at 5 GHz** for in-leaf conditions. Seasonal variation reduces loss by approximately 20% out-of-leaf for deciduous forests, with **2 dB variation at 900 MHz increasing to 8.5 dB at 1800+ MHz**.
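The saturating form is easy to misread, so a sketch makes the behavior concrete. A_m, the maximum attenuation, is environment-dependent; 30 dB here is an illustrative assumption:

```python
import math

def vegetation_excess_loss_db(depth_m: float, gamma_db_per_m: float,
                              a_max_db: float) -> float:
    """ITU-R P.833 foliage loss: A_ev = A_m * (1 - exp(-d * gamma / A_m))."""
    return a_max_db * (1 - math.exp(-depth_m * gamma_db_per_m / a_max_db))

# 50 m of in-leaf foliage at 1 GHz (gamma ~ 0.20 dB/m):
print(round(vegetation_excess_loss_db(50, 0.20, 30.0), 1))  # 8.5
# The same depth at VHF (gamma ~ 0.06 dB/m) costs far less:
print(round(vegetation_excess_loss_db(50, 0.06, 30.0), 1))  # 2.9
```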
**Building entry loss** per ITU-R P.2109 distinguishes traditional buildings (median 10-16 dB at 100 MHz to 2 GHz) from thermally-efficient modern buildings with metallized glass and foil insulation (25-32 dB). Material-specific losses from 3GPP TR 38.901 show standard glass at **2.4 dB at 2 GHz**, concrete at **13 dB at 2 GHz increasing to 117 dB at 28 GHz**, and IRR/Low-E glass at **23.6 dB at 2 GHz**.
---
## Machine learning and hybrid approaches complement physics-based models
Current ML approaches for path loss prediction rank by accuracy: **XGBoost/Gradient Boosting** (RMSE: 2.1-3.4 dB, best for small-medium datasets), Neural Network Ensembles (2.5-4.0 dB), Random Forest (3.0-4.5 dB), and Deep Neural Networks (3.0-5.0 dB). Training data requirements scale predictably: <1,000 samples yield RMSE 6-10 dB, 10,000-100,000 samples achieve production-quality RMSE 2-4 dB.
**Hybrid physics+ML architectures** prove most effective. The ML Correction approach calculates `PL_total = PL_empirical(d, f, h_tx, h_rx) + ΔPL_ML(features)` where ΔPL_ML learns systematic biases. The LOS/NLOS Ensemble uses a classifier to weight separate LOS and NLOS regressors. Physics-Informed Neural Networks add penalty terms that enforce physical constraints like "path loss should increase with distance" and "FSPL provides a lower bound."
**Pre-computed propagation databases** store path loss values at 20-50 bytes per grid cell, enabling sub-millisecond lookups. For a 10km radius at 30m resolution (~349,000 cells), storage is approximately 7 MB compressed. Interpolation techniques range from fast bilinear (1-2 dB error) to kriging (higher accuracy with uncertainty estimates).
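The storage arithmetic above can be reproduced directly (20 bytes per cell assumed, uncompressed):

```python
import math

def coverage_db_cells_and_mb(radius_m: float, cell_m: float,
                             bytes_per_cell: int = 20) -> tuple[int, float]:
    """Grid cells in a circular coverage area and their raw storage in MB."""
    cells = math.pi * (radius_m / cell_m) ** 2
    return int(cells), cells * bytes_per_cell / 1e6

cells, mb = coverage_db_cells_and_mb(10_000, 30)
print(cells, round(mb, 1))  # 349065 7.0
```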
---
## Tile-based caching enables responsive coverage map delivery
The optimal caching architecture uses **XYZ (Slippy Map) tiles** with multi-tier storage: L1 in-memory Redis (sub-millisecond access, ~100GB capacity), L2 disk cache (SQLite/MBTiles format), and L3 cloud storage (S3 for permanent pre-computed tiles). Cache keys should incorporate parameter hashes for instant invalidation when transmitter settings change:
```python
def get_tile_key(z: int, x: int, y: int, params_hash: str) -> str:
return f"tile:coverage:{params_hash}:{z}:{x}:{y}"
```
For dynamic coverage, TTL-based expiration (15 minutes to 24 hours) combined with Redis pub/sub channels (`map:update:region:*`) enables targeted geographic invalidation. The hybrid approach pre-computes base zoom levels (z=6-12) for commonly accessed areas while generating higher zoom levels (z>12) on-demand.
**Level of Detail (LOD) techniques** adapt computation intensity to distance: Tier 1 (0-500m) uses full 3D building geometry with 1m terrain resolution, Tier 2 (500m-2km) uses simplified buildings with 10m terrain, Tier 3 (2-10km) uses clutter heights only with 30m terrain, and Tier 4 (>10km) uses statistical clutter with 90m SRTM terrain. Adaptive grid generation provides higher resolution near the transmitter (10m) transitioning to coarser resolution (100m) at distance, reducing computation while maintaining visual quality where it matters.
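Expressed as a lookup, the tiers above become a one-liner the sampling code can call per point (tier boundaries taken from the text):

```python
# (max distance from transmitter in meters, terrain resolution in meters)
LOD_TIERS = [
    (500, 1),       # Tier 1: full 3D buildings, 1 m terrain
    (2_000, 10),    # Tier 2: simplified buildings, 10 m terrain
    (10_000, 30),   # Tier 3: clutter heights only, 30 m terrain
]

def terrain_resolution_m(distance_m: float) -> int:
    """Select the terrain resolution tier for a point at the given range."""
    for max_dist, res in LOD_TIERS:
        if distance_m <= max_dist:
            return res
    return 90  # Tier 4: statistical clutter, 90 m SRTM

print(terrain_resolution_m(1_500))   # 10
print(terrain_resolution_m(25_000))  # 90
```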
---
## Recommended architecture for Python/FastAPI RF coverage backend
The optimal stack combines **FastAPI** (async API gateway with rate limiting), **Celery** (distributed task queue for heavy RF calculations), **Redis** (tile caching and job status), and **CuPy/Numba** (GPU acceleration). Terrain data should use **numpy.memmap** for memory-mapped access to large DEMs with **STRtree spatial indexing** for tile lookups via Shapely.
For the propagation engine, implement **Longley-Rice ITM** as the primary terrain model (using the `itmlogic` Python package), **COST-231 Hata** for quick urban estimates, and **Deygout diffraction** for multiple terrain obstacles. The model selection logic should consider frequency range (Hata for 150-1500 MHz, COST-231 for 1500-2000 MHz, ITM for terrain-specific), distance (empirical for <20km, ITM for longer paths), and accuracy requirements (ray tracing only for <5km urban scenarios).
```python
class GPURFEngine:
def __init__(self, max_points=1_000_000):
# Pre-allocate GPU memory at startup
self.d_buffer = cp.empty((max_points, 3), dtype=cp.float32)
async def calculate_coverage(self, points: np.ndarray) -> np.ndarray:
if len(points) < 1000:
return self._cpu_fallback(points) # Small workloads on CPU
# GPU path for large workloads
d_points = cp.asarray(points)
# ... GPU computation
return results.get()
```
Celery configuration should use separate queues for fast (cached), compute (full calculation), and batch operations, with `worker_prefetch_multiplier=1` for heavy tasks and `task_acks_late=True` for reliability. Output formats should include PNG tiles with colormap lookup for web display and Cloud-Optimized GeoTIFF for professional GIS integration.
---
## Conclusion
Building efficient RF coverage planning software requires addressing the fundamental mismatch between fast per-point propagation calculations and the overhead of Python's multiprocessing model. **Batch processing (1000+ points per IPC round-trip), shared memory for terrain data, and GPU acceleration for workloads exceeding 10,000 points** provide the foundation for achieving throughput within an order of magnitude of commercial tools.
The propagation model selection should follow a tiered approach: Longley-Rice ITM for terrain-based VHF/UHF planning with available DEM data, Okumura-Hata/COST-231 for rapid urban cellular estimates, and Deygout diffraction for mountainous terrain with multiple obstacles. Environment modeling through ITU-R P.2108/P.2109/P.833 provides standardized clutter, building entry, and vegetation loss coefficients that maintain accuracy across diverse deployment scenarios.
The most impactful optimizations in order of implementation priority are: fixing the multiprocessing serialization bottleneck (immediate 100x throughput improvement), implementing tile-based caching with parameter-hash keys (sub-millisecond repeat queries), adding GPU acceleration for large coverage maps (10-50x for >10,000 points), and incorporating LOD techniques (3-10x computation reduction with minimal accuracy impact). This architecture enables a Python/FastAPI backend to compete with commercial tools while maintaining the flexibility for custom propagation models and ML integration.

View File

@@ -1194,19 +1194,6 @@
"linux"
]
},
"node_modules/@rollup/rollup-linux-x64-gnu": {
"version": "4.57.0",
"resolved": "https://registry.npmjs.org/@rollup/rollup-linux-x64-gnu/-/rollup-linux-x64-gnu-4.57.0.tgz",
"integrity": "sha512-OR5p5yG5OKSxHReWmwvM0P+VTPMwoBS45PXTMYaskKQqybkS3Kmugq1W+YbNWArF8/s7jQScgzXUhArzEQ7x0A==",
"cpu": [
"x64"
],
"dev": true,
"optional": true,
"os": [
"linux"
]
},
"node_modules/@rollup/rollup-linux-x64-musl": {
"version": "4.57.0",
"resolved": "https://registry.npmjs.org/@rollup/rollup-linux-x64-musl/-/rollup-linux-x64-musl-4.57.0.tgz",
@@ -3449,6 +3436,20 @@
"fsevents": "~2.3.2"
}
},
"node_modules/rollup/node_modules/@rollup/rollup-linux-x64-gnu": {
"version": "4.57.0",
"resolved": "https://registry.npmjs.org/@rollup/rollup-linux-x64-gnu/-/rollup-linux-x64-gnu-4.57.0.tgz",
"integrity": "sha512-OR5p5yG5OKSxHReWmwvM0P+VTPMwoBS45PXTMYaskKQqybkS3Kmugq1W+YbNWArF8/s7jQScgzXUhArzEQ7x0A==",
"cpu": [
"x64"
],
"dev": true,
"license": "MIT",
"optional": true,
"os": [
"linux"
]
},
"node_modules/scheduler": {
"version": "0.27.0",
"resolved": "https://registry.npmjs.org/scheduler/-/scheduler-0.27.0.tgz",

View File

@@ -6,6 +6,7 @@ import { useSitesStore } from '@/store/sites.ts';
import { useCoverageStore } from '@/store/coverage.ts';
import { useSettingsStore } from '@/store/settings.ts';
import { useHistoryStore, pushToFuture, pushToPast } from '@/store/history.ts';
import { useToolStore } from '@/store/tools.ts';
import { useToastStore } from '@/components/ui/Toast.tsx';
import { useKeyboardShortcuts } from '@/hooks/useKeyboardShortcuts.ts';
import { useUnsavedChanges } from '@/hooks/useUnsavedChanges.ts';
@@ -13,17 +14,26 @@ import { logger } from '@/utils/logger.ts';
import { db } from '@/db/schema.ts';
import MapView from '@/components/map/Map.tsx';
import GeographicHeatmap from '@/components/map/GeographicHeatmap.tsx';
import WebGLCoverageLayer from '@/components/map/WebGLCoverageLayer.tsx';
import WebGLRadialCoverageLayer from '@/components/map/WebGLRadialCoverageLayer.tsx';
import CoverageBoundary from '@/components/map/CoverageBoundary.tsx';
import HeatmapLegend from '@/components/map/HeatmapLegend.tsx';
import SiteList from '@/components/panels/SiteList.tsx';
import ExportPanel from '@/components/panels/ExportPanel.tsx';
import ProjectPanel from '@/components/panels/ProjectPanel.tsx';
import CoverageStats from '@/components/panels/CoverageStats.tsx';
import HistoryPanel from '@/components/panels/HistoryPanel.tsx';
import BatchFrequencyChange from '@/components/panels/BatchFrequencyChange.tsx';
import ResultsPanel from '@/components/panels/ResultsPanel.tsx';
import SiteImportExport from '@/components/panels/SiteImportExport.tsx';
import { SiteConfigModal } from '@/components/modals/index.ts';
import type { SiteFormValues } from '@/components/modals/index.ts';
import ToastContainer from '@/components/ui/Toast.tsx';
import ThemeToggle from '@/components/ui/ThemeToggle.tsx';
import GPUIndicator from '@/components/ui/GPUIndicator.tsx';
import TerrainProfile from '@/components/map/TerrainProfile.tsx';
import LinkBudgetPanel from '@/components/panels/LinkBudgetPanel.tsx';
import LinkBudgetOverlay from '@/components/map/LinkBudgetOverlay.tsx';
import Button from '@/components/ui/Button.tsx';
import NumberInput from '@/components/ui/NumberInput.tsx';
import ConfirmDialog from '@/components/ui/ConfirmDialog.tsx';
@@ -55,7 +65,7 @@ async function restoreSites(snapshot: Site[]) {
export default function App() {
const loadSites = useSitesStore((s) => s.loadSites);
const sites = useSitesStore((s) => s.sites);
const setPlacingMode = useSitesStore((s) => s.setPlacingMode);
const selectedSiteId = useSitesStore((s) => s.selectedSiteId);
const coverageResult = useCoverageStore((s) => s.result);
const isCalculating = useCoverageStore((s) => s.isCalculating);
@@ -63,6 +73,7 @@ export default function App() {
const heatmapVisible = useCoverageStore((s) => s.heatmapVisible);
const coverageError = useCoverageStore((s) => s.error);
const coverageProgress = useCoverageStore((s) => s.progress);
const partialPoints = useCoverageStore((s) => s.partialPoints);
const calculateCoverageApi = useCoverageStore((s) => s.calculateCoverage);
const cancelCalculation = useCoverageStore((s) => s.cancelCalculation);
@@ -104,14 +115,20 @@ export default function App() {
const setTerrainOpacity = useSettingsStore((s) => s.setTerrainOpacity);
const showGrid = useSettingsStore((s) => s.showGrid);
const setShowGrid = useSettingsStore((s) => s.setShowGrid);
const measurementMode = useSettingsStore((s) => s.measurementMode);
const setMeasurementMode = useSettingsStore((s) => s.setMeasurementMode);
const showElevationInfo = useSettingsStore((s) => s.showElevationInfo);
// Tool store (centralized active tool state)
const activeTool = useToolStore((s) => s.activeTool);
const setActiveTool = useToolStore((s) => s.setActiveTool);
const clearTool = useToolStore((s) => s.clearTool);
const setShowElevationInfo = useSettingsStore((s) => s.setShowElevationInfo);
const showBoundary = useSettingsStore((s) => s.showBoundary);
const showElevationOverlay = useSettingsStore((s) => s.showElevationOverlay);
const setShowElevationOverlay = useSettingsStore((s) => s.setShowElevationOverlay);
const elevationOpacity = useSettingsStore((s) => s.elevationOpacity);
const setElevationOpacity = useSettingsStore((s) => s.setElevationOpacity);
const coverageRenderer = useSettingsStore((s) => s.coverageRenderer);
const setCoverageRenderer = useSettingsStore((s) => s.setCoverageRenderer);
// History (undo/redo)
const canUndo = useHistoryStore((s) => s.canUndo);
@@ -129,6 +146,9 @@ export default function App() {
const [panelCollapsed, setPanelCollapsed] = useState(false);
const [showShortcuts, setShowShortcuts] = useState(false);
const [kbDeleteTarget, setKbDeleteTarget] = useState<{ id: string; name: string } | null>(null);
const [profileEndpoints, setProfileEndpoints] = useState<{ start: [number, number]; end: [number, number] } | null>(null);
const [showLinkBudget, setShowLinkBudget] = useState(false);
const [linkBudgetRxPoint, setLinkBudgetRxPoint] = useState<{ lat: number; lon: number } | null>(null);
// Region wizard for first-run (desktop mode only)
const [showWizard, setShowWizard] = useState(false);
@@ -205,17 +225,26 @@ export default function App() {
loadSites();
}, [loadSites]);
// Handle map click -> open modal with coordinates
const handleMapClick = useCallback(
// Handle site placement from map click
const handleSitePlacement = useCallback(
(lat: number, lon: number) => {
setModalState({
isOpen: true,
mode: 'create',
initialData: { lat, lon },
});
setPlacingMode(false);
// Tool store clearTool() is called by MapClickHandler after placement
},
[setPlacingMode]
[]
);
// Handle RX point placement for Link Budget
const handleRxPlacement = useCallback(
(lat: number, lon: number) => {
setLinkBudgetRxPoint({ lat, lon });
// Tool store clearTool() is called by MapClickHandler after placement
},
[]
);
const handleEditSite = useCallback((site: Site) => {
@@ -394,8 +423,8 @@ export default function App() {
const currentSettings = useCoverageStore.getState().settings;
// Validation
if (currentSettings.radius > 100) {
addToast('Radius too large (max 100km)', 'error');
if (currentSettings.radius > 50) {
addToast('Radius too large (max 50km)', 'error');
return;
}
if (currentSettings.resolution < 50) {
@@ -406,9 +435,17 @@ export default function App() {
try {
await calculateCoverageApi();
// Check result after calculation
const result = useCoverageStore.getState().result;
const error = useCoverageStore.getState().error;
// After calculateCoverageApi returns, check if WS took over.
// In WS mode, the function returns immediately and result arrives asynchronously.
const state = useCoverageStore.getState();
if (state.isCalculating && state.activeCalcId) {
// WebSocket mode — toast will be shown from the WS onResult callback
return;
}
// HTTP mode — result is ready now
const result = state.result;
const error = state.error;
if (error) {
let userMessage = 'Calculation failed';
@@ -428,11 +465,14 @@ export default function App() {
);
} else {
const timeStr = result.calculationTime.toFixed(1);
const firstSite = sites.find((s) => s.visible);
const freqStr = firstSite ? ` \u2022 ${firstSite.frequency} MHz` : '';
const presetStr = settings.preset ? ` \u2022 ${settings.preset}` : '';
const modelsStr = result.modelsUsed?.length
? ` ${result.modelsUsed.length} models`
? ` \u2022 ${result.modelsUsed.length} models`
: '';
addToast(
`Calculated ${result.totalPoints.toLocaleString()} points in ${timeStr}s${modelsStr}`,
`${result.totalPoints.toLocaleString()} pts \u2022 ${timeStr}s${presetStr}${freqStr}${modelsStr}`,
'success'
);
}
@@ -465,7 +505,7 @@ export default function App() {
return (
<div className="h-screen w-screen flex flex-col bg-gray-100 dark:bg-dark-bg">
{/* Header */}
<header className="bg-slate-800 dark:bg-slate-900 text-white px-4 py-2 flex items-center justify-between flex-shrink-0 z-10">
<header className="bg-slate-800 dark:bg-slate-900 text-white px-4 py-2 flex items-center justify-between flex-shrink-0 z-[1010]">
<div className="flex items-center gap-2">
<span className="text-base font-bold">RFCP</span>
<span className="text-xs text-slate-400 hidden sm:inline">
@@ -473,6 +513,7 @@ export default function App() {
</span>
</div>
<div className="flex items-center gap-3 mr-4">
<GPUIndicator />
<ThemeToggle />
{/* Undo / Redo buttons */}
<div className="hidden sm:flex items-center gap-1">
@@ -647,25 +688,106 @@ export default function App() {
<div className="flex-1 flex overflow-hidden relative">
{/* Map */}
<div className="flex-1 relative">
<MapView onMapClick={handleMapClick} onEditSite={handleEditSite}>
{coverageResult && (
<MapView
onSitePlacement={handleSitePlacement}
onRxPlacement={handleRxPlacement}
onEditSite={handleEditSite}
onProfileRequest={(start, end) => setProfileEndpoints({ start, end })}
showLinkBudget={showLinkBudget}
onToggleLinkBudget={() => setShowLinkBudget(!showLinkBudget)}
>
{/* Show partial results during tiled calculation, or final result */}
{(coverageResult || (isCalculating && partialPoints.length > 0)) && (
<>
<GeographicHeatmap
points={coverageResult.points}
visible={heatmapVisible}
opacity={settings.heatmapOpacity}
radiusMeters={settings.heatmapRadius}
rsrpThreshold={settings.rsrpThreshold}
/>
<CoverageBoundary
points={coverageResult.points.filter(p => p.rsrp >= settings.rsrpThreshold)}
visible={heatmapVisible}
resolution={settings.resolution}
/>
{/* Render coverage layer based on selected renderer */}
{coverageRenderer === 'webgl-radial' && (
<WebGLRadialCoverageLayer
key="webgl-radial-coverage"
points={isCalculating && partialPoints.length > 0 ? partialPoints : (coverageResult?.points ?? [])}
visible={heatmapVisible}
opacity={settings.heatmapOpacity}
minRsrp={-130}
maxRsrp={-50}
radiusMeters={settings.heatmapRadius}
onWebGLFailed={() => setCoverageRenderer('webgl-texture')}
/>
)}
{coverageRenderer === 'webgl-texture' && (
<WebGLCoverageLayer
key="webgl-coverage"
points={isCalculating && partialPoints.length > 0 ? partialPoints : (coverageResult?.points ?? [])}
visible={heatmapVisible}
opacity={settings.heatmapOpacity}
minRsrp={-130}
maxRsrp={-50}
onWebGLFailed={() => setCoverageRenderer('canvas')}
/>
)}
{coverageRenderer === 'canvas' && (
<GeographicHeatmap
key="canvas-coverage"
points={isCalculating && partialPoints.length > 0 ? partialPoints : (coverageResult?.points ?? [])}
visible={heatmapVisible}
opacity={settings.heatmapOpacity}
radiusMeters={settings.heatmapRadius}
rsrpThreshold={settings.rsrpThreshold}
/>
)}
{coverageResult && (
<CoverageBoundary
points={coverageResult.points.filter(p => p.rsrp >= settings.rsrpThreshold)}
visible={showBoundary}
resolution={settings.resolution}
boundary={coverageResult.boundary}
/>
)}
</>
)}
{/* Link Budget TX-RX overlay */}
{showLinkBudget && linkBudgetRxPoint && (() => {
const txSite = sites.find(s => s.id === selectedSiteId);
return (
<LinkBudgetOverlay
txPoint={txSite ? { lat: txSite.lat, lon: txSite.lon } : null}
rxPoint={linkBudgetRxPoint}
onRxDrag={(lat, lon) => setLinkBudgetRxPoint({ lat, lon })}
/>
);
})()}
</MapView>
{activeTool === 'rx-placement' && (
<div className="absolute top-4 left-1/2 -translate-x-1/2 z-[2000] bg-blue-600 text-white px-4 py-2 rounded-lg shadow-lg text-sm font-medium flex items-center gap-2">
<span>Click on map to set RX point</span>
<button
onClick={() => clearTool()}
className="text-white/70 hover:text-white ml-2"
>
Cancel
</button>
</div>
)}
<HeatmapLegend />
<ResultsPanel />
{profileEndpoints && (
<TerrainProfile
start={profileEndpoints.start}
end={profileEndpoints.end}
onClose={() => setProfileEndpoints(null)}
/>
)}
{showLinkBudget && (
<div className="absolute top-20 left-4 z-[1500]">
<LinkBudgetPanel
rxPoint={linkBudgetRxPoint}
onRequestMapClick={() => setActiveTool('rx-placement')}
onClose={() => {
setShowLinkBudget(false);
clearTool();
setLinkBudgetRxPoint(null);
}}
/>
</div>
)}
</div>
{/* Side panel */}
@@ -697,6 +819,11 @@ export default function App() {
{/* Site list */}
<SiteList onEditSite={handleEditSite} onAddSite={handleAddManual} />
{/* Quick frequency change */}
<div className="bg-white dark:bg-dark-surface border border-gray-200 dark:border-dark-border rounded-lg shadow-sm">
<BatchFrequencyChange />
</div>
{/* Coverage settings */}
<div className="bg-white dark:bg-dark-surface border border-gray-200 dark:border-dark-border rounded-lg shadow-sm p-4 space-y-3">
<h3 className="text-sm font-semibold text-gray-800 dark:text-dark-text">
@@ -706,14 +833,15 @@ export default function App() {
<NumberInput
label="Radius"
value={settings.radius}
onChange={(v) =>
useCoverageStore.getState().updateSettings({ radius: v })
}
onChange={(v) => {
const clamped = Math.min(v, 50);
useCoverageStore.getState().updateSettings({ radius: clamped });
}}
min={1}
max={100}
max={50}
step={5}
unit="km"
hint="Calculation area around each site"
hint="Calculation area around each site (max 50km)"
/>
<NumberInput
label="Resolution"
@@ -751,6 +879,24 @@ export default function App() {
unit="%"
hint="Transparency of the RF coverage overlay"
/>
<div>
<label className="text-sm font-medium text-gray-700 dark:text-dark-text">
Coverage Renderer
</label>
<p className="text-xs text-gray-400 dark:text-dark-muted mb-1">
Visualization style for coverage overlay
</p>
<select
value={coverageRenderer}
onChange={(e) => setCoverageRenderer(e.target.value as 'webgl-radial' | 'webgl-texture' | 'canvas')}
className="w-full mt-1 px-2 py-1.5 text-sm bg-white dark:bg-dark-border border border-gray-300 dark:border-dark-border rounded-md text-gray-700 dark:text-dark-text"
>
<option value="webgl-radial" className="bg-white dark:bg-slate-800 text-gray-700 dark:text-white">WebGL Radial (smooth)</option>
<option value="webgl-texture" className="bg-white dark:bg-slate-800 text-gray-700 dark:text-white">WebGL Texture (fast)</option>
<option value="canvas" className="bg-white dark:bg-slate-800 text-gray-700 dark:text-white">Canvas (fallback)</option>
</select>
</div>
{coverageRenderer === 'canvas' && (
<div>
<label className="text-sm font-medium text-gray-700 dark:text-dark-text">
Heatmap Quality
@@ -780,6 +926,7 @@ export default function App() {
</p>
)}
</div>
)}
{/* Propagation Model Preset */}
<div>
<label className="text-sm font-medium text-gray-700 dark:text-dark-text">
@@ -1007,6 +1154,20 @@ export default function App() {
<option value="vehicle">Inside Vehicle</option>
</select>
</div>
{/* Fading margin */}
<div className="mt-2 pt-2 border-t border-gray-200 dark:border-dark-border">
<NumberInput
label="Fading Margin"
value={settings.fading_margin ?? 0}
onChange={(v) => useCoverageStore.getState().updateSettings({ fading_margin: v })}
min={0}
max={20}
step={1}
unit="dB"
hint="Safety margin subtracted from signal"
/>
</div>
</div>
)}
</div>
@@ -1042,15 +1203,15 @@ export default function App() {
<label className="flex items-center gap-2 cursor-pointer text-sm text-gray-700 dark:text-dark-text">
<input
type="checkbox"
checked={measurementMode}
onChange={(e) => setMeasurementMode(e.target.checked)}
checked={activeTool === 'ruler'}
onChange={(e) => e.target.checked ? setActiveTool('ruler') : clearTool()}
className="w-4 h-4 rounded border-gray-300 dark:border-dark-border accent-orange-600"
/>
Distance Measurement
</label>
{measurementMode && (
{activeTool === 'ruler' && (
<p className="text-xs text-gray-400 dark:text-dark-muted pl-6">
Click to add points. Right-click to finish.
Click start and end points. Esc to cancel.
</p>
)}
<label className="flex items-center gap-2 cursor-pointer text-sm text-gray-700 dark:text-dark-text">
@@ -1084,7 +1245,7 @@ export default function App() {
/>
</div>
)}
</div>
</div>
</div>
{/* Data Cache Status */}
@@ -1174,6 +1335,9 @@ export default function App() {
modelsUsed={coverageResult?.modelsUsed}
/>
{/* Session history */}
<HistoryPanel />
{/* Export coverage data */}
<ExportPanel />

View File

@@ -1,8 +1,8 @@
/**
* Renders a dashed polyline around the coverage zone boundary.
*
* Uses @turf/concave to compute a concave hull (alpha shape) per site,
* which correctly follows sector/wedge shapes — not just convex circles.
* Prefers server-computed boundary if available (shapely concave_hull).
* Falls back to client-side @turf/concave computation.
*
* Performance: ~20-50ms for 10k points (runs once per coverage change).
*/
@@ -12,7 +12,7 @@ import { useMap } from 'react-leaflet';
import L from 'leaflet';
import concave from '@turf/concave';
import { featureCollection, point } from '@turf/helpers';
import type { CoveragePoint } from '@/types/index.ts';
import type { CoveragePoint, BoundaryPoint } from '@/types/index.ts';
import { logger } from '@/utils/logger.ts';
interface CoverageBoundaryProps {
@@ -21,21 +21,34 @@ interface CoverageBoundaryProps {
resolution: number; // meters — controls concave hull detail
color?: string;
weight?: number;
boundary?: BoundaryPoint[]; // server-provided boundary (preferred)
}
export default function CoverageBoundary({
points,
visible,
resolution,
color = '#7c3aed', // purple-600 — visible against both map and orange gradient
color = '#ffffff', // white — visible against red-to-blue gradient
weight = 2,
boundary,
}: CoverageBoundaryProps) {
const map = useMap();
const layerRef = useRef<L.LayerGroup | null>(null);
// Compute boundary paths grouped by site
// Compute boundary paths: prefer the server boundary, fall back to client-side computation
const boundaryPaths = useMemo(() => {
if (!visible || points.length === 0) return [];
if (!visible) return [];
// Use server-provided boundary if available
if (boundary && boundary.length >= 3) {
const serverPath: L.LatLngExpression[] = boundary.map(
(p) => [p.lat, p.lon] as L.LatLngExpression
);
return [serverPath];
}
// Fall back to client-side computation
if (points.length === 0) return [];
// Group points by siteId (fallback to 'all' when siteId not available from API)
const bySite = new Map<string, CoveragePoint[]>();
@@ -61,7 +74,7 @@ export default function CoverageBoundary({
}
return paths;
}, [points, visible, resolution]);
}, [points, visible, resolution, boundary]);
// Render / cleanup polylines
useEffect(() => {
@@ -107,7 +120,10 @@ export default function CoverageBoundary({
/**
* Compute concave hull boundary path(s) for a set of coverage points.
*
* maxEdge = resolution * 3 (in km) gives good detail without over-fitting.
* Uses adaptive maxEdge based on point count and resolution:
* - More points → smaller maxEdge for finer detail
* - Larger resolution → larger maxEdge to avoid over-fitting
*
* Returns multiple paths if hull is a MultiPolygon (disjoint coverage areas).
* Falls back to empty if hull computation fails (e.g., collinear points).
*/
@@ -121,8 +137,17 @@ function computeConcaveHulls(
const features = pts.map((p) => point([p.lon, p.lat]));
const fc = featureCollection(features);
// maxEdge in km — resolution * 3 balances detail vs smoothness
const maxEdge = (resolutionM * 3) / 1000;
// Adaptive maxEdge based on point density:
// - Base: resolution * 2 (tighter fit)
// - For sparse grids (<100 pts): use larger edge to avoid holes
// - For dense grids (>1000 pts): use smaller edge for detail
let multiplier = 2.0;
if (pts.length < 100) {
multiplier = 4.0; // Sparse: wider tolerance
} else if (pts.length > 1000) {
multiplier = 1.5; // Dense: finer detail
}
const maxEdge = (resolutionM * multiplier) / 1000;
try {
const hull = concave(fc, { maxEdge, units: 'kilometers' });
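The adaptive edge-tolerance logic above can be read as a pure helper. This is a sketch mirroring the branches in `computeConcaveHulls`; the name `maxEdgeKm` is illustrative, not part of the codebase:

```typescript
// Pick a concave-hull edge tolerance (km) from grid resolution and point count.
// Sparse grids get a wider tolerance to avoid holes in the hull; dense grids
// get a tighter one for finer boundary detail.
function maxEdgeKm(resolutionM: number, pointCount: number): number {
  let multiplier = 2.0; // default: resolution * 2 (tighter fit)
  if (pointCount < 100) {
    multiplier = 4.0; // sparse: wider tolerance
  } else if (pointCount > 1000) {
    multiplier = 1.5; // dense: finer detail
  }
  return (resolutionM * multiplier) / 1000;
}
```

At a 100 m resolution this yields 0.4 km for sparse grids, 0.2 km by default, and 0.15 km for dense grids, which is then passed to `@turf/concave` as `maxEdge` with `units: 'kilometers'`.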

View File

@@ -45,6 +45,12 @@ export default function ElevationLayer({ visible, opacity }: ElevationLayerProps
const debounceRef = useRef<ReturnType<typeof setTimeout> | null>(null);
const abortRef = useRef<AbortController | null>(null);
const lastBoundsRef = useRef<string>('');
const opacityRef = useRef(opacity);
// Keep opacity ref in sync
useEffect(() => {
opacityRef.current = opacity;
}, [opacity]);
const removeOverlay = useCallback(() => {
if (overlayRef.current) {
@@ -119,21 +125,23 @@ export default function ElevationLayer({ visible, opacity }: ElevationLayerProps
// Remove old overlay
removeOverlay();
// Add new overlay
// Add new overlay (opacity will be set by the dedicated effect)
const leafletBounds = L.latLngBounds(
[data.bbox.min_lat, data.bbox.min_lon],
[data.bbox.max_lat, data.bbox.max_lon],
);
overlayRef.current = L.imageOverlay(canvas.toDataURL(), leafletBounds, {
opacity,
opacity: 0.5, // Default, will be updated by opacity effect
interactive: false,
zIndex: 97,
});
overlayRef.current.addTo(map);
// Apply current opacity immediately using ref
overlayRef.current.setOpacity(opacityRef.current);
} catch (_e) {
// Silently ignore fetch errors (network issues, aborts, etc.)
}
}, [map, opacity, removeOverlay]);
}, [map, removeOverlay]);
// Update opacity on existing overlay
useEffect(() => {

View File

@@ -10,15 +10,15 @@
import { normalizeRSRP, valueToColor } from '@/utils/colorGradient.ts';
import { useCoverageStore } from '@/store/coverage.ts';
import { useSitesStore } from '@/store/sites.ts';
import { useSettingsStore } from '@/store/settings.ts';
const LEGEND_STEPS = [
{ rsrp: -130, label: 'No Service' },
{ rsrp: -110, label: 'Very Weak' },
{ rsrp: -100, label: 'Weak' },
{ rsrp: -90, label: 'Fair' },
{ rsrp: -80, label: 'Good' },
{ rsrp: -70, label: 'Strong' },
{ rsrp: -50, label: 'Excellent' },
{ rsrp: -110, label: 'Weak' },
{ rsrp: -100, label: 'Fair' },
{ rsrp: -85, label: 'Good' },
{ rsrp: -70, label: 'Excellent' },
{ rsrp: -50, label: 'Max' },
];
/** Build a CSS linear-gradient string matching the heatmap gradient exactly. */
@@ -42,6 +42,8 @@ export default function HeatmapLegend() {
const toggleHeatmap = useCoverageStore((s) => s.toggleHeatmap);
const settings = useCoverageStore((s) => s.settings);
const sites = useSitesStore((s) => s.sites);
const showBoundary = useSettingsStore((s) => s.showBoundary);
const setShowBoundary = useSettingsStore((s) => s.setShowBoundary);
if (!result) return null;
@@ -73,6 +75,23 @@ export default function HeatmapLegend() {
</button>
</div>
{/* Boundary toggle */}
<div className="flex items-center justify-between mb-2">
<span className="text-[10px] text-gray-500 dark:text-dark-muted">
Boundary
</span>
<button
onClick={() => setShowBoundary(!showBoundary)}
className={`w-8 h-4 rounded-full transition-colors relative
${showBoundary ? 'bg-blue-500' : 'bg-gray-300 dark:bg-dark-border'}`}
>
<span
className={`absolute top-0.5 w-3 h-3 rounded-full bg-white shadow transition-transform
${showBoundary ? 'left-4' : 'left-0.5'}`}
/>
</button>
</div>
{/* Gradient bar + labels */}
<div className="flex gap-2">
{/* Continuous gradient bar */}
@@ -106,9 +125,9 @@ export default function HeatmapLegend() {
{/* Cutoff indicator + below-threshold (dimmed) */}
{belowThreshold.length > 0 && (
<div className="mt-1.5 pt-1.5 border-t border-dashed border-purple-400 dark:border-purple-500">
<div className="mt-1.5 pt-1.5 border-t border-dashed border-gray-400 dark:border-gray-500">
<div className="flex items-center gap-1 mb-1">
<span className="text-[9px] text-purple-500 dark:text-purple-400 font-medium">
<span className="text-[9px] text-gray-500 dark:text-gray-400 font-medium">
Coverage boundary ({threshold} dBm)
</span>
</div>

View File

@@ -0,0 +1,83 @@
/**
* Link Budget Overlay
*
* Shows RX marker and dashed line from TX site to RX point.
*/
import { useEffect, useState } from 'react';
import { Marker, Polyline } from 'react-leaflet';
import L from 'leaflet';
interface LinkBudgetOverlayProps {
txPoint: { lat: number; lon: number } | null;
rxPoint: { lat: number; lon: number } | null;
onRxDrag?: (lat: number, lon: number) => void;
}
// Orange circle icon for RX marker
const rxIcon = L.divIcon({
className: 'rx-marker',
html: '<div style="width: 14px; height: 14px; background: #f97316; border: 2px solid white; border-radius: 50%; box-shadow: 0 2px 4px rgba(0,0,0,0.3);"></div>',
iconSize: [14, 14],
iconAnchor: [7, 7],
});
export default function LinkBudgetOverlay({ txPoint, rxPoint, onRxDrag }: LinkBudgetOverlayProps) {
const [markerRef, setMarkerRef] = useState<L.Marker | null>(null);
// Handle drag events
useEffect(() => {
if (!markerRef || !onRxDrag) return;
const handleDrag = () => {
const pos = markerRef.getLatLng();
onRxDrag(pos.lat, pos.lng);
};
markerRef.on('drag', handleDrag);
markerRef.on('dragend', handleDrag);
return () => {
markerRef.off('drag', handleDrag);
markerRef.off('dragend', handleDrag);
};
}, [markerRef, onRxDrag]);
if (!rxPoint) return null;
const rxLatLng: [number, number] = [rxPoint.lat, rxPoint.lon];
const txLatLng: [number, number] | null = txPoint ? [txPoint.lat, txPoint.lon] : null;
return (
<>
{/* Dashed line from TX to RX */}
{txLatLng && (
<Polyline
positions={[txLatLng, rxLatLng]}
pathOptions={{
color: '#f97316',
weight: 2,
dashArray: '8, 4',
opacity: 0.8,
}}
/>
)}
{/* RX marker (draggable) */}
<Marker
position={rxLatLng}
icon={rxIcon}
draggable={!!onRxDrag}
ref={(ref) => setMarkerRef(ref)}
eventHandlers={{
dragend: (e) => {
if (onRxDrag) {
const pos = e.target.getLatLng();
onRxDrag(pos.lat, pos.lng);
}
},
}}
/>
</>
);
}

View File

@@ -1,10 +1,12 @@
import { useRef, useCallback, useEffect } from 'react';
import { useRef, useCallback, useEffect, useState } from 'react';
import { MapContainer, TileLayer, useMapEvents, useMap } from 'react-leaflet';
import 'leaflet/dist/leaflet.css';
import type { Map as LeafletMap } from 'leaflet';
import L from 'leaflet';
import type { Site } from '@/types/index.ts';
import { useSitesStore } from '@/store/sites.ts';
import { useSettingsStore } from '@/store/settings.ts';
import { useToolStore } from '@/store/tools.ts';
import { useToastStore } from '@/components/ui/Toast.tsx';
import SiteMarker from './SiteMarker.tsx';
import MapExtras from './MapExtras.tsx';
@@ -14,22 +16,72 @@ import ElevationDisplay from './ElevationDisplay.tsx';
import ElevationLayer from './ElevationLayer.tsx';
interface MapViewProps {
onMapClick: (lat: number, lon: number) => void;
onSitePlacement: (lat: number, lon: number) => void;
onRxPlacement?: (lat: number, lon: number) => void;
onEditSite: (site: Site) => void;
onProfileRequest?: (start: [number, number], end: [number, number]) => void;
showLinkBudget?: boolean;
onToggleLinkBudget?: () => void;
children?: React.ReactNode;
}
const SNAP_THRESHOLD_PX = 20;
/**
* Unified map click handler that dispatches based on active tool
*/
function MapClickHandler({
onMapClick,
onSitePlacement,
onRxPlacement,
onRulerClick,
sites,
}: {
onMapClick: (lat: number, lon: number) => void;
onSitePlacement: (lat: number, lon: number) => void;
onRxPlacement?: (lat: number, lon: number) => void;
onRulerClick: (lat: number, lon: number) => void;
sites: Site[];
}) {
const isPlacingMode = useSitesStore((s) => s.isPlacingMode);
const activeTool = useToolStore((s) => s.activeTool);
const clearTool = useToolStore((s) => s.clearTool);
const map = useMap();
useMapEvents({
click: (e) => {
if (isPlacingMode) {
onMapClick(e.latlng.lat, e.latlng.lng);
switch (activeTool) {
case 'ruler':
// Snap to nearest site if within threshold
const clickPoint = map.latLngToContainerPoint(e.latlng);
let snappedLat = e.latlng.lat;
let snappedLon = e.latlng.lng;
for (const site of sites) {
const sitePoint = map.latLngToContainerPoint(L.latLng(site.lat, site.lon));
const pixelDist = clickPoint.distanceTo(sitePoint);
if (pixelDist < SNAP_THRESHOLD_PX) {
snappedLat = site.lat;
snappedLon = site.lon;
break;
}
}
onRulerClick(snappedLat, snappedLon);
break;
case 'rx-placement':
if (onRxPlacement) {
onRxPlacement(e.latlng.lat, e.latlng.lng);
clearTool(); // Single click action
}
break;
case 'site-placement':
onSitePlacement(e.latlng.lat, e.latlng.lng);
clearTool(); // Single click action
break;
case 'none':
default:
// No action on map click — just pan/zoom
break;
}
},
});
@@ -37,6 +89,61 @@ function MapClickHandler({
return null;
}
/**
* Component to apply cursor classes based on active tool
*/
function CursorManager() {
const map = useMap();
const activeTool = useToolStore((s) => s.activeTool);
useEffect(() => {
const container = map.getContainer();
// Remove all tool cursors
container.classList.remove('tool-ruler', 'tool-rx-placement', 'tool-site-placement');
switch (activeTool) {
case 'ruler':
container.classList.add('tool-ruler');
break;
case 'rx-placement':
container.classList.add('tool-rx-placement');
break;
case 'site-placement':
container.classList.add('tool-site-placement');
break;
default:
// Default cursor (arrow)
break;
}
}, [map, activeTool]);
return null;
}
/**
* Right-click handler for ruler mode
*/
function RulerRightClickHandler({ onRightClick }: { onRightClick: () => void }) {
const activeTool = useToolStore((s) => s.activeTool);
const map = useMap();
useEffect(() => {
if (activeTool !== 'ruler') return;
const handleContextMenu = (e: L.LeafletMouseEvent) => {
L.DomEvent.preventDefault(e.originalEvent);
onRightClick();
};
map.on('contextmenu', handleContextMenu);
return () => {
map.off('contextmenu', handleContextMenu);
};
}, [map, activeTool, onRightClick]);
return null;
}
/**
* Inner component that exposes the map instance via ref callback
*/
@@ -48,23 +155,72 @@ function MapRefSetter({ mapRef }: { mapRef: React.MutableRefObject<LeafletMap |
return null;
}
export default function MapView({ onMapClick, onEditSite, children }: MapViewProps) {
export default function MapView({ onSitePlacement, onRxPlacement, onEditSite, onProfileRequest, showLinkBudget, onToggleLinkBudget, children }: MapViewProps) {
const sites = useSitesStore((s) => s.sites);
const isPlacingMode = useSitesStore((s) => s.isPlacingMode);
const showTerrain = useSettingsStore((s) => s.showTerrain);
const terrainOpacity = useSettingsStore((s) => s.terrainOpacity);
const setShowTerrain = useSettingsStore((s) => s.setShowTerrain);
const showGrid = useSettingsStore((s) => s.showGrid);
const setShowGrid = useSettingsStore((s) => s.setShowGrid);
const measurementMode = useSettingsStore((s) => s.measurementMode);
const setMeasurementMode = useSettingsStore((s) => s.setMeasurementMode);
const showElevationInfo = useSettingsStore((s) => s.showElevationInfo);
const showElevationOverlay = useSettingsStore((s) => s.showElevationOverlay);
const setShowElevationOverlay = useSettingsStore((s) => s.setShowElevationOverlay);
const elevationOpacity = useSettingsStore((s) => s.elevationOpacity);
const addToast = useToastStore((s) => s.addToast);
// Tool store
const activeTool = useToolStore((s) => s.activeTool);
const setActiveTool = useToolStore((s) => s.setActiveTool);
const clearTool = useToolStore((s) => s.clearTool);
const mapRef = useRef<LeafletMap | null>(null);
// Ruler points state (managed here since MeasurementTool is now controlled by tool store)
const [rulerPoints, setRulerPoints] = useState<[number, number][]>([]);
// Ruler limited to exactly 2 points (point-to-point measurement)
const handleRulerClick = useCallback((lat: number, lon: number) => {
setRulerPoints(prev => {
if (prev.length === 0) {
// First point
return [[lat, lon]];
} else if (prev.length === 1) {
// Second point — measurement complete
return [prev[0], [lat, lon]];
} else {
// Already 2 points — start new measurement
return [[lat, lon]];
}
});
}, []);
const handleRulerRightClick = useCallback(() => {
if (rulerPoints.length >= 2) {
// Calculate total distance
let total = 0;
for (let i = 1; i < rulerPoints.length; i++) {
const [lat1, lon1] = rulerPoints[i - 1];
const [lat2, lon2] = rulerPoints[i];
const R = 6371;
const dLat = ((lat2 - lat1) * Math.PI) / 180;
const dLon = ((lon2 - lon1) * Math.PI) / 180;
const a = Math.sin(dLat / 2) ** 2 +
Math.cos((lat1 * Math.PI) / 180) * Math.cos((lat2 * Math.PI) / 180) * Math.sin(dLon / 2) ** 2;
total += R * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
}
addToast(`Distance: ${total.toFixed(2)} km (${(total * 1000).toFixed(0)} m)`, 'info');
}
setRulerPoints([]);
clearTool();
}, [rulerPoints, addToast, clearTool]);
// Clear ruler points when tool changes away from ruler
useEffect(() => {
if (activeTool !== 'ruler') {
setRulerPoints([]);
}
}, [activeTool]);
const handleFitToSites = useCallback(() => {
if (sites.length === 0 || !mapRef.current) return;
const bounds = sites.map((site) => [site.lat, site.lon] as [number, number]);
@@ -75,14 +231,24 @@ export default function MapView({ onMapClick, onEditSite, children }: MapViewPro
mapRef.current?.setView([48.4, 35.0], 7);
}, []);
// Toggle ruler tool
const handleRulerToggle = useCallback(() => {
if (activeTool === 'ruler') {
clearTool();
} else {
setActiveTool('ruler');
}
}, [activeTool, setActiveTool, clearTool]);
return (
<>
<MapContainer
center={[48.4, 35.0]}
zoom={7}
className={`w-full h-full ${isPlacingMode ? 'cursor-crosshair' : ''}`}
className="w-full h-full"
>
<MapRefSetter mapRef={mapRef} />
<CursorManager />
{/* Base OSM layer */}
<TileLayer
attribution='&copy; <a href="https://www.openstreetmap.org/copyright">OpenStreetMap</a>'
@@ -99,16 +265,22 @@ export default function MapView({ onMapClick, onEditSite, children }: MapViewPro
)}
{/* Elevation color overlay from SRTM terrain data */}
<ElevationLayer visible={showElevationOverlay} opacity={elevationOpacity} />
<MapClickHandler onMapClick={onMapClick} />
{/* Unified click handler */}
<MapClickHandler
onSitePlacement={onSitePlacement}
onRxPlacement={onRxPlacement}
onRulerClick={handleRulerClick}
sites={sites}
/>
{/* Right-click handler for ruler */}
<RulerRightClickHandler onRightClick={handleRulerRightClick} />
<MapExtras />
{showElevationInfo && <ElevationDisplay />}
<CoordinateGrid visible={showGrid} />
{/* Ruler visualization (only points and line, no click handling) */}
<MeasurementTool
enabled={measurementMode}
onComplete={(distKm) => {
addToast(`Distance: ${distKm.toFixed(2)} km (${(distKm * 1000).toFixed(0)} m)`, 'info');
setMeasurementMode(false);
}}
points={rulerPoints}
onProfileRequest={onProfileRequest}
/>
{sites
.filter((s) => s.visible)
@@ -161,12 +333,12 @@ export default function MapView({ onMapClick, onEditSite, children }: MapViewPro
Grid
</button>
<button
onClick={() => setMeasurementMode(!measurementMode)}
onClick={handleRulerToggle}
className={`bg-white dark:bg-dark-surface shadow-lg rounded px-3 py-2 text-sm
hover:bg-gray-50 dark:hover:bg-dark-border transition-colors
text-gray-700 dark:text-dark-text min-h-[36px]
${measurementMode ? 'ring-2 ring-orange-500' : ''}`}
title={measurementMode ? 'Exit measurement mode' : 'Measure distance (click points, right-click to finish)'}
${activeTool === 'ruler' ? 'ring-2 ring-orange-500' : ''}`}
title={activeTool === 'ruler' ? 'Exit measurement mode' : 'Measure point-to-point distance'}
>
Ruler
</button>
@@ -180,6 +352,18 @@ export default function MapView({ onMapClick, onEditSite, children }: MapViewPro
>
Elev
</button>
{onToggleLinkBudget && (
<button
onClick={onToggleLinkBudget}
className={`bg-white dark:bg-dark-surface shadow-lg rounded px-3 py-2 text-sm
hover:bg-gray-50 dark:hover:bg-dark-border transition-colors
text-gray-700 dark:text-dark-text min-h-[36px]
${showLinkBudget ? 'ring-2 ring-purple-500' : ''}`}
title={showLinkBudget ? 'Close Link Budget Calculator' : 'Open Link Budget Calculator'}
>
LB
</button>
)}
</div>
</>
);

View File

@@ -1,10 +1,17 @@
import { useEffect, useRef, useState } from 'react';
import { useMap, Polyline, Marker } from 'react-leaflet';
/**
* Ruler/Measurement Tool Visualization
*
* Pure visualization component - receives points from parent,
* click handling is done by the centralized MapClickHandler.
*/
import { useEffect, useRef } from 'react';
import { Polyline, Marker } from 'react-leaflet';
import L from 'leaflet';
interface MeasurementToolProps {
enabled: boolean;
onComplete?: (distanceKm: number) => void;
points: [number, number][];
onProfileRequest?: (start: [number, number], end: [number, number]) => void;
}
function haversineKm(
@@ -39,50 +46,18 @@ const dotIcon = L.divIcon({
html: '<div style="width:10px;height:10px;background:white;border:2px solid #333;border-radius:50%;"></div>',
});
export default function MeasurementTool({ enabled, onComplete }: MeasurementToolProps) {
const map = useMap();
const [points, setPoints] = useState<[number, number][]>([]);
const pointsRef = useRef(points);
useEffect(() => {
pointsRef.current = points;
}, [points]);
export default function MeasurementTool({ points, onProfileRequest }: MeasurementToolProps) {
const overlayRef = useRef<HTMLDivElement>(null);
// Clear on disable
/* eslint-disable react-hooks/set-state-in-effect */
// Use Leaflet's DOM event utility to block click propagation to the map
useEffect(() => {
if (!enabled) {
setPoints([]);
if (overlayRef.current) {
L.DomEvent.disableClickPropagation(overlayRef.current);
L.DomEvent.disableScrollPropagation(overlayRef.current);
}
}, [enabled]);
/* eslint-enable react-hooks/set-state-in-effect */
}, [points.length]); // Re-run when overlay appears/disappears
// Click handler: add measurement point
useEffect(() => {
if (!enabled) return;
const handleClick = (e: L.LeafletMouseEvent) => {
setPoints((prev) => [...prev, [e.latlng.lat, e.latlng.lng]]);
};
const handleRightClick = (e: L.LeafletMouseEvent) => {
L.DomEvent.preventDefault(e.originalEvent);
const pts = pointsRef.current;
if (pts.length >= 2 && onComplete) {
onComplete(totalDistance(pts));
}
setPoints([]);
};
map.on('click', handleClick);
map.on('contextmenu', handleRightClick);
return () => {
map.off('click', handleClick);
map.off('contextmenu', handleRightClick);
};
}, [map, enabled, onComplete]);
if (!enabled || points.length === 0) return null;
if (points.length === 0) return null;
const dist = totalDistance(points);
@@ -99,6 +74,7 @@ export default function MeasurementTool({ enabled, onComplete }: MeasurementTool
))}
{dist > 0 && (
<div
ref={overlayRef}
style={{
position: 'absolute',
top: '10px',
@@ -109,13 +85,29 @@ export default function MeasurementTool({ enabled, onComplete }: MeasurementTool
padding: '6px 14px',
borderRadius: '6px',
zIndex: 2000,
pointerEvents: 'none',
fontSize: '13px',
fontWeight: 600,
letterSpacing: '0.3px',
}}
>
Distance: {dist.toFixed(2)} km ({(dist * 1000).toFixed(0)} m)
{points.length >= 2 && onProfileRequest && (
<button
onClick={() => onProfileRequest(points[0], points[points.length - 1])}
style={{
marginLeft: 10,
background: 'rgba(255,255,255,0.15)',
border: '1px solid rgba(255,255,255,0.3)',
color: 'white',
padding: '2px 8px',
borderRadius: 4,
cursor: 'pointer',
fontSize: 11,
}}
>
Terrain Profile
</button>
)}
</div>
)}
</>

View File

@@ -0,0 +1,405 @@
/**
* Canvas-based terrain elevation profile viewer with Fresnel zone visualization.
*
* Shows elevation cross-section between two geographic points with:
* - Green filled terrain area
* - Dashed red LOS line from start to end
* - Optional Fresnel zone ellipse (light blue)
* - Red highlighting where terrain intrudes Fresnel zone
* - Hover tooltip with elevation/distance at cursor
* - Stats bar: total distance, min/max elevation, Fresnel status
*/
import { useEffect, useRef, useState, useCallback } from 'react';
import L from 'leaflet';
import { api } from '@/services/api.ts';
import type { FresnelProfileResponse } from '@/services/api.ts';
interface TerrainProfileProps {
start: [number, number]; // [lat, lon]
end: [number, number]; // [lat, lon]
txHeight?: number; // TX antenna height (m)
rxHeight?: number; // RX antenna height (m)
frequency?: number; // Frequency (MHz) for Fresnel calculation
onClose: () => void;
}
const CANVAS_W = 600;
const CANVAS_H = 220;
const PAD = { top: 20, right: 20, bottom: 30, left: 50 };
const PLOT_W = CANVAS_W - PAD.left - PAD.right;
const PLOT_H = CANVAS_H - PAD.top - PAD.bottom;
export default function TerrainProfile({
start,
end,
txHeight = 30,
rxHeight = 1.5,
frequency = 1800,
onClose,
}: TerrainProfileProps) {
const canvasRef = useRef<HTMLCanvasElement>(null);
const [fresnelData, setFresnelData] = useState<FresnelProfileResponse | null>(null);
const [loading, setLoading] = useState(true);
const [error, setError] = useState<string | null>(null);
const [hover, setHover] = useState<{ x: number; idx: number } | null>(null);
const [showFresnel, setShowFresnel] = useState(true);
// Fetch Fresnel profile data (includes terrain)
useEffect(() => {
setLoading(true);
setError(null);
api
.getFresnelProfile({
tx_lat: start[0],
tx_lon: start[1],
tx_height_m: txHeight,
rx_lat: end[0],
rx_lon: end[1],
rx_height_m: rxHeight,
frequency_mhz: frequency,
num_points: 200,
})
.then((data: FresnelProfileResponse) => {
setFresnelData(data);
setLoading(false);
})
.catch((err: Error) => {
setError(err.message);
setLoading(false);
});
}, [start, end, txHeight, rxHeight, frequency]);
const profile = fresnelData?.profile;
// Draw chart
const draw = useCallback(
(hoverIdx: number | null) => {
const canvas = canvasRef.current;
if (!canvas || !profile || profile.length === 0) return;
const ctx = canvas.getContext('2d');
if (!ctx) return;
const dpr = window.devicePixelRatio || 1;
canvas.width = CANVAS_W * dpr;
canvas.height = CANVAS_H * dpr;
ctx.scale(dpr, dpr);
// Clear
ctx.clearRect(0, 0, CANVAS_W, CANVAS_H);
const terrainElevs = profile.map((p) => p.terrain_elevation);
const losHeights = profile.map((p) => p.los_height);
const fresnelTops = profile.map((p) => p.fresnel_top);
const fresnelBottoms = profile.map((p) => p.fresnel_bottom);
const distances = profile.map((p) => p.distance);
// Calculate bounds including Fresnel zone
const allHeights = showFresnel
? [...terrainElevs, ...fresnelTops, ...fresnelBottoms]
: [...terrainElevs, ...losHeights];
const minElev = Math.min(...allHeights);
const maxElev = Math.max(...allHeights);
const maxDist = distances[distances.length - 1] || 1;
// Add 10% padding to elevation range
const elevRange = maxElev - minElev || 1;
const eMin = minElev - elevRange * 0.1;
const eMax = maxElev + elevRange * 0.15;
const xScale = (d: number) => PAD.left + (d / maxDist) * PLOT_W;
const yScale = (e: number) => PAD.top + PLOT_H - ((e - eMin) / (eMax - eMin)) * PLOT_H;
// Grid lines
ctx.strokeStyle = '#e5e7eb';
ctx.lineWidth = 0.5;
const nGridY = 5;
for (let i = 0; i <= nGridY; i++) {
const y = PAD.top + (i / nGridY) * PLOT_H;
ctx.beginPath();
ctx.moveTo(PAD.left, y);
ctx.lineTo(PAD.left + PLOT_W, y);
ctx.stroke();
}
// Fresnel zone fill (light blue)
if (showFresnel) {
ctx.beginPath();
// Top boundary (left to right)
ctx.moveTo(xScale(distances[0]), yScale(fresnelTops[0]));
for (let i = 1; i < profile.length; i++) {
ctx.lineTo(xScale(distances[i]), yScale(fresnelTops[i]));
}
// Bottom boundary (right to left)
for (let i = profile.length - 1; i >= 0; i--) {
ctx.lineTo(xScale(distances[i]), yScale(fresnelBottoms[i]));
}
ctx.closePath();
ctx.fillStyle = 'rgba(59, 130, 246, 0.15)';
ctx.fill();
// Fresnel boundaries (dashed)
ctx.setLineDash([3, 3]);
ctx.strokeStyle = 'rgba(59, 130, 246, 0.4)';
ctx.lineWidth = 1;
ctx.beginPath();
ctx.moveTo(xScale(distances[0]), yScale(fresnelTops[0]));
for (let i = 1; i < profile.length; i++) {
ctx.lineTo(xScale(distances[i]), yScale(fresnelTops[i]));
}
ctx.stroke();
ctx.beginPath();
ctx.moveTo(xScale(distances[0]), yScale(fresnelBottoms[0]));
for (let i = 1; i < profile.length; i++) {
ctx.lineTo(xScale(distances[i]), yScale(fresnelBottoms[i]));
}
ctx.stroke();
ctx.setLineDash([]);
}
// Terrain fill
ctx.beginPath();
ctx.moveTo(xScale(distances[0]), yScale(terrainElevs[0]));
for (let i = 1; i < profile.length; i++) {
ctx.lineTo(xScale(distances[i]), yScale(terrainElevs[i]));
}
ctx.lineTo(xScale(distances[distances.length - 1]), PAD.top + PLOT_H);
ctx.lineTo(xScale(distances[0]), PAD.top + PLOT_H);
ctx.closePath();
ctx.fillStyle = 'rgba(34, 197, 94, 0.3)';
ctx.fill();
// Highlight Fresnel intrusions (red fill)
if (showFresnel) {
for (let i = 0; i < profile.length; i++) {
if (profile[i].clearance < 0) {
const x = xScale(distances[i]);
const yTerrain = yScale(terrainElevs[i]);
const yFresnel = yScale(fresnelBottoms[i]);
const intrusion = Math.min(yFresnel - yTerrain, 20);
if (intrusion > 0) {
ctx.fillStyle = 'rgba(239, 68, 68, 0.4)';
ctx.fillRect(x - 1, yTerrain, 3, intrusion);
}
}
}
}
// Terrain line
ctx.beginPath();
ctx.moveTo(xScale(distances[0]), yScale(terrainElevs[0]));
for (let i = 1; i < profile.length; i++) {
ctx.lineTo(xScale(distances[i]), yScale(terrainElevs[i]));
}
ctx.strokeStyle = '#16a34a';
ctx.lineWidth = 1.5;
ctx.stroke();
// LOS line (solid)
ctx.beginPath();
ctx.moveTo(xScale(distances[0]), yScale(losHeights[0]));
ctx.lineTo(xScale(distances[distances.length - 1]), yScale(losHeights[losHeights.length - 1]));
ctx.strokeStyle = '#ef4444';
ctx.lineWidth = 1.5;
ctx.stroke();
// Y axis labels
ctx.fillStyle = '#6b7280';
ctx.font = '10px monospace';
ctx.textAlign = 'right';
ctx.textBaseline = 'middle';
for (let i = 0; i <= nGridY; i++) {
const elev = eMax - (i / nGridY) * (eMax - eMin);
const y = PAD.top + (i / nGridY) * PLOT_H;
ctx.fillText(`${Math.round(elev)}m`, PAD.left - 4, y);
}
// X axis labels
ctx.textAlign = 'center';
ctx.textBaseline = 'top';
const nGridX = 5;
for (let i = 0; i <= nGridX; i++) {
const d = (i / nGridX) * maxDist;
const x = xScale(d);
ctx.fillText(`${(d / 1000).toFixed(1)}km`, x, PAD.top + PLOT_H + 4);
}
// Hover crosshair + tooltip
if (hoverIdx !== null && hoverIdx >= 0 && hoverIdx < profile.length) {
const p = profile[hoverIdx];
const hx = xScale(p.distance);
const hy = yScale(p.terrain_elevation);
// Vertical line
ctx.beginPath();
ctx.moveTo(hx, PAD.top);
ctx.lineTo(hx, PAD.top + PLOT_H);
ctx.strokeStyle = 'rgba(0, 0, 0, 0.3)';
ctx.lineWidth = 1;
ctx.stroke();
// Dot on terrain
ctx.beginPath();
ctx.arc(hx, hy, 4, 0, Math.PI * 2);
ctx.fillStyle = '#2563eb';
ctx.fill();
// Tooltip with clearance info
const clearanceText = showFresnel ? ` | F1: ${p.clearance >= 0 ? '+' : ''}${p.clearance.toFixed(0)}m` : '';
const text = `${Math.round(p.terrain_elevation)}m @ ${(p.distance / 1000).toFixed(2)}km${clearanceText}`;
ctx.font = 'bold 11px monospace';
const tw = ctx.measureText(text).width + 10;
const tx = Math.min(hx + 8, CANVAS_W - tw - 4);
const ty = Math.max(hy - 22, PAD.top);
ctx.fillStyle = 'rgba(0, 0, 0, 0.8)';
ctx.beginPath();
ctx.roundRect(tx, ty, tw, 18, 3);
ctx.fill();
ctx.fillStyle = p.clearance < 0 && showFresnel ? '#fca5a5' : 'white';
ctx.textAlign = 'left';
ctx.textBaseline = 'middle';
ctx.fillText(text, tx + 5, ty + 9);
}
},
[profile, showFresnel]
);
// Re-draw on profile load or hover change
useEffect(() => {
draw(hover?.idx ?? null);
}, [draw, hover]);
// Mouse move handler
const handleMouseMove = useCallback(
(e: React.MouseEvent<HTMLCanvasElement>) => {
if (!profile || profile.length === 0) return;
const canvas = canvasRef.current;
if (!canvas) return;
const rect = canvas.getBoundingClientRect();
const mx = e.clientX - rect.left;
const relX = (mx - PAD.left) / PLOT_W;
if (relX < 0 || relX > 1) {
setHover(null);
return;
}
const idx = Math.round(relX * (profile.length - 1));
setHover({ x: mx, idx });
},
[profile]
);
const handleMouseLeave = useCallback(() => setHover(null), []);
// Stats
const minElev = profile ? Math.min(...profile.map((p) => p.terrain_elevation)) : 0;
const maxElev = profile ? Math.max(...profile.map((p) => p.terrain_elevation)) : 0;
const totalDist = fresnelData?.total_distance_m ?? 0;
// Status badge
const getStatusBadge = () => {
if (!fresnelData) return null;
if (fresnelData.los_clear && fresnelData.fresnel_clear) {
return <span className="text-green-600 dark:text-green-400 font-medium">LOS Clear</span>;
} else if (fresnelData.los_clear) {
return (
<span className="text-yellow-600 dark:text-yellow-400 font-medium">
F1 {fresnelData.fresnel_clear_pct}% Clear
</span>
);
} else {
return <span className="text-red-500 font-medium">LOS Blocked</span>;
}
};
// Ref for the container to block Leaflet events
const containerRef = useRef<HTMLDivElement>(null);
// Use Leaflet's DOM event utility to block click propagation to the map
useEffect(() => {
if (containerRef.current) {
L.DomEvent.disableClickPropagation(containerRef.current);
L.DomEvent.disableScrollPropagation(containerRef.current);
}
}, []);
return (
<div
ref={containerRef}
className="absolute bottom-6 left-1/2 -translate-x-1/2 z-[1500]
bg-white dark:bg-dark-surface rounded-lg shadow-xl border border-gray-200 dark:border-dark-border
overflow-hidden"
style={{ width: CANVAS_W + 16 }}
>
{/* Header */}
<div className="flex items-center justify-between px-3 py-2 border-b border-gray-100 dark:border-dark-border">
<div className="flex items-center gap-3">
<span className="text-xs font-semibold text-gray-700 dark:text-dark-text">
Terrain Profile
</span>
<label className="flex items-center gap-1.5 text-[10px] text-gray-500 cursor-pointer">
<input
type="checkbox"
checked={showFresnel}
onChange={(e) => setShowFresnel(e.target.checked)}
className="w-3 h-3"
/>
Fresnel Zone ({frequency} MHz)
</label>
</div>
<button
onClick={onClose}
className="text-gray-400 hover:text-gray-600 dark:hover:text-white text-sm w-6 h-6 flex items-center justify-center rounded hover:bg-gray-100 dark:hover:bg-dark-border"
>
{'\u2715'}
</button>
</div>
{/* Canvas */}
<div className="px-2 py-1">
{loading && (
<div className="flex items-center justify-center h-[220px] text-sm text-gray-400">
Loading profile...
</div>
)}
{error && (
<div className="flex items-center justify-center h-[220px] text-sm text-red-400">
{error}
</div>
)}
{!loading && !error && profile && (
<canvas
ref={canvasRef}
style={{ width: CANVAS_W, height: CANVAS_H, cursor: 'crosshair' }}
onMouseMove={handleMouseMove}
onMouseLeave={handleMouseLeave}
/>
)}
</div>
{/* Stats bar */}
{profile && profile.length > 0 && (
<div className="flex items-center justify-between px-3 py-1.5 bg-gray-50 dark:bg-dark-bg text-[10px] text-gray-500 dark:text-dark-muted border-t border-gray-100 dark:border-dark-border">
<span>Distance: {(totalDist / 1000).toFixed(2)} km</span>
<span>Min: {Math.round(minElev)} m</span>
<span>Max: {Math.round(maxElev)} m</span>
{showFresnel && fresnelData && (
<span>Clearance: {fresnelData.worst_clearance_m.toFixed(0)} m</span>
)}
{getStatusBadge()}
</div>
)}
{/* Recommendation */}
{showFresnel && fresnelData && !fresnelData.fresnel_clear && (
<div className="px-3 py-1.5 text-[10px] bg-yellow-50 dark:bg-yellow-900/20 text-yellow-700 dark:text-yellow-300 border-t border-yellow-200 dark:border-yellow-800">
{fresnelData.recommendation} (~{fresnelData.estimated_loss_db.toFixed(1)} dB loss)
</div>
)}
</div>
);
}
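The `fresnel_top`/`fresnel_bottom` values plotted above come from the backend, but the geometry can be reproduced locally. A sketch assuming the backend uses the standard first-Fresnel-zone radius, r1 = sqrt(lambda * d1 * d2 / (d1 + d2)) — an assumption, since the actual server-side formula is not shown here:

```typescript
// First Fresnel zone radius (meters) at a point d1 meters from TX
// and d2 meters from RX, for a given carrier frequency in MHz.
function fresnelRadiusM(d1: number, d2: number, frequencyMhz: number): number {
  const c = 299_792_458; // speed of light, m/s
  const lambda = c / (frequencyMhz * 1e6); // wavelength, m
  return Math.sqrt((lambda * d1 * d2) / (d1 + d2));
}
```

At the midpoint of a 10 km link at 1800 MHz this gives roughly 20 m, which is why mid-path terrain dominates the `clearance` highlighting in the chart.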

View File

@@ -0,0 +1,669 @@
/**
* WebGL coverage layer using texture-based value interpolation.
*
* Simple approach (like CloudRF surface raster):
* 1. Create texture where each pixel = one grid cell's RSRP value
* 2. GPU's GL_LINEAR filtering interpolates between adjacent cells
* 3. Fragment shader maps interpolated value to color gradient
*/
import { useEffect, useRef, useMemo, useCallback } from 'react';
import { useMap } from 'react-leaflet';
export interface CoveragePoint {
lat: number;
lon: number;
rsrp: number;
}
interface WebGLCoverageLayerProps {
points: CoveragePoint[];
opacity: number;
minRsrp?: number;
maxRsrp?: number;
visible: boolean;
onWebGLFailed?: () => void;
}
const VERTEX_SHADER = `
attribute vec2 a_position;
varying vec2 v_uv;
void main() {
gl_Position = vec4(a_position, 0.0, 1.0);
// Map position to UV, flip Y
v_uv = vec2((a_position.x + 1.0) * 0.5, 1.0 - (a_position.y + 1.0) * 0.5);
}
`;
// Fragment shader with smoothstep interpolation for C2 continuity
// This removes visible grid edges with minimal performance cost
const FRAGMENT_SHADER = `
precision mediump float;
uniform sampler2D u_coverage;
uniform vec2 u_textureSize;
varying vec2 v_uv;
// Quintic Hermite smoothstep - gives C2 continuity (smooth 2nd derivatives)
// This removes visible "seams" between grid cells
vec4 textureSmooth(sampler2D tex, vec2 uv, vec2 texSize) {
vec2 p = uv * texSize + 0.5;
vec2 i = floor(p);
vec2 f = p - i;
// Quintic hermite curve: f³(6f² - 15f + 10)
f = f * f * f * (f * (f * 6.0 - 15.0) + 10.0);
return texture2D(tex, (i + f - 0.5) / texSize);
}
// RSRP to color gradient (red -> orange -> yellow -> green -> cyan)
// Applied AFTER interpolation for clean gradients
vec3 rsrpToColor(float t) {
// t: 0 = weak (red), 1 = strong (cyan)
if (t < 0.25) return mix(vec3(1.0, 0.0, 0.0), vec3(1.0, 0.5, 0.0), t / 0.25);
if (t < 0.5) return mix(vec3(1.0, 0.5, 0.0), vec3(1.0, 1.0, 0.0), (t - 0.25) / 0.25);
if (t < 0.75) return mix(vec3(1.0, 1.0, 0.0), vec3(0.0, 1.0, 0.0), (t - 0.5) / 0.25);
return mix(vec3(0.0, 1.0, 0.0), vec3(0.0, 1.0, 1.0), (t - 0.75) / 0.25);
}
void main() {
// 1. Sample with smoothstep interpolation (RAW RSRP value)
vec4 texel = textureSmooth(u_coverage, v_uv, u_textureSize);
// 2. Alpha channel indicates coverage presence
if (texel.a < 0.1) discard;
// 3. Apply colormap AFTER interpolation (critical for clean gradients)
float rsrp = texel.r;
vec3 color = rsrpToColor(rsrp);
// 4. Smooth boundary fading
float boundaryAlpha = smoothstep(0.01, 0.05, rsrp);
gl_FragColor = vec4(color, boundaryAlpha * 0.85);
}
`;
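The fade curve the shader applies inside `textureSmooth` is easy to verify on the CPU. A small reference of the same quintic Hermite polynomial, f(t) = t^3(6t^2 - 15t + 10), which has zero first and second derivatives at t = 0 and t = 1 — that flatness at the endpoints is what hides the cell seams:

```typescript
// CPU reference for the shader's quintic Hermite fade ("smootherstep").
// f(0) = 0, f(1) = 1, and f', f'' vanish at both endpoints.
function quinticFade(t: number): number {
  return t * t * t * (t * (t * 6 - 15) + 10);
}
```

Compared with plain `mix` (linear) or cubic smoothstep, the quintic keeps the interpolated field C2-continuous across texel boundaries, matching the comment above at negligible shader cost.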
function compileShader(gl: WebGLRenderingContext, source: string, type: number): WebGLShader | null {
const shader = gl.createShader(type);
if (!shader) return null;
gl.shaderSource(shader, source);
gl.compileShader(shader);
if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
console.error('Shader error:', gl.getShaderInfoLog(shader));
gl.deleteShader(shader);
return null;
}
return shader;
}
function createProgram(gl: WebGLRenderingContext): WebGLProgram | null {
const vs = compileShader(gl, VERTEX_SHADER, gl.VERTEX_SHADER);
const fs = compileShader(gl, FRAGMENT_SHADER, gl.FRAGMENT_SHADER);
if (!vs || !fs) return null;
const program = gl.createProgram();
if (!program) return null;
gl.attachShader(program, vs);
gl.attachShader(program, fs);
gl.linkProgram(program);
if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
console.error('Program error:', gl.getProgramInfoLog(program));
return null;
}
return program;
}
interface GridInfo {
width: number;
height: number;
minLat: number;
maxLat: number;
minLon: number;
maxLon: number;
latStep: number;
lonStep: number;
}
function detectGrid(points: CoveragePoint[]): GridInfo | null {
if (points.length < 4) return null;
// Calculate bounds directly from points (no rounding)
let minLat = Infinity, maxLat = -Infinity;
let minLon = Infinity, maxLon = -Infinity;
for (const p of points) {
if (p.lat < minLat) minLat = p.lat;
if (p.lat > maxLat) maxLat = p.lat;
if (p.lon < minLon) minLon = p.lon;
if (p.lon > maxLon) maxLon = p.lon;
}
// Find grid step by looking at sorted unique coordinates
const lats = new Set<number>();
const lons = new Set<number>();
for (const p of points) {
lats.add(Math.round(p.lat * 1000000) / 1000000); // 6 decimal places
lons.add(Math.round(p.lon * 1000000) / 1000000);
}
const sortedLats = Array.from(lats).sort((a, b) => a - b);
const sortedLons = Array.from(lons).sort((a, b) => a - b);
// Calculate step from median difference between adjacent points
const latDiffs: number[] = [];
const lonDiffs: number[] = [];
for (let i = 1; i < sortedLats.length; i++) {
latDiffs.push(sortedLats[i] - sortedLats[i-1]);
}
for (let i = 1; i < sortedLons.length; i++) {
lonDiffs.push(sortedLons[i] - sortedLons[i-1]);
}
latDiffs.sort((a, b) => a - b);
lonDiffs.sort((a, b) => a - b);
const latStep = latDiffs[Math.floor(latDiffs.length / 2)] || (maxLat - minLat) / 10;
const lonStep = lonDiffs[Math.floor(lonDiffs.length / 2)] || (maxLon - minLon) / 10;
// Calculate grid dimensions from actual extent and step
const width = Math.max(2, Math.round((maxLon - minLon) / lonStep) + 1);
const height = Math.max(2, Math.round((maxLat - minLat) / latStep) + 1);
return {
width,
height,
minLat,
maxLat,
minLon,
maxLon,
latStep,
lonStep,
};
}
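The step-detection trick above — median of adjacent coordinate differences rather than the mean — is what keeps `detectGrid` robust when rows or columns are missing from the coverage result. Isolated as a sketch (helper name is illustrative, not from the file):

```typescript
// Median of adjacent differences in a sorted coordinate list.
// A gap (missing row/column) produces one outsized diff that the
// median ignores, where a mean would be skewed by it.
function medianStep(sorted: number[]): number {
  const diffs: number[] = [];
  for (let i = 1; i < sorted.length; i++) diffs.push(sorted[i] - sorted[i - 1]);
  diffs.sort((a, b) => a - b);
  return diffs[Math.floor(diffs.length / 2)];
}
```

With coordinates `[0, 1, 2, 4, 5]` (one missing row at 3), the mean diff is 1.25 but the median correctly recovers a step of 1.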
interface TextureResult {
texture: WebGLTexture;
width: number;
height: number;
}
function createCoverageTexture(
gl: WebGLRenderingContext,
points: CoveragePoint[],
grid: GridInfo,
minRsrp: number,
maxRsrp: number
): TextureResult | null {
const { width, height, minLat, maxLat, minLon, maxLon } = grid;
const latRange = maxLat - minLat;
const lonRange = maxLon - minLon;
const rsrpRange = maxRsrp - minRsrp;
// Step 1: Create sparse grid with actual point positions
// Store normalized RSRP value (0-1) at each grid cell that has data
const sparseGrid = new Map<number, number>(); // key = gy * width + gx, value = normalized RSRP
for (const p of points) {
const gx = Math.round((p.lon - minLon) / lonRange * (width - 1));
const gy = Math.round((p.lat - minLat) / latRange * (height - 1));
if (gx >= 0 && gx < width && gy >= 0 && gy < height) {
const normalized = Math.max(0, Math.min(1, (p.rsrp - minRsrp) / rsrpRange));
const key = gy * width + gx;
// Keep the stronger signal if multiple points map to same cell
if (!sparseGrid.has(key) || sparseGrid.get(key)! < normalized) {
sparseGrid.set(key, normalized);
}
}
}
// Step 2: For each empty cell, find nearest filled cell using expanding search
// This fills the circular coverage area properly
const data = new Uint8Array(width * height * 4);
const maxSearchRadius = Math.max(width, height); // Max distance to search
let filledCount = 0;
for (let gy = 0; gy < height; gy++) {
for (let gx = 0; gx < width; gx++) {
const key = gy * width + gx;
if (sparseGrid.has(key)) {
// Cell has actual data
const value = Math.round(sparseGrid.get(key)! * 255);
const idx = key * 4;
data[idx] = value;
data[idx + 1] = 0;
data[idx + 2] = 0;
data[idx + 3] = 255;
filledCount++;
} else {
// Find nearest cell with data using expanding square search
let found = false;
let nearestValue = 0;
let nearestDistSq = Infinity;
// Search in expanding radius
for (let r = 1; r <= maxSearchRadius && !found; r++) {
// Check cells at distance r (square perimeter)
for (let dy = -r; dy <= r && !found; dy++) {
for (let dx = -r; dx <= r; dx++) {
// Only check perimeter cells (optimization)
if (Math.abs(dx) !== r && Math.abs(dy) !== r) continue;
const nx = gx + dx;
const ny = gy + dy;
if (nx < 0 || nx >= width || ny < 0 || ny >= height) continue;
const nkey = ny * width + nx;
if (sparseGrid.has(nkey)) {
const distSq = dx * dx + dy * dy;
if (distSq < nearestDistSq) {
nearestDistSq = distSq;
nearestValue = sparseGrid.get(nkey)!;
}
}
}
}
// If we found something at this radius, use it (nearest neighbor)
if (nearestDistSq < Infinity) {
found = true;
}
}
if (found) {
// Fill with nearest neighbor value
// Apply distance-based alpha fade for smooth edges
const dist = Math.sqrt(nearestDistSq);
const maxDist = 3; // Fade out over 3 cells
const alpha = dist <= maxDist ? 255 : Math.max(0, 255 - (dist - maxDist) * 50);
const value = Math.round(nearestValue * 255);
const idx = key * 4;
data[idx] = value;
data[idx + 1] = 0;
data[idx + 2] = 0;
data[idx + 3] = Math.round(alpha);
filledCount++;
}
// If not found, leave as transparent (alpha = 0)
}
}
}
console.log('[WebGL] Texture created (nearest-neighbor filled):', {
textureSize: `${width}x${height}`,
originalPoints: sparseGrid.size,
filledCells: filledCount,
totalCells: width * height,
fillPercent: (filledCount / (width * height) * 100).toFixed(1) + '%'
});
const texture = gl.createTexture();
if (!texture) return null;
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0, gl.RGBA, gl.UNSIGNED_BYTE, data);
// LINEAR filtering for smooth interpolation between filled cells
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
return { texture, width, height };
}
export default function WebGLCoverageLayer({
points,
opacity,
minRsrp = -130,
maxRsrp = -50,
visible,
onWebGLFailed,
}: WebGLCoverageLayerProps) {
const map = useMap();
// Refs for WebGL resources
const canvasRef = useRef<HTMLCanvasElement | null>(null);
const glRef = useRef<WebGLRenderingContext | null>(null);
const programRef = useRef<WebGLProgram | null>(null);
const textureRef = useRef<WebGLTexture | null>(null);
const quadBufferRef = useRef<WebGLBuffer | null>(null);
// Track what data the current texture was built from
const lastPointsHashRef = useRef<string>('');
const boundsRef = useRef<{ minLat: number; maxLat: number; minLon: number; maxLon: number } | null>(null);
const textureSizeRef = useRef<{ width: number; height: number }>({ width: 1, height: 1 });
// Stable ref for callback to avoid re-initialization
const onWebGLFailedRef = useRef(onWebGLFailed);
onWebGLFailedRef.current = onWebGLFailed;
// Track if initialized to prevent re-runs
const initializedRef = useRef(false);
// Compute stable hash for points data
const pointsHash = useMemo(() => {
if (points.length === 0) return '';
const first = points[0];
const last = points[points.length - 1];
return `${points.length}:${first.lat.toFixed(5)}:${last.lon.toFixed(5)}:${first.rsrp.toFixed(1)}`;
}, [points]);
// Render function - only draws, no resource creation
const render = useCallback(() => {
const canvas = canvasRef.current;
const gl = glRef.current;
const program = programRef.current;
const texture = textureRef.current;
const bounds = boundsRef.current;
// DEBUG: Check what's missing if we can't render
if (!canvas || !gl || !program || !texture || !bounds) {
console.log('[WebGL] Render skipped - missing:', {
canvas: !!canvas,
gl: !!gl,
program: !!program,
texture: !!texture,
bounds: !!bounds
});
return;
}
// Position canvas over coverage area
const nw = map.latLngToLayerPoint([bounds.maxLat, bounds.minLon]);
const se = map.latLngToLayerPoint([bounds.minLat, bounds.maxLon]);
const width = Math.abs(se.x - nw.x);
const height = Math.abs(se.y - nw.y);
if (width < 1 || height < 1) return;
canvas.style.transform = `translate(${nw.x}px, ${nw.y}px)`;
canvas.style.width = `${width}px`;
canvas.style.height = `${height}px`;
// DEBUG: Log every reposition
console.log('[WebGL] Canvas repositioned:', {
transform: canvas.style.transform,
width: canvas.style.width,
height: canvas.style.height,
zoom: map.getZoom()
});
// Get texture size for shader uniform
const texSize = textureSizeRef.current;
// Set canvas resolution
const dpr = Math.min(window.devicePixelRatio || 1, 2);
const canvasW = Math.min(Math.round(width * dpr), 2048);
const canvasH = Math.min(Math.round(height * dpr), 2048);
if (canvas.width !== canvasW || canvas.height !== canvasH) {
canvas.width = canvasW;
canvas.height = canvasH;
}
// Render
gl.viewport(0, 0, canvasW, canvasH);
gl.clearColor(0, 0, 0, 0);
gl.clear(gl.COLOR_BUFFER_BIT);
gl.useProgram(program);
// Bind quad buffer
gl.bindBuffer(gl.ARRAY_BUFFER, quadBufferRef.current);
const posLoc = gl.getAttribLocation(program, 'a_position');
gl.enableVertexAttribArray(posLoc);
gl.vertexAttribPointer(posLoc, 2, gl.FLOAT, false, 0, 0);
// Bind texture
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.uniform1i(gl.getUniformLocation(program, 'u_coverage'), 0);
// Set texture size uniform (texSize already defined above) for the smoothstep sampler
const textureSizeLocation = gl.getUniformLocation(program, 'u_textureSize');
if (textureSizeLocation) {
gl.uniform2f(textureSizeLocation, texSize.width, texSize.height);
}
// Draw
gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
gl.disableVertexAttribArray(posLoc);
}, [map]);
// Effect 1: Initialize WebGL (canvas, context, program, quad buffer) - runs ONCE
useEffect(() => {
if (!visible) return;
// Skip if already initialized
if (initializedRef.current && canvasRef.current && glRef.current) {
return;
}
const pane = map.getPane('overlayPane');
if (!pane) return;
// Create canvas if needed
if (!canvasRef.current) {
// Remove any leftover canvas elements from previous sessions
const existingCanvases = pane.querySelectorAll('canvas.webgl-coverage');
existingCanvases.forEach(c => c.remove());
console.log('[WebGL] Removed', existingCanvases.length, 'leftover canvas elements');
const canvas = document.createElement('canvas');
canvas.className = 'webgl-coverage'; // Add class for identification
canvas.style.position = 'absolute';
canvas.style.pointerEvents = 'none';
canvas.style.transformOrigin = '0 0';
pane.appendChild(canvas);
canvasRef.current = canvas;
}
const canvas = canvasRef.current;
// Initialize WebGL if needed
if (!glRef.current) {
const gl = canvas.getContext('webgl', { alpha: true, premultipliedAlpha: false });
if (!gl) {
console.error('[WebGL] WebGL not available');
onWebGLFailedRef.current?.();
return;
}
glRef.current = gl;
gl.enable(gl.BLEND);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
}
const gl = glRef.current;
// Create program if needed
if (!programRef.current) {
const program = createProgram(gl);
if (!program) {
console.error('[WebGL] Failed to create program');
onWebGLFailedRef.current?.();
return;
}
programRef.current = program;
}
// Create quad buffer if needed
if (!quadBufferRef.current) {
const buf = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buf);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array([
-1, -1, 1, -1, -1, 1, 1, 1
]), gl.STATIC_DRAW);
quadBufferRef.current = buf;
}
initializedRef.current = true;
console.log('[WebGL] Initialized (should appear ONCE)');
}, [visible, map]); // Removed onWebGLFailed - use ref instead
// Effect 2: Create texture when points data changes
useEffect(() => {
if (!visible || points.length === 0 || !glRef.current) return;
// Skip if same data
if (pointsHash === lastPointsHashRef.current && textureRef.current) {
return;
}
const gl = glRef.current;
const grid = detectGrid(points);
if (!grid) return;
// Delete old texture
if (textureRef.current) {
gl.deleteTexture(textureRef.current);
textureRef.current = null;
}
// Create new texture (returns texture + dimensions)
const result = createCoverageTexture(gl, points, grid, minRsrp, maxRsrp);
if (!result) {
console.error('[WebGL] Failed to create texture');
return;
}
textureRef.current = result.texture;
lastPointsHashRef.current = pointsHash;
// Store texture size for shader uniform
textureSizeRef.current = { width: result.width, height: result.height };
// Store bounds for rendering (with half-cell padding)
const canvasBounds = {
minLat: grid.minLat - grid.latStep / 2,
maxLat: grid.maxLat + grid.latStep / 2,
minLon: grid.minLon - grid.lonStep / 2,
maxLon: grid.maxLon + grid.lonStep / 2,
};
boundsRef.current = canvasBounds;
// FULL DEBUG: Compare data extent vs canvas bounds
const lats = points.map(p => p.lat);
const lons = points.map(p => p.lon);
const dataMinLat = Math.min(...lats);
const dataMaxLat = Math.max(...lats);
const dataMinLon = Math.min(...lons);
const dataMaxLon = Math.max(...lons);
console.log('[WebGL] FULL DEBUG:', {
// Data extent (actual points)
dataMinLat: dataMinLat.toFixed(6),
dataMaxLat: dataMaxLat.toFixed(6),
dataMinLon: dataMinLon.toFixed(6),
dataMaxLon: dataMaxLon.toFixed(6),
dataLatRange: (dataMaxLat - dataMinLat).toFixed(6),
dataLonRange: (dataMaxLon - dataMinLon).toFixed(6),
// Grid detection result
gridWidth: grid.width,
gridHeight: grid.height,
gridMinLat: grid.minLat.toFixed(6),
gridMaxLat: grid.maxLat.toFixed(6),
gridMinLon: grid.minLon.toFixed(6),
gridMaxLon: grid.maxLon.toFixed(6),
gridLatStep: grid.latStep.toFixed(6),
gridLonStep: grid.lonStep.toFixed(6),
// Texture size
textureWidth: result.width,
textureHeight: result.height,
// Canvas bounds (what we use for rendering)
canvasMinLat: canvasBounds.minLat.toFixed(6),
canvasMaxLat: canvasBounds.maxLat.toFixed(6),
canvasMinLon: canvasBounds.minLon.toFixed(6),
canvasMaxLon: canvasBounds.maxLon.toFixed(6),
canvasLatRange: (canvasBounds.maxLat - canvasBounds.minLat).toFixed(6),
canvasLonRange: (canvasBounds.maxLon - canvasBounds.minLon).toFixed(6),
// Comparison
latCoveragePercent: ((canvasBounds.maxLat - canvasBounds.minLat) / (dataMaxLat - dataMinLat) * 100).toFixed(1) + '%',
lonCoveragePercent: ((canvasBounds.maxLon - canvasBounds.minLon) / (dataMaxLon - dataMinLon) * 100).toFixed(1) + '%',
// Expected
expectedRange: '~0.18 degrees for a 20km extent (10km radius)',
pointCount: points.length
});
// Initial render
render();
}, [visible, points, pointsHash, minRsrp, maxRsrp, render]);
// Effect 3: Set up map event listeners for re-rendering on move/zoom
// Note: Set up listeners even without texture - render() will check for texture
useEffect(() => {
if (!visible) return;
let frameId = 0;
let moveCount = 0;
const onMapChange = () => {
moveCount++;
if (moveCount <= 3 || moveCount % 10 === 0) {
console.log('[WebGL] Map event #' + moveCount + ', triggering render');
}
cancelAnimationFrame(frameId);
frameId = requestAnimationFrame(render);
};
map.on('move', onMapChange);
map.on('zoom', onMapChange);
map.on('resize', onMapChange);
console.log('[WebGL] Map listeners attached');
return () => {
map.off('move', onMapChange);
map.off('zoom', onMapChange);
map.off('resize', onMapChange);
cancelAnimationFrame(frameId);
console.log('[WebGL] Map listeners detached');
};
}, [visible, map, render]);
// Effect 4: Update opacity without recreating anything
useEffect(() => {
if (canvasRef.current) {
canvasRef.current.style.opacity = String(opacity);
}
}, [opacity]);
// Effect 5: Hide/show canvas based on visibility
useEffect(() => {
if (canvasRef.current) {
canvasRef.current.style.display = visible ? 'block' : 'none';
}
}, [visible]);
// Cleanup on unmount
useEffect(() => {
return () => {
const gl = glRef.current;
if (gl) {
if (textureRef.current) gl.deleteTexture(textureRef.current);
if (quadBufferRef.current) gl.deleteBuffer(quadBufferRef.current);
if (programRef.current) gl.deleteProgram(programRef.current);
}
if (canvasRef.current) {
canvasRef.current.remove();
canvasRef.current = null;
}
glRef.current = null;
programRef.current = null;
textureRef.current = null;
quadBufferRef.current = null;
};
}, []);
return null;
}
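The half-cell padding in Effect 2 above can be sketched on its own. This is an illustrative stand-in (the `Grid` shape and `padBounds` name are assumptions, mirroring the component rather than its actual helpers): each texel is centered on a grid point, so the rendered quad must extend half a cell beyond the outermost centers or the edge cells get clipped.

```typescript
// Hypothetical sketch of the half-cell bounds padding (not the real API).
interface Grid {
  minLat: number; maxLat: number;
  minLon: number; maxLon: number;
  latStep: number; lonStep: number;
}

// Expand the grid's center-to-center extent by half a cell on every side,
// so the canvas covers the full footprint of the edge cells.
function padBounds(grid: Grid) {
  return {
    minLat: grid.minLat - grid.latStep / 2,
    maxLat: grid.maxLat + grid.latStep / 2,
    minLon: grid.minLon - grid.lonStep / 2,
    maxLon: grid.maxLon + grid.lonStep / 2,
  };
}

const b = padBounds({
  minLat: 60, maxLat: 60.09, minLon: 24, maxLon: 24.18,
  latStep: 0.01, lonStep: 0.02,
});
console.log((b.maxLat - b.minLat).toFixed(2)); // data range + one full cell
```

The padded range is always the center-to-center range plus exactly one cell step, which is why the debug output compares canvas bounds against the raw data extent.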

View File

@@ -0,0 +1,632 @@
/**
* WebGL Radial Gradients Coverage Layer
*
* Uses multi-pass additive blending to render smooth radial gradients
* around each coverage point, similar to Canvas GeographicHeatmap but GPU-accelerated.
*
* Approach:
* 1. Render each point as a quad with radial falloff (only when data changes)
* 2. Use additive blending to accumulate (weight * rsrp, weight)
* 3. Final pass: normalize and apply colormap (on every frame)
*/
import { useEffect, useRef, useMemo, useCallback } from 'react';
import { useMap } from 'react-leaflet';
// Logging: 0=off, 1=errors, 2=info, 3=debug
const LOG_LEVEL = 2;
const log = (level: number, ...args: unknown[]) => {
if (level <= LOG_LEVEL) console.log('[WebGL Radial]', ...args);
};
export interface CoveragePoint {
lat: number;
lon: number;
rsrp: number;
}
interface WebGLRadialCoverageLayerProps {
points: CoveragePoint[];
opacity: number;
minRsrp?: number;
maxRsrp?: number;
visible: boolean;
radiusMeters?: number;
onWebGLFailed?: () => void;
}
// Point accumulation vertex shader
const POINT_VERTEX_SHADER = `
attribute vec2 a_position; // quad vertices (-1 to 1)
attribute vec2 a_pointPos; // point position in normalized coords
attribute float a_pointRsrp; // normalized RSRP (0-1)
attribute float a_pointRadius; // radius in normalized coords
varying vec2 v_localPos;
varying float v_rsrp;
void main() {
// Expand quad around point center
vec2 pos = a_pointPos + a_position * a_pointRadius;
gl_Position = vec4(pos * 2.0 - 1.0, 0.0, 1.0); // Map 0-1 to clip space -1 to 1
v_localPos = a_position; // -1 to 1 within the quad
v_rsrp = a_pointRsrp;
}
`;
// Point accumulation fragment shader
const POINT_FRAGMENT_SHADER = `
precision highp float;
varying vec2 v_localPos;
varying float v_rsrp;
void main() {
// Radial distance from center (0 at center, 1 at edge)
float dist = length(v_localPos);
// Discard outside circle
if (dist > 1.0) discard;
// Radial falloff - softer gaussian for better edge coverage
// exp(-2) = 0.135 at edge vs exp(-3) = 0.05, giving more contribution from edge points
float weight = exp(-dist * dist * 2.0);
// Output: (weight * rsrp, weight, 0, 0)
// Using RG channels for accumulation
gl_FragColor = vec4(weight * v_rsrp, weight, 0.0, 1.0);
}
`;
// Final compositing vertex shader
const COMPOSITE_VERTEX_SHADER = `
attribute vec2 a_position;
varying vec2 v_uv;
void main() {
gl_Position = vec4(a_position, 0.0, 1.0);
v_uv = (a_position + 1.0) * 0.5;
}
`;
// Final compositing fragment shader
const COMPOSITE_FRAGMENT_SHADER = `
precision highp float;
uniform sampler2D u_accumTexture;
uniform float u_opacity;
varying vec2 v_uv;
vec3 rsrpToColor(float t) {
// t: 0 = weak (red), 1 = strong (cyan)
if (t < 0.25) return mix(vec3(1.0, 0.0, 0.0), vec3(1.0, 0.5, 0.0), t / 0.25);
if (t < 0.5) return mix(vec3(1.0, 0.5, 0.0), vec3(1.0, 1.0, 0.0), (t - 0.25) / 0.25);
if (t < 0.75) return mix(vec3(1.0, 1.0, 0.0), vec3(0.0, 1.0, 0.0), (t - 0.5) / 0.25);
return mix(vec3(0.0, 1.0, 0.0), vec3(0.0, 1.0, 1.0), (t - 0.75) / 0.25);
}
void main() {
vec4 accum = texture2D(u_accumTexture, v_uv);
float totalValue = accum.r;
float totalWeight = accum.g;
// No coverage - discard if weight is truly zero
if (totalWeight < 0.0001) discard;
// Weighted average RSRP
float avgRsrp = clamp(totalValue / totalWeight, 0.0, 1.0);
// Color mapping
vec3 color = rsrpToColor(avgRsrp);
// Alpha based on weight (fade at edges)
float alpha = min(1.0, totalWeight * 0.1) * u_opacity;
gl_FragColor = vec4(color, alpha);
}
`;
function compileShader(gl: WebGLRenderingContext, source: string, type: number): WebGLShader | null {
const shader = gl.createShader(type);
if (!shader) return null;
gl.shaderSource(shader, source);
gl.compileShader(shader);
if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
console.error('[WebGL Radial] Shader error:', gl.getShaderInfoLog(shader));
gl.deleteShader(shader);
return null;
}
return shader;
}
function createProgram(gl: WebGLRenderingContext, vsSource: string, fsSource: string): WebGLProgram | null {
const vs = compileShader(gl, vsSource, gl.VERTEX_SHADER);
const fs = compileShader(gl, fsSource, gl.FRAGMENT_SHADER);
if (!vs || !fs) return null;
const program = gl.createProgram();
if (!program) return null;
gl.attachShader(program, vs);
gl.attachShader(program, fs);
gl.linkProgram(program);
if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
console.error('[WebGL Radial] Program error:', gl.getProgramInfoLog(program));
return null;
}
// Clean up shaders after linking
gl.deleteShader(vs);
gl.deleteShader(fs);
return program;
}
interface Bounds {
minLat: number;
maxLat: number;
minLon: number;
maxLon: number;
}
export default function WebGLRadialCoverageLayer({
points,
opacity,
minRsrp = -130,
maxRsrp = -50,
visible,
radiusMeters = 400,
onWebGLFailed,
}: WebGLRadialCoverageLayerProps) {
const map = useMap();
// Refs for WebGL resources
const canvasRef = useRef<HTMLCanvasElement | null>(null);
const glRef = useRef<WebGLRenderingContext | null>(null);
const pointProgramRef = useRef<WebGLProgram | null>(null);
const compositeProgramRef = useRef<WebGLProgram | null>(null);
const accumTextureRef = useRef<WebGLTexture | null>(null);
const framebufferRef = useRef<WebGLFramebuffer | null>(null);
const quadBufferRef = useRef<WebGLBuffer | null>(null);
const pointBufferRef = useRef<WebGLBuffer | null>(null);
const boundsRef = useRef<Bounds | null>(null);
const initializedRef = useRef(false);
const lastPointsHashRef = useRef<string>('');
const instExtRef = useRef<ANGLE_instanced_arrays | null>(null);
// Track if points need to be re-rendered (expensive pass)
const needsPointRenderRef = useRef(true);
// Stable ref for callback
const onWebGLFailedRef = useRef(onWebGLFailed);
onWebGLFailedRef.current = onWebGLFailed;
// Track framebuffer size
const fbSizeRef = useRef<{ width: number; height: number }>({ width: 0, height: 0 });
// Compute points hash for change detection
const pointsHash = useMemo(() => {
if (points.length === 0) return 'empty';
const first = points[0];
const last = points[points.length - 1];
return `${points.length}:${first.lat.toFixed(5)}:${last.lon.toFixed(5)}:${first.rsrp.toFixed(1)}`;
}, [points]);
// Calculate bounds from points
const calculateBounds = useCallback((pts: CoveragePoint[]): Bounds | null => {
if (pts.length === 0) return null;
let minLat = Infinity, maxLat = -Infinity;
let minLon = Infinity, maxLon = -Infinity;
for (const p of pts) {
if (p.lat < minLat) minLat = p.lat;
if (p.lat > maxLat) maxLat = p.lat;
if (p.lon < minLon) minLon = p.lon;
if (p.lon > maxLon) maxLon = p.lon;
}
// Padding needs to accommodate the radial gradient of edge points
// Each point's gradient extends beyond its center, use 12% of range as padding
const latRangeRaw = maxLat - minLat;
const lonRangeRaw = maxLon - minLon;
const latPaddingGradient = latRangeRaw * 0.12;
const lonPaddingGradient = lonRangeRaw * 0.12;
const latPaddingRadius = radiusMeters / 111000;
const lonPaddingRadius = radiusMeters / (111000 * Math.cos((minLat + maxLat) / 2 * Math.PI / 180));
const latPadding = Math.max(latPaddingGradient, latPaddingRadius);
const lonPadding = Math.max(lonPaddingGradient, lonPaddingRadius);
log(2, 'Bounds padding:', { latPadding: latPadding.toFixed(5), lonPadding: lonPadding.toFixed(5) });
return {
minLat: minLat - latPadding,
maxLat: maxLat + latPadding,
minLon: minLon - lonPadding,
maxLon: maxLon + lonPadding,
};
}, [radiusMeters]);
// Render function - split into point accumulation (expensive) and composite (cheap)
const render = useCallback(() => {
const canvas = canvasRef.current;
const gl = glRef.current;
const pointProgram = pointProgramRef.current;
const compositeProgram = compositeProgramRef.current;
const framebuffer = framebufferRef.current;
const accumTexture = accumTextureRef.current;
const quadBuffer = quadBufferRef.current;
const bounds = boundsRef.current;
if (!canvas || !gl || !pointProgram || !compositeProgram || !framebuffer ||
!accumTexture || !quadBuffer || !bounds) {
return;
}
log(3, 'render() points:', points.length, 'needsPointRender:', needsPointRenderRef.current);
// Position canvas over coverage area
const nw = map.latLngToLayerPoint([bounds.maxLat, bounds.minLon]);
const se = map.latLngToLayerPoint([bounds.minLat, bounds.maxLon]);
const width = Math.abs(se.x - nw.x);
const height = Math.abs(se.y - nw.y);
if (width < 1 || height < 1) return;
canvas.style.transform = `translate(${nw.x}px, ${nw.y}px)`;
canvas.style.width = `${width}px`;
canvas.style.height = `${height}px`;
// Set canvas resolution
const dpr = Math.min(window.devicePixelRatio || 1, 2);
const canvasW = Math.min(Math.round(width * dpr), 2048);
const canvasH = Math.min(Math.round(height * dpr), 2048);
// Resize canvas and framebuffer if needed (with tolerance to avoid subpixel jitter)
const needsResize = Math.abs(canvas.width - canvasW) > 2 || Math.abs(canvas.height - canvasH) > 2;
if (needsResize) {
canvas.width = canvasW;
canvas.height = canvasH;
// Resize accumulation texture
gl.bindTexture(gl.TEXTURE_2D, accumTexture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, canvasW, canvasH, 0, gl.RGBA, gl.FLOAT, null);
fbSizeRef.current = { width: canvasW, height: canvasH };
needsPointRenderRef.current = true; // Must re-render points after resize
}
// === Pass 1: Accumulate points into framebuffer (only when needed) ===
if (needsPointRenderRef.current) {
const t0 = performance.now();
gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
gl.viewport(0, 0, canvas.width, canvas.height);
gl.clearColor(0, 0, 0, 0);
gl.clear(gl.COLOR_BUFFER_BIT);
gl.useProgram(pointProgram);
gl.enable(gl.BLEND);
gl.blendFunc(gl.ONE, gl.ONE); // Additive blending
// Get attribute locations
const posLoc = gl.getAttribLocation(pointProgram, 'a_position');
const pointPosLoc = gl.getAttribLocation(pointProgram, 'a_pointPos');
const pointRsrpLoc = gl.getAttribLocation(pointProgram, 'a_pointRsrp');
const pointRadiusLoc = gl.getAttribLocation(pointProgram, 'a_pointRadius');
// Calculate radius in normalized coords
const latRange = bounds.maxLat - bounds.minLat;
const lonRange = bounds.maxLon - bounds.minLon;
// Calculate radius: ensure smooth overlap between adjacent points
const gridDim = Math.sqrt(points.length);
const avgCellLat = latRange / gridDim;
const avgCellLon = lonRange / gridDim;
// For smooth coverage, each point's gradient should reach ~2 cells in every direction.
// Denser grids need a relatively larger multiplier because edge effects matter more.
const baseMultiplier = 3.5;
const densityBoost = Math.max(1.0, gridDim / 50); // 1.0 at gridDim 50 (~2.5k pts), 1.6 at gridDim 80 (~6.4k pts)
const radiusMultiplier = baseMultiplier * densityBoost;
const normalizedRadiusLat = (avgCellLat * radiusMultiplier) / latRange;
const normalizedRadiusLon = (avgCellLon * radiusMultiplier) / lonRange;
const normalizedRadius = Math.max(normalizedRadiusLat, normalizedRadiusLon);
const rsrpRange = maxRsrp - minRsrp;
const instExt = instExtRef.current;
const pointBuffer = pointBufferRef.current;
if (instExt && pointBuffer) {
// === INSTANCED RENDERING: 1 draw call for ALL points ===
// Build instance data buffer: [posX, posY, rsrp, radius] × N points
const instanceData = new Float32Array(points.length * 4);
for (let i = 0; i < points.length; i++) {
const p = points[i];
const normX = (p.lon - bounds.minLon) / lonRange;
const normY = (p.lat - bounds.minLat) / latRange;
const normRsrp = Math.max(0, Math.min(1, (p.rsrp - minRsrp) / rsrpRange));
instanceData[i * 4 + 0] = normX;
instanceData[i * 4 + 1] = normY;
instanceData[i * 4 + 2] = normRsrp;
instanceData[i * 4 + 3] = normalizedRadius;
}
gl.bindBuffer(gl.ARRAY_BUFFER, pointBuffer);
gl.bufferData(gl.ARRAY_BUFFER, instanceData, gl.DYNAMIC_DRAW);
// Bind quad buffer for a_position (per-vertex)
gl.bindBuffer(gl.ARRAY_BUFFER, quadBuffer);
gl.enableVertexAttribArray(posLoc);
gl.vertexAttribPointer(posLoc, 2, gl.FLOAT, false, 0, 0);
// Bind instance buffer for per-instance attributes
gl.bindBuffer(gl.ARRAY_BUFFER, pointBuffer);
const stride = 4 * 4; // 4 floats × 4 bytes
gl.enableVertexAttribArray(pointPosLoc);
gl.vertexAttribPointer(pointPosLoc, 2, gl.FLOAT, false, stride, 0);
instExt.vertexAttribDivisorANGLE(pointPosLoc, 1); // per-instance
gl.enableVertexAttribArray(pointRsrpLoc);
gl.vertexAttribPointer(pointRsrpLoc, 1, gl.FLOAT, false, stride, 8);
instExt.vertexAttribDivisorANGLE(pointRsrpLoc, 1); // per-instance
gl.enableVertexAttribArray(pointRadiusLoc);
gl.vertexAttribPointer(pointRadiusLoc, 1, gl.FLOAT, false, stride, 12);
instExt.vertexAttribDivisorANGLE(pointRadiusLoc, 1); // per-instance
// ONE draw call for ALL points!
instExt.drawArraysInstancedANGLE(gl.TRIANGLE_STRIP, 0, 4, points.length);
// Reset divisors
instExt.vertexAttribDivisorANGLE(pointPosLoc, 0);
instExt.vertexAttribDivisorANGLE(pointRsrpLoc, 0);
instExt.vertexAttribDivisorANGLE(pointRadiusLoc, 0);
gl.disableVertexAttribArray(posLoc);
gl.disableVertexAttribArray(pointPosLoc);
gl.disableVertexAttribArray(pointRsrpLoc);
gl.disableVertexAttribArray(pointRadiusLoc);
const t1 = performance.now();
log(2, 'Instanced render:', points.length, 'points in 1 call,', (t1 - t0).toFixed(1) + 'ms');
} else {
// === FALLBACK: per-point draw calls ===
gl.bindBuffer(gl.ARRAY_BUFFER, quadBuffer);
gl.enableVertexAttribArray(posLoc);
gl.vertexAttribPointer(posLoc, 2, gl.FLOAT, false, 0, 0);
for (const p of points) {
const normX = (p.lon - bounds.minLon) / lonRange;
const normY = (p.lat - bounds.minLat) / latRange;
const normRsrp = Math.max(0, Math.min(1, (p.rsrp - minRsrp) / rsrpRange));
gl.vertexAttrib2f(pointPosLoc, normX, normY);
gl.vertexAttrib1f(pointRsrpLoc, normRsrp);
gl.vertexAttrib1f(pointRadiusLoc, normalizedRadius);
gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
}
gl.disableVertexAttribArray(posLoc);
const t1 = performance.now();
log(2, 'Fallback render:', points.length, 'points in', points.length, 'calls,', (t1 - t0).toFixed(1) + 'ms');
}
log(3, 'Grid estimate:', { points: points.length, gridDim: gridDim.toFixed(1), densityBoost: densityBoost.toFixed(2), radiusMultiplier: radiusMultiplier.toFixed(1), normalizedRadius: normalizedRadius.toFixed(4) });
needsPointRenderRef.current = false;
}
// === Pass 2: Composite to screen (always runs) ===
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
gl.viewport(0, 0, canvas.width, canvas.height);
gl.clearColor(0, 0, 0, 0);
gl.clear(gl.COLOR_BUFFER_BIT);
gl.useProgram(compositeProgram);
gl.enable(gl.BLEND);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA); // Normal blending
// Bind quad buffer
gl.bindBuffer(gl.ARRAY_BUFFER, quadBuffer);
const compositePos = gl.getAttribLocation(compositeProgram, 'a_position');
gl.enableVertexAttribArray(compositePos);
gl.vertexAttribPointer(compositePos, 2, gl.FLOAT, false, 0, 0);
// Bind accumulation texture
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, accumTexture);
gl.uniform1i(gl.getUniformLocation(compositeProgram, 'u_accumTexture'), 0);
gl.uniform1f(gl.getUniformLocation(compositeProgram, 'u_opacity'), opacity);
gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
gl.disableVertexAttribArray(compositePos);
}, [map, points, minRsrp, maxRsrp, opacity]);
// Effect 1: Initialize WebGL
useEffect(() => {
if (!visible) return;
if (initializedRef.current && canvasRef.current && glRef.current) return;
const pane = map.getPane('overlayPane');
if (!pane) return;
// Remove any leftover canvas
const existing = pane.querySelectorAll('canvas.webgl-radial-coverage');
existing.forEach(c => c.remove());
// Create canvas
const canvas = document.createElement('canvas');
canvas.className = 'webgl-radial-coverage';
canvas.style.position = 'absolute';
canvas.style.pointerEvents = 'none';
canvas.style.transformOrigin = '0 0';
pane.appendChild(canvas);
canvasRef.current = canvas;
// Initialize WebGL
const gl = canvas.getContext('webgl', { alpha: true, premultipliedAlpha: false });
if (!gl) {
console.error('[WebGL Radial] WebGL not available');
onWebGLFailedRef.current?.();
return;
}
glRef.current = gl;
// Check for float texture support
const floatExt = gl.getExtension('OES_texture_float');
gl.getExtension('OES_texture_float_linear'); // Enable if available
if (!floatExt) {
console.error('[WebGL Radial] OES_texture_float not supported');
onWebGLFailedRef.current?.();
return;
}
// Check for instanced rendering support
const instExt = gl.getExtension('ANGLE_instanced_arrays');
if (instExt) {
log(2, 'Instanced rendering supported');
instExtRef.current = instExt;
} else {
log(1, 'Instanced rendering NOT supported, using fallback');
}
gl.enable(gl.BLEND);
// Create point program
const pointProgram = createProgram(gl, POINT_VERTEX_SHADER, POINT_FRAGMENT_SHADER);
if (!pointProgram) {
console.error('[WebGL Radial] Failed to create point program');
onWebGLFailedRef.current?.();
return;
}
pointProgramRef.current = pointProgram;
// Create composite program
const compositeProgram = createProgram(gl, COMPOSITE_VERTEX_SHADER, COMPOSITE_FRAGMENT_SHADER);
if (!compositeProgram) {
console.error('[WebGL Radial] Failed to create composite program');
onWebGLFailedRef.current?.();
return;
}
compositeProgramRef.current = compositeProgram;
// Create quad buffer (fullscreen quad)
const quadBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, quadBuffer);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array([
-1, -1,
1, -1,
-1, 1,
1, 1,
]), gl.STATIC_DRAW);
quadBufferRef.current = quadBuffer;
// Create point buffer (holds per-instance data when ANGLE_instanced_arrays is available)
const pointBuffer = gl.createBuffer();
pointBufferRef.current = pointBuffer;
// Create accumulation texture (float RGBA)
// Use NEAREST filtering - float textures require OES_texture_float_linear for LINEAR
const accumTexture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, accumTexture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 1, 1, 0, gl.RGBA, gl.FLOAT, null);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
accumTextureRef.current = accumTexture;
// Create framebuffer
const framebuffer = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, accumTexture, 0);
framebufferRef.current = framebuffer;
// Check framebuffer status
const status = gl.checkFramebufferStatus(gl.FRAMEBUFFER);
if (status !== gl.FRAMEBUFFER_COMPLETE) {
console.error('[WebGL Radial] Framebuffer not complete:', status);
onWebGLFailedRef.current?.();
return;
}
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
initializedRef.current = true;
}, [visible, map]);
// Effect 2: Update bounds when points change
useEffect(() => {
if (!visible || points.length === 0) return;
if (pointsHash === lastPointsHashRef.current) return;
const bounds = calculateBounds(points);
if (!bounds) return;
boundsRef.current = bounds;
lastPointsHashRef.current = pointsHash;
needsPointRenderRef.current = true; // Mark for point re-render
render();
}, [visible, points, pointsHash, calculateBounds, render]);
// Effect 3: Map event listeners
useEffect(() => {
if (!visible) return;
let frameId = 0;
const onMapChange = () => {
cancelAnimationFrame(frameId);
frameId = requestAnimationFrame(render);
};
map.on('move', onMapChange);
map.on('zoom', onMapChange);
map.on('resize', onMapChange);
return () => {
map.off('move', onMapChange);
map.off('zoom', onMapChange);
map.off('resize', onMapChange);
cancelAnimationFrame(frameId);
};
}, [visible, map, render]);
// Effect 4: Visibility toggle
useEffect(() => {
if (canvasRef.current) {
canvasRef.current.style.display = visible ? 'block' : 'none';
}
}, [visible]);
// Cleanup on unmount
useEffect(() => {
return () => {
const gl = glRef.current;
if (gl) {
if (accumTextureRef.current) gl.deleteTexture(accumTextureRef.current);
if (framebufferRef.current) gl.deleteFramebuffer(framebufferRef.current);
if (quadBufferRef.current) gl.deleteBuffer(quadBufferRef.current);
if (pointBufferRef.current) gl.deleteBuffer(pointBufferRef.current);
if (pointProgramRef.current) gl.deleteProgram(pointProgramRef.current);
if (compositeProgramRef.current) gl.deleteProgram(compositeProgramRef.current);
}
if (canvasRef.current) {
canvasRef.current.remove();
canvasRef.current = null;
}
};
}, []);
return null;
}
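The two shader passes above reduce to simple arithmetic: pass 1 accumulates `(weight * rsrp, weight)` per point with a gaussian falloff into the R/G channels, and the composite pass divides R by G to recover a weighted-average RSRP. A CPU sketch of that math (illustrative only, with hypothetical names):

```typescript
// Matches the point fragment shader: exp(-d^2 * 2), zero outside the circle.
function gaussianWeight(dist: number): number {
  return dist > 1 ? 0 : Math.exp(-dist * dist * 2.0);
}

// One "pixel" receiving contributions from several nearby points.
// dist is the normalized distance to each point, rsrp01 the normalized RSRP.
function accumulate(samples: Array<{ dist: number; rsrp01: number }>) {
  let weighted = 0; // framebuffer R channel
  let weight = 0;   // framebuffer G channel
  for (const s of samples) {
    const w = gaussianWeight(s.dist);
    weighted += w * s.rsrp01;
    weight += w;
  }
  // Composite pass: discard below threshold, else clamped weighted average.
  return weight < 0.0001 ? null : Math.min(1, Math.max(0, weighted / weight));
}

// A pixel at the center of one strong point and the edge of a weak one:
const avg = accumulate([
  { dist: 0.0, rsrp01: 0.9 }, // weight 1.0
  { dist: 1.0, rsrp01: 0.2 }, // weight exp(-2) ≈ 0.135
]);
// avg ≈ 0.82 — dominated by the nearby strong point, as intended.
```

Additive blending (`gl.ONE, gl.ONE`) is what makes the per-point sums accumulate in the float framebuffer; the division cannot happen until all points have been drawn, which is why normalization lives in a second pass.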

View File

@@ -1,6 +1,5 @@
import { useState, useEffect, useCallback } from 'react';
import NumberInput from '@/components/ui/NumberInput.tsx';
import FrequencySelector from '@/components/panels/FrequencySelector.tsx';
import FrequencyBandPanel from '@/components/panels/FrequencyBandPanel.tsx';
import ModalBackdrop from './ModalBackdrop.tsx';
@@ -31,6 +30,7 @@ interface SiteConfigModalProps {
const TEMPLATES = {
limesdr: {
label: 'LimeSDR',
tooltip: 'SDR dev board — low power, short range testing (20 dBm, 2 dBi, 1800 MHz)',
style: 'purple',
name: 'LimeSDR Mini',
power: 20,
@@ -41,6 +41,7 @@ const TEMPLATES = {
},
lowBBU: {
label: 'Low BBU',
tooltip: 'Low-power baseband unit — suburban/campus coverage (40 dBm, 8 dBi, 1800 MHz)',
style: 'green',
name: 'Low Power BBU',
power: 40,
@@ -51,6 +52,7 @@ const TEMPLATES = {
},
highBBU: {
label: 'High BBU',
tooltip: 'High-power BBU — urban macro sector (43 dBm, 15 dBi, 65\u00B0 sector)',
style: 'orange',
name: 'High Power BBU',
power: 43,
@@ -63,6 +65,7 @@ const TEMPLATES = {
},
urbanMacro: {
label: 'Urban Macro',
tooltip: 'Standard urban macro site — rooftop/tower sector (43 dBm, 18 dBi, 65\u00B0 sector)',
style: 'blue',
name: 'Urban Macro Site',
power: 43,
@@ -75,6 +78,7 @@ const TEMPLATES = {
},
ruralTower: {
label: 'Rural Tower',
tooltip: 'Rural high tower — long range 800 MHz omni coverage (46 dBm, 8 dBi, 50m)',
style: 'emerald',
name: 'Rural Tower',
power: 46,
@@ -85,6 +89,7 @@ const TEMPLATES = {
},
smallCell: {
label: 'Small Cell',
tooltip: 'Urban small cell — street-level high capacity (30 dBm, 12 dBi, 2600 MHz)',
style: 'cyan',
name: 'Small Cell',
power: 30,
@@ -97,6 +102,7 @@ const TEMPLATES = {
},
indoorDAS: {
label: 'Indoor DAS',
tooltip: 'Indoor distributed antenna — in-building coverage (23 dBm, 2 dBi, 2100 MHz)',
style: 'rose',
name: 'Indoor DAS',
power: 23,
@@ -107,6 +113,7 @@ const TEMPLATES = {
},
uhfTactical: {
label: 'UHF Tactical',
tooltip: 'UHF tactical radio — man-portable field comms (25 dBm, 3 dBi, 450 MHz)',
style: 'amber',
name: 'UHF Tactical Radio',
power: 25,
@@ -117,6 +124,7 @@ const TEMPLATES = {
},
vhfRepeater: {
label: 'VHF Repeater',
tooltip: 'VHF repeater — long range voice/data relay (40 dBm, 6 dBi, 150 MHz)',
style: 'teal',
name: 'VHF Repeater',
power: 40,
@@ -203,11 +211,11 @@ export default function SiteConfigModal({
if (form.power < 10 || form.power > 50) {
newErrors.power = 'Power must be 10-50 dBm';
}
if (form.gain < 0 || form.gain > 25) {
newErrors.gain = 'Gain must be 0-25 dBi';
if (form.gain < 0 || form.gain > 30) {
newErrors.gain = 'Gain must be 0-30 dBi';
}
if (form.frequency < 100 || form.frequency > 6000) {
newErrors.frequency = 'Frequency must be 100-6000 MHz';
if (form.frequency < 30 || form.frequency > 6000) {
newErrors.frequency = 'Frequency must be 30-6000 MHz';
}
if (form.height < 1 || form.height > 100) {
newErrors.height = 'Height must be 1-100m';
@@ -360,20 +368,20 @@ export default function SiteConfigModal({
label="Antenna Gain"
value={form.gain}
min={0}
max={25}
max={30}
step={0.5}
unit="dBi"
hint="Omni 2-8, Sector 15-18, Parabolic 20-25"
hint={
form.gain <= 8
? `Omni-directional (${form.gain} dBi)`
: form.gain <= 18
? `Sector/Panel (${form.gain} dBi)`
: `Parabolic/Dish (${form.gain} dBi)`
}
onChange={(v) => updateField('gain', v)}
/>
{/* Frequency */}
<FrequencySelector
value={form.frequency}
onChange={(v) => updateField('frequency', v)}
/>
{/* Band panel — UHF/VHF/LTE/5G grouped selector */}
{/* Band panel — UHF/VHF/LTE/5G grouped selector + custom input */}
<FrequencyBandPanel
value={form.frequency}
onChange={(v) => updateField('frequency', v)}
@@ -485,6 +493,7 @@ export default function SiteConfigModal({
key={key}
type="button"
onClick={() => applyTemplate(key as keyof typeof TEMPLATES)}
title={t.tooltip}
className={`px-3 py-1.5 rounded text-xs font-medium transition-colors min-h-[32px]
${TEMPLATE_COLORS[t.style] ?? TEMPLATE_COLORS.blue}`}
>

View File

@@ -0,0 +1,77 @@
/**
* Quick frequency band selector for setting all sectors at once.
* Enables rapid comparison of coverage at different frequency bands.
*/
import { useSitesStore } from '@/store/sites.ts';
import { COMMON_FREQUENCIES } from '@/constants/frequencies.ts';
const QUICK_BANDS = [
{ freq: 70, label: '70', color: 'text-indigo-400' },
{ freq: 225, label: '225', color: 'text-cyan-400' },
{ freq: 700, label: '700', color: 'text-red-400' },
{ freq: 800, label: '800', color: 'text-orange-400' },
{ freq: 900, label: '900', color: 'text-yellow-400' },
{ freq: 1800, label: '1.8G', color: 'text-green-400' },
{ freq: 2100, label: '2.1G', color: 'text-blue-400' },
{ freq: 2600, label: '2.6G', color: 'text-purple-400' },
{ freq: 3500, label: '3.5G', color: 'text-pink-400' },
];
export default function BatchFrequencyChange() {
const sites = useSitesStore((s) => s.sites);
const setAllSitesFrequency = useSitesStore((s) => s.setAllSitesFrequency);
if (sites.length === 0) return null;
// Get current frequency (from first site)
const currentFreq = sites[0]?.frequency ?? 1800;
// Check if all sites have same frequency
const allSameFreq = sites.every((s) => s.frequency === currentFreq);
// Get band info
const getBandName = (freq: number) => {
const band = COMMON_FREQUENCIES.find((b) => b.value === freq);
return band?.name ?? `${freq} MHz`;
};
const handleSetFrequency = async (freq: number) => {
await setAllSitesFrequency(freq);
};
return (
<div className="p-3 border-t border-gray-200 dark:border-dark-border">
<div className="flex items-center justify-between mb-2">
<h4 className="text-xs font-semibold text-gray-500 dark:text-dark-muted uppercase">
Quick Frequency
</h4>
<span className="text-[10px] text-gray-400 dark:text-dark-muted">
{allSameFreq ? getBandName(currentFreq) : 'Mixed'}
</span>
</div>
<div className="flex flex-wrap gap-1">
{QUICK_BANDS.map((b) => {
const isActive = allSameFreq && currentFreq === b.freq;
return (
<button
key={b.freq}
onClick={() => handleSetFrequency(b.freq)}
className={`px-2 py-1 text-xs rounded transition-colors ${
isActive
? 'bg-blue-100 text-blue-700 dark:bg-blue-900/30 dark:text-blue-300 ring-1 ring-blue-400'
: 'bg-gray-100 hover:bg-gray-200 dark:bg-dark-border dark:hover:bg-dark-muted text-gray-700 dark:text-dark-text'
}`}
title={`Set all sectors to ${b.freq} MHz (${getBandName(b.freq)})`}
>
<span className={isActive ? '' : b.color}>{b.label}</span>
</button>
);
})}
</div>
<div className="mt-1.5 text-[10px] text-gray-400 dark:text-dark-muted">
Sets all {sites.length} sector{sites.length !== 1 ? 's' : ''} to selected band
</div>
</div>
);
}

View File

@@ -19,8 +19,8 @@ function estimateAreaKm2(pointCount: number, resolutionM: number): number {
}
const LEVELS = [
{ label: 'Excellent', threshold: -70, color: 'bg-green-500' },
{ label: 'Good', threshold: -85, color: 'bg-lime-500' },
{ label: 'Excellent', threshold: -70, color: 'bg-blue-500' },
{ label: 'Good', threshold: -85, color: 'bg-green-500' },
{ label: 'Fair', threshold: -100, color: 'bg-yellow-500' },
{ label: 'Weak', threshold: -Infinity, color: 'bg-red-500' },
] as const;
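Thresholds ordered strongest-first like this are typically consumed by taking the first level whose threshold the sample meets. A minimal sketch under that assumption (`classify` is hypothetical, not shown in the diff):

```typescript
// Ordered strongest-first; the -Infinity sentinel catches everything else.
const LEVELS = [
  { label: 'Excellent', threshold: -70 },
  { label: 'Good', threshold: -85 },
  { label: 'Fair', threshold: -100 },
  { label: 'Weak', threshold: -Infinity },
] as const;

// First level whose threshold the RSRP sample (in dBm) meets wins.
function classify(rsrp: number): string {
  for (const l of LEVELS) {
    if (rsrp >= l.threshold) return l.label;
  }
  return 'Weak'; // unreachable: the last threshold is -Infinity
}
```

So -80 dBm falls between -70 and -85 and classifies as Good, and anything below -100 dBm lands in Weak.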

View File

@@ -5,6 +5,7 @@
* and propagation model info for each band.
*/
import { useState } from 'react';
import { COMMON_FREQUENCIES, FREQUENCY_GROUPS, getWavelength } from '@/constants/frequencies.ts';
import type { FrequencyBand } from '@/types/index.ts';
@@ -54,11 +55,25 @@ function getBandForFrequency(freq: number): string | null {
export default function FrequencyBandPanel({ value, onChange }: FrequencyBandPanelProps) {
const currentBand = getBandForFrequency(value);
const [customInput, setCustomInput] = useState('');
const handleCustomSubmit = () => {
const parsed = parseInt(customInput, 10);
if (parsed > 0 && parsed <= 100000) {
onChange(parsed);
setCustomInput('');
}
};
return (
<div className="space-y-3">
<div className="text-xs font-semibold text-gray-500 dark:text-dark-muted uppercase tracking-wide">
Frequency Bands
<div className="flex items-center justify-between">
<div className="text-xs font-semibold text-gray-500 dark:text-dark-muted uppercase tracking-wide">
Operating Frequency
</div>
<div className="text-xs font-medium text-gray-600 dark:text-dark-muted">
{value} MHz
</div>
</div>
{(Object.keys(FREQUENCY_GROUPS) as Array<keyof typeof FREQUENCY_GROUPS>).map((bandType) => {
@@ -139,6 +154,28 @@ export default function FrequencyBandPanel({ value, onChange }: FrequencyBandPan
</div>
);
})}
{/* Custom frequency input */}
<div className="flex gap-2">
<input
type="number"
placeholder="Custom MHz..."
value={customInput}
onChange={(e) => setCustomInput(e.target.value)}
onKeyDown={(e) => e.key === 'Enter' && handleCustomSubmit()}
className="flex-1 px-2.5 py-1.5 border border-gray-300 dark:border-dark-border dark:bg-dark-bg dark:text-dark-text rounded-md text-xs
focus:outline-none focus:ring-2 focus:ring-blue-500 focus:border-blue-500"
min={1}
max={100000}
/>
<button
type="button"
onClick={handleCustomSubmit}
className="px-3 py-1.5 bg-gray-200 hover:bg-gray-300 dark:bg-dark-border dark:hover:bg-dark-muted dark:text-dark-text rounded-md text-xs text-gray-700 min-h-[28px]"
>
Set
</button>
</div>
</div>
);
}
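A note on the custom input above: `parseInt` silently truncates fractional entries such as `433.5` to `433`, while a non-numeric string yields `NaN`, which fails both range checks and is rejected implicitly. A standalone sketch of a float-preserving validator (`parseCustomFrequency` is a hypothetical helper, not part of the component):

```typescript
// Parse a custom frequency string in MHz; returns null when invalid
// or outside the 1–100000 MHz range used by the input's min/max.
// Unlike parseInt, parseFloat keeps fractional MHz such as "433.5".
function parseCustomFrequency(input: string): number | null {
  const parsed = Number.parseFloat(input.trim());
  if (!Number.isFinite(parsed) || parsed <= 0 || parsed > 100000) return null;
  return parsed;
}
```

Swapping something like this into `handleCustomSubmit` would accept fractional tactical frequencies while keeping the existing bounds.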

View File

@@ -0,0 +1,206 @@
import { useState } from 'react';
import { useCalcHistoryStore } from '@/store/calcHistory.ts';
import type { CalculationEntry } from '@/store/calcHistory.ts';
function EntryDetail({ entry }: { entry: CalculationEntry }) {
const p = entry.propagation;
return (
<div className="mt-1.5 pt-1.5 border-t border-gray-100 dark:border-dark-border space-y-1.5 text-[10px]">
{/* Coverage breakdown with percentages */}
<div className="grid grid-cols-4 gap-1 text-center">
<div>
<div className="font-semibold text-blue-600 dark:text-blue-400">
{entry.coverage.excellent.toFixed(0)}%
</div>
<div className="text-gray-400">Excellent</div>
</div>
<div>
<div className="font-semibold text-green-600 dark:text-green-400">
{entry.coverage.good.toFixed(0)}%
</div>
<div className="text-gray-400">Good</div>
</div>
<div>
<div className="font-semibold text-yellow-600 dark:text-yellow-400">
{entry.coverage.fair.toFixed(0)}%
</div>
<div className="text-gray-400">Fair</div>
</div>
<div>
<div className="font-semibold text-red-600 dark:text-red-400">
{entry.coverage.weak.toFixed(0)}%
</div>
<div className="text-gray-400">Weak</div>
</div>
</div>
{/* RSRP details */}
<div className="flex justify-between text-gray-500 dark:text-dark-muted">
<span>Avg RSRP: {entry.avgRsrp.toFixed(1)} dBm</span>
<span>Range: {entry.rangeMin.toFixed(0)} / {entry.rangeMax.toFixed(0)} dBm</span>
</div>
{/* Propagation details */}
{p && (
<div className="pt-1.5 border-t border-gray-100 dark:border-dark-border space-y-1">
{/* Site parameters */}
<div className="flex flex-wrap gap-x-3 gap-y-0.5 text-gray-500 dark:text-dark-muted">
<span>{p.frequency} MHz</span>
<span>{p.txPower} dBm</span>
<span>{p.antennaGain} dBi</span>
<span>{p.antennaHeight} m</span>
</div>
{/* Models used */}
{p.modelsUsed.length > 0 && (
<div className="flex flex-wrap gap-1">
{p.modelsUsed.map((model) => (
<span
key={model}
className="px-1 py-0.5 bg-gray-100 dark:bg-dark-border text-gray-600 dark:text-dark-muted rounded"
>
{model}
</span>
))}
</div>
)}
{/* Active toggles summary */}
<div className="flex flex-wrap gap-1">
{p.use_terrain && (
<span className="px-1 py-0.5 bg-green-50 dark:bg-green-900/20 text-green-700 dark:text-green-300 rounded">Terrain</span>
)}
{p.use_buildings && (
<span className="px-1 py-0.5 bg-green-50 dark:bg-green-900/20 text-green-700 dark:text-green-300 rounded">Buildings</span>
)}
{p.use_materials && (
<span className="px-1 py-0.5 bg-green-50 dark:bg-green-900/20 text-green-700 dark:text-green-300 rounded">Materials</span>
)}
{p.use_dominant_path && (
<span className="px-1 py-0.5 bg-green-50 dark:bg-green-900/20 text-green-700 dark:text-green-300 rounded">DomPath</span>
)}
{p.use_reflections && (
<span className="px-1 py-0.5 bg-green-50 dark:bg-green-900/20 text-green-700 dark:text-green-300 rounded">Reflections</span>
)}
{p.use_vegetation && (
<span className="px-1 py-0.5 bg-green-50 dark:bg-green-900/20 text-green-700 dark:text-green-300 rounded">Vegetation</span>
)}
{p.use_atmospheric && (
<span className="px-1 py-0.5 bg-green-50 dark:bg-green-900/20 text-green-700 dark:text-green-300 rounded">Atmospheric</span>
)}
{p.fading_margin > 0 && (
<span className="px-1 py-0.5 bg-orange-50 dark:bg-orange-900/20 text-orange-700 dark:text-orange-300 rounded">
-{p.fading_margin} dB fade
</span>
)}
{p.rain_rate > 0 && (
<span className="px-1 py-0.5 bg-blue-50 dark:bg-blue-900/20 text-blue-700 dark:text-blue-300 rounded">
Rain {p.rain_rate} mm/h
</span>
)}
{p.indoor_loss_type !== 'none' && (
<span className="px-1 py-0.5 bg-purple-50 dark:bg-purple-900/20 text-purple-700 dark:text-purple-300 rounded">
Indoor: {p.indoor_loss_type}
</span>
)}
</div>
</div>
)}
</div>
);
}
export default function HistoryPanel() {
const entries = useCalcHistoryStore((s) => s.entries);
const clearHistory = useCalcHistoryStore((s) => s.clearHistory);
const [expanded, setExpanded] = useState(false);
const [expandedEntry, setExpandedEntry] = useState<string | null>(null);
if (entries.length === 0) return null;
return (
<div className="bg-white dark:bg-dark-surface border border-gray-200 dark:border-dark-border rounded-lg shadow-sm p-4">
<div className="flex items-center justify-between">
<button
onClick={() => setExpanded(!expanded)}
className="flex items-center gap-1 text-sm font-semibold text-gray-800 dark:text-dark-text"
>
<span className="text-[10px]">{expanded ? '\u25BC' : '\u25B6'}</span>
Session History
<span className="text-xs text-gray-400 dark:text-dark-muted font-normal ml-1">
({entries.length})
</span>
</button>
{expanded && (
<button
onClick={clearHistory}
className="text-[10px] text-red-400 hover:text-red-600 dark:text-red-500 dark:hover:text-red-400 transition-colors"
>
Clear All
</button>
)}
</div>
{expanded && (
<div className="mt-2 space-y-1.5 max-h-80 overflow-y-auto">
{entries.map((entry) => {
const isOpen = expandedEntry === entry.id;
return (
<button
key={entry.id}
onClick={() => setExpandedEntry(isOpen ? null : entry.id)}
className="w-full text-left text-xs border border-gray-100 dark:border-dark-border rounded p-2 space-y-1 hover:bg-gray-50 dark:hover:bg-dark-bg transition-colors cursor-pointer"
>
{/* Row 1: timestamp + computation time */}
<div className="flex justify-between items-center">
<span className="text-gray-500 dark:text-dark-muted">
{entry.timestamp.toLocaleTimeString()}
</span>
<span className="font-bold text-gray-800 dark:text-dark-text">
{entry.computationTime.toFixed(1)}s
</span>
</div>
{/* Row 2: badges */}
<div className="flex gap-1.5 flex-wrap text-[10px]">
<span className="px-1 py-0.5 bg-blue-50 dark:bg-blue-900/20 text-blue-700 dark:text-blue-300 rounded">
{entry.preset}
</span>
<span className="text-gray-500 dark:text-dark-muted">
{entry.totalPoints.toLocaleString()} pts
</span>
<span className="text-gray-500 dark:text-dark-muted">
{entry.radius}km
</span>
<span className="text-gray-500 dark:text-dark-muted">
{entry.resolution}m
</span>
</div>
{/* Coverage bar */}
<div className="flex h-1.5 rounded-full overflow-hidden bg-gray-100 dark:bg-dark-border">
{entry.coverage.excellent > 0 && (
<div className="bg-blue-500" style={{ width: `${entry.coverage.excellent}%` }} />
)}
{entry.coverage.good > 0 && (
<div className="bg-green-500" style={{ width: `${entry.coverage.good}%` }} />
)}
{entry.coverage.fair > 0 && (
<div className="bg-yellow-500" style={{ width: `${entry.coverage.fair}%` }} />
)}
{entry.coverage.weak > 0 && (
<div className="bg-red-500" style={{ width: `${entry.coverage.weak}%` }} />
)}
</div>
{/* Expandable detail */}
{isOpen && <EntryDetail entry={entry} />}
</button>
);
})}
</div>
)}
</div>
);
}

View File

@@ -0,0 +1,361 @@
/**
* Link Budget Calculator Panel
*
* Shows complete RF link budget from transmitter to receiver:
* - TX: power, gain, cable loss, EIRP
* - Path: distance, FSPL, terrain loss
* - RX: gain, sensitivity, margin
*/
import { useState, useEffect } from 'react';
import { useSitesStore } from '@/store/sites.ts';
import { api } from '@/services/api.ts';
import type { LinkBudgetResponse } from '@/services/api.ts';
import Button from '@/components/ui/Button.tsx';
interface LinkBudgetPanelProps {
/** Optional RX coordinates from map click */
rxPoint?: { lat: number; lon: number } | null;
/** Callback to enable map click mode */
onRequestMapClick?: () => void;
/** Callback when panel is closed */
onClose?: () => void;
}
export default function LinkBudgetPanel({
rxPoint,
onRequestMapClick,
onClose,
}: LinkBudgetPanelProps) {
const sites = useSitesStore((s) => s.sites);
const selectedSiteId = useSitesStore((s) => s.selectedSiteId);
// TX parameters (from selected site or manual)
const selectedSite = sites.find((s) => s.id === selectedSiteId);
// TX height override for what-if scenarios (null = use site default)
const [txHeightOverride, setTxHeightOverride] = useState<number | null>(null);
const txHeight = txHeightOverride ?? selectedSite?.height ?? 30;
// Reset height override when site changes
useEffect(() => {
setTxHeightOverride(null);
}, [selectedSiteId]);
// RX coordinates
const [rxLat, setRxLat] = useState<string>(rxPoint?.lat?.toFixed(6) || '');
const [rxLon, setRxLon] = useState<string>(rxPoint?.lon?.toFixed(6) || '');
// Additional TX/RX parameters
const [txCableLoss, setTxCableLoss] = useState<number>(2);
const [rxGain, setRxGain] = useState<number>(0);
const [rxCableLoss, setRxCableLoss] = useState<number>(0);
const [rxSensitivity, setRxSensitivity] = useState<number>(-100);
const [rxHeight, setRxHeight] = useState<number>(1.5);
// Result
const [result, setResult] = useState<LinkBudgetResponse | null>(null);
const [loading, setLoading] = useState(false);
const [error, setError] = useState<string | null>(null);
// Update RX coordinates when rxPoint changes
useEffect(() => {
if (rxPoint) {
setRxLat(rxPoint.lat.toFixed(6));
setRxLon(rxPoint.lon.toFixed(6));
}
}, [rxPoint]);
const calculateLinkBudget = async () => {
if (!selectedSite) {
setError('Select a site first');
return;
}
const rxLatNum = parseFloat(rxLat);
const rxLonNum = parseFloat(rxLon);
if (isNaN(rxLatNum) || isNaN(rxLonNum)) {
setError('Enter valid RX coordinates');
return;
}
setLoading(true);
setError(null);
try {
const response = await api.calculateLinkBudget({
tx_lat: selectedSite.lat,
tx_lon: selectedSite.lon,
tx_power_dbm: selectedSite.power,
tx_gain_dbi: selectedSite.gain,
tx_cable_loss_db: txCableLoss,
tx_height_m: txHeight,
rx_lat: rxLatNum,
rx_lon: rxLonNum,
rx_gain_dbi: rxGain,
rx_cable_loss_db: rxCableLoss,
rx_sensitivity_dbm: rxSensitivity,
rx_height_m: rxHeight,
frequency_mhz: selectedSite.frequency,
});
setResult(response);
} catch (err) {
setError(err instanceof Error ? err.message : 'Calculation failed');
} finally {
setLoading(false);
}
};
const marginColor = result
? result.margin_db >= 10
? 'text-green-600 dark:text-green-400'
: result.margin_db >= 0
? 'text-yellow-600 dark:text-yellow-400'
: 'text-red-600 dark:text-red-400'
: '';
return (
<div
className="bg-white dark:bg-dark-surface border border-gray-200 dark:border-dark-border rounded-lg shadow-sm p-4 space-y-4 w-80"
onClick={(e) => e.stopPropagation()}
onMouseDown={(e) => e.stopPropagation()}
onPointerDown={(e) => e.stopPropagation()}
>
{/* Header */}
<div className="flex items-center justify-between">
<h3 className="text-sm font-semibold text-gray-800 dark:text-dark-text flex items-center gap-2">
<span className="text-lg">📡</span>
Link Budget Calculator
</h3>
{onClose && (
<button
onClick={onClose}
className="text-gray-400 hover:text-gray-600 dark:hover:text-white text-sm"
>
&times;
</button>
)}
</div>
{/* TX Section */}
<div className="space-y-2">
<div className="text-xs font-medium text-gray-500 dark:text-dark-muted uppercase">
Transmitter
</div>
{selectedSite ? (
<div className="text-xs space-y-1 bg-gray-50 dark:bg-dark-bg p-2 rounded text-gray-700 dark:text-dark-text">
<div className="flex justify-between">
<span className="text-gray-500 dark:text-dark-muted">Site:</span>
<span className="font-medium">{selectedSite.name}</span>
</div>
<div className="flex justify-between">
<span className="text-gray-500 dark:text-dark-muted">Power:</span>
<span>{selectedSite.power} dBm</span>
</div>
<div className="flex justify-between">
<span className="text-gray-500 dark:text-dark-muted">Gain:</span>
<span>{selectedSite.gain} dBi</span>
</div>
<div className="flex justify-between items-center">
<span className="text-gray-500 dark:text-dark-muted">Height:</span>
<div className="flex items-center">
<input
type="number"
value={txHeight}
onChange={(e) => setTxHeightOverride(parseFloat(e.target.value) || 30)}
className="w-16 text-right text-xs px-1 py-0.5 border rounded dark:bg-dark-bg dark:border-dark-border dark:text-dark-text"
min="1"
max="300"
step="1"
/>
<span className="text-gray-400 dark:text-dark-muted ml-1">m</span>
</div>
</div>
<div className="flex justify-between">
<span className="text-gray-500 dark:text-dark-muted">Frequency:</span>
<span>{selectedSite.frequency} MHz</span>
</div>
<div className="flex justify-between items-center">
<span className="text-gray-500 dark:text-dark-muted">Cable Loss:</span>
<input
type="number"
value={txCableLoss}
onChange={(e) => setTxCableLoss(parseFloat(e.target.value) || 0)}
className="w-16 text-right text-xs px-1 py-0.5 border rounded dark:bg-dark-bg dark:border-dark-border dark:text-dark-text"
step="0.5"
/>
<span className="text-gray-400 dark:text-dark-muted ml-1">dB</span>
</div>
</div>
) : (
<div className="text-xs text-gray-400 dark:text-dark-muted italic">Select a site on the map</div>
)}
</div>
{/* RX Section */}
<div className="space-y-2">
<div className="text-xs font-medium text-gray-500 dark:text-dark-muted uppercase">
Receiver
</div>
<div className="grid grid-cols-2 gap-2">
<div>
<label className="text-[10px] text-gray-400 dark:text-dark-muted">Latitude</label>
<input
type="text"
value={rxLat}
onChange={(e) => setRxLat(e.target.value)}
placeholder="48.4500"
className="w-full text-xs px-2 py-1 border rounded dark:bg-dark-bg dark:border-dark-border text-gray-800 dark:text-dark-text"
/>
</div>
<div>
<label className="text-[10px] text-gray-400 dark:text-dark-muted">Longitude</label>
<input
type="text"
value={rxLon}
onChange={(e) => setRxLon(e.target.value)}
placeholder="35.0400"
className="w-full text-xs px-2 py-1 border rounded dark:bg-dark-bg dark:border-dark-border text-gray-800 dark:text-dark-text"
/>
</div>
</div>
{onRequestMapClick && (
<Button size="sm" variant="secondary" onClick={onRequestMapClick} className="w-full">
📍 Click on Map to Set RX Point
</Button>
)}
<div className="grid grid-cols-2 gap-2 text-xs">
<div>
<label className="text-[10px] text-gray-400 dark:text-dark-muted">RX Gain (dBi)</label>
<input
type="number"
value={rxGain}
onChange={(e) => setRxGain(parseFloat(e.target.value) || 0)}
className="w-full px-2 py-1 border rounded dark:bg-dark-bg dark:border-dark-border text-gray-800 dark:text-dark-text"
/>
</div>
<div>
<label className="text-[10px] text-gray-400 dark:text-dark-muted">RX Height (m)</label>
<input
type="number"
value={rxHeight}
onChange={(e) => setRxHeight(parseFloat(e.target.value) || 1.5)}
className="w-full px-2 py-1 border rounded dark:bg-dark-bg dark:border-dark-border text-gray-800 dark:text-dark-text"
/>
</div>
<div>
<label className="text-[10px] text-gray-400 dark:text-dark-muted">Sensitivity (dBm)</label>
<input
type="number"
value={rxSensitivity}
onChange={(e) => setRxSensitivity(parseFloat(e.target.value) || -100)}
className="w-full px-2 py-1 border rounded dark:bg-dark-bg dark:border-dark-border text-gray-800 dark:text-dark-text"
/>
</div>
<div>
<label className="text-[10px] text-gray-400 dark:text-dark-muted">Cable Loss (dB)</label>
<input
type="number"
value={rxCableLoss}
onChange={(e) => setRxCableLoss(parseFloat(e.target.value) || 0)}
className="w-full px-2 py-1 border rounded dark:bg-dark-bg dark:border-dark-border text-gray-800 dark:text-dark-text"
/>
</div>
</div>
</div>
{/* Calculate Button */}
<Button
onClick={calculateLinkBudget}
disabled={loading || !selectedSite}
className="w-full"
>
{loading ? 'Calculating...' : 'Calculate Link Budget'}
</Button>
{/* Error */}
{error && (
<div className="text-xs text-red-500 bg-red-50 dark:bg-red-900/20 p-2 rounded">
{error}
</div>
)}
{/* Results */}
{result && (
<div className="space-y-2 border-t pt-3 dark:border-dark-border">
<div className="text-xs font-medium text-gray-500 dark:text-dark-muted uppercase">
Results
</div>
{/* Path Info */}
<div className="text-xs space-y-1 bg-gray-50 dark:bg-dark-bg p-2 rounded text-gray-700 dark:text-dark-text">
<div className="flex justify-between">
<span className="text-gray-500 dark:text-dark-muted">Distance:</span>
<span className="font-medium">{result.distance_km.toFixed(2)} km</span>
</div>
<div className="flex justify-between">
<span className="text-gray-500 dark:text-dark-muted">LOS:</span>
<span className={result.los_clear ? 'text-green-600 dark:text-green-400' : 'text-red-500 dark:text-red-400'}>
{result.los_clear ? '✓ Clear' : '✗ Blocked'}
</span>
</div>
</div>
{/* Link Budget Table */}
<div className="text-xs space-y-1 bg-blue-50 dark:bg-blue-900/20 p-2 rounded text-gray-700 dark:text-dark-text">
<div className="flex justify-between">
<span>EIRP:</span>
<span className="font-mono">{result.eirp_dbm.toFixed(1)} dBm</span>
</div>
<div className="flex justify-between text-gray-500 dark:text-dark-muted">
<span>- FSPL:</span>
<span className="font-mono">{result.fspl_db.toFixed(1)} dB</span>
</div>
<div className="flex justify-between text-gray-500 dark:text-dark-muted">
<span>- Terrain Loss:</span>
<span className="font-mono">{result.terrain_loss_db.toFixed(1)} dB</span>
</div>
<div className="flex justify-between border-t pt-1 dark:border-dark-border">
<span>= Total Path Loss:</span>
<span className="font-mono font-medium">{result.total_path_loss_db.toFixed(1)} dB</span>
</div>
</div>
{/* Final Result */}
<div className="text-xs space-y-1 bg-gray-100 dark:bg-dark-border p-2 rounded text-gray-700 dark:text-dark-text">
<div className="flex justify-between">
<span>Received Power:</span>
<span className="font-mono font-medium">{result.rx_power_dbm.toFixed(1)} dBm</span>
</div>
<div className="flex justify-between">
<span>RX Sensitivity:</span>
<span className="font-mono">{rxSensitivity} dBm</span>
</div>
<div className={`flex justify-between font-bold ${marginColor}`}>
<span>Link Margin:</span>
<span className="font-mono">{result.margin_db.toFixed(1)} dB</span>
</div>
<div className={`text-center text-sm font-bold mt-2 ${marginColor}`}>
{result.status === 'OK' ? '✓ LINK OK' : '✗ LINK FAIL'}
</div>
</div>
{/* Obstructions */}
{result.obstructions && result.obstructions.length > 0 && (
<div className="text-xs text-orange-600 dark:text-orange-400 bg-orange-50 dark:bg-orange-900/20 p-2 rounded">
<div className="font-medium mb-1">⚠️ Terrain Obstructions:</div>
{result.obstructions.slice(0, 3).map((obs, i) => (
<div key={i}>
@ {(obs.distance_m / 1000).toFixed(2)} km: +{obs.height_above_los_m.toFixed(1)} m above LOS
</div>
))}
{result.obstructions.length > 3 && (
<div className="text-gray-500">...and {result.obstructions.length - 3} more</div>
)}
</div>
)}
</div>
)}
</div>
);
}
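For readers following the dB arithmetic the panel renders, a back-of-the-envelope sketch of the chain (EIRP → total path loss → RX power → margin). The constant 32.45 is the standard km/MHz form of free-space path loss; `terrainLossDb` here is only an input placeholder — the real backend derives terrain loss from elevation data:

```typescript
// Free-space path loss in dB for a path of `distanceKm` km at `freqMhz` MHz.
// 32.45 is the standard constant for the km/MHz unit convention.
function fsplDb(distanceKm: number, freqMhz: number): number {
  return 32.45 + 20 * Math.log10(distanceKm) + 20 * Math.log10(freqMhz);
}

interface LinkBudgetInput {
  txPowerDbm: number;
  txGainDbi: number;
  txCableLossDb: number;
  rxGainDbi: number;
  rxCableLossDb: number;
  rxSensitivityDbm: number;
  distanceKm: number;
  freqMhz: number;
  terrainLossDb?: number; // placeholder; backend computes this from terrain
}

// Mirrors the rows the panel displays: EIRP, total path loss, RX power, margin.
function linkBudget(i: LinkBudgetInput) {
  const eirpDbm = i.txPowerDbm + i.txGainDbi - i.txCableLossDb;
  const totalPathLossDb = fsplDb(i.distanceKm, i.freqMhz) + (i.terrainLossDb ?? 0);
  const rxPowerDbm = eirpDbm - totalPathLossDb + i.rxGainDbi - i.rxCableLossDb;
  const marginDb = rxPowerDbm - i.rxSensitivityDbm;
  return { eirpDbm, totalPathLossDb, rxPowerDbm, marginDb };
}
```

With the defaults above (43 dBm, 15 dBi, 2 dB cable loss, −100 dBm sensitivity) a 10 km path at 800 MHz lands comfortably in the green ≥ 10 dB margin zone.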

View File

@@ -0,0 +1,163 @@
import { useEffect, useState, useRef, useCallback } from 'react';
import { useCoverageStore } from '@/store/coverage.ts';
import type { CoverageResult } from '@/types/index.ts';
function classifyCoverage(points: Array<{ rsrp: number }>) {
const counts = { excellent: 0, good: 0, fair: 0, weak: 0 };
for (const p of points) {
if (p.rsrp > -70) counts.excellent++;
else if (p.rsrp > -85) counts.good++;
else if (p.rsrp > -100) counts.fair++;
else counts.weak++;
}
return counts;
}
const AUTO_DISMISS_MS = 10_000;
export default function ResultsPanel() {
const result = useCoverageStore((s) => s.result);
const [visible, setVisible] = useState(false);
const [show, setShow] = useState(false);
const timerRef = useRef<ReturnType<typeof setTimeout> | undefined>(undefined);
const prevResultRef = useRef<CoverageResult | null>(null);
const dismiss = useCallback(() => {
setVisible(false);
setTimeout(() => setShow(false), 300);
}, []);
useEffect(() => {
// Only trigger on NEW result (not initial mount with existing result)
if (result && result !== prevResultRef.current && result.points.length > 0) {
setShow(true);
requestAnimationFrame(() => setVisible(true));
if (timerRef.current) clearTimeout(timerRef.current);
timerRef.current = setTimeout(dismiss, AUTO_DISMISS_MS);
}
prevResultRef.current = result;
return () => {
if (timerRef.current) clearTimeout(timerRef.current);
};
}, [result, dismiss]);
if (!show || !result) return null;
const counts = classifyCoverage(result.points);
const total = result.points.length;
const preset = result.settings.preset ?? 'standard';
const timeStr = result.calculationTime.toFixed(1);
return (
<>
<style>{`@keyframes rfcp-shrink { from { width: 100%; } to { width: 0%; } }`}</style>
<div
className={`absolute top-4 left-4 z-[1000] w-72
bg-white/95 dark:bg-dark-surface/95 backdrop-blur-sm
border border-gray-200 dark:border-dark-border rounded-lg shadow-lg
transition-all duration-300 ease-out pointer-events-auto
${visible ? 'opacity-100 translate-x-0' : 'opacity-0 -translate-x-8'}`}
>
{/* Header */}
<div className="flex items-center justify-between px-3 pt-3 pb-1">
<h3 className="text-xs font-semibold text-gray-700 dark:text-dark-text">
Calculation Complete
</h3>
<button
onClick={dismiss}
className="text-gray-400 hover:text-gray-600 dark:hover:text-dark-text text-sm leading-none"
>
&times;
</button>
</div>
{/* Body */}
<div className="px-3 pb-3 space-y-2">
{/* Time + points */}
<div className="flex items-baseline gap-2">
<span className="text-lg font-bold text-gray-800 dark:text-dark-text">
{timeStr}s
</span>
<span className="text-xs text-gray-500 dark:text-dark-muted">
{total.toLocaleString()} points
</span>
</div>
{/* Coverage breakdown bar */}
<div className="flex h-2 rounded-full overflow-hidden">
{counts.excellent > 0 && (
<div className="bg-blue-500" style={{ width: `${(counts.excellent / total) * 100}%` }} />
)}
{counts.good > 0 && (
<div className="bg-green-500" style={{ width: `${(counts.good / total) * 100}%` }} />
)}
{counts.fair > 0 && (
<div className="bg-yellow-500" style={{ width: `${(counts.fair / total) * 100}%` }} />
)}
{counts.weak > 0 && (
<div className="bg-red-500" style={{ width: `${(counts.weak / total) * 100}%` }} />
)}
</div>
{/* Coverage percentages */}
<div className="grid grid-cols-4 gap-1 text-center text-[10px]">
<div>
<div className="font-semibold text-blue-600 dark:text-blue-400">
{total > 0 ? ((counts.excellent / total) * 100).toFixed(0) : 0}%
</div>
<div className="text-gray-400">Exc</div>
</div>
<div>
<div className="font-semibold text-green-600 dark:text-green-400">
{total > 0 ? ((counts.good / total) * 100).toFixed(0) : 0}%
</div>
<div className="text-gray-400">Good</div>
</div>
<div>
<div className="font-semibold text-yellow-600 dark:text-yellow-400">
{total > 0 ? ((counts.fair / total) * 100).toFixed(0) : 0}%
</div>
<div className="text-gray-400">Fair</div>
</div>
<div>
<div className="font-semibold text-red-600 dark:text-red-400">
{total > 0 ? ((counts.weak / total) * 100).toFixed(0) : 0}%
</div>
<div className="text-gray-400">Weak</div>
</div>
</div>
{/* Metadata */}
<div className="flex flex-wrap gap-1.5 text-[10px] text-gray-500 dark:text-dark-muted">
<span className="px-1.5 py-0.5 bg-gray-100 dark:bg-dark-border rounded">
{preset}
</span>
<span className="px-1.5 py-0.5 bg-gray-100 dark:bg-dark-border rounded">
{result.settings.radius}km
</span>
<span className="px-1.5 py-0.5 bg-gray-100 dark:bg-dark-border rounded">
{result.settings.resolution}m
</span>
{result.modelsUsed && result.modelsUsed.length > 0 && (
<span className="px-1.5 py-0.5 bg-gray-100 dark:bg-dark-border rounded">
{result.modelsUsed.length} models
</span>
)}
</div>
</div>
{/* Auto-dismiss progress bar */}
<div className="h-0.5 bg-gray-100 dark:bg-dark-border rounded-b-lg overflow-hidden">
<div
className="h-full bg-blue-400 dark:bg-blue-500"
style={{
animation: `rfcp-shrink ${AUTO_DISMISS_MS}ms linear forwards`,
}}
/>
</div>
</div>
</>
);
}
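The thresholds in `classifyCoverage` use strict greater-than comparisons, so a point at exactly −70 dBm counts as "good", not "excellent". A standalone copy of the bucketing, useful for pinning down those boundaries:

```typescript
type Bucket = 'excellent' | 'good' | 'fair' | 'weak';

// Same strict greater-than thresholds as classifyCoverage above:
// > -70 excellent, > -85 good, > -100 fair, else weak.
function bucketRsrp(rsrp: number): Bucket {
  if (rsrp > -70) return 'excellent';
  if (rsrp > -85) return 'good';
  if (rsrp > -100) return 'fair';
  return 'weak';
}
```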

View File

@@ -1,6 +1,7 @@
import { useState, useCallback, useMemo } from 'react';
import type { Site } from '@/types/index.ts';
import { useSitesStore } from '@/store/sites.ts';
import { useToolStore } from '@/store/tools.ts';
import { useToastStore } from '@/components/ui/Toast.tsx';
import Button from '@/components/ui/Button.tsx';
import ConfirmDialog from '@/components/ui/ConfirmDialog.tsx';
@@ -75,9 +76,20 @@ export default function SiteList({ onEditSite, onAddSite }: SiteListProps) {
const deleteSite = useSitesStore((s) => s.deleteSite);
const selectSite = useSitesStore((s) => s.selectSite);
const selectedSiteId = useSitesStore((s) => s.selectedSiteId);
-const isPlacingMode = useSitesStore((s) => s.isPlacingMode);
-const togglePlacingMode = useSitesStore((s) => s.togglePlacingMode);
const selectedSiteIds = useSitesStore((s) => s.selectedSiteIds);
+// Tool store for site placement mode
+const activeTool = useToolStore((s) => s.activeTool);
+const setActiveTool = useToolStore((s) => s.setActiveTool);
+const clearTool = useToolStore((s) => s.clearTool);
+const isPlacingMode = activeTool === 'site-placement';
+const togglePlacingMode = useCallback(() => {
+if (isPlacingMode) {
+clearTool();
+} else {
+setActiveTool('site-placement');
+}
+}, [isPlacingMode, setActiveTool, clearTool]);
const toggleSiteSelection = useSitesStore((s) => s.toggleSiteSelection);
const selectAllSites = useSitesStore((s) => s.selectAllSites);
const clearSelection = useSitesStore((s) => s.clearSelection);

View File

@@ -0,0 +1,172 @@
/**
* Small header badge showing the active compute backend (CPU or GPU).
* Fetches status on mount. Clicking opens a dropdown to switch devices.
* Dropdown opens to the LEFT to avoid overlapping map controls.
*/
import { useState, useEffect, useRef } from 'react';
import { api } from '@/services/api.ts';
import type { GPUStatus, GPUDevice } from '@/services/api.ts';
export default function GPUIndicator() {
const [status, setStatus] = useState<GPUStatus | null>(null);
const [open, setOpen] = useState(false);
const [switching, setSwitching] = useState(false);
const [diagnostics, setDiagnostics] = useState<Record<string, unknown> | null>(null);
const [showDiag, setShowDiag] = useState(false);
const ref = useRef<HTMLDivElement>(null);
useEffect(() => {
api.getGPUStatus().then(setStatus).catch(() => {});
}, []);
// Close dropdown on outside click
useEffect(() => {
if (!open) return;
const handler = (e: MouseEvent) => {
if (ref.current && !ref.current.contains(e.target as Node)) {
setOpen(false);
setShowDiag(false);
}
};
document.addEventListener('mousedown', handler);
return () => document.removeEventListener('mousedown', handler);
}, [open]);
// Auto-fetch diagnostics when dropdown opens and only CPU available
useEffect(() => {
if (open && status?.active_backend === 'cpu' && !diagnostics) {
api.getGPUDiagnostics().then(setDiagnostics).catch(() => {});
}
}, [open, status?.active_backend, diagnostics]);
if (!status) return null;
const isGPU = status.active_backend !== 'cpu';
// Short label: just "CPU" or first word of GPU name
const label = isGPU
? (status.active_device?.name?.split(' ')[0] ?? 'GPU')
: 'CPU';
const handleSwitch = async (device: GPUDevice) => {
setSwitching(true);
try {
await api.setGPUDevice(device.backend, device.index);
const updated = await api.getGPUStatus();
setStatus(updated);
} catch {
// ignore
}
setSwitching(false);
setOpen(false);
};
const handleRunDiagnostics = async () => {
try {
const diag = await api.getGPUDiagnostics();
setDiagnostics(diag);
setShowDiag(true);
} catch {
// ignore
}
};
return (
<div ref={ref} className="relative">
<button
onClick={() => setOpen(!open)}
className={`px-2 py-1 rounded text-[11px] font-medium transition-colors
${isGPU
? 'bg-green-100 text-green-700 hover:bg-green-200 dark:bg-green-900/30 dark:text-green-300 dark:hover:bg-green-900/50'
: 'bg-gray-100 text-gray-600 hover:bg-gray-200 dark:bg-dark-border dark:text-dark-muted dark:hover:bg-dark-muted'
}`}
title={`Compute: ${status.active_device?.name ?? label}`}
>
{isGPU ? '\u26A1' : '\u2699\uFE0F'} {label}
</button>
{open && (
<div className="absolute top-full left-0 mt-1 w-64 bg-white dark:bg-dark-surface border border-gray-200 dark:border-dark-border rounded-lg shadow-lg z-[9999] py-1">
<div className="px-3 py-1.5 text-[10px] font-semibold text-gray-400 dark:text-dark-muted uppercase">
Compute Devices
</div>
{status.available_devices.map((d) => {
const isActive =
status.active_device?.backend === d.backend &&
status.active_device?.index === d.index;
return (
<button
key={`${d.backend}-${d.index}`}
onClick={() => !isActive && handleSwitch(d)}
disabled={isActive || switching}
className={`w-full text-left px-3 py-2 text-xs transition-colors
${isActive
? 'bg-blue-50 text-blue-700 dark:bg-blue-900/20 dark:text-blue-300'
: 'text-gray-700 hover:bg-gray-50 dark:text-dark-text dark:hover:bg-dark-border'
}
disabled:opacity-60`}
>
<div className="flex items-center justify-between">
<span className="font-medium">{d.name}</span>
{isActive && (
<span className="text-[10px] text-blue-500 dark:text-blue-400">Active</span>
)}
</div>
<div className="text-[10px] text-gray-400 dark:text-dark-muted mt-0.5">
{d.backend.toUpperCase()}
{d.memory_mb > 0 && ` \u2022 ${d.memory_mb} MB`}
</div>
</button>
);
})}
{/* Show help when only CPU available */}
{status.available_devices.length === 1 && status.active_backend === 'cpu' && (
<div className="border-t border-gray-100 dark:border-dark-border mt-1 pt-2 px-3 pb-2">
<div className="text-[10px] text-yellow-600 dark:text-yellow-400 mb-2">
No GPU detected. For faster calculations:
</div>
{diagnostics?.is_wsl ? (
<div className="text-[10px] text-gray-500 dark:text-dark-muted space-y-1">
<div className="text-[9px] text-gray-400 dark:text-dark-muted mb-1">WSL2 detected - use pip3:</div>
<div className="bg-gray-100 dark:bg-dark-border px-2 py-1 rounded font-mono text-[9px] break-all">
pip3 install cupy-cuda12x --break-system-packages
</div>
<div className="text-[9px] text-gray-400 dark:text-dark-muted mt-1">Then restart RFCP</div>
</div>
) : (
<div className="text-[10px] text-gray-500 dark:text-dark-muted space-y-0.5">
<div>NVIDIA: <code className="bg-gray-100 dark:bg-dark-border px-1 rounded">pip install cupy-cuda12x</code></div>
<div>Intel/AMD: <code className="bg-gray-100 dark:bg-dark-border px-1 rounded">pip install pyopencl</code></div>
</div>
)}
{typeof diagnostics?.nvidia_smi === 'string' && diagnostics.nvidia_smi !== 'not found or error' && (
<div className="mt-2 text-[9px] text-green-600 dark:text-green-400">
GPU hardware found: {diagnostics.nvidia_smi.split(',')[0]}
</div>
)}
<button
onClick={handleRunDiagnostics}
className="mt-2 w-full text-[10px] text-blue-600 dark:text-blue-400 hover:underline text-left"
>
{diagnostics ? 'Refresh Diagnostics' : 'Run Diagnostics'}
</button>
</div>
)}
{/* Diagnostics output */}
{showDiag && diagnostics && (
<div className="border-t border-gray-100 dark:border-dark-border mt-1 pt-2 px-3 pb-2 max-h-48 overflow-y-auto">
<div className="text-[10px] font-semibold text-gray-500 dark:text-dark-muted mb-1">
Diagnostics
</div>
<pre className="text-[9px] text-gray-600 dark:text-gray-400 whitespace-pre-wrap break-all">
{JSON.stringify(diagnostics, null, 2)}
</pre>
</div>
)}
</div>
)}
</div>
);
}

View File

@@ -1,6 +1,39 @@
import type { FrequencyBand } from '@/types/index.ts';
export const COMMON_FREQUENCIES: FrequencyBand[] = [
{
value: 70,
name: 'VHF Low',
range: '30-88 MHz',
type: 'VHF',
characteristics: {
range: 'long',
penetration: 'excellent',
typical: 'Military tactical, long-range ground wave',
},
},
{
value: 225,
name: 'Military UHF',
range: '225-400 MHz',
type: 'UHF',
characteristics: {
range: 'long',
penetration: 'good',
typical: 'NATO MILCOM, SINCGARS, air-ground',
},
},
{
value: 700,
name: 'Band 28',
range: '703-803 MHz',
type: 'LTE',
characteristics: {
range: 'long',
penetration: 'excellent',
typical: 'Extended range LTE, first responder (FirstNet)',
},
},
{
value: 800,
name: 'Band 20',
@@ -12,6 +45,17 @@ export const COMMON_FREQUENCIES: FrequencyBand[] = [
typical: 'Rural coverage, deep building penetration',
},
},
{
value: 900,
name: 'Band 8',
range: '880-960 MHz',
type: 'LTE',
characteristics: {
range: 'long',
penetration: 'excellent',
typical: 'GSM refarming, IoT, rural coverage',
},
},
{
value: 1800,
name: 'Band 3',
@@ -91,16 +135,16 @@ export const COMMON_FREQUENCIES: FrequencyBand[] = [
},
];
-export const QUICK_FREQUENCIES = [800, 1800, 1900, 2100, 2600];
+export const QUICK_FREQUENCIES = [700, 800, 900, 1800, 1900, 2100, 2600];
// Tactical radio presets for UHF/VHF
-export const TACTICAL_FREQUENCIES = [150, 450];
+export const TACTICAL_FREQUENCIES = [70, 150, 225, 450];
// All quick frequencies grouped by band type
export const FREQUENCY_GROUPS = {
-LTE: [800, 1800, 1900, 2100, 2600],
-UHF: [450],
-VHF: [150],
+VHF: [70, 150],
+UHF: [225, 450],
+LTE: [700, 800, 900, 1800, 1900, 2100, 2600],
'5G': [3500],
} as const;
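
As an aside on how such preset tables tend to be consumed, a nearest-band lookup can be sketched against a trimmed copy of the list (the `nearestBand` helper is hypothetical, not part of this file):

```typescript
interface FrequencyBand {
  value: number;
  name: string;
}

// Trimmed illustrative subset of COMMON_FREQUENCIES.
const BANDS: FrequencyBand[] = [
  { value: 70, name: 'VHF Low' },
  { value: 700, name: 'Band 28' },
  { value: 1800, name: 'Band 3' },
];

// Pick the preset band whose center value is closest to an arbitrary MHz input.
function nearestBand(mhz: number): FrequencyBand {
  return BANDS.reduce((best, b) =>
    Math.abs(b.value - mhz) < Math.abs(best.value - mhz) ? b : best);
}
```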

View File

@@ -2,6 +2,7 @@ import { useEffect } from 'react';
import { useSitesStore } from '@/store/sites.ts';
import { useCoverageStore } from '@/store/coverage.ts';
import { useSettingsStore } from '@/store/settings.ts';
import { useToolStore } from '@/store/tools.ts';
import { useToastStore } from '@/components/ui/Toast.tsx';
interface ShortcutHandlers {
@@ -63,7 +64,7 @@ export function useKeyboardShortcuts({
// Escape always works
if (e.key === 'Escape') {
useSitesStore.getState().selectSite(null);
useSitesStore.getState().setPlacingMode(false);
useToolStore.getState().clearTool();
onCloseForm();
return;
}
@@ -76,7 +77,7 @@ export function useKeyboardShortcuts({
switch (e.key.toUpperCase()) {
case 'S': // Shift+S: New site (place mode)
e.preventDefault();
useSitesStore.getState().setPlacingMode(true);
useToolStore.getState().setActiveTool('site-placement');
useToastStore.getState().addToast('Click on map to place new site', 'info');
return;
case 'C': // Shift+C: Clear coverage

View File

@@ -35,6 +35,31 @@
width: 100%;
height: 100%;
z-index: 0;
cursor: default !important;
}
/* Remove grab cursor from interactive layers */
.leaflet-interactive {
cursor: default !important;
}
/* Grabbing only when actually dragging */
.leaflet-container.leaflet-dragging,
.leaflet-container:active {
cursor: grabbing !important;
}
/* Tool-specific cursors (applied via JS class toggle) */
.leaflet-container.tool-ruler {
cursor: crosshair !important;
}
.leaflet-container.tool-rx-placement {
cursor: crosshair !important;
}
.leaflet-container.tool-site-placement {
cursor: cell !important;
}
/* Dark mode map tiles (invert brightness slightly) */

View File

@@ -75,6 +75,11 @@ export interface ApiCoverageStats {
points_with_atmospheric_loss: number;
}
export interface ApiBoundaryPoint {
lat: number;
lon: number;
}
export interface CoverageResponse {
points: ApiCoveragePoint[];
count: number;
@@ -82,6 +87,7 @@ export interface CoverageResponse {
stats: ApiCoverageStats;
computation_time: number;
models_used: string[];
boundary?: ApiBoundaryPoint[];
}
export interface Preset {
@@ -212,6 +218,104 @@ class ApiService {
if (!response.ok) throw new Error('Failed to get cache stats');
return response.json();
}
// === GPU API ===
async getGPUStatus(): Promise<GPUStatus> {
const response = await fetch(`${API_BASE}/api/gpu/status`);
if (!response.ok) throw new Error('Failed to get GPU status');
return response.json();
}
async getGPUDevices(): Promise<{ devices: GPUDevice[] }> {
const response = await fetch(`${API_BASE}/api/gpu/devices`);
if (!response.ok) throw new Error('Failed to get GPU devices');
return response.json();
}
async setGPUDevice(backend: string, index: number = 0): Promise<{ status: string; backend: string; device: string }> {
const response = await fetch(`${API_BASE}/api/gpu/set`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ backend, index }),
});
if (!response.ok) {
const err = await response.json().catch(() => ({ detail: 'Failed to set GPU device' }));
throw new Error(err.detail || 'Failed to set GPU device');
}
return response.json();
}
async getGPUDiagnostics(): Promise<Record<string, unknown>> {
const response = await fetch(`${API_BASE}/api/gpu/diagnostics`);
if (!response.ok) throw new Error('Failed to get GPU diagnostics');
return response.json();
}
// === Terrain Profile API ===
async getTerrainProfile(
lat1: number, lon1: number,
lat2: number, lon2: number,
points: number = 100,
): Promise<TerrainProfilePoint[]> {
const params = new URLSearchParams({
lat1: lat1.toString(),
lon1: lon1.toString(),
lat2: lat2.toString(),
lon2: lon2.toString(),
points: points.toString(),
});
const response = await fetch(`${API_BASE}/api/terrain/profile?${params}`);
if (!response.ok) throw new Error('Failed to get terrain profile');
const data = await response.json();
return data.profile ?? data;
}
// === Link Budget API ===
async calculateLinkBudget(request: LinkBudgetRequest): Promise<LinkBudgetResponse> {
const response = await fetch(`${API_BASE}/api/coverage/link-budget`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(request),
});
if (!response.ok) {
const error = await response.json().catch(() => ({ detail: 'Link budget calculation failed' }));
throw new Error(error.detail || 'Link budget calculation failed');
}
return response.json();
}
// === Fresnel Profile API ===
async getFresnelProfile(request: FresnelProfileRequest): Promise<FresnelProfileResponse> {
const response = await fetch(`${API_BASE}/api/coverage/fresnel-profile`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(request),
});
if (!response.ok) {
const error = await response.json().catch(() => ({ detail: 'Fresnel profile calculation failed' }));
throw new Error(error.detail || 'Fresnel profile calculation failed');
}
return response.json();
}
// === Interference API ===
async calculateInterference(request: CoverageRequest): Promise<InterferenceResponse> {
const response = await fetch(`${API_BASE}/api/coverage/interference`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(request),
});
if (!response.ok) {
const error = await response.json().catch(() => ({ detail: 'Interference calculation failed' }));
throw new Error(error.detail || 'Interference calculation failed');
}
return response.json();
}
}
// === Region types ===
@@ -244,4 +348,138 @@ export interface CacheStats {
vegetation_mb: number;
}
// === GPU types ===
export interface GPUDevice {
backend: string;
index: number;
name: string;
memory_mb: number;
}
export interface GPUStatus {
active_backend: string;
active_device: GPUDevice | null;
gpu_available: boolean;
available_devices: GPUDevice[];
}
// === Terrain Profile types ===
export interface TerrainProfilePoint {
lat: number;
lon: number;
elevation: number;
distance: number;
}
// === Link Budget types ===
export interface LinkBudgetRequest {
tx_lat: number;
tx_lon: number;
tx_power_dbm: number;
tx_gain_dbi: number;
tx_cable_loss_db: number;
tx_height_m: number;
rx_lat: number;
rx_lon: number;
rx_gain_dbi: number;
rx_cable_loss_db: number;
rx_sensitivity_dbm: number;
rx_height_m: number;
frequency_mhz: number;
}
export interface LinkBudgetResponse {
distance_km: number;
distance_m: number;
tx_elevation_m: number;
rx_elevation_m: number;
eirp_dbm: number;
fspl_db: number;
terrain_loss_db: number;
total_path_loss_db: number;
los_clear: boolean;
obstructions: { distance_m: number; height_above_los_m: number }[];
rx_power_dbm: number;
margin_db: number;
status: 'OK' | 'FAIL';
link_budget: {
tx_power_dbm: number;
tx_gain_dbi: number;
tx_cable_loss_db: number;
rx_gain_dbi: number;
rx_cable_loss_db: number;
rx_sensitivity_dbm: number;
};
}
// === Fresnel Profile types ===
export interface FresnelProfileRequest {
tx_lat: number;
tx_lon: number;
tx_height_m: number;
rx_lat: number;
rx_lon: number;
rx_height_m: number;
frequency_mhz: number;
num_points?: number;
}
export interface FresnelProfilePoint {
distance: number;
lat: number;
lon: number;
terrain_elevation: number;
los_height: number;
fresnel_top: number;
fresnel_bottom: number;
f1_radius: number;
clearance: number;
}
export interface FresnelProfileResponse {
profile: FresnelProfilePoint[];
total_distance_m: number;
tx_elevation: number;
rx_elevation: number;
frequency_mhz: number;
wavelength_m: number;
los_clear: boolean;
fresnel_clear: boolean;
fresnel_clear_pct: number;
worst_clearance_m: number;
estimated_loss_db: number;
recommendation: string;
}
// === Interference types ===
export interface InterferencePoint {
lat: number;
lon: number;
ci_ratio_db: number;
best_server_idx: number;
best_server_rsrp: number;
}
export interface InterferenceResponse {
points: InterferencePoint[];
count: number;
stats: {
min_ci_db: number;
max_ci_db: number;
avg_ci_db: number;
good_coverage_pct: number;
marginal_coverage_pct: number;
interference_dominant_pct: number;
};
computation_time: number;
sites: { name: string; frequency_mhz: number }[];
frequency_groups: Record<number, number>;
warning: string | null;
}
export const api = new ApiService();

View File

@@ -19,20 +19,30 @@ export interface WSProgress {
eta_seconds?: number;
}
export interface WSPartialResults {
points: Array<Record<string, unknown>>;
tile: number;
total_tiles: number;
progress: number;
}
type ProgressCallback = (progress: WSProgress) => void;
type ResultCallback = (data: CoverageResponse) => void;
type ErrorCallback = (error: string) => void;
type PartialResultsCallback = (data: WSPartialResults) => void;
type ConnectionCallback = (connected: boolean) => void;
interface PendingCalc {
onProgress?: ProgressCallback;
onResult: ResultCallback;
onError: ErrorCallback;
onPartialResults?: PartialResultsCallback;
}
class WebSocketService {
private ws: WebSocket | null = null;
private reconnectTimer: ReturnType<typeof setTimeout> | undefined;
private pingTimer: ReturnType<typeof setInterval> | undefined;
private _connected = false;
private _pendingCalcs = new Map<string, PendingCalc>();
private _connectionListeners = new Set<ConnectionCallback>();
@@ -70,10 +80,20 @@ class WebSocketService {
this.ws.onopen = () => {
this._setConnected(true);
// Keepalive pings every 30s to prevent connection timeout during long calculations
if (this.pingTimer) clearInterval(this.pingTimer);
this.pingTimer = setInterval(() => {
if (this.ws?.readyState === WebSocket.OPEN) {
this.ws.send(JSON.stringify({ type: 'ping' }));
}
}, 30_000);
};
this.ws.onclose = () => {
this._setConnected(false);
if (this.pingTimer) { clearInterval(this.pingTimer); this.pingTimer = undefined; }
// Fail all pending calculations — their callbacks reference the old socket
this._failPendingCalcs('WebSocket disconnected');
this.reconnectTimer = setTimeout(() => this.connect(), 2000);
};
@@ -99,6 +119,16 @@ class WebSocketService {
console.warn('[WS] progress msg but no pending calc:', calcId, msg.phase, msg.progress);
}
break;
case 'partial_results':
if (pending?.onPartialResults) {
pending.onPartialResults({
points: msg.points,
tile: msg.tile,
total_tiles: msg.total_tiles,
progress: msg.progress,
});
}
break;
case 'result':
if (pending) {
pending.onResult(msg.data);
@@ -121,8 +151,18 @@ class WebSocketService {
};
}
/** Fail all pending calculations (e.g. on disconnect). */
private _failPendingCalcs(reason: string): void {
for (const [calcId, pending] of this._pendingCalcs) {
try { pending.onError(reason); } catch { /* ignore */ }
this._pendingCalcs.delete(calcId);
}
}
disconnect(): void {
if (this.reconnectTimer) clearTimeout(this.reconnectTimer);
if (this.pingTimer) { clearInterval(this.pingTimer); this.pingTimer = undefined; }
this._failPendingCalcs('WebSocket disconnected');
this.ws?.close();
this.ws = null;
this._setConnected(false);
@@ -138,13 +178,14 @@ class WebSocketService {
onResult: ResultCallback,
onError: ErrorCallback,
onProgress?: ProgressCallback,
onPartialResults?: PartialResultsCallback,
): string | undefined {
if (!this.ws || this.ws.readyState !== WebSocket.OPEN) {
return undefined;
}
const calcId = crypto.randomUUID();
this._pendingCalcs.set(calcId, { onProgress, onResult, onError });
this._pendingCalcs.set(calcId, { onProgress, onResult, onError, onPartialResults });
this.ws.send(JSON.stringify({
type: 'calculate',

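The `_failPendingCalcs` change above rests on one invariant: when the socket drops, every pending calculation is failed exactly once, even if a listener throws. That registry pattern can be isolated from the socket plumbing as a standalone sketch (class and method names here are illustrative, not the service's real API):

```typescript
type ErrorCallback = (err: string) => void;

class PendingRegistry {
  private pending = new Map<string, { onError: ErrorCallback }>();

  add(id: string, onError: ErrorCallback): void {
    this.pending.set(id, { onError });
  }

  // Fail and remove every pending entry; a throwing listener must not
  // stop cleanup of the others. Returns the ids that were failed.
  failAll(reason: string): string[] {
    const failed: string[] = [];
    for (const [id, p] of this.pending) {
      try { p.onError(reason); } catch { /* swallow listener errors */ }
      this.pending.delete(id);
      failed.push(id);
    }
    return failed;
  }
}
```

Calling `failAll` twice is harmless, which matters because both `onclose` and `disconnect()` invoke it.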
View File

@@ -0,0 +1,64 @@
import { create } from 'zustand';
export interface PropagationSnapshot {
// Models used
modelsUsed: string[];
use_terrain: boolean;
use_buildings: boolean;
use_materials: boolean;
use_dominant_path: boolean;
use_street_canyon: boolean;
use_reflections: boolean;
use_water_reflection: boolean;
use_vegetation: boolean;
use_atmospheric: boolean;
// Site params (first site or average)
frequency: number;
txPower: number;
antennaGain: number;
antennaHeight: number;
// Environmental
season: string;
rain_rate: number;
indoor_loss_type: string;
fading_margin: number;
}
export interface CalculationEntry {
id: string;
timestamp: Date;
preset: string;
radius: number;
resolution: number;
computationTime: number;
totalPoints: number;
coverage: { excellent: number; good: number; fair: number; weak: number };
avgRsrp: number;
rangeMin: number;
rangeMax: number;
// Propagation snapshot for detailed history
propagation?: PropagationSnapshot;
}
interface CalcHistoryState {
entries: CalculationEntry[];
addEntry: (entry: CalculationEntry) => void;
clearHistory: () => void;
}
const MAX_ENTRIES = 50;
export const useCalcHistoryStore = create<CalcHistoryState>((set) => ({
entries: [],
addEntry: (entry) =>
set((state) => {
const entries = [entry, ...state.entries];
if (entries.length > MAX_ENTRIES) {
entries.length = MAX_ENTRIES;
}
return { entries };
}),
clearHistory: () => set({ entries: [] }),
}));
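Stripped of zustand, the history cap in `addEntry` is a prepend-and-truncate on an immutable array: newest entry first, never more than `MAX_ENTRIES` kept. A minimal sketch (the `pushBounded` helper name is hypothetical):

```typescript
const MAX_ENTRIES = 50;

// Prepend the new entry and truncate to the cap, newest first.
function pushBounded<T>(entries: T[], entry: T): T[] {
  const next = [entry, ...entries];
  if (next.length > MAX_ENTRIES) next.length = MAX_ENTRIES;
  return next;
}
```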

View File

@@ -3,6 +3,9 @@ import { api } from '@/services/api.ts';
import { wsService } from '@/services/websocket.ts';
import type { WSProgress } from '@/services/websocket.ts';
import { useSitesStore } from '@/store/sites.ts';
import { useToastStore } from '@/components/ui/Toast.tsx';
import { useCalcHistoryStore } from '@/store/calcHistory.ts';
import type { CalculationEntry, PropagationSnapshot } from '@/store/calcHistory.ts';
import type { CoverageResult, CoverageSettings, CoverageApiStats } from '@/types/index.ts';
import type { ApiSiteParams, CoverageResponse } from '@/services/api.ts';
@@ -17,6 +20,11 @@ interface CoverageState {
progress: WSProgress | null;
activeCalcId: string | null;
// Progressive rendering — accumulates points as tiles complete
partialPoints: CoverageResult['points'];
tilesCompleted: number;
totalTiles: number;
setResult: (result: CoverageResult | null) => void;
clearCoverage: () => void;
setIsCalculating: (val: boolean) => void;
@@ -49,7 +57,7 @@ function buildApiSettings(settings: CoverageSettings) {
return {
radius: settings.radius * 1000, // km → meters
resolution: settings.resolution,
min_signal: settings.rsrpThreshold,
min_signal: -130, // Send all useful points; frontend filters visually via rsrpThreshold
preset: settings.preset,
use_terrain: settings.use_terrain,
use_buildings: settings.use_buildings,
@@ -65,6 +73,7 @@ function buildApiSettings(settings: CoverageSettings) {
use_atmospheric: settings.use_atmospheric,
temperature_c: settings.temperature_c,
humidity_percent: settings.humidity_percent,
fading_margin: settings.fading_margin,
};
}
@@ -89,6 +98,72 @@ function responseToResult(response: CoverageResponse, settings: CoverageSettings
settings: settings,
stats: response.stats as CoverageApiStats,
modelsUsed: response.models_used,
boundary: response.boundary,
};
}
function buildHistoryEntry(result: CoverageResult): CalculationEntry {
const counts = { excellent: 0, good: 0, fair: 0, weak: 0 };
let minRsrp = Infinity;
let maxRsrp = -Infinity;
for (const p of result.points) {
if (p.rsrp > -70) counts.excellent++;
else if (p.rsrp > -85) counts.good++;
else if (p.rsrp > -100) counts.fair++;
else counts.weak++;
if (p.rsrp < minRsrp) minRsrp = p.rsrp;
if (p.rsrp > maxRsrp) maxRsrp = p.rsrp;
}
const total = result.points.length;
const avgRsrp = result.stats?.avg_rsrp
?? (total > 0 ? result.points.reduce((s, p) => s + p.rsrp, 0) / total : 0);
// Capture propagation snapshot from settings + sites
const sites = useSitesStore.getState().sites.filter((s) => s.visible);
const firstSite = sites[0];
const settings = result.settings;
const propagation: PropagationSnapshot = {
modelsUsed: result.modelsUsed ?? [],
use_terrain: settings.use_terrain ?? true,
use_buildings: settings.use_buildings ?? true,
use_materials: settings.use_materials ?? true,
use_dominant_path: settings.use_dominant_path ?? false,
use_street_canyon: settings.use_street_canyon ?? false,
use_reflections: settings.use_reflections ?? false,
use_water_reflection: settings.use_water_reflection ?? false,
use_vegetation: settings.use_vegetation ?? false,
use_atmospheric: settings.use_atmospheric ?? false,
frequency: firstSite?.frequency ?? 1800,
txPower: firstSite?.power ?? 43,
antennaGain: firstSite?.gain ?? 18,
antennaHeight: firstSite?.height ?? 30,
season: settings.season ?? 'summer',
rain_rate: settings.rain_rate ?? 0,
indoor_loss_type: settings.indoor_loss_type ?? 'none',
fading_margin: settings.fading_margin ?? 0,
};
return {
id: crypto.randomUUID(),
timestamp: new Date(),
preset: result.settings.preset ?? 'standard',
radius: result.settings.radius,
resolution: result.settings.resolution,
computationTime: result.calculationTime,
totalPoints: result.totalPoints,
coverage: {
excellent: total > 0 ? (counts.excellent / total) * 100 : 0,
good: total > 0 ? (counts.good / total) * 100 : 0,
fair: total > 0 ? (counts.fair / total) * 100 : 0,
weak: total > 0 ? (counts.weak / total) * 100 : 0,
},
avgRsrp,
rangeMin: minRsrp === Infinity ? 0 : minRsrp,
rangeMax: maxRsrp === -Infinity ? 0 : maxRsrp,
propagation,
};
}
@@ -120,11 +195,16 @@ export const useCoverageStore = create<CoverageState>((set, get) => ({
use_atmospheric: false,
temperature_c: 15,
humidity_percent: 50,
// Fading margin
fading_margin: 0,
},
heatmapVisible: true,
error: null,
progress: null,
activeCalcId: null,
partialPoints: [],
tilesCompleted: 0,
totalTiles: 0,
setResult: (result) => set({ result }),
clearCoverage: () => set({ result: null, error: null }),
@@ -138,6 +218,12 @@ export const useCoverageStore = create<CoverageState>((set, get) => ({
setError: (error) => set({ error }),
calculateCoverage: async () => {
// Guard against duplicate calculations
if (get().isCalculating) {
console.warn('[Coverage] Calculation already in progress, ignoring duplicate request');
return;
}
const { settings } = get();
const sites = useSitesStore.getState().sites;
@@ -154,7 +240,7 @@ export const useCoverageStore = create<CoverageState>((set, get) => ({
const apiSettings = buildApiSettings(settings);
set({ isCalculating: true, error: null, progress: null, activeCalcId: null });
set({ isCalculating: true, error: null, progress: null, activeCalcId: null, partialPoints: [], tilesCompleted: 0, totalTiles: 0 });
// Try WebSocket first (provides real-time progress)
if (wsService.connected) {
@@ -163,17 +249,66 @@ export const useCoverageStore = create<CoverageState>((set, get) => ({
apiSettings as unknown as Record<string, unknown>,
// onResult
(data) => {
const result = responseToResult(data, settings);
set({ result, isCalculating: false, error: null, progress: null, activeCalcId: null });
try {
const result = responseToResult(data, settings);
set({ result, isCalculating: false, error: null, progress: null, activeCalcId: null, partialPoints: [], tilesCompleted: 0, totalTiles: 0 });
// Show success toast for WS result
const addToast = useToastStore.getState().addToast;
if (result.points.length === 0) {
addToast('No coverage points. Try increasing radius.', 'warning');
} else {
const timeStr = result.calculationTime.toFixed(1);
const firstSite = useSitesStore.getState().sites.find((s) => s.visible);
const freqStr = firstSite ? ` \u2022 ${firstSite.frequency} MHz` : '';
const presetStr = settings.preset ? ` \u2022 ${settings.preset}` : '';
const modelsStr = result.modelsUsed?.length
? ` \u2022 ${result.modelsUsed.length} models`
: '';
addToast(
`${result.totalPoints.toLocaleString()} pts \u2022 ${timeStr}s${presetStr}${freqStr}${modelsStr}`,
'success'
);
}
// Push to session history
if (result.points.length > 0) {
useCalcHistoryStore.getState().addEntry(buildHistoryEntry(result));
}
} catch (err) {
console.error('[Coverage] Failed to process result:', err);
set({ isCalculating: false, error: 'Failed to process coverage result', progress: null, activeCalcId: null, partialPoints: [], tilesCompleted: 0, totalTiles: 0 });
}
},
// onError
(error) => {
set({ isCalculating: false, error, progress: null, activeCalcId: null });
set({ isCalculating: false, error, progress: null, activeCalcId: null, partialPoints: [], tilesCompleted: 0, totalTiles: 0 });
useToastStore.getState().addToast(`Calculation failed: ${error}`, 'error');
},
// onProgress
(progress) => {
set({ progress });
},
// onPartialResults — accumulate points as tiles complete
(data) => {
const newPoints = data.points.map((p: Record<string, unknown>) => ({
lat: p.lat as number,
lon: p.lon as number,
rsrp: p.rsrp as number,
distance: p.distance as number,
has_los: p.has_los as boolean,
terrain_loss: p.terrain_loss as number,
building_loss: p.building_loss as number,
reflection_gain: (p.reflection_gain as number) ?? 0,
vegetation_loss: (p.vegetation_loss as number) ?? 0,
rain_loss: (p.rain_loss as number) ?? 0,
indoor_loss: (p.indoor_loss as number) ?? 0,
atmospheric_loss: (p.atmospheric_loss as number) ?? 0,
}));
set((state) => ({
partialPoints: [...state.partialPoints, ...newPoints],
tilesCompleted: data.tile + 1,
totalTiles: data.total_tiles,
}));
},
);
if (calcId) {
@@ -191,6 +326,10 @@ export const useCoverageStore = create<CoverageState>((set, get) => ({
const result = responseToResult(response, settings);
set({ result, isCalculating: false, error: null });
// Push to session history
if (result.points.length > 0) {
useCalcHistoryStore.getState().addEntry(buildHistoryEntry(result));
}
} catch (err) {
if (err instanceof Error && err.name === 'AbortError') {
set({ isCalculating: false });
@@ -209,6 +348,6 @@ export const useCoverageStore = create<CoverageState>((set, get) => ({
wsService.cancel(activeCalcId);
}
api.cancelCalculation();
set({ isCalculating: false, progress: null, activeCalcId: null });
set({ isCalculating: false, progress: null, activeCalcId: null, partialPoints: [], tilesCompleted: 0, totalTiles: 0 });
},
}));
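`buildHistoryEntry` buckets points by the RSRP thresholds used above: better than -70 dBm is excellent, -85 good, -100 fair, everything else weak. The classification on its own, as a standalone sketch (`classifyRsrp` and `coveragePercentages` are illustrative names, not store exports):

```typescript
type Bucket = 'excellent' | 'good' | 'fair' | 'weak';

// Same thresholds as buildHistoryEntry: -70 / -85 / -100 dBm.
function classifyRsrp(rsrp: number): Bucket {
  if (rsrp > -70) return 'excellent';
  if (rsrp > -85) return 'good';
  if (rsrp > -100) return 'fair';
  return 'weak';
}

function coveragePercentages(rsrps: number[]): Record<Bucket, number> {
  const counts: Record<Bucket, number> = { excellent: 0, good: 0, fair: 0, weak: 0 };
  for (const r of rsrps) counts[classifyRsrp(r)]++;
  const total = rsrps.length || 1; // guard against division by zero on empty input
  return {
    excellent: (counts.excellent / total) * 100,
    good: (counts.good / total) * 100,
    fair: (counts.fair / total) * 100,
    weak: (counts.weak / total) * 100,
  };
}
```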

View File

@@ -3,6 +3,8 @@ import { persist } from 'zustand/middleware';
type Theme = 'light' | 'dark' | 'system';
type CoverageRenderer = 'webgl-texture' | 'webgl-radial' | 'canvas';
interface SettingsState {
theme: Theme;
showTerrain: boolean;
@@ -10,9 +12,13 @@ interface SettingsState {
showGrid: boolean;
measurementMode: boolean;
showElevationInfo: boolean;
showBoundary: boolean;
showElevationOverlay: boolean;
elevationOpacity: number;
useWebGLCoverage: boolean;
coverageRenderer: CoverageRenderer;
setTheme: (theme: Theme) => void;
setShowBoundary: (show: boolean) => void;
setShowTerrain: (show: boolean) => void;
setTerrainOpacity: (opacity: number) => void;
setShowGrid: (show: boolean) => void;
@@ -20,6 +26,8 @@ interface SettingsState {
setShowElevationInfo: (show: boolean) => void;
setShowElevationOverlay: (show: boolean) => void;
setElevationOpacity: (opacity: number) => void;
setUseWebGLCoverage: (use: boolean) => void;
setCoverageRenderer: (renderer: CoverageRenderer) => void;
}
function applyTheme(theme: Theme) {
@@ -42,8 +50,11 @@ export const useSettingsStore = create<SettingsState>()(
showGrid: false,
measurementMode: false,
showElevationInfo: false,
showBoundary: false,
showElevationOverlay: false,
elevationOpacity: 0.5,
useWebGLCoverage: true, // Default to WebGL smooth rendering
coverageRenderer: 'webgl-radial' as CoverageRenderer, // Default to radial gradients
setTheme: (theme: Theme) => {
set({ theme });
applyTheme(theme);
@@ -53,11 +64,27 @@ export const useSettingsStore = create<SettingsState>()(
setShowGrid: (show: boolean) => set({ showGrid: show }),
setMeasurementMode: (mode: boolean) => set({ measurementMode: mode }),
setShowElevationInfo: (show: boolean) => set({ showElevationInfo: show }),
setShowBoundary: (show: boolean) => set({ showBoundary: show }),
setShowElevationOverlay: (show: boolean) => set({ showElevationOverlay: show }),
setElevationOpacity: (opacity: number) => set({ elevationOpacity: opacity }),
setUseWebGLCoverage: (use: boolean) => set({ useWebGLCoverage: use }),
setCoverageRenderer: (renderer: CoverageRenderer) => set({ coverageRenderer: renderer }),
}),
{
name: 'rfcp-settings',
version: 3, // v3: Add coverageRenderer setting
migrate: (persistedState: unknown, version: number) => {
const state = persistedState as Partial<SettingsState>;
if (version < 2) {
// v2: Reset useWebGLCoverage to true (was stuck on false from early WebGL failures)
state.useWebGLCoverage = true;
}
if (version < 3) {
// v3: Add coverageRenderer, default to radial
state.coverageRenderer = 'webgl-radial';
}
return state as SettingsState;
},
}
)
);

View File

@@ -64,6 +64,7 @@ interface SitesState {
batchAdjustTilt: (delta: number) => Promise<void>;
batchSetTilt: (tilt: number) => Promise<void>;
batchSetFrequency: (frequency: number) => Promise<void>;
setAllSitesFrequency: (frequency: number) => Promise<void>;
}
export const useSitesStore = create<SitesState>((set, get) => ({
@@ -584,4 +585,30 @@ export const useSitesStore = create<SitesState>((set, get) => ({
set({ sites: updatedSites });
useCoverageStore.getState().clearCoverage();
},
setAllSitesFrequency: async (frequency: number) => {
const { sites } = get();
if (sites.length === 0) return;
pushSnapshot('set all sites frequency', sites);
const clamped = Math.max(100, Math.min(6000, frequency));
const now = new Date();
const updatedSites = sites.map((site) => ({
...site,
frequency: clamped,
updatedAt: now,
}));
for (const site of updatedSites) {
await db.sites.put({
id: site.id,
data: JSON.stringify(site),
createdAt: site.createdAt.getTime(),
updatedAt: now.getTime(),
});
}
set({ sites: updatedSites });
useCoverageStore.getState().clearCoverage();
},
}));

View File

@@ -0,0 +1,26 @@
/**
* Tool Mode Store
*
* Single source of truth for which tool is currently active.
* Only the active tool receives map click events.
*/
import { create } from 'zustand';
export type ActiveTool =
| 'none' // Default — pan/zoom only, no click actions
| 'ruler' // Distance measurement, click to add points
| 'rx-placement' // Link Budget RX point, single click
| 'site-placement'; // Place new site on map
interface ToolState {
activeTool: ActiveTool;
setActiveTool: (tool: ActiveTool) => void;
clearTool: () => void;
}
export const useToolStore = create<ToolState>((set) => ({
activeTool: 'none',
setActiveTool: (tool) => set({ activeTool: tool }),
clearTool: () => set({ activeTool: 'none' }),
}));
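A zustand-free sketch of the same contract, assuming only that exactly one tool is active at a time and that only the active tool receives map clicks (the `ToolMode` class and `shouldHandleClick` method are illustrative, not store exports):

```typescript
type ActiveTool = 'none' | 'ruler' | 'rx-placement' | 'site-placement';

// Minimal single-source-of-truth tool mode, mirroring the store's contract.
class ToolMode {
  private tool: ActiveTool = 'none';

  setActiveTool(tool: ActiveTool): void { this.tool = tool; }
  clearTool(): void { this.tool = 'none'; }

  // Only the currently active tool should act on a map click.
  shouldHandleClick(tool: ActiveTool): boolean { return this.tool === tool; }
}
```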

View File

@@ -15,6 +15,11 @@ export interface CoveragePoint {
atmospheric_loss?: number; // dB atmospheric absorption
}
export interface BoundaryPoint {
lat: number;
lon: number;
}
export interface CoverageResult {
points: CoveragePoint[];
calculationTime: number; // seconds (was ms for browser calc)
@@ -23,6 +28,7 @@ export interface CoverageResult {
// API-provided fields
stats?: CoverageApiStats;
modelsUsed?: string[];
boundary?: BoundaryPoint[]; // server-computed coverage boundary
}
export interface CoverageApiStats {
@@ -64,6 +70,8 @@ export interface CoverageSettings {
use_atmospheric?: boolean;
temperature_c?: number;
humidity_percent?: number;
// Fading margin
fading_margin?: number; // dB additional safety loss
}
export interface GridPoint {

View File

@@ -5,5 +5,6 @@ export type {
CoverageSettings,
CoverageApiStats,
GridPoint,
BoundaryPoint,
} from './coverage.ts';
export type { FrequencyBand } from './frequency.ts';

View File

@@ -1,10 +1,11 @@
/**
* RSRP → color mapping with smooth gradient interpolation.
*
* Purple → Orange palette:
* -130 dBm = deep purple (no service)
* -90 dBm = peach (fair)
* -50 dBm = bright orange (excellent)
* CloudRF-style Red → Blue palette:
* -130 dBm = dark red (no service)
* -100 dBm = yellow (fair)
* -70 dBm = green (good)
* -50 dBm = deep blue (excellent)
*
* All functions are pure and allocation-free on the hot path
* (pre-built lookup table for fast per-pixel color resolution).
@@ -18,14 +19,13 @@ interface GradientStop {
}
const GRADIENT_STOPS: GradientStop[] = [
{ value: 0.0, r: 26, g: 0, b: 51 }, // #1a0033 — deep purple (no service)
{ value: 0.15, r: 74, g: 20, b: 140 }, // #4a148c — dark purple
{ value: 0.30, r: 123, g: 31, b: 162 }, // #7b1fa2 — purple (very weak)
{ value: 0.45, r: 171, g: 71, b: 188 }, // #ab47bc — light purple (weak)
{ value: 0.60, r: 255, g: 138, b: 101 }, // #ff8a65 — peach (fair)
{ value: 0.75, r: 255, g: 111, b: 0 }, // #ff6f00 — dark orange (good)
{ value: 0.85, r: 255, g: 152, b: 0 }, // #ff9800 — orange (strong)
{ value: 1.0, r: 255, g: 183, b: 77 }, // #ffb74d — bright orange (excellent)
{ value: 0.0, r: 127, g: 0, b: 0 }, // #7f0000 — dark red (no service)
{ value: 0.15, r: 239, g: 68, b: 68 }, // #EF4444 — red (very weak)
{ value: 0.30, r: 249, g: 115, b: 22 }, // #F97316 — orange (weak)
{ value: 0.50, r: 234, g: 179, b: 8 }, // #EAB308 — yellow (fair)
{ value: 0.70, r: 34, g: 197, b: 94 }, // #22C55E — green (good)
{ value: 0.85, r: 59, g: 130, b: 246 }, // #3B82F6 — blue (strong)
{ value: 1.0, r: 37, g: 99, b: 235 }, // #2563EB — deep blue (excellent)
];
/**

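The stop-based gradient above resolves a color by normalizing RSRP from -130..-50 dBm onto 0..1 and linearly interpolating each channel between the two bracketing stops. A self-contained sketch with a trimmed three-stop palette (the real table has seven stops; `rsrpToColor` here is illustrative, not the module's export):

```typescript
interface Stop { value: number; r: number; g: number; b: number; }

// Trimmed palette: dark red (no service) -> yellow (fair) -> deep blue (excellent).
const STOPS: Stop[] = [
  { value: 0.0, r: 127, g: 0,   b: 0   }, // #7f0000
  { value: 0.5, r: 234, g: 179, b: 8   }, // #EAB308
  { value: 1.0, r: 37,  g: 99,  b: 235 }, // #2563EB
];

function rsrpToColor(rsrp: number): { r: number; g: number; b: number } {
  // Normalize -130..-50 dBm to 0..1, clamped at both ends.
  const t = Math.min(1, Math.max(0, (rsrp + 130) / 80));
  // Find the two stops bracketing t.
  let lo = STOPS[0], hi = STOPS[STOPS.length - 1];
  for (let i = 0; i < STOPS.length - 1; i++) {
    if (t >= STOPS[i].value && t <= STOPS[i + 1].value) {
      lo = STOPS[i]; hi = STOPS[i + 1];
      break;
    }
  }
  // Linear interpolation per channel.
  const span = hi.value - lo.value || 1;
  const f = (t - lo.value) / span;
  return {
    r: Math.round(lo.r + (hi.r - lo.r) * f),
    g: Math.round(lo.g + (hi.g - lo.g) * f),
    b: Math.round(lo.b + (hi.b - lo.b) * f),
  };
}
```

In the real module this interpolation would be baked into a lookup table once, so the per-pixel path stays allocation-free as the header comment describes.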
41
install.bat Normal file
View File

@@ -0,0 +1,41 @@
@echo off
title RFCP - First Time Setup
echo ============================================
echo RFCP - RF Coverage Planner - Setup
echo ============================================
echo.
REM Check if Python exists
python --version >nul 2>&1
if errorlevel 1 (
echo ERROR: Python not found!
echo.
echo Please install Python 3.10+ from:
echo https://www.python.org/downloads/
echo.
echo Make sure to check "Add Python to PATH" during installation.
echo.
pause
exit /b 1
)
echo Python found:
python --version
echo.
REM Change to script directory
cd /d "%~dp0"
REM Run installer
echo Running RFCP installer...
echo.
python install_rfcp.py
echo.
echo ============================================
echo Setup complete!
echo.
echo To start RFCP, run: RFCP.bat
echo Then open: http://localhost:8090
echo ============================================
pause

498
install_rfcp.py Normal file
View File

@@ -0,0 +1,498 @@
#!/usr/bin/env python3
"""
RFCP Installer — Detects hardware, installs dependencies, sets up GPU acceleration.
Usage:
python install_rfcp.py
The installer handles:
- Python dependency installation
- GPU detection (NVIDIA/Intel/AMD)
- GPU acceleration setup (CuPy for CUDA, PyOpenCL for Intel/AMD)
- Frontend build (if Node.js available)
- Verification of installation
"""
import subprocess
import sys
import platform
import os
import shutil
def print_header(text: str):
"""Print section header."""
print(f"\n{'=' * 60}")
print(f" {text}")
print('=' * 60)
def print_step(text: str):
"""Print step indicator."""
print(f"\n>>> {text}")
def check_python() -> bool:
"""Verify Python 3.10+ is available."""
version = sys.version_info
if (version.major, version.minor) < (3, 10):
print(f"[X] Python 3.10+ required, found {version.major}.{version.minor}")
return False
print(f"[OK] Python {version.major}.{version.minor}.{version.micro}")
return True
def check_node() -> bool:
"""Verify Node.js 18+ is available."""
try:
result = subprocess.run(
["node", "--version"],
capture_output=True,
text=True,
timeout=10
)
version = result.stdout.strip().lstrip('v')
major = int(version.split('.')[0])
if major < 18:
print(f"[!] Node.js 18+ recommended, found {version}")
return False
print(f"[OK] Node.js {version}")
return True
except FileNotFoundError:
print("[!] Node.js not found (frontend build will be skipped)")
return False
except Exception as e:
print(f"[!] Node.js check failed: {e}")
return False
def detect_gpu() -> dict:
"""Detect available GPU hardware."""
gpus = {
"nvidia": False,
"nvidia_name": "",
"nvidia_memory_mb": 0,
"intel": False,
"intel_name": "",
"amd": False,
"amd_name": ""
}
# Check NVIDIA via nvidia-smi
try:
result = subprocess.run(
["nvidia-smi", "--query-gpu=name,driver_version,memory.total",
"--format=csv,noheader"],
capture_output=True,
text=True,
timeout=10
)
if result.returncode == 0 and result.stdout.strip():
info = result.stdout.strip()
parts = info.split(",")
gpus["nvidia"] = True
gpus["nvidia_name"] = parts[0].strip()
if len(parts) >= 3:
mem_str = parts[2].strip().replace(" MiB", "").replace(" MB", "")
try:
gpus["nvidia_memory_mb"] = int(mem_str)
except ValueError:
pass
print(f"[OK] NVIDIA GPU: {gpus['nvidia_name']}")
except FileNotFoundError:
pass
except subprocess.TimeoutExpired:
print("[!] nvidia-smi timed out")
except Exception as e:
print(f"[!] NVIDIA detection error: {e}")
# Check Intel/AMD via WMI (Windows) or lspci (Linux)
if platform.system() == "Windows":
try:
result = subprocess.run(
["wmic", "path", "win32_videocontroller", "get",
"name", "/format:csv"],
capture_output=True,
text=True,
timeout=10
)
for line in result.stdout.strip().split('\n'):
line_lower = line.lower()
if 'intel' in line_lower and ('uhd' in line_lower or 'iris' in line_lower or 'hd graphics' in line_lower):
gpus["intel"] = True
# Extract name from CSV
parts = line.split(',')
for part in parts:
if 'Intel' in part:
gpus["intel_name"] = part.strip()
break
if gpus["intel_name"]:
print(f"[OK] Intel GPU: {gpus['intel_name']}")
elif 'amd' in line_lower or 'radeon' in line_lower:
gpus["amd"] = True
parts = line.split(',')
for part in parts:
if 'AMD' in part or 'Radeon' in part:
gpus["amd_name"] = part.strip()
break
if gpus["amd_name"]:
print(f"[OK] AMD GPU: {gpus['amd_name']}")
except Exception:
pass
else:
# Linux: use lspci
try:
result = subprocess.run(
["lspci"],
capture_output=True,
text=True,
timeout=10
)
for line in result.stdout.split('\n'):
if 'VGA' in line or 'Display' in line or '3D' in line:
if 'Intel' in line:
gpus["intel"] = True
gpus["intel_name"] = line.split(':')[-1].strip() if ':' in line else "Intel GPU"
print(f"[OK] Intel GPU: {gpus['intel_name']}")
elif 'AMD' in line or 'Radeon' in line:
gpus["amd"] = True
gpus["amd_name"] = line.split(':')[-1].strip() if ':' in line else "AMD GPU"
print(f"[OK] AMD GPU: {gpus['amd_name']}")
except Exception:
pass
if not gpus["nvidia"] and not gpus["intel"] and not gpus["amd"]:
print("[i] No GPU detected - will use CPU (NumPy)")
return gpus
def install_core_dependencies() -> bool:
"""Install core Python dependencies."""
print_step("Installing core dependencies...")
req_file = os.path.join(os.path.dirname(__file__), "backend", "requirements.txt")
if not os.path.exists(req_file):
print(f"[X] requirements.txt not found at {req_file}")
return False
try:
subprocess.run(
[sys.executable, "-m", "pip", "install", "-r", req_file,
"--quiet", "--no-warn-script-location"],
check=True,
timeout=600
)
print("[OK] Core dependencies installed")
return True
except subprocess.CalledProcessError as e:
print(f"[X] pip install failed: {e}")
return False
except subprocess.TimeoutExpired:
print("[X] pip install timed out (10 min)")
return False
def install_gpu_dependencies(gpus: dict) -> bool:
"""Install GPU-specific dependencies based on detected hardware."""
print_step("Setting up GPU acceleration...")
gpu_installed = False
# NVIDIA - install CuPy (includes CUDA runtime)
if gpus["nvidia"]:
print(f" Installing CuPy for {gpus['nvidia_name']}...")
try:
# Try CUDA 12 first (newer cards, RTX 30xx/40xx)
subprocess.run(
[sys.executable, "-m", "pip", "install", "cupy-cuda12x",
"--quiet", "--no-warn-script-location"],
check=True,
timeout=600
)
print(f" [OK] CuPy (CUDA 12) installed")
gpu_installed = True
except (subprocess.CalledProcessError, subprocess.TimeoutExpired):
try:
# Fallback to CUDA 11 (older cards)
print(" [!] CUDA 12 failed, trying CUDA 11...")
subprocess.run(
[sys.executable, "-m", "pip", "install", "cupy-cuda11x",
"--quiet", "--no-warn-script-location"],
check=True,
timeout=600
)
print(f" [OK] CuPy (CUDA 11) installed")
gpu_installed = True
except Exception as e:
print(f" [X] CuPy installation failed: {e}")
print(f" Manual install: pip install cupy-cuda12x")
# Intel/AMD - install PyOpenCL
if gpus["intel"] or gpus["amd"]:
gpu_name = gpus["intel_name"] or gpus["amd_name"]
print(f" Installing PyOpenCL for {gpu_name}...")
try:
subprocess.run(
[sys.executable, "-m", "pip", "install", "pyopencl",
"--quiet", "--no-warn-script-location"],
check=True,
timeout=300
)
print(f" [OK] PyOpenCL installed")
gpu_installed = True
except Exception as e:
print(f" [X] PyOpenCL installation failed: {e}")
print(f" Manual install: pip install pyopencl")
if not gpu_installed and not gpus["nvidia"] and not gpus["intel"] and not gpus["amd"]:
print(" [i] No GPU acceleration - using CPU (NumPy)")
print(" This is fine! GPU just makes large calculations faster.")
return gpu_installed
def install_frontend(has_node: bool) -> bool:
"""Install frontend dependencies and build."""
if not has_node:
print_step("Skipping frontend build (Node.js not available)")
return False
print_step("Setting up frontend...")
frontend_dir = os.path.join(os.path.dirname(__file__), "frontend")
if not os.path.exists(os.path.join(frontend_dir, "package.json")):
print("[!] Frontend directory not found")
return False
try:
print(" Installing npm packages...")
subprocess.run(
["npm", "install"],
cwd=frontend_dir,
check=True,
timeout=300,
capture_output=True
)
print(" Building frontend...")
subprocess.run(
["npm", "run", "build"],
cwd=frontend_dir,
check=True,
timeout=300,
capture_output=True
)
print("[OK] Frontend built")
return True
except subprocess.CalledProcessError as e:
print(f"[X] Frontend build failed: {e}")
return False
except subprocess.TimeoutExpired:
print("[X] Frontend build timed out")
return False
def create_launcher() -> bool:
"""Create launcher scripts."""
print_step("Creating launcher scripts...")
base_dir = os.path.dirname(os.path.abspath(__file__))
if platform.system() == "Windows":
# Create RFCP.bat
launcher_path = os.path.join(base_dir, "RFCP.bat")
with open(launcher_path, 'w') as f:
f.write('@echo off\n')
f.write('title RFCP - RF Coverage Planner\n')
f.write(f'cd /d "{base_dir}"\n')
f.write('echo Starting RFCP...\n')
f.write('echo Open http://localhost:8090 in your browser\n')
f.write('echo Press Ctrl+C to stop\n')
f.write('echo.\n')
f.write(f'cd backend\n')
f.write(f'"{sys.executable}" -m uvicorn app.main:app --host 0.0.0.0 --port 8090\n')
print(f" [OK] Created: RFCP.bat")
# Create install.bat for first-time setup
install_bat_path = os.path.join(base_dir, "install.bat")
with open(install_bat_path, 'w') as f:
f.write('@echo off\n')
f.write('title RFCP - First Time Setup\n')
f.write('echo ============================================\n')
f.write('echo RFCP - RF Coverage Planner - Setup\n')
f.write('echo ============================================\n')
f.write('echo.\n')
f.write('python --version >nul 2>&1\n')
f.write('if errorlevel 1 (\n')
f.write(' echo ERROR: Python not found!\n')
f.write(' echo Please install Python 3.10+ from python.org\n')
f.write(' pause\n')
f.write(' exit /b 1\n')
f.write(')\n')
f.write(f'cd /d "{base_dir}"\n')
f.write('python install_rfcp.py\n')
f.write('echo.\n')
f.write('echo Setup complete! Run RFCP.bat to start.\n')
f.write('pause\n')
print(f" [OK] Created: install.bat")
else:
# Linux/macOS
launcher_path = os.path.join(base_dir, "rfcp.sh")
with open(launcher_path, 'w') as f:
f.write('#!/bin/bash\n')
f.write(f'cd "{base_dir}"\n')
f.write('echo "Starting RFCP..."\n')
f.write('echo "Open http://localhost:8090 in your browser"\n')
f.write('echo "Press Ctrl+C to stop"\n')
f.write('cd backend\n')
f.write(f'"{sys.executable}" -m uvicorn app.main:app --host 0.0.0.0 --port 8090\n')
os.chmod(launcher_path, 0o755)
print(f" [OK] Created: rfcp.sh")
return True
def verify_installation() -> bool:
"""Run quick verification tests."""
print_step("Verifying installation...")
checks = []
critical_fail = False
# Check core imports
try:
import numpy as np
checks.append(f"[OK] NumPy {np.__version__}")
except ImportError:
checks.append("[X] NumPy missing")
critical_fail = True
try:
import scipy
checks.append(f"[OK] SciPy {scipy.__version__}")
except ImportError:
checks.append("[X] SciPy missing")
critical_fail = True
try:
import fastapi
checks.append(f"[OK] FastAPI {fastapi.__version__}")
except ImportError:
checks.append("[X] FastAPI missing")
critical_fail = True
try:
import uvicorn
checks.append(f"[OK] Uvicorn {uvicorn.__version__}")
except ImportError:
checks.append("[X] Uvicorn missing")
critical_fail = True
# Check GPU acceleration
try:
import cupy as cp
device_count = cp.cuda.runtime.getDeviceCount()
if device_count > 0:
props = cp.cuda.runtime.getDeviceProperties(0)
name = props["name"]
if isinstance(name, bytes):
name = name.decode()
mem_mb = props["totalGlobalMem"] // (1024 * 1024)
checks.append(f"[OK] CuPy (CUDA) -> {name} ({mem_mb} MB)")
else:
checks.append("[i] CuPy installed but no CUDA devices found")
except ImportError:
checks.append("[i] CuPy not available (NVIDIA GPU acceleration disabled)")
except Exception as e:
checks.append(f"[!] CuPy error: {e}")
try:
import pyopencl as cl
devices = []
for p in cl.get_platforms():
for d in p.get_devices():
devices.append(d.name.strip())
if devices:
checks.append(f"[OK] PyOpenCL -> {', '.join(devices[:2])}")
else:
checks.append("[i] PyOpenCL installed but no devices found")
except ImportError:
checks.append("[i] PyOpenCL not available (Intel/AMD GPU acceleration disabled)")
except Exception as e:
checks.append(f"[!] PyOpenCL error: {e}")
for check in checks:
print(f" {check}")
return not critical_fail
def main():
"""Main installer entry point."""
print_header("RFCP - RF Coverage Planner - Installer")
# Step 1: Check prerequisites
print_step("Checking prerequisites...")
if not check_python():
print("\n[X] Python 3.10+ is required. Please install from python.org")
sys.exit(1)
has_node = check_node()
# Step 2: Detect GPU
print_step("Detecting GPU hardware...")
gpus = detect_gpu()
# Step 3: Install core dependencies
if not install_core_dependencies():
print("\n[X] Core dependency installation failed")
sys.exit(1)
# Step 4: Install GPU dependencies
install_gpu_dependencies(gpus)
# Step 5: Frontend (optional)
install_frontend(has_node)
# Step 6: Create launcher
create_launcher()
# Step 7: Verify
success = verify_installation()
# Summary
print_header("Installation Summary")
if success:
print(" [OK] RFCP installed successfully!")
print()
print(" To start RFCP:")
if platform.system() == "Windows":
print(" Double-click RFCP.bat")
print(" Or run (from the backend dir): python -m uvicorn app.main:app --port 8090")
else:
print(" Run: ./rfcp.sh")
print(" Or (from the backend dir): python -m uvicorn app.main:app --port 8090")
print()
print(" Then open: http://localhost:8090")
print()
# GPU summary
if gpus["nvidia"]:
print(f" GPU: {gpus['nvidia_name']} (CUDA)")
elif gpus["intel"]:
print(f" GPU: {gpus['intel_name']} (OpenCL)")
elif gpus["amd"]:
print(f" GPU: {gpus['amd_name']} (OpenCL)")
else:
print(" Mode: CPU only (NumPy)")
else:
print(" [!] Installation completed with errors")
print(" Some features may not work correctly")
print()
print('=' * 60)
if __name__ == "__main__":
main()

installer/build-gpu.bat Normal file

@@ -0,0 +1,70 @@
@echo off
echo ========================================
echo RFCP GPU Build — ONEDIR mode
echo CuPy-cuda13x + CUDA Toolkit 13.x
echo ========================================
echo.
REM ── Check CuPy ──
echo [1/5] Checking CuPy installation...
python -c "import cupy; print(f' CuPy {cupy.__version__}')" 2>nul
if errorlevel 1 (
echo ERROR: CuPy not installed.
echo Run: pip install cupy-cuda13x
exit /b 1
)
REM ── Check CUDA compute ──
echo [2/5] Testing GPU compute...
python -c "import cupy; a = cupy.array([1,2,3]); assert a.sum() == 6; print(' GPU compute: OK')" 2>nul
if errorlevel 1 (
echo ERROR: CuPy installed but GPU compute failed.
echo Check: CUDA Toolkit installed? nvidia-smi works?
exit /b 1
)
REM ── Check CUDA_PATH ──
echo [3/5] Checking CUDA Toolkit...
if defined CUDA_PATH (
echo CUDA_PATH: %CUDA_PATH%
) else (
echo WARNING: CUDA_PATH not set
)
REM ── Check nvidia pip DLLs ──
echo [4/5] Checking nvidia pip packages...
python -c "import nvidia; import os; base=os.path.dirname(nvidia.__file__); dlls=[f for d in os.listdir(base) if os.path.isdir(os.path.join(base,d,'bin')) for f in os.listdir(os.path.join(base,d,'bin')) if f.endswith('.dll')]; print(f' nvidia pip DLLs: {len(dlls)}')" 2>nul
if errorlevel 1 (
echo No nvidia pip packages (will use CUDA Toolkit)
)
REM ── Build ──
echo.
echo [5/5] Building rfcp-server (ONEDIR mode)...
echo This may take 3-5 minutes...
echo.
cd /d "%~dp0\..\backend"
pyinstaller "..\installer\rfcp-server-gpu.spec" --clean --noconfirm
echo.
echo ========================================
if exist "dist\rfcp-server\rfcp-server.exe" (
echo BUILD COMPLETE! (ONEDIR mode)
echo.
echo Output: backend\dist\rfcp-server\
dir /b dist\rfcp-server\*.exe dist\rfcp-server\*.dll 2>nul | find /c /v ""
echo.
echo Test commands:
echo cd dist\rfcp-server
echo rfcp-server.exe
echo curl http://localhost:8090/api/health
echo curl http://localhost:8090/api/gpu/status
echo ========================================
) else (
echo BUILD FAILED — check errors above
echo ========================================
exit /b 1
)
pause

installer/build-gpu.sh Normal file

@@ -0,0 +1,84 @@
#!/bin/bash
set -e
echo "========================================"
echo " RFCP GPU Build — ONEDIR mode"
echo " CuPy-cuda13x + CUDA Toolkit 13.x"
echo "========================================"
echo ""
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
BACKEND_DIR="$SCRIPT_DIR/../backend"
# Check backend exists
if [ ! -f "$BACKEND_DIR/run_server.py" ]; then
echo "ERROR: Backend not found at $BACKEND_DIR"
exit 1
fi
# Check Python
echo "[1/5] Checking Python..."
python3 --version || { echo "ERROR: Python3 not found"; exit 1; }
# Check CuPy
echo ""
echo "[2/5] Checking CuPy installation..."
if ! python3 -c "import cupy; print(f' CuPy {cupy.__version__}')" 2>/dev/null; then
echo "ERROR: CuPy not installed"
echo ""
echo "Install CuPy:"
echo " pip3 install cupy-cuda13x"
echo " # or for WSL2:"
echo " pip3 install cupy-cuda13x --break-system-packages"
exit 1
fi
# Check GPU compute
echo ""
echo "[3/5] Testing GPU compute..."
if ! python3 -c "import cupy; a = cupy.array([1,2,3]); assert a.sum() == 6; print(' GPU compute: OK')" 2>/dev/null; then
echo "WARNING: GPU compute test failed (may still work)"
fi
# Check CUDA
echo ""
echo "[4/5] Checking CUDA..."
if [ -n "$CUDA_PATH" ]; then
echo " CUDA_PATH: $CUDA_PATH"
else
echo " CUDA_PATH not set (relying on nvidia pip packages)"
fi
# Check nvidia pip packages
echo ""
echo "[5/5] Checking nvidia pip packages..."
python3 -c "import nvidia; print(' nvidia packages found')" 2>/dev/null || echo " No nvidia pip packages"
# Build
echo ""
echo "Building rfcp-server (ONEDIR mode)..."
echo ""
cd "$BACKEND_DIR"
pyinstaller "$SCRIPT_DIR/rfcp-server-gpu.spec" --clean --noconfirm
echo ""
echo "========================================"
if [ -f "dist/rfcp-server/rfcp-server" ] || [ -f "dist/rfcp-server/rfcp-server.exe" ]; then
echo " BUILD COMPLETE! (ONEDIR mode)"
echo ""
echo " Output: backend/dist/rfcp-server/"
ls -lh dist/rfcp-server/ | head -20
echo ""
echo " Test:"
echo " cd dist/rfcp-server"
echo " ./rfcp-server"
echo " curl http://localhost:8090/api/health"
echo "========================================"
else
echo " BUILD FAILED — check errors above"
echo "========================================"
exit 1
fi


@@ -3,6 +3,7 @@ set -e
echo "========================================="
echo " RFCP Desktop Build (Windows)"
echo " GPU-enabled ONEDIR build"
echo "========================================="
cd "$(dirname "$0")/.."
@@ -14,15 +15,30 @@ npm ci
npm run build
cd ..
# 2. Build backend with PyInstaller
echo "[2/4] Building backend..."
# 2. Build backend with PyInstaller (GPU ONEDIR mode)
echo "[2/4] Building backend (GPU)..."
cd backend
# Check CuPy is available
if ! python -c "import cupy" 2>/dev/null; then
echo "WARNING: CuPy not installed - GPU acceleration will not be available"
echo " Install with: pip install cupy-cuda13x"
fi
python -m pip install -r requirements.txt
python -m pip install pyinstaller
cd ../installer
python -m PyInstaller rfcp-server.spec --clean --noconfirm
# Build using GPU spec (ONEDIR output)
python -m PyInstaller ../installer/rfcp-server-gpu.spec --clean --noconfirm
# Copy ONEDIR folder to desktop staging area
# Result: desktop/backend-dist/win/rfcp-server/rfcp-server.exe + _internal/
mkdir -p ../desktop/backend-dist/win
rm -rf ../desktop/backend-dist/win/rfcp-server # Clean old build
cp -r dist/rfcp-server ../desktop/backend-dist/win/rfcp-server
echo " Backend copied to: desktop/backend-dist/win/rfcp-server/"
ls -la ../desktop/backend-dist/win/rfcp-server/*.exe 2>/dev/null || true
cd ..
# 3. Build Electron app

File diff suppressed because one or more lines are too long

Some files were not shown because too many files have changed in this diff.