Patent Pending — AI-Powered Aerial Search Detection Platform

SAR Vision

Aerial Human Detection
for Search & Rescue Teams

SAR Vision processes drone video in real time and flags potential human contacts for operator review. Designed for field conditions where connectivity is unreliable, hardware is constrained, and operational continuity is critical. Built to support the judgment of trained SAR personnel — not to replace it.

Developed by Brian Skyberg, active SAR team member with field deployment experience.
Current Stability Release: v0.42 — fault-tolerant signal handling, persistent data logging, and verified operation during extended field sessions.
Real-time 1080p detection on standard field laptop hardware — no proprietary equipment required
Fully offline operation — no network uplink required in the field or during active search
Recall-optimized detection — configured to surface all plausible human contacts for human review
Drone-agnostic input — HDMI capture with automatic signal recovery, RTMP stream, or recorded video file from any platform
SAR Vision v0.42 — detection interface showing real-time person detection on aerial footage
SAR Vision in Operation
Live detection on aerial drone footage — stability release v0.42
Model: Specialized SAR Detection Models
Processing: Local / GPU-Accelerated
Connectivity: Offline — No Network
SAR_VISION ◆ DETECTION RECORDING
STABILITY RELEASE v0.42 — FIELD DEPLOYMENT
SAR Vision on DJI M30T Feed
Real-time detection workflow on DJI M30T source footage
Source: DJI M30T Drone Feed
Inference: Local / GPU-Accelerated
Context: Field Evaluation Recording
SAR_VISION ◆ DJI_M30T_FEED
FIELD EVALUATION — REAL-TIME OPERATION

General-Purpose Detection
Is Insufficient for
Aerial SAR Operations

Standard detection models are trained on ground-level photography from consumer and autonomous vehicle datasets. Aerial SAR presents a different visual domain — and the cost of a missed detection is not a degraded performance metric. It is a person not found.

📐

Perspective Mismatch

General models are trained on upright figures at ground level. Aerial footage instead shows overhead silhouettes, foreshortened limbs, and partial figures — a visual domain these models were not primarily trained on.

🌿

Terrain Camouflage

Subjects in distress are frequently stationary, wearing earth-tone clothing, and partially obscured by terrain or vegetation. Without specific aerial SAR training data, detection systems do not generalize to these conditions reliably.

📶

No Field Connectivity

Cloud-based inference depends on uplink bandwidth that does not exist across most active search areas. Any system requiring network connectivity is unsuitable for remote field deployment.

Processing Throughput

CPU-only inference typically cannot maintain stable real-time 1080p processing on standard field laptops. Processing gaps mean a subject may be in frame during an interval that was never evaluated.

⚠ The Core Asymmetry
"A false positive costs an investigation team minutes. A false negative may cost the subject their life."

Conventional detection tuning favors precision, suppressing uncertain detections to keep false positives low. SAR operations invert this priority: verification of a false positive is a recoverable outcome; suppression of a valid detection is not. SAR Vision therefore intentionally surfaces low-confidence detections for operator review rather than suppressing them.


SAR Vision is configured to prioritize recall over precision: all plausible contacts are surfaced for operator review, including low-confidence candidates. This is the correct engineering decision for this operational context.

Why Recall Is the
Primary Design Objective

Illustrative Recall Comparison (Conceptual)

Recall (Tuned for SAR) — High · Primary objective
Recall (General-Purpose Models) — Reduced in aerial contexts · Untuned baseline
Precision — Secondary objective
False Positive

Team investigates flagged area. No subject found. Cost: bounded time for investigation.

False Negative

Subject is in frame. System does not flag. Team continues past. Cost: potentially mission-critical.

Illustrative comparison based on internal evaluation datasets. Not independently validated.

The Model Is Calibrated
to Flag, Not Filter

Standard detection benchmarks weight precision and recall equally, or optimize toward precision because false positives carry a higher perceived cost. In SAR operations, that optimization is incorrect. SAR Vision is configured to surface all contacts the model considers plausible, including low-confidence candidates.

Confidence thresholds are configurable, but defaults are intentionally conservative. Low-confidence detections are presented to the operator for assessment rather than suppressed before review. The system aims to improve the likelihood that plausible contacts are not filtered before a human evaluates them.
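The two-tier behavior described above can be sketched in a few lines. This is an illustrative example, not SAR Vision's actual code: the `triage` function, the `Detection` class, and the threshold values are all hypothetical, chosen to show how low-confidence contacts are routed to operator review instead of being dropped.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float

def triage(detections, review_floor=0.20, confirm_threshold=0.60):
    """Split detections into 'flagged' (above the confirm threshold) and
    'review' (low-confidence, surfaced for the operator rather than
    suppressed). Only candidates below the review floor — effectively
    noise — are dropped. Thresholds here are illustrative defaults."""
    flagged = [d for d in detections if d.confidence >= confirm_threshold]
    review = [d for d in detections
              if review_floor <= d.confidence < confirm_threshold]
    return flagged, review
```

The key design point is that lowering `confirm_threshold` is not the mechanism for recall; the review tier is. High-confidence contacts stay visually distinct while uncertain ones remain visible for a human to judge.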

This approach is reflected throughout the detection pipeline — from model training to post-processing — all oriented toward SAR context rather than general benchmark performance.

Design Principle

SAR Vision surfaces contacts. It does not adjudicate them. Every flagged detection requires human verification before any operational action. The system provides decision support; the operator retains decision authority.

What SAR Vision Does

An aerial search-assistance platform that detects, tracks, and logs potential human contacts for operator review in real time. Deployable on existing field hardware, with fault-tolerant operation under unstable conditions. No additional infrastructure required.

🎯

SAR-Specific Model

Purpose-built detection models designed for aerial SAR search conditions — not repurposed from surveillance or autonomous vehicle datasets. Trained to identify subjects from overhead perspectives in challenging terrain conditions.

🚁

Drone-Agnostic Input

Accepts HDMI capture from any drone monitor output, RTMP stream endpoints from mission planning software, or pre-recorded video files for post-flight review. No proprietary SDK required.

💻

Local GPU Processing

Runs on-device using GPU-accelerated hardware. A mid-range field laptop with a modern NVIDIA GPU is sufficient for real-time 1080p operation. Designed for stability during extended field sessions. All processing is local — no data leaves the device.

🔍

Recall-First Configuration

Confidence thresholds and post-processing parameters are configured to maximize detection coverage over precision. Low-confidence contacts are presented for operator review rather than suppressed at threshold.

📍

Operator-in-the-Loop

All flagged contacts require human confirmation. SAR Vision does not take autonomous action or determine subject status. It presents contacts for assessment; the operator decides what action, if any, to take.

🔗

Workflow Compatible

Does not replace SARTopo, CalTopo, or existing incident command tools. Adds a systematic aerial detection layer to drone operations teams are already conducting, without modifying established procedures.

System Design

Input Layer

Video Ingestion

Accepts HDMI capture (any USB capture card), RTMP endpoints from FPV or mission software, or local video files. Includes automatic signal recovery with fault-tolerant reconnection handling for unstable HDMI connections, and is designed to maintain throughput continuity under typical field conditions.
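A reconnection loop of the kind described can be sketched generically. This is a minimal illustration, not SAR Vision's implementation: `frames_with_recovery`, `open_source`, and `read_frame` are hypothetical names, and the backoff policy is an assumption. The source is abstracted behind two callables so the same pattern applies to a capture card, an RTMP stream, or a file.

```python
import time
from typing import Callable, Iterator, Optional

def frames_with_recovery(
    open_source: Callable[[], object],
    read_frame: Callable[[object], Optional[bytes]],
    max_retries: int = 5,
    backoff_s: float = 0.5,
) -> Iterator[bytes]:
    """Yield frames from a flaky source, reopening it on read failure.

    A successful read resets the retry budget, so the pipeline survives
    repeated brief dropouts; only a sustained failure ends the stream.
    """
    retries = 0
    src = open_source()
    while retries <= max_retries:
        frame = read_frame(src)
        if frame is not None:
            retries = 0                    # healthy read: reset budget
            yield frame
        else:
            retries += 1
            time.sleep(backoff_s * retries)  # linear backoff, then reopen
            src = open_source()
```

Resetting the retry counter on every good frame is the property that matters in the field: an HDMI link that glitches once a minute never exhausts the budget, while a truly dead feed fails fast and visibly.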

Pre-Processing

Frame Preparation

Frames conditioned for detection model input. Includes environmental compensation for challenging aerial lighting conditions common in overcast and low-contrast search environments.
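One common form of environmental compensation for low-contrast scenes is a percentile-based contrast stretch. The sketch below is an assumption about the kind of conditioning meant here, not the actual pre-processing step; `stretch_contrast` is a hypothetical helper operating on a flat list of 8-bit intensities for clarity.

```python
def stretch_contrast(pixels, lo_pct=2, hi_pct=98):
    """Percentile-based contrast stretch for low-contrast frames.

    Maps the `lo_pct` percentile to 0 and the `hi_pct` percentile to
    255, clipping outliers, so overcast aerial footage uses the full
    intensity range before detection. `pixels` is a flat list of
    0-255 intensities; returns a remapped list of the same length.
    """
    ordered = sorted(pixels)
    n = len(ordered)
    lo = ordered[(n - 1) * lo_pct // 100]
    hi = ordered[(n - 1) * hi_pct // 100]
    if hi <= lo:
        return list(pixels)        # flat frame: nothing to stretch
    scale = 255.0 / (hi - lo)
    return [min(255, max(0, round((p - lo) * scale))) for p in pixels]
```

Using inner percentiles rather than the absolute min/max keeps a single specular glint or deep shadow from dominating the mapping, which matters in dappled-canopy and overcast scenes.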

Inference Engine

SAR Detection Engine

Purpose-built detection models designed for aerial SAR conditions. Trained on imagery spanning overhead perspectives, partial occlusion, varied terrain, and low-contrast subject presentations. GPU-accelerated with resource management designed for extended operational sessions. Not general-purpose models.

Post-Processing

Detection Output & Logging

Detection post-processing tuned for SAR operational context. Contacts above configurable thresholds annotated on live feed. All events logged with crash-tolerant persistence, timestamp, and frame reference for post-mission review. Video evidence captured with recovery handling for common signal interruptions and storage write errors.
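Crash-tolerant event logging of this kind is typically an append-only record with an explicit flush to disk per event. The sketch below illustrates the pattern under that assumption; `log_detection` and its fields are hypothetical, not SAR Vision's actual schema.

```python
import json
import os
import time

def log_detection(path, frame_index, confidence, label="person"):
    """Append one detection event as a JSON line and force it to disk.

    Opening in append mode and calling fsync per event means a crash
    or power loss cannot lose contacts that were already logged —
    at most the event in flight is lost.
    """
    event = {
        "ts": time.time(),
        "frame": frame_index,
        "label": label,
        "confidence": confidence,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")
        f.flush()
        os.fsync(f.fileno())   # push past the OS page cache
    return event
```

One JSON object per line also makes post-mission review cheap: a partially written final line (the one failure mode this design permits) is detectable and discardable without corrupting the rest of the log.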

Future

SARCommand Integration

Designed as standalone module and as a detection input within the SARCommand incident management concept, currently under development.

Minimum System Requirements
GPU
REQUIRED — NVIDIA CUDA-capable GPU
Mid-range or above (e.g. laptop-class discrete GPU)
Performance varies by resolution and model configuration.
OS
Windows 10/11 · Ubuntu 20.04+
RAM
16 GB min · 32 GB recommended
Network
OPTIONAL — Fully offline capable
Input
HDMI capture · RTMP · File
Storage
Model data + operational logs
Detection Model
Architecture
Purpose-built for aerial SAR
Training
Aerial SAR imagery — specialized training pipeline
Target Class
Human-subject detection optimized for aerial SAR conditions
Threshold
Configurable — recall-optimized default
Throughput
Real-time operation demonstrated on mid-range NVIDIA laptop hardware at 1080p under field conditions

Designed for
Field Conditions

SAR operations don't occur in controlled environments. SAR Vision is designed to remain operational under the constraints that actually exist during active search — unstable power, intermittent signal, storage pressure, and sustained compute load.

📡

No Network Dependency

All inference runs locally. Model weights are bundled with the application. No license server, no cloud API, no outbound data. Built for offline, low-connectivity field environments including dead zones, canyon terrain, and remote wilderness.

🖥

Standard Hardware

Designed to run on equipment the team already carries. No specialized hardware beyond a supported NVIDIA GPU. Packaging targets straightforward installation on existing field laptops.

🚁

Platform-Independent

Compatible with any drone providing a video output — DJI, Autel, Skydio, or other platforms via HDMI capture or RTMP stream. No manufacturer-specific integration required.

📁

Post-Flight Review

Recorded flight video can be processed after landing. Useful when operational conditions require full operator attention during flight, or for documentation and after-action review.

Input Source Compatibility
HDMI
USB capture card — any drone monitor output
✓ Operational
RTMP
Stream endpoint — Litchi, DJI Go, compatible software
✓ Operational
File
MP4, MOV, AVI — post-flight review
✓ Operational
Thermal
FLIR / DJI Zenmuse XT2 — RGB fusion
Planned
Multi-UAV
Parallel stream processing
Planned

SAR Vision is an additional detection layer for drone-equipped operations. It does not replace SARTopo, incident command structure, or field coordinator judgment. Teams continue operating with existing tools; SAR Vision provides systematic aerial coverage that cannot be maintained manually at scale.

Operational Testing Status

SAR Vision has been stability-tested and evaluated under sustained field conditions on portable hardware. The following reflects the current testing state. No performance claims are made beyond what has been directly observed and documented.

Demonstrated to regional SAR personnel. The system was reviewed and observed by active search-and-rescue team members under structured conditions. Operational feedback from those sessions is incorporated into the development cycle.

Tested against live drone HDMI feeds. SAR Vision has been tested against real-time drone video via HDMI capture, demonstrating detection pipeline stability and throughput under field-representative conditions.

Tested under sustained field deployment conditions. System has been run continuously on portable field hardware, confirming offline functionality, fault-tolerant signal handling, GPU stability under sustained load, and detection output during active UAV flight.

Specialized SAR detection models applied. Detection models are purpose-built on aerial SAR imagery, showing improved detection of overhead human figures in internal evaluation compared to general-purpose baselines.

Structured agency evaluation being explored. Formal evaluation with drone-equipped SAR units is being pursued to assess field suitability. Interested units would contribute structured operational feedback.

⚠   Operational Limitations

Human verification is required for all detections. No output from SAR Vision should be acted upon without evaluation by a qualified operator. The system surfaces candidates; personnel assess them.

Performance varies by terrain, lighting, and subject visibility. Detection reliability is affected by vegetation density, terrain complexity, ambient light conditions, and subject contrast against background. No system performs uniformly across all environments.

False positives are expected and by design. The recall-optimized configuration intentionally accepts a higher false positive rate. On every deployment, teams should plan for flagged contacts that do not correspond to subjects.

NVIDIA GPU is a deployment prerequisite. CPU-only hardware is not recommended for real-time deployment. This requirement must be confirmed before evaluation planning begins.

Continued operational refinement. SAR Vision has been stability-tested and is under continued improvement. Participating units should expect periodic updates and are asked to provide structured operational feedback as part of participation.

The system has not yet been deployed on a confirmed live subject recovery mission.

Not certified for operational decision authority.

Field Detection Examples

Representative detection examples from internal evaluation footage processed through SAR Vision. These screenshots illustrate the types of subject candidates surfaced across varied terrain, lighting, and distance conditions.

All detections shown reflect recall-prioritized tuning designed to minimize missed subjects while preserving operator review control.

Woodland trail detection example
Woodland Trail Environment
Confidence: 0.80

Subject candidate detected on a partially shaded forest trail with mixed canopy cover.

• Dappled lighting under canopy
• Vegetation edge contrast
• Natural terrain pathways
Open terrain long-range detection example
Open Terrain (Long Range)
Confidences: 0.72 and low-confidence candidate

Two subject candidates detected across wide-field terrain at distance.

• Long-range detection at reduced pixel density
• Recall-first tuning for operator review
• High-contrast open landscapes
Dense vegetation mixed terrain detection example
Dense Vegetation / Mixed Terrain
Confidences: 0.86, 0.67, low-confidence candidate

Multiple subject candidates detected in agricultural and river-edge terrain.

• Highly uniform green vegetation
• Mixed terrain near water features
• Low-confidence candidates surfaced for review

Examples reflect internal evaluation footage. Performance is environment-dependent and not independently validated.

Who This Tool Is For

SAR Vision was built for a specific operational context. Understanding where it fits — and where it does not — is part of evaluating whether it is appropriate for your unit.

✓   Appropriate For
Active SAR teams with drone programs

Units conducting UAS-assisted searches who need systematic coverage of aerial video beyond what manual operator review can sustain.

Units in low-connectivity environments

Teams whose search areas lack reliable cellular or satellite uplink — remote wilderness, canyon terrain, or backcountry without communications infrastructure.

Teams prioritizing recall over automation

Personnel who want to increase detection coverage without delegating subject identification to the system — operators remain in the loop on every flagged contact.

County SAR, sheriff aerial units, Civil Air Patrol

Government and accredited volunteer units with structured operational procedures who can integrate detection assistance into existing field workflows.

✕   Not Designed For
Fully autonomous detection systems

SAR Vision requires active operator oversight. It is not architected for unattended or autonomous drone patrol without human monitoring.

Consumer or hobby drone users

Designed for organized SAR teams with formal operational structures and trained personnel. Not intended for personal or recreational aerial use.

Cloud-dependent deployment models

SAR Vision runs locally and does not transmit data externally. Teams requiring centralized remote processing infrastructure should evaluate other solutions.

Autonomous subject location reporting

The system does not provide autonomous GPS coordinates, subject tagging, or automatic dispatch triggers. All detections require human review and interpretation before any action.

SAR Vision Is Decision-Support Software

All detections require human verification. No detection output from SAR Vision should be acted upon without evaluation by a qualified operator. The system surfaces candidates; trained personnel assess them.

SAR Vision does not replace operator judgment. Aerial observation, search pattern planning, and subject determination remain under the authority of qualified SAR personnel. The system is an analytical aid, not a decision authority.

It is not an autonomous search system. SAR Vision does not direct aircraft, prioritize search sectors, or make resource allocation decisions. These functions remain entirely with incident command and field personnel.

Detection rates are environment-dependent. No threshold guarantees that all subjects present in video will be flagged. System performance should be understood as probabilistic assistance, not exhaustive coverage.

Planned Development

SAR Vision is functionally complete for its current scope and under continued improvement. The current stability release addresses human detection from RGB aerial video. Subsequent phases are planned based on operational priority and feedback from evaluation participants.

Active — Stability Release v0.42

Phase 1
Human Detection

  • Aerial person detection — SAR-optimized models
  • HDMI, RTMP, and file input sources
  • Local GPU-accelerated processing
  • Configurable confidence thresholds
  • Detection logging and timestamping
  • Field laptop deployment packaging
  • Recall-optimized configuration
Phase 2 — In Planning

Phase 2
Thermal & Animal Detection

  • Thermal integration under evaluation
  • RGB + thermal multi-modal detection
  • Animal detection class for wildlife SAR
  • Heat signature flagging
  • Night operational capability via thermal
  • Expanded training data — thermal domain
Phase 3 — Concept

Phase 3
Multi-Drone & SARCommand

  • Parallel stream processing — multi-UAV
  • Shared detection view across operator stations
  • SARCommand integration — detections to sectors
  • KML / GeoJSON export for SARTopo
  • Mission replay and detection audit trail
  • Team-facing detection notification interface

Roadmap priorities are shaped directly by feedback from evaluation units. If your operational context involves specific terrain types, detection challenges, or equipment constraints not addressed here, that input is actively sought and directly informs development sequencing.

Submit Operational Inquiry

Structured Field Evaluation

SAR Vision is available for structured field evaluation with qualified SAR teams. Participation is coordinated while the system continues refinement and training dataset expansion.

This is a collaborative development phase. Participating units are expected to contribute observations on detection performance, false positive rates, field usability, and integration with existing procedures. That feedback directly determines development priorities.

Participation is coordinated directly with evaluation units, including structured check-ins, deployment guidance, and feedback review after operational use.

Evaluation Criteria
  • Active SAR team with an established UAS program and regular drone deployment on operations
  • Operations in environments with limited or no cellular or satellite uplink
  • Access to an NVIDIA GPU laptop or workstation for evaluation
  • Willingness to provide structured feedback following evaluation flights
  • Unit operates under formal SAR jurisdiction or accreditation (county, state, federal, or recognized volunteer)

Operational Inquiry

Field evaluation participation — active SAR units only

Submit team details for direct evaluation coordination follow-up.

Operational Boundaries