SAR Vision processes drone video in real time and flags potential human contacts for operator review. Designed for field conditions where connectivity is unreliable, hardware is constrained, and operational continuity is critical. Built to support the judgment of trained SAR personnel — not to replace it.
Standard detection models are trained on ground-level photography from consumer and autonomous vehicle datasets. Aerial SAR presents a different visual domain — and the cost of a missed detection is not a degraded performance metric. It is a person not found.
General models are trained on upright figures at ground level. Aerial footage shows overhead silhouettes, foreshortened limbs, and partial figures: a visual domain largely absent from the datasets those models were trained on.
Subjects in distress are frequently stationary, wearing earth-tone clothing, and partially obscured by terrain or vegetation. Without specific aerial SAR training data, detection systems do not generalize to these conditions reliably.
Cloud-based inference depends on uplink bandwidth that does not exist across most active search areas. Any system requiring network connectivity is unsuitable for remote field deployment.
CPU-only inference typically cannot maintain stable real-time 1080p processing on standard field laptops. Processing gaps mean a subject may be in frame during an interval that was never evaluated.
"A false positive costs an investigation team minutes. A false negative may cost the subject their life."
Standard detection systems suppress low-confidence results because the false positive is treated as the costlier error. SAR operations invert that priority: verification of a false positive is a recoverable outcome; suppression of a valid detection is not. SAR Vision is therefore tuned to prioritize recall, intentionally surfacing low-confidence detections for operator review rather than suppressing them.
SAR Vision is configured to prioritize recall over precision: all plausible contacts are surfaced for operator review, including low-confidence candidates. This is the correct engineering decision for this operational context.
Illustrative Recall Comparison (Conceptual)
False positive: team investigates flagged area, no subject found. Cost: bounded time for investigation.
False negative: subject is in frame, system does not flag, team continues past. Cost: potentially mission-critical.
Illustrative comparison based on internal evaluation datasets. Not independently validated.
Standard detection benchmarks weight precision and recall equally, or optimize toward precision because false positives carry a higher perceived cost. In SAR operations, that optimization is backwards: the costlier error is the missed subject, not the false alarm.
Confidence thresholds are configurable, but defaults are intentionally conservative. Low-confidence detections are presented to the operator for assessment rather than suppressed before review. The system aims to improve the likelihood that plausible contacts are not filtered before a human evaluates them.
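The recall-first triage described above can be sketched in a few lines. This is an illustrative example only, not SAR Vision's actual API: the names `Detection`, `triage`, and the threshold values are hypothetical, chosen to show the shape of the logic (a low review threshold that surfaces contacts, plus a higher threshold that only changes how prominently they are presented).

```python
# Illustrative sketch of recall-prioritized triage. All names and
# thresholds here are hypothetical, not SAR Vision internals.
from dataclasses import dataclass


@dataclass
class Detection:
    confidence: float
    bbox: tuple  # (x, y, w, h) in frame pixels


REVIEW_THRESHOLD = 0.15   # deliberately low: surface, don't suppress
HIGHLIGHT_THRESHOLD = 0.60  # high-confidence contacts drawn more prominently


def triage(detections):
    """Split contacts for operator review instead of discarding low scores.

    Everything above the review threshold reaches the operator; the
    highlight threshold only affects presentation, never suppression.
    """
    surfaced = [d for d in detections if d.confidence >= REVIEW_THRESHOLD]
    highlighted = [d for d in surfaced if d.confidence >= HIGHLIGHT_THRESHOLD]
    return surfaced, highlighted
```

The key design point is that the second threshold affects only rendering emphasis; no contact above the review floor is ever filtered before a human sees it.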
This approach is reflected throughout the detection pipeline — from model training to post-processing — all oriented toward SAR context rather than general benchmark performance.
SAR Vision surfaces contacts. It does not adjudicate them. Every flagged detection requires human verification before any operational action. The system provides decision support; the operator retains decision authority.
An aerial search-assistance platform that detects, tracks, and logs potential human contacts for operator review in real time. Deployable on existing field hardware with fault-tolerant operation under unstable conditions. No additional infrastructure required.
Purpose-built detection models designed for aerial SAR search conditions — not repurposed from surveillance or autonomous vehicle datasets. Trained to identify subjects from overhead perspectives in challenging terrain conditions.
Accepts HDMI capture from any drone monitor output, RTMP stream endpoints from mission planning software, or pre-recorded video files for post-flight review. No proprietary SDK required.
Runs on-device using GPU-accelerated hardware. A mid-range field laptop with a modern NVIDIA GPU is sufficient for real-time 1080p operation. Designed for stability during extended field sessions. All processing is local — no data leaves the device.
Confidence thresholds and post-processing parameters are configured to maximize detection coverage over precision. Low-confidence contacts are presented for operator review rather than suppressed at threshold.
All flagged contacts require human confirmation. SAR Vision does not take autonomous action or determine subject status. It presents contacts for assessment; the operator decides what action, if any, to take.
Does not replace SARTopo, CalTopo, or existing incident command tools. Adds a systematic aerial detection layer to drone operations teams are already conducting, without modifying established procedures.
Accepts HDMI capture (any USB capture card), RTMP endpoints from FPV or mission software, or local video files. Includes automatic signal recovery and fault-tolerant reconnection handling for unstable HDMI links, designed to maintain throughput continuity through common field interruptions.
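The reconnection behavior described above can be sketched as a generator that reopens its source on read failure instead of exiting. This is a minimal, dependency-free illustration; the function name `read_with_recovery` and the `open_source` callable (any object whose `.read()` returns a frame or `None` on dropout) are assumptions for the sketch, not SAR Vision's implementation.

```python
# Hypothetical sketch of fault-tolerant frame ingestion.
# `open_source` is any zero-argument callable returning an object
# whose .read() yields a frame, or None on signal dropout.
import time


def read_with_recovery(open_source, max_retries=5, backoff_s=0.0):
    """Yield frames, reopening the source when a read fails.

    A healthy read resets the retry budget, so the loop only gives up
    after `max_retries` consecutive failed reopen attempts.
    """
    retries = 0
    src = open_source()
    while retries <= max_retries:
        frame = src.read()
        if frame is None:            # dropout: reopen instead of exiting
            retries += 1
            time.sleep(backoff_s)
            src = open_source()
            continue
        retries = 0                  # healthy read resets the budget
        yield frame
```

Resetting the retry counter on every successful read is the design choice that matters for long sessions: brief HDMI glitches hours apart never accumulate toward a shutdown.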
Frames conditioned for detection model input. Includes environmental compensation for challenging aerial lighting conditions common in overcast and low-contrast search environments.
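One common form of compensation for low-contrast overcast frames is a percentile contrast stretch. The sketch below is a simplified, dependency-free illustration of that general technique, not SAR Vision's actual conditioning step; `stretch_contrast` operates on a flat list of 0-255 luminance values, where a real pipeline would work on image arrays.

```python
# Illustrative percentile contrast stretch for low-contrast frames.
# A simplified stand-in for real frame conditioning, which would
# operate on full image arrays rather than a flat luminance list.
def stretch_contrast(gray, low_pct=2, high_pct=98):
    """Remap luminance so the 2nd-98th percentile spans the full range.

    Clipping to percentiles (rather than min/max) keeps a few hot or
    dead pixels from flattening the rest of the frame.
    """
    vals = sorted(gray)
    lo = vals[int(len(vals) * low_pct / 100)]
    hi = vals[min(len(vals) - 1, int(len(vals) * high_pct / 100))]
    span = max(hi - lo, 1)
    return [min(255, max(0, round((v - lo) * 255 / span))) for v in gray]
```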
Purpose-built detection models designed for aerial SAR conditions. Trained on imagery spanning overhead perspectives, partial occlusion, varied terrain, and low-contrast subject presentations. GPU-accelerated with resource management designed for extended operational sessions. Not general-purpose models.
Detection post-processing tuned for SAR operational context. Contacts above configurable thresholds annotated on live feed. All events logged with crash-tolerant persistence, timestamp, and frame reference for post-mission review. Video evidence captured with recovery handling for common signal interruptions and storage write errors.
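Crash-tolerant event logging of the kind described above is commonly built on an append-only log with an explicit flush and fsync per entry. The sketch below illustrates that pattern under stated assumptions: the `log_contact` function, field names, and JSON Lines format are hypothetical, not SAR Vision's actual log schema.

```python
# Hypothetical sketch of crash-tolerant contact logging (JSON Lines).
# Field names and format are illustrative, not SAR Vision's schema.
import json
import os
import time


def log_contact(path, frame_index, confidence, bbox):
    """Append one detection event and force it to disk.

    flush() + fsync() per entry means a crash or power loss can cost
    at most the in-flight line; every earlier entry survives.
    """
    event = {
        "ts": time.time(),          # wall-clock timestamp
        "frame": frame_index,       # frame reference for post-mission review
        "confidence": round(confidence, 3),
        "bbox": bbox,               # (x, y, w, h) in frame pixels
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")
        f.flush()
        os.fsync(f.fileno())
```

Append-only JSON Lines has the useful property that a truncated final line is trivially detectable and discardable during review, which is why it suits crash-tolerant field logging.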
Designed both as a standalone module and as a detection input within the SARCommand incident management concept, currently under development.
SAR operations don't occur in controlled environments. SAR Vision is designed to remain operational under the constraints that actually exist during active search — unstable power, intermittent signal, storage pressure, and sustained compute load.
All inference runs locally. Model weights are bundled with the application. No license server, no cloud API, no outbound data. Built for offline, low-connectivity field environments including dead zones, canyon terrain, and remote wilderness.
Designed to run on equipment the team already carries. No specialized hardware beyond a supported NVIDIA GPU. Packaging targets straightforward installation on existing field laptops.
Compatible with any drone providing a video output — DJI, Autel, Skydio, or other platforms via HDMI capture or RTMP stream. No manufacturer-specific integration required.
Recorded flight video can be processed after landing. Useful when operational conditions require full operator attention during flight, or for documentation and after-action review.
SAR Vision is an additional detection layer for drone-equipped operations. It does not replace SARTopo, incident command structure, or field coordinator judgment. Teams continue operating with existing tools; SAR Vision provides systematic aerial coverage that cannot be maintained manually at scale.
SAR Vision has been stability-tested and evaluated under sustained field conditions on portable hardware. The following reflects the current testing state. No performance claims are made beyond what has been directly observed and documented.
Demonstrated to regional SAR personnel. The system was reviewed and observed by active search-and-rescue team members under structured conditions. Operational feedback from those sessions is incorporated into the development cycle.
Tested against live drone HDMI feeds. SAR Vision has been tested against real-time drone video via HDMI capture, demonstrating detection pipeline stability and throughput under field-representative conditions.
Tested under sustained field deployment conditions. System has been run continuously on portable field hardware, confirming offline functionality, fault-tolerant signal handling, GPU stability under sustained load, and detection output during active UAV flight.
Specialized SAR detection models applied. Detection models are purpose-built on aerial SAR imagery, showing improved detection of overhead human figures in internal evaluation compared to general-purpose baselines.
Structured agency evaluation being explored. Formal evaluation with drone-equipped SAR units is being pursued to assess field suitability. Interested units would contribute structured operational feedback.
Human verification is required for all detections. No output from SAR Vision should be acted upon without evaluation by a qualified operator. The system surfaces candidates; personnel assess them.
Performance varies by terrain, lighting, and subject visibility. Detection reliability is affected by vegetation density, terrain complexity, ambient light conditions, and subject contrast against background. No system performs uniformly across all environments.
False positives are expected and by design. The recall-optimized configuration intentionally accepts a higher false positive rate. Teams should expect, on every deployment, flagged contacts that do not correspond to subjects.
An NVIDIA GPU is a deployment prerequisite. CPU-only hardware is not recommended for real-time deployment; confirm GPU availability before evaluation planning begins.
Continued operational refinement. SAR Vision has been stability-tested and remains under active improvement. Participating units should expect periodic updates and will provide structured operational feedback as part of the program.
The system has not yet been deployed on a confirmed live subject recovery mission.
Not certified for operational decision authority.
Representative detection examples from internal evaluation footage processed through SAR Vision. These screenshots illustrate the types of subject candidates surfaced across varied terrain, lighting, and distance conditions.
All detections shown reflect recall-prioritized tuning designed to minimize missed subjects while preserving operator review control.
Examples reflect internal evaluation footage. Performance is environment-dependent and not independently validated.
SAR Vision was built for a specific operational context. Understanding where it fits — and where it does not — is part of evaluating whether it is appropriate for your unit.
Units conducting UAS-assisted searches who need systematic coverage of aerial video beyond what manual operator review can sustain.
Teams whose search areas lack reliable cellular or satellite uplink — remote wilderness, canyon terrain, or backcountry without communications infrastructure.
Personnel who want to increase detection coverage without delegating subject identification to the system — operators remain in the loop on every flagged contact.
Government and accredited volunteer units with structured operational procedures who can integrate detection assistance into existing field workflows.
SAR Vision requires active operator oversight. It is not architected for unattended or autonomous drone patrol without human monitoring.
Designed for organized SAR teams with formal operational structures and trained personnel. Not intended for personal or recreational aerial use.
SAR Vision runs locally and does not transmit data externally. Teams requiring centralized remote processing infrastructure should evaluate other solutions.
The system does not provide autonomous GPS coordinates, subject tagging, or automatic dispatch triggers. All detections require human review and interpretation before any action.
All detections require human verification. No detection output from SAR Vision should be acted upon without evaluation by a qualified operator. The system surfaces candidates; trained personnel assess them.
SAR Vision does not replace operator judgment. Aerial observation, search pattern planning, and subject determination remain under the authority of qualified SAR personnel. The system is an analytical aid, not a decision authority.
It is not an autonomous search system. SAR Vision does not direct aircraft, prioritize search sectors, or make resource allocation decisions. These functions remain entirely with incident command and field personnel.
Detection rates are environment-dependent. No threshold guarantees that all subjects present in video will be flagged. System performance should be understood as probabilistic assistance, not exhaustive coverage.
SAR Vision is functionally complete for its current scope and remains under active improvement. The current stability release addresses human detection from RGB aerial video. Subsequent phases are planned based on operational priority and feedback from evaluation participants.
Roadmap priorities are shaped directly by feedback from evaluation units. If your operational context involves specific terrain types, detection challenges, or equipment constraints not addressed here, that input is sought and directly informs development sequencing.
SAR Vision is available for structured field evaluation with qualified SAR teams. Participation is coordinated while the system continues refinement and training dataset expansion.
This is a collaborative development phase. Participating units are expected to contribute observations on detection performance, false positive rates, field usability, and integration with existing procedures. That feedback directly determines development priorities.
Participation is coordinated directly with evaluation units, including structured check-ins, deployment guidance, and feedback review after operational use.
Field evaluation participation — active SAR units only