Abstract

Detecting collision-course targets in aerial scenes from purely passive optical images is challenging for a vision-based sense-and-avoid (SAA) system. Proposed herein is a processing pipeline for detecting and evaluating collision-course targets in airborne imagery using machine vision techniques. An evaluation of eight feature detectors and three spatio-temporal visual cues is presented. Performance metrics for comparing feature detectors include the percentage of detected targets (PDT), the percentage of false positives (PFP), and the range at earliest detection ($R_{det}$). Contrast- and motion-based visual cues are evaluated against standard models and expected spatio-temporal behavior. The analysis is conducted on a multi-year database of imagery captured during actual airborne collision-course flights flown at the National Research Council of Canada. Datasets from two different intruder aircraft, a Bell 206 rotorcraft and a Harvard Mark IV fixed-wing trainer, were compared for accuracy and robustness. Results indicate that the features from accelerated segment test (FAST) detector shows the most promise, as it maximizes the range at earliest detection while minimizing false positives. Temporal trends from visual cues analyzed on the same datasets are indicative of collision-course behavior. Robustness of the cues was established across collision geometries, intruder aircraft types, illumination conditions, seasonal environmental variations, and scene clutter.
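To make the detector comparison concrete, the following is a minimal sketch (not the paper's implementation) of running OpenCV's FAST detector on a single grayscale frame and scoring it against a ground-truth target box. The per-frame scoring rule, the threshold value, the synthetic test frame, and the helper names `detect_fast` and `score_frame` are illustrative assumptions, not details taken from the paper.

```python
# Sketch: FAST keypoint detection on one frame plus an assumed per-frame
# detection/false-positive count, for illustration only.
import cv2
import numpy as np

def detect_fast(gray, threshold=25):
    """Return FAST keypoint coordinates as an (N, 2) array."""
    fast = cv2.FastFeatureDetector_create(threshold=threshold,
                                          nonmaxSuppression=True)
    keypoints = fast.detect(gray, None)
    return np.array([kp.pt for kp in keypoints]) if keypoints else np.empty((0, 2))

def score_frame(points, target_box):
    """Assumed scoring rule: the target counts as detected if any keypoint
    falls inside its bounding box; keypoints outside count as false positives."""
    x0, y0, x1, y1 = target_box
    inside = (points[:, 0] >= x0) & (points[:, 0] <= x1) & \
             (points[:, 1] >= y0) & (points[:, 1] <= y1)
    return inside.any(), int((~inside).sum())

# Usage on synthetic data: a dark "intruder" silhouette on a flat sky.
frame = np.full((480, 640), 200, dtype=np.uint8)
frame[230:250, 310:330] = 60
pts = detect_fast(frame)
detected, false_pos = score_frame(pts, (310, 230, 330, 250))
print(f"target detected: {detected}, false positives: {false_pos}")
```

Aggregating such per-frame outcomes over a flight sequence would yield PDT- and PFP-style statistics, with $R_{det}$ given by the intruder range at the first frame flagged as a detection.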