LiSA
Situational awareness LiDAR for degraded visual environments
LiSA (Lidar for Situational Awareness) is an airborne sensor designed to provide tactical visibility and understanding of hazards, obstacles, flight paths and landing zones in harsh weather and degraded visual environments (DVE). With a range of up to 2 km, it delivers real-time 3D spatial awareness and enables safe operations at night and through heavy rain, fog, snow, sand, smoke and more.
The low-SWaP solution is DO-178C certified, emits no RF, and is undetectable in the visible band or by NVG sensors.
AIMS
Remote location, characterization and monitoring of human casualties
AIMS (Automated Image-based Monitoring System) is a platform-agnostic system that can be installed on a range of UAS or UGV platforms to provide real-time detection of humans and monitoring of vital signs without the need for contact sensors. Running on low-SWaP COTS computing hardware, it offers a very high positive detection rate and an extremely low false alarm rate.
The powerful system can automatically detect and geolocate human bodies from up to 500 ft away, identify body positions, and detect heart and respiration rates as well as wounds. It is TAK-compatible, making it ideal for battlefield medicine as well as first responders and SAR teams.
TANDOM
Real-time sensor fusion & tactical display output
TANDOM (Tactical ANalytic Decision Optimization Module) provides real-time fusion capabilities for any application requiring immediate integration of multiple sensors and sensor modalities, including chemical, radiological, biological, video and environmental. Designed to reduce cognitive burden, it allows operators to focus on high-confidence events, enabling swifter decision making and revealing events that might previously have been missed.
The TANDOM software runs on a mini-PC using industry-standard I/O protocols, and automatically fuses diverse sensor inputs into an intuitive tactical display with an exceptionally low false alarm rate.
ALLSEEN
AI & deep learning-powered automatic target recognition
ALLSEEN (Areté Learned Likelihood Statistical Error Estimation Network) is a powerful AI- and deep learning-powered automatic target recognition (ATR) software solution that enables simultaneous detection, localization and characterization of objects within a scene. Fast, retrainable and scalable, it provides operators with meaningful confidence estimates, reducing cognitive burden.
ALLSEEN integrates easily with a variety of UAV and UGV platforms, and is sensor-agnostic with only modest communication bandwidth requirements.