How Is DroneTector Using AI?

Detects, tracks, and classifies hostile drones in real time using radar, cameras, and acoustics.

Uses micro-Doppler deep learning for radar signature classification, multi-sensor fusion ensembles, and edge computer vision for low-latency airspace monitoring.

Company Overview

Develops millimeter-wave radar and multi-sensor fusion systems powered by ML to detect, track, and classify hostile drones and swarms in real time for airports, critical infrastructure, and defense.

Product Roadmap & Public Announcements

Millimeter-wave radar, multi-sensor fusion (radar, camera, acoustic), distributed detection network, swarm-tracking. Partnerships with UK MoD DASA, NATO DIANA, Royal Academy of Engineering. Pilot deployments at airports and critical infrastructure.

Signals & Private Analysis

Micro-Doppler signature classification using deep learning. Edge-deployed inference for low-latency detection. Acoustic ML for drone fingerprinting. NATO DIANA provides allied defense test ranges. Future SaaS 'airspace monitoring as a service' offering.

DroneTector

Machine Learning Use Cases

Micro-Doppler Deep Learning
For: Product Differentiation (Engineering)

ML-powered micro-Doppler radar signature classification that distinguishes hostile drones from birds, clutter, and other airborne objects in real time.

Layman's Explanation

It teaches the radar to tell the difference between a delivery drone and a seagull by learning the unique "fingerprint" of how each object's spinning parts reflect radar waves.

Use Case Details

DroneTector's most novel ML application is its use of convolutional neural networks (CNNs) and long short-term memory (LSTM) networks to classify micro-Doppler signatures extracted from millimeter-wave radar returns. Every airborne object — whether a quadcopter, fixed-wing drone, bird, or piece of debris — produces a unique micro-Doppler signature caused by the rotation of propellers, flapping of wings, or tumbling motion. DroneTector's ML pipeline ingests raw radar time-frequency spectrograms, applies short-time Fourier transforms (STFT) to generate micro-Doppler images, and feeds them into a CNN-LSTM hybrid architecture that learns both spatial features and temporal dynamics. The model is trained on proprietary datasets collected during field trials supported by UK MoD DASA and NATO DIANA, encompassing dozens of drone types, bird species, and environmental clutter scenarios. This enables the system to classify targets with high confidence even in cluttered urban or coastal environments where false alarms from conventional radar would be unacceptable. The result is a detection system that doesn't just see something in the sky — it knows what it is.
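The front end of the pipeline above, raw time-series radar return to micro-Doppler time-frequency image, can be sketched in a few lines. This is an illustrative reconstruction, not DroneTector's code: the simulated rotor return, sample rate, and STFT parameters are all assumptions chosen to show the shape of the data a CNN-LSTM would consume.

```python
import numpy as np
from scipy.signal import stft

def micro_doppler_spectrogram(iq, fs, nperseg=128, noverlap=96):
    """Convert a complex radar I/Q return into a log-magnitude
    micro-Doppler image (STFT), the input format described for
    the CNN-LSTM classifier. Parameters here are illustrative."""
    f, t, Z = stft(iq, fs=fs, nperseg=nperseg, noverlap=noverlap,
                   return_onesided=False)      # two-sided: complex input
    img = 20 * np.log10(np.abs(Z) + 1e-12)     # dB scale
    return np.fft.fftshift(img, axes=0)        # centre zero Doppler

# Toy example: a 4-blade rotor sinusoidally modulates the body return,
# producing the characteristic "flash" pattern around the body Doppler line.
fs = 8000.0
t = np.arange(0, 0.5, 1 / fs)
rotor_hz, body_doppler = 60.0, 400.0
phase = 2 * np.pi * body_doppler * t + 3.0 * np.sin(2 * np.pi * 4 * rotor_hz * t)
iq = np.exp(1j * phase)
img = micro_doppler_spectrogram(iq, fs)        # (freq bins, time frames)
```

A stack of such images over consecutive dwells is what a CNN-LSTM hybrid would consume: the CNN learns the spatial blade-flash texture within each image, the LSTM the dynamics across the stack.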

Analogy

It's like teaching a bouncer to recognize troublemakers not by their face, but by the way they walk through the door.

Multi-Sensor Fusion Ensemble
For: Risk Reduction (Product)

Ensemble learning-based multi-sensor fusion that combines radar, camera, and acoustic ML outputs into a single, high-confidence threat assessment.

Layman's Explanation

It cross-checks what the radar sees, the camera spots, and the microphone hears — like getting three independent witnesses to agree before sounding the alarm.

Use Case Details

DroneTector's second breakthrough ML use case is its late-fusion ensemble architecture that intelligently combines the probabilistic outputs of three independent ML subsystems — radar classification, computer vision detection, and acoustic fingerprinting — into a unified threat confidence score. Each sensor modality has inherent strengths and weaknesses: radar excels at range and all-weather operation but struggles with very slow-moving targets; cameras provide visual confirmation but degrade in fog or darkness; acoustic sensors detect propeller noise but are limited by ambient noise and range. DroneTector's fusion layer uses a learned weighting model (likely a gradient-boosted ensemble or shallow neural network) that dynamically adjusts the contribution of each sensor based on real-time environmental conditions (weather, ambient noise level, time of day) and sensor health status. The system also applies Bayesian track association to correlate detections across modalities, ensuring that a radar blip, a camera bounding box, and an acoustic event are correctly linked to the same physical object. This dramatically reduces false alarms — the single biggest pain point for counter-drone operators — while maintaining near-perfect detection rates. The architecture is designed to be modular, so new sensor types (e.g., RF spectrum analyzers, LiDAR) can be plugged in as additional ensemble members without retraining the entire system.
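The condition-dependent weighting described above can be sketched with log-odds pooling, a standard independent-expert fusion rule. This is a minimal illustration of the idea, not DroneTector's model: the weight heuristics (`visibility`, `noise_db`) and the 90 dB acoustic cutoff are invented for the example, and the source only speculates that the real weighting is a gradient-boosted ensemble or shallow network.

```python
import numpy as np

def fuse_threat_scores(p_radar, p_vision, p_acoustic, visibility, noise_db):
    """Late-fusion sketch: each subsystem's drone probability is pooled
    in log-odds space, with weights that fall as that sensor's operating
    conditions degrade. All thresholds here are illustrative."""
    w_radar = 1.0                                 # all-weather baseline
    w_vision = max(0.0, visibility)               # fog/darkness -> 0
    w_acoustic = max(0.0, 1.0 - noise_db / 90.0)  # loud sites -> 0
    w = np.array([w_radar, w_vision, w_acoustic])
    p = np.clip([p_radar, p_vision, p_acoustic], 1e-6, 1 - 1e-6)
    logit = np.sum(w * np.log(p / (1 - p))) / np.sum(w)  # weighted pooling
    return 1.0 / (1.0 + np.exp(-logit))           # back to probability

# Clear day, moderate ambient noise: all three sensors contribute.
score = fuse_threat_scores(0.9, 0.8, 0.6, visibility=1.0, noise_db=40)
```

Setting `visibility=0.0` reproduces the failure mode the text describes: the camera's vote is silently removed and the fused score leans on radar and acoustics alone.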

Analogy

It's like a doctor who doesn't just rely on one test — they combine your X-ray, blood work, and symptoms before making a diagnosis.

Edge Computer Vision Detection
For: Operational Efficiency (Engineering)

Lightweight, edge-deployed computer vision model (YOLO-based) for real-time detection and tracking of small, fast-moving drones in complex visual environments.

Layman's Explanation

It gives every security camera a pair of AI-powered binoculars that can instantly spot and lock onto a tiny drone against a busy sky — without needing a cloud connection.

Use Case Details

DroneTector deploys a customized, pruned variant of the YOLO (You Only Look Once) single-stage object detection architecture on edge compute hardware co-located with each camera node. The model is specifically fine-tuned on a proprietary dataset of small UAS imagery captured across diverse backgrounds (urban skylines, open fields, coastal environments, night/IR) and augmented with synthetic data generated from 3D drone models rendered against photorealistic backgrounds. Key innovations include attention mechanisms focused on small-object detection (addressing the notoriously difficult problem of spotting a 20cm drone at 300+ meters), temporal consistency modules that leverage frame-to-frame motion cues to suppress false positives from birds or debris, and dynamic input resolution scaling that allocates more pixels to regions of interest flagged by the radar subsystem. The model runs on NVIDIA Jetson Orin or equivalent low-power edge GPUs, achieving real-time inference at 30+ FPS without any cloud dependency — critical for defense and critical infrastructure deployments where latency and data sovereignty are non-negotiable. Detected drone bounding boxes and confidence scores are streamed to the central fusion engine via encrypted MQTT, where they are correlated with radar and acoustic detections. The edge deployment also enables the system to continue operating autonomously even if network connectivity is lost, a key requirement for field and forward-deployed military use cases.
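The temporal consistency module mentioned above, suppressing one-frame false positives by requiring frame-to-frame motion agreement, can be sketched as a nearest-neighbour track confirmer. The class name, hit threshold, and pixel gate below are all assumptions for illustration; the actual module is not publicly specified.

```python
import math

class TemporalConsistencyFilter:
    """Sketch of frame-to-frame confirmation: a detection is only
    promoted to a confirmed track after it reappears within
    `max_jump` pixels for `min_hits` consecutive frames, so a
    single-frame flash (bird wingbeat, sensor noise) never alarms."""
    def __init__(self, min_hits=3, max_jump=40.0):
        self.min_hits, self.max_jump = min_hits, max_jump
        self.tracks = []  # dicts: last centre + consecutive hit count

    def update(self, detections):
        """detections: list of (cx, cy) box centres for one frame.
        Returns the centres confirmed as temporally consistent."""
        new_tracks, confirmed = [], []
        for cx, cy in detections:
            best = None
            for tr in self.tracks:               # greedy nearest match
                d = math.hypot(cx - tr["cx"], cy - tr["cy"])
                if d <= self.max_jump and (best is None or d < best[0]):
                    best = (d, tr)
            hits = best[1]["hits"] + 1 if best else 1
            new_tracks.append({"cx": cx, "cy": cy, "hits": hits})
            if hits >= self.min_hits:
                confirmed.append((cx, cy))
        self.tracks = new_tracks                 # unmatched tracks lapse
        return confirmed

f = TemporalConsistencyFilter()
f.update([(100, 100)])         # first sighting: unconfirmed
f.update([(110, 105)])         # consistent motion: still unconfirmed
out = f.update([(120, 110)])   # third consecutive hit: confirmed
```

A production tracker would add velocity gating and handle track contention, but even this minimal version shows why a stationary speck of debris or a single misclassified frame never reaches the fusion engine.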

Analogy

It's like giving every security camera the reflexes of a fighter pilot and the patience of a birdwatcher — but it never blinks and never needs coffee.

Key Technical Team Members

  • Dr. Matthew Moore, CEO & Co-Founder
  • Dr. Thomas Doherty, CTO & Co-Founder
  • Dr. Jordina Frances de Mas, COO & Co-Founder

Three PhDs from world-class institutions (St Andrews, Oxford) in millimeter-wave radar, optical systems, and AI/automated reasoning. Purpose-built ML-native counter-drone hardware that generalist defense contractors cannot easily replicate.

Funding History

  • 2023-2024: Founded, initial R&D
  • 2024: UK MoD DASA programme
  • 2024-2025: NATO DIANA, Royal Academy of Engineering support
  • 2025: Converge Challenge, early pilot deployments
  • 2026: Seeking first commercial contracts

Competitors

  • Radar: Robin Radar, Echodyne, Fortem Technologies
  • RF/Multi-Sensor: Dedrone (Axon), DroneShield, Sentrycs
  • Acoustic: Squarehead Technology
  • Defense Primes: Thales, Leonardo, Hensoldt
  • AI Startups: Iris Automation, D-Fend, SkySafe