
Orca Toolboxes: Battle-Tested Algorithms for Sensor Data

· 4 min read
Frederick Mannings
Head of Engineering @ OrcaTelemetry.io, Founder

The first three Orca Toolboxes are live - a production-grade set of algorithms for sensor data, shipped as minimal binaries and designed to run inside your own cloud or on-premises.

Today marks a milestone for the Orca Telemetry analytics platform. The Signal Quality, Condition Indicator and Deviation Detector toolboxes are now available, with three more (Feature Engineer, Remaining Useful Life and Auto-Label) scheduled across the rest of 2026.

Each toolbox is a drop-in component that consumes the same sensor streams you are already collecting. They work natively alongside any time-series or analytics database - for example TimescaleDB, QuestDB, InfluxDB or BigQuery - so there is no ingest rework required to get value from them.
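
As a rough sketch of what that integration shape means - the function and metric names here are illustrative assumptions, not the toolbox API - a toolbox reduces a window of raw samples to plain (timestamp, name, value) rows that map directly onto an INSERT into whatever store you already run:

```python
from statistics import mean, pstdev

def to_metric_rows(window_ts, samples):
    """Reduce one window of raw sensor samples to emit-ready metric rows.

    Hypothetical sketch: each row is a plain (timestamp, name, value)
    triple, so it inserts into TimescaleDB, QuestDB, InfluxDB or BigQuery
    without any ingest rework.
    """
    return [
        (window_ts, "signal.mean", mean(samples)),
        (window_ts, "signal.stddev", pstdev(samples)),
    ]

rows = to_metric_rows("2026-01-01T00:00:00Z", [0.9, 1.1, 1.0, 1.0])
```

The point of the shape, not the maths: because the output is ordinary metric rows, the storage layer never needs to know a toolbox exists.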

Straight to Insight

The premise is simple. Most companies working with sensor data spend a disproportionate amount of engineering effort re-implementing algorithms that already exist. Orca has fixed the infrastructure build problem; now we're going after the algorithms that yield the insight: vibration analysis, distribution-shift detection, feature extraction, remaining-useful-life projection. These are all solved problems in the academic literature but unsolved problems in most production scenarios.

The Orca Toolboxes close that gap. Every algorithm has been distilled from years of industrial deployment by engineers who have shipped this work at Airbus, Rolls-Royce, CMR Surgical and GKN Aerospace. They are statistically validated on industrial sensor data, SIMD-optimised, microservice-deployable, and carry a minimal memory footprint - around 70% less than a typical Python implementation of the equivalent capability.

What's Available Today

Three toolboxes are available immediately:

  • Signal Quality - Detects dropouts, saturation, noise floor and sampling drift. Flags bad data before it poisons downstream analytics so teams respond to real degradation instead of sensor noise.
  • Condition Indicator - Fuses derived metrics from raw sensor data into a single high-information condition indicator. Collapses a prediction target from twenty metrics down to one, so the team acts on a single statistically fused signal.
  • Deviation Detector - Detects shifts in the underlying distribution of a condition indicator. Catches degradation weeks before a threshold alarm would fire, turning reactive maintenance into scheduled work.
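
To make the first of these concrete, here is a minimal sketch of the kind of checks Signal Quality performs - the thresholds, names and interface are illustrative assumptions, not the toolbox's actual implementation:

```python
def quality_flags(timestamps, samples, period_s, full_scale):
    """Flag dropouts and saturation in one window of sensor data.

    Illustrative only: real dropout and saturation detection would be
    more robust, but the idea is the same -- flag bad data before it
    poisons downstream analytics.
    """
    flags = []
    # Dropout: a timestamp gap much larger than the nominal sampling period.
    for i in range(1, len(timestamps)):
        if timestamps[i] - timestamps[i - 1] > 1.5 * period_s:
            flags.append(("dropout", i))
    # Saturation: samples pinned at the sensor's full-scale value.
    for i, s in enumerate(samples):
        if abs(s) >= full_scale:
            flags.append(("saturation", i))
    return flags

flags = quality_flags([0.0, 0.01, 0.05, 0.06], [0.2, 0.3, 5.0, 0.1],
                      period_s=0.01, full_scale=5.0)
# -> [("dropout", 2), ("saturation", 2)]
```

Downstream stages can then skip or discount the flagged indices, so alerts fire on real degradation instead of sensor noise.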

What Is Coming Next

Three further toolboxes complete the pipeline from raw data to a self-labelling training set:

  • Feature Engineer (Q3 2026) - Automated time-series and frequency-based feature engineering, efficient and validated, so ML pipelines stop stalling on pre-processing.
  • Remaining Useful Life (Q3 2026) - Projects asset failure with confidence intervals. Two weeks of runway instead of two hours means interventions are planned into maintenance windows, not triggered by alarms at 2am.
  • Auto-Label (Q4 2026) - Auto-generates training data from live fault events with full provenance. Six months in, teams have a production-grade dataset that was built while they slept.

Native To Any Analytics Stack

A deliberate design goal was that the toolboxes must not force a stack migration. The algorithms consume standard sensor streams and emit standard metrics, so they sit comfortably next to whatever storage layer is already in place. Because those metrics work natively with any database - TimescaleDB, QuestDB, InfluxDB or Postgres - teams can adopt them incrementally, one toolbox, one signal, one asset class at a time, without a rip-and-replace.

They also compose with each other. If you choose, Signal Quality gates the input to Condition Indicator, which produces the fused signal that Deviation Detector watches, which in turn surfaces the fault events that Auto-Label will eventually consume. The full pipeline is a chain of interoperable algorithms.
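
The chaining is easiest to see as three small functions - a deliberately simplified sketch under our own assumptions (simple gating, mean fusion, a z-score shift test), not Orca's actual algorithms or API:

```python
from statistics import mean, pstdev

def gate(samples, full_scale):
    """Signal Quality stage (toy version): drop saturated samples."""
    return [s for s in samples if abs(s) < full_scale]

def fuse(metric_vectors):
    """Condition Indicator stage (toy version): collapse many metrics
    per timestamp into one fused indicator value."""
    return [mean(v) for v in metric_vectors]

def shifted(indicator, baseline, k=3.0):
    """Deviation Detector stage (toy version): flag a distribution
    shift when any indicator value sits k sigmas from the baseline."""
    mu, sigma = mean(baseline), pstdev(baseline)
    return any(abs(x - mu) > k * sigma for x in indicator)

clean = gate([0.2, 9.9, 0.4], full_scale=5.0)        # -> [0.2, 0.4]
drifted = shifted([1.5], [1.0, 1.01, 0.99, 1.0])     # -> True
```

Each stage only sees the previous stage's output, which is what makes the toolboxes independently adoptable yet composable.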

Robustness, Clarity, & Uptime

Three commitments ship with every toolbox:

  • Robustness - Find a leak, get paid. If a vulnerability or edge case is discovered, a bounty is paid. No haggling.
  • Clarity - If an algorithm is not behaving as documented, customers get a dedicated support channel with a named engineer. No ticket queues in sight.
  • Uptime - Automated alerts fire the moment something drops, and priority support kicks in automatically.

Getting Access

The toolboxes work out of the box on vibration, temperature, current, pressure and robot hardware sensor data.

For teams who have been waiting for a credible alternative to building and maintaining this capability in-house (typically many thousands of hours to build and several hundred per year to maintain) this is the moment to take a look. Start the process here.

More updates will follow as Feature Engineer, Remaining Useful Life and Auto-Label land through the rest of the year!