The Engine

One system.
Drop it into anything.

The same neural architecture that detects anomalies in encrypted network traffic — without decryption — now identifies industrial equipment from a single photograph. Two scenarios. Same engine. No retraining.

Encrypted traffic analysis

A corporate network. All traffic is TLS 1.3 encrypted. You can't see the payloads. Traditional security tools are blind. The engine isn't.

01
Hour 0 — Passive

Learn what normal looks like

The engine sits on the network. Doesn't decrypt. Doesn't inspect payloads. It watches the shape of traffic — timing between packets, session durations, handshake patterns, flow volumes, which endpoints talk to which, and when. Every connection builds the model of what belongs.

$ engine observe --interface eth0 --mode passive
[observe] Monitoring 10.0.0.0/16 | All traffic TLS 1.3
[learn] Ingesting flows... 847/min across 340 endpoints
[learn] Building temporal profiles per host...
[learn] Mapping inter-host relationship graph...
[learn] Encoding session-duration distributions...
[learn] Baseline locked: 12,491 behavioural patterns
[engine] Anomaly detection active. No signatures. No rules.

No signatures. No rules. No threat feeds. The engine doesn't know what an attack looks like. It knows what your network looks like. Anything that deviates from the learned baseline is surfaced — even attacks that have never been seen before.
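The page ships no code, but the baseline step can be sketched. A minimal illustration, assuming flow records expose only metadata — duration, hour of day, peer endpoint (all field and function names here are hypothetical, not a real API):

```python
from collections import defaultdict

class HostBaseline:
    """Per-host behavioural profile built from flow metadata only."""
    def __init__(self):
        self.session_durations = []   # seconds per session
        self.active_hours = set()     # hours of day this host is active
        self.peers = set()            # endpoints it has ever contacted

    def observe(self, flow):
        # 'flow' carries metadata only: the payload is never read
        self.session_durations.append(flow["duration"])
        self.active_hours.add(flow["hour"])
        self.peers.add(flow["peer"])

def build_baselines(flows):
    """Fold a stream of flow records into per-host profiles."""
    baselines = defaultdict(HostBaseline)
    for flow in flows:
        baselines[flow["src"]].observe(flow)
    return baselines
```

The point of the sketch: nothing here requires decryption. The baseline is shape, not content.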

02
Day 14 — First anomaly

Something doesn't fit

A finance workstation opens a TLS 1.3 session to an external IP. Valid certificate. Standard port 443. Every firewall, IDS, and SIEM on the network sees clean, legitimate HTTPS. The engine sees four things that don't belong.

ANOMALY 10.0.14.73 → 185.*.*.47:443
tls: 1.3 | cert: valid (LE, 6 days old) | payload: encrypted
deviation 1: Fixed 4.2s send interval — no human application does this
deviation 2: Outbound packet sizes cluster at 312±8 bytes — structured, not organic
deviation 3: Session starts at 02:14 local — user has never been active before 07:30
deviation 4: 185.*.*.47 has zero relationship history with any host on this network
[engine] Behavioural confidence: 0.97 — this is not user-initiated traffic
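The four deviations above can be expressed as simple checks against the learned baseline. A sketch only — the field names and thresholds are illustrative, not the engine's actual internals:

```python
def score_session(session, baseline):
    """Return the behavioural deviations one session shows against a
    host baseline. All field names and thresholds are illustrative."""
    deviations = []
    # 1. Machine-like timing: near-zero jitter between sends
    if session["interval_jitter_s"] < 0.05:
        deviations.append("fixed send interval")
    # 2. Structured payloads: packet sizes tightly clustered
    if session["size_stddev"] < 10:
        deviations.append("uniform packet sizes")
    # 3. Activity outside this user's learned hours
    if session["hour"] not in baseline["active_hours"]:
        deviations.append("outside active hours")
    # 4. A peer with zero relationship history on this network
    if session["peer"] not in baseline["peers"]:
        deviations.append("unknown peer")
    return deviations
```

Any one of these is noise. All four together on one session is what drives the 0.97 confidence.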

03
Automatic — Rewind

Trace the origin

The engine doesn't just flag the anomaly. It rewinds. It searches its memory for the moment this host's behaviour started to change — before the C2 channel was established. It finds a behavioural shift 6 days earlier that was too subtle to trigger an alert on its own.

[trace] Rewinding behavioural history for 10.0.14.73...
day -6: Subtle DNS pattern shift — 3 queries to newly registered domain
day -6: First TLS session to 185.*.*.47 — short, 2.1s, single exchange
day -5: New background process: 847-byte burst every 60min (staging)
day -3: Interval shortened to 4.2s. Packet structure stabilised. C2 active.
day -1: Lateral probe: 10.0.14.73 → 10.0.14.80 (SMB, first ever contact)
day 0: 10.0.14.80 begins identical beacon pattern to 185.*.*.47
[engine] Two hosts compromised. Lateral movement confirmed.

It found the initial compromise. A 2.1-second TLS session, 6 days ago, that looked like nothing. The engine stored the behavioural pattern. When the C2 channel appeared, it connected the dots backwards through time.
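The rewind itself reduces to a search over stored history: walk the host's time-ordered events and return the chain from the earliest suspicious one onward. A minimal sketch, with hypothetical event fields and an arbitrary predicate:

```python
def rewind(history, is_suspicious):
    """Given a host's time-ordered behavioural history, return the
    chain of events from the earliest suspicious one onward.
    'history' is a list of event dicts and 'is_suspicious' any
    predicate over one event (both illustrative)."""
    for i, event in enumerate(history):
        if is_suspicious(event):
            return history[i:]   # chain from initial compromise onward
    return []                    # nothing in memory matches
```

What makes this possible is not the search, which is trivial, but that the behavioural history was retained at all.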

04
Automatic — Lateral scan

Map the spread

Now the engine knows the behavioural fingerprint of this specific threat. It scans every host on the network — not for signatures, but for the same pattern of micro-deviations. Hosts that are compromised but haven't started beaconing yet. Pre-symptomatic detection.

[scan] Scanning 340 endpoints for behavioural correlation...
10.0.14.73 WKS-FINANCE-09 ACTIVE C2
10.0.14.80 WKS-FINANCE-14 ACTIVE C2
10.0.14.91 WKS-FINANCE-22 STAGING (pre-beacon)
10.0.6.12 SRV-FILESHARE-01 PROBE RECEIVED
336 hosts Clean
[engine] 4 affected hosts. 1 still in staging phase (not yet beaconing).
[engine] SRV-FILESHARE-01 probed but not yet compromised — act now.

It found a host that hadn't started beaconing yet. WKS-FINANCE-22 was in the staging phase — the same 847-byte hourly burst that .73 showed on day -5. Traditional tools would see nothing. The engine saw the pattern it had already learned.
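Correlating that fingerprint across the fleet can be sketched as a tolerance match on burst size and period. Field names and the 5% tolerance are assumptions for illustration:

```python
def matches_fingerprint(events, fingerprint, tolerance=0.05):
    """True if any of a host's events shows the same burst size and
    period as the learned fingerprint, within a relative tolerance.
    Field names and the 5% tolerance are illustrative."""
    for ev in events:
        size_ok = (abs(ev["burst_bytes"] - fingerprint["burst_bytes"])
                   <= tolerance * fingerprint["burst_bytes"])
        period_ok = (abs(ev["period_s"] - fingerprint["period_s"])
                     <= tolerance * fingerprint["period_s"])
        if size_ok and period_ok:
            return True
    return False

def lateral_scan(hosts, fingerprint):
    """Return hosts whose recent behaviour correlates with the fingerprint."""
    return [name for name, events in hosts.items()
            if matches_fingerprint(events, fingerprint)]
```

Because the match is behavioural rather than signature-based, it catches hosts that are staging but not yet beaconing.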

05
Deep analysis

Find what they took

The engine analyses the outbound flow from the compromised hosts. It can't read the encrypted content — but it can measure it. Over the past 72 hours, 10.0.14.73 sent 2.3 GB more data to 185.*.*.47 than its behavioural profile predicts. The shape of the transfer matches data exfiltration: large sequential chunks during low-activity hours.

[exfil] Analysing outbound volume deviation...
10.0.14.73 → 185.*.*.47
expected (72h): ~340 MB (baseline)
actual (72h): 2.64 GB
excess: +2.3 GB (676% above normal)
transfer pattern: Sequential 50MB blocks, 02:00-05:00 window
matches: Structured data exfiltration profile
[engine] Probable exfiltration: financial data from FINANCE subnet
[engine] Window: last 72 hours. Estimated volume: 2.3 GB.
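The deviation arithmetic in that report is straightforward. A sketch, using the figures above in megabytes:

```python
def volume_deviation(actual_mb, baseline_mb):
    """Excess outbound volume and percentage above the learned
    baseline. Units are whatever you measured in; megabytes here."""
    excess = actual_mb - baseline_mb
    pct_above = 100.0 * excess / baseline_mb
    return excess, pct_above

# Report figures: 2.64 GB actual against a ~340 MB baseline
# gives +2.3 GB excess, roughly 676% above normal.
```

The classification as exfiltration comes from the transfer shape on top of the excess, not the excess alone.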

06
Complete picture

Full kill chain — from one anomaly

From a single timing anomaly, the engine reconstructed the entire attack. Initial access, staging, C2 establishment, lateral movement, pre-symptomatic spread, and data exfiltration. All from encrypted traffic. All without breaking a single cipher.

Incident Report: CRITICAL
Classification: Advanced Persistent Threat
Kill Chain Stage: Exfiltration (active)
Compromised Hosts: 2 active, 1 staging, 1 probed
Data at Risk: ~2.3 GB exfiltrated
Initial Compromise: Day -6, 14:23 UTC
Attack Duration: ~6 days undetected
Encryption Broken: No. Not required.
Signatures Used: None. Pure behavioural.

What your existing tools saw: Valid TLS. Clean certificates. Normal port 443 traffic. All green.

What the engine saw: A 2.1-second handshake that started a chain of behavioural micro-deviations across 4 hosts, culminating in 2.3 GB of structured data leaving the finance subnet at 2am through a perfect encrypted tunnel.

Same engine. Different domain.

Now point it at a nameplate.

No retraining. No reconfiguration. It learns from scratch.

Industrial equipment identification

An industrial site. 500 valve actuators. No existing asset register. One technician with a phone.

01
10 seconds

Capture

Point the phone at a nameplate. The system detects when a nameplate is in frame and captures automatically. GPS tagged. Walk to the next one.

[camera] Nameplate detected in frame
[capture] Auto-captured at optimal focus
GPS tagged. Queued for processing.

02
Seconds

Read + Verify

The engine reads the nameplate — even faded, corroded, or at an angle. Every extracted field is cross-referenced against a product intelligence database. Misreads are caught and corrected automatically. It gets more accurate with every scan.

[vision] 14 fields extracted from nameplate
[engine] VERIFY Manufacturer confirmed
[engine] VERIFY Model confirmed
[engine] VALIDATE All specs within range
[engine] CONDITION Visual assessment: Good
[engine] LEARN Pattern stored.
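The verify step amounts to cross-checking each extracted field against known specs for that model. A minimal sketch; the spec table and plausible ranges here are assumptions for illustration, not real datasheet values:

```python
# Hypothetical spec table: real entries would come from
# manufacturer datasheets, not hard-coded values.
KNOWN_SPECS = {
    ("Rotork", "IQ35"): {
        "torque_nm": (100, 1000),   # plausible range (assumed)
        "enclosures": {"IP66", "IP68", "IP66/IP68"},
    },
}

def verify_fields(fields):
    """Cross-check extracted nameplate fields against known model
    specs. Returns (verified, issues)."""
    spec = KNOWN_SPECS.get((fields["manufacturer"], fields["model"]))
    if spec is None:
        return False, ["unknown manufacturer/model pair"]
    issues = []
    lo, hi = spec["torque_nm"]
    if not lo <= fields["torque_nm"] <= hi:
        issues.append("torque outside plausible range for this model")
    if fields["enclosure"] not in spec["enclosures"]:
        issues.append("enclosure rating does not match this model")
    return not issues, issues
```

This is how an OCR misread like an extra digit in the torque value gets caught: the number is readable, but it isn't plausible for the model on the plate.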

03
~26 seconds total

Asset Register Ready

One photo becomes a complete, validated asset record. Ready for your CMMS, your compliance reports, your maintenance planning. No clipboard. No data entry. No errors.

Rotork IQ35: REGISTER READY
Manufacturer: Rotork (VERIFIED)
Model: IQ35 (VERIFIED)
Output Torque: 366 Nm (VERIFIED)
Supply Voltage: 415V 3ph 50Hz
Enclosure: IP66/IP68
Condition: Good

+ 8 additional fields extracted and validated

Confidence: 95%
Time per actuator: ~26s

It's not a product. It's an engine.

Most companies build one tool that does one thing. We built a learning system that adapts to whatever you point it at. Network traffic. Equipment nameplates. Process data. Supply chains. The domain changes. The architecture doesn't.

Traditional Approach

Architecture

Build a separate model for each problem. Retrain from scratch. Different teams, different tools, different codebases.

Deployment

Months of data collection. Labelling. Training. Validation. By the time it's ready, the problem has changed.

Adaptability

None. Frozen after training. New data means new training cycle.

Our Approach

Architecture

One engine. Drop it into a new domain. It learns from the first interaction. No pretraining required.

Deployment

Point it at the problem and let it run. Baseline established in hours, not months. Getting smarter every minute.

Adaptability

Continuous. Every interaction refines the model. It doesn't just detect — it learns why, and carries that forward.

What would you point it at?

We're looking for the next domain. If you've got a problem where pattern recognition would change everything, we should talk.

Start a conversation