The deployment backend for physical AI
Infrastructure for robot fleet data.
DataCore captures multi-modal recordings, retrieves synchronized sensor and log slices, and produces reproducible datasets for debugging and training.
It fits into your existing stack and can add higher-level workflow modules where they help.
We are onboarding a small set of design partners for 2026 pilots.
Platform
DataCore is a robotics-native backend for data and operations.
It starts with reliable capture and retrieval, then adds the workflow layer teams need as fleets scale.
Plane 1
Data plane (In pilot)
Edge-to-cloud capture, robotics-native cataloging, synchronized slices, and retrieval over unreliable networks.
Plane 2
Processing plane (In progress)
Pipelines, dataset versions, provenance, and exports into your training and analytics stack.
Plane 3
Ops plane (In progress)
Incidents tied to evidence, replay bundles, audit trails, and access controls.
Plane 4
Intelligence plane (Planned)
Automation that increases yield per fleet hour, including QA support, episode extraction, and anomaly surfacing.
Workflows
Two workflows we optimize first
The goal is less time between an incident and a reliable fix.
Incident to replay
Workflow A
When an incident happens, teams need exact context, not a bag file and a screenshot. DataCore stores the recording, retrieves a synchronized slice across sensors, state, and logs, and packages it as a replay bundle your team can run locally.
Outputs: incident object, synchronized slice, replay bundle, annotations
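As a minimal sketch of the idea behind a synchronized slice: given timestamped samples from several streams, select everything inside a window around the incident and group it per stream for replay. All names here (`Sample`, `synchronized_slice`) are illustrative, not DataCore's actual SDK.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    t: float        # timestamp in seconds
    stream: str     # e.g. "lidar", "joint_state", "log"
    payload: bytes

def synchronized_slice(samples, t_incident, before=10.0, after=10.0):
    """Select all samples, across every stream, inside a window around an incident."""
    lo, hi = t_incident - before, t_incident + after
    window = [s for s in samples if lo <= s.t <= hi]
    # Group by stream, time-ordered, so a replay bundle can play each stream back in order.
    bundle = {}
    for s in sorted(window, key=lambda s: s.t):
        bundle.setdefault(s.stream, []).append(s)
    return bundle
```

The point is that the slice is cut by time across all modalities at once, rather than exporting each stream separately and re-aligning by hand.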
Logs to dataset export
Workflow B
Training data often turns into unversioned folders that teams cannot reproduce. DataCore turns selected slices into versioned datasets with provenance so teams can rerun training later and understand what changed.
Outputs: dataset version, lineage and provenance, export to your stack
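One way to make a dataset version reproducible is to derive its identity from its inputs: the same slices, parent version, and parameters always yield the same version id. This is a sketch of that pattern, not DataCore's actual versioning scheme; `dataset_version` and its fields are assumptions.

```python
import hashlib
import json

def dataset_version(slice_ids, parent=None, params=None):
    """Derive a deterministic version id from a manifest of inputs.

    Reruns with identical inputs reproduce the same id, and the
    manifest records lineage (parent version) and provenance (params).
    """
    manifest = {
        "slices": sorted(slice_ids),   # order-independent
        "parent": parent,              # lineage: which version this derives from
        "params": params or {},        # provenance: how it was produced
    }
    blob = json.dumps(manifest, sort_keys=True).encode()
    return {"id": hashlib.sha256(blob).hexdigest()[:12], "manifest": manifest}
```

Content-derived ids are a common design choice here because they make "what changed between v1 and v2" a diff of two manifests rather than folder archaeology.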
These are the first workflows in scope. We expand workflow coverage as pilot deployments move into production.
Engagement model
How we start with new teams
We begin with a scoped deployment that proves operational value in your environment.
Recording reliability
Confirm that critical field recordings arrive complete and usable.
Incident-to-replay latency
Confirm that incident context can be retrieved and replayed fast enough for daily debugging.
Reproducible dataset export
Confirm that exported datasets are versioned, traceable, and reusable for training.
Production transition
Once these outcomes are validated, the same deployment extends directly into production.
System view
Architecture
DataCore is designed around two facts: networks fail and robotics data is multi-modal.
We focus on core primitives and integrate with the rest of your stack.
Edge
- Capture In pilot
- Buffer and upload In pilot
- Controlled interventions In progress
Cloud (DataCore)
- EdgeRelay In pilot
- TypeAtlas In pilot
- Storage and index In pilot
- Retrieval and pipelines In progress
Tooling
- Console In pilot
- APIs and SDKs In pilot
- Operator tools In progress
EDGE
Capture
Capture synchronized multi-modal streams on the robot (e.g., mark a 20s window around a near-miss).
EdgeRelay
Ingestion reliability
Resumable ingestion for unstable connectivity with durable acknowledgements, buffering, and backpressure controls.
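The core of resumable ingestion can be sketched in a few lines: upload chunks in order, advance only past chunks the server has durably acknowledged, and on failure resume from the last acked offset instead of restarting. The function and its retry policy are illustrative assumptions, not EdgeRelay's implementation.

```python
def resumable_upload(chunks, send, acked_offset=0, max_retries=3):
    """Upload chunks in order over an unreliable link.

    `send(offset, chunk)` is assumed to raise ConnectionError on failure
    and to return only after the server has durably persisted the chunk.
    The offset advances only after an ack, so a crash or disconnect can
    always resume from the last acknowledged chunk without data loss.
    """
    offset = acked_offset
    while offset < len(chunks):
        for attempt in range(max_retries):
            try:
                send(offset, chunks[offset])  # server persists, then acks
                break                          # ack received: safe to advance
            except ConnectionError:
                if attempt == max_retries - 1:
                    return offset              # give up for now; resume here later
        offset += 1
    return offset
```

Backpressure in a real system would additionally bound how many chunks sit in the edge buffer, but the ack-before-advance invariant is what makes ingestion resumable.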
TypeAtlas
Portable semantics
TypeAtlas maps payloads to stable type references (TypeRefs), keeping schemas and transforms portable across ROS, internal formats, and visualization tools.
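The mapping idea can be sketched as a registry from format-specific type names to one stable reference, so a ROS message and an equivalent internal format resolve to the same TypeRef. The registry contents and the `typeref://` URI shape are hypothetical examples.

```python
# Hypothetical registry: format-specific type names -> one stable TypeRef.
TYPEREFS = {
    ("ros", "sensor_msgs/msg/PointCloud2"): "typeref://pointcloud/v1",
    ("internal", "LidarFrame"):             "typeref://pointcloud/v1",
    ("ros", "sensor_msgs/msg/Imu"):         "typeref://imu/v1",
}

def resolve(source_format, type_name):
    """Resolve a format-specific type name to its stable TypeRef."""
    ref = TYPEREFS.get((source_format, type_name))
    if ref is None:
        raise KeyError(f"no TypeRef registered for {source_format}:{type_name}")
    return ref
```

Because downstream schemas and transforms key on the TypeRef rather than the source format's type name, they stay portable when a team migrates between ROS versions or internal formats.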
Console
Daily operations surface
Operate from one surface to browse recordings, request synchronized slices, and export artifacts into your existing toolchain.
Controlled intervention primitives
Auditable operator handoffs
Scoped operator interventions with permissions, guardrails, and audit trails tied to incidents and recordings. We do not replace your teleop stack.
Security
Security and deployment options
Security and deployment are explicit: least-privilege access, retention controls, and audit trails tied to recordings.
- Projects are isolated by default
- Access is scoped with RBAC, API keys, and device identity
- Actions and artifacts link back to recordings
- Retention and deletion rules are explicit
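A least-privilege check of the kind described above can be sketched as: an action is allowed only if some grant scopes that action to that principal in that project. The grant shape here is an assumption for illustration, not DataCore's access-control model.

```python
def authorized(principal, action, project, grants):
    """Allow an action only if a grant scopes it to this principal and project.

    Default-deny: with no matching grant, the answer is False, which is
    what keeps projects isolated by default.
    """
    return any(
        g["principal"] == principal
        and g["project"] == project
        and action in g["actions"]
        for g in grants
    )
```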
Deployment
Deployment options: managed cloud, customer VPC or hybrid, and on-prem components when connectivity or regulation requires it.
If you have data residency, retention, or audit requirements, we scope them in week one of the pilot.
Execution plan
Roadmap (next 12 months)
Q1 2026
Pilot operations
- Standard pilot scope and success metrics
- Reliability and query performance instrumentation
- First connector set for target customer profiles
Q2 2026
Incident loop v1
- Incident objects and replay bundles
- RBAC baseline and audit primitives
- Retention controls for production pilots
Q3 2026
DatasetOps beta
- Dataset versions and exports
- Lineage and provenance UI
- Initial production expansion paths
Q4 2026
Scale
- Connector coverage and hardened operations
- DatasetOps as a standard add-on
- Higher throughput for larger fleets
Qualification
Is DataCore a fit?
DataCore is usually a fit for teams operating outside controlled networks with multi-sensor data and a training loop that needs reproducible datasets.
Good fit if you…
- Operate robots in the field, not only in controlled lab networks
- Depend on synchronized sensor, state, and log retrieval for debugging
- Need reproducible dataset versions, not one-off curation scripts
- Want auditable workflows as fleet size and team size grow
Probably not a fit if you only need…
- A labeling UI or a single-machine local store
Next step
Request a pilot
Leave your email and a short note about your fleet setup. If there is a fit, we will send a proposed pilot scope.