About

We build backend infrastructure for robotics deployments.

Most robotics teams don't stall on model ideas. They stall in the operational loop: collecting trustworthy data, reproducing failures, and shipping fixes without guesswork.

Thesis

The deployment gap is operational

Research demos keep improving. But real deployments still fail on the long tail, because the learning loop breaks under real constraints.

When a demo looks good, the production questions are unglamorous: what happens when lighting changes, a camera shifts, the network drops, or an operator makes a mistake? Did you capture the right data, and did it arrive intact? When you have weeks of recordings, can you aggregate them into something you can browse, search, and slice again later?

Most teams end up stitching together a fragmented stack: loggers, object storage, ad-hoc scripts, a visualization tool, a training pipeline, and a spreadsheet of “what happened.” It works for a while. Then the fleet grows, the data grows, and nobody can keep the system coherent. People argue about what was collected, what was dropped, and which version of a dataset a result came from.

We think the missing layer is a deployment backend: a data + ops system designed for robotics semantics and edge reality. Its foundations are reliable collection, aggregation you can trust (sessions, manifests, and metadata), and fast retrieval.

Collect → Curate → Learn → Deploy → Monitor → Improve → repeat.

Compounding isn’t automatic. If failures force human intervention, intervention drives cost. Cost limits deployment scale. Limited scale means less real data. Less real data keeps the long tail unsolved.

The way out is to make the loop operable: capture data reliably in the real world, aggregate it into coherent recordings and datasets, and ship improvements safely.

That’s what we’re building.

Scope

What we build

DataCore

In pilot

The data and retrieval backbone: reliable edge-to-cloud capture, robotics-native indexing, and synchronized slices for debugging and training.

Programs (with partners)

In progress

Partner deployments that test reliability and retrieval performance in production environments. These are standardized product engagements, not custom services.

Deployments

Planned

Over time, we expect to run selected deployments on the same stack to harden it under real constraints while staying focused on infrastructure.

Operating model

Principles

1. Build for messy reality

If it only works on a good network, it won’t work where robots are.

2. Make every fleet hour improve the next release

Data should compound. Debugging should be fast. Datasets should be reproducible.

3. Design for autonomy with controlled interventions

Humans stay in the loop when systems need help. Interventions should be deliberate, scoped, and auditable.

4. Minimize per-deployment overhead

Robotics teams shouldn’t rebuild the same data plumbing at every company.

Team

Founders

Backgrounds in distributed systems and robotics.

Cristian Meo

Cofounder, CEO

Ph.D. in robotics and generative AI.

Alejandro Daniel Noel

Cofounder, CTO

Ex-Google Cloud engineer.

Join us

Careers

We’re looking for exceptional engineers who want to push the boundaries of robotics and physical AI. You’ll work closely with us, own real systems, and ship fast.

Senior Software Engineer, Distributed Systems (Rust)

Full-time · Delft | Zurich | Remote

Build the Rust backend for reliable edge-to-cloud ingestion, indexing, and synchronized retrieval for robot fleets.

Learn more (PDF)

Robotics + ML Research Engineer, Data & Models

Full-time · Delft | Zurich | Remote

Run the loop: collect multi-modal robot data, build evaluations, and train models that stress-test and augment DataCore.

Learn more (PDF)

We’re building a high-trust culture with real ownership and no corporate layers. If you’ve built real systems and care about reliability, reach out.