Article

Mar 26, 2026

What If You Could Ask Your Company: What Actually Worked in 2018?

There’s a moment that happens inside almost every company once it starts taking AI seriously. A team is trying to build something new. Maybe it’s a model to predict churn, or an agent that helps support reps, or a trading strategy that adapts faster to market changes. The first instinct is always the same: pull data from Snowflake, maybe pipe in some logs from S3, stitch together what’s available, and start experimenting in Python with Pandas, PyTorch, or whatever stack the team prefers.

And then someone asks a deceptively simple question: “Didn’t we try something like this before?”

That’s where things break. Because the real answer isn’t in Snowflake. It isn’t in your dashboards. It isn’t even in your current data pipelines. The answer exists, but it lives somewhere far less accessible: in backups, in old environments, in historical states of the company that no system was ever designed to query. And today, there is no way to ask that question properly.

The Problem: Your Company Has Memory, But No Way to Use It

Every company already has a complete record of its own decision-making. It’s just fragmented across systems that were never meant to work together.

If you look at a typical stack, it’s surprisingly consistent:

  • Product data lives in databases and gets piped into Snowflake or Databricks

  • Metrics are tracked in tools like Mixpanel or Amplitude

  • Internal conversations sit in Slack, email, and ticketing systems

  • Experiments are scattered across notebooks, MLflow runs, or ad-hoc scripts

  • Old versions of everything quietly accumulate in backup systems like Duplicati

The first four systems are optimized for what’s happening now. They help you analyze the present and maybe the recent past.

Backups are different. They are the only place where the full timeline exists. Every version of a dataset. Every iteration of a model. Every configuration that once ran in production. Every failed experiment that never made it into a dashboard.

But backups were built for disaster recovery, not understanding. So while they contain the most complete version of your company’s history, they are effectively unusable for anything beyond restore.

That creates a strange dynamic.

Teams spend months trying to recreate context that already exists. They rerun experiments that were already run. They debate strategies that were already tested. And when they train AI models, they often rely on partial, cleaned, or external datasets instead of the messy, high-signal history the company actually generated.

The result is not just inefficiency. It’s a structural blind spot.

Your company has memory. It just can’t reason over it.

What “Time Travel AI” Actually Means in Practice

The idea of “time travel AI” sounds abstract until you map it to a real workflow.

Imagine you’re working on a new pricing model for a SaaS product.

Today, your process probably looks like this:

You pull current customer data from Snowflake. You join it with recent product usage from your analytics tools. Maybe you enrich it with a third-party dataset. Then you train a model, evaluate it, and iterate.

If you want historical context, you’re limited to whatever structured data was preserved and cleaned enough to make it into your warehouse. Anything outside that boundary is effectively gone.

Now imagine a different workflow.

Instead of starting with a partial snapshot, you start with the question:

“What pricing strategies have we tried before, and what actually worked?”

Duplicati makes it possible to answer that question directly by turning backups into a queryable, versioned data layer.

Concretely, that means:

  • Pulling past versions of your pricing tables, not just the current one

  • Reconstructing the exact datasets that fed previous experiments

  • Accessing historical support tickets and customer feedback tied to those changes

  • Recovering the model outputs or heuristics that were used at the time

Instead of guessing, you can recreate the environment as it existed in 2018, or 2021, or any specific moment that matters.

Not just the data, but the state of the system.

This is where the shift happens. You’re no longer training a model on a static dataset. You’re training it on a sequence of decisions, outcomes, and iterations that actually happened inside your company.

That is a fundamentally different input.

From Backups to a Digital Twin of Decision-Making

When you structure backup data properly, something interesting emerges over time.

You don’t just have files and tables. You have a timeline.

Each point in time represents:

  • The data that existed

  • The models or logic that were applied

  • The decisions that were made

  • The outcomes that followed

Duplicati turns this into a continuously updated, versioned dataset that AI systems can actually use.

In practice, that might look like:

  • Exporting backup data into Parquet or Delta Lake so it can be queried alongside Snowflake

  • Using tools like MLflow or Weights & Biases to tie experiments back to specific historical states

  • Layering a retrieval system on top so models can reference similar past scenarios during training
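As a sketch of what that versioned layer might look like once exported, the snippet below keeps every snapshot in one frame keyed by a `snapshot_ts` column and answers "as of" queries against it. The schema, timestamps, and prices are hypothetical; in practice each snapshot would be a Parquet or Delta partition queried alongside the warehouse rather than an in-memory frame.

```python
# Sketch: a versioned dataset layer, one row per (snapshot_ts, record).
# In practice each snapshot would be a Parquet/Delta partition
# (e.g. partitioned by snapshot_ts) queried next to Snowflake;
# here it is a single in-memory frame with invented values.
import pandas as pd

versioned = pd.DataFrame({
    "snapshot_ts": pd.to_datetime(
        ["2018-06-01", "2018-06-01", "2021-03-01", "2021-03-01"]),
    "plan": ["basic", "pro", "basic", "pro"],
    "monthly_price": [9, 29, 12, 39],
})

def as_of(df, ts):
    """Return the latest snapshot at or before `ts` (a time-travel read)."""
    ts = pd.Timestamp(ts)
    eligible = df[df["snapshot_ts"] <= ts]
    latest = eligible["snapshot_ts"].max()
    return eligible[eligible["snapshot_ts"] == latest]

print(as_of(versioned, "2019-01-01"))  # the 2018 view of pricing
```

An experiment tracker like MLflow would then only need to record the `snapshot_ts` used for a run to make that run reproducible against the exact historical state.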

Once that infrastructure is in place, you can start doing things that were previously impossible.

You can simulate decisions against past conditions. You can ask how a model trained today would have performed using the exact inputs from years ago. You can identify patterns not just in data, but in how your organization behaves over time.
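Simulating decisions against past conditions can be sketched as a simple backtest: apply today's rule to the inputs reconstructed from each historical snapshot and compare against the outcomes that actually followed. The churn heuristic and the data here are invented for illustration; real snapshots would come from the restored datasets described above.

```python
# Sketch: backtesting today's churn rule against historical snapshots.
# Each snapshot pairs the inputs that existed at that time with the
# outcome observed afterwards; all values are illustrative.

snapshots = {
    "2018": [{"logins_30d": 2, "churned": True},
             {"logins_30d": 25, "churned": False},
             {"logins_30d": 1, "churned": True}],
    "2021": [{"logins_30d": 2, "churned": False},
             {"logins_30d": 30, "churned": False},
             {"logins_30d": 0, "churned": True}],
}

def predict_churn(customer, threshold=3):
    # Today's rule: flag customers with few recent logins.
    return customer["logins_30d"] < threshold

for year, customers in snapshots.items():
    correct = sum(predict_churn(c) == c["churned"] for c in customers)
    print(f"{year}: {correct}/{len(customers)} correct")
```

In this toy run the rule holds up perfectly on the 2018 snapshot but misfires on 2021, which is exactly the kind of drift a replayable history makes visible before a model ships.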

This is what a digital twin actually is in this context.

Not a static replica of your systems, but a replayable history of how your company thinks, experiments, and evolves.

Why This Replaces Guesswork, Not Just Tools

Most companies try to solve this problem indirectly.

They buy more data. They build more dashboards. They invest in better analytics tools. They assume that more visibility into the present will compensate for the lack of structured history.

It doesn’t.

Because the core issue isn’t access to data. It’s access to context.

External datasets can tell you what is happening in the world. They cannot tell you how your organization responded to similar situations in the past. Dashboards can summarize outcomes, but they strip away the conditions and decisions that produced them.

Time travel AI replaces that gap.

It doesn’t just help you analyze. It lets you ask:

  • When did we try something similar before?

  • What inputs did we use at the time?

  • What decisions did we make, and why?

  • What actually happened afterward?

And crucially, it lets your models learn from those answers.
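Mechanically, answering questions like these is a retrieval problem over the reconstructed history. A toy sketch, using keyword overlap in place of the embeddings a real system would use (the experiment records are invented):

```python
# Sketch: retrieving similar past scenarios from a history index.
# Real systems would embed summaries reconstructed from backups;
# keyword overlap stands in for similarity here.

past_experiments = [
    {"year": 2018, "summary": "annual plan discount pricing experiment"},
    {"year": 2019, "summary": "onboarding email sequence test"},
    {"year": 2021, "summary": "usage based pricing pilot for pro plan"},
]

def similar(question, records, top_k=2):
    q = set(question.lower().split())
    # Rank records by how many question words their summaries share.
    scored = sorted(
        records,
        key=lambda r: len(q & set(r["summary"].split())),
        reverse=True,
    )
    return scored[:top_k]

hits = similar("what pricing strategies have we tried", past_experiments)
print([h["year"] for h in hits])
```

The retrieved records, tied back to their snapshots, are what a model can actually learn from.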

That is not something you can approximate with a cleaner warehouse or a better BI tool. It requires access to the full operational timeline, which only exists in backups.

The Shift: From Storage to Institutional Memory

The biggest mistake in how companies think about backups is that they treat them as insurance.

A cost center. A compliance requirement. Something you only touch when things go wrong.

But if you look at what’s actually inside them, backups are something else entirely.

They are the only system that captures your company’s complete, unfiltered history.

Duplicati’s role is to make that history usable. Not by replacing your existing stack, but by connecting it to the one layer that has always been missing: a versioned, queryable record of everything that has ever happened.

Once that layer exists, the way you build AI changes.

You stop relying purely on snapshots and external data. You start training on your own trajectory as a company. You move from guessing what might work to understanding what has worked, under what conditions, and why.

And at that point, asking “what worked in 2018?” is no longer a thought experiment.

It becomes a normal part of how your systems reason.

Get started for free

Pick your own backend and store encrypted backups of your files anywhere online or offline. For macOS, Windows, and Linux.
