
Can you run dbt Core directly on Tower?
With the dbt + Fivetran merger leaving the industry wondering about the future of dbt Cloud, that's the single most common question we get. What would you accomplish if you could?
You’d be able to keep your dbt models and your SQL transformation logic right next to your Python workloads. You wouldn’t need to jump between environments. You wouldn’t need to maintain yet another system. You could let your team build in one place.
You could even avoid expensive vendor lock-in.
The good news is that this isn’t theory anymore. You can run dbt on Tower today.
Bringing dbt Into Tower
Data engineers use a lot of tools. Some are essential, others inherited. Many overlap. Often, getting them to work together means hours of setup, orchestration, and figuring out which system should own what.
We built dbt support in Tower because we wanted to reduce that overhead.
Running dbt on Tower lets you operate your entire transformation layer in one environment. You deploy your dbt project, run it on Tower’s infrastructure, and immediately connect it with the rest of your data flow.
Want to get started immediately? We’ve published a full example app for you to try.
The example showcases how you can run dbt Core on Tower. The app downloads the Brazilian E-Commerce Public Dataset by Olist and runs dbt against it, as an example of what you can do.
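If you want a feel for what an app like this involves, dbt Core (version 1.5 and later) can be invoked programmatically from Python via its dbtRunner entry point. The sketch below is illustrative rather than lifted from the published example: it assumes the dbt project sits in the app's working directory and skips the dataset download step.

```python
# Minimal sketch: invoking dbt Core programmatically from a Python entrypoint.
# Assumes a dbt project lives in the working directory alongside this script;
# the published example also downloads the Olist dataset first, which is
# omitted here for brevity.
from dbt.cli.main import dbtRunner, dbtRunnerResult


def main() -> None:
    dbt = dbtRunner()

    # Equivalent to running `dbt build` on the command line.
    result: dbtRunnerResult = dbt.invoke(["build", "--project-dir", "."])

    if not result.success:
        raise RuntimeError(f"dbt build failed: {result.exception}")


if __name__ == "__main__":
    main()
```

Deploy a script like this as a Tower app and the same build runs on Tower's managed compute instead of your laptop.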
Why This Matters
dbt has become the standard for SQL-based transformations. Many teams trust and rely on it. You may already have dbt projects sitting in repos waiting to be run.
The problem is that running those projects in production usually requires setting up and maintaining complex cloud infrastructure. That is time-consuming and costly, and it comes with a risk of vendor lock-in.
With Tower, you get another option: keep dbt and run it on open infrastructure.
Transform data using SQL on top of Tower’s managed compute.
Pair dbt models with Python tasks inside a single, unified dataflow (see the sketch after this list).
Write results into Tower tables with Iceberg support.
Keep your entire ETL pipeline in one system without locking yourself into proprietary tooling.
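To make the second point concrete, the same entrypoint can hand dbt's output straight to ordinary Python code. A minimal sketch, assuming the dbt project targets a local DuckDB file called warehouse.duckdb and materializes a model named orders_summary; both names are assumptions for illustration, not part of Tower or dbt:

```python
# Sketch: one dataflow that runs SQL models via dbt, then a Python task.
# The DuckDB target and the `orders_summary` model are assumptions for
# illustration; substitute your own profile and model names.
import duckdb
from dbt.cli.main import dbtRunner


def main() -> None:
    # Step 1: SQL transformations, exactly as dbt would run them anywhere else.
    if not dbtRunner().invoke(["build", "--project-dir", "."]).success:
        raise RuntimeError("dbt build failed")

    # Step 2: an ordinary Python task over the table dbt just materialized.
    con = duckdb.connect("warehouse.duckdb")
    rows = con.execute(
        "SELECT customer_state, COUNT(*) AS orders "
        "FROM orders_summary GROUP BY customer_state "
        "ORDER BY orders DESC LIMIT 5"
    ).fetchall()
    print(rows)


if __name__ == "__main__":
    main()
```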
If a transformation can be expressed in SQL, dbt can run it. And if dbt can run it, Tower can now run it too.
Why We Built This
A number of users asked us directly: “Can I run my dbt workloads on Tower?” These were teams with existing dbt projects who didn’t want to introduce another environment.
We already supported SQL execution, Tower tables, and multi-step dataflows. The missing piece was the ability to bring full dbt projects into that workflow.
Now, teams using Tower can run their dbt projects without changing anything in them, and build and manage complete ETL pipelines in one place.
Use the tools you need, when you need them
In an earlier blog post, we discussed the countless tools in the data ecosystem. Many of them promise simplicity but result in complex integrations with many moving parts.
Our goal is the opposite. We want your data pipelines to live in one place. Running dbt on Tower brings that vision a step closer.
Run dbt on Tower today.