Data engineers deserve better data infrastructure


Oct 7, 2025

I’m Brad Heller and I am a data engineer. Well, technically I’ve always had the title “software engineer,” but the work I’ve always done is data engineering. Before the title “data engineer” even existed, I was building data infrastructure and maintaining pipelines to feed data warehouses at Microsoft and for companies like WebMD. As the CTO or Head of Engineering at my fair share of startups, I’ve built loads of data products and internal reporting tools. And finally, as one of the engineering leads in charge of Snowflake’s control plane, I witnessed firsthand how the data industry has grown up.

I’ve also seen the rise of the cloud. Over my career, I’ve gone from racking and stacking to writing lambdas provisioned with Pulumi. Now, the AI revolution is changing the way we work. This blog post was made a lot easier thanks to OpenAI! 

All this is to say, I know the pains that come with building data infrastructure yourself. It’s why I created Tower with Serhii! Toiling away with infrastructure challenges prevents data engineers from solving the single most critical problem their businesses face: Getting more value from their data.

The DIY infrastructure problem

Data infrastructure is hard to get right

As data engineers, we’re not supposed to be part-time AWS solution architects. But that’s what many of us end up doing. To get even a basic platform running, we’re forced to wade through docs, stitch together services, and then maintain the whole thing against a constant churn of “what comes next.” And that’s before you even touch the long list of business requirements—security, compliance, reliability, auditability—that enterprises expect by default.

And the result? Months of engineering time lost to reinventing the same data infrastructure all of your competitors have already built, instead of focusing on getting more value from your company’s data.

Platform features are even harder

Even if you manage to spin up something serviceable, you’re far from done. A platform isn’t just about running jobs. It needs scheduling, monitoring, logging, alerting, governance, and loads more depending on the business. We see teams trying to extend general-purpose CI/CD tools like GitHub Actions into data platforms.

It works, until it doesn’t. 

Suddenly, you’re duct-taping observability, access control, retries, and data-specific workflows onto a tool that was never built for the job. And let’s not forget, no one gets promoted because of the cool new observability stack they brought online.

Operational overhead starts to add up

Data infrastructure isn’t something you set up once and forget. It’s something you need to keep alive, a ship you have to run tightly. Who patches the base images? Who rotates certificates? Who makes sure the Kubernetes cluster doesn’t fall over at 2 a.m.? Many teams underestimate the day-2 burden of running their own stack.

Existing big data infrastructure solutions are way too expensive

On the other end of the spectrum, the established vendors have an answer: pay us a fortune. Tools like Databricks solve many problems, but at a price point and contract structure that locks you into paying far more than you use. There’s no real middle ground today—you either build and maintain the data infrastructure yourself, or you hand over your budget (and flexibility) to a heavyweight vendor. 

Bottom line: DIY data infrastructure is causing problems.

When you DIY, you build a solution to today’s problems, knowing those pieces can fail tomorrow and cost a fortune to fix, but you don’t have much choice. Large vendors leverage this: they solve the problem and lock in customers. You trade away flexibility in exchange for security.

The reality is, you don’t need to reinvent the wheel one new requirement at a time. You can have a tool that solves today’s problems and prepares you for the future, without needing an emergency budget when the DIY solution no longer works at your scale.

Tower’s stack future-proofs your data team at a fraction of the cost

We started Tower because we believe Python data engineers deserve data infrastructure that works with them, not against them. That belief led us to build a tool that delivers results on a level previously available only to big players with custom-built tooling, with all the benefits:

  • Cloud infrastructure that lets you focus on what matters. With Tower, you don’t need to think about infrastructure. You can focus on building data apps, pipelines, and products in Python while Tower takes care of orchestration, configuration, and execution across storage, compute, and more.

  • Secure, compliant, and flexible from day one. Tower bakes enterprise-grade security, compliance, and reliability into your infrastructure and processes. You get a platform that passes audits without spending six months toiling with IaC and compliance reports.

  • Future-proof by design. We work on the bleeding edge of data engineering and understand why and where things are changing. The single biggest shift in the ecosystem today is the migration towards open table formats like Apache Iceberg. Tower helps accelerate your adoption by giving you a single platform to pull everything into.

  • Bring software engineering best practices to data engineering. Integrated with Git, out-of-the box CI/CD, consistent and reproducible environments, and native observability are just a few ways that Tower helps bring data engineering and software engineering closer together.

  • A better workflow that lets you iterate quickly. With Tower you get reproducible development environments out of the box and a workflow that encourages you to move from your laptop to the cloud as quickly as possible.

  • Pay for what you use, not what the sales guy pressures you into. Tower is consumption-based with transparent pricing. Deploy workloads, pay for the compute you use, and nothing more. Scale down to zero when idle. No oversized contracts, no surprises on your invoices, and a platform that fits your company, no matter its size.

At Tower, we embrace future-forward data infrastructure 

We believe data engineering deserves the same kind of modern developer experience that web developers got with Vercel and Render. A cloud that’s tailored to the workflows of Python data engineers: fast iteration, built-in observability, and simple deployment. One without the baggage of yesterday’s platforms.

Data engineering doesn’t need to be a choice between under-engineering and over-paying. Tower gives you a third path: data infrastructure that just works, so you can get back to engineering. Try it out!

By the way, if you want to see how one of our energy customers applied these principles in their most recent project and more than doubled their execution speed, check out our a-Gnostics/Tower case study.

© Tower Computing 2025. All rights reserved

Data Engineering for fast-growing startups and enterprise teams.
