SOLUTION

Deploy Data Agents

What It Is

Teams building data agents need infrastructure that lets them connect language models to reliable, up-to-date business data, iterate quickly, and deploy safely to production. Tower provides a runtime and deployment layer for data agents that works with Iceberg lakehouses and supports a wide range of models, from cloud-hosted 1T+ parameter LLMs to locally running small language models (SLMs). Tower is designed for engineering teams that want to evaluate and operate data agents without building custom infrastructure for environments, orchestration, and observability.

Who It's For

Teams that are evaluating, building, or operating data agents in production, including:

Data Analytics and BI teams

Building natural-language “ask the data” interfaces on top of analytical datasets.

Data Engineering and Data Platform teams

Investigating pipeline failures by correlating deployments, schema changes, data volumes, and job logs.

Customer Support and Customer Success

Assembling customer profiles from incident history, product usage, and entitlements to draft support responses.

Sales and Revenue Operations

Generating call briefs from usage data, billing records, invoices, renewal dates, and recent emails or meetings.

Product Management and Growth

Analyzing KPI changes by correlating product releases, marketing activity, and customer behavior, then drilling into drivers through iterative “why” questions.

IT Operations

Correlating telemetry, incidents, and change logs to identify root causes of SLA violations.

How Tower Helps with Data Agent Deployments

Tower provides infrastructure and tooling for developing and operating data agents:

Connect agents to fresh business data stored in Iceberg-based lakehouses

Run and compare multiple model and prompt versions in a scalable cloud environment

Separate development, testing, and production environments

Deploy agents either on Tower-managed infrastructure or into your own cloud or on-prem environment

Support local development on your own hardware, including local GPUs

Provide centralized observability (logs and metrics) across both Tower-managed and self-hosted deployments

Use code-first orchestration compatible with major agent tool-calling frameworks like LangChain

Inspect and visualize tool-call dependencies to understand agent behavior and data flow

The goal is to make experimentation cheap, deployment repeatable, and production behavior observable without locking teams into a specific model or hosting setup.
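To make the code-first, tool-calling pattern above concrete, here is a minimal sketch of a data agent that routes a natural-language question to a registered tool. All names here (Tool, DataAgent, revenue_tool) are illustrative, not Tower's actual API; in production the model would select the tool and the tool would query an Iceberg table, while this stub matches on keywords and returns canned data so it runs anywhere.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Tool:
    """A named capability the agent can invoke (hypothetical shape)."""
    name: str
    description: str
    run: Callable[[str], str]


class DataAgent:
    """Routes a question to a registered tool.

    A real agent would let the language model pick the tool from the
    descriptions; here we match on keywords to keep the sketch
    self-contained and runnable.
    """

    def __init__(self) -> None:
        self.tools: dict[str, Tool] = {}

    def register(self, tool: Tool) -> None:
        self.tools[tool.name] = tool

    def ask(self, question: str) -> str:
        # Naive routing: pick the first tool whose name appears in the question.
        for tool in self.tools.values():
            if tool.name in question.lower():
                return tool.run(question)
        return "no tool matched"


# Hypothetical tool that would scan a lakehouse table in production;
# the stub returns a fixed answer so the example has no dependencies.
revenue_tool = Tool(
    name="revenue",
    description="Look up monthly revenue from the lakehouse.",
    run=lambda q: "March revenue: $1.2M (stub)",
)

agent = DataAgent()
agent.register(revenue_tool)
print(agent.ask("What was revenue in March?"))  # → March revenue: $1.2M (stub)
```

The point of the pattern is that tools are plain code: swapping the stub for a real lakehouse query, or the keyword router for a model-driven one, changes the tool body without changing the agent loop.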

Featured Blogs

The Tower MCP Server - vibe engineering from zero to App

Deploying AI Agents to Tower and teaching them to “speak Iceberg”

Preparing Your AI Agents for the Iceberg Age

The Hidden Headaches of LLM Inference for App Developers

Tower Supercharges LLM Inference for App Developers

Featured Talks

01

Surviving the Agentic AI Hype with Small Language Models

December 2025

PyData Boston & Python Summit Warsaw

02

Preparing your AI Agents for the Ice(berg) Age

June 2025

AI + Data @ Scale, Santa Clara, CA

03

Local and Serverless DeepSeek R1 on Iceberg Lakehouses

May 2025

Iceberg Summit 2025

Featured Examples

Data Agent for Stock Data Retrieval

Stock trade recommendations using LLMs and ticker data in Iceberg…

Develop with DeepSeek R1 on local GPUs, deploy with serverless…

Power Your Team with Tower

Tower gives you a reliable, open lakehouse built on Apache Iceberg and compatible with Snowflake, Spark, and what comes next.