
Product News

Databricks has built an Excel add-in.
Here is an honest look at what that means.

On March 2, 2026, Databricks released its own Excel connector into Public Preview. We think it is worth examining clearly — including where it competes well and where it doesn’t.

Exponam, LLC  ·  March 2026

Databricks entering the Excel connectivity market is meaningful market validation. It confirms what Exponam has built a company around: governed, business-user access to lakehouse data from Excel is not an edge case or a transitional requirement. It is a permanent, strategically important capability that enterprises need.

We have spent three years in production enterprise deployments solving exactly this problem. Databricks has spent considerably more on engineering and marketing resources than we have. So we want to give an honest account of where the new product performs, where it falls short, and which product is the right choice for your situation.

What Databricks built

The Databricks Excel Add-in uses the SQL warehouse endpoint — the same compute layer that powers Databricks SQL notebooks and BI tool connections — to query data and return results into Excel. Users browse Unity Catalog, apply filters, and import data into worksheets through a task pane interface. They can also write SQL directly using =DATABRICKS.SQL() cell functions. Unity Catalog governance applies throughout. The product is a web-based Office Add-in, running on Windows, macOS, and Excel for the web using the same cross-platform JavaScript framework.

For a first-version public preview, it is a credible product. It will serve some organizations well — particularly small teams already operating inside a Databricks workspace who need occasional data in a spreadsheet.

Where Exponam.Connect is different

The foundational architectural difference is the data retrieval path. Exponam.Connect offers users a choice: access data via Delta Sharing — the open protocol that retrieves compressed Parquet files directly from cloud object storage, bypassing Databricks compute entirely — or via a SQL warehouse endpoint when full SQL syntax is required. The SQL endpoint path, currently in private preview, gives users arbitrary joins, window functions, and subqueries. The Delta Sharing path gives them speed and zero compute cost.

A note on Delta Sharing availability. The zero-compute path requires that Delta Sharing be enabled in your Databricks environment. Most current enterprise deployments have it active, but it is subject to internal approval processes and is not universal. Where it has not yet been approved, the SQL endpoint path is available as an alternative.

Databricks’ add-in uses the SQL endpoint only — there is no zero-compute access mode. Every data pull consumes DBUs.

On Windows, Exponam.Connect uses the VSTO framework — the native Microsoft COM-based integration layer — rather than the web-based Office Add-in model Databricks uses on all platforms. The practical result: data writes into Excel are significantly faster for large datasets, as the trial results below demonstrate.

A direct comparison

The table below covers the dimensions that matter most in an enterprise evaluation. We have tried to be accurate about both products, including areas where they are genuinely equivalent or where Databricks holds an advantage.

Installation
• Exponam.Connect: Installer link at exponam.com. Under five minutes to first data access.
• Databricks Add-in: Download manifest XML → edit file → create shared folder → configure Trust Center → restart Excel. 10+ steps; fails silently in many corporate IT environments. The author completed installation only after multiple attempts and substantial troubleshooting.
• Verdict: ▲ Exponam advantage

Data retrieval
• Exponam.Connect: User choice of Delta Sharing (zero DBUs, direct Parquet from cloud storage) or SQL endpoint (full SQL syntax, compute consumed). Both paths are Unity Catalog governed.
• Databricks Add-in: SQL warehouse endpoint only. Every import consumes DBUs.
• Verdict: ▲ Exponam advantage

Performance & data volume
• Exponam.Connect: ~11 seconds per 1M rows on Windows. Full 2,879,789-row trial dataset retrieved successfully via both the Delta Sharing and SQL endpoint paths.
• Databricks Add-in: No published benchmarks. Trial testing returned 948,650 rows from the same 2.88M-row dataset before producing a non-descriptive error. SQL warehouse cold start adds 2–5 minutes for non-serverless configurations. Data volume limit not documented; validate before deployment.
• Verdict: ▲ Exponam advantage on Windows

Cost structure
• Exponam.Connect: Transparent, volume-tiered license: $10/user/month at 100 users, scaling to $0.50/user/month at 100,000. Fixed and predictable.
• Databricks Add-in: No add-in license fee, but trial testing observed 16 DBU ($14.58) consumed in a single day of casual analyst use. At a 5 DBU/day average, that is roughly $70/user/month at list rate.
• Verdict: ▲ Exponam advantage at scale; ▲ Databricks advantage for very small teams only

SQL query
• Exponam.Connect: Dynamic SQL editor (private preview) via the SQL endpoint. Unity Catalog Views always available.
• Databricks Add-in: SQL query editor in the task pane; =DATABRICKS.SQL() and =DATABRICKS.Table() cell functions.
• Verdict: — Parity

External access
• Exponam.Connect: .share files work without Databricks workspace accounts. Partners and clients access governed data without workspace provisioning.
• Databricks Add-in: Requires a Databricks workspace account for every user.
• Verdict: ▲ Exponam advantage

ML model execution
• Exponam.Connect: Exponam.AI runs Databricks ML Serving Endpoints as native Excel formulas (Windows/VSTO edition only).
• Databricks Add-in: Not available.
• Verdict: ▲ Exponam advantage — no equivalent

UC governance
• Exponam.Connect: Full UC governance on both the Delta Sharing and SQL endpoint paths.
• Databricks Add-in: Full UC governance.
• Verdict: — Parity

Metric Views
• Exponam.Connect: Accessible via the SQL endpoint path.
• Databricks Add-in: Supported.
• Verdict: — Parity

Mac / web
• Exponam.Connect: Supported. Installation is straightforward but somewhat more involved than on Windows. Exponam.AI and the advanced SQL editor are not yet available on Mac.
• Databricks Add-in: Full support on macOS and Excel for the web. Mac installation requires locating a hidden Library directory.
• Verdict: — Comparable on Mac

UI / usability
• Exponam.Connect: Designed for business users. Controls and filters mirror Excel’s own conventions — no new mental model; immediately familiar to anyone who lives in Excel.
• Databricks Add-in: Designed by technologists. Even the basic table-select workflow resembles a BI platform’s report configuration interface; comfortable for technical users, but it carries a learning curve for typical business users.
• Verdict: ▲ Exponam advantage for business users; ▲ Databricks advantage for technical audiences

Platform scope
• Exponam.Connect: Databricks today. Snowflake and Azure Fabric on the roadmap.
• Databricks Add-in: Databricks only.
• Verdict: ▲ Exponam advantage for multi-cloud

What we observed in direct testing

In testing conducted for this comparison, both products were run against the same large enterprise-scale dataset. The results were unambiguous.

• Exponam.Connect — Delta Sharing: 2,879,789 rows returned. Complete; full dataset retrieved successfully.
• Exponam.Connect — SQL endpoint: 2,879,789 rows returned. Complete; full dataset retrieved successfully.
• Databricks Excel Add-in: 948,650 rows returned. Incomplete; retrieval stopped at approximately one-third of the dataset and returned a non-descriptive error with no recovery path offered.

Dataset: a large-scale enterprise transaction table available as a Delta Sharing trial dataset, containing approximately 2.88 million rows. Results from a single test run on standard hardware; individual results may vary by network conditions, warehouse configuration, and add-in version.

The Databricks add-in’s error was non-descriptive — no indication of whether it hit a row limit, a memory ceiling, or a query timeout, and no guidance on how to proceed. Organizations planning to use the add-in for large datasets should validate against their specific data volumes before deployment.

Designed for different users

The two products reveal their intended audiences the moment you use them. This is not a criticism of either — it is a useful signal for organizations deciding which to deploy and to whom.

Exponam.Connect was designed by business users for business users. The ribbon integration, task pane layout, and interaction model are all built to mirror what Excel users already know. Filters are applied exactly as they are on a native Excel sheet — same gestures, same mental model, no new vocabulary to learn. A finance analyst or operations manager can be productive within minutes of installation, without training or documentation. The design assumption is that Excel is home, and the add-in should feel like a natural extension of it.

The Databricks Excel Add-in was designed by technologists. Even using the basic table-select option — the simplest path through the product — the interface looks and feels like a BI platform’s report modification UI. It is parameter-driven, query-centric, and structured around concepts that are second nature to a data engineer or SQL analyst but unfamiliar to the typical Excel business user. There is nothing wrong with this for its intended audience. A Databricks-credentialed analyst who spends time in notebooks and SQL editors will feel at home. Someone who does not will not.

The practical deployment implication. In most enterprises, the population of Excel business users — finance, operations, supply chain, compliance, planning — outnumbers the technical Databricks user base by a wide margin. Both products target that broader population in principle. In practice, Exponam.Connect is the product they will actually use without hand-holding. The Databricks add-in will require change management and training investment that its “no additional license” pricing does not account for.

The cost question, with real numbers

The Databricks add-in carries no license fee, which looks attractive on first comparison. During testing, we ran four to five query imports in a single working day — casual, representative analyst use. The result: 16 DBU consumed, equating to $14.58 at the $0.70/DBU SQL Serverless list rate. For one analyst. One day. Light load.

Extrapolating to a realistic monthly cost. At a conservative 5 DBU/analyst/day and $0.70/DBU, that is $3.50/analyst/day — approximately $70/analyst/month on a standard 20-working-day basis. At 10 DBU/day the figure doubles to $140/analyst/month. These are deliberate query costs only; formula recalculation events in shared workbooks can add further unplanned consumption on top.
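As a sanity check on the extrapolation above, the arithmetic can be sketched in a few lines. The rates and working-day count are the article's own assumptions (the $0.70/DBU SQL Serverless list rate and a 20-day month), not guarantees of actual spend:

```python
# Illustrative monthly-cost extrapolation using the figures cited above.
DBU_LIST_RATE = 0.70   # $ per DBU, SQL Serverless list rate (assumption)
WORKING_DAYS = 20      # standard working days per month (assumption)

def monthly_cost_per_analyst(dbu_per_day: float,
                             rate: float = DBU_LIST_RATE,
                             days: int = WORKING_DAYS) -> float:
    """Projected monthly DBU spend for one analyst at a steady daily rate."""
    return dbu_per_day * rate * days

print(f"${monthly_cost_per_analyst(5):.2f}/analyst/month")   # 5 DBU/day  -> $70.00
print(f"${monthly_cost_per_analyst(10):.2f}/analyst/month")  # 10 DBU/day -> $140.00
```

Contracted DBU rates are typically discounted from list, so real deployments should substitute their negotiated rate.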

Against that, here is what Exponam.Connect costs at the same user counts:

User count      Exponam ($/mo)   Databricks @ $25/user †   Databricks @ $70/user ‡
100 users       $1,000           $2,500                    $7,000
1,000 users     $5,000           $25,000                   $70,000
10,000 users    $10,000          $250,000                  $700,000

† Conservative floor estimate. ‡ Based on observed trial rate of 5 DBU/analyst/day at $0.70/DBU (SQL Serverless list) × 20 working days. Actual costs vary by query frequency, warehouse type, dataset size, and contracted DBU rates. Formula recalculation in shared workbooks can add further unplanned consumption.

At the observed trial rate, Exponam.Connect is 7× cheaper at 100 users, 14× cheaper at 1,000, and 70× cheaper at 10,000. Even the conservative $25/user floor produces multiples of 2.5×, 5×, and 25×. One caveat: for teams of fewer than 100 users, Exponam.Connect’s $1,000/month minimum is a meaningful consideration, and in that range the Databricks add-in’s absence of a license fee is a genuine advantage — if DBU consumption is carefully monitored.
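The multiples quoted above follow directly from the table. A minimal sketch of that arithmetic, using the article's figures (per-user rates and tiered license totals are as stated in the table; nothing here is a measured result):

```python
# Cost multiples from the comparison table above (illustrative arithmetic only).
exponam_monthly = {100: 1_000, 1_000: 5_000, 10_000: 10_000}  # $/mo, tiered license

def databricks_monthly(users: int, per_user: float) -> float:
    """Projected monthly DBU spend at a flat per-user monthly rate."""
    return users * per_user

for users, exponam in exponam_monthly.items():
    observed = databricks_monthly(users, 70)  # observed trial rate ($70/user/mo)
    floor = databricks_monthly(users, 25)     # conservative floor ($25/user/mo)
    print(f"{users:>6} users: {observed / exponam:.1f}x observed, "
          f"{floor / exponam:.1f}x floor")
```

Running this reproduces the 7×/14×/70× observed-rate multiples and the 2.5×/5×/25× floor-rate multiples cited in the text.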

The shared-workbook recalculation risk. Data imported via =DATABRICKS.Table() or =DATABRICKS.SQL() lives as a formula in the worksheet. Excel formula recalculation — triggered by workbook operations, third-party add-ins, or Ctrl+Alt+F9 — can silently re-execute those queries against a running SQL warehouse. In workbooks that circulate widely, this adds unbudgeted compute spend that is difficult to detect until the invoice arrives. Exponam.Connect imports data as static cell values; refresh is managed entirely through the ribbon and is not affected by any Excel recalculation event.
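The scale of that risk can be modeled with simple arithmetic. Every input below is a hypothetical example chosen for illustration, not a measurement from the testing described in this article:

```python
# Illustrative model of unbudgeted recalculation spend in shared workbooks.
# All inputs are hypothetical; substitute your own circulation and query sizes.

def recalc_spend(workbook_copies: int, recalcs_per_month: int,
                 dbu_per_query: float, rate: float = 0.70) -> float:
    """Monthly spend from silent query re-execution across circulated copies."""
    return workbook_copies * recalcs_per_month * dbu_per_query * rate

# e.g. one workbook circulated to 25 people, each triggering 4 full
# recalculations a month, at 2 DBU per re-executed query:
print(f"${recalc_spend(25, 4, 2):.2f}/month of unplanned compute")  # $140.00
```

The point of the model is that the spend scales with circulation, which the workbook author neither sees nor controls.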

Where Databricks has a genuine advantage

We said we would be honest, so here it is.

Small teams with existing Databricks credentials. If your team is fewer than 100 people, already operating inside a Databricks workspace, and primarily running SQL-scale queries rather than million-row imports, the Databricks add-in is reasonable. No additional license, solid SQL functionality, the same Unity Catalog governance your team already uses.

Mac-first organizations. On macOS, both products use the Office Web Add-in framework and the picture is more balanced. Exponam.Connect’s Delta Sharing cost advantage still applies, but the VSTO performance advantage does not. And Exponam.AI and the advanced SQL editor are not yet available on Exponam.Connect’s Mac edition. Mac-heavy deployments should evaluate both products carefully.

Platform confidence. Databricks is a $62B company releasing a first-party product. Enterprises weigh vendor stability and support structure, and Databricks carries weight in that evaluation.

What comes next from Exponam

Databricks’ entry does not change our roadmap — it confirms it. Three capabilities currently in development:

Natural language / AI query. Users will be able to describe their data need in plain English and receive governed, reproducible results — with a private or BYO LLM option for regulated environments where data cannot leave controlled infrastructure.

Automatic path optimization. AI-driven routing will select between Delta Sharing and the SQL endpoint on each query, optimizing automatically for cost and performance. Users won’t need to choose — the system will.

Multi-cloud expansion. Snowflake and Azure Fabric support are on the near-term roadmap. Databricks will build a good connector for Databricks data. They will not build one for Snowflake. We will.
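The path-optimization idea above can be illustrated with a deliberately naive, rule-based sketch. This is hypothetical: Exponam's planned router is AI-driven and its logic is not public, and the keyword list and function name here are invented for the example. It only shows the underlying cost/capability trade-off between the two paths:

```python
# Hypothetical rule-based sketch of query-path selection (illustration only;
# not Exponam's implementation). Naive keyword matching, for clarity.
SQL_ONLY_KEYWORDS = {"JOIN", "OVER", "UNION", "GROUP BY", "HAVING"}

def choose_path(query: str) -> str:
    """Route simple scans/filters to the zero-compute Delta Sharing path;
    send full-SQL constructs to the SQL warehouse endpoint."""
    q = query.upper()
    if any(kw in q for kw in SQL_ONLY_KEYWORDS) or "(SELECT" in q:
        return "sql_endpoint"   # full SQL syntax, consumes DBUs
    return "delta_sharing"      # direct Parquet reads, zero DBUs

print(choose_path("SELECT * FROM sales WHERE region = 'EMEA'"))
# -> delta_sharing
print(choose_path("SELECT a.id FROM t a JOIN u b ON a.id = b.id"))
# -> sql_endpoint
```

A production router would parse the SQL properly and weigh data volume and warehouse state as well, but the routing decision itself reduces to this shape: pay compute only when the query actually needs it.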

The bottom line

Databricks entering the space is good news for the market. It validates the problem and raises awareness that governed, no-code lakehouse data access from Excel is available. Some of that awareness will land on their product. Some will land on ours.

For organizations with more than 100 users, Windows-primary deployments, cost sensitivity at scale, external data sharing needs, or multi-cloud environments: Exponam.Connect is the stronger choice on the merits. For small teams already inside Databricks’ ecosystem who need light-duty access: the Databricks add-in is a reasonable starting point — with the caveat that data volume limits and DBU consumption should be validated before any broader rollout.

The full technical comparison — covering installation, architecture, performance, cost, governance, usability, and roadmap in depth — is available below.

Full Technical White Paper

Exponam.Connect vs. the Databricks Excel Add-in — comprehensive comparison. March 2026.

Download White Paper

© 2026 Exponam, LLC  ·  exponam.com  ·  info@exponam.com  ·  +1.646.360.0110

Exponam is a Databricks Validated Technology Partner

10 Million Rows!

That’s right – Import 10,000,000 rows of Databricks data into Excel!  If you haven’t tried the Exponam.Connect Excel Add-in, now is the time.  This is the fastest, cheapest, and easiest way for ALL users to access Databricks data.

NO CODE.  NO CONFIGURATIONS.  NO REPORT DESIGNER.  Just EASY and FAST imports with NO Databricks COMPUTE!

Contact us for a trial.

Exponam & Apache Spark


Exponam’s direct integration with Apache Spark, including Databricks’ commercial offering, improves the time-to-value of the quantitative, analytic, and machine learning results available with Spark.  Exponam’s integration delivers the following advantages:

  • A native data source for loading and saving Exponam .BIG files

    The native data source is built with Exponam’s powerful core technology, which dynamically tunes itself to your enterprise Spark clusters’ runtime capabilities.  This ensures lean execution, high performance, and brisk throughput to and from Spark’s internal RDD (resilient distributed dataset) structures.  With Exponam, you can ingest large datasets into Spark from highly compressed import files without wasting space and time.  And you can egress Spark data into a format that is orders of magnitude more compressed than standard delimited formats, allowing much larger datasets to be faithfully preserved for audit and archival needs.
  • Frictionless access with Spark DataFrames

    Exponam data load and save operations are available using the standard DataFrame syntax that data scientists use every day, whether with Scala, Python, or Spark SQL.  Exponam’s default options can be trivially overridden using standard DataFrame options, unleashing the full power of Exponam’s underlying technology: security, file optimization levels, story files, and application-defined supplemental metadata.

    An Exponam file can contain any number of tables, each with its own schema and row-level data.  Each table can be loaded individually, allowing a single Exponam file to transport entire rich repositories of data into Spark.  Exponam’s schemas eliminate the potential ambiguity of inferred schemas and mean that the native representation of objects in RDDs is always optimal and correct.

    Further, Exponam’s save operation with Spark DataFrames allows the flexibility that DataFrame users demand.  Save can be invoked in a cluster-aware fashion, with each node in the cluster generating an output file for its local data only, which can be advantageous for extremely large RDDs.  Alternately, DataFrame results can be coalesced (or glom’ed) through the master node, resulting in a single output file.  The point is that Exponam allows you to use the pattern that best fits your cluster profile and data egress requirements.
  • Data lineage

    Modern data architectures seek to preserve data lineage across disparate products and solutions, an almost insurmountable task when data is moved between traditional silos, compute grids, and data grids.  With Exponam, the provenance of data is integral to the file itself.  This allows solution architectures using Apache Spark to maintain data lineage from ingest through egress, so that the linkage to upstream systems is faithfully preserved.
  • Security

    Standard data exchange formats for Spark require data that is unencrypted at rest.  Exponam, in contrast, is always encrypted at rest, even as it is being loaded into the cluster.  The attack surface for a potential data breach is demonstrably smaller with Exponam.

    Further, Exponam’s default behavior on load operations is to first establish the integrity of the file.  If the file has been tampered with, the load fails with a standard Spark exception, and absolutely no row-level data is generated in Spark.

