
The Problem with “Snowberg”

Using an open lakehouse architecture – one that stores data in a user’s own cloud storage accounts using an open source storage format (like Apache Iceberg) and data catalog – can help organizations keep pace with an ever-evolving analytics ecosystem. Iceberg has continued to grow in popularity due to its flexibility and functionality, and it’s backed and supported by a long list of heavy hitters in the industry, including Netflix, Apple, and Snowflake.

Unfortunately, while Apache Iceberg is all about openness and cross-engine/platform support, Snowflake’s Iceberg implementation (“Snowberg”, if you will) seems to be the opposite. With a couple of added layers of complexity and technical quirks, it forces users to make hard choices that aren’t aligned with the full breadth of Iceberg functionality.

One major choice is whether to use managed or unmanaged Iceberg tables in Snowflake. Here’s a quick rundown of the tradeoffs there:

Managed Iceberg tables

  • Snowflake compute can read/write, but other tools can’t write.
  • If you just use Snowflake’s proprietary Iceberg catalog, you’re locked into a closed ecosystem where only Spark can read your data; none of the other major platforms can read Snowberg tables.
  • If you want interoperability with other Iceberg clients, you have to also use Polaris Catalog, which introduces an additional catalog to provision, govern, and pay for.
  • Snowflake’s managed Iceberg tables don’t support table partitioning, which makes it virtually impossible to create large tables and achieve good performance. Snowflake substitutes its proprietary clustering for partitioning, which makes reads fast on Snowflake, but they’ll still be slow on external engines. To take advantage of that clustering, you’re locked into using Snowflake compute.
  • The small file problem. Iceberg file sizes have to be 16 MB to match Snowflake’s internal proprietary data format for good query performance on Snowflake. So a 1 TB table will have about 65K files, which doesn’t sound like a lot, but a 100 TB table will have roughly 6.5M files. In contrast, the common default target file size for Iceberg writers is 512 MB, 32x bigger than Snowflake managed Iceberg tables.
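The arithmetic behind those file counts is easy to check. A rough sketch (it ignores compression, partial final files, and Iceberg metadata overhead):

```python
# Back-of-envelope file counts for a table at a given target data-file size.
TB = 1024**4
MB = 1024**2

def file_count(table_bytes: int, target_file_bytes: int) -> int:
    """Approximate number of data files needed to hold the table."""
    return table_bytes // target_file_bytes

# Snowflake-managed Iceberg tables target ~16 MB files.
print(file_count(1 * TB, 16 * MB))     # -> 65536   (the ~65K files above)
print(file_count(100 * TB, 16 * MB))   # -> 6553600 (the ~6.5M files above)

# A 512 MB write target, common elsewhere in the Iceberg ecosystem,
# cuts the file count by 32x.
print(file_count(100 * TB, 512 * MB))  # -> 204800
```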

Unmanaged Iceberg tables

  • Other tools can read/write, but now Snowflake can’t write.
  • Requires an external Iceberg catalog (based on the Iceberg REST Catalog spec), e.g. AWS Glue, Apache Polaris, or object storage acting as the catalog.
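To illustrate what the unmanaged path buys you, here is a minimal sketch of an engine-agnostic client (PyIceberg) reading a table through an Iceberg REST catalog. The URI, credential, warehouse, and table names are placeholders, not values from any real deployment:

```python
# Sketch: reading an unmanaged Iceberg table via an external REST catalog
# with PyIceberg. All endpoint/credential values below are placeholders.
from pyiceberg.catalog import load_catalog

catalog = load_catalog(
    "my_rest_catalog",
    **{
        "type": "rest",
        "uri": "https://example.com/api/catalog",     # e.g. a Polaris endpoint
        "credential": "<client-id>:<client-secret>",  # OAuth2 client credentials
        "warehouse": "my_warehouse",
    },
)

# Any REST-capable engine (Spark, Trino, PyIceberg, ...) can load the same table.
table = catalog.load_table("analytics.orders")
df = table.scan().to_pandas()
```

Because the catalog speaks the open REST spec, swapping Glue for Polaris (or any other compliant catalog) is a configuration change rather than a rewrite.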

You might be asking yourself, “shouldn’t Snowflake’s managed Polaris catalog make all this simpler?” Well… not really. Polaris is just a catalog; you still need to configure engines to write to those tables. However, Snowflake itself doesn’t write to Polaris, making it difficult to actually use Snowflake and another tool at the same time. Snowflake’s managed Iceberg tables can only be registered in Polaris as external Iceberg tables, and even then there’s a catch: you have to use Snowflake’s proprietary catalog integration between Snowflake and Polaris to sync your managed Iceberg tables to Polaris. After all of this complexity, you still need to decide whether Snowflake or some other tool gets to write to those tables; you can’t have both.

At Nousot, we frequently help our clients design, build, and operate architectures that take advantage of interoperability. These architectures help us provide innovative analytics solutions, maximizing the value we create for our clients. As the analytics and AI landscape continues to evolve, these open architectures will be critical in allowing organizations to quickly adapt and remain competitive. We hope Snowflake evolves into a true open lakehouse by simplifying its Iceberg support into a single Iceberg table type that allows reads/writes from any engine, supports all Apache Iceberg features, and adopts a single, open catalog that adheres to the Iceberg REST Catalog specification.


* This content was originally published on Nousot.com. Nousot and Lovelytics merged in April 2025.
