Using an open lakehouse architecture – one that stores data in a user’s own cloud storage accounts using an open source table format (like Apache Iceberg) and an open data catalog – can help organizations keep pace with the velocity of change in an ever-evolving analytics ecosystem. Iceberg has continued to grow in popularity due to its flexibility and functionality, and it’s backed and/or supported by a large list of heavy hitters in the industry, including Netflix, Apple, and Snowflake.
Unfortunately, while Apache Iceberg is all about openness and cross-engine/platform support, Snowflake’s Iceberg implementation (“Snowberg”, if you will) seems to be the opposite. With a couple of added layers of complexity and technical quirks, it forces users to make hard choices that don’t seem aligned with the full breadth of Iceberg functionality.
One major choice is whether to use managed or internal Iceberg tables in Snowflake. Here’s a quick rundown of the tradeoffs there:
Managed Iceberg tables
- Snowflake compute can read/write, but other tools can’t write.
- If you just use Snowflake’s proprietary Iceberg catalog, you’re locked into a closed ecosystem where only Spark (via Snowflake’s Iceberg catalog SDK) can read your data; none of the other major platforms can read Snowberg tables. (There’s a minimal Spark read sketch after this list.)
- If you want interoperability with other Iceberg clients, you have to also use Polaris Catalog, which introduces an additional catalog to provision, govern, and pay for.
- Snowflake’s managed Iceberg tables don’t support Iceberg table partitioning, which makes it virtually impossible to create large tables and achieve good performance. Snowflake substitutes its proprietary clustering for partitioning, which makes reads fast on Snowflake, but they’ll still be slow on external engines. To take advantage of this clustering, you’re locked into using Snowflake compute.
- The small file problem. Data files are written at around 16 MB to match Snowflake’s internal proprietary format and keep query performance good on Snowflake. So a 1 TB table will have roughly 65K files, which may not sound like a lot, but a 100 TB table will have about 6.5M files. In contrast, Iceberg’s default target file size is 512 MB, 32x bigger than what Snowflake managed Iceberg tables produce.
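To make the read-only interoperability concrete, here’s a minimal sketch of how an external Spark job might read a Snowflake-managed Iceberg table through Snowflake’s Iceberg catalog SDK. The account URL, credentials, package versions, catalog name, and table identifiers below are placeholders, and the exact JDBC credential property names can vary by driver version, so treat this as a rough starting point rather than a copy-paste recipe.

```python
# Minimal PySpark sketch: read a Snowflake-managed Iceberg table from Spark.
# Account URL, credentials, versions, and table names are placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("read-snowflake-managed-iceberg")
    # Iceberg Spark runtime + Snowflake JDBC driver (versions are illustrative)
    .config("spark.jars.packages",
            "org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.5.2,"
            "net.snowflake:snowflake-jdbc:3.16.1")
    # Register Snowflake's Iceberg catalog SDK as a Spark catalog
    .config("spark.sql.catalog.snowflake_catalog",
            "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.snowflake_catalog.catalog-impl",
            "org.apache.iceberg.snowflake.SnowflakeCatalog")
    .config("spark.sql.catalog.snowflake_catalog.uri",
            "jdbc:snowflake://myaccount.snowflakecomputing.com")  # placeholder account
    # JDBC credential property names are assumptions; verify against your driver docs
    .config("spark.sql.catalog.snowflake_catalog.jdbc.user", "SPARK_READER")
    .config("spark.sql.catalog.snowflake_catalog.jdbc.password", "********")
    .getOrCreate()
)

# Reads work through this catalog; writes don't, because only Snowflake
# compute can write managed Iceberg tables.
spark.sql(
    "SELECT * FROM snowflake_catalog.analytics_db.public.orders LIMIT 10"
).show()
```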
Unmanaged Iceberg tables
- Other tools can read/write, but now Snowflake can’t write.
- Requires an external Iceberg catalog, e.g. AWS Glue, a catalog based on the Iceberg REST Catalog spec such as Apache Polaris, or object storage used as the catalog. (See the Spark write sketch after this list.)
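For comparison, here’s a rough sketch of the externally managed path: another engine (Spark in this example) owns the writes through an external catalog like AWS Glue, and Snowflake is then configured with a catalog integration to read those tables. Bucket, catalog, namespace, and table names are placeholders.

```python
# Minimal PySpark sketch: write an Iceberg table via an external catalog (AWS Glue).
# Bucket, catalog, namespace, and table names are placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("write-external-iceberg")
    .config("spark.jars.packages",
            "org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.5.2,"
            "org.apache.iceberg:iceberg-aws-bundle:1.5.2")
    .config("spark.sql.catalog.glue", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.glue.catalog-impl",
            "org.apache.iceberg.aws.glue.GlueCatalog")
    .config("spark.sql.catalog.glue.warehouse", "s3://my-lakehouse-bucket/warehouse/")
    .config("spark.sql.catalog.glue.io-impl", "org.apache.iceberg.aws.s3.S3FileIO")
    .getOrCreate()
)

# Spark (or any other engine pointed at the same catalog) owns the writes,
# including real Iceberg partitioning on large tables.
spark.sql("""
    CREATE TABLE IF NOT EXISTS glue.analytics.orders (
        order_id BIGINT,
        amount   DOUBLE,
        order_ts TIMESTAMP
    ) USING iceberg
    PARTITIONED BY (days(order_ts))
""")
spark.sql("INSERT INTO glue.analytics.orders VALUES (1, 19.99, current_timestamp())")

# On the Snowflake side, these tables are surfaced read-only through a
# catalog integration pointing at the same catalog and storage location.
```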
You might be asking yourself, “shouldn’t Snowflake’s managed Polaris catalog make all this simpler?” Well… not really. Polaris is just a catalog, and you still need to configure engines to write to those tables. However, Snowflake itself doesn’t write to Polaris, making it difficult to actually use Snowflake and another tool at the same time. Snowflake’s managed Iceberg tables can only be registered in Polaris as external Iceberg tables, and even then there’s a catch: you have to use Snowflake’s proprietary catalog integration between Snowflake and Polaris to sync your managed Iceberg tables to Polaris. All of this complexity, and you still need to decide whether Snowflake or some other tool gets to write to those tables; you can’t have both.
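Assuming you’ve set up that catalog integration, the payoff on the other side is that any Iceberg REST client can at least read the synced tables out of Polaris. Here’s a rough PyIceberg sketch; the endpoint, credentials, scope, warehouse, and table names are all hypothetical, and the exact connection properties vary by Polaris / Snowflake Open Catalog deployment.

```python
# Minimal PyIceberg sketch: read a Snowflake-managed Iceberg table that has been
# synced into Polaris (where it appears as an external Iceberg table).
# Endpoint, credentials, and names are placeholders for illustration only.
from pyiceberg.catalog import load_catalog

catalog = load_catalog(
    "polaris",
    **{
        "type": "rest",
        "uri": "https://myorg-myaccount.snowflakecomputing.com/polaris/api/catalog",
        "credential": "my_client_id:my_client_secret",
        "scope": "PRINCIPAL_ROLE:ALL",
        "warehouse": "analytics_catalog",
    },
)

# Reads are fine from here; writes to the synced (external) table still have to
# happen in Snowflake, which is exactly the either/or tradeoff described above.
orders = catalog.load_table("analytics_db.orders")
print(orders.scan().to_pandas())
```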
At Nousot, we frequently help our clients design, build, and operate architectures that take advantage of interoperability. These architectures help us to provide innovative analytics solutions, maximizing the value we create for our clients. As the analytics and AI landscape continues to evolve, these open architectures will be critical in allowing organizations to quickly adapt and remain competitive. We hope Snowflake evolves to a true open lakehouse by simplifying their Iceberg support into a single Iceberg table type that allows reads/writes from any engine, supports all Apache Iceberg features, and adopts a single, open catalog that adheres to the Iceberg REST Catalog specification.
* This content was originally published on Nousot.com. Nousot and Lovelytics merged in April 2025.
