Databricks to acquire Tabular

Databricks and Tabular will work together towards a joint vision of the open lakehouse.


Databricks has agreed to acquire Tabular, a data management company founded by Ryan Blue, Daniel Weeks, and Jason Reid. By bringing together the original creators of Apache Iceberg™ and Linux Foundation Delta Lake, the two leading open source lakehouse formats, Databricks will lead the way on data compatibility so that organisations are no longer limited by which of these formats their data is in. Databricks intends to work closely with the Delta Lake and Iceberg communities to bring format compatibility to the lakehouse: in the short term, inside Delta Lake UniForm, and in the long term, by evolving toward a single, open, and common standard of interoperability.

The Rise of Lakehouse Architecture and Format Incompatibility

Databricks pioneered the lakehouse architecture in 2020 to enable the integration of traditional data warehousing workloads with AI workloads on a single, governed copy of data. For this to work, all data has to be in an open format so that different workloads, applications, and engines can access the same data. Lakehouse architecture maximises enterprise productivity by democratising access to data. This is in contrast to proprietary data warehouses, where only a proprietary SQL engine can read, write or share the data, and data often has to be copied and exported to be used by other applications, creating a high degree of vendor lock-in. Four years later, 74% of enterprises have deployed a lakehouse architecture.

The foundation of the lakehouse is open source data formats that enable ACID transactions on data stored in object storage. These formats dramatically improve the reliability and performance of data operations on the data lake and were designed specifically for open source engines such as Apache Spark™, Trino and Presto. To provide these capabilities, Databricks worked with the Linux Foundation to create the Delta Lake project. Since its inception, Delta Lake has attracted over 500 code contributors from a diverse set of organisations, and over 10,000 companies globally use Delta Lake to process an average of more than 4 exabytes of data each day.
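To make this concrete, here is a minimal sketch of the kind of workflow these formats enable, assuming the open source delta-spark package is installed (pip install delta-spark) and using an illustrative local path in place of object storage; it is a sketch for illustration, not taken from the announcement itself.

```python
from pyspark.sql import SparkSession
from delta import configure_spark_with_delta_pip

# Assumes `pip install delta-spark`; on Databricks this configuration is already in place.
builder = (
    SparkSession.builder
    .appName("delta-acid-sketch")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()

# Illustrative path; in a lakehouse this would typically be an object storage URI.
table_path = "/tmp/demo_orders"

# Each append is committed as an ACID transaction: concurrent readers see either
# the table before the commit or after it, never a partially written state.
orders = spark.createDataFrame([(1, "created"), (2, "shipped")], ["order_id", "status"])
orders.write.format("delta").mode("append").save(table_path)

# Any engine that understands the format can read the same files and transaction log.
spark.read.format("delta").load(table_path).show()
```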

Around the same time Delta Lake was created, Ryan Blue and Daniel Weeks developed the Iceberg project at Netflix and donated it to the Apache Software Foundation. Since then, Delta Lake and Iceberg have emerged as the two leading open source standards for lakehouse formats. Even though both of these formats are based on Apache Parquet and share similar goals and designs, they became incompatible due to independent development.

Over time, a number of other open source and proprietary engines have adopted these formats. However, they usually adopted only one of the two standards and, more often than not, only part of it, leading to fragmented and siloed enterprise data and undermining the value of the lakehouse architecture.

The Road to Interoperability

Companies need data interoperability to realise the benefits of the lakehouse, and Databricks will work closely with the Delta Lake and Iceberg communities to bring interoperability to the formats over time. This is a long journey, one that will likely take several years in those communities. That is why, last year, Databricks introduced Delta Lake UniForm. UniForm tables provide interoperability across Delta Lake, Iceberg, and Hudi, and support the Iceberg REST catalog interface, so companies can use the analytics engines and tools they are already familiar with across all their data. Generally available today, UniForm allows companies to achieve this compatibility now. With the addition of the original Iceberg team, Databricks will greatly broaden the ambitions of Delta Lake UniForm.
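As an illustration of what this looks like in practice, the sketch below enables UniForm's Iceberg support on a Delta table via table properties, so Iceberg-compatible clients can read the same underlying data through the Iceberg REST catalog interface. It assumes a Databricks environment (where the `spark` session is already provided) and uses an illustrative Unity Catalog table name; it is a sketch under those assumptions, not part of the announcement.

```python
from pyspark.sql import SparkSession

# On Databricks, `spark` already exists and getOrCreate() simply returns it.
spark = SparkSession.builder.getOrCreate()

# Illustrative table name. The table is stored as a Delta table, and the UniForm
# properties ask Delta to also generate Iceberg metadata for the same Parquet
# files, so Iceberg readers can access the table without copying the data.
spark.sql("""
    CREATE TABLE IF NOT EXISTS main.default.sales_orders (
        order_id BIGINT,
        amount   DOUBLE
    )
    USING DELTA
    TBLPROPERTIES (
        'delta.enableIcebergCompatV2' = 'true',
        'delta.universalFormat.enabledFormats' = 'iceberg'
    )
""")
```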

“Databricks pioneered the lakehouse and over the past four years, the world has embraced the lakehouse architecture, combining the best of data warehouses and data lakes to help customers decrease TCO, embrace openness, and deliver on AI projects faster. Unfortunately, the lakehouse paradigm has been split between the two most popular formats: Delta Lake and Iceberg. Databricks and Tabular will work with the open-source community to bring the two formats closer to each other over time, increasing openness, and reducing silos and friction for customers,” said Ali Ghodsi, Co-founder and CEO at Databricks. “Last year, we announced Delta Lake UniForm to bring interoperability to these two formats, and we’re thrilled to bring together the foremost leaders in open data lakehouse formats to make UniForm the best way to unify your data for every workload.”

A Shared Commitment to Openness

Databricks and Tabular share a history of championing open source formats. Both companies were founded to commercialise open source technologies created by their founders, and today Databricks is the largest and most successful independent open source company by revenue and has donated 12 million lines of code to open source projects. This acquisition highlights Databricks’ commitment to open formats and open source data in the cloud, helping ensure that companies are in control of their data and free from the lock-in created by proprietary vendor-owned formats.

“We created Apache Iceberg to solve critical data challenges around correctness, performance, and scalability. It’s been amazing to see both Iceberg and Delta Lake grow massively in popularity, largely fueled by the open lakehouse becoming the industry standard. With Tabular joining Databricks, we intend to build the best data management platform based on open lakehouse formats so that companies don’t have to worry about picking the ‘right’ format or getting locked into proprietary data formats,” said Ryan Blue, Co-Founder and CEO at Tabular.
