(Pretty) big data wrangling with DuckDB and Polars

With examples in R and Python

Author
Affiliation

Principal Economist, Amazon

Published

September 1, 2024

Disclaimer: This is a clone made by Florian Oswald

This is a clone of Grant McDermott’s wonderful workshop hosted here. What you see here is a copy of his work, to which I (Florian Oswald) added the Julia version of DuckDB. I also added the relevant Julia setup to the requirements. Grant has not vetted these additions in any way, so any additional errors were introduced without his knowledge. Please check out the source code for his original work on GitHub, and fork it if you want to use it. Thanks, Grant, as ever, for sharing these great resources with the rest of the world!

Description

This workshop will introduce you to DuckDB and Polars, two data wrangling libraries at the frontier of high-performance computation. (See benchmarks.) In addition to being extremely fast and portable, both DuckDB and Polars provide user-friendly implementations across multiple languages. This makes them very well suited to production and applied research settings, without the overhead of tools like Spark. We will work through a variety of real-life examples in both R and Python, with the aim of getting participants up and running as quickly as possible. We will learn how to wrangle datasets of several hundred million observations in a matter of seconds or less, using only our laptops. And we will learn how to scale to even larger contexts, where the data exceeds our computers’ RAM capacity. Finally, we will also discuss some complementary tools and how these can be integrated into an efficient end-to-end workflow (data I/O -> wrangling -> analysis).

Disclaimer

The content for this workshop has been prepared, and is presented, in my personal capacity. Any opinions expressed herein are my own and are not necessarily shared by my employer. Please do not share any recorded material without the express permission of myself or the workshop organisers.