Why not use Polars, or some other framework that was built with performance in mind from the start?
I have optimized pandas notebooks before, but it seems a bit like a foundation of sand.
There is also cudf.pandas, a GPU-powered drop-in accelerator that delivers order-of-magnitude speedups with no code changes.
Don’t have a GPU on your machine? No problem—you can use cudf.pandas for free in Google Colab, where GPUs are available and the library comes pre-installed.
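To illustrate the "no code changes" claim: cudf.pandas is enabled before importing pandas, and everything downstream stays ordinary pandas code. A minimal sketch (the DataFrame here is a made-up example; the `%load_ext` magic and `python -m cudf.pandas` launcher are the documented entry points):

```python
# In a Colab/Jupyter notebook, enable the accelerator BEFORE importing pandas:
#   %load_ext cudf.pandas
# For a plain script, run it via the module launcher instead:
#   python -m cudf.pandas script.py
# With the extension loaded, the import below transparently dispatches to
# cuDF on the GPU, falling back to CPU pandas for unsupported operations.
import pandas as pd

# Everyday pandas code, unchanged:
df = pd.DataFrame({"key": ["a", "b", "a", "b"], "val": [1, 2, 3, 4]})
out = df.groupby("key")["val"].sum()
print(out.to_dict())  # {'a': 4, 'b': 6}
```

Without a GPU (or the extension), this runs as plain pandas, which is exactly the point of the drop-in design.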
Ah, I see why NVIDIA is writing about it.