LLMs for data pipelines without losing control (API → DuckDB in ~10 mins)

Hey folks, I’ve been doing data engineering long enough to believe that “real” pipelines meant writing every parser by hand, handling pagination myself, and debugging nested JSON until it finally stopped exploding. I’ve also been pretty skeptical of the “just prompt it” approach. Lately, though, I’ve been experimenting with a workflow that feels less like hype and more like controlled engineering: instead of starting from a blank file, the LLM does the mechanical work while I stay in charge of structure + validation.

We’re doing a live session on Feb 17 to test this in real time, going from empty folder → GitHub commits dashboard (DuckDB + dlt + marimo) and walking through the full loop live. If you’ve got an annoying API (weird pagination, nested structures, bad docs), bring it — that’s more interesting than the happy path. We wrote up the full workflow with examples here.

Curious: what’s the dealbreaker for you when it comes to using LLMs in pipelines?