Extract. Transform. Read. A newsletter from Pipeline

Hi past, present or future data professional!

One of the dirty secrets about my job is how easy it can be to fix broken pipelines. Often I'm retriggering a failed DAG task or, if using a code-less pipeline, literally hitting refresh. In fact, "refresh" is a great example of one of the more abstract data engineering concepts: state, and, specifically, maintaining the same end state no matter how many times a process runs. That is the definition of an important I-word, "idempotency."

While idempotency sounds like an SAT word, it's as simple as saying, "Every time this process runs, the result (end state) will be the same."

An easy-to-grasp example of idempotency is the Google Cloud BigQuery API's "WRITE_TRUNCATE" write disposition. If you run a pipeline with "WRITE_TRUNCATE", your data will always be overwritten during the load step.

A more precise way of implementing idempotency, one I include in nearly all my pipelines, is a DELETE step. This is more precise than overwriting the whole table because I specify deletion for a particular window. When I run a job that deletes and inserts only yesterday's data, the output is the same each time, leaving historic data intact and avoiding the very real possibility of data loss.

This is a very practical approach to designing data pipelines because you may get spur-of-the-moment requests to reload data or otherwise re-trigger your runs. When executed properly, idempotency is as easy as hitting page refresh.

Thanks for ingesting,

-Zach Quinn
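P.S. The delete-and-insert pattern above can be sketched in a few lines. This is a minimal, hypothetical sketch: it uses an in-memory sqlite3 database to stand in for a warehouse like BigQuery, and the table and column names (`daily_metrics`, `event_date`, `clicks`) are made up for illustration.

```python
import sqlite3

# In-memory SQLite standing in for the warehouse (hypothetical table/columns).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE daily_metrics (event_date TEXT, clicks INTEGER)")

# Historic data that must survive reruns.
conn.execute("INSERT INTO daily_metrics VALUES ('2024-01-01', 100)")

def load_day(conn, day, rows):
    """Idempotent load: DELETE the target window, then INSERT it."""
    with conn:  # one transaction: delete + insert succeed or fail together
        conn.execute("DELETE FROM daily_metrics WHERE event_date = ?", (day,))
        conn.executemany(
            "INSERT INTO daily_metrics VALUES (?, ?)",
            [(day, r) for r in rows],
        )

# Running the same load twice yields the same end state,
# and the 2024-01-01 history is left intact.
load_day(conn, "2024-01-02", [40, 60])
load_day(conn, "2024-01-02", [40, 60])

print(conn.execute(
    "SELECT event_date, SUM(clicks) FROM daily_metrics "
    "GROUP BY event_date ORDER BY event_date"
).fetchall())
# [('2024-01-01', 100), ('2024-01-02', 100)]
```

Wrapping the DELETE and INSERT in a single transaction matters: if the insert fails, the delete rolls back too, so a failed rerun never leaves a hole in your data.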
Top data engineering writer on Medium & Senior Data Engineer in media; I use my skills as a former journalist to demystify data science/programming concepts so beginners to professionals can target, land and excel in data-driven roles.