[ETR #39] Your Pipelines Will Fail On These 10 Days


Extract. Transform. Read.

A newsletter from Pipeline

Hi past, present or future data professional!

From 2014 to 2017 I lived in Phoenix, Arizona and enjoyed the state’s best resident privilege: no daylight saving time. If you’re unaware (and if you’re in one of the 48 US states that observe it, you’re painfully aware), March 9th was the start of daylight saving time, when we spring forward an hour.

If you think this messes up your microwave and oven clocks, just wait until you check on your data pipelines. Even though data teams are very aware of DST, this isn’t always something we account for when building and scaling pipelines.

To build DST-resistant pipelines, you need to set your schedule parameters to handle daylight time versus standard time explicitly rather than assuming a fixed offset. And even if you think your builds are properly calibrated before breaking for the weekend, I’d still remind the team it’s the DST switch and, if possible, designate an on-call engineer to respond to issues that shouldn’t wait until the next weekday.
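To make the shift concrete, here’s a minimal Python sketch of my own (not code from the article) using the standard library’s zoneinfo; the America/New_York timezone and the 2 AM run time are just assumptions to show how the same local schedule maps to different UTC times on either side of March 9th:

from datetime import datetime, timezone
from zoneinfo import ZoneInfo

eastern = ZoneInfo("America/New_York")

# Same 2 AM wall-clock run time the day before and the day after the 2025 switch.
# (2 AM on March 9th itself doesn't exist; clocks jump straight from 2:00 to 3:00.)
before = datetime(2025, 3, 8, 2, 0, tzinfo=eastern)
after = datetime(2025, 3, 10, 2, 0, tzinfo=eastern)

print(before.astimezone(timezone.utc))  # 2025-03-08 07:00:00+00:00 (UTC-5)
print(after.astimezone(timezone.utc))   # 2025-03-10 06:00:00+00:00 (UTC-4)

A schedule pinned to UTC now fires an hour off in local terms, while one pinned to local time silently moves its UTC slot; either way, someone should know which behavior you picked.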

In addition to DST, a less frequent problem is creating schedules and variables that account for leap years. While you could be like one engineer I know and tell yourself it’s a “future me” problem, I’d recommend adding logic that checks whether a given year has that extra February day. You can also lean on the datetime module’s .day attribute to output the correct day.
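If you’d rather not hand-roll that check, here’s a minimal sketch (my example, not the article’s) using the standard library’s calendar.isleap:

import calendar
from datetime import date

# Guard against the extra February day before building month-end logic.
year = date.today().year
feb_days = 29 if calendar.isleap(year) else 28
print(f"February {year} has {feb_days} days.")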

Much more common than either DST or leap years is what I call the “31 problem.” This is when you want to isolate date attributes but end up a day off because of the months that have 31 days.

For instance, say you need to create a file name string that’s supposed to read “March 31”, but your date logic hasn’t accounted for the extra day in March, so your output becomes “April 31”, a date that doesn’t exist.
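One way to sidestep this, sketched below as my own illustration rather than the article’s exact fix, is to look up how many days the month actually has instead of assuming 30; the month_end helper here is hypothetical and built on the standard library’s calendar.monthrange:

import calendar
from datetime import date

def month_end(d: date) -> date:
    # monthrange returns (weekday of the 1st, number of days in the month).
    last_day = calendar.monthrange(d.year, d.month)[1]
    return d.replace(day=last_day)

print(month_end(date(2025, 3, 15)).strftime("%B %d"))  # March 31
print(month_end(date(2025, 4, 15)).strftime("%B %d"))  # April 30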

To learn how to solve this problem and for more in-depth analyses of date issues (including code snippets), I encourage you to read “Why Your Data Pipelines Will Fail On These 10 Days Every Year (And What To Do About It)”.

Happy DST and thanks for ingesting,

-Zach Quinn

Extract. Transform. Read.

Reaching 20k+ readers on Medium and over 3k learners by email, I draw on my 4 years of experience as a Senior Data Engineer to demystify data science, cloud and programming concepts while sharing job hunt strategies so you can land and excel in data-driven roles. Subscribe for 500 words of actionable advice every Thursday.

Hi fellow data professional - Merry Christmas and Happy Holidays! Since an email is probably one of the least exciting things to open on Christmas morning, I'll keep this brief. As a thank you for subscribing and reading the newsletter this year, I'd like to offer a gift: My FREE guide to web scraping in Python. Centered around 3 "real world" projects, the guide highlights the importance of being able to retrieve, interpret and ingest unstructured data. Get your guide here. Have a restful...