Can Your Data Pipelines Be Dangerous?


Extract. Transform. Read.

A newsletter from Pipeline: Your Data Engineering Resource

Hi past, present or future data professional!

Data engineering can be dangerous. Okay, not physically, but by building and maintaining data infrastructure, data engineers are given a surprising amount of access and responsibility. Every commit, table alteration, and deletion must be made with care. It took me two years, but I finally learned a shortcut that makes developing SQL staging tables less risky and more efficient.

Even seemingly minor mistakes, like joining on the wrong key, can result in losing days or months of valuable data, which can translate to hundreds of thousands or even millions of dollars in lost revenue visibility. Outside of code mistakes, ignoring logistical factors like vendor contracts and API usage limits can not only result in downtime; in a worst-case scenario, it can lead to an all-out blackout.
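To make the wrong-key failure mode concrete, here is a minimal sketch using Python's built-in sqlite3 module. The table names, columns, and values are all made up for illustration; the point is that an inner join on the wrong key raises no error at all, it just silently returns fewer (or zero) rows.

```python
import sqlite3

# Hypothetical tables: orders keyed by order_id, customers keyed by customer_id.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (order_id INTEGER, customer_id INTEGER, amount REAL);
CREATE TABLE customers (customer_id INTEGER, region TEXT);
INSERT INTO orders VALUES (1, 101, 50.0), (2, 102, 75.0), (3, 103, 20.0);
INSERT INTO customers VALUES (101, 'east'), (102, 'west');
""")

# Correct join key: 2 of the 3 orders match a known customer.
good = conn.execute(
    "SELECT COUNT(*) FROM orders o "
    "JOIN customers c ON o.customer_id = c.customer_id"
).fetchone()[0]

# Wrong join key (order_id against customer_id): the query runs fine,
# but the inner join quietly drops every row -- no error, just missing
# data in whatever report reads this table downstream.
bad = conn.execute(
    "SELECT COUNT(*) FROM orders o "
    "JOIN customers c ON o.order_id = c.customer_id"
).fetchone()[0]

print(good, bad)  # -> prints "2 0"
```

A row-count check like this, comparing the joined output against the source table before swapping a staging table into production, is a cheap guardrail against exactly this class of mistake.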

If the stakes sound ominous, I'd suggest examining the root of your hesitation so you can work more confidently and efficiently. That root may even be the code itself.

There is a happy medium between freely building data pipelines and using the appropriate guardrails. As long as you take your time and don't commit code directly to the main branch, you can do data engineering safely and avoid bursting your pipelines.

For those who are anti-virus-minded, here are this week's links as plain text:

P.S. Want to learn how to go from code to automated pipeline? Take advantage of my 100% free email course:

Deploy Google Cloud Functions In 5 Days.

Thanks for ingesting,

-Zach

Extract. Transform. Read.

Reaching 20k+ readers on Medium and over 3k learners by email, I draw on my 4 years of experience as a Senior Data Engineer to demystify data science, cloud and programming concepts while sharing job hunt strategies so you can land and excel in data-driven roles. Subscribe for 500 words of actionable advice every Thursday.
