Extract. Transform. Read.
A newsletter from Pipeline: Your Data Engineering Resource

Hi past, present, or future data professional! Data engineering can be dangerous; ok, not, like, physically, but building and maintaining data infrastructure gives data engineers a surprising amount of access and responsibility. Every commit, table alteration, and deletion must be made with care. It took me two years, but I finally learned a shortcut that makes developing SQL staging tables less risky and more efficient.

Even seemingly minor mistakes, like joining on the wrong key, can lose days or months of valuable data, which can translate to hundreds of thousands or even millions of dollars in revenue visibility. Beyond code mistakes, neglecting logistical factors like vendor contracts and API usage can not only cause downtime; in a worst-case scenario, it can lead to an all-out blackout.

If the stakes sound ominous, I'd suggest examining the root of your hesitation so you can work more confidently and efficiently. That root may even be the code itself. There is a happy medium between freely building data pipelines and using the appropriate guard rails. As long as you take your time and don't commit code directly to the main branch, you can do data engineering safely and avoid bursting your pipelines.

For those who are anti-virus minded, here are this week's links as plain text:
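To make the "joining on the wrong key" danger concrete, here is a minimal sketch using Python's built-in sqlite3 module. The table and column names (orders, customers, region_id, etc.) are hypothetical, invented just for this illustration: an inner join on the wrong column matches nothing and silently drops every row.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE orders (order_id INTEGER, customer_id INTEGER, amount REAL);
CREATE TABLE customers (customer_id INTEGER, region_id INTEGER, name TEXT);
INSERT INTO orders VALUES (1, 10, 99.0), (2, 11, 45.0), (3, 12, 60.0);
INSERT INTO customers VALUES (10, 1, 'Acme'), (11, 2, 'Globex'), (12, 1, 'Initech');
""")

# Correct key: every order finds its customer.
good = cur.execute("""
    SELECT COUNT(*) FROM orders o
    JOIN customers c ON o.customer_id = c.customer_id
""").fetchone()[0]

# Wrong key (region_id instead of customer_id): nothing matches,
# and the staging table is quietly empty -- no error raised.
bad = cur.execute("""
    SELECT COUNT(*) FROM orders o
    JOIN customers c ON o.customer_id = c.region_id
""").fetchone()[0]

print(good, bad)  # 3 rows survive the correct join, 0 survive the wrong one
```

No exception is thrown in the wrong-key case, which is exactly why a bad join can go unnoticed until the missing data is already costly.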
P.S. Want to learn how to go from code to automated pipeline? Take advantage of my 100% free email course: Deploy Google Cloud Functions In 5 Days.

Thanks for ingesting,
-Zach
Reaching 20k+ readers on Medium and over 3k learners by email, I draw on my 4 years of experience as a Senior Data Engineer to demystify data science, cloud and programming concepts while sharing job hunt strategies so you can land and excel in data-driven roles. Subscribe for 500 words of actionable advice every Thursday.
Hi fellow data professional! This edition almost became an apology because I’ve been on a tight deadline, and pre-baby morning wake-up thinking/writing time has become GSD (get sh!t done) hour. Long story short: I got brought in late to a time-sensitive project that required me to speed through a planned pipeline migration. As a recovering news junkie (aka journalist), I used to live and die by deadlines. But, given the unpredictability of data-oriented work and internal deliverables, it’s...
Hi fellow data professional! For years, the opening of The Simpsons, specifically Bart writing lines on the chalkboard, has been incredibly relatable to me. Not because I’m up to mischief (none I’ll admit to here, anyway), but because I spend most days writing the same three lines of SQL over and over again. If you've ever been paranoid about a table's content, you might know what I'm talking about. It’s the aggregate COUNT(*) grouped by a date field, ordered by date DESC. The output of that...
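Those three lines of SQL, a COUNT(*) aggregated by a date field and ordered newest-first, can be sketched with Python's built-in sqlite3 module. The events table and event_date column here are hypothetical stand-ins for whatever table you're paranoid about:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE events (event_date TEXT, payload TEXT);
INSERT INTO events VALUES
  ('2024-01-03', 'a'), ('2024-01-03', 'b'),
  ('2024-01-02', 'c'),
  ('2024-01-01', 'd'), ('2024-01-01', 'e'), ('2024-01-01', 'f');
""")

# The chalkboard lines: row counts per day, newest first --
# a quick sanity check that recent data actually landed.
rows = cur.execute("""
    SELECT event_date, COUNT(*) AS n
    FROM events
    GROUP BY event_date
    ORDER BY event_date DESC
""").fetchall()

print(rows)  # [('2024-01-03', 2), ('2024-01-02', 1), ('2024-01-01', 3)]
```

A glance at the top row tells you whether yesterday's load actually arrived, which is the whole point of running it over and over.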
Hi fellow data professional! In a previous newsletter, I mentioned an idea that I wanted to explore deeper. At the risk of double-quoting a la The Office’s Michael Scott quoting Wayne Gretzky (“You Miss 100% Of The Shots You Don’t Take - Wayne Gretzky - Michael Scott”), here is the idea. “To be marketable as a candidate, you don’t just want to show how you can go from A to B (requirements->pipeline). You need to go from A to C (requirements->pipeline->scale/support).” You might be asking...