Extract. Transform. Read.
A newsletter from Pipeline

Hi past, present or future data professional!

It’s never good when you wake up to this from a coworker: 💀

The skull wasn’t because the sender felt they were facing any kind of dramatic fate. Instead, they were prepared to administer near-fatal justice to the junior engineer who had made several unnecessary overnight commits straight to our org’s main branch.

The thing is, for a first-time violation, I can understand why testing is an afterthought for new engineers. Schools and courses emphasize local output over production code, so testing feels like an extra step.

To properly test code, you need to configure a clean, production-adjacent environment. If you’re new to this concept, here are two of my favorites, along with an unusual choice.

The safe choice: Virtual Environment

I use two virtual environment tools that can be configured interchangeably: Pyenv and Venv. Pyenv is easy to configure and use from a terminal in a “professional” IDE like VS Code, and it’s ideal because it allows me to create an environment from a blank slate each time.

Venv is another option. Instead of using Venv in VS Code, this is how I set up a virtual Python environment inside of a Virtual Machine (VM). Read more to learn how to set up a quick, durable sandbox in a Compute Engine VM.

The portable option: Docker

I’ll confess: I wasn’t always a fan of Docker. I didn’t really “get” containerization, and I could already set up a virtual environment using the processes described above. However, I learned that Docker’s true power is its portability. Not only can I create a clean slate (an image), I can push it to a registry to create testing configs before I test script changes in production. Powerful stuff. The one issue I had was authenticating with GCP; I describe my solution here.

The unusual pick: Jupyter Notebook

Jupyter Notebook gets a bad rap in the data engineering community. Seen as a tool for data analysts and data scientists, it doesn’t quite make sense for data engineers to develop and test in an environment best known for its nicely rendered outputs. But buying into that argument would cause you to miss out on some useful features and, frankly, a nice UX.

To make each option concrete, a few quick sketches follow.
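First, the venv route. Here’s a minimal sketch of a blank-slate environment using Python’s standard-library venv module; the directory name “sandbox-env” is just a placeholder, not the name I use in the linked walkthrough.

```python
# Create a throwaway, blank-slate environment with the standard library.
# "sandbox-env" is a placeholder name; point it wherever you like.
import venv

builder = venv.EnvBuilder(with_pip=True, clear=True)  # clear=True wipes any old copy
builder.create("sandbox-env")

# Then activate it from a terminal:
#   source sandbox-env/bin/activate   (Linux/macOS, e.g. inside a Compute Engine VM)
#   sandbox-env\Scripts\activate      (Windows)
```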
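Next, the Docker loop. To give a feel for the “build a clean image, push it to a registry” cycle, here’s a rough sketch using the Docker SDK for Python. The registry path and tag are hypothetical, and it assumes you’ve already sorted out authentication (the GCP piece I linked above).

```python
# Rough sketch of the image -> registry loop with the Docker SDK for Python
# (pip install docker). The registry path below is a placeholder.
import docker

client = docker.from_env()

# Build a clean-slate image from the Dockerfile in the current directory.
image, build_logs = client.images.build(
    path=".",
    tag="us-docker.pkg.dev/my-project/testing/pipeline:dev",  # hypothetical repo
)

# Push it so a teammate or CI job can pull the exact same testing config.
# Assumes registry auth is already configured, e.g. `gcloud auth configure-docker`.
for line in client.images.push(
    "us-docker.pkg.dev/my-project/testing/pipeline", tag="dev",
    stream=True, decode=True,
):
    print(line)
```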
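Finally, the notebook. If you’re wondering what “testing in Jupyter” might look like in practice, here’s one pattern: drop a transform into a cell and smoke-test it against a tiny fixture before it ever touches production data. The transform and fixture below are made up for illustration.

```python
# Smoke-testing a transform in a notebook cell before it ships.
# dedupe_events and the fixture are hypothetical examples.
import pandas as pd

def dedupe_events(df: pd.DataFrame) -> pd.DataFrame:
    """Keep the most recently updated row per event_id."""
    return df.sort_values("updated_at").drop_duplicates("event_id", keep="last")

fixture = pd.DataFrame({
    "event_id":   [1, 1, 2],
    "updated_at": ["2024-01-01", "2024-01-02", "2024-01-01"],
})

result = dedupe_events(fixture)
assert len(result) == 2
assert result.loc[result["event_id"] == 1, "updated_at"].item() == "2024-01-02"
result  # the nicely rendered output is the point
```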
And so you don’t have to search for resources, here are this week’s links.

What’s your preferred testing ground? Let me know: zach@pipelinetode.com.

Thanks for ingesting,
-Zach Quinn
Top data engineering writer on Medium & Senior Data Engineer in media; I use my skills as a former journalist to demystify data science/programming concepts so beginners to professionals can target, land and excel in data-driven roles.