Extract. Transform. Read. A newsletter from Pipeline

Hi past, present or future data professional!

It's been a busy fall; I currently have 14 tasks in various states of development. Right now my JIRA board looks like I just won bingo, twice. Unfortunately, as you climb the tech ladder things only get busier, which means you're going to burn out unless you get proactive. For me, that means learning which tasks I don't need to (and really shouldn't) do manually. And before you think I'm going to be like that developer who put his job on auto-pilot for 5 years, my prize for achieving this automation isn't a week of Netflix binging; it's more work.

If you're overwhelmed by the idea of automation, I suggest you start by implementing these 4 simple, small-scale automations.

Auto-fill column names in SQL queries

I work on queries with as many as 150 columns. Once, I had a task where I needed to replace a SELECT * with explicit column names. Instead of wasting 30 minutes of dev time, I grabbed the columns from INFORMATION_SCHEMA and iterated through them like the code snippet below.

from google.cloud import bigquery
import pandas as pd  # to_dataframe() returns a pandas DataFrame

# Pull every column name registered in the dataset;
# add WHERE table_name = '<your_table>' to limit it to a single table.
query = """
SELECT column_name AS name
FROM `project.dataset`.INFORMATION_SCHEMA.COLUMNS
"""

bq_client = bigquery.Client()
df = bq_client.query(query).to_dataframe()

# Print each column followed by a comma, ready to paste into a SELECT list
for d in df["name"]:
    print(f"{d},")

Never write another schema

Creating schemas is my least favorite part of data engineering. Unfortunately, they are incredibly important and can lead to nasty errors if incorrectly defined or, worse, set to auto detect. Luckily, if you're creating a schema based on an existing table, you can use the same INFORMATION_SCHEMA table to select the column names and types, which I explain here; a rough sketch also follows after the last tip below.

Backfill multiple CSV files

Like schema design, backfills are a pain that consume an inordinate amount of development time. Remember those 14 tasks I mentioned? At least 3 are backfills. The worst kind of backfill is when you have to load data from individual files like CSVs. Fortunately, if you already have your files saved in a shared location like cloud storage, you can code an iterative process to download, transform and upload the final data (also sketched after the tips below). Pro tip: Name your files with a date string to make it easy to identify and fill gaps programmatically.

Schedule a recurring refresh for your API credentials

As a junior engineer, one of my quarterly chores was to manually refresh API credentials whenever our team calendar alert said a particular service's creds would be expiring. Instead, my solution, and my advice to you and your team, is to determine the "life span" of your creds and create a function (or functions) that performs the refresh steps for you; one possible shape for that function is sketched at the end of the examples below.
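For the schema tip, here is a minimal sketch of the idea, assuming BigQuery and an illustrative table name ('source_table'); the write-up linked above may build the schema differently. It reads column names and types from INFORMATION_SCHEMA.COLUMNS and turns them into SchemaField objects you can hand to a load job or table definition.

from google.cloud import bigquery

# Column names and types for an existing table (project, dataset and table are placeholders)
query = """
SELECT column_name, data_type
FROM `project.dataset`.INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'source_table'
ORDER BY ordinal_position
"""

bq_client = bigquery.Client()
rows = bq_client.query(query).result()

# Fine for simple scalar types; nested or repeated (STRUCT/ARRAY) columns need extra handling
schema = [bigquery.SchemaField(row.column_name, row.data_type) for row in rows]

for field in schema:
    print(field)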
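For the CSV backfill tip, a rough sketch of the download-transform-upload loop, assuming Google Cloud Storage and BigQuery; the bucket name, file prefix, transform step and destination table are all placeholders.

from google.cloud import bigquery, storage
import io
import pandas as pd

storage_client = storage.Client()
bq_client = bigquery.Client()

# Files are assumed to be named with a date string, e.g. sales_2024-10-01.csv
for blob in storage_client.list_blobs("my-backfill-bucket", prefix="sales_"):
    # Download: read the CSV straight into a DataFrame
    df = pd.read_csv(io.BytesIO(blob.download_as_bytes()))

    # Transform (placeholder): derive the load date from the date string in the file name
    df["load_date"] = blob.name.split("_")[-1].replace(".csv", "")

    # Upload: append the result to the destination table (requires pyarrow)
    bq_client.load_table_from_dataframe(df, "project.dataset.sales_backfill").result()
    print(f"Loaded {blob.name}")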
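And for the credential refresh, the exact steps depend on the service, but a minimal sketch, assuming a generic OAuth-style refresh endpoint and environment variables for storage (both placeholders), might look like the function below. Schedule it with whatever your team already runs (cron, Airflow, Cloud Scheduler) so it fires comfortably before the creds' life span is up.

import os
import requests

def refresh_api_credentials():
    """Exchange a refresh token for a new access token before the old one expires."""
    # Placeholder endpoint and credential names; swap in your service's actual values
    token_url = "https://example.com/oauth/token"
    payload = {
        "grant_type": "refresh_token",
        "refresh_token": os.environ["SERVICE_REFRESH_TOKEN"],
        "client_id": os.environ["SERVICE_CLIENT_ID"],
        "client_secret": os.environ["SERVICE_CLIENT_SECRET"],
    }

    response = requests.post(token_url, data=payload, timeout=30)
    response.raise_for_status()
    new_creds = response.json()

    # Persist the new token wherever your pipelines read credentials from
    # (a secret manager is a better home than an env var in practice)
    os.environ["SERVICE_ACCESS_TOKEN"] = new_creds["access_token"]
    return new_creds["expires_in"]  # seconds until the next refresh is due

if __name__ == "__main__":
    seconds_left = refresh_api_credentials()
    print(f"Refreshed credentials; next refresh due in {seconds_left} seconds.")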
Instead of leading to laziness, automation encourages multitasking. If you implement any of the above solutions, just be sure to test your output, because the last thing anyone wants is a rogue autopilot. To optimize your time, here are this week's links as plain text.
Questions? zach@pipelinetode.com

Thanks for ingesting,
-Zach Quinn
Top data engineering writer on Medium & Senior Data Engineer in media; I use my skills as a former journalist to demystify data science/programming concepts so beginners to professionals can target, land and excel in data-driven roles.