[ETR #22] Fearless Small-Scale Automations


Extract. Transform. Read.

A newsletter from Pipeline

Hi past, present or future data professional!

It’s been a busy fall; I currently have 14 tasks in various states of development. Right now my JIRA board looks like I just won bingo—twice.

Unfortunately, when you climb the tech ladder, things only get busier, which means you’re going to burn out unless you take proactive steps.

For me, this means learning which tasks I don’t need to (and really shouldn’t) do manually. And before you think I’m going to be like that developer who put his job on autopilot for 5 years, my prize for achieving this automation isn’t a week of Netflix binging; it’s more work.

If you’re overwhelmed by the idea of automation, I suggest you start by implementing these 4 simple, small-scale automations.

Auto-fill column names in SQL queries

I work on queries with as many as 150 columns. Once, I had a task where I needed to replace a SELECT * with explicit column names. Instead of wasting 30 minutes of dev time, I grabbed the columns from INFORMATION_SCHEMA and iterated through them, as in the code snippet below.

from google.cloud import bigquery

# Pull every column name for the target table from INFORMATION_SCHEMA
query = """
    SELECT column_name AS name
    FROM `project.dataset.INFORMATION_SCHEMA.COLUMNS`
    WHERE table_name = 'your_table'
"""

bq_client = bigquery.Client()
df = bq_client.query(query).to_dataframe()

# Print a comma-separated list ready to paste into your SELECT
for d in df["name"]:
    print(f"{d},")

Never write another schema

Creating schemas is my least favorite part of data engineering. Unfortunately, they are incredibly important and can lead to nasty errors if incorrectly defined or, worse, set to auto-detect.

Luckily, if you’re creating a schema based on an existing table, you can use the same INFORMATION_SCHEMA table to select the column names and types, which I explain here.
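As a minimal sketch of that idea (reusing the placeholder project, dataset and table names from above), you could pull each column’s name and type and print a BigQuery-style JSON schema. Complex types like STRUCT or ARRAY may still need manual attention:

from google.cloud import bigquery
import json

# Pull each column's name and type for the source table
query = """
    SELECT column_name, data_type
    FROM `project.dataset.INFORMATION_SCHEMA.COLUMNS`
    WHERE table_name = 'your_table'
"""

bq_client = bigquery.Client()
df = bq_client.query(query).to_dataframe()

# Assemble a BigQuery-style JSON schema from the results
schema = [
    {"name": row.column_name, "type": row.data_type, "mode": "NULLABLE"}
    for row in df.itertuples()
]
print(json.dumps(schema, indent=2))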

Backfill multiple CSV files

Like schema design, backfills are a pain that consumes an inordinate amount of development time. Remember those 14 tasks I mentioned? At least 3 are backfills.

The worst kind of backfill is when you have to load data file by file, like from a batch of CSVs. Fortunately, if you already have your files saved in a shared location like cloud storage, you can code an iterative process to download, transform and upload the final data.

Pro tip: Name your files with a date string to make it easy to identify and fill gaps programmatically.
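Here’s a rough sketch of that download-transform-upload loop, assuming Google Cloud Storage and BigQuery; the bucket and table names are hypothetical, and the transform step is a placeholder for whatever cleanup your backfill needs:

import io

import pandas as pd
from google.cloud import bigquery, storage

BUCKET_NAME = "your-backfill-bucket"     # hypothetical bucket
TABLE_ID = "project.dataset.your_table"  # hypothetical target table

storage_client = storage.Client()
bq_client = bigquery.Client()

for blob in storage_client.list_blobs(BUCKET_NAME):
    # Date-stamped names (e.g. sales_2024-01-31.csv) make gaps easy to spot
    if not blob.name.endswith(".csv"):
        continue

    # Download: read the CSV straight into a dataframe
    df = pd.read_csv(io.BytesIO(blob.download_as_bytes()))

    # Transform: placeholder for whatever cleanup the backfill needs
    df.columns = [col.strip().lower() for col in df.columns]

    # Upload: append the cleaned rows to the target table
    bq_client.load_table_from_dataframe(
        df,
        TABLE_ID,
        job_config=bigquery.LoadJobConfig(write_disposition="WRITE_APPEND"),
    ).result()

    print(f"Backfilled {blob.name}")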

Schedule a recurring refresh for your API credentials

As a junior engineer, one of my quarterly chores was to manually refresh API credentials whenever our team calendar alert said a particular service’s creds were expiring.

Instead, my solution, and my advice to you and your team, is to determine the lifespan of your creds and create a function (or functions) that performs the following steps (a sketch follows the list):

  • Validate the existing credentials
  • Determine how many days until expiration
  • If days until expiration is 0 or 1, generate new creds (nearly every API I’ve worked with supports programmatic credential generation)
  • Update your creds file or secret manager object (if using GCP Secret Manager)
  • Repeat check daily
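Credential generation differs from service to service, so this is only a minimal sketch of that daily check, assuming GCP Secret Manager and two hypothetical, API-specific helpers, validate_creds() and generate_new_creds():

import json
from datetime import datetime, timezone

from google.cloud import secretmanager

# Hypothetical secret path; validate_creds() and generate_new_creds()
# are stand-ins for your API's own validation/generation calls
SECRET_NAME = "projects/your-project/secrets/api-creds"

def refresh_creds(creds: dict) -> dict:
    # Step 1: validate the existing credentials
    if not validate_creds(creds):
        creds = generate_new_creds()

    # Step 2: determine how many days remain until expiration
    days_left = (creds["expires_at"] - datetime.now(timezone.utc)).days

    # Step 3: regenerate when 0 or 1 days remain
    if days_left <= 1:
        creds = generate_new_creds()

        # Step 4: store the fresh creds as a new secret version
        client = secretmanager.SecretManagerServiceClient()
        client.add_secret_version(
            parent=SECRET_NAME,
            payload={"data": json.dumps(creds, default=str).encode("utf-8")},
        )

    return creds

# Step 5: schedule this to run daily (cron, Cloud Scheduler, Airflow, etc.)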

Instead of leading to laziness, automation encourages multitasking. If you implement any of the above solutions, just be sure to test your output, because the last thing anyone wants is a rogue autopilot.

To optimize your time, here are this week’s links as plain text.

Questions? zach@pipelinetode.com

Thanks for ingesting,

-Zach Quinn

Extract. Transform. Read.

Reaching 20k+ readers on Medium and over 3k learners by email, I draw on my 4 years of experience as a Senior Data Engineer to demystify data science, cloud and programming concepts while sharing job hunt strategies so you can land and excel in data-driven roles. Subscribe for 500 words of actionable advice every Thursday.
