[ETR #17] Warning: Your Google Cloud Function Might Fail Next Week


Extract. Transform. Read.

A newsletter from Pipeline

Technically, this title is misleading. Not because your Google Cloud Function won’t fail. It may.

And we’ll get to that.

I promise.

But because Google Cloud Functions are now called Google Cloud Run functions, a name that reflects the fusion of Cloud Run and Cloud Functions, previously two distinct Google Cloud Platform products. While both products leverage serverless architecture to run code, Cloud Run was geared more toward app developers, while Cloud Functions was more of a “quick and dirty” way to get simpler scripts, like ETL pipelines, into production.

No matter what GCP calls this product, you’ll still be able to run scripts using a serverless configuration. As a bonus, you’ll now be able to attach NVIDIA GPUs to boost runtime compute power. To leverage this, though, you’ll need to upgrade to a 2nd gen function.
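In practice, moving to 2nd gen is mostly a redeploy with the right flags. Here’s a minimal sketch using the gcloud CLI; the function name, region, and entry point are placeholders for your own values:

```shell
# Redeploy an existing function on the 2nd gen platform with a supported runtime.
# "etl_job", the region, and the entry point below are placeholders.
gcloud functions deploy etl_job \
  --gen2 \
  --runtime=python312 \
  --region=us-central1 \
  --source=. \
  --entry-point=main \
  --trigger-http
```

Note that `--gen2` is what opts you into the Cloud Run-based platform; without it, the deploy targets the legacy environment.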

With many new releases comes obsolescence, and this case is no different. Effective October 14th, Google Cloud Functions (excuse me, Google Cloud Run functions) will no longer support Python 3.8, as Python itself is ending support for 3.8 in the same time frame.


To properly upgrade your functions, take these steps:

  • Check your current Python version
  • Choose a 3.x release later than 3.8 that is compatible with your dependencies
  • Download and install that version
  • Update the runtime in your YAML deployment file
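To make the version check concrete, here’s a small sketch you could drop near the top of your function’s code to fail fast on an outdated interpreter. The `(3, 9)` floor is an assumption based on 3.8 reaching end of life; adjust it to whatever runtime you target:

```python
import sys

# Minimum supported (major, minor) once 3.8 support ends; adjust as needed.
MIN_VERSION = (3, 9)

def runtime_is_supported(version_info=None, minimum=MIN_VERSION):
    """Return True if the interpreter meets the minimum (major, minor) version."""
    if version_info is None:
        version_info = sys.version_info
    return tuple(version_info[:2]) >= minimum

if not runtime_is_supported():
    raise RuntimeError(
        f"Python {sys.version.split()[0]} is below the supported floor "
        f"{'.'.join(map(str, MIN_VERSION))}; upgrade your runtime."
    )
```

Locally, `python --version` tells you the same thing; the snippet just surfaces the problem at deploy/test time instead of in a production failure.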

If you need to go into more depth with updating runtimes or other aspects of deployment, you can learn to deploy a cloud function in 5 days.

It’s 100% free and comes with access to a dedicated GitHub repository.

Enroll here.

I want to make sure I keep you sufficiently updated, so here are this week’s links:

Until next time—thanks for ingesting,

-Zach Quinn

Pipeline To DE

Top data engineering writer on Medium & Senior Data Engineer in media; I use my skills as a former journalist to demystify data science/programming concepts so beginners to professionals can target, land and excel in data-driven roles.
