Extract. Transform. Read.
A newsletter from Pipeline

Hi past, present or future data professional!

To clarify the focus of this edition: the reason you shouldn't bother learning certain data engineering skills comes down to one of two scenarios.
1. You won't need them

Generally these are peripheral skills that you *technically* need but will hardly ever use. For most data engineering teams, the most obvious example is any visualization tool. This might mean an out-of-the-box BI tool like Looker, or scripting-based visualizations like the kind you'd generate with Matplotlib or ggplot2 (shout out to any R users in the house).

Speaking of R, remember those statistical languages and tools (R, MATLAB, etc.) you learned as part of your data science degree? Yeah, you'll almost never use them to build production pipelines. In some circumstances you may use them to validate data or build analytic models, but ML modeling is typically outside the scope of a data engineering role.

And unless your company isn't yet in the cloud (there are some *ahem* late adopters out there), you likely won't use on-premises databases like Postgres or vendor-specific SQL dialects like T-SQL.

2. You'll learn them on the job

I'll caveat this category with the assumption that you're fortunate enough to land in an org that provides proper training and mentorship for new engineers. Even with the cynicism that comes with being a senior engineer, I believe most team members want to help each other; data engineering is a team sport, after all.

One of the regular exercises a team executes is commits, reviews and merges into a production code base. Like the majority of companies, yours will probably do this through GitHub. I disagree with those who say you need to learn git before working professionally; I only knew the UI and picked up the CLI commands quickly. It's not technically complex.

Another skill you don't really need to worry about in advance is a team's codeless pipeline tool (assuming they use one). Some job listings name Fivetran (a big codeless provider), but there is usually plenty of documentation and third-party support to help you acclimate to the platform and troubleshoot issues.
Finally, something useful you'll pick up on the job is how to properly validate data. It's important to have a baseline understanding of "what looks right" when completing school or portfolio projects, but there's no way to know what your team/org expects until you're completing a deliverable. If you want to get a sense of how to "smell check" data in SQL, you can refer to one of my previous articles.

When acquiring skills or upskilling, one of the most valuable things you can learn is where to focus your time and attention.

Thanks for ingesting,

-Zach Quinn
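P.S. To give you a flavor of the kind of "smell check" mentioned above, here's a minimal sketch using Python's built-in sqlite3 so it runs anywhere. The `orders` table, its columns and the specific checks are illustrative assumptions, not what any particular team expects; the point is the categories of question you'd ask of a fresh delivery.

```python
import sqlite3

# Hypothetical example: a tiny "orders" table stands in for a real data delivery.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (order_id INTEGER, customer_id INTEGER, amount REAL);
INSERT INTO orders VALUES
    (1, 101, 25.0),
    (2, 102, NULL),
    (2, 102, NULL),
    (3, 103, -5.0);
""")

# Smell check 1: row count -- does the volume look plausible for this source?
row_count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]

# Smell check 2: null rate in a column that should always be populated.
null_amounts = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE amount IS NULL"
).fetchone()[0]

# Smell check 3: duplicate keys in a column that should be unique.
dupes = conn.execute("""
    SELECT COUNT(*) FROM (
        SELECT order_id FROM orders GROUP BY order_id HAVING COUNT(*) > 1
    )
""").fetchone()[0]

# Smell check 4: values outside a sane range (e.g. negative order amounts).
negatives = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE amount < 0"
).fetchone()[0]

print(f"rows={row_count}, null_amounts={null_amounts}, "
      f"dupe_ids={dupes}, negative_amounts={negatives}")
```

Any nonzero result in checks 2-4 is a prompt to ask questions before shipping, not necessarily a bug; what counts as "right" is exactly the context you pick up on the job.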
Top data engineering writer on Medium & Senior Data Engineer in media; I use my skills as a former journalist to demystify data science/programming concepts so that everyone from beginners to professionals can target, land and excel in data-driven roles.