[ETR #38] Powerful But Messy Data


Extract. Transform. Read.

A newsletter from Pipeline

Hi past, present or future data professional!

As difficult as data engineering can be, 95% of the time the data arriving from external streams, APIs and vendor file deliveries has structure. Useful context is provided via documentation and stakeholder requirements. And specific libraries and SDKs exist to speed up the pipeline build process.

But what about the other 5% of the time when requirements might be structured, but your data isn’t?

Unstructured data comes in many forms, including incomprehensible metadata from IoT devices. I have the most experience with textual data, so I can speak to how I recommend approaching that class of data.

Since I nearly always work with structured data on the job, I’ll be speaking from my experience scraping web data, parsing text files and reading PDFs.

  • Understand the min(), max() and shape of your data; for textual data, this means knowing the first and last pages (or tokens) and the length of your document
  • As soon as possible, aggregate your raw data into a form you can work with; I’m partial to lists that I convert to data frame columns, but you could just as easily construct a dict()
  • Once you know what you’re looking for, leverage regex string searches to avoid processing EVERYTHING (sketched after this list); online regex testers can check your expressions as you write them
  • If you’re really lost, check the rendered output of your data; if it’s a PDF, open the file in Preview or a similar viewer
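
To make the first three tips concrete, here’s a minimal Python sketch of that workflow. Everything specific in it is a placeholder assumption: the file name raw_invoices.txt and the date/amount patterns stand in for whatever your real data actually looks like.

    # Minimal sketch: check the shape, aggregate regex matches into lists,
    # then frame them. raw_invoices.txt and both patterns are hypothetical.
    import re

    import pandas as pd

    # Tip 1: understand the shape -- length plus the first and last
    # "pages" (here, lines).
    with open("raw_invoices.txt", encoding="utf-8") as f:
        lines = f.read().splitlines()

    print(f"doc length: {len(lines)} lines")
    print("first:", lines[0])
    print("last:", lines[-1])

    # Tip 3: regex lets us skip lines that don't match instead of
    # processing EVERYTHING.
    date_pattern = re.compile(r"\d{4}-\d{2}-\d{2}")              # e.g. 2023-01-31
    amount_pattern = re.compile(r"\$\d{1,3}(?:,\d{3})*\.\d{2}")  # e.g. $1,200.00

    # Tip 2: aggregate raw data into lists, then data frame columns.
    dates, amounts = [], []
    for line in lines:
        date_match = date_pattern.search(line)
        amount_match = amount_pattern.search(line)
        if date_match and amount_match:
            dates.append(date_match.group())
            amounts.append(amount_match.group())

    df = pd.DataFrame({"invoice_date": dates, "amount": amounts})
    print(df.shape)  # anything but (0, 0) is a start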

Finally, if you’re working with a particular type of data, learn which libraries are available to reduce the manual parsing you’d otherwise have to do.
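
For PDFs, for instance, a library like pypdf hands you page text directly instead of leaving you to decode the file format yourself. A minimal sketch, assuming a local report.pdf (a placeholder name):

    # Requires pypdf (pip install pypdf); "report.pdf" is a placeholder.
    from pypdf import PdfReader

    reader = PdfReader("report.pdf")
    print(f"pages: {len(reader.pages)}")  # the PDF equivalent of shape

    # Pull each page's text into a list you can filter or frame later.
    pages_text = [page.extract_text() for page in reader.pages]
    print(pages_text[0][:200])  # eyeball page one before parsing further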

And remember, the only shape you don’t want your data in is (0,0).

Thanks for ingesting,

-Zach Quinn

Extract. Transform. Read.

Reaching 20k+ readers on Medium and nearly 3k learners by email, I draw on my 4 years of experience as a Senior Data Engineer to demystify data science, cloud and programming concepts while sharing job hunt strategies so you can land and excel in data-driven roles. Subscribe for 500 words of actionable advice every Thursday.
