[ETR #38] Powerful But Messy Data


Extract. Transform. Read.

A newsletter from Pipeline

Hi past, present or future data professional!

As difficult as data engineering can be, 95% of the time there is a structure to data that originates from external streams, APIs and vendor file deliveries. Useful context is provided via documentation and stakeholder requirements. And specific libraries and SDKs exist to help speed up the pipeline build process.

But what about the other 5% of the time when requirements might be structured, but your data isn’t?

Unstructured data comes in many forms, including incomprehensible metadata from IoT devices. I have the most experience with textual data, so I can speak to how I recommend approaching this class of data.

Since I nearly always work with structured data on the job, I’ll be speaking from my experience scraping web data, parsing text files and reading PDFs.

  • Understand the min(), max() and shape of your data; for textual data, this means knowing the first and last pages (or tokens) and the length of your document
  • As soon as possible, aggregate your raw data into a form you can work with; I’m partial to lists that I convert to DataFrame columns, but you could just as easily construct a dict() (see the first sketch after this list)
  • Once you know what you’re looking for, leverage regex string searches to avoid processing EVERYTHING; there are many regex testers and generators that can check your expressions as you write them (the second sketch below shows this on a DataFrame column)
  • If you’re really lost, check the rendered output of your data; if it’s a PDF, open the file in Preview or a similar viewer
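To make the first two bullets concrete, here’s a minimal sketch of the inspect-then-aggregate step; the file name and column name are hypothetical placeholders for whatever your scrape or parse actually produces.

```python
import pandas as pd

# Hypothetical raw export of a scraped page or parsed PDF.
with open("raw_dump.txt", encoding="utf-8") as f:
    lines = [line.strip() for line in f if line.strip()]

# The textual equivalent of min(), max() and shape:
# peek at the first and last lines and note how long the document is.
print("first line:", lines[0])
print("last line:", lines[-1])
print("total non-empty lines:", len(lines))

# Aggregate the raw lines into a form you can work with:
# a list converted into a single DataFrame column.
df = pd.DataFrame({"raw_text": lines})
print(df.shape)  # anything but (0, 0) is a start
```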
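And once you know the pattern you’re after, regex lets you skip everything else. Another sketch, with a made-up invoice pattern standing in for whatever you’re actually hunting:

```python
import re
import pandas as pd

# Hypothetical rows; in practice this would be the df built above.
df = pd.DataFrame({"raw_text": [
    "Invoice #1042 due 2024-03-01",
    "Lorem ipsum filler text",
    "Invoice #1043 due 2024-04-15",
]})

pattern = re.compile(r"Invoice #(\d+) due (\d{4}-\d{2}-\d{2})")

# Extract only the rows that match and pull out the capture groups,
# instead of looping over every line by hand.
matches = df["raw_text"].str.extract(pattern)
matches.columns = ["invoice_id", "due_date"]
print(matches.dropna())
```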

Finally, if you’re working with a particular type of data, understand what libraries are available to reduce the manual parsing that will be required.
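For PDFs, for example, a library like pypdf will hand you page-level text without any manual byte wrangling. A quick sketch (the file name is just a placeholder):

```python
from pypdf import PdfReader

reader = PdfReader("report.pdf")      # hypothetical file
print("pages:", len(reader.pages))    # the "shape" of the document

# Pull raw text page by page instead of hand-parsing the format yourself.
pages_text = [page.extract_text() or "" for page in reader.pages]
print(pages_text[0][:500])            # peek at the first page
```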

And remember, the only shape you don’t want your data in is (0,0).

Thanks for ingesting,

-Zach Quinn

Extract. Transform. Read.

Reaching 20k+ readers on Medium and nearly 3k learners by email, I draw on my 4 years of experience as a Senior Data Engineer to demystify data science, cloud and programming concepts while sharing job hunt strategies so you can land and excel in data-driven roles. Subscribe for 500 words of actionable advice every Thursday.
