[ETR #38] Powerful But Messy Data


Extract. Transform. Read.

A newsletter from Pipeline

Hi past, present or future data professional!

As difficult as data engineering can be, 95% of the time there is structure to the data that originates from external streams, APIs and vendor file deliveries. Useful context is provided via documentation and stakeholder requirements. And specific libraries and SDKs exist to speed up the pipeline build process.

But what about the other 5% of the time, when the requirements might be structured but your data isn't?

Unstructured data comes in many forms, including incomprehensible metadata from IoT devices. I have the most experience with textual data, so I can speak to how I recommend approaching this class of data.

Since I nearly always work with structured data in my day job, I'll be speaking from my experience scraping web data, parsing text files and reading PDFs.

  • Understand the min(), max() and shape of your data; for textual data, this means knowing the first and last pages (or tokens) and the length of your document (see the sketch after this list)
  • As soon as possible, aggregate your raw data into a form you can work with; I’m partial to lists that I convert to data frame columns, but you could just as easily construct a dict()
  • Once you know what you’re looking for, leverage regex string searches to avoid processing EVERYTHING; there are plenty of online regex testers that will check your expressions as you write them
  • If you’re really lost, check the rendered output of your data; if it’s a PDF, open the file in Preview or a similar viewer
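
To make the list above concrete, here’s a minimal sketch of the first three steps in Python, assuming pandas is installed; the file name (notes.txt), the column names and the date pattern are all placeholder assumptions, not a prescription.

    import pandas as pd

    # 1. Know the shape of your data: first and last lines, plus overall length.
    with open("notes.txt", encoding="utf-8") as f:  # hypothetical raw text file
        lines = [line.strip() for line in f if line.strip()]

    print(f"First line: {lines[0]!r}")
    print(f"Last line:  {lines[-1]!r}")
    print(f"Length:     {len(lines)} non-empty lines")

    # 2. Aggregate the raw data into a form you can work with:
    #    a list that becomes a data frame column.
    df = pd.DataFrame({"raw_text": lines})

    # 3. Use a regex to target only what you need instead of processing
    #    everything; this pattern (ISO-style dates) is just an example.
    df["date"] = df["raw_text"].str.extract(r"(\d{4}-\d{2}-\d{2})", expand=False)
    dated = df.dropna(subset=["date"])
    print(dated.shape)

You could just as easily swap the list for a dict() keyed on whatever identifier your text exposes; the point is to get out of raw strings and into a structure you can query as early as possible.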

Finally, if you’re working with a particular type of data, understand what libraries are available to reduce the manual parsing that will be required.
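
As one example: if your unstructured data is a PDF, a library like pypdf will hand you pages and text so you don’t have to parse the binary format yourself. A minimal sketch, assuming pypdf is installed and a hypothetical report.pdf:

    from pypdf import PdfReader  # pip install pypdf

    reader = PdfReader("report.pdf")     # hypothetical file
    print(f"{len(reader.pages)} pages")  # the PDF equivalent of knowing your shape

    # Extract text page by page instead of hand-rolling a parser.
    pages = [page.extract_text() or "" for page in reader.pages]
    print(pages[0][:200])  # peek at the first page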

And remember, the only shape you don’t want your data in is (0,0).

Thanks for ingesting,

-Zach Quinn

Pipeline To DE

Top data engineering writer on Medium & Senior Data Engineer in media; I use my skills as a former journalist to demystify data science/programming concepts so everyone from beginners to professionals can target, land and excel in data-driven roles.
