Extract. Transform. Read. A newsletter from Pipeline

Hi past, present or future data professional!

As difficult as data engineering can be, 95% of the time the data that originates from external streams, APIs and vendor file deliveries has a structure. Useful context is provided via documentation and stakeholder requirements, and specific libraries and SDKs exist to speed up the pipeline build process. But what about the other 5% of the time, when your requirements are structured but your data isn't?

Unstructured data comes in many forms, including incomprehensible metadata from IoT devices. I have the most experience with textual data, so I can speak to how I recommend approaching this class of data. Since I nearly always work with structured data at my day job, I'll be speaking from my experience scraping web data, parsing text files and reading PDFs.
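To make that concrete, here is a minimal sketch of what imposing structure on scraped text can look like. The sample lines, field names and pattern are all hypothetical; a real scrape or PDF extract would need its own pattern, but the shape of the work — match what you can, skip what you can't — is the same.

```python
import re

# Hypothetical sample: lines scraped from a web page, loosely structured.
raw_text = """
Product: Widget A | Price: $19.99 | Stock: 14
Product: Widget B | Price: $5.49 | Stock: 0
Junk line with no usable fields
Product: Widget C | Price: $102.00 | Stock: 7
"""

# One pattern per record type; lines that don't match are skipped, not crashed on.
PATTERN = re.compile(
    r"Product:\s*(?P<name>.+?)\s*\|\s*"
    r"Price:\s*\$(?P<price>[\d.]+)\s*\|\s*"
    r"Stock:\s*(?P<stock>\d+)"
)

def parse_records(text: str) -> list[dict]:
    """Extract structured rows from semi-structured text."""
    rows = []
    for line in text.splitlines():
        match = PATTERN.search(line)
        if match:
            rows.append({
                "name": match["name"],
                "price": float(match["price"]),
                "stock": int(match["stock"]),
            })
    return rows

records = parse_records(raw_text)
```

From here the rows can go straight into a DataFrame or a load job, which is the whole point: once the text has a schema, the rest of the pipeline is ordinary data engineering.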
Finally, if you're working with a particular type of data, understand what libraries are available to reduce the manual parsing required. And remember, the only shape you don't want your data in is (0, 0). Thanks for ingesting, -Zach Quinn
Reaching 20k+ readers on Medium and over 3k learners by email, I draw on my 4 years of experience as a Senior Data Engineer to demystify data science, cloud and programming concepts while sharing job hunt strategies so you can land and excel in data-driven roles. Subscribe for 500 words of actionable advice every Thursday.