Unsticking the sprawl in data supply chains
Business context and problem:
A major B2B data provider was stuck under the weight of data ingest and processing system sprawl and legacy technical debt. They struggled to launch new products and to add features to existing ones. Their data supply chain was mainframe-based: expensive to run, hard to maintain, and slow to change. Keeping the lights on required a large team from a major global systems integrator (GSI).
What we did:
Over the course of several months, we ran a series of proofs of concept exploring cloud-based approaches to operating their data ingest feeds, storage, product integrations, and data distribution. We weighed both augmenting their existing flows and complete green-field rewrites, and ultimately rebuilt major components of the data supply chain from scratch.
What this meant:
We reduced customer-churn risk by improving the system's overall latency and accuracy, and we dramatically cut operational costs. By simplifying the mechanisms needed to access data, we also cleared the path for faster future product creation.
Skills we used:
Data science and engineering, machine learning, software engineering, and data-pipeline hardening and optimization.