A growth-stage DTC apparel brand asked us to do a two-week Blueprint. They were running on Shopify, hitting a ceiling on the questions they could answer, and wanted to know what it would take to get to a real data platform. Their hypothesis going in was that they needed a warehouse. Our hypothesis going in was that they needed a warehouse for the right reasons, which is a different conversation.
This is a sanitized version of how those two weeks actually played out. The brand operates in the $20M to $40M revenue band, sells through Shopify and a small wholesale channel, and has a six-person operating team plus a fractional CFO. They have a decent budget for the data work, an analyst on staff, and a CEO who is genuinely curious about getting better answers.
What we looked at
We started where every Blueprint starts. Stakeholder interviews across finance, ops, marketing, and CX. An audit of every report and spreadsheet the team relied on. A tracing exercise that mapped every metric in active use back to the source system and the calculation behind it.
The interview list was small. Four department leads, the head of analytics, the CEO. We ran sixty-minute conversations with each, focused on three questions: what decisions do you make on a recurring cadence, where do you currently get the data, and where do you not trust it. Total: six hours of interviews, three hours of audit time, and about eight hours mapping the metric lineage.
The audit covered every spreadsheet, every Shopify report, every Looker Studio dashboard the team had stood up over the prior two years. We found forty-three reports across the various surfaces. Sixteen were in active use and tied to a decision. Twelve were still running, but nobody could name the decision they supported. The remaining fifteen were broken or out of date, and nobody had noticed.
What we found
The team had assumed the problem was that data was hard to get. The actual problem was that data was easy to get and impossible to trust.
The finance team, the marketing team, and the operations team each had their own version of revenue. Three definitions, three numbers, three reporting cadences. None of them reconciled. The CEO had stopped trusting any single number and was triangulating between the three in her head.
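To make the mechanism concrete (with invented numbers, not the client's): three reasonable-sounding definitions of revenue, applied to the same orders, produce three totals that will never reconcile. Each team's formula below is a hypothetical stand-in, but the shape of the problem is exactly this.

```python
# Hypothetical orders; every amount is illustrative, not the client's data.
orders = [
    {"gross": 120.0, "discount": 20.0, "refund": 0.0,  "shipping": 8.0},
    {"gross": 80.0,  "discount": 0.0,  "refund": 80.0, "shipping": 6.0},
    {"gross": 200.0, "discount": 10.0, "refund": 0.0,  "shipping": 0.0},
]

# "Finance revenue": net of discounts and refunds.
finance_rev = sum(o["gross"] - o["discount"] - o["refund"] for o in orders)

# "Marketing revenue": gross bookings, refunds ignored.
marketing_rev = sum(o["gross"] for o in orders)

# "Ops revenue": net of discounts, shipping included, refunds ignored.
ops_rev = sum(o["gross"] - o["discount"] + o["shipping"] for o in orders)

print(finance_rev, marketing_rev, ops_rev)  # three numbers, none reconcile
```

Each definition is defensible on its own terms, which is why the disagreement survived for years: nobody was wrong, there was just no canonical answer.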
Inventory was worse. Reorder decisions were running on Excel snapshots that were already a week stale. Cohort and lifetime value analysis was effectively unavailable because customer identity wasn't reconciled between the email platform, the loyalty app, and Shopify. The team knew their CAC. They didn't know, with any confidence, which acquisition channels produced retained customers.
The stack itself wasn't the issue. They had perfectly good systems. The issue was that nothing was modeled, nothing was tested, and nobody had explicit ownership of any of the definitions. The data was technically present and operationally absent.
What we almost missed
Two findings in the second week changed our recommendations more than anything in the first.
The first was that the wholesale channel, which leadership treated as a footnote, was masking a cohort effect on the DTC side. The DTC retention curve looked healthier than it actually was because some of the customers being counted as retained were actually new wholesale orders re-attributed to the email subscriber list. Once we segmented the cohort properly, the retention story shifted significantly. That insight alone was worth the Blueprint.
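The mechanism is easy to sketch with toy data. If wholesale orders slip into the retained set, the blended retention rate overstates the DTC-only rate. Everything below is invented for illustration; the real analysis ran over the full order history.

```python
# Hypothetical order log: (customer_id, month, channel). Illustrative only.
orders = [
    ("c1", 1, "dtc"), ("c1", 2, "dtc"),        # genuinely retained DTC customer
    ("c2", 1, "dtc"),                           # churned DTC customer
    ("c3", 1, "dtc"), ("c3", 2, "wholesale"),   # wholesale order miscounted as DTC retention
]

def month2_retention(orders, channels):
    """Share of the month-1 DTC cohort with a month-2 order in the given channels."""
    cohort = {c for c, m, ch in orders if m == 1 and ch == "dtc"}
    retained = {c for c, m, ch in orders if m == 2 and ch in channels and c in cohort}
    return len(retained) / len(cohort)

blended = month2_retention(orders, {"dtc", "wholesale"})  # 2/3, flattering
dtc_only = month2_retention(orders, {"dtc"})              # 1/3, the real curve
```

The fix is one filter, but you can only apply the filter once the channel attribution is trustworthy, which is exactly what the unreconciled identity data was preventing.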
The second was that the team had been about to sign a contract with a CDP vendor based on the assumption that they needed customer identity resolution as a separate product. After two weeks of audit, it was clear they didn't. The identity reconciliation could be handled inside dbt with the existing Shopify data and the email platform's API, at roughly a tenth of the cost. Surfacing that one finding paid for the Blueprint several times over.
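In the actual build the reconciliation was a dbt SQL model; the matching logic it implements can be sketched in Python. Field names, source shapes, and the normalization rules here are all assumptions, not the client's schema, and email normalization is a deliberately lossy heuristic.

```python
import re

def normalize_email(email: str) -> str:
    """Lowercase, trim, and drop +tags before the @ (a common, lossy heuristic)."""
    local, _, domain = email.strip().lower().partition("@")
    local = re.sub(r"\+.*$", "", local)
    return f"{local}@{domain}"

# Hypothetical records from the three systems; ids and emails are invented.
shopify = [{"shopify_id": "s1", "email": "Jane.Doe@example.com"}]
email_platform = [{"esp_id": "e9", "email": "jane.doe+promo@example.com"}]
loyalty = [{"loyalty_id": "l4", "email": " jane.doe@example.com "}]

# Fold all three sources into one record per normalized email.
identity = {}
for source, id_field in ((shopify, "shopify_id"),
                         (email_platform, "esp_id"),
                         (loyalty, "loyalty_id")):
    for rec in source:
        identity.setdefault(normalize_email(rec["email"]), {})[id_field] = rec[id_field]

# -> {"jane.doe@example.com": {"shopify_id": "s1", "esp_id": "e9", "loyalty_id": "l4"}}
```

That is the whole product, minus edge cases. A CDP earns its keep when identity spans devices and anonymous sessions; for three systems that all key on email, a tested model in the warehouse is enough.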
The three recommendations
We presented three recommendations at the end of week two, ranked by how much they would change the operating cadence per dollar invested.
First: stand up a warehouse and a dbt semantic layer with a single canonical definition for revenue, customer, and inventory. This was the foundation. Without it, nothing else could compound. Estimated lift: twelve weeks for the build.
Second: stand up Looker on top of that semantic layer with three core dashboards designed around the actual operating reviews (weekly leadership standup, monthly business review, board prep). Not a long list of dashboards, three good ones. Estimated lift: four weeks, in parallel with the back half of the warehouse build.
Third: build a small data team capable of operating it. Two analysts, one analytics engineer, with explicit metric ownership across the three core domains. Estimated lift: two months of recruiting, partly overlapping with the build.
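What "single canonical definition" means in practice, sketched with assumed field names: the metric is computed in exactly one place, and every surface consumes that one computation. In the actual build this lived in a tested dbt model rather than Python, but the principle is the same.

```python
# Hypothetical canonical definition; the formula and fields are assumptions.
def net_revenue(order: dict) -> float:
    """Gross minus discounts minus refunds; shipping excluded by definition."""
    return order["gross"] - order["discount"] - order["refund"]

# Every surface -- weekly standup, monthly review, board prep -- calls this
# one function instead of recomputing its own version of revenue.
example = {"gross": 120.0, "discount": 20.0, "refund": 0.0}
```

The point is not the formula. The point is that when the formula changes, it changes in one place, and every dashboard moves together.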
The ranking mattered. The team's instinct was to do all three at once. We pushed back on that and recommended sequencing, with the warehouse and semantic layer first, because the dashboards and the team are both more valuable on top of a clean foundation than they are running in parallel with one being built.
What got built first
They picked the first two recommendations. The third got reframed as a hiring plan rather than a Blueprint deliverable, with us providing fractional analytics support during the build.
The Build started a week after the Blueprint wrapped. Twelve weeks later, the warehouse was live, the semantic layer was tested, and the dashboards were running. The CEO stopped triangulating between three revenue numbers because there was now one. That alone largely paid for the engagement; everything else compounded on top.
Four weeks after launch, the marketing team was running cohort analysis they hadn't been able to run before. Eight weeks after launch, the ops team had retired three of the spreadsheets they used to depend on. By the twelve-week mark, the analyst on staff was committing dbt models alongside our team and was on track to lead the next round of work himself. The data lead, hired during the Build phase, started day one with a stack that was already trustworthy, instead of inheriting the mess that most data leads walk into.