Why Black Box Site Scores Are a Liability, Not an Asset

Clyde Christian Anderson

The Meeting That Changes Everything

You're the real estate director. You've spent two weeks preparing a site recommendation for the expansion committee. The demographic data looks good. The foot traffic numbers support it. Your vendor's platform gave the site an 87 out of 100.

Then someone at the table asks: "How did you get that number?"

And you don't know.

Not because you didn't do your homework. Because the platform that generated the score won't tell you. The model is proprietary. The weights are hidden. The inputs are somewhere inside an algorithm that was trained on data you've never seen, tuned by data scientists you've never met.

I've been in those rooms. I grew up in a retail family and started evaluating sites when I was 15. I spent years in investment banking at Wells Fargo before founding GrowthFactor. The worst feeling in site selection isn't getting the wrong score. It's holding a number you can't explain to the people who have to sign the lease.

That's the black box problem. And it's the default in this industry.

ICSC put it well in a recent analysis: "The gap between information and conviction has become one of retail real estate's defining challenges." More data hasn't made decisions easier. More data without explanation has made them harder.

What "Black Box" Actually Means

The term gets thrown around loosely, so let's be specific.

A black box site score is one where you receive an output with no visibility into the process that produced it. In practice, that means:

  • You get a number, a letter grade, or a ranking
  • You don't see which variables were used or how they were weighted
  • You can't ask "why did Site A score higher than Site B?" and get a specific answer
  • If you disagree with the score, there's no mechanism to adjust the model
  • The vendor's response to methodology questions is some version of "trust the model"
Comparison of black box scoring versus glass box scoring workflows

This isn't a fringe problem. It's how most legacy site selection platforms have operated for decades. Not because the math is actually secret in most cases, but because the business model is built around delivering answers, not explanations. You're paying for the output. The methodology is the vendor's intellectual property.

The issue isn't that these models are bad. Some are quite good. The issue is that a score you can't interrogate is a score you can't use effectively.

Three Ways Opacity Kills Decisions

The real cost of a black box isn't an inaccurate model. Plenty of opaque models produce reasonable scores. The cost is what happens when that score meets an organization.

Retail expansion committees exist because opening a new store is a high-stakes decision. Build-out, lease obligations, staffing, and inventory typically put $1 million or more on the line per location. The committee's job is to pressure-test recommendations before committing capital.

But when the recommendation comes from a model nobody can examine, the committee breaks in one of three ways.

1. The rubber stamp. Nobody questions the score because nobody can question it. The platform said 87, so it must be good. Bad sites get approved because the data provided no grounds for productive skepticism. This is the most dangerous failure mode because it looks like the system is working.

2. The gut override. The VP with 20 years of market experience looks at the score and says, "I know this area. That number is wrong." Maybe they're right. Maybe it's ego. There's no way to reconcile their intuition with the model because the model won't show its reasoning. The experience in the room and the data on the screen exist in separate universes.

3. The paralysis. The committee can't reach a decision because the score gives them nothing to debate productively. One person trusts it, another doesn't, and the conversation devolves into opinions about the vendor rather than analysis of the site. Meanwhile, the lease option expires.

The stakes are real. With over 15,000 store closures in 2025 and rising build-out costs, there's less margin for error on every location decision. As Holly Cohen of Retail Advisory Services told ICSC: "There's just less room for error now. Each store really has to be terrific right out of the gate or it ends up being a drag on the whole P&L."

As one retail expansion VP told us: "They go to committee with a sales forecast from their platform, and the question always comes up, 'how did you get this number?' They have no idea."

Mike C., co-owner and head of real estate at Cavender's Western Wear, put it more directly: "Other services hide behind black-box models that are hard to trust. The beauty of GrowthFactor is they make site selection for us incredibly simple, and give us clear unbiased recommendations on the data when we need it."

What Transparent Scoring Looks Like

A transparent model isn't just one that shows you a score breakdown. That's table stakes. Real transparency means the model is something you build together with your data team, not something you receive from a vendor.

At GrowthFactor, every site is evaluated across multiple scoring lenses. Each one measures something distinct about a location's potential:

Demographic fit looks at whether the population around a site matches your actual customer profile. Not just income and age, but psychographic segments, spending behaviors, and lifestyle patterns that predict who walks through your door.

Foot traffic examines real visitation patterns from mobile device data. Not modeled estimates, but observed behavior. How many people actually pass this location? When? How does that compare to your existing stores?

Competition analysis maps every competitor and complementary business in the trade area. It's not just "who's nearby" but "how does their presence affect your expected performance?" A Starbucks next door might help a breakfast spot and hurt a coffee shop.

Market potential assesses whether the trade area is growing, stable, or declining. Population trends, new construction, economic indicators. A site that scores well today in a shrinking market is a different bet than the same score in a growing one.

Visibility and accessibility evaluates whether customers can find and reach the location. Road types, traffic counts, ingress and egress, signage visibility. A site with great demographics but terrible access is a trap.

GrowthFactor Deal Score showing individual lens grades with justifications for each scoring category

Here's what matters: you can see every one of these scores. You can see the data behind them. And you can change the weights.

If your brand's performance is driven more by foot traffic than demographics, you adjust the model to reflect that. The score recalculates. If you think competition should be weighted differently for urban versus suburban locations, you tell us and we rebuild. The model reflects how your team sees your business, not how a vendor's data scientists think retail works in general.
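The mechanics of reweighting are simple enough to sketch. Here's a toy illustration (the lens names, grades, and weights are hypothetical, not GrowthFactor's actual model) of how a composite score recalculates when a team shifts weight toward the lens that actually drives its business:

```python
# Toy weighted-lens score: each lens is graded 0-100, and the
# composite is a weighted average with weights normalized to sum to 1.
def deal_score(lens_grades, weights):
    """Weighted average of lens grades; weights are normalized first."""
    total = sum(weights.values())
    return sum(lens_grades[lens] * w / total for lens, w in weights.items())

# Hypothetical site graded across five lenses.
site = {
    "demographic_fit": 82,
    "foot_traffic": 95,
    "competition": 70,
    "market_potential": 88,
    "access": 60,
}

# Default weighting: every lens counts equally.
equal = {lens: 1 for lens in site}

# A foot-traffic-driven brand shifts weight toward observed visitation.
traffic_heavy = {**equal, "foot_traffic": 3}

print(round(deal_score(site, equal), 1))          # 79.0
print(round(deal_score(site, traffic_heavy), 1))  # 83.6
```

The point of the sketch: same site, same data, but the composite moves from 79.0 to 83.6 once the weights reflect the brand's priorities. In a glass box, that adjustment is something your team can make and defend; in a black box, the weights are fixed and invisible.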

When the Model Proves You Wrong

The real test of transparent scoring isn't when it confirms your intuition. It's when it challenges it.

One of our customers, a national ice cream brand, came to us with a hypothesis: locations that sell a higher percentage of pints (versus scoops) generate more revenue. It was a reasonable theory. Pint sales have higher margins and suggest brand loyalty strong enough that customers want to bring the product home.

We built a custom predictive model to test it. The result: pint mix wasn't even a statistically significant factor in predicting store revenue. The locations with the highest pint percentages weren't outperforming. The variables that actually mattered were different.

If the model had been a black box, they would have built their expansion strategy around the wrong metric and never known it. They would have prioritized locations near grocery-competitive neighborhoods (assuming pint buyers live there) and potentially passed on sites that would have outperformed.

Because the model was transparent and collaborative, they caught the false assumption before it cost them locations. That's what "glass box" means in practice. Not just seeing the score, but being able to challenge the assumptions behind it and learn something when the data disagrees with your gut.
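The kind of check involved can be illustrated with a toy permutation test on synthetic data (this is not the customer's model or data, just the general idea): if shuffling a candidate driver against revenue produces correlations as strong as the observed one most of the time, that driver carries no real predictive signal.

```python
import random
import statistics

def correlation(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def permutation_p_value(xs, ys, trials=2000, seed=0):
    """Fraction of random shuffles whose |correlation| matches or
    beats the observed one. Large values mean 'no real signal'."""
    rng = random.Random(seed)
    observed = abs(correlation(xs, ys))
    ys = list(ys)
    hits = 0
    for _ in range(trials):
        rng.shuffle(ys)
        if abs(correlation(xs, ys)) >= observed:
            hits += 1
    return hits / trials

# Synthetic example: pint mix generated independently of revenue,
# so the test should (usually) fail to find significance.
rng = random.Random(42)
pint_mix = [rng.uniform(0.1, 0.6) for _ in range(40)]
revenue = [rng.gauss(1_000_000, 150_000) for _ in range(40)]
print(permutation_p_value(pint_mix, revenue))
```

A transparent model lets you run exactly this kind of interrogation on any variable the model uses, rather than taking "pints predict revenue" on faith in either direction.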

Six Questions to Ask Any Vendor

If you're evaluating site selection platforms, here's how to test whether you're looking at a glass box or a black box. These aren't trick questions. Any vendor with a transparent methodology should be able to answer all six without hesitation.

1. Can I see every variable in my site score?
Not a summary. Not "we use demographics and foot traffic." The actual variables, with their current values for a specific site.

2. Can I adjust the weights based on my brand's priorities?
If foot traffic matters more than psychographics for your concept, can the model reflect that? Or is the weighting fixed?

3. Can I test what happens when I change an assumption?
Scenario planning. "What if we weight competition more heavily? How does that change the ranking?" If the model can't do this, it's a report, not a tool.

4. Will your team walk us through the model, variable by variable?
Not a sales demo. A working session where your analysts sit with their analysts and go through the methodology until your team understands it well enough to explain it to a committee.

5. How often is the model updated, and can I request updates?
Markets change. Data refreshes. A model built in January may not reflect what's happening in July. If the vendor updates annually (or less), you're making decisions on stale analysis.

6. Can I take this score to a committee and explain every number?
This is the real test. If your real estate director can't stand in front of the expansion committee and defend the score line by line, the model is a black box regardless of what the vendor calls it.

If the answer to any of these is "that's proprietary" or "we can show you the output but not the methodology," you know what you're working with.

Frequently Asked Questions

What is a black box site score in real estate?
A black box site score is a location evaluation where the vendor delivers a number or rating without revealing the underlying methodology, data inputs, variable weights, or model logic. The user receives the output but cannot examine, question, or adjust how the score was calculated. Most legacy site selection platforms operate this way.

What is glass box transparency in site selection?
Glass box transparency means the scoring model is fully visible and collaborative. Users can see every variable, understand every weight, adjust the model to reflect their brand's priorities, and challenge assumptions when the data contradicts their experience. The term contrasts with "black box" to emphasize that the model is auditable and built with the customer, not handed to them.

How do I evaluate whether a site selection platform is transparent?
Ask six questions: Can I see every variable? Can I adjust weights? Can I test scenarios? Will your team walk through the methodology? How often is the model updated? Can I defend this score in a committee meeting? If any answer is "that's proprietary," the platform operates as a black box. A truly transparent platform should welcome these questions.

Why does scoring transparency matter for retail expansion?
Retail expansion committees make million-dollar decisions based on site scores. When the score can't be explained, committees either rubber-stamp recommendations (approving bad sites), override data with gut instinct (ignoring good sites), or stall entirely. Transparent scoring enables productive debate, catches false assumptions, and gives real estate teams the confidence to defend their recommendations.

The Industry Is Already Moving

The push for transparency isn't theoretical. In 2024, the DOJ sued RealPage over an algorithmic pricing scheme in multifamily housing, where opaque models generated rent recommendations based on competitors' nonpublic data. The resulting settlement required an independent monitor with full access to review the model's code, training data, and runtime logic. That's forced glass box compliance. When a black box model drives bad outcomes, the question of "how did you get that number?" stops being a committee room frustration and becomes a legal one.

The site selection industry hasn't faced a comparable moment yet. But the pattern is the same: opaque models generating high-stakes recommendations that nobody can audit. The firms that get ahead will be the ones that choose transparency before they're required to.

The next generation of retail real estate professionals won't accept "trust the model." They've grown up with data. They know that a number without context is just a guess with formatting.

The teams that are expanding successfully right now are the ones who understand their own scoring methodology well enough to argue with it. Cavender's went from 9 new locations to 27 in a single year after switching to transparent scoring, with every site meeting or exceeding revenue projections. Books-A-Million's team saves 25 hours per week because the analysis is committee-ready from the start, no manual assembly needed.

That's not because the scores are magic. It's because the people making the decisions understand the scores well enough to act on them with confidence.

If your current platform can't survive the question "how did you get that number?", it's time to ask whether the convenience of a simple score is worth the cost of a decision you can't defend.

For a deeper look at the data methodology behind site selection models, see our guide to Data-Driven Site Selection. For a framework on evaluating site selection platforms, see AI Site Selection: Compare Retail Platforms. And for the full analysis on what replacing a fragmented tool stack actually looks like, download our whitepaper.

See GrowthFactor in action

Book a demo to learn how AI-powered site selection can transform your expansion strategy.