How to Audit a Site Score: 7 Questions Every Real Estate Director Should Ask

Clyde Christian Anderson

Before You Take That Score to Committee

You're looking at a site score. 87 out of 100. The location looks solid. The demographics align. Your instinct says go.

But before you walk into the expansion committee with that number, ask yourself one question: can you explain it?

Not summarize it. Explain it. Which data sources fed the score. How each component was weighted. What would change if one assumption shifted. Whether the score was built for your business model or a generic one.

I've been evaluating retail sites since I was 15, working in my family's business. I spent years in investment banking at Wells Fargo before founding GrowthFactor. And the pattern I've seen repeatedly is this: teams bring scores to committee, get asked how the number was derived, and don't have an answer.

I wrote recently about why opaque site scores are a liability in committee. This article is the practical follow-up: seven questions that let you audit any site score, from any platform, before you stake a decision on it.

What It Means to Audit a Site Score

Auditing a site score has nothing to do with financial audits or compliance reviews. It's a methodological check. You're asking: can I trace this number back to its inputs, understand how those inputs were combined, and explain why the result should be trusted?

An auditable score is one you can defend in front of people who weren't involved in generating it. An unauditable score is one you're asking the committee to take on faith.

The distinction matters because site selection decisions involve real capital. A $2 million to $4 million build-out, a multi-year lease commitment, the operational cost of opening and staffing a new location. Basing that on a number you can't interrogate is a risk that doesn't show up in the score itself.

Here are seven questions that expose whether a score is built to be questioned or built to be accepted without examination.

[Figure: comparison of an unauditable score with seven red flags versus an auditable score with six transparent layers]

Question 1: What Data Sources Feed This Score?

Every site score starts with data. The question is which data.

Mobile device signals from providers like Unacast measure foot traffic patterns. Demographic profiles come from providers like ESRI, which layers Census data with proprietary projections and spending estimates. Cotenant analysis — both competitors and complementary businesses — comes from providers like Dataplor, which tracks brand-level location data globally. Vehicle traffic counts come from state DOTs or analytics firms like StreetLight Data.

Each data source has strengths and limitations. Mobile location data captures real movement but varies by device penetration across demographics. ESRI's demographic projections are industry-standard but still model-based — they're estimates, not counts. Cotenant data is only as current as its last refresh.

When you audit a site score, ask your vendor: which data providers feed each component? Are you using observed foot traffic or modeled estimates? Where do your demographics come from — ESRI, Census direct, or a third-party projection? How current is your cotenant data?

Red flag: A vendor who can't name their data sources, or one who says the data is "proprietary" without further explanation. Proprietary models are fine. Unknown inputs are not.

Question 2: How Is Each Component Weighted?

Two sites can score identically and carry completely different risk profiles. The reason is weighting.

If foot traffic is weighted at 40% of the composite score, a site in a dense pedestrian corridor will score well even if the demographics are a poor match. If demographics carry 40%, a site in a quiet suburb with the right income profile will score well despite low traffic.

Neither weighting is wrong. But the right weighting depends entirely on how your business works. A QSR chain that depends on lunch-hour impulse stops needs heavy foot traffic weighting. A furniture retailer whose customers plan trips weeks in advance needs demographics and market potential weighted higher.

The audit question is straightforward: can you see the weights? Can you change them? If your vendor applies the same weighting formula to a taco shop and a mattress store, the model is built for simplicity, not accuracy.
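
To make that concrete, here is a minimal sketch in plain Python. The dimensions, weights, and site numbers are invented for illustration, not any vendor's actual model, but they show how the same two sites trade places when the weights change:

```python
# Minimal sketch of a weighted composite score. Dimension names, weights,
# and site numbers are invented for illustration only.

def composite(scores: dict, weights: dict) -> float:
    """Weighted average of per-dimension scores (0-100)."""
    return sum(scores[dim] * w for dim, w in weights.items()) / sum(weights.values())

corridor_site = {"foot_traffic": 92, "demographics": 61, "competition": 74}
suburb_site   = {"foot_traffic": 58, "demographics": 90, "competition": 80}

qsr_weights       = {"foot_traffic": 0.50, "demographics": 0.25, "competition": 0.25}
furniture_weights = {"foot_traffic": 0.20, "demographics": 0.50, "competition": 0.30}

for label, w in [("QSR weighting", qsr_weights), ("Furniture weighting", furniture_weights)]:
    print(label,
          f"corridor={composite(corridor_site, w):.0f}",
          f"suburb={composite(suburb_site, w):.0f}")
# QSR weighting:       corridor=80, suburb=72  (corridor ranks first)
# Furniture weighting: corridor=71, suburb=81  (suburb ranks first)
# Same two sites, same data. The ranking flips with the weights.
```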

At GrowthFactor, transparency shows up in two ways. Our AI site scores evaluate every location across five configurable lenses, each producing a 0–100 score with a plain-language justification explaining exactly why it scored the way it did. Customers adjust the weight of each lens based on what actually drives performance for their brand. A frozen dessert chain weights seasonality indicators differently than an urgent care franchise.

For customers who need revenue forecasting, we build predictive models collaboratively — working alongside the customer's team so they fully understand which variables feed the model, how they're weighted, and why. The customer isn't handed a black box forecast. They're part of building it.

Both approaches are glass box. The site score tells you why a location scored 87. The predictive model tells you why it's forecasting $2.4 million in Year 1 revenue. Different tools, same principle: if you can't explain the number, the number isn't useful.

Red flag: Fixed weights that apply the same formula across all industries and formats. A useful model is one that reflects how your specific business generates revenue, not an industry average.

Question 3: Can You See the Score Broken Down by Dimension?

A composite score is useful for sorting 200 locations during a franchise rollout. It tells you which sites to look at first. It does not tell you why one site works and another doesn't.

Consider two sites that both score 82. Site A has excellent foot traffic and poor visibility. Site B has mediocre traffic but strong demographics and low competition. The composite number is identical. The investment thesis is different. The risk factors are different. The operational strategy for each location should be different.
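
A back-of-the-envelope illustration of how easily that happens, assuming equal weights and invented numbers:

```python
# Two hypothetical sites that collapse into the same composite from very
# different profiles. Equal weights; all numbers are invented.
site_a = {"foot_traffic": 95, "visibility": 58, "demographics": 84, "competition": 91}
site_b = {"foot_traffic": 70, "visibility": 80, "demographics": 92, "competition": 86}

for name, site in [("Site A", site_a), ("Site B", site_b)]:
    print(name, sum(site.values()) / len(site))
# Both print 82.0 -- identical composites, different investment theses.
```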

If your platform gives you one number and no breakdown, you're working with a summary that hides the story underneath.

A dimensional breakdown does three things. First, it shows you where a site is strong and where it's weak, which is information that matters long after the lease is signed. Second, it lets you compare sites on specific dimensions rather than totals. Third, it gives you a conversation structure for committee: "This site scored high on traffic and demographics but lower on visibility. Here's our plan to mitigate the visibility gap."

[Figure: GrowthFactor lens breakdown showing individual dimension grades with justifications for a site]

Red flag: A platform that delivers only a composite score with no dimensional breakdown. If you can't see the components, you can't audit the result.

Question 4: Does Each Dimension Come With a Justification?

Breaking a score into dimensions is a good start. But a dimensional score without an explanation is still opaque.

Knowing that foot traffic scored 91 is better than knowing the site scored 84 overall. But knowing that foot traffic scored 91 because the adjacent grocery anchor draws 14,000 weekly visits and cross-visitation patterns show 22% overlap with your format tells you something you can actually evaluate.

Justification text is the difference between a score that informs a decision and a score that replaces one. With justifications, the committee can question specific claims: "Is 14,000 weekly visits accurate? Where does that number come from? How does 22% cross-visitation compare to our other locations?" Those are productive questions. They lead to better decisions.

Without justifications, the committee's only option is to accept or reject the number as a whole. That's not analysis. That's faith.

Red flag: Dimensional scores delivered as numbers with no accompanying explanation. A score without a reason is a conclusion without an argument.

Question 5: How Does This Score Compare to Your Top-Performing Stores?

Generic benchmarks are a weak foundation for site selection. Knowing that a site scores "above average" tells you it's above a benchmark that may have nothing to do with your portfolio.

The more useful comparison is analog matching: how does this potential site score against the data profile of your top five or ten stores? If your highest-performing locations share specific trade area characteristics, those characteristics should be the benchmark, not an industry composite.

When auditing a score, ask: does this platform compare new sites to my existing portfolio? Can I see which of my current stores this potential site most resembles, and on which dimensions?

This matters because site selection isn't abstract. You have real performance data from real locations. A platform that ignores your operating history in favor of generic benchmarks is leaving the most relevant data on the table.
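
For illustration, here is one simple way analog matching can work: rank your existing stores by how closely their trade-area profiles resemble a candidate site. The feature names, store labels, and distance metric below are assumptions for this sketch, not GrowthFactor's method:

```python
import math

def distance(a: dict, b: dict) -> float:
    """Euclidean distance between two normalized feature profiles."""
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))

# Hypothetical normalized (0-1) trade-area features.
candidate = {"foot_traffic": 0.81, "median_income": 0.64, "competition": 0.40}

portfolio = {
    "Store 12 (top performer)": {"foot_traffic": 0.78, "median_income": 0.70, "competition": 0.35},
    "Store 31 (mid performer)": {"foot_traffic": 0.55, "median_income": 0.52, "competition": 0.60},
}

# Nearest analogs first: the stores whose profiles the candidate most resembles.
for store, profile in sorted(portfolio.items(), key=lambda kv: distance(candidate, kv[1])):
    print(f"{store}: distance {distance(candidate, profile):.2f}")
# The nearest analogs, and the dimensions driving the match, are what you
# want the platform to show you.
```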

"GrowthFactor doesn't just give data. It tells us which sites actually fit how we operate. That's rare."
— Jay T., Real Estate Manager, Preferred Growth Properties

[Figure: GrowthFactor analog comparison table showing how a potential site matches against existing portfolio locations]

Red flag: Scores benchmarked against industry averages with no option to calibrate against your own portfolio data.

Question 6: What Happens If You Change One Assumption?

Every model rests on assumptions. The question isn't whether assumptions exist. It's whether you can see what happens when they shift.

Sensitivity analysis answers a simple question: if foot traffic drops 15% from current levels, does the site score change by 2 points or by 20? If a planned development nearby falls through and the cotenancy mix changes, does the score still hold?

This kind of stress testing reveals whether a model is stable or fragile. A stable model accounts for variability across multiple dimensions. A fragile model gives you a precise-looking number that falls apart under one changed condition.

Real estate directors know that conditions change. Leases get signed years before a store opens. The trade area you evaluated last quarter may look different by the time construction finishes. A score that can't be stress-tested is a score that assumes the world stays frozen.
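
A minimal sketch of that stress test, using an invented weighting and invented scores:

```python
# Sensitivity sketch: perturb one input and watch the composite respond.
# The scoring function, weights, and numbers are invented for illustration.
weights = {"foot_traffic": 0.40, "demographics": 0.35, "competition": 0.25}
site    = {"foot_traffic": 88,   "demographics": 76,   "competition": 70}

def score(s: dict) -> float:
    return sum(s[k] * weights[k] for k in weights)

base = score(site)
stressed = score({**site, "foot_traffic": site["foot_traffic"] * 0.85})
print(f"base {base:.1f}, with 15% less foot traffic {stressed:.1f}, "
      f"delta {base - stressed:.1f}")
# A small delta suggests a stable score; a large one means the number
# leans heavily on a single assumption.
```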

Red flag: No ability to adjust inputs, run scenarios, or test sensitivity. If the model gives you a number and that's the end of the conversation, the model wasn't built for real decisions.

Question 7: Can You Explain This Score in 60 Seconds?

This is the meta-question. It's the summary of the six questions above.

If you can answer questions one through six, you can explain the score in 60 seconds. You know the data sources. You know the weights. You can point to the dimensional breakdown. You can reference the justification for each lens. You can show how the site compares to your portfolio. And you can describe what would need to change for the score to look different.

If you can't do that, the score has an audit problem. And the committee will find it.

"Other services hide behind black-box models that are hard to trust. The beauty of GrowthFactor is they make site selection for us incredibly simple, and give us clear unbiased recommendations on the data when we need it."
— Mike C., Co-Owner & Head of Real Estate, Cavender's

This is what we built GrowthFactor to address. We saw teams walking into committees unable to explain their own recommendations, not because they hadn't done the work, but because their tools wouldn't let them see inside the numbers. Every site on our platform receives a score broken down across five lenses, each with its own justification, each adjustable to match how the customer sees their business.

The Red Flags Checklist

A quick reference. If your current platform triggers three or more of these, your scores have an audit problem.

Red Flag | What It Means
Vendor won't name data sources | You can't verify inputs
Fixed weights across all industries | Model isn't calibrated to your business
No dimensional breakdown | You're trusting a summary without detail
Scores with no justification text | Conclusions without arguments
Generic benchmarks only | Your portfolio data is being ignored
No scenario or sensitivity testing | Model assumes static conditions
You can't explain the score in 60 seconds | The committee will notice before you do

What a Passing Audit Looks Like

A score that passes all seven questions looks like this: you open the report and see a composite score alongside individual dimension scores. Each dimension shows its rating, the data behind it, and a plain-language justification. The weights are visible and adjustable. You can compare the site profile against your existing locations. And you can change one assumption and watch the score respond.

That's not a feature wish list. It's the minimum for a score that can survive a committee conversation.

At GrowthFactor, we built the five-lens scoring framework specifically because we watched teams struggle with this problem. Clyde's background in investment banking showed us what happens when you bring a number to a meeting without a transparent methodology: the number gets questioned, and the person presenting it has nowhere to go. The five-lens approach gives you somewhere to go. Each lens is a dimension you can discuss, defend, and adjust. And when customers need revenue forecasting on top of site scoring, we build those predictive models together — so the methodology is theirs to explain, not ours to hide.

"Before GrowthFactor, we had to create all analysis and presentation materials manually. Now we pull a report and it's ready for committee."
— Jack F., Real Estate Manager, Books-A-Million

Frequently Asked Questions

What is a site score audit?
A site score audit is a methodological review of how a location score was generated. It involves checking the data sources, examining how components are weighted, reviewing dimensional breakdowns and justifications, and testing whether the score holds up under changed assumptions. The goal is to verify that the score is defensible, not just plausible.

Who should audit site scores before expansion decisions?
Real estate directors, expansion managers, and anyone who will present a site recommendation to a committee or executive team. If you're the person who has to answer "how did you get this number?" then you're the person who needs to audit the score.

How often should site scores be re-audited?
At minimum, re-audit scores before any committee presentation. Market conditions change: demographic shifts, new competition, zoning updates, and evolving traffic patterns can all affect a score. Sites evaluated more than 90 days before a committee meeting should be re-scored with current data.

What's the difference between a site score and a revenue forecast?
A site score evaluates location quality across multiple dimensions (traffic, demographics, competition, visibility, market potential) and produces a 0–100 rating with justifications. A revenue forecast uses machine learning to estimate how much money a store at that location would generate. The score tells you whether a site is promising. The forecast tells you how promising, in dollar terms. Both should be auditable — at GrowthFactor, site scores show exactly why each lens scored the way it did, and predictive models are built collaboratively with the customer so the methodology is fully understood. Learn more about GrowthFactor's transparent scoring approach.

The Score Isn't the Decision

The score is evidence for the decision. It's one input among several, including your team's operational knowledge, your market strategy, and the lease terms on the table.

But if that evidence can't survive seven questions, it's not evidence. It's a number on a screen.

Audit your scores. Ask the questions. And if your platform can't answer them, find one that can.

Read our whitepaper on transparent site selection methodology, or schedule a demo to see how the five-lens framework works with your data.
