AI for Retail Site Selection: End Spreadsheet Purgatory
Written by: Andrew Teeples
What Spreadsheet-Based Site Selection Actually Looks Like
A broker sends a site for consideration. The real estate director opens a new tab. Then another. Then six more.
Demographics from the Census Bureau. Foot traffic from one platform. Competitive data from another. Zoning from the county assessor's site. Rent comps from CoStar. Drive-time analysis from Google Maps. Historical sales from an internal spreadsheet that only one person knows how to update.
Each data source requires its own login, its own export format, and its own set of assumptions. The director copies numbers into a master spreadsheet, matches rows with VLOOKUP, formats a summary, and starts building slides for the committee meeting. By the time the package is ready, days have passed and two other brands have already toured the site.
This is not an exaggeration. According to Altus Group's CRE Innovation Report, which surveyed 400 global CRE executives at firms with $250M+ in assets under management, 60% still use spreadsheets as their primary reporting tool. Half use them for valuation and cash flow analysis. Forty-five percent use them for budgeting and forecasting.
The problem is not that spreadsheets are bad tools. The problem is that spreadsheets were designed for calculation, not for managing a multi-location expansion pipeline where every site requires 8 to 12 data sources, a standardized scoring framework, and a committee-ready deliverable.
The Five Hidden Costs of Manual Site Selection
The obvious cost is time. But time is only the beginning. Spreadsheet-based site selection compounds into five categories of cost that most teams never quantify.
| Cost Category | What Happens | Business Impact |
|---|---|---|
| Time on data gathering | Team spends 60-70% of analysis time collecting and formatting data, not evaluating it | Fewer sites evaluated per quarter; pipeline bottleneck |
| Throughput ceiling | Manual process limits team to 5-10 sites per cycle instead of 30-50 | Picking from a small sample instead of the best of many options |
| Inconsistency | Different analysts use different sources, assumptions, and methodologies | Committee cannot compare sites on equal footing; decisions default to gut feel |
| Speed-to-decision | Days or weeks between broker submission and GO/NO-GO | Prime locations go to competitors who respond faster |
| Institutional knowledge loss | Scoring logic lives in one person's head or a custom spreadsheet only they maintain | Team cannot scale beyond that person's capacity; key-person risk |
The throughput ceiling is the most consequential. Best-in-class teams evaluate 30 to 50 sites per store opening. Most teams evaluate 5 to 10 because the manual process cannot handle more. The difference in outcomes is significant: choosing the best of 50 candidates produces fundamentally better locations than choosing the best of 5.
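The best-of-N intuition can be made concrete with a tiny simulation. This is a deliberately simplified sketch, not GrowthFactor's model: site quality is modeled as an independent uniform 0-100 draw, which real sites are not, but the ordering of the result holds for any reasonable distribution.

```python
import random

def best_of(n, trials=10_000, seed=42):
    """Average quality of the best candidate when n sites are evaluated.
    Quality is a uniform 0-100 draw -- a strong simplification used only
    to illustrate how the evaluation sample size drives outcomes."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += max(rng.uniform(0, 100) for _ in range(n))
    return total / trials

# Expected best of n uniform(0,100) draws is 100 * n / (n + 1),
# so best-of-5 lands near 83 while best-of-50 lands near 98.
print(f"best of 5:  {best_of(5):.1f}")
print(f"best of 50: {best_of(50):.1f}")
```

Under this toy model, the team evaluating 50 sites per opening ends up with a markedly better pick than the team evaluating 5, even though both teams apply identical judgment to each individual site.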
According to Salesforce's 2025 State of Sales report, the average seller uses 8 separate tools to close deals. For retail RE teams, that number maps closely to the reality of juggling separate platforms for demographics, foot traffic, mapping, listings, zoning, and internal tracking. Each tool adds an export, a reformatting step, and a version control headache.
Where the Spreadsheet Process Breaks Down
The spreadsheet approach works at low volume. A team evaluating 5 sites per quarter can manage the manual overhead. The process breaks at three specific inflection points.
Broker intake at scale. When a brand reaches the stage where multiple brokers are submitting sites regularly, the email-and-spreadsheet intake process collapses. Submissions arrive in different formats (PDF, email body text, phone calls). There is no structured way to log them, screen them, or route them into the evaluation pipeline. Sites get lost in inboxes.
Committee preparation. Every committee meeting requires a standardized package: site score, demographics summary, competitive landscape, trade area analysis, cannibalization assessment, and revenue forecast. Building this package manually for each site means the team spends its time assembling slides instead of analyzing data. When the CFO asks "how did you get this number?", the answer is a formula in cell J47 of a spreadsheet that was last updated two weeks ago.
Portfolio-level analysis. Evaluating a single site against brand criteria is manageable in a spreadsheet. Evaluating how a new site interacts with existing locations (cannibalization, trade area overlap, market coverage) requires spatial analysis that spreadsheets fundamentally cannot do. Teams either skip cannibalization analysis or do it manually with maps and approximations, which is how brands end up with stores that split demand rather than grow it.
What Changes When the Spreadsheet Goes Away
The transformation from spreadsheet-based to platform-based site selection is not about adding AI for its own sake. It is about collapsing the data-gathering phase so that the team's time shifts from assembly to analysis.
| Workflow Step | Spreadsheet Approach | Platform Approach |
|---|---|---|
| Broker submits a site | Email with PDF attachment; manually logged if at all | Structured portal; site auto-populates with demographics, traffic, zoning |
| Initial screening | Analyst looks up demographics and competition manually; 2-4 hours per site | Instant site score across standardized criteria; seconds per site |
| Deep analysis | Export data from 6-8 sources; VLOOKUP into master spreadsheet; build charts | Full report with trade area, analogs, cannibalization, forecast in one view |
| Committee package | Manually assemble slides from spreadsheet data; 1-2 days per site | Committee-ready report generates from pipeline data; minutes |
| Cannibalization check | Manual radius comparison or skipped entirely | Automated trade area overlap with dollar-impact estimates |
| Decision audit trail | Scattered across email threads, meeting notes, and spreadsheet versions | Every decision, score, and rationale logged in the pipeline |
The location intelligence market reflects this shift. According to Grand View Research, the market reached $24.7 billion in 2025 and is projected to hit $53.6 billion by 2030 at 16.8% annual growth. Retail brands are a significant share of that spend because the cost of a wrong location (a 10-year lease commitment plus build-out costs of $155 per square foot, per Cushman & Wakefield 2025 data) far exceeds the cost of the tools that prevent it.
What the Numbers Look Like After the Switch
The shift from spreadsheets to a purpose-built platform produces measurable changes in three areas: throughput, time, and decision quality.
Throughput. Cavender's Western Wear opened 27 new locations in 2025, up from 9 in 2024 before adopting GrowthFactor. TNT Fireworks reviews 10x more sites per committee meeting and opened 150+ locations in under six months. The throughput increase comes not from working faster on each site but from eliminating the data assembly bottleneck that limited how many sites could be processed.
Time. Books-A-Million saves 25 hours per week per user. That time was previously spent on the data-gathering and formatting steps that a platform automates entirely. A full site analysis report, including demographics, foot traffic, competitive density, cannibalization, and a composite site score, generates in roughly two seconds on GrowthFactor's platform. The manual equivalent takes hours.
Decision quality. The less visible but more valuable change is in how committees make decisions. When every site arrives with a standardized score, transparent methodology, and visible trade area analysis, the committee can compare sites on equal footing. GrowthFactor's Glass Box approach means every variable and weighting in the scoring model is visible and adjustable, so when the CFO asks "how did you get this number?", the answer is built into the report.
One frozen dessert brand hypothesized that a higher pint-to-scoop sales ratio would predict better revenue. GrowthFactor built a custom model, ran the numbers, and proved pint mix was not a significant factor. That kind of hypothesis testing, where the model adapts to how the customer sees their business, is what separates platform-based analysis from spreadsheet formulas that never get questioned.
How to Start Moving Off Spreadsheets
The transition does not need to happen all at once. Most teams that successfully move from spreadsheets to a platform follow a staged approach.
Stage 1: Standardize the scorecard. Before adding any technology, define the criteria your team uses to evaluate a site. If different analysts use different metrics, different data sources, or different thresholds, no platform will fix the underlying inconsistency. Write down the 5 to 10 factors that matter most for your brand.
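A written-down scorecard can be as simple as a table of factors and weights. The sketch below is a minimal, hypothetical example (the factor names and weights are illustrative, not GrowthFactor's); the point is that once the criteria live in one shared definition, every analyst is forced to score the same things the same way.

```python
# Hypothetical scorecard: weights sum to 1.0, each factor is scored 0-100.
SCORECARD = {
    "demographics_fit":     0.25,
    "foot_traffic":         0.20,
    "competitive_density":  0.15,  # scored so that less competition = higher
    "accessibility":        0.15,
    "rent_vs_forecast":     0.15,
    "cannibalization":      0.10,  # scored so that less overlap = higher
}

def composite_score(factor_scores: dict) -> float:
    """Weighted composite on a 0-100 scale. Raises if a factor is unscored,
    so inconsistency surfaces immediately instead of hiding in a cell."""
    missing = SCORECARD.keys() - factor_scores.keys()
    if missing:
        raise ValueError(f"unscored factors: {sorted(missing)}")
    return sum(SCORECARD[f] * factor_scores[f] for f in SCORECARD)

site = {
    "demographics_fit": 85, "foot_traffic": 70, "competitive_density": 60,
    "accessibility": 90, "rent_vs_forecast": 55, "cannibalization": 95,
}
print(f"composite: {composite_score(site):.1f}")  # -> composite: 75.5
```

Whether the scorecard lives in a platform or (initially) in a spreadsheet matters less than the fact that the weights are explicit and shared.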
Stage 2: Consolidate data sources. Identify which of your 6 to 10 data sources can be replaced by a single platform. The goal is not to add another tool to the stack but to reduce the stack. GrowthFactor consolidates demographics, foot traffic, competitive data, zoning overlays, and scoring into one view, replacing the export-and-VLOOKUP cycle.
Stage 3: Run parallel. Evaluate the next 5 to 10 sites through both your existing process and the new platform. Compare the outputs. This builds trust with your team and surfaces any gaps between what the platform provides and what your committee needs.
Stage 4: Shift the default. Once the team trusts the platform output, make it the starting point for every evaluation. The spreadsheet becomes the exception (for edge cases or custom analysis), not the default.
For a detailed comparison of site selection platforms and what to look for in an evaluation, see our site selection solutions buyer's guide. For a deeper look at the data methodology behind location scoring, see our data-driven site selection guide.
Frequently Asked Questions
How much time does manual site selection actually take?
Most retail RE teams spend 60 to 70% of their analysis time on data collection and formatting rather than evaluation. A single site evaluation that includes demographics, foot traffic, competition, and committee preparation typically consumes multiple days of work when done manually across separate tools and spreadsheets. Platform-based approaches reduce this to minutes for initial screening and hours (not days) for deep analysis.
What is wrong with using spreadsheets for site selection?
Spreadsheets are excellent calculation tools. The problem is using them as a site selection workflow: they cannot ingest broker submissions in a structured way, they cannot perform spatial analysis (trade area mapping, cannibalization modeling), they create version control issues when multiple people touch the same file, and they cannot generate standardized committee-ready reports. At low volume (fewer than 10 sites per year), the overhead is manageable. Above that, it becomes a throughput bottleneck.
What does AI site selection software cost?
Pricing varies widely by platform type. Self-serve platforms with AI scoring start around $400 per month. Full-service consulting engagements with custom modeling run $50,000 to $200,000+ per project. Enterprise data subscriptions (foot traffic, demographics, competitive data) range from $10,000 to $100,000+ per year depending on coverage. The cost comparison that matters is not the subscription fee but the cost of a wrong location: a 10-year lease with $155 per square foot in build-out costs represents a seven-figure commitment per site.
How many sites should a retail brand evaluate per opening?
Best-in-class teams evaluate 30 to 50 sites per store opening. Most teams evaluate 5 to 10 because manual processes limit throughput. The difference in outcomes is significant: a larger evaluation sample improves the probability of finding optimal locations and reduces the risk of settling for sites that merely clear a minimum threshold.
How do you make site selection data committee-ready?
Committee-ready means the site package answers "how did you get this number?" without requiring the presenter to reference external spreadsheets. A complete committee package includes: a composite site score with visible methodology, demographics analysis of the trade area, foot traffic and accessibility data, competitive landscape, cannibalization risk assessment with dollar-impact estimates, analog store comparisons, and a revenue forecast. Platform-based tools generate this package as a byproduct of the evaluation process.
Can spreadsheets handle cannibalization analysis?
Not effectively. Cannibalization analysis requires spatial modeling: determining where trade areas overlap and estimating the revenue impact on existing stores. Spreadsheets can approximate this with radius-based comparisons, but radius rings assume customers travel equal distances in all directions, which is rarely true. One GrowthFactor customer discovered their actual trade area was 23 minutes (based on mobility data), not the 16 minutes they had assumed from a radius model, revealing overlap with a nearby store that manual analysis had missed.
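The radius-model failure mode is easy to see in miniature. The sketch below is a crude illustration under stated assumptions (circular trade areas, a flat 30 mph conversion from drive-time to distance); the store spacing and speed are hypothetical, and real isochrones are not circles, which is exactly why mobility data beats this approximation.

```python
def trade_areas_overlap(distance_miles, minutes_a, minutes_b, mph=30):
    """Crude radius-model check: two circular trade areas overlap when the
    distance between stores is less than the sum of their radii.
    Converting drive-time to distance with a single mph figure is itself
    a simplification -- real trade areas stretch along road networks."""
    radius_a = minutes_a / 60 * mph
    radius_b = minutes_b / 60 * mph
    return distance_miles < radius_a + radius_b

# Stores 18 miles apart. An assumed 16-minute trade area (~8 mi radius)
# shows no overlap; a measured 23-minute area (~11.5 mi radius) does.
print(trade_areas_overlap(18, 16, 16))  # -> False
print(trade_areas_overlap(18, 23, 23))  # -> True
```

Even within the radius model, getting the trade-area size wrong flips the cannibalization answer entirely, which is the gap the GrowthFactor customer above fell into.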
Does AI site selection replace human judgment?
No. AI handles the data-intensive steps that consume the majority of analyst time: gathering, standardizing, scoring, and visualizing location data. The human layer remains essential for interpreting scores in context, applying brand-specific knowledge, and making the final GO/NO-GO decision. The goal is to shift human effort from data assembly to data interpretation.
What is the difference between "glass box" and "black box" AI in site selection?
Black box models produce a score without showing how it was calculated. Glass box models show every variable, every weighting, and every data source that contributed to the score. The distinction matters most in committee settings: when a CFO asks why a site scored 82 out of 100, a glass box model can show that demographics contributed 18 points, foot traffic contributed 15, and competitive density reduced the score by 8. A black box model cannot.
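The glass-box idea is simply that the score is an auditable sum. The sketch below uses the three contributions named above (demographics +18, foot traffic +15, competitive density -8); the remaining factors are illustrative placeholders, not GrowthFactor's actual variables, chosen so the total lands at the 82 in the example.

```python
# Hypothetical glass-box breakdown: every contribution is listed and signed,
# and the contributions sum exactly to the headline score.
contributions = {
    "demographics":         +18,
    "foot_traffic":         +15,
    "competitive_density":   -8,
    # The factors below are illustrative placeholders.
    "accessibility":        +14,
    "rent_vs_forecast":     +12,
    "analog_performance":   +16,
    "cannibalization_risk": +15,
}

score = sum(contributions.values())
print(f"site score: {score}/100")
for factor, pts in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {factor:<22}{pts:+d}")
```

Because the score is nothing more than the sum of its visible parts, the CFO's "why 82?" has a one-line answer, and changing a weighting changes the score in a way everyone can trace.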
How long does it take to transition from spreadsheets to a platform?
Most teams can run their first site evaluation on a platform within days of setup. Full transition (where the platform becomes the default workflow rather than a supplement) typically takes 4 to 8 weeks, with the primary variable being team adoption rather than technical implementation. Starting with parallel evaluation (running 5 to 10 sites through both old and new processes) builds confidence and identifies any gaps.
Is AI site selection worth it for a small real estate team?
The ROI depends on evaluation volume and the cost of your locations. A team evaluating fewer than 10 sites per year with lease commitments under $100,000 annually may find that spreadsheets plus basic analytics are sufficient. A team evaluating 20+ sites per year with seven-figure lease commitments will likely find that the time savings and decision quality improvements pay for the platform within the first quarter. Books-A-Million, for context, saves 25 hours per week per user.