Five Providers, Five Versions of "Foot Traffic"
You'd think foot traffic data would be straightforward. Count the people who walk into a store. Report the number.
It's not. Every provider uses a different panel, a different methodology, and a different definition of what counts as a "visit." One provider's 10,000 weekly visits to a strip mall might be another's 7,200. Both are estimates. Neither is wrong, exactly. They're measuring different things with different tools.
For site selection teams in 2026, the question isn't "which provider has the best data." It's "which provider measures what I actually need, and how much should I trust the number?" This comparison covers five major providers, what each one gives you, and where every one of them falls short.
How Foot Traffic Data Actually Works
Before comparing providers, it helps to understand what you're buying. Every provider in this space follows roughly the same process.
Step 1: Collect location signals. A panel of mobile devices sends GPS coordinates to apps that have opted users into location sharing. The panel might be 20 million or 40 million devices, depending on the provider. No provider covers every phone in the country. They're all working from a sample.
Step 2: Match signals to places. When a device pings near a known business location (a "point of interest" or POI), the provider attributes that ping as a visit. GPS drifts. Buildings have multiple tenants. A device near a strip mall could be visiting the grocery store, the nail salon, or the parking lot. The accuracy of the POI database determines how reliable this step is.
Step 3: Extrapolate to the full population. If a provider's panel represents roughly 5% of the population in a given area, they multiply observed visits to estimate totals. This extrapolation uses demographic weighting, geographic adjustments, and corrections for known panel biases (age, income, device type). This step is where providers diverge most.
Each step introduces error. Panel bias, POI mapping accuracy, and extrapolation assumptions compound. The result is an estimate that varies in quality by provider, by geography, and by location type.
Every foot traffic provider runs the same three steps. Each step is a different source of error. The final visit number carries all three.
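To make the extrapolation step concrete, here's a minimal sketch with invented panel numbers (no provider publishes its actual model). It shows why segment-level demographic weighting produces a different estimate than a single blended penetration rate, which is exactly where providers diverge:

```python
# A minimal sketch of step 3 (extrapolation), with invented panel numbers.
# Real providers use far richer bias models; this shows only the arithmetic.

segments = {
    # segment: (panel visits observed, estimated panel penetration in segment)
    "18-34": (230, 0.08),    # younger users over-represented in the panel
    "35-64": (120, 0.04),
    "65+":   (30,  0.015),   # older users under-represented
}

# Segment-aware estimate: extrapolate each group with its own penetration.
estimate = sum(visits / pen for visits, pen in segments.values())

# Naive estimate: one blended 5% penetration rate for everyone.
naive = sum(visits for visits, _ in segments.values()) / 0.05

print(round(estimate))  # 7875
print(round(naive))     # 7600
```

Same raw pings, two defensible models, a 275-visit gap. Scale that across thousands of POIs and you get the provider-to-provider disagreement described above.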
Provider-by-Provider Breakdown
Five providers, three broad methodologies: GPS device panels (Placer.ai, SafeGraph, Unacast), first-party check-ins layered with SDK data (Foursquare), and transportation probes (StreetLight). For each provider: what it measures best and where it falls short.
Placer.ai
What it measures: Estimated visits to US locations based on a mobile device panel.
Panel: Tens of millions of devices sourced from opted-in mobile applications. Placer's data documentation describes a debiasing model that corrects for age, geography, and iOS/Android distribution.
Methodology: GPS data from app SDKs, processed through proprietary ML algorithms. Visits are estimated and extrapolated for each location. Placer reports 90%+ correlation with first-party data sources in validation studies.
What you get: A polished analytics dashboard with visit trends, visitor demographics, trade area maps, competitive benchmarking, and cross-shopping analysis. Placer's strength is visualization and trend analysis across retail, dining, and entertainment categories. Their free tools (industry indices, POI lookups) are genuinely useful for quick research.
What you don't get: Placer's datasets come pre-organized for specific analysis types. That makes their standard analyses clean and easy, but it also means you're working within their framework. If you need a different cut of the data or want to understand the assumptions behind the extrapolation, there's limited transparency into the modeling layer. The platform is US-only for foot traffic coverage.
Best for: Retail analysts who need visit trends, competitive benchmarking, and trade area visualization. Teams that consume foot traffic data for reporting and presentations rather than building custom models.
Honest limitation: The platform can feel overwhelming. One real estate practitioner described it as "paralysis by analysis" because of the sheer volume of data surfaces. If your team doesn't have a dedicated analyst, the depth can work against you.
SafeGraph (Dewey)
What it measures: Visit counts and patterns at US points of interest, plus a detailed POI database.
Panel: Approximately 40 million cellular devices that have opted into location sharing.
Methodology: GPS signals matched to POIs using building footprint polygons (not just centroids, which matters for accuracy in dense areas). The Weekly Patterns dataset includes hourly visit counts for roughly 4 million US POIs, with NAICS code standardization for consistent categorization.
What you get: Raw visit data with hourly granularity, a well-structured POI database, and developer-friendly delivery (APIs, flat files, cloud marketplace). SafeGraph's POI quality is its strongest differentiator. The data is vetted for accuracy: if a location is in the database, it's been verified as a real, operating business.
What you don't get: SafeGraph is a data provider, not an analysis platform. You buy the data and bring your own tools for visualization and analysis. There's no built-in dashboard for trade area mapping or competitive benchmarking. You need a GIS analyst or data team to turn raw SafeGraph data into site selection decisions.
Best for: Data teams that want raw visit data to feed into proprietary models. Researchers and analysts who need hourly granularity and clean POI data. Organizations with the technical capacity to work with flat files or API integrations.
Honest limitation: A peer-reviewed study in PLOS One found demographic bias in SafeGraph's panel: older adults and non-white populations were underrepresented in visit data at polling locations. This matters for any site selection team evaluating locations in diverse or aging markets. The panel may not reflect who actually visits.
Unacast
What it measures: Mobility patterns, foot traffic, and migration trends based on GPS smartphone data.
Panel: Millions of devices (exact panel size not publicly disclosed). Unacast sources first-party GPS data through direct partnerships with app providers, with opt-in consent from users.
Methodology: A three-step process: gather GPS signals, extrapolate to estimate total visits, and correct for known biases. The correction step adjusts for panel composition and geographic coverage gaps.
What you get: Trade area analysis, migration trend data, competitive visit analysis, and cross-visitation patterns. Unacast is strong on mobility analytics: understanding not just how many people visit a location, but where they came from, how far they traveled, and what other locations they visit.
What you don't get: Unacast's POI data is largely inferred from mobility patterns and data suppliers, not independently researched. They provide geographic coordinates for locations but not building polygons. In a dense retail corridor where three businesses share a building, coordinate-level data may not tell you which business a visitor actually entered.
Best for: Real estate teams focused on migration analysis, trade area mapping, and understanding visitor origin patterns. Organizations evaluating market-level trends, not just individual site visits.
Honest limitation: Like most providers, Unacast's coverage is primarily US-based for detailed foot traffic data. International location intelligence exists but with less granularity. The inferred POI data means you should cross-check location accuracy against your own site records, especially for smaller or newer locations.
Foursquare
What it measures: Visits, foot traffic trends, and audience segments based on first-party location data and a massive POI database.
Panel: Visit data sourced from first-party check-ins, SDK partnerships, and location signals. The POI database spans over 100 million places across 200+ countries, the largest global coverage of any provider in this comparison.
Methodology: Foursquare's origin as a consumer check-in app gives it a unique data asset: years of first-party visit confirmations (people actively saying "I'm here"). The current platform layers SDK-sourced location data on top of this historical check-in data, with third-party audit verification of data quality.
What you get: The broadest global POI coverage of any provider. Foursquare Analytics provides chain-level foot traffic dashboards with competitive benchmarking. Foursquare Attribution measures how ad campaigns drive store visits (550+ media partners). Foursquare Audience enables visitation-based targeting segments.
What you don't get: Foursquare's analytics are built more for marketing than for site selection. The platform excels at "did our ad campaign drive visits?" and "how does our chain compare to competitors this quarter?" It's less suited to "should I sign a lease at this specific strip mall?" The historical check-in data skews younger and more urban than the general population.
Best for: Marketing teams measuring campaign-to-visit attribution. Brands that need global POI data. Organizations that want to build audience segments based on real-world visitation behavior.
Honest limitation: The shift from consumer app to enterprise platform means Foursquare's data has two layers: the legacy check-in data (strong on restaurants and entertainment, weaker on industrial and medical) and the newer SDK data (broader but less self-reported). Understanding which data layer drives your specific analysis matters for accuracy assessment.
StreetLight Data (Jacobs)
What it measures: Vehicular traffic, pedestrian movement, and multimodal transportation patterns.
Panel: 40 billion anonymous location probes per month from smartphones and connected vehicle navigation systems.
Methodology: Location signals are matched to vehicle and pedestrian trips using road network data, Census data, and proprietary algorithms. Results are validated against ground-truth data (physical traffic counts, sensor data). StreetLight publishes an annual methodology white paper and provides AADT (Annual Average Daily Traffic) estimates.
What you get: The best vehicular traffic data available without physical sensors. Origin-destination matrices, travel time analyses, and multimodal breakdowns (cars, bikes, pedestrians). Validated against real-world traffic counts, which most foot traffic providers don't offer.
What you don't get: StreetLight answers "how many cars drive past this location?" not "how many people walk into this store." It's a transportation platform, not a retail foot traffic platform. Valuable for drive-through concepts and pad sites, but it doesn't measure store visits.
Best for: QSR and drive-through concepts where road traffic volume matters more than pedestrian visits. Site selection teams evaluating pad sites, outparcels, and highway-adjacent locations. Anyone who needs validated vehicular AADT data.
Honest limitation: The interface assumes transportation planning expertise. If your team doesn't think in O/D matrices and trip generation rates, the data requires translation. StreetLight's methodology page has good documentation, but the learning curve is steeper than Placer or Foursquare.
Where Every Provider Is Wrong (or at Least Imprecise)
No foot traffic data provider will tell you this on their marketing page, but every methodology shares the same structural weaknesses.
Panel bias is real. Mobile device panels skew toward younger, urban, higher-income populations. People who download apps that share location data are not a random sample of the population. The PLOS One study on SafeGraph data documented this across spatial scales, but the finding applies to every GPS-based provider. If you're evaluating a site in a rural market or an area with an older demographic, the foot traffic numbers may systematically undercount actual visits.
Rural areas are the blind spot. Low device density in rural locations means fewer data points to extrapolate from. A provider might have 5,000 devices in a metro area and 50 in a rural county. The metro estimate is statistically meaningful. The rural estimate is barely more than a guess. If your expansion strategy includes secondary and tertiary markets, you need to ask your provider directly: "What's your panel density in this specific market?"
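The panel-density point is easy to quantify. A rough sketch (treating observed panel visits as a Poisson count, which is my simplifying assumption, not any provider's actual error model) shows how the confidence interval widens as the panel thins, even at identical penetration rates:

```python
import math

def estimate_with_ci(panel_visits, penetration, z=1.96):
    """Extrapolated visit total with a rough 95% relative half-width.

    Assumes panel visits behave like a Poisson count, so the standard
    error before scaling is sqrt(panel_visits). Illustrative only.
    """
    total = panel_visits / penetration
    se = math.sqrt(panel_visits) / penetration
    return total, z * se / total  # estimate, relative +/- margin

metro_est, metro_err = estimate_with_ci(5000, 0.05)  # dense metro panel
rural_est, rural_err = estimate_with_ci(50, 0.05)    # sparse rural panel

print(f"metro: {metro_est:,.0f} visits, +/-{metro_err:.0%}")  # ~ +/-3%
print(f"rural: {rural_est:,.0f} visits, +/-{rural_err:.0%}")  # ~ +/-28%
```

A 100x smaller device count makes the relative margin roughly 10x wider (it scales with the square root of the sample). That's the statistical content behind "barely more than a guess."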
Visit attribution in multi-tenant centers is imprecise. A GPS ping near a strip mall doesn't tell you which store the person visited. Providers use different methods to solve this (building polygons vs. centroid radius vs. dwell time thresholds), and the accuracy varies by location geometry. A freestanding building is easy. An inline tenant in a 400,000-square-foot shopping center is hard. If co-tenancy evaluation matters to your decision (and it should), foot traffic data at the individual tenant level should be treated as directional, not precise.
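To see why the attribution method matters, here's a toy sketch (coordinates and tenant names invented) comparing a building-polygon test against a centroid-plus-radius test for the same GPS ping in a shared parking area:

```python
import math

# Hypothetical layout: two inline tenants in one multi-tenant center.
grocery_poly = [(0, 0), (40, 0), (40, 30), (0, 30)]  # footprint corners
salon_centroid, salon_radius = (55, 15), 25          # centroid + radius

ping = (45, 15)  # GPS ping in the parking lot between the two tenants

def in_polygon(pt, poly):
    """Ray-casting point-in-polygon test."""
    x, y = pt
    inside = False
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

# Polygon method: the ping is outside the grocery footprint, so no
# visit is attributed.
print(in_polygon(ping, grocery_poly))  # False

# Centroid-radius method: the ping falls within 25 units of the salon
# centroid, so a visit is attributed even though the person may never
# have entered.
print(math.dist(ping, salon_centroid) <= salon_radius)  # True
```

One ping, two methods, two different answers. This is why per-tenant numbers in dense centers should be read as directional.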
"Validated against first-party data" means less than you think. Several providers report 90%+ correlation with real-world data. But correlation at the aggregate level (monthly visits to a chain) doesn't guarantee accuracy at the individual site level (Tuesday afternoon traffic at one location). The sites where foot traffic data matters most, new or unfamiliar locations, are exactly the sites where validation data doesn't exist.
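The gap between aggregate correlation and site-level accuracy is easy to demonstrate with invented numbers. Six hypothetical stores, plausible-looking estimates, a correlation any provider would put on a marketing page, and still large per-site misses:

```python
import math

# Invented first-party monthly visits vs. provider estimates, six stores.
actual   = [12000, 9500, 21000, 4800, 15500, 7200]
estimate = [14500, 7800, 23800, 7900, 13100, 9900]

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    denom = math.sqrt(sum((x - mx) ** 2 for x in xs)
                      * sum((y - my) ** 2 for y in ys))
    return cov / denom

r = pearson(actual, estimate)
errors = [abs(a - e) / a for a, e in zip(actual, estimate)]

print(f"aggregate correlation: {r:.2f}")                         # ~0.91
print(f"mean site-level error: {sum(errors) / len(errors):.0%}") # ~28%
print(f"worst site error: {max(errors):.0%}")                    # ~65%
```

A 0.91 correlation and a 65% miss on one site can coexist comfortably. The headline validation number and the accuracy at the one location you're about to lease are different questions.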
How to Choose: A Decision Framework
The right provider depends on what you're trying to do.
| If you need... | Consider | Why |
|---|---|---|
| Visit trends and competitive benchmarking | Placer.ai | Best visualization and retail-focused analytics |
| Raw visit data for custom models | SafeGraph (Dewey) | Cleanest POI data, hourly granularity, developer-friendly |
| Migration and visitor origin analysis | Unacast | Strongest mobility and trade area analytics |
| Global POI coverage or ad attribution | Foursquare | Largest POI database, marketing measurement tools |
| Vehicular traffic and road-level data | StreetLight | Validated AADT, transportation-grade analytics |
The question most teams skip: Do you need a foot traffic data provider, or do you need foot traffic data as part of a broader site evaluation?
If foot traffic is the only variable you're evaluating, a dedicated provider makes sense. But foot traffic tells you how many people pass by or visit. It doesn't tell you if those people match your customer profile. It doesn't tell you if the competitive environment supports another location. It doesn't tell you if the site is visible from the road or zoned for your use.
A site scoring framework that treats foot traffic as one input among several (demographics, competition, market potential, visibility) will produce better site decisions than a foot traffic number alone, no matter which provider supplied it. We covered why in The Real Cost of Running Your Expansion on 6 Different Tools: subscribing to a foot traffic provider, a demographics tool, a mapping platform, and a spreadsheet-based scoring system creates the exact tool fragmentation that slows expansion teams down.
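As a concrete illustration of that framework, a weighted composite score might look like the sketch below. The weights and factor values are invented for illustration; they are not from any provider or scoring product mentioned here:

```python
# Hypothetical weighted site score treating foot traffic as one input of
# five. Weights and factor names are illustrative, not from any vendor.
weights = {
    "foot_traffic": 0.25,
    "demographic_fit": 0.25,
    "competition": 0.20,
    "market_potential": 0.20,
    "visibility": 0.10,
}

def site_score(factors):
    """Each factor is pre-normalized to 0-100; returns a weighted composite."""
    return sum(weights[name] * factors[name] for name in weights)

candidate = {
    "foot_traffic": 85,      # strong visit volume...
    "demographic_fit": 40,   # ...but a weak customer-profile match
    "competition": 55,
    "market_potential": 60,
    "visibility": 70,
}
print(round(site_score(candidate)))  # 61
```

Note what happens: an 85 on foot traffic gets pulled down to a middling composite because the demographic fit is poor. A foot-traffic-only evaluation would have ranked this site near the top.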
The Real Question: What Changes Your Decision?
I've seen teams spend $50,000 a year on a foot traffic data platform and still make bad site decisions. The data was accurate. The interpretation was wrong.
Foot traffic volume without context is noise. A location with 50,000 weekly visits in a tourist district and a location with 15,000 weekly visits in a residential neighborhood could be equally good or equally bad for your concept. The tourist location has volume but low repeat rates. The residential location has lower volume but higher conversion and loyalty.
What changes your decision isn't the foot traffic number. It's the foot traffic number combined with who those people are, what they're doing, and whether your concept fits the pattern. When you measure your true trade area, foot traffic data becomes one data point in a larger picture, not the picture itself.
The best use of foot traffic data in site selection is to answer two questions: "Is there enough activity here to support my concept?" and "How does this location compare to my best-performing existing stores?" Everything beyond that requires additional data layers that foot traffic providers don't offer.
Frequently Asked Questions
Which foot traffic data provider is most accurate?
No provider is universally "most accurate." Accuracy varies by geography (urban vs. rural), location type (freestanding vs. multi-tenant), and time period. All major providers use mobile device panels that skew toward younger and higher-income populations. The most reliable approach is to validate any provider's estimates against your own first-party data for existing locations before trusting their estimates for new ones.
How much does foot traffic data cost?
Pricing varies widely. Placer.ai's enterprise subscriptions start around $50,000 per year. SafeGraph offers data marketplace pricing based on coverage area and data products. Unacast and Foursquare price by use case and scale. StreetLight prices by analysis zone and time period. Integrated platforms that include foot traffic as one data layer alongside demographics, competition, and scoring (like GrowthFactor) start at $400 per month.
Can foot traffic data predict sales at a new location?
Foot traffic data alone is a weak predictor of sales. Visit volume doesn't account for conversion rates, average transaction values, customer demographics, or competitive dynamics. A scoring model that combines foot traffic with demographic fit, market potential, competition analysis, and visibility produces more reliable sales predictions than foot traffic alone. When evaluating whether a market is saturated, foot traffic data helps estimate demand, but you need supply-side data to complete the picture.
Is mobile foot traffic data reliable for rural locations?
Less reliable than for urban locations. Mobile device panels have lower density in rural areas, which means fewer observations and wider confidence intervals in extrapolated visit estimates. If your expansion includes rural or secondary markets, ask your provider about panel density in those specific geographies. Some providers perform better than others in low-density areas depending on their panel partnerships.
Do I need a separate foot traffic data subscription?
It depends on your team's needs. If your analysts build custom models and need raw visit data, a dedicated provider (SafeGraph, Unacast) gives you flexibility. If your team needs foot traffic as one input in a broader site evaluation workflow, an integrated platform that combines foot traffic with demographics, scoring, and competitive analysis avoids the tool fragmentation problem that slows most expansion teams down.