Building a Different Kind of Map
I've lived in Seattle for 4+ years and I still don't know where to eat.
That's not quite true. I have my spots. But when I want to find somewhere new, I open Google Maps and the same places keep showing up. The results feel optimized for something, but not for helping me find good food.
A few weeks ago I came across a project by Lauren Leek, a data scientist in London who'd felt the same way. She built a model to find restaurants that rated higher than you'd expect given their circumstances. Not the most reviewed. The ones doing better than they should be.
It reframed the question. Instead of "what's popular?" she asked "what's underrated?"
This might come as a surprise, but Google Maps is not a neutral list of nearby restaurants. It's a ranking system, and the ranking determines what you see.
Google mainly uses something called "prominence" to decide ordering. Prominence is calculated from review count, how fast reviews come in, brand recognition, and web presence. This creates a feedback loop: restaurants that already have reviews show up higher, which brings them more customers, which brings them more reviews, which pushes them higher still.
This is fine if you want something like Starbucks. Starbucks benefits from brand recognition across thousands of locations. A new independent coffee shop starts at zero. It doesn't show up in searches, so it doesn't get foot traffic, so it doesn't get reviews, so it doesn't show up in searches...you see the problem.
Location matters too. A restaurant near Pike Place gets reviewed by thousands of tourists passing through each week. A better restaurant on a quieter block, serving the same quality food to a smaller crowd, accumulates reviews at a fraction of the rate.
The result is that what you see when you search isn't "what's good nearby," it's "what's already been found."
I want to build something that asks a different question. Instead of ranking by popularity, find the places that are earning more love than you'd expect.
I'm calling it Alimenta. Latin for "to feed" or "to nourish."
The idea is to build a model that learns what ratings typically look like for restaurants with similar structural features: price level, how long they've been open, what services they offer, how many reviews they have, whether they take reservations, and dozens of other factors the platforms already track. The model won't make judgments about what any restaurant "should" score. It will learn patterns from thousands of data points, then flag the ones that break the pattern in a positive direction.
For example, a restaurant that rates 4.3 when similar places average 3.8 is doing something the numbers don't fully explain. Maybe it's the chef. Maybe it's the service. Maybe it's something hard to quantify. Whatever it is, that's exactly what I want to find.
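The core of this is just a residual: fit a baseline that predicts a rating from structural features, then look at who beats their prediction. Here's a minimal sketch of that idea, using synthetic data and made-up features (price level, years open, log review count); the real project would use richer features and a more flexible model than plain least squares.

```python
# Sketch of the "underrated" idea: predict a restaurant's rating from
# structural features, then flag places whose actual rating beats the
# prediction. All data and feature choices here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 200  # pretend we have 200 restaurants

# Hypothetical structural features.
X = np.column_stack([
    rng.integers(1, 5, n),                # price level (1-4)
    rng.uniform(0.5, 20, n),              # years open
    np.log1p(rng.integers(5, 2000, n)),   # log of review count
])

# Synthetic "true" ratings: a linear signal plus noise.
ratings = 3.0 + X @ np.array([0.1, 0.01, 0.15]) + rng.normal(0, 0.2, n)

# Fit a simple linear baseline (intercept + features): the expected
# rating for a restaurant with these characteristics.
A = np.column_stack([np.ones(n), X])
coefs, *_ = np.linalg.lstsq(A, ratings, rcond=None)
expected = A @ coefs

# Residual = actual minus expected. Large positive residuals are the
# candidates: places earning more love than their features predict.
residuals = ratings - expected
underrated = np.argsort(residuals)[::-1][:10]  # top 10 over-performers
```

A restaurant rating 4.3 against an expected 3.8 would show up here as a residual of +0.5. The interesting design questions are all in the baseline: a model that's too weak flags noise, and one that's too strong explains everything away and leaves no residual to find.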
I don't know if this will work. The model might produce hot garbage. Seattle might not have enough restaurants to train on (unlikely). The patterns might not generalize. I'll find out by building it.
If it does work, I'll publish the methodology. How the model was trained, what features mattered, what the error rates looked like. The whole point is to build something that helps people find good food, not to create another black box.
Even if it fails, I think the exercise is useful. There's value in building your own tools to answer your own questions, rather than accepting what someone else's algorithm decides to show you (looking at you, Google/Meta).
My wife is a chef. She has opinions about restaurants that have nothing to do with Google Maps ratings. Places she respects, places she's curious about, spots her coworkers and friends have mentioned. That knowledge exists in conversations and professional networks built over years. No search engine captures it.
I'm not trying to replace that. But there might be something useful between the algorithmic feed and the insider network. A tool that finds what the ranking system misses.
Wish me luck!
— Austin