Ridgeline Founder Stories: Querying the Earth Over Space and Time
An interview with Dan Hammer and Nathaniel Manning about the story behind LGND
When most people think of Generative AI, they think of language: ChatGPT answering questions, Gemini writing essays, Midjourney turning text into images. But a major next modality for AI is not words or pictures—it’s the Earth itself.
A recent Ridgeline investment, LGND, is pioneering the use of large Earth observation models (LEOMs)—the equivalent of large language models, but trained on decades of satellite and geospatial data. Their mission is bold yet simple: to let anyone query the Earth over space and time. Imagine asking questions like, “Where are the newest fire breaks in California?” or “Which dams have been retrofitted in the last decade?” and receiving precise, up-to-date answers—instantly.
LGND was founded by three longtime collaborators with deep expertise in this space: Dan Hammer, Chief Product Officer, who previously built APIs at NASA, co-founded Clay, and won the Pritzker Environmental Genius Award; Nathaniel Manning, CEO, a repeat founder who built Kettle, an AI-driven climate risk insurance company, and has spent his career applying technology to global-scale problems; and Bruno Sánchez-Andrade Nuño, Chief Scientist, who was previously Chief Scientist at Mapbox and has advised institutions from NASA to the World Bank on geospatial technology.
Together, they’re lowering the barriers to using Earth observation data—making it faster, cheaper, and more accessible for industries ranging from finance and insurance to climate, energy, and national security.
In the conversation that follows, Dan and Nat share how LGND came to be, what makes this the right moment for geospatial AI, and the real-world applications that could change how we understand—and act on—Earth’s most pressing challenges.
You’ve both built extraordinary careers—Dan, with your work at NASA and Clay, and Nat, with your focus on technology that addresses global-scale problems. How did the two of you first connect, and what sparked the idea for LGND?
DH: We met years before LGND as Presidential Innovation Fellows—Nat in the first cohort (later Chief Data Officer at USAID), me in the third (I built the NASA APIs). In 2022, our third co-founder Bruno and I started Clay, an open, nonprofit large Earth observation model (the “P” and “T”—pre-trained transformers—of GPT, but trained on satellite imagery). Around then, Nat was rolling off Kettle, a wildfire insurance company that had spent millions training bespoke satellite models. We all saw that LEOMs could make this work faster and cheaper at scale—Nat brought the operator DNA we needed—so the three of us founded LGND.
NM: When ChatGPT arrived, we asked: what does this platform shift mean for geospatial? Dan and Bruno had the same question and created Clay—an open-source large Earth observation model. From my side, at Kettle we’d been training vertical Convolutional Neural Networks (CNNs) for one-off answers. The breakthrough was realizing that transformer-based models could be applied to Earth data in a general way, just like language. Every major dataset is going to have its transformer moment. Language already had it; geospatial hadn’t. Clay proved it was possible, and LGND is how we make it usable for real customers.
For someone hearing about LGND for the first time, how would you describe your mission—not just in technical terms, but in terms of the value you bring to your users?
DH: Large models are just that—large collections of numbers. There’s a big [usability] gap between a 70B-parameter model and an end-user application. LGND is everything between large Earth observation models and real applications. Our mission is to let many more developers and analysts use these models.
The vision: make Earth information searchable—as searchable as the web is through Google. Google Earth is great for browsing, but not searching. I’ll know we’ve succeeded when I can type, “Where are the most recently renovated playgrounds in Berkeley?” That information exists in imagery but isn’t indexed for search today. We want to enable natural search over changes on Earth’s surface.
NM: I say it simply: the ability to query the Earth over space and time. Ask: “How has the forest changed in Southern California and how did that affect wildfires?” Or, “Find hotels in Mexico City with no construction within a mile.” Those answers aren’t in Weather.com or Booking.com’s text fields—but we can answer by querying the pixels.
AI has made natural language interfaces and image generation possible in the last few years. Why is geospatial and Earth observation data the next modality poised to take a leap forward with AI?
DH: There’s decades of data. It’s abundant and highly structured on a standard global grid. The patterns within it are unstructured (not labeled as “mine,” “reforestation,” etc.), but the source data itself is ideal for transformers to help humans find what they need across the Earth.
NM: It’s a massive dataset that hasn’t had its LLM moment yet. For scale: the text used to train top language models is under ~1 petabyte; open geospatial/EO imagery exceeds 200 petabytes. Video is similarly heavy—and also still early.
There’s also a platform shift underway. For 20+ years, we organized language with keywords/PageRank and maps with map tiles. In the last three years, language moved to embeddings as the first-order data object. We believe maps will move from tiles to geo-embeddings—a new way to represent the physical world for AI systems to reason over.
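To make the tiles-to-embeddings shift concrete, here is a minimal, self-contained sketch of what search over geo-embeddings might look like. Everything in it is illustrative: the patch names, the three-dimensional vectors, and the query encoding are toy assumptions, not LGND's actual representation (real embeddings would have hundreds of dimensions and come from a trained model).

```python
import math

# Hypothetical setup: each Earth patch has been encoded by a large Earth
# observation model into a fixed-length embedding vector. A query is encoded
# into the same vector space, and search is nearest-neighbor over cosine
# similarity -- the "embeddings as first-order data object" idea, in miniature.

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy patch embeddings (invented signatures for illustration only).
patches = {
    "amazon_tile_01": [0.9, 0.1, 0.0],   # dense forest signature
    "amazon_tile_02": [0.2, 0.8, 0.1],   # recent clearing signature
    "andes_tile_07":  [0.1, 0.1, 0.9],   # bare rock signature
}

def search(query_embedding, patches, top_k=1):
    """Rank patches by similarity to the query embedding."""
    ranked = sorted(patches.items(),
                    key=lambda kv: cosine(query_embedding, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]

# A query like "recently cleared forest" would land near the clearing signature.
query = [0.25, 0.75, 0.05]
print(search(query, patches))  # ['amazon_tile_02']
```

The design point is that the map stops being a stack of pre-rendered tiles and becomes an index you can query with meaning, the same way language search now works over text embeddings.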
Can you share a vivid, concrete example of LGND in action—something a person could visualize in their daily life that shows the power of your platform?
DH: We’re not changing image collection—satellites have streamed data for years. If you have enough budget, you can build bespoke models today. The unlock with LGND is that you can do it 10,000x faster and cheaper, which changes how you interact with the data: you can ask more subtle, local, or long-tail questions without handcrafted pipelines.
Example: “Find grain silos of a certain size with red roofs in Brazil” for supply-chain monitoring. Historically, not worth a custom build. With LGND, you can create that small, precise dataset quickly and affordably. Another real example we worked on: blue tarps for artisanal gold mines in the Amazon—a specific proxy a customer cared about.
NM: Insurance: “Find properties fully surrounded by trees” to assess wildfire exposure. Consumer travel: “I want snorkeling in February—show me beaches without seasonal seaweed, and hotels with no construction within a mile.” Real estate: “Walkable to groceries, large trail network nearby, ideally redwoods.” Much of that context is latent in imagery and timelines.
We’re also making Earth understandable to AI—so AI agents can incorporate geospatial context into whatever they’re doing: underwriting, siting a data center, monitoring utility assets, or booking travel.
Dan, you’ve worked at some of the most influential institutions in Earth observation. What lessons from NASA and Clay have most shaped how LGND approaches building its technology?
DH: One of the most widely used things I built was the Astronomy Picture of the Day API. It took a wildly popular but static HTML page and turned it into structured data that anyone could compose into their own apps. Usage exploded, and people built things I never imagined.
That experience drives LGND: Earth imagery is incredibly valuable, but the friction to integrate it into downstream products is too high. We’re building the interface—not only for developers, but also for chat interfaces—so Earth observation becomes a natural modality inside conversational AI.
Nat, you’ve founded companies aimed at tackling climate and risk head-on. How does LGND connect to your belief that “if clean tech costs less than dirty tech, we don’t have a climate change problem anymore”?
NM: We’re not directly pulling carbon out of the air or making energy sources less carbon intensive. Instead, LGND is an enabler—a critical feeder into other industries working on those problems. There’s an old adage: you can’t fix what you can’t measure. Right now, we aren’t effectively measuring what’s happening on the Earth because it’s too expensive and time-consuming.
Today, answering questions like “How much deforestation happened in this part of the Amazon?” or “Where are the fire breaks in Los Angeles?” can cost the equivalent of a quarter million dollars per question—months of effort by highly skilled engineers. That’s unsustainable.
One example: a risk modeler we work with wanted to build a flood model. To do that, they needed to know where all the dams and weirs are across U.S. rivers—a dataset that doesn’t exist in Google Maps. Finding it manually would have cost hundreds of thousands of dollars. Then they’d want to know which of those structures had been retrofitted in the last decade, or which ones flooded during recent events—each new question adding another six-figure cost.
With LGND, those workflows become as simple as typing in queries. You could generate a list of all dams, add a column showing retrofit dates from imagery, and another column indicating whether they flooded during a specific event. What used to take months and a team of engineers becomes a matter of queries an analyst can run themselves.
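The dams workflow described above can be sketched as a few lines of code. This is a hypothetical illustration only: `query_earth` is an imagined stand-in for an LGND-style API (its name, signature, and canned answers are invented here), but it shows the shape of the shift—each new question becomes another column on an existing table rather than another six-figure project.

```python
def query_earth(question, subject=None):
    """Stub for an imagined Earth-query API. A real system would run the
    question against imagery; canned answers keep this sketch runnable."""
    canned = {
        ("find dams and weirs", "us_rivers"): ["dam_A", "weir_B"],
        ("retrofit date", "dam_A"): "2019",
        ("retrofit date", "weir_B"): None,
        ("flooded in 2023 event?", "dam_A"): False,
        ("flooded in 2023 event?", "weir_B"): True,
    }
    return canned[(question, subject)]

# Step 1: generate the base list of structures.
rows = [{"id": s} for s in query_earth("find dams and weirs", "us_rivers")]

# Steps 2-3: each follow-up question is just another column.
for row in rows:
    row["retrofit_date"] = query_earth("retrofit date", row["id"])
    row["flooded_2023"] = query_earth("flooded in 2023 event?", row["id"])

print(rows)
```

Under this framing, the analyst iterates on questions interactively instead of commissioning a new bespoke model per question.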
That’s how we help make clean solutions cost less: by dramatically lowering the price of knowledge.
Which industries do you think will surprise people by how quickly they adopt Earth AI—and why?
DH: A personal bet: education. I picture the child of an NGA analyst coming home from school and saying, “Look what I found—I mapped all the illegal mines.” Lowering the barrier so non-experts can explore and discover is powerful.
NM: Stage one: legacy, physical-world sectors—utilities, logistics, agriculture, insurance—where the current workflow costs a “quarter-million-dollars-per-question” in time and talent. LGND lets a strong analyst ask those questions—no MLE + MLOps team required. Stage two: consumer platforms—think Airbnb, Zillow—where Earth context becomes part of the everyday UX.
Why did you choose to partner with Ridgeline? What stood out about their approach or thesis that made them the right fit for LGND’s next stage?
DH: I know and trust Andrew McMahon from our government days. Beyond that, Ridgeline’s fluency in the public sector matters. Much of this information is (and should be) a public good. As costs drop and usability improves, we see big opportunities across DOI, EPA, USDA—not just defense and intel. Few VCs add more value in that transition than Andrew and Ridgeline.
NM: Roughly 80% of dollars in the geospatial stack today trace back to government (especially for imagery providers). We needed a partner who truly knows D.C. and procurement. Ridgeline also has a thesis around updating legacy industries, which aligns with our near-term customers and use cases.
If we sat down together three years from now, what’s the story you’d love to be able to tell about LGND’s impact?
DH: I’d love that classroom story—the kid who out-performs a traditional workflow because the tools are finally accessible. More broadly, I want to see chat interfaces answer “where” questions over up-to-date Earth data—without visiting 15 sites or manually flipping through imagery.
NM: It comes back to measure, then fix. I want us to make what’s happening on Earth—and how it’s changing—readable to AI, so we can direct powerful compute toward planetary problems: climate, risk, infrastructure. Concrete examples on our roadmap: near-real-time estimates of Amazon deforestation, a continuously updated map of rooftop solar (and where it should go next), and flood-relevant assets (e.g., dams and weirs) with change logs and event overlays—delivered as simply as adding columns to a table.
Learn more about LGND at lgnd.io.
Note: This interview was edited for readability.