Milestone · NthDS Team

Crawler Passes 73,000 Wells Processed

73,000 Wells and Counting

NthDS Crawler has crossed a major milestone: 73,000 wells processed across the Permian, DJ, Appalachian, and Williston basins. Each well represents hundreds of pages of legacy documents - scout tickets, DSTs, completion records, core analysis reports, and wireline logs - now structured, searchable, and queryable.

What 73,000 Wells Looks Like

To put this in perspective:

  • Over 2 million pages of legacy documents ingested and classified
  • 15+ document types automatically detected (G&G reports, RRC filings, DSTs, completion records, invoices, contracts, and more)
  • Bypassed pay zones flagged across thousands of wells using cross-referenced show data, DST results, and completion histories
  • Hydrocarbon confidence scores computed for every well with sufficient data
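A per-well confidence score like the one above could be sketched as a weighted combination of evidence sources. The evidence types, field names, and weights below are illustrative assumptions for this sketch, not Crawler's actual model:

```python
# Illustrative sketch of a hydrocarbon confidence score.
# Evidence types and weights are hypothetical, not Crawler's model.
EVIDENCE_WEIGHTS = {
    "mud_log_show": 0.30,   # gas/oil shows noted on mud logs
    "dst_recovery": 0.40,   # drill-stem test recovered hydrocarbons
    "wireline_pay": 0.30,   # log-derived pay flags
}

def confidence_score(evidence: dict) -> float:
    """Weighted fraction of positive evidence among the types present."""
    present = {k: v for k, v in evidence.items() if k in EVIDENCE_WEIGHTS}
    if not present:
        return 0.0  # the "insufficient data" case: no score computed
    total = sum(EVIDENCE_WEIGHTS[k] for k in present)
    positive = sum(EVIDENCE_WEIGHTS[k] for k, v in present.items() if v)
    return round(positive / total, 2)

print(confidence_score({"mud_log_show": True,
                        "dst_recovery": True,
                        "wireline_pay": False}))  # 0.7
```

Normalizing by the weights actually present lets wells with partial records still receive a score, while wells with no relevant evidence fall into the "insufficient data" bucket.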

All of this data is instantly queryable through Muradin, our conversational AI. No SQL required. Ask in plain English, get answers backed by source documents.
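The shape of a question-in, answer-plus-sources-out interaction might look like the following. The `ask` function, the `Answer` type, and the document names are invented for illustration; this is not a published Muradin API:

```python
# Hypothetical illustration of a plain-English query whose answer
# carries its source documents. All names here are invented for the
# sketch; none of this is a documented Muradin interface.
from dataclasses import dataclass, field

@dataclass
class Answer:
    text: str
    sources: list = field(default_factory=list)  # documents backing the answer

def ask(question: str) -> Answer:
    # A real client would route the question to the on-prem inference
    # stack; here a canned response shows the shape of the result.
    return Answer(
        text="3 wells report gas shows in the Terry Sand within 5 miles.",
        sources=["scout_ticket_042.pdf", "mud_log_117.tif", "dst_report_009.pdf"],
    )

result = ask("Which wells show gas in the Terry Sand within 5 miles of our acreage?")
print(result.text)
for doc in result.sources:
    print(" -", doc)
```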

Built at Scale, Deployed On-Premise

The 73K milestone was not achieved on a massive cloud cluster. Crawler runs on standard enterprise hardware with local GPU inference. Every one of these wells was processed behind the customer's firewall - air-gapped, zero data egress, full compliance with the strictest security requirements in the energy industry.

This is the key differentiator. Generic AI tools from big tech require your data to leave your network. Crawler does not.

What Operators Are Finding

The most impactful discoveries do not come from the AI's classification accuracy alone (though it exceeds 95% across document types). The real value is in what the system finds when it cross-references data across thousands of wells simultaneously:

  • Bypassed pay zones that were overlooked because the completion data lived in a different filing cabinet than the show data
  • Formation patterns that emerge when you can query "Which wells show gas in the Terry Sand within 5 miles of our acreage?"
  • Regulatory gaps where filings exist in state portals but were never captured in internal systems
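The first kind of cross-reference above can be sketched as a simple interval comparison: flag show intervals that no completion interval ever covered. The data shapes, depths, and formation names below are invented examples, not real well records:

```python
# Sketch: flag zones with hydrocarbon shows that were never completed.
# Data shapes and values are hypothetical, not real well records.

def bypassed_zones(shows, completions):
    """Return (top, base, formation) show intervals that no
    completion interval overlaps."""
    flagged = []
    for top, base, formation in shows:
        covered = any(c_top < base and c_base > top  # interval overlap test
                      for c_top, c_base in completions)
        if not covered:
            flagged.append((top, base, formation))
    return flagged

# One well: the gas show in the shallower zone was never perforated.
shows = [(7450, 7480, "Terry Sand"), (9100, 9140, "Niobrara")]
completions = [(9095, 9150)]  # only the deeper zone was completed

print(bypassed_zones(shows, completions))  # [(7450, 7480, 'Terry Sand')]
```

The point of the sketch is that the check is trivial once show data and completion data sit in the same structured store; the hard part, which Crawler addresses, is getting both out of scattered paper records in the first place.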

Next Steps

We are accelerating basin coverage and expanding Crawler's extraction models to handle additional document types. If you have a data room that needs intelligence, let us show you what Crawler can do.