A customer of ours runs about a dozen bid proposals a month. Two decades of previous proposals sit in their SharePoint, indexed by year and project, written mostly by people who left the company years ago.
In the last year they started using ChatGPT to draft new bids. They query their own proposal archive — in their own SharePoint, through Aquifer — and ChatGPT writes a first draft based on what’s worked for them historically. A custom GPT with a small knowledge base sits on top, covering how their team writes proposals. ChatGPT drafts. Their team edits and submits.
They did not have to move their proposal archive to OpenAI’s servers. They did not have to set up a separate vector database. They did not have to build a RAG pipeline. They did not have to procure a new tool. We were already connected to their SharePoint as part of their integration setup; we added a vector index (a structure that lets an AI search text by similarity) to that data inside their own existing Aquifer datastore; the ChatGPT Action they configured queries it through our query layer.
This is the story we want to tell about AI and Aquifer.
A quick translation, if you have not been swimming in this
A few terms in the paragraphs above and below will sound like jargon if you have not been working with AI tooling. Worth a 30-second primer.
Vector database / vector index. A way of storing text so that “find similar content” queries actually work. Each piece of text gets converted into a numeric fingerprint that captures its meaning — not its keywords. Proposals about steel procurement land near each other in fingerprint-space; proposals about HVAC commissioning land in a different region. When ChatGPT asks for “proposals about steel procurement,” it is not keyword-searching — it is asking the vector index for the documents whose fingerprints are closest to the question’s. That is what makes AI document search useful: it finds things by what they mean, not by which exact words they use.
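The fingerprint idea above can be sketched in a few lines. This is a toy illustration, not Aquifer’s implementation: the three-dimensional vectors are made up by hand, where real embeddings have hundreds or thousands of dimensions and come from an embedding model.

```python
import math

def cosine_similarity(a, b):
    # 1.0 means the fingerprints point the same direction; near 0 means unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy fingerprints: steel documents cluster in one region, HVAC in another.
docs = {
    "steel procurement proposal": [0.9, 0.1, 0.0],
    "steel supplier comparison":  [0.8, 0.2, 0.1],
    "HVAC commissioning report":  [0.1, 0.9, 0.2],
}
# Stands in for the embedding of "proposals about steel procurement".
query = [0.85, 0.15, 0.05]

ranked = sorted(docs, key=lambda name: cosine_similarity(query, docs[name]), reverse=True)
print(ranked)  # the two steel documents outrank the HVAC one
```

Note that the query shares no exact keywords with a document it matches; closeness in fingerprint-space is doing the work.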
RAG (Retrieval-Augmented Generation). The pattern where, before answering, the AI retrieves relevant content from your database and then generates an answer based on what it found. Without RAG, ChatGPT only knows what was in its training data, which does not include your bid proposals from 2018. With RAG, it can read your documents and write a draft grounded in them.
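The retrieve-then-generate loop is simple enough to show in full. A minimal sketch, with two stand-ins so it runs by itself: retrieval here is naive word overlap (a real system would query the vector index), and the “model” is a stub that echoes its prompt (a real call would go to ChatGPT, Claude, or any other model).

```python
def retrieve(question, corpus, top_k=2):
    # Stand-in retrieval: score documents by shared words with the question.
    q_words = set(question.lower().split())
    scored = sorted(corpus,
                    key=lambda doc: len(q_words & set(doc.lower().split())),
                    reverse=True)
    return scored[:top_k]

def answer_with_rag(question, corpus, llm):
    # Retrieve first, then generate an answer grounded in what was found.
    context = "\n---\n".join(retrieve(question, corpus))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return llm(prompt)  # the model now reads your documents, not just its training data

corpus = [
    "2018 bid proposal for hospital steel procurement",
    "2021 schedule narrative for airport paving",
]
echo_llm = lambda prompt: prompt  # stub model so the example is self-contained
result = answer_with_rag("steel procurement proposal", corpus, echo_llm)
```

Everything specific to your business enters through `context`; the model’s job is reduced to reading and writing, which is the part it is good at.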
MCP (Model Context Protocol) and ChatGPT Actions. The “doorways” AI tools use to talk to outside systems. Anthropic’s Claude (and a growing list of other tools) uses MCP. OpenAI’s ChatGPT uses ChatGPT Actions. They are different protocols built for the same purpose — letting an AI tool fetch data from a system that is not built into the AI itself. Aquifer ships both, so whichever AI your team uses, it can query your Aquifer data.
The point of vector indexes and RAG: they are how ChatGPT (or Claude, or any model) reads your content. Most companies trying to build this from scratch end up running a separate vector database alongside their existing systems, with a fragile pipeline to keep the two in sync. Aquifer puts the vector index in the same place as everything else — the customer’s existing Aquifer datastore — so there is nothing extra to manage. The MCP server and ChatGPT Action are the doorways that let your AI tool of choice walk in and use it.
Your AI of choice, not ours
Customers do not ask us about MCP. They do not ask us about ChatGPT Actions. They do not ask us about RAG, vector databases, or the relative merits of one model context protocol versus another.
What they ask is: can my AI of choice see my operational data?
The answer, for over a year now in production, is yes.
Aquifer ships two AI connectors today: a ChatGPT Action and an MCP server, both backed by the same query layer underneath. Between them, every major AI tool a customer might be using — ChatGPT, Claude, Copilot, custom in-house GPTs — can read their Aquifer-connected data. We added the second protocol because some customers were already on Claude or building agent workflows in Copilot, and “use whatever AI you want” is the only honest answer in this market.
The protocol is not the story. The story is that the AI tool your team already pays for, already uses, can now query the project data, the ERP data, the GIS data, the document archives — all of it — through the platform you already have connecting those systems.
The two patterns we keep seeing
Customers use AI on top of Aquifer for two things, with different shapes.
1. Cross-system search
The first pattern: find a thing in any system Aquifer is connected to, by asking your AI in plain English.
“Which Procore projects from the last six months have outstanding RFIs from acoustic engineers?”
“Show me every change order over $50k posted to Sage in Q1, with the corresponding Procore commitment.”
“Find the vendor we used for hot-rolled steel on the Cleveland Clinic project, and pull their last invoice.”
These are questions that today require someone to log into Procore, log into Sage, dump exports into Excel, and pivot. Or to ping the project accountant on Slack and wait. With AI on top of Aquifer, the answer comes back in seconds because the data is already unified in the customer’s Aquifer datastore.
The pattern is search across systems. The technical move underneath is that Aquifer’s datastore can model data across systems — Procore commitments and Sage subcontracts and ArcGIS asset records all live in the same database, with relationships preserved. A single query crosses what used to be three separate logins.
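To make “a single query crosses what used to be three separate logins” concrete, here is a sketch using an in-memory SQLite database. The table and column names are hypothetical, not Aquifer’s actual schema; the point is that commitments from a PM tool and change orders from an ERP sit in one database with the relationship preserved, so one join answers a question like the $50k change-order example above.

```python
import sqlite3

# Hypothetical unified schema: one table per source system, keys preserved.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE procore_commitments (id INTEGER, project TEXT, vendor_id TEXT, amount REAL);
CREATE TABLE sage_change_orders  (id INTEGER, commitment_id INTEGER, amount REAL, quarter TEXT);
INSERT INTO procore_commitments VALUES (1, 'Cleveland Clinic', 'V-104', 250000);
INSERT INTO sage_change_orders  VALUES (10, 1, 62000, 'Q1');
INSERT INTO sage_change_orders  VALUES (11, 1, 12000, 'Q1');
""")

# One query spans both systems: Sage change orders over $50k in Q1,
# joined to the corresponding Procore commitment's project.
rows = db.execute("""
    SELECT c.project, o.amount
    FROM sage_change_orders o
    JOIN procore_commitments c ON c.id = o.commitment_id
    WHERE o.quarter = 'Q1' AND o.amount > 50000
""").fetchall()
```

An AI tool asking that question in plain English ultimately needs exactly this kind of join to exist somewhere; if the data lives in separate silos, no amount of prompt engineering supplies it.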
2. Document search and document generation
The second pattern, and the one driving the most customer pull right now: search and generate against document archives.
The bid proposal customer above is one example. The pattern shows up wherever documents matter — proposals, RFPs, technical specs, schedule narratives, owner-reporting packets, marketing collateral. Customers are sitting on decades of high-quality content in SharePoint, network shares, or attached to project records in Procore. Most of it never gets reused because nobody can find it.
We added vector indexing on document content as part of the Aquifer datastore, alongside the structured-data tables the connector framework already populates. That means an Aquifer customer’s SharePoint ingestion does not just sync document metadata — it indexes the content of those documents in the same datastore that holds their Procore and Sage data, queryable through the same API. ChatGPT (or Claude, or a custom agent) can ask Aquifer for “proposals we wrote for healthcare clients between 2018 and 2024 with steel procurement language” and get back the right corpus, ready to feed into a draft.
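The value of holding structured fields and document embeddings in one store is that a single search can filter on both at once. A toy sketch under that assumption — the field names, titles, and two-dimensional embeddings below are all illustrative, not Aquifer’s API:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Hypothetical records: structured metadata and a content embedding side by side.
documents = [
    {"title": "2019 hospital bid", "client": "healthcare", "year": 2019, "embedding": [0.9, 0.1]},
    {"title": "2023 clinic bid",   "client": "healthcare", "year": 2023, "embedding": [0.8, 0.3]},
    {"title": "2020 airport bid",  "client": "aviation",   "year": 2020, "embedding": [0.7, 0.2]},
    {"title": "2016 hospital bid", "client": "healthcare", "year": 2016, "embedding": [0.9, 0.2]},
]

def search(query_embedding, client, year_from, year_to):
    # Structured filter and similarity ranking in one pass -- no second system to sync.
    candidates = [d for d in documents
                  if d["client"] == client and year_from <= d["year"] <= year_to]
    return sorted(candidates, key=lambda d: cosine(query_embedding, d["embedding"]), reverse=True)

hits = search([0.95, 0.1], client="healthcare", year_from=2018, year_to=2024)
```

This is the shape of the “healthcare clients between 2018 and 2024 with steel procurement language” query: metadata narrows the corpus, similarity ranks what remains.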
For the customer running a dozen bids a month, the time savings are dramatic and the quality is higher than it would be from a generalist model writing from scratch — because the model is reading their proposals, in their voice, with their pricing structure and their language for risk.
Why this works (architecturally)
Plenty of products are bolting AI connectivity or chatbots onto themselves right now. The question is not whether the AI has a wrapper to call you; it is what the AI gets when it calls.
Three things make Aquifer’s data layer specifically valuable for AI access:
Cross-system data modeling. Aquifer’s datastore unifies data from your connected systems through SQL transformations. Procore projects, Sage jobs, ArcGIS sites, SharePoint document corpora all map into a coherent model in the customer’s own database. AI agents querying Aquifer get joins, not silos. A “one AI connection per app” world means the agent has to stitch together fifteen partial views of the same project; an Aquifer query returns one.
Vector indexes on the documents that matter. We index document content where it lives — inside the customer’s own Aquifer datastore, alongside their structured records. No data leaves the customer’s environment for a separate vector DB; no separate retrieval system to maintain; no syncing problems between the document index and the structured data it relates to. The unstructured-text layer and the structured-data layer are queryable through the same API.
The data is already cleaned. Every Aquifer customer’s datastore is the result of pipelines that have already mapped fields, normalized vendor identifiers, reconciled cost codes, and resolved entity collisions. AI agents querying the cleaned data layer get answers that match the customer’s operational truth — not raw API responses with three-character vendor codes and inconsistent naming. This is the unsexy part of why AI on top of Aquifer works and AI directly on top of source-system APIs disappoints.
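A tiny illustration of what “already cleaned” buys an AI agent. The vendor codes, names, and cost-code formats below are made up; the point is that two systems describing the same vendor and cost code in different ways get reconciled before any query runs.

```python
# Hypothetical mapping from a source system's three-character vendor code
# to the canonical vendor name used across the datastore.
VENDOR_MAP = {"ACM": "Acme Steel LLC"}

def normalize_cost_code(code):
    # Collapse "1.3100" / "01-3100" style variants into one canonical form.
    division, detail = code.replace(".", "-").split("-")
    return f"{int(division):02d}-{detail}"

raw_rows = [
    {"vendor": "ACM",            "cost_code": "1.3100"},   # as exported by the ERP
    {"vendor": "Acme Steel LLC", "cost_code": "01-3100"},  # as exported by the PM tool
]
cleaned = [{"vendor": VENDOR_MAP.get(r["vendor"], r["vendor"]),
            "cost_code": normalize_cost_code(r["cost_code"])} for r in raw_rows]
# Both rows now refer to the same vendor and the same cost code.
```

An agent querying the raw rows would treat “ACM” and “Acme Steel LLC” as two vendors and double-count or miss them; an agent querying the cleaned rows cannot make that mistake.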
What this adds up to is a data surface AI can actually be useful against. Most AI integration in construction today is an agent calling one app’s API and getting back what that app knows. The interesting questions cross apps. Aquifer is where the cross-app answer lives.
AI enablement, not AI features
There is a pattern in B2B software right now where every vendor adds an “AI Assistant” to their existing product — a chatbot in the corner of the dashboard that answers questions about that product’s data, only when the user is logged into that product.
Aquifer’s ChatGPT Action and MCP server work the other way. We make customer data legible to whatever AI tool the customer already uses. If your team uses ChatGPT, your data works in ChatGPT. If they use Claude, your data works in Claude. If they’re building agents in Cursor, your data works in Cursor. When the next tool ships and your team adopts it, your data works there too.
This is the difference between AI features and AI enablement. AI features are local — they live inside one product’s surface and only know that product’s data. AI enablement is structural — it makes your operational data legible to any AI you choose to point at it.
We think AI enablement is the more durable category. The AI tools your team uses will change every six months. The systems they need to read — Procore, Sage, ArcGIS, SharePoint, Autodesk — will not. The platform connecting those two layers should not be tied to a specific model or chat surface. It should make your data legible to whatever’s next.
What’s next
This started because customers asked for it. The bid proposal customer was not the only one — multiple customers in different segments were arriving at the same workflow on their own, asking us to make their Aquifer data accessible to the AI tools they were already using. We built the ChatGPT Action first, the MCP server soon after, and we expect the next protocol or two to land before the year is out. The pattern repeats: we ship for the AI customers actually use, not for the tool we wish they used.
Beyond the two patterns above, customers are starting to push into agent workflows — AI that does not just answer questions but takes actions: drafting an RFI, posting a change order, generating an owner-reporting packet. That is a longer arc and we are building toward it carefully. Most of those workflows fail today because the agent’s read of the data is wrong. We think the read has to work first. The write comes when the read is reliable.
If you are an Aquifer customer and have not turned on the ChatGPT Action or MCP server yet, ask your account contact — both are configurable per environment, and we can walk through which use cases your team should start with.
If you are not an Aquifer customer and you are trying to figure out how to get your operational data accessible to your AI of choice, that is a 30-minute conversation we are happy to have. The construction-tech AI conversation has a lot of vapor in it right now. We would rather show you what is already running in production.