
Vibe-Coded Integrations: Code Is the Easy 10%

AI made writing integration code easier. It did not change the 90% of integration work that is not code — discovery, stakeholder alignment, undocumented business rules, and ongoing ownership. Here is what we are seeing in the field.

An IT generalist working late at a laptop — the kind of person being told they can now build an integration in an afternoon.

Writing integration code has gotten easier with AI. Running integrations has not.

The first half of that sentence is what every B2B SaaS founder is talking about right now. Cursor and Claude can write a Procore-to-Sage script in an afternoon. The marketing claim, repeated at every conference and in every board deck, has gotten more ambitious than “developers will be more productive.” It is that integration platforms are paid help for a problem that no longer exists, because anyone with a Claude Code subscription can now do this work. We have started hearing it from prospects directly — they were planning to have an IT generalist, someone who has never written code and has never been responsible for an integration, handle the build and the ongoing maintenance, because AI supposedly makes that possible.

The second half of the sentence is the part being underdiscussed. Writing integration code is roughly 10% of building an integration. The other 90% — discovery, stakeholder alignment, business-rule capture, the gap between a system’s UI and its API, schema and data-model design, error-handling judgment, and ongoing ownership of the integration once it is running in production — is the part LLMs do not touch. And it is the part you do not pick up from a Claude Code class.

This is a piece about the 90%, and about who is going to do it.


The 10/90 split

Let us be specific about what is in the 10% and what is in the 90%.

The 10% (code). Pulling records from one API. Pushing records to another. Mapping fields. Handling pagination, auth refresh, retry, error handling. Writing the dbt model that turns Procore commitments into Sage subcontracts. Writing a Python script that pushes RFIs to a partner. The actual code-writing layer.
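To make the 10% concrete, here is roughly the shape of that layer. This is a minimal sketch against hypothetical source and target APIs; the endpoints, field names, and pagination scheme are invented for illustration and are not Procore’s or Sage’s actual interfaces.

```python
# Minimal sketch of the code layer: pull, transform, push, log.
# Endpoints, field names, and pagination are hypothetical.
import logging
import time

import requests

log = logging.getLogger("commitment_sync")


def pull_pages(session, url, params=None):
    """Yield records across pages, following a hypothetical next-page link."""
    while url:
        resp = session.get(url, params=params, timeout=30)
        resp.raise_for_status()
        body = resp.json()
        yield from body["records"]
        url = body.get("next_page")  # None on the last page
        params = None  # the next-page link already carries the cursor


def transform(commitment):
    """Map source fields to target fields -- the mechanical part."""
    return {
        "subcontract_no": commitment["number"],
        "vendor_id": commitment["vendor"]["id"],
        "amount": commitment["approved_amount"],
    }


def push_with_retry(session, url, record, attempts=3):
    """Retry transient failures with backoff; fail loud on the rest."""
    for attempt in range(1, attempts + 1):
        resp = session.post(url, json=record, timeout=30)
        if resp.status_code < 500:
            resp.raise_for_status()  # a 4xx is a real error: surface it
            return
        time.sleep(2 ** attempt)  # a 5xx is transient: back off and retry
    raise RuntimeError(f"push failed after {attempts} attempts: {record}")


def run(source, target):
    for c in pull_pages(source, "https://source.example.com/api/commitments"):
        record = transform(c)
        push_with_retry(target, "https://target.example.com/api/subcontracts", record)
        log.info("synced commitment %s", c["number"])
```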

LLMs are genuinely good at this. The code is similar across integrations — pull from API, transform, push to API, log result. It is the kind of pattern-matching work LLMs excel at. We use Claude internally for it (more on that below). Good engineering teams everywhere are using AI for the code layer, and they should.

The 90% (everything else). What does the customer actually need? Which fields go where? What is the business rule for what counts as an “approved” change order — does it match what is posted in Sage? Why does the field labeled “Project Number” in the Procore UI map to a different field than the one labeled “Project Number” in Procore’s API? Why does the AP team’s definition of a closed period differ from the project accounting team’s? Where does the integration log get written and who reads it? When the integration breaks at 11pm the night before month-end close, who is getting paged?

None of this is in the code. None of it is in the system documentation. It lives in someone’s head, in someone’s spreadsheet, in a Teams message from 2024 that nobody can find. Getting it right requires sitting with a project accountant, a controller, a project manager, and an IT director — separately, then together — until the model of how data should flow matches what the business actually does.

Then it requires keeping it working as the business changes.

It also requires a second category of work that AI does not replace: engineering judgment. Knowing what idempotent means before you discover it matters in production. Recognizing when a partner API change will silently break your sync. Schema and data-model design — what the tables should look like, what the indexes need to cover, how to handle slowly-changing dimensions. Error-handling philosophy — when to retry, when to alert, when to fail loud versus silent. Knowing where to put the audit log so the auditor can find it next year. These are skills engineers build over years of seeing things break, and they are not in a Claude Code class you take online.

LLMs cannot do this. Not because they are not smart enough — because the inputs are not legible to them, and because the experience that builds this judgment is not something a one-week class compresses.
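One concrete instance of that judgment is idempotency: a sync that can be re-run after a partial failure without creating duplicates. A minimal sketch, again with a hypothetical upsert-by-external-ID endpoint on the target side:

```python
# Sketch of an idempotent write: key every record on the source system's ID
# so a re-run updates instead of duplicating. The endpoint is hypothetical.
def upsert_subcontract(session, record):
    """PUT keyed on the source ID: running this twice yields one record."""
    source_id = record["subcontract_no"]
    resp = session.put(
        f"https://target.example.com/api/subcontracts/by-external-id/{source_id}",
        json=record,
        timeout=30,
    )
    resp.raise_for_status()

# The naive version -- POST a new record on every run -- works in the demo,
# then creates duplicates the first time a sync dies halfway and is rerun.
```

The code is trivial either way. Knowing which version to write, before production teaches you, is the part that takes years.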


What we are actually seeing

We would love to give you a single hero failure story — one customer tried to vibe-code their integration and here is what blew up. We do not have that story yet. What we have is more interesting, because it is the actual pattern.

Pattern 1: it never gets started. A customer’s IT person took a Claude Code class seven months ago. They were going to vibe-code an integration. They have not started. They do not have the time — they are also building Power BI dashboards, running IT security, the M365 tenancy, and the integrations they already have — and integration is not their craft. They have never been responsible for one before beyond managing pieces of it in a web portal. The code-writing speedup matters for someone whose blocker was the code. For someone whose blocker is the engineering judgment integration work requires, AI does not change the equation.

Pattern 2: it takes longer than the platform promised. Another customer is using an iPaaS product with an AI assistant. The marketing said days; their actual elapsed time is months. They are still in test, not production. The AI assistant accelerated one part — the code — and did nothing for the part that is still slow: getting the right business rules captured, getting stakeholders to agree on the canonical version, dealing with the data discrepancies that surface every time you actually look at production data. The platform did not lie, exactly. It just measured the wrong thing.

Pattern 3: two senior people, three weeks, and an app. This is adjacent to integrations but worth naming because the same dynamic shows up. “Two IT people spent three weeks and now we have an internal app that does X.” Two people for three weeks at six-figure salaries plus benefits is roughly $30,000 to $40,000 in fully-loaded cost. Plus the training cost when the rest of the team has to learn it. Plus the support cost — those same two people are now on the hook for the next outage. The “free, AI-built” tool is not free; the cost moved off the invoice and onto the calendar of two senior employees who already had full-time jobs.
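The arithmetic behind that range, as a back-of-envelope check (the salary and overhead multiplier are assumed figures for illustration, not data from this customer):

```python
# Back-of-envelope check on the cost range above. Assumed figures.
salary = 175_000        # assumed annual salary per senior IT person
overhead = 1.4          # benefits, taxes, tooling: a common multiplier
weekly = salary * overhead / 52
cost = 2 * 3 * weekly   # two people, three weeks
print(f"${cost:,.0f}")  # ~ $28,000; higher salaries push it past $40,000
```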

We think these three patterns are more honest than a single dramatic failure story. The story is not that AI writes bad code. The story is that AI does not change who is going to do the work, and the people who would do the work are already busy.


What AI does well — including for us

We want to credit AI honestly here, because the critique above is incomplete without it.

We use Claude internally to write SQL integration models for customers and to build our own software. It is faster than typing them by hand, and the quality is good enough that the human review pass is fast. SQL is the right shape for LLM output — it is well-represented in training data, the syntax is rigid enough to catch errors, and the semantics are constrained enough that “looks right” usually is right.

We have built Aquifer to work with AI on the customer side too. Our MCP server (similar to the one we wrote about in Your AI of Choice, Connected to Your Operational Data) supports integration creation and maintenance, not just data querying. Customers building integrations in Aquifer write SQL and dbt — both languages and frameworks LLMs are excellent at. There is no Aquifer-specific DSL that customers have to learn or that Claude has never seen.

This is deliberate. We architected Aquifer around the assumption that AI would become the dominant way people write integration code, and that the maintenance and support layer — not the code-writing layer — is where the hidden cost lives. The platform is built for the 90%, not the 10%. Datastore in the middle so the integration logic does not live inside any one source system. Cleaned, normalized data so AI agents querying later get meaningful answers. dbt as the transformation language so customers can read, audit, and modify the SQL themselves. Connectors maintained by us so customers do not have to track every Procore API release.

The result is that customers can create their integrations using AI on top of Aquifer if they want to. Some do. Many others use Aquifer’s professional services or a partner — not because they could not write the code with Claude, but because they do not want to own it. Owning means being the person who has to figure out why an API change broke last night’s commitment sync, the person who has to walk the project accountant through why the report numbers shifted.


The question to ask

Here is the question we would ask any IT director being told to “just use AI” for the next integration:

Claude and ChatGPT are not going to own this integration. A person or team will. Are they already busy?

If the answer is “no, we have a dedicated integration team with capacity” — sure, vibe-code it. AI is genuinely accelerating the code layer. A capable, dedicated team can ship faster than they could two years ago.

If the answer is “our IT director is wearing five hats and cannot make a Tuesday standup” — the AI conversation is a distraction. The bottleneck is not code production; it is human ownership. The “free” integration costs at least one person-month of senior IT time to get into production, and another person-month per year to maintain it. That cost does not disappear because Claude wrote the SQL. It just hides until something breaks.

This is the part of the AI integration narrative we think is being underdiscussed. Code-writing speed has gone up dramatically. Operational ownership capacity has not.


What this means for build vs. buy

The build-vs-buy conversation in 2026 is not “AI lets us build everything ourselves.” It is more nuanced than that.

Alongside the familiar options (build with a dedicated engineering team, buy a platform, bring in a partner), there is a fourth path being marketed, which we want to name explicitly: build with AI even though you do not have a dedicated team, on the claim that AI now lets non-developers handle integration work end to end. This is the trap. We have not seen this path produce a maintained production integration. The first version sometimes ships. The second version — the fix when something breaks — usually does not. The third version never happens because the IT generalist who started the project also has a network outage to deal with. AI lowered the syntax barrier. The engineering-judgment barrier is intact, and that is the barrier that matters when an integration is running in production.

The honest pitch is not “AI will replace integration platforms.” It is “AI is a powerful new tool, and the integration platforms that survive will be the ones that use it well, expose it cleanly to customers, and remember that code is the easy part.”



If you are trying to figure out where AI fits in your integration strategy and where it does not — that is a 30-minute conversation we would be glad to have. We have watched a lot of teams talk themselves into vibe-coded integrations and quietly come back six months later. We can save you the six months.

Book a call →
