Frequently asked questions

"Can't we just do this ourselves?"

Six questions we know you'll get internally — and the honest answers to each.

1. "We've got smart people. Can't we just set up ChatGPT to do this?"
You can. And individual prompting will give you individual results — inconsistent, unrepeatable, and disconnected from JET's institutional knowledge.

A deal qualification rubric that encodes 30 years of JET's decision-making is fundamentally different from pasting a tender into ChatGPT and asking "should we pursue this?" The first is a system. The second is a shortcut. The system gives every team member the same quality of decision support. The shortcut gives you whatever that person happened to prompt that day.
"This is not about going to ChatGPT and prompting. We actually have to figure out what intelligence looks like for you and where you should be smarter."
— Alison Jacobson, discovery session
2. "The people who'd build this — aren't they already stretched?"
Yes. The senior team is the constraint on every proposal JET writes — QA, methodology, budgets, sign-off. Asking those same people to also architect an AI system means pulling them off the revenue-generating work that keeps JET running.

StrideShift's role is to do the thinking and building so that your experts can stay focused on what they're best at: winning the proposals that matter.
"Our reliance on our senior team is high. We've got enough middle and junior people that can do stuff, but the reliance on that senior person that's doing 10 things — it becomes a problem."
— James Keevy, discovery session
3. "What exactly does StrideShift bring that we can't learn ourselves?"
Three things you'd struggle to replicate internally:

1. We know where the dead ends are. We've built agentic systems across many organisations and use cases. The trial and error to get agents reliably evaluating grants against a rubric, or tailoring CVs without hallucinating qualifications, is significant. We route around problems that first-timers hit.

2. We see what you can't see. We walked into the discovery session expecting to build a grants engine. Within thirty minutes, we'd uncovered that JET mostly loses on price, not quality. That reframe came from asking questions you hadn't thought to ask yourselves.

3. We work across industries. We bring a perspective from the intersection of multiple organisations and use cases. That cross-pollination matters when designing a system that's actually going to work — not just a prototype that looks good in a demo.
4. "What about vendor lock-in? What if we want to move on?"
This is why we put two options on the table.

Option A (we build, we run) has some lock-in by design — you're paying for an ongoing service. But you can cancel with 30 days' notice. The qualification rubric and CV knowledge base we create together are yours regardless.

Option B (we set up, you run) has zero lock-in. The system runs on JET's own accounts. We hand it over and walk away. Everything we build is yours. If JET's leadership changes or priorities shift, the system keeps working without us in the room.

In both cases, the intellectual work — the rubric, the agent directions, the governance documentation — belongs to JET. That's the durable asset, and it doesn't depend on StrideShift.
5. "Is this going to take our team down a rabbit hole?"
It shouldn't — the engagement is structured to prevent exactly that.

Phase 1 is tightly scoped: deal qualification and CV tailoring. Clear inputs, clear outputs, evaluable within weeks. We validate both tools against JET's own history before handover — if the qualification engine wouldn't have flagged the proposals you lost, it's not ready.

This is a circumscribed first use case, not a transformation programme. If it works, JET decides what to do next. If it doesn't, you've invested in a qualification rubric that's valuable regardless and learned something real about where agents can and can't help.

No rabbit holes. A baby step with a clear exit.
"What you will do here will act as a catalyst… I don't want to narrow the scope too much, but I also want to make it manageable for us to do something that plugs into a bigger process."
— James Keevy, discovery session
6. "Is our data safe? What about governance and security?"
In Option B, the system runs on JET's own accounts. Your data stays in your environment. Whatever security JET already has around data storage and hosting extends to the agentic system. Nothing leaves JET unless JET tells it to.

In Option A, standard data agreements apply. We're happy to work with JET's ICT board committee on the specifics.

In both cases, the underlying AI services have matured significantly: enterprise-grade privacy controls, no training on your data without consent, and clear data residency options are now standard. The tools we use are built for exactly this. We'd walk through the governance details with the ICT board committee as part of the setup — and in Option B, we produce documentation specifically for that purpose.

The concern about "will our IP end up with a competitor" has a simple, factual answer: no. But we understand why the question gets asked, and we'd rather address it directly than have it linger.

The value isn't in tools JET could theoretically build.
It's in the thinking you don't have time to do.

The diagnostic work, the reframing, the guardrails, and the speed of someone who does this full-time — so your experts can stay focused on what they're best at.

StrideShift × JET