Every Woka product begins with an AI hypothesis — not because the label sells, but because the question you ask on day one determines which data pipelines must exist, what latency is acceptable, and when humans must stay in the loop. Those things are hard to bolt on later if the product is designed around forms and tables first.
In short, “AI-first” here means: describe the job the model (or rules engine) must solve, then build the UI and data flows around it — not the other way around.
## How that differs from CRUD-first, AI later
| Lens | Typical CRUD-first | Our AI-first approach |
|---|---|---|
| First question | “Which tables, which forms, who can edit?” | “What signals, what prediction, what confidence threshold?” |
| Data | Whatever manual entry can feed reports | Instrumented, versioned, time-stamped for training and evaluation |
| Latency | Feels “fast enough” until scale bites | Stated up front: a p95 budget in ms, or batch throughput per minute |
| Ops | Workflows sit outside the product | Policies for when to defer to a human are part of the core |
The roadmap stops being a list of screens and becomes a plan for data quality and safe automation.
## Three layers to build early (and why deferring hurts)
- Feature store & lineage — track input features, model versions, and why the system made a call at a point in time (debugging, compliance).
- Near-real-time data — dispatch events, order state, last-mile and customs updates must flow before “prettier model” discussions matter.
- Human-in-the-loop — clear thresholds, fallbacks, and ownership when the model abstains or confidence is low.
If the hypothesis is “suggest the best route for this batch in under 200ms”, the product needs all three. Pure CRUD often pushes them to “2.0” — by then history is thin, latency debt is real, and fixes cost more.
## What it looks like in our product lines
### Road logistics
Route suggestions ship with the dispatch console, not as an after-the-fact report. Planners need the suggestion, signal provenance, and a clear override path in one flow — not a separate “AI module.”
### Last-mile
The driver app must queue work offline — server-side models are useless if the device cannot reach the network. That changes caching, sync order, and logging so we can still train and evaluate real-world behavior.
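A minimal sketch of that offline-first behavior, assuming a simple ordered buffer: events are persisted locally first and replayed oldest-first once the network returns, so nothing is lost for later training and evaluation. The class and the `send` callback are hypothetical, not our production sync code.

```python
import json
from collections import deque

class OfflineQueue:
    """Buffer driver events locally; flush them in order when the network returns."""

    def __init__(self, send):
        self._pending = deque()
        self._send = send  # callable that returns True on a successful upload

    def record(self, event: dict) -> None:
        # Always persist locally first; insertion order is preserved for replay.
        self._pending.append(json.dumps(event))

    def flush(self) -> int:
        """Try to upload queued events oldest-first; stop at the first failure."""
        sent = 0
        while self._pending:
            if not self._send(self._pending[0]):
                break  # still offline: keep the event for the next attempt
            self._pending.popleft()
            sent += 1
        return sent

# Simulate going offline, then back online.
online = False
q = OfflineQueue(send=lambda payload: online)
q.record({"type": "delivery_scanned", "stop": 12})
q.record({"type": "signature_captured", "stop": 12})
first_attempt = q.flush()   # offline: nothing leaves the device
online = True
second_attempt = q.flush()  # back online: events replay in order
```

Ordering matters here: replaying events in the sequence the driver produced them is what keeps downstream training data faithful to real-world behavior.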
### Forwarding
Customs filing and invoice extraction can share a single extraction layer, so training data flows between both. Improve the layer once, and both customs and finance teams see more stable quality.
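To make the shared-layer idea concrete, here is a toy extractor that both consumers call: a trivial key/value parser stands in for the real model, and the field names are illustrative only.

```python
def extract_fields(document_text: str) -> dict:
    """One extraction layer shared by customs filing and invoice processing.

    A toy "key: value" line parser stands in for the real extraction model;
    any improvement to this function benefits every consumer at once.
    """
    fields = {}
    for line in document_text.splitlines():
        if ":" in line:
            key, value = line.split(":", 1)
            # Normalize keys so downstream consumers see a stable schema.
            fields[key.strip().lower().replace(" ", "_")] = value.strip()
    return fields

# Both product lines call the same layer with different documents.
invoice = extract_fields("Invoice Number: INV-104\nTotal: 1,250.00 EUR")
customs = extract_fields("HS Code: 8517.62\nOrigin: DE")
```

The design choice is the single entry point: because customs and finance share one function, labeled corrections from either team become training data for both.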
## When AI-first is not the right first move
- The problem is still pre-data — standardize the process before adding models.
- The operation is tiny and hand-written rules are enough — data and model ops can outweigh the benefit.
- Ingestion is too flaky — stabilize pipelines before you optimize inference.
## Closing
The trade-off is a narrower product surface and deeper operational impact: automate where value is real, be explicit where people must decide, and keep data good enough to improve over time. That balance has worked for us and for teams we build with.
Want to go deeper on signals (e.g. dispatch data or operations feedback loops)? Contact us for a short scoping chat — we do not commit to anything before we understand your real context.