Tech stack behind 8 logistics products — how we ship fast and stay maintainable

Shipping and maintaining 8 logistics SaaS products with a 6-person team — stack choices, shared infra, deploy pipeline, and the trade-offs that keep it sane.

Woka ships 8 logistics SaaS products — Road Logistics, Proxy Purchasing, Rail Freight, Last-Mile Delivery, Freight Forwarding, Website Development, AI Automation, and Fulfilment Management. All are in production, serving customers across Vietnam and Southeast Asia.

Our engineering team: 6 developers (3 backend, 2 frontend, 1 DevOps). How do we maintain 8 products without burning out? Shared infrastructure, ruthless code reuse, and careful stack choices.

Core philosophy: Build shared infrastructure once and reuse it across all products. About 40% of our codebase is shared components, services, and schemas.

Stack overview

Woka platform architecture

Frontend

  • React 18 + Next.js 14 App Router — All customer-facing web apps and ops dashboards
  • React Native — Driver mobile apps (road logistics, last-mile delivery) and warehouse scanner apps (fulfilment)
  • TypeScript — Strict mode enforced; no JS allowed in new code
  • TanStack Query — Server state management (replaces Redux for 90% of use cases)
  • Tailwind CSS — Utility-first styling; shared design tokens across all products

Why Next.js? Server-side rendering for SEO and performance. Incremental static regeneration (ISR) for product pages and marketing content. API routes for simple backend tasks (contact forms, webhooks).
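The ISR piece boils down to route segment config. A sketch of what that looks like in an App Router page module — the file path and interval are illustrative, not our real config:

```typescript
// Route segment config for a Next.js 14 App Router page, e.g.
// app/products/[slug]/page.tsx. Exporting `revalidate` turns static
// rendering into ISR; the interval here is an assumption.
export const revalidate = 3600; // re-generate the page at most once per hour

// Optional guard: fail the build if the page accidentally becomes dynamic
// (e.g. someone adds an uncached fetch) instead of silently losing ISR.
export const dynamic = "error";
```

The `dynamic = "error"` guard has saved us from shipping pages that quietly fell back to per-request rendering.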

Why React Native over Flutter? Our team knows TypeScript. Code sharing between web and mobile (shared types, API client, business logic) reduces duplication by 40%.
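A sketch of what that sharing looks like in practice. The package name (`@woka/shared`) and the `Shipment` shape are illustrative, not our actual layout:

```typescript
// Hypothetical shared module (something like an internal "@woka/shared"
// package). Both the Next.js web apps and the React Native driver apps
// import from it, so the API contract is typed exactly once.

export type ShipmentStatus = "pending" | "in_transit" | "delivered";

export interface Shipment {
  id: string;
  orgId: string;
  status: ShipmentStatus;
  driverId?: string; // set once a driver is assigned
}

// Shared business logic runs unchanged on web and mobile.
export function canReassignDriver(shipment: Shipment): boolean {
  return shipment.status !== "delivered";
}

// One API client for both platforms: web and React Native both provide fetch.
export async function fetchShipment(baseUrl: string, id: string): Promise<Shipment> {
  const res = await fetch(`${baseUrl}/shipments/${id}`);
  if (!res.ok) throw new Error(`Failed to load shipment ${id}: ${res.status}`);
  return (await res.json()) as Shipment;
}
```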

Backend

  • NestJS (Node.js) — All HTTP APIs and business logic
  • Python + FastAPI — AI/ML services (demand forecasting, route optimization, NLP chatbots)
  • PostgreSQL 16 — Primary database for transactional data
  • Redis 7 — Caching, session storage, pub/sub for real-time features
  • Elasticsearch 8 — Search and analytics (order lookup, SKU search, log aggregation)

Why NestJS over Express? Built-in TypeScript support, dependency injection, structured architecture (controllers, services, repositories). Easier to onboard new developers and maintain consistency across products.
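The layering NestJS enforces, sketched in plain TypeScript so it stands alone (real NestJS code would use `@Controller`/`@Injectable` decorators and its DI container; the class and method names here are illustrative):

```typescript
interface Order {
  id: string;
  status: string;
}

// Repository layer: data access only, no business rules.
class OrderRepository {
  private orders = new Map<string, Order>();
  save(order: Order): void {
    this.orders.set(order.id, order);
  }
  findById(id: string): Order | undefined {
    return this.orders.get(id);
  }
}

// Service layer: business logic; dependencies arrive via the constructor,
// which is what NestJS's DI container automates.
class OrderService {
  constructor(private readonly repo: OrderRepository) {}
  markDelivered(id: string): Order {
    const order = this.repo.findById(id);
    if (!order) throw new Error(`Order ${id} not found`);
    const updated = { ...order, status: "delivered" };
    this.repo.save(updated);
    return updated;
  }
}

// Controller layer: HTTP concerns only; delegates straight to the service.
class OrderController {
  constructor(private readonly service: OrderService) {}
  patchDeliver(id: string): Order {
    return this.service.markDelivered(id);
  }
}
```

Because every product follows this same three-layer shape, a developer who has worked on one product can find their way around any of the other seven.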

Why Python for AI? Mature ML libraries (PyTorch, scikit-learn, transformers). FastAPI is performant enough for inference workloads (p95 latency < 200ms).

Infrastructure

  • Docker + Kubernetes — All services containerized; K8s for orchestration
  • Google Cloud Platform — GKE (Kubernetes), Cloud SQL (managed Postgres), Memorystore (managed Redis)
  • Cloudflare — CDN, DNS, DDoS protection, WAF
  • GitHub Actions — CI/CD pipelines (lint, test, build, deploy)

Why Kubernetes? Horizontal scaling for traffic spikes (e.g., last-mile delivery peaks at 6–8pm daily). Resource isolation between products (freight forwarding doesn’t affect road logistics performance).

Why GCP over AWS? Lower latency to Vietnam/Southeast Asia (GCP Singapore region vs. AWS Singapore). Simpler pricing for managed services.

Monitoring and observability

  • Sentry — Error tracking and performance monitoring
  • Grafana + Prometheus — Metrics dashboards (request latency, error rate, resource usage)
  • Loki — Log aggregation and search
  • Uptime Robot — External uptime monitoring (alerts via Slack and email)

SLA target: 99.8% uptime per product. Achieved 99.9% average over past 12 months.

Code reuse strategy

1. Shared component library

We maintain an internal npm package (@woka/ui) with 50+ reusable React components:

  • Data tables — Sortable, filterable, paginated tables for order lists, inventory views, driver rosters
  • Forms — Input fields, selects, date pickers with validation (react-hook-form + zod)
  • Modals and drawers — Consistent UX for create/edit workflows
  • Charts — Recharts wrappers for SLA dashboards, revenue reports, analytics

Benefit: New product development is 30–40% faster because we’re not rebuilding tables and forms from scratch.
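To give a feel for the validation side: here is a dependency-free sketch of the kind of rules a `@woka/ui` form would declare with zod (the real components use zod schemas wired into react-hook-form; the field names are illustrative):

```typescript
interface OrderForm {
  customerEmail: string;
  quantity: number;
}

// Returns a list of human-readable errors; an empty array means valid.
function validateOrderForm(form: OrderForm): string[] {
  const errors: string[] = [];
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(form.customerEmail)) {
    errors.push("customerEmail must be a valid email address");
  }
  if (!Number.isInteger(form.quantity) || form.quantity < 1) {
    errors.push("quantity must be a positive integer");
  }
  return errors;
}
```

With zod, the same rules collapse to a declarative schema, and react-hook-form surfaces the errors next to each field automatically.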

2. Shared backend services

Three common microservices used across all products:

  • Auth service — JWT-based authentication, role-based access control (RBAC), session management
  • Notification service — Email (Nodemailer), SMS (Twilio), push notifications (FCM), webhooks
  • File storage service — S3-compatible object storage for POD photos, customs documents, packing lists

Architecture: Deployed as standalone services with REST APIs. Each product calls these services instead of implementing its own auth or notification logic.

Benefit: Security updates (e.g., password hashing algorithm change) propagate to all products by updating one service.
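A minimal sketch of the RBAC check at the heart of the auth service (the real service verifies a JWT before this step; the role and permission names below are illustrative, not our production set):

```typescript
type Role = "admin" | "dispatcher" | "driver";

// Each role maps to the permissions it grants.
const PERMISSIONS: Record<Role, string[]> = {
  admin: ["orders:read", "orders:write", "users:manage"],
  dispatcher: ["orders:read", "orders:write"],
  driver: ["orders:read"],
};

// Products call the auth service with (role, permission) and get a yes/no.
function can(role: Role, permission: string): boolean {
  return PERMISSIONS[role].includes(permission);
}
```

Keeping this table in one service means adding a permission, or tightening one, is a single deploy rather than eight.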

3. Shared database schemas

Common entities modeled once and reused:

  • User — id, email, password_hash, role, created_at, updated_at
  • Organization — id, name, tax_id, address, subscription_tier
  • Order — id, org_id, status, created_at, items[], total_amount
  • Location — id, name, address, lat, lng, type (warehouse, hub, customer)

Each product extends these base schemas with product-specific fields (e.g., road logistics adds driver_id and vehicle_id to Order; fulfilment adds sku_id and bin_location).

Benefit: Consistent data model across products makes cross-product reporting easier (e.g., “How many orders did this customer place across all products last month?”).
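At the type level, the extension pattern looks like this (the actual schemas live in Prisma/SQL; the field names mirror the base `Order` model above, camel-cased for TypeScript):

```typescript
// Shared base entity, modeled once.
interface BaseOrder {
  id: string;
  orgId: string;
  status: string;
  totalAmount: number;
}

// Road logistics extends Order with driver/vehicle assignment.
interface RoadOrder extends BaseOrder {
  driverId: string;
  vehicleId: string;
}

// Fulfilment extends Order with warehouse-specific fields.
interface FulfilmentOrder extends BaseOrder {
  skuId: string;
  binLocation: string;
}

// Cross-product code (reporting, exports) accepts any order via the base type.
function orderSummary(order: BaseOrder): string {
  return `${order.id}: ${order.status}`;
}
```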

Deployment architecture

Multi-tenancy model

Each customer organization is a tenant. Data is logically isolated by an org_id foreign key; all tenants share the same database and application servers, rather than being physically isolated with a separate database per tenant.

Why shared database?

  • Lower infrastructure cost (1 Postgres instance instead of 100)
  • Easier to run aggregate queries across tenants for analytics
  • Simpler backup/restore (one backup job instead of 100)

Security: Tenant isolation is enforced in application code (every query filters by org_id), with PostgreSQL row-level security (RLS) policies as a safety net.
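A sketch of the application-level half of that: centralize the tenant filter so individual queries can't forget it. The shape below is illustrative, not our real data-access layer; the RLS safety net would be a Postgres policy along the lines of `CREATE POLICY tenant_isolation ON orders USING (org_id = current_setting('app.org_id')::uuid);` (assuming a UUID org_id and a per-request session setting).

```typescript
interface TenantScoped {
  orgId: string;
}

// Every read path goes through a helper like this, so the org_id filter
// is applied in exactly one place instead of being repeated per query.
function scopedFilter<T extends TenantScoped>(rows: T[], orgId: string): T[] {
  return rows.filter((row) => row.orgId === orgId);
}
```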

Continuous deployment

Pipeline:

  1. Developer pushes to GitHub (main branch)
  2. GitHub Actions runs:
    • Lint (ESLint, Prettier)
    • Type check (TypeScript, strict mode)
    • Unit tests (Jest)
    • Integration tests (Supertest for APIs, Playwright for web)
  3. If all checks pass, build Docker image and push to Google Container Registry
  4. Trigger K8s rolling update (zero-downtime deployment)
  5. Smoke test production endpoint
  6. If smoke test fails, auto-rollback to previous version

Frequency: 5–10 deploys per day across all products. Small, incremental changes instead of big-bang releases.

Database migrations

We use Prisma for schema management and migrations.

Migration workflow:

  1. Developer defines schema change in schema.prisma (e.g., add new column to orders table)
  2. Run prisma migrate dev to generate migration SQL
  3. Commit migration file to Git
  4. CI pipeline runs migration against staging database
  5. Manual QA on staging
  6. Merge to main → CI runs migration against production database during deployment

Safety: Migrations are backward-compatible (no dropping columns or renaming tables without a multi-step migration plan).

Performance optimizations

1. Database query optimization

  • Indexing: All foreign keys have indexes. Complex queries (order search, inventory lookup) have composite indexes.
  • Connection pooling: PgBouncer sits in front of Postgres to cut connection overhead (~10,000 requests/sec served through only 50 actual database connections).
  • Read replicas: Reporting queries (dashboards, exports) hit read replicas; transactional writes go to primary.

2. Caching strategy

  • Redis for hot data: User sessions, frequently accessed lookups (SKU details, carrier rates), real-time GPS positions
  • CDN for static assets: Images, CSS, JS bundles cached at edge (Cloudflare)
  • API response caching: TanStack Query on frontend caches API responses for 5 minutes (reduces backend load by 60%)
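The 5-minute API cache is just TanStack Query configuration. A sketch of the options object (names follow TanStack Query v5; the query key, endpoint, and `gcTime` value are illustrative):

```typescript
const FIVE_MINUTES_MS = 5 * 60 * 1000;

// Passed to useQuery(...) in the real code; queryFn (which would fetch
// something like /api/carrier-rates) is omitted here.
const carrierRatesQueryOptions = {
  queryKey: ["carrier-rates"],
  staleTime: FIVE_MINUTES_MS, // serve cached data for 5 min before refetching
  gcTime: 30 * 60 * 1000,     // keep unused cache entries around for 30 min
};
```

Components that mount within the stale window read straight from the cache, which is where most of the 60% backend-load reduction comes from.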

3. Background job processing

Long-running tasks (customs declaration generation, invoice PDF creation, email sending) are offloaded to background workers.

Queue: BullMQ (Redis-backed job queue)
Workers: Node.js worker processes; scale horizontally based on queue depth

Example: When a warehouse ships 200 orders, the system queues 200 “send tracking notification” jobs. 5 worker processes handle these in parallel (40 jobs each). Total processing time: 30 seconds instead of 5 minutes if done synchronously.
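The fan-out step above can be sketched as a pure payload builder (names are illustrative). With BullMQ, enqueueing would then look roughly like `const queue = new Queue("tracking-notifications", { connection }); for (const job of jobs) await queue.add("send-tracking", job);` and the worker processes drain the queue in parallel:

```typescript
interface TrackingJob {
  orderId: string;
  template: "tracking-notification";
}

// One job per shipped order; BullMQ distributes them across workers,
// so scaling throughput is just a matter of adding worker processes.
function buildTrackingJobs(orderIds: string[]): TrackingJob[] {
  return orderIds.map((orderId) => ({
    orderId,
    template: "tracking-notification",
  }));
}
```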

Lessons learned

1. Don’t over-engineer early

Our first product (Road Logistics) started as a monolith. We extracted shared services (auth, notifications) only after building the second product (Proxy Purchasing) and seeing duplication.

Mistake: Trying to build a “perfect” microservices architecture from day one would have slowed us down by 3–6 months.

2. TypeScript everywhere saves debugging time

Strict TypeScript catches roughly 70% of our bugs at compile time (type mismatches, null/undefined access, missing fields). This has reduced production incidents by ~50% compared to our earlier JavaScript code.

Investment: 10–15% slower development (writing types, fixing type errors) but 3x faster debugging when things break.

3. Monitoring is not optional

We shipped the first product without structured logging. When bugs occurred, we had no idea what went wrong. Adding Sentry + Loki after the fact was painful (retrofitting error handling, adding context to logs).

Lesson: Set up monitoring and logging on day one, even before the first customer.

4. Database migrations need rollback plans

We’ve had 2 production incidents from bad migrations (one added a non-nullable column without default value, crashed the app; another locked a table for 10 minutes during peak traffic).

Solution: Test migrations on a copy of production data. For large tables (over 10M rows), use online schema change tools (pt-online-schema-change for MySQL, pgroll for Postgres).

What’s next

Three technical initiatives for 2026:

  1. OpenTelemetry for distributed tracing — Track a request across multiple microservices (frontend → API gateway → NestJS → Python AI service → Postgres)
  2. Edge functions for real-time features — Move GPS tracking and route calculation closer to users (Cloudflare Workers in Vietnam/Singapore)
  3. GraphQL API — Replace REST for complex client needs (e.g., “fetch order + driver + vehicle + route in one query”)

If you’re building logistics software and want to compare notes, reach out at info@woka.io. Always happy to chat with fellow engineers.