Built for boards that get audited.
Every modern board portal claims security. Diwan is the one you can take fully on-prem the day legal asks. Eight commitments we hold ourselves to, on every tenant, in every deployment.
Tenant isolation
Every tenant lives in its own directory at data/tenants/<slug>/. All reads and writes flow through tenantReadPath() / tenantWritePath(), which read the slug from AsyncLocalStorage set by middleware. Cross-tenant access is impossible by construction.
- Per-tenant data dir, not a shared multi-tenant database
- AsyncLocalStorage carries the tenant slug across every request
- Middleware resolves slug from request host (or the on-prem env pin)
- Operator (BHD) access is read-only and logged in the audit trail
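The pattern above can be sketched in a few lines of Node. tenantReadPath() mirrors the helper named in the text; withTenant() and the context shape are illustrative assumptions, not the actual middleware:

```typescript
import { AsyncLocalStorage } from "node:async_hooks";
import * as path from "node:path";

const tenantContext = new AsyncLocalStorage<{ slug: string }>();

// Hypothetical middleware entry point: resolve the slug (e.g. from the
// request host), then run the handler inside the tenant context.
function withTenant<T>(slug: string, handler: () => T): T {
  return tenantContext.run({ slug }, handler);
}

// Every file path is derived from the ambient slug, so a handler
// cannot name another tenant's directory by accident.
function tenantReadPath(file: string): string {
  const ctx = tenantContext.getStore();
  if (!ctx) throw new Error("no tenant bound to this request");
  return path.join("data", "tenants", ctx.slug, file);
}

const p = withTenant("acme", () => tenantReadPath("roles.json"));
console.log(p); // → data/tenants/acme/roles.json (on POSIX)
```

Because the slug lives in AsyncLocalStorage rather than a function argument, no handler can forget to pass it, and calling the path helpers outside a request context fails loudly.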
Authentication
OTP login over email or WhatsApp. No passwords stored, no password reset flow, no phishing surface. Session cookies are HttpOnly, Secure, SameSite=Lax, and signed.
- OTP codes expire after 10 minutes
- Rate-limited per phone/email + per IP
- Sessions invalidated on role change or admin disable
- SAML / SSO available on Enterprise (Microsoft 365, Google Workspace)
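A minimal sketch of the two OTP properties above (10-minute expiry, per-identity rate limiting). The in-memory store, the send limit of 5, and the function names are assumptions for illustration only:

```typescript
import * as crypto from "node:crypto";

const OTP_TTL_MS = 10 * 60 * 1000;   // expiry window from the text
const MAX_SENDS_PER_WINDOW = 5;      // hypothetical rate limit

interface OtpRecord { code: string; expiresAt: number }
const otps = new Map<string, OtpRecord>();
const sendCounts = new Map<string, number>();

function issueOtp(identity: string, now = Date.now()): string {
  const sent = sendCounts.get(identity) ?? 0;
  if (sent >= MAX_SENDS_PER_WINDOW) throw new Error("rate limited");
  sendCounts.set(identity, sent + 1);
  const code = crypto.randomInt(0, 1_000_000).toString().padStart(6, "0");
  otps.set(identity, { code, expiresAt: now + OTP_TTL_MS });
  return code;
}

function verifyOtp(identity: string, code: string, now = Date.now()): boolean {
  const rec = otps.get(identity);
  if (!rec || now > rec.expiresAt) return false;
  if (code.length !== rec.code.length) return false;
  // Constant-time compare avoids leaking digit positions via timing.
  const ok = crypto.timingSafeEqual(Buffer.from(rec.code), Buffer.from(code));
  if (ok) otps.delete(identity); // codes are single use
  return ok;
}
```

Note the single-use delete on success: a code that has been verified once cannot be replayed even inside its expiry window.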
Authorisation
Per-tenant role matrix at data/tenants/<slug>/roles.json. Each role declares which modules it can read, create, update, or delete. hasPermission() is the canonical check on every mutating endpoint.
- Module × action permission grid, fully editable per tenant
- Sector + sub-sector module visibility runs on top
- requiresPlan gate: Free tenants cannot enable Pro modules even via features.json
- Superadmin role only on the Diwan operator tenant; cloud customers do not get it
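The module × action grid reduces to a small deny-by-default lookup. The roles.json shape below is an assumption inferred from the description, with hypothetical role and module names:

```typescript
type Action = "read" | "create" | "update" | "delete";
type RoleMatrix = Record<string, Partial<Record<string, Action[]>>>;

// Hypothetical roles.json contents for illustration.
const roles: RoleMatrix = {
  secretary: { minutes: ["read", "create", "update"], resolutions: ["read"] },
  member:    { minutes: ["read"], resolutions: ["read"] },
};

// Deny by default: any role, module, or action not declared
// in the matrix is refused.
function hasPermission(role: string, module: string, action: Action): boolean {
  return roles[role]?.[module]?.includes(action) ?? false;
}
```

The important property is the `?? false` fallthrough: an unknown role or a newly added module grants nothing until an admin explicitly edits the grid.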
Audit trail
Every admin mutation is wrapped in withAudit() which appends an immutable JSONL entry to data/tenants/<slug>/audit.json: who, what, when, where (IP).
- Append-only, never mutated
- Includes diff of changed fields, not full record dumps
- CSV export for regulators (Pro and above)
- Operator access logged separately and identifiable in the trail
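An append-only JSONL trail is a one-liner per mutation. This is a sketch of the wrapper shape, assuming the entry fields named above (who, what, when, IP); the real withAudit() signature may differ:

```typescript
import * as fs from "node:fs";

interface AuditEntry {
  who: string;   // acting user
  what: string;  // action name plus field-level diff, not a full record dump
  ip: string;    // where the request came from
}

// Run the mutation, then append exactly one JSON line to the tenant's
// trail file. Lines are only ever appended, never rewritten.
function withAudit<T>(trailFile: string, entry: AuditEntry, mutate: () => T): T {
  const result = mutate();
  const line = JSON.stringify({ ...entry, when: new Date().toISOString() });
  fs.appendFileSync(trailFile, line + "\n");
  return result;
}
```

One JSON object per line means the file can be tailed, grepped, or streamed to a CSV export without ever parsing the whole trail into memory.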
Transport + storage
TLS 1.2 minimum, TLS 1.3 negotiated where supported. Cloudflare Full (Strict) origin certificate. HTTP/2 to all browsers; HTTP/3 negotiated where the client supports it.
- min_tls_version: 1.2 (Cloudflare zone setting)
- always_use_https + automatic_https_rewrites enforced
- HSTS preload candidate (subject to operator opt-in per zone)
- Backups encrypted at rest, taken daily, 30-day retention
Data residency
Cloud SaaS data lives on a Hostinger VPS in the EU, fronted by Cloudflare's Muscat edge. Custom-domain customers run on the same Cloud infrastructure. On-Prem customers' data lives entirely on their own servers; we never see it.
- EU primary, Muscat edge, proximity + latency to Omani users
- On-Prem mode (MAJLIS_MODE=onprem) bypasses Cloud entirely
- No third-country data transfers without an explicit signed amendment
- Sub-processor list public on the DPA page
Operations
Atomic file writes via withFileLock(), no torn writes. PM2 process supervision with auto-restart on crash. Daily VPS-level backups. Smoke tests run after every deploy.
- withFileLock() wraps every concurrent-write critical section
- Single Node process per tenant pool; clear error boundary
- Smoke runs against every deploy, regression blocks the release
- On-Prem ships as Docker compose with documented backup commands
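The atomic-write half of this can be sketched as write-temp-then-rename; the lock half as a per-file promise queue. Both are illustrative assumptions about shape, not the shipped withFileLock() implementation:

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Write to a temp file in the same directory, then rename into place.
// rename() is atomic on POSIX filesystems, so a reader sees either the
// old file or the new one, never a half-written mix.
function atomicWrite(file: string, data: string): void {
  const tmp = path.join(path.dirname(file), `.${path.basename(file)}.tmp`);
  fs.writeFileSync(tmp, data);
  fs.renameSync(tmp, file);
}

// Per-file mutex: chain writers so concurrent requests never interleave.
const locks = new Map<string, Promise<unknown>>();

function withFileLock<T>(file: string, fn: () => Promise<T>): Promise<T> {
  const prev = locks.get(file) ?? Promise.resolve();
  const next = prev.then(() => fn());
  locks.set(file, next.catch(() => undefined)); // keep the chain alive on error
  return next;
}
```

The temp file lives in the same directory as the target so the rename never crosses a filesystem boundary, which is what keeps it atomic.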
Incident response
Personal-data breach notification within 72 hours of confirmation, per the DPA. Operator on-call covers Muscat business hours; Enterprise customers get a named CSM with after-hours phone.
- Email to tenant admins within 72h with nature, scope, mitigations
- Public status snapshot at /api/almajlis/health.json
- Post-incident report to affected tenants within 14 days
- Service-credit policy on the next invoice for SLA breaches