Use cases
Why LowLatencyPubSub
- Optimized for lowest latency and cross-region reachability (fastest-path routing, firewall/DPI friendly).
- Online-only fanout with no persistence — designed for ephemeral real-time signals.
Typical topologies
- 1→N (broadcast) — One writer, many listeners. Prices, scores, game state.
- N→N (rooms) — Many writers, many listeners on the same channel. Chat rooms, collaborative sessions.
- N→1 (telemetry uplink) — Many writers, one listener. Sensor data, metrics aggregation, control plane feedback.
Common use cases
- Real-time data — Push frequent updates (prices, scores, sensor data) to many subscribers. One channel per stream; many listeners subscribe.
- Gaming — Deliver game events or state updates with low latency. Often one writer (game server) and many listeners (players), or multiple writers for region shards.
- Chat and signals — One channel per room or topic. Writers and listeners use the same tenant and channel; the service fans out to all subscribers.
Additional patterns
- Control plane / edge coordination — Send control signals between your services and globally distributed nodes (configuration updates, feature toggles, cache invalidation). Low latency matters more than persistence.
- Presence and live status — Lightweight “online/offline” or “health” signals where only the latest state matters and missed history is acceptable.
- Event notifications — Deliver ephemeral notifications to browsers and mobile clients without operating your own global WebSocket infrastructure.
- Market data fanout / tick streams — Very frequent small messages to many subscribers (quotes, order book deltas, live odds). Designed for “latest state” rather than history.
- Live collaboration signals — Typing indicators, cursor presence, “user is editing” beacons, whiteboard pings. The data is ephemeral and should not be stored.
- Realtime matching / rendezvous — Quickly connect participants (players, callers, devices) by exchanging short session offers/answers and metadata before switching to a direct media/data path.
- Progress updates for long-running jobs — Push incremental status (“10%…20%…done”) from workers to UI dashboards without polling. Losing intermediate updates is acceptable as long as the latest arrives.
- IoT alarms and threshold triggers — Send low-latency alerts (“temperature > X”, “door opened”) to multiple listeners. No need to persist every signal inside the transport layer.
- Multi-region failover signaling — Broadcast fast failover / leader-change / “drain traffic” signals to services in multiple regions during incidents.
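Several of the patterns above (presence, progress updates, market ticks) share one consumer-side rule: keep only the newest update per channel and drop anything stale. A minimal last-write-wins cache can sketch this; the message shape (channel, sequence number, payload) is an assumption for illustration, not part of the LowLatencyPubSub API.

```python
class LatestState:
    """Last-write-wins cache: retains only the newest update per channel."""

    def __init__(self):
        self._state = {}  # channel -> (seq, payload)

    def apply(self, channel, seq, payload):
        """Store the update only if it is newer than the one we hold."""
        current = self._state.get(channel)
        if current is None or seq > current[0]:
            self._state[channel] = (seq, payload)

    def get(self, channel):
        entry = self._state.get(channel)
        return entry[1] if entry is not None else None


cache = LatestState()
cache.apply("status.user.123.online", seq=1, payload=True)
cache.apply("status.user.123.online", seq=3, payload=False)
cache.apply("status.user.123.online", seq=2, payload=True)  # out of order: ignored
print(cache.get("status.user.123.online"))  # False — only the latest state survives
```

Because the transport does not persist messages, a sequence number (or timestamp) attached by the writer is what lets listeners discard reordered or duplicate updates.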
Channel design tips
- Use stable channel roots (namespaces) and keep payload ≤64KB.
- Use tenants for environment/customer isolation (see Using tenants).
- Keep channel names descriptive and consistent: `orders.created`, `game.room.123`, `sensors.eu.temp`.
- Reserve separate roots for public vs internal traffic (example: `pub.#` for client-facing events, `sys.#` for internal control messages).
- Prefer a consistent event naming scheme (for example, keep the event type as the last segment: `orders.123.created`, `orders.123.updated`).
- Version your event schema when payload formats may evolve: `v1.orders.#` (or `orders.v1.#`).
- Avoid unbounded channel cardinality (e.g. a unique channel per message); prefer per-room, per-match, or per-stream channels.
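The tips above can be enforced with a couple of small helpers: one that builds dot-delimited channel names from validated segments, and one that checks the 64KB payload guideline before publishing. This is a sketch under assumed conventions (dot-delimited segments, JSON payloads); the helper names are hypothetical, not part of any LowLatencyPubSub SDK.

```python
import json

MAX_PAYLOAD_BYTES = 64 * 1024  # design tip: keep payload <= 64KB


def channel(*segments):
    """Join segments into a dot-delimited channel name.

    channel("v1", "orders", "123", "created") -> "v1.orders.123.created"
    """
    for s in segments:
        if not s or "." in s:
            raise ValueError(f"invalid channel segment: {s!r}")
    return ".".join(segments)


def check_payload(payload_obj):
    """Serialize a payload to JSON and enforce the 64KB size guideline."""
    raw = json.dumps(payload_obj).encode("utf-8")
    if len(raw) > MAX_PAYLOAD_BYTES:
        raise ValueError(f"payload is {len(raw)} bytes, exceeds 64KB limit")
    return raw


print(channel("v1", "orders", "123", "created"))  # v1.orders.123.created
```

Centralizing name construction like this keeps the event type in the last segment and makes a later schema bump (`v1` to `v2`) a one-line change.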
Channel patterns
All examples below use only exact names or the `.#` suffix (the only wildcard allowed in channel tokens).
- Sharded streams (load distribution) — split a hot stream into a fixed number of shards: `orders.shard.00.#` … `orders.shard.63.#` (the writer picks a shard by hashing the id).
- Entity tree (object-centric channels) — “everything about an object” and “everything about a type”: `entity.user.123.#`, `entity.user.#` (events such as `entity.user.123.profile.updated`).
- Command / event split — separate commands, events, and system signals into different roots: `cmd.game.42.#`, `evt.game.42.#`, `sys.game.42.#`.
- Region-first naming — put geography in the first segments for convenient prefix subscriptions: `geo.eu.de.by.munich.#`; subscriptions: `geo.eu.#`, `geo.eu.de.#`.
- Room partitioning — split large rooms/channels by traffic type: `room.123.msg.#`, `room.123.presence.#`, `room.123.typing.#`.
- Latest-state channels — channels for statuses where only the “last update” matters: `status.user.123.online`, `status.service.api.latency.p50`, plus the subscription `status.user.#`.
- Session / rendezvous channels — short-lived sessions and participant coordination: `session.abc123.#`, `matchmaking.queue.eu.#`, `matchmaking.match.42.#`.
- Public vs internal roots (capability boundaries) — so permissions are readable at a glance: `pub.myapp.#` (client-facing), `sys.myapp.#` (internal control), `priv.user.123.#` (per-user).
- Schema versioning in the prefix — safe payload evolution: `v1.orders.#`, later `v2.orders.#`.
- Gaming proximity (AOI) — cells/geohash; subscribe to neighbors: `game.42.cell.u33dc.#` (the client holds 9–25 subscriptions and updates them as it moves).
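The sharded-streams pattern above depends on every writer mapping the same id to the same shard. A minimal sketch, assuming the 64-shard `orders.shard.NN.*` layout from the example: use a stable hash such as SHA-256 rather than Python's built-in `hash()`, which is randomized per process and would disagree across hosts.

```python
import hashlib

NUM_SHARDS = 64  # matches orders.shard.00.# ... orders.shard.63.#


def shard_channel(order_id: str) -> str:
    """Deterministically map an order id to one of NUM_SHARDS shard channels.

    SHA-256 is stable across processes and hosts, so every writer
    agrees on the shard for a given id.
    """
    digest = hashlib.sha256(order_id.encode("utf-8")).digest()
    shard = int.from_bytes(digest[:4], "big") % NUM_SHARDS
    return f"orders.shard.{shard:02d}.{order_id}"


# All events for one order land on the same shard, so a listener can take
# a whole shard with a single prefix subscription like orders.shard.17.#
print(shard_channel("A-1001"))
```

Listeners can then divide the 64 shard prefixes among worker processes, and adding workers only means reassigning shard subscriptions, not renaming channels.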
LowLatencyPubSub routes messages through the service overlay along the fastest available paths, which can outperform direct IP routes and works better in networks with restrictive policies.
See the Use cases section in the sidebar for step-by-step examples.