Atlas Verified

Building Real-Time Systems on PostgreSQL: Beyond the ORM

Atlas Verified Team · February 13, 2026 · 8 min read

When your application needs real-time updates, the default answer in 2025 is to reach for a managed service: Firebase Realtime Database, Supabase Realtime, Pusher, Ably, or a hosted WebSocket layer. These services abstract away the complexity of pushing updates to connected clients. They work well — until they don't.

At Atlas Verified, we started with a managed database platform that included real-time subscriptions. As our system grew, we hit the limits of that abstraction. Connection pooling constraints, real-time reliability issues, the need for PostGIS spatial queries, and the desire for custom trigger logic all pushed us in the same direction: take control of the database layer.

We migrated to self-hosted PostgreSQL and built our real-time system using PostgreSQL's native LISTEN/NOTIFY mechanism paired with Server-Sent Events. This post covers why we made that decision, how the architecture works, and what we learned.

Why We Left the Managed Platform

Managed database platforms offer a compelling value proposition: authentication, row-level security, real-time subscriptions, auto-generated APIs, and a dashboard — all out of the box. For early-stage products, this is genuinely valuable. You can build a functional application without writing a single line of backend code.

The problems emerge as your application matures and your requirements diverge from the platform's assumptions.

Connection limits. Managed platforms often route database connections through a connection pooler that imposes its own limits. When you have multiple backend services that each need their own connection pools — an API server, a background worker system, a real-time listener — you quickly exhaust the available connections. We found ourselves engineering around the pooler's limitations rather than solving actual product problems.

Real-time reliability. The managed platform's real-time system worked through Change Data Capture on PostgreSQL's Write-Ahead Log. This is architecturally elegant but introduced failure modes we couldn't control. WAL-based replication has inherent latency. The real-time service occasionally fell behind, delivering stale updates. Reconnection after network interruptions was unreliable. And debugging these issues required access to infrastructure we didn't control.

Extension requirements. Our application processes trade documents that include geographic data: port locations, shipping routes, vessel positions. PostGIS — the spatial extension for PostgreSQL — is essential for these queries. The managed platform supported PostGIS, but we needed specific versions and configurations that weren't always available.

Custom trigger logic. We needed database triggers that fired application-specific events — not just generic "row changed" notifications. When a document's processing status changes, we need to notify the specific connected client who uploaded it. The managed platform's real-time subscriptions were table-level; we needed entity-level routing.

The migration decision wasn't about cost or principle. It was pragmatic: we were spending more time working around the platform's constraints than we would have spent building the features ourselves.

LISTEN/NOTIFY: PostgreSQL's Hidden Pub/Sub

PostgreSQL has included a built-in publish/subscribe mechanism for decades; version 9.0, released in 2010, added the ability to attach a payload to a notification. Despite its age, LISTEN/NOTIFY is remarkably underused. Most developers don't know it exists.

The mechanism is simple. Any database connection can execute LISTEN on a named channel to subscribe. Any connection can execute NOTIFY to send a message to all listeners on that channel. The payload is a string, limited to 8,000 bytes.
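
The whole mechanism fits in a few commands. A sketch in psql — the channel name and payload here are illustrative, not our production schema:

```sql
-- Session A subscribes to a named channel (the name is arbitrary):
LISTEN doc_events;

-- Session B publishes; every current listener on the channel receives it:
NOTIFY doc_events, 'document 42 updated';

-- The function form works inside queries and triggers:
SELECT pg_notify('doc_events', 'document 42 updated');
```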

What makes LISTEN/NOTIFY powerful for real-time applications:

Zero infrastructure overhead. There's no separate message broker to deploy, monitor, or scale. If you have PostgreSQL, you have pub/sub. No Redis, no RabbitMQ, no Kafka — for this specific use case.

Transactional guarantees. NOTIFY messages are delivered only when the enclosing transaction commits. If a transaction inserts a row and sends a notification, and the transaction rolls back, the notification is never delivered. This is a property you don't get from external message brokers without careful orchestration.
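
To make the commit-time semantics concrete, here is a sketch with a hypothetical documents table:

```sql
BEGIN;
INSERT INTO documents (id, status) VALUES (42, 'processing');
NOTIFY doc_events, 'document 42 inserted';
ROLLBACK;  -- the INSERT is undone AND the notification is never delivered

BEGIN;
INSERT INTO documents (id, status) VALUES (43, 'processing');
NOTIFY doc_events, 'document 43 inserted';
COMMIT;    -- listeners receive the notification only now, after commit
```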

Trigger integration. PostgreSQL triggers can call pg_notify as part of their execution. This means you can have the database automatically notify listeners whenever specific data changes, without any application code in the write path.
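
A sketch of such a trigger — function, channel, and table names here are hypothetical, and the payload mirrors the minimal shape discussed below (table, operation, primary key):

```sql
CREATE OR REPLACE FUNCTION notify_document_change() RETURNS trigger AS $$
DECLARE
  row_id bigint;
BEGIN
  -- OLD is only meaningful for DELETE/UPDATE, NEW for INSERT/UPDATE.
  IF TG_OP = 'DELETE' THEN
    row_id := OLD.id;
  ELSE
    row_id := NEW.id;
  END IF;
  PERFORM pg_notify('doc_events', json_build_object(
    'table', TG_TABLE_NAME,
    'op',    TG_OP,
    'id',    row_id
  )::text);
  RETURN NULL;  -- return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER documents_notify
AFTER INSERT OR UPDATE OR DELETE ON documents
FOR EACH ROW EXECUTE FUNCTION notify_document_change();
```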

The limitations are real and worth understanding:

No persistence. If no one is listening when a NOTIFY fires, the message is lost. There's no replay, no message queue, no delivery guarantees for disconnected clients. Your listeners must handle reconnection and state recovery.

Payload size. The 8,000-byte limit on notification payloads means you can't send large objects through NOTIFY. We send minimal payloads — the table name, operation type, and primary key — and let the client fetch the full data if needed.

Single database scope. LISTEN/NOTIFY works within a single PostgreSQL instance. It doesn't span replicas or clusters. For architectures where all writes go to a single primary — as ours do — this isn't a limitation.

Our Architecture: Triggers, SSE, and Channel Multiplexing

Here's how the pieces fit together.

Database triggers are installed on tables where real-time updates matter: documents, messages, conversations, notifications, shipment verifications, and more. When a row is inserted, updated, or deleted, the trigger fires and calls pg_notify with a JSON payload containing the table name, operation type, the primary key, and any filter columns.

A dedicated listener connection in our API server executes LISTEN on the notification channel and receives all trigger events. This is a long-lived, dedicated PostgreSQL connection — separate from the query connection pool — that does nothing but listen for notifications.

An event bus receives notifications from the listener and routes them to connected clients based on their subscriptions. A client subscribed to documents where a specific shipment ID matches only receives notifications about documents in that specific shipment.

Server-Sent Events deliver notifications to the browser. Each connected client opens a single SSE connection to the API server, specifying which channels and filters they want. The event bus matches incoming notifications against client subscriptions and pushes matching events down the appropriate SSE connections.
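
The routing step in the event bus is ordinary filtering logic. A minimal sketch, assuming a notification payload of table, operation, primary key, and filter columns — the type names and the subscription shape are illustrative, not our exact schema:

```typescript
interface Notification {
  table: string;
  op: "INSERT" | "UPDATE" | "DELETE";
  id: string;
  filters: Record<string, string>; // e.g. { shipment_id: "s-1" }
}

interface Subscription {
  clientId: string;
  table: string;
  filter?: { column: string; value: string };
}

// Return the client IDs whose subscriptions match an incoming notification.
// A subscription without a filter receives every event on its table.
function route(subs: Subscription[], n: Notification): string[] {
  return subs
    .filter((s) => s.table === n.table)
    .filter((s) => !s.filter || n.filters[s.filter.column] === s.filter.value)
    .map((s) => s.clientId);
}
```

The matched client IDs are then used to look up the corresponding SSE connections and push the event down each one.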

Why SSE Over WebSockets

This is a decision we've been consistently happy with. Server-Sent Events are simpler than WebSockets in every dimension that matters for our use case.

Unidirectional by design. Our real-time needs are server-to-client: the server tells the client when data changes. The client doesn't need to send real-time messages to the server — it uses standard HTTP requests for that. SSE's unidirectional model matches our actual requirement.

HTTP-native. SSE connections are standard HTTP requests with a specific content type. They work through every HTTP proxy, load balancer, and CDN without special configuration. WebSockets require HTTP Upgrade, which some proxies don't handle correctly and some corporate firewalls block entirely.

Automatic reconnection. The browser's EventSource API handles reconnection natively. If the connection drops, the browser automatically reconnects with exponential backoff. With WebSockets, you implement reconnection logic yourself — and get it wrong at least once.

The one area where WebSockets win — bidirectional communication — we simply don't need. For applications that require real-time client-to-server messaging, WebSockets are the right choice. For applications that push notifications from server to client, SSE is the right choice.
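
Part of SSE's simplicity is the wire format itself: an event is just a few text lines over a regular HTTP response with Content-Type: text/event-stream. A sketch of a frame formatter — the event name and payload shape are illustrative:

```typescript
// Format one SSE event: optional "id:", an "event:" name, and one or more
// "data:" lines, terminated by a blank line.
function sseFrame(event: string, data: unknown, id?: string): string {
  const lines: string[] = [];
  if (id !== undefined) lines.push(`id: ${id}`);
  lines.push(`event: ${event}`);
  // Multi-line payloads need one "data:" field per line.
  for (const line of JSON.stringify(data).split("\n")) {
    lines.push(`data: ${line}`);
  }
  return lines.join("\n") + "\n\n";
}
```

The "id:" field is what lets the browser's EventSource send a Last-Event-ID header on reconnect, which a server can use to help clients recover missed events.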

Building a Query Layer Without an ORM

When we left the managed platform, we also left its auto-generated query API. We needed a database access layer for our API server.

The standard options are ORMs like Prisma, TypeORM, or Drizzle — or raw SQL with a driver. We chose a third path: a lightweight query builder that provides a chainable API surface while generating and executing raw SQL underneath.

Familiar API. Our existing codebase was written against the managed platform's client library. A compatible API surface meant minimal code changes during migration. Queries that previously ran through the platform's client could run through our query builder with identical syntax.

No magic. ORMs generate SQL that you can't see and sometimes can't control. When a query is slow, you need to read the generated SQL, which often bears little resemblance to the ORM call that produced it. Our query builder generates predictable SQL — each chained method maps to a specific SQL clause, and you can always inspect the generated query.

Embedded joins. Trade data is relational. A shipment has documents, which have nodes, which have verification checks. Being able to express nested selections and get back a nested result — without writing a complex JOIN query by hand every time — is a significant developer experience improvement.

The result is a query layer that's a few hundred lines of code, handles the vast majority of our database interactions, and produces SQL we can always read and optimize.
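
To illustrate the idea — not our actual implementation — here is a toy builder where each chained method maps to one SQL clause and the generated query is always inspectable. Parameter binding, joins, and embedded selections are omitted:

```typescript
class Query {
  private table = "";
  private columns = "*";
  private conditions: string[] = [];
  private params: unknown[] = [];
  private limitN?: number;

  from(table: string): this { this.table = table; return this; }
  select(cols: string): this { this.columns = cols; return this; }
  // Values become $n placeholders; binding happens at execution time.
  where(column: string, value: unknown): this {
    this.params.push(value);
    this.conditions.push(`${column} = $${this.params.length}`);
    return this;
  }
  limit(n: number): this { this.limitN = n; return this; }

  toSQL(): string {
    let sql = `SELECT ${this.columns} FROM ${this.table}`;
    if (this.conditions.length) sql += ` WHERE ${this.conditions.join(" AND ")}`;
    if (this.limitN !== undefined) sql += ` LIMIT ${this.limitN}`;
    return sql;
  }
}
```

Because toSQL() is just string assembly, a slow query can be inspected and EXPLAINed directly, with no translation step between the call site and the SQL.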

Connection Pool Budgeting

When you self-host PostgreSQL, you own the connection budget. PostgreSQL's max_connections setting defines the absolute ceiling, and every connection consumes memory.

We discovered that thinking about connections as a budget — a finite resource shared across services — prevents a class of problems that bite teams who don't plan for it.

Each service declares its maximum connection count. The sum of all maximums must stay well below PostgreSQL's limit, with headroom for ad-hoc connections — psql sessions, migration scripts, monitoring tools. This sounds obvious, but most teams don't do it until they hit connection exhaustion in production. By budgeting connections upfront, we've avoided that particular 3 AM page.
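
The budget itself can live in code so the check runs before deploy. A sketch with illustrative numbers — the service names and limits are hypothetical, not our real configuration:

```typescript
const MAX_CONNECTIONS = 100;   // PostgreSQL's max_connections setting
const RESERVED_HEADROOM = 15;  // psql sessions, migrations, monitoring

// Each service's declared maximum pool size.
const services: Record<string, number> = {
  api_pool: 20,
  worker_pool: 15,
  realtime_listener: 1,        // the dedicated LISTEN connection
};

// The sum of all declared maximums plus headroom must fit under the ceiling.
function withinBudget(): boolean {
  const total = Object.values(services).reduce((a, b) => a + b, 0);
  return total + RESERVED_HEADROOM <= MAX_CONNECTIONS;
}
```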

Safe Migrations at Scale

Migrating from a managed platform to self-hosted PostgreSQL meant moving hundreds of tables, functions, triggers, and indexes. But the bigger challenge was building a migration system that would keep working as the application evolved.

Our automated migration system works like this: migration files are named with timestamps and live in a dedicated directory. A migration runner scans the directory, compares against a tracking table that records which migrations have been applied, and executes any pending migrations in order. Each migration runs inside a transaction — if it fails, it rolls back cleanly and blocks the deploy until the issue is fixed.

The migration runner executes automatically on every deploy, before the application starts. This means schema changes are applied as part of the deployment process, with no manual steps. A developer creates a timestamped SQL file, pushes it to the repository, and the next deploy applies it.
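
The runner's core decision is simple enough to sketch: given the directory listing and the tracking table's applied set, return what's pending, in order. Filenames here are hypothetical:

```typescript
// Timestamp-prefixed filenames sort chronologically as plain strings,
// so lexicographic sort gives application order.
function pendingMigrations(files: string[], applied: Set<string>): string[] {
  return files
    .filter((f) => f.endsWith(".sql") && !applied.has(f))
    .sort();
}
```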

Key design decisions that keep this safe: migrations use idempotent guards so they don't fail if partially applied. Every migration must be a self-contained SQL script that runs without human input. And if a migration fails, the transaction rolls back entirely — the application won't start with a partially-applied schema change.
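
An idempotent migration leans on guards like IF NOT EXISTS so that re-running it is harmless. A hypothetical example — the table and index names are illustrative:

```sql
-- Safe to re-run: every statement is a no-op if its object already exists.
CREATE TABLE IF NOT EXISTS verification_events (
  id bigserial PRIMARY KEY,
  created_at timestamptz NOT NULL DEFAULT now()
);

ALTER TABLE verification_events
  ADD COLUMN IF NOT EXISTS shipment_id text;

CREATE INDEX IF NOT EXISTS verification_events_shipment_idx
  ON verification_events (shipment_id);
```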

What We Gained

Moving to self-hosted PostgreSQL with custom real-time was a significant engineering investment. But the returns have been substantial:

Reliability. Real-time notifications arrive within milliseconds of the underlying data change. No WAL replication lag, no third-party service outages in the critical path.

Debuggability. When something goes wrong, we can trace the entire path: trigger fires, notification sent, event bus routes, SSE delivers. Every step is in our code, in our logs, on our infrastructure.

Flexibility. Adding real-time to a new table is a single SQL migration that installs a trigger. No platform configuration, no third-party dashboard, no support ticket.

Performance. Direct PostgreSQL connections with a tuned pool outperform routed connections through a managed platform's infrastructure. Query latency dropped meaningfully after migration.

The tradeoff is operational responsibility. We manage the database, handle backups, monitor replication, and plan capacity. For a team comfortable with PostgreSQL operations, this is manageable. For a team that isn't, the managed platform's abstractions might be worth their limitations.


Atlas Verified runs on self-hosted PostgreSQL with custom real-time infrastructure, processing thousands of trade compliance verifications daily across the global supply chain.
