Why I Built PgBeam
If you're running PostgreSQL across regions, you know the latency problem.
A round trip from Tokyo to a database in Virginia takes 100-200ms. And a new PostgreSQL connection is not one round trip but four: TCP handshake, TLS negotiation, PostgreSQL startup, and authentication. That is 400-800ms before the first query executes.
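The arithmetic behind those numbers is a simple multiplication (treating each phase as one full round trip, which is a simplification: TLS 1.3 and pipelined auth can shave part of it):

```python
# Back-of-envelope cost of a cold PostgreSQL connection from a distant client.
def cold_connect_cost(rtt_ms: float, round_trips: int = 4) -> float:
    """Estimate setup time: TCP handshake, TLS, PG startup, auth."""
    return rtt_ms * round_trips

# Tokyo -> Virginia round trip is roughly 100-200 ms.
low = cold_connect_cost(100)   # 400 ms
high = cold_connect_cost(200)  # 800 ms
print(low, high)
```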
In serverless and edge architectures, where connections are short-lived, teams pay this cost on nearly every request. The result is either degraded user experience or expensive over-provisioning of read replicas, along with the operational complexity that comes with them.
The gap I found
I looked at the solutions available and found that every one of them forces a trade-off:
PgBouncer handles connection pooling but offers no query caching. Every read still round-trips to the origin. You also have to host and operate it yourself.
Cloudflare Hyperdrive provides pooling and caching, but only works from Cloudflare Workers. If you're on Vercel, AWS Lambda, Fly.io, or Railway, you can't use it.
Prisma Accelerate offers similar capabilities, but requires the Prisma ORM. If your team uses Drizzle, Knex, raw pg, SQLAlchemy, or any non-JavaScript stack, it's not an option.
RDS Proxy is VPC-bound, single-region, AWS-only, and adds no caching.
I couldn't find a platform-agnostic, managed solution that combines connection pooling and query caching across multiple regions. That's the gap PgBeam fills.
| | PgBeam | Hyperdrive | Prisma Accelerate | PgBouncer |
|---|---|---|---|---|
| Pooling | Yes | Yes | Yes | Yes |
| Query cache | Yes | Yes | Yes | No |
| Multi-region | Yes | Yes | Yes | No |
| Platform lock-in | None | CF Workers | Prisma ORM | None |
| Managed | Yes | Yes | Yes | Self-hosted |
| Read replicas | Yes | No | No | No |
| Custom domains | Yes | No | No | N/A |
The approach
PgBeam is a managed PostgreSQL proxy deployed across six global regions. It speaks the PG wire protocol natively, so standard drivers and ORMs work without code changes. Adoption requires changing one environment variable:
```shell
# Before
DATABASE_URL=postgresql://user:pass@db.example.com:5432/mydb

# After
DATABASE_URL=postgresql://user:pass@abc.gw.pgbeam.app:5432/mydb
```

No SDK. No code changes. No migration. This matters because PgBeam should slot into existing stacks without engineering effort or vendor risk.
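As a sanity check, the swap is nothing more than a host rewrite in the connection URL. A minimal sketch using only the Python standard library (the gateway hostname is the example value from above, not a real endpoint):

```python
from urllib.parse import urlsplit, urlunsplit

def point_at_gateway(database_url: str, gateway_host: str) -> str:
    """Replace the host in a PostgreSQL URL, keeping credentials,
    port, database name, and query parameters unchanged."""
    parts = urlsplit(database_url)
    userinfo = parts.netloc.rsplit("@", 1)[0]  # assumes user:pass@ is present
    port = parts.port or 5432
    netloc = f"{userinfo}@{gateway_host}:{port}"
    return urlunsplit((parts.scheme, netloc, parts.path, parts.query, parts.fragment))

before = "postgresql://user:pass@db.example.com:5432/mydb"
after = point_at_gateway(before, "abc.gw.pgbeam.app")
print(after)  # postgresql://user:pass@abc.gw.pgbeam.app:5432/mydb
```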
Three capabilities compound to reduce database load and improve response times:
- Global routing: latency-based DNS directs each connection to the nearest proxy region.
- Connection pooling: warm upstream connections eliminate the 4-RTT handshake overhead, reducing connection establishment cost by 3-7x.
- Edge query caching: read query results are cached in the nearest region with stale-while-revalidate semantics, controlled per query via SQL annotations or dashboard rules.
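The stale-while-revalidate semantics in the last point can be sketched as a tiny in-memory cache. This is an illustration of the concept, not PgBeam's implementation; the TTL and stale-window values are made up:

```python
import time

class SWRCache:
    def __init__(self, ttl: float, stale_window: float):
        self.ttl = ttl                    # seconds an entry counts as fresh
        self.stale_window = stale_window  # extra seconds we may serve stale
        self.store = {}                   # key -> (value, stored_at)

    def get(self, key, fetch, now=None):
        now = time.monotonic() if now is None else now
        hit = self.store.get(key)
        if hit:
            value, stored_at = hit
            age = now - stored_at
            if age <= self.ttl:
                return value, "fresh"
            if age <= self.ttl + self.stale_window:
                # Serve the stale value immediately; a real proxy would
                # refresh in the background rather than inline.
                self.store[key] = (fetch(), now)
                return value, "stale"
        value = fetch()
        self.store[key] = (value, now)
        return value, "miss"

cache = SWRCache(ttl=60, stale_window=60)
r1 = cache.get("SELECT 1", fetch=lambda: "row", now=0.0)    # origin hit
r2 = cache.get("SELECT 1", fetch=lambda: "row2", now=30.0)  # served fresh
r3 = cache.get("SELECT 1", fetch=lambda: "row2", now=90.0)  # served stale, refreshed
print(r1[1], r2[1], r3[1])  # miss fresh stale
```

The key property: after the first fetch, the client never waits on the origin again as long as requests keep arriving within the stale window.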
How the proxy works
PgBeam is a wire-protocol proxy. It speaks the PostgreSQL frontend/backend protocol directly, so from your application's perspective it looks like a normal PostgreSQL server. The proxy handles TLS termination, authentication, and the startup handshake, then maps your session onto a pre-authenticated upstream connection.
This is different from an HTTP-level proxy or an SDK wrapper. Because PgBeam operates at the wire protocol level, it can inspect individual queries, cache read results, and manage connection lifecycle without your application knowing anything about it. Any client that speaks PostgreSQL works: psql, pg for Node, asyncpg for Python, JDBC, ODBC, Go's database/sql.
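The "looks like a normal PostgreSQL server" claim rests on the wire protocol being fully specified. As a concrete illustration (not PgBeam code), here is the v3 StartupMessage every PostgreSQL client sends after TCP/TLS setup: a 4-byte length, the protocol version 3.0 encoded as 196608, then null-terminated parameter pairs and a trailing null:

```python
import struct

def startup_message(user: str, database: str) -> bytes:
    """Build a PostgreSQL v3 StartupMessage."""
    params = b""
    for k, v in (("user", user), ("database", database)):
        params += k.encode() + b"\x00" + v.encode() + b"\x00"
    body = struct.pack("!i", 196608) + params + b"\x00"  # 196608 = 0x00030000
    return struct.pack("!i", len(body) + 4) + body       # length includes itself

msg = startup_message("user", "mydb")
print(len(msg))  # 33 bytes for these parameter values
```

Because this framing is public and stable, any proxy that parses it correctly is indistinguishable from the server itself as far as the driver is concerned.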
The six production regions are us-east-1, us-west-2, eu-west-1, ap-south-1, ap-southeast-1, and ap-northeast-1. All regions are connected via full-mesh VPC peering, which enables a feature called pool region routing: the proxy region closest to your user accepts the connection, but the upstream connection to your database is established from the proxy region closest to your database. This keeps the client-to-proxy hop fast while minimizing proxy-to-database latency.
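Pool region routing reduces to two independent nearest-region choices. A sketch with invented latency numbers (a Tokyo user, a us-east-1 database):

```python
def nearest(latencies_ms: dict[str, float]) -> str:
    """Pick the region with the lowest measured round-trip time."""
    return min(latencies_ms, key=latencies_ms.get)

client_rtt = {"us-east-1": 160, "eu-west-1": 220, "ap-northeast-1": 8}
db_rtt     = {"us-east-1": 1,   "eu-west-1": 75,  "ap-northeast-1": 160}

entry_region = nearest(client_rtt)  # accepts the client's TCP/TLS session
pool_region  = nearest(db_rtt)      # holds the warm upstream connections
print(entry_region, pool_region)    # ap-northeast-1 us-east-1
```

The client pays a short hop to the entry region, the fast inter-region leg rides the peered mesh, and the upstream connection stays near the database.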
Benchmark
We publish a live latency benchmark that runs serverless functions from 20 global regions. Each function opens a real TLS PostgreSQL connection to a database in us-east-1, once directly and once through PgBeam. It measures full connect time plus p50 query latency across 5 samples (first discarded as warmup). No synthetic data, no cherry-picked regions.
From Mumbai, a direct connection costs 1,367ms (connect + query). Through PgBeam with a cache hit, that drops to 9ms — 152x faster. Singapore goes from 1,571ms to 12ms (131x). Tokyo from 1,087ms to 16ms (68x). Even on cache misses, connection pooling alone cuts overhead by 3-7x because you skip the 4-RTT handshake. Mumbai's connect time drops from 1,174ms to 8ms — the query still round-trips to the origin, but the connection is already warm.
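The quoted speedups are straight ratios of total direct time to total PgBeam time on a cache hit:

```python
# (direct connect+query ms, PgBeam cache-hit ms) per benchmark region
pairs = {"Mumbai": (1367, 9), "Singapore": (1571, 12), "Tokyo": (1087, 16)}
speedups = {city: round(direct / pgbeam) for city, (direct, pgbeam) in pairs.items()}
print(speedups)  # {'Mumbai': 152, 'Singapore': 131, 'Tokyo': 68}
```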
Limitations
- Read caching only. We do not replicate data; writes pass through to the upstream.
- Eventual consistency. Cached reads can be up to 60s stale by default. Caching is opt-in and configured per query via Cache Rules.
- No cross-region cache sync. Each region's cache is independent and expires via TTL.
- PostgreSQL only. There's no support for MySQL or other databases right now.
Why self-fund
PgBeam is self-funded. Running six global regions costs real money every month. The pricing exists to make the project sustainable long-term, not to maximize revenue during a technical preview.
The goal is straightforward: cover infrastructure costs from day one so the project doesn't depend on runway or external funding to keep running. If PgBeam solves a real problem, it should be able to sustain itself on the value it delivers.
What's next
PgBeam is in technical preview, intended for internal testing and evaluation only. The platform is not production-ready and no SLA or uptime guarantees are provided. We're collecting feedback on real workloads. Here's what we've shipped and what's coming:
Shipped:
- Dashboard. Web UI for managing projects, viewing cache hit rates, query insights, and configuring pool and cache settings.
- Vercel Marketplace. One-click provisioning as a Vercel add-on with integrated billing.
- Read replicas. Automatic read/write splitting with round-robin replica selection. No ORM-level routing needed.
- Custom domains. Use your own domain for connection strings with DNS verification and automatic TLS.
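The read-replica routing above can be sketched in a few lines. The read/write classification here is a naive prefix check for illustration only; a wire-protocol proxy can classify the actual parsed query:

```python
from itertools import cycle

class Router:
    def __init__(self, primary: str, replicas: list[str]):
        self.primary = primary
        self.replicas = cycle(replicas)  # round-robin over replica hosts

    def route(self, sql: str) -> str:
        """Send reads to the next replica, everything else to the primary."""
        if sql.lstrip().upper().startswith(("SELECT", "SHOW")):
            return next(self.replicas)
        return self.primary

r = Router("primary", ["replica-1", "replica-2"])
routes = [r.route(q) for q in
          ("SELECT 1", "SELECT 2", "INSERT INTO t VALUES (1)", "SELECT 3")]
print(routes)  # ['replica-1', 'replica-2', 'primary', 'replica-1']
```

Doing this at the proxy is what removes the need for ORM-level routing: the application keeps one connection string and the split happens transparently.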
Coming:
- More regions. We're evaluating demand for additional regions beyond the current six.
- Multi-cloud. Azure and Google Cloud support is planned.
- Serverless protocols. WebSocket and HTTP query endpoints for edge runtimes that can't open TCP sockets.
Running global infrastructure isn't cheap. Right now, our focus is on validating PgBeam with real users and making sure the product delivers before scaling further. If you find this space interesting, please reach out.
If your team is running PostgreSQL across regions and paying the latency tax, try PgBeam. Check the live benchmarks to see latency numbers from 20 global regions.