[Translation] The Future of PostgreSQL: How a 64-bit Transaction Counter Solves Scaling Issues

Dmitry Kuznetsov

For many years, the PostgreSQL community was skeptical about using this database management system (DBMS) under heavy transactional load. PostgreSQL was considered fine for lab tests, mid-tier web applications, and smaller backend systems, but the prevailing belief was that serious transactional workloads required an expensive DBMS built specifically for that purpose. As a result, PostgreSQL saw little development in that direction, and a whole range of issues went unaddressed.

However, reality has turned out differently. More and more of our clients are running into problems rooted in that mindset. For example, the conventional wisdom in the global PostgreSQL community has been that 64 cores is the largest server on which PostgreSQL runs effectively, yet we now see machines of that size becoming a typical minimum configuration. One bottleneck in particular has surfaced: the transaction counter, and it is a far more interesting problem. So let's dive into what the problem is, how we solved it, and what the international community thinks of it.
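To get a sense of the scale before we dive in: PostgreSQL's classic transaction counter (XID) is 32 bits wide, which caps it at roughly 2^31 (about 2.1 billion) usable IDs before wraparound, so autovacuum must continually "freeze" old rows to stay under that ceiling. As a minimal illustration, a standard catalog query using the built-in age() function and pg_database.datfrozenxid shows how close each database is to the limit:

```sql
-- How many transaction IDs each database has consumed since its
-- last full freeze; autovacuum must force this back down before
-- it nears the 32-bit ceiling of ~2^31 (about 2.1 billion).
SELECT datname,
       age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;
```

At around 25,000 write transactions per second, that two-billion headroom is consumed in roughly a day, which is exactly why the width of the counter becomes a scaling question on large servers.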
