pgbench is a benchmarking tool bundled with PostgreSQL. By default it runs a TPC-B-like workload (not a full TPC-C).
1. Initialize the Test Database
This sets up the schema and populates data.
pgbench -i -s 10 mydb
-i: Initialize the database.
-s 10: Scale factor. Each scale unit adds 100,000 rows to the pgbench_accounts table, so -s 10 yields about 1,000,000 rows.
mydb: The database to test.
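As a quick sanity check, the expected size of pgbench_accounts for a given scale factor can be computed in the shell (the 100,000-rows-per-unit figure is pgbench's documented table sizing):

```shell
# Each scale unit adds 100,000 rows to pgbench_accounts.
SCALE=10
ROWS=$((SCALE * 100000))
echo "$ROWS"   # 1000000 rows at -s 10
```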
2. Run a Simple Benchmark Test
pgbench -c 10 -j 2 -T 60 mydb
-c 10: 10 concurrent clients.
-j 2: 2 threads.
-T 60: Run for 60 seconds.
mydb: Target database.
It will output something like:
[postgres@iZ2ze4mflpfiplp0evcw8gZ root]$ pgbench -c 10 -j 2 -T 60 mydb
pgbench (18devel)
starting vacuum...end.
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 10
query mode: simple
number of clients: 10
number of threads: 2
maximum number of tries: 1
duration: 60 s
number of transactions actually processed: 12841
number of failed transactions: 0 (0.000%)
latency average = 46.726 ms
initial connection time = 39.072 ms
tps = 214.013767 (without initial connection time)
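When runs are scripted, it is usually the headline numbers rather than the full report that matter. A small awk pipeline (field positions assumed from the report format shown above) pulls out the TPS figure from a captured report:

```shell
# Extract the tps value from captured pgbench output.
report='latency average = 46.726 ms
tps = 214.013767 (without initial connection time)'

tps=$(printf '%s\n' "$report" | awk '/^tps/ {print $3}')
echo "$tps"   # 214.013767
```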
3. Run Custom SQL Scripts
You can benchmark with custom SQL transactions:
pgbench -f myscript.sql -c 10 -T 60 mydb
Where myscript.sql contains something like:
BEGIN;
SELECT * FROM pgbench_accounts WHERE aid = :aid;
UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
END;
Use :variable for substitution. Variables can be defined on the command line with -D (e.g. -D delta=100).
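Putting the pieces together, the script can be written to a file and :delta supplied via -D. The \set line is an assumption for illustration: it randomizes :aid per transaction, with the 1–1,000,000 range matching the -s 10 initialization above (the value 100 for :delta is arbitrary):

```shell
# Generate the custom script; \set draws a random aid for each transaction.
cat > myscript.sql <<'EOF'
\set aid random(1, 1000000)
BEGIN;
SELECT * FROM pgbench_accounts WHERE aid = :aid;
UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
END;
EOF

# :delta comes from the command line (requires a running server and an
# initialized database, so the invocation is shown commented out):
# pgbench -f myscript.sql -D delta=100 -c 10 -T 60 mydb

grep -c ':delta' myscript.sql   # 1
```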
If bulk reads or writes are performed, there is a risk that single-use pages will quickly evict useful pages from the buffer cache.
As a precaution, bulk operations use rather small buffer rings, and eviction is performed within their boundaries, without affecting other buffers.
A buffer ring of a particular size consists of an array of buffers that are used one after another. At first, the buffer ring is empty, and individual buffers join it one by one, after being selected from the buffer cache in the usual manner. Then eviction comes into play, but only within the ring limits.
Buffers added into a ring are not excluded from the buffer cache and can still be used by other operations. So if the buffer to be reused turns out to be pinned, or its usage count is higher than one, it will be simply detached from the ring and replaced by another buffer. PostgreSQL supports three eviction strategies.
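The reuse-or-detach rule just described can be sketched as a tiny shell function. This is a simplification of PostgreSQL's actual C logic, with hypothetical names; pinned means some backend holds a pin on the buffer, usage is its usage count:

```shell
# Sketch: what the ring does with its next candidate buffer.
ring_action() {
  local pinned=$1 usage=$2
  if [ "$pinned" -eq 1 ] || [ "$usage" -gt 1 ]; then
    echo detach   # buffer leaves the ring; another is taken from the cache
  else
    echo reuse    # buffer is evicted and reused within the ring
  fi
}

ring_action 0 1   # prints "reuse"
ring_action 1 1   # prints "detach" (pinned)
ring_action 0 3   # prints "detach" (usage count above one)
```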
| Strategy | Trigger | Buffer ring size |
|----------|---------|------------------|
| Bulk reads | Sequential scans of large tables whose size exceeds 1/4 of the buffer cache (default 128 MB = 16,384 pages) | 256 KB (32 pages) |
| Bulk writes | COPY FROM, CREATE TABLE AS SELECT, and CREATE MATERIALIZED VIEW, as well as the ALTER TABLE variants that rewrite the table | 16 MB (2048 pages) by default |
| Vacuuming | Full table scan performed without consulting the visibility map | 256 KB (32 pages) |
Buffer rings do not always prevent undesired eviction. If UPDATE or DELETE commands affect a lot of rows, the performed table scan applies the bulk reads strategy, but since the pages are constantly being modified, buffer rings virtually become useless.
Another example worth mentioning is storing oversized data in TOAST tables. In spite of a potentially large volume of data that has to be read, toasted values are always accessed via an index, so they bypass buffer rings.
Let’s take a closer look at the bulk reads strategy. For simplicity, we will create a table in such a way that each inserted row takes a whole page. By default, the buffer cache holds 16,384 pages of 8 KB each, so the table must occupy more than 4096 pages for the scan to use a buffer ring.
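These thresholds and ring sizes can be reproduced with shell arithmetic (assuming the default shared_buffers of 128 MB and 8 KB pages):

```shell
PAGE_KB=8
SHARED_BUFFERS_MB=128
CACHE_PAGES=$((SHARED_BUFFERS_MB * 1024 / PAGE_KB))
echo "$CACHE_PAGES"            # 16384 pages in the buffer cache
echo $((CACHE_PAGES / 4))      # 4096: a seq scan of anything larger uses a ring
echo $((256 / PAGE_KB))        # 32: bulk-read / vacuum ring size, in pages
echo $((16 * 1024 / PAGE_KB))  # 2048: default bulk-write ring size, in pages
```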