review https://github.com/lostlevels/blog/blob/main/content/posts/locks.md

It seems that I wasn't able to retrieve the content of the page you requested. The page might be restricted or not publicly accessible. Please ensure that the URL is correct and that the content is publicly available. If you have access to the content, you might consider sharing the relevant parts here for review.
try again

The article on GitHub discusses lock contention in PostgreSQL, particularly focusing on scenarios where frequently updated rows can lead to performance bottlenecks. Here's a summary and review of the content:
Summary
- Introduction: The article explains how a transaction that updates a row in PostgreSQL takes an exclusive row-level lock on the updated tuple (alongside a table-level RowExclusiveLock), which can lead to lock contention when multiple transactions try to update the same row concurrently. A two-session illustration of this follows the summary.
- Setup: A voting schema is used to simulate lock contention. Two scenarios are considered: one where votes increment a shared counter (leading to potential contention) and one where votes are recorded as individual rows without a counter, avoiding contention.
- Benchmark: The benchmark is run on an 8-core machine with 8 threads and 32 connections. The two scenarios are compared on throughput, latency, and WAL (Write-Ahead Log) generation.
- Results: The non-counter scenario achieves nearly 5x the throughput of the counter scenario but generates more WAL. The counter scenario shows higher latency due to lock contention.
- Discussion: The article weighs the trade-offs: counters are simple and provide immediate counts but can suffer lock contention, while recording individual rows avoids contention but requires aggregation to obtain counts.
- Potential improvements: Suggestions include batching counts or using a "counter of counters" approach to spread the contention. An example with "bins" is provided, showing improved throughput over the plain counter approach; a sketch of the idea also follows the summary.
- Conclusion: While counters are straightforward, they may not be suitable under heavy write loads due to lock contention. The choice of approach depends on the specific requirements and on measured performance.
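To make the contention mechanism concrete, here is a minimal two-session illustration of the counter problem described in the article (a sketch using the article's voting schema; exact table definitions appear in the configuration further below):

```sql
-- Session 1: takes an exclusive row-level lock on the counter row
BEGIN;
UPDATE vote_counter SET counter = counter + 1 WHERE vote_id = 1;
-- the lock is held until COMMIT / ROLLBACK

-- Session 2: the same UPDATE blocks here until session 1 finishes
UPDATE vote_counter SET counter = counter + 1 WHERE vote_id = 1;

-- Non-counter alternative: independent INSERTs do not block each other,
-- but obtaining a total requires aggregation
INSERT INTO person_vote (person_id, vote_id) VALUES (42, 1);
SELECT count(*) FROM person_vote WHERE vote_id = 1;
```

And a sketch of the "bins" idea mentioned under potential improvements (an illustration of the general technique, not the article's exact code; the choice of 8 bins is arbitrary):

```sql
-- Spread one logical counter across N rows ("bins")
CREATE TABLE vote_counter_bin (
    vote_id BIGINT NOT NULL,
    bin     INT    NOT NULL,
    counter BIGINT NOT NULL DEFAULT 0,
    PRIMARY KEY (vote_id, bin)
);
INSERT INTO vote_counter_bin (vote_id, bin)
SELECT 1, b FROM generate_series(0, 7) AS b;

-- Each writer picks a random bin, so concurrent increments rarely collide
UPDATE vote_counter_bin
SET counter = counter + 1
WHERE vote_id = 1 AND bin = floor(random() * 8)::int;

-- Reads aggregate the bins
SELECT sum(counter) FROM vote_counter_bin WHERE vote_id = 1;
```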
Review
- Clarity: The article is well structured and clearly explains the problem of lock contention and the scenarios tested.
- Practicality: It provides concrete examples and benchmarks, making it useful for database administrators and developers facing similar issues.
- Depth: The discussion of potential improvements and trade-offs offers valuable insight into optimizing write-heavy workloads.
- Suggestions: It would benefit from more detailed explanations of the SQL scripts used, and from covering additional techniques or tools for managing lock contention, such as the lock-monitoring query shown below.
Overall, the article is informative and strikes a good balance between theory and practice. It effectively highlights the challenges of lock contention in PostgreSQL and offers viable ways to mitigate them.
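As one example of such tooling, lock waits can be observed live while a benchmark runs, using only standard catalog views (this query is generic, not taken from the article):

```sql
-- Sessions currently waiting on a heavyweight lock, and who is blocking them
SELECT pid,
       pg_blocking_pids(pid) AS blocked_by,
       wait_event_type,
       wait_event,
       state,
       left(query, 60) AS query
FROM pg_stat_activity
WHERE wait_event_type = 'Lock';
```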
do you see the description of experiment? could we reproduce it with our tools?

Yes, the article describes an experiment that benchmarks two scenarios in PostgreSQL to measure lock contention and its performance impact. We can reproduce it with our tools by setting up a similar environment and running the described SQL scripts with pgbench.
Experiment Configuration
- Setup:
  - Create the tables vote, person, vote_counter, and person_vote.
  - Populate them with initial data.
- Scenarios:
  - Counter scenario: each vote increments a shared counter row.
  - Non-counter scenario: each vote is recorded as its own row, without a shared counter.
- Benchmark:
  - Use pgbench to simulate concurrent transactions.
  - Measure throughput, latency, and WAL generation.
Proposed JSON Configuration for Experiment
{
  "MODE": "dedicated",
  "SERVER_TYPE": "n2-standard-8",
  "POSTGRES_VERSION": "16",
  "VOLUME_SIZE": "100",
  "TEST_RUNS": {
    "01_counter_scenario": {
      "pre_sql": "CREATE TABLE vote (vote_id BIGINT PRIMARY KEY); CREATE TABLE person (person_id BIGINT PRIMARY KEY); CREATE TABLE vote_counter (vote_id BIGINT PRIMARY KEY REFERENCES vote(vote_id), counter BIGINT DEFAULT 0 NOT NULL); CREATE INDEX ON vote_counter(vote_id); CREATE TABLE person_vote (person_id BIGINT REFERENCES person(person_id) NOT NULL, vote_id BIGINT REFERENCES vote(vote_id) NOT NULL); CREATE INDEX ON person_vote(person_id, vote_id); INSERT INTO vote(vote_id) SELECT generate_series FROM generate_series(1, 4); INSERT INTO vote_counter(vote_id, counter) SELECT generate_series, 0 FROM generate_series(1, 4); INSERT INTO person(person_id) SELECT generate_series FROM generate_series(1, 10000);",
      "workload_pgbench": "pgbench -j 8 -T 240 -n -c 32 -f counter.sql bench"
    },
    "02_non_counter_scenario": {
      "pre_sql": "CREATE TABLE vote (vote_id BIGINT PRIMARY KEY); CREATE TABLE person (person_id BIGINT PRIMARY KEY); CREATE TABLE vote_counter (vote_id BIGINT PRIMARY KEY REFERENCES vote(vote_id), counter BIGINT DEFAULT 0 NOT NULL); CREATE INDEX ON vote_counter(vote_id); CREATE TABLE person_vote (person_id BIGINT REFERENCES person(person_id) NOT NULL, vote_id BIGINT REFERENCES vote(vote_id) NOT NULL); CREATE INDEX ON person_vote(person_id, vote_id); INSERT INTO vote(vote_id) SELECT generate_series FROM generate_series(1, 4); INSERT INTO vote_counter(vote_id, counter) SELECT generate_series, 0 FROM generate_series(1, 4); INSERT INTO person(person_id) SELECT generate_series FROM generate_series(1, 10000);",
      "workload_pgbench": "pgbench -j 8 -T 240 -n -c 32 -f non-counter.sql bench"
    }
  }
}
Would you like me to proceed with running this experiment?
those .sql have to be created, use pre_shell for that. Show JSON again and we'll decide to proceed or not

Here's the updated JSON configuration for the experiment, including creation of the necessary .sql files via pre_shell:
{
  "MODE": "dedicated",
  "SERVER_TYPE": "n2-standard-8",
  "POSTGRES_VERSION": "16",
  "VOLUME_SIZE": "100",
  "TEST_RUNS": {
    "01_counter_scenario": {
      "pre_sql": "CREATE TABLE vote (vote_id BIGINT PRIMARY KEY); CREATE TABLE person (person_id BIGINT PRIMARY KEY); CREATE TABLE vote_counter (vote_id BIGINT PRIMARY KEY REFERENCES vote(vote_id), counter BIGINT DEFAULT 0 NOT NULL); CREATE INDEX ON vote_counter(vote_id); CREATE TABLE person_vote (person_id BIGINT REFERENCES person(person_id) NOT NULL, vote_id BIGINT REFERENCES vote(vote_id) NOT NULL); CREATE INDEX ON person_vote(person_id, vote_id); INSERT INTO vote(vote_id) SELECT generate_series FROM generate_series(1, 4); INSERT INTO vote_counter(vote_id, counter) SELECT generate_series, 0 FROM generate_series(1, 4); INSERT INTO person(person_id) SELECT generate_series FROM generate_series(1, 10000);",
      "pre_shell": "echo \"WITH id(vote_id) AS (SELECT floor(1 + random() * 4)) UPDATE vote_counter SET counter = counter + 1 WHERE vote_id = (SELECT vote_id FROM id);\" > /tmp/counter.sql",
      "workload_pgbench": "pgbench -j 8 -T 240 -n -c 32 -f /tmp/counter.sql bench"
    },
    "02_non_counter_scenario": {
      "pre_sql": "CREATE TABLE vote (vote_id BIGINT PRIMARY KEY); CREATE TABLE person (person_id BIGINT PRIMARY KEY); CREATE TABLE vote_counter (vote_id BIGINT PRIMARY KEY REFERENCES vote(vote_id), counter BIGINT DEFAULT 0 NOT NULL); CREATE INDEX ON vote_counter(vote_id); CREATE TABLE person_vote (person_id BIGINT REFERENCES person(person_id) NOT NULL, vote_id BIGINT REFERENCES vote(vote_id) NOT NULL); CREATE INDEX ON person_vote(person_id, vote_id); INSERT INTO vote(vote_id) SELECT generate_series FROM generate_series(1, 4); INSERT INTO vote_counter(vote_id, counter) SELECT generate_series, 0 FROM generate_series(1, 4); INSERT INTO person(person_id) SELECT generate_series FROM generate_series(1, 10000);",
      "pre_shell": "echo \"INSERT INTO person_vote(vote_id, person_id) VALUES (floor(1 + random() * 4), floor(1 + random() * 10000));\" > /tmp/non-counter.sql",
      "workload_pgbench": "pgbench -j 8 -T 240 -n -c 32 -f /tmp/non-counter.sql bench"
    }
  }
}
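Two side notes on this configuration. First, CREATE INDEX ON vote_counter(vote_id) duplicates the index already created by the primary key, so it adds write overhead without benefit. Second, the random id could also be generated client-side with pgbench's built-in expression language; a minimal sketch of an alternative /tmp/counter.sql (not what the configuration above uses):

```sql
-- Alternative counter.sql using a pgbench variable (pgbench 9.6+ syntax)
\set vid random(1, 4)
UPDATE vote_counter SET counter = counter + 1 WHERE vote_id = :vid;
```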
Please review the configuration, and let me know if you'd like to proceed with running the experiment.
let's try it

The experiment has been initiated successfully. You can track its progress and view the results once completed at the following link: Pipeline on GitLab.
Here's the JSON configuration used for the experiment:
{
  "MODE": "dedicated",
  "SERVER_TYPE": "n2-standard-8",
  "POSTGRES_VERSION": "16",
  "VOLUME_SIZE": "100",
  "TEST_RUNS": {
    "01_counter_scenario": {
      "pre_sql": "CREATE TABLE vote (vote_id BIGINT PRIMARY KEY); CREATE TABLE person (person_id BIGINT PRIMARY KEY); CREATE TABLE vote_counter (vote_id BIGINT PRIMARY KEY REFERENCES vote(vote_id), counter BIGINT DEFAULT 0 NOT NULL); CREATE INDEX ON vote_counter(vote_id); CREATE TABLE person_vote (person_id BIGINT REFERENCES person(person_id) NOT NULL, vote_id BIGINT REFERENCES vote(vote_id) NOT NULL); CREATE INDEX ON person_vote(person_id, vote_id); INSERT INTO vote(vote_id) SELECT generate_series FROM generate_series(1, 4); INSERT INTO vote_counter(vote_id, counter) SELECT generate_series, 0 FROM generate_series(1, 4); INSERT INTO person(person_id) SELECT generate_series FROM generate_series(1, 10000);",
      "pre_shell": "echo \"WITH id(vote_id) AS (SELECT floor(1 + random() * 4)) UPDATE vote_counter SET counter = counter + 1 WHERE vote_id = (SELECT vote_id FROM id);\" > /tmp/counter.sql",
      "workload_pgbench": "pgbench -j 8 -T 240 -n -c 32 -f /tmp/counter.sql bench"
    },
    "02_non_counter_scenario": {
      "pre_sql": "CREATE TABLE vote (vote_id BIGINT PRIMARY KEY); CREATE TABLE person (person_id BIGINT PRIMARY KEY); CREATE TABLE vote_counter (vote_id BIGINT PRIMARY KEY REFERENCES vote(vote_id), counter BIGINT DEFAULT 0 NOT NULL); CREATE INDEX ON vote_counter(vote_id); CREATE TABLE person_vote (person_id BIGINT REFERENCES person(person_id) NOT NULL, vote_id BIGINT REFERENCES vote(vote_id) NOT NULL); CREATE INDEX ON person_vote(person_id, vote_id); INSERT INTO vote(vote_id) SELECT generate_series FROM generate_series(1, 4); INSERT INTO vote_counter(vote_id, counter) SELECT generate_series, 0 FROM generate_series(1, 4); INSERT INTO person(person_id) SELECT generate_series FROM generate_series(1, 10000);",
      "pre_shell": "echo \"INSERT INTO person_vote(vote_id, person_id) VALUES (floor(1 + random() * 4), floor(1 + random() * 10000));\" > /tmp/non-counter.sql",
      "workload_pgbench": "pgbench -j 8 -T 240 -n -c 32 -f /tmp/non-counter.sql bench"
    }
  }
}
Feel free to reach out if you have any questions or need further assistance!
2024-08-30 19:04:15 UTC [6204-1] 127.0.0.1(18054) postgres@bench FATAL: database "bench" does not exist

The error indicates that the database "bench" does not exist: pgbench was pointed at a database that was never created. Let's adjust the configuration to create the database before running the tests.
Here's the updated JSON configuration with the database creation step included. (One caveat: \c is a psql meta-command rather than SQL, so this pre_sql only works if the harness feeds it through psql; otherwise the database would need to be created in a separate step such as pre_shell, e.g. with createdb bench.)
{
  "MODE": "dedicated",
  "SERVER_TYPE": "n2-standard-8",
  "POSTGRES_VERSION": "16",
  "VOLUME_SIZE": "100",
  "TEST_RUNS": {
    "01_counter_scenario": {
      "pre_sql": "CREATE DATABASE bench; \\c bench; CREATE TABLE vote (vote_id BIGINT PRIMARY KEY); CREATE TABLE person (person_id BIGINT PRIMARY KEY); CREATE TABLE vote_counter (vote_id BIGINT PRIMARY KEY REFERENCES vote(vote_id), counter BIGINT DEFAULT 0 NOT NULL); CREATE INDEX ON vote_counter(vote_id); CREATE TABLE person_vote (person_id BIGINT REFERENCES person(person_id) NOT NULL, vote_id BIGINT REFERENCES vote(vote_id) NOT NULL); CREATE INDEX ON person_vote(person_id, vote_id); INSERT INTO vote(vote_id) SELECT generate_series FROM generate_series(1, 4); INSERT INTO vote_counter(vote_id, counter) SELECT generate_series, 0 FROM generate_series(1, 4); INSERT INTO person(person_id) SELECT generate_series FROM generate_series(1, 10000);",
      "pre_shell": "echo \"WITH id(vote_id) AS (SELECT floor(1 + random() * 4)) UPDATE vote_counter SET counter = counter + 1 WHERE vote_id = (SELECT vote_id FROM id);\" > /tmp/counter.sql",
      "workload_pgbench": "pgbench -j 8 -T 240 -n -c 32 -f /tmp/counter.sql bench"
    },
    "02_non_counter_scenario": {
      "pre_sql": "CREATE DATABASE bench; \\c bench; CREATE TABLE vote (vote_id BIGINT PRIMARY KEY); CREATE TABLE person (person_id BIGINT PRIMARY KEY); CREATE TABLE vote_counter (vote_id BIGINT PRIMARY KEY REFERENCES vote(vote_id), counter BIGINT DEFAULT 0 NOT NULL); CREATE INDEX ON vote_counter(vote_id); CREATE TABLE person_vote (person_id BIGINT REFERENCES person(person_id) NOT NULL, vote_id BIGINT REFERENCES vote(vote_id) NOT NULL); CREATE INDEX ON person_vote(person_id, vote_id); INSERT INTO vote(vote_id) SELECT generate_series FROM generate_series(1, 4); INSERT INTO vote_counter(vote_id, counter) SELECT generate_series, 0 FROM generate_series(1, 4); INSERT INTO person(person_id) SELECT generate_series FROM generate_series(1, 10000);",
      "pre_shell": "echo \"INSERT INTO person_vote(vote_id, person_id) VALUES (floor(1 + random() * 4), floor(1 + random() * 10000));\" > /tmp/non-counter.sql",
      "workload_pgbench": "pgbench -j 8 -T 240 -n -c 32 -f /tmp/non-counter.sql bench"
    }
  }
}
Would you like me to proceed with this updated configuration?
nonono
just use postgres

Let's adjust the configuration to use the default postgres database instead of creating a new one. Here's the updated JSON configuration:
{
  "MODE": "dedicated",
  "SERVER_TYPE": "n2-standard-8",
  "POSTGRES_VERSION": "16",
  "VOLUME_SIZE": "100",
  "TEST_RUNS": {
    "01_counter_scenario": {
      "pre_sql": "CREATE TABLE vote (vote_id BIGINT PRIMARY KEY); CREATE TABLE person (person_id BIGINT PRIMARY KEY); CREATE TABLE vote_counter (vote_id BIGINT PRIMARY KEY REFERENCES vote(vote_id), counter BIGINT DEFAULT 0 NOT NULL); CREATE INDEX ON vote_counter(vote_id); CREATE TABLE person_vote (person_id BIGINT REFERENCES person(person_id) NOT NULL, vote_id BIGINT REFERENCES vote(vote_id) NOT NULL); CREATE INDEX ON person_vote(person_id, vote_id); INSERT INTO vote(vote_id) SELECT generate_series FROM generate_series(1, 4); INSERT INTO vote_counter(vote_id, counter) SELECT generate_series, 0 FROM generate_series(1, 4); INSERT INTO person(person_id) SELECT generate_series FROM generate_series(1, 10000);",
      "pre_shell": "echo \"WITH id(vote_id) AS (SELECT floor(1 + random() * 4)) UPDATE vote_counter SET counter = counter + 1 WHERE vote_id = (SELECT vote_id FROM id);\" > /tmp/counter.sql",
      "workload_pgbench": "pgbench -j 8 -T 240 -n -c 32 -f /tmp/counter.sql postgres"
    },
    "02_non_counter_scenario": {
      "pre_sql": "CREATE TABLE vote (vote_id BIGINT PRIMARY KEY); CREATE TABLE person (person_id BIGINT PRIMARY KEY); CREATE TABLE vote_counter (vote_id BIGINT PRIMARY KEY REFERENCES vote(vote_id), counter BIGINT DEFAULT 0 NOT NULL); CREATE INDEX ON vote_counter(vote_id); CREATE TABLE person_vote (person_id BIGINT REFERENCES person(person_id) NOT NULL, vote_id BIGINT REFERENCES vote(vote_id) NOT NULL); CREATE INDEX ON person_vote(person_id, vote_id); INSERT INTO vote(vote_id) SELECT generate_series FROM generate_series(1, 4); INSERT INTO vote_counter(vote_id, counter) SELECT generate_series, 0 FROM generate_series(1, 4); INSERT INTO person(person_id) SELECT generate_series FROM generate_series(1, 10000);",
      "pre_shell": "echo \"INSERT INTO person_vote(vote_id, person_id) VALUES (floor(1 + random() * 4), floor(1 + random() * 10000));\" > /tmp/non-counter.sql",
      "workload_pgbench": "pgbench -j 8 -T 240 -n -c 32 -f /tmp/non-counter.sql postgres"
    }
  }
}
Let's proceed with this configuration. I'll initiate the experiment again. The experiment has been re-initiated using the postgres database. You can track its progress and view the results once completed at the following link: Pipeline on GitLab.
Here's the JSON configuration used for the experiment:
{
  "MODE": "dedicated",
  "SERVER_TYPE": "n2-standard-8",
  "POSTGRES_VERSION": "16",
  "VOLUME_SIZE": "100",
  "TEST_RUNS": {
    "01_counter_scenario": {
      "pre_sql": "CREATE TABLE vote (vote_id BIGINT PRIMARY KEY); CREATE TABLE person (person_id BIGINT PRIMARY KEY); CREATE TABLE vote_counter (vote_id BIGINT PRIMARY KEY REFERENCES vote(vote_id), counter BIGINT DEFAULT 0 NOT NULL); CREATE INDEX ON vote_counter(vote_id); CREATE TABLE person_vote (person_id BIGINT REFERENCES person(person_id) NOT NULL, vote_id BIGINT REFERENCES vote(vote_id) NOT NULL); CREATE INDEX ON person_vote(person_id, vote_id); INSERT INTO vote(vote_id) SELECT generate_series FROM generate_series(1, 4); INSERT INTO vote_counter(vote_id, counter) SELECT generate_series, 0 FROM generate_series(1, 4); INSERT INTO person(person_id) SELECT generate_series FROM generate_series(1, 10000);",
      "pre_shell": "echo \"WITH id(vote_id) AS (SELECT floor(1 + random() * 4)) UPDATE vote_counter SET counter = counter + 1 WHERE vote_id = (SELECT vote_id FROM id);\" > /tmp/counter.sql",
      "workload_pgbench": "pgbench -j 8 -T 240 -n -c 32 -f /tmp/counter.sql postgres"
    },
    "02_non_counter_scenario": {
      "pre_sql": "CREATE TABLE vote (vote_id BIGINT PRIMARY KEY); CREATE TABLE person (person_id BIGINT PRIMARY KEY); CREATE TABLE vote_counter (vote_id BIGINT PRIMARY KEY REFERENCES vote(vote_id), counter BIGINT DEFAULT 0 NOT NULL); CREATE INDEX ON vote_counter(vote_id); CREATE TABLE person_vote (person_id BIGINT REFERENCES person(person_id) NOT NULL, vote_id BIGINT REFERENCES vote(vote_id) NOT NULL); CREATE INDEX ON person_vote(person_id, vote_id); INSERT INTO vote(vote_id) SELECT generate_series FROM generate_series(1, 4); INSERT INTO vote_counter(vote_id, counter) SELECT generate_series, 0 FROM generate_series(1, 4); INSERT INTO person(person_id) SELECT generate_series FROM generate_series(1, 10000);",
      "pre_shell": "echo \"INSERT INTO person_vote(vote_id, person_id) VALUES (floor(1 + random() * 4), floor(1 + random() * 10000));\" > /tmp/non-counter.sql",
      "workload_pgbench": "pgbench -j 8 -T 240 -n -c 32 -f /tmp/non-counter.sql postgres"
    }
  }
}
Feel free to reach out if you have any questions or need further assistance!

The experiment with pipeline ID 1434118756 has finished, but there were some errors during the execution. Here's a detailed analysis of the errors encountered:
Errors Encountered
- Job: dedicated-test
  - Error: Connection to server at "localhost" (127.0.0.1), port 5432 failed: FATAL: database "bench" does not exist.
  - Error: Could not create connection for setup.
  - Error: Failed to execute the workload_pgbench command.
  - Error: An unexpected error occurred.
  - Error: Test stopped with error, exit code: 1.
  - Error: No files to upload.
- Job: dedicated-artifacts
  - Error: Connection to server at "localhost" (127.0.0.1), port 5432 failed: FATAL: database "bench" does not exist.
  - Error: Could not create connection for setup.
  - Error: Failed to execute the workload_pgbench command.
  - Error: An unexpected error occurred.
  - Error: Artifact directory for label "02_non_counter_scenario" was not found.
  - Warning: Retrying... context=artifacts-uploader error=request redirected.
Analysis
The primary issue is that the "bench" database did not exist, which caused the connection failures and the subsequent pgbench errors. This was addressed in the re-initiated experiment by using the default postgres database instead.
If you have any further questions or need additional assistance, please let me know!

The experiment with pipeline ID 1434130315 has finished, but there were some errors during the execution. Here's a detailed analysis of the errors encountered:
Errors Encountered
- Job: dedicated-test
  - Error: Relation "vote" already exists.
  - Error: Failed to execute SQL.
  - Error: An unexpected error occurred.
  - Error: Test stopped with error, exit code: 1.
  - Error: No files to upload.
- Job: dedicated-artifacts
  - Error: Relation "vote" already exists.
  - Error: Failed to execute SQL.
  - Error: An unexpected error occurred.
  - Error: Artifact directory for label "02_non_counter_scenario" was not found.
  - Warning: Retrying... context=artifacts-uploader error=request redirected.
Analysis
The primary issue is that the "vote" table already existed, which caused the SQL setup to fail: the setup script ran more than once without dropping existing tables.
To resolve this, the setup script should include DROP TABLE IF EXISTS statements before creating the tables, avoiding such conflicts.
If you have any further questions or need additional assistance, please let me know!
yes

To address the conflicts caused by existing tables, we can add DROP TABLE IF EXISTS statements to the SQL setup script. Any existing tables will then be removed before new ones are created, preventing the "relation already exists" error.
Here's the updated JSON configuration with the necessary changes:
{
  "MODE": "dedicated",
  "SERVER_TYPE": "n2-standard-8",
  "POSTGRES_VERSION": "16",
  "VOLUME_SIZE": "100",
  "TEST_RUNS": {
    "01_counter_scenario": {
      "pre_sql": "DROP TABLE IF EXISTS vote_counter, person_vote, vote, person; CREATE TABLE vote (vote_id BIGINT PRIMARY KEY); CREATE TABLE person (person_id BIGINT PRIMARY KEY); CREATE TABLE vote_counter (vote_id BIGINT PRIMARY KEY REFERENCES vote(vote_id), counter BIGINT DEFAULT 0 NOT NULL); CREATE INDEX ON vote_counter(vote_id); CREATE TABLE person_vote (person_id BIGINT REFERENCES person(person_id) NOT NULL, vote_id BIGINT REFERENCES vote(vote_id) NOT NULL); CREATE INDEX ON person_vote(person_id, vote_id); INSERT INTO vote(vote_id) SELECT generate_series FROM generate_series(1, 4); INSERT INTO vote_counter(vote_id, counter) SELECT generate_series, 0 FROM generate_series(1, 4); INSERT INTO person(person_id) SELECT generate_series FROM generate_series(1, 10000);",
      "pre_shell": "echo \"WITH id(vote_id) AS (SELECT floor(1 + random() * 4)) UPDATE vote_counter SET counter = counter + 1 WHERE vote_id = (SELECT vote_id FROM id);\" > /tmp/counter.sql",
      "workload_pgbench": "pgbench -j 8 -T 240 -n -c 32 -f /tmp/counter.sql postgres"
    },
    "02_non_counter_scenario": {
      "pre_sql": "DROP TABLE IF EXISTS vote_counter, person_vote, vote, person; CREATE TABLE vote (vote_id BIGINT PRIMARY KEY); CREATE TABLE person (person_id BIGINT PRIMARY KEY); CREATE TABLE vote_counter (vote_id BIGINT PRIMARY KEY REFERENCES vote(vote_id), counter BIGINT DEFAULT 0 NOT NULL); CREATE INDEX ON vote_counter(vote_id); CREATE TABLE person_vote (person_id BIGINT REFERENCES person(person_id) NOT NULL, vote_id BIGINT REFERENCES vote(vote_id) NOT NULL); CREATE INDEX ON person_vote(person_id, vote_id); INSERT INTO vote(vote_id) SELECT generate_series FROM generate_series(1, 4); INSERT INTO vote_counter(vote_id, counter) SELECT generate_series, 0 FROM generate_series(1, 4); INSERT INTO person(person_id) SELECT generate_series FROM generate_series(1, 10000);",
      "pre_shell": "echo \"INSERT INTO person_vote(vote_id, person_id) VALUES (floor(1 + random() * 4), floor(1 + random() * 10000));\" > /tmp/non-counter.sql",
      "workload_pgbench": "pgbench -j 8 -T 240 -n -c 32 -f /tmp/non-counter.sql postgres"
    }
  }
}
Would you like me to proceed with running this updated experiment?
yes!

The experiment has been re-initiated with the updated configuration to handle existing tables. You can track its progress and view the results once completed at the following link: Pipeline on GitLab.
Here's the JSON configuration used for the experiment:
{
  "MODE": "dedicated",
  "SERVER_TYPE": "n2-standard-8",
  "POSTGRES_VERSION": "16",
  "VOLUME_SIZE": "100",
  "TEST_RUNS": {
    "01_counter_scenario": {
      "pre_sql": "DROP TABLE IF EXISTS vote_counter, person_vote, vote, person; CREATE TABLE vote (vote_id BIGINT PRIMARY KEY); CREATE TABLE person (person_id BIGINT PRIMARY KEY); CREATE TABLE vote_counter (vote_id BIGINT PRIMARY KEY REFERENCES vote(vote_id), counter BIGINT DEFAULT 0 NOT NULL); CREATE INDEX ON vote_counter(vote_id); CREATE TABLE person_vote (person_id BIGINT REFERENCES person(person_id) NOT NULL, vote_id BIGINT REFERENCES vote(vote_id) NOT NULL); CREATE INDEX ON person_vote(person_id, vote_id); INSERT INTO vote(vote_id) SELECT generate_series FROM generate_series(1, 4); INSERT INTO vote_counter(vote_id, counter) SELECT generate_series, 0 FROM generate_series(1, 4); INSERT INTO person(person_id) SELECT generate_series FROM generate_series(1, 10000);",
      "pre_shell": "echo \"WITH id(vote_id) AS (SELECT floor(1 + random() * 4)) UPDATE vote_counter SET counter = counter + 1 WHERE vote_id = (SELECT vote_id FROM id);\" > /tmp/counter.sql",
      "workload_pgbench": "pgbench -j 8 -T 240 -n -c 32 -f /tmp/counter.sql postgres"
    },
    "02_non_counter_scenario": {
      "pre_sql": "DROP TABLE IF EXISTS vote_counter, person_vote, vote, person; CREATE TABLE vote (vote_id BIGINT PRIMARY KEY); CREATE TABLE person (person_id BIGINT PRIMARY KEY); CREATE TABLE vote_counter (vote_id BIGINT PRIMARY KEY REFERENCES vote(vote_id), counter BIGINT DEFAULT 0 NOT NULL); CREATE INDEX ON vote_counter(vote_id); CREATE TABLE person_vote (person_id BIGINT REFERENCES person(person_id) NOT NULL, vote_id BIGINT REFERENCES vote(vote_id) NOT NULL); CREATE INDEX ON person_vote(person_id, vote_id); INSERT INTO vote(vote_id) SELECT generate_series FROM generate_series(1, 4); INSERT INTO vote_counter(vote_id, counter) SELECT generate_series, 0 FROM generate_series(1, 4); INSERT INTO person(person_id) SELECT generate_series FROM generate_series(1, 10000);",
      "pre_shell": "echo \"INSERT INTO person_vote(vote_id, person_id) VALUES (floor(1 + random() * 4), floor(1 + random() * 10000));\" > /tmp/non-counter.sql",
      "workload_pgbench": "pgbench -j 8 -T 240 -n -c 32 -f /tmp/non-counter.sql postgres"
    }
  }
}
Feel free to reach out if you have any questions or need further assistance!

The experiment with pipeline ID 1434210529 has completed successfully. Here are the results and analysis:
Results
- Job: dedicated-test
  - Status: Success
- Job: dedicated-artifacts
  - Status: Success
  - Output:
    - 01_counter_scenario: Transactions per second (TPS) = 3982.138315
    - 02_non_counter_scenario: Transactions per second (TPS) = 21763.850444
- Job: dedicated-cleanup
  - Status: Success
Analysis
- The non-counter scenario achieved significantly higher TPS than the counter scenario, in line with the expectation that avoiding a shared counter reduces lock contention and improves throughput.
- The experiment reproduced the performance gap described in the article: the non-counter scenario delivered over 5 times the TPS of the counter scenario (21763.85 / 3982.14 ≈ 5.5).
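One metric from the article that these runs did not capture is WAL volume. It can be measured around a run with standard functions (available since PostgreSQL 10); how the before/after steps would be wired into the harness is an assumption here:

```sql
-- In psql, before the workload: remember the current WAL insert position
SELECT pg_current_wal_lsn() AS start_lsn \gset

-- ... run the pgbench workload ...

-- After the workload: WAL bytes generated since start_lsn
SELECT pg_size_pretty(
    pg_wal_lsn_diff(pg_current_wal_lsn(), :'start_lsn')
);
```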
If you have any further questions or need additional insights, feel free to ask!
visualize

Here is a visualization of the experiment results showing the Transactions per Second (TPS) for both scenarios:
The chart clearly illustrates the significant performance gap between the two scenarios. The same numbers can also be rendered as a quick text chart, sketched below. If you have any further questions or need additional insights, feel free to ask!
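For a text-only rendering of the same numbers directly in psql (values hard-coded from the reported results):

```sql
-- One '#' per ~1000 TPS; values hard-coded from the results above
SELECT scenario,
       tps,
       repeat('#', (tps / 1000)::int) AS bar
FROM (VALUES ('counter', 3982.14),
             ('non_counter', 21763.85)) AS t(scenario, tps);
```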
report numbers + visualize

Experiment Results
- Counter scenario: Transactions per Second (TPS) = 3982.14
- Non-counter scenario: Transactions per Second (TPS) = 21763.85
The non-counter scenario achieved roughly 5.5x the TPS of the counter scenario, demonstrating the performance benefit of avoiding a shared counter under concurrent writes.
Visualization
Here is a bar chart visualizing the TPS for both scenarios:
If you have any further questions or need additional insights, feel free to ask!