Use BIGINT in Postgres
This post examines a common database design decision involving the choice of using BIGINT versus INT data types. You may already know that the BIGINT data type uses twice the storage on disk (8 bytes per value) compared to the INT data type (4 bytes per value). Knowing this, a common decision is to use INT wherever possible, only resorting to BIGINT when it is obvious* that the column will store values greater than 2.147 billion (the max of INT).
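If you want to verify those two numbers yourself, a quick check along these lines works in psql; nothing here is specific to any particular schema:

```sql
-- Per-value storage: INT is 4 bytes, BIGINT is 8 bytes
SELECT pg_column_size(1::INT) AS int_bytes,        -- 4
       pg_column_size(1::BIGINT) AS bigint_bytes;  -- 8

-- INT tops out at 2147483647 (~2.147 billion); one more overflows
SELECT 2147483647::INT;    -- ok
--SELECT 2147483648::INT;  -- ERROR:  integer out of range
```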
That's what I did too, until 2-3 years ago! I started changing my default mindset to using BIGINT over INT, reversing my long-held habit. This post explains why I default to using BIGINT and examines the performance impacts of that decision.
TLDR;
As I conclude at the end:
The tests I ran here show that a production-scale database with properly sized hardware can handle that slight overhead with no problem.
Why default to BIGINT?
The main reason to default to BIGINT is to avoid INT to BIGINT migrations. The need for an INT to BIGINT migration comes up at the least opportune time and the task is time consuming. This type of migration typically involves at least one column used as a PRIMARY KEY, one that is often referenced as a FOREIGN KEY in other tables that must also be migrated.
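To make that concrete, here is a simplified sketch of such a migration using hypothetical orders and order_items tables; real migrations on busy databases are usually done in a more gradual, low-downtime fashion (new column, backfill, swap) rather than a single blocking ALTER per table:

```sql
-- Hypothetical schema: orders.order_id is an INT PRIMARY KEY and
-- order_items.order_id is an INT FOREIGN KEY referencing it.
-- Each ALTER rewrites the entire table under an ACCESS EXCLUSIVE lock,
-- blocking reads and writes for the duration.
ALTER TABLE order_items ALTER COLUMN order_id TYPE BIGINT;
ALTER TABLE orders ALTER COLUMN order_id TYPE BIGINT;
```

On large, busy tables that lock time alone is enough to force the work into a maintenance window, which is exactly the "least opportune time" problem.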
In the spirit of defensive database design, BIGINT is the safest choice. Remember the *obvious part mentioned above? Planning and estimating is a difficult topic and people (myself included) get it wrong all the time! Yes, there is overhead for using BIGINT, but I believe the overhead associated with the extra 4 bytes is trivial for the majority of production databases.
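As a rough back-of-the-envelope check, the extra raw column data for even 100 million rows is only a few hundred megabytes (ignoring alignment padding and index sizes):

```sql
-- 100 million values * 4 extra bytes each
SELECT pg_size_pretty(100000000::BIGINT * 4);  -- 381 MB
```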
OpenStreetMap to PostGIS is getting lighter
If you have ever wanted OpenStreetMap data in Postgres/PostGIS, you are probably familiar with the osm2pgsql tool. Lately I have been writing about the osm2pgsql developments with the new Flex output and how it is enabling improved data quality. This post changes focus away from the flex output and examines the performance of the osm2pgsql load itself.
One challenge with osm2pgsql over the years has been that generic recommendations are difficult to make. The safest recommendation for nearly any combination of hardware and source data size was to use osm2pgsql --slim --drop to put most of the intermediate data into Postgres instead of relying directly on RAM, which it otherwise needs a lot of. The offsetting cost of that choice is writing all of that intermediate data into Postgres (only to be deleted later), in terms of both disk usage and I/O performance.
A few days ago, a pull request from Jochen Topf to create a new RAM middle caught my eye. The text that piqued my interest (emphasis mine):
When not using two-stage processing the memory requirements are much much smaller than with the old ram middle. Rule of thumb is, you'll need about 1GB plus 2.5 times the size of the PBF file as memory. This makes it possible to import even continent-sized data on reasonably-sized machines.
Wait... what?! Is this for real??
(Webinar) OpenStreetMap to PostGIS: Easier and Better!
This page has the resources and recording for the OpenStreetMap to PostGIS: Easier and Better! webinar from Wednesday March 31, 2021.
Downloads for session
Scripts used for the demo:
Round Two: Partitioning OpenStreetMap
A few weeks ago I decided to seriously consider Postgres' declarative table partitioning for our OpenStreetMap data. Once the decision was made to investigate this option, I outlined our use case with requirements to keep multiple versions of OpenStreetMap data over time. That process helped draft my initial plan for how to create and manage the partitioned data. When I put the initial code to the test I found a snag and adjusted the plan.
This post shows a working example of how to partition OpenStreetMap data loaded using PgOSM-Flex.
TLDR
Spoiler alert!
It works, I love it! I am moving forward with the plan outlined in this post. Some highlights from testing with Colorado sized data:
- Bulk import generates 17% less WAL
- Bulk delete generates 99.8% less WAL
- Simple aggregate query runs 75% faster
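To give a feel for why the bulk delete number above is so dramatic, here is a rough sketch of a list-partitioned layout; the table and column names here are mine for illustration, not the actual PgOSM-Flex schema:

```sql
CREATE EXTENSION IF NOT EXISTS postgis;
CREATE SCHEMA IF NOT EXISTS osm;

-- Parent table partitioned by a snapshot/partition ID
CREATE TABLE osm.road_line (
    osm_id BIGINT NOT NULL,
    partition_id BIGINT NOT NULL,
    name TEXT,
    geom GEOMETRY(LINESTRING, 3857)
) PARTITION BY LIST (partition_id);

-- One partition per loaded snapshot
CREATE TABLE osm.road_line_p1
    PARTITION OF osm.road_line FOR VALUES IN (1);

-- Removing a snapshot drops the whole partition: a metadata change plus a
-- file removal, instead of a DELETE that writes WAL for every row.
DROP TABLE osm.road_line_p1;
```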
First Review of Partitioning OpenStreetMap
My previous two posts set the stage to evaluate declarative Postgres partitioning for OpenStreetMap data. This post covers what I found when I tested my plan and outlines my next steps. The goal with this series is to determine if partitioning is a path worth going down, or if the additional complexity outweighs any benefits. The first post on partitioning outlined my use case and why I thought partitioning would be a potential benefit. The maintenance aspects of partitioning are my #1 hope for improvement, with easy and fast loading and removal of entire data sets being a big deal for me.
The second post detailed my approach to partitioning to allow me to partition based on date and region. In that post I even bragged that a clever workaround was a suitable solution.
"No big deal, creating the
osmp.pgosm_flex_partition
table gives eachosm_date
+region
a single ID to use to define list partitions." -- Arrogant MeRead on to see where that assumption fell apart and my planned next steps.
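Roughly, the idea in that quote looks something like the following sketch. The osmp.pgosm_flex_partition name comes from the quote; the column names and everything else are my guess at the shape of it:

```sql
CREATE SCHEMA IF NOT EXISTS osmp;

-- One row per osm_date + region combination; the generated ID is the value
-- used in FOR VALUES IN (...) when defining each list partition.
CREATE TABLE osmp.pgosm_flex_partition (
    partition_id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    osm_date DATE NOT NULL,
    region TEXT NOT NULL,
    UNIQUE (osm_date, region)
);

INSERT INTO osmp.pgosm_flex_partition (osm_date, region)
    VALUES ('2021-01-01', 'north-america')
    RETURNING partition_id;  -- e.g. 1, used as the list-partition value
```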
I was hoping to have made a "Go / No-Go" decision by this point... I am currently at a solid "Probably!"
Load data
For testing I simulated Colorado data being loaded once per month on the 1st of each month and North America once per year on January 1. This was conceptually easier to implement and test than trying to capture exactly what I described in my initial post. This approach resulted in 17 snapshots of OpenStreetMap data being loaded: 15 of Colorado and two of North America. I loaded this data twice, once using the planned partitioned setup and once using a simple stacked table, to compare performance between the two.