Permanent identifiers should not carry data. This is like the cardinal sin of data management. You always run into situations where the thing you thought "surely this never changes, so it's safe to squeeze into the ID to save a lookup" changes anyway. Then people suddenly find out they have a new gender identity, and they need a new final digit in their ID numbers too.
Even if nothing changes, you can run into trouble. Norwegian PNs have your birth date (in DDMMYY format) as the first six digits. Surely that doesn't change, right? Well, wrong: although the date doesn't change, your knowledge of it might. Immigrants who didn't know their exact date of birth got assigned 1 January by default... And then people with actual birthdays on 1 January got told, "sorry, you can't have that as your birth date, we've run out of numbers in that series!"
Librarians in the analog age can be forgiven for cramming data into their identifiers, to save a lookup. When the lookup is in a physical card catalog, that's somewhat understandable (although you bet they could run into trouble over it too). But when you have a powerful database at your fingertips, use it! Don't make decisions you will regret just to shave off a couple of milliseconds!
> Norwegian PNs have your birth date (in DDMMYY format) as the first six digits. Surely that doesn't change, right? Well, wrong: although the date doesn't change, your knowledge of it might. Immigrants who didn't know their exact date of birth got assigned 1 January by default... And then people with actual birthdays on 1 January got told, "sorry, you can't have that as your birth date, we've run out of numbers in that series!"
To me, what your example really shows is the problem with incorrect default values, not a problem with encoding data into a key per se. If they'd chosen a non-date for unknown values, maybe 00 or 99 for day or month components, then the issue you described would disappear.
But in any case, the intention of encoding a timestamp into a UUID isn't to carry any implied meaning. It's to guarantee uniqueness, with the side effect that IDs are more or less monotonically increasing. Whether this is actually desirable depends on your application, but generally, if the application is an indexed key for insertion into a database, it's usually more useful for performance than a fully random ID, as it avoids rewriting lots of leaf nodes of B-trees. If you insert a load of such keys, they form a cluster on one side of the tree, which can then rebalance with only the top levels needing to be rewritten.
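For illustration, here's a minimal sketch of how a v7-style ID packs a millisecond timestamp into the high bits (the field layout follows RFC 9562, but treat this as an approximation rather than a conformant implementation):

```python
import os
import time
import uuid

def uuid7_like() -> uuid.UUID:
    """Rough v7 layout: 48-bit unix-ms timestamp, then version,
    variant, and random bits (RFC 9562-ish, not conformant)."""
    ms = time.time_ns() // 1_000_000
    value = (ms & 0xFFFFFFFFFFFF) << 80             # timestamp in the top 48 bits
    value |= int.from_bytes(os.urandom(10), "big")  # 80 random low bits
    value = (value & ~(0xF << 76)) | (0x7 << 76)    # version = 7
    value = (value & ~(0x3 << 62)) | (0x2 << 62)    # variant = 0b10
    return uuid.UUID(int=value)

# Later IDs compare greater, so B-tree inserts land on one edge of
# the tree. Ties within a millisecond are only "more or less" ordered.
print(uuid7_like() < uuid7_like())  # almost always True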
> To me, what your example really shows is the problem with incorrect default values, not a problem with encoding data into a key per se. If they'd chosen a non-date for unknown values, maybe 00 or 99 for day or month components, then the issue you described would disappear.
You still have that problem for genuine birthdays (a popular date can still run out of numbers), and also the problem of needing to change IDs to correct birth dates.
And then you have to enter/handle a non-date through all systems? How do you know if this non-dated person is over the age of majority? Eligible for a pension?
Maybe the answer is to evenly spread the defaults over 365 days.
If you don't know their birthday, you can presumably never answer that question in any case.
If you only know the birth year and keyed 99 as the month for unknown, then your algorithm would determine they reached a given age at the start of the year after it was actually true, which I guess would be what you want for legal compliance.
If you don't even know if the birth year is correct, then the correct process depends on policy. Maybe they choose any year, maybe they choose the oldest/youngest year they might be, maybe they just encode that as 0000/9999.
Again, if you don't know the birth year of someone, you would have no way of knowing their age. I'm not sure that means that the general policy of putting a birthday into their ID number is flawed.
Many governments re-issue national IDs to the same person with different numbers, which is far less problematic than the many governments who choose to issue the same national ID (looking at you USA, with your SSN) to multiple individuals. It doesn't seem like a massive imposition on a person, who was originally issued an ID without their birthday being known, to be re-issued a new ID when their birthday was ascertained. Perhaps even give them a choice: keep the old one knowing it will cause problems, or take the new one instead and have the responsibility of telling people their number has changed.
Presumably the governments that choose to embed the date into a national ID number do so because it's more useful for their purposes to do so than just assigning everyone a random number.
In either the AAA or BB component there is something about the gender.
But it does mean that there is a limit on the number of people born per day of a certain gender.
But for a given year, using a moniker will only delay the inevitable. Sure, there are more numbers, but they're still limited, as there are SOME parts that need to reflect reality: year, gender (if that's still the case?), etc.
BB is a mod-97 checksum. One of the A digits in AAA encodes your gender in an even/odd fashion; I forget whether it's the first or the last A doing that.
MM or DD can be 00 if unknown. Also MM has +20 or +40 in some cases.
If you know someone's birth date and gender, the INSZ is almost certainly 1 in 500 numbers, with a heavy skew to the lower AAA. Luckily, you can't do much damage with someone's number, unlike a US SSN (but I'd still treat it as confidential).
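If I remember the scheme right (treat this as an assumption, not gospel), the BB check digits are 97 minus the rest of the number mod 97, with a leading 2 prepended for people born from 2000 on:

```python
def insz_check_digits(first9: int, born_2000_or_later: bool) -> int:
    """BB check digits as I understand the Belgian scheme (an
    assumption on my part): 97 minus (the 9 leading digits mod 97),
    with a '2' prepended for people born in 2000 or later."""
    base = first9 + 2_000_000_000 if born_2000_or_later else first9
    return 97 - (base % 97)

# e.g. YYMMDD=850115, AAA=123 -> first nine digits are 850115123
print(insz_check_digits(850115123, born_2000_or_later=False))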
I've worked on a system where ULIDs (not UUIDv7, but similar) were used with a cursor to fetch data in chronological order and then—surprise!—one day records had to be backdated, meaning that either the IDs for those records had to be counterfeited (potentially violating invariants elsewhere) or the fetching had to be made smarter.
You can choose to never make use of that property. But it's tempting.
I made a service using something like a 64-bit wide ULID, but there was never a presumption that data would be inserted or updated earlier than the most recent record.
If the domain is modeling something like external events (in my case), and that external timestamp is packed into your primary key, and you support receiving events out of chronological order, then it just follows that you might insert stuff earlier than your latest record.
You're gonna have problems "backdating" if you mix up time of insertion with when the event you model actually occurred. Like if you treat those as the same thing when they aren't.
> You're not going to try and extract a timestamp from a uuid.
I totally used uuidv7s as "inserted at" in a small project and I had methods to find records created between two timestamps that literally converted timestamps to uuidv7 values so I could do "WHERE id BETWEEN a AND b"
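A sketch of what that conversion can look like (assuming v7-layout IDs; the helper names are made up). The version/variant bits are deliberately left at the extremes so the bounds bracket every real v7 value for that millisecond:

```python
import uuid

def uuid7_lower_bound(unix_ms: int) -> uuid.UUID:
    """Smallest 128-bit value whose top 48 bits are this timestamp."""
    return uuid.UUID(int=(unix_ms & 0xFFFFFFFFFFFF) << 80)

def uuid7_upper_bound(unix_ms: int) -> uuid.UUID:
    """Largest 128-bit value whose top 48 bits are this timestamp."""
    return uuid.UUID(int=((unix_ms & 0xFFFFFFFFFFFF) << 80) | ((1 << 80) - 1))

# ... WHERE id BETWEEN :a AND :b
a = uuid7_lower_bound(1_700_000_000_000)  # window start
b = uuid7_upper_bound(1_700_003_600_000)  # one hour later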
> You're not going to try and extract a timestamp from a uuid.
What? The first 48 bits of a UUIDv7 are a Unix timestamp (in milliseconds).
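Pulling it back out is nearly a one-liner (a sketch, assuming the ID really is v7; the function name is invented):

```python
import datetime
import uuid

def uuid7_created_at(u: uuid.UUID) -> datetime.datetime:
    """Read the 48-bit unix-millisecond prefix back out of a v7 UUID."""
    ms = u.int >> 80
    return datetime.datetime.fromtimestamp(ms / 1000, tz=datetime.timezone.utc)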
Whether or not this is a meaningful problem or a benefit to any particular use of UUIDs requires thinking about it; in some cases it’s not to be taken lightly and in others it doesn’t matter at all.
I see what you’re getting at, that ignoring the timestamp aspect makes them “just better UUIDs,” but this ignores security implications and the temptation to partition by high bits (timestamp).
Nobody forces you to use a real Unix timestamp. BTW the original Unix timestamp is 32 bits (expiring in 2038), and now everyone is switching to 64-bit time_t. What 48 bits?
All you need is a guaranteed non-decreasing 48-bit number. A clock is one way to generate it, but I don't see why a UUIDv7 would become invalid if your clock is biased, runs too fast, too slow, or whatever. I would not count on the first 48 bits being a "real" timestamp.
Besides the UUIDv7 specification, that is? Otherwise you have some arbitrary kind of UUID.
> I would not count on the first 48 bits being a "real" timestamp.
I agree; this is the existential hazard under discussion which comes from encoding something that might or might not be data into an opaque identifier.
I personally don't agree with the grandparent post's dogmatic position that extraneous data should _not_ be incorporated into primary key identifiers, but I also disagree that "just use UUIDv7 and treat UUIDs as opaque" is a fully workable solution either.
I mean, any 32-bit unsigned integer is a valid Unix timestamp up until 19 January 2038, and, by extension, any u64 is, too, for far longer time.
The only promise of Unix timestamps is that they never go back, always increase. This is a property of a sequence of UUIDs, not any particular instance. At most, one might argue that an "utterly valid" UUIDv7 should not contain a timestamp from far future. But I don't see why it can't be any time in the past, as long as the timestamp part does not decrease.
The timestamp aspect may be a part of an additional interface agreement: e.g. "we guarantee that this value is UUIDv7 with the timestamp in UTC, no more than a second off". But I assume that most sane engineers won't offer such a guarantee. The useful guarantee is the non-decreasing nature of the prefix, which allows for sorting.
Fantastic real life example. Italian PNs also carry the gender, which is something you can change surgically, and you'll eventually run into the issue when operating at scale.
I don't agree with the absolute statement, though. Permanent identifiers should not generally carry data. There are situations where you want a way to reconcile, or you have space or speed constraints, so you may accept the trade-off: md5 your data and store it in a primary index as a UUID. Your index will fragment and thus you will vacuum, but life will still be good overall.
Right, because it has. The change in gender identity (or in choosing to make said identity more public) has already taken place, and the surgery seems to affirm that.
That is only true if you're using an extremely idiosyncratic definition of gender. As far as 95% of English speakers are concerned, gender is defined by the body you possess.
Does that mean hundreds of years of English-speakers referring to sailing ship as "she" were all part of a conspiracy to hide that ships have jiggly bits? :p
This is actually a very deep and interesting topic.
Stripping information from an identifier disconnects a piece of data from the real world which means we no longer can match them. But such connection is the sole purpose of keeping the data in the first place. So, what happens next is that the real world tries to adjust and the "data-less" identifier becomes a real world artifact. The situation becomes the same but worse (eg. you don't exist if you don't remember your social security id). In extreme cases people are tattooed with their numbers.
The solution is not to come up with yet another artificial identifier but to come up with better means of identification taking into account the fact that things change.
> Stripping information from an identifier disconnects a piece of data from the real world which means we no longer can match them. But such connection is the sole purpose of keeping the data in the first place.
The identifier is still connected to the user's data, just through the appropriate other fields in the table as opposed to embedded into the identifier itself.
> So, what happens next is that the real world tries to adjust and the "data-less" identifier becomes a real world artifact. The situation becomes the same but worse (eg. you don't exist if you don't remember your social security id). In extreme cases people are tattooed with their numbers.
Using a random UUID as primary key does not mean users have to memorize that UUID. In fact in most cases I don't think there's much reason for it to even be exposed to the user at all.
You can still look up their data from their current email or phone number, for instance. Indexes are not limited to the primary key.
> The solution is not to come up with yet another artificial identifier but to come up with better means of identification taking into account the fact that things change.
A fully random primary key takes into account that things change - since it's not embedding any real-world information. That said I also don't think there's much issue with embedding creation time in the UUID for performance reasons, as the article is suggesting.
> Using a random UUID as primary key does not mean users have to memorize that UUID. In fact in most cases I don't think there's much reason for it to even be exposed to the user at all.
So what is such an identifier for? Is it only for some technical purposes (like replication etc.)?
Why bother with UUIDs at all then for internal identifiers? A sequence number should be enough.
"Internal" is a blurry boundary, though - you pick integer sequence numbers and then years on an API gets bolted on to your purely internal database and now your system is vulnerable to enumeration attacks. Does a vendor system where you reference some of your internal data count as "internal"? Is UID 1 the system user that was originally used to provision the system? Better try and attack that one specifically... the list goes on.
UUIDs or other similarly randomized IDs are useful because they don't include any ordering information or imply anything about significance, which is a very safe default despite the performance hits.
There certainly are reasons to avoid them and the article we're commenting on names some good ones, at scale. But I'd argue that if you have those problems you likely have the resources and experience to mitigate the risks, and that true randomly-derived IDs are a safer default for most new systems if you don't have one of the very specific reasons to avoid them.
Internal means "not exposed outside some boundary". For most people, this boundary encompasses something larger than a single database, and this boundary can change.
UUIDs are good for creating entries concurrently where coordinating between distributed systems may be difficult.
May also be that you don't want to leak information like how many orders are being made, as could be inferred from a `/fetch_order?id=123` API with sequential IDs.
Sequential primary keys are still commonly used though - it's a scenario-dependent trade-off.
> > Using a random UUID as primary key does not mean users have to memorize that UUID. [...]
> So what is such an identifier for? [...] Why bother with UUID at all then for internal identifiers?
The context, that you're questioning what they're useful for if not for use by the user, suggests that "internal" means the complement. That is, IDs used by your company and software, and maybe even API calls the website makes, but not anything the user has to know.
Otherwise, if "internal" was intended to mean something stricter (only used by a single non-distributed database, not accessed by any applications using the database, and never will be in the future), then my response is just that many IDs are neither internal in this sense nor intended to be memorized/saved by the user.
> The solution is not to come up with yet another artificial identifier but to come up with better means of identification taking into account the fact that things change.
I think artificial and data-less identifiers are the better means of identification that takes into account that things change. They don't have to be the identifier you present to the world, but having them is very useful.
E.g. phone numbers are semi-common identifiers now, but phone numbers change owners for reasons outside of your control. If you use them as an internal identifier, changing them between accounts gets very messy because now you don't have an identifier for the person who used to have that phone number.
It's much cleaner and easier to adapt if each person gets an internal context-less identifier and you use a mapping to convert from their external ID/phone number to the internal ID. The old account still has an identifier, there's just no external identifier that translates to it. Likewise, if you have to change your identifier scheme, you can have multiple external IDs that translate to the same internal ID (i.e. you can resolve both their old ID and their new ID to the same internal ID without insanity in the schema).
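A minimal sketch of that shape (table and column names are invented; sqlite is just a stand-in for whatever database you use):

```python
import sqlite3
import uuid

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE people (internal_id TEXT PRIMARY KEY, name TEXT);
    -- external identifiers resolve to internal ones; many-to-one is fine
    CREATE TABLE external_ids (
        kind        TEXT NOT NULL,    -- 'phone', 'email', ...
        value       TEXT NOT NULL,
        internal_id TEXT NOT NULL REFERENCES people(internal_id),
        PRIMARY KEY (kind, value)
    );
""")

pid = str(uuid.uuid4())  # context-less internal identifier
db.execute("INSERT INTO people VALUES (?, ?)", (pid, "Alice"))
db.execute("INSERT INTO external_ids VALUES ('phone', '+4712345678', ?)", (pid,))

# When the phone number moves to a new owner, you repoint this one
# mapping row; the old account keeps its internal_id and its history.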
> I think artificial and data-less identifiers are the better means of identification that takes into account that things change. They don't have to be the identifier you present to the world, but having them is very useful.
If the only reason you need a surrogate key is to introduce indirection in your internal database design then sequence numbers are enough. There is no need to use UUIDs.
The whole discussion is about externally visible identifiers (ie. identifiers visible to external software, potentially used as a persistent long-term reference to your data).
> E.g. phone numbers are semi-common identifiers now, but phone numbers change owners for reasons outside of your control. If you use them as an internal identifier, changing them between accounts gets very messy because now you don't have an identifier for the person who used to have that phone number.
Introducing surrogate keys (regardless of whether UUIDs or anything else) does not solve any problem in reality. When I come to you and say "My name is X, this is my phone number, this is my e-mail, I want my GDPR records deleted", you still need to be able to find all data that is related to me. Surrogate keys don't help here at all. You either have to be able to solve this issue in the database or you need to have an oracle (ie. a person) that must decide ad-hoc what piece of data is identified by the information I provided.
The key issue here is that you try to model identifiable "entities" in your data model, while it is much better to model "captured information".
So in your example there is no "person" identified by "phone number" but rather "at timestamp X we captured information about a person at the time named Y and using phone number Z".
Once you start thinking about your database as structured storage of facts that you can use to infer conclusions, there is much less need for surrogate keys.
> So in your example there is no "person" identified by "phone number" but rather "at timestamp X we captured information about a person at the time named Y and using phone number Z". Once you start thinking about your database as structured storage of facts that you can use to infer conclusions, there is much less need for surrogate keys.
This is so needlessly complex that you contradicted yourself immediately. You claim there is no “person” identified but immediately say you have information “about a person”. The fact that you can assert that the information is about a person means that you have identified a person.
Clearly tying data to the person makes things so much easier. I feel like attempting to do what you propose is begging to mess up GDPR erasure.
> “So I got a request from a John Doe to erase all data we recorded for them. They identified themselves by mailing address and current phone number. So we deleted all data we recorded for that phone number.”
> “Did you delete data recorded for their previous phone number?”
> “Uh, what?”
The stubborn refusal to create a persistent identifier makes your job harder, not easier.
> If the only reason you need a surrogate key is to introduce indirection in your internal database design then sequence numbers are enough. There is no need to use UUIDs.
The UUID would be an example of an external key (for e.g. preventing crawling keys being easy). This article mentions a few reasons why you may later decide there are better external keys.
> When I come to you and say "My name is X, this is my phone number, this is my e-mail, I want my GDPR records deleted", you still need to be able to find all data that is related to me.
How are you going to trace all those records if the requester has changed their name, phone number and email since they signed up if you don't have a surrogate key? All 3 of those are pretty routine to change. I've changed my email and phone number a few times, and if I got married my name might change as well.
> Once you start thinking about your database as structured storage of facts that you can use to infer conclusions, there is much less need for surrogate keys.
I think that spirals into way more complexity than you're thinking. You get those timestamped records about "we got info about person named Y with phone number Z", and then person Y changes their phone number. Now you're going to start getting records from person named Y with phone number A, but it's the same account. You can record "person named Y changed their phone number from Z to A", and now your queries have to be temporal (i.e. know when that person had what phone number). You could back-update all the records to change Z to A, but that breaks some things (e.g. SMS logs will show that you sent a text to a number that you didn't send it to).
Worse yet, neither names nor phone numbers uniquely identify a person, so it's entirely possible to have records saying "person named Y and phone number Z" that refer to different people if a phone number transfers from a John Doe to a different person named John Doe.
I don't doubt you could do it, but I can't imagine it being worth it. I can't imagine a way to do it that doesn't either a) break records by backdating information that wasn't true back then, or b) require repeated/recursive querying that will hammer the DB (e.g. if someone has had 5 phone numbers, how do you get all the numbers they've had without pulling the latest one to find the last change, and then the one before that, and etc). Those queries are incredibly simple with surrogate keys: "SELECT * FROM phone_number_changes WHERE user_id = blah".
> The UUID would be an example of an external key (for e.g. preventing crawling keys being easy). This article mentions a few reasons why you may later decide there are better external keys.
So we are talking about "external" keys (ie. visible outside the database). We are back to square one: externally visible surrogate keys are problematic because they are detached from real world information they are supposed to identify and hence don't really identify anything (see my example about GDPR).
It does not matter if they are random or not.
> How are you going to trace all those records if the requester has changed their name, phone number and email since they signed up if you don't have a surrogate key?
And how does surrogate key help? I don't know the surrogate key that identifies my records in your database.
Even if you use them internally it is an implementation detail.
If you keep information about the time information was captured, you can at least ask me "what was your phone number last time we've interacted and when was it?"
> I think that spirals into way more complexity than you're thinking.
This complexity is there whether you want it or not and you're not going to eliminate it with surrogate keys. It has to be explicitly taken care of.
DBMSes provide means to tackle this essential complexity: bi-temporal extensions, views, materialized views etc.
Event sourcing is a somewhat convoluted way to attack this problem as well.
> Those queries are incredibly simple with surrogate keys: "SELECT * FROM phone_number_changes WHERE user_id = blah".
Sure, but those queries are useless if you just don't know user_id.
In my country, citizens have an "ID" (a UUID, which most people don't know the value of!) and a social security number (which they do know, and which has all the problems described above).
While the social security number may indeed change (doubly assigned numbers, gender reassignment, etc.), the ID needn't change, since it's the same physical person.
Public sector IT systems may use the ID and rely on it not changing.
Private sector IT systems can't look up people by that ID; they can only use the social security number for comparisons and lookups, e.g. for wiping records in GDPR "right to be forgotten" situations. Social security numbers are sort of useful for that purpose because they are printed on passports, driver's licenses and the like. But they are a problem w.r.t. identity theft, and shouldn't ever be used as an authenticator (we have better methods for that).
The person ID isn't useful for identity theft, since it's only used between authorized contexts (disregarding Byzantine scenarios with rogue public-sector actors!). You can't social-engineer your way to personal data using that ID (save a few movie-plot scenarios).
So what is internal in this case? The person id is indeed internal to the public sector's it-systems, and useful for tracking information between agencies. They're not useful for Bob or Alice. (They ARE useful for Eve, or other malicious inside actors, but that's a different story, which realistically does require a much higher level of digital maturity across the entire society)
> Stripping information from an identifier disconnects a piece of data from the real world which means we no longer can match them. But such connection is the sole purpose of keeping the data in the first place.
The surrogate key's purpose isn't to directly store the natural key's information, rather, it's to provide an index to it.
> The solution is not to come up with yet another artificial identifier but to come up with better means of identification taking into account the fact that things change.
There isn't 'another' - there's just one. The surrogate key. The other pieces of information you're describing are not the means of indexing the data. They are the pieces of data you wish to retrieve.
Any piece of information that can be used to retrieve something using this index has to be available "outside" your database - ie. to issue a query "give me piece of information identified by X" you have to know X first. If X is only available in your index then you must have another index to retrieve X based on some externally available piece of information Y. And then X becomes useless as an identifier - it just adds a level of indirection that does not solve any information retrieval problem.
That's my whole point: either X becomes a "real world artifact" or it is useless as identifier.
You can't take into account the fact that things change when you don't know what those changes might be. You might end up needing to either rebuild a new database, have some painful migration, or support two codepaths to work with both types of keys.
You can’t design something by trying to anticipate all future changes. Things will change and break.
In my personal design sense, I have found that keeping away from generality actually helps my code last longer (being based on more concrete ideas) and makes it easier to change when those days come.
In my experience, virtually every time I bake concrete data into identifiers I end up regretting it. This isn’t a case of trying to predict all possible future changes. It’s a case of trying to not repeat the exact same mistake again.
I misunderstood then. I interpreted your comment to say that you eschew generalization (e.g. uuids) in favor of concrete data (e.g. names, email addresses) for ids in your designs.
I don't think the timestamped UUIDs are "carrying data", it is just a heuristic to improve lookup performance. If the timestamp is wrong, it will just run as slow as the non-timestamped UUID.
If you take the gender example, for 99% of people, it is male/female and it won't change, and you can use that for load balancing. But if later, you found out that the gender is not the one you expect for that bucket, no big deal, it will cause a branch misprediction, but instead of happening 50% of the times when you use a random value, it will only happen 1% of the times, significant speedup with no loss in functionality.
As soon as you encode imperfect data in an immutable key, you always have to check when you retrieve it. If that piece of data isn't absolutely 100% guaranteed to be perfect, then you have to query both halves of the load balanced DB anyway.
The curious thing about the article is that, it's definitely premature optimization for smaller databases, but when the database gets to the scale where these optimizations start to matter, you actually don't want to do what they suggest.
Specifically, if your database is small, the performance impact is probably not very noticeable. And if your database is large (eg. to the extent primary keys can't fit within 32-bit int), then you're actually going to have to think about sharding and making the system more distributed... and that's where UUID works better than auto-incrementing ints.
I agree there's a scale below which this optimization matters and a scale above which you want your primary key to have locality. But...
* I think there is a wide range in the middle where your database can fit on one machine if you do it well, but it's worth optimizing to use a cheaper machine and/or extend the time until you need to switch to a distributed db. You might hit this middle range soon enough (and/or it might be a painful enough transition) that it's worth thinking about it ahead of time.
* If/when you do switch to a distributed database, you don't always need to rekey everything:
** You can bypass/defeat the existing keys' locality by hashing on lookup or reversing bits. Some databases (e.g. DynamoDB) actually force this.
** Allocating new ids in the old way could be a big problem, but there are ways out. You might be able to switch allocation schemes entirely without clients noticing if your external keys are sufficiently opaque. If you went with UUIDv7 (which addresses some but not all of the article's points), you can just keep using it. If you want to keep using dense(-ish), (mostly-)sequential bigints, you can amortize the latency by reserving blocks at a time (see the sketch below).
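A sketch of the block-reservation idea, under the assumption that `reserve_block` stands in for one round-trip to some central sequence (names are invented):

```python
import itertools
import threading

class BlockIdAllocator:
    """Hand out (mostly) sequential bigints while only touching the
    shared counter once per block of ids."""

    def __init__(self, reserve_block, block_size=1000):
        self._reserve_block = reserve_block
        self._block_size = block_size
        self._lock = threading.Lock()
        self._ids = iter(())  # empty until the first reservation

    def next_id(self) -> int:
        with self._lock:
            nxt = next(self._ids, None)
            if nxt is None:
                # one "expensive" call amortized over block_size ids
                start = self._reserve_block(self._block_size)
                self._ids = iter(range(start + 1, start + self._block_size))
                nxt = start
            return nxt

# Stand-in for the central sequence; in reality this would be a
# single atomic increment in the database.
_counter = itertools.count(0, 1000)
alloc = BlockIdAllocator(lambda n: next(_counter), block_size=1000)
print([alloc.next_id() for _ in range(3)])  # [0, 1, 2]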
Your comment is sufficiently generic that it’s impossible to tell what specific part of the article you’re agreeing with, disagreeing with, or expanding upon.
That's the creation date of that guid though. It doesn't say anything about the entity in question. For example, you might be born in 1987 and yet only get a social security number in 2007 for whatever reason.
So, the fact that there is a date in the uuidv7 does not extend any meaning or significance to the record outside of the database.
To infer such a relationship where none exists is the error.
You can argue that, but then what is its purpose? Why should anyone care about the creation date of a by-design completely arbitrary thing?
I bet people will extract that date and use it, and it's hard to imagine use which wouldn't be abuse. To take the example of a PN/SSN and the usual gender bit: do you really want anyone to be able to tell that you got a new ID at that time? What could you suspect if a person born in 1987 got a new PN/SSN around 2022?
Leaks like that, which bypass whatever access control you have in your database, are just one reason to use truly random IDs. But it's a pretty good one in itself.
> What could you suspect if a person born in 1987 got a new PN/SSN around 2022?
Thank you for spelling it out for me. For the readers: it leaks the information that the person is likely not a natural-born citizen. The assumption doesn't have to be a hundred percent accurate; the point is that there is a way to make that assumption, and possibly hold it against you. And there are probably a million ways a record-creation date could be held against you. If they don't put it in writing, how will you prove they discriminated against you?
Thinking... I don't have a good answer to this. If data exists, people will extract meaning from it whether rightly or not.
> The only rules that really matter are these: what a man can do and what a man can't do.
When evaluating security matters, it's better to strip off the moral valence entirely ("rightly") and only consider what is possible given the data available.
Another potential concerning implication besides citizenship status: a person changed their id when put in a witness protection program.
But UUIDv7 doesn’t change that at all. It doesn’t matter what flavor of UUID you choose. The ID is always “like” an index to a block in that you traverse the tree to find the node. What UUIDv7 does is improve some performance characteristics when creating new entries and potentially for caching.
That is absolutely not the purpose. The specific purpose of uuidv7 is to optimize for B-Tree characteristics, not so you can craft queries based on the IDs being sequential.
This assumption that you can query across IDs is exactly what is being cautioned against. As soon as you do that, you are taking a dependency on an implementation detail. The contract is that you get a UUID, not that you get 48 bits of timestamp. There are 8 different UUID types, and even v7 has more than one variant.
I would argue that is one of very few situations where leaking the timestamp that the ID was created when you already have the ID is a possible concern at all.
And when working with very large datasets, there are very significant downsides to large, completely random IDs (which is of course what the OP is about).
The time component either has meaning and it should be in its own column, or it doesn't have meaning and it is unnecessary and shouldn't be there at all.
I'm not a normalization fanatic, but we're only talking about 1NF here.
When I think "premature optimization," I think of things like making a tradeoff in favor of performance without justification. It could be a sacrifice of readability by writing uglier but more optimized code that's difficult to understand, or spending time researching the optimal write pattern for a database that I could spend developing other things.
I don't think I should ignore what I already know and intentionally pessimize the first draft in the name of avoiding premature optimization.
More broadly, this is the ages old surrogate vs natural key discussion, but yes the comment completely misses the point of the article. I can only assume they didn't read it in full!
The article explicitly argues against the use of GUIDs as primary keys, and I'm arguing for it.
A running number also carries data. Before you know it, someone's relying on the ordering or counting on there not being gaps - or counting the gaps to figure out something they shouldn't.
> A running number also carries data. Before you know it, someone's relying on the ordering or counting on there not being gaps - or counting the gaps to figure out something they shouldn't.
This came up in the last two threads I read about uuidv7.
This is simply not a meaningful statement. Any ID you expose externally is also an internal ID. Any ID you do not expose is internal-only.
If you expose data in a repeatable way, you still have to choose what IDs to expose, whether that’s the primary key or a secondary key. (In some cases you can avoid exposing keys at all, but those are narrow cases.)
You have one ID as a primary key. It is used for building relations in your database.
The second ID has nothing to do with internal structure of your data. It is just another field.
You can change your structure however you want (or type of your "internal" IDs) and you don't have to worry about an external consumer. They still get their artificial ID.
So what you meant is not to expose the primary key?
That’s a more reasonable statement but I still don’t agree. This feels like one of those “best practices” that people apply without thinking and create pointless complexity.
Don’t expose your primary key if there is a reason to separate your primary key from the externally-exposed key. If your primary key is the form that you want to expose, then you should just expose the primary key. e.g. If your primary key is a UUID, and you create a separate UUID just to expose publicly, you have most likely added useless complexity to your system.
Like the other poster said, this is a problem with default values, not with encoding the birthday into the personnummer.
I think it is also important to remember the purpose of specific numbers. For instance, I would argue a PN without the birthday would be strictly worse. With the current system (I only know the Swedish one, but assume it's the same) I only have to remember a 4-digit number (because the full number is birth date + 4 unique digits). If we instead used completely random numbers, I would have to remember at least an 8-digit number (and to be future-proof you'd likely want at least 9 digits). Sure, that's fine for myself (although I suspect some people already struggle with it), but then I also have to remember the numbers for my 2 kids and my partner, and things quickly become annoying. Especially because one doesn't use the numbers often enough for them to become easy to remember, but still often enough that it becomes annoying to look them up, especially when one doesn't always carry their phone.
> Even if nothing changes, you can run into trouble. Norwegian PNs have your birth date (in DDMMYY format) as the first six digits. Surely that doesn't change, right?
I guess Norway has solved it in the same or a similar way as Sweden? A person is identified by the PNR, and those systems that need to track a person over several PNRs (government agencies) use the PRI. A PRI is just the first PNR assigned to a person with a 1 inserted in the middle. If that PRI is occupied, use a 2, and so on.
Did you read the article? He doesn’t recommend natural keys, he recommends integer-based surrogates.
> A prime example of premature optimization.
Disagree. Data is sticky, and PKs especially so. Moreover, if you’re going to spend time optimizing anything early on, it should be your data model.
> Don't make decisions you will regret just to shave off a couple of milliseconds!
A bad PK in some databases (InnoDB engine, SQL Server if clustered) can cause query times to go from sub-msec to tens of msec quite easily, especially with cloud solutions where storage isn’t node-local. I don’t just mean a UUID; a BIGINT PK on a 1:M can destroy your latency for the simple reason of needing to fetch a separate page for every record. If instead the PK is a composite of (<linked_id>, id) - e.g. (user_id, id) - where id is a monotonic integer, you’ll have WAY better data locality.
Postgres suffers a different but similar problem with its visibility map lookups.
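For what it's worth, a sketch of the composite-PK shape described above (table and column names invented; clustering semantics assume InnoDB or a SQL Server clustered PK):

```python
# Composite PK so a user's rows live next to each other in the
# clustered index, instead of one page fetch per row.
ORDER_ITEMS_DDL = """
CREATE TABLE order_items (
    user_id BIGINT NOT NULL,
    id      BIGINT NOT NULL,   -- monotonic integer
    payload TEXT,
    PRIMARY KEY (user_id, id)  -- rows cluster by user first
);
"""

# A 1:M fetch like this now touches a handful of adjacent pages
# rather than a separate page for every record:
FETCH = "SELECT * FROM order_items WHERE user_id = %s ORDER BY id;"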
I read it (and regret it; it was a waste of my time). Their arguments are:
* integer keys are faster;
* uuidv7 keys are faster;
* if you want obfuscated keys, use integers and roll your own obfuscation (!!!).
I can get on board with uuidv7 (with the trade-off, of course, of stronger guessability). The integer keys argument is strange: at that point, you need to come up with a custom-built system to avoid ID collisions in a distributed system, and it achieves only a 2x saving (the absolute minimum you should do is 64-bit keys). Very puzzling suggestion and, to me, very wrong.
Note that in this entire article, the recommendation is not about using natural keys (email address, some composite of user identification etc.), so I am skipping that whole discussion.
> Did you read the article? He doesn’t recommend natural keys, he recommends integer-based surrogates.
I am not a cryptographer, but I would want his recommendation reviewed by a cryptographer. And then I would have to implement it. UUIDs have been extensively reviewed by cryptographers, I have a variety of excellent implementations I can use, I know they solve the problem well. I know they can cause performance issues; they're a security feature that is easy to implement, and I can deal with the performance issues if and when they crop up. (Which, in my experience, it's unusual. Even at a large company, most databases I encounter do not have enough data. I will err on the side of security until it becomes a problem, which is a good problem to have.)
Why would they be a security feature? They are not; the article even says it. Even if UUIDv4s are random, nobody guarantees that they are generated with a cryptographically secure random number generator, and in fact most implementations don't use one!
The reason you use a UUID in a lot of contexts is that you have a distributed system where you want the client to decide the ID, which is then stored in multiple systems that don't communicate. This is surely a valid scenario for a random UUID.
To me the rule is: use a UUID as the customer-facing ID for things that need an identity (e.g. a user, an order, etc.) and expose it publicly through APIs; use integer IDs as the internal identifiers that create relations between entities, and keep the internal IDs private. That way the more efficient numeric IDs stay inside the database and are used for joining data, while the UUID is only used for accessing the object from an API (for example); internally, when joining (where you have to deal with a lot of rows), you use the more efficient numeric ID.
By the way, I think the habit of "using UUIDs" came from NoSQL databases, where you surely do use a UUID, but also don't have to join data. People then transposed a best practice from one scenario to SQL, where it's not really a best practice...
If a sequential ID is exposed to the client, the client can trivially use it to determine the number of records and the relative age of any records. UUID solves this, and the use of a cryptographically secure number generator isn't really necessary for it to solve this. The author's scheme might be similarly effective, but I trust UUIDs to work well. There are obviously varying ways to hide this information other than UUIDs, but UUIDs are simple and I don't have to think about it, I just get the security benefits. I don't have to worry about not exposing IDs to the clients, I can do it freely.
I have never seen anyone post an actual example of the German Tank problem creating an issue for them, only that it’s possible.
> I don’t have to think about it
And here we have the main problem of most DB issues I deal with on a daily basis - someone didn’t want to think about the implications of what they were doing, and it’s suddenly then my emergency because they have no idea how to address it.
It sounds to me like you’re just arguing for premature optimization of another kind (specifically, prematurely changing your entire architecture for edge cases that probably won’t ever happen to you).
If you have an architecture already, obviously it's hard to change and you may want to postpone it until those edge cases which probably won't ever happen to you, happen. But for new architectures, value your own grey hairs over small performance improvements.
I think you're attacking a straw man. The article doesn't say "instead of UUIDv4 primary keys, use keys such as birthdays with exposed semantic meaning". On the contrary, they have a section about how to use sequence numbers internally but obfuscated keys externally. (Although I agree with dfox's and formerly_proven's comments [1, 2] that the XOR method they proposed for this is terrible. Reuse of a one-time pad is probably the most basic textbook example of bad cryptography. They referred to the values as "obfuscated" so they probably know this. They should have just gone with a better method instead.)
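To make the reused-pad point concrete, a toy demo (hypothetical key and IDs): with a fixed XOR key, a single known (id, token) pair gives the whole key away, and with it every other id.

```python
SECRET_KEY = 0x5DEECE66D  # the "pad", reused for every id

def obfuscate(n: int) -> int:
    return n ^ SECRET_KEY

# The attacker knows one plaintext/ciphertext pair, say their own id...
my_id, my_token = 1234, obfuscate(1234)
recovered_key = my_id ^ my_token  # XOR cancels: this IS the key
assert recovered_key == SECRET_KEY

# ...and can now strip the obfuscation from anyone's token.
assert obfuscate(987654) ^ recovered_key == 987654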
Insert order or time is information. And if you depend on that information, you are going to be really disappointed when backdated records have to be inserted.
Right, to ensure your clients don't depend on that information, make the key opaque outside the database through methods such as the ones dfox and formerly_proven suggested, as I said.
I don't think the objection is that it exposes semantic meaning, but that any meaningful information is contained within the key at all, eg. even a UUID that includes timestamp information about when it was generated is "bad" in a sense, as it leaks information. Unique identifiers should be opaque and inherently meaningless.
Your understanding is inconsistent with the examples in vintermann's comment. Using a sequence number as an internal-only surrogate key (deliberately opaqued when sent outside the bounds of the database) is not the same as sticking gender identity, birth date, or any natural properties of a book into a broadly shared identifier.
Okay, but they ignore the stuff I was talking about, consistent with my description of this as a straw man attack.
> A running number also carries data. Before you know it, someone's relying on the ordering or counting on there not being gaps - or counting the gaps to figure out something they shouldn't.
The opaquing prevents that.
They also describe this as a "premature optimization". That's half-right: it's an optimization. Having the data to support an optimization, and focusing on optimizing things that are hard to migrate later, is not premature.
Same with Austrian social security numbers, which, in some cases, don't contain the person's birth date and in some cases don't contain any existing date at all.
Yet many websites enforce a valid date and pull the person's birth date from it...
This is incredibly database-specific. In Postgres random PKs are bad. But in distributed databases like Cockroach, Google Cloud Datastore, and Spanner it is the opposite - monotonic PKs are bad. You want to distribute load across the keyspace so you avoid hot shards.
In Google Cloud Bigtable we had the issue that our domain's primary key was a sequential integer autogenerated by another app. So we just reversed it, and it distributed automatically quite nicely.
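A sketch of that trick: mirror the bits of the 64-bit sequential id, so consecutive values land far apart in a range-sharded keyspace.

```python
def reverse_bits_64(n: int) -> int:
    """Mirror the 64 bits of n; consecutive inputs now differ in
    their high bits, spreading them across the keyspace."""
    out = 0
    for _ in range(64):
        out = (out << 1) | (n & 1)
        n >>= 1
    return out

for seq in (1, 2, 3):
    print(f"{seq:>3} -> {reverse_bits_64(seq):016x}")
# 1 -> 8000000000000000, 2 -> 4000000000000000, 3 -> c000000000000000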
It's also application specific. If you have workload that's write heavy, has temporal skew and is highly concurrent, but rarely creates new records, you're probably better off with a random PK, even in PG.
It is, although you can have sharded PostgreSQL, in which case I agree with your assessment that you want random PKs to distribute them.
It's workload-specific, too. If you want to list ranges of them by PK, then of course random isn't going to work. But then you've got competing tensions: listing a range wants the things you list to be on the same shard, but focusing a workload on one shard undermines horizontal scale. So you've got to decide what you care about (or do something more elaborate).
As long as the key has sufficient entropy (i.e. not monotonic sequential ints), that ensures the keyspace is evenly distributed, correct? So UUID>=v4, ULID, KSUID, possibly snowflake, should be fine for the sake of even distribution of the hashes.
100%. You can use rendezvous hashing to determine the shard(s). The hash of a sequence should be randomly distributed as changing the LSB should propagate to 50% change in the output bits.
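For reference, a minimal rendezvous (highest-random-weight) hashing sketch: each key goes to whichever shard scores highest, and removing a shard only remaps the keys that lived on it. The hash choice here (blake2b) is just an example.

```python
import hashlib

def rendezvous_shard(key: str, shards: list[str]) -> str:
    """Pick the shard with the highest hash(key, shard) score."""
    def score(shard: str) -> int:
        digest = hashlib.blake2b(f"{key}:{shard}".encode(), digest_size=8)
        return int.from_bytes(digest.digest(), "big")
    return max(shards, key=score)

shards = ["shard-a", "shard-b", "shard-c"]
print(rendezvous_shard("user-1234", shards))
# Dropping a shard only moves the keys that were assigned to it:
print(rendezvous_shard("user-1234", shards[:-1]))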
I think they address this in the article when they say that this advice is specific to monolithic applications, but I may be misremembering (I skimmed).
I'm not making any claims at all, I was just adding context from my recollection of the article that appeared to be missing from the conversation.
Edit: What the article said:
> The kinds of web applications I’m thinking of with this post are monolithic web apps, with Postgres as their primary OLTP database.
So you are correct that this does not disqualify distributed databases.
The article sums up some valid arguments against UUIDv4 as PKs but the solution the author provides on how to obfuscate integers is probably not something I'd use in production. UUIDv7 still seems like a reasonable compromise for small-to-medium databases.
I tend to avoid UUIDv7 and use UUIDv4 because I don't want to leak the creation times of everything.
Now this doesn't work if you actually have enough data that the randomness of the UUIDv4 keys is a practical database performance issue, but I think you really have to think long and hard about every single use of identifiers in your application before concluding that v7 is the solution. Maybe v7 works well for some things (e.g identifiers for resources where creation times are visible to all with access to the resource) but not others (such as users or orgs which are publicly visible but without publicly visible creation times).
I'm also not a huge fan of leaking server-side information; I suspect UUIDv7 could still be used in statistical analysis of the keyspace (in a similar fashion to the German tank problem for integer IDs). Also, leaking data about user activity times (from your other comment) is a *really* good point that I hadn't considered.
I've read people suggest using a UUIDv7 as the primary key and a UUIDv4 as a user-visible one as a remedy.
My first thought when reading the suggestion was, "well but you'll still need an index on the v4 IDs, so what does this actually get you?" But the answer is that it makes joins less expensive; you only require the index once, when constructing the query from the user-supplied data, and everything else operates with the better-for-performance v7 IDs.
To be clear, in a practical sense, this is a bit of a micro-optimization; as far as I understand it, this really only helps you by improving the data locality of temporally-related items. So, for example, if you had an "order items" table, containing rows of a bunch of items in an order, it would speed up retrieval times because you wouldn't need to do as many index traversals to access all of the items in a particular order. But on, say, a users table (where you're unlikely to be querying for two different users who happen to have been created at approximately the same time), it's not going to help you much. Of course the exact same critique is applicable to integer IDs in those situations.
Although, come to think of it, another advantage of a user-visible v4 with a v7 PK is that you could use a different index type on the v4 ID. Specifically, I would think that a hash index for the user-visible v4 might be a halfway-decent way to go.
I'm still not sure either way if I like the idea, but it's certainly not the craziest thing I've ever heard.
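For clarity, a sketch of the dual-ID shape being discussed (Postgres-flavored SQL; names invented): the v4 is touched exactly once to resolve the row, and everything after that runs on the v7 PK.

```python
USERS_DDL = """
CREATE TABLE users (
    id        UUID PRIMARY KEY,      -- UUIDv7: good B-tree locality
    public_id UUID NOT NULL UNIQUE,  -- UUIDv4: what URLs and APIs expose
    name      TEXT
);
"""

# Resolve the user-supplied v4 once...
RESOLVE = "SELECT id FROM users WHERE public_id = %s;"
# ...then every join and child-table FK uses the v7 PK.
ORDERS = "SELECT * FROM orders WHERE user_id = %s ORDER BY id;"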
If that kind of stuff is on the table, you can also use boring 64-bit integer keys and encrypt those (e.g. [1]). Which in the end is just a better-thought-out version of what the article author did.
UUIDv47 might have a place if you need keys generated on multiple backend servers without synchronization. But it feels very niche to me.
The issue will be very context specific. In other words to (reasonably) answer the question, we'd have to judge each application individually.
For one example, say you were making voting-booth software. You really don't want a (hidden) timestamp attached to each vote (much less an incrementing id) because that would break voter confidentiality.
More generally, it's more an underlying principle of data management. Not leaking ancillary data is easier to justify than "sure, we leak the date and time of the record creation, but we can't think of a reason why that matters."
Personally I think the biggest issue is "clever" programmers who treat the UUID as data and start displaying the date and time. This leads to complications ("that which is displayed, the customer wants to change"). It's only a matter of time before someone declares the date "wrong" and it must be "fixed". Not to mention time zone or daylight savings conversions.
Well you're leaking user data. I'm sure you can imagine situations where "the defendant created an account on this site on this date" could come up. And the user could have created that account not knowing that the creation date is public, because it's not listed anywhere in the publicly viewable part of the profile other than the UUID in the URL.
Hacker News is also doing fine, even though I can just click your profile and see you joined in October 2024. It doesn't matter for every use case.
But there are cases where it matters. Using UUIDv7 for identifiers means you need to carefully consider the security and privacy implications every time you create a new table identified by a UUID, and you'll possibly end up with some tables where you use v4 and some where you use v7. Worst case, you'll end up with painful migrations from v7 to v4 as security review identifies timestamped identifiers as a security concern.
The whole point though is that the ID itself leaks info, even if the profile is not public. There are many cases where you reference an object as a foreign key, even if you can't see the entire record of that foreign key.
If your system (pseudo-) random number generator (RNG) is compromised to derive a portion of its entropy from things that are knowable by knowing the time when the function ran, then the search space for cracking keys created around the same time can be shrunken considerably.
This doesn’t even rely on your system’s built-in RNG being low quality. It could be audited and known to avoid such issues but you could have a compromised compiler or OS that injects a doctored RNG.
E.g., if your service's user IDs have a timestamp as part of the key and this data is visible to other users, they would know when an account was created. This could be an issue.
There was a HN comment about competitors tracking how many new signups are happening and increasing the discounts/sales push based on that. Something like this.
In a business I once worked for, one of the users of the online ordering system represented over 50% of the business' income, something you wouldn't necessarily want them to know.
However, because the online ordering system assigned order numbers sequentially, it would have been trivial for that company to determine how important their business was.
For example, over the course of a month, they could order something at the start of the month and something at the end of the month. That would give them the total number of orders in that period. They already know how many orders they have placed during the month, so company_orders / total_orders = percentage_of_business
It doesn't even have to be accurate, just an approximation. I don't know if they figured out that they could do that but it wouldn't surprise me if they had.
That's happening everywhere. You can order industrial parts from a Fortune 500 and check some of the numbers on it too, if they're not careful about it.
Depends on the data. If you use a primary key in data about a person that shouldn't include their age (e.g. to remove age-based discrimination) then you are leaking an imperfect proxy to their age.
Apart from all the other answers here: an external entity knowing the relative creation time for two different accounts, or just that the two accounts were created close in time to each other can represent a meaningful information leak.
It's not leaking that's the concern. It's that not having the names of objects be easily enumerable is a strongly security-enhancing feature of a system.
Yes of course everyone should check and unit test that every object is owned by the user or account loading it, but demanding more sophistication from an attacker than taking "/my_things/23" and loading "/my_things/24" is a big win.
If all you want is to obfuscate the fact that your social media site only has 200 users and 80 posts, simply use a permutation over the autoincrement primary key. E.g. IDEA or CAST-128, then encode in base64. If someone steps on your toes because somewhere in your codebase you're using a forbidden legacy cipher, just use AES-128. (This is sort of the degenerate/tautological base case of format-preserving encryption)
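A sketch of that degenerate format-preserving-encryption case (assuming the `cryptography` package; real key handling is elided): encrypt the id as a single AES block, then base64url it. ECB on exactly one block is fine here, since all we want is a keyed permutation.

```python
import base64
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

KEY = os.urandom(16)  # stand-in; in reality a fixed, stored secret

def encode_id(n: int) -> str:
    """Permute the id through one AES-128 block, then base64url."""
    enc = Cipher(algorithms.AES(KEY), modes.ECB()).encryptor()
    block = enc.update(n.to_bytes(16, "big")) + enc.finalize()
    return base64.urlsafe_b64encode(block).rstrip(b"=").decode()

def decode_id(token: str) -> int:
    dec = Cipher(algorithms.AES(KEY), modes.ECB()).decryptor()
    block = base64.urlsafe_b64decode(token + "==")
    return int.from_bytes(dec.update(block) + dec.finalize(), "big")

t = encode_id(42)
assert decode_id(t) == 42
print(t)  # 22 opaque characters instead of "42"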
The article is self-contradictory in that it acts like that key is super-important ("Operations becomes a nightmare. You now have a cryptographic secret to manage. Where does this key live? Protected by a wrapping key living in a KMS or HSM? Do you use the same key across prod, staging, and dev? If dev needs to test with prod data, does it need access to prod encryption keys? What about CI pipelines? Local developer machines?") but then also acknowledges that we're talking about an obfuscation layer over stuff which is not actually sensitive ("to hide timestamps that aren't sensitive"). Don't get me wrong, it's a definite drawback for scaling the approach, but most applications have to manage various secrets, most of which are actually important: e.g. session signing keys, API keys, etc. It's still common for applications to use signed sessions with RCE-prone data formats. The language from that article, while not wrong, is much more apt for those keys.
That being said, while fine for obfuscation, it should not be used for security for this purpose, e.g. hidden/unlisted links, confirmation links and so on. Those should use actual, long-ish random keys for access, because the inability to enumerate them is a security feature.
I always thought they are used and stored as-is because the kind of transformation you mention seems terribly expensive given YT's scale, and I don't see a clear benefit to adding any kind of obfuscation here.
Counterargument... I do technical diligence so I talk to a lot of companies at points of inflection, and I also talk to lots who are stuck.
The ability to rapidly shard everything can be extremely valuable. The difference between "we can shard on a dime" and "sharding will take a bunch of careful work" can be expensive. If the company has poor margins, this can be the difference between "can scale easily" and "we're not getting this investment".
I would argue that if your folks have the technical chops to be able to shard while avoiding surrogate guaranteed-unique keys, great. But if they don't... a UUID on every table can be a massive get-out-of-jail-free card, and for many companies this is much, much more important than some minor space and time optimizations on the DB.
Sort of related, but we had to shard as usage grew and didn’t have uuids and it was annoying. Wasn’t the most annoying bit though. Whole thing is pretty complex regardless of uuid, if you have a highly interconnected data model that needs to stay online while migrating.
Right, but if you start off with uuids and the expectation that you might use them to shard, you'll wind up factoring that into the data model. Retrofitting, as you rightly say, can be much harder.
> For many business apps, they will never reach 2 billion unique values per table, so this will be adequate for their entire life. I’ve also recommended always using bigint/int8 in other contexts.
I'm sure every DBA has a war story that starts with a similar decision in the past.
I work on an application where we encrypt the integer primary key and then use the bytes to generate something that looks like a UUID.
In our case, we don't want database IDs in an API and in URLs. When IDs are sequential, it enables things like dictionary attacks and provides estimates about how many customers we have.
Encrypting a database ID makes it very obvious when someone is trying to scan, because the UUID won't decrypt. We don't even need a database round trip.
* How do you manage the key for encrypting IDs? Injected to app environment via envvar? Just embedded in source code? I ask this because I'm curious as to how much "care" I should be putting in into managing the secret material if I were to adopt this scheme.
* Is the ID encrypted using AEAD scheme (e.g. AES-GCM)? Or does the plain AES suffice? I assume that the size of IDs would never exceed the block size of AES, but again, I'm not a cryptographer so not sure if it's safe to do so.
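Not the parent's actual code, but for what it's worth, the scan-detection part doesn't strictly need a full AEAD: an 8-byte ID plus 8 bytes of fixed padding fits exactly one AES block, and a guessed token then fails the padding check on decrypt with overwhelming probability. A sketch under those assumptions:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    KEY = os.urandom(16)  # illustrative only; in practice loaded from env/KMS

    def encrypt_id(pk: int) -> bytes:
        enc = Cipher(algorithms.AES(KEY), modes.ECB()).encryptor()
        # 8-byte ID + 8 zero bytes = exactly one 16-byte AES block.
        return enc.update(pk.to_bytes(8, "big") + b"\x00" * 8) + enc.finalize()

    def decrypt_id(token: bytes) -> int | None:
        dec = Cipher(algorithms.AES(KEY), modes.ECB()).decryptor()
        pt = dec.update(token) + dec.finalize()
        # A scanned/guessed token passes this check with probability 2**-64.
        return int.from_bytes(pt[:8], "big") if pt[8:] == b"\x00" * 8 else None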
> Random values don’t have natural sorting like integers or lexicographic (dictionary) sorting like character strings. UUID v4s do have "byte ordering," but this has no useful meaning for how they’re accessed.
Might the author mean that random values are not sequential, so ordering them is inefficient? Of course random values can be ordered - and ordering by what he calls "byte ordering" is exactly how all integer ordering is done. And naive string ordering too, like we would do in the days before Unicode.
Hi there. Thanks for the feedback. I updated that section to hopefully convey the intent more. The type of ordering we care about for this topic is really B-Tree index traversal when inserting new entries and finding existing entries (single and multiple values i.e. an IN clause, updates, deletes etc). There's a compelling example I re-created from Cybertec showing the pages needed and accessed for equivalent user-facing results, comparing storing PKs as big integers vs. UUID v4s, and how many more pages were needed for v4 UUIDs. I found that to be helpful to support my real world experience as a consultant on various "medium sized" Postgres databases (e.g. single to 10s of millions of records) where clients were experiencing excessive latency for queries, and the UUID v4 PK/FKs selection made for reasons earlier was one of the main culprits. The indexes wouldn’t fit into memory resulting in a lot of sequential scans. I’d confirm this by showing an alternative schema design and set of queries where everything was the same except integer PKs/FKs were used. Smaller indexes (fit in memory), reliable index scans, less latency, faster execution time.
Using an UUIDv4 as primary key is a trade-off: you use it when you need to generate unique keys in a distributed manner. Yes, these are not datetime ordered and yes, they take 128 bits of space. If you can't live with this, then sure, you need to consider alternatives. I wonder if "Avoid UUIDv4 Primary Keys" is a rule of thumb though.
Yup. There are alternatives depending on what the situation is: with non-distributed systems, you could just use a sufficiently sized int (which can be rather small when the table is for e.g. humans). You could add a separate timestamp column if that is important.
But if you need UUID-based lookup, then you might as well have it as a primary key, as that will save you an extra index on the actual primary key. If you also need a date and the remaining bits in UUIDv7 suffice for randomness, then that is a good option too (though this does essentially amount to having a composite column made up of datetime and randomness).
> you use it when you need to generate unique keys in a distributed manner
Just to complement this with a point: there isn't any mainstream database management system out there that is distributed in the sense that it requires UUIDs to generate its internal keys.
There exist some you can find on the internet, and some institutions have internal systems that behave this way. But as a near-universal rule, the thing people know as a "database" isn't distributed in this sense, and if the column creation is done inside the database, you don't need them.
I do not understand why 128 bits is considered too big - you clearly can't have less, as on 64 bits the collision probability on real world workloads is just too high, for all but the smallest databases.
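For intuition, the standard birthday bound makes this concrete (back-of-envelope, assuming uniformly random IDs):

    import math

    # P(collision) ~ 1 - exp(-n^2 / 2^(bits+1)) for n random IDs of a given width;
    # expm1 keeps the result accurate when the probability is tiny.
    def p_collision(n: float, bits: int) -> float:
        return -math.expm1(-n * n / 2 ** (bits + 1))

    print(p_collision(1e9, 64))   # ~0.027: about a 3% collision chance at 1e9 rows
    print(p_collision(1e9, 122))  # ~9e-20: UUIDv4's 122 random bits, negligible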
Auto-incrementing keys can work, but what happens when you run out of integers? Also, distributed DBs probably make this hard, and they can't generate a key on the client.
There must be something in Postgres that wants to store the records in PK order, which, while it could be an okay default, I'm pretty sure you can turn off, as this isn't great for write-heavy workloads.
I doubt many real world use cases would run out of incrementing 64 bit ids - collisions if they were random sure, but i64 max is 9,223,372,036,854,775,807 - if each row took only 1 bit of space, that would be slightly more than an exabyte of data.
The issue is more fundamental - if you have purely random keys, there's basically no spatial locality for the index data. Which means that for decent performance your entire index needs to be in memory, rather than just recent data. And it means that you have much bigger write amplification, since it's rare that the same index page is modified multiple times close-enough in time to avoid a second write.
You won't run out of 64-bit integers. IMO, a 64-bit integer (or even less for tables that aren't expected to grow much) is the best approach for an internal database ID. If you want to expose IDs, it might make sense to introduce a second UUID for selected tables, if you want to hide the internal ID.
Isn't part of this that inserting into a btree index is more performant when the keys are increasing rather than being random? A random id will cause more re-balancing operations than always inserting at the end. Increasing ids are also more cache friendly
The point is how closely located data you access often is. If data is roughly sorted by creation time then data you access close to one another in time is stored close to one another on disk. And typically access to data is correlated with creation time. Not for all tables but for many.
Accessing data in totally random locations can be a performance issue.
Depends on lots of things ofc but this is the concern when people talk about UUID for primary keys being an issue.
Values of the same type can be sorted if an order is defined on the type.
It's also strange to contrast "random values" with "integers". You can generate random integers, and they have a "sorting" (depending on what that means though)
Why would you need to order by UUID? I am missing something here. Most of the time we use UUID keys for being able to create a new key without coordination and most of the time we do not want to order by primary key.
Most common database indexes are ordered, so if you are using UUIDv4 you will not only bloat the index you will also have poor locality. If you try to use composite keys to fix locality, you'll end up with an even more bloated index.
I have seen a lot of people sort by (generated) integer values to return the rows "in creation order", assuming that sorting by an integer is somehow magically faster than sorting by a proper timestamp value (which gives a more robust "creation order" sorting than a generated integer value).
Assuming the integer value is the PK, it can in fact be much faster for MySQL / MariaDB due to InnoDB’s clustering index. If it can do a range scan over the PK, and that’s also the ORDER BY (with matching direction), congratulations, the rows are already ordered, no sort required. If it has to do a secondary index lookup to find the rows, this is not guaranteed.
Any fixed sized bitstring has an obvious natural ordering, but since they're allocated randomly they lack the density and locality of sequential allocation.
Why not just use UUIDs as a unique column next to a bigint PK?
The power and main purpose of UUIDs is to act as easy to produce, non-conflicting references in distributed settings. Since the scope of TFA is explicitly set to be "monolithic web apps", nothing stops you from having everything work with bigint PKs internally, and just add the UUIDs where you need to provide external references to rows/objects.
MySQL benefits from these changes as well. What we're really discussing is random primary key inserts (UUIDv4) vs incrementing primary key inserts (UUIDv6 or v7).
PlanetScale wrote up a really good article on why incrementing primary keys are better for performance when compared to randomly inserted primary keys; when it comes to b-tree performance. https://planetscale.com/blog/btrees-and-database-indexes
This article is about a solution in search of a problem, a classic premature-optimization issue. UUIDv4 is perfectly fine for many use cases, including small databases. The performance argument must be considered when there's a problem with performance on the horizon. Other considerations may be, and very often are, superior to that.
It's not really feasible to rekey your UUIDv4 keyed database to int64s after the fact, imo. Sure your new tables could be integer-keyed, but the bulk of your storage will be UUID (and UUIDv4, if that's what you started with) for a very long time
I think you're right that it won't matter for most companies. But having been at a company with persistent DB performance issues with UUIDv4 keys as a contributing factor, it sucks.
You might have missed the big H2 section in the article:
"Recommendation: Stick with sequences, integers, and big integers"
After that then, yes, UUIDv7 over UUIDv4.
This article is a little older. PostgreSQL didn't have native support so, yeah, you needed an extension. Today, PostgreSQL 18 is released with UUIDv7 support... so the extension isn't necessary, though the extension does make the claim:
"[!NOTE] As of Postgres 18, there is a built in uuidv7() function, however it does not include all of the functionality below."
What those features are and if this extension adds more cruft in PostgreSQL 18 than value, I can't tell. But I expect that the vast majority of users just won't need it any more.
There are plenty of ways to deal with that. You can shard by some other identifier (though I then question your table design), you can assign ranges to each shard, etc.
Because then you run into an issue when your 'n' changes. Plus, where are you incrementing it? This will require a single fault-tolerant ticker (some do that, btw).
Once you encode the shard number into the ID, you get:
- instantly* know which shard to query
- each shard has its own ticker
* programmatically; maybe visually as well, depending on implementation
I had IDs that encode: entity type (IIRC 4 bits?), timestamp, shard, and sequence per shard. We even had an admin page where you could paste an ID and it would decode it (something like the sketch below).
id % n is fine for a cache, because you can just throw the whole thing away and repopulate, or when 'n' never changes, but it usually does.
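A hypothetical sketch of that kind of layout (the field widths here are mine, chosen to fill 64 bits; the real system may have differed):

    # 4-bit entity type | 41-bit ms timestamp | 7-bit shard | 12-bit sequence
    def pack_id(entity_type: int, ts_ms: int, shard: int, seq: int) -> int:
        return (entity_type << 60) | (ts_ms << 19) | (shard << 12) | seq

    def unpack_id(id_: int) -> dict:
        return {
            "entity_type": id_ >> 60,
            "ts_ms": (id_ >> 19) & ((1 << 41) - 1),  # 41 bits of ms covers ~69 years
            "shard": (id_ >> 12) & 0x7F,
            "seq": id_ & 0xFFF,
        }

    assert unpack_id(pack_id(3, 1_700_000_000_000, 42, 7)) == {
        "entity_type": 3, "ts_ms": 1_700_000_000_000, "shard": 42, "seq": 7}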
Especially in larger systems, how does one solve the issue of reaching the max value of an integer in their database? Sure, for unsigned bigint that's hard to achieve, but regular ints? Apps quickly outgrow that.
OK... but that concern seems a bit artificial.. if bigints are appropriate: use them. If the table won't get to bigint sizes: don't. I've even used smallint for some tables I knew were going to be very limited in size. But I wouldn't worry about smallint's very limited number of values for those tables that required a larger size for more records: I'd just use int or bigint for those other tables as appropriate. The reality is that, unless I'm doing something very specific where being worried about the number of bytes will matter... I just use bigint. Yes, I'm probably being wasteful, but in the cases where those several extra bytes per record are going to really add up.... I probably need bigint anyway and in cases where bigint isn't going to matter the extra bytes are relatively small in aggregate. The consistency of simply using one type itself has value.
And for those using ints as keys... you'd be surprised how many databases in the wild won't come close to consuming that many IDs or are for workloads where that sort of volume isn't even aspirational.
Now, to be fair, I'm usually in the UUID camp and am using UUIDv7 in my current designs. I think the parent article makes good points, but I'm after a different set of trade-offs where UUIDs are worth their overhead. Your mileage and use-cases may vary.
Idk, I use whatever scales best, and that would be a close-to-infinitely scaling key. The performance compromise is probably zeroed out once you have to adapt your database to a different one supporting the current scale of the product. That's for software that has to scale. Whole different story for stuff that doesn't have to grow, obviously. I am in the UUID camp too, but I don't care whether it's v4 or v7.
It's not like there are dozens of options and you constantly have to switch. You just have to estimate if at maximum growth your table will have 32 thousand, 2 billion or 9 quintillion entries. And even if you go with 9 quintillion for all cases you still use half the space of a UUID
UUIDv4 are great for when you add sharding, and UUIDs in general prevent issues with mixing ids from different tables. But if you reach the kind of scale where you have 2 billion of anything UUIDs are probably not the best choice either
This is such a mediocre article. It provides plenty of valid reasons to consider avoiding UUIDs in databases; however, it doesn't say what should be used if one wants primary keys that are not easy to predict. The XOR alternative is too primitive, and, well, while I get why I should consider avoiding UUIDs, what should I use instead?
An additional thing I learned when I worked on a ulid alternative over the weekend[0] is: Postgres's internal Datum type is at most 64 bits which means every uuid requires heap allocation[1] (at least until we get 128 bit machines).
The author should include benchmarks; otherwise, saying that UUIDs "increase latency" is meaningless. For instance, how much longer does it take to insert a UUID vs. an integer? How much longer does scanning an index take?
The article is muddled, I wish he'd split it into two. One for UUID4 and another for UUID7.
I was using 64-bit snowflake pks (timestamp+sequence+random+datacenter+node) previously and made the switch to UUID7 for sortable, user-facing pks. I'm more than fine letting the DB handle a 128-bit int over a 64-bit int if it means not having to make sure that the latest version of my snowflake function has made it to every db, or that my snowflake server never hiccups, ever.
Most of the data that's going to be keyed with a uuid7 is getting served straight out of Redis anyway.
What about the newest PostgreSQL support for UUIDv7? Did anybody run tests? This is what we're heading towards at the moment of writing, so I'd like to know whether we should eventually roll back the decision.
I've seen this type of advice a few times now. Now I'm not a database expert by any stretch of the imagination, but I have yet to see a UUID as primary key in any of the systems I've touched.
Are there valid reasons to use UUIDs (assuming they're used correctly) for primary keys? I know systems have incorrectly exposed primary keys to the public, but assuming that's not the concern, why use UUID over big-int?
I mean this is the primary reason right here! You can pre-create an entire tree of relationships client side and ship it off to the database with everything all nice and linked up. And since by design each PK is globally unique you’ll never need to worry about constraint violations. It’s pretty damn nice.
About 10 years ago I remember seeing a number of posts saying "don't use int for ids!". Typically the reasons were things like "the id exposes the number of things in the database" and "if you have bad security then users can increment/decrement the id to get more data!". What I then observed was a bunch of developers rushing to use UUIDs for everything.
UUIDv7 looks really promising but I'm not likely to redo all of our tables to use it.
I'm new to the security side of things; I can understand that leaking any information about the backend is no bueno, but why specifically is table size an issue?
At least for the Spanner DB, it's good to have a randomly-distributed primary key since it allows better sharding of the data and avoids "hot shards" when doing a lot of inserts. UUIDv4 is the typical solution, although a bit-reversed incrementing integer would work too
I've been using ULIDs [0] in prod for many years now, and I love them. I just use string encoding, though if I really wanted to squeeze out every last MB, I could do some conversion so it is stored as 16 bytes instead of 26 chars. In practice it's never mattered, and the simplicity of just string IDs everywhere is nice.
Sometimes I have to talk to legacy systems, all my APIs have str IDs, and I encode int IDs as just decimal left padded with leading zeros up to 26 chars. Technically not a compliant ULID but practically speaking, if I see leading `00` I know it's not an actual ULID, since that would be before Nov-2004, and ULID was invented in 2017. The ORM automatically strips the zeros and the query just works.
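In code, that convention is just zero-padding and a prefix check (a sketch; the helper names are mine):

    def encode_legacy_id(n: int) -> str:
        # Decimal digits are valid Crockford base32, so this parses as a ULID,
        # but a leading "00" would date it before Nov 2004, which is impossible
        # for a real ULID.
        return str(n).zfill(26)

    def is_legacy_id(s: str) -> bool:
        return s.startswith("00")

    assert encode_legacy_id(12345) == "00000000000000000000012345"
    assert is_legacy_id(encode_legacy_id(12345))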
I'm just kind of over using sequential int IDs for anything bigger than hobby level stuff. Testing/fixturing/QA are just so much easier when you do not have to care about whether an ID happens to already exist.
This misses the point. The reason not to use UUIDv4 is that having an index on random values is slow(er), because sequential inserts into the underlying B-tree are faster than random inserts. You're hitting the same problem with your `public_id` column, that it's not the primary key doesn't change that.
For InnoDB-based DBs that are not Aurora, and if the secondary index isn’t UNIQUE, it solves the problem, because secondary non-unique index changes are buffered and written in batches to amortize the random cost. If you’re hashing a guaranteed unique entity, I’d argue you can skip the unique constraint on this index.
For Aurora MySQL, it just makes it worse either way, since there’s no change buffer.
Being able to create something and know the id of it before waiting for an http round trip simplifies enough code that I think UUIDs are worth it for me. I hadn't really considered the potential perf optimization from orderable ids before though - I will consider UUID v7 in future.
While that is often a neat solution, do not do it by simply XORing the numbers with a constant. Use a block cipher in ECB mode (if you want the ID to be short, then something like NSA's Speck comes in handy here, as it can be instantiated with a 32- or 48-bit block).
And do not even think about using RC4 for this (I've seen that multiple times), because that is completely equivalent to XORing with a constant.
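Speck is small enough to sketch outright. Here is Speck32/64 (32-bit block, 64-bit key) as a permutation of a 32-bit ID space; this is my transcription of the round function and key schedule from the Speck paper, checked against the paper's published test vector, so treat it as illustrative rather than audited crypto:

    MASK, ALPHA, BETA, ROUNDS = 0xFFFF, 7, 2, 22

    def ror(x, r): return ((x >> r) | (x << (16 - r))) & MASK
    def rol(x, r): return ((x << r) | (x >> (16 - r))) & MASK

    def expand_key(k0, l0, l1, l2):
        l, ks = [l0, l1, l2], [k0]
        for i in range(ROUNDS - 1):
            l.append(((ks[i] + ror(l[i], ALPHA)) & MASK) ^ i)
            ks.append(rol(ks[i], BETA) ^ l[i + 3])
        return ks

    def encrypt_block(x, y, ks):
        for k in ks:
            x = ((ror(x, ALPHA) + y) & MASK) ^ k
            y = rol(y, BETA) ^ x
        return x, y

    def decrypt_block(x, y, ks):
        for k in reversed(ks):
            y = ror(y ^ x, BETA)
            x = rol(((x ^ k) - y) & MASK, ALPHA)
        return x, y

    # Test vector from the Speck paper: key 1918 1110 0908 0100,
    # plaintext 6574 694c -> ciphertext a868 42f2.
    ks = expand_key(0x0100, 0x0908, 0x1110, 0x1918)
    assert encrypt_block(0x6574, 0x694C, ks) == (0xA868, 0x42F2)

    def obfuscate32(pk: int) -> int:
        # Map a 32-bit ID to a 32-bit token by encrypting its two 16-bit halves.
        x, y = encrypt_block(pk >> 16, pk & MASK, ks)
        return (x << 16) | y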
Long article about why not to use UUIDv4 as Primary Keys, but.. Who is doing so? And why are they doing that? How would you solve their requirements? Just throwing out "you can use UUIDv7" doesn't help with, e.g., the size they take up.
Aren't people using (big)ints as primary keys, and using UUIDs as logical keys for import/export, solving portability across different machines?
UUIDs are usually the go-to solution to enumeration problems. The space is large enough that an attacker cannot guess how many X you have (invoices, users, accounts, organizations, ...). When people replace the ints by UUIDv4, they keep them as primary keys.
I'd add that it's also used when data is created in multiple places.
Consider say weather hardware. 5 stations all feeding into a central database. They're all creating rows and uploading them. Using sequential integers for that is unnecessarily complex (if even possible.)
Given the amount of data created on phones and tablets, this affects more situations than first assumed.
It's also very helpful in export / edit / update situations. If I export a subset of the data (let's say to Excel), the user can edit all the other columns and I can safely import the result. With integer they might change the ID field (which would be bad). With uuid they can change it, but I can ignore that row (or the whole file) because what they changed it to will be invalid.
If you put an index on the UUID field (because you have an API where you can retrieve objects by UUID) you have kind of the same problem, at least in Postgres, where a primary key index and a secondary index are more or less the same (to the point that it is perfectly valid in pgsql to not have any primary key defined for the table, because storage on disk is done through an internal ID, and the indexes, primary or not, just reference that rowId). Plus there's the waste of space of having two indexes for the same table.
Of course this is not always bad. For example, if you have a lot of relations you can have only one table with the UUID field (and thus the expensive index), and then the relations can use the more efficient int key (for example, you have a user entity with both int and uuid keys, and user attributes reference the user via the int key, at the expense of a join if you need to retrieve one user attribute when retrieving the user is not needed).
*edit: sorry, misread that. My answer is not valid to your question.
original answer: because if you don't come up with these ints randomly, they are sequential, which can cause many unwanted situations where people can guess valid IDs and deduce things from that data. See https://en.wikipedia.org/wiki/German_tank_problem
Hence the presumed implication behind the public_id field in GP's comment: anywhere identifiers are exposed, you use the public_id field, thereby preventing ID guessing while still retaining the benefits of ordered IDs where internal lookups are concerned.
Edit: just saw your edit, sounds like we're on the same page!
Decades of security vulnerabilities and compromises because of sequential/guessable PKs is (only!) part of the reason we're here. Miss an authorization check anywhere in the application and you're spoon-feeding entire tables to anyone with the inclination to ask for it.
I also think we can use a combination of a PID - persistent ID (I always thought it was public) and an auto-increment integer ID. Having a unique key helps when migrating data between systems or referencing a piece of data in a different system. Also, using serial IDs in URLs and APIs can reveal sensitive information, e.g. how many items there are in the database.
Always try to avoid having two services using the same DB. Only way I'd ever consider sharing a DB is if only one service will ever modify it and all others only read.
You probably don't want integer primary keys, and you probably don't want UUID primary keys. You probably want something in-between, depending on your use case. UUID is one extreme on this spectrum, which tries to solve all of the problems, including ones you might not have.
That depends a lot on many factors, and thus I don't like generic statements like that, which tend to be focused on a specific database pattern. That said, everyone should indeed be aware of the potential tradeoffs.
And of course we could come up with many ways to generate our own ids and make them unique, but we have the following requirements.
- It needs to be a string (because we allow composing them to 'derive' keys)
- A client must be able to create them (not just a server) without risk for collisions
- The time order of keys must not be guessable easily (as the id is often leaked via references which could 'betray' not just the existence of a document, but also its relative creation time wrt others).
- It should be easy to document how any client can safely generate document ids.
The lookup performance is not really such a big deal for us. Where it is we can do a projection into a more simple format where applicable.
> One misconception about UUIDs is that they’re secure. However, the RFC describes that they shouldn’t be considered secure “capabilities.”
> From RFC 41221 Section 6 Security Considerations:
> Do not assume that UUIDs are hard to guess; they should not be used as security capabilities
This is just wrong, and the citation doesn't support it. You're not guessing a 122-bit long random identifier. What's crazy is that the article, immediately prior to this, even cites the very math involved in showing exactly how unguessable that is.
… the linked citation (to §4.4, which is different from the in-prose citation) is just about how to generate a v4, and completely unrelated to the claim. The prose citation to §6 is about UUIDs generally: the statement "Do not assume that [all] UUIDs are hard to guess" is not logically inconsistent with properly-generated UUIDv4s being hard to guess. A subset of UUIDs have security properties, if the system generating & using them implements those properties, but we should not assume all UUIDs have that property.
Moreover, replacing an unguessable UUID with an (effectively random) 32-bit integer does make it guessable, and the scheme laid out seems completely insecure if it is to be used in the contexts one finds UUIDv4s being an unguessable identifier.
The additional size argument is pretty weak too; at "millions of rows", a UUID column is consuming an additional ~24 MiB.
Sometimes it's nice for your PK to be uniformly distributed as a reader, even if it hurts as a writer. For instance, you can easily shard queries and workloads.
> the impact to inserts and retrieval of individual items or ranges of values from the index.
It's not just Postgres or even OLTP. For example, if you have an Iceberg table with SCD2 records, you need to regularly locate and update existing records. The more recent a record is, the more likely it is to be updated.
If you use UUIDv7, you can partition your table by the key prefix. Then the bulk of your data can be efficiently skipped when applying updates.
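A sketch of deriving such a partition value, relying only on the UUIDv7 layout (the first 48 bits are milliseconds since the Unix epoch):

    import uuid
    from datetime import datetime, timezone

    def uuid7_day(u: uuid.UUID) -> str:
        ms = int.from_bytes(u.bytes[:6], "big")  # 48-bit millisecond prefix
        return datetime.fromtimestamp(ms / 1000, tz=timezone.utc).strftime("%Y-%m-%d")

Partitioning on something like uuid7_day(id) means an update batch for recent records only has to open recent partitions.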
The space requirement and index fragmentation issue is nearly the same no matter what kind of relational database you use. Math is math.
Just the other day I delivered significant performance gains to a client by converting ~150 million UUIDv4 PKs to good old BIGINT. They were using a fairly recent version of MariaDB.
If they can live with making keys only in one place, then sure, this can work. If however they need something that is very highly likely unique, across machines, without the need to sync, then using a big integer is no good.
if they can live with MariaDB, OK, but I wouldn't choose that in the first place these days. Likely Postgres will also perform better in most scenarios.
Yeah, they had relatively simple requirements so BIGINT was a quick optimization. MariaDB can guarantee uniqueness of auto-incrementing integers across a cluster of several servers, but that's about the limit.
Had the requirements been different, UUIDv7 would have worked well, too, because fragmentation is the biggest problem here.
Personally my approach has been to start with big-ints and add a GUID code field if it becomes necessary. And then provide imports where you can match objects based on their code, if you ever need to import/export between tenants, with complex object relationships.
- If you use uuids as foreign keys to another table, it’s obvious when you screw up a join condition by specifying the wrong indices. With int indices you can easily get plausible looking results because your join will still return a bunch of data
- if you’re debugging and need to search logs, having a simple uuid string is nice for searching
A fun trick I pulled was generating UUID-like ids. We can all identify a UUIDv4 most of the time by looking at one. "Ah, a uuid," we say to ourselves. A little over a decade ago I was working on a massive cloud platform, and rather than generate string keys like the author above suggested (int -> binary -> base62 str), we opted for a more "clever" approach.
The UUID is 128bits. The first 64bits are a java long. The last 64bits are a java long. Let's just combine the Tenant ID long with a Resource ID long to generate a unique id for this on our platform. (worked until it didn't).
yeah, the problem for us was the resource id. What id was it? Was it a post? an upload? a workspace? it wasn't nearly as descriptive as we needed it to be.
I really hoped the author would discuss alternatives for distributed databases that write in parallel. A sequential key would be atrocious in such circumstances; it could kill the whole gain of a distributed database, as hotspots would inevitably appear.
I would like to hear from others using, for example, Google Spanner: do you have issues with UUIDs? I don't for now; most optimizations happen at the Controller level, and data transformation can be slow due to validations. Try to keep service logic as straightforward as possible.
Hi, a question for you folks. What if I don't like embedding a timestamp in the uuid, as v7 does? This could expose you to timing attacks in specific scenarios.
Also is it necessary to show uuid at all to customers of an API? Or could it be a valid pattern to hide all the querying complexity behind named identifiers, even if it could cost a bit in terms of joining and indexing?
The context is the classic B2B SaaS, but feel free to share your experiences even if it comes from other scenarios!
The counter-argument I would make is that having all these integer ids comes with many problems. You can't make 'em public cause they leak info. They are not unique across environments, meaning you have to spin up a lot of bs envs just to run it. But retros are for complaining about test envs, right?
"UUIDv4s are 128 bits, too big" is a bs argument. Such a made-up problem.
But a fair point is that one should use a sequential uuid to avoid fragmentation. One that has a time part.
Some additional cases we encounter quite often where UUIDs help:
- A client used to run our app on-premises and now wants to migrate to the cloud.
- Support engineers want to clone a client’s account into the dev environment to debug issues without corrupting client data.
- A client wants to migrate their account to a different region (from US to EU).
Merging data using UUIDs is very easy because ID collisions are practically impossible. With integer IDs, we'd need complex and error-prone ID-rewriting scripts. UUIDs are extremely useful even when the tables are small, contrary to what the article suggests.
Very useful article, thank you! Many people suggest CUID2, but it is less efficient and is better used for frontend/url encoding. For backend/db, only UUID v7 should be used.
You'll have to rip the ability to generate unique numbers from quite literally anywhere in my app and save them without conflict from my cold, dead hands.
The ability to know ahead of time what a primary key will be (in lieu of persisting it first, then returning) opened up a whole new world of architecting work in my app. It made a lot of previous awkward things feel natural.
The implication is that you need to know the PK ahead of time so that you can insert it into other tables which reference it as an FK without waiting for it to be returned, which further implies that you don’t have FK constraints, because the DB would disallow this.
Tbf in Postgres, you can declare FKs to be deferrable, so their existence is checked at transaction commit, rather than at insertion time.
If you don’t have the DB enforcing referential integrity, you need to be extremely careful in your application logic; IME, this inevitably fails. At some point, someone writes bad code, and you get data anomalies.
> Tbf in Postgres, you can declare FKs to be deferrable, so their existence is checked at transaction commit, rather than at insertion time.
I'm using EF core which hooks up these relationships and allows me to persist them in a single transaction using MSSQL server.
> If you don’t have the DB enforcing referential integrity
I'm building an electronic medical system. I'm well aware of the benefits of referential integrity.
If we embraced REST as Roy Fielding envisioned it, we wouldn't have this conversation, or any similar ones. REST doesn't expose identifiers; it only exposes relationships. Identifiers are an implementation detail.
Another interesting article from Feb-2024 [0] where the cost of inserting a uuid7() and a bigint is basically the same. To me it wasn't quite clear what the problem with the buffer cache is but the author makes it much more clear than OP's article:
> We need to read blocks from the disk when they are not in the PostgreSQL buffer cache. Conveniently, PostgreSQL makes it very easy to inspect the contents of the buffer cache. This is where the big difference between uuidv4 and uuidv7 becomes clear. Because of the lack of data locality in uuidv4 data, the primary key index is consuming a huge amount of the buffer cache in order to support new data being inserted – and this cache space is no longer available for other indexes and tables, and this significantly slows down the entire workload.
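The inspection the author refers to is the pg_buffercache extension. Roughly this query (wrapped in Python here; the DSN and the psycopg2 wrapper are illustrative, and the extension must be installed) shows which relations are occupying shared_buffers:

    import psycopg2

    SQL = """
        SELECT c.relname, count(*) AS buffers
        FROM pg_buffercache b
        JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
        WHERE b.reldatabase IN (0, (SELECT oid FROM pg_database
                                    WHERE datname = current_database()))
        GROUP BY c.relname
        ORDER BY buffers DESC
        LIMIT 10;
    """

    with psycopg2.connect("dbname=app") as conn, conn.cursor() as cur:
        cur.execute(SQL)
        for relname, buffers in cur.fetchall():
            print(f"{relname}: {buffers} x 8 KiB pages in shared_buffers")

Per the quote above, a v4 UUID PK index tends to show up near the top of that list, since inserts touch pages all over it, while a v7 index mostly keeps only its "hot" right edge cached.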
> Do not assume that UUIDs are hard to guess; they should not be used as security capabilities
The issue is that this is true for more or less all capability URLs. I wouldn't recommend UUIDs per se here; it's probably better to just use a random number. I have seen UUIDs used for this in practice, though, and those systems weren't compromised because of it.
I hate the tendency of password recovery flows, for example, to leave the URL valid for only 5 minutes. Of course these URLs need to have a limited lifetime, but mail isn't a real-time communication medium. There is very little security benefit from reducing it from 30 minutes to 5 minutes. You are not getting "securer" this way.
UUIDs make enumeration attacks harder and also prevent situations where seeing a high valid ID value lets you estimate how much money a private company is earning if they charge based on the object the ID is associated with. If you can sample enough object ID values and see when the IDs were created, you could reverse engineer their ARR chart and see whether they're growing or not which many companies want to avoid.
My advice is: Avoid Blanket Statements About Any Technology.
I'm tired of midwit arguments like "Tech X is N% faster than tech Y at performing operation Z. Since your system (sometimes) performs operation Z, it implies that Tech X is the only logical choice in all situations!"
It's an infuriatingly silly argument because operation Z may only represent about 10% of the total CPU usage of the whole system (averaged out)... So what is promoted as a 50% gain may in fact be a 5% gain when you consider it in the grand scheme of things... Negligible. If everyone was looking at this performance 'advantage' rationally; nobody would think it's worth sacrificing important security or operational properties.
I don't know what happened to our industry; we're supposed to be intelligent people but I see developers falling for these obvious logical fallacies over and over.
I remember back in my day, one of the senior engineers was discussing upgrading a python system and stated openly that the new version of the engine was something like 40% slower than the old version but he didn't even have to explain himself why upgrading was still a good decision; everybody in the company knew he was only talking about the code execution speed and everybody knew that this was a small fraction of the total.
Not saying UUIDv7 was a bad choice for Postgres. I'm sure it's fine for a lot of situations but you don't have to start a cult preaching the gospel of The One True UUID to justify your favorite project's decisions.
I do find it kind of sly though how the community decided to make this UUIDv7 instead of creating a new standard for it.
The whole point of UUID was to leverage the properties of randomness to generate unique IDs without requiring coordination. UUIDv7 seems to take things in a philosophically different path. People chose UUID for scalability and simplicity (both of which you get as a result of doing away with the coordination overhead), not for raw performance...
That's the other thing which drives me nuts; people who don't understand the difference between performance and scalability. People foolishly equate scalability with parallelism or concurrency; whereas that's just one aspect of it; scalability is a much broader topic. It's the difference between a theoretical system which is fast given a certain artificially small input size and one which actually performs better as the input size grows.
Lastly; no mention is made about the complex logic which has to take place behind the scenes to generate UUIDv7 IDs... People take it for granted that all computers have a clock which can produce accurate timestamps where all computers in the world are magically in-sync... UUIDv7 is not simple; it's very complicated. It has a lot of additional complexity and dependencies compared to UUIDv4. Just because that complexity is very well hidden from most developers, doesn't mean it's not there and that it's not a dependency... This may become especially obvious as we move to a world of robotics and embedded systems where cheap microchips may not have enough Flash memory to hold the code for the kinds of programs required to compute such elaborate IDs.
Yep. We have tables that use UUIDv4 that have 60M+ rows and don't have any performance problems with them. Would some queries be faster using something else? Probably, but again, for us it's not close to being a bottleneck. If it becomes a problem at 600M or 6B rows, we'll deal with it then. We'll probably switch to UUIDv7 at some point, but it's not a priority and we'll do some tests on our data first. Does my experience mean you should use UUIDv4? No. Understand your own system and evaluate how the tradeoffs apply to you.
I have tables that have billions of rows that use UUIDv4 primary keys and I haven't encountered any issues either. I do use UUIDv7 for write-heavy tables, but even then, I got a way bigger performance boost from batching inserts than switching from UUIDv4 to UUIDv7. Issue is way overblown.
Not really, no. They’re very convenient for certain problems and work really well in general. I’ve never had a performance issue where the problem boiled down to my use of UUID.
You never having seen the problem doesn't mean it never happens; I have dealt with a serious performance problem in the past that was due to excessive page fragmentation due to a GUID PK.
To your original point, these are heuristics; there isn't always time to dig into every little architectural decision, so having a set of rules of thumb on hand helps to preempt problems at minimal cognitive cost. "Avoid using a GUID as a primary key if you can" is one of mine.
A major one for me is preventing duplicate records.
If the client POSTs a new object to insert it into the database; if there is a connection failure and the client does not receive a success response from the server, the client cannot know whether the record was inserted or not without making an expensive and cumbersome additional read call to check... The client cannot simply assume that the insertion did not happen purely on the basis that they did not receive a success response. It could very well be that the insertion succeeded but the connection failed shortly after so response was not received. If the IDs are auto-incremented on the server and the client posts the same object again without any ID on it, the server will create a duplicate record in the database table (same object with a different ID).
On the other hand, if the client generates a UUID for the object it wants to create on the front-end, then it can safely resend that exact object any number of times and there is no risk of double-insertion; the object will be rejected the second time and you can show the user a meaningful error "Record was already created" instead of creating two of the same resource; leading to potential bugs and confusion.
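A minimal sketch of that pattern with Postgres and psycopg2 (table and helper names are illustrative): the client generates the UUID once and reuses it on every retry, and ON CONFLICT turns a resend into a detectable no-op:

    import json
    import uuid
    import psycopg2

    def create_order(conn, order_id: uuid.UUID, payload: dict) -> bool:
        """True if the row was inserted; False if this ID already existed."""
        with conn.cursor() as cur:
            cur.execute(
                "INSERT INTO orders (id, payload) VALUES (%s, %s) "
                "ON CONFLICT (id) DO NOTHING",
                (str(order_id), json.dumps(payload)),
            )
            return cur.rowcount == 1  # 0 on conflict -> "record was already created"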
Ehm.. so you're saying that INSERT ... RETURNING id is not atomic from the client's pov, because something terrible could happen just as the client is receiving the answer inside its SQL driver?
I'm actually more thinking about the client sitting on the front-end like a single page app. Network instability could cause the response to not reach the front-end after a successful insert. This wouldn't be extremely common but would definitely be a problem for you as the database admin if you have above a certain number of users. I've seen this issue on live production systems and the root cause of duplicate records can be baffling because of how infrequently it may happen. Tends to cause issues that are hard to debug.
I never understood the arguments against using globally unique ids. For example, how it somehow messes up indexes. I'm not a CS major, but those are typically b-trees, are they not? If you have a primary key whose generation is truly random, such that each number is equally likely, then that b-tree is always going to be balanced.
Yes, there are different flavors of generating them, with their own pros and cons, but at the end of the day it's just so much more elegant than some auto-incrementing crap your database creates. But that is just semantics; you can always change the uuid algorithm for future keys. And honestly, if you treat the uuid as some opaque entity (which you should), why not just pick the random one?
And I just thought of the argument that "but what if you want to sort the uuid…" say it's used for a list of stories or something? Well, again… if you treat the uuid as opaque, why would you sort it? You should be sorting on some other field, like the date field or title or something. UUIDs are opaque, damn it. You don't sort opaque data. "Well they get clustered weird," say people. Why are you clustering on a random opaque key? If you need certain data to be clustered, then do it on the right key (a user_id field if your data is to be clustered by user, say).
Letting the client generate the primary keys is really liberating. Not having to care about PK collisions or leaking information via auto incrementing numbers is great!
> If you have a primary key whose generation is truly random such that each number is equally likely, then that b-tree is always going to be balanced.
Balanced and uniformly scattered. A random index means fetching a random page for every item. Fine if your access patterns are truly random, but that's rarely the case.
> Why are you clustering on a random opaque key?
InnoDB clusters by the PK if there is one, and that can't be changed (if you don't have a PK, you have some options, but let's assume you have one). MSSQL behaves similarly, but you can override it. If your PK is random, your clustering will be too. In Postgres, you'll just get fragmented indexes, which isn't quite as bad, but still slows down vacuum. Whether that actually becomes a problem is also going to depend on access patterns.
One shouldn't immediately freak out over having a random PK, but should definitely at least be aware of the potential degradation they might cause.
Again, if you don't know the birth year of someone, you would have no way of knowing their age. I'm not sure that means that the general policy of putting a birthday into their ID number is flawed.
Many governments re-issue national IDs to the same person with different numbers, which is far less problematic than the many governments who choose to issue the same national ID to multiple individuals (looking at you, USA, with your SSN). It doesn't seem like a massive imposition on a person who was originally issued an ID based on an unknown birthday to be re-issued a new ID once their birthday is ascertained. Perhaps even give them a choice: keep the old one knowing it will cause problems, or take the new one and accept the responsibility of telling people their number has changed.
Presumably the governments that choose to embed the date into a national ID number do so because it's more useful for their purposes to do so than just assigning everyone a random number.
YY.MM.DD-AAA.BB
In either the AAA or BB component there is something about the gender.
But it does mean that there is a limit on the number of people born per day of a certain gender.
But for a given year, using a placeholder will only delay the inevitable. Sure, there are more numbers, but still a limited amount, as there are SOME parts that need to reflect reality: year, gender (if that's still the case?), etc.
If you know someone's birth date and gender, their INSZ is almost certainly one of about 500 numbers, with a heavy skew toward the lower AAA values. Luckily, you can't do much damage with someone's number, unlike a US SSN (but I'd still treat it as confidential).
> Norwegian PNs have your birth date (in DDMMYY format) as the first six digits.
You can already feel the disaster rising because some program always expects the latter.
And it doesn’t fix the problem, it just makes it less likely.
Random vs time biased uuids are not a decision to shave off ms that you will regret.
Most likely they will be a decision that shaves off seconds (yes, really - especially when you consider locality effects) and you'll regret nothing.
You can choose to never make use of that property. But it's tempting.
If the domain is modeling something like external events (in my case), and that external timestamp is packed into your primary key, and you support receiving events out of chronological order, then it just follows that you might insert stuff earlier than your latest record.
You're gonna have problems "backdating" if you mix up time of insertion with when the event you model actually occurred, like if you treat those as the same thing when they aren't.
I totally used uuidv7s as "inserted at" in a small project and I had methods to find records created between two timestamps that literally converted timestamps to uuidv7 values so I could do "WHERE id BETWEEN a AND b"
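Roughly what those helper methods boiled down to (a sketch): pad the 48-bit millisecond prefix with all-zero or all-one bytes to bracket every UUIDv7 in the interval, version and variant bits included:

    import uuid

    def uuid7_bound(ts_ms: int, upper: bool = False) -> uuid.UUID:
        fill = b"\xff" if upper else b"\x00"
        return uuid.UUID(bytes=ts_ms.to_bytes(6, "big") + fill * 10)

    # WHERE id BETWEEN %s AND %s
    lo = uuid7_bound(1_700_000_000_000)              # window start
    hi = uuid7_bound(1_700_086_400_000, upper=True)  # window start + 24h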
Hyrum's Law suggests that someone will.
What? The first 48 bits of an UUID7 are a UNIX timestamp.
Whether or not this is a meaningful problem or a benefit to any particular use of UUIDs requires thinking about it; in some cases it’s not to be taken lightly and in others it doesn’t matter at all.
I see what you’re getting at, that ignoring the timestamp aspect makes them “just better UUIDs,” but this ignores security implications and the temptation to partition by high bits (timestamp).
All you need is a guaranteed non-decreasing 48-bit number. A clock is one way to generate it, but I don't see why a UUIDv7 would become invalid if your clock is biased, runs too fast, too slow, or whatever. I would not count on the first 48 bits being a "real" timestamp.
Besides the UUIDv7 specification, that is? Otherwise you have some arbitrary kind of UUID.
> I would not count on the first 48 bits being a "real" timestamp.
I agree; this is the existential hazard under discussion which comes from encoding something that might or might not be data into an opaque identifier.
I personally don't agree as dogmatically with the grandparent post that extraneous data should _not_ be incorporated into primary key identifiers, but I also disagree that "just use UUIDv7 and treat UUIDs as opaque" is a completely plausible solution either.
The only promise of Unix timestamps is that they never go back, always increase. This is a property of a sequence of UUIDs, not any particular instance. At most, one might argue that an "utterly valid" UUIDv7 should not contain a timestamp from far future. But I don't see why it can't be any time in the past, as long as the timestamp part does not decrease.
The timestamp aspect may be a part of an additional interface agreement: e.g. "we guarantee that this value is UUIDv7 with the timestamp in UTC, no more than a second off". But I assume that most sane engineers won't offer such a guarantee. The useful guarantee is the non-decreasing nature of the prefix, which allows for sorting.
I don't agree with the absolute statement, though. Permanent identifiers should not generally carry data, but there are situations where you want to have a way to reconcile, or you have space or speed constraints, so you may accept the trade-off, md5 your data, and store it in a primary index as a UUID. Your index will fragment and thus you will vacuum, but life will still be good overall.
The solution is not to come up with yet another artificial identifier but to come up with better means of identification taking into account the fact that things change.
The identifier is still connected to the user's data, just through the appropriate other fields in the table as opposed to embedded into the identifier itself.
> So, what happens next is that the real world tries to adjust and the "data-less" identifier becomes a real world artifact. The situation becomes the same but worse (eg. you don't exist if you don't remember your social security id). In extreme cases people are tattooed with their numbers.
Using a random UUID as primary key does not mean users have to memorize that UUID. In fact in most cases I don't think there's much reason for it to even be exposed to the user at all.
You can still look up their data from their current email or phone number, for instance. Indexes are not limited to the primary key.
> The solution is not to come up with yet another artificial identifier but to come up with better means of identification taking into account the fact that things change.
A fully random primary key takes into account that things change - since it's not embedding any real-world information. That said I also don't think there's much issue with embedding creation time in the UUID for performance reasons, as the article is suggesting.
So what is such an identifier for? Is it only for some technical purposes (like replication etc.)?
Why bother with UUID at all then for internal identifiers? Sequence number should be enough.
UUIDs or other similarly randomized IDs are useful because they don't include any ordering information or imply anything about significance, which is a very safe default despite the performance hits.
There certainly are reasons to avoid them and the article we're commenting on names some good ones, at scale. But I'd argue that if you have those problems you likely have the resources and experience to mitigate the risks, and that true randomly-derived IDs are a safer default for most new systems if you don't have one of the very specific reasons to avoid them.
Not for me :)
"Internal" means "not exposed outside the database" (that includes applications and any other external systems)
May also be that you don't want to leak information like how many orders are being made, as could be inferred from a `/fetch_order?id=123` API with sequential IDs.
Sequential primary keys are still commonly used, though; it's a scenario-dependent trade-off.
> > Using a random UUID as primary key does not mean users have to memorize that UUID. [...]
> So what is such an identifier for? [...] Why bother with UUID at all then for internal identifiers?
The context, that you're questioning what they're useful for if not for use by the user, suggests that "internal" means the complement. That is, IDs used by your company and software, and maybe even API calls the website makes, but not anything the user has to know.
Otherwise, if "internal" was intended to mean something stricter (only used by a single non-distributed database, not accessed by any applications using the database, and never will be in the future), then my response is just that many IDs are neither internal in this sense nor intended to be memorized/saved by the user.
I think artificial and data-less identifiers are the better means of identification that takes into account that things change. They don't have to be the identifier you present to the world, but having them is very useful.
E.g. phone numbers are semi-common identifiers now, but phone numbers change owners for reasons outside of your control. If you use them as an internal identifier, changing them between accounts gets very messy because now you don't have an identifier for the person who used to have that phone number.
It's much cleaner and easier to adapt if each person gets an internal context-less identifier and you use their phone number to convert from their external ID/phone number to an internal ID. The old account still has an identifier, there's just no external identifier that translates to it. Likewise if you have to change your identifier scheme, you can have multiple external IDs that translate to the same internal ID (i.e. you can resolve both their old ID and their new ID to the same internal ID without insanity in the schema).
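A minimal sketch of that indirection (sqlite3 for brevity; the schema names are mine):

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE external_id (
            kind      TEXT NOT NULL,   -- 'phone', 'email', ...
            value     TEXT NOT NULL,
            person_id INTEGER NOT NULL REFERENCES person(id),
            PRIMARY KEY (kind, value)
        );
    """)
    db.execute("INSERT INTO person VALUES (1, 'Alice'), (2, 'Bob')")
    db.execute("INSERT INTO external_id VALUES ('phone', '+15551234', 1)")

    # The phone number changes hands: re-point one mapping row. Person 1 keeps
    # internal ID 1 and all history attached to it; only the external mapping moves.
    db.execute("UPDATE external_id SET person_id = 2 "
               "WHERE kind = 'phone' AND value = '+15551234'")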
If the only reason you need a surrogate key is to introduce indirection in your internal database design then sequence numbers are enough. There is no need to use UUIDs.
The whole discussion is about externally visible identifiers (ie. identifiers visible to external software, potentially used as a persistent long-term reference to your data).
> E.g. phone numbers are semi-common identifiers now, but phone numbers change owners for reasons outside of your control. If you use them as an internal identifier, changing them between accounts gets very messy because now you don't have an identifier for the person who used to have that phone number.
Introducing surrogate keys (regardless of whether UUIDs or anything else) does not solve any problem in reality. When I come to you and say "My name is X, this is my phone number, this is my e-mail, I want my GDPR records deleted", you still need to be able to find all data that is related to me. Surrogate keys don't help here at all. You either have to be able to solve this issue in the database or you need to have an oracle (ie. a person) that must decide ad-hoc what piece of data is identified by the information I provided.
The key issue here is that you try to model identifiable "entities" in your data model, while it is much better to model "captured information".
So in your example there is no "person" identified by "phone number" but rather "at timestamp X we captured information about a person at the time named Y and using phone number Z". Once you start thinking about your database as structured storage of facts that you can use to infer conclusions, there is much less need for surrogate keys.
This is so needlessly complex that you contradicted yourself immediately. You claim there is no “person” identified but immediately say you have information “about a person”. The fact that you can assert that the information is about a person means that you have identified a person.
Clearly tying data to the person makes things so much easier. I feel like attempting to do what you propose is begging to mess up GDPR erasure.
> “So I got a request from a John Doe to erase all data we recorded for them. They identified themselves by mailing address and current phone number. So we deleted all data we recorded for that phone number.”
> “Did you delete data recorded for their previous phone number?”
> “Uh, what?”
The stubborn refusal to create a persistent identifier makes your job harder, not easier.
The UUID would be an example of an external key (e.g. to prevent keys from being easy to crawl). This article mentions a few reasons why you may later decide there are better external keys.
> When I come to you and say "My name is X, this is my phone number, this is my e-mail, I want my GDPR records deleted", you still need to be able to find all data that is related to me.
How are you going to trace all those records if the requester has changed their name, phone number and email since they signed up if you don't have a surrogate key? All 3 of those are pretty routine to change. I've changed my email and phone number a few times, and if I got married my name might change as well.
> Once you start thinking about your database as structured storage of facts that you can use to infer conclusions, there is much less need for surrogate keys.
I think that spirals into way more complexity than you're thinking. You get those timestamped records about "we got info about person named Y with phone number Z", and then person Y changes their phone number. Now you're going to start getting records from person named Y with phone number A, but it's the same account. You can record "person named Y changed their phone number from Z to A", and now your queries have to be temporal (i.e. know when that person had what phone number). You could back-update all the records to change Z to A, but that breaks some things (e.g. SMS logs will show that you sent a text to a number that you didn't send it to).
Worse yet, neither names nor phone numbers uniquely identify a person, so it's entirely possible to have records saying "person named Y and phone number Z" that refer to different people if a phone number transfers from a John Doe to a different person named John Doe.
I don't doubt you could do it, but I can't imagine it being worth it. I can't imagine a way to do it that doesn't either a) break records by backdating information that wasn't true back then, or b) require repeated/recursive querying that will hammer the DB (e.g. if someone has had 5 phone numbers, how do you get all the numbers they've had without pulling the latest one to find the last change, then the one before that, and so on). Those queries are incredibly simple with surrogate keys: "SELECT * FROM phone_number_changes WHERE user_id = blah".
So we are talking about "external" keys (ie. visible outside the database). We are back to square one: externally visible surrogate keys are problematic because they are detached from real world information they are supposed to identify and hence don't really identify anything (see my example about GDPR).
It does not matter if they are random or not.
> How are you going to trace all those records if the requester has changed their name, phone number and email since they signed up if you don't have a surrogate key?
And how does surrogate key help? I don't know the surrogate key that identifies my records in your database. Even if you use them internally it is an implementation detail.
If you keep information about the time information was captured, you can at least ask me "what was your phone number the last time we interacted, and when was that?"
> I think that spirals into way more complexity than you're thinking.
This complexity is there whether you want it or not and you're not going to eliminate it with surrogate keys. It has to be explicitly taken care of.
DBMSes provide means to tackle this essential complexity: bi-temporal extensions, views, materialized views etc.
Event sourcing is a somewhat convoluted way to attack this problem as well.
> Those queries are incredibly simple with surrogate keys: "SELECT * FROM phone_number_changes WHERE user_id = blah".
Sure, but those queries are useless if you just don't know user_id.
Public-sector IT systems may use the ID and rely on it not changing.
Private-sector IT systems can't look up people by their ID, but only use the social security number for comparisons and lookups, e.g. for wiping records in GDPR "right to be forgotten" situations. Social security numbers are sort of useful for that purpose because they are printed on passports, driver's licenses and the like. And they are a problem w.r.t. identity theft, and shouldn't ever be used as an authenticator (we have better methods for that). The person ID isn't useful for identity theft, since it's only used between authorized contexts (disregarding Byzantine scenarios with rogue public-sector actors!). You can't social engineer your way to personal data using that ID (save for a few movie-plot scenarios).
So what is internal in this case? The person id is indeed internal to the public sector's it-systems, and useful for tracking information between agencies. They're not useful for Bob or Alice. (They ARE useful for Eve, or other malicious inside actors, but that's a different story, which realistically does require a much higher level of digital maturity across the entire society)
The surrogate key's purpose isn't to directly store the natural key's information, rather, it's to provide an index to it.
> The solution is not to come up with yet another artificial identifier but to come up with better means of identification taking into account the fact that things change.
There isn't 'another' - there's just one. The surrogate key. The other pieces of information you're describing are not the means of indexing the data. They are the pieces of data you wish to retrieve.
That's my whole point: either X becomes a "real world artifact" or it is useless as an identifier.
I guess you can assign 3-4 bits for identifier version number as well.
And yes - for long living data dealing with compatibility issues is inevitable so you have to take that into account from the very beginning.
In my personal design sense, I have found that avoiding generality actually helps my code last longer (it rests on more concrete ideas) and makes it easier to change when those days come.
If you take the gender example: for 99% of people it is male/female and it won't change, and you can use that for load balancing. But if later you find out that the gender is not the one you expect for that bucket, no big deal, it will cause a branch misprediction; instead of happening 50% of the time as it would with a random value, it will only happen 1% of the time: a significant speedup with no loss in functionality.
As long as you're not in China or India around specific years ...
GP's point stands strong.
Specifically, if your database is small, the performance impact is probably not very noticeable. And if your database is large (eg. to the extent primary keys can't fit within 32-bit int), then you're actually going to have to think about sharding and making the system more distributed... and that's where UUID works better than auto-incrementing ints.
* I think there is a wide range in the middle where your database can fit on one machine if you do it well, but it's worth optimizing to use a cheaper machine and/or extend the time until you need to switch to a distributed db. You might hit this middle range soon enough (and/or it might be a painful enough transition) that it's worth thinking about it ahead of time.
* If/when you do switch to a distributed database, you don't always need to rekey everything:
** You can bypass/defeat the existing keys' locality via hashing on lookup or reversing bits. Some databases (e.g. DynamoDB) actually force this.
** Allocating new ids in the old way could be a big problem, but there are ways out. You might be able to switch allocation schemes entirely without clients noticing if your external keys are sufficiently opaque. If you went with UUIDv7 (which addresses some but not all of the article's points), you can just keep using it. If you want to keep using dense(-ish), (mostly-)sequential bigints, you can amortize the latency by reserving blocks at a time (see the sketch below).
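As a sketch of the block-reservation idea, in Python, with `reserve_block` standing in for however your database grants a range (e.g. a sequence incremented by the block size); this is an assumption for illustration, not any particular database's API:

```python
import threading

class BlockAllocator:
    """Hands out mostly-sequential bigints, reserving a block at a
    time from a shared sequence to amortize coordination latency."""

    def __init__(self, reserve_block, block_size=1000):
        self._reserve = reserve_block  # returns the first id of a fresh block
        self._size = block_size
        self._lock = threading.Lock()
        self._next = 0
        self._end = 0

    def next_id(self) -> int:
        with self._lock:
            if self._next >= self._end:
                # One coordination round trip per block, not per id
                self._next = self._reserve(self._size)
                self._end = self._next + self._size
            nid = self._next
            self._next += 1
            return nid
```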
I think IDs should not carry information. Yes, that also means I think UUIDv7 was wrong to squeeze a creation date into their ID.
Isn't that clear enough?
So, the fact that there is a date in the uuidv7 does not extend any meaning or significance to the record outside of the database. To infer such a relationship where none exists is the error.
I bet people will extract that date and use it, and it's hard to imagine a use that wouldn't be abuse. To take the example of a PN/SSN and the usual gender bit: do you really want anyone to be able to tell that you got a new ID at that time? What could you suspect if a person born in 1987 got a new PN/SSN around 2022?
Leaks like that, bypassing whatever access control you have in your database, are just one reason to use truly random IDs. But it's a pretty good one in itself.
Thank you for spelling it out for me. For the readers: it leaks the information that the person is likely not a natural-born citizen. The assumption doesn't have to be a hundred percent accurate; there is a way to make that assumption and possibly hold it against you.
And there are probably a million ways that a record-creation date could be held against you. If they don't put it in writing, how will you prove they discriminated against you?
Thinking... I don't have a good answer to this. If data exists, people will extract meaning from it whether rightly or not.
> The only rules that really matter are these: what a man can do and what a man can't do.
When evaluating security matters, it's better to strip off the moral valence entirely ("rightly") and only consider what is possible given the data available.
Another potential concerning implication besides citizenship status: a person changed their id when put in a witness protection program.
Pretty sure sorting and filtering them by date/time range in a database is the purpose.
What? This is definitely not the case and can’t be because B-tree nodes change while UUIDs do not.
It is easy to have strong opinions about things you are sheltered from the consequences of.
This assumption that you can query across IDs is exactly what is being cautioned against. As soon as you do that, you are taking a dependency on an implementation detail. The contract is that you get a UUID, not that you get 48 bits of timestamp. There are 8 different UUID types, and even v7 has more than one variant.
And when working with very large datasets, there are very significant downsides to large, completely random IDs (which is of course what the OP is about).
I'm not a normalization fanatic, but we're only talking about 1NF here.
I don't think I should ignore what I already know and intentionally pessimize the first draft in the name of avoiding premature optimization.
A running number also carries data. Before you know it, someone's relying on the ordering or counting on there not being gaps - or counting the gaps to figure out something they shouldn't.
For example, if https://github.com/pytorch/pytorch/issues/111111 can be seen but https://github.com/pytorch/pytorch/issues/111110 can't, someone might infer the existence of a hidden issue relating to a critical security problem.
Whereas if the URL was instead https://github.com/pytorch/pytorch/issues/761500e0-0070-4c0d... that risk would be avoided.
Let's clarify things.
The author argues against UUIDv4 as primary keys when compared to integers or bigints in large databases for performance reasons.
The examples you give refer to the common mistake of using a non-unique attribute that can be changed for a given entity as a primary key.
Do not expose your internal IDs. As simple as that.
This is simply not a meaningful statement. Any ID you expose externally is also an internal ID. Any ID you do not expose is internal-only.
If you expose data in a repeatable way, you still have to choose what IDs to expose, whether that’s the primary key or a secondary key. (In some cases you can avoid exposing keys at all, but those are narrow cases.)
The second ID has nothing to do with the internal structure of your data. It is just another field.
You can change your structure however you want (or type of your "internal" IDs) and you don't have to worry about an external consumer. They still get their artificial ID.
That’s a more reasonable statement but I still don’t agree. This feels like one of those “best practices” that people apply without thinking and create pointless complexity.
Don’t expose your primary key if there is a reason to separate your primary key from the externally-exposed key. If your primary key is the form that you want to expose, then you should just expose the primary key. e.g. If your primary key is a UUID, and you create a separate UUID just to expose publicly, you have most likely added useless complexity to your system.
I think it is also important to remember the purpose of specific numbers. For instance, I would argue a PN without the birthday would be strictly worse. With the current system (I only know the Swedish one, but assume it's the same) I only have to remember a 4-digit number (because the number is birth date + 4 unique digits). If we instead used completely random numbers, I would have to remember at least an 8-digit number (and to be future-proof you'd likely want at least 9 digits). Sure, that's fine for myself (although I suspect some people already struggle with it), but then I also have to remember the numbers for my 2 kids and my partner, and things quickly become annoying. Especially because one doesn't use the numbers often enough for them to become easy to remember, but still often enough that it becomes annoying to look them up, especially when one doesn't always carry their phone.
I guess that Norway has solved it in the same or a similar way as Sweden? So a person is identified by the PNR, and those systems that need to track a person across several PNRs (government agencies) use the PRI. And a PRI is just the first PNR assigned to a person with a 1 inserted in the middle. If that PRI is occupied, use a 2, and so on.
PRI could of course have been a UUID instead.
Did you read the article? He doesn’t recommend natural keys, he recommends integer-based surrogates.
> A prime example of premature optimization.
Disagree. Data is sticky, and PKs especially so. Moreover, if you’re going to spend time optimizing anything early on, it should be your data model.
> Don't make decisions you will regret just to shave off a couple of milliseconds!
A bad PK in some databases (InnoDB engine, SQL Server if clustered) can cause query times to go from sub-msec to tens of msec quite easily, especially with cloud solutions where storage isn’t node-local. I don’t just mean a UUID; a BIGINT PK on a 1:M can destroy your latency for the simple reason of needing to fetch a separate page for every record. If instead the PK is a composite of (<linked_id>, id) - e.g. (user_id, id) - where id is a monotonic integer, you’ll have WAY better data locality.
Postgres suffers a different but similar problem with its visibility map lookups.
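To make the composite-key layout concrete, here is a minimal sketch using sqlite3's `WITHOUT ROWID` tables, which cluster rows by primary key much like InnoDB does (table and column names are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# PK (user_id, id): one user's rows sit adjacent in the clustered
# structure, so fetching all of a user's orders touches few pages.
con.execute("""
    CREATE TABLE orders (
        user_id INTEGER NOT NULL,
        id      INTEGER NOT NULL,  -- monotonic per user
        total   REAL,
        PRIMARY KEY (user_id, id)
    ) WITHOUT ROWID
""")
con.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(42, i, 9.99) for i in range(3)])
print(con.execute("SELECT id FROM orders WHERE user_id = 42").fetchall())
```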
* integer keys are faster;
* uuidv7 keys are faster;
* if you want obfuscated keys, use integers and do your own obfuscation (!!!).
I can get on board with uuidv7 (with the trade-off, of course, of stronger guessability). The integer-keys argument is strange: at that point, you need to come up with a custom-built system to avoid ID collisions in a distributed system, and all to achieve only a 2x saving (the absolute minimum you should do is 64-bit keys). A very puzzling suggestion, and to me very wrong.
Note that in this entire article, the recommendation is not about using natural keys (email address, some composite of user identification etc.), so I am skipping that whole discussion.
I am not a cryptographer, but I would want his recommendation reviewed by a cryptographer. And then I would have to implement it. UUIDs have been extensively reviewed by cryptographers, I have a variety of excellent implementations I can use, I know they solve the problem well. I know they can cause performance issues; they're a security feature that is easy to implement, and I can deal with the performance issues if and when they crop up. (Which, in my experience, is unusual. Even at a large company, most databases I encounter do not have enough data. I will err on the side of security until it becomes a problem, which is a good problem to have.)
The reason why in a lot of contexts you use UUIDs is when you have a distributed system where you want your client to decide the ID that is then stored in multiple systems that don't communicate. This is surely a valid scenario for random UUIDs.
To me the rule is: use a UUID as the customer-facing ID for things that have to have an identity (e.g. a user, an order, etc.) and expose it publicly through APIs; use integer IDs as internal identifiers that create relations between entities, and keep internal IDs private. That way the more efficient numeric IDs remain inside the database and are used for joining data, while the UUID is used only for accessing the object from an API (for example); internally, when joining (where you have to deal with a lot of rows), you use the more efficient numeric ID.
By the way, I think the habit of "using UUIDs" came from NoSQL databases, where you surely use a UUID, but also don't have to join data. People then transposed a best practice from one scenario to SQL, where it's not really a best practice...
> I don’t have to think about it
And here we have the main problem of most DB issues I deal with on a daily basis - someone didn’t want to think about the implications of what they were doing, and it’s suddenly then my emergency because they have no idea how to address it.
I think you're attacking a straw man. The article doesn't say "instead of UUIDv4 primary keys, use keys such as birthdays with exposed semantic meaning". On the contrary, they have a section about how to use sequence numbers internally but obfuscated keys externally. (Although I agree with dfox's and formerly_proven's comments [1, 2] that XOR method they proposed for this is terrible. Reuse of a one-time pad is probably the most basic textbook example of bad cryptography. They referred to the values as "obfuscated" so they probably know this. They should have just gone with a better method instead.)
[1] https://news.ycombinator.com/item?id=46272985
[2] https://news.ycombinator.com/item?id=46273325
https://news.ycombinator.com/item?id=46276995
https://news.ycombinator.com/item?id=46273798
> A running number also carries data. Before you know it, someone's relying on the ordering or counting on there not being gaps - or counting the gaps to figure out something they shouldn't.
Making the IDs opaque prevents that.
They also describe this as a "premature optimization". That's half-right: it's an optimization. Having the data to support an optimization, and focusing on optimizing things that are hard to migrate later, is not premature.
Same with Austrian social security numbers, which in some cases don't contain the person's birth date, and in some cases don't contain any existing date at all.
Yet many websites enforce a valid date and pull the person's birth date from it...
Someone should have told Julius Caesar and Gregory XIII that :-p
It's workload-specific, too. If you want to list ranges of them by PK, then of course random isn't going to work. But then you've got competing tensions: listing a range wants the things you list to be on the same shard, but focusing a workload on one shard undermines horizontal scale. So you've got to decide what you care about (or do something more elaborate).
This is just another case of keys containing information and is not smart.
The obvious solution is to have a field that drives distribution, allowing rebalancing or whatever.
As a consumer of these databases we're stuck with them as designed, which means we have to worry about key distribution.
Edit: What the article said: > The kinds of web applications I’m thinking of with this post are monolithic web apps, with Postgres as their primary OLTP database.
So you are correct that this does not disqualify distributed databases.
Now this doesn't work if you actually have enough data that the randomness of the UUIDv4 keys is a practical database performance issue, but I think you really have to think long and hard about every single use of identifiers in your application before concluding that v7 is the solution. Maybe v7 works well for some things (e.g. identifiers for resources where creation times are visible to all with access to the resource) but not others (such as users or orgs which are publicly visible but without publicly visible creation times).
I've read people suggest using a UUIDv7 as the primary key and a UUIDv4 as a user-visible one as a remedy.
My first thought when reading the suggestion was, "well but you'll still need an index on the v4 IDs, so what does this actually get you?" But the answer is that it makes joins less expensive; you only require the index once, when constructing the query from the user-supplied data, and everything else operates with the better-for-performance v7 IDs.
To be clear, in a practical sense, this is a bit of a micro-optimization; as far as I understand it, this really only helps you by improving the data locality of temporally-related items. So, for example, if you had an "order items" table, containing rows of a bunch of items in an order, it would speed up retrieval times because you wouldn't need to do as many index traversals to access all of the items in a particular order. But on, say, a users table (where you're unlikely to be querying for two different users who happen to have been created at approximately the same time), it's not going to help you much. Of course the exact same critique is applicable to integer IDs in those situations.
Although, come to think of it, another advantage of a user-visible v4 with a v7 PK is that you could use a different index type on the v4 ID. Specifically, I would think that a hash index for the user-visible v4 might be a halfway-decent way to go.
I'm still not sure either way if I like the idea, but it's certainly not the craziest thing I've ever heard.
See perhaps "UUIDv47 — UUIDv7-in / UUIDv4-out (SipHash‑masked timestamp)":
* https://github.com/stateless-me/uuidv47
* Sept 2025: https://news.ycombinator.com/item?id=45275973
UUIDv47 might have a place if you need keys generated on multiple backend servers without synchronization. But it feels very niche to me.
1: https://wiki.postgresql.org/wiki/XTEA_(crypt_64_bits)
For one example, say you were making voting-booth software. You really don't want a (hidden) timestamp attached to each vote (much less an incrementing id) because that would break voter confidentiality.
More generally, it's more a underlying principle of data management. Not leaking ancillary data is easier to justify than "sure we leak the date and time of the record creation, but we can't think of a reason why that matters."
Personally I think the biggest issue are "clever" programmers who treat the uuid as data and start displaying the date and time. This leads to complications ("that which is displayed, the customer wants to change"). It's only a matter of time before someone declares the date "wrong" and it must be "fixed". Not to mention time zone or daylight savings conversions.
But there are cases where it matters. Using UUIDv7 for identifiers means you need to carefully consider the security and privacy implications every time you create a new table identified by a UUID, and you'll possibly end up with some tables where you use v4 and some where you use v7. Worst case, you'll end up with painful migrations from v7 to v4 as security review identifies timestamped identifiers as a security concern.
This doesn’t even rely on your system’s built-in RNG being low quality. It could be audited and known to avoid such issues but you could have a compromised compiler or OS that injects a doctored RNG.
However, because the online ordering system assigned order numbers sequentially, it would have been trivial for that company to determine how important their business was.
For example, over the course of a month, they could order something at the start of the month and something at the end of the month. That would give them the total number of orders in that period. They already know how many orders they have placed during the month, so company_orders / total_orders = percentage_of_business
It doesn't even have to be accurate, just an approximation. I don't know if they figured out that they could do that but it wouldn't surprise me if they had.
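The arithmetic behind that inference is trivial; a sketch with made-up numbers:

```python
# Order IDs observed by placing one order at the start of the month
# and one at the end (hypothetical values):
first_id, last_id = 100_450, 103_450
total_orders = last_id - first_id       # ~3,000 orders that month
company_orders = 600                    # orders they placed themselves
print(f"{company_orders / total_orders:.0%}")  # ~20% of the business
```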
Yes of course everyone should check and unit test that every object is owned by the user or account loading it, but demanding more sophistication from an attacker than taking "/my_things/23" and loading "/my_things/24" is a big win.
(What do you think Youtube video IDs are?)
I shared this article a few weeks ago, discussing the problems with this kind of approach: https://notnotp.com/notes/do-not-encrypt-ids/
I believe it can make sense in some situations, but do you really want to implement such crypto-related complexity?
That being said, while fine for obfuscation, it should not be used for security for this purpose, e.g. hidden/unlisted links, confirmation links and so on. Those should use actual, long-ish random keys for access, because the inability to enumerate them is a security feature.
I actually have no idea. What are they?
(Also what is the format of their `si=...` thing?)
The ability to rapidly shard everything can be extremely valuable. The difference between "we can shard on a dime" and "sharding will take a bunch of careful work" can be expensive. If the company has poor margins, this can be the difference between "can scale easily" and "we're not getting this investment".
I would argue that if your folks have the technical chops to shard while avoiding surrogate guaranteed-unique keys, great. But if they don't... a UUID on every table can be a massive get-out-of-jail-free card, and for many companies this is much, much more important than some minor space and time optimizations in the DB.
Worth thinking about.
I'm sure every DBA has a war story that starts with a similar decision in the past.
In our case, we don't want database IDs in an API and in URLs. When IDs are sequential, it enables things like dictionary attacks and provides estimates about how many customers we have.
Encrypting a database ID makes it very obvious when someone is trying to scan, because the UUID won't decrypt. We don't even need a database round trip.
* How do you manage the key for encrypting IDs? Injected to app environment via envvar? Just embedded in source code? I ask this because I'm curious as to how much "care" I should be putting in into managing the secret material if I were to adopt this scheme.
* Is the ID encrypted using AEAD scheme (e.g. AES-GCM)? Or does the plain AES suffice? I assume that the size of IDs would never exceed the block size of AES, but again, I'm not a cryptographer so not sure if it's safe to do so.
But the author does not say timestamp ordering, he says ordering. I think he actually means and believes that there is some problem ordering UUIDv4.
But if you need UUID-based lookup, then you might as well have it as a primary key, as that will save you an extra index on the actual primary key. If you also need a date and the remaining bits in UUIDv7 suffice for randomness, then that is a good option too (though this does essentially amount to having a composite column made up of datetime and randomness).
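To illustrate the "datetime plus randomness" composite: a rough Python sketch of a UUIDv7-shaped value built by hand (newer Pythons may ship a `uuid.uuid7()`; here the version/variant bits are set manually):

```python
import os, time, uuid

def uuid7_like() -> uuid.UUID:
    ms = time.time_ns() // 1_000_000           # 48-bit millisecond timestamp
    b = bytearray(ms.to_bytes(6, "big") + os.urandom(10))
    b[6] = (b[6] & 0x0F) | 0x70                # version 7
    b[8] = (b[8] & 0x3F) | 0x80                # RFC variant bits
    return uuid.UUID(bytes=bytes(b))

print(uuid7_like())   # sorts (roughly) by creation time
```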
Just to complement this with a point: there isn't any mainstream database management system out there that is distributed in the sense that it requires UUIDs to generate its internal keys.
There exist some you can find on the internet, and some institutions have internal systems that behave this way. But as a near-universal rule, the thing people know as a "database" isn't distributed in this sense, and if key generation is done inside the database, you don't need them.
Auto-incrementing keys can work, but what happens when you run out of integers? Also, distributed dbs probably make this hard, and they can't generate a key on client.
There must be something in Postgres that wants to store the records in PK order, which, while it could be an okay default, I'm pretty sure you can change, as this isn't great for write-heavy workloads.
Accessing data in totally random locations can be a performance issue.
Depends on lots of things ofc but this is the concern when people talk about UUID for primary keys being an issue.
Values of the same type can be sorted if an order is defined on the type.
It's also strange to contrast "random values" with "integers". You can generate random integers, and they have a "sorting" (depending on what that means though)
The power and main purpose of UUIDs is to act as easy to produce, non-conflicting references in distributed settings. Since the scope of TFA is explicitly set to be "monolithic web apps", nothing stops you from having everything work with bigint PKs internally, and just add the UUIDs where you need to provide external references to rows/objects.
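A sketch of that split, with sqlite3 standing in for the real database (schema is illustrative): the integer id does all the joining, and the UUID exists only to be handed out.

```python
import sqlite3, uuid

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE users (
        id        INTEGER PRIMARY KEY,   -- internal: joins, foreign keys
        public_id TEXT NOT NULL UNIQUE,  -- external reference only
        name      TEXT
    )
""")
pid = str(uuid.uuid4())
con.execute("INSERT INTO users (public_id, name) VALUES (?, ?)",
            (pid, "alice"))

# External callers resolve public_id -> id once; everything else joins on id.
internal_id = con.execute("SELECT id FROM users WHERE public_id = ?",
                          (pid,)).fetchone()[0]
```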
PlanetScale wrote up a really good article on why incrementing primary keys are better for performance when compared to randomly inserted primary keys; when it comes to b-tree performance. https://planetscale.com/blog/btrees-and-database-indexes
Integer PKs were seen as fine for years - decades, even - before the rise of UUIDs.
If you're using latest version of PG, there is a plugin for it.
That's it.
"Recommendation: Stick with sequences, integers, and big integers"
After that then, yes, UUIDv7 over UUIDv4.
This article is a little older. PostgreSQL didn't have native support so, yeah, you needed an extension. Today, PostgreSQL 18 is released with UUIDv7 support... so the extension isn't necessary, though the extension does make the claim:
"[!NOTE] As of Postgres 18, there is a built in uuidv7() function, however it does not include all of the functionality below."
What those features are and if this extension adds more cruft in PostgreSQL 18 than value, I can't tell. But I expect that the vast majority of users just won't need it any more.
Once you encode the shard number into the ID, you get:
- instantly* know which shard to query
- each shard has its own ticker
* programmatically, maybe visually as well depending on implementation
I had IDs that encoded: entity type (IIRC 4 bits?), timestamp, shard, sequence per shard. We even had an admin page where you could paste an ID and it would decode it.
id % n is fine for caching because you can just throw the whole thing away and repopulate, or when 'n' never changes, but it usually does.
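Decoding such an ID is just bit twiddling; a Python sketch with made-up field widths (not the poster's actual layout):

```python
def decode_id(n: int) -> dict:
    # Hypothetical layout: 4-bit entity type | 40-bit timestamp
    # (ms since a service-specific epoch) | 8-bit shard | 12-bit sequence
    return {
        "sequence":   n        & 0xFFF,
        "shard":     (n >> 12) & 0xFF,
        "timestamp": (n >> 20) & ((1 << 40) - 1),
        "entity":    (n >> 60) & 0xF,
    }

eid = (3 << 60) | (123_456_789 << 20) | (7 << 12) | 42
print(decode_id(eid))  # {'sequence': 42, 'shard': 7, ...}
```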
And for those using ints as keys... you'd be surprised how many databases in the wild won't come close to consuming that many IDs or are for workloads where that sort of volume isn't even aspirational.
Now, to be fair, I'm usually in the UUID camp and am using UUIDv7 in my current designs. I think the parent article makes good points, but I'm after a different set of trade-offs where UUIDs are worth their overhead. Your mileage and use-cases may vary.
UUIDv4 are great for when you add sharding, and UUIDs in general prevent issues with mixing ids from different tables. But if you reach the kind of scale where you have 2 billion of anything UUIDs are probably not the best choice either
0: https://bsky.app/profile/hugotunius.se/post/3m7wvfokrus2g
1: https://github.com/postgres/postgres/blob/master/src/backend...
https://github.com/blitss/typeid-postgres
[0] https://www.cybertec-postgresql.com/en/unexpected-downsides-...
https://www.postgresql.org/docs/current/functions-uuid.html
I was using 64-bit snowflake PKs (timestamp+sequence+random+datacenter+node) previously and made the switch to UUIDv7 for sortable, user-facing PKs. I'm more than fine letting the DB handle a 128-bit int over a 64-bit int if it means not having to make sure that the latest version of my snowflake function has made it to every DB, or that my snowflake server never hiccups, ever.
Most of the data that's going to be keyed with a uuid7 is getting served straight out of Redis anyway.
Are there valid reasons to use UUIDs (assuming they're used correctly) for primary keys? I know systems have incorrectly exposed primary keys to the public, but assuming that's not the concern: why use a UUID over a big-int?
UUIDv7 looks really promising but I'm not likely to redo all of our tables to use it.
Main reason I use it is the German Tank problem: https://en.wikipedia.org/wiki/German_tank_problem
(tl;dr; prevent someone from counting how many records you have in that table)
Someone may or may not have used the pattern to work out the attrition rate by running a simple script every month))
This way you avoid most of the issues highlighted in this article, without compromising your confidential data.
https://cloud.google.com/blog/products/databases/announcing-...
Sometimes I have to talk to legacy systems, all my APIs have str IDs, and I encode int IDs as just decimal left padded with leading zeros up to 26 chars. Technically not a compliant ULID but practically speaking, if I see leading `00` I know it's not an actual ULID, since that would be before Nov-2004, and ULID was invented in 2017. The ORM automatically strips the zeros and the query just works.
I'm just kind of over using sequential int IDs for anything bigger than hobby level stuff. Testing/fixturing/QA are just so much easier when you do not have to care about whether an ID happens to already exist.
[0] https://github.com/ulid/spec
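For reference, the padding trick amounts to this (26 being the length of a ULID string; decimal digits are valid Crockford base32, so the result still parses as a ULID):

```python
def int_id_as_pseudo_ulid(n: int) -> str:
    # Real ULIDs (spec from 2017) never start with "00", which is how
    # the two kinds of IDs stay distinguishable at a glance.
    return str(n).zfill(26)

print(int_id_as_pseudo_ulid(12345))  # 21 zeros followed by "12345"
```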
id => 123, public_id => 202cb962ac59075b964b07152d234b70
There are many ways to generate the public_id. A simple MD5 with a salt works quite well for extremely low effort.
Add a unique constraint on that column (which also indexes it), and you'll be safe and performant for hundreds of millions of rows!
Why do we developers like to overcomplicate things? ;)
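A sketch of that low-effort mapping; note that the `202cb962...` value above is literally `md5("123")` unsalted, so do use a salt, and treat this as obfuscation rather than security (the `SALT` value here is a placeholder):

```python
import hashlib

SALT = "some-long-secret"  # hypothetical; keep it stable or public_ids change

def public_id(internal_id: int) -> str:
    return hashlib.md5(f"{SALT}:{internal_id}".encode()).hexdigest()

# Store public_id alongside id with a UNIQUE index, then query by it.
print(public_id(123))
```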
For Aurora MySQL, it just makes it worse either way, since there’s no change buffer.
While that is often a neat solution, do not do it by simply XORing the numbers with a constant. Use a block cipher in ECB mode (if you want the ID to be short, something like NSA's Speck comes in handy here, as it can be instantiated with 32- or 48-bit blocks).
And do not even think about using RC4 for that (I've seen that multiple times), because that is completely equivalent to XORing with constant.
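To show the shape of the idea without pulling in a real cipher: below is a toy 64-bit Feistel permutation in Python. This is not Speck and not vetted crypto, just an illustration that a keyed, invertible permutation is a different animal from XOR-with-a-constant; use a reviewed cipher in production.

```python
import hashlib

def _round(half: int, key: bytes, i: int) -> int:
    # Keyed round function over a 32-bit half
    h = hashlib.blake2b(half.to_bytes(4, "big") + key + bytes([i]),
                        digest_size=4)
    return int.from_bytes(h.digest(), "big")

def encrypt_id(n: int, key: bytes, rounds: int = 4) -> int:
    l, r = n >> 32, n & 0xFFFFFFFF
    for i in range(rounds):
        l, r = r, l ^ _round(r, key, i)
    return (l << 32) | r

def decrypt_id(n: int, key: bytes, rounds: int = 4) -> int:
    l, r = n >> 32, n & 0xFFFFFFFF
    for i in reversed(range(rounds)):
        l, r = r ^ _round(l, key, i), l
    return (l << 32) | r

key = b"secret key"
assert decrypt_id(encrypt_id(12345, key), key) == 12345
```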
Aren't people using (big)ints as primary keys, and UUIDs as logical keys for import/export, solving portability across different machines?
Consider, say, weather hardware: 5 stations all feeding into a central database. They're all creating rows and uploading them. Using sequential integers for that is unnecessarily complex (if even possible).
Given the amount of data created on phones and tablets, this affects more situations than first assumed.
It's also very helpful in export / edit / update situations. If I export a subset of the data (let's say to Excel), the user can edit all the other columns and I can safely import the result. With integer they might change the ID field (which would be bad). With uuid they can change it, but I can ignore that row (or the whole file) because what they changed it to will be invalid.
Of course it's not always bad: for example, if you have a lot of relations, you can have only one table with the UUID field (and thus the expensive index), and the relations can use the more efficient int key (e.g. a user entity with both int and uuid keys, where a user attribute references the user by the int key, at the expense of a join if you need to retrieve a user attribute when retrieving the user itself is not needed).
https://www.postgresql.org/docs/current/hash-index.html
original answer: because if you don't come up with these ints randomly, they are sequential, which can cause many unwanted situations where people can guess valid IDs and deduce things from that data. See https://en.wikipedia.org/wiki/German_tank_problem
Edit: just saw your edit, sounds like we're on the same page!
One more reason to stay away from microservices, if possible.
If you separate them (i.e. microservices), then they no longer try to use one db.
Sometimes it might even be for a good reason.
So far, people have talked a lot about UUIDs, so I'm genuinely curious about what's in-between.
And of course we could come up with many ways to generate our own ids and make them unique, but we have the following requirements.
- It needs to be a string (because we allow composing them to 'derive' keys)
- A client must be able to create them (not just a server) without risk of collisions
- The time order of keys must not be easily guessable (as the id is often leaked via references, which could 'betray' not just the existence of a document but also its relative creation time w.r.t. others)
- It should be easy to document how any client can safely generate document ids
The lookup performance is not really such a big deal for us. Where it is we can do a projection into a more simple format where applicable.
> Misconceptions: UUIDs are secure
> One misconception about UUIDs is that they’re secure. However, the RFC describes that they shouldn’t be considered secure “capabilities.”
> From RFC 4122 Section 6 Security Considerations:
> Do not assume that UUIDs are hard to guess; they should not be used as security capabilities
This is just wrong, and the citation doesn't support it. You're not guessing a 122-bit long random identifier. What's crazy is that the article, immediately prior to this, even cites the very math involved in showing exactly how unguessable that is.
… the linked citation (to §4.4, which is different from the in-prose citation) is just about how to generate a v4, and completely unrelated to the claim. The prose citation to §6 is about UUIDs generally: the statement "Do not assume that [all] UUIDs are hard to guess" is not logically inconsistent with properly-generated UUIDv4s being hard to guess. A subset of UUIDs have security properties, if the system generating & using them implements those properties, but we should not assume all UUIDs have that property.
Moreover, replacing an unguessable UUID with an (effectively random) 32-bit integer does make it guessable, and the scheme laid out seems completely insecure if it is to be used in the contexts one finds UUIDv4s being an unguessable identifier.
The additional size argument is pretty weak too; at "millions of rows", a UUID column is consuming an additional ~24 MiB.
> the impact to inserts and retrieval of individual items or ranges of values from the index.
Classic OLTP vs OLAP.
(in the scientific reporting world this would be the perennial "in mice")
It would be the equivalent of "if you're a middle-aged man" or "you're an American".
P.S. I think some of the considerations may be true for any system that uses B-Tree indexes, but several will be Postgres specific.
If you use UUIDv7, you can partition your table by the key prefix. Then the bulk of your data can be efficiently skipped when applying updates.
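A sketch of how the prefix can drive partition routing, assuming UUIDv7's leading 48 bits are Unix milliseconds (per the RFC layout); the `events_YYYY_MM` naming is made up:

```python
import uuid
from datetime import datetime, timezone

def uuidv7_time(u: uuid.UUID) -> datetime:
    ms = int.from_bytes(u.bytes[:6], "big")  # leading 48 bits: Unix ms
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc)

def partition_for(u: uuid.UUID) -> str:
    return f"events_{uuidv7_time(u):%Y_%m}"  # route to a monthly partition
```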
Just the other day I delivered significant performance gains to a client by converting ~150 million UUIDv4 PKs to good old BIGINT. They were using a fairly recent version of MariaDB.
if they can live with MariaDB, OK, but I wouldn't choose that in the first place these days. Likely Postgres will also perform better in most scenarios.
Had the requirements been different, UUIDv7 would have worked well, too, because fragmentation is the biggest problem here.
And I'd likely want a unique constraint on that?
But that also adds complexity.
- If you use uuids as foreign keys to another table, it’s obvious when you screw up a join condition by specifying the wrong indices. With int indices you can easily get plausible looking results because your join will still return a bunch of data
- if you’re debugging and need to search logs, having a simple uuid string is nice for searching
The UUID is 128 bits. The first 64 bits are a Java long. The last 64 bits are a Java long. "Let's just combine the Tenant ID long with a Resource ID long to generate a unique id for this on our platform." (Worked until it didn't.)
I would like to hear from others using, for example, Google Spanner: do you have issues with UUIDs? I don't for now; most optimizations happen at the Controller level, and data transformation can be slow due to validations. Try to keep service logic as straightforward as possible.
Also is it necessary to show uuid at all to customers of an API? Or could it be a valid pattern to hide all the querying complexity behind named identifiers, even if it could cost a bit in terms of joining and indexing?
The context is the classic B2B SaaS, but feel free to share your experiences even if it comes from other scenarios!
[1] https://gist.github.com/mikelehen/3596a30bd69384624c11
UUIDv4 being "only" 122 random bits is a bs argument. Such a made-up problem.
But a fair point is that one should use a sequential uuid to avoid fragmentation. One that has a time part.
- A client used to run our app on-premises and now wants to migrate to the cloud.
- Support engineers want to clone a client’s account into the dev environment to debug issues without corrupting client data.
- A client wants to migrate their account to a different region (from US to EU).
Merging data using UUIDs is very easy because ID collisions are practically impossible. With integer IDs, we'd need complex and error-prone ID-rewriting scripts. UUIDs are extremely useful even when the tables are small, contrary to what the article suggests.
A hash index is ideally suited for UUIDs, but for some reason Postgres hash indexes cannot be unique.
The ability to know ahead of time what a primary key will be (in lieu of persisting it first, then returning) opened up a whole new world of architecting work in my app. It made a lot of previous awkward things feel natural.
Tbf in Postgres, you can declare FKs to be deferrable, so their existence is checked at transaction commit, rather than at insertion time.
If you don’t have the DB enforcing referential integrity, you need to be extremely careful in your application logic; IME, this inevitably fails. At some point, someone writes bad code, and you get data anomalies.
I'm using EF core which hooks up these relationships and allows me to persist them in a single transaction using MSSQL server.
> If you don’t have the DB enforcing referential integrity
I'm building an electronic medical system. I'm well aware of the benefits of referential integrity.
> We need to read blocks from the disk when they are not in the PostgreSQL buffer cache. Conveniently, PostgreSQL makes it very easy to inspect the contents of the buffer cache. This is where the big difference between uuidv4 and uuidv7 becomes clear. Because of the lack of data locality in uuidv4 data, the primary key index is consuming a huge amount of the buffer cache in order to support new data being inserted – and this cache space is no longer available for other indexes and tables, and this significantly slows down the entire workload.
0 - https://ardentperf.com/2024/02/03/uuid-benchmark-war
The issue is that this is true for more or less all capability URLs. I wouldn't recommend UUIDs per se here; probably better to just use a random number. I have seen UUIDs for this in practice though, and those systems weren't compromised because of it.
I hate the tendency that password recovery flows for example leave the URL valid for 5 minutes. Of course these URLs need to have a limited life time, but mail isn't a real time communication medium. There is very little security benefit from reducing it from 30 minutes to 5 minutes for example. You are not getting "securer" this way.
I'm tired of midwit arguments like "Tech X is N% faster than tech Y at performing operation Z. Since your system (sometimes) performs operation Z, it implies that Tech X is the only logical choice in all situations!"
It's an infuriatingly silly argument because operation Z may only represent about 10% of the total CPU usage of the whole system (averaged out)... So what is promoted as a 50% gain may in fact be a 5% gain when you consider it in the grand scheme of things... Negligible. If everyone was looking at this performance 'advantage' rationally; nobody would think it's worth sacrificing important security or operational properties.
I don't know what happened to our industry; we're supposed to be intelligent people but I see developers falling for these obvious logical fallacies over and over.
I remember back in my day, one of the senior engineers was discussing upgrading a python system and stated openly that the new version of the engine was something like 40% slower than the old version but he didn't even have to explain himself why upgrading was still a good decision; everybody in the company knew he was only talking about the code execution speed and everybody knew that this was a small fraction of the total.
Not saying UUIDv7 was a bad choice for Postgres. I'm sure it's fine for a lot of situations but you don't have to start a cult preaching the gospel of The One True UUID to justify your favorite project's decisions.
I do find it kind of sly though how the community decided to make this UUIDv7 instead of creating a new standard for it.
The whole point of UUID was to leverage the properties of randomness to generate unique IDs without requiring coordination. UUIDv7 seems to take things in a philosophically different path. People chose UUID for scalability and simplicity (both of which you get as a result of doing away with the coordination overhead), not for raw performance...
That's the other thing which drives me nuts; people who don't understand the difference between performance and scalability. People foolishly equate scalability with parallelism or concurrency; whereas that's just one aspect of it; scalability is a much broader topic. It's the difference between a theoretical system which is fast given a certain artificially small input size and one which actually performs better as the input size grows.
Lastly; no mention is made about the complex logic which has to take place behind the scenes to generate UUIDv7 IDs... People take it for granted that all computers have a clock which can produce accurate timestamps where all computers in the world are magically in-sync... UUIDv7 is not simple; it's very complicated. It has a lot of additional complexity and dependencies compared to UUIDv4. Just because that complexity is very well hidden from most developers, doesn't mean it's not there and that it's not a dependency... This may become especially obvious as we move to a world of robotics and embedded systems where cheap microchips may not have enough Flash memory to hold the code for the kinds of programs required to compute such elaborate IDs.
To your original point, these are heuristics; there isn't always time to dig into every little architectural decision, so having a set of rules of thumb on hand helps to preempt problems at minimal cognitive cost. "Avoid using a GUID as a primary key if you can" is one of mine.
If the client POSTs a new object to insert it into the database and there is a connection failure, so the client does not receive a success response from the server, the client cannot know whether the record was inserted or not without making an expensive and cumbersome additional read call to check... The client cannot simply assume that the insertion did not happen purely on the basis that it did not receive a success response. It could very well be that the insertion succeeded but the connection failed shortly after, so the response was not received. If the IDs are auto-incremented on the server and the client posts the same object again without any ID on it, the server will create a duplicate record in the database table (same object with a different ID).
On the other hand, if the client generates a UUID for the object it wants to create on the front-end, then it can safely resend that exact object any number of times and there is no risk of double-insertion; the object will be rejected the second time and you can show the user a meaningful error "Record was already created" instead of creating two of the same resource; leading to potential bugs and confusion.
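A rough sketch of that retry-safe flow, with sqlite3 standing in for the server's database (names are illustrative):

```python
import sqlite3, uuid

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id TEXT PRIMARY KEY, payload TEXT)")

def create_order(order_id: str, payload: str) -> bool:
    # order_id was generated on the client before the first attempt,
    # so a retry after a lost response cannot double-insert.
    try:
        con.execute("INSERT INTO orders VALUES (?, ?)", (order_id, payload))
        return True
    except sqlite3.IntegrityError:
        return False  # "Record was already created"

oid = str(uuid.uuid4())           # generated client-side
create_order(oid, "2 widgets")    # first attempt
create_order(oid, "2 widgets")    # retry after timeout: rejected, no dup
```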
Using an idempotency identifier is the last resort in my book.
Yes, there are different flavors of generating them, with their own pros and cons, but at the end of the day it's just so much more elegant than some auto-incrementing crap your database creates. And that's just a detail: you can always change the uuid algorithm for future keys. Honestly, if you treat the uuid as some opaque entity (which you should), why not just pick the random one?
And I just thought of the argument "but what if you want to sort the uuid..." - say it's used for a list of stories or something? Well, again: if you treat the uuid as opaque, why would you sort it? You should be sorting on some other field, like the date field or title or something. UUIDs are opaque, damn it. You don't sort opaque data. "Well, they get clustered weird," say people. Why are you clustering on a random opaque key? If you need certain data to be clustered, then do it on the right key (a user_id field if your data is to be clustered by user, say).
Letting the client generate the primary keys is really liberating. Not having to care about PK collisions or leaking information via auto incrementing numbers is great!
In my opinion uuid isn’t used enough!
Balanced and uniformly scattered. A random index means fetching a random page for every item. Fine if your access patterns are truly random, but that's rarely the case.
> Why are you clustering on a random opaque key?
InnoDB clusters by the PK if there is one, and that can't be changed (if you don't have a PK, you have some options, but let's assume you have one). MSSQL behaves similarly, but you can override it. If your PK is random, your clustering will be too. In Postgres, you'll just get fragmented indexes, which isn't quite as bad, but still slows down vacuum. Whether that actually becomes a problem is also going to depend on access patterns.
One shouldn't immediately freak out over having a random PK, but should definitely at least be aware of the potential degradation they might cause.