I am somewhat dismayed that contracts were accepted. It feels like piling on ever more complexity to a language which has already surpassed its complexity budget, and given that the feature comes with its own set of footguns I'm not sure that it is justified.
Here's a quote from Bjarne,
> So go back about one year, and we could vote about it before it got into the standard, and some of us voted no. Now we have a much harder problem. This is part of the standard proposal. Do we vote against the standard because there is a feature we think is bad? Because I think this one is bad. And that is a much harder problem. People vote yes because they think: "Oh, we are getting a lot of good things out of this," and they are right. We are also getting a lot of complexity and a lot of bad things. And this proposal, in my opinion, is bloated committee design and also incomplete.
The fact that the C++ standard community has been working on Contracts for nearly a decade is something that by itself automatically refutes your claim.
I understand you want to self-promote, but there is no need to do it at the expense of others. I mean, might it be that your implementation sucked?
Late nineties is approaching three decades ago; if the C++ committee has now been working on this for nearly a decade, that's fifteen to twenty years of them not working on it. It's quite plausible that contracts simply weren't valued at the time.
Also, in my view the committee has been entertaining wider and wider language extensions. In 2016 there was a serious proposal for a graphics API based on (I think) Cairo. My own sense is that it's out of control and the language is just getting stuff added on because it can.
Contracts are great as a concept, and it's hard to separate the wild expanse of C++ from the truly useful subset of features.
There are several things proposed in the early days of C++ that arguably should be added.
I can’t speak to the C++ contract design — it’s possible bad choices were made. But contracts in general are absolutely exactly what C++ needs for the next step of its evolution. Programming languages used for correct-by-design software (Ada, C++, Rust) need to enable deep integration with proof assistants to allow showing arbitrary properties statically instead of via testing, and contracts are /the/ key part of that — see e.g. SPARK for Ada.
Problem is contracts mean different things to different people, and that leads to standard contract support being a compromise that makes nobody happy. To some people contracts are something checked at runtime in debug mode and ignored in release mode. To others they’re something rigorous enough to be usable in formal verification. But the latter essentially requires a completely new C++ dialect for writing contract assertions that has no UB, no side effects, and so on. And that’s still not enough as long as C++ itself is completely underspecified.
C++ is the last language I'd add to any list of languages used for correct-by-design - it's underspecified in terms of semantics with huge areas of UB and IB. Given its vast complexity - at every level from the pre-processor to template meta-programming and concepts, I simply can't imagine any formal denotational definition of the language ever being developed. And without a formal semantics for the language, you cannot even start to think about proof of correctness.
As with Spark, proving properties over a subset of the language is sufficient. Code is written to be verified; we won’t be verifying interesting properties of large chunks of legacy code in my career span. The C (near-) subset of C++ is (modulo standard libraries) a starting point for this; just adding on templates for type system power (and not for other exotic uses) goes a long way.
I don’t think this is a good comparison. Ada (on which Spark is based) has every safety feature and guardrail under the sun, while C++ (or C) has nothing.
> The C (near-) subset of C++ is (modulo standard libraries) a starting point for this; just adding on templates for type system power (and not for other exotic uses) goes a long way.
In my experience, this is absolutely true. I wrote my own metaprogramming frontend for C and that's basically all you need. At this point, I consider the metaprogramming facilities of a language its most important feature, by far. Everything else is pretty much superfluous by comparison.
I don’t understand this “next evolution” approach to language design.
A language should be finished at some point. People can always develop new languages with more or fewer features, but piling more things onto an existing one is just not that useful.
It sounds cool in the minds of the people designing these things, but it just is not that useful in practice. Rust is in the same situation, endlessly piling on crap of marginal value.
Specifically about this feature, people can just use asserts. Piling things onto the type system of C++ is never going to be that useful since it is not designed to be a type system like Rust's type system. Any improvement gained is not worth piling on more things.
Feels like people that push stuff do it because "it is just what they do".
Many of the recent C++ standards have been focused on expanding and cleaning up its powerful compile-time and metaprogramming capabilities, which it initially inherited by accident decades ago.
It is difficult to overstate just how important these features are for high-performance and high-reliability systems software. These features greatly expand the kinds of safety guarantees that are possible to automate and the performance optimizations that are practical. Without it, software is much more brittle. This isn’t an academic exercise; it greatly reduces the amount of code and greatly increases safety. The performance benefits are nice but that is more on the margin.
One of the biggest knocks against Rust as a systems programming language is that it has weak compile-time and metaprogramming capabilities compared to Zig and C++.
> One of the biggest knocks against Rust as a systems programming language is that it has weak compile-time and metaprogramming capabilities compared to Zig and C++.
Aren’t Rust macros more powerful than C++ template metaprogramming in practice?
Rust has two separate macro systems. It has declarative "by example" macros which are a nicer way to write the sort of things where you show an intern this function for u8 and ask them to create seven more just like it except for i8, u16, i16, u32, i32, u64, i64. Unlike the C pre-processor these macros understand how loops work (sort of) and what types are, and so on, and they have some hygiene features which make them less likely to cause mayhem.
Declarative macros deliberately don't share Rust's syntax because they are macros for Rust: if they shared the same syntax, everything you wrote would be escape sequence upon escape sequence, since you want the macro to emit a loop but not loop itself, etc. But other than the syntax they are pretty friendly; a one-day Rust bootstrap course should probably cover these macros at least enough that you don't use copy-paste to make those seven functions by hand.
However, the powerful feature you're thinking of is procedural or "proc" macros, and those are a very different beast. Proc macros are effectively compiler plugins: when the compiler sees we invoked the proc macro, it just runs that code, natively. So in that sense these are certainly more powerful; they can, for example, install Python: "Oh, you don't have Python, but I'm a proc macro for running Python, I'll just install it...". Mara wrote several "joke" proc macros which show off how dangerous/powerful this is (you should not use these), but one of them, for example, switches to the "nightly" Rust compiler and then seamlessly compiles parts of your software which don't work in stable Rust...
> powerful compile-time and metaprogramming capabilities
While I agree that, generally, compile time metaprogramming is a tremendously powerful tool, the C++ template metaprogramming implementation is hilariously bad.
Why, for example, is printing the source-code text of an enum value so goddamn hard?
Why can I not just loop over the members of a class?
How would I generate debug vis or serialization code with a normal-ish looking function call? (Spoiler: you can't; see Cap'n Proto, protobuf, FlatBuffers, or any automated Dear ImGui generator.)
These things are incredibly basic and C++ just completely shits all over itself when you try to do them with templates
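For anyone who hasn't felt this pain, the state of the art today is still repeating every enumerator by hand, or hiding the same repetition behind an X-macro. A minimal sketch (C++26 reflection is supposed to finally make this derivable automatically):

    enum class Color { Red, Green, Blue };

    // Add an enumerator and forget to update this, and you silently
    // get "?" at runtime.
    constexpr const char* to_string(Color c) {
        switch (c) {
            case Color::Red:   return "Red";
            case Color::Green: return "Green";
            case Color::Blue:  return "Blue";
        }
        return "?";
    }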
The people who did contracts are aware of Ada/SPARK and some have experience using it. Only time will tell if it works in C++, but they at least did all they could to give it a chance.
Note that this is not the end of contracts. This is a minimum viable start that they intend to add to, but the missing parts are more complex.
Might be the case that Ada folks successfully got a bad version of contracts not amenable for compile-time checking into C++, to undermine the competition. Time might tell.
Ada used to be mandated in the US defense industry, but lots of developers and companies preferred C++ and other languages, and for a variety of reasons, the mandate ended, and Ada faded from the spotlight.
On their own merits, people choose SMS-based 2FA, "2FA" which lets you into an account without a password, perf-critical CLI tools written in Python, externalizing the cost of hacks to random people who aren't even your own customers, eating an extra 100 calories per day, and a whole host of other problematic behaviors.
Maybe Ada's bad, but programmer preference isn't a strong enough argument. It's just as likely that newer software is buggier and more unsafe or that this otherwise isn't an apples-to-apples comparison.
I made no judgement about whether Ada is subjectively "bad" or not. I used it for a single side project many years ago, and didn't like it.
But my anecdotal experience aside, it is plain to see that developers had the opportunity to continue with Ada and largely did not once they were no longer required to use it.
So, it is exceedingly unlikely that some conspiracy against C++, motivated by mustache-twirling Ada gurus, is afoot. And even if that were true, knocking C++ down several pegs will not make people go back to Ada.
C#, Rust, and Go all exist and are all immensely more popular than Ada. If there were to be a sudden exodus of C++ developers, these languages would likely be the main beneficiaries.
My original point, that C++ isn't what's standing in the way of Ada being popular, still stands.
C++ needs to give itself up and make way for other, newer, modern languages that have far, far less baggage. It should be working with other languages to provide tools for interop and migration.
C++ will never, ever be modern and comprehensible for one reason and one reason alone: backward compatibility.
It does not matter what version of C++ you are using, you are still using C with classes.
Half-serious reason: because with each C++ version, we seem to get less and less what we want and more and more inefficiency. In terms of language design and compiler implementation. Are we even at feature-completeness for C++20 on major compilers yet? (In an actually usable bug-free way, not an on-paper "completion".)
The compiler design is definitely becoming more complicated but the language design has become progressively more efficient and nicer to use. I’ve been using C++20 for a long time in production; it has been problem-free for years at this point. It is not strictly complete, e.g. modules still aren’t usable, but you don’t need to wait for that to use it.
Even C++23 is largely usable at this point, though there are still gaps for some features.
I for one can write C++ but I cannot write a single program in C. If the overlap was so vast, I would be able to write good C but I cannot.
I've done things with templates to express my ideas in C++ that I cannot do in other languages, and the behaviour of deterministic destructors is what sets it apart from C. It is comprehensible and readable to me.
I would argue that C++ is modern, since it is in use today. Perhaps your definition of "modern" is too narrow?
The devil is in the details, because standardization work is all about details.
From my outside vantage point, there seem to be a few different camps about what contracts should even be. The conflict between those groups is why this feature has been contentious for... a decade now?
Some of the pushback against this form of contracts is from people who desire contracts, but don't think that this design is the one that they want.
Right, I think the tension here is that we would like contracts to exist in the language, but the current design isn't what it needs to be, and once it's standardized, it's extremely hard to fix.
But why? You can do everything contracts do in your own code, yes? Why make it a language feature? I'm not against growing the language, but I don't see the necessity of this specific feature having new syntax.
Pre- and postconditions are actually part of the function signature, i.e. they are visible to the caller. For example, static analyzers could detect contract violations just by looking at the callsite, without needing access to the actual function implementation. The pre- and postconditions can also be shown in IDE tooltips. You can't do this with your own contracts implementation.
Finally, it certainly helps to have a standardized mechanism instead of everyone rolling their own, especially with multiple libraries.
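For the curious, the accepted design (P2900) puts the conditions on the declaration itself, roughly like this (my reading of the proposal, so treat the details as approximate):

    // Both conditions are part of the declaration every caller sees.
    double checked_sqrt(double x)
        pre (x >= 0.0)        // precondition, visible at the call site
        post (r : r >= 0.0);  // postcondition; 'r' names the return value

What happens on a violation (ignore, observe, enforce, quick-enforce) is then chosen by the build, not in the source.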
In terms of contracts on a function: you might be passing a pointer so that the function can write to the provided address. Input/output isn't specifying calling convention (there's fastcall for that); it is specifying the intent of the function. Otherwise every single parameter to a function would be an input, because the function takes it and uses it...
I worked on a massive codebase where we used Microsoft SAL to annotate all parameters to specify intent. The compiler could throw errors based on these annotations to indicate misuse.
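For those who haven't seen it, SAL looks roughly like this; the function is a made-up example, and the annotations are checked by MSVC's /analyze pass:

    #include <cstddef>
    #include <sal.h>  // Microsoft-specific

    // _In_reads_(count): caller must supply count readable elements;
    // _Out_writes_(count): callee promises to write count elements.
    void copy_ints(_In_reads_(count) const int* src,
                   _Out_writes_(count) int* dst,
                   std::size_t count);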
A nitpick to your nitpick: they said "memory location". And yes, a pointer always points to a memory location. Notwithstanding that each particular region of memory locations could be mapped either to real physical memory or any other assortment of hardware.
Contracts are about specifying static properties of the system, not dynamic properties. Features like assert /check/ (if enabled) static properties, at runtime. static_assert comes closer, but it’s still an awkward way of expressing Hoare triples; and the main property I’m looking for is the ability to easily extract and consider Hoare triples from build-time tooling. There are hacky ways to do this today, but they’re not unique hacky ways, so they don’t compose across different tools and across code written to different hacks.
The common argument for a language feature is for standardization of how you express invariants and pre/post conditions so that tools (mostly static tooling and optimizers) can be designed around them.
But like modules and concepts the committee has opted for staggered implementation. What we have now is effectively syntax sugar over what could already be done with asserts, well designed types and exceptions.
I wonder if C++ already has so much complexity, that it would actually be a good idea to ignore feature creep, and implement any feature with even the most remote use-case.
It sounds (and probably is) insane. But if a feature doesn't break backwards compatibility, and doesn't non-negligibly affect compiler/IDE performance for codebases that ignore it, what's the issue? Specifically, what significant new issues would it cause that C++’s existing bloat hasn’t?
GCC and MSVC are pretty close. fyi, the tables on cppreference are rather outdated at this point. I made a more up-to-date, community-maintained site: https://cppstat.dev/?conformance=cpp20
wow, that's weird. One would think that updating the reference table is something a team or individual - who just spent a lot of time and effort on implementing a feature - would also do.
For a while now cppreference.com has been in "temporary read-only mode" in which it isn't updated. Eventually I expect a "temporary" replacement will dominate and eventually it won't be "temporary" after all. Remember when some of Britain's North American colonies announced they were declaring independence? Yeah me either, but at the time I expect some people figured hey, we send a bunch of troops, burn down some stuff, by Xmas we'll have our colonies back.
C++ contracts standardizes what people already do in C++. Where is the complexity in that? It removes the need to write your own implementation because the language provides a standard interoperable one.
An argument can be made that C++26 features like reflection add complexity but I don't follow that argument for contracts.
Contracts are already informally a thing: most functions have preconditions, and if you break those preconditions, the function doesn't make any guarantees of what it does.
We already have some primitive ways to define preconditions, notably the assert macro and the 'restrict' qualifier.
I don't mind a more structured way to define preconditions which can automatically serve as both documentation and debug invariant checks. Though you could argue that a simpler approach would be to "standardize" a convention of using assert() more liberally at the beginning of functions as precondition checks; that a sequence of 'assert's before non-'assert' code should semantically be treated as the function's preconditions by documentation generators etc.
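i.e. something like this, where tooling would treat the leading asserts as the contract (a sketch of the convention, nothing more):

    #include <cassert>
    #include <cstddef>

    int element_at(const int* buf, std::size_t len, std::size_t i) {
        assert(buf != nullptr);  // precondition
        assert(i < len);         // precondition
        // first non-assert statement: the preconditions end here
        return buf[i];
    }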
I haven't looked too deep into the design of the actual final contracts feature, maybe it's bad for reasons which have nothing to do with the fundamental idea.
>to a language which has already surpassed its complexity budget
I've been thinking that way for many years now, but clearly I've been wrong. Perhaps C++ is the one language to which the issue of excess complexity does not apply.
In essence, a standard committee thinks like bureaucrats. They have little to no incentive to get rid of cruft and only piling on new stuff is rewarded.
Just because Bjarne thinks the feature is bad doesn't mean it is bad. He can be wrong. The point is, most people disagree with him, and so a lot of people do think it is good.
There have been several talks about contracts and the somewhat hidden complexities in them. C++ contracts are not like what you'd initially expect. Compiler switches can totally alter how contracts behave, from getting omitted to reporting failures to aborting the program. There is also an optional global callback for when a contract check fails.
Different TUs can be compiled with different settings for the contract behavior. But can they be binary compatible? In general, no. If a function is declared inline in a header, the compiler may have generated two different versions with different contract behaviors, which violates ODR.
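Concretely, the hazard looks something like this (a sketch of my understanding):

    // widget.hpp, included from two TUs built with different contract flags
    inline int half(int x)
        pre (x % 2 == 0)  // TU A: contracts enforced; TU B: ignored. The
                          // linker keeps only one copy of half(), so one TU
                          // silently gets the other's checking behavior.
    { return x / 2; }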
What happens if the contract check calls a helper function that throws an exception?
The whole thing is really, really complex and I don't assume that I understand it properly. But I can see that there are some valid concerns against the feature as standardized, and that makes me side with the opposition here: this was not baked enough yet.
It sounds like it should solve your problem. At first it seems to work. Then you keep on finding the footguns after it is too late to change the design.
> I am somewhat dismayed that contracts were accepted. It feels like piling on ever more complexity to a language which has already surpassed its complexity budget, and given that the feature comes with its own set of footguns I'm not sure that it is justified.
I don't think this opinion is well informed. Contracts are a killer feature that allows implementing static code analysis that covers error handling and verifiable correct state. This comes for free in components you consume in your code.
Can you share what aspects of the design you (and Stroustrup) aren't happy with? Stroustrup has a tendency of being proven right, with a one-to-three-decade lag.
Certainly we can say that Bjarne will insist he was right decades later. We can't necessarily guess - at the time - what it is he will have "always" believed decades later though.
Has any project ever tried to quantify a “complexity budget” and stick to it?
I’m fascinated by the concept of deciding how much complexity (to a human) a feature has. And then the political process of deciding what to remove when everyone agrees something new needs to be accepted.
Without a significant amount of needed context that quote just sounds like some awkward rambling.
Also, almost every feature added to C++ adds a great deal of complexity, everything from modules, concepts, ranges, coroutines... I mean, it's been 6 years since these were standardized and all the main compilers still have major bugs and quality-of-implementation issues.
I can hardly think of any major feature added to the language that didn't introduce a great deal of footguns, unintended consequences, significant compilation performance issues... to single out contracts is unusual to say the least.
I envy the person that will walk into a C++ codebase and see "[[indeterminate]]" somewhere. They will need to absolutely waste their time searching and reading what "[[indeterminate]]" means. Or over time they will just learn to ignore this crap and mentally filter it out when looking at code.
Just like when I was learning rust and trying to read some http code but it was impossible because each function had 5 generics and 2 traits.
What is non-obvious about “[[indeterminate]]”? That terminology has been used throughout the standards in exactly this context for ages. This just makes it explicit instead of implicit so that the compiler can know your intent.
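For context, my understanding of the C++26 behavior, as a sketch (fill() is a made-up helper):

    struct Packet { char data[4096]; };

    void fill(char* dst, int n);  // hypothetical: fully overwrites dst

    void f() {
        char c;                      // C++26: reading this uninitialized is
                                     // "erroneous", an implementation-defined
                                     // pattern rather than plain UB
        Packet p [[indeterminate]];  // opt back into true indeterminate
                                     // values, e.g. to skip the pattern fill
                                     // on a hot path that overwrites it anyway
        fill(p.data, 4096);
    }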
I know roughly what indeterminate means in English, but it is not obvious to me when I see something like this in code.
So I would have to look it up and be very careful about it since I can break something easily in C++.
This just makes things more difficult from the perspective of using/learning the language.
Similar problem with the "unsequenced" and "reproducible" attributes added in C. It sounded cool after I took the time to learn exactly (/s) what they mean. But it is not worth the time to learn it. And it is not worth the cognitive load it will put on people who will read the code later, imo.
I wonder if you're fine with const, constexpr and volatile also being things. I mean, "const" really doesn't mean what one would naively think (that's what "constexpr" is actually for) and the semantics of "volatile" are also widely misunderstood.
Nope, not only is C++ const not a constant, C++ constexpr isn't a constant either, and C++ constinit isn't a constant, C++ consteval is closest, but it's only available for functions.
const int a = 10; // Just an immutable variable named a
constexpr int b = 20; // Still an immutable variable named b
static constinit int c = 30; // Now it isn't even immutable
For functions const says this function promises it doesn't change things, constexpr says this function is shiny and modern and has no other real meaning (hence "constexpr all the things" memes, you might as well) but consteval does mean that we're promising this must always be evaluated at compile time, so the evaluation is frozen by runtime, however only a function can have this label.
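A quick sketch of the consteval case:

    consteval int square(int n) { return n * n; }

    int a = square(7);              // OK: forced to evaluate at compile time
    // int b = square(get_input()); // would not compile: the argument isn't
                                    // known until runtime (made-up function)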
Volatile is a mess because what you actually want are volatile intrinsics; indeed you might want more (or fewer) depending on the target. If your target can do single-bit hardware writes it'd be nice to provide an intrinsic for that, rather than hoping you can write REG |= 0x40 in code and have that write a single bit... on platforms which do not have this single-bit write feature, that's going to compile to an unsynchronized read-modify-write, which may cause problems. However, instead of having intrinsics, C's volatile was hacked into the type system, and C++ tries to keep that.
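The classic illustration, with a made-up register address (sketch only):

    #include <cstdint>

    // Hypothetical memory-mapped register.
    #define REG (*reinterpret_cast<volatile std::uint32_t*>(0x40000000))

    void set_flag() {
        REG |= 0x40;  // without single-bit write hardware this is a load,
                      // an OR, and a store; an interrupt in between can
                      // clobber the register
    }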
This is awesome. I was a dev on the C++ team at MS in the 90s and was sure that RTTI was the closest the language would ever get to having a true reflection system.
> Second, conforming compiler and standard library implementations are coming quickly. Throughout the development of C++26, at any given point both GCC and Clang had already implemented two-thirds of C++26 features. Today, GCC already has reflection and contracts merged in trunk, awaiting release.
That much I'm aware, but that's just about feature availability. I was wondering how far the implementations have progressed internally, despite the features being unavailable.
The best thing the C++ WG could do is to spend an entire release cycle working on modules and packaging.
It's nice to have new features, but what is really killing C++ is Cargo. I don't think a new generation of developers are going to be inspired to learn a language where you can't simply `cargo add` whatever you need and instead have to go through hell to use a dependency.
To me, the most important feature of Cargo isn't even the dependency management but that I don't ever need to tell it which files to compile or where to find them. The fact that it knows to look for lib.rs or main.rs in src and then recursively find all my other modules without me needing to specify targets or anything like that is a killer feature on its own IMO.

Over the past couple of years I've tried to clone and build a number of dotnet packages for various things, but for an ecosystem that's supposedly cross-platform, almost none of them seem to just work by default when I run `dotnet build` and instead require at least some fixes in the various project files. I don't think I've ever had an issue with a Rust project, and it's hard not to feel like a big part of that is because there's not really much configuration to be done. The list of dependencies is just about the only thing in there that affects the default build; if there's any other configuration beyond that and the basic metadata like the name, the repo link, the license, etc., it almost always ends up being specifically for alternate builds (like extra options for release builds, alternate features that can be compiled in, etc.).
> The fact that it knows to look for lib.rs or main.rs in src and then recursively find all my other modules without me needing to specify targets or anything like that is a killer feature on its own IMO.
In the interest of pedantry, locating source files relative to the crate root is a language-level Rust feature, not something specific to Cargo. You can pass any single Rust source file directly to rustc (bypassing Cargo altogether) and it will treat it as a crate root and locate additional files as needed based on the normal lookup rules.
Interesting, I didn't realize this! I know that a "crate" is specifically the unit of compilation for rustc, but I assumed there was some magic in cargo that glued the modules together into a single AST rather than it being in rustc itself.
That being said, I'd argue that the fact that this happens so transparently that people don't really need to know this to use Cargo correctly is somewhat the point I was making. Compared to something like cmake, the amount of effort to use it is at least an order of magnitude lower.
> I don't think I've ever had an issue with a Rust project, and it's hard not to feel like a big part of that is because there's not really much configuration to be done.
For most crates, yes. But you might be surprised how many crates have a build.rs that is doing more complex stuff under the hood (generating code, setting environment variables, calling a C compiler, make or some other build system, etc). It just also almost always works flawlessly (and the script itself has a standardised name), so you don't notice most of the time.
True, but if anything, a build.rs is a lot easier for me to read and understand (or even modify) if needed because I already know Rust. With something like cmake, the build configuration is an entirely separate language from the one I'm actually working in, and I haven't seen a project that doesn't have at least some amount of custom configuration in it. Starting up a cargo project literally doesn't require putting anything in the Cargo.toml that doesn't exist after you run `cargo new`.
Oh sure, build.rs is (typically) a great experience. My favourite example is Skia which is notoriously difficult to build, but relatively easy to build with the Rust bindings. My point was just that this isn't always because there's nothing complex going on, but because it still works well even though there sometimes are complex things going on!
But you are specifying source files, although indirectly, aren't you? That's what all those `mod blah` with a corresponding `blah.rs` file present in the correct location are.
I suffered with Java through Ant, Maven and Gradle (the oldest is the best). After reading about GNU Autotools I was wondering why the C/C++ folks still suffer. Right at that time Meson appeared and I skipped the suffering.
* No XML
* Simple to read and understand
* Simple to manage dependencies
* Simple to use options
Meson is indeed nice, but has very poor support for GPU compilation compared to CMake. I've had a lot of success adopting the practices described in this talk: https://www.youtube.com/watch?v=K5Kg8TOTKjU. I thought I knew a lot of CMake, but file sets definitely make things a lot simpler.
Meson merges the crappy state of C/C++ tooling with something like Cargo in the worst way possible: by forcing you to handle the complexity of both. Nothing about Meson is simple, unless you're using it in Rust, in which case you're better off with Cargo.
In C++ you don't get lockfiles, you don't get automatic dependency install, you don't get local dependencies, there's no package registry, no version support, no dependency-wide feature flags (this is an incoherent mess in Meson), no notion of workspaces, etc.
Compared to Cargo, Meson isn't even in the same galaxy. And even compared to CMake, Meson is yet another incompatible incremental "improvement" that offers basically nothing other than cute syntax (which in an era when AI writes all of your build system anyway, doesn't even matter). I'd much rather just pick CMake and move on.
Build system generators (like Meson, autotools, CMake or any other one) can't solve programming language module and packaging problems, even in principle. So, it's not clear what your argument is here.
> I’m still surprised how people ignore Meson. Please test it :)
I did just that a few years ago and found it rather inconvenient and inflexible, so I went back to ignoring it. But YMMV I suppose.
Agreed, arcane cmake configs and/or bash build scripts are genuinely off-putting. Also the C++ "equivalents" of Cargo, which afaik are Conan and vcpkg, are not default and required much more configuring in comparison with Cargo. At least this was my experience a few years ago.
It's fundamentally different; Rust entirely rejects the notion of a stable ABI, and simply builds everything from source.
C and C++ are usually stuck in that antiquated thinking that you should build a module, package it into some libraries, install/export the library binaries and associated assets, then import those in other projects. That makes everything slow, inefficient, and wildly dangerous.
There are of course good ways of building C++, but those are the exception rather than the standard.
"Stable ABI" is a joke in C++ because you can't keep ABI and change the implementation of a templated function, which blocks improvements to the standard library.
In C, ABI = API because the declaration of a function contains the name and arguments, which is all the info needed to use it. You can swap out the definition without affecting callers.
That's why Rust allows a stable C-style ABI; the definition of a function declared in C doesn't have to be in C!
But in a C++-style templated function, the caller needs access to the definition to do template substitution. If you change the definition, you need to recompile calling code i.e. ABI breakage.
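The contrast in a sketch (names are made up):

    // C-style: the declaration is the entire contract; the definition can
    // change behind a stable ABI without recompiling any caller.
    extern "C" int checksum(const char* buf, unsigned long len);

    // Template: every caller instantiates from the definition itself, so
    // changing the body means recompiling every caller.
    template <typename T>
    T clamp_nonnegative(T v) { return v < T{} ? T{} : v; }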
If you don't recompile calling code and link with other libraries that are using the new definition, you'll violate the one-definition rule (ODR).
This is bad because duplicate template functions are pruned at link-time for size reasons. So it's a mystery as to what definition you'll get. Your code will break in mysterious ways.
This means the C++ committee can never change the implementation of a standardized templated class or function. The only time they did was a minor optimization to std::string in 2011 and it was such a catastrophe they never did it again.
That is why Rust will not support stable ABIs for any of its features relying on generic types. It is impossible to keep the ABI stable and optimize an implementation.
It's not true that Rust rejects "the notion of a stable ABI". Rust rejects the C++ solution of freeze everything and hope because it's a disaster, it's less stable than some customers hoped and yet it's frozen in practice so it disappoints others. Rust says an ABI should be a promise by a developer, the way its existing C ABI is, that you can explicitly make or not make.
Rust is interested in having a properly thought out ABI that's nicer than the C ABI which it supports today. It'd be nice to have say, ABI for slices for example. But "freeze everything and hope" isn't that, it means every user of your language into the unforeseeable future has to pay for every mistake made by the language designers, and that's already a sizeable price for C++ to pay, "ABI: Now or never" spells some of that out and we don't want to join them.
> It'd be nice to have say, ABI for slices for example.
The de-facto ABI for slices involves passing/storing pointer and length separately and rebuilding the slice locally. It's hard to do better than that other than by somehow standardizing a "slice" binary representation across C and C-like languages. And then you'll still have to deal with existing legacy code that doesn't agree with that strict representation.
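i.e. every boundary today looks something like this sketch:

    #include <cstddef>

    // The de-facto convention: pointer and length travel separately...
    void process(const float* data, std::size_t len);

    // ...while a standardized slice ABI would mean agreeing on one layout
    // across languages:
    struct FloatSlice {
        const float* ptr;
        std::size_t  len;
    };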
> C and C++ are usually stuck in that antiquated thinking that you should build a module, package it into some libraries, install/export the library binaries and associated assets, then import those in other projects. That makes everything slow, inefficient, and wildly dangerous.
It seems to me the "convenient" options are the dangerous ones.
The traditional method is for third party code to have a stable API. Newer versions add functions or fix bugs but existing functions continue to work as before. API mistakes get deprecated and alternatives offered but newly-deprecated functions remain available for 10+ years. With the result that you can link all applications against any sufficiently recent version of the library, e.g. the latest stable release, which can then be installed via the system package manager and have a manageable maintenance burden because only one version needs to be maintained.
Language package managers have a tendency to facilitate breaking changes. You "don't have to worry" about removing functions without deprecating them because anyone can just pull in the older version of the code. Except the older version is no longer maintained.
Then you're using a version of the code from a few years ago because you didn't need any of the newer features and it hadn't had any problems, until it picks up a CVE. Suddenly you have vulnerable code running in production but fixing it isn't just a matter of "apt upgrade" because no one else is going to patch the version only you were using, and the current version has several breaking changes so you can't switch to it until you integrate them into your code.
This is all wishful thinking disconnected from practicalities.
First you confuse API and ABI.
Second there is no practical difference between first and third-party for any sufficiently complex project.
Third you cannot have multiple versions of the same thing in the same program without very careful isolation and engineering. It's a bad idea and a recipe for ODR violations.
In any non-trivial project there will be complex dependency webs across different files and subprojects, and humans are notoriously bad at packaging pieces of code into sensible modules, libraries or packages, with well-defined and maintained boundaries. Being able to maintain ABI compatibility, deprecating things while introducing replacement etc. is a massive engineering work and simply makes people much less likely to change the way things are done, even if they are broken or not ideal. That's an effort you'll do for a kernel (and only on specific boundaries) but not for the average program.
I'm not confusing API with ABI. If you don't have a stable ABI then you essentially forfeit the traditional method of having every program on the system use the same copy (and therefore version) of that library, which in turn encourages them to each use a different version and facilitates API instability by making the bad thing easier.
> Second there is no practical difference between first and third-party for any sufficiently complex project.
Even when you have a large project, making use of curl or sqlite or openssl does not imply that you would like to start maintaining a private fork.
There are also many projects that are not large enough to absorb the maintenance burden of all of their external dependencies.
> Third you cannot have multiple versions of the same thing in the same program without very careful isolation and engineering.
Which is all the more reason to encourage every program on the system to use the same copy by maintaining a stable ABI. What do you do after you've encouraged everyone to include their own copy of their dependencies and therefore not care if there are many other incompatible versions, and then two of your dependencies each require a different version of a third?
> In any non-trivial project there will be complex dependency webs across different files and subprojects, and humans are notoriously bad at packaging pieces of code into sensible modules, libraries or packages, with well-defined and maintained boundaries.
This feels like arguing that people are bad at writing documentation so we should reduce their incentive to write it, instead of coming up with ways to make doing the good thing easier.
You'd be wrong. If the build system has full knowledge on how to build the whole thing, it can do a much better job. Caching the outputs of the build is trivial.
If you import some ready made binaries, you have no way to guarantee they are compatible with the rest of your build or contain the features you need. If anything needs updating and you actually bother to do it for correctness (most would just hope it's compatible) your only option is usually to rebuild the whole thing, even if your usage only needed one file.
"That makes everything slow, inefficient, and widely dangerous."
There is nothing faster and more efficient than building C programs. I am also not sure what is dangerous about having libraries. C++ is quite different though.
Of course there is. Raw machine code is the gold standard, and everything else is an attempt to achieve _something_ at the cost of performance, C included, and that's even when considering whole-program optimization and ignoring the overhead introduced by libraries. Other languages with better semantics frequently outperform C (slightly) because the compiler is able to assume more things about the data and instructions being manipulated, generating tighter optimizations.
I was talking about building code not run-time. But regarding run-time, no other language does outperform C in practice, although your argument about "better semantics" has some grain of truth in it, it does not apply to any existing language I know of - at least not to Rust which is in practice for the most part still slower than C.
Neither "ODR violations" nor IFNDR exist in C. Incompatibility across translation units can cause undefined behavior in C, but this can easily be avoided.
Build everything from source within a single unified workspace, cache whatever artifacts were already built with content-addressable storage so that you don't need to build them again.
You should also avoid libraries, as they reduce granularity and needlessly complexify the logic.
I'd also argue you shouldn't have any kind of declaration of dependencies and simply deduce them transparently based on what the code includes, with some logic to map header to implementation files.
The problem is doing this requires a team to support it that is realistically as large as your average product team. I know Bazel is the solution here but as someone who has used C++, modified build systems and maintained CI for teams for years, I have never gotten it to work for anything more than a toy project.
It usually ends up somewhat non-generic, with project-specific decisions hardcoded rather than specified in a config file.
I usually make it so that it's fully integrated with wherever we store artifacts (for CAS), source (to download specific revisions as needed), remote running (which depending on the shop can be local, docker, ssh, kubernetes, ...), GDB, IDEs... All that stuff takes more work for a truly generic solution, and it's generally more valuable to have tight integration for the one workflow you actually use.
Since I also control the build image and toolchain (that I build from source) it also ends up specifically tied to that too.
In practice, I find that regardless of what generic tool you use like cmake or bazel, you end up layering your own build system and workflow scripts on top of those tools anyway. At some point I decided the complexity and overhead of building on top of bazel was more trouble than it was worth, while building it from scratch is actually quite easy and gives you all the control you could possibly need.
>Build everything from source within a single unified workspace, cache whatever artifacts were already built with content-addressable storage so that you don't need to build them again.
Which tool do you use for content-addressable storage in your builds?
>You should also avoid libraries, as they reduce granularity and needlessly complexify the logic.
This isn't always feasible though.
What's the best practice when one cannot avoid a library?
You can use S3 or equivalent; a normal filesystem (networked or not) also works well.
You hash all the inputs that go into building foo.cpp, and then that gives you /objs/<hash>.o. If it exists, you use it; if not, you build it first. Then if any other .cpp file ever includes foo.hpp (directly or indirectly), you mark that it needs to link /objs/<hash>.o.
You expand the link requirements transitively, and you have a build system. 200 lines of code. Your code is self-describing and you never need to write any build logic again, and your build system is reliable, strictly builds only what it needs while sharing artifacts across the team, and never leads to ODR.
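A toy sketch of that core loop, with std::hash standing in for a real content hash like SHA-256, and error handling and transitive link tracking omitted:

    #include <cstdlib>
    #include <filesystem>
    #include <fstream>
    #include <functional>
    #include <sstream>
    #include <string>
    #include <vector>
    namespace fs = std::filesystem;

    std::string slurp(const fs::path& p) {
        std::ostringstream ss;
        ss << std::ifstream(p).rdbuf();
        return ss.str();
    }

    // Hash of (flags + source + every header it pulls in) names the object.
    fs::path cached_object(const fs::path& cpp,
                           const std::vector<fs::path>& headers,
                           const std::string& flags) {
        std::string input = flags + slurp(cpp);
        for (const auto& h : headers) input += slurp(h);
        const auto key = std::to_string(std::hash<std::string>{}(input));
        const fs::path obj = fs::path("objs") / (key + ".o");
        if (!fs::exists(obj)) {  // cache miss: build exactly once
            fs::create_directories("objs");
            std::system(("c++ " + flags + " -c " + cpp.string() +
                         " -o " + obj.string()).c_str());
        }
        return obj;
    }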
In my experience, no one does build systems right; Cargo included.
The standard was initially meant to standardize existing practice. There is no good existing practice. Very large institutions depending heavily on C++ systematically fail to manage the build properly despite large amounts of third party licenses and dedicated build teams.
With AI, how you build and integrate together fragmented code bases is even more important, but someone has yet to design a real industry-wide solution.
Speedy convenience beats absolute correctness anyday. Humans are not immortal and have finite amount of time for life and work. If convenience didn't matter, we would all still be coding in assembly or toggling hardware switches.
C++ builds are extremely slow because they are not correct.
I'm doing a migration of a large codebase from local builds to remote execution and I constantly have bugs with mystery shared library dependencies implicitly pulled from the environment.
This is extremely tricky because if you run an executable without its shared library, you get "file not found" with no explanation. Even AI doesn't understand this error.
I may be in the minority but I like that C++ has multiple package managers, as you can use whichever one best fits your use case, or none at all if your code is simple enough.
It's the same with compilers, there's not one single implementation which is the compiler, and the ecosystem of compilers makes things more interesting.
Multiple package managers is fine, what's needed is a common repository standard (or even any repository functionality at all). Look at how it works in Java land, where if you don't want to use Maven you can use Gradle or Bazel or what have you, or if you hate yourself you can use Ant+Ivy, but all of them share the same concept of what a dependency is and can use the same repositories.
Also, having one standard packaging format and registry doesn't preclude having alternatives for special use cases.
There should be a happy path for the majority of C++ use cases so that I can make a package, publish it and consume other people's packages. Anyone who wants to leave that happy path can do so freely at their own risk.
The important thing is to get one system blessed as The C++ Package Format by the standard to avoid xkcd 927 issues.
In the Linux world and even Haiku, there is a standard package dependency format, so dependencies aren’t really a problem. Even OSX has Homebrew. Windows is the odd man out.
That would actually be pretty cool. Though I think there might have been papers written on this a few years ago. Does anyone know of these or have any updates about them?
No, because most major compilers don't support header units, much less standard library header units from C++26.
What'll spur adoption is cmake adopting Clang's two-step compilation model that increases performance.
At that point every project will migrate overnight for the huge build time impact since it'll avoid redundant preprocessing. Right now, the loss of parallelism ruins adoption too much.
The idea is great, the execution is terrible. In JS, modules were instantly popular because they were easy to use, added a lot of benefit, and support in browsers and the ecosystem was fairly good after a couple of years. In C++, support is still bad, 6 years after they were introduced.
The idea is great in the same way the idea of a perpetual motion machine is great: I'd love to have a perpetual motion machine (or C++ modules), but it's just not realistic.
IMO, the modules standard should have aimed to only support headers with no inline code (including no templates). That would be a severe limitation, but at least maybe it might have solved the problem posed by protobuf soup (AFAIK the original motivation for modules) and had a chance of being a real thing.
No idea if modules themselves are a failed idea or not, but if C++ wants to keep fighting for developer mindshare, it must make something resembling modules work and figure out package management.
Yes, you have CPM, vcpkg and Conan, but those are not really standard and there is friction involved in getting them to work.
It has the developer mindshare of game engines, games and VFX industry standards, CUDA, SYCL, ROCm, HIP, Khronos APIs, game console SDKs, HFT, HPC, research labs like CERN, Fermilab,...
Ah, and the two major compiler frameworks that all those C++ wannabe replacements use as their backend.
Much like contracts--yes, C++ needs something modules-like, but the actual design as standardized is not usable.
Once big companies like Google started pulling out of the committee, they lost their connection to reality and now they're standardizing things that either can't be implemented or no one wants as specced.
I emphatically agree. C++ needs a standard build system that doesn’t suck ass. Most people would agree it needs a package manager although I think that is actually debatable.
Neither of those things require modules as currently defined.
I'm not the parent commenter, but I think you miss most of the pain points because your experience is with 'personal' projects.
There's not a compatible format between different compilers, or even different versions of the same compiler, or even the same versions of the same compiler with different flags.
This seems immediately to create too many permutations of builds for them to be distributable artifacts as we'd use them in other languages. More like a glorified object file cache. So what problem does it even solve?
Because as a percentage of global C++ builds they’re used in probably 0.0001% of builds with no line of sight to that improving.
They have effectively zero use outside of hobby projects. I don’t know that any open source C++ library I have ever interacted with even pretends that modules exist.
"Failed idea" gives modules too much credit. Outside old codebases, almost no one outside C++ diehards have the patience for the build and tooling circuss they create, and if you need fast iteration plus sane integration with existing deps, modules are like trading your shoes for roller skates in a gravel lot. Adopting them now feels like volunteering to do tax forms in assembbly.
I frankly wish we'd stop developing C++. It's so hard to keep track of all the new unnecessary toys they're adding to it. I thought I knew C++ until I read some recent C++ code. That's how bad it is.
Meanwhile C++ build system is an abomination. Header files should be unnecessary.
It looks like they didn't even add _BitInt types yet. Adding concepts but not adding _BitInt types sounds insane considering how simple _BitInt types are as a programmer (not sure about implementation but it already works in clang).
_BitInt types probably aren’t a priority because they are more or less trivial to implement yourself in C++.
Also, some of the implementation details and semantics do matter in an application dependent way, which makes it a bit of an opinionated feature. I would guess there is a lot of arguing over the set of tradeoffs suitable for a standard, since C++ tends to avoid opinionated designs if it can.
Finally, reflection has arrived, five years after I last touched a line of C++. I wonder how long it would take the committee, if ever, to introduce destructive move.
std::execution is very interesting, but will be difficult to get started with, as cautioned by Sutter. This HPC Wire article demonstrates how to use standard C++ to benefit from asynchronously parallel computation on both CUDA and MPI:
Overlapping communication and computation has been a common technique for decades in high-performance computing to "hide latency", which leads to better scaling. Now standard C++ can be used to express parallel algorithms without tying to a specific scheduler.
If you ask me (and why wouldn't you? :-)...) I really wish the C++ WG would do several things:
1. Standardize a `restrict` keyword and semantics for it (tricky for struct/class fields, but should be done).
2. Uniform Function Call Syntax! That is, make the syntax `obj.f(arg)` mean simply `f(obj, arg)`; see the sketch after this list. That would make my life much easier, both as a user of classes and as their author. In my library-authoring work particularly. And while we're at it, let us use a class's name as a namespace for static methods, so that the static method Obj::f is simply the method f in namespace Obj.
3. Get compiler makers to have an ABI break, so that we can do things like passing wrapped values in registers rather than going through memory. See: https://stackoverflow.com/q/58339165/1593077
4. Get rid of the current allocators in the standard library, which are type-specific (ridiculous) and return pointers rather than regions of memory. And speaking of memory regions (i.e. with address and size but no element type) - that should be standardized too.
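On point 2, a sketch of what UFCS would buy (the second call is hypothetical syntax, not valid C++ today; trim is a made-up helper):

    #include <string>

    std::string trim(const std::string& s) { return s; }  // toy stand-in

    void demo(const std::string& s) {
        auto a = trim(s);      // valid today
        // auto b = s.trim();  // what UFCS would additionally allow: member
        //                     // syntax falling back to the free function
    }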
1. This seems like it'd be far too tricky and would make C++ even more footgunny, especially with references, move constructors, etc.
2. Name lookup and overload resolution are already so complex though! This will likely never be added because it's so core to C++ and would break so much. IMO, it also blurs the line between what's your interface vs. what I've defined.
3. This is every junior C++ engineer's suggestion. Having ABI breaks would probably kill C++, even though it would improve the language long term.
4. Again, you make solid points and I think a lot of the committee would agree with you. However, the committee's job is to adapt C++ in a backwards-compatible way, not to disrupt its users and API with each new release.
There are definitely things to fix in c++ and every graduate engineer I've managed has had the same opinions of patching the standard, without considering the full picture.
I don't understand this at all. There are modules.
But headers are perfectly fine to deal with and have been for decades and decades! Next you'll be arguing that contents pages in all books should be removed.
It's the fault of build systems.
CMake still doesn't support `import std` officially and undocumented things are done in the ecosystem [1]
But once it works and you set up the new stuff, having started a new C++26 project with modules now, it's kinda awesome. I'm certainly never going back. The big compilers are also retroactively adding `import std` to C++20, so support is widening.
You're not supposed to distribute the precompiled module file. You are supposed to distribute the source code of the module.
Header-only projects are the best to convert to modules because you can put the implementation of a module in a "private module fragment" in that same file and make it invisible to users.
That prevents the compile-time bloat many header-only dependencies add. It also avoids distributing a `.cpp` file that has to be compiled and linked separately, which is why so many projects are header-only.
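The shape, for a single-file module (a sketch):

    // math.cppm
    export module math;

    export int triple(int x);  // the only thing importers see

    module :private;  // everything below is invisible to importers, and
                      // changing it doesn't force their recompilation

    int triple(int x) { return 3 * x; }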
Plenty of examples on GitHub: Microsoft has talks on how Office has migrated to modules, and the updated Vulkan tutorials from Khronos have an optional learning path with modules.
That being said, while it looks better than CMake, for anything professional I need remote execution support to deviate from the industry standard. Zig doesn't have that.
This is because large C++ projects reach a point where they cannot be compiled locally if they use the full language. e.g. Multi-hour Chromium builds.
What are you talking about? There is actually too much Unicode awareness in C++. Unicode is not the same thing as UTF-8. And, frankly, no language does it right; I'm not even sure "right" exists with Unicode.
C++20's u8 strings took a giant steaming dump on a number of existing projects, to the point that compiler flags had to be introduced to disable the feature just so C++20 would work with existing codebases. Granted that's UTF-8 (not the same thing as Unicode, as mentioned) but it's there.
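The break in miniature; the escape hatches were GCC/Clang's -fno-char8_t and MSVC's /Zc:char8_t-:

    const char*    a = u8"hi";  // OK through C++17; ill-formed since C++20
    const char8_t* b = u8"hi";  // the C++20 type of a u8 literal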
No such strategy is necessary. That discourse was about not using C++ for applications where Java would work just as well.
The US government still uses C++ widely for new projects. For some types of applications it is actually the language of choice and will remain so for the foreseeable future.
>"For some types of applications it is actually the language of choice..."
Can you give an example please? And how does it correspond to government ONCD report and other government docs "recommending" "safe" languages like: Rust (noted for its ability to prevent memory-unsafe code), Go, Java, Swift, C#, Ruby, Ada
Among other things I design and implement high-performance C++ backends. For some I got SOC 2 Type II certification, but I am curious about the future. I do not give a flying fuck about the criteria for military projects, as I would not ever touch one even if given a chance.
It is the high-performance/high-scale data processing and storage engines for data-intensive applications, some of which are used in high-assurance environments. These are used outside of defense/intel (the data models are generic) but defense/intel tends to set the development standards for government since theirs are the strictest and most rigorous.
An increasingly common requirement is the ability to robustly reject adversarial workloads in addition to being statically correct. Combined with the high-performance/high-scale efficiency requirements, this dictates what the software architecture can look like.
There are a few practical reasons Rust is not currently considered an ideal fit for this type of development. The required architecture largely disables Rust's memory-safety advantages. Modern C++ has significantly better features and capabilities under these constraints, yielding a smaller, more maintainable code base. People worry about supply chain attacks but I don't think that is a major factor here.
Less obviously, C++ has strong compile-time metaprogramming and execution features that can be used to extensively automate verification of code properties with minimal effort. This lets you trivially verify many correctness properties of the code that Rust cannot. It ends up being a comparison of unsafe Rust versus verification-maximalist C++20, which tilts the safety/reliability aspects pretty hard toward C++. Code designed to this standard of reliability has extremely low defect rates regardless of language, but it is much easier in some languages than others. I even shipped Python once.
A lot of casual C++ code doesn't bother with this level of verification, though it really should. It has been possible for years now. More casual applications also have more exposure to memory-safety issues, but those mostly use Java in my experience, at least in government.
> Less obviously, C++ has strong compile-time metaprogramming and execution features that can be used to extensively automate verification of code properties with minimal effort
Would you be willing to share some more information about this? Interested in learning more since this sort of thing rarely seems to come up in typical situations I work in.
Please don't use Hacker News as a religious or ideological battleground. It tramples curiosity. Please don't pick the most religiously/ideologically provocative thing in an article or post to complain about in the thread. Find something interesting to respond to instead.
This is interesting! People ignoring this is, I think, also interesting in its own right. I respect it if other people disagree, but that's my 2c. I think our Overton windows may not agree here, but I think this is part of the value of discussions with other humans.
Are you a moderator? The directive tone of this post is as if from an authority figure, but I do not believe you are one.
I do not believe there is anything religious or ideological here. Could you please clarify?
I also believe it is your post that could be more accurately described as trampling curiosity; there is a role reversal, in that your comment is a better description of trampling curiosity than the post you are responding to. I'm not trying to be snarky - I'm curious how you came to those conclusions.
I watched a talk from Bjarne Stroustrup at CppCon about safety, and it was pretty secondhand-embarrassing watching him try to pretend that C++ has always been safe and that safety mattered to them all along, before Rust came along.
Well, there has been a long campaign against manual memory management - well before Rust was a thing. And along with that, a push for less use of raw pointers, fewer index loops, etc. - all measures which, when adopted, reduce memory-safety hazards significantly. Following the Core Guidelines also helps, as does using spans. Compiler warnings have improved, as has static analysis, all in a long process preceding Rust.
Of course, this is not completely guaranteed safety - but safety has certainly mattered.
Yes, this is what Stroustrup said, and it makes me laugh. IIRC he phrased it with more of a 'we had safety before Rust' attitude. It also misses the point: safety shouldn't be opt-in or require memorising a rulebook. If safety is that easy in C++, why is everyone still sticking their hand in the shredder?
I switched from C++ to Java/Python 20 years ago. I never really fit in; I just don't understand when people talk about complicated frameworks to avoid multithreading/mutexes etc. when basic C++ multithreading is much simpler than RxJava or async/await or whatever is the flavor of the month.
But C++ projects are usually really boring. I want to go back but glad I left. Has anyone found a place where C++ style programming is in fashion but isn't quite C++? I hope that makes sense.
Contracts feel like the right direction but the wrong execution timeline. The Ada/SPARK model shows how powerful contracts become when they feed into static verification — but that took decades of iteration on a language with far cleaner semantics. Bolting that onto C++ where UB is load-bearing infrastructure is a different beast entirely.
The real risk isn't complexity for complexity's sake — it's that a "minimum viable" contracts spec gets locked in, and then the things that would actually make it useful for proof assistants become impossible to retrofit because they'd break the v1 semantics. Bjarne's concern about "incomplete" is more worrying to me than "bloated."
Nobody wanted it.
https://www.digitalmars.com/ctg/contract.html
In my experience, this is absolutely true. I wrote my own metaprogramming frontend for C, and that's basically all you need. At this point, I consider the metaprogramming facilities of a language its most important feature, by far. Everything else is pretty much superfluous by comparison.
It should be done at some point. People can always develop languages with more or fewer things, but piling more things on is just not that useful.
It sounds cool in the minds of the people designing these things, but it is just not that useful. Rust is in the same situation, adding endless crap that is just not that useful.
Specifically about this feature: people can just use asserts. Piling things onto the type system of C++ is never going to be that useful, since it was not designed to be a type system like Rust's. Any improvement gained is not worth piling on more things.
Feels like people who push this stuff do it because "it is just what they do".
It is difficult to overstate just how important these features are for high-performance and high-reliability systems software. These features greatly expand the kinds of safety guarantees that are possible to automate and the performance optimizations that are practical. Without it, software is much more brittle. This isn’t an academic exercise; it greatly reduces the amount of code and greatly increases safety. The performance benefits are nice but that is more on the margin.
One of the biggest knocks against Rust as a systems programming language is that it has weak compile-time and metaprogramming capabilities compared to Zig and C++.
Aren’t Rust macros more powerful than C++ template metaprogramming in practice?
Declarative macros deliberately don't share Rust's syntax because they are macros for Rust: if they shared the same syntax, everything you wrote would be escape upon escape sequence, since you want the macro to emit a loop but not loop itself, etc. But other than the syntax they are pretty friendly; a one-day Rust bootstrap course should probably cover these macros, at least enough that you don't use copy-paste to make those seven functions by hand.
However, the powerful feature you're thinking of is procedural or "proc" macros, and those are a very different beast. Proc macros are effectively compiler plugins: when the compiler sees you've invoked a proc macro, it just runs that code, natively. So in that sense they are certainly more powerful - they can, for example, install Python: "Oh, you don't have Python, but I'm a proc macro for running Python, I'll just install it...". Mara wrote several "joke" proc macros which show off how dangerous/powerful this is - you should not use these - but one of them, for example, switches to the "nightly" Rust compiler and then seamlessly compiles parts of your software which don't work in stable Rust...
While I agree that, generally, compile time metaprogramming is a tremendously powerful tool, the C++ template metaprogramming implementation is hilariously bad.
Why, for example, is printing the source-code text of an enum value so goddamn hard?
Why can I not just loop over the members of a class?
How would I generate debug-vis or serialization code with a normal-ish-looking function call? (Spoiler: you can't; see Cap'n Proto, protobuf, FlatBuffers, or any automated Dear ImGui binding generator.)
These things are incredibly basic and C++ just completely shits all over itself when you try to do them with templates
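For what it's worth, the enum question does have a workaround that libraries like magic_enum are built on - a sketch assuming GCC/Clang's `__PRETTY_FUNCTION__` format, and exactly the kind of hack that shouldn't be necessary:

    #include <string_view>

    template <auto V>
    constexpr std::string_view raw_name() {
        // Expands to something like:
        // "... raw_name() [with auto V = Color::Red]"
        return __PRETTY_FUNCTION__;
    }

    enum class Color { Red, Green };
    // raw_name<Color::Red>() contains the text "Color::Red"; real libraries
    // parse the compiler-specific prefix/suffix to extract just the name.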
I understand some of these kinds of features exist because Rust is Rust, but it still feels useless to learn.
I haven't been following Rust development for about two years, so I don't know what the newest things are.
A shoutout to Eiffel, the first "modern" (circa 1985) language to incorporate Design by Contract. Well done Bertrand Meyer!
Note that this is not the end of contracts. This is a minimum viable start that they intend to add to, but the missing parts are more complex.
Exactly. People stopped using Ada as soon as they were no longer forced to use it.
In other words on its own merits people don't choose it.
Maybe Ada's bad, but programmer preference isn't a strong enough argument. It's just as likely that newer software is buggier and more unsafe or that this otherwise isn't an apples-to-apples comparison.
But my anecdotal experience aside, it is plain to see that developers had the opportunity to continue with Ada and largely did not once they were no longer required to use it.
So, it is exceedingly unlikely that some conspiracy against C++, motivated by mustache-twirling Ada gurus, is afoot. And even if that were true, knocking C++ down several pegs will not make people go back to Ada.
C#, Rust, and Go all exist and are all immensely more popular than Ada. If there were to be a sudden exodus of C++ developers, these languages would likely be the main beneficiaries.
My original point, that C++ isn't what's standing in the way of Ada being popular, still stands.
C++ will never, ever be modern and comprehensible for one reason and one reason alone: backward compatibility.
It does not matter what version of C++ you are using, you are still using C with classes.
Even C++23 is largely usable at this point, though there are still gaps for some features.
I, for one, can write C++ but cannot write a single program in C. If the overlap were so vast, I would be able to write good C, but I cannot.
I've done things with templates to express my ideas in C++ that I cannot do in other languages, and the behaviour of deterministic destructors is what sets it apart from C. It is comprehensible and readable to me.
I would argue that C++ is modern, since it is in use today. Perhaps your definition of "modern" is too narrow?
From my outside vantage point, there seem to be a few different camps about what contracts should even be. The conflict between those groups is why this feature has been contentious for... a decade now?
Some of the pushback against this form of contracts is from people who desire contracts, but don't think that this design is the one that they want.
Finally, it certainly helps to have a standardized mechanism instead of everyone rolling their own, especially with multiple libraries.
You are passing in a memory location that can be read from or written to.
That’s it.
I worked on a massive codebase where we used Microsoft SAL to annotate all parameters to specify intent. The compiler could throw errors based on these annotations to indicate misuse.
This seems like an extension of that.
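For readers who haven't seen SAL, the annotations look roughly like this (a sketch: `CopyBytes` is made up; the annotation macros come from Microsoft's `<sal.h>` and are no-ops outside MSVC's analyzer):

    #include <sal.h>
    #include <cstddef>

    // The analyzer flags callers that pass a null src or a too-small dst.
    void CopyBytes(
        _In_reads_bytes_(count)   const void* src,  // read-only input, count bytes valid
        _Out_writes_bytes_(count) void*       dst,  // callee fills count bytes
        std::size_t count);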
But like modules and concepts, the committee has opted for a staggered rollout. What we have now is effectively syntax sugar over what could already be done with asserts, well-designed types, and exceptions.
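Concretely, the sugar looks something like this under the accepted design (my reading of P2900; details may differ in the final wording, and `checked_sqrt` is made up):

    #include <cmath>

    double checked_sqrt(double x)
        pre(x >= 0.0)        // precondition, evaluated per the build's contract mode
        post(r : r >= 0.0)   // postcondition; r names the return value
    {
        return std::sqrt(x);
    }

Versus today: an `assert(x >= 0.0)` buried in the body, invisible to callers, documentation tools, and static analyzers.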
It sounds (and probably is) insane. But if a feature doesn't break backwards compatibility, and can be implemented in a way that only negligibly affects compiler/IDE performance for codebases that ignore it, what's the issue? Specifically, what significant new issues would it cause that C++'s existing bloat hasn't?
C++20 isn't fully implemented in any one compiler (https://en.cppreference.com/w/cpp/compiler_support.html#C.2B...).
An argument can be made that C++26 features like reflection add complexity but I don't follow that argument for contracts.
We already have some primitive ways to define preconditions, notably the assert macro and the 'restrict' qualifier (the latter only as a compiler extension in C++).
I don't mind a more structured way to define preconditions which can automatically serve as both documentation and debug invariant checks. Though you could argue that a simpler approach would be to "standardize" a convention of using assert() more liberally at the beginning of functions as precondition checks - i.e., that a sequence of 'assert's before any non-'assert' code should semantically be treated as the function's preconditions by documentation generators etc., as sketched below.
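Something like this, where a tool would treat the leading asserts as the contract (a sketch; `mean` is made up):

    #include <cassert>
    #include <cstddef>

    double mean(const double* xs, std::size_t n) {
        assert(xs != nullptr);  // precondition
        assert(n > 0);          // precondition
        double sum = 0.0;       // first non-assert statement: contract ends here
        for (std::size_t i = 0; i < n; ++i) sum += xs[i];
        return sum / n;
    }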
I haven't looked too deep into the design of the actual final contracts feature, maybe it's bad for reasons which have nothing to do with the fundamental idea.
I've been thinking that way for many years now, but clearly I've been wrong. Perhaps C++ is the one language to which the issue of excess complexity does not apply.
So perhaps the issue is not committees per se, but how the committees are put together and what the driving values are.
https://youtu.be/tzXu5KZGMJk?t=3160
Different TUs can be compiled with different settings for the contract behavior. But can they be binary compatible? In general, no. If a function is declared inline in a header, the compiler may have generated two different versions with different contract behaviors, which violates the ODR.
What happens if the contract check calls a helper function that throws an exception?
The whole thing is really, really complex, and I don't assume that I understand it properly. But I can see that there are some valid concerns against the feature as standardized, and that makes me side with the opposition here: this was not baked enough yet.
It sounds like it should solve your problem. At first it seems to work. Then you keep on finding the footguns after it is too late to change the design.
I don't think this opinion is well informed. Contracts are a killer feature that allows implementing static code analysis covering error handling and verifiably correct state. This comes for free in components you consume in your code.
https://herbsutter.com/2018/07/02/trip-report-summer-iso-c-s...
Asserting that no one wants their code to correctly handle errors is a bold claim.
I’m fascinated by the concept of deciding how much complexity (to a human) a feature has. And then the political process of deciding what to remove when everyone agrees something new needs to be accepted.
> bloated committee design and also incomplete
That's truly in that backdoor alley catching fire
Also, almost every feature added to C++ adds a great deal of complexity - everything from modules, concepts, ranges, coroutines... I mean, it's been six years since these were standardized, and all the main compilers still have major bugs and quality-of-implementation issues.
I can hardly think of any major feature added to the language that didn't introduce a great deal of footguns, unintended consequences, and significant compilation-performance issues... singling out contracts is unusual, to say the least.
It does have a runtime cost. There's an attribute to make reads of the variable undefined behavior again and avoid the cost. But if you really, really want to leave it uninitialized, write something like:
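    int x [[indeterminate]];  // assumption: the C++26 attribute meant here;
                              // reading x is plain UB again, with no init cost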
where you're not writing that by accident.
I'm surprised the "= void;" wasn't discussed. People liked it immediately in D, and other alternatives were not proposed.
Just like when I was learning Rust and trying to read some HTTP code, but it was impossible because each function had five generics and two traits.
So I would have to look it up and be very careful about it since I can break something easily in C++.
This just makes things more difficult from the perspective of using/learning the language.
Similar problem with the "unsequenced" and "reproducible" attributes added in C. It sounded cool after I took the time to learn exactly (/s) what they mean. But it is not worth the time to learn, and it is not worth the cognitive load it will put on the people who read the code later, imo.
Volatile is a mess because what you actually want are volatile intrinsics - indeed you might want more (or fewer) depending on the target. If your target can do single-bit hardware writes, it'd be nice to provide an intrinsic for that, rather than hoping you can write `REG |= 0x40` in code and have that write a single bit; on platforms which do not have this single-bit-write feature, that's going to compile to an unsynchronized read-modify-write, which may cause problems. However, instead of having intrinsics, C's volatile was hacked into the type system, and C++ tries to keep that.
I thought constexpr was a hard compile-time constant, but in reality it's a weird hybrid.
this visualisation helped me to wrap my head around it - https://vectree.io/c/c-constness-and-evaluation-qualifiers
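The "hybrid" part in one tiny example - the same function is legal in both worlds:

    #include <cstdio>

    constexpr int square(int x) { return x * x; }

    int main() {
        constexpr int a = square(4);            // forced compile-time evaluation
        int n = 0;
        std::scanf("%d", &n);
        std::printf("%d %d\n", a, square(n));   // same function, run at runtime
    }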
How far is Clang on reflection and contracts?
https://clang.llvm.org/cxx_status.html
https://gcc.gnu.org/projects/cxx-status.html
It has been on Compiler Explorer for a while.
This has been used to add reflection to simdjson.
It's nice to have new features, but what is really killing C++ is Cargo. I don't think a new generation of developers are going to be inspired to learn a language where you can't simply `cargo add` whatever you need and instead have to go through hell to use a dependency.
In the interest of pedantry, locating source files relative to the crate root is a language-level Rust feature, not something specific to Cargo. You can pass any single Rust source file directly to rustc (bypassing Cargo altogether) and it will treat it as a crate root and locate additional files as needed based on the normal lookup rules.
That being said, I'd argue that the fact that this happens so transparently that people don't really need to know this to use Cargo correctly is somewhat the point I was making. Compared to something like cmake, the amount of effort to use it is at least an order of magnitude lower.
For most crates, yes. But you might be surprised how many crates have a build.rs that is doing more complex stuff under the hood (generating code, setting environment variables, calling a C compiler, make or some other build system, etc). It just also almost always works flawlessly (and the script itself has a standardised name), so you don't notice most of the time.
https://mesonbuild.com/
And Mesons awesome dependency handling:
https://mesonbuild.com/Dependencies.html
https://mesonbuild.com/Using-the-WrapDB.html#using-the-wrapd...
https://nibblestew.blogspot.com/2026/02/c-and-c-dependencies...
I suffered with Java through Ant, Maven, and Gradle (the oldest is the best). After reading about GNU Autotools I wondered why the C/C++ folks still suffer. Right at that time Meson appeared and I skipped the suffering.
Feel free to extend WrapDB.
In C++ you don't get lockfiles, you don't get automatic dependency install, you don't get local dependencies, there's no package registry, no version support, no dependency-wide feature flags (this is an incoherent mess in Meson), no notion of workspaces, etc.
Compared to Cargo, Meson isn't even in the same galaxy. And even compared to CMake, Meson is yet another incompatible incremental "improvement" that offers basically nothing other than cute syntax (which in an era when AI writes all of your build system anyway, doesn't even matter). I'd much rather just pick CMake and move on.
> I’m still surprised how people ignore Meson. Please test it :)
I did just that a few years ago and found it rather inconvenient and inflexible, so I went back to ignoring it. But YMMV I suppose.
> After reading about GNU Autotools
Consider Kitware's CMake.
C and C++ are usually stuck in the antiquated thinking that you should build a module, package it into some libraries, install/export the library binaries and associated assets, then import those in other projects. That makes everything slow, inefficient, and wildly dangerous.
There are of course good ways of building C++, but those are the exception rather than the standard.
In C, ABI = API because the declaration of a function contains the name and arguments, which is all the info needed to use it. You can swap out the definition without affecting callers.
That's why Rust allows a stable C-style ABI; the definition of a function declared in C doesn't have to be in C!
But in a C++-style templated function, the caller needs access to the definition to do template substitution. If you change the definition, you need to recompile calling code i.e. ABI breakage.
If you don't recompile calling code and link with other libraries that are using the new definition, you'll violate the one-definition rule (ODR).
This is bad because duplicate template functions are pruned at link-time for size reasons. So it's a mystery as to what definition you'll get. Your code will break in mysterious ways.
This means the C++ committee can never change the implementation of a standardized templated class or function. The only time they did was the std::string change in C++11 (outlawing copy-on-write), and it was such a catastrophe they never did it again.
That is why Rust will not support stable ABIs for any of its features relying on generic types. It is impossible to keep the ABI stable and optimize an implementation.
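A concrete sketch of that mystery (the `twice` template is made up):

    // lib_a was built when the header said:
    //   template <class T> T twice(T x) { return x + x; }
    // lib_b was rebuilt after the header changed to:
    //   template <class T> T twice(T x) { return 2 * x; }
    //
    // Both libraries emit a weak definition of `int twice<int>(int)`. The
    // linker assumes all copies are identical and keeps exactly one, so
    // which body runs depends on link order - a silent ODR violation.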
Rust is interested in having a properly thought out ABI that's nicer than the C ABI which it supports today. It'd be nice to have say, ABI for slices for example. But "freeze everything and hope" isn't that, it means every user of your language into the unforeseeable future has to pay for every mistake made by the language designers, and that's already a sizeable price for C++ to pay, "ABI: Now or never" spells some of that out and we don't want to join them.
The de-facto ABI for slices involves passing/storing pointer and length separately and rebuilding the slice locally. It's hard to do better than that other than by somehow standardizing a "slice" binary representation across C and C-like languages. And then you'll still have to deal with existing legacy code that doesn't agree with that strict representation.
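In C++ terms, that de-facto ABI is just this (a sketch using std::span for the local rebuild):

    #include <cstddef>
    #include <span>

    // The stable boundary: pointer and length passed as two plain words.
    extern "C" double sum_slice(const double* data, std::size_t len) {
        std::span<const double> s(data, len);  // rebuilt locally, costs nothing
        double total = 0.0;
        for (double x : s) total += x;
        return total;
    }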
It seems to me the "convenient" options are the dangerous ones.
The traditional method is for third party code to have a stable API. Newer versions add functions or fix bugs but existing functions continue to work as before. API mistakes get deprecated and alternatives offered but newly-deprecated functions remain available for 10+ years. With the result that you can link all applications against any sufficiently recent version of the library, e.g. the latest stable release, which can then be installed via the system package manager and have a manageable maintenance burden because only one version needs to be maintained.
Language package managers have a tendency to facilitate breaking changes. You "don't have to worry" about removing functions without deprecating them because anyone can just pull in the older version of the code. Except the older version is no longer maintained.
Then you're using a version of the code from a few years ago because you didn't need any of the newer features and it hadn't had any problems, until it picks up a CVE. Suddenly you have vulnerable code running in production but fixing it isn't just a matter of "apt upgrade" because no one else is going to patch the version only you were using, and the current version has several breaking changes so you can't switch to it until you integrate them into your code.
First you confuse API and ABI.
Second there is no practical difference between first and third-party for any sufficiently complex project.
Third you cannot have multiple versions of the same thing in the same program without very careful isolation and engineering. It's a bad idea and a recipe for ODR violations.
In any non-trivial project there will be complex dependency webs across different files and subprojects, and humans are notoriously bad at packaging pieces of code into sensible modules, libraries or packages, with well-defined and maintained boundaries. Being able to maintain ABI compatibility, deprecating things while introducing replacement etc. is a massive engineering work and simply makes people much less likely to change the way things are done, even if they are broken or not ideal. That's an effort you'll do for a kernel (and only on specific boundaries) but not for the average program.
I'm not confusing API with ABI. If you don't have a stable ABI then you essentially forfeit the traditional method of having every program on the system use the same copy (and therefore version) of that library, which in turn encourages them to each use a different version and facilitates API instability by making the bad thing easier.
> Second there is no practical difference between first and third-party for any sufficiently complex project.
Even when you have a large project, making use of curl or sqlite or openssl does not imply that you would like to start maintaining a private fork.
There are also many projects that are not large enough to absorb the maintenance burden of all of their external dependencies.
> Third you cannot have multiple versions of the same thing in the same program without very careful isolation and engineering.
Which is all the more reason to encourage every program on the system to use the same copy by maintaining a stable ABI. What do you do after you've encouraged everyone to include their own copy of their dependencies and therefore not care if there are many other incompatible versions, and then two of your dependencies each require a different version of a third?
> In any non-trivial project there will be complex dependency webs across different files and subprojects, and humans are notoriously bad at packaging pieces of code into sensible modules, libraries or packages, with well-defined and maintained boundaries.
This feels like arguing that people are bad at writing documentation, so we should reduce their incentive to write it, instead of coming up with ways to make doing the good thing easier.
If you import some ready made binaries, you have no way to guarantee they are compatible with the rest of your build or contain the features you need. If anything needs updating and you actually bother to do it for correctness (most would just hope it's compatible) your only option is usually to rebuild the whole thing, even if your usage only needed one file.
There's nothing faster and more efficient than building C programs. I'm also not sure what is dangerous about having libraries. C++ is quite different, though.
The same problems exist.
What are the good ways?
You should also avoid libraries, as they reduce granularity and needlessly complexify the logic.
I'd also argue you shouldn't have any kind of declaration of dependencies and simply deduce them transparently based on what the code includes, with some logic to map header to implementation files.
Bazel is certainly not the solution; it's arguably closer to being the problem. The worst build system I have ever seen was Bazel-based.
Really? I'd love a link to even something that works as a toy project
> Bazel is certainly not the solution; it's arguably closer to being the problem. The worst build system I have ever seen was Bazel-based.
I agree
I usually make it so that it's fully integrated with wherever we store artifacts (for CAS), source (to download specific revisions as needed), remote running (which depending on the shop can be local, docker, ssh, kubernetes, ...), GDB, IDEs... All that stuff takes more work for a truly generic solution, and it's generally more valuable to have tight integration for the one workflow you actually use.
Since I also control the build image and toolchain (that I build from source) it also ends up specifically tied to that too.
In practice, I find that regardless of what generic tool you use like cmake or bazel, you end up layering your own build system and workflow scripts on top of those tools anyway. At some point I decided the complexity and overhead of building on top of bazel was more trouble than it was worth, while building it from scratch is actually quite easy and gives you all the control you could possibly need.
Which tool do you use for content-addressable storage in your builds?
>You should also avoid libraries, as they reduce granularity and needlessly complexify the logic.
This isn't always feasible though.
What's the best practice when one cannot avoid a library?
You hash all the inputs that go into building foo.cpp, and then that gives you /objs/<hash>.o. If it exists, you use it; if not, you build it first. Then if any other .cpp file ever includes foo.hpp (directly or indirectly), you mark that it needs to link /objs/<hash>.o.
You expand the link requirements transitively, and you have a build system. 200 lines of code. Your code is self-describing, you never need to write any build logic again, and your build system is reliable, builds strictly what it needs while sharing artifacts across the team, and never produces ODR violations.
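A toy of the core loop, to make that concrete (assumptions: an `objs/` cache directory, `std::hash` standing in for a real content hash, and no include scanning):

    #include <cstddef>
    #include <cstdlib>
    #include <filesystem>
    #include <fstream>
    #include <functional>
    #include <sstream>
    #include <string>

    namespace fs = std::filesystem;

    static std::string slurp(const fs::path& p) {
        std::ifstream in(p, std::ios::binary);
        std::ostringstream ss;
        ss << in.rdbuf();
        return ss.str();
    }

    // Content-addressed object file: hash the inputs, compile only on a miss.
    fs::path object_for(const fs::path& src) {
        // A real version also hashes the compiler, its flags, and every
        // transitive include, and uses a cryptographic hash.
        std::size_t h = std::hash<std::string>{}(slurp(src));
        fs::create_directories("objs");
        fs::path obj = fs::path("objs") / (std::to_string(h) + ".o");
        if (!fs::exists(obj)) {
            std::string cmd = "c++ -c " + src.string() + " -o " + obj.string();
            std::system(cmd.c_str());  // cache miss: build once, share with the team
        }
        return obj;  // callers record this as a link requirement
    }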
The standard was initially meant to standardize existing practice. There is no good existing practice. Very large institutions depending heavily on C++ systematically fail to manage the build properly despite large amounts of third party licenses and dedicated build teams.
With AI, how you build and integrate together fragmented code bases is even more important, but someone has yet to design a real industry-wide solution.
I'm doing a migration of a large codebase from local builds to remote execution and I constantly have bugs with mystery shared library dependencies implicitly pulled from the environment.
This is extremely tricky because if you run an executable without its shared library, you get "file not found" with no explanation. Even AI doesn't understand this error.
You can also very easily harden this if you somehow don't want to capture libraries from outside certain paths.
You can even build the compiler in such a way that every binary it produces has a built-in RPATH if you want to force certain locations.
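(With GCC or Clang and the GNU linker, that's typically the `-Wl,-rpath,/your/lib/path` driver flag, or a default baked into the toolchain's configuration when you build it yourself.)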
(And "absolute" or other adjectives don't qualify "correctness"... it simply is or isn't.)
It's the same with compilers, there's not one single implementation which is the compiler, and the ecosystem of compilers makes things more interesting.
There should be a happy path for the majority of C++ use cases so that I can make a package, publish it and consume other people's packages. Anyone who wants to leave that happy path can do so freely at their own risk.
The important thing is to get one system blessed as The C++ Package Format by the standard to avoid xkcd 927 issues.
You cannot cargo add Unreal, LLVM, GCC, CUDA,...
What'll spur adoption is cmake adopting Clang's two-step compilation model that increases performance.
At that point every project will migrate overnight for the huge build-time win, since it'll avoid redundant preprocessing. Right now, the loss of parallelism hurts adoption too much.
IMO, the modules standard should have aimed to only support headers with no inline code (including no templates). That would be a severe limitation, but at least maybe it might have solved the problem posed by protobuf soup (AFAIK the original motivation for modules) and had a chance of being a real thing.
Yes, you have CPM, vcpkg, and Conan, but those are not really standard and there is friction involved in getting them to work.
Ah, and the two major compiler frameworks that all those C++ wannabe-replacements use as their backend.
Once big companies like Google started pulling out of the committee, they lost their connection to reality and now they're standardizing things that either can't be implemented or no one wants as specced.
Neither of those things require modules as currently defined.
Personally I use them in new projects using XMake and it just works.
There's not a compatible format between different compilers, or even different versions of the same compiler, or even the same versions of the same compiler with different flags.
This seems immediately to create too many permutations of builds for them to be distributable artifacts as we'd use them in other languages. More like a glorified object file cache. So what problem does it even solve?
They have effectively zero use outside of hobby projects. I don’t know that any open source C++ library I have ever interacted with even pretends that modules exist.
Meanwhile, the C++ build system is an abomination. Header files should be unnecessary.
Also, some of the implementation details and semantics do matter in an application dependent way, which makes it a bit of an opinionated feature. I would guess there is a lot of arguing over the set of tradeoffs suitable for a standard, since C++ tends to avoid opinionated designs if it can.
Someone has to write a proposal, bring it to the various meetings, and get it to win the feature-selection vote among all parties.
Also WG21 tends to disregard C features that can already be implemented within C++'s type system.
e.g. `B = std::move(A);` // You are worried about touching A when it's in this indeterminate state?
https://www.hpcwire.com/2022/12/05/new-c-sender-library-enab...
Overlapping communication and computation has been a common technique for decades in high-performance computing to "hide latency", which leads to better scaling. Now standard C++ can be used to express parallel algorithms without tying to a specific scheduler.
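A minimal flavor of the sender style (names from NVIDIA's `stdexec` reference implementation of P2300; the standardized spelling may differ):

    #include <stdexec/execution.hpp>
    #include <utility>

    int main() {
        // Describe the work as a composable, scheduler-agnostic pipeline.
        auto work = stdexec::just(40)
                  | stdexec::then([](int x) { return x + 2; });

        // Run it. Swapping in a thread-pool or GPU scheduler changes where
        // it runs, not the pipeline itself.
        auto [result] = stdexec::sync_wait(std::move(work)).value();
        return result == 42 ? 0 : 1;
    }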
1. Standardize a `restrict` keyword and semantics for it (tricky for struct/class fields, but should be done).
2. Uniform Function Call Syntax! That is, make the syntax `obj.f(arg)` mean simply `f(obj, arg)` (see the sketch after this list). That would make my life much easier, both as a user of classes and as their author - in my library-authoring work particularly. And while we're at it, let us use a class's name as a namespace for its static methods, so that the static method Obj::f is simply the method f in namespace Obj.
3. Get compiler makers to have an ABI break, so that we can do things like passing wrapped values in registers rather than going through memory. See: https://stackoverflow.com/q/58339165/1593077
4. Get rid of the current allocators in the standard library, which are type-specific (ridiculous) and return pointers rather than regions of memory. And speaking of memory regions (i.e. with address and size but no element type) - that should be standardized too.
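To illustrate item 2, since UFCS is hypothetical - this is NOT valid C++ today:

    struct Vec2 { double x, y; };
    double norm(const Vec2& v);   // a free function

    void demo(const Vec2& v) {
        double a = norm(v);       // what we write today
        double b = v.norm();      // under UFCS: rewritten to norm(v)
    }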
Someone has to bring a written spec to WG21 meetings and push it through.
And like in every open source project that doesn't go the way we like, the work is only done by those that show up.
2. Name lookup and overload resolution are already so complex though! This will likely never be added because it's so core to C++ and would break so much. imo, it also blurs the line between what's your interface vs. what I've defined.
3. This is every junior C++ engineer's suggestion. Having ABI breaks would probably kill C++, even though it would improve the language long-term.
4. Again, you make solid points, and I think a lot of the committee would agree with you. However, the committee's job is to adapt C++ in a backwards-compatible way, not to disrupt its users and API with each new release.
There are definitely things to fix in C++, and every graduate engineer I've managed has had the same opinions about patching the standard, without considering the full picture.
But headers are perfectly fine to deal with and have been for decades and decades! Next you'll be arguing that contents pages in all books should be removed.
This should be your default stack on any small-to-medium sized C++ project.
Bazel, the default pick for very large codebases, also has support for C++20 modules.
Proper reflection is exciting.
Also, useful: https://gcc.gnu.org/projects/cxx-status.html