Is it? Curl dropped the hyper components, but they appear to support rustls as a backend. That work appears to have been done by Prossimo as well, and is conceptually grouped with it on the site.
(In other words: curl is using Rust, just not a specific Rust HTTP/1 backend anymore. But the site doesn’t limit its scope to just that backend.)
Perhaps in part because the sponsors are invested in using Go and benefit from its inclusion in a list of memory safe languages.

Anyone claiming Go doesn't belong on that list is either:

1. attempting to retcon garbage collected languages as not memory safe, or

2. discussing a particular implementation choice of the standard Go runtime, one that was accepted because it is not a practical source of bugs (see https://research.swtch.com/gorace; it is not an inherent feature of the language, just of the implementation, and it is the right practical choice)
But either way: this is the sort of thing I have seen again and again in the Rust "community" that I find deeply off-putting. Build good things, do not play snooty word games.
If we're being extremely strict, Python probably also shouldn't be on that list because the CPython runtime is written in C and has had issues with memory safety in the past.
Ultimately, "memory safety" is a conversation about what a program is intended to do and what semantics the language guarantees that program will have when executed. For the vast majority of programs, you can be just as confident that your Go and Python code will do the right things at runtime as your safe Rust. It's good enough.
No, because then no language would be included, including Rust. Implementation bugs are not the same as unsafety that is an integral part of the language as defined by its standard. Python is defined as memory safe.
I understand this website as focusing on unsafety in a more practical sense of writing your stack in memory safe ways, not in the sense of discussing what's theoretically possible within the language specs. After all, Fil-C is standard compliant, but "run everything under Fil-C" is not the argument it's making. The most common language runtime being memory unsafe is absolutely an applicable argument here, mitigated only by the fact that it's a mature enough runtime that memory issues are vanishingly rare.
Is Go not memory safe? Other than unsafe and other contrived goroutine scenarios, isn't it? I'm actually really curious - I've been writing Go for just a couple years now and my understanding is that the only ways for it to be unsafe are the two scenarios I just described.
One of Rust's core guarantees is that a race condition in safe code will never cause UB. It might return a nondeterministic result, but that result will be safe and well-typed (for example, if it's a Vec, it will be a valid Vec that will behave as expected and, once you have a unique reference, is guaranteed not to change out from under you).
When talking about the kind that leads to torn memory writes, no, it doesn't have those. To share between threads you need to go through atomics or mutexes or other protection methods.
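A minimal sketch of what that looks like in practice (my example, not from the thread): the compiler simply rejects an unsynchronized cross-thread mutation.

    use std::thread;

    fn main() {
        let mut v = vec![1, 2, 3];
        thread::scope(|s| {
            s.spawn(|| v.push(4)); // mutably borrows `v` for the thread
            // Uncommenting the next line is a compile error (E0499:
            // cannot borrow `v` as mutable more than once), so the
            // data race never makes it past the type checker.
            // s.spawn(|| v.push(5));
        });
    }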
Usually people are talking about race conditions. When you say "contrived" you're thinking race conditions are difficult to win and therefore unrealistic, but attackers who have a lot of money on the line spend the time to win all sorts of wild race conditions consistently.
Is there actually a programming language that makes race conditions impossible (I am not being facetious, I actually do not know)? If the existence of races makes a language unsafe, then aren't all languages unsafe?
It's not that race conditions are generally memory-unsafe. The same race conditions would not be memory-unsafe in, say, Java or Python.
See: https://blog.stalkr.net/2015/04/golang-data-races-to-break-m...

Go has a memory model that basically guarantees that the language is memory-safe except within a few marked "unsafe" functions or in case of race conditions involving interfaces or slices. It's pretty easy to come up with an example of such a race condition that will cause reads or writes from/to unpredictable memory addresses. I imagine it's quite feasible to turn this into reads or writes from/to crafted memory addresses, which would be a means to defeat pretty much any security measure implemented in the language.
The Rust community caters to people who are a bit obsessive about safety (including myself) and Rust developers tend to consider this a bug in the design of the Go language (there are a few, albeit much harder to achieve, issues that are vaguely comparable in Rust, and they are considered bugs in the current design of Rust). The Go community tends to attract people who are more interested in shipping than in guarantees, and Go developers who are aware of this issue tend not to care and assume that this is never going to happen in practice (which may or may not be true, I haven't checked).
> Is there actually a programming language that makes race conditions impossible

It'd be very hard to make something that offers that guarantee in the real world. Some of the most common, IRL-exploitable race conditions are ones that involve multiple services/databases, and even if your programming language had such a feature, your production system would not.
> Is there actually a programming language that makes race conditions impossible
To my knowledge, no.
> If the existence of races makes a language unsafe, then aren't all languages unsafe?

Are we talking about "data races" or "race conditions"? One can lead to the other, but race conditions are a much bigger set.

AIUI it's impossible for any language-level controls to prevent any and all race conditions, because some happen outside of the binary/process/computer.

Data races, OTOH, are almost trivial to protect against: a contestable thing must have a guard that ensures a writer has exclusive access to that thing for the duration of the write.

Some languages do this with mutually exclusive locks (mutex/semaphore/Go channels), some languages/paradigms do this by never having shareable objects (functional programming/pass by value), and some (Rust) do this with compile-time checks and firm rules on a single writer.
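For instance, a minimal Rust sketch of the lock-based approach (my example; the same shape applies with a Go mutex or channel):

    use std::sync::Mutex;
    use std::thread;

    fn main() {
        let counter = Mutex::new(0);
        thread::scope(|s| {
            for _ in 0..4 {
                s.spawn(|| {
                    // lock() grants exclusive access for the duration
                    // of the write; the guard unlocks when dropped.
                    *counter.lock().unwrap() += 1;
                });
            }
        });
        assert_eq!(*counter.lock().unwrap(), 4);
    }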
I think Go is effectively memory safe. The relevant test is for the presence of exploitable memory corruption, and to my understanding that’s never been a real issue with Go.
C and C++ as defined by their current standards are memory unsafe. You may argue that some specific implementations manage to stay as memory safe as they can get away with, but even then, features like unions prevent a fully memory-safe implementation.

What standard?

False.

C and C++ are not memory safe if you use the most common and most performant implementations, sure. At some point these folks are going to have to accept that caveat.
> C and C++ as defined by their current standards are memory unsafe.
I don’t think the spec says one way or another (but please correct me if you find verbiage indicating that the language must be memory unsafe).
It’s possible to make the whole language memory safe, including unions. It’s tricky, but possible.
Someone else mentioned Fil-C but Fil-C builds on a lot of prior art. The fact that C and C++ can be memory safe is no secret to those who understand language implementation.
By definition, C and C++ are memory safe as long as you follow the rules. The problem is that the rules cannot be automatically checked and in practice are the source of innumerable issues, from straight-up bugs to subtle standards violations that trigger the optimizer to rewrite your code into what you didn’t intend.
But yes, fil-c is a huge improvement (afaik though it doesn’t solve the UB problem - it just guarantees you can’t have a memory safety issue as a result)
> By definition, C and C++ are memory safe as long as you follow the rules.
This statement doesn't make sense to me.
Memory safety is a property of language implementations, which is all about what happens when the programmer does not follow the rules.
> The problem is that the rules cannot be automatically checked and in practice are the source of innumerable issues, from straight-up bugs to subtle standards violations that trigger the optimizer to rewrite your code into what you didn’t intend.
They can be automatically checked and Fil-C proves this. The prior art had already proved it before Fil-C existed.
> But yes, fil-c is a huge improvement (afaik though it doesn’t solve the UB problem - it just guarantees you can’t have a memory safety issue as a result)
Fil-C doesn't have UB. If you find anything that looks like UB to you, please file a GH issue.
Let's also be clear that you're referring to nasal demons specifically, not UB generally. In some contexts, like CPU ISAs, UB means a trap, rather than nasal demons. So let's use the term "nasal demons".
C and C++ only have nasal demons because:
- Policy decisions. For example, making signed integer addition have nasal demons is because someone wanted to cook a benchmark.
- Lack of memory safety in most implementations, combined with a refusal to acknowledge what happens when the wrong kind of memory access occurs. (Note that CPU ISAs like x86 and ARM are not memory safe, but have no nasal demons, because they do define what happens when any kind of memory access occurs.)
So anyway, Fil-C has no nasal demons, because:
- I turned off all of those silly policy decisions for cooking benchmarks.
- The memory safety means that I define what happens when the wrong kind of memory access occurs: the program gets killed with a panic.
The annual report of this org is pretty underspecified. There are numerous directors. Are they getting paid under "ops & admin" (11%)? Is "advancement" (12.3%) marketing?
Prossimo gets 6.8%. Does that money go to programmers? Are people whose works are being rewritten and plagiarized offered money to do the rewrite themselves or does it go to friends and family?
Why does an org that takes donations not produce a proper report?

https://news.ycombinator.com/newsguidelines.html
This is a bit like saying everyone would be a bit less jaded if the plane staying in the air wasn't hung over the Boeing 737 MAX 8 designers' heads by certain communities and used as an existential threat to the company.
Doesn't modern C++ offer the ability to write memory safe code? The primary distinguishing difference from Rust, on this front, is that Rust is memory safe by default (and allows developers to override memory safety with code that is explicitly declared as unsafe) while C++ requires the developer to make a conscious effort to avoid unsafe code (without providing facilities to declare code as safe or unsafe). While this means that C++ is problematic, it does not make Rust automagically safer - particularly when interfacing with C/C++ code (as would be the case when interfacing with Linux syscalls or most C- or C++-based libraries).
I guess what I'm saying is that Rust is great when dealing exclusively with Rust libraries since it is either memory safe by default or because it is easier to audit (since it is either declared explicitly or implicitly as unsafe). On the other hand, it is not guaranteed to be memory safe. While this may sound like nitpicking, the distinction is important from the perspective of the end user who is unlikely to ever audit the code yet may be swayed by being told that it is written in a memory safe language.
> C++ requires the developer to make a conscious effort to avoid unsafe code
The problem is much worse than how you put it. I've written C++ for more than a decade and it's painful to constantly have to worry about memory safety due to the enormous complexity of the language. Even if you are super diligent, you will make a mistake and it will bite you when you expect it the least.
Relying on clang-tidy, non-default compiler flags, sanitizers and other tools is not only not enough, but a constant source of headache on how to integrate them with your build systems and project requirements.
I think one of the fundamental differences between the SOTA C++ approach to memory safety (e.g., extending unique/shared_ptr) and Rust is that C++ doesn't try to enforce having a single mutable reference to a variable and it still relies on strict aliasing heuristics, and so cannot claim to be fully deterministic.
Still, use-after-free and memory leaks should be impossible.
It'll still let you do a bunch of stuff Rust doesn't, which is up to the programmer to decide whether this is good or not.

Use-after-free is still possible in modern C++ via std::span/std::string_view/etc. outliving the backing object.
All languages offer the ability to write memory-safe code. It's just that doing so is very difficult in C and C++. The benefit of Rust isn't really the assurance of safety that's provided by not using the `unsafe` keyword. After all, pretty much all Rust programs do use the `unsafe` keyword. The benefit of Rust is a combination of many design decisions that make it easy to write safe code.
For example, everyday operations in Rust are almost all defined. But in C++, it is extremely easy to run into undefined behavior by accident and make your code do something bizarre. On the practical side, I have never ever gotten a segfault when writing Rust, but have many times in C++.

And then Ada software caused the loss of US$370 million with Ariane 5.
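To make the "almost all defined" point above concrete, a minimal sketch (my example, not the commenter's): the same mistakes that are UB in C++ have defined outcomes in Rust.

    fn main() {
        let v = vec![1, 2, 3];

        // Out-of-bounds indexing is a defined, guaranteed panic,
        // not a read from whatever happens to sit past the buffer.
        // let x = v[10]; // panics: "index out of bounds"

        // The checked form turns the failure into a value instead.
        assert_eq!(v.get(10), None);

        // Signed overflow is defined too: it panics in debug builds
        // and wraps in release, and the explicit APIs are always safe.
        assert_eq!(i32::MAX.checked_add(1), None);
        assert_eq!(i32::MAX.wrapping_add(1), i32::MIN);
    }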
> Doesn't modern C++ offer the ability to write memory safe code?
Can you name an example of a non-trivial C++ program that's memory safe? The only examples I can think of went to extraordinary lengths with formal methods and none of them are widely used.
If you're writing C/C++ and you don't care about memory safety, you're taking one of a few possible positions:
1. "I don't care what my program does."
Why write it though?
2. "I don't care what the standard says, I've put text into a compiler and it gave me a binary that does the thing."
What if you want to put the same text into a different compiler in the future, or the same compiler again? Are you certain the binary is going to continue doing the thing? Have you even fully tested the binary?
3. "I use a special runtime that makes memory unsafety defined again."
One, I don't believe you unless you're part of a very small group of people, and two, why are you accepting the serious drawbacks (performance, process death, all the broader issues of UB) that come with this?
It's genuinely hard for me to understand why you wouldn't think memory safety is important. Don't you want to write code that's portable and correct? Don't you want to execute other people's programs and trust they won't segfault? Doesn't it frustrate you that the language committees have spent years refusing to address even the lowest-hanging fruit?
Firstly, the existence of unsafe does not inherently mean the code isn’t memory safe.
Secondly, memory safety does not mean no security vulnerabilities. What it does mean is that 80% of the most commonly found vulnerabilities (as gathered through statistical analysis of field failures) are gone. It means that the price for finding a vulnerability is higher.
And also sudo-rs precisely removes a lot of complexity that’s the source of vulnerabilities in normal sudo. There may be better approaches but it’s specifically not targeting 100% compat because sudo is horribly designed right now.
TLDR: this is a lazy knee jerk critique. Please do better in the future.
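To the "Firstly" point above, a minimal illustration (my example): an unsafe block whose obligation is discharged by the check around it is still memory safe.

    /// Safe wrapper: the bounds check makes the unchecked access sound.
    fn first_two_sum(xs: &[i32]) -> Option<i32> {
        if xs.len() >= 2 {
            // SAFETY: indices 0 and 1 are in bounds per the length
            // check above, so get_unchecked cannot read out of bounds.
            Some(unsafe { *xs.get_unchecked(0) + *xs.get_unchecked(1) })
        } else {
            None
        }
    }

    fn main() {
        assert_eq!(first_two_sum(&[3, 4, 5]), Some(7));
        assert_eq!(first_two_sum(&[3]), None);
    }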
That does not contradict what I wrote.

I am confounded by your post, since an article with vulnerabilities in sudo-rs was posted.

You can also read https://news.ycombinator.com/item?id=46388181

> TLDR: this is a lazy knee jerk critique. Please do better in the future.

TL;DR: This is a lazy knee jerk critique. Please do better in the future.

> Is this website promoting Rust, memory unsafety and insecurity?
What a joke of a question. The website prioritizes programs with 98% safe code over programs with 0% safe code. Does that mean it's "promoting memory unsafety" because it's not demanding 100%? No.

> Nor is it guaranteed to be secure.

Nothing is, and nobody is claiming that.

Please explain this section of the Rustonomicon.

https://doc.rust-lang.org/nomicon/working-with-unsafe.html
> This code is 100% Safe Rust but it is also completely unsound. Changing the capacity violates the invariants of Vec (that cap reflects the allocated space in the Vec). This is not something the rest of Vec can guard against. It has to trust the capacity field because there's no way to verify it.
> Because it relies on invariants of a struct field, this unsafe code does more than pollute a whole function: it pollutes a whole module. Generally, the only bullet-proof way to limit the scope of unsafe code is at the module boundary with privacy.
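To make the Nomicon's example concrete, a minimal sketch of the shape it describes, with a hypothetical MiniVec of my own rather than the Nomicon's exact code:

    mod mini_vec {
        pub struct MiniVec {
            ptr: *mut u8,
            len: usize,
            cap: usize, // invariant: `cap` equals the allocated size
        }

        impl MiniVec {
            // 100% safe Rust, yet it silently breaks the invariant
            // that the unsafe block below relies on. Nothing outside
            // this module can do this, because the fields are private:
            // the module boundary is the soundness boundary.
            pub fn make_room(&mut self) {
                self.cap += 1;
            }

            pub fn push(&mut self, byte: u8) {
                if self.len < self.cap {
                    // UB if `cap` lies about the allocation size.
                    unsafe { self.ptr.add(self.len).write(byte) };
                    self.len += 1;
                }
            }
        }
    }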
If 2% unsafe code means that, wildly spitballing, 50% of the code has to be manually reviewed, it's not quite as impressive.
And it might be worse regarding memory safety than the "status quo" if one accepts the assumption that unsafe Rust is harder than C and C++, as many Rust developers do.
https://www.reddit.com/r/rust/comments/1amlfdj/comment/kpmb2...

> It may be even less safe because of the strong aliasing rules. That is, it may be harder to write correct unsafe {} Rust than correct C. Only use unsafe {} when absolutely needed.

https://chadaustin.me/2024/10/intrusive-linked-list-in-rust/
If you're letting safe code in the same module as unsafe code mess with invariants, then the whole module needs to be verified by hand, and should be kept as minimal as feasible. Anything outside the module doesn't need to be verified by hand. "Modules with unsafe" should be a lot lot less than 50%. Your spitball is not a fit to real code.
When I wrote "98% safe code" I meant the code that can be automatically verified by the compiler. I wish the terminology was better.
> If you're letting safe code in the same module as unsafe code mess with invariants, then the whole module needs to be verified by hand, and should be kept as minimal as feasible. Anything outside the module doesn't need to be verified by hand.
More or less correct, but in principle, it also requires modules to have proper encapsulation and interfaces. Otherwise, an interface could be made that enables safe API functions to cause memory unsafety if called incorrectly.
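A sketch of such an interface (hypothetical Buffer, my example, not from the thread): the encapsulation looks right, but one badly designed safe method lets any caller break the invariant the unsafe block trusts.

    mod buffer {
        pub struct Buffer {
            data: Vec<u8>,
            len: usize, // invariant: len <= data.len()
        }

        impl Buffer {
            pub fn new(data: Vec<u8>) -> Self {
                let len = data.len();
                Buffer { data, len }
            }

            // Safe to call, but it lets safe code outside the module
            // violate the invariant: the interface is unsound even
            // though the fields are properly private.
            pub fn set_len(&mut self, len: usize) {
                self.len = len;
            }

            pub fn get(&self, i: usize) -> Option<u8> {
                if i < self.len {
                    // SAFETY: assumes len <= data.len(), which set_len
                    // lets any caller silently break.
                    Some(unsafe { *self.data.get_unchecked(i) })
                } else {
                    None
                }
            }
        }
    }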
> "Modules with unsafe" should be a lot lot less than 50%. Your spitball is not a fit to real code.
> When I wrote "98% safe code" I meant the code that can be automatically verified by the compiler. I wish the terminology was better.
That depends on what is meant by "automatically verified by the compiler". The compiler cannot generally verify outside-unsafe code that modifies invariants that inside-unsafe code might rely on to have memory safety, for instance.
I mean the portion of the code that has no write access to those fragile invariants. At minimum, this includes all modules that don't have unsafe blocks.