This is time efficient* but rather wasteful of space.
The best way to save space is to use a Bloom Filter.
If we capture all the even numbers, that would sadly only give us "Definitely not Even" or "Maybe Even".
But for just the cost of doubling our space, we can use two Bloom filters!
So we can construct one bloom filter capturing even numbers, and another bloom filter capturing odd numbers.
Now we have "Definitely not Even" and "Maybe Even" but also "Definitely not Odd" and "Maybe Odd".
In this manner, we can use the "evens" filter to find the odd numbers and the "odds" filter to find the even numbers.
Having done this, we'll be left with just a handful of unlucky numbers that are recorded as both "Maybe even" and "Maybe odd". These will surely be few enough in number that we can special case these in our if/else block.
The filters as a first-pass will save gigabytes of memory!
It's a constant number of lookups, and all good Computer Scientists know that it is therefore an O(1) algorithm.
It is hard to imagine better efficiency than O(1)!
Indeed we could improve it further by performing all evaluations even when we find the answer earlier, ensuring it is a true Constant Time algorithm, safe for use in cryptography.
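If anyone wants to see the two-filter scheme in action, here's a rough Python sketch; the BloomFilter class, its size, hash count, and the 10,000-number universe are all made up for illustration:

import hashlib

class BloomFilter:
    def __init__(self, size_bits=1 << 14, num_hashes=3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

# Capture a modest universe: evens into one filter, odds into the other.
evens, odds = BloomFilter(), BloomFilter()
for n in range(0, 10_000, 2):
    evens.add(n)
    odds.add(n + 1)

def classify(n):
    maybe_even, maybe_odd = evens.might_contain(n), odds.might_contain(n)
    if maybe_even and not maybe_odd:
        return "even"
    if maybe_odd and not maybe_even:
        return "odd"
    return "unlucky"  # both filters say "maybe": fall back to the if/else block

unlucky = sum(classify(n) == "unlucky" for n in range(10_000))
print(classify(2), classify(3), f"{unlucky} unlucky numbers out of 10000")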
> This is time efficient* but rather wasteful of space.
You're saying that the blog's solution is time efficient. Which it is not. Your solution may be O(1) but it is also not efficient. As I'm sure you are aware.
I can tell you a practical solution which is also O(1) and takes up maybe 2 or 3 instructions of program code and no extra memory at all.
`x & 1` or `x % 2 != 0`
This blog post was taking a joke and running with it. And your comment is in that spirit as well; I just wanted to point out that it's by no means time efficient when we have two's or one's complement numbers, which make this algorithm trivial.
I may have missed the * meaning. I got that the bloom filter was an extension of the joke as I mentioned below. I was just clarifying in case someone else missed the joke.
You're absolutely right. The obvious solution would have been to create a boolean table containing all the pre-computed answers, and then simply use the integer you are testing as the index of the correct answer in memory. Now your isEven code is just a simple array lookup! Such an obvious improvement, I can't believe the OP didn't see it.
And with a little extra work you can shrink the whole table's size in memory by a factor of eight, but I'll leave that as an exercise for the interested reader.
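For the impatient reader, a rough Python sketch of the bit-packed variant, scoped to an arbitrarily chosen 16-bit range so it fits in anyone's RAM:

LIMIT = 1 << 16  # keep the demo small; the full 2^32 table is left to the brave

# One bit per integer: bit n is 1 if n is even. Valid only for 0 <= n < LIMIT.
table = bytearray(LIMIT // 8)
for n in range(0, LIMIT, 2):
    table[n // 8] |= 1 << (n % 8)

def is_even(n):
    return bool(table[n // 8] & (1 << (n % 8)))

print(is_even(12), is_even(13))  # True False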
> But for just the cost of doubling our space, we can use two Bloom filters!
We can optimize the hash function to make it more space efficient.
Instead of using remainders to locate filter positions, we can use a Mersenne prime mask (like, say, 31), but in this case I have a feeling the best hash function to use would be to mask with (2^1)-1.
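Spelled out in Python, in case the punchline is too subtle (bloom_hash is a made-up name):

MASK = (2 ** 1) - 1  # = 1, i.e. 0b1

def bloom_hash(n):
    # The "hash" already tells you which filter would match: 0 -> even, 1 -> odd.
    return n & MASK

print([bloom_hash(n) for n in range(8)])  # [0, 1, 0, 1, 0, 1, 0, 1]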
Ah, yes, exactly the pointless diversion I needed for my lunch break. For science: generating a C# switch statement for similar purposes took 7 minutes on similar-ish hardware, but the resulting 99.2GB file could not be opened or compiled ('Stream is too long'), which was slightly disappointing.
Optimization efforts included increasing the internal buffer size of the StreamWriter used to create the source code, which reduced the runtime to around 6 minutes, and running a non-debug build, since the poor Visual Studio metrics-gathering process was observed to be contributing significantly to disk activity as well, but that ultimately didn't matter much. So, ehhm, yes, good job on that I guess?
Isn't the obvious thing to generate different classes for different ranges over the input? Then the classes could be loaded lazily.
And if you then make the ranges tree-shaped you get logarithmic complexity, which massively cuts down the O(n) of the rather naive chained `if` statements.
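A rough Python sketch of such a generator, hedged down to a 2^4 range so the output stays readable (emit_tree and print_parity are invented names):

def emit_tree(lo, hi, indent="    "):
    """Emit C code that decides even/odd for lo <= number < hi
    by binary-splitting the range instead of one flat if chain."""
    if hi - lo == 1:
        word = "even" if lo % 2 == 0 else "odd"
        return f'{indent}printf("{word}\\n");  /* number == {lo} */\n'
    mid = (lo + hi) // 2
    return (f"{indent}if (number < {mid}) {{\n"
            + emit_tree(lo, mid, indent + "    ")
            + f"{indent}}} else {{\n"
            + emit_tree(mid, hi, indent + "    ")
            + f"{indent}}}\n")

# A 2^4 range keeps the output readable; 2^32 is left to the reader's SSD.
print("void print_parity(unsigned number) {")
print(emit_tree(0, 1 << 4), end="")
print("}")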
I wonder if you could generate it via a Roslyn incremental source generator instead of as a file to bypass this limit. I'm guessing not, but it does sound like fun.
> any value over 2^31 seems to give random results.
Wow he really lucked out... On his way to perfecting a fully functioning and performant Even/Odd Detector, he stumbled upon a fully functioning and performant Coin Flip Simulator!
I took an ASIC design class in college, unfortunately with a heavy course load that didn't allow me to focus on it. For our final project we were given a numbered dictionary and asked to design a chip that would accept the characters on a 7-bit interface (ASCII), one character per clock cycle, and output the dictionary number on an output interface (I can't remember how wide). We were graded on the size of the resulting ASIC and how many clock cycles it took from the last character in to the number on the output.
I started designing my modules, a ROM, a register with a ROM pointer, etc., etc., writing the Verilog and working out the clock sync between modules. Then I got 'lazy': I wrote a trie-like implementation in Java and had it spit out the whole tree in Verilog. It worked, and just one clock cycle after the last letter my number would appear on the output. Fastest in the class! Also the most gates in the class. I'm surprised I got a 90 given I didn't use any of the advanced ASIC design techniques the class taught. The TA didn't know what the hell they were looking at.
Yep! Something a bit counterintuitive about circuit design is that dedicated transistors will always beat reusing existing components. If we do reuse existing components like ALUs, multipliers, or state machines, we save on chip area but pay the penalty in clock cycles. Your approach was the extreme version of this tradeoff. You essentially unrolled the entire dictionary lookup into pure combinatorial logic (well, with registers for the input characters). One clock cycle of latency because you weren't doing any sequential searching, comparing, or state machine transitions, just racing electrons through logic gates.
It's akin to a compiler unrolling a loop. Uses more RAM (area) but fewer cycles to execute. Hardware synthesis uses many of the same techniques as compilers use to optimize code.
It's a common pitfall for those learning hardware description languages like Verilog, when they think about them like programming languages. If you write "if (calc) res <= a * b;" and res is 32 bits wide, then you have instantiated a 32-bit fast multiplier circuit dedicated just to that one operation. This is often not what was intended.
Leaning on the analogy too closely can mislead in that way, but the analogy between hardware and software is not a shallow one. A combinatorial circuit is akin to the pure function of functional programming. Anything that can be described as a pure function working on fixed-size integers, floating point, or other discrete data types can be transformed into a combinatorial circuit. And there are algorithms to do so automatically, often with reasonable efficiency.
Free software synthesis has come a long way in recent years, by the way. There are even several hobbyist projects that can take VHDL or Verilog and produce layouts using TTL chips or even discrete transistor logic, with automatic circuit board layout. You can now compile your code directly to circuit board copper masks and a parts list.
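Coming back to the unrolled dictionary lookup: the software analogue of spitting out the whole trie is easy to sketch in Python, with a tiny made-up word list and generated nested ifs standing in for the Verilog:

def build_trie(words):
    trie = {}
    for number, word in enumerate(words):
        node = trie
        for ch in word:
            node = node.setdefault(ch, {})
        node["#"] = number  # end-of-word marker holds the dictionary number
    return trie

def emit_lookup(node, depth=0, indent="    "):
    """Unroll the trie into nested ifs over chars[depth], one level per character."""
    pad = indent * (depth + 1)
    code = ""
    for ch, child in node.items():
        if ch == "#":
            code += f"{pad}if len(chars) == {depth}: return {child}\n"
        else:
            code += f"{pad}if len(chars) > {depth} and chars[{depth}] == {ch!r}:\n"
            code += emit_lookup(child, depth + 1, indent)
    return code

words = ["cat", "car", "dog"]  # stand-in for the numbered dictionary
source = "def word_number(chars):\n" + emit_lookup(build_trie(words)) + "    return -1\n"
ns = {}
exec(source, ns)
print(ns["word_number"]("car"), ns["word_number"]("dog"), ns["word_number"]("cow"))  # 1 2 -1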
This could obviously be done with much less code: just add "if"s for all even numbers, and at the end just return "odd" if none of the evens matched. 50% less code!
Or even simpler: If it's 0, return "even". If not, do a recursive call to n-1, if that equals "even", return "odd", otherwise return "even".
But the best way is probably to just use a library. Yes, 500MB of additional dependencies, but then it's a one-liner.
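The recursive variant above, sketched in Python for completeness (assumes n >= 0, and it will blow the default recursion limit long before 2^32):

def parity(n):
    if n == 0:
        return "even"
    # Recurse toward zero and flip the answer at each step.
    return "odd" if parity(n - 1) == "even" else "even"

print(parity(10), parity(7))  # even odd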
You brought up an important opportunity for optimization. If you know the distribution of your data, it may make more sense to implement it in terms of the odd numbers and leave even numbers as the fallback. It's important to profile with a realistic distribution of data to make sure you're targeting the correct parity of numbers.
Good point. Have two programs - one checking every even number and returning odd if not even. And then have a program checking every odd number and returning even if not. Then, a simple program to dispatch to either program randomly, so you end up in the long term with good performance for each.
Gemini took 4 seconds to answer this prompt: "Here is a number 4200020010101. Think deeply about it and tell me if it is not or or not even."
So if you're concerned with privacy issues, you can run the assembly version proposed in the article locally and be well within the same order of performance.
Let's thank the author of the article for providing a decent alternative to Google.
ah, but the license is not that good, so we can't reproduce his code.
> Now, this is a time-memory tradeoff, but my time on this earth is limited so I decided to meta-program the if statements using a programmer program in a different programming language.
Yeah... I came here to talk about that. It should have been:
for i in range(0, 2**8, 2):
    print(" if (number == "+str(i)+")")
    print(" printf(\"even\\n\");")
    print(" if (number == "+str(i + 1)+")")
    print(" printf(\"odd\\n\");")
or
for i in range(0, 2**8, 2):
    print(f""" if (number == {i})
 puts("even");
 if (number == {i + 1})
 puts("odd");""")
I think we can improve this. Just make a microservice that generates the code on the fly and streams it to the compiler. Then you also just have to create the necessary code and don't waste the SSD with unused code paths.
Microservice kinda implies usage of a container for me. How else would we google-scale it to serve all requests in parallel?
But thinking about it, we probably have to use some more microservices; we can't put all that burden on the requester. So a dedicated service for compiling and executing in sandboxes would be necessary. Also, some local load balancers to control the flow and filter out the useless answers. So I'm not an expert on that devops magic, but I guess this means ~12.5 billion pods for a fast enough result. Do Amazon or Google offer planetary scale for services?
I recently asked a Qwen model to write me some code to remove errant spaces ("c he es e" instead of "cheese") in OCR'd text. It proceeded to write 'if' blocks for every combo of all English words and all possible errant spaces. I did not wait for it to finish...
If the author is available for consulting I have this bag of rice I need cooked. Should be around 30,000 grains, each needs about 1mL of water and 2m on the stove. Will pay $10 (2025 dollars)
This reminds me of when I learned to program on my casio calculator.
There was a function to detect a key press which would return a number identifying the pressed key.
I needed to map that number to the letter printed on the key to print it on the screen.
I don't remember whether there was no hashmap data structure or I just didn't know about it, but I implemented it with a series of ifs.
The problem with that solution is that while mapping A was fast, Z was very slow because it was at the end of the list.
That is how I discovered divide and conquer / dichotomy with if branches.
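The dichotomy version looks roughly like this in Python; the key codes here are made up, the calculator's real ones differ:

from bisect import bisect_left

# Hypothetical key codes for A..Z, sorted ascending.
KEY_CODES = [31 + i for i in range(26)]
LETTERS = [chr(ord("A") + i) for i in range(26)]

def letter_for(code):
    # Binary search: ~5 comparisons instead of up to 26 chained ifs.
    i = bisect_left(KEY_CODES, code)
    if i < len(KEY_CODES) and KEY_CODES[i] == code:
        return LETTERS[i]
    return None

print(letter_for(31), letter_for(56))  # A Z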
I see this is from 2023… the article refs GPT even then. Can’t believe that much time has already gone by; it still seems like “last year’s big news”.
I was gonna comment “this is what I really like to see on HN”. Then I saw the date and was sad that we’re having to dip into the history box to find fun/interesting articles more often of late, it seems.
Anyone else interested in a temporary moratorium on all things LLM? We could have GPT-free-Wednesday, or something like that :)
> Anyone else interested in a temporary moratorium on all things LLM? We could have GPT-free-Wednesday, or something like that :)
I would be interested in a permanent moratorium, personally. There's no interesting content to be had in the various LLM articles that litter HN these days. Or failing a topic ban, at least give a way to filter it for those of us who are sick of hearing about AI hype.
With data-driven programming, I would have expected an SQL table containing all the precomputed results. Unless you carelessly add an index, it has the same asymptotic performance!
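Something like this sqlite3 sketch, say, over a modest 2^16 range; the table and column names are made up, and the index line is the one you must never carelessly uncomment:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parity (n INTEGER, answer TEXT)")
conn.executemany(
    "INSERT INTO parity VALUES (?, ?)",
    ((n, "even" if n % 2 == 0 else "odd") for n in range(1 << 16)),
)
# conn.execute("CREATE INDEX idx_n ON parity (n)")  # careless; would ruin the O(n) purity

row = conn.execute("SELECT answer FROM parity WHERE n = ?", (31337,)).fetchone()
print(row[0])  # odd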
In fact, some really performant code, such as glMatrix.js, does not use any for loops for matrix math, just to allow the JavaScript engine to optimize the code as much as possible.
Thanks for making me doubt myself & googling who that guy who made python was again, because surely "van der Gussom" isn't a normal Dutch name. Well played.
> As a side note, the program is amazingly performant. For small numbers the results are instantaneous and for the large number close to the 2^32 limit the result is still returned in around 10 seconds.
> How did I do this? Well I jumped online, using a mix of my early life experience coding emulators and hacking and looked into the x86(-64) architecture manuals to figure out the correct opcodes and format for each instruction. … Just kidding, that’s horrible. I asked ChatGPT
Ok, but if you do want to play with writing binary code manually, I recommend Casey Muratori's performance course.
I would have wanted to see them look at the assembly from various optimization levels to see if the compiler really did see through it. Ideally -O1 or something wouldn't see through this but would generate better code in other ways. With optimizations disabled, compilers are often really stupid about their code gen.
Well, I created the 16-bit .c file, because I'm not that curious. gcc -O0 completed immediately and made a 1.5 MB executable. -O1 took about 10 minutes for a 1.8 MB executable. -O2 has been running for 1h15m so far... i7-14700K.
I'm in too deep now, so I'll let it run while I'm at work.
> I decided to implement this in the C programming language as it’s by far the fastest language on the planet to this day (thanks to the visionary genius Dennis Richie)
Am I lost?
Aren't the compiler/linker responsible for fast code, not the language itself?
There are language issues as well. 99% of C programs are valid C++, and if you compile with a C++ compiler instead of a C compiler they will be slightly faster! C++ has a stronger type system, and so once in a while (very rarely) those C programs compile but give incorrect results, since C++ allowed the optimizer to make an assumption that wasn't true. Fortran is often even faster because the language allows for even more assumptions. I don't know where Rust fits in here (Rust is hurt today because the better optimizers are designed for C++ and so don't take advantage of the extra assumptions Rust allows - it was designed to allow different assumptions from C++ and likely could do better with a ground-up optimizer, but that would take a large team a decade+ to write: expensive).
Most of the difference in speed is the optimizer/linker. Assuming a fair competition, the difference between Ada, C, C++, D, Fortran, Rust, Zig (and many others I can't think of) is very rarely even 1% in any real-world situation. Of course it isn't hard to add pessimizations to make any language lose, sometimes accidentally, so fair competitions are very hard to find/make.
Both, usually. A language's semantics can limit how much a compiler can speed up the language. Python, for example, is extremely difficult to make fast due to the fact that almost everything has the semantics of a hashmap lookup. C, in comparison, has relatively little in it that can't be mapped fairly straightforwardly to assembly, and then most of it can be mapped in a more difficult way to faster assembly.
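A quick way to feel that in Python (exact timings will vary by machine and interpreter version):

import timeit

x = 3

def use_global():
    total = 0
    for _ in range(100_000):
        total += x        # LOAD_GLOBAL: resolved through the module's dict
    return total

def use_local():
    x_local = 3
    total = 0
    for _ in range(100_000):
        total += x_local  # LOAD_FAST: an indexed slot, no hashing
    return total

print("global:", timeit.timeit(use_global, number=100))
print("local: ", timeit.timeit(use_local, number=100))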
> I decided to use the slowest language on the planet, Python (thanks to the visionary genius of Ross van der Gussom).
given the article, it's fair to assume the author was joking around
that being said, the way the language is used and its ecosystem do contribute to the executable's efficiency. yet, given C's frugality, and how close its statements are to the instructions actually executed, it's not unfair to say that "C is fast"
The interesting part isn’t the if-statement count but how quickly the compiler and branch predictor erase the differences. It’s a nice demo of why “manual cleverness” rarely beats modern toolchains.
> I saw from the SSD was around 800 MB/s (which doesn’t really make sense as that should give execution speeds at 40+ seconds, but computers are magical so who knows what is going on).
If anyone knows what’s actually going on, please do tell.
A much cooler approach would have been to generate the ASM from the same program, rather than generate a file from python and load that file from C++. The multi-stage build and filesystem are completely unnecessary.
The technique actually has a lot of practical applications, so it's useful to have a C++ library that helps you with generating amd64 machine code.
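In that spirit, a minimal sketch of generating and calling machine code at runtime, in Python rather than C++, Linux/x86-64 only, and assuming the OS still tolerates a writable and executable anonymous mapping:

import ctypes
import mmap

# x86-64 System V: the first integer argument arrives in edi.
#   mov eax, edi ; and eax, 1 ; ret
CODE = bytes([0x89, 0xF8, 0x83, 0xE0, 0x01, 0xC3])

buf = mmap.mmap(-1, mmap.PAGESIZE,
                prot=mmap.PROT_READ | mmap.PROT_WRITE | mmap.PROT_EXEC)
buf.write(CODE)

addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
is_odd = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_int)(addr)

print(is_odd(7), is_odd(1 << 20))  # 1 0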
Map reduce, cluster geo-failover and CDN caching for optimized cold starts in case you have to bring it up from scratch in a new datacenter. Bio for contact info, hourly billing. I have helped many startups reach their first $100M ARR. Buy my audiobook.
This reminds me of my personal "prime number" grabber research https://github.com/tigranbs/prime-numbers I needed to create unique graph nodes and assign prime numbers, and to make things efficient, I thought, why not just download the list of known prime numbers instead of generating them one by one. So I did, and compiled everything into a single Go binary. Ehh, good old days with a nice faith in making "the best" crappy software out there.
I would also like to praise the visionary genius Ross van der Gussom, without whom this wonderful achievement in software engineering would not have been possible!
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "Usage: %s <string>\n", argv[0]);
        return 1;
    }
    char *s = argv[1];
    int i;
    /* find the end of the string */
    for (i = 0; s[i] != '\0'; ++i)
        ;
    /* make sure the string wasn't empty */
    if (i == 0) {
        fprintf(stderr, "Error: empty string\n");
        return 1;
    }
    /* last character is at s[i - 1] */
    char d = s[i - 1];
    if (d == '0')
        printf("even\n");
    if (d == '1')
        printf("odd\n");
    if (d == '2')
        printf("even\n");
    if (d == '3')
        printf("odd\n");
    if (d == '4')
        printf("even\n");
    if (d == '5')
        printf("odd\n");
    if (d == '6')
        printf("even\n");
    if (d == '7')
        printf("odd\n");
    if (d == '8')
        printf("even\n");
    if (d == '9')
        printf("odd\n");
    return 0;
}
number="$1"
if [[ "$number" =~ ^(2|4|6|8|10|12|14|16|18|20)$ ]]; then
    echo even
elif [[ "$number" =~ ^(1|3|5|7|9|11|13|15|17|19)$ ]]; then
    echo odd
else
    echo Nan
fi
It's a joke post with some interesting bits and details.
lol
So you have to make sure to compile only in release mode just to get to 16 bits.
[1]: https://aphyr.com/posts/342-typing-the-technical-interview
[2]: https://www.richard-towers.com/2023/03/11/typescripting-the-...
> “Can I use any language?”
>
> “Sure.”
>
> Move quickly, before he realizes his mistake.
I think I just experienced a segfault
Did you want to test the LLM's grammatical comprehension?
We don’t _have_ to. You could start a blog and display the indomitable human spirit.
Save space.
Ditto. Sure, this overflows the stack, but you look cool doing it. Save time. Just buy more RAM. https://en.wikipedia.org/wiki/IBM_1620#Anecdotes
isEven is a performant, hand-compiled evenness checker for any 32 bit integer. A single file import that does one job and one job only!
"Is it odd or Even?"
"YES"
https://github.com/toji/gl-matrix/blob/master/dist/gl-matrix...
processing the whole number is absurd
You know, to save the performance cost of processing the input as a string, and chomping off all but the last character.
All you need is the final binary digit, which incidentally is the most optimal codegen, `v & 1`.
https://news.ycombinator.com/item?id=38790597
4B If Statements (469 comments)
https://news.ycombinator.com/item?id=38790754
Amazing!
> Lets compile the code, disabling optimizations with /Od to make sure that the pesky compiler doesn’t interfere with our algorithm
And that weird flag is because it's a windows compiler: cl.exe, not gcc.
Then again, the article was clearly sarcastic.
The numbers match quite nicely. 40 GB program size minus 32 GB RAM is 8 GB, divided by 800 MB/s makes 10 seconds.
Moreover, interviewers will be smitten by the hand-optimized assembly code.
else
./check_digit 999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999
Many gigabytes saved!
/s