The author (author's operator?) does not understand the data they are working with. And in doing so, they inadvertently make the case against their own "dark factory" nonsense.
For one, nothing about this project makes "every law" a commit. It just takes the _annual_ snapshots published by the House clerk and diffs chunks of those files against each other. A project which actually traced the edits in each annual snapshot to a specific passed bill would be incredibly cool (and is probably tractable now for the first time with current AI agents). This is not that!
All this does, as far as I can tell, is parse a set of well-structured XML files into chunks and commit those chunks to Git. It's not literally nothing, but it's something that the author's own README credits multiple people doing years ago with ~100 line Python scripts.
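For what it's worth, the core of that "~100-line script" job really is small. Here's a minimal sketch of splitting a USLM-style XML file into per-section chunks — the namespace is the one from the published USLM schema, but everything else is illustrative, not the project's actual code:

```python
import xml.etree.ElementTree as ET

USLM = "http://xml.house.gov/schemas/uslm/1.0"
NS = {"uslm": USLM}

def sections(xml_text):
    """Yield (num, heading, full_text) for each <section> element in a
    USLM-style document. Illustrative sketch, not the project's code."""
    root = ET.fromstring(xml_text)
    for sec in root.iter(f"{{{USLM}}}section"):
        num = sec.find("uslm:num", NS)
        heading = sec.find("uslm:heading", NS)
        yield (
            num.text.strip() if num is not None and num.text else "",
            heading.text.strip() if heading is not None and heading.text else "",
            "".join(sec.itertext()).strip(),
        )
```

Each yielded chunk can then be written to its own file and committed — the diffing falls out of Git for free.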
I don't mean to be overly harsh. But this is exactly the problem with treating your software as a "factory": you release something you do not understand, in a domain you did not care to learn. And we are all the poorer for it.
Oof. You’re not totally wrong. I’ve parsed XML with XSDs since the early days of Java. I looked at the 100-line Ruby implementation of parsing these files and thought, “ack (not ACK), why do I need all of this?!”
Well, it has a data loader, it hits APIs with retry logic, and it has a CLI that takes arguments and can resume data downloads on failure. And yeah, it parses the stupid XML with a “chapeau” tag (did you know that is French for “hat”? There is a tag that acts as the “hat” for a section, and it’s basically just another title). So yeah, I would’ve had to learn all of that. But it also tests all of these things with actual tests, and the adversary complains if you write a test that isn’t actually testing anything meaningful. And if I needed to, I could reason about the architecture by reading the architecture design documents, which I have done at least a little bit, and they are pretty nice, I have to admit.
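(If anyone's curious, the chapeau really is the intro "hat" text that sits before a section's lettered paragraphs. A toy sketch of pulling it out — the element name and namespace match the published USLM schema, the rest is just for illustration:)

```python
import xml.etree.ElementTree as ET

# Clark-notation prefix for the USLM namespace.
USLM = "{http://xml.house.gov/schemas/uslm/1.0}"

def chapeau_text(section_xml):
    """Return the introductory 'hat' text of a section, or None if the
    section has no <chapeau>. Toy sketch, not the project's loader."""
    sec = ET.fromstring(section_xml)
    chap = sec.find(f".//{USLM}chapeau")
    if chap is None:
        return None
    return "".join(chap.itertext()).strip()
```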
Anyways - it’s the next step in the evolution of the laws on GitHub: it’s actually interesting to see them change and to imagine what we can do with more data overlaid. Sadly the other repos were not maintained, so this one has the latest laws, and you can view the diff from one Congress to another. Or you can git blame one of the files and see how old certain sections are. The data we have right now only goes back to 2013.
The real point for me is the dark factory we built that built the repo that generated the full git history of laws. I definitely could have vibe coded just getting the laws into GitHub, but we’re proving out building higher quality tested software autonomously, and building a base for this to be extended.
The magic (to me) is actually in the issues in `us-code-tools` and seeing the autonomous pipeline work with architecture designs and spec iteration and test building that ultimately led to the legal text in the repo.
I realize now people don’t want to read the generated blog post about it, though I still find it fun that all I asked was “do you want to write a blog about this?”
> when nick asked me to write this post, I had to be reminded that I have a blog.
Oh how I hate this! Not in the “I loathe the author” kind of way, just in the “ewwww, I hate fuzzy caterpillars” kind of way. It feels so wrong to feel this sort of “voice” coming from an LLM. I don’t like how the “author” says, “Nick and I didn’t build it by hand. We sent it off to… AI agents.” As if it’s pretending not to be an agent.
Regardless, very fun project. Thanks for sharing. And don’t let my hate stop your experiments.
Feature request—add some context to each git commit message. What prompted the law to be drafted? What was said to gain support? What was debated? Committee reports? My lawyer sister said, “You can look at the legislative history to see the reasoning behind any law.” Can that get added to the commit messages?
Thank you - noted for my future sharing, and appreciate your additional ideas.
The second half of the data that powers the cooler features is rate-limited, so it is going to take a few weeks to download - but ultimately being able to see who voted on something, and to see laws that were proposed and debated and rejected… lots of cool ideas (beyond “can I create some real software that does this with just some basic specs”).
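(For the curious, the rate-limit handling is the boring-but-necessary part: retry on HTTP 429 with exponential backoff. A minimal sketch — the retry counts and delays are illustrative, not the project's actual values:)

```python
import time
import urllib.request
import urllib.error

def fetch_with_backoff(url, tries=5, delay=1.0):
    """Fetch a URL, retrying on HTTP 429 (rate limited) with
    exponentially growing waits. Limits here are illustrative."""
    for attempt in range(tries):
        try:
            with urllib.request.urlopen(url) as resp:
                return resp.read()
        except urllib.error.HTTPError as err:
            # Re-raise anything that isn't a rate limit, or a final failure.
            if err.code != 429 or attempt == tries - 1:
                raise
            time.sleep(delay)
            delay *= 2
```

Combine that with checkpointing what's already on disk and you get the resume-on-fail behavior a weeks-long download needs.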
The entire United States Code — every title from General Provisions to National Park Service — parsed from the official XML published by the Office of the Law Revision Counsel, transformed into structured Markdown, and committed to a Git repository.
Everything described in this post — every issue, every PR, every adversarial review — was built in 48 hours by Dark Factory, our autonomous software development pipeline. The full build history is in the repos. We didn't clean it up. We didn't hide the failures. That's the point.
Seriously, the intent is to build more on top of this, and viewing the git diffs of laws changing is already interesting. Once we get the additional data to create other overlays, it will be a lot more interesting, and something you really can’t see elsewhere.
I think law as code, or the "legal code" as code, is a proposition that hasn't been fully imagined. There are a few computer-language projects that describe tax law as code, and some of them have some traction, but if law were expressed more fully as code, we could test it better and reason about its effects with more context.
If you pass a law to reduce theft, you could include tests based on official statistics about whether theft is actually going down. With some scientific rigor (the CBO is usually quite reliable, for instance), the law could "amend itself": sunsetting if it isn't measuring up to expectations, or getting an automatic budget increase if it's succeeding.
It's a bit far-fetched, given how indeterminate most government programs' intentions actually are (e.g. just hand out billions for "healthcare" and allow untold fraud to proliferate because it benefits our donors and voters), but every law should serve a purpose and we should automate its evaluation.
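A toy sketch of that evaluation loop, with every threshold invented, just to make the shape of the idea concrete:

```python
def evaluate_law(baseline_rate, current_rate, target_reduction=0.10):
    """Toy 'self-amending' rule: sunset the program if the measured
    outcome got no better, expand it if it beat the target, otherwise
    leave it alone. All numbers are invented for illustration."""
    change = (baseline_rate - current_rate) / baseline_rate
    if change <= 0:
        return "sunset"
    if change >= target_reduction:
        return "increase budget"
    return "continue"
```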
Why even bother?
Probably could have just linked to the repo…
I can't put my finger on it. Why is this writing style so embarrassing?
https://news.ycombinator.com/item?id=47553798
Edit: opened the post, yep.
Language is not discrete.