I stopped using Actions for side projects a few months ago, and things are simpler now (I run tests locally).
I felt like Actions were a time sink: they trick you into feeling productive - like you're pursuing 'best practice' - while stealing time that could otherwise be spent talking to users or working on your application.
Can you explain your thoughts here more? I don't think I'm following.
Are you saying that the act of setting up the CI pipeline is time consuming? Or the act of maintaining it?
The only time I think about my CI pipeline is when it fails. If it fails then it means I forgot to run tests locally.
I guess I can see getting into the weeds with maintaining it, but that always felt more likely when not deploying Dockerized containers, since there was duplication in environment configs that needed to be kept in sync.
Or are you commenting on the fact that all cloud-provided services can go down and are thus a liability?
Or do you feel limited by the time it takes to recreate environments on each deployment? I haven't bumped into this scenario that often. Usually the dominating variable in my CI pipeline is running the tests themselves - usually because of poor decisions around testing practices that make the test runner far slower than it should be. Those issues would also exist locally, though.
Maintaining was painful (strongly disliked when an Action would start failing for unknown reasons, forcing prioritisation of the Action over the app and users).
But setting up a useful CI/CD pipeline was the worst part. The tl;dr is that installing everything and getting system tests (i.e. tests using a browser) to work was just excruciating, partly because of the minutes-long cycle time between making changes in the YAML, committing, pushing, clicking and waiting for it to fail. The cycle time was the killer. (If you could reliably run GHAs locally, my opinion of them would be completely different - and I tried Act, but it had its own problems, mostly related to its image and dependencies being a bit different from those used in GHA.)
I agree that debugging flakey tests locally is much easier, though, and flakey tests in a CI pipeline are really aggravating. Flakey tests are just aggravating in general, though.
I've also had frustrations where, if I didn't lock the versions of my actions, they'd start failing randomly and require intervention. Just getting into a good habit of not relying on implicit versioning for dependencies helped a lot.
Good advice about locking versions. I can literally see
uses: browser-actions/setup-chrome@latest
in my discarded yml file.
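Pinning that action, instead of floating on `@latest`, just means swapping the ref for a release tag or - stricter still - a full commit SHA. Roughly (the tag and SHA below are placeholders; check the action's releases for the real ones):

    uses: browser-actions/setup-chrome@v1          # pin to a published release tag
    uses: browser-actions/setup-chrome@<full-sha>  # or pin to an exact commit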
Regarding containers, nope, I know and love docker, but it's unnecessary complexity for a one person project. IME, projects that use docker move at half the pace of the projects that don't. (similar to Actions - lots of fun, feels like 'best practice', but velocity suffers - i.e. it steals time that should be spent talking to users and building).
I see. I don't feel the way you do about containers. In most scenarios I barely remember the abstraction layer exists. The only time in recent memory I've regretted using one was when doing game development and struggling to get GPU hardware acceleration functioning within the container. On the other hand, I remember lots of historical situations where I've dirtied my host machine's globals, gone back to an older project, and found the project non-functional (python2 vs python3 being namespaced to python, competing node/npm versions, etc.)
If anything, with the advent of Claude Code et al., I've become an even stronger proponent of container-based development. I have absolutely zero interest running AI on my host machine. It's reassuring to know that a rogue "rm -rf" will, at worst, just require me to rebuild my container.
> Regarding containers, nope, I know and love docker, but it's unnecessary complexity for a one person project.
This is a perplexing comment. Can you provide any specifics on what leads you to believe that Docker is a hindrance? I've been using Docker for ages, both professionally and in personal projects, and if anything it greatly simplifies any workflow. I wonder what experience you are having to arrive at such an unusual outcome.
For small projects, getting rid of the CI/CD can make sense in certain cases. I almost started my simple static site with GitHub Actions as setting up CI/CD is “just something you do”, then I realized I want to deploy in 2 seconds, not in 2 minutes.
Makes sense for side projects. I think there's real value for open source projects so people can get feedback quickly and maintainers can know that the tests are passing quickly.
For my current work project, caching was actually slower than just doing fresh package installs... I don't know if GitHub has some transparent caching proxies for npm, NuGet, etc., but they run faster than the cache save/restore does.
We're using the GitHub-hosted runners for pull requests and builds... the build process will build and attach .zip files to a release, and the deploy process runs on self-hosted runners on the target server(s). Tests for PRs take about 2-4 min depending on how long it takes to queue the job. Build/bundling takes about 3-5 minutes. The final deploys are under a minute.
The biggest thing for me is to stay as hands off from the deployed servers as possible. Having done a lot of govt and banking work, it's just something I work pretty hard to separate myself from. Automating all the things and staying as hands off as I can. Currently doing direct deploys, but would rather be deploying containers.
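For anyone who hasn't used self-hosted runners: the deploy job just targets the runner's labels instead of a GitHub-hosted image. A rough sketch of that shape (the job name, labels, artifact name and deploy script are all made up - and the pipeline described above pulls its .zip from the release rather than from workflow artifacts):

    deploy:
      needs: build
      runs-on: [self-hosted, production]   # a runner registered on the target server
      steps:
        - uses: actions/download-artifact@v4
          with:
            name: app-bundle               # placeholder artifact name
        - run: ./deploy.sh                 # placeholder script that applies the bundle on this box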
Um... maybe some actions setups are overly complex, but CI/CD is valuable if done well.
For example, running tests before merge ensures you don't forget to. Running lints/formatters ensures you don't need to refactor later and waste time there.
For my website, it pushes main automatically, which means I can just do something else while it's all doing its thing.
Perhaps you should invest in simplifying your build process instead?
The day I forget to run tests before merging I'll set up CI/CD (hasn't happened before, unlikely, but not impossible).
My build process is gp && gp heroku main. Minor commits straight to main. Major features get a branch. This is manual, simple and loveable. And involves zero all-nighters commit-spamming the .github directory :)
I mean, I agree it would be nice to be able to test actions locally (maybe there's a tool for this). But I keep my actions very simple, so it rarely takes me a lot of time to get them right. See https://github.com/nixpulvis/grapl/blob/master/.github/workf...
If you want more complex functionality, that's why I suggested improving your build system, so the actions themselves are pretty simple.
Where things get more frustrating for me is when you try using more advanced parts of Actions, like releases and artifacts, which aren't as simple as running a script and checking its output/exit code.
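For a sense of scale, the minimal "just run the tests on every push/PR" workflow being described is only a handful of lines. A sketch, assuming the whole test suite runs behind a single `make test` (substitute your own command):

    name: CI
    on: [push, pull_request]
    jobs:
      test:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - run: make test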
Just refreshed my memory by looking at mine. 103 lines. Just the glance brought back painful memories. The worst areas were:
- Installing ruby/bundler/gems/postgres/js libraries, dealing with versioning issues, and every few months having them suddenly stop working for some reason that had to be addressed in order to deploy.
- Installing capybara and headless chrome and running system tests (system tests can be flakey enough locally, let alone remotely).
- Minor issue: I develop on a Mac and deploy to Heroku, so Linux on GHA needs a few more things installed than I'm used to, creating more work (not the end of the world, and good to learn, but slow when it's done via a YAML file that has to be committed and run for a few minutes just to see the error).
Is it a project where it's pretty much just you doing things, or something with a team of people working on things? Are you working in a space with strong auditability concerns or building pretty much hobby software?
For the personal home hacking projects I do, I often don't even make an external repo. I definitely don't do external CI/CD. Often a waste of time.
For more enterprise kinds of development, you bet the final gold artifacts are built only by validated CI/CD instances and deployed by audited, repeatable workflows. If I'm deploying something from a machine that's physically in my hands and that I have an active local login for, something is majorly on fire.
I had that happen when somebody tagged me in a private repository that was later deleted (?).
You can fix it through the API by generating an API token in your settings with notifications permission on and using this (warning: it will mark all your notifications as read up to the last_read_at day):
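Something along these lines, roughly (the original snippet isn't shown above; the token and timestamp are placeholders):

    curl -X PUT \
      -H "Accept: application/vnd.github+json" \
      -H "Authorization: Bearer <YOUR_TOKEN>" \
      -H "X-GitHub-Api-Version: 2022-11-28" \
      -d '{"last_read_at": "<ISO-8601 timestamp>", "read": true}' \
      https://api.github.com/notifications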
Interesting that they went with a custom MIME type and a custom version header. I would have expected the version to be in the MIME type, but I feel like there is a reason behind this.
Yes, private repos that are deleted leave notifications behind.

You can also click the checkbox at the top and then "select all", and it'll mark the phantom notifications as read.
GitHub has been experiencing mass waves of crypto scam bots opening repos and mass-tagging tens of thousands of users on new issues, using the issue body to generate massive scam-marketing content.
I got hit with one of these, then commented on a GitHub issue about it, and ironically got a hundred notifications for everybody's comments, many complaining about phantom notifications.

This has been a known issue since at least 2021, which is ridiculous.

https://github.com/orgs/community/discussions/6874
Yeah there are some issues. PR is stuck at "Checking for the ability to merge automatically..."
By accident I landed on https://us.githubstatus.com/ and everything was green. At first I thought, yeah sure, just report green - then I noticed "GitHub Enterprise Cloud" in the title. There is also an EU mirror: https://eu.githubstatus.com
Edit:
The report just updated with the following interesting bit.
> We identified a faulty network component and have removed it from the infrastructure. Recovery has started and we expect full recovery shortly.
That's interesting - my understanding is that GitHub Enterprise Cloud is part of the same infrastructure as GitHub.com, so this status page seems incorrect? Probably some missing step in the runbook to update both of these pages at the same time.
How long before moderately sized companies start hosting their own git servers again? Surely it wouldn't be that difficult unless your repos are absolutely massive. GitHub outages are so common these days.
And then you need to add another server to the infra / netops / tools team's maintenance burden and then they take it down for an upgrade and it doesn't come back up etc etc.
I don't think outages/downtime are necessarily a good reason to switch to self-hosting. I worked at a company that self-hosted the repo and code review tool and it was great, but it still had the same issues.
Yeah, as someone old enough to have worked at mid-sized companies before cloud-everything became the norm, self-hosting is overly romanticized. People think you'll get a full infrastructure team dedicated to making sure your self-hosted Git/Artifactory/Jira/Grafana/whatever runs super smoothly and never goes down. In reality it ends up being installed by a dev or IT as sort of a side project, and once the config is hacked together (and of course it's a special pet server and not any kind of repeatable infrastructure setup with Ansible or Docker, so yes you're stuck on Ubuntu 12.04 for a decade) they let it "just run" forever because they don't want to touch it (because making changes is the #1 reason for outages) so you're constantly 2+ years behind the latest version of everything.
It's true that outages are probably less frequent, as a consequence of never making any changes. However, when something does break - e.g. security forces someone to actually upgrade the Ubuntu version that's five years past end-of-support, and it breaks - it may take days or weeks to fix, because nobody knows anything about the configuration; it was last touched 10 years ago by one guy who has long since left the company.
I don't think running your own git server on its own is what's preventing this. It's all the other things you're missing, like CI/CD pipelines, code review tools, user management, etc.
https://www.githubstatus.com/history

Seems like Microsoft can't keep this thing from crashing at least three times a month. At this rate it would probably be cheaper just to buy out GitLab.
I experienced an outage (website and any push or pull commands) that lasted for about 1-2 hours on Oct 7th, but didn't see anything on their status page. There was definitely a spike on https://downdetector.ca/status/github/, so I know it wasn't just my ISP.
I experienced that too! In Canada. It was a head-scratcher: none of my teammates had issues, and I could access it just fine on my phone, but I couldn't on my home wifi.
I wish the most popular software forge didn't include a bunch of other software solutions like issue tracking or forums.
Having everything in one service definitely increases interoperability between those solutions, but it also decreases stability. In addition, each of the other systems is not the best in its class (I really detest GH Actions, for example).
Why do so many solutions grow so big? Is it done to increase enterprise adoption?
I agree to a degree, but issue tracking being able to directly work with branches and PRs is natural enough, and then discussions can share a lot of code with the issue tracker.
Getting the same level of interoperability with a separate tool takes significantly more work on both sides, so the monolithic approaches tend to thrive because they can get out the door faster and better.
Forgejo is doing the same thing with its actions. Honestly, I'd prefer if something like Woodpecker became the blessed choice instead, and really good integration with diverse tools was the approach.
If the alternative is each user has to patch together all of the different solutions into one, you are just increasing the number of parts that can go wrong, too. And when they do, it won't be immediately clear who the issue is with.
I do agree there are issues with a single provider for too many components, but I am not sure you get any decreased stability with that versus having a different provider for everything.
Of everything potentially causing scope creep in GitHub, issue tracking and forums might be the least out of scope.
That said, I agree that the execution of many features in GitHub has been lacking for some time now. Bugs everywhere and abysmal performance. We're moving to Forgejo at $startup.

Who else?
Here's the step-by-step guide to self-hosting git repositories:
Change directory to your local git repository that you want to share with friends and colleagues and do a bare clone `git clone --bare . /tmp/repo.git`. You just created a copy of the .git folder without all the checked out files.
Upload /tmp/repo.git to your linux server over ssh. Don't have one? Just order a tiny cloud server from Hetzner. You can place your git repository anywhere, but the best way is to put it in a separate folder, e.g. /var/git. The command would look like `scp -r /tmp/repo.git me@server:/var/git/`.
To share the repository with others, create a group, e.g. `groupadd --users me git`. You will be able to add more users to the group with groupmod.
Your git repository is now writable only by your user (`me` in this example). To make it writable by the git group, you have to change the group on all files in the repository to git with `chgrp -R git /var/git/repo.git` and enable the group write bit on them with `chmod -R g+w /var/git/repo.git`.
This fixes the shared access for existing files. For new files, we have to make sure the group write bit is always on by changing UMASK from 022 to 002 in /etc/login.defs.
There is one more trick. From now on, all new files and folders in /var/git will be created with the user's primary group. We could change users to have git as their primary group.
But we can also force all new files and folders to be created with the parent folder's group instead of the user's primary group. For that, set the setgid bit on all folders in /var/git with `find /var/git -type d -exec chmod g+s \{\} +`
You are done.
Want to host your git repository online? Install caddy and point to /var/git with something like
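(The snippet itself is missing above; a minimal Caddyfile along these lines should do - example.com is a placeholder, and plain file serving covers read-only clones over git's "dumb" HTTP protocol, which needs `git update-server-info` run in the bare repo; the stock post-update hook does that.)

    example.com {
        root * /var/git
        file_server
    }

Your git repository will be instantly accessible via https://example.com/repo.git.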
At least I'm pretty sure the runners are [on Azure] - our account rep keeps trying to get us to use their GPU runners, but they don't have a good GPU model selection and it seems to match what Azure offers.
I self-host a server for a website, and I also compile executables on it. It's been running just fine for 2 years, and it's not even with a big provider - a very niche one actually (Mac servers).
In my experience, I just set everything up inside my container locally, run tests locally, push the Dockerfile to GH, and re-run my CI off of dependencies declared in the Dockerfile. https://stackoverflow.com/questions/61154750/use-local-docke...
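A rough sketch of that pattern (file and image names are made up, and `./run-tests.sh` stands in for whatever the project's test command is - the point is that the Dockerfile is the single source of truth for both local runs and CI):

    # .github/workflows/test.yml (illustrative)
    name: tests
    on: [push, pull_request]
    jobs:
      test:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          # build the same image you build locally...
          - run: docker build -t app-test .
          # ...and run the test suite inside it
          - run: docker run --rm app-test ./run-tests.sh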
My setup before was just build and scp.
Now it takes like 3 mins for a deploy: I haven't set up caching for builds etc., but that feels like a self-made problem.
My proj is pretty simple, so that's probably why.
Wondering when M$ will cut their losses and bail.
If the current state of GH availability is without Azure-induced additional unreliability, I truly fear what it will be on Azure
Edit: Found the discussion about this https://news.ycombinator.com/item?id=45517173
Just be warned if you try it out that if you don't specify which workflow to run, it will just run them all!
Expecting more and more downtime and random issues in the future.
At the same time, self-hosting is great for privacy, cost, or customization. It is not great for uptime.