Incident with Webhooks

(githubstatus.com)

106 points | by munksbeer 5 hours ago

13 comments

  • nomilk 3 hours ago
    I stopped using Actions for side projects a few months ago; things are simpler now (I run tests locally).

    I felt like Actions were a time sink that trick you into feeling productive - like you're pursuing 'best practice' - while stealing time that could otherwise be spent talking to users or working on your application.

    • SeanAnderson 2 hours ago
      Can you explain your thoughts here more? I don't think I'm following.

      Are you saying that the act of setting up the CI pipeline is time consuming? Or the act of maintaining it?

      The only time I think about my CI pipeline is when it fails. If it fails then it means I forgot to run tests locally.

      I guess I can see getting in the weeds with maintaining it, but that always felt more likely when not deploying Dockerized containers, since there was duplication in environment configs that needed to be kept synchronized.

      Or are you commenting on the fact that all cloud-provided services can go down and are thus a liability?

      Or do you feel limited by the time it takes to recreate environments on each deployment? I haven't bumped into this scenario that often. Usually the dominating variable in my CI pipeline is the act of running the tests themselves, often due to poor decisions around testing best practices that cause the test runner to execute far slower than desired. Those issues would also exist locally, though.

      • nomilk 2 hours ago
        Maintaining was painful (I strongly disliked it when an Action would start failing for unknown reasons, forcing prioritisation of the Action over the app and users).

        But setting up a useful CI/CD pipeline was the worst part. The tl;dr is that installing everything and getting system tests (i.e. tests using a browser) to work was just excruciating, partly because of the minutes-long cycle time between making a change in the yaml, committing, pushing, clicking, and waiting for it to fail. The cycle time was the killer. (If you could reliably run GHAs locally, my opinion of them would be completely different. I tried Act, but it had its own problems, mostly related to its image and dependencies being a bit different from those used in GHA.)

        More details (linking to save repeating them): https://news.ycombinator.com/item?id=45530753

        • SeanAnderson 2 hours ago
          Why are you having to manage your dependencies separately in GH Actions YAML? Are you not using containers for local development?

          In my experience, I just set everything up inside my container locally, run tests locally, push the Dockerfile to GH, and re-run my CI off of dependencies declared in the Dockerfile. https://stackoverflow.com/questions/61154750/use-local-docke...
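
          A rough sketch of that flow, assuming a hypothetical Dockerfile that bakes the test tooling into the image (the image tag and test command here are illustrative, not from the project above):

              # Build the image once; the Dockerfile is the single source of truth for dependencies
              docker build -t myapp-ci .

              # Run the same test command locally inside the container...
              docker run --rm myapp-ci npm test

              # ...and in CI the runner rebuilds from the same Dockerfile, so there is
              # no separate dependency list to keep in sync in the workflow YAML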

          I agree that debugging flakey tests locally is much easier, though, and flakey tests in a CI pipeline are really aggravating. Flakey tests are just aggravating in general.

          I've also had frustrations where, if I didn't lock the versions of my actions, they'd start failing randomly and require intervention. Just getting into a good habit of not relying on implicit versioning for dependencies helped a lot.

          • nomilk 2 hours ago
            Good advice about locking versions. I can literally see

                uses: browser-actions/setup-chrome@latest 
            
            in my discarded yml file.

            Regarding containers, nope. I know and love Docker, but it's unnecessary complexity for a one-person project. IME, projects that use Docker move at half the pace of the projects that don't. (Similar to Actions - lots of fun, feels like 'best practice', but velocity suffers, i.e. it steals time that should be spent talking to users and building.)

            • SeanAnderson 1 hour ago
              I see. I don't feel the way you do about containers. In most scenarios I barely remember the abstraction layer exists. The only time in recent memory I've regretted using one was when doing game development and struggling to get GPU hardware acceleration functioning within the container. On the other hand, I remember lots of historical situations where I've dirtied my host machine's globals, gone back to an older project, and found the project non-functional (python2 vs python3 being namespaced to python, competing node/npm versions, etc.)

              If anything, with the advent of Claude Code et al., I've become an even stronger proponent of container-based development. I have absolutely zero interest in running AI on my host machine. It's reassuring to know that a rogue "rm -rf" will, at worst, just require me to rebuild my container.

            • motorest 1 hour ago
              > Regarding containers, nope, I know and love docker, but it's unnecessary complexity for a one person project.

              This is a perplexing comment. Can you provide any specifics on what leads you to believe that Docker is a hindrance? I've been using Docker for ages, both professionally and in personal projects, and if anything it greatly simplifies any workflow. I wonder what experience you are having to arrive at such an unusual conclusion.

        • serial_dev 1 hour ago
          For small projects, getting rid of the CI/CD can make sense in certain cases. I almost started my simple static site with GitHub Actions as setting up CI/CD is “just something you do”, then I realized I want to deploy in 2 seconds, not in 2 minutes.
    • raybb 3 hours ago
      Makes sense for side projects. I think there's real value for open source projects, though, so contributors can get feedback quickly and maintainers can see at a glance that the tests are passing.
    • wara23arish 3 hours ago
      i recently started and kinda agree?

      my setup before was just build and scp

      now it takes like 3 mins for a deploy: i haven't set up caching for builds etc., but that feels like a self-made problem

      my proj is pretty simple so that's probably why

      • tracker1 2 hours ago
        For my current work project, caching was actually slower than just doing fresh installs of dependencies... I don't know if GitHub has some transparent caching proxies for npm, nuget, etc., but they run faster than the cache save/restore does.

        We're using the GitHub-hosted runners for pull requests and builds... the build process will build and attach .zip files to a release, and the deploy process runs on self-hosted runners on the target server(s). Tests for PRs take about 2-4 minutes depending on how long it takes to queue the job. Build/bundling takes about 3-5 minutes. The final deploys are under a minute.

        The biggest thing for me is to stay as hands-off from the deployed servers as possible. Having done a lot of govt and banking work, it's just something I work pretty hard to separate myself from: automating all the things and staying as hands-off as I can. Currently doing direct deploys, but would rather be deploying containers.

      • huflungdung 3 hours ago
        [dead]
    • nixpulvis 3 hours ago
      Um... maybe some actions setups are overly complex, but CI/CD is valuable if done well.

      For example, running tests before merge ensures you don't forget to. Running lints/formatters ensures you don't need to refactor later and waste time there.

      For my website, it pushes main automatically, which means I can just do something else while it's all doing its thing.

      Perhaps you should invest in simplifying your build process instead?

      • nomilk 3 hours ago
        It's valuable, but at what cost?

        The day I forget to run tests before merging I'll set up CI/CD (hasn't happened before, unlikely, but not impossible).

        My build process is gp && gp heroku main. Minor commits straight to main. Major features get a branch. This is manual, simple and loveable. And involves zero all-nighters commit-spamming the .github directory :)

        • nixpulvis 3 hours ago
          I mean, I agree it would be nice to be able to test actions locally (maybe there's a tool for this). But I keep my actions very simple, so it rarely takes me a lot of time to get them right. See https://github.com/nixpulvis/grapl/blob/master/.github/workf...

          If you want more complex functionality, that's why I suggested improving your build system, so the actions themselves are pretty simple.

          Where things get more frustrating for me is when you try using more advanced parts of Actions, like releases and artifacts, which aren't as simple as running a script and checking its output/exit code.

          • nomilk 2 hours ago
            < 20 lines is nice.

            Just refreshed my memory by looking at mine: 103 lines. Even a glance brought back painful memories. The worst areas were:

            - Installing ruby/bundler/gems/postgres/js libraries, dealing with versioning issues, and every few months having them suddenly stop working for some reason that had to be addressed in order to deploy.

            - Installing capybara and headless chrome and running system tests (system tests can be flakey enough locally, let alone remotely).

            - Minor issue of me developing on a Mac and deploying to Heroku, so Linux on GHA needs a few more things installed than I'm used to, creating more work (not the end of the world, and good to learn, but slow when it's done via a yaml file that has to be committed and run for a few minutes just to see the error).

    • vel0city 3 hours ago
      Is it a project where it's pretty much just you doing things, or something with a team of people working on things? Are you working in a space with strong auditability concerns or building pretty much hobby software?

      For the personal home hacking projects I do, I often don't even make an external repo. I definitely don't do external CI/CD. Often a waste of time.

      For more enterprise kinds of development, you bet the final gold artifacts are built only by validated CI/CD instances and deployed by audited, repeatable workflows. If I'm deploying something from a machine in my hands that I have an active local login for, something is majorly on fire.

  • christophilus 4 hours ago
    The thing that's been annoying me for a few weeks is an always-on notification indicator, even though my notifications page shows no unread notifications.
    • diath 4 hours ago
      I had that happen when somebody tagged me in a private repository that was later deleted (?).

      You can fix it through the API by generating an API token in your settings with the notifications permission enabled and using this (warning: it will mark all your notifications as read up to the last_read_at date):

          curl -L \
          -X PUT \
          -H "Accept: application/vnd.github+json" \
          -H "Authorization: Bearer <YOUR-TOKEN>" \                            
          -H "X-GitHub-Api-Version: 2022-11-28" \
          https://api.github.com/notifications \
          -d '{"last_read_at":"2025-10-09T00:00:00Z","read":true}'
      • masklinn 4 hours ago
        Another trick is to go into the "done" tab and move at least 25 issues back to unread.

        Then you can click the checkbox at the top and then "select all", and it'll mark the phantom notifications as read.

        • christophilus 4 hours ago
          Oooooh. Snap! Thank you. This was driving me crazy.
      • brewmarche 3 hours ago
        Interesting that they went with a custom MIME type and a custom version header. I would have expected the version to be in the MIME type, but I feel like there is a reason behind this.
      • delfinom 4 hours ago
        Yes private repos that are deleted leave notifications behind.

        GitHub has been experiencing mass waves of crypto scam bots opening repos and mass-tagging tens of thousands of users on new issues, using the issue body to generate massive scam-marketing-style content.

        • MyOutfitIsVague 3 hours ago
          I got hit with one of these, then commented on a GitHub issue about it, and ironically got a hundred notifications for everybody's comments, many complaining about phantom notifications.
        • RyJones 3 hours ago
          I have been getting added to spam repos and orgs several times a day for weeks. It's annoying.
        • masklinn 4 hours ago
          Yep, there was a huge spate of spam with hundreds of people pinged, and when the repos got reported / deleted, the notifications didn't go away...
    • levkk 3 hours ago
      Switched to email notifications and disabled them in GitHub, mainly because of this. Huge quality of life improvement.
    • IshKebab 4 hours ago
      Yeah happens to me all the time. I haven't been able to find a pattern or a reliable way to fix it (without messing with curl).

      This has been a known issue since at least 2021, which is ridiculous.

      https://github.com/orgs/community/discussions/6874

  • madethemcry 4 hours ago
    Yeah, there are some issues. My PR is stuck at "Checking for the ability to merge automatically..."

    By accident I landed on https://us.githubstatus.com/ and everything was green. At first I thought, yeah sure, just report green, then I noticed "GitHub Enterprise Cloud" in the title. There is also an EU mirror: https://eu.githubstatus.com

    Edit:

    The report just updated with the following interesting bit.

    > We identified a faulty network component and have removed it from the infrastructure. Recovery has started and we expect full recovery shortly.

    • nightpool 4 hours ago
      That's interesting—my understanding is that Github Enterprise Cloud is part of the same infrastructure as Github.com, so this status page seems maybe incorrect? Probably some missing step in the runbook to update both of these pages at the same time.
  • anon7000 4 hours ago
    How long before moderately sized companies start hosting their own git servers again? Surely it wouldn't be that difficult unless your repos are absolutely massive. GitHub outages are so common these days.
    • bntyhntr 4 hours ago
      And then you need to add another server to the infra / netops / tools team's maintenance burden and then they take it down for an upgrade and it doesn't come back up etc etc. I don't think outages/downtime are necessarily a good reason to switch to self-hosting. I worked at a company that self-hosted the repo and code review tool and it was great, but it still had the same issues.
      • mjr00 3 hours ago
        Yeah, as someone old enough to have worked at mid-sized companies before cloud-everything became the norm, self-hosting is overly romanticized. People think you'll get a full infrastructure team dedicated to making sure your self-hosted Git/Artifactory/Jira/Grafana/whatever runs super smoothly and never goes down. In reality it ends up being installed by a dev or IT as sort of a side project, and once the config is hacked together (and of course it's a special pet server and not any kind of repeatable infrastructure setup with Ansible or Docker, so yes you're stuck on Ubuntu 12.04 for a decade) they let it "just run" forever because they don't want to touch it (because making changes is the #1 reason for outages) so you're constantly 2+ years behind the latest version of everything.

        It's true that outages are probably less frequent, as a consequence of never making any changes. However, when something does break, e.g. security forces someone to actually upgrade the 5-years-past-end-of-support Ubuntu version and it breaks, it may take several days or weeks to fix, because nobody actually knows anything about the configuration: it was last touched 10 years ago by one guy who has long since left the company.

    • hypeatei 4 hours ago
      I don't think running your own git server is, on its own, what's preventing this. It's all the other things you'd be missing: CI/CD pipelines, code review tools, user management, etc...
      • import 3 hours ago
        That’s already what Gitlab and gitea is doing
        • chrisweekly 1 hour ago
          Yeah - and IME (circa 2020-22) GitLab CI is at least as good as (or better than) GitHub Actions.
      • myrmidon 3 hours ago
        Run your own GitLab server then?
        • edoceo 2 hours ago
          Works for my small team.
    • d_silin 3 hours ago
      How is GitLab doing?
  • nimbius 3 hours ago
    For anyone who wants context, here is the entire history of GitHub "Issues":

    https://www.githubstatus.com/history

    Seems like Microsoft can't keep this thing from crashing at least three times a month. At this rate it would probably be cheaper just to buy out GitLab.

    Wondering when M$ will cut their losses and bail.

    • kelvinjps10 3 hours ago
      So they buy a company, ruin it, and then start again, forever?
  • olao99 3 hours ago
    I'm not looking forward to the Azure migration and the potential for more issues in the coming year.
  • lbrito 4 hours ago
    I experienced an outage (the website and any push/pull commands) that lasted for about 1-2 hours on Oct 7th, but didn't see anything on their status page. There was definitely a spike on https://downdetector.ca/status/github/, so I know it wasn't just my ISP.
    • mparnisari 4 hours ago
      I experienced that too! In Canada. It was a head-scratcher: none of my teammates had issues, and I could access it just fine on my phone but couldn't on my home wifi.
      • lbrito 3 hours ago
        Same. Fraser Valley?
  • montroser 3 hours ago
    To run your GitHub Actions locally, we've had decent success with this tool: https://github.com/nektos/act

    Just be warned, if you try it out, that if you don't specify which workflow to run, it will just run them all!
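
    For reference, a minimal sketch of scoping act to one workflow or job (the file and job names here are illustrative):

        # List the workflows and jobs act has discovered under .github/workflows/
        act -l

        # Run only a single workflow file...
        act -W .github/workflows/test.yml

        # ...or only a single job by its id
        act -j test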

  • hakube 5 hours ago
    Can't merge PRs atm
  • Kavelach 4 hours ago
    I wish the most popular software forge didn't include a bunch of other software solutions like issue tracking or forums.

    Having everything in one service certainly increases interoperability between those solutions, but it also decreases stability. In addition, none of the other systems is the best in its class (I really detest GH Actions, for example).

    Why do so many solutions grow so big? Is it done to increase enterprise adoption?

    • MyOutfitIsVague 3 hours ago
      I agree to a degree, but issue tracking being able to directly work with branches and PRs is natural enough, and then discussions can share a lot of code with the issue tracker.

      Getting the same level of interoperability with a separate tool takes significantly more work on both sides, so monolithic approaches tend to thrive because they can get out the door faster and in better shape.

      Forgejo is doing the same thing with its actions. Honestly, I'd prefer if something like Woodpecker became the blessed choice instead, and really good integration with diverse tools was the approach.

    • cortesoft 3 hours ago
      If the alternative is that each user has to patch together all of the different solutions into one, you are just increasing the number of parts that can go wrong, too. And when they do, it won't be immediately clear who the issue is with.

      I do agree there are issues with a single provider for too many components, but I am not sure you get any decreased stability with that versus having a different provider for everything.

    • poly2it 3 hours ago
      Of everything potentially causing scope creep in GitHub, issue tracking and forums might be the least out of scope.

      That said, I agree that the execution of many features in GitHub has been lacking for some time now. Bugs everywhere and abysmal performance. We're moving to Forgejo at $startup.

  • 0-bad-sectors 4 hours ago
    Oh it's that time of the week again.
  • munksbeer 5 hours ago
    Getting failed pushes, failed PR creation, failed CI pipelines.

    Who else?

    • digitalsushi 4 hours ago
      Probably everyone, since it's been on their status page since before this was asked.
  • koolba 4 hours ago
    It’s kind of funny that the top two posts right now are:

        1. Why Self-Host?
        2. GitHub Issues
    • jsheard 4 hours ago
      And yesterday we had "GitHub pausing feature development to prioritize moving infra to Azure", immediately followed by them breaking their infra.
      • sam_lowry_ 3 hours ago
        Here's the step-by-step guide to self-hosting git repositories:

        Change directory to your local git repository that you want to share with friends and colleagues and do a bare clone `git clone --bare . /tmp/repo.git`. You just created a copy of the .git folder without all the checked out files.

        Upload /tmp/repo.git to your linux server over ssh. Don't have one? Just order a tiny cloud server from Hetzner. You can place your git repository anywhere, but the best way is to put it in a separate folder, e.g. /var/git. The command would look like `scp -r /tmp/repo.git me@server:/var/git/`.

        To share the repository with others, create a group, e.g. `groupadd --users me git`. You will be able to add more users to the group later with groupmod.

        Your git repository is now writable only by your user. To make it writable by the git group, you have to change the group on all files in the repository to git with `chgrp -R git /var/git/repo.git` and enable the group write bit on them with `chmod -R g+w /var/git/repo.git`.

        This fixes the shared access for existing files. For new files, we have to make sure the group write bit is always on by changing UMASK from 022 to 002 in /etc/login.defs.

        There is one more trick. By default, all new files and folders in /var/git will be created with the creating user's primary group. We could change users to have git as their primary group.

        But we can also force all new files and folders to be created with the parent folder's group rather than the user's primary group. For that, set the setgid bit on all folders in /var/git with `find /var/git -type d -exec chmod g+s \{\} +`

        You are done.

        Want to host your git repository online? Install Caddy and point it at /var/git with something like

            example.com {
                root * /var/git
                file_server
            }
        
        Your git repository will be accessible via https://example.com/repo.git (with one caveat, noted below).
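
        The caveat: serving a bare repository through a plain file server uses git's "dumb" HTTP protocol, which needs an up-to-date info/refs index in the repository. Git ships a sample post-update hook that maintains it; a sketch, reusing the /var/git/repo.git path from above:

            # Enable the stock post-update hook, which just runs `git update-server-info`,
            # so clones over plain HTTP keep working after every push
            cd /var/git/repo.git
            mv hooks/post-update.sample hooks/post-update
            chmod +x hooks/post-update
            git update-server-info   # regenerate info/refs once for the current state

            # Day-to-day pushes and pulls for group members go over ssh:
            git clone me@server:/var/git/repo.git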
      • progbits 3 hours ago
        I thought they were already on Azure?

        At least I'm pretty sure the runners are; our account rep keeps trying to get us to use their GPU runners, but they don't have a good GPU model selection and it seems to match what Azure offers.

      • cluckindan 3 hours ago
        Well, of course the corporation wants to dogfood their platform.

        Expecting more and more downtime and random issues in the future.

      • thelastgallon 3 hours ago
        It's better for GitHub to self-host.
    • esafak 3 hours ago
      3. QED
    • paulddraper 3 hours ago
      It is funny.

      At the same time, self-hosting is great for privacy, cost, or customization. It is not great for uptime.

      • danlugo92 3 hours ago
        I self-host a server for a website, and I also compile executables on it. It's been running just fine for 2 years, and it's not even with a big provider - a very niche one actually (Mac servers).
        • paulddraper 1 hour ago
          No restarts?
        • kelvinjps10 3 hours ago
          Why Mac servers?
        • guluarte 3 hours ago
          Aren't Mac servers more expensive than traditional ones? The only reason I've used them is to compile with Xcode.
    • chistev 4 hours ago
      What's the joke?
      • logicallee 3 hours ago
        GitHub hosts git repos, so this is an example of a situation where self-hosting git repos on one's own servers could keep you operational despite a GitHub outage.
      • dvmazur 4 hours ago
        GitHub Pages are affected too