15:45 UTC on 29 October 2025 – Customer impact began.
16:04 UTC on 29 October 2025 – Investigation commenced after monitoring alerts were triggered.
16:15 UTC on 29 October 2025 – We began examining configuration changes within AFD as part of the investigation.
16:18 UTC on 29 October 2025 – Initial communication posted to our public status page.
16:20 UTC on 29 October 2025 – Targeted communications to impacted customers sent to Azure Service Health.
17:26 UTC on 29 October 2025 – Azure portal was failed away from Azure Front Door.
17:30 UTC on 29 October 2025 – We blocked all new customer configuration changes to prevent further impact.
17:40 UTC on 29 October 2025 – We initiated the deployment of our ‘last known good’ configuration.
18:30 UTC on 29 October 2025 – We started to push the fixed configuration globally.
18:45 UTC on 29 October 2025 – Manual recovery of nodes commenced while gradual routing of traffic to healthy nodes began after the fixed configuration was pushed globally.
23:15 UTC on 29 October 2025 – PowerApps mitigated its dependency, and customers confirmed mitigation.
00:05 UTC on 30 October 2025 – AFD impact confirmed mitigated for customers.
Starting at approximately 16:00 UTC, we began experiencing Azure Front Door issues resulting in a loss of availability of some services. In addition, customers may experience issues accessing the Azure Portal. Customers can attempt to use programmatic methods (PowerShell, CLI, etc.) to access/utilize resources if they are unable to access the portal directly. We have failed the portal away from Azure Front Door (AFD) to attempt to mitigate the portal access issues and are continuing to assess the situation.
We are actively assessing failover options of internal services from our AFD infrastructure. Our investigation into the contributing factors and additional recovery workstreams continues. More information will be provided within 60 minutes or sooner.
This message was last updated at 16:57 UTC on 29 October 2025
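For anyone needing that programmatic fallback, here is a minimal sketch using the Azure CLI, assuming it is already installed and a previously cached login is still valid; the resource group and VM names are placeholders:

    # Confirm an existing CLI session still works without going through the Portal
    az account show --output table

    # Enumerate resource groups and resources to verify management-plane access
    az group list --output table
    az resource list --resource-group <your-resource-group> --output table

    # Example of a management operation that does not need the Portal UI
    az vm restart --resource-group <your-resource-group> --name <your-vm-name>

Azure PowerShell (Get-AzResource, Restart-AzVM, etc.) offers equivalents if the CLI isn't available.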
---
Update: 16:35 UTC:
Azure Portal Access Issues
Starting at approximately 16:00 UTC, we began experiencing DNS issues resulting in availability degradation of some services. Customers may experience issues accessing the Azure Portal. We have taken action that is expected to address the portal access issues here shortly. We are actively investigating the underlying issue and additional mitigation actions. More information will be provided within 60 minutes or sooner.
This message was last updated at 16:35 UTC on 29 October 2025
---
Azure Portal Access Issues
We are investigating an issue with the Azure Portal where customers may be experiencing issues accessing the portal. More information will be provided shortly.
This message was last updated at 16:18 UTC on 29 October 2025
Starting at approximately 16:00 UTC, we began experiencing Azure Front Door issues resulting in a loss of availability of some services. We suspect an inadvertent configuration change as the trigger event for this issue. We are taking two concurrent actions: blocking all changes to the AFD services and, at the same time, rolling back to our last known good state.
We have failed the portal away from Azure Front Door (AFD) to mitigate the portal access issues. Customers should be able to access the Azure management portal directly.
We do not have an ETA for when the rollback will be completed, but we will update this communication within 30 minutes or when we have an update.
This message was last updated at 17:17 UTC on 29 October 2025
"We have initiated the deployment of our 'last known good' configuration. This is expected to be fully deployed in about 30 minutes from which point customers will start to see initial signs of recovery. Once this is completed, the next stage is to start to recover nodes while we route traffic through these healthy nodes."
"This message was last updated at 18:11 UTC on 29 October 2025"
At this stage, we anticipate full mitigation within the next four hours as we continue to recover nodes. This means we expect recovery to happen by 23:20 UTC on 29 October 2025. We will provide another update on our progress within two hours, or sooner if warranted.
This message was last updated at 19:57 UTC on 29 October 2025
In many cases: no service health alerts, no status page updates, and no confirmations from the support team in tickets.
Still, we can confirm these issues from different customers across Europe. Mostly the issues are region dependent.
This is the single most frustrating thing about these incidents, as you're hamstrung in what you can do or how you can react until Microsoft officially acknowledges a problem. It took nearly 90 minutes both today and when it happened on 9 October.
It's pretty unlikely. AWS published a public 'RCA' https://aws.amazon.com/message/101925/. A race condition in a DNS 'record allocator' causing all DNS records for DDB to be wiped out.
I'm simplifying a bit, but I don't think it's likely that Azure has a similar race condition wiping out DNS records on _one_ system that then propagates to all others. The similarity might just end at "it was DNS".
That RCA was fun. A distributed system with members that don't know about each other, don't bother with leader elections, and basically all stomp all over each other updating the records. It "worked fine" until one of the members had slightly increased latency and everything cascade-failed down from there. I'm sure there was missing (internal) context but it did not sound like a well-architected system at all.
THIS is the real deal. Some say it's always DNS, but many times it's some routing fuckup with BGP. The two most cursed three-letter-acronym technologies out there.
Whilst the status message acknowledges the issue with Front Door (AFD), it seems as though the rest of the actions are about how to get Portal/internal services working without relying on AFD. For those of us using Front Door, does that mean we're in for a long haul?
Yeah, I am guessing it's just a placeholder till they get more info. I thought I saw somewhere that internally within Microsoft it's seen as a "Sev 1" with "all hands on deck" - Annoyingly I can't remember where I saw it, so if someone spots it before I do, please credit that person :D
It's a Sev 0 actually (as one would expect - this isn't a big secret). I was on the engineering bridge call earlier for a bit.
The Azure service I work on was minimally impacted (our customer facing dashboard could not load, but APIs and data layer were not impacted) but we found a workaround.
Yeah, I saw that, but I'm not sure how accurate that is. A few large apps/companies I know to be 100% on AWS in us-east-1 are cranking along just fine.
We already had to do it for large files served from Blob Storage since they would cap out at 2MB/s when not in cache of the nearest PoP. If you’ve ever experienced slow Windows Store or Xbox downloads it’s probably the same problem.
I had a support ticket open for months about this and in the end the agent said “this is to be expected and we don’t plan on doing anything about it”.
We’ve moved to Cloudflare and not only is the performance great, but it costs less.
Only thing I need to move off Front Door is a static website for our docs served from Blob Storage, this incident will make us do it sooner rather than later.
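(For context, that kind of docs setup is usually just Blob Storage's static-website feature with a CDN in front of it; a rough sketch of the storage side is below, with placeholder names — the CDN/DNS layer in front is the part that actually gets swapped for Cloudflare.)

    # Enable static website hosting on the storage account (account name is a placeholder)
    az storage blob service-properties update \
      --account-name <docsstorageacct> \
      --static-website \
      --index-document index.html \
      --404-document 404.html

    # Publish the docs into the special $web container
    az storage blob upload-batch \
      --account-name <docsstorageacct> \
      --source ./docs-site \
      --destination '$web'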
We are considering the same, but because our website uses an apex domain, we would need to move all DNS resolution to Cloudflare, right? Does it have as nice a "rule set builder" as Azure?
Unless you pay for Cloudflare's Enterprise plan, you're required to have them host your DNS zone; you can use a different registrar as long as you just point your NS records to Cloudflare.
Be aware that if you’re using Azure as your registrar, it’s (probably still) impossible to change your NS records to point to CloudFlare’s DNS server, at least it was for me about 6 months ago.
This also makes it impossible to transfer your domain to them, as Cloudflare's domain transfer flow requires you to set your NS records to point to them before their interface shows a transfer option.
In our case we had to transfer to a different registrar, we used Namecheap.
However, transferring a domain from Azure was also a nightmare. Their UI doesn’t have any kind of transfer option, I eventually found an obscure document (not on their Learn website) which had an az command which would let you get a transfer code which I could give to Namecheap.
Then I had to wait over a week for the transfer timeout to occur because there is no way on Azure side that I could find to accept the transfer immediately.
I found CloudFlare’s way of building rules quite easy to use, different from Front Door but I’m not doing anything more complex than some redirects and reverse proxying.
I will say that Cloudflare’s UI is super fast, with Front Door I always found it painfully slow when trying to do any kind of configuration.
Cloudflare also doesn’t have the problem that Front Door has where it requires a manual process every 6 months or so to renew the APEX certificate.
Thanks :). We don't use Azure as our registrar. It seems I'll have to plan for this then. We also had another issue: AFD has a hard 500 ms TLS handshake timeout (it doesn't matter how high you set the origin timeout settings), which means that if our server was slow for some reason, we would get a 504 origin timeout.
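One way to sanity-check an origin against a handshake cap like that is to measure the TLS timings directly; the hostname below is a placeholder, and curl's time_appconnect is the point at which the TLS handshake finished:

    # Time DNS lookup, TCP connect, and TLS handshake against the origin directly
    curl -s -o /dev/null \
      -w 'dns=%{time_namelookup}s tcp=%{time_connect}s tls=%{time_appconnect}s total=%{time_total}s\n' \
      https://origin.example.com/healthz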
They briefly had a statement about using Traffic Manager alongside your AFD to work around this issue, with a link to learn.microsoft.com/...traffic-manager, and the link didn't work, due to the same issue affecting everyone right now.
They quickly updated the message to REMOVE the link. Comical at this point.
I noticed that Starbucks mobile ordering was down and thought “welp, I guess I’ll order a bagel and coffee on Grubhub”, then GrubHub was down. My next stop was HN to find the common denominator, and y’all did not disappoint.
I’ve seen this up close twice and I’m surprised it’s only twice. Between March and September one year, 6 people on one team had to get new hard drives in their thinkpads and rebuild their systems. All from the same PO but doled out over the course of a project rampup. That was the first project where the onboarding docs were really really good, since we got a lot of practice in a short period of time.
Long before that, the first raid array anyone set up for my (teams’) usage, arrived from Sun with 2 dead drives out of 10. They RMA’d us 2 more drives and one of those was also DOA. That was a couple years after Sun stopped burning in hardware for cost savings, which maybe wasn’t that much of a savings all things considered.
Many years ago (13?), I was around when Amazon moved SABLE from RAM to SSDs. A whole rack came from a single batch, and something like 128 disks went out at once.
I was an intern but everyone seemed very stressed.
Why? Starbucks is not providing a critical service. Spending less money and resources and just accepting the risk that occasionally you won't be able to sell coffee for a few hours is a completely valid decision from both management and engineering pov.
I noticed it when my Netatmo rigamajig stopped notifying me of bad indoor air quality. Lovely. Why does it need to go through the cloud if the data is right there in the home network…
My inner Nelson-from-the-Simpsons wishes I was on your team today, able to flaunt my flask of tea and homemade packed sandwiches. I would tease you by saying 'ha ha!' as your efforts to order coffee with IP packets failed.
I always go everywhere adequately prepared for beverages and food. Thanks to your comment, I have a new reason to do so. Take out coffees are actually far from guaranteed. Payment systems could go down, my bank account could be hacked or maybe the coffee shop could be randomly closed. Heck, I might even have an accident crossing the road. Anything could happen. Hence, my humble flask might not have the top beverage in it but at least it works.
We all design systems with redundancy, backups and whatnot, but few of us apply this thinking to our food and drink. Maybe get a kettle for the office and a backup kettle, in case the first one fails?
It still surprises me how much essential services like public transport are completely reliant on cloud providers, and don't seem to have backups in place.
Here in The Netherlands, almost all trains were first delayed significantly, and then cancelled for a few hours because of this, which had real impact because today is also the day we got to vote for the next parliament (I know some who can't get home in time before the polls close, and they left for work before the polls opened).
Is voting there a one day only event? If not, I feel the solution to that particular problem is quite clear. There’s a million things that could go wrong causing you to miss something when you try to do it in a narrow time range (today after work before polls close)
If it’s a multi day event, it’s probably that way for a reason. Partially the same as the solution to above.
In Europe, voting typically happens in one day, where everyone physically goes to their designated voting place and puts papers in a transparent box. You can stay there and wait for the count at the end of the day if you want to. Tom Scott has a very good video about why we don't want electronic/mail voting: https://www.youtube.com/watch?v=w3_0x6oaDmI
Well "mail in voting" in Washington state pretty much means you drop off your ballot in a drop box in your neighborhood. Which is pretty much the same thing as putting it in a ballot box.
The description of voting in the Netherlands is that you can see your ballot physically go into a clear box and stay to see that exact box be opened and all ballots tallied.
Dropping a ballot in a box in your neighborhood helps ensure nothing with regard to the actual ballot count.
Here in NZ when I've been to vote, there are usually a couple of party affiliates at the voting location, doing what one of the parent posts described:
> You can stay there and wait for the count at the end of the day if you want to.
And if you watch the election night news, you'll see footage of multiple people counting the votes from the ballot boxes, again with various people observing to check that nothing dodgy is going on.
Having everyone just put their ballots in a postbox seems like a good way to remove public trust from the electoral system, because no one's standing around waiting for the postie to collect the mail, or looking at what happens in the mail truck, or the rest of the mail distribution process.
I'm sure I've seen reports in the US of people burning postboxes around election time. Things like this give more excuses to treat election results as illegitimate, which I believe has been an issue over there.
(Yes, we do also have advanced voting in NZ, but I think they're considered "special votes" and are counted separately .. the elections are largely determined on the day by in-person votes, with the special votes being confirmed some days later)
I’m not sure what’s so special in Oregon’s ballot boxes. But, tampering that is detected (don’t need much special to detect a burning box I guess!) is not a complete failure for a system. If any elections were close enough for a box to matter, they could have rerun them.
In Sweden, mail/early votes get sent through the postal system to the official ballot box for those votes. In 2018, a local election had to be redone because the post delivered votes late. Mail delivery occasionally has packages delayed or lost, and votes are not immune to this problem. In one case the post also gave the votes to an unauthorized person, though the votes did end up at the right place.
It is a small but distinct difference between mail/early voting and putting the votes directly into the ballot box.
If you wish, you can write a phrase on your ballot. The phrases and their corresponding vote are broadcast (on tv, internet, etc). So if you want to validate that your vote was tallied correctly, write a unique phrase. Or you could pick a random 30 digit number, collisions should be zero-probability, right?
I mean, this would be annoying because people would write slurs and advertisements, and the government would have to broadcast them. But, it seems pretty robust.
I’d suggest the state handle the number issuing, but then they could record who they issued which numbers to, and the winning party could go about rounding up their opposition, etc.
Googling around a bit, it sounds like there are systems that let you verify that your ballot made it, but not necessarily that it was counted correctly. (For this reason, I guess?)
You have to trust that whole system. Maybe you do, I don't know the details of how any of that works.
When I vote in person, I know all the officials there from various parties are just like...looking at the box for the whole day to make sure everything is counted. It's much easier to understand and trust.
Off the top of my head, I can't think of an EU country that does not have some form of advance voting.
Here in Latvia the "election day" is usually (always?) on weekend, but the polling stations are open for some (and different!) part of every weekday leading up. Something like couple hours on monday morning, couple hours on tuesday evening, couple around midday wednesday, etc. In my opinion, it's a great system. You have to have a pretty convoluted schedule for at least one window not to line up for you.
I think they meant "don't have it" as in except in special circumstances, and that form says:
> You may use this form to apply for a postal vote if, due to the circumstances of your work/service or your full-time study in the State, you cannot go to your polling station on polling day.
Which seems to indicate that's only for people who can't go to the polling station, otherwise you do have to go there.
I think that a lot of Ireland's voting practices come from having a small population but a huge diaspora. I imagine the percentage of people living outside Ireland who would be eligible to vote in many other countries is significant enough to affect elections, certainly if they are close.
As someone who spent the first 30 years of my life in Ireland but is now part of that diaspora, it's frustrating but I get it. I don't get to vote, but neither do thousands of plastic paddys who have very little genuine connection to Ireland.
That said, I'm sure they could expand the voting window to a couple of days at least without too much issue.
Italy has mail-in voting only for citizens residing abroad. The rest vote on the election Sunday (and Monday morning in some cases, at least in the past).
You don't have to attribute any name to the transaction, just a voting booth ID and the vote. The actual benefit is just that it is hard to tamper and easy to trace where tampering happened.
But I still prefer the paper vote, and I'm usually blockchain-apathetic.
Anonymous voting means that you can't sell your vote. Like, if I pay you $5 to vote for X, but I can't actually verify that you voted for X and not Y, then I wouldn't bother trying. Or if I'm your boss and I want you to vote for X... etc.
Washington State having full vote-by-mail (there is technically a layer of in-person voting as a fallback for those who need it for accessibility reasons or who missed the registration deadline) has spoiled me rotten; I couldn't imagine having to go back to synchronous on-site voting on a single day like I did in Illinois. Awful. Being able to fill my ballot at my leisure, at home, where I can have all the research material open, and drive it to a ballot drop box whenever is convenient in a 2-3 week window before 20:00 on election night, is a game-changer for democracy. Of course this also means that people who stand to benefit from disenfranchising voters and making it more difficult to vote absolutely hate our system and continually attack it for one reason or another.
As a Dutchman, I have to go vote in person on a specific day. But to be honest: I really don't mind doing so. If you live in a town or city, there'll usually be multiple voting locations you can choose from within 10 minutes walking distance. I've never experienced waiting times more than a couple of minutes. Opening times are pretty good, from 7:30 til 21:00. The people there are friendly. What's not to like? (Except for some of the candidates maybe, but that's a whole different story. :-))
Please lookup US voting poll overflow issues that come up every election cycle. Just because you experience a well streamlined process doesn't mean that it's the norm everywhere.
So, if you have a minor emergency, like a kidney stone, and you're hospitalized for the day - you just miss your chance to vote in that election?
If so, I see a lot to dislike, as the point I was making is that you can't anticipate what might come up. Just because it's worked thus far doesn't mean it's designed for resilience. There are a lot of ways you could miss out in that type of situation. It seems silly to make sure everything else is redundant and fault tolerant in the name of democracy when the democratic process itself isn't doing the same.
How is that an acceptable response? Honestly. You’re in the hospital, in pain, likely having a minor surgery, and having someone cast your vote for you is going to be on your mind too? Do you have your voting card in your pocket just in case this were to play out?
That’s just ridiculous in my opinion. Makes me wonder how many well-intentioned would-be voters end up missing out each election 'cause shit happens and voting is pretty optional.
Mild curiosity, no idea whether it would be statistically relevant, but asking the question is the first step. If you knew the answer, you might want to extend the voting window; even if it wouldn't affect an election's outcome, it would be a quantified number of people excluded from the democratic process for simply having bad luck at the wrong time.
We're on year five of one of the two parties telling voters to not trust early voting. Their choice is because of the Fear, Uncertainty, and Doubt created by the propaganda they are fed.
"No mail-in or 'Early' Voting, Yes to Voter ID! Watch how totally dishonest the California Prop Vote is! Millions of Ballots being 'shipped.' GET SMART REPUBLICANS, BEFORE IT IS TOO LATE!!!"
That's all happening too, but it's honestly a different topic altogether. We have the ability to vote early. Whether you trust it or politicians are trying to undermine your trust in it, etc.... whole other can of worms
If India can have voters vote and tally all the votes in one day, then so can everyone else. It’s the best way to avoid fraud and people going with whoever is ahead. I am sympathetic with emergency protocols for deadly pandemics, but for all else, in-person on a given day.
> If India can have voters vote and tally all the votes in one day, then so can everyone else.
In most countries, in the elections you vote for the member of parliament you want. Presidential elections and city council elections are held separately, but are also equally simple. But in any one election you cast your vote for one person, and that's it.
With this kind of elections, many countries manage to hold the elections on paper ballots, count them all by hand, and publish results by midnight.
But on an American ballot, you vote for, for example:
- US president
- US senator
- US member of congress
- state governor
- state senator
- state member of congress
- several votes for several different state judge positions
- several other state officer positions
- several votes for several local county officers
- local sheriff
- local school board member
- several yes/no votes for several proposed laws, whether they should be passed or not
I don't think it would be possible to count all these 20 or 40 votes if they were counted by hand. That's why they use voting machines in America.
Say, how many voting stations are there in a typical city/county in the US?
Here in Indonesia, in a city of 2 million people there are over 7,000 voting stations. While we vote for 5 ballots (President, Legislative (National, Province, and City/Regency)), we still use paper ballots and count them by hand.
How is it not possible? It's just additional votes, there isn't anything actually stopping counting by hand, is there? How was it counted historically without voting machines?
If it's not a national holiday where the vast majority of people don't have to work, and if there aren't polling places reasonably near every voting age citizen, it's voter suppression.
In particular India has a law that no one shall be made to walk more than 2km to vote. The Indian military will literally deploy a voting booth into the jungle so that a single caretaker of an old temple can vote.
Here in Belgium voting is usually done during the weekend, although it shouldn't matter, because voting is a civic duty (you have to go vote unless you have a good reason, or you'll be fined), so those who work during the weekend have a valid reason to come in late or leave early.
In the US, where I assume a lot of the griping comes from, election day is not a national holiday, nor is it on a weekend (in fact, by law it is defined as "the Tuesday next after the first Monday in November"), and even though it is acknowledged as an important civic duty, only about half of the states have laws on the books that require employers provide time off to vote. There are no federal laws to that effect, so it's left entirely to states to decide.
By design. The US is literally an experimental testing ground for unmitigated capitalism where the government allows companies to experiment on the population.
In Australia there are so many places to vote, it is almost popping out to get milk level of convenience. (At least in urbia and suburbia.) Just detour your dog walk slightly. Always at the weekend.
In the US getting milk involves driving multiple miles, finding parking, walking to the store, finding a shopping cart, finding the grocery department, navigating the aisles to the dairy section, finding the milk, waiting in line to check out, returning the cart if you’re courteous, and driving back. Could take an hour or so.
In Washington we have a 100% mail-in voting system (for all intents and purposes). I can put my ballot back in the mail or drop it at any number of drop boxes throughout the city (fewer drop boxes in rural areas, I'm sure). I think there are some allowances for in-person voting but I don't think they are often used.
There is a ballot tracking system as well, I can see and be notified as my ballot moves through the counting system. It's pretty cool.
I actually just got back from dropping off my local elections ballot 15m ago, quick bike trip maybe a mile or so away and back.
Of course, because it makes it easy for people to vote, the republicans want to do away with it. If you have to stand in line for several hours (which seems to be very normal in most cities) and potentially miss work to do it that's going to all but guarantee that working people and the less motivated will not vote.
So yes in places that only do in person voting, national or state holiday.
Yet... deploy on two clouds and you'll get taxpayers screaming at you for "wasting money" preparing for a black swan event. Can't have both; it's either reliability or lower cost.
I'm not sure this is an easily solvable problem. I remember reading an article arguing that your cloud provider is part of your tech stack and it's close to impossible/a huge PITA to make a non-trivial service provider-agnostic. They'd have to run their own OpenStack in different datacenters, which would be costly and have its own points of failure.
I run non trivial services on EC2, using that service as a VPS. My deploy script works just as well on provisioned Digital Ocean services and on docker containers using docker-compose.
I do need a human to provision a few servers and configure e.g. load balancing and when to spin up additional servers under load. But that is far less of a PITA than having my systems tied to a specific provider or down whenever a cloud precipitates.
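A minimal sketch of what such a provider-agnostic deploy can look like, assuming only SSH access and Docker Compose on the target (host and path below are placeholders), so the same script works against an EC2 instance, a DigitalOcean droplet, or a box in a rack:

    #!/usr/bin/env sh
    # Deploy the same compose stack to whichever host is passed in,
    # regardless of which provider the VM came from.
    HOST="$1"                # e.g. deploy@203.0.113.10 (placeholder)
    STACK_DIR="/opt/myapp"   # placeholder path on the target

    scp docker-compose.yml "$HOST:$STACK_DIR/docker-compose.yml"
    ssh "$HOST" "cd $STACK_DIR && docker compose pull && docker compose up -d --remove-orphans"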
The moment you choose to use S3 instead of hosting your own object store, though, you either use AWS because S3 and IAM already have you, or spend more time on the care and feeding of your storage system as opposed to actually doing the thing your customers are paying you to do.
It's not impossible, just complicated and difficult for any moderately complex architecture.
Dang, even New Zealand didn't survive! New Zealand got some soul searching with this outage, which took down the government personal ID service. It's called RealMe and it can be used to file your taxes, apply for a passport, etc.
The Flemish bus company (de Lijn) uses Azure and I couldn't activate my ticket when I came home after training a couple of hours ago. I should probably start using physical tickets again, because at least those work properly. It's just stupid that there's so much stuff being moved to digital only (often even only being accessible through an Android or iOS app, despite the parent companies of those two being utterly atrocious) when the physical alternatives are more reliable.
Organizations who had their own datacenters were chided for being resistant to modernizing, and now they modernized to use someone else's shared computers and they stopped working.
I really do feel the only viable future for clouds is hybrid or agnostic clouds.
Can't believe it's 2025 and some people still need to go to some place to vote. I've been able to vote by mail for anything for as long as I can remember (at least 20 years). We also vote multiple times a year (4-6 times); we just get the ballots by mail about a month before the vote and then mail them back. Hope we can soon vote online to get rid of the paper overhead.
For some reason an Azure outage does not faze me in the same way that an AWS outage does.
I have never had much confidence in Azure as a cloud provider. The vertical integration of all the things for a Microsoft shop was initially very compelling. I was ready to fight that battle. But, this fantasy was quickly ruined by poor execution on Microsoft's part. They were able to convince me to move back to AWS by simply making it difficult to provision compute resources. Their quota system & availability issues are a nightmare to deal with compared to EC2.
At this point I'd rather use GCP over Azure, and I have zero seconds of experience with it. The number of things Microsoft gets right in 2025 can be counted on one hand. The things they do get right are quite good, but everything else tends to be extremely awful.
The "Blades" experience [0] where instead of navigating between pages it just kept opening things to the side and expanding horizontally?
Yeah, that had some fun ideas but was way more confusing than it needed to be. But also that was quite a few years back now. The Portal ditched that experience relatively quickly. Just long enough to leave a lot of awful first impressions, but not long enough for it to be much more than a distant memory at this point, several redesigns later.
[0] The name "Blades" for that came from the early years of the Xbox 360, maybe not the best UX to emulate for a complex control panel/portal.
Azure to me has always suffered from a belief that “UI innovations can solve UX complexity if you just try hard enough.”
Like, AWS, and GCP to a lesser extent, has a principled approach where simple click-ops goals are simple. You can access the richer metadata/IAM object model at any time, but the wizards you see are dumb enough to make easy things easy.
With Azure, those blades allow tremendously complex “you need to build an X Container and a Container Bucket to be able to add an X” flows to coexist on the same page. While this exposes the true complexity, and looks cool/works well for power users, it is exceedingly unintuitive. Inline documentation doesn’t solve this problem.
I sometimes wonder if this is by design: like QuickBooks, there’s an entire economy of consultants who need to be Certified and thus will promote your product for their own benefit! Making the interface friendly to them and daunting to mere mortals is a feature, not a bug.
But in Azure’s case it’s hard to tell how much this is intentional.
(I think that's from near the transition because it has full "windowing" controls of minimize/maximize/close buttons. I recall a period with only close buttons.)
All that blue space you could keep filling with more "blades" as you clicked on things until the entire page started scrolling horizontally to switch between "blades". Almost everything you could click opened in a new blade rather than in place in the existing blade. (Like having "Open in New Window" as your browser default.)
It was trying to merge the needs of a configurable Dashboard and a "multi-window experience". You could save collections of blades (a bit like Niri workspaces) as named Dashboards. Overall it was somewhere between overkill and underthought.
(Also someone reminded me that many "blades" still somewhat exist in the modern Portal, because, of course, Microsoft backwards compatibility. Some of the pages are just "maximized Blades" and you can accidentally unmaximize them and start horizontally scrolling into new blades.)
azure likes to open new sections on the same tab / page as opposed to reloading or opening a new page / tab (overlays? modals? I'm lost on graphic terms)
depending on the resource you're accessing, you can get 5+ sections each with their own ui/ux on the same page/tab and it can be confusing to understand where you're at in your resources
if you're having trouble visualizing it, imagine an url where each new level is a different application with its own ui/ux and purpose all on the same webpage
AWS' UI is similarly messy, and to this day. They regularly remove useful data from the UI, or change stuff like the default sort order of database snapshots from last created to initial instance created date.
I never understood why a clear and consistent UI and improved UX aren't more of a priority for the big three cloud providers. Even though you talk mostly via platform SDKs, I would consider a better UI, especially initially, a good way to bind new customers and get them to pick your platform over others.
I guess with their bottom line they don't need it (or cynically, you don't want to learn and invest in another cloud if you did it once).
It’s more than just the UI itself (which is horrible), it’s the whole thing that is very hostile to new users even if they’re experienced. It’s such an incoherent mess. The UI, the product names, the entire product line itself, with seemingly overlapping or competing products… and now it’s AI this and AI that. If you don’t know exactly what you’re looking for, good luck finding it. It’s like they’re deliberately trying to make things as confusing as possible.
For some reason this applies to all AWS, GCP and Azure. Seems like the result of dozens of acquisitions.
I still find it much easier to just self host than learn cloud and I’ve tried a few times but it just seems overly complex for the sake of complexity. It seems they tie in all their services to jack up charges, eg. I came for S3 but now I’m paying for 5 other things just to get it working.
Any time something is that unintuitive to get started, I automatically assume that if I encounter a problem that I’ll be unable to solve it. That thought alone leads me to bounce every time.
100% agree. I've been working in the industry for almost 20 years, I'm a full stack developer and I manage my servers. I've tried signing up for AWS and I noped out.
AWS Is a complete mess. Everything is obscured behind other products, and they're all named in the most confusing way possible.
Microsoft has the regulatory capture. All the European privacy and regulatory laws are good for Azure. That's why your average European government or banking app most likely runs on Azure. (Or Oracle, but more likely Azure.)
Cloud Run is incredible. It’s one of those things I wish more devs knew about. Even at work where we use GCP all the “smart” devs insist on GKE for their “webscale” services that get dozens of requests a second. Dozens!
I know for some people the prospect of losing their Google Cloud access due to an automated terms of service violation on some completely unrelated service is worrisome.
I'd hope you can create a Google Cloud account under a completely different email address, but I do as little business with Google as I can get away with, so I have no idea.
That's generally speaking a good practice anyways. My Amazon shopping account has a different email than my Amazon Web Services account. I intuited that they need to be different from the get go.
The problem is that in some industries, Microsoft is the only option. Many of these regulated industries are just now transitioning from the data center to the cloud, and they've barely managed to get approval for that with all of the Microsoft history in their organization. AWS or GCloud are complete non-starters.
I moved a 100% MS shop to AWS circa 2015. We ran our DCs on EC2 instances just as if they were on prem. At some point we installed the AAD connector and bridged some stuff to Azure for office/mail/etc., but it was all effectively in AWS. We were selling software to banks so we had a lot of due diligence to suffer. AWS Artifact did much of the heavy lifting for us. We started with Amazon's compliance documentation and provided our own feedback on top where needed.
I feel like compliance is the entire point of using these cloud providers. You get a huge head start. Maintaining something like PCI-DSS when you own the real estate is a much bigger headache than if it's hosted in a provider who is already compliant up through the physical/hardware/networking layers. Getting application-layer checkboxes ticked off is trivial compared to "oops we forgot to hire an armed security team". I just took a look and there are currently 316 certifications and attestations listed under my account.
I've found that lift and shifting to EC2 is also generally cheaper than the equivalent VMs on Azure.
Microsoft really wants you to use their PaaS offerings, and so things on Azure are priced accordingly. For a Microsoft shop just wanting to lift and shift, Azure isn't the best choice unless the org has that "nobody ever got fired for buying Microsoft" attitude.
Microsoft is better at regulatory capture, so Azure has many customers in the public sector. So an Azure outage probably affects the public sector more (see example above about trains).
What Amazon, Azure, and Google are showing with their platform crashes amid layoffs, while they support governments that are oppressing citizens and ignoring the law, is that they do not care about anything other than the bottom line.
They think they have the market captured, but I think what their dwindling quality and ethics are really going to drive is adoption of self-hosting and distributed computing frameworks. Nerds are the ones who drove adoption of these platforms, and we can eventually end it if we put in the work.
Seriously, with container technology and a bit more work/adoption on distributed compute systems and file storage (IPFS, Filecoin), there is a future where we don't have to use Big Brother's compute platform. Fuck these guys.
These were my thoughts exactly. I may have my tinfoil hat on, but with outages this close together between the largest cloud providers amid social unrest, I wonder if the government / tech companies are implementing some update that adds additional spyware / blackout functionality.
I really hope this pushes the internet back to how it used to be, self hosted, privacy, anonymity. I truly hope that's where we're headed, but the masses seem to just want to stay comfortable as long as their show is on TV
At least some bits of it do. I was writing something to pull logs the other day and the redirect was to an azure bucket. It also returned a 401 with the valid temporary authed redirect in the header. I was a bit worried I'd found a massive security hole but it appears after some testing it just returned the wrong status code.
Personally I am thinking more and more about Hetzner. Yes, I know it's not an apples-to-apples comparison, but it's honestly so good.
Someone had created a video where they showed the underlying hardware, etc. I am wondering if there is something like https://vpspricetracker.com/ but with geek benchmarks as well.
This video was affiliated with ScalaHosting, but I still don't think there was too much bias from them, and at around 3:37 they showed a graph comparing prices: https://www.youtube.com/watch?v=9dvuBH2Pc1g
It shows that Contabo has better hardware, but I am pretty sure there might be some other issues, and honestly I feel a sense of trust with Hetzner that I am not sure about with others.
Either Hetzner, or self-hosting stuff personally, or just having a very cheap VPS and going to Hetzner if need be (but Hetzner is already pretty cheap), or I might use some free services that I know are good as well.
Probably not, but at least you don’t delude yourself into thinking reliability is a solved problem just because you’re paying through the nose for compute and storage.
One of the recent (4 months ago) Cloudflare outages (I think it was even Workers) was caused by Google Cloud being down and Cloudflare hosting an essential service there.
Hm, it seemed that they hosted a critical service for Cloudflare KV on Google itself, but I wonder about the update.
Personally I just trust cloudflare more than google, given how their focus is on security whereas google feels googly...
I have heard some good things about Google Cloud Run, and Google's interface feels the best out of AWS, Azure, and GCloud, but I would still just prefer Cloudflare/Hetzner.
Another question: has there ever been a list of all major cloud outages? I am interested in how many times Google Cloud and the other cloud providers have gone majorly down, y'know? Is there a website/git project that tracks this?
IIRC, the grocery chain I worked for used to have an offline mode to move customers out the door. But it meant that when the system came back online, if the customer's card was denied, the customer got free groceries.
Yea, good old store and forward. We implemented it in our PoS system. Now, we do non-PCI integrations so we aren't in PCI scope, but depending on the processor, it can come with some limitations. Like, you can do store and forward, but only up to X number of transactions. I think for one integration, it's 500-ish store-wide (it uses a local gateway that stores and forwards to the processor's gateway). The other integration we have, it's 250, but store and forward on device, per device.
In many places it's also possibly just a leftover feature from older times. I worked at a major UK supermarket in the mid-00s, and their checkout system had this feature. But it was like that because that's how it was originally built; it wasn't a 'feature' they added.
Credit card information would be recorded by the POS, synced to a mini-server in the back office (using store-and-forward to handle network issues) and then in a batch process overnight, sent to HQ where the payment was processed.
It wasn't until chip-and-PIN was rolled out that they started supporting "online" (i.e. processed then and there) card transactions, and even then the old method still worked if there was a network issue or power failure (all POSes had their own UPS).
The only real risk at the time was that someone tried to pay with a cancelled credit card - the bank would always honour the payment otherwise. But that was pretty uncommon back then, as you'd have to phone your bank to do it, not just press a button in an app.
> IIRC, the grocery chain I worked for used to have an offline mode to move customers out the door.
Chick-fil-a has this.
One of the tech people there was on HN a few years ago describing their system. Credit card approval slows down the line, so the cards are automatically "approved" at the terminal, and the transaction is added to a queue.
The loss from fraudulent transactions turns out to be less than the loss from customers choosing another restaurant because of the speed of the lines.
I was shopping at a mall with a Visa Vanilla card once. I got it as a gift and didn't know the limit. No matter what I bought, the card kept going -- and I never got a balance of what was on the card. Eventually, later that day, it stopped. I called customer support and asked how much was left on the balance. They told me they had no idea what my balance was - but everything I bought was mine.
I remember that banks will try to honor the transactions, even if the customer's balance/credit limit is exhausted. It doesn't apply only to some gift cards.
There's a Family Dollar by my house that is down at least 2 full days per month because of bad inet connectivity. I live close enough that with a small tower on my roof I can get line of sight to theirs. I've thought about offering them a backup link off my home inet if they give me 50% of sales whenever it's in use. It would be a pretty good deal for them: better some sales when their inet is down vs none.
It's Family Dollar, margin has to be almost nothing and sales per day is probably < $1k. That's why I said 50% of sales and not profit.
I go there daily because it's a nice 30min round-trip walk and I wfh. I go up there to get a Diet Coke or something else just to get out of the house. It amazes me when I see a handwritten sign on the door "closed, system is down". I've gotten to know the cashiers so I asked, and it's because the internet connection goes down all the time. That store has to be one of the most poorly run things I've ever seen, yet it stays in business somehow.
I think the point people are trying and failing to make is that asking for half of sales means half of revenue, not half of net, and that you're out of your goddamned mind if you think a store with razor-thin margins would sell at a massive loss rather than just close due to connectivity problems.
Your responses imply that you think people are questioning whether you would lose money on the deal while we are instead saying you’ll get laughed out of the store, or possibly asked never to come back.
Unfortunately they are largely corporate, which is how they can sell items for such a cheap price. The store manager probably has zero say in nearly anything. Even if they wanted to "break the rules," I doubt they could make use of your connection as a backup, but I've also worked for smaller companies that were able to sell internet access to individual locations like Denny's and various large hotels in the US. Being able to somehow share sales would be the difficult part, since all sales are reported back to corporate.
Good luck if you make this work for you, it would be exciting to hear about if you're able to get them to work with you.
2-3%, a bit higher on perishables. Though I'd just ask for lump-sum payments in cash since it likely would have to not go through corporate (as in, avoid the corporation).
You'd think any SeriousBusiness would have a backup way to take customers' money. This is the one thing you always want to be able to do: accept payment. If they made it so they can't do that, they deserve the hit to their revenue. People should just walk out of the store with the goods if they're not being charged.
Why doesn't someone in the store at least have one of those manual kachunk-kachunk carbon copy card readers in the back that they can resuscitate for a few days until the technology is turned back on? Did they throw them all away?
The kachunk-kachunk credit card machines need raised digits on the cards, and I don't think most banks have been issuing those for years at this point. Mine have been smooth for at least 10 years.
My card tied to my main financial institution has the raised digits, but most cards you'd sign up for online now no longer have the raised digits (and often allow you to select art to appear on the card face).
I think a lot of payment terminals have an option to record transactions offline and upload them later, but apparently it's not enabled by default - probably because it increases your risk that someone pays with a bad card.
If they used standalone merchant terminals, then those typically use the local LAN, which can roll over to cellular or POTS in the event of a network outage. The store can process a card transaction with the merchant terminal and then reconcile with the end-of-day chit. This article from 2008 describes their PoS: https://www.retailtouchpoints.com/topics/store-operations/ca...
These stores appear everywhere, even in areas with high income. You'd be surprised, but often people with those high incomes shop for certain products at very low rates, and that's how they keep their savings. A good example is garbage bags. Most people don't care too much about the quality of their garbage bags, unless they rip on the way to the bin.
Just to add - this particular supermarket wasn't fully down; it took ages for them to press "sub total" and then pick the payment method. I suspect it was slow waiting for a request to time out, perhaps.
I remember last mechanical cash registers in my country in 90s and when these got replaced by early electronic ones with blue vacuum fluorescent tubes. Then everything got smaller and smaller. Now I'm pestered to "add the item to the cart" by software.
Last week I couldn't pay for flowers for grandma's grave because smartphone-sized card terminal refused to work - it stuck on charging-booting loop so I had to get cash. Tho my partner thinks she actually wanted to get cash without a receipt for herself excluding taxes
You can, but it's all about risk mitigation. Most processors have some form of store and forward (and it can have limitations, like only X number of transactions). Some even have controls to limit the amount you can store-and-forward (for instance, only charges under $50). But ultimately, it's still risk mitigation. You can store-and-forward, but you're trusting that the card/account has the funds. If it doesn't, you lose and there ain't shit you can do about it. If you can't tolerate any risk, you don't turn on store-and-forward systems, and then you can't process cards offline.
It's not that we are not capable. It's: is the business willing to assume the risk?
Currently standing in a half-closed supermarket because the tills are down and they can't take payments.
There's a fairly large supermarket near me that has both kinds of outages.
Occasionally it can't take cards because the (fiber? cable?) internet is down, so it's cash only.
Occasionally it can't take cash because the safe has its own cellular connection, and the cell tower is down.
I was at Frank's Pizza in downtown Houston a few weeks ago and they were giving slices of pizza away because the POS terminal died, and nobody knew enough math to take cash. I tried to give them a $10 and told them to keep the change, but "keep the change" is an unknown phrase these days. They simply couldn't wrap their brains around it. But hey, free pizza!
AWS, now Azure - wasn't this a plot point in Terminator, where Skynet was causing computer systems to have issues long before it finally became self-aware?
I’ve been migrating our services off of Azure slowly for the past couple of years. The last internet facing things remaining are a static assets bucket and an analytics VM running Matomo. Working with Front Door has been an abysmal experience, and today was the push I needed to finally migrate our assets to Cloudflare.
I feel pretty justified in my previous decisions to move away from Azure. Using it feels like building on quicksand…
We are very dependent on Azure and Microsoft Authentication and Microsoft 365 and haven’t had weekly or even monthly issues. I can think of maybe three issues this year.
I have had intermittent issues with winget today. I use UniGetUI for a front-end, and anything tied to Microsoft has failed for me. Judging by the logs, it's mostly retrieving the listing of versions (I assume similar to what 'apt-get update' does, I'm fairly new to using winget for Windows package management).
Pretty much every single Microsoft domain I've tried to access loads for a looooong time before giving me some bare html. I wonder if someone can explain why that's happening.
We’re 100% on Azure but so far there’s no impact for us.
Luckily, we moved off Azure Front Door about a year ago. We’d had three major incidents tied to Front Door and stopped treating it as a reliable CDN.
They weren’t global outages, more like issues triggered by new deployments. In one case, our homepage suddenly showed a huge Microsoft banner about a “post-quantum encryption algorithm” or something along those lines.
Kinda wild that a company that big can be so shaky on a CDN, which should be rock solid.
We battled https://learn.microsoft.com/en-us/answers/questions/1331370/... for over a year, and finally decided to move off since there was never any resolution. Unfortunately our API servers were still behind AFD, so they were affected by today's stuff...
And querying https://www.microsoft.com/ results in HTTP 200 on the root document, but the page elements return errors (a 504 on the .css/.js documents, a 404 on some fonts, Name Not Resolved on scripts.clarity.ms, Connection Timed Out on wcpstatic.microsoft.com and mem.gfx.ms). That many different kinds of errors is actually kind of impressive.
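A quick way to reproduce that kind of observation is to check the root document and a few of the referenced asset hosts separately; the URLs below are just the ones named above, and curl prints 000 when DNS or the connection fails outright:

    # Print just the HTTP status code for each host
    for url in \
      https://www.microsoft.com/ \
      https://wcpstatic.microsoft.com/ \
      https://scripts.clarity.ms/ \
      https://mem.gfx.ms/ ; do
      code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 "$url")
      echo "$code $url"
    done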
I'm gonna say this was a networking/routing issue. The CDN stayed up, but everything else non-CDN became unroutable, and different requests traveled through different paths/services, but each eventually hit the bad network path, and that's what created all the different responses. Could also have been a bad deploy or a service stopped running and there's different things trying to access that service in different ways, leading to the weird responses... but that wouldn't explain the failed DNS propagation.
I've been doing it since 1998 in my bedroom with a dual T1 (and on to real DCs later). While I've had some outages for sure it makes me feel better I am not that divergent in uptime in the long run vs big clouds.
They added a message at the same time as your comment:
"We are investigating an issue with the Azure Portal where customers may be experiencing issues accessing the portal. More information will be provided shortly."
The paradox of cloud provider crashes is that if the provider goes down and takes the whole world with it, it's actually good advertising. Because that means so many things rely on it, it's critically important, and it has so many big customers. That might be why Amazon stock went up after the AWS crash.
If Azure goes down and nobody feels it, does Azure really matter?
People feel it, but usually not general consumers like they do when AWS goes down.
If Azure goes down, it's mostly affecting internal stuff at big old enterprises. Jane in accounting might notice, but the customers don't. Contrast with AWS which runs most of the world's SaaS products.
People not being able to do their jobs internally for a day tends not to make headlines like "100 popular internet services down for everyone" does.
Yeah, it just took down the prod site for one of our clients since we host the front-end out of their CDN. Just wrapped up panic-hosting it somewhere else for the past hour; it very quickly reminds you about the pain of cookies...
Pretty much all Azure services seem to be down. Their status page says it's only the portal since 16:00. It would be nice if these mega-companies could update their status page when they take down a large fraction of the Internet and thousands of services that use them.
Same playbook for AWS. When they admitted that Dynamo was inaccessible, they failed to provide context that their internal services are heavily dependent on Dynamo
It's only after the fact they are transparent about the impact
The Internet is supposed to be decentralized. The big three seem to have all the power now (Amazon, Microsoft, and Google) plus Cloudflare/Oracle.
How did we get here? Is it because of scale? Going to market in minutes by using someone else's computers instead of building out your own, like co-location or dedicated servers, like back in the day.
A lot of money and years of marketing the cloud as the responsible business decision led us here. Now that the cloud providers have vendor lock-in, few will leave, and customers will continue to wildly overpay for cloud services.
Not sure how the current situation is better. Being stranded with no way whatsoever to access most/all of your services sounds way more terrifying than regular issues limited to a couple of services at a time
> no way whatsoever to access most/all of your services
I work on a product hosted on Azure. That's not the case. Except for front door, everything else is running fine. (Front door is a reverse proxy for static web sites.)
The product itself (an iot stormwater management system) is running, but our customers just can't access the website. If they need to do something, they can go out to the sites or call us and we can "rub two sticks together" and bypass the website. (We could also bypass front door if someone twisted our arms.)
Most customers only look at the website a few times a year.
---
That being said, our biggest point of failure is a completely different iot vendor who you probably won't hear about on Hacker News when they, or their data networks, have downtime.
I think the response lies in the surrounding ecosystem.
If you have a company it's easier to scale your team if you use AWS (or any other established ecosystem). It's way easier to hire 10 engineers that are competent with AWS tools than it is to hire 10 engineers that are competent with the IBM tools.
And from the individual's perspective it also makes sense to bet on larger platforms. If you want to increase your odds of getting a new job, learning the AWS tools gives you a better ROI than learning the IBM tools.
But the cloud compute market is basically centralized into 2.5 companies at this point. The point of paying companies like Azure here is that they've in theory centralized the knowledge and know-how of running multiple, distributed datacenters, so as to be resilient.
But given that we keep seeing outages encompassing more than a single failure domain, it should be fair game for engineers / customers to ask "what am I paying for, again?"
Moreover, this seems to be a classic case of large barriers to entry (the huge capital costs associated with building out a datacenter) barring new entrants into the market, coupled with "nobody ever got fired for buying IBM" level thinking. Are outages like these truly factored into the napkin math that says externalizing this is worth it?
Consolidation is the inevitable outcome of free unregulated markets.
In our highly interconnected world, decentralization paradoxically requires a central authority to enforce decentralization by restricting M&A, cartels, etc.
A natural monopoly is a monopoly in an industry in which high infrastructure costs and other barriers to entry relative to the size of the market give the largest supplier in an industry, often the first supplier in a market, an overwhelming advantage over potential competitors. Specifically, an industry is a natural monopoly if a single firm can supply the entire market at a lower long-run average cost than if multiple firms were to operate within it. In that case, it is very probable that a company (monopoly) or a minimal number of companies (oligopoly) will form, providing all or most of the relevant products and/or services.
The outage was really weird. For me, parts of the portal worked, other parts didn't. I had access to a couple of resource groups, but no resources visible in those groups. Azure DevOps Pipelines that needed to download from packages.microsoft.com didn't work.
The Microsoft status page mostly referenced the portal outage, but it was more than that.
For us, it looks like most services are still working (eastus and eastus2). Our AKS cluster is still running and taking requests. Failures seem limited to management portal.
High availability is touted as a reason for their high prices, but I swear I read about major cloud outages far more than I experience any outages at Hetzner.
I think the biggest feature of the big cloud vendors is that when they are down, not only you but your customers and your competitors usually have issues at the same time, so everybody just shrugs and has a lazy/off day at the same time. Even on-call teams really just have to wait and stay on standby because there is very little they can do. Doing a failover can be slower than waiting for the recovery, not help at all if the outage spans several regions, or bring additional risks.
And more importantly, nobody loses any reputation except AWS/Azure/Google.
The real reason is that outages are not your fault. It's the new version of "nobody ever got fired for buying IBM" - later it became MS, and now it's any big cloud provider.
For one it's statistics - Hetzner simply runs far fewer major services than the hyperscalers. And the services the hyperscalers host are also bigger, with larger customer bases, so their downtime is systemically critical. Therefore it's louder.
On the merits though, I agree, haven’t had any serious issues with Hetzner.
DO has been shockingly reliable for me. I shut down a neglected box with almost 900 days of uptime the other day. In that time AWS has randomly dropped many of my boxes with no warning, requiring a manual stop/start action to recover them... But everybody keeps telling me that DO isn't "as reliable" as the big three are.
To be fair, in the AWS/Azure outages, I don't think any individual (already created) boxes went down, either. In AWS' case you couldn't start up new EC2 instances, and presumably same for Azure (unless you bypass the management portal, I guess). And obviously services like DynamoDB and Front Door, respectively, went down. Hetzner/DO don't offer those, right? Or at least they're not very popular.
Nope, more than the portal. For instance, I just searched for "Azure Front Door" because I hadn't heard of it before (I now know it's a CDN), and neither the product page itself [1] nor the technical docs [2] are coming up for me.
we use front door (as does microsoft.com) and our website was down. I was able to change the DNS records to point directly to our server and will leave it like that for a few hours until everything is green.
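If you do point DNS straight at your origin like this, one way to sanity-check it first is to hit the origin directly with the right Host header before touching any records. A rough sketch, assuming the Python requests library; the IP and hostname below are hypothetical placeholders:

    import requests

    # Hypothetical values - substitute your own origin IP and public hostname.
    ORIGIN_IP = "203.0.113.10"
    HOSTNAME = "www.example.com"

    # Talk to the origin directly, but ask for the site by its public hostname.
    # The cert is issued for the hostname, not the IP, so skip verification for this probe only.
    resp = requests.get(
        f"https://{ORIGIN_IP}/",
        headers={"Host": HOSTNAME},
        verify=False,
        timeout=10,
    )
    print(resp.status_code, len(resp.content), resp.headers.get("server"))

If that returns your site, flipping the A record away from Front Door should at least serve something while the CDN is out.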
Do Microsoft still say "If the government has a broader voluntary national security program to gather customer data, we don't participate in it" today (which PRISM proved very false), or are they at least acknowledging they're participating in whatever NSA has deployed today?
PRISM wasn't voluntary. Also there are 3 levels here:
1. Mandatory
2. "Voluntary"
3. Voluntary
And I suspect that very little of what the NSA does falls into category 3. As Sen Chuck Schumer put it "you take on the intelligence community, they have six ways from Sunday at getting back at you"
This is funny but also possibly true because: business/MBA types see these outages as a way to prove how critical some services are, leading to investors deciding to load up on the vendor's stock.
I may or may not have been known to temporarily take a database down in the past to make a point to management about how unreliable some old software is.
I was having issues a few hours ago. I'm now able to access the portal, although I get lots of errors in the browser console, and things are loading slowly. I have services in the US-East region.
I have been having issues with GitHub and the winget tool for updates throughout the day as well. I imagine things are pulling from the same locations on Azure for some of the software I needed to update (NPM dependencies, and some .NET tooling).
Seeing users having issues with the "Modern Outlook", specifically empty accounts. Switching back to the "Legacy Outlook", which functions largely without the help of the cloud, fixes the issue. How ironic.
The sad thing is - $MSFT isn't even down by 1%. And IIRC, $AMZN actually went up during their previous outage.
So if we look at these companies' bottom lines, all those big wigs are actually doing something right. Sales and lobbying capacity is way more effective than reliability or good engineering (at least in the short term).
I think he was implying that those companies think they are so important that it doesn't matter that they are down; they won't lose any customers over it because they are too big and important.
That's a good thing. Stock prices shouldn't go down because of rare incidents which don't accurately represent how successful a company is likely to be in the future.
I looked into this before and the stocks of these large corps simply do not move when outages happen. Maybe intra-day, I don't have that data, but in general no effect.
There's no way to tell, and after about 30 minutes, the release process on VS Code Marketplace failed with a cryptic message: "Repository signing for extension file failed.". And there's no way to restart/resume it.
"We’re investigating an issue impacting Azure Front Door services. Customers may experience intermittent request failures or latency. Updates will be provided shortly."
They admit in their update blurb azure front door is having issues but still report azure front door as having no issues on their status page.
And it's very clear from these updates that they're more focused on the portal than the product, their updates haven't even mentioned fixing it yet, just moving off of it, as if it's some third party service that's down.
Unsubstantiated idea: the support contract likely specifies a window between each reporting step, and the status page is the last step, the one referenced in the legal documents, giving them several more hours before the clauses trigger.
Azure goes down all the time. On Friday we had an entire regional service down all day. Two weeks ago same thing different region. You only hear about it when it's something everyone uses like the portal, because in general nobody uses Azure unless they're held hostage.
Portal and Azure CDN are down here in the SF Bay Area. Tenant azureedge.net DNS A queries are taking 2-6 seconds and most often return nothing. I got a couple of successful A responses in the last 10 minutes.
Edit: As of 9:19 AM Pacific time, I'm now getting successful A responses but they can take several seconds. The web server at that address is not responding.
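For anyone who wants to put numbers on lookups like this, here is a rough sketch using only the Python standard library. The hostname is a hypothetical placeholder, and note that the OS resolver cache will make repeat lookups look instant:

    import socket
    import time

    HOST = "contoso.azureedge.net"  # hypothetical tenant endpoint - substitute your own

    for attempt in range(5):
        start = time.monotonic()
        try:
            infos = socket.getaddrinfo(HOST, 443)
            elapsed = time.monotonic() - start
            addrs = sorted({info[4][0] for info in infos})
            print(f"{elapsed:5.2f}s  {addrs}")
        except socket.gaierror as exc:
            print(f"{time.monotonic() - start:5.2f}s  lookup failed: {exc}")
        time.sleep(10)  # space the attempts out a bit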
It is much more than azure. One of my kids needs a key for their laptop and can't reach that either. Great excuse though, 'Azure ate my homework'. What a ridiculous world we are building. Fuck MS and their account requirements for windows.
“ Starting at approximately 16:00 UTC, we began experiencing DNS issues resulting in availability degradation of some services. Customers may experience issues accessing the Azure Portal. We have taken action that is expected to address the portal access issues here shortly. We are actively investigating the underlying issue and additional mitigation actions. More information will be provided within 60 minutes or sooner.
This message was last updated at 16:35 UTC on 29 October 2025”
It begs the question from a noob like me... Where should they host the status page? Surely it shouldn't be on the same infra that it's supposed to be monitoring. Am I correct in thinking that?
I'd say DNS/Front Door (or some carrier interconnect) is the thing affected, since I can auth just fine in a few places. (I'm at MS, but not looped into anything operational these days, so I'm checking my personal subscription).
On our end, our VMs are still working, so our gitlab instance is still up. Our services using Azure App Services are available through their provided url. However, Front Door is failing to resolve any domains that it was responsible for.
Part of this outage involves Outlook hanging and then blaming random add-ins. Pretty terrible practice by Microsoft to blame random vendors for their own outage.
SSO is down, Azure Portal Down and more, seems like a major outage. Already a lot of services seem to be affected: banks, airlines, consumer apps, etc.
The portal is up for me and their status page confirms they did a failover for it. Definitely not disputing that its reach is wide, but a lot of smaller setups probably aren't using Front Door.
Looks like MyGet is impacted too. Seems like they use Azure:
>What is required to be able to use MyGet? ... MyGet runs its operations from the Microsoft Azure in the West Europe region, near Amsterdam, the Netherlands.
This is the eternal tension for early-stage builders, isn't it? Multi-cloud gives you resilience, but adds so much complexity that it can actually slow down shipping features and iterating.
I'm curious—at what point did you decide the overhead was worth it? Was it after experiencing an outage, or did you architect for it from day one?
As someone launching a product soon (more on the builder/product side than infra-engineer), I keep wrestling with this. The pragmatist in me says "start simple, prove the concept, then layer in resilience." But then you see events like this week and think "what if this happens during launch?"
How did you handle the operational complexity? Did you need dedicated DevOps folks, or are there patterns/tools that made it manageable for a smaller team?
I don't think it's meant to be serious. It's a comment on Microsoft laying off their staff and stuffing their Azure and Dotnet teams with AI product managers.
That said, I don't hear about GCP outages all that often. I do think AWS might be leading in outages, but that's a gut feeling, I didn't look up numbers.
Thank you. I was wondering what was going on at a company whose web app I need to access. I just checked with BuiltWith and it seems they are on Azure.
Does (should, could) DownDetector also say what customer-facing services are down when some piece of infrastructure isn't working? Or is that the info that the malefactors are seeking?
I absolutely love the utility aspect of LLMs, but part of me is curious whether moving faster by using AI is going to make these sorts of failures more and more frequent.
Unable to access the portal and any hit to SSO for other corporate accesses is also broken. Seems like there's something wrong in their Identity services.
Could be DNS, I'm seeing SERVFAIL trying to resolve what look to be MS servers when I'm hitting (just one example) mygoodtogo.com (trying to pay a road toll bill, and failing).
Apologies, but this just reads like a low effort critique of big things.
To be clear, they should get criticism. They should be held liable for any damage they cause.
But the fact that they remain the biggest cloud offering out there isn't something you'd expect to change from a few outages that, by most all evidence, potential replacements have as well. What's more, a lot of the outages potential replacements have are often more global in nature.
Yeah, I have non prod environments that don't use FD that are functioning. Routing through FD does not work. And a different app, nonprod doesn't use FD (and is working) but loads assets from the CDN (which is not working).
FD and CDN are global resources and are experiencing issues. Probably some other global resources as well.
Hate to say it, but DNS is looking like it's still the undisputed champ.
HTTPSConnectionPool(host='schemas.xmlsoap.org', port=443): Max retries exceeded with url: /soap/encoding/ (Caused by SSLError(CertificateError("hostname 'schemas.xmlsoap.org' doesn't match '*.azureedge.net'")))
A service we rely on that isn't even running on Azure is inaccessible due to this issue. For an asset that probably never changes. Wild for that to be the SPOF.
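You can confirm that kind of mismatch yourself by pulling the certificate the edge actually serves and inspecting its SAN list. A small sketch with the Python standard library; it deliberately turns off hostname checking (the mismatch is the thing we want to observe) while still validating the chain:

    import socket
    import ssl

    HOST = "schemas.xmlsoap.org"

    ctx = ssl.create_default_context()
    ctx.check_hostname = False  # the name mismatch is what we're here to look at
    # verify_mode stays CERT_REQUIRED, so getpeercert() returns the parsed certificate

    with socket.create_connection((HOST, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            cert = tls.getpeercert()
            print("subject:", cert.get("subject"))
            print("SAN:", cert.get("subjectAltName"))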
downdetector reports coincident cloudflare outage. is microsoft using cloudflare for management plane, or is there common infra? data center problem somewhere, maybe fiber backbone? BGP?
Yeah the graph for that one looks exactly the same shape. I wonder if they were depending on some azure component somehow, or maybe there were things hosted on both and the azure failure made enough things failover to AWS that AWS couldn't cope? If that was the case I'd expect to see something similar with GCP too though.
Edit: nope looks like there's actually a spike on GCP as well
Definitely also a strong possibility. I wish I had paid more attention during the AWS one earlier to see what other things looked like on there at the time.
winget upgrade fabric
Failed in attempting to update the source: winget
An unexpected error occurred while executing the command:
InternetOpenUrl() failed.
0x80072ee7 : unknown error
When you look at the scale of the reports, you find they are much lower than Azure's. Seeing a bunch of 24-hour sparkline-type graphs next to each other can make it look like they are equally impacted, but AWS has 500 reports and Azure has 20,000. The scale is hidden by the choice of graph.
In other words, people reporting outages at AWS are probably having trouble with microsoft-run DNS services or caching proxies. It's not that the issues aren't there, it's that the internet is full of intermingled complexity. Just that amount of organic false-positives can make it look like an unrelated major service is impacted.
As of now Azure Status page still shows no incident. It must be manually updated, someone has to actively decide to acknowledge an issue, and they're just... not. It undermines confidence in that status page.
I know how to fix this but this community is too close minded and argumentative egocentric sensitive pedantic threatened angry etc to bother discussing it
I noticed issues on Azure so I went to the status page. It said everything was fine even though the Azure Portal was down. It took more than 10 minutes for that status page to update.
How can one of the richest companies in the world not offer a better service?
My best guess at the moment is something global like the CDN is having problems affecting things everywhere. I'm able to use a legacy application we have that goes directly to resources in uswest3, but I'm not able to use our more modern application which uses APIM/CDN networks at all.
From Azure status page: "Customers can consider implementing failover strategies with Azure Traffic Manager, to fail over from Azure Front Door to your origins".
I especially like how Nadella speaks of layoffs as some kind of uncontrollable natural disaster, like a hurricane, caused by no-one in particular. A kind of "God works in mysterious ways".
> “Microsoft is being recognized and rewarded at levels never seen before,” Nadella wrote. “And yet, at the same time, we’ve undergone layoffs. This is the enigma of success in an industry that has no franchise value.”
> Nadella explained the disconnect between thriving financials and layoffs by stating that “progress isn’t linear” and that it is “sometimes dissonant, and always demanding.”
I've read the whole memo and it's actually worse than those excerpts. Nadella doesn't even claim these were low performers:
> These decisions are among the most difficult we have to make. They affect people we’ve worked alongside, learned from, and shared countless moments with—our colleagues, teammates, and friends.
Ok, so Microsoft is thriving, these were friends and people "we've learned from", but they must go because... uh... "progress isn't linear". Well, thanks Nadella! That explains so much!
Message from the Azure Status Page: https://azure.status.microsoft/en-gb/status
Starting at approximately 16:00 UTC, we began experiencing Azure Front Door issues resulting in a loss of availability of some services. We suspect that an inadvertent configuration change as the trigger event for this issue. We are taking two concurrent actions where we are blocking all changes to the AFD services and at the same time rolling back to our last known good state.
We have failed the portal away from Azure Front Door (AFD) to mitigate the portal access issues. Customers should be able to access the Azure management portal directly.
We do not have an ETA for when the rollback will be completed, but we will update this communication within 30 minutes or when we have an update.
This message was last updated at 17:17 UTC on 29 October 2025
"This message was last updated at 18:11 UTC on 29 October 2025"
This message was last updated at 19:57 UTC on 29 October 2025
> In 50%+ the cases they just don‘t report it anywhere, even if its for 2h+.
I assume you mean publicly. Are you getting the service health alerts?
But, for future reference:
site:microsoft.com csam
Storytelling is how issues get addressed. Help the CSAM tell the story to the higher ups.
Child Sex-Abuse Material?!? Well, a nice case of acronym collision.
No -- the one referencing crime should NEVER have been turned into an acronym.
Crimes should not be described in euphemistic terms (which is exactly what the acronym is)
I'm simplifying a bit, but I don't think it's likely that Azure has a similar race condition wiping out DNS records on _one_ system than then propagates to all others. The similarity might just end at "it was DNS".
They didn't provide any details on latency. It could have been delayed an hour or a day and no one noticed
https://azure.microsoft.com/en-us/products/frontdoor
• https://www.xbox.com/en-US also doesn't fully paint. Header comes up, but not the rest of the page.
• https://www.minecraft.net/en-us is extremely slow, but eventually came up.
Edit: Typo!
The other day during the AWS outage they "reported" OVH down too.
We already had to do it for large files served from Blob Storage since they would cap out at 2MB/s when not in cache of the nearest PoP. If you’ve ever experienced slow Windows Store or Xbox downloads it’s probably the same problem.
I had a support ticket open for months about this and in the end the agent said “this is to be expected and we don’t plan on doing anything about it”.
We’ve moved to Cloudflare and not only is the performance great, but it costs less.
Only thing I need to move off Front Door is a static website for our docs served from Blob Storage, this incident will make us do it sooner rather than later.
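If you want to sanity-check a throughput claim like that against your own endpoints, a crude measurement is usually enough to show the cold-vs-warm PoP difference. A sketch assuming the Python requests library; the URL is a hypothetical placeholder:

    import time
    import requests

    URL = "https://contoso.blob.core.windows.net/public/big-file.bin"  # hypothetical - use your own blob/CDN URL

    def throughput_mbps(url: str) -> float:
        start = time.monotonic()
        total = 0
        with requests.get(url, stream=True, timeout=30) as resp:
            resp.raise_for_status()
            for chunk in resp.iter_content(chunk_size=1 << 20):
                total += len(chunk)
        return total / (time.monotonic() - start) / 1e6  # MB/s

    print(f"first fetch:  {throughput_mbps(URL):.1f} MB/s")  # likely a cache miss at the PoP
    print(f"second fetch: {throughput_mbps(URL):.1f} MB/s")  # should now be served from cache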
Be aware that if you’re using Azure as your registrar, it’s (probably still) impossible to change your NS records to point to CloudFlare’s DNS server, at least it was for me about 6 months ago.
This also makes it impossible to transfer your domain to them either, as CloudFlare’s domain transfer flow requires you set your NS records to point to them before their interface shows a transfer option.
In our case we had to transfer to a different registrar, we used Namecheap.
However, transferring a domain from Azure was also a nightmare. Their UI doesn’t have any kind of transfer option, I eventually found an obscure document (not on their Learn website) which had an az command which would let you get a transfer code which I could give to Namecheap.
Then I had to wait over a week for the transfer timeout to occur because there is no way on Azure side that I could find to accept the transfer immediately.
I found CloudFlare’s way of building rules quite easy to use, different from Front Door but I’m not doing anything more complex than some redirects and reverse proxying.
I will say that Cloudflare’s UI is super fast, with Front Door I always found it painfully slow when trying to do any kind of configuration.
Cloudflare also doesn’t have the problem that Front Door has where it requires a manual process every 6 months or so to renew the APEX certificate.
They quickly updated the message to REMOVE the link. Comical at this point.
https://news.ycombinator.com/item?id=32031639
https://news.ycombinator.com/item?id=32032235
Edit: wow, I can't believe we hadn't put https://news.ycombinator.com/item?id=32031243 in https://news.ycombinator.com/highlights. Fixed now.
Long before that, the first raid array anyone set up for my (teams’) usage, arrived from Sun with 2 dead drives out of 10. They RMA’d us 2 more drives and one of those was also DOA. That was a couple years after Sun stopped burning in hardware for cost savings, which maybe wasn’t that much of a savings all things considered.
I was an intern but everyone seemed very stressed.
https://news.ycombinator.com/item?id=32030400
dang saying it's temporary: https://news.ycombinator.com/item?id=32031136
And that IP says it's with M5 again. Bunch of on-call peeps over there that definitely know the instant something major goes down.
they think that they are 'eliminating a single point of failure', but in reality, they end up adding multiple, complicated points of mostly failure.
But they won't be.
I always go everywhere adequately prepared for beverages and food. Thanks to your comment, I have a new reason to do so. Take out coffees are actually far from guaranteed. Payment systems could go down, my bank account could be hacked or maybe the coffee shop could be randomly closed. Heck, I might even have an accident crossing the road. Anything could happen. Hence, my humble flask might not have the top beverage in it but at least it works.
We all design systems with redundancy, backups and whatnot, but few of us apply this thinking to our food and drink. Maybe get a kettle for the office and a backup kettle, in case the first one fails?
Here in The Netherlands, almost all trains were first delayed significantly, and then cancelled for a few hours because of this, which had real impact because today is also the day we got to vote for the next parliament (I know some who can't get home in time before the polls close, and they left for work before they opened).
If it’s a multi day event, it’s probably that way for a reason. Partially the same as the solution to above.
The description of voting in the Netherlands is that you can see your ballot physically go into a clear box and stay to see that exact box be opened and all ballots tallied.
Dropping a ballot in a box in your neighborhood helps ensure nothing with regards to the actual ballot count.
> You can stay there and wait for the count at the end of the day if you want to.
And if you watch the election night news, you'll see footage of multiple people counting the votes from the ballot boxes, again with various people observing to check that nothing dodgy is going on.
Having everyone just put their ballots in a postbox seems like a good way to remove public trust from the electoral system, because no one's standing around waiting for the postie to collect the mail, or looking at what happens in the mail truck, or the rest of the mail distribution process.
I'm sure I've seen reports in the US of people burning postboxes around election time. Things like this give more excuses to treat election results as illegitimate, which I believe has been an issue over there.
(Yes, we do also have advanced voting in NZ, but I think they're considered "special votes" and are counted separately .. the elections are largely determined on the day by in-person votes, with the special votes being confirmed some days later)
It is a small but distinct difference between mail/early voting and putting the votes directly into the ballot box.
There's so much more you have to trust.
If you wish, you can write a phrase on your ballot. The phrases and their corresponding vote are broadcast (on tv, internet, etc). So if you want to validate that your vote was tallied correctly, write a unique phrase. Or you could pick a random 30 digit number, collisions should be zero-probability, right?
I mean, this would be annoying because people would write slurs and advertisements, and the government would have to broadcast them. But, it seems pretty robust.
I'd suggest the state handle the number issuing, but then they could record who they issued which numbers to, and the winning party could go about rounding up their opposition, etc.
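On the random-30-digit-number idea above, the birthday-problem arithmetic does back up the intuition. A quick sketch, assuming a made-up but roughly US-scale figure of 160 million ballots:

    import math

    n = 160_000_000   # assumed number of ballots; a rough US-scale turnout figure
    space = 10 ** 30  # possible random 30-digit numbers

    # Birthday-problem approximation: P(collision) ~= 1 - exp(-n(n-1) / (2 * space))
    p_collision = -math.expm1(-n * (n - 1) / (2 * space))  # 1 - exp(-x), computed stably
    print(f"P(at least one collision) ~ {p_collision:.1e}")  # on the order of 1e-14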
Googling around a bit, it sounds like there are systems that let you verify that your ballot made it, but not necessarily that it was counted correctly. (For this reason, I guess?)
When I vote in person, I know all the officials there from various parties are just like...looking at the box for the whole day to make sure everything is counted. It's much easier to understand and trust.
Sure you got a notification! That doesn't mean anything. Even with human counted ballots or electronic ballots.
Following the chain of custody from vote to verification, in some way, would be nice.
Here in Latvia the "election day" is usually (always?) on weekend, but the polling stations are open for some (and different!) part of every weekday leading up. Something like couple hours on monday morning, couple hours on tuesday evening, couple around midday wednesday, etc. In my opinion, it's a great system. You have to have a pretty convoluted schedule for at least one window not to line up for you.
Here is the form to register for postal voting in the Republic of Ireland - https://www.dublincity.ie/sites/default/files/2024-01/pv4-wo...
Instructions on how to submit the form / register for mail-in votes is on page 4.
Hope that helps anyone else out who needs it in Ireland.
> You may use this form to apply for a postal vote if, due to the circumstances of your work/service or your full-time study in the State, you cannot go to your polling station on polling day.
Which seems to indicate that's only for people who can't go to the polling station, otherwise you do have to go there.
As someone who spent the first 30 years of my life in Ireland but is now part of that diaspora, it's frustrating but I get it. I don't get to vote, but neither do thousands of plastic paddys who have very little genuine connection to Ireland.
That said, I'm sure they could expand the voting window to a couple of days at least without too much issue.
But I still prefer the paper vote, and I'm usually blockchain-apathetic.
Mail in voting is just better all around for a geographically diverse place as the US and I wish would be adopted by all states.
So excited to see how the right-wing pedants here disagree with this.
If so, I see a lot to dislike. The point I was making is that you can't anticipate what might come up. Just because it's worked thus far doesn't mean it's designed for resilience. There's a lot of ways you could miss out in that type of situation. It seems silly to make sure everything else is redundant and fault tolerant in the name of democracy when the democratic process itself isn't doing the same.
That’s just ridiculous in my opinion. Makes me wonder how many well intentioned would be voters end up missing out each election cause shit happens and voting is pretty optional
What is the that group's deviation from the general voting population's preferences?
What are the margins of the votes on those ballot questions?
We've been closing a lot of polling places recently:
https://abcnews.go.com/US/protecting-vote-1-5-election-day-p...
Here's the President of the United States on Sunday: https://truthsocial.com/@realDonaldTrump/posts/1154418712892...
"No mail-in or 'Early' Voting, Yes to Voter ID! Watch how totally dishonest the California Prop Vote is! Millions of Ballots being 'shipped.' GET SMART REPUBLICANS, BEFORE IT IS TOO LATE!!!"
In most countries, in the elections you vote for the member of parliament you want. Presidential elections and city council elections are held separately, but are also equally simple. But in one election you cast your vote for one person, and that's it.
With this kind of elections, many countries manage to hold the elections on paper ballots, count them all by hand, and publish results by midnight.
But on an American ballot, you vote for, for example:
I don't think it would be possible to count all these 20 or 40 votes by hand. That's why they use voting machines in America. https://ballotpedia.org/Official_sample_ballots,_2020
Here in Indonesia, in a city of 2 million people there are over 7000 voting stations. While we vote on 5 ballots (President, plus Legislative at the National, Province, and City/Regency levels), we still use paper ballots and count them by hand.
There is a ballot tracking system as well, I can see and be notified as my ballot moves through the counting system. It's pretty cool.
I actually just got back from dropping off my local elections ballot 15m ago, quick bike trip maybe a mile or so away and back.
Of course, because it makes it easy for people to vote, the republicans want to do away with it. If you have to stand in line for several hours (which seems to be very normal in most cities) and potentially miss work to do it that's going to all but guarantee that working people and the less motivated will not vote.
So yes in places that only do in person voting, national or state holiday.
Horses were famously tamed in 2007 after AWS released S3 to the public, this is the best of times.
I do need a human to provision a few servers and configure e.g. load balancing and when to spin up additional servers under load. But that is far less of a PITA than having my systems tied to a specific provider or down whenever a cloud precipitates.
The moment you choose to use S3 instead of hosting your own object store, though, you either use AWS because S3 and IAM already have you, or spend more time on the care and feeding of your storage system as opposed to actually doing the thing your customers are paying you to do.
It's not impossible, just complicated and difficult for any moderately complex architecture.
I really do feel the only viable future for clouds is hybrid or agnostic clouds.
[1] https://news.ycombinator.com/item?id=44689366
[2] https://news.ycombinator.com/item?id=44684373
I have never had much confidence in Azure as a cloud provider. The vertical integration of all the things for a Microsoft shop was initially very compelling. I was ready to fight that battle. But, this fantasy was quickly ruined by poor execution on Microsoft's part. They were able to convince me to move back to AWS by simply making it difficult to provision compute resources. Their quota system & availability issues are a nightmare to deal with compared to EC2.
At this point I'd rather use GCP over Azure and I have zero seconds of experience with it. The number of things Microsoft gets right in 2025 can be counted single-handedly. The things they do get right are quite good, but everything else tends to be extremely awful.
I remember I at one point had expanded enough menus that it covered the entirety of the screen.
Never before have I felt so lost in a cloud product.
Yeah, that had some fun ideas but was way more confusing than it needed to be. But also that was quite a few years back now. The Portal ditched that experience relatively quickly. Just long enough to leave a lot of awful first impressions, but not long enough for it to be much more than a distant memory at this point, several redesigns later.
[0] The name "Blades" for that came from the early years of the Xbox 360, maybe not the best UX to emulate for a complex control panel/portal.
Like, AWS, and GCP to a lesser extent, has a principled approach where simple click-ops goals are simple. You can access the richer metadata/IAM object model at any time, but the wizards you see are dumb enough to make easy things easy.
With Azure, those blades allow tremendously complex “you need to build an X Container and a Container Bucket to be able to add an X” flows to coexist on the same page. While this exposes the true complexity, and looks cool/works well for power users, it is exceedingly unintuitive. Inline documentation doesn’t solve this problem.
I sometimes wonder if this is by design: like QuickBooks, there’s an entire economy of consultants who need to be Certified and thus will promote your product for their own benefit! Making the interface friendly to them and daunting to mere mortals is a feature, not a bug.
But in Azure’s case it’s hard to tell how much this is intentional.
Here's a somewhat ancient Stack Overflow screenshot I found: https://i.sstatic.net/yCseI.png
(I think that's from near the transition because it has full "windowing" controls of minimize/maximize/close buttons. I recall a period with only close buttons.)
All that blue space you could keep filling with more "blades" as you clicked on things until the entire page started scrolling horizontally to switch between "blades". Almost everything you could click opened in a new blade rather than in place in the existing blade. (Like having "Open in New Window" as your browser default.)
It was trying to merge the needs of a configurable Dashboard and a "multi-window experience". You could save collections of blades (a bit like Niri workspaces) as named Dashboards. Overall it was somewhere between overkill and underthought.
(Also someone reminded me that many "blades" still somewhat exist in the modern Portal, because, of course, Microsoft backwards compatibility. Some of the pages are just "maximized Blades" and you can accidentally unmaximize them and start horizontally scrolling into new blades.)
[0] https://github.com/YaLTeR/niri
depending on the resource you're accessing, you can get 5+ sections each with their own ui/ux on the same page/tab and it can be confusing to understand where you're at in your resources
if you're having trouble visualizing it, imagine an url where each new level is a different application with its own ui/ux and purpose all on the same webpage
I never understood why a clear and consistent UI and improved UX isn't more of a priority for the big three cloud providers. Even though you talk mostly via platform SDK's, I would consider better UI especially initially, a good way to bind new customers and pick your platform over others.
I guess with their bottom line they don't need it (or cynically, you don't want to learn and invest in another cloud if you did it once).
For some reason this applies to all AWS, GCP and Azure. Seems like the result of dozens of acquisitions.
Any time something is that unintuitive to get started, I automatically assume that if I encounter a problem that I’ll be unable to solve it. That thought alone leads me to bounce every time.
AWS Is a complete mess. Everything is obscured behind other products, and they're all named in the most confusing way possible.
MSFT : Hold my beer...
TBH, GCP is very good! More people should use it.
https://cloud.google.com/resource-manager/docs/project-suspe...
I'd hope you can create a Google Cloud account under a completely different email address, but I do as little business with Google as I can get away with, so I have no idea.
I feel like compliance is the entire point of using these cloud providers. You get a huge head start. Maintaining something like PCI-DSS when you own the real estate is a much bigger headache than if it's hosted in a provider who is already compliant up through the physical/hardware/networking layers. Getting application-layer checkboxes ticked off is trivial compared to "oops we forgot to hire an armed security team". I just took a look and there are currently 316 certifications and attestations listed under my account.
https://aws.amazon.com/artifact/faq/
Microsoft really wants you to use their PaaS offerings, and so things on Azure are priced accordingly. For a Microsoft shop just wanting to lift-and-shift, Azure isn't the best choice unless the org has that "nobody ever got fired for buying Microsoft" attitude.
They think they have the market captured, but I think what their dwindling quality and ethics are really going to drive is adoption of self-hosted, distributed computing frameworks. Nerds are the ones who drove adoption of these platforms, and we can eventually end it if we put in the work.
Seriously, with container technology, and a bit more work / adoption on distributed compute systems and file storage (IPFS, FileCoin), there is a future where we don't have to use big brother's compute platform. Fuck these guys.
I really hope this pushes the internet back to how it used to be, self hosted, privacy, anonymity. I truly hope that's where we're headed, but the masses seem to just want to stay comfortable as long as their show is on TV
if all companies focused on fixing each and every social issue that exists in the world, how would they make any money?
I would link to that article, but that one does seem down ;)
> They're stating they're working with the Azure teams, so I suspect this is related.
Personally I am thinking more and more about Hetzner. Yes, I know it's not an apples-to-apples comparison. But it's honestly so good.
Someone had created a video where they showed the underlying hardware etc. I am wondering if there is something like https://vpspricetracker.com/ but with geek-benchmarks as well.
The video was affiliated with scalahosting, but I still don't think there was too much bias from them, and at around 3:37 they showed a graph comparing prices https://www.youtube.com/watch?v=9dvuBH2Pc1g
It shows that Contabo has better hardware, but I am pretty sure there might be some other issues, and honestly I feel a sense of trust with Hetzner that I am not sure about with others.
Either Hetzner, or self-hosting stuff personally, or just having a very cheap VPS and going to Hetzner if need be (though Hetzner is already pretty cheap), or I might use some free services that I know are good as well.
https://blog.cloudflare.com/rearchitecting-workers-kv-for-re...
Personally I just trust cloudflare more than google, given how their focus is on security whereas google feels googly...
I have heard some good things about google cloud run and the google's interface feels the best out of AWS,Azure,GCloud but I still would just prefer cloudflare/hetzner iirc
Another question: Has there ever been a list of all major cloud outages, like I am interested how many times google cloud and all cloud providers went majorly down I guess y'know? is there a website/git project that tracks this?
Credit card information would be recorded by the POS, synced to a mini-server in the back office (using store-and-forward to handle network issues) and then in a batch process overnight, sent to HQ where the payment was processed.
It wasn't until chip-and-PIN was rolled out that they started supporting "online" (i.e. processed then and there) card transactions, and even then the old method still worked if there was a network issue or power failure (all POSes had their own UPS).
The only real risk at the time was that someone tried to pay with a cancelled credit card - the bank would always honour the payment otherwise. But that was pretty uncommon back then, as you'd have to phone your bank to do it, not just press a button in an app.
Chick-fil-a has this.
One of the tech people there was on HN a few years ago describing their system. Credit card approval slows down the line, so the cards are automatically "approved" at the terminal, and the transaction is added to a queue.
The loss from fraudulent transactions turns out to be less than the loss from customers choosing another restaurant because of the speed of the lines.
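For anyone curious what that pattern looks like in the small, here is a minimal store-and-forward sketch; not any vendor's actual system, just the shape of it: approve at the terminal immediately, queue locally, settle the batch later.

    import sqlite3
    import time
    import uuid

    db = sqlite3.connect("pos_queue.db")
    db.execute(
        "CREATE TABLE IF NOT EXISTS pending ("
        "id TEXT PRIMARY KEY, amount_cents INTEGER, card_token TEXT, created REAL)"
    )

    def accept_payment(amount_cents: int, card_token: str) -> str:
        # "Approve" locally without asking the processor; the fraud risk is the trade-off.
        db.execute(
            "INSERT INTO pending VALUES (?, ?, ?, ?)",
            (str(uuid.uuid4()), amount_cents, card_token, time.time()),
        )
        db.commit()
        return "APPROVED (offline)"

    def settle_batch(charge_fn) -> None:
        # Overnight job: send everything to the processor; failures stay queued for retry.
        rows = db.execute(
            "SELECT id, amount_cents, card_token, created FROM pending"
        ).fetchall()
        for tx_id, amount_cents, card_token, _created in rows:
            if charge_fn(amount_cents, card_token):  # charge_fn is the real processor call
                db.execute("DELETE FROM pending WHERE id = ?", (tx_id,))
        db.commit()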
I go there daily because it's a nice 30min round-trip walk and I wfh. I go up there to get a diet coke or something else just to get out of the house. It amazes me when I see a handwritten sign on the door "closed, system is down". I've gotten to know the cashiers so I asked, and it's because the internet connection goes down all the time. That store has to be one of the most poorly run things I've ever seen, yet it stays in business somehow.
Your responses imply that you think people are questioning whether you would lose money on the deal while we are instead saying you’ll get laughed out of the store, or possibly asked never to come back.
1: I doubt they're "with it" enough to put together a backup arrangement for internet.
2: Their internet problems are probably due to a cheapo router, loose wire, etc.
3: The employees probably like the break.
Good luck if you make this work for you, it would be exciting to hear about if you're able to get them to work with you.
EDIT: their last quarterly was 36%. they lost $3.7bn in 24Q4 -- the christmas quarter. sold to PE in Q1.
Why doesn't someone in the store at least have one of those manual kachunk-kachunk carbon copy card readers in the back that they can resuscitate for a few days until the technology is turned back on? Did they throw them all away?
How aptly descriptive.
The stores are in the hood or middle of nowhere. The customers don’t have many options.
Last week I couldn't pay for flowers for grandma's grave because the smartphone-sized card terminal refused to work - it was stuck in a charging-booting loop, so I had to get cash. Though my partner thinks she actually wanted to get cash without a receipt for herself, keeping it off the taxes.
It's not that we are not capable. It's: is the business willing to assume the risk?
There's a fairly large supermarket near me that has both kinds of outages.
Occasionally it can't take cards because the (fiber? cable?) internet is down, so it's cash only.
Occasionally it can't take cash because the safe has its own cellular connection, and the cell tower is down.
I was at Frank's Pizza in downtown Houston a few weeks ago and they were giving slices of pizza away because the POS terminal died, and nobody knew enough math to take cash. I tried to give them a $10 and told them to keep the change, but "keep the change" is an unknown phrase these days. They simply couldn't wrap their brains around it. But hey, free pizza!
I feel pretty justified in my previous decisions to move away from Azure. Using it feels like building on quicksand…
At this point I don't believe that any one of them is any better or more reliable than the others.
And microsoft.com too - that's gotta hurt
- on a US tenant I am unable to access login.microsoftonline.com and the login flow stalls on any SSO authentication attempt.
- on a European tenant, probably germany-west, I am able to login and access the Azure portal.
Error: visual-studio-code: Download failed on Cask 'visual-studio-code' with message: Download failed: https://update.code.visualstudio.com/1.105.1/darwin-arm64/st...
https://www.youtube.com/watch?v=YJVkLP57yvM
Luckily, we moved off Azure Front Door about a year ago. We’d had three major incidents tied to Front Door and stopped treating it as a reliable CDN.
They weren’t global outages, more like issues triggered by new deployments. In one case, our homepage suddenly showed a huge Microsoft banner about a “post-quantum encryption algorithm” or something along those lines.
Kinda wild that a company that big can be so shaky on a CDN, which should be rock solid.
https://archive.is/Q4izZ
The root zone and www. do not: https://dnschecker.org/#A/microsoft.com (all resolvers return records)
And querying https://www.microsoft.com/ results in HTTP 200 on the root document, but the page elements return errors (a 504 on the .css/.js documents, a 404 on some fonts, Name Not Resolved on scripts.clarity.ms, Connection Timed Out on wcpstatic.microsoft.com and mem.gfx.ms). That many different kinds of errors is actually kind of impressive.
I'm gonna say this was a networking/routing issue. The CDN stayed up, but everything else non-CDN became unroutable, and different requests traveled through different paths/services, but each eventually hit the bad network path, and that's what created all the different responses. Could also have been a bad deploy or a service stopped running and there's different things trying to access that service in different ways, leading to the weird responses... but that wouldn't explain the failed DNS propagation.
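If you want to reproduce that kind of mixed-failure survey, a quick sketch like this (assuming the Python requests library, with the hostnames taken from the comment above) separates DNS failures from timeouts from plain HTTP errors:

    import socket
    import requests

    URLS = [
        "https://www.microsoft.com/",
        "https://wcpstatic.microsoft.com/",
        "https://scripts.clarity.ms/",
        "https://mem.gfx.ms/",
    ]

    for url in URLS:
        host = url.split("/")[2]
        try:
            socket.getaddrinfo(host, 443)  # does the name resolve at all?
        except socket.gaierror as exc:
            print(f"{host}: DNS failure ({exc})")
            continue
        try:
            resp = requests.get(url, timeout=10)
            print(f"{host}: HTTP {resp.status_code}")
        except requests.exceptions.RequestException as exc:
            print(f"{host}: {type(exc).__name__}")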
I wonder if this is microsoft "learning" to "prevent" such an issue and instead triggered it...
"One often meets his destiny on the path he takes to avoid it" -- Master Oogway
2028: the year of migrating from a managed provider to the cloud
2029: the year of migrating from the cloud to your own metal in a rack
People keep thinking the solution to their problems is to do something new (that they don't fully understand).
TIL it's called Nirvana Fallacy
We used to call it "The grass is always greener on the other side of the fence."
[1]: https://azure.status.microsoft/en-us/status
> There are currently no active events. Use Azure Service Health to view other issues that may be impacting your services.
Links to a page on Azure Portal which is down...
"We are investigating an issue with the Azure Portal where customers may be experiencing issues accessing the portal. More information will be provided shortly."
Moving a website quickly is never fun.
Now, they go down a lot less frequently, but when they do, it's more widespread.
Decentralisation is winning it seems.
> Big Tech lobbying is riding the EU’s deregulation wave by spending more, hiring more, and pushing more, according to a new report by NGO’s Corporate Europe Observatory and LobbyControl on Wednesday (29 October).
> Based on data from the EU’s transparency register, the NGOs found that tech companies spend the most on lobbying of any sector, spending €151m a year on lobbying — a 33 percent increase from €113m in 2023.
Gee whizz, I really do wonder how they end up having all the power!
[0] https://news.ycombinator.com/item?id=45744973
Pick your point on the scale
https://en.wikipedia.org/wiki/Natural_monopoly
Stonks
Even the national digital id service is down.
Can't help but smirk as my country is ramming through "Digital ID" right now
[1] https://azure.microsoft.com/en-us/products/frontdoor
[2] https://learn.microsoft.com/en-us/azure/frontdoor/front-door...
It's CDN and FrontDoor at least.
https://microsoft.com/deviceloginus
Seems like they migrated the non-Gov login but not the Gov one. C'mon Microsoft, I've got a deadline in a few days.
I have been having issues with GitHub and the winget tool for updates throughout the day as well. I imagine things are pulling from the same locations on Azure for some of the software I needed to update (NPM dependencies, and some .NET tooling).
When you find an honest vendor, cherish them. They are rare, and they work hard to earn and keep your confidence.
So if we look at these companies' bottom lines, all those big wigs are actually doing something right. Sales and lobbying capacity is way more effective than reliability or good engineering (at least in the short term).
You know nobody is migrating off of AWS or Azure because of these.
That's certainly not the right conclusion.
I guess the GCP is next.
There's no way to tell, and after about 30 minutes, the release process on VS Code Marketplace failed with a cryptic message: "Repository signing for extension file failed." And there's no way to restart/resume it.
Much of Xbox is behind that too.
"We’re investigating an issue impacting Azure Front Door services. Customers may experience intermittent request failures or latency. Updates will be provided shortly."
This mom’s son was asking Tesla’s Grok AI chatbot about soccer. It told him to send nude pics, she says
xAI, the company that developed Grok, responds to CBC: 'Legacy Media Lies'
For example when I try to log into our payroll provider Brightpay, it sends me here:
https://bpuk1prod1environment.blob.core.windows.net/host-pro...
Microsoft CDN
There, that's it. You're selling it to (hopefully) technical people
But seriously I thought it would be the console, not a CDN.
And it's very clear from these updates that they're more focused on the portal than the product; their updates haven't even mentioned fixing it yet, just moving off of it, as if it were some third-party service that's down.
Unsubstantiated idea: the support contract likely specifies a window between each reporting step, and the status page is the last one, the one referenced in the legal documents, which gives them several more hours before the clauses trigger.
(couldn't resist adding it. i acknowledge this comment adds no value to the discussion)
Edit: As of 9:19 AM Pacific time, I'm now getting successful A responses but they can take several seconds. The web server at that address is not responding.
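For anyone who wants to reproduce that kind of check, here's a minimal Python sketch (the hostname is a placeholder, and the stdlib resolver may answer from the OS cache, so timings won't exactly match dig):

    # Minimal sketch: time repeated A-record lookups for a hostname.
    # "example.azurefd.net" is a placeholder; swap in the endpoint you're debugging.
    # Note: the stdlib resolver may answer from the OS cache, unlike dig.
    import socket
    import time

    HOST = "example.azurefd.net"

    for attempt in range(5):
        start = time.monotonic()
        try:
            infos = socket.getaddrinfo(HOST, 443, proto=socket.IPPROTO_TCP)
            elapsed = time.monotonic() - start
            addrs = sorted({info[4][0] for info in infos})
            print(f"attempt {attempt}: {elapsed:.2f}s -> {addrs}")
        except socket.gaierror as exc:
            elapsed = time.monotonic() - start
            print(f"attempt {attempt}: {elapsed:.2f}s -> lookup failed ({exc})")
        time.sleep(2)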
And so is Microsoft: http://www.microsoft.com/
The actual stuff I was working on (App Insights, Function App) that was still open was operational.
There's a lot of outages this month!
Doesn't seem to be too bad of an outage unless you were relying on Azure Front Door.
>What is required to be able to use MyGet? ... MyGet runs its operations from the Microsoft Azure in the West Europe region, near Amsterdam, the Netherlands.
We had to bypass Front Door.
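For the curious, "bypassing Front Door" can be as crude as talking to the origin directly, assuming you know its address and it accepts traffic that didn't come through AFD. A hedged sketch in Python, where the origin IP, hostname, and path are all placeholders:

    # Hedged sketch: hit the origin directly instead of going through AFD.
    # ORIGIN_IP, PUBLIC_HOST and the path are placeholders for illustration,
    # and assume the origin accepts requests that bypass Front Door.
    import requests

    ORIGIN_IP = "203.0.113.10"        # hypothetical origin/backend address
    PUBLIC_HOST = "app.example.com"   # hostname normally fronted by AFD

    resp = requests.get(
        f"https://{ORIGIN_IP}/healthz",
        headers={"Host": PUBLIC_HOST},
        timeout=10,
        verify=False,  # origin cert won't match a raw IP; acceptable for a manual check
    )
    print(resp.status_code, resp.headers.get("server"))

(If the origin insists on SNI for the public hostname, curl --resolve app.example.com:443:203.0.113.10 does the same thing while keeping the certificate checks intact.)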
I also got a weird notification in VS2022 that my license key was upgraded to Enterprise, but we did not purchase anything.
Any guess on what's causing it?
In hindsight, I guess the foresight of some organizations to go multi-cloud was correct after all.
It's not easy though.
I'm curious—at what point did you decide the overhead was worth it? Was it after experiencing an outage, or did you architect for it from day one?
As someone launching a product soon (more on the builder/product side than infra-engineer), I keep wrestling with this. The pragmatist in me says "start simple, prove the concept, then layer in resilience." But then you see events like this week and think "what if this happens during launch?"
How did you handle the operational complexity? Did you need dedicated DevOps folks, or are there patterns/tools that made it manageable for a smaller team?
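Not a full answer, but the "start simple" version doesn't have to be heavyweight: even naive client-side failover across two independently hosted endpoints buys you something while you prove the concept. A rough sketch (both URLs are placeholders; real setups usually push this into DNS or a global load balancer rather than per-request code):

    # Minimal sketch of client-side failover across two providers.
    # Both endpoints are hypothetical; real deployments usually do this in DNS,
    # a global load balancer, or a service mesh rather than per-request code.
    import requests

    ENDPOINTS = [
        "https://api-primary.example.com",   # e.g. fronted by AFD
        "https://api-fallback.example.net",  # e.g. hosted with a second provider
    ]

    def fetch(path: str, timeout: float = 3.0) -> requests.Response:
        last_error = None
        for base in ENDPOINTS:
            try:
                resp = requests.get(f"{base}{path}", timeout=timeout)
                if resp.status_code < 500:
                    return resp
                last_error = RuntimeError(f"{base} returned {resp.status_code}")
            except requests.RequestException as exc:
                last_error = exc  # try the next provider
        raise RuntimeError("all endpoints failed") from last_error

    # Usage: fetch("/healthz") keeps working as long as one provider is up.

The hard part isn't the request routing, it's keeping deployments and data in sync across providers, which is where the operational overhead people mention usually lives.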
That said, I don't hear about GCP outages all that often. I do think AWS might be leading in outages, but that's a gut feeling; I didn't look up the numbers.
This isn't GCP's fault, but the outage ended up taking down Cloudflare too, so in total impact I think that takes the cake.
Few customers... few voices to complain as well.
https://www.natwest.com/
Be interesting to understand cause here. Pretty big impact on services we use
That is a pass.
To be clear, they should get criticism. They should be held liable for any damage they cause.
But that they remain the biggest cloud offering out there isn't something you'd expect to change because of a few outages that, by most evidence, the potential replacements suffer as well. What's more, the outages those potential replacements have are often more global in nature.
If that's true, then it's a sign that Azure's control/data plane separation is doing its job! At least for now.
Kind of mind-boggling that it's still sometimes DNS, maybe.
https://isitdns.com/
This is not the first or second time this has happened; multiple hyperscalers have failed, one by one.
FD and CDN are global resources and are experiencing issues. Probably some other global resources as well.
Hate to say it, but DNS is looking like it's still the undisputed champ.
160k+ results on GitHub: https://github.com/search?q=http%3A%2F%2Fschemas.xmlsoap.org...
(Coder is currently at the top of the experiment list. Any other suggestions?)
Go cloud!
Institutional knowledge matters. Just has to be the right institution is all.
Services too, not just the portal.
Edit: it worked once, then died again. So I guess some resolvers, or some FD servers, may be working!
Edit: nope looks like there's actually a spike on GCP as well
QNBQ-5W8
I noticed that winget is also down, e.g.
In other words, people reporting outages at AWS are probably having trouble with Microsoft-run DNS services or caching proxies. It's not that the issues aren't there; it's that the internet is full of intermingled complexity. Just that amount of organic false positives can make it look like an unrelated major service is impacted.
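One way to sanity-check where a name you depend on actually terminates, as a sketch (third-party dnspython, placeholder hostname): walk its CNAME chain and look for telltale suffixes like azurefd.net or azureedge.net versus cloudfront.net.

    # Sketch: walk a hostname's CNAME chain to see which provider it actually
    # hangs off (e.g. *.azurefd.net / *.azureedge.net for Azure Front Door,
    # *.cloudfront.net for CloudFront). Uses the third-party dnspython package.
    import dns.resolver  # pip install dnspython

    def cname_chain(name: str, max_depth: int = 10) -> list[str]:
        chain = [name]
        for _ in range(max_depth):
            try:
                answer = dns.resolver.resolve(chain[-1], "CNAME")
            except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
                break
            chain.append(str(answer[0].target).rstrip("."))
        return chain

    print(cname_chain("www.example.com"))  # placeholder hostname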
There is no way it’s DNS
It was DNS
I can at least login to Azure. But several MS sites are down.
How can one of the richest companies in the world not offer a better service?
Better service costs money.
Except that it is not!
Interesting times...
What terrible advice.
But what if I don't want AI brought to me?
Although judging by the available transports it will likely be colonized by nazis.
Unless that's a euphemism for "vibe coding", no.
> We have confirmed that an inadvertent configuration change as the trigger event for this issue.
Save the speculation for Reddit. HN is better than that.