But that’s a sacrifice he’s willing to make
Machines are about trying to kill you by driving you into a lake.
Of course it doesn’t, I said that just above. Gets you a better chance tho.
Where do I buy a car that’s not made by a private company? Where do I get internet service?
You can’t. Choose a reputable one. And pay for it, so you are a customer, not the product. Aaaaand I lost 95% of Lemmy.
It’s an equation. One of those “left for the reader”. Please start solving it.
Microsoft is a private company and they can ask you to leave, no reason given, anytime.
And they have a history of over 30 years of being evil, manipulative and anti-consumer. If you base your online life on the goodwill of Microsoft, you will be very disappointed sooner or later.
Everything in this comment is true.
I use both for my job and my subjective feeling is that Chrome is faster. JS benchmarks seem to confirm it. Privately I use Firefox 95% of the time, but I understand people who stay on Chrome just out of inertia.
Because it’s fast and works well enough to keep the reputation it built over the last 10 years.
Not only can it be cheaper, it is cheaper in most cases… when designed correctly, and if you compare TCO, not just hardware vs. IaaS.
It can also be much more expensive of course, but that’s almost always a skill issue.
That’s my whole point from the beginning, boring is good. Boring is repeatable, boring is reliable.
Of course they still have huge teams. The invention of the automobile made travel easier, and therefore more travel happened.
Banking is extremely variable. Instant transactions are periodic; I don’t know any bank that runs them globally on one machine to compensate for time zones. Batches happen at a fixed time and sit idle most of the day. Sure, you can pay out the ass for MIPS, but you’re much more cost-effective paying more for peak and idling the rest of the day.
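To put numbers on the peak-vs-idle argument, here’s a back-of-the-envelope sketch. Every figure in it (instance counts, window length, hourly rate) is a made-up assumption for illustration, not real cloud or MIPS pricing:

```python
# Back-of-the-envelope comparison: size for peak 24/7 vs. scale up for
# peak and idle the rest of the day. All numbers are made-up assumptions.

PEAK_INSTANCES = 20      # capacity needed during the batch window
BASELINE_INSTANCES = 2   # capacity needed the rest of the day
PEAK_HOURS = 3           # length of the batch window
HOURLY_RATE = 0.50       # assumed price per instance-hour, USD

# Option A: fixed capacity sized for peak, running all day
fixed_cost = PEAK_INSTANCES * 24 * HOURLY_RATE

# Option B: pay more per hour at peak, scale down afterwards
elastic_cost = (PEAK_INSTANCES * PEAK_HOURS
                + BASELINE_INSTANCES * (24 - PEAK_HOURS)) * HOURLY_RATE

print(f"sized for peak: ${fixed_cost:.2f}/day")    # $240.00/day
print(f"elastic:        ${elastic_cost:.2f}/day")  # $51.00/day
```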
My experience is with banks (including in the UK) that are modernizing, and for most apps the cloud brings brutal savings if done right, or moderate savings plus better HA/RTO.
Of course, if you migrate to the cloud because the CTO said so, and you lift and shift your 64-core monstrosity that does 3M operations a day, you’re going to end up more expensive. But that should have been a lambda function that would cost 5 bucks a day tops. That, however, requires effort, which most people avoid, and then they complain later.
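For illustration, a minimal sketch of what that lambda could look like, using AWS Lambda’s standard Python handler signature. The payload shape is hypothetical, and the cost figure above is a rough pay-per-invocation estimate, not a quote:

```python
# Minimal sketch of the "should have been a lambda" idea: a stateless
# handler invoked once per operation, so 3M operations/day need no
# capacity planning. The payload shape below is hypothetical.

import json

def handler(event, context):
    operation = json.loads(event["body"])        # hypothetical payload
    result = {"id": operation.get("id"), "status": "processed"}
    return {"statusCode": 200, "body": json.dumps(result)}
```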
Redundancy should be automatic. RAID 5, for instance.
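A toy sketch of why RAID 5 makes redundancy automatic: one XOR parity block lets you rebuild any single failed disk. This is just the parity math on byte strings, not real block-device RAID:

```python
# Toy demo of RAID 5 parity: XOR the data blocks to get a parity block,
# and any single lost block can be rebuilt from the survivors.

def xor_blocks(blocks):
    """XOR equal-length byte blocks together."""
    out = bytes(len(blocks[0]))
    for block in blocks:
        out = bytes(a ^ b for a, b in zip(out, block))
    return out

disk1, disk2, disk3 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks([disk1, disk2, disk3])   # stored on a fourth disk

# disk2 dies: rebuild it from the survivors plus parity, no human needed
rebuilt = xor_blocks([disk1, disk3, parity])
assert rebuilt == disk2
```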
Plus the cloud abstracts away a lot of complexity. You can have an Oracle (or Postgres, or Mongo) DB with multi-region redundancy, encryption and backups with a click. Much, much simpler for a sysadmin (or an architect) than setting up even the simplest MySQL on a VM. Unless you’re in the business of configuring databases, your developers should focus on writing insurance-risk code, or telco optimization, or whatever brings in money. Same with k8s, same with Kafka, same with CDN, same with KMS, same with IAM, same with object storage, same with logging and monitoring…
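As a rough sketch of the “redundancy, encryption and backups with a click” claim, here’s the equivalent single API call on AWS RDS via boto3. The identifier, sizes and credentials are placeholders, and multi-AZ stands in here for the multi-region setups mentioned above (those need read replicas on top):

```python
# Sketch: one API call gets you a standby replica with automatic
# failover, encryption at rest and automated backups. Names, sizes and
# password handling are placeholders; use a secrets manager for real.

import boto3

rds = boto3.client("rds", region_name="eu-west-1")

rds.create_db_instance(
    DBInstanceIdentifier="insurance-risk-db",    # hypothetical name
    Engine="postgres",
    DBInstanceClass="db.m5.large",
    AllocatedStorage=100,                        # GiB
    MasterUsername="appadmin",
    MasterUserPassword="placeholder-password",   # placeholder
    MultiAZ=True,                  # standby replica, automatic failover
    StorageEncrypted=True,         # encryption at rest
    BackupRetentionPeriod=7,       # automated backups, 7-day window
)
```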
You can build a redundant system in a day like Legos, with much better security and higher availability (hell, higher SLAs even) than anything a team of 5 can build in a week self-managing everything.
Agree to disagree. Banking, telecommunications, insurance, automotive and retail are all industries where I have seen wild load fluctuations. The only applications where I have seen constant load are simulations: weather, oil & gas, scientific. That’s where it makes sense to deploy your own hardware. For all else, serverless or elastic provisioning makes economic sense.
Edit, to answer the last question: to test variable loads, look at the last one. Imagine a hurricane comes around and they have to recalculate a bunch of risk components. But it can be as simple as running CI/CD tests.
You make it redundant, I thought I didn’t need to say that…
My last customer (a global insurance company) provisions several systems a day, now moving to hundreds via Jenkins. Frequency is environment-dependent.
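For flavor, a hypothetical sketch of kicking off one of those provisioning runs through Jenkins’ standard buildWithParameters endpoint; the URL, job name, parameters and credentials are all placeholders:

```python
# Hypothetical trigger for a provisioning job via Jenkins' standard
# buildWithParameters endpoint. URL, job name, parameters and the
# user/API-token pair are all placeholders.

import requests

JENKINS_URL = "https://jenkins.example.com"
JOB = "provision-environment"                  # hypothetical job name

resp = requests.post(
    f"{JENKINS_URL}/job/{JOB}/buildWithParameters",
    auth=("ci-bot", "api-token"),              # Jenkins user + API token
    params={"ENVIRONMENT": "staging", "REGION": "eu-west-1"},
    timeout=30,
)
resp.raise_for_status()                        # 201 means build queued
print("queued at:", resp.headers.get("Location"))
```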
It’s not about responsibility (and only the C-suite reports to the shareholders, not Mike), it’s about capability, visibility, tooling and availability.
Because the customer demands it.
The only problem is that the single instance also has 20 scenarios (and keeps the 2 as well), making it more brittle.
A well-designed system removes points of failure; disk, power and network are the obvious ones, and as long as you keep it Byzantine-safe, anything you add should be redundant, so if one component fails the system still runs. Ideally you remove all of them, but even if there’s one hidden somewhere, it’s still better than “the whole thing is a single point of failure”.
Weird metric, but pretty sure UGGs or KitchenAid hold more power than Liechtenstein or Tuvalu, so it’s not unique to tech giants.