Why are people happy with or approving of AI on Apple products, when the same thing was (rightly) treated horribly when Microsoft did it?
Is Apple doing it better in some way? Both said it’ll be local only, but then Apple is doing some cloud processing now. Do people really just trust Apple more???
Apple lay out some details here: https://security.apple.com/blog/private-cloud-compute/
They control the cloud hardware. Information used for cloud requests is deleted as soon as the request is done. Everything is end-to-end encrypted. Server builds are publicly available to inspect. And all of this is only used when on-device processing can't handle a request.
If somebody wanted to actually create a private AI system, this is probably how they’d do it.
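Roughly, the on-device-first part boils down to something like this. A minimal sketch with made-up types and names, since Apple doesn't expose PCC routing as a public API:

    import Foundation

    // Hypothetical sketch only -- illustrates "stay on-device unless the
    // local model can't handle it", not Apple's actual implementation.
    enum InferenceTarget {
        case onDevice              // local model, nothing leaves the device
        case privateCloudCompute   // attested servers, data discarded after the reply
    }

    struct AIRequest {
        let prompt: String
        let estimatedComplexity: Int   // stand-in for model/context requirements
    }

    func route(_ request: AIRequest, onDeviceCapacity: Int) -> InferenceTarget {
        // Prefer local processing whenever the on-device model is sufficient.
        if request.estimatedComplexity <= onDeviceCapacity {
            return .onDevice
        }
        // Otherwise send only what this request needs, end-to-end encrypted
        // to publicly inspectable server builds, deleted once the request is done.
        return .privateCloudCompute
    }

    print(route(AIRequest(prompt: "Summarize my week", estimatedComplexity: 8),
                onDeviceCapacity: 5))   // privateCloudCompute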
You can disagree with this or claim that somehow they are actually accessing and selling people's data, but Apple are going out of their way to show (and cryptographically prove) that they're not. It would also be incredibly fraudulent and illegal for them to make these claims and not follow through.
The biggest thing in the last couple of weeks is Microsoft showing off the half-baked Recall "feature" that lets your computer take screenshots of basically everything you do. The idea that you could search for something you did in the past using natural language is interesting, but the implementation was terrible. So that's a big strike against MS, so much so that they're now recalling the beta release of it. MS doesn't have a good track record with things that are supposed to be local somehow ending up not local; I believe there was a big issue on Xbox where local screenshots were still being monitored by the cloud somewhere. MS also loves shoving ads down your throat and turning back on features you have explicitly turned off. There's no trust.
Apple certainly has their own issues, but as others have said, they have at least outwardly been a privacy-first company, at least in their marketing materials. They were one of the first to build "secure enclaves" into phones and PCs so biometric data couldn't get off your device, for example. There's a bit of a history, earned or otherwise, of Apple not doing bad things with your data, so when they say their AI junk is private it's easier to swallow.
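For what it's worth, here's a minimal sketch of how an app asks for a key that lives only in the Secure Enclave, using Apple's Security framework (the access-control flags and tag are just illustrative, and it only works on hardware that actually has an enclave):

    import Foundation
    import Security

    // Require biometric auth to use the key; the private key is generated
    // inside the Secure Enclave and its material never leaves the chip.
    let access = SecAccessControlCreateWithFlags(
        kCFAllocatorDefault,
        kSecAttrAccessibleWhenUnlockedThisDeviceOnly,
        [.privateKeyUsage, .biometryCurrentSet],
        nil
    )!

    let attributes: [String: Any] = [
        kSecAttrKeyType as String: kSecAttrKeyTypeECSECPrimeRandom,
        kSecAttrKeySizeInBits as String: 256,
        kSecAttrTokenID as String: kSecAttrTokenIDSecureEnclave,
        kSecPrivateKeyAttrs as String: [
            kSecAttrIsPermanent as String: true,
            kSecAttrApplicationTag as String: Data("com.example.enclave-key".utf8),
            kSecAttrAccessControl as String: access,
        ],
    ]

    var error: Unmanaged<CFError>?
    if let privateKey = SecKeyCreateRandomKey(attributes as CFDictionary, &error) {
        // The app only ever gets a handle; signing/decryption happens in the enclave.
        print("Got an enclave-backed key:", privateKey)
    } else {
        print("Key generation failed:", error!.takeRetainedValue())
    }

The point is that the key can be used but never exported, which is the same posture Apple is now claiming for the cloud side with PCC.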
That said, I still have yet to find a use for any of this AI junk across all platforms. I wish it all just stayed in the realm of intelligently making your photos a little sharper or whatever and not hallucinating things out of whole cloth. I’m actually happy my iPhone isn’t new enough to take advantage of this new stuff.
Well. One company stared down the FBI when they wanted assistance unlocking a terrorist's phone, because it would have weakened security for everyone else.
The other keeps adding "features" to my operating system that are designed to siphon data from me, builds (at the very least) misleading dialogs for those "features" to trick me into enabling them (often not even allowing "no" as a choice, usually it's just "yes" or "not now"), and even when meticulously disabled they have a tendency to magically re-enable themselves after updates.
Who would you trust more?
I actually like Apple's approach to AI more than any of the others. I don't care for Microsoft's implementation at all, and I try to avoid Microsoft in general on top of that, so I don't have much to complain about.
But I do think Apple’s approach to AI from a privacy and implementation perspective is what I would prefer from a software vendor.