Howie Klein

Apple Crosses The Line-- Say Goodbye To Online Privacy



Tim Apple, as Señor T called him, has taken a very strong position against Big Brotherism, even when pressured by the FBI and authoritarian governments like China's. But his defense of privacy absolutism may be crumbling as Apple sticks a toe onto a very slippery slope. The company is going to deploy a system it developed that can flag images it decides depict child exploitation when they are uploaded to iCloud storage, and report them to the authorities. Who could possibly oppose that? Besides, Google and Facebook-- two companies with much worse reputations than Apple-- already use similar approaches, often badly, engendering deep hatred toward the companies, especially Facebook.


Apple claims its system is an improvement on everything else because it's all automated and is designed to expose as little as possible of what's on a person's phone while flagging only child porn. There is a strong feeling among people interested in privacy that Apple is full of shit. How long before governments persuade Apple to flag, say, pictures of "violence" or "terrorism"?


The Electronic Frontier Foundation: "All it would take to widen the narrow backdoor that Apple is building is an expansion of the machine learning parameters to look for additional types of content, or a tweak of the configuration flags to scan, not just children’s, but anyone’s accounts. That’s not a slippery slope; that’s a fully built system just waiting for external pressure to make the slightest change. Take the example of India, where recently passed rules include dangerous requirements for platforms to identify the origins of messages and pre-screen content. New laws in Ethiopia requiring content takedowns of 'misinformation' in 24 hours may apply to messaging services. And many other countries-- often those with authoritarian governments-- have passed similar laws. Apple’s changes would enable such screening, takedown, and reporting in its end-to-end messaging. The abuse cases are easy to imagine: governments that outlaw homosexuality might require the classifier to be trained to restrict apparent LGBTQ+ content, or an authoritarian regime might demand the classifier be able to spot popular satirical images or protest flyers." How would Tim Cook feel about that?

Ed Snowden warned on Twitter: "Make no mistake: if they can scan for kiddie porn today, they can scan for anything tomorrow." On Wednesday, NY Times technology/Silicon Valley reporter Jack Nicas wrote that "Apple’s [good intentions] approach to scanning people’s private photos could give law enforcement authorities and governments a new way to surveil citizens and persecute dissidents. Once one chip in privacy armor is identified, anyone can attack it... The technology that protects the ordinary person’s privacy can also hamstring criminal investigations. But the alternative, according to privacy groups and many security experts, would be worse."


In late 2019, after reports in the New York Times about the proliferation of child sexual abuse images online, members of Congress told Apple that it had better do more to help law enforcement officials or they would force the company to do so. Eighteen months later, Apple announced that it had figured out a way to tackle the problem on iPhones, while, in its view, protecting the privacy of its users.
...To prevent false positives and hide the images of abuse, Apple took a complex approach. Its software reduces each photo to a unique set of numbers-- a sort of image fingerprint called a hash-- and then runs them against hashes of known images of child abuse provided by groups like the National Center for Missing and Exploited Children.
If 30 or more of a user’s photos appear to match the abuse images, an Apple employee reviews the matches. If any of the photos show child sexual abuse, Apple sends them to the authorities and locks the user’s account. Apple said it would turn on the feature in the United States over the next several months.
...Other tech companies, like Facebook, Google and Microsoft, also scan users’ photos to look for child sexual abuse, but they do so only on images that are on the companies’ computer servers. In Apple’s case, much of the scanning happens directly on people’s iPhones. (Apple said it would scan photos that users had chosen to upload to its iCloud storage service, but scanning still happens on the phone.)
To many technologists, Apple has opened a Pandora’s box. The tool would be the first technology built into a phone’s operating system that can look at a person’s private data and report it to law enforcement authorities. Privacy groups and security experts are worried that governments looking for criminals, opponents or other targets could find plenty of ways to use such a system.
“As we now understand it, I’m not so worried about Apple’s specific implementation being abused,” said Alex Stamos, a Stanford University researcher who previously led Facebook’s cybersecurity efforts. “The problem is, they’ve now opened the door to a class of surveillance that was never open before.”
If governments had previously asked Apple to analyze people’s photos, the company could have responded that it couldn’t. Now that it has built a system that can, Apple must argue that it won’t.
...In the United States, Apple has been able to avoid more intense fights with the government because it still turns over plenty of data to law enforcement officials. From January 2018 through June 2020, the most recent data available, Apple turned over the contents of 340 customers’ iCloud accounts a month to American authorities with warrants. Apple still hasn’t fully encrypted iCloud, allowing it to have access to its customers’ data, and the company scrapped plans to add more encryption when the F.B.I. balked, according to Reuters.
Apple’s fights with the F.B.I. over smartphone encryption have also been defused because other companies have regularly been able to hack into iPhones for the police. It is still expensive and time-consuming to get into a locked iPhone, but that has created an effective middle ground where the police can gain access to devices they need for investigations but it is more difficult for them to abuse the technology.
That stalemate on encryption has also enabled Apple to retain its brand as a champion of privacy, because it is not actively giving the police a way in. But that compounds the potential harm of its new tools, security experts said.
For years, technologists have argued that giving the police a way into phones would fundamentally undermine the devices’ security, but now governments can point to Apple’s endorsement of its photo-scanning tools as a method that helps the police while preserving privacy.
Apple has “taken all their platinum privacy branding and they’ve applied it to this idea,” Mr. Stamos said. “This Apple solution screws up the entire debate and sets us back years.”
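For readers who want a concrete picture of the hash-and-threshold scheme the Times describes above, here is a minimal, hypothetical sketch in Python. It is not Apple's actual implementation-- the real system uses a perceptual "NeuralHash" and cryptographic matching techniques, not an ordinary digest-- and the function names, the 30-match threshold constant, and the use of SHA-256 as a stand-in fingerprint are illustrative assumptions. It only shows the basic idea: fingerprint each photo, compare against a list of known-image hashes, and surface the account for human review only if enough photos match.

```python
import hashlib
from pathlib import Path

# Stand-in fingerprint. Apple's real system reportedly uses a perceptual
# "NeuralHash" so that visually similar images still match; a cryptographic
# digest like SHA-256 is used here only to keep the sketch self-contained.
def image_fingerprint(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

# Per the reporting quoted above, review is triggered at 30 or more matches.
MATCH_THRESHOLD = 30

def count_matches(photo_paths: list[Path], known_hashes: set[str]) -> int:
    """Count how many of a user's photos match the known-image hash list."""
    return sum(1 for p in photo_paths if image_fingerprint(p) in known_hashes)

def should_flag_for_review(photo_paths: list[Path], known_hashes: set[str]) -> bool:
    # Only accounts crossing the threshold would be surfaced to a human reviewer.
    return count_matches(photo_paths, known_hashes) >= MATCH_THRESHOLD
```

The critics' point, of course, is that nothing in a design like this cares what the known-hash list contains: swap in hashes of protest flyers or satirical images and the same machinery flags those instead.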


A Washington Post OpEd by tech academics Jonathan Mayer and Anunay Kulshrestha this week focused on the "civil liberties firestorm" the Apple announcement has provoked. They concluded Apple's technology is dangerous and could be "repurposed for surveillance and censorship... A foreign government could, for example, compel a service to out people sharing disfavored political speech. That’s no hypothetical: WeChat, the popular Chinese messaging app, already uses content matching to identify dissident material. India enacted rules this year that could require pre-screening content critical of government policy. Russia recently fined Google, Facebook and Twitter for not removing pro-democracy protest materials. We spotted other shortcomings. The content-matching process could have false positives, and malicious users could game the system to subject innocent users to scrutiny."


China is Apple’s second-largest market, with probably hundreds of millions of devices. What stops the Chinese government from demanding Apple scan those devices for pro-democracy materials? Absolutely nothing, except Apple’s solemn promise. This is the same Apple that blocked Chinese citizens from apps that allow access to censored material, that acceded to China’s demand to store user data in state-owned data centers and whose chief executive infamously declared, “We follow the law wherever we do business.”
Apple’s muted response about possible misuse is especially puzzling because it’s a high-profile flip-flop. After the 2015 terrorist attack in San Bernardino, California, the Justice Department tried to compel Apple to facilitate access to a perpetrator’s encrypted iPhone. Apple refused, swearing in court filings that if it were to build such a capability once, all bets were off about how that capability might be used in the future.
“It’s something we believe is too dangerous to do,” Apple explained. “The only way to guarantee that such a powerful tool isn’t abused … is to never create it.” That worry is just as applicable to Apple’s new system.
...Apple is making a bet that it can limit its system to certain content in certain countries, despite immense government pressures. We hope it succeeds in both protecting children and affirming incentives for broader adoption of encryption. But make no mistake that Apple is gambling with security, privacy and free speech worldwide.


UPDATE: Bill Maher and his audience last night didn't seem too impressed with Apple's decision. I don't recommend him that often, but today I am suggesting you watch the clip:



1 Comment


Louis
Aug 22, 2021

As with everything Apple, it all comes down to the bottom line. They could scan for these pics after they’ve been uploaded to iCloud, but why invest in the infrastructure needed for that when you can have your customers do it for you, on their phones, for free?
