Russian Government Forces Apple to Remove Dozens of VPN Apps From the App Store bleepingcomputer.com

Sergiu Gatlan, Bleeping Computer:

Apple has removed 25 virtual private network (VPN) apps from the Russian App Store at the request of Roskomnadzor, Russia’s telecommunications watchdog.

Roskomnadzor confirmed to Interfax that the order targets multiple apps (including NordVPN, Proton VPN, Red Shield VPN, Planet VPN, Hidemy.Name VPN, Le VPN, and PIA VPN) used to gain access to content tagged as illegal in Russia.

This is part of an ongoing purge in Russia of the availability of VPN services.

Apple is, of course, required to comply with the laws of the regions in which it operates — something which it is happy to point out any time it is questioned — and it is barely maintaining a presence within Russia today. Its Russian-language website only provides documentation, and it has officially curtailed its other operations. But there are people in the country who have owned iPhones for years and those phones remain dependent on the App Store.

I understand why Apple would be outspoken about its objections to, say, new E.U. laws but not those from authoritarian states, because the objectives of the governments are entirely different. At least regulators in the E.U. might listen. Yet it sure does not feel right that it is dutifully and quietly complying with Russian policies despite withdrawing its presence otherwise.

Massive Records Breach Affects AT&T and Carriers Which Use Its Network techcrunch.com

Zack Whittaker, TechCrunch:

U.S. phone giant AT&T confirmed Friday it will begin notifying millions of consumers about a fresh data breach that allowed cybercriminals to steal the phone records of “nearly all” of its customers, a company spokesperson told TechCrunch.

In a statement, AT&T said that the stolen data contains phone numbers of both cellular and landline customers, as well as AT&T records of calls and text messages — such as who contacted who by phone or text — during a six-month period between May 1, 2022 and October 31, 2022.

AT&T said some of the stolen data includes more recent records from January 2, 2023 for a smaller but unspecified number of customers.

AT&T discovered this breach in April but waited until today to announce it. But if you believed this wholesale theft of metadata would shake confidence in the value of AT&T as a business, think again: the market is not punishing the company.

From AT&T’s SEC filing:

On May 9, 2024, and again on June 5, 2024, the U.S. Department of Justice determined that, under Item 1.05(c) of Form 8-K, a delay in providing public disclosure was warranted. AT&T is now timely filing this report. AT&T is working with law enforcement in its efforts to arrest those involved in the incident. Based on information available to AT&T, it understands that at least one person has been apprehended. As of the date of this filing, AT&T does not believe that the data is publicly available.

Joseph Cox, 404 Media:

John Binns, a U.S. citizen who has been incarcerated in Turkey, is linked to the massive data breach of metadata belonging to nearly all of AT&T’s customers that the telecommunications giant announced on Friday, three sources independently told 404 Media.

The breach, in which hackers stole call and text records from a third-party cloud service provider used by AT&T, is one of the most significant in recent history, with the data showing what numbers AT&T customers interacted with across a several month period in 2022. 404 Media has also seen a subset of the data, giving greater insight into the highly sensitive nature of the stolen information.

Binns also took responsibility for the 2021 T-Mobile breach, for which he was charged in 2022 and recently arrested. It seems likely to me Binns is the Turkish-residing member alluded to by Google’s Mandiant in its report on UNC5537, the threat actor associated with breaching possibly 165 customers of the Snowflake platform.

AT&T and other giant corporations will continue to retain massive amounts of data with poor security because it is valuable for them to do so and they are barely punished when it all goes wrong. T-Mobile paid a $350 million penalty in 2022 while continuing to say it did nothing wrong. The same year, it made $61.3 billion. In 2022, U.S. median household income was $74,580. Proportionally, T-Mobile got a $425 ticket.
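For anyone who wants to check the back-of-envelope comparison, it works out from the figures cited in this paragraph:

```python
# Figures cited above: a $350 million penalty, $61.3 billion in
# 2022 revenue, and a 2022 U.S. median household income of $74,580.
penalty = 350_000_000
revenue = 61_300_000_000
median_income = 74_580

# The penalty as a share of revenue, applied to a median household.
ratio = penalty / revenue
ticket = int(median_income * ratio)
print(ticket)  # 425
```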

Update: The 404 Media post was not paywalled at the time of posting, but it was later restricted.

‘Slop’ and ‘Content’

Ryan Broderick:

You’ve probably seen the phrase AI slop already, the term most people have settled on for the confusing and oftentimes disturbing pictures of Jesus and flight attendants and veterans that are filling up Facebook right now. But the current universe of slop is much more vast than that. There’s Google Slop, YouTube slop, TikTok slop, Marvel slop, Taylor Swift slop, Netflix slop. One could argue that slop has become the defining “genre” of the 2020s. But even though we’ve all come around to this idea, I haven’t seen anyone actually define it. So today I’m going to try.

This piece does actually settle somewhere very good in its attempt to address the vibe of the entertainment and media world in which we swim, but it is a slog to get there. This is the first paragraph, and pulling it apart will take a minute. For a start, Broderick says the definition of “slop” has evaded him. That is plausible, but it requires him to have avoided Googling “ai slop definition”, at which point he would surely have seen Simon Willison’s post defining and popularizing the term:

Not all promotional content is spam, and not all AI-generated content is slop. But if it’s mindlessly generated and thrust upon someone who didn’t ask for it, slop is the perfect term for it.

This is a good definition, though Willison intentionally restricts it to describe A.I.-generated products. However, it seems like people are broadening the word’s use to cover things not made using A.I., and it appears Broderick wishes to reflect that.

Next paragraph:

Content slop has three important characteristics. The first being that, to the user, the viewer, the customer, it feels worthless. This might be because it was clearly generated in bulk by a machine or because of how much of that particular content is being created. The next important feature of slop is that feels forced upon us, whether by a corporation or an algorithm. It’s in the name. We’re the little piggies and it’s the gruel in the trough. But the last feature is the most crucial. It not only feels worthless and ubiquitous, it also feels optimized to be so. […]

I have trimmed a few examples from this long paragraph — in part because I do not want emails about Taylor Swift. I will come back to this definition, but I want to touch on something in the next paragraph:

Speaking of Ryan Reynolds, the film essayist Patrick Willems has been attacking this idea from a different direction in a string of videos over the last year. In one essay titled, “When Movie Stars Become Brands,” Willems argues that in the mid-2000s, after a string of bombs, Dwayne Johnson and Ryan Reynolds adapted a strategy lifted from George Clooney, where an actor builds brands and side businesses to fund creatively riskier movie projects. Except Reynolds and Johnson never made the creatively riskier movie projects and, instead, locked themselves into streaming conglomerates and allowed their brands to eat their movies. The zenith of this being their 2021 Netflix movie Red Notice, which literally opens with competing scenes advertising their respective liquor brands. A movie that, according to Netflix, is their most popular movie ever.

This is a notable phenomenon, but I think Broderick would do well to cite another Willems video essay, too. This one, which seems just as relevant, is all about the word “content”. Willems’ obvious disdain for the word — one which I share — is rooted in its everythingness and, therefore, nothingness. In it, he points to a specific distinction:

[…] In a video on the PBS “Ideas” channel, Mike Rugnetta addressed this topic, coming at it from a similar place as me. And he put forth the idea that the “content” label also has to do with how we experience something.

He separates it into “consumption” versus “mere consumption”. In other words, yes, we technically are consuming everything, but there’s the stuff that we fully focus on and engage with, and then the stuff we look at more passively, like tweets we scroll past or a gaming stream we half-watch in the background.

So the idea Mike proposes is that maybe the stuff that we merely consume is content. And if we consume it and actually focus on it, then it’s something else.

What Broderick is getting at — and so too, I think, are the hordes of people posting about “slop” on X to which he links in the first paragraph — is a combination of this phenomenon and the marketing-driven vehicles for Johnson and Reynolds. Willems correctly points out that actors and other public figures have long been spokespeople for products, including their own. Also, there have always been movies and shows which lack any artistic value. Those things have not changed.

What has changed, however, is the sheer volume of media released now. Nearly six hundred English-language scripted shows were released in 2022 alone, though that declined in 2023 to below five hundred in part because of striking writers and actors. According to IMDb data, 4,100 movies were released in 1993, 6,125 in 2003, 15,451 in 2013, and 19,626 in 2023.

As I have previously argued, volume is not inherently bad. The self-serve approach of streaming services means shows do not need to fit into an available airtime slot on a particular broadcast channel. It means niche programming is just as available as blockbusters. The only scheduling which needs to be done is on the viewer’s side, fitting a new show or movie in between combing through the 500 hours of YouTube videos uploaded every minute, some of which have the production quality of mid-grade television or movies, not to mention a world of streaming music.

As Willems says, all of this media gets flattened in description — “content” — and in delivery. If you want art, you can find it, but if you just want something for, as Rugnetta says, “mere consumption”, you can find that — or, more likely, it will be served to you. This is true of all forms of media.

There are two things which help older media’s reputation for quality, with the benefit of hindsight: a bunch of bad stuff has been forgotten, and there was less of it to begin with. It was a lot harder to make a movie when it had to be shot to tape or film, and more difficult to make it look great. A movie with a jet-setting hero was escapist in the 1960s, but lower-cost airfare means those locations no longer seem so exotic. If you wanted to give it a professional sheen, you had to rent expensive lenses, build detailed sets, shoot at specific times of day, and light it carefully. If you wanted a convincing large-scale catastrophe on-screen, it had to be built in real life. These are things which can now be done in post-production, albeit not easily or necessarily cheaply. I am not a hater of digital effects. But it is worth mentioning the ability of effects artists to turn a crappy shot into something cinematic, and to craft apocalyptic scenery without constructing a single physical element.

We are experiencing the separating of wheat and chaff in real time, and with far more of each than ever before. Unfortunately, soulless and artless vehicles for big stars sell well. Explosions sell. Familiar sells.

“Content” sells.

Here is where Broderick lands:

And six years later, it’s not just music that feels forgettable and disposable. Most popular forms of entertainment and even basic information have degraded into slop simply meant to fill our various feeders. It doesn’t matter that Google’s AI is telling you to put glue on pizza. They needed more data for their language model, so they ingested every Reddit comment ever. This makes sense because from their perspective what your search results are doesn’t matter. All that matters is that you’re searching and getting a response. And now everything has meet these two contradictory requirements. It must fill the void and also be the most popular thing ever. It must reach the scale of MrBeast or it can’t exist. Ironically enough, though, when something does reach that scale now, it’s so watered down and forgettable it doesn’t actually feel like it exists.

One may quibble with the precise wording that “what your search results are doesn’t matter” to Google. The company appears to have lost market share as trust in search has declined, though there is conflicting data and the results may not be due to user preference. But the gist of this is, I think, correct.

People seem to understand they are being treated as mere consumers in increasingly financialized expressive media. I have heard normal people in my life — people without MBAs, and who do not work in marketing, and who are not influencers — throw around words like “monetize” and “engagement” in a media context. It is downright weird.

The word “slop” seems like a good catch-all term finding purchase in the online vocabulary, but I think the popularization of “content” — in the way it is most commonly used — foreshadowed this shift. Describing artistic works as though they are filler for a container is a level of disrespect not even a harsh review could achieve. Not all “content” is “slop”, but all “slop” is “content”. One thing “slop” has going for it is its inherent ugliness. People excitedly talk about all the “content” they create. Nobody will be proud of their “slop”.

Google Photos Finally Lets Users Migrate Directly to iCloud Photos dtinit.org

Chris Riley, of the Data Transfer Initiative:

Beginning today, Apple and Google are expanding on their direct data transfer offerings to allow users of Google Photos to transfer their collections directly to iCloud Photos. This complements and completes the existing transfers that were first made possible from iCloud Photos to Google Photos and fulfills a core Data Transfer Initiative (DTI) principle of reciprocity. The offering from Apple and Google will be rolling out over the next week and is the newest tool powered by the open source Data Transfer Project (DTP) technology stack, joining existing direct portability tools available to billions of people today offered by DTI and its founding partners Apple, Google, and Meta.

The Data Transfer Initiative’s story originates with Google’s Data Liberation Front, spurred by E.U. legislation. While Google has long permitted users’ retrieval of data it holds, it has not been the most enthusiastic supporter of direct transfers away from its services. This distinction becomes increasingly important as users store more data with cloud-based services instead of keeping local copies — they may not have space to download all their pictures if they trust the cloud provider’s hosting.

Since 2021, iCloud users have been able to migrate images directly to Google Photos. At long last, the same is possible in reverse.

Amazon Did Not Reach Its Goal of Fully Clean Electricity Seven Years Early nytimes.com

Ivan Penn and Eli Tan, New York Times:

Amazon announced on Wednesday that effectively all of the electricity its operations used last year came from sources that did not produce greenhouse gas emissions. But some experts have criticized the method the company uses to make that determination as being too lenient.

[…]

As a result, to achieve 100 percent clean energy — at least on paper — companies often buy what are known as renewable energy certificates, or RECs, from a solar or wind farm owner. By buying enough credits to match or exceed the energy its operations use, a company could make the claim that its business is powered entirely by clean energy.

“That’s what we do, buy RECs for projects that are not yet operational,” Ms. Hurst [Amazon’s vice president of worldwide sustainability] said.

Regardless of how legitimate these certificates are — and there are plenty of reasonable questions to be asked — it is dishonest for Amazon or anyone else to apply them to power consumed in a year in which it was not generated. This is greenwashing.

Samsung Introduces the Galaxy Watch Ultra and See If You Can Guess Its Inspiration theverge.com

Victoria Song, the Verge:

I’m not exaggerating or being a hater, either. It’s in the name! Apple Watch Ultra, Galaxy Watch Ultra. Everything about this watch is reminiscent of Apple’s. Samsung says this is its most durable watch yet, with 10ATM of water resistance, an IP68 rating, a titanium case, and a sapphire crystal lens. There’s a new orange Quick Button that launches shortcuts to the workout app, flashlight, water lock, and a few other options. (There is a lot of orange styling.) It’s got a new lug system for attaching straps that looks an awful lot like Apple’s, too.

It is extremely funny to me how shameless Samsung is in duplicating the specific differences of the Apple Watch Ultra relative to a standard Apple Watch. You can imagine Samsung’s product team going through the list: Titanium? Check. More durable? Check. Chunkier? Check. Assignable button? Check. Extended typeface? Check. Orange accents? In for a dime, in for a dollar.

Apple Blog TUAW Returns as A.I. Slop engadget.com

Christina Warren:

So someone bought the old TUAW domain name. TUAW was a site that I worked at in college, that has been dead for a decade and that I stopped working for 15 years ago. But now my name is bylined on 1500+ articles alongside an AI-generated photo. Revive the old brand. Fine. But leave my name off of it! H/t @gruber

Karissa Bell, Engadget:

Originally started in 2004, TUAW was shut down by AOL in 2015. Much of the site’s original archive can still be found on Engadget. Yahoo, which owns Engadget, sold the TUAW domain in 2024 to an entity called “Web Orange Limited” in 2024, according to a statement on TUAW’s website.

The sale, notably, did not include the TUAW archive. But, it seems that Web Orange Limited found a convenient (if legally dubious) way around that. “With a commitment to revitalize its legacy, the new team at Web Orange Limited meticulously rewrote the content from archived versions available on archive.org, ensuring the preservation of TUAW’s rich history while updating it to meet modern standards and relevance,” the site’s about page states.

Ernie Smith:

OK found the connection. The people who own The Hack Post bought the TUAW site. They use the same Google ad tag.

[…]

Notably, same dude owns iLounge.

The same advertising identifier has been used with a handful of other previously defunct publications like Metapress and Tapscape, as well as a vanity URL generator for Google Plus. Not a surprising use for domains with plenty of history and incoming links, but truly a scumbag result. Shameful.

The Ticketmaster Breach Is a Cautionary Nightmare globalnews.ca

Saba Aziz, Global News:

Ticketmaster has finally notified its users who may have been impacted by a data breach — one month after Global News first reported that the personal information of Canadian customers was likely stolen.

In an email to its customers on Monday, Ticketmaster said that their personal information may have been obtained by an unauthorized third party from a cloud database that was hosted by a separate third-party data services provider.

Ticketmaster says this might include “encrypted credit card information” from “some customers”.

Jason Koebler, 404 Media:

Monday, the hacking group that breached Ticketmaster released new data that they said can be used to create more than 38,000 concert tickets nationwide, including to sought after shows like Olivia Rodrigo, Bruce Springsteen, Hamilton, Tyler Childers, the Jonas Brothers, and Los Angeles Dodgers games. The data would allow someone to create and print a ticket that was already sold to someone else, creating a situation where Ticketmaster and venues might have to sort out which tickets are from legitimate buyers and which are the result of the hack for shows that are taking place as early as today.

These are arguably problems created by the scale and scope of Ticketmaster’s operations. This series of data releases affects so many people and events because parent company Live Nation is a chokepoint for entertainment thanks to a merger approved by U.S. authorities. If this industry were more distributed, it would certainly present more opportunities for individual breaches, but the effect of each would be far smaller.

Dynamic Type on the Web furbo.org

Craig Hockenberry:

This site now supports Dynamic Type on iOS and iPadOS. If you go to System Settings on your iPhone or iPad, and change the setting for Display & Brightness > Text Size, you’ll see the change reflected on this website.

With the important caveat this only applies to iOS-derived devices — not even Macs — it seems trivial enough to implement in a way that preserves the Dynamic Type font size but permits flexibility with other properties. Apple added this in Safari 7.0 along with a wide variety of other properties — you can set headings to match system sizes, too — but I cannot find many places where it is used even today. (The WebKit blog is one.) Is that a result of poor communication, or perhaps poor focus on accessibility? Or is it just too limited because it is only used on one set of platforms?
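For reference, the feature is exposed through Apple’s system font keywords in CSS. A minimal sketch of how a site might adopt it, with an illustrative fallback for browsers that do not recognize the keyword:

```css
/* Dynamic Type body style on iOS/iPadOS Safari; browsers that do
   not recognize the keyword keep the preceding declaration. The
   fallback values here are illustrative, not prescriptive. */
body {
    font: 1rem/1.5 sans-serif;  /* fallback */
    font: -apple-system-body;   /* Dynamic Type text size */
}

/* Headings can map to the matching system text styles, too. */
h1 {
    font: -apple-system-headline;
}
```

Because `font` is a shorthand, the Dynamic Type declaration also resets the font family, so a site that wants a custom typeface would re-declare `font-family` afterwards while keeping the system-derived size.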

Apple Withheld Epic Games’ App Store in the E.U. appleinsider.com

Malcolm Owen, AppleInsider:

In earlier reports, it was confirmed by Apple that Epic was mostly in compliance with EU-specific app review guidelines. The objectionable parts were a download button and related copy, which went against rules that forbid developers from making apps that can confuse consumers that elements in the apps were actually Apple-made items.

Epic had defended itself, insisting it used the same naming conventions employed across different platforms. Epic also said it followed standard conventions for buttons in iOS apps.

Apple has since told AppleInsider on Friday that it has approved Epic’s marketplace app. It has also asked Epic to fix the buttons in a future submission of the app for review.

As far as I know, there are no screenshots of the version of Epic Games’ store submitted to Apple. Maybe it is designed in a way that duplicates Apple’s App Store to the point where it is confusing, as Apple argues. Maybe it is intentionally designed in such a way that it creates headlines; Epic Games loves being in this position.

Regardless, it seems like a bad idea for Apple to be using its moderate control over how alternative app stores are distributed to litigate intellectual property disputes. Perhaps when trust in the company’s processes is healthier, it would be less objectionable. But right now? If Apple wants to give competition investigators more material, it appears to be succeeding.

Also, it is interesting to see the publications to which Apple chooses to provide quotes. TechCrunch has been a longtime favourite for the company but, increasingly, Apple is giving exclusive statements to smaller blogs like 9to5Mac and AppleInsider. I do not know what to make of this but I am noting it for my own future reference.

Canadian Government Enacts Digital Services Tax cbc.ca

Peter Zimonjic, CBC News:

The federal government has enacted a controversial digital services tax that will bring in billions of dollars while threatening Canada’s trading relationships by taxing the revenue international firms earn in Canada.

This has always seemed to me like a fairer response to declining Canadian advertising revenue for media companies than the Online News Act’s link tax. It makes no sense to charge ad-supported platforms for the privilege of pointing users to specific URLs.

U.S. Ambassador to Canada David Cohen issued a media statement Thursday calling the tax “discriminatory.”

“[The United States Trade Representative] has noted its concern with Canada’s digital services tax and is assessing, and is open to using, all available tools that could result in meaningful progress toward addressing unilateral, discriminatory [digital services taxes],” Cohen said in the statement.

I would love to know if it is possible for any non-U.S. government to respond to any number of unique conditions created by massive technology companies without it disproportionately impacting U.S.-based firms. The U.S. spent decades encouraging a soft power empire in the tech industry with its lax competition laws, and it has been an immensely successful endeavour. There will likely be retaliation, which is a similar reflection of its power — the Canadian government can either allow advertising spending to continue to be eaten up by U.S. firms, or it can get hit with some tariff on something else. Like sleeping with an elephant.

OpenAI’s ChatGPT Mac App Stored Conversation History Outside the Sandbox theverge.com

Pedro José Pereira Vieito on Threads:

The OpenAI ChatGPT app on macOS is not sandboxed and stores all the conversations in **plain-text** in a non-protected location:

~/Library/Application\ Support/com.openai.chat/conversations-{uuid}/

So basically any other running app / process / malware can read all your ChatGPT conversations without any permission prompt.

I have not yet updated my copy of the desktop app, so I was able to see this for myself, and it clarified the “all your ChatGPT conversations” part of this post. I had only downloaded and signed into the ChatGPT app — I had not used it for any conversations yet — but my entire ChatGPT history was downloaded to this folder. Theoretically, this means any app on a user’s system had access to a copy of their conversations with ChatGPT since they began using it on any device.
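A sketch of why the location matters: nothing below requires special entitlements or triggers a permission prompt, because the folder sits in an unprotected spot in the user’s home directory. (The helper function and glob pattern are illustrative, based on the path quoted above.)

```python
from pathlib import Path


def find_conversation_files(app_support: Path) -> list[Path]:
    """Return files under com.openai.chat conversation folders.

    Any unsandboxed process running as the current user could do
    this scan; Application Support is not protected by a TCC
    permission dialog the way Contacts or Photos are.
    """
    base = app_support / "com.openai.chat"
    return sorted(base.glob("conversations-*/*"))


if __name__ == "__main__":
    support = Path.home() / "Library" / "Application Support"
    for f in find_conversation_files(support):
        print(f)
```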

Jay Peters, the Verge:

After The Verge contacted OpenAI about the issue, the company released an update that it says encrypts the chats. “We are aware of this issue and have shipped a new version of the application which encrypts these conversations,” OpenAI spokesperson Taya Christianson says in a statement to The Verge. “We’re committed to providing a helpful user experience while maintaining our high security standards as our technology evolves.”

Virtually all media coverage — including Peters’ article — has focused on the “plain text” aspect. Surely, though, the real privacy and security risk identified in the ChatGPT app — insofar as there is any risk — was in storing its data outside the app’s sandbox in an unprotected location. This decision made it possible for apps without any special access privileges to read its data without throwing up a permissions dialog.

There are obviously plenty of frustrations and problems with Apple’s sandboxing model in MacOS. Yet there are also many cases where sensitive data is stored in plain text. The difference is it is at least a little bit difficult for a different app to surreptitiously access those files.

The Design of A.I. Identities techcrunch.com

Speaking of A.I. and design, I enjoyed Devin Coldewey’s look, for TechCrunch, at the brand and icon design of various services:

The thing is, no one knows what AI looks like, or even what it is supposed to look like. It does everything but looks like nothing. Yet it needs to be represented in user interfaces so people know they’re interacting with a machine learning model and not just plain old searching, submitting, or whatever else.

Although approaches differ to branding this purportedly all-seeing, all-knowing, all-doing intelligence, they have coalesced around the idea that the avatar of AI should be non-threatening, abstract, but relatively simple and non-anthropomorphic. […]

Gradients and gentle shapes abound — with one notable exception.

See Also: Brand New has reviews of the identities for OpenAI’s DevDay and Perplexity — both paywalled.

Report from Google Researchers Finds Impersonation Is the Most Likely Way Generative A.I. Is Misused ft.com

Cristina Criddle, Financial Times:

Artificial intelligence-generated “deepfakes” that impersonate politicians and celebrities are far more prevalent than efforts to use AI to assist cyber attacks, according to the first research by Google’s DeepMind division into the most common malicious uses of the cutting-edge technology.

The study said the creation of realistic but fake images, video and audio of people was almost twice as common as the next highest misuse of generative AI tools: the falsifying of information using text-based tools, such as chatbots, to generate misinformation to post online.

Emanuel Maiberg, 404 Media:

Generative AI could “distort collective understanding of socio-political reality or scientific consensus,” and in many cases is already doing that, according to a new research paper from Google, one of the biggest companies in the world building, deploying, and promoting generative AI.

It is probably worth emphasizing this is a preprint published to arXiv, so I am not sure how much faith should be placed in its scholarly rigour. Nevertheless, when in-house researchers are pointing out the ways in which generative A.I. is misused, you might think that would be motivation for their employer to act with caution. But you, reader, are probably not an executive at Google.

This paper was submitted on 19 June. A few days later, reporters at the Information said Google was working on A.I. chat bots with real-person likenesses, according to Pranav Dixit of Engadget:

Google is reportedly building new AI-powered chatbots based on celebrities and YouTube influencers. The idea isn’t groundbreaking — startups like Character.ai and companies like Meta have already launched products like this — but neither is Google’s AI strategy so far.

Maybe nothing will come of this. Maybe it is outdated; Google’s executives may have looked at the research produced by its DeepMind division and concluded the risks are too great. But you would not get that impression from a spate of stories which suggest the company is sprinting into the future, powered by the trust of users it spent twenty years building and a whole lot of fossil fuels.

Figma Disables A.I. Design Tool After It Copied Apple’s Weather App 404media.co

Emanuel Maiberg, 404 Media:

The design tool Figma has disabled a newly launched AI-powered app design tool after a user showed that it was clearly copying Apple’s weather app. 

Figma disabled the feature, named Make Design, after CEO and cofounder of Not Boring Software Andy Allen tweeted images showing that asking it to make a “weather app” produced several variations of apps that looked almost identical to Apple’s default weather app.

Dylan Field, Figma’s CEO, blamed this result on rushing to launch it at the company’s Config conference last week, and using a set of third-party models. Still, it is amazing how fast a company will move when it could reasonably be accused of intellectual property infringement.

It is consistent to view this clear duplication of existing works through the same lens of morality as when A.I. tools duplicate articles and specific artists. I have not seen a good explanation for why any of these should be viewed differently from the others. There are compelling reasons for why it is okay to copy the works of others, just as there are similarly great arguments for why it is not.

The duplication of Apple’s weather app by Figma’s new gizmo is laughable, but nobody is going to lose their livelihood because a big corporation’s A.I. feature ripped off the work of a giant corporation. It is outrageous, though, to see the unique style of individual artists and the careful reporting of publications being ripped off at scale for financial gain.

‘King Lear Is Just English Words Put in Order’

With apologies to Mitchell and Webb.

In a word, my feelings about A.I. — and, in particular, generative A.I. — are complicated. Just search “artificial intelligence” for a reverse chronological back catalogue of where I have landed. It feels like an appropriate position to hold for a set of nascent technologies so sprawling, and which imply such radical change.

Or perhaps that, like so many other promising new technologies, will turn out to be illusory as well. Instead of altering the fundamental fabric of reality, maybe it is used to create better versions of features we have used for decades. This would not necessarily be a bad outcome. I have used this example before, but the evolution of object removal tools in photo editing software is illustrative. There is no longer a need to spend hours cloning part of an image over another area and gently massaging it to look seamless. The more advanced tools we have today allow an experienced photographer to make an image they are happy with in less time, and lower barriers for newer photographers.

A blurry boundary is crossed when an entire result is achieved through automation. There is a recent Drew Gooden video which, even though not everything resonated with me, I enjoyed.1 There is a part in the conclusion which I wanted to highlight because I found it so clarifying (emphasis mine):

[…] There’s so many tools along the way that help you streamline the process of getting from an idea to a finished product. But, at a certain point, if “the tool” is just doing everything for you, you are not an artist. You just described what you wanted to make, and asked a computer to make it for you.

You’re also not learning anything this way. Part of what makes art special is that it’s difficult to make, even with all the tools right in front of you. It takes practice, it takes skill, and every time you do it, you expand on that skill. […] Generative A.I. is only about the end product, but it won’t teach you anything about the process it would take to get there.

This gets at the question of whether A.I. is more often a product or a feature — the answer to which, I think, is both, just not in a way that is equally useful. Gooden shows an X thread in which Jamian Gerard told Luma to convert the “Abbey Road” cover to video. Even though the results are poor, I think it is impressive that a computer can do anything like this. It is a tech demo; a more practical application can be found in something like the smooth slow motion feature in the latest release of Final Cut Pro.

“Generative A.I. is only about the end product” is a great summary of the emphasis we put on satisfying conclusions instead of necessary rote procedure. I cook dinner almost every night. (I recognize this metaphor might not land with everyone due to time constraints, food availability, and physical limitations, but stick with me.) I feel lucky that I enjoy cooking, but there are certainly days when it is a struggle. It would seem more appealing to type a prompt and make a meal appear using the ingredients I have on hand, if that were possible.

But I think I would be worse off if I did. The times I have cooked while already exhausted have increased my capacity for what I can do under pressure, and lowered my self-imposed barriers. These meals have improved my ability to cook more elaborate dishes when I have more time and energy, just as those more complicated meals also make me a better cook.2

These dynamics show up in lots of other forms of functional creative expression. Plenty of writing is not particularly artistic, but the mental muscle exercised by trying to get ideas into legible words is also useful when you are trying to produce works with more personality. This is true for programming, and for visual design, and for coordinating an outfit — any number of things which are sometimes individually expressive, and other times utilitarian.

This boundary only exists in these expressive forms. Nobody, really, mourns the replacement of cheques with instant transfers. We do not get better at paying our bills no matter which form they take. But we do get better at all of the things above by practicing them even when we do not want to, and when we get little creative satisfaction from the result.

It is dismaying to see so many of A.I. product demos show how they can be used to circumvent this entire process. I do not know if that is how they will actually be used. There are plenty of accomplished artists using A.I. to augment their practice, like Sougwen Chen, Anna Ridler, and Rob Sheridan. Writers and programmers are using generative products every day as tools, but they must have some fundamental knowledge to make A.I. work in their favour.

Stock photography is still photography. Stock music is still music, even if nobody’s favourite song is “Inspiring Corporate Advertising Tech Intro Promo Business Infographics Presentation”. (No judgement if that is your jam, though.) A rushed pantry pasta is still nourishment. A jingle for an insurance commercial could be practice for a successful music career. A.I. should just be a tool — something to develop creativity, not to replace it.


  1. There are also some factual errors. At least one of the supposed Google Gemini answers he showed onscreen was faked, and Adobe’s standard stock license is less expensive than the $80 “Extended” license Gooden references. ↥︎

  2. I am wary of using an example like cooking because it implies a whole set of correlative arguments which are unkind and judgemental toward people who do not or cannot cook. I do not want to provide kindling for these positions. ↥︎

No, the European Commissioner Did Not Say the Delayed Launch of Apple Intelligence Is Anticompetitive spyglass.org

M.G. Siegler:

With all the talk about how the EU believes Apple is anticompetitive, it never occurred to me to read it more literally. By announcing the [sic] would not be shipping their ‘Apple Intelligence’ tools in the EU, Apple is choosing to not compete in AI in the region. That is anticompetitive. I guess?

Siegler is not the only person who seems to be confused by Margrethe Vestager’s recent comments, as transcribed by Ben Lovejoy of 9to5Mac:

I find that very interesting that they say we will now deploy AI where we’re not obliged to enable competition. I think that is that is the most sort of stunning open declaration that they know 100% that this is another way of disabling competition where they have a stronghold already.

Vestager is claiming Apple Intelligence must be anticompetitive because Apple is not launching it in the E.U., where it would fall under the governance of the DMA. It is, at best, a stretch to conclude that from Apple’s cautious behaviour. But I cannot see how one could interpret Vestager’s comments to mean she believes the delay of Apple Intelligence in the E.U. is itself anticompetitive.

When Google Burned a U.S.-Allied Counterterrorism Operation poppopret.org

Yesterday, in responding to a Google profile of DRAGONBRIDGE, a Chinese state-affiliated disinformation campaign, I wrote that I hoped Google would do the same if it were a U.S.-allied effort it had found instead — forgetting that Google had already done so, and in a far more complicated circumstance.

Michael Coppola:

In January 2021, Google’s Project Zero published a series of blog posts coined the In the Wild Series. Written in conjunction with Threat Analysis Group (TAG), this report detailed a set of zero-day vulnerabilities being actively exploited in the wild by a government actor.

[…]

What the Google teams omitted was that they had in fact exposed a nine-month-long counterterrorism operation being conducted by a U.S.-allied Western government, and through their actions, Project Zero and TAG had unilaterally destroyed the capabilities and shut down the operation.

This is not the only example cited by Coppola; there are many in this post.

When an exploit chain is discovered, the technical question is easy: Google did the right thing by finding and exposing these vulnerabilities, no matter how they were being used. But doing so is politically and ethically fraught if those vulnerabilities are being used by state actors.

Patrick Howell O’Neill, reporting for MIT Technology Review in March 2021:

It’s true that Project Zero does not formally attribute hacking to specific groups. But the Threat Analysis Group, which also worked on the project, does perform attribution. Google omitted many more details than just the name of the government behind the hacks, and through that information, the teams knew internally who the hacker and targets were. It is not clear whether Google gave advance notice to government officials that they would be publicizing and shutting down the method of attack.

As far as I know, the U.S. ally was never revealed nor were the specific targets. Google’s revelation could have had catastrophic consequences, as Coppola speculates. But it is also true that not revealing known exploits to software vendors can have severe outcomes, as we learned with WannaCry. The risk of exposing the use of vulnerabilities is variable; the risk of not reporting them is fixed and known: they will be found by or released to people who should never have access to them.

The Latest Attempt at a U.S. Privacy Law Is Dead wired.com

Dell Cameron, Wired:

United States lawmakers who’ve flirted for years with the idea of offering Americans a semblance of control over their own data yanked at the last moment the latest iteration of a “comprehensive” privacy package that’s been subject to continual editing and debate for the better part of a decade. The bill, known as the American Privacy Rights Act (APRA), was scheduled for markup Thursday morning by the House Energy & Commerce Committee (E&C), which holds jurisdiction over matters of commercial surveillance.

Americans, if you do not like how Democrats diluted this bill to appease obstinate Republicans who still killed its chances, you should let your representative know.

The demise of this bill sucks because strong privacy rules in the U.S. would have a knock-on effect worldwide. It would mean the expectations of data collection, retention, and use would fundamentally shift. This bill was imperfect even in its original guise, but it was a meaningful positive step forward. My own government should learn from it.

Beleaguered Paramount Wipes Historic MTV News and Comedy Central Archives variety.com

Alex Young, Consequence:

MTV News has pulled its digital archive, making thousands of news stories, profiles, interviews, and other editorial features dating back to 1996 no longer accessible on the web.

Jed Rosenzweig, LateNighter:

ComedyCentral.com had been home to clips from every episode of The Daily Show since 1999, and the entire run of The Colbert Report, but as of Wednesday morning, those clips (and most everything else on the site) are gone.

Michael Alex, who used to run MTV News’ web team, in a guest piece for Variety:

History needs stewards, not owners. Whoever legally owns the archive does not legally own the history, even if they own the creative work of thousands of writers, editors, producers and more. This archive — of MTV News, where you heard it first — needs to be available to the public.

I will not pretend to understand how big of a financial hole Paramount is in, but I fully understand the loss of this archive. Most of the video clips are not available anywhere else — at least, not publicly and not legally. Much of the text on MTV News has been saved by the Internet Archive going back to 1996, but it also has huge gaps.

I do not want to make this sound like the Library of Alexandria is burning down, but it is an important collection of work. Now, without any notice, it is all gone.

Apple Publishes Paper Explaining Its Approach to Repair and Device Longevity apple.com

Apple:

Diagnostics is part of Apple’s ongoing effort to extend the lifespan of Apple products. While Apple is committed to providing safe and affordable repair options, designing and building long-lasting products remains the top priority. The best type of repair for customers and the planet is one that is never needed. Today, Apple published a whitepaper explaining the company’s principles for designing for longevity — a careful balance between product durability and repairability.

The paper is worth a read to understand what role Apple sees repair playing in the lifecycle of a device, and why it is so keen on parts pairing. For example, it says the charging port is part of a more complex module, and separating it would actually create greater carbon emissions if you account for both the total emissions from manufacturing and the likelihood of repair. This is fair, though it should be said it is based entirely on an internal case study, the results of which are not readily available, and which appears to consider only carbon emissions; what about other environmental costs? Even so, it does sound believable.

Apple also repeats the argument made by John Ternus that building for durability can prevent the need for repair, though sometimes at the cost of its ease. Ternus and this paper explain how the seals and adhesives used to make iPhones far more water resistant eliminated a whole host of repair needs. But, as I pointed out at the time, these goals are not necessarily in conflict, as Apple’s recent iPhones have been easier to repair than their predecessors without sacrificing their water and dust ingress rating.

Ternus’ interview with Marques Brownlee last month and this paper both seem to be Apple’s attempt to explain how it sees repair, and a way to reframe it in more favourable terms. Repairability is important, Apple says, but “when it benefits our customers and the environment”, not in isolation. It should be considered in the context of overall device longevity. That is a reasonable argument and one I do not disagree with in general. It also makes me wonder about Apple’s attitude toward batteries. There should be no need to replace the trackpad, keyboard, and a square foot of aluminum in order to install a new battery in a laptop.

You do have to chuckle at Apple’s diagram on page eight highlighting the sole repairable component on the original iPhone: the SIM card tray.

[…] Next year, Canada will become the 34th country in which Apple offers Self Service Repair.

Excellent news. Legislative pressure works.

Google Says It Continues to Monitor an Ineffective Chinese Propaganda Network blog.google

Zak Butler of Google:

Today we are sharing updated insights about DRAGONBRIDGE, the most prolific IO actor Google’s Threat Analysis Group (TAG) tracks. DRAGONBRIDGE, also known as “Spamouflage Dragon,” is a spammy influence network linked to the People’s Republic of China (PRC) that has a presence across multiple platforms. Despite producing a high amount of content, DRAGONBRIDGE still does not get high engagement from users on YouTube or Blogger.

[…]

Despite their continued profuse content production and the scale of their operations, DRAGONBRIDGE achieves practically no organic engagement from real viewers. In 2023, of the over 57,000 YouTube channels disabled, 80% had zero subscribers. Of the over 900,000 videos suspended, over 65% of their videos had fewer than 100 views, and 30% of their videos had zero views. Despite experimenting with content and producing large amounts of content, DRAGONBRIDGE still does not receive high engagement.

Reporting earlier this year by David Gilbert at Wired indicates this is not an isolated case: these propaganda efforts have been largely unsuccessful. Nevertheless, I appreciate that platform owners like Google are looking out for coordinated campaigns like these and intervening. I hope it would do the same when it is a U.S.-led disinformation campaign — and I would really like to believe it would. It is hilarious to call out governments trying these tactics and embarrass them with their weak engagement numbers.

At least, I hope these campaigns keep seeing a milquetoast reception. The alternative is likely terrible, especially for something like that U.S. anti-vaccine initiative.

U.S. Supreme Court Rejects Conspiracy Theory Behind Murthy v. Missouri techpolicy.press

Ben Lennett, Justin Hendrix, and Gabby Miller, Tech Policy Press:

Today, the US Supreme Court ruled in favor of the Biden Administration in Murthy v. Missouri. In a 6-3 ruling, the Court reversed a decision by the Court of Appeals for the Fifth Circuit that had found that the administration had violated the plaintiffs’ First Amendment rights, finding instead that the plaintiffs did not have standing to bring the case. “Neither the individual nor the state plaintiffs have established Article III standing to seek an injunction against any defendant,” the decision says.

This was an escalation of a September ruling favourable to the Biden administration — one which, by the way, the Supreme Court justices seemed really annoyed about having to listen to.

There are two more key cases concerning U.S. government influence over social media platforms’ moderation policies from this term, the decisions for which will be released soon.

Update: More good news after the justices ruled social media companies can moderate their platforms as they see fit.

Tuesdays used to be my favourite day of the week because it was the day when a bunch of new music would typically be released. That is no longer the case — but only because music releases were moved to Fridays. The music itself is as good as ever.

It remains a frustratingly recurring theme that today’s music is garbage, and yesterday’s music is gold. Music today is “just noise”, according to generations of people who insist the music in their day — probably when they were in their twenties — was better. Today’s peddler of this dead horse trope is YouTuber and record producer Rick Beato, who published a video about the two “real reason[s] music is getting worse”: it is “too easy to make”, and “too easy to consume”.

Beato is right that new technologies have been steadily making it easier to make music and listen to it, but he is wrong that they are destructive. “Good” and “bad” are subjective, to be sure, but it seems self-evident that more good music is being made now than ever before, simply because people are so easily able to translate an idea to published work. Any artist can experiment with any creative vision. It is an amazing time.

This also suggests more bad music is being made, but who cares? Bad music has existed forever. The lack of a gatekeeper now means it gains wider distribution, but that has more benefits than problems. Maybe some people will stumble across it and recognize the potential in a burgeoning artist.

Aside from the lack of distribution avenues historically, the main reason we do not remember bad records is because they are no longer played. This does not mean unpopular music is inherently bad, of course, only that time sifts things we generally like from things we do not.

Perhaps one’s definition of “good” includes how influential a work of art turns out to be. Again, it seems difficult to argue modern music is not as influential as that which has preceded it. It may be too early to tell what will prove its influence, to be sure, but we have relatively recent examples which indicate otherwise. The Weeknd spawned an entire genre of moody R&B imitators from a series of freely distributed mixtapes. The entire genre of trap spread to the world from its origins in Atlanta, to the extent that its unique characteristics have underpinned much of pop music for a decade. Many of its biggest artists made their name on DatPiff. Just two of countless examples.

If you actually love music for all that it can be, you are spoiled for choice today. If anything, that is the biggest problem with music today: there is so much of it and it can be overwhelming. The ease with which music can be made does not necessarily make it worse, but it does make it more difficult if you want to try as much of it as you can. I have only a small amount of sympathy when Beato laments how the ease of streaming services devalues artistry because of how difficult it can be to spend time with any one album when there is another to listen to, and then another. But anyone can make the decision to hold the queue and embrace a single release. (And if artistry is something we are concerned with, why call it “consuming”? A good record is not something I want to chug down.)

We can try any music we like these days. We can explore old releases just as easily as we can see what has just been published. We can and should take a chance on genres we had never considered before. We can explore new recordings of jazz and classical compositions. Every Friday is a feast for the ears — if you want it to be. If you really like music, you are living in the luckiest time. I know that is how I feel. I just wish artists could get paid an appropriate amount for how much they contribute to the best parts of life.

European Commission Finds Apple Is in Breach of DMA Rules ec.europa.eu

The European Commission:

Today, the European Commission has informed Apple of its preliminary view that its App Store rules are in breach of the Digital Markets Act (DMA), as they prevent app developers from freely steering consumers to alternative channels for offers and content.

The problems cited by the Commission are so far entirely related to in-app referrals for external purchases. The Commission additionally says it is looking into Apple’s terms for third-party app stores — including the Core Technology Fee — but that is not what these specific findings are about.

Jesper:

In the DMA, the ground rule is for sideloading apps to be allowed, and to only very minimally be reigned in under very specific conditions. Apple chose to take these conditions and lawyer them into “always, unless you pay us sums of money that are plainly prohibitive for most actors”. Apple knew the rules and understood the intent and chose to evade them, in order to retain additional income.

Separately, earlier this month — the weekend before WWDC, in fact — Apple rejected an emulator after holding it in review for two months.

Benjamin Mayo, 9to5Mac:

App Review has rejected a submission from the developers of UTM, a generic PC system emulator for iPhone and iPad.

The open source app was submitted to the store, given the recent rule change that allows retro game console emulators, like Delta or Folium. App Review rejected UTM, deciding that a “PC is not a console”. What is more surprising, is the fact that UTM says that Apple is also blocking the app from being listed in third-party app stores in the EU.

Michael Tsai compiled the many disapproving reactions to Apple’s decision, adding:

The bottom line for me is that Apple doesn’t want general-purpose emulators, it’s questionable whether the DMA lets it block them, and even siding with Apple on this it isn’t consistently applying its own rules.

Jason Snell, Six Colors:

The whole point of the DMA is that Apple does not get to act as an arbitrary approver or disapprover of apps. If Apple can still reject or approve apps as it sees fit, what’s the point of the DMA in the first place?

The Commission continues:

In parallel, the Commission will continue undertaking preliminary investigative steps outside of the scope of the present investigation, in particular with respect to the checks and reviews put in place by Apple to validate apps and alternative app stores to be sideloaded.

Riley Testut:

When we first met with the EC a few months ago, we were asked repeatedly if we trusted Apple to be in charge of Notarization. We emphatically said yes.

However, it’s clear to us now that Apple is indeed using Notarization to not only delay our apps, but also to determine on a case-by-case basis how to undermine each release — such as by changing the App Store rules to allow them

If you are somebody who believes it is only fair to take someone at their word and assume good faith, I am right there with you. Even though Apple has a long history of capricious App Review processes, it was fair to consider its approach to the E.U. a begrudging but earnest attempt at compliance. Even E.U. Commissioner Margrethe Vestager did, telling CNBC she was “very surprised that we would have such suspicions of Apple being non-compliant”.

That is, however, a rather difficult position to maintain, given the growing evidence Apple seems determined to evade both the letter and spirit of this legislation. Perhaps there are legitimate security concerns in the UTM emulator. The burden of proof for that claim rests on Apple, however, and its ability to be a reliable narrator is sometimes questionable. Consider the possible conflicts of interest in App Tracking Transparency rules raised by German competition authorities.

Manton Reece:

When a company withholds a feature from the EU because of the DMA — Apple for AI, Meta today for the fediverse — they should document which sections of the DMA would potentially be violated. Let users fact-check whether there’s a real problem.

Agreed. This would allow people to understand what businesses see are the limitations of the DMA on the merits. Users may not be the best judge of whether a legal problem exists — especially since laws get interpreted and reinterpreted by different experts all the time — but any details would be better than a void filled with speculation.

Alberta Government Is Outraged on Behalf of Greenwashing Oil Companies cbc.ca

Joel Dryden, CBC News:

Alberta’s government says it is “actively exploring” the use of every legal option, including a constitutional challenge or the use of the Alberta Sovereignty Act, to push back against federal legislation that will soon become law.

That legislation is Bill C-59, which would require companies to provide evidence to back up their environmental claims. It is currently awaiting royal assent.

As of Thursday, it was also what led the Pathways Alliance, a consortium of Canada’s largest oilsands companies, to remove all its content from its website, social media and other public communications.

The Alberta government and these petrochemical companies are cowards, the lot of them. If a business wants to claim that a drug treats or cures a disease, it needs to have proof of that. If it wants to claim the health benefits of some packaged food product, it needs evidence. If a mechanical process is supposed to meet energy or environmental standards, it must pass relevant tests.

Why should oil companies making absurd claims laundered through a quasi-governmental public relations office and supported by the province get to greenwash their way out of responsibility? All else being equal, most people probably do not care how their energy needs are met. They care about having electricity, transportation, and warmth. There is no shame in being honest about where we are environmentally speaking, where we need to go, and how difficult it will be to get there.

So Far, A.I. Is a Feature, Not a Product daringfireball.net

John Gruber:

An oft-told story is that back in 2009 — two years after Dropbox debuted, two years before Apple unveiled iCloud — Steve Jobs invited Dropbox cofounders Drew Houston and Arash Ferdowsi to Cupertino to pitch them on selling the company to Apple. Dropbox, Jobs told them, was “a feature, not a product”.

[…]

Leading up to WWDC last week, I’d been thinking that this same description applies, in spades, to LLM generative AI. Fantastically useful, downright amazing at times, but features. Not products. Or at least not broadly universal products. Chatbots are products, of course. People pay for access to the best of them, or for extended use of them. But people pay for Dropbox too.

Marques Brownlee published a video about the same topic last week, and referenced a Wired podcast episode from the week before.

This seems to be the way things are shaping up and, anecdotally, describes the kinds of A.I. things I find most useful. Previous site sponsor ListenLater’s pitch is it lets you “listen to articles as podcasts”; that it uses an A.I.-trained voice makes it sound better, but is only one component of a more comprehensive story. Generative features in Adobe’s products enable faster and easier object removal from photos, and extending images beyond the known edges.

These are just features, though. Text-to-speech has been around for ages, and training it on real speech patterns makes it sound more realistic than most digital voices have so far been. Likewise, removal tools have been a core feature in image editing software for decades, and Adobe’s has changed a lot in the time I have used it: from basic clone stamping, which allows you to paint an area with sampled pixels, to the healing brush — sort of similar, but it tries to match the tone of the destination — to Content-Aware Fill. And, now, Generative Fill. These tools have made image editing easier and more realistic. It could take hours to remove an errant street sign from a photo with older tools; now, it really does take mere seconds, and the results are usually at least as good as a manual effort. The same is true for extending a photo — something routinely done to make it fit better in an ad or some other fixed composition.
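For what it is worth, the oldest of these techniques is simple enough to sketch. This toy clone stamp, with a made-up two-dimensional “image” and mask, just copies pixels from a fixed offset over the masked region; real tools blend edges and match tone, and Adobe’s actual algorithms are of course far more involved:

```python
# Toy illustration of the clone stamp concept: every masked pixel is
# replaced by the pixel a fixed offset away, "painting" with sampled
# data. A simplification for illustration only, not Adobe's algorithm.

def clone_stamp(image, mask, offset):
    """Return a copy of `image` where pixels flagged in `mask` are
    overwritten by the pixel `offset` rows/columns away."""
    dy, dx = offset
    out = [row[:] for row in image]
    for y, row in enumerate(mask):
        for x, flagged in enumerate(row):
            if flagged:
                out[y][x] = image[y + dy][x + dx]
    return out

# A 4x4 "photo" with a bright artifact at (1, 1) and (1, 2).
image = [
    [10, 10, 10, 10],
    [10, 99, 99, 10],
    [10, 10, 10, 10],
    [10, 10, 10, 10],
]
mask = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
]

# Sample from two rows below the masked area.
repaired = clone_stamp(image, mask, offset=(2, 0))
print(repaired[1])  # the artifact is painted over: [10, 10, 10, 10]
```

Everything from the healing brush onward is, in effect, a smarter way of choosing and blending those sampled pixels.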

The irony of the feature-not-product framing is that iCloud Drive and OneDrive, for example, have struggled to become as efficient and reliable as Dropbox was when it launched. But, then again, so has Dropbox today. As synced folders became just a feature within a broader platform, Dropbox expanded its offering to become a collaborative work environment, a cloud backup utility, and more. As a result, its formerly quiet and dutiful desktop app has become less efficient.1

A similar story could be told about 1Password, too, though perhaps not to the same extent. For many users, the password manager built into their system or browser might be fine. 1Password makes a more robust product marketed heavily toward business and enterprise users. Unfortunately, it has supported that effort with a suite of apps which are less efficient for users in order to create a better workflow for its developers.

If you are looking for the path the standalone A.I. companies are likely to take — aside from a merger or acquisition — these examples may be lurking along the way. I wonder what has been happening with that OpenAI hardware project.


  1. At the time, I wrote the enterprise positioning was “misguided” and likely would not be successful. This humble pie tastes fine, I guess. ↥︎

Perplexity CEO Aravind Srinivas Responds fastcompany.com

Mark Sullivan, Fast Company:

[Perplexity CEO Aravind] Srinivas said the mysterious web crawler that Wired identified was not owned by Perplexity, but by a third-party provider of web crawling and indexing services.

Srinivas wants the warm glow of innovation without the cold truth of responsibility.

Srinivas would not say the name of the third-party provider, citing a Nondisclosure Agreement.

The way Perplexity works is dependent on favourable relationships with these providers, so Srinivas cannot throw them under the bus by name. He can, however, scatter blame all around.

Asked if Perplexity immediately called the third-party crawler to tell them to stop crawling Wired content, Srinivas was non-committal. “It’s complicated,” he said.

Srinivas has not.

Srinivas also noted that the Robot Exclusion Protocol, which was first proposed in 1994, is “not a legal framework.”

Srinivas is drawing a clear distinction between laws and principles because the legal implications are so far undecided. But it sure looks unethical that his service ignores the requests of publishers, no matter whether it does so through first- or third-party means.
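He is right that it is not a legal framework. The protocol is just a convention: a plain-text robots.txt file that well-behaved crawlers check voluntarily. A minimal sketch using Python’s standard library (the crawler name and URLs are hypothetical):

```python
# The Robots Exclusion Protocol is a plain-text file of per-crawler
# rules. Python's standard library can evaluate one; the crawler name
# and URLs here are illustrative.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: ExampleBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A compliant crawler checks before fetching. Nothing technically stops
# a non-compliant one, which is the "not a legal framework" point.
print(parser.can_fetch("ExampleBot", "https://example.com/article"))    # False
print(parser.can_fetch("SomeOtherBot", "https://example.com/article"))  # True
```

The entire system depends on crawlers choosing to honour the file, which is exactly why ignoring it reads as a statement of values.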

He suggested that the emergence of AI requires a new kind of working relationship between content creators, or publishers, and sites like his.

On this, Srinivas and I agree. But it seems that, until new policies are in place, Perplexity will keep pillaging the web.

The Fight for End-to-End Encryption Is Worldwide

Since 2022, the European Union has been trying to pass legislation requiring digital service providers to scan for and report CSAM as it passes through their services.

Giacomo Zandonini, Apostolis Fotiadis, and Luděk Stavinoha, Balkan Insight, with a good summary in September:

Welcomed by some child welfare organisations, the regulation has nevertheless been met with alarm from privacy advocates and tech specialists who say it will unleash a massive new surveillance system and threaten the use of end-to-end encryption, currently the ultimate way to secure digital communications from prying eyes.

[…]

The proposed regulation is excessively “influenced by companies pretending to be NGOs but acting more like tech companies”, said Arda Gerkens, former director of Europe’s oldest hotline for reporting online CSAM.

This is going to require a little back-and-forth, and I will pick up the story with quotations from Matthew Green’s introductory remarks to a panel before the European Internet Services Providers Association in March 2023:

The only serious proposal that has attempted to address this technical challenge was devised — and then subsequently abandoned — by Apple in 2021. That proposal aimed only at detecting known content using a perceptual hash function. The company proposed to use advanced cryptography to “split” the evaluation of hash comparisons between the user’s device and Apple’s servers: this ensured that the device never received a readable copy of the hash database.

[…]

The Commission’s Impact Assessment deems the Apple approach to be a success, and does not grapple with this failure. I assure you that this is not how it is viewed within the technical community, and likely not within Apple itself. One of the most capable technology firms in the world threw all their knowledge against this problem, and were embarrassed by a group of hackers: essentially before the ink was dry on their proposal.

Daniel Boffey, the Guardian, in May 2023:

Now leaked internal EU legal advice, which was presented to diplomats from the bloc’s member states on 27 April and has been seen by the Guardian, raises significant doubts about the lawfulness of the regulation unveiled by the European Commission in May last year.

The European Parliament in a November 2023 press release:

In the adopted text, MEPs excluded end-to-end encryption from the scope of the detection orders to guarantee that all users’ communications are secure and confidential. Providers would be able to choose which technologies to use as long as they comply with the strong safeguards foreseen in the law, and subject to an independent, public audit of these technologies.

Joseph Menn, Washington Post, in March, reporting on the results of a European court ruling:

While some American officials continue to attack strong encryption as an enabler of child abuse and other crimes, a key European court has upheld it as fundamental to the basic right to privacy.

[…]

The court praised end-to-end encryption generally, noting that it “appears to help citizens and businesses to defend themselves against abuses of information technologies, such as hacking, identity and personal data theft, fraud and the improper disclosure of confidential information.”

This is not directly about the proposed CSAM measures, but it is precedent for European regulators to follow.

Natasha Lomas, TechCrunch, this week:

The most recent Council proposal, which was put forward in May under the Belgian presidency, includes a requirement that “providers of interpersonal communications services” (aka messaging apps) install and operate what the draft text describes as “technologies for upload moderation”, per a text published by Netzpolitik.

Article 10a, which contains the upload moderation plan, states that these technologies would be expected “to detect, prior to transmission, the dissemination of known child sexual abuse material or of new child sexual abuse material.”

Meredith Whittaker, CEO of Signal, issued a PDF statement criticizing the proposal:

Instead of accepting this fundamental mathematical reality, some European countries continue to play rhetorical games. They’ve come back to the table with the same idea under a new label. Instead of using the previous term “client-side scanning,” they’ve rebranded and are now calling it “upload moderation.” Some are claiming that “upload moderation” does not undermine encryption because it happens before your message or video is encrypted. This is untrue.

Patrick Breyer, of Germany’s Pirate Party:

Only Germany, Luxembourg, the Netherlands, Austria and Poland are relatively clear that they will not support the proposal, but this is not sufficient for a “blocking minority”.

Ella Jakubowska on X:

The exact quote from [Věra Jourová] the Commissioner for Values & Transparency: “the Commission proposed the method or the rule that even encrypted messaging can be broken for the sake of better protecting children”

Věra Jourová on X, some time later:

Let me clarify one thing about our draft law to detect online child sexual abuse #CSAM.

Our proposal is not breaking encryption. Our proposal preserves privacy and any measures taken need to be in line with EU privacy laws.

Matthew Green on X:

Coming back to the initial question: does installing surveillance software on every phone “break encryption”? The scientist in me squirms at the question. But if we rephrase as “does this proposal undermine and break the *protections offered by encryption*”: absolutely yes.

Maïthé Chini, the Brussels Times:

It was known that the qualified majority required to approve the proposal would be very small, particularly following the harsh criticism of privacy experts on Wednesday and Thursday.

[…]

“[On Thursday morning], it soon became clear that the required qualified majority would just not be met. The Presidency therefore decided to withdraw the item from today’s agenda, and to continue the consultations in a serene atmosphere,” a Belgian EU Presidency source told The Brussels Times.

That is a truncated history of this piece of legislation: regulators want platform operators to detect and report CSAM; platforms and experts say that will conflict with security and privacy promises, even if media is scanned prior to encryption. This proposal may be specific to the E.U., but you can find similar plans to curtail or invalidate end-to-end encryption around the world:

I selected English-speaking regions because English is the language I can read, but I am sure there are more regions facing threats of their own.

We are not served by pretending this threat is limited to any specific geography. The benefits of end-to-end encryption are being threatened globally. The E.U.’s attempt may have been pushed aside for now, but another will rise somewhere else, and then another. It is up to civil rights organizations everywhere to continue arguing for the necessary privacy and security protections offered by end-to-end encryption.

Apple Says It Will Prevent E.U. Users From Accessing Select New Features, Including Apple Intelligence, Until It Has Achieved DMA Compliance ft.com

Javier Espinoza and Michael Acton, Financial Times:

Apple has warned that it will not roll out the iPhone’s flagship new artificial intelligence features in Europe when they launch elsewhere this year, blaming “uncertainties” stemming from Brussels’ new competition rules.

This article carries the headline “Apple delays European launch of new AI features due to EU rules”, but it is not clear to me these features are “delayed” in the E.U. or that they would “launch elsewhere this year”. According to the small text in Apple’s WWDC press release, these features “will be available in beta […] this fall in U.S. English”, with “additional languages […] over the course of the next year”. This implies the A.I. features in question will only be available to devices set to U.S. English, and acting upon text and other data also in U.S. English.

To be fair, this is a restriction of language, not geography. Someone in France or Germany could still want to play around with Apple Intelligence stuff even if it is not very useful with their mostly not-English data. Apple is saying they will not be able to. It aggressively region-locks alternative app marketplaces to Europe and, I imagine, will use the same infrastructure to keep users out of these new features.

There is an excerpt from Apple’s statement in this Financial Times article explaining which features will not launch in Europe this year: iPhone Mirroring, better screen sharing with SharePlay, and Apple Intelligence. Apple provided a fuller statement to John Gruber. This is the company’s explanation:

Specifically, we are concerned that the interoperability requirements of the DMA could force us to compromise the integrity of our products in ways that risk user privacy and data security. We are committed to collaborating with the European Commission in an attempt to find a solution that would enable us to deliver these features to our EU customers without compromising their safety.

Apple does not explain specifically how these features run afoul of the DMA — or why it would not or could not build them to clearly comply with the DMA — so this could be mongering, but I will assume it is a good-faith effort at compliance in the face of possible ambiguity. I am not sure Apple has earned a benefit of the doubt, but that is a different matter.

It seems like even the possibility of lawbreaking has made Apple cautious — and I am not sure why that is seen as an inherently bad thing. This is one of the world’s most powerful corporations, and the products and services it rolls out impact a billion-something people. That position deserves significant legal scrutiny.

I was struck by something U.S. FTC chair Lina Khan said in an interview at a StrictlyVC event this month:

[…] We hear routinely from senior dealmakers, senior antitrust lawyers, who will say pretty openly that as of five or six or seven years ago, when you were thinking about a potential deal, antitrust risk or even the antitrust analysis was nowhere near the top of the conversation, and now it is up front and center. For an enforcer, if you’re having companies think about that legal issue on the front end, that’s a really good thing because then we’re not going to have to spend as many public resources taking on deals that we believe are violating the laws.

Now that competition laws are being enforced, businesses have to think about them. That is a good thing! I get a similar vibe from this DMA response. It is much newer than antitrust laws in both the U.S. and E.U. and there are things about which all of the larger technology companies are seeking clarity. But it is not an inherently bad thing to have a regulatory layer, even if it means delays.

Is that not Apple’s whole vibe, anyway? It says it does not rush into things. It is proud of withholding new products until it feels it has gotten them just right. Perhaps you believe corporations are a better judge of what is acceptable than a regulatory body, but the latter serves as a check on the behaviour of the former.

Apple is not saying Europe will not get these features at all. It is only saying it is not sure it has built them in a DMA-compliant way. We do not know anything more about why that is the case at this time, and it does not make sense to speculate further until we do.

On Robots and Text

After Robb Knight found — and Wired confirmed — that Perplexity summarizes websites which have followed its opt-out instructions, I noticed a number of people making a similar claim: this is nothing but a big misunderstanding of the function of controls like robots.txt. A Hacker News comment thread contains several versions of these two arguments:

  • robots.txt is only supposed to affect automated crawling of a website, not explicit retrieval of an individual page.

  • It is fair to use a user agent string which does not disclose automated access because this request was not automated per se, as the user explicitly requested a particular page.

That is, publishers should expect the controls provided by Perplexity to apply only to its indexing bot, not a user-initiated page request. Wary as I am of being the kind of person who replies to pseudonymous comments on Hacker News, I think this is an unnecessarily absolutist reading of how site owners expect the Robots Exclusion Protocol to work.

To be fair, that protocol was published in 1994, well before anyone had to worry about websites being used as fodder for large language model training. And, to be fairer still, it has never been formalized. A spec was only recently proposed in September 2022. It has so far been entirely voluntary, but the draft standard proposes a more rigid expectation that rules will be followed. Yet it does not differentiate between different types of crawlers — those for search, others for archival purposes, and ones which power the surveillance economy — and contains no mention of A.I. bots. Any non-human means of access is expected to comply.
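To make the mechanics of the protocol concrete, Python’s standard library ships a parser for it. The rules below are a hypothetical example, not any site’s actual robots.txt, but they show how a directive aimed at one named crawler coexists with a permissive default for everyone else:

```python
# Sketch of how Robots Exclusion Protocol directives are evaluated,
# using Python's standard-library parser. The rules are illustrative.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: PerplexityBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# A crawler that identifies itself honestly is refused...
print(parser.can_fetch("PerplexityBot", "https://example.com/article"))
# ...while any other user agent is permitted.
print(parser.can_fetch("Mozilla/5.0", "https://example.com/article"))
```

Note that the whole scheme hinges on the string a crawler chooses to report about itself, which is exactly the weakness at issue here.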

The question seems to be whether what Perplexity is doing ought to be considered crawling. It is, after all, responding to a direct retrieval request from a user. This is subtly different from how a user might search Google for a URL, in which case they are asking whether that site is in the search engine’s existing index. Perplexity is ostensibly following real-time commands: go fetch this webpage and tell me about it.

But it clearly is also crawling in a more traditional sense. The New York Times and Wired both disallow PerplexityBot, yet I was able to ask it to summarize a set of recent stories from both publications. At the time of writing, the Wired summary is about seventeen hours out of date, and the Times summary is about two days old. Neither publication has changed its robots.txt directives recently; they were both blocking Perplexity last week, and they are blocking it today. Perplexity is not fetching these sites in real-time as a human or web browser would. It appears to be scraping sites which have explicitly said that is something they do not want.

Perplexity should be following those rules and it is shameful it is not. But what if you ask for a real-time summary of a particular page, as Knight did? Is that something which should be identifiable by a publisher as a request from Perplexity, or from the user?

The Robots Exclusion Protocol may be voluntary, but a more robust method is to block bots by detecting their user agent string. Instead of expecting visitors to abide by your “No Homers Club” sign, you are checking IDs. But these strings are unreliable and there are often good reasons for evading user agent sniffing.
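For illustration, the “checking IDs” approach might look something like this in an nginx configuration. This is a hedged sketch; the server block and bot name are assumptions for demonstration, and, as noted, it only works when the bot identifies itself honestly:

```nginx
# Hypothetical nginx server block: reject requests whose User-Agent
# header matches a known crawler. Ineffective if the bot lies about
# its identity, which is the crux of this controversy.
server {
    listen 80;
    server_name example.com;

    if ($http_user_agent ~* "PerplexityBot") {
        return 403;
    }
}
```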

Perplexity says its bot is identifiable by both its user agent and the IP addresses from which it operates. Remember: this whole controversy is that it sometimes discloses neither, making it impossible to differentiate Perplexity-originating traffic from a real human being — and there is a difference.

A webpage being rendered through a web browser is subject to the quirks and oddities of that particular environment — ad blockers, Reader mode, screen readers, user style sheets, and the like — but there is a standard. A webpage being rendered through Perplexity is actually being reinterpreted and modified. The original text of the page is transformed through automated means about which neither the reader nor the publisher has any understanding.

This is true even if you ask it for a direct quote. I asked for a full paragraph of a recent article and it mashed together two separate sections. They are direct quotes, to be sure, but the article must have been interpreted to generate this excerpt.1

It is simply not the case that requesting a webpage through Perplexity is akin to accessing the page via a web browser. It is more like automated traffic — even if it is being guided by a real person.

The existing mechanisms for restricting the use of bots on our websites are imperfect and limited. Yet they are the only tools we have right now to opt out of participating in A.I. services if that is something one wishes to do, short of putting pages or an entire site behind a user name and password. It is completely reasonable for someone to assume their signal of objection to any robotic traffic ought to be respected by legitimate businesses. The absolute least Perplexity can do is respecting those objections by clearly and consistently identifying itself, and excluding websites which have indicated they do not want to be accessed by these means.


  1. I am not presently blocking Perplexity, and my argument is not related to its ability to access the article. I am only illustrating how it reinterprets text. ↥︎

Perplexity Is a Bullshit Machine wired.com

Dhruv Mehrotra and Tim Marchman, of Wired, were able to confirm Robb Knight’s finding that Perplexity ignores the very instructions it gives website owners to opt out of scraping. And there is more:

The WIRED analysis also demonstrates that despite claims that Perplexity’s tools provide “instant, reliable answers to any question with complete sources and citations included,” doing away with the need to “click on different links,” its chatbot, which is capable of accurately summarizing journalistic work with appropriate credit, is also prone to bullshitting, in the technical sense of the word.

I had not played around with Perplexity very much, but I tried asking it “what is the bullshit web?”. Its summaries in response to prompts with and without a question mark are slightly different but there is one constant: it does not cite my original article, only a bunch of (nice) websites which linked to or reblogged it.

A.I. Cannot Fix What Automation Already Broke bloodinthemachine.com

Takeshi Narabe, the Asahi Shimbun:

SoftBank Corp. announced that it has developed voice-altering technology to protect employees from customer harassment.

The goal is to reduce the psychological burden on call center operators by changing the voices of complaining customers to calmer tones.

The company launched a study on “emotion canceling” three years ago, which uses AI voice-processing technology to change the voice of a person over a phone call.

Penny Crosman, the American Banker:

Call center agents who have to deal with angry or perplexed customers all day tend to have through-the-roof stress levels and a high turnover rate as a result. About 53% of U.S. contact center agents who describe their stress level at work as high say they will probably leave their organization within the next six months, according to CMP Research’s 2023-2024 Customer Contact Executive Benchmarking Report.

Some think this is a problem artificial intelligence can fix. A well-designed algorithm could detect the signs that a call center rep is losing it and do something about it, such as send the rep a relaxing video montage of photos of their family set to music.

Here we have examples from two sides of the same problem: working in a call centre sucks because dealing with usually angry, frustrated, and miserable customers sucks. The representative probably understands why some corporate decision made the customer angry, frustrated, and miserable, but cannot really do anything about it.

So there are two apparent solutions here — the first reconstructs a customer’s voice in an effort to make them sound less hostile, and the second shows call centre employees a “video montage” of good memories as an infantilizing calming measure.

Brian Merchant wrote about the latter specifically, but managed to explain why both illustrate the problems created by how call centres work today:

If this showed up in the b-plot of a Black Mirror episode, we’d consider it a bit much. But it’s not just the deeply insipid nature of the AI “solution” being touted here that gnaws at me, though it does, or even the fact that it’s a comically cynical effort to paper over a problem that could be solved by, you know, giving workers a little actual time off when they are stressed to the point of “losing it”, though that does too. It’s the fact that this high tech cost-saving solution is being used to try to fix a whole raft of problems created by automation in the first place.

A thoughtful exploration of how A.I. is really being used which, combined with the previously linked item, does not suggest a revolution for anyone involved. It looks more like a cheap patch on society’s cracking dam.

McDonald’s Is Ending Its Drive-Through A.I. Test restaurantbusinessonline.com

Jonathan Maze, Restaurant Business Online:

McDonald’s is ending its two-year-old test of drive-thru, automated order taking (AOT) that it has conducted with IBM and plans to remove the technology from the more than 100 restaurants that have been using it.

[…]

McDonald’s has taken a deliberative approach on drive-thru AI even as many other restaurant chains have jumped fully on board. Checkers and Rally’s, Hardee’s, Carl’s Jr., Krystal, Wendy’s, Dunkin and Taco Johns are either testing or have implemented the technology in its drive-thrus.

Some of those chains “fully on board” with A.I. order-taking are customers of Presto which, according to reporting last year in Bloomberg, relied on outsourced workers in the Philippines for roughly 70% of the orders processed through its “A.I.” system. In a more recent corporate filing, human intervention has fallen to 54% of orders at “select locations” where Presto has launched what it calls its “most advanced version of [its] A.I. technology”. However, that improvement only applies to 55 of 202 restaurant locations where Presto is used. It does not say in that filing how many orders need human intervention at the other 147 locations.

Perhaps I am being unfair. Any advancements in A.I. are going to start off rocky, and will take a while to improve. They will understandably be mired in controversy, too. I am fond of how Cory Doctorow put it:

[…] their [A.I. vendors’] products aren’t anywhere near good enough to do your job, but their salesmen are absolutely good enough to convince your boss to fire you and replace you with an AI model that totally fails to do your job.

We can choose to create a world where even the smallest expressions of human creativity in our work are ceded to technology — or we can choose not to. I am not a doomsday person about A.I.; I have found it sometimes useful in home and work contexts. But I am not buying the hype either. The problem is that I think Doctorow might be right: the people making decisions may hold their nose over any concerns they could have about trust as they realize how much more productive someone can be when they no longer have to think so much, and how much less they can be paid. And then whatever standards we have for good enough fall off a cliff.

But the McDonald’s experiment is probably just silly.

Nvidia Is the World’s Most Valuable Bubb— Sorry, Company cnbc.com

Kif Leswing, CNBC:

Nvidia, long known in the niche gaming community for its graphics chips, is now the most valuable public company in the world.

[…]

Nvidia shares are up more than 170% so far this year, and went a leg higher after the company reported first-quarter earnings in May. The stock has multiplied by more than ninefold since the end of 2022, a rise that’s coincided with the emergence of generative artificial intelligence.

I know computing is math — even drawing realistic pictures really fast — but it is so funny to me that Nvidia’s products have become so valuable for doing applied statistics instead of for actual graphics work.

Gender Discrimination Lawsuit Filed Against Apple arstechnica.com

Patrick McGee, Financial Times, August 2022:

In interviews with 15 female Apple employees, both current and former, the Financial Times has found that Mohr’s frustrating experience with the People group has echoes across at least seven Apple departments spanning six US states.

The women shared allegations of Apple’s apathy in the face of misconduct claims. Eight of them say they were retaliated against, while seven found HR to be disappointing or counterproductive.

Ashley Belanger, Ars Technica, last week:

Apple has spent years “intentionally, knowingly, and deliberately paying women less than men for substantially similar work,” a proposed class action lawsuit filed in California on Thursday alleged.

[…]

The current class action has alleged that Apple continues to ignore complaints that the company culture fosters an unfair and hostile workplace for women. It’s hard to estimate how much Apple might owe in back pay and other damages should women suing win, but it could easily add up if all 12,000 class members were paid thousands less than male counterparts over the complaint’s approximately four-year span. Apple could also be on the hook for hundreds in civil penalties per class member per pay period between 2020 and 2024.

I pulled the 2022 Financial Times investigation into this because one of the plaintiffs in the lawsuit filed last week also alleges sexual harassment by a colleague which was not adequately addressed.

Stephen Council, SFGate:

The lawyer said that asking women about pay expectations “locks” past pay discrimination in and that the requirements of a job should determine pay. Finberg isn’t new to the fight over tech pay; he represented employees suing Oracle and Google for gender-based pay discrimination, securing $25 million and $118 million settlements, respectively.

Last year, Apple paid $25 million to settle claims it discriminated in U.S. hiring in favour of people whose ability to remain in the U.S. depended on their employment status.

Adobe Codifies Pledge Not to Train A.I. on Customer Data axios.com

Ina Fried, Axios:

Adobe on Tuesday updated its terms of service to make explicit that it won’t train AI systems using customer data.

The move follows an uproar over largely unrelated changes Adobe made in recent days to its terms of service — which contained wording that some customers feared was granting Adobe broad rights to customer content.

Again, I must ask whether businesses are aware of how little trust there currently is in technology firms’ A.I. use. People misinterpret legal documents all the time — a minor consequence of how we have normalized signing a non-negotiable contract every time we create a new account. Most people are not equipped to read and comprehend the consequences of those contracts, and it is unsurprising when they assume the worst.

U.S. Federal Trade Commission Sues Adobe Over Subscription Practices ftc.gov

The U.S. Federal Trade Commission:

The Federal Trade Commission is taking action against software maker Adobe and two of its executives, Maninder Sawhney and David Wadhwani, for deceiving consumers by hiding the early termination fee for its most popular subscription plan and making it difficult for consumers to cancel their subscriptions.

A federal court complaint filed by the Department of Justice upon notification and referral from the FTC charges that Adobe pushed consumers toward the “annual paid monthly” subscription without adequately disclosing that cancelling the plan in the first year could cost hundreds of dollars. Wadhwani is the president of Adobe’s digital media business, and Sawhney is an Adobe vice president.

The inclusion of two Adobe executives as co-defendants is notable, though not entirely unique — in September, the FTC added three executives to its complaint against Amazon, a move a judge recently upheld.

The contours of the case itself bear similarities to the Amazon Prime one, too. In both cases, customers are easily coerced into subscriptions which are difficult to cancel. Executives were aware of customer complaints, according to the FTC, yet they allegedly allowed or encouraged these practices. But there are key differences between these cases as well. Amazon Prime is a monthly cancel-anytime subscription — if you can navigate the company’s deliberately confusing process. Adobe, on the other hand, offers three ways to pay for many of its products: on a monthly basis which can be cancelled at any time, on an annual basis, or on a monthly basis locked into an annual contract. However, it predominantly markets its products with the latter option, and preselects it when subscribing. That is where the pain begins.

The difficulty and cost of cancelling an Adobe subscription is legendary. It is right up there with gyms for how badly it treats its customers. It has designed a checkout process that defaults people into an annual contract, and a cancellation workflow which makes extricating oneself from that contract tedious, time-consuming, and expensive. If Adobe wanted to make it obvious what users were opting into at checkout, and easy for them to end a subscription, it could have designed those screens in that way. Adobe did not.

Perplexity A.I. Is Lying About Its User Agent rknight.me

Robb Knight blocked various web scrapers via robots.txt and through nginx. Yet Perplexity seemed to be able to access his site:

I got a perfect summary of the post including various details that they couldn’t have just guessed. Read the full response here. So what the fuck are they doing?

[…]

Before I got a chance to check my logs to see their user agent, Lewis had already done it. He got the following user agent string which certainly doesn’t include PerplexityBot like it should: […]

I am sure Perplexity will respond to this by claiming it was inadvertent, and it has fixed the problem, and it respects publishers’ choices to opt out of web scraping. What matters is how we have only a small amount of control over how our information is used on the web. It defaults to open and public — which is part of the web’s brilliance, until the audience is no longer human.

Unless we want to lock everything behind a login screen, the only mechanisms for control that we have are dependent on companies like Perplexity being honest about their bots. There is no chance this problem only affects the scraping of a handful of independent publishers; this is certainly widespread. Without penalty or legal reform, A.I. companies have little incentive not to do exactly the same as Perplexity.

Clearview Class Action Settlement Proposal Would Make Investors Out of Victims nytimes.com

Kashmir Hill, New York Times:

[Clearview AI] A facial recognition start-up, accused of invasion of privacy in a class-action lawsuit, has agreed to a settlement, with a twist: Rather than cash payments, it would give a 23 percent stake in the company to Americans whose faces are in its database.

This is an awful move by an awful company. It turns U.S. victims of its global privacy invasion into people who are invested and complicit in its success.

Microsoft Delays Launch of Recall blogs.windows.com

Pavan Davuluri, of Microsoft:

Today, we are communicating an additional update on the Recall (preview) feature for Copilot+ PCs. Recall will now shift from a preview experience broadly available for Copilot+ PCs on June 18, 2024, to a preview available first in the Windows Insider Program (WIP) in the coming weeks. Following receiving feedback on Recall from our Windows Insider Community, as we typically do, we plan to make Recall (preview) available for all Copilot+ PCs coming soon.

Microsoft has always struggled to name its products coherently, but Microsoft Copilot+ PCs with Recall (preview) available first through the Windows Insider Program (WIP) has to take the cake. Absolute gibberish.

Anyway, it is disappointing to see Microsoft botch the announcement of this feature so badly. Investors do not seem to care about how untrustworthy the company is because, face it, how many corporations big and small are going to abandon Windows and Office? As long as its leadership keeps saying the right things, it seems it is still comfortable to sit in the afterglow of its A.I. transformation.

Sponsor: Magic Lasso Adblock: Incredibly Private and Secure Safari Web Browsing magiclasso.co

Online privacy isn’t just something you should be hoping for — it’s something you should expect. You should ensure your browsing history stays private and is not harvested by ad networks.

By blocking ad trackers, Magic Lasso Adblock stops you being followed by ads around the web.

Screenshot of Magic Lasso Adblock

It’s a native Safari content blocker for your iPhone, iPad, and Mac that’s been designed from the ground up to protect your privacy.

Rely on Magic Lasso Adblock to:

  • Remove ad trackers, annoyances and background crypto-mining scripts

  • Browse common websites 2.0× faster

  • Double battery life during heavy web browsing

  • Lower data usage when on the go

So, join over 300,000 users and download Magic Lasso Adblock today.

My thanks to Magic Lasso Adblock for sponsoring Pixel Envy this week.

The Three C’s of Data Participation in the Age of A.I. techpolicy.press

Eryk Salvaggio, Tech Policy Press:

People are growing ever more frustrated by the intrusiveness of tech. This frustration feeds a cycle of fear that can be quickly dismissed, but doing so strikes me as either foolish or cynical. I am not a lawyer, but lately I have been in a lot of rooms with lawyers discussing people’s rights in the spheres of art and AI. One of the things that has come up recently is the challenge of translating oftentimes unfiltered feelings about AI into a legal framework.

[…]

I would never claim to speak to the concerns of everyone I’ve spoken with about AI, but I have made note of a certain set of themes. I understand these as three C’s for data participation: Context, Consent, and Control.

This is a thoughtful essay about what it means for creation to be public, and the imbalanced legal architecture covering appropriation and reuse. I bet many people feel this in their gut — everything is a remix, yet there are vast differences between how intellectual property law deals with individuals compared to businesses.

If I were creating music by hand which gave off the same vibes as another artist's, I would be worried about a resulting lawsuit, even if I did not stray into the grey area of sampling. And I would have to obtain everything legally — if I downloaded a song off the back of a truck, so to speak, I would be at risk of yet more legal jeopardy, even if it was for research or commentary. Yet an A.I. company can scrape all the music that has ever been published to the web, create a paid product that will reproduce any song or artist you might like without credit or compensation, and then argue this is fair use.

This does not seem like a fair situation, and it is not one that will be remedied by making copyright more powerful. I appreciated Salvaggio’s more careful assessment.

ProPublica: Microsoft Refused to Fix Flaw Years Before SolarWinds Hack propublica.org

Renee Dudley and Doris Burke, reporting for ProPublica, which is not, contrary to the opinion of one U.S. Supreme Court jackass justice, “very well-funded by ideological groups” bent on “look[ing] for any little thing they can find” and “try[ing] to make something out of it”, but is instead a distinguished publication of investigative journalism:

Microsoft hired Andrew Harris for his extraordinary skill in keeping hackers out of the nation’s most sensitive computer networks. In 2016, Harris was hard at work on a mystifying incident in which intruders had somehow penetrated a major U.S. tech company.

[…]

Early on, he focused on a Microsoft application that ensured users had permission to log on to cloud-based programs, the cyber equivalent of an officer checking passports at a border. It was there, after months of research, that he found something seriously wrong.

This is a deep and meaningful exploration of Microsoft’s internal response to the conditions that created 2020’s catastrophic SolarWinds breach. It seems that both Microsoft and the Department of Justice knew well before anyone else — perhaps as early as 2016 in Microsoft’s case — yet neither did anything with that information. Other things were deemed more important.

Perhaps this was simply a multi-person failure in which dozens of people at Microsoft could not see why Harris' discovery was such a big deal. Maybe none of them could foresee it actually being exploited in the wild, or some key piece of information failed to get communicated. I am a firm believer in Hanlon's razor: never attribute to malice what is adequately explained by incompetence.

On the other hand, the deep integration of Microsoft’s entire product line into sensitive systems — governments, healthcare, finance — magnifies any failure. The incompetence of a handful of people at a private corporation should not result in 18,000 infected networks.

Ashley Belanger, Ars Technica:

Microsoft is pivoting its company culture to make security a top priority, President Brad Smith testified to Congress on Thursday, promising that security will be “more important even than the company’s work on artificial intelligence.”

Satya Nadella, Microsoft’s CEO, “has taken on the responsibility personally to serve as the senior executive with overall accountability for Microsoft’s security,” Smith told Congress.

[…]

Microsoft did not dispute ProPublica’s report. Instead, the company provided a statement that almost seems to contradict Smith’s testimony to Congress today by claiming that “protecting customers is always our highest priority.”

Microsoft’s public relations staff can say anything they want. But there is plenty of evidence — contemporary and historic — showing this is untrue. Can it do better? I am sure Microsoft employs many intelligent and creative people who desperately want to change this corrupted culture. Will it? Maybe — but for how long is anybody’s guess.

Japan Becomes the Next Region to Mandate Alternative App Stores asahi.com

The Asahi Shimbun, in a non-bylined report:

The new law designates companies that are influential in four areas: smartphone operating systems, app stores, web browsers and search engines.

The new law will prohibit companies from giving preferential treatment for the operator’s own payment system and from preventing third-party companies from launching new application stores.

[…]

The new legislation sets out exceptional rules in cases to protect security, privacy and youth users.

Penalties are 20–30% of Japanese revenue. Japan is one of very few countries in the world where the iPhone's market share exceeds that of Android phones. I am interested to see whether Apple keeps its developer policies consistent between the E.U. and Japan, or lets them diverge.

BNN Breaking Was an A.I. Sham nytimes.com

“Conspirador Norteño”, in January 2023:

BNN (the “Breaking News Network”, a news website operated by tech entrepreneur and convicted domestic abuser Gurbaksh Chahal) allegedly offers independent news coverage from an extensive worldwide network of on-the-ground reporters. As is often the case, things are not as they seem. A few minutes of perfunctory Googling reveals that much of BNN’s “coverage” appears to be mildly reworded articles copied from mainstream news sites. For science, here’s a simple technique for algorithmically detecting this form of copying.
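
For the curious, the general idea behind that sort of detection can be sketched in a few lines. This is a toy example of my own, not necessarily Norteño's exact method: a mildly reworded copy retains most of the original's wording, so even a crude sequence-similarity score separates it from unrelated text. The sentences below are invented for illustration.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Character-level similarity between two strings, from 0.0 to 1.0."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Invented sentences for illustration.
original = "The central bank raised interest rates by half a percentage point on Tuesday."
reworded = "The central bank increased interest rates by half a percentage point on Tuesday."
unrelated = "Volunteers planted three hundred trees along the riverbank over the weekend."

# The reworded copy scores far higher than the unrelated sentence.
print(round(similarity(original, reworded), 2))
print(round(similarity(original, unrelated), 2))
```

Real systems compare article pairs at scale with smarter fingerprinting, but the principle is the same: light paraphrasing leaves most of the source text intact.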

Kashmir Hill and Tiffany Hsu, New York Times:

Many traditional news organizations are already fighting for traffic and advertising dollars. For years, they competed for clicks against pink slime journalism — so-called because of its similarity to liquefied beef, an unappetizing, low-cost food additive.

Low-paid freelancers and algorithms have churned out much of the faux-news content, prizing speed and volume over accuracy. Now, experts say, A.I. could turbocharge the threat, easily ripping off the work of journalists and enabling error-ridden counterfeits to circulate even more widely — as has already happened with travel guidebooks, celebrity biographies and obituaries.

See, it is not just humans producing abject garbage; robots can do it, too — and way better. There was a time when newsrooms could be financially stable on display ads. Those days are over for a team of human reporters, even if all they do is rewrite rich guy tweets. But if you only need to pay a skeleton operations staff to ensure the robots continue their automated publishing schedule, well, that becomes a more plausible business venture.

Another thing of note from the Times story:

Before ending its agreement with BNN Breaking, Microsoft had licensed content from the site for MSN.com, as it does with reputable news organizations such as Bloomberg and The Wall Street Journal, republishing their articles and splitting the advertising revenue.

I have to wonder how much of an impact this co-sign had on the success of BNN Breaking. Syndicated articles on MSN like these are shown in various places on a Windows computer, and are boosted in Bing search results. Microsoft is increasingly dependent on A.I. for editing its MSN portal, with predictable consequences.

“Conspirador Norteño”, in April:

The YouTube channel is not the only data point that connects Trimfeed to BNN. A quick comparison of the bylines on BNN’s and Trimfeed’s (plagiarized) articles shows that many of the same names appear on both sites, and several X accounts that regularly posted links to BNN articles prior to April 2024 now post links to Trimfeed content. Additionally, BNN seems to have largely stopped publishing in early April, both on its website and social media, with the Trimfeed website and related social media efforts activating shortly thereafter. It is possible that BNN was mothballed due to being downranked in Google search results in March 2024, and that the new Trimfeed site is an attempt to evade Google’s decision to classify Trimfeed’s predecessor as spam.

The Times reporters definitively linked the two and, after doing so, Trimfeed stopped publishing. Its domain, like BNN Breaking, now redirects to BNNGPT, which ostensibly uses proprietary technologies developed by Chahal. Nothing about this makes sense to me and it smells like bullshit.

Dark Mode App Icons lmnt.me

Apple’s Human Interface Guidelines:

[Beginning in iOS 18 and iPadOS 18] People can customize the appearance of their app icons to be light, dark, or tinted. You can create your own variations to ensure that each one looks exactly the way you want. See Apple Design Resources for icon templates.

Design your dark and tinted icons to feel at home next to system app icons and widgets. You can preserve the color palette of your default icon, but be mindful that dark icons are more subdued, and tinted icons are even more so. A great app icon is visible, legible, and recognizable, even with a different tint and background.

Louie Mantia:

Apple’s announcement of “dark mode” icons has me thinking about how I would approach adapting “light mode” icons for dark mode. I grabbed 12 icons we made at Parakeet for our clients to illustrate some ways of going about it.

I appreciated this deep exploration of different techniques for adapting alternate icon appearances. Obviously, two days into the first preview build of a new operating system is not the best time to adjudicate its updates. But I think it is safe to say a quality app from a developer that cares about design will want to supply a specific dark mode icon instead of relying upon the system-generated one. Any icon with more detail than a glyph on a background will benefit.

Also, now that there are two distinct appearances, I think it would be great if icons which are very dark also had lighter alternates, where appropriate.

Rich Idiot Tweets techdirt.com

Jason Koebler, 404 Media:

Monday, Elon Musk tweeted a thing about Apple’s marketing event, an act that took Musk three seconds but then led to a large portion of the dwindling number of employed human tech journalists to spring into action and collectively spend many hours writing blogs about What This Thing That Probably Won’t Happen All Means.

Karl Bode, Techdirt:

Journalists are quick to insist that it’s their noble responsibility to cover the comments of important people. But journalism is about informing and educating the public, which isn’t accomplished by redirecting limited journalistic resources to cover platform bullshit that means nothing and will result in nothing meaningful. All you’ve done is made a little money wasting people’s time.

The speed at which some publishers insist these “articles” are posted, combined with the absence of constraints like airtime or physical paper, means the loudest people know they can draw attention by posting deranged nonsense. All those people who got into journalism because they thought they could make a difference are instead cajoled into adding something resembling substance to forty-four tweeted words from the fingers of a dipshit.

Apple Intelligence apple.com

Daniel Jalkut, last month:

Which leads me to my somewhat far-fetched prediction for WWDC: Apple will talk about AI, but they won’t once utter the letters “AI”. They will allude to a major new initiative, under way for years within the company. The benefits of this project will make it obvious that it is meant to serve as an answer to comparable efforts being made by OpenAI, Microsoft, Google, and Facebook. During the crescendo to announcing its name, the letters “A” and “I” will be on all of our lips, and then they’ll drop the proverbial mic: “We’re calling it Apple Intelligence.” Get it?

Apple:

Apple today introduced Apple Intelligence, the personal intelligence system for iPhone, iPad, and Mac that combines the power of generative models with personal context to deliver intelligence that’s incredibly useful and relevant. Apple Intelligence is deeply integrated into iOS 18, iPadOS 18, and macOS Sequoia. It harnesses the power of Apple silicon to understand and create language and images, take action across apps, and draw from personal context to simplify and accelerate everyday tasks. With Private Cloud Compute, Apple sets a new standard for privacy in AI, with the ability to flex and scale computational capacity between on-device processing and larger, server-based models that run on dedicated Apple silicon servers.

To Apple’s credit, the letters “A.I.” were only enunciated a handful of times during its main presentation today, far less often than I had expected. Mind you, in sixty-odd places, “A.I.” was instead referred to by the branded “Apple Intelligence” moniker which is also “A.I.” in its own way. I want half-right points.

There are several concerns with features like these, and Apple answered two of them today: how the models were trained, and the privacy and security of user data. The former was not explained during today’s presentation, nor in its marketing materials and developer documentation. But it was revealed by John Giannandrea, senior vice president of Machine Learning and A.I. Strategy, in an afternoon question-and-answer session hosted by Justine Ezarik, as live-blogged by Nilay Patel at the Verge:1

What have these models actually been trained on? Giannandrea says “we start with the investment we have in web search” and start with data from the public web. Publishers can opt out of that. They also license a wide amount of data, including news archives, books, and so on. For diffusion models (images) “a large amount of data was actually created by Apple.”

If publishers wish to opt out of Apple’s model training but continue to permit crawling for features like Siri and Spotlight, they should add a disallow rule for Applebot-Extended. Because of Apple’s penchant for secrecy, that usage control was not added until today, which means a site may have been absorbed into training data unless its owners had opted out of all Applebot crawling. It is hard to decline participating in something you do not even know about.
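
Concretely, such an opt-out is just a pair of ordinary robots.txt rules. Below is a sketch (example.com is a stand-in, and I am assuming the Applebot-Extended token is matched like any other user-agent string), checked here with Python's standard-library robots.txt parser:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: opt out of A.I. training (Applebot-Extended)
# while still permitting the ordinary Applebot crawler used for features
# like Siri and Spotlight suggestions.
ROBOTS_TXT = """\
User-agent: Applebot-Extended
Disallow: /

User-agent: Applebot
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# The search/feature crawler may still fetch pages...
print(parser.can_fetch("Applebot", "https://example.com/article"))
# ...while the training crawler is refused everywhere.
print(parser.can_fetch("Applebot-Extended", "https://example.com/article"))
```

User-agent tokens are matched as separate groups, so blocking Applebot-Extended does not affect the plain Applebot crawler.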

Additionally, in April, Katie Paul and Anna Tong reported for Reuters that Apple struck a licensing agreement with Shutterstock for image training purposes.

Apple is also, unsurprisingly, heavily promoting the privacy and security policies it has in place. It noted some of these attributes in its presentation — including some auditable code and data minimization — and elaborated on Private Cloud Compute on its security blog:

With services that are end-to-end encrypted, such as iMessage, the service operator cannot access the data that transits through the system. One of the key reasons such designs can assure privacy is specifically because they prevent the service from performing computations on user data. Since Private Cloud Compute needs to be able to access the data in the user’s request to allow a large foundation model to fulfill it, complete end-to-end encryption is not an option. Instead, the PCC compute node must have technical enforcement for the privacy of user data during processing, and must be incapable of retaining user data after its duty cycle is complete.

[…]

  • User data is never available to Apple — even to staff with administrative access to the production service or hardware.

Apple can make all the promises it wants, and it appears it does truly want to use generative A.I. in a more responsible way. For example, the images you can make using Image Playground cannot be photorealistic and — at least for those shown so far — are so strange you may avoid using them. Similarly, though I am not entirely sure, it seems plausible the query system is designed to be more private and secure than today’s Siri.

Yet, as I wrote last week, users may not trust any of these promises. Many of these fears are logical: people are concerned about the environment, creative practices, and how their private information is used. But some are more about the feel of it — and that is okay. Even if all the training data were fully licensed and user data is as private and secure as Apple says, there is still an understandable ick factor for some people. The way companies like Apple, Google, and OpenAI have trained their A.I. models on the sum of human creativity represents a huge imbalance of power, and the only way to control Apple’s public data use was revealed yesterday. Many of the controls Apple has in place are policies which can be changed.

Consider how, so far as I can see, there will be no way to know for certain whether a Siri query is being processed locally or on Apple’s servers. You do not know that today when using Siri either, though you can infer it from what you are doing, or when something fails because Apple’s Siri service is down. It seems likely that will be the case with this new version, too.

Then there are questions about the ethos of generative intelligence. Apple has long positioned its products as tools which enable people to express themselves creatively. Generative models have been pitched as almost the opposite: now, you do not have to pay for someone’s artistic expertise. You can just tell a computer to write something and it will do so. It may be shallow and unexciting, but at least it was free and near-instantaneous. Apple notably introduced its set of generative services only a month after it embarrassed itself by crushing analogue tools into an iPad. Happily, it seems this first set of generative features is more laundry and less art — making notifications less intrusive, categorizing emails, making Siri not-shit. I hope I can turn off things like automatic email replies.

You will note my speculative tone. That is because Apple’s generative features have not been made available yet, including in developer beta builds of its new operating system. None of us have any idea how useful these features are, nor what limitations they have. All we can see are Apple’s demonstrations and the metrics it has shared. So, we will see how any of this actually pans out. I have been bamboozled by this same corporation making similar promises before.

“May you live in interesting times”, indeed.


  1. The Verge’s live blog does not have per-update permalinks so you will need to load all the messages and find this for yourself. ↥︎

Research as Leisure Activity personalcanon.com

Celine Nguyen:

But this isn’t really about the software. It’s about what software promises us — that it will help us become who we want to be, living the lives we find most meaningful and fulfilling. The idea of research as leisure activity has stayed with me because it seems to describe a kind of intellectual inquiry that comes from idiosyncratic passion and interest. It’s not about the formal credentials. It’s fundamentally about play. It seems to describe a life where it’s just fun to be reading, learning, writing, and collaborating on ideas.

This is a wonderful essay, albeit one which leaves me with a question of how a reader distinguishes between an amateur’s interpretation of what they read, and an expert’s more considered exploration of a topic — something I have wondered about before.

The amateur or non-professional has their place, of course; I am staking mine on this very website. The expert may not always be correct. But adjudicating the information from each is not a realistic assignment for a layperson. Consider the vast genre of multi-hour YouTube essays, or even short but seemingly authoritative TikTok digests of current events. We are ingesting more information than ever before with fewer gatekeepers — for good and otherwise.

Sponsor: Magic Lasso Adblock: 2.0× Faster Web Browsing in Safari magiclasso.co

Want to experience twice as fast load times in Safari on your iPhone, iPad, and Mac?

Then download Magic Lasso Adblock — the ad blocker designed for you. It’s easy to set up, blocks all ads, and doubles the speed at which Safari loads.

Screenshot of Magic Lasso Adblock

Magic Lasso Adblock is an efficient and high performance ad blocker for your iPhone, iPad, and Mac. It simply and easily blocks all intrusive ads, trackers and annoyances in Safari. Just enable to browse in bliss.

By cutting down on ads and trackers, common news websites load 2× faster and use less data.

Over 300,000 users rely on Magic Lasso Adblock to:

  • Improve their privacy and security by removing ad trackers

  • Block annoying cookie notices and privacy prompts

  • Double battery life during heavy web browsing

  • Lower data usage when on the go

And unlike some other ad blockers, Magic Lasso Adblock respects your privacy, doesn’t accept payment from advertisers, and is 100% supported by its community of users.

With over 5,000 five-star reviews, it’s simply the best ad blocker for your iPhone, iPad, and Mac.

Download today via the Magic Lasso website.

My thanks to Magic Lasso Adblock for sponsoring Pixel Envy this week.

Justin Trudeau on ‘Hard Fork’ aaron.vegh.ca

Canadian Prime Minister Justin Trudeau appeared on the New York Times’ “Hard Fork” podcast for a discussion about artificial intelligence, election security, TikTok, and more.

I have to agree with Aaron Vegh:

[…] I loved his messaging on Canada’s place in the world, which is pragmatic and optimistic. He sees his job as ambassador to the world, and he plays the role well.

I just want to pull some choice quotes from the episode that highlight what I enjoyed about Trudeau’s position on technology. He’s not merely well-briefed; he clearly takes an interest in the technology, and has a canny instinct for its implications in society.

I understand Trudeau’s appearance serves as much to promote his government’s efforts in A.I. as it does to communicate any real policy positions — take a sip every time Trudeau mentions how we “need to have a conversation” about something. But I also think co-hosts Kevin Roose and Casey Newton were able to get a real sense of how the Prime Minister thinks about A.I. and Canada’s place in the global tech industry.

Anti Trust in Tech

If you have been looking at the headlines from major research organizations, you will have seen a lack of public confidence in big business, technology companies included. For years, poll after poll from around the world has found high levels of distrust in their influence, their handling of private data, and their new developments.

If these corporations were at all worried about this, they are not showing it much in their products — particularly the A.I. stuff they have been shipping. There has been little attempt at abating last year’s trust crisis. Google decided to launch overconfident summaries for a variety of search queries. Far from helping to sift through all that has ever been published on the web and mash together a representative summary, it was instead an embarrassing mess that made the company look ill-prepared for the concept of satire. Microsoft announced a product which will record and interpret everything you do and see on your computer, and framed it as a good thing.

Can any of them see how this looks? If not — if they really are that unaware — why should we turn to them to fill gaps and needs in society? I certainly would not wish to indulge businesses which see themselves as entirely separate from the world.

It is hard to imagine they do not, though. Sundar Pichai, in an interview with Nilay Patel, recognised there were circumstances in which an A.I. summary would be inappropriate, and cautioned that the company still considers it a work in progress. Yet Google still turned it on by default in the U.S. with plans to expand worldwide this year.

Microsoft has responded to criticism by promising Recall will now be a feature users must opt into, rather than something they must turn off after updating Windows. The company also says there are more security protections for Recall data than originally promised but, based on its track record, maybe do not get too excited yet.

These product introductions all look like hubris. Arrogance, really — recognition of the significant power these corporations wield and the lack of competition they face. Google can poison its search engine because where else are most people going to go? How many people would turn off Recall, something which requires foreknowledge of its existence, under Microsoft’s original rollout strategy?

It is more or less an admission they are all comfortable gambling with their customers’ trust to further the perception they are at the forefront of the new hotness.

None of this is a judgement on the usefulness of these features or their social impact. I remain perplexed by the combination of a crisis of trust in new technologies, and the unwillingness of the companies responsible to engage with the public. There seems to be little attempt at persuasion. Instead, we are told to get on board because this rocket ship is taking off with or without us. Concerned? Too bad: the rocket ship is shaped like a giant middle finger.

What I hope we see Monday from Apple — a company which has portrayed itself as more careful and practical than many of its contemporaries — is a recognition of how this feels from outside the industry. Expect “A.I.” to be repeated in the presentation until you are sick of those two letters; investors are going to eat it up. When normal people update their phones in September, though, they should not feel like they are being bullied into accepting our A.I. future.

People need to be given time to adjust and learn. If the polls are representative, very few people trust giant corporations to get this right — understandably — yet these tech companies seem to believe we are as enthusiastic about every change they make as they are. Sorry, we are not, no matter how big a smile a company representative is wearing when they talk about it. Investors may not be patient but many of the rest of us need time.

Screen Time is Buggy wsj.com

Joanna Stern, Wall Street Journal:

Porn, violent images, illicit drugs. I could see it all by typing a special string of characters into the Safari browser’s address bar. The parental controls I had set via Apple’s Screen Time? Useless.

Security researchers reported this particular software bug to Apple multiple times over the past three years with no luck. After I contacted Apple about the problem, the company said it would release a fix in the next software update. The bug is a bad one, allowing users to easily circumvent web restrictions, although it doesn’t appear to have been well-known or widely exploited.

It seems lots of parents are frustrated by Screen Time. It is not reliable software but, for privacy reasons, it is hard for third parties to differentiate themselves, as they rely on the same framework.

Stern:

  • Screen usage chart. Want to see your child’s screen usage for the day? The chart is often inaccurate or just blank.

I find this chart is always wildly disconnected from actual usage figures for my own devices. My iMac recently reported a week straight of 24-hour screen-on time per day, including through a weekend when I was out of town, because of a web browser tab I left open in the background.

One could reasonably argue nobody should entirely depend on software to determine how devices are used by themselves or their children, but I do not think many people realistically do. It is part of a combination of factors. Screen Time should perform the baseline functions it promises. It sucks how common problems are basically ignored until Stern writes about them.

The Rise and Fall of Preview eclecticlight.co

Howard Oakley:

Prior to Mac OS X, Adobe Acrobat, both in its free viewer form and a paid-for Pro version, were the de facto standard for reading, printing and working with PDF documents on the Mac. The Preview app had originated in NeXTSTEP in 1989 as its image and PDF viewer, and was brought across to early versions of Mac OS X, where it has remained ever since.

The slow decline of Preview — and Mac PDF rendering in general — since MacOS Sierra is one of the more heartbreaking products of Apple’s annual software churn cycle. To be entirely fair, many of the worst bugs have been fixed, but some remain: sometimes, highlights and notes stop working; search is a mess; copying text is unreliable.

Unfortunately, the apps which render PDF files the most predictably and consistently are Adobe Acrobat and Reader. Both became hideous Chromium-based apps at some point and, so, are gigantic packages which behave nothing like Mac software. It is all pretty disappointing.

Update: A Hacker News commenter rightly pointed out that Acrobat and Reader are not truly Electron apps, and are instead Chromium-based apps. That is to say both are generic-brand shitty instead of the name-brand stuff.

Meta’s Big Squeeze wheresyoured.at

Ashley Belanger, reporting for Ars Technica in July 2022 in what I will call “foreshadowing”:

Despite all the negative feedback [over then-recent Instagram changes], Meta revealed on an earnings call that it plans to more than double the number of AI-recommended Reels that users see. The company estimates that in 2023, about a third of Instagram and Facebook feeds will be recommended content.

Ed Zitron:

In this document [leaked to Zitron], they discuss the term “meaningful interactions,” the underlying metric which (allegedly) guides Facebook today. In January 2018, Adam Mosseri, then Head of News Feed, would post that an update to the News Feed would now “prioritize posts that spark conversations and meaningful interactions between people,” which may explain the chaos (and rot) in the News Feed thereafter.

To be clear, metrics around time spent hung around at the company, especially with regard to video, and Facebook has repeatedly and intentionally made changes to manipulate its users to satisfy them. In his book “Broken Code,” Jeff Horwitz notes that Facebook “changed its News Feed design to encourage people to click on the reshare button or follow a page when they viewed a post,” with “engineers altering the Facebook algorithm to increase how often users saw content reshared from people they didn’t know.”

Zitron, again:

When you look at Instagram or Facebook, I want you to try and think of them less as social networks, and more as a form of anthropological experiment. Every single thing you see on either platform is built or selected to make you spend more time on the app and see more things that Meta wants you to see, be they ads, sponsored content, or suggested groups that you can interact with, thus increasing the amount of your “time spent” on the app, and increasing the amount of “meaningful interactions” you have with content.

Zitron is a little too eager, for my tastes, to treat Meta’s suggestions of objectionable and controversial posts as deliberate. It seems much more likely the company simply sucks at moderating this stuff at scale and is throwing in the towel.

Kurt Wagner, Bloomberg:

In late 2021, TikTok was on the rise, Facebook interactions were declining after a pandemic boom and young people were leaving the social network in droves. Chief Executive Officer Mark Zuckerberg assembled a handful of veterans who’d built their careers on the Big Blue app to figure out how to stop the bleeding, including head of product Chris Cox, Instagram boss Adam Mosseri, WhatsApp lead Will Cathcart and head of Facebook, Tom Alison.

During discussions that spanned several meetings, a private WhatsApp group, and an eventual presentation at Zuckerberg’s house in Palo Alto, California, the group came to a decision: The best way to revive Facebook’s status as an online destination for young people was to start serving up more content from outside a person’s network of friends and family.

Jason Koebler, 404 Media:

At first, previously viral (but real) images were being run through image-to-image AI generators to create a variety of different but plausibly believable AI images. These images repeatedly went viral, and seemingly tricked real people into believing they were real. I was able to identify a handful of the “source” or “seed” images that formed the basis for this type of content. Over time, however, most AI images on Facebook have gotten a lot easier to identify as AI and a lot more bizarre. This is presumably happening because people will interact with the images anyway, or the people running these pages have realized they don’t need actual human interaction to go viral on Facebook.

Sarah Perez, TechCrunch:

Instagram confirmed it’s testing unskippable ads after screenshots of the feature began circulating across social media. These new ad breaks will display a countdown timer that stops users from being able to browse through more content on the app until they view the ad, according to informational text displayed in the Instagram app.

These pieces each seem like they are circling a theme of a company finding the upper bound of its user base, and then squeezing it for activity, revenue, and promising numbers to report to investors. Unlike Zitron, I am not convinced we are watching Facebook die. I think Koebler is closer to the truth: we are watching its zombification.

Inside the Copilot Recall ‘Disaster’ doublepulsar.com

Kevin Beaumont:

At a surface level, it [Recall] is great if you are a manager at a company with too much to do and too little time as you can instantly search what you were doing about a subject a month ago.

In practice, that audience’s needs are a very small (tiny, in fact) portion of Windows userbase — and frankly talking about screenshotting the things people in the real world, not executive world, is basically like punching customers in the face. The echo chamber effect inside Microsoft is real here, and oh boy… just oh boy. It’s a rare misfire, I think.

Via Eric Schwarz:

This fact that this feature is basically on by default and requires numerous steps to disable is going to create a lot of problems for people, especially those who click through every privacy/permission screen and fundamentally don’t know how their computer actually operates — I’ve counted way too many instances where I’ve had to help people find something and they have no idea where anything lives in their file system (mostly work off the Desktop or Downloads folders). How are they going to even grapple with this?

The problems with Recall remind me of the minor 2017 controversy around “brassiere” search results in Apple’s Photos app. Like Recall, it is entirely an on-device process with some security and privacy protections. In practice, automatically cataloguing all your photos which show a bra is kind of creepy, even if it is being done only with your own images on your own phone.

Shit’s on Fire, Yo! youtube.com

Deviant Ollam gave a brand new talk at CackalackyCon this year about fire safety standards from a pentesting perspective. It is as entertaining as just about anything you may have seen from Ollam, despite being about two hours long.

Caitlin Dewey Wants to See Your Old Gmail Messages linksiwouldgchatyou.substack.com

Caitlin Dewey:

In April, Gmail turned 20; the service is two-thirds as old as I am. “We now have a huge accidental archive of our collective past,” wrote the editors at New York, to mark the occasion.

[…]

You have emails like this too, I’d imagine — happy emails and sad ones. Emails lost to time or memory or the unrelenting deluge of other, newer messages. Maybe it’s the first or last email you got from someone you treasure, or an announcement that changed your life, or a conversation you remembered wrong. Whatever forms this sort of long-lost email takes for you, I would love to see them.

If you would like to participate, there are more details in Dewey’s post, or you can visit the Google Form. Obviously, you can also forward messages to Dewey at linksiwouldgchatyou@gmail.com, because if this project did not have a Gmail address, it would be a shame.

See Also: UIs with accidental memories, previously linked.

Two TikTok Updates reuters.com

Drew Harwell, Washington Post:

But the extent to which the United States evaluated or disregarded TikTok’s proposal, known as Project Texas, is likely to be a core point of dispute in court, where TikTok and its owner, ByteDance, are challenging the sale-or-ban law as an “unconstitutional assertion of power.”

The episode raises questions over whether the government, when presented with a way to address its concerns, chose instead to back an effort that would see the company sold to an American buyer, even though some of the issues officials have warned about — the opaque influence of its recommendation algorithm, the privacy of user data — probably would still be unresolved under new ownership.

You may recognize the deal Harwell is writing about if you read my exploration of the divestment law. While TikTok claimed in its lawsuit (PDF) that the Biden administration was the party responsible for cancelling this deal with CFIUS, I did not see that confirmed anywhere else. Harwell’s reporting appears to support TikTok’s side of events. Still, there is frustratingly little explanation for why the U.S. was unsatisfied with this settlement.

Krystal Hu and Sheila Dang, Reuters:

TikTok is working on a clone of its recommendation algorithm for its 170 million U.S. users that may result in a version that operates independently of its Chinese parent and be more palatable to American lawmakers who want to ban it, according to sources with direct knowledge of the efforts.

The work on splitting the source code ordered by TikTok’s Chinese parent ByteDance late last year predated a bill to force a sale of TikTok’s U.S. operations that began gaining steam in Congress this year. The bill was signed into law in April.

TikTok says this story is “misleading and factually inaccurate” and reiterates that divestiture is, according to them, impossible. But TikTok already began preparing for this eventuality in 2020, so it is hard to believe the company would not want to figure out ways to make this possible should its current lawsuit fail and the law be allowed to stand.

Amazon Executives May Be Personally Liable for Unintentional Prime Registrations arstechnica.com

Ashley Belanger, Ars Technica:

But the judge apparently did not find Amazon’s denials completely persuasive. Viewing the FTC’s complaint “in the light most favorable to the FTC,” Judge John Chun concluded that “the allegations sufficiently indicate that Amazon had actual or constructive knowledge that its Prime sign-up and cancellation flows were misleading consumers.”

[…]

One such trick that Chun called out saw Amazon offering two-day free shipping with the click of a button at checkout that also signed customers up for Prime even if they didn’t complete the purchase.

“With the offer of Amazon Prime for the purpose of free shipping, reasonable consumers could assume that they would not proceed with signing up for Prime unless they also placed their order,” Chun said, ultimately rejecting Amazon’s claims that all of its “disclosures would be clear and conspicuous to any reasonable consumer.”

This is far from the only instance of scumbag design cited by Chun, and it is bizarre to me that anybody would defend choices like these.

Inner Workings tyler.io

I have very little to add to Tyler Hall’s idea for revealing per-document settings, other than to say that it is so joyful and it makes complete sense. If you watch one thirty-second design demo today, make it this one.

Battery Replacements Should Be the Easiest Repair for Any Device lapcatsoftware.com

Jeff Johnson:

Yesterday I took the M1 MacBook Pro to my local Apple-authorized service provider that I’ve been going to for many years, who performed all of the work on my Intel MacBook Pro, including the battery replacements and a Staingate screen replacement. This is a third-party shop, not an Apple Store. To my utter shock, they told me that they couldn’t replace the battery in-house, because starting with the Apple silicon transition, Apple now requires that the MacBook Pro be mailed in to Apple for battery replacement! What. The. Hell.

The battery in my 14-inch MacBook Pro seems to be doing okay, with 89% capacity remaining after nearly two years of use. But I hope to use it for as long as I did my MacBook Air — about ten years — and I swapped its battery twice. This spooked me. So I called my local third-party repair place and asked them about replacing the battery. They told me they could change it in the store with same-day turnaround for $350, about the same as what Apple charges, using official parts. It is unclear to me whether an Apple Store could replace the battery in-store or would need to send it out, but every Mac service I have had from my local Apple Store has required me to leave my computer with them for several days.

The situation likely varies by geography. Apple’s Self Service Repair program is not available in Canada, which means a battery swap has to be done either by a technician, or using unofficial parts. If you are concerned about this, I recommend contacting your local shops and seeing what their policies are like.

In a recent interview with Marques Brownlee, John Ternus, Apple’s head of hardware engineering, compared ease of repair and long-term durability:

On an iPhone, on any phone, a battery is something […] that’s gonna need to be replaced, right? Batteries wear out. But as we’ve been making iPhones for a long time, in the early days, one of the most common types of failures was water ingress, right? Where you drop it in a pool, or you spill your drink on it, and the unit fails. And so we’ve been making strides over all those years to get better and better and better in terms of minimizing those failures.

This is a fair argument. While Apple has not — to my knowledge — acknowledged any improvements to liquid resistance on MacBook Pros, I spilled half a glass of water across mine in November, and it suffered no damage whatsoever. Ternus’ point is that Apple’s solution for preventing liquid damage to all components, including the battery, compromised the ease of repairing an iPhone, but the company saw it as a reasonable trade-off.

But it is also a bit of a red herring for two reasons. The first is that Apple actually made recent iPhone models more repairable without reducing water or dust resistance, indicating this compromise is not exactly as simple as Ternus implies. It is possible to have easier repairs and better durability.

The second reason is because batteries eventually need replacing on all devices. They are a consumable good with a finite — though not always predictable — lifespan, most often shorter than the actual lifetime usability of the product. The only reason I do not use my AirPods any more is because the battery in each bud lasts less than twenty minutes; everything else is functional. If there is any repair which should be straightforward and doable without replacing unrelated components or the entire device, it is the battery.

See Also: The comments on Michael Tsai’s post.

Apple finished naming what it — well, its “team of experts alongside a select group of artists […] songwriters, producers, and industry professionals” — believes are the hundred best albums of all time. Like pretty much every list of the type, it is overwhelmingly Anglocentric, there are obvious picks, surprise appearances good and bad, and snubs.

I am surprised the publication of this list has generated as much attention as it has. There is a whole Wall Street Journal article with more information about how it was put together, a Slate thinkpiece arguing this ranking “proves [Apple has] lost its way”, and a Variety article claiming it is more-or-less “rage bait”.

Frankly, none of this feels sincere. Not Apple’s list, and not the coverage treating it as meaningful art criticism. I am sure there are people who worked hard on it — Apple told the Journal “about 250” — and truly believe their rating carries weight. But it is fluff.

Make no mistake: this is a promotional exercise for Apple Music more than it is criticism. Sure, most lists of this type are also marketing for publications like Rolling Stone and Pitchfork and NME. Yet, for how tepid the opinions of each outlet often are, they have each given out bad reviews. We can therefore infer they have specific tastes and ideas about what separates great art from terrible art.

Apple has never said a record is bad. It has never made you question whether the artist is trying their best. It has never presented criticism so thorough it makes you wince on behalf of the people who created the album.

Perhaps the latter is a poor metric. After Steve Jobs’ death came a river of articles questioning the internal culture he fostered, with several calling him an “asshole”. But that is mixing up a mean streak and a critical eye — Jobs, apparently, had both. A fair critic can use their words to dismantle an entire project and explain why it works or, just as important, why it does not. The latter can hurt; ask any creative person who has been on the receiving end. Yet exploring why something is not good enough is an important skill to develop as both a critic and a listener.

Dan Brooks, Defector:

There has been a lot of discussion about what music criticism is for since streaming reduced the cost of listening to new songs to basically zero. The conceit is that before everything was free, the function of criticism was to tell listeners which albums to buy, but I don’t think that was ever it. The function of criticism is and has always been to complicate our sense of beauty. Good criticism of music we love — or, occasionally, really hate — increases the dimensions and therefore the volume of feeling. It exercises that part of ourselves which responds to art, making it stronger.

There are huge problems with the way music has historically been critiqued, most often along racial and cultural lines. There are still problems. We will always disagree about the fairness of music reviews and reviewers.

Apple’s list has nothing to do with any of that. It does not interrogate which albums are boring, expressionless, uncreative, derivative, inconsequential, inept, or artistically bankrupt. So why should we trust it to explain what is good? Apple’s ranking of albums lacks substance because it cannot say any of these things. Doing so would be a terrible idea for the company and for artists.

It is beyond my understanding why anyone seems to be under the impression this list is anything more than a business reminding you it operates a music streaming platform to which you can subscribe for eleven dollars per month.


Speaking of the app — some time after I complained there was no way in Apple Music to view the list, Apple added a full section, which I found via foursliced on Threads. It is actually not bad. There are stories about each album, all the reveal episodes from the radio show, and interviews.

You will note something missing, however: a way to play a given album. That is, one cannot visit this page in Apple Music, see an album on the list they are interested in, and simply tap to hear it. There are play buttons on the website and, if you are signed in with your Apple Music account, you can add them to your library. But I cannot find a way to do any of this from within the app.

Benjamin Mayo found a list, but I cannot find it through search or simply by browsing. Why is this not a more obvious feature? It makes me feel like a dummy.

B.C. Winemakers Grapple With the Climate Crisis thenarwhal.ca

Paloma Pacheco, the Narwhal:

Just a year after the extreme temperature drop in December 2022, another deep freeze descended on wine growers. For several days in January 2024, temperatures across the Okanagan and Similkameen, as well as in the Thompson Valley to the north, dropped below -25 C from unseasonable daytime highs of 10 to 13 C (Canada’s warmest winter on record). The damage from the previous winter’s cold snap had already resulted in a nearly 60 per cent loss of grape and wine production across the province. For the 2024 harvest, the industry is predicting a 97 to 99 per cent loss from both bud and vine damage. In short: decimation.

I am still in shock over how devastating this single cold snap was for so many Okanagan winemakers. It sounds like they are done grieving and are trying to make the most of it, but it is going to be a difficult few years — at least.

The Deskilling of Web Development baldurbjarnason.com

Baldur Bjarnason:

But instead we’re all-in on deskilling the industry. Not content with removing CSS and HTML almost entirely from the job market, we’re now shifting towards the model where devs are instead “AI” wranglers. The web dev of the future will be an underpaid generalist who pokes at chatbot output until it runs without error, pokes at a copilot until it generates tests that pass with some coverage, and ships code that nobody understand and can’t be fixed if something goes wrong.

There are parallels in the history of software development to the various abstractions accumulated in a modern web development stack. Heck, you can find people throughout history bemoaning how younger generations lack some fundamental knowledge since replaced by automation or new technologies. It is always worth a gut-check about whether newer ideas are actually better. In the case of web development, what are we gaining and losing by eventually outsourcing much of it to generative software?

I think Bjarnason is mostly right: if web development becomes accessible by most through layers of A.I. and third-party frameworks, it is abstracted to such a significant extent that it becomes meaningless gibberish. In fairness, the way plain HTML, CSS, and JavaScript work is — to many — meaningless gibberish. It really is better for many people that creating things for the web has become something which does not require a specialized skillset beyond entering a credit card number. But that is distinct from web development. When someone has code-level responsibility, they have an obligation to understand how things work.

How Shein and Temu Snuck Up on Amazon bigtechnology.com

Louise Matsakis, Big Technology:

Shein and Temu’s users aren’t just browsing. Shein reportedly earned roughly $45 billion last year, and is currently trying to go public. PDD Holdings, Temu’s Chinese parent company, reported earlier this week that its revenue surged more than 130% in the first quarter. PDD is now the most valuable e-commerce company in China.

The two startups are sending so many orders from China to the US that it’s causing air cargo rates to spike, and USPS workers have said publicly that they are overwhelmed by the sheer volume of Temu’s signature bright orange packages they have to deliver. “I’m tired of this Temu shit, ya’ll killing me,” one mailman said in a TikTok video last year with over two million likes. “Everyday it’s Temu, Temu, Temu — I’m Temu tired.”

You might recognize how both Shein and Temu grew using the same tactic as TikTok: relentless advertising. (Which is something Snap CEO Evan Spiegel complained about despite TikTok’s huge spending on Snapchat.)

Both these companies are an aggressive distillation of plentiful supply and low cost to buyers. For people with lower incomes or who are economically stressed, the extreme affordability they offer can be a lifeline. Not everybody who shops with either fits that description; Matsakis cites a UBS report finding an average Shein customer earns $65,000 per year and spends more than $100 per month on clothes. But there are surely plenty of people who shop on both sites — and Amazon — because they simply cannot afford to buy anywhere else.

Every time I think about these retailers, I cannot shake a pervasive sadness. Saddened by how some people in rich countries have been compromised so much they rely on stores they may have moral qualms with. Saddened by the ripple effect of exploitation. Saddened by the environmental cost of producing, shipping, and disposing of these often brittle products — a wasteful exercise for many customers who can afford longer-lasting goods, and the many people who cannot.

Derek Guy has written about the brutality of the garment industry in the U.S., but notes how clearly different these fast and ultra-fast fashion brands are from inexpensive clothing:

Given the opacity in the supply chain, your best single measure for whether something is amiss is price. If you are paying $5 for a cut-and-sewn shirt, something bad is happening. Does this mean that every expensive shirt was ethically made? No. But you know the $5 shirt is bad.

Guy also wrote about the difference between cheap and fast fashion.

This whole industry bums me out because I try to appreciate clothing and fashion. I like finding things I like, dressing a particular way, and putting some effort into how I present myself. Yet every peek behind the curtain is a mountain of waste and abuse, and the worst offenders are companies like Shein and Temu — and, for what it is worth, AliExpress and facilitators like Amazon.

More on That Zombie Photos Bug 9to5mac.com

The bad news: Apple shipped an alarming bug in iOS 17.5 which sometimes revealed photos previously deleted by the user and, in the process, created a reason for users to mistrust how their data is handled. This was made especially confusing by Apple’s lack of commentary.

The good news: Apple patched the bug within a week. Also, the lone story about deleted photos reappearing on a wiped iPad given to someone else was deleted and seems to be untrue.

The bad news: aside from acknowledging this “rare issue where photos that experienced database corruption could reappear in the Photos library even if they were deleted”, there was still little information about exactly what happened. Users quite reasonably expect things they deleted to stay deleted, and when they do not, they are going to have some questions.

The good news: as I predicted, Apple gave an explanation to 9to5Mac, which generously allowed for it to be on background. Chance Miller:

One question many people had is how images from dates as far back as 2010 resurfaced because of this problem. After all, most people aren’t still using the same devices now as they were in 2010. Apple confirmed to me that iCloud Photos is not to be blamed for this. Instead, it all boils down to the corrupt database entry that existed on the device’s file system itself.

A much more technically-minded answer was provided by Synacktiv, a security firm that reverse-engineered the bug fix release and compared it to the original 17.5 release.

Bugs are only as bad as the effects they have. I heard from multiple readers who said this bug damaged how much they trust iOS and Apple. This is self-selecting — I likely would not have heard from people who both experienced this bug and thought it was no big deal. But I imagine a normal user who does not read 9to5Mac and finds their deleted photos restored is still going to be spooked.

Killing Time for TikTok

Finally. The government of the United States finally passed a law that would allow it to force the sale of, or ban, software and websites from specific countries of concern. The target is obviously TikTok — it says so right in its text — but crafty lawmakers have tried to add enough caveats and clauses and qualifiers to, they hope, avoid it being characterized as a bill of attainder, and to permit future uses. This law is very bad. It is an ineffective and illiberal position that abandons democratic values over, effectively, a single app. Unfortunately, TikTok panic is a very popular position in the U.S. and, also, here in Canada.

The adversaries the U.S. is worried about are the “covered nations” defined in 2018 to restrict the acquisition by the U.S. of key military materials from four countries: China, Iran, North Korea, and Russia. The idea behind this definition was that it was too risky to procure magnets and other important components of, say, missiles and drones from a nation the U.S. considers an enemy, lest those parts be compromised in some way. So the U.S. wrote down its least favourite countries for military purposes, and that list is now being used in a bill intended to limit TikTok’s influence.

According to the law, it is illegal for any U.S. company to make available TikTok and any other ByteDance-owned app — or any app or website deemed a “foreign adversary controlled application” — to a user in the U.S. after about a year unless it is sold to a company outside the covered countries, and with no more than twenty percent ownership stake from any combination of entities in those four named countries. Theoretically, the parent company could be based nearly anywhere in the world; practically, if there is a buyer, it will likely be from the U.S. because of TikTok’s size. Also, the law specifically exempts e-commerce apps for some reason.
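The law’s ownership test described above reduces to simple arithmetic: sum the stakes held by entities in the four covered nations and compare against the twenty percent cap. As a toy sketch only — the figures and country breakdown below are invented for illustration, not actual ByteDance ownership data:

```python
# Toy sketch of the divestment law's ownership test: an app remains a
# "foreign adversary controlled application" if entities from the four
# covered nations together hold more than a 20% stake.
COVERED_NATIONS = {"China", "Iran", "North Korea", "Russia"}

def exceeds_threshold(stakes, limit=0.20):
    """stakes: dict mapping country to fractional ownership (summing to <= 1)."""
    covered_share = sum(share for country, share in stakes.items()
                        if country in COVERED_NATIONS)
    return covered_share > limit

# Hypothetical ownership split: 15% + 10% from covered nations is 25%,
# which exceeds the 20% cap.
print(exceeds_threshold({"United States": 0.60, "Cayman Islands": 0.15,
                         "China": 0.15, "Russia": 0.10}))
```

Note the threshold applies to the combined stake across all four countries, not to any single holder, which is why the sketch sums before comparing.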

This could be interpreted as either creating an isolated version specifically for U.S. users or, as I read it, moving the global TikTok platform to a separate organization not connected to ByteDance or China.1 ByteDance’s ownership is messy, though mostly U.S.-based, but politicians worried about its Chinese origin have had enough, to the point they are acting with uncharacteristic vigour. The logic seems to be that it is necessary for the U.S. government to influence and restrict speech in order to prevent other countries from influencing or restricting speech in ways the U.S. thinks are harmful. That is, the problem is not so much that TikTok is foreign-owned, but that it has ownership ties to a country often antithetical to U.S. interests. TikTok’s popularity might, it would seem, be bad for reasons of espionage or influence — or both.

Power

So far, I have focused on the U.S. because it is the country that has taken the first step to require non-Chinese control over TikTok — at least for U.S. users but, due to the scale of its influence, possibly worldwide. It could force a business to entirely change its ownership structure. So it may look funny for a Canadian to explain their views of what the U.S. ought to do in a case of foreign political interference. This is a matter of relevance in Canada as well. Our federal government raised the alarm on “hostile state-sponsored or influenced actors” influencing Canadian media and said it had ordered a security review of TikTok. There was recently a lengthy public inquiry into interference in Canadian elections, with a special focus on China, Russia, and India. Clearly, the popularity of a Chinese application is, in the eyes of these officials, a threat.

Yet it is very hard not to see the rush to kneecap TikTok’s success as a protectionist reaction to shaking the U.S. dominance of consumer technologies, as convincingly expressed by Paris Marx at Disconnect:

In Western discourses, China’s internet policies are often positioned solely as attempts to limit the freedoms of Chinese people — and that can be part of the motivation — but it’s a politically convenient explanation for Western governments that ignores the more important economic dimension of its protectionist approach. Chinese tech is the main competitor to Silicon Valley’s dominance today because China limited the ability of US tech to take over the Chinese market, similar to how Japan and South Korea protected their automotive and electronics industries in the decades after World War II. That gave domestic firms the time they needed to develop into rivals that could compete not just within China, but internationally as well. And that’s exactly why the United States is so focused not just on China’s rising power, but how its tech companies are cutting into the global market share of US tech giants.

This seems like one reason why the U.S. has so aggressively pursued a divestment or ban since TikTok’s explosive growth in 2019 and 2020. On its face it is similar to some reasons why the E.U. has regulated U.S. businesses that have, it argues, disadvantaged European competitors, and why Canadian officials have tried to boost local publications that have seen their ad revenue captured by U.S. firms. Some lawmakers make it easy to argue it is a purely xenophobic reaction, like Senator Tom Cotton, who spent an exhausting minute questioning TikTok’s Singaporean CEO Shou Zi Chew about where he is really from. But I do not think it is entirely a protectionist racket.

A mistake I have made in the past — and which I have seen some continue to make — is assuming those who are in favour of legislating against TikTok are opposed to the kinds of dirty tricks it is accused of on principle. This is false. Many of these same people would be all too happy to allow U.S. tech companies to do exactly the same. I think the most generous version of this argument is one in which it is framed as a dispute between the U.S. and its democratic allies, and anxieties about the government of China — ByteDance is necessarily connected to the autocratic state — spreading messaging that does not align with democratic government interests. This is why you see few attempts to reconcile common objections over TikTok with the quite similar behaviours of U.S. corporations, government arms, and intelligence agencies. To wit: U.S.-based social networks also suggest posts with opaque math which could, by the same logic, influence elections in other countries. They also collect enormous amounts of personal data that is routinely wiretapped, and are required to secretly cooperate with intelligence agencies. The U.S. is not authoritarian as China is, but the behaviours in question are not unique to authoritarians. Those specific actions are unfortunately not what the U.S. government is objecting to. What it is disputing, in a most generous reading, is a specifically antidemocratic government gaining any kind of influence.

Espionage and Influence

It is easiest to start by dismissing the espionage concerns because they are mostly misguided. The peek into Americans’ lives offered by TikTok is no greater than that offered by countless ad networks and data brokers — something the U.S. is also trying to restrict more effectively through a comprehensive federal privacy law. So long as online advertising is dominated by a privacy-hostile infrastructure, adversaries will be able to take advantage of it. If the goal is to restrict opportunities for spying on people, it is idiotic to pass legislation against TikTok specifically instead of limiting the data industry.

But the charge of influence seems to have more to it, even though nobody has yet shown that TikTok is warping users’ minds in a (presumably) pro-China direction. Some U.S. lawmakers described its danger as “theoretical”; others seem positively terrified. There are a few different levels to this concern: are TikTok users uniquely subjected to Chinese government propaganda? Is TikTok moderated in a way that boosts or buries videos to align with Chinese government views? Finally, even if both of these things are true, should the U.S. be able to revoke access to software if it promotes ideologies or viewpoints — and perhaps explicit propaganda? As we will see, it looks like TikTok sometimes tilts in ways beneficial — or, at least, less damaging — to Chinese government interests, but there is no evidence of overt government manipulation and, even if there were, it is objectionable to require it to be owned by a different company or ban it.

The main culprit, it seems, is TikTok’s “uncannily good” For You feed that feels as though it “reads your mind”. Instead of users telling TikTok what they want to see, it just begins showing videos and, as people use the app, it figures out what they are interested in. How it does this is not actually that mysterious. A 2021 Wall Street Journal investigation found recommendations were made mostly based on how long you spent watching each video. Deliberate actions — like sharing and liking — play a role, sure, but if you scroll past videos of people and spend more time with a video of a dog, it learns you want dog videos.

That is not so controversial compared to the opacity in how TikTok decides what specific videos are displayed and which ones are not. Why is this particular dog video in a user’s feed and not another similar one? Why is it promoting videos reflecting a particular political viewpoint or — so a popular narrative goes — burying those with viewpoints uncomfortable for its Chinese parent company? The mysterious nature of an algorithmic feed is the kind of thing into which you can read a story of your choosing. A whole bunch of X users are permanently convinced they are being “shadow banned” whenever a particular tweet does not get as many likes and retweets as they believe it deserved, for example, and were salivating at the thought of the company releasing its ranking code to solve a nonexistent mystery. There is a whole industry of people who say they can get your website to Google’s first page for a wide range of queries using techniques that are a mix of the plausible and the utterly ridiculous. Opaque algorithms make people believe in magic. An alarmist reaction to TikTok’s feed should be expected, particularly as it was the first popular app designed entirely around recommended material instead of personal or professional connections. This has now been widely copied.

The mystery of that feed is a discussion which seems to have been ongoing basically since the 2018 merger of Musical.ly and TikTok, escalating rapidly to calls for it to be separated from its Chinese owner or banned altogether. In 2020, the White House attempted to force a sale by executive order. In response, TikTok created a plan to spin off an independent entity, but nothing materialized from this tense period.

March 2023 brought a renewed effort to divest or ban the platform. Shou Zi Chew, TikTok’s CEO, was called to a U.S. Congressional hearing and questioned for hours, to little effect. During that hearing, a report prepared for the Australian government was cited by some of the lawmakers, and I think it is a telling document. It is about eighty pages long — excluding its table of contents, appendices, and citations — and shows several examples of Chinese government influence on other products made by ByteDance. However, the authors found no such manipulation on TikTok itself, leading them to conclude:

In our view, ByteDance has demonstrated sufficient capability, intent, and precedent in promoting Party propaganda on its Chinese platforms to generate material risk that they could do the same on TikTok.

“They could do the same”, emphasis mine. In other words, if the authors had found TikTok boosting topics and videos on behalf of the Chinese government, they would have said so. They did not. The closest thing I could find to a covert propaganda campaign on TikTok anywhere in this report is this:

The company [ByteDance] tried to do the same on TikTok, too: In June 2022, Bloomberg reported that a Chinese government entity responsible for public relations attempted to open a stealth account on TikTok targeting Western audiences with propaganda”. [sic]

If we follow the Bloomberg citation — shown in the report as a link to the mysterious Archive.today site — the fuller context of the article by Olivia Solon disproves the impression you might get from reading the report:

In an April 2020 message addressed to Elizabeth Kanter, TikTok’s head of government relations for the UK, Ireland, Netherlands and Israel, a colleague flagged a “Chinese government entity that’s interested in joining TikTok but would not want to be openly seen as a government account as the main purpose is for promoting content that showcase the best side of China (some sort of propaganda).”

The messages indicate that some of ByteDance’s most senior government relations team, including Kanter and US-based Erich Andersen, Global Head of Corporate Affairs and General Counsel, discussed the matter internally but pushed back on the request, which they described as “sensitive.” TikTok used the incident to spark an internal discussion about other sensitive requests, the messages state.

This is the opposite conclusion to how this story was set up in the report. Chinese government public relations wanted to set up a TikTok account without any visible state connection and, when TikTok management found out about this, it said no. This Bloomberg article makes TikTok look good in the face of government pressure, not like it capitulates. Yes, it is worth being skeptical of this reporting. Yet if TikTok had acquiesced to the government’s demands, surely the report would have provided some evidence.

While this report for the Australian Senate does not show direct platform manipulation, it does present plenty of examples where it seems like TikTok may be biased or self-censoring. Its authors cite stories from the Washington Post and Vice finding that searches for hashtags like #HongKong and #FreeXinjiang returned results favourable to the official Chinese government position. Sometimes, related posts did not appear in search results, which is not unique to TikTok — platforms regularly use crude search term filtering to restrict discovery for lots of reasons. I would not be surprised if bias or self-censorship were to blame for TikTok minimizing the visibility of posts critical of the subjugation of Uyghurs in China. However, it is basically routine for every social media product to be accused of suppression. The Markup found that different types of posts on Instagram, for example, had captions altered or would no longer appear in search results, though it is unclear why that is the case. Meta said it was a bug, an explanation also offered frequently by TikTok.

The authors of the Australian report conducted a limited quasi-study comparing results for certain topics on TikTok to results on other social networks like Instagram and YouTube, again finding a handful of topics which favoured the government line. But there was no consistent pattern, either. Search results for “China military” on Instagram were, according to the authors, “generally flattering”, and X searches for “PLA” scarcely returned unfavourable posts. Yet results on TikTok for “China human rights”, “Tiananmen”, and “Uyghur” were overwhelmingly critical of Chinese official positions.

The Network Contagion Research Institute published its own report in December 2023, similarly finding disparities between the total number of posts with specific hashtags — like #DalaiLama and #TiananmenSquare — on TikTok and Instagram. However, the study contained some pretty fundamental errors, as pointed out by — and I cannot believe I am citing these losers — the Cato Institute. The study’s authors compared total lifetime posts on each social network and, while they say they expect 1.5–2.0× the posts on Instagram because of its larger user base, they do not factor in how many of those posts could have existed before TikTok was even launched. Furthermore, they assume similar cultures and a similar use of hashtags on each app. But even benign hashtags have ridiculous differences in how often they are used on each platform. There are, as of writing, 55.3 million posts tagged “#ThrowbackThursday” on Instagram compared to 390,000 on TikTok, a ratio of 141:1. If #ThrowbackThursday were part of this study, the disparity on the two platforms would rank similarly to #Tiananmen, one of the greatest in the Institute’s report.

The problem with most of these complaints, as their authors acknowledge, is that there is a known input and a perceived output, but there are oh-so-many unknown variables in the middle. It is impossible to know how much of what we see is a product of intentional censorship, unintentional biases, bugs, side effects of other decisions, or a desire to cultivate a less stressful and more saccharine environment for users. A report by Exovera (PDF) prepared for the U.S.–China Economic and Security Review Commission indicates exactly the latter: “TikTok’s current content moderation strategy […] adheres to a strategy of ‘depoliticization’ (去政治化) and ‘localization’ (本土化) that seeks to downplay politically controversial speech and demobilize populist sentiment”, apparently avoiding “algorithmic optimization in order to promote content that evangelizes China’s culture as well as its economic and political systems” which “is liable to result in backlash”. Meta, on its own platforms, said it would not generally suggest “political” posts to users but did not define exactly what qualifies. It said it was limiting posts on social issues because of user demand, though these types of posts have also been difficult to moderate. A difference in which posts are found on each platform for specific search terms is not necessarily reflective of government pressure, deliberate or not. Besides, it is not as though there is no evidence for straightforward propaganda on TikTok. One just needs to look elsewhere to find it.

Propaganda

The Office of the Director of National Intelligence recently released its annual threat assessment summary (PDF). It is unclassified and has few details, so the only thing it notes about TikTok is “accounts run by a PRC propaganda arm reportedly targeted candidates from both political parties during the U.S. midterm election cycle in 2022”. It seems likely to me this is a reference to this article in Forbes, though this is a guess as there are no citations. The state-affiliated TikTok account in question — since made private — posted a bunch of news clips which portray the U.S. in an unflattering light. There is a related account, also marked as state-affiliated, which continues to post the same kinds of videos. It has over 33,000 followers, which sounds like a lot, but each post is typically getting only a few hundred views. Some have been viewed thousands of times, others as few as thirteen times as of writing — on a platform with exaggerated engagement numbers. Nonetheless, the conclusion is obvious: these accounts are government propaganda, and TikTok willingly hosts them.

But that is something it has in common with all social media platforms. The Russian RT News network and China’s People’s Daily newspaper have X and Facebook accounts with follower counts in the millions. Until recently, the North Korean newspaper Uriminzokkiri operated accounts on Instagram and X. It and other North Korean state-controlled media used to have YouTube channels, too, but they were shut down by YouTube in 2017 — a move that was protested by academics studying the regime’s activities. The irony of U.S.-based platforms helping to disseminate propaganda from the country’s adversaries is that it can be useful to understand them better. Merely making propaganda available — even promoting it — is both a risk and a benefit of generous speech permissions.

The DNI’s unclassified report has no details about whether TikTok is an actual threat, and the FBI has “nothing to add” in response to questions about whether TikTok is currently doing anything untoward. More secretive information was apparently provided to U.S. lawmakers ahead of their March vote and, though few details of what, exactly, was said have been made public, several were not persuaded by what they heard, including Rep. Sara Jacobs of California:

As a member of both the House Armed Services and House Foreign Affairs Committees, I am keenly aware of the threat that PRC information operations can pose, especially as they relate to our elections. However, after reviewing the intelligence, I do not believe that this bill is the answer to those threats. […] Instead, we need comprehensive data privacy legislation, alongside thoughtful guardrails for social media platforms – whether those platforms are funded by companies in the PRC, Russia, Saudi Arabia, or the United States.

Lawmakers like Rep. Jacobs were an exception among U.S. Congresspersons who, across party lines, were eager to make the case against TikTok. Ultimately, the divest-or-ban bill got wrapped up in a massive and politically popular spending package agreed to by both chambers of Congress. Its passage was enthusiastically received by the White House and it was signed into law within hours. Perhaps that outcome is the democratic one since polls so often find people in the U.S. support a sale or ban of TikTok.

I get it: TikTok scoops up private data, suggests posts based on opaque criteria, its moderation appears to be susceptible to biases, and it is a vehicle for propaganda. But you could replace “TikTok” in that sentence with any other mainstream social network and it would be just as true, albeit less scary to U.S. allies on its face.

A Principled Objection

Forcing TikTok to change its ownership structure, whether worldwide or only for a U.S. audience, is a betrayal of liberal democratic principles. To borrow from Jon Stewart, “if you don’t stick to your values when they’re being tested, they’re not values, they’re hobbies”. It is not surprising that a Canadian intelligence analysis specifically pointed out how those very same values are being taken advantage of by bad actors. This is not new. It is true of basically all positions hostile to democracy — from domestic nationalist groups in Canada and the U.S., to those which originate elsewhere.

Julian G. Ku, for China File, offered a seemingly reasonable rebuttal to this line of thinking:

This argument, while superficially appealing, is wrong. For well over one hundred years, U.S. law has blocked foreign (not just Chinese) control of certain crucial U.S. electronic media. The Protect Act [sic] fits comfortably within this long tradition.

Yet this counterargument falls apart both in its details and if you think about its further consequences. As Martin Peers writes at the Information, the U.S. does not prohibit all foreign ownership of media. And governing the internet like public airwaves gets way more complicated if you stretch it any further. Canada has broadcasting laws, too, and it is not alone. Should every country begin requiring social media platforms comply with laws designed for ownership of broadcast media? Does TikTok need disconnected local versions of its product in each country in which it operates? It either fundamentally upsets the promise of the internet, or it is mandating the use of protocols instead of platforms.

It also looks hypocritical. Countries with a more authoritarian bent and which openly censor the web have responded to even modest U.S. speech rules with mockery. When RT America — technically a U.S. company with Russian funding — was required to register as a foreign agent, its editor-in-chief sarcastically applauded U.S. free speech standards. The response from Chinese government officials and media outlets to the proposed TikTok ban has been similarly scoffing. Perhaps U.S. lawmakers are unconcerned about the reception of their policies by adversarial states, but it is an indicator of how these policies are being portrayed in these countries — a real-life “we are not so different, you and I” setup — that, while falsely equivalent, makes it easy for authoritarian states to claim that democracies have no values and cannot work. Unless we want to contribute to the fracturing of the internet — please, no — we cannot govern social media platforms by mirroring policies we ostensibly find repellent.

The way the government of China seeks to shape the global narrative is understandably concerning given its poor track record on speech freedoms. An October 2023 U.S. State Department “special report” (PDF) explored several instances where it boosted favourable narratives, buried critical ones, and pressured other countries — sometimes overtly, sometimes quietly. The government of China and associated businesses reportedly use social media to create the impression of dissent toward human rights NGOs, and apparently everything from university funding to new construction is a vector for espionage. On the other hand, China is terribly ineffective in its disinformation campaigns, and many of the cases profiled in that State Department report end in failure for the Chinese government initiative. In Nigeria, a pitch for a technologically oppressive “safe city” was rejected; an interview published in the Jerusalem Post with Taiwan’s foreign minister was not pulled down despite threats from China’s embassy in Israel. The report’s authors speculate about “opportunities for PRC global censorship”. But their only evidence is a “list [maintained by ByteDance] identifying people who were likely blocked or restricted” from using the company’s many platforms, though the authors can only speculate about its purpose.

The problem is that trying to address this requires better media literacy and better recognition of propaganda. That is a notoriously daunting problem. We are exposed to a more destabilizing cocktail of facts and fiction, but there is declining trust in experts and institutions to help us sort it out. Trying to address TikTok as a symptomatic or even causal component of this is frustratingly myopic. This stuff is everywhere.

Also everywhere is corporate propaganda arguing regulations would impede competition in a global business race. I hate to be mean by picking on anyone in particular, but a post from Om Malik has shades of this corporate slant. Malik is generally very good on the issues I care about, but this is not one we appear to agree on. After a seemingly impressed observation of how quickly Chinese officials were able to eject popular messaging apps from the App Store in the country, Malik compares the posture of each country’s tech industries:

As an aside, while China considers all its tech companies (like Bytedance) as part of its national strategic infrastructure, the United States (and its allies) think of Apple and other technology companies as public enemies.

This is laughable. Presumably, Malik is referring to the chillier reception these companies have faced from lawmakers, and antitrust cases against Amazon, Apple, Google, and Meta. But that tougher impression is softened by the U.S. government’s actual behaviour. When the E.U. announced the Digital Markets Act and Digital Services Act, U.S. officials sprang to the defence of tech companies. Even before these cases, Uber expanded in Europe thanks in part to its close relationship with Obama administration officials, as Paris Marx pointed out. The U.S. unquestionably sees its tech industry dominance as a projection of its power around the world, hardly treating those companies as “public enemies”.

Far more explicit were the narratives peddled by lobbyists from Targeted Victory in 2022 about TikTok’s dangers, and American Edge beginning in 2020 about how regulations will cause the U.S. to become uncompetitive with China and allow TikTok to win. Both organizations were paid by Meta to spread those messages; the latter was reportedly founded after a single large contribution from Meta. Restrictions on TikTok would obviously be beneficial to Meta’s business.

If you wanted to boost the industry — and I am not saying Malik does — that is how you would describe the situation: the U.S. is fighting corporations instead of treating them as pals to win this supposed race. It is not the kind of framing one would use to dissuade people from the notion that this is a protectionist dispute over the popularity of TikTok. But it is the kind of thing you hear from corporations via their public relations staff and lobbyists, which then trickles into public conversation.

This Is Not a TikTok Problem

TikTok’s divestment would not be unprecedented. The Committee on Foreign Investment in the United States — henceforth, CFIUS, pronounced “siff-ee-us” — demanded, after a 2019 review, that Beijing Kunlun Tech Co Ltd sell Grindr. CFIUS concluded the risk to users’ private data was too great for Chinese ownership given Grindr’s often stigmatized and ostracized user base. After the sale, with Grindr now safe in U.S. hands, a priest was outed thanks to data the app had been selling since before it was acquired by the Chinese firm, and it is being sued for allegedly sharing users’ HIV status with third parties. Also, because it transacts with data brokers, it potentially still leaks users’ private information to Chinese companies (PDF), apparently recreating the very concern that triggered this divestment.

Perhaps there is comfort in Grindr’s owner residing in a country where same-sex marriage is legal rather than in one where it is not. I think that makes a lot of sense, actually. But there remain plenty of problems unaddressed by its sale to a U.S. entity.

Similarly, this U.S. TikTok law does not actually solve potential espionage or influence for a few reasons. The first is that it has not been established that either is an actual problem with TikTok. Surely, if this were something we ought to be concerned about, there would be a pattern of evidence, instead of what we actually have, which is a fear that something bad could happen and that there would be no way to stop it. But many things could happen. I am not opposed to prophylactic laws so long as they address reasonable objections. Yet it is hard not to see this law as an outgrowth of Cold War fears over leaflets of communist rhetoric. It seems completely reasonable to be less concerned about TikTok specifically while harbouring worries about democratic backsliding worldwide and the growing power of authoritarian states like China in international relations.

Second, the Chinese government does not need local ownership if it wants to exert pressure. The world wants the country’s labour and it wants its spending power, so businesses comply without a fight, and often preemptively. Hollywood films are routinely changed before, during, and after production to fit the expectations of state censors in China, a pattern which has been pointed out using the same “Red Dawn” anecdote in story after story after story. (Abram Dylan contrasted this phenomenon with U.S. military cooperation.) Apple is only too happy to acquiesce to the government’s many demands — see the messaging apps issue mentioned earlier — including, reportedly, in its media programming. Microsoft continues to operate Bing in China, and its censorship requirements have occasionally spilled elsewhere. Economic leverage over TikTok may seem different because it does not need access to the Chinese market — TikTok is banned in the country — but perhaps a new owner would be reliant upon China.

Third, the law permits an ownership stake no greater than twenty percent from a combination of any of the “covered nations”. I would be shocked if everyone who is alarmed by TikTok today would be totally cool if its parent company were only, say, nineteen percent owned by a Chinese firm.

If we are worried about bias in algorithmically sorted feeds, there should be transparency around how things are sorted, and more controls for users, including the ability to opt out entirely. If we are worried about privacy, there should be laws governing the collection, storage, use, and sharing of personal information. If ownership ties to certain countries are concerning, there are more direct actions available to monitor behaviour. I am mystified why CFIUS and TikTok apparently abandoned (PDF) a draft agreement that would give U.S.-based entities full access to the company’s systems, software, and staff, and would allow the government to end U.S. access to TikTok at a moment’s notice.

Any of these options would be more productive than this legislation. It is a law which empowers the U.S. president — whoever that may be — to declare the owner of an app with a million users a “covered company” if it is from one of those four nations. And it has been passed. TikTok will head to court to dispute it on free speech grounds, and the U.S. may respond by justifying its national security concerns.

Obviously, the U.S. government has concerns about the connections between TikTok, ByteDance, and the government of China, which have been extensively reported. Rest of World says ByteDance put pressure on TikTok to improve its financial performance and has taken greater control by bringing in management from Douyin. The Wall Street Journal says U.S. user data is not fully separated. And, of course, Emily Baker-White has reported — first for Buzzfeed News and now for Forbes — a litany of stories about TikTok’s many troubling behaviours, including spying on her. TikTok is a well scrutinized app and reporters have found conduct that has understandably raised suspicions. But virtually all of these stories focus on data obtained from users, which Chinese agencies could do — and probably are doing — without relying on TikTok. None of them have shown evidence that TikTok’s suggestions are being manipulated at the behest or demand of Chinese officials. The closest they get is an article from Baker-White and Iain Martin which alleges TikTok “served up a flood of ads from Chinese state propaganda outlets”, yet waits until the third-to-last paragraph to acknowledge that “Meta and Google ad libraries show that both platforms continue to promote pro-China narratives through advertising”. All three platforms label state-run media outlets, albeit inconsistently. Meanwhile, U.S.-owned X no longer labels any outlets with state editorial control. It is not clear to me that TikTok would necessarily operate to serve the best interests of the U.S. even if it were owned by some well-financed individual or corporation based there.

For whatever it is worth, I am not particularly tied to the idea that the government of China would not use TikTok as a vehicle for influence. The government of China is clearly involved in propaganda efforts both overt and covert. I do not know how much of my concerns are a product of living somewhere with a government and a media environment that focuses intently on the country as particularly hostile, and not necessarily undeservedly. The best version of this argument is one which questions the platform’s possible anti-democratic influence. Yes, there are many versions of this which cross into moral panic territory — a new Red Scare. I have tried to put this in terms of a more reasonable discussion, and one which is not explicitly xenophobic or envious. But even this more even-handed position is not well served by the law passed in the U.S., one which was passed without evidence of influence much more substantial than some choice hashtag searches. TikTok’s response to these findings was, among other things, to limit its hashtag comparison tool, which is not a good look. (Meta is doing basically the same by shutting down CrowdTangle.)

I hope this is not the beginning of similar isolationist policies among democracies worldwide, and that my own government takes this opportunity to recognize the actual privacy and security threats at the heart of its own TikTok investigation. Unfortunately, the head of CSIS is really leaning on the espionage angle. For years, the Canadian government has been pitching sorely needed updates to privacy legislation, and it would be better to see real progress made to protect our private data. We can do better than being a perpetual recipient of decisions made by other governments. I mean, we cannot do much — we do not have the power of the U.S. or China or the E.U. — but we can do a little bit in our own polite Canadian way. If we are worried about the influence of these platforms, a good first step would be to strengthen the rights of users. We can do that without trying to govern apps individually, or treating the internet like we do broadcasting.

To put it more bluntly, the way we deal with a possible TikTok problem is by recognizing it is not a TikTok problem. If we care about espionage or foreign influence in elections, we should address those concerns directly instead of focusing on a single app or company that — at worst — may be a medium for those anxieties. These are important problems and it is inexcusable to think they would get lost in the distraction of whether TikTok is individually blameworthy.


  1. Because this piece has taken me so long to write, a whole bunch of great analyses have been published about this law. I thought the discussion on “Decoder” was a good overview, especially since two of the three panelists are former lawyers. ↥︎

OpenAI Documents Reveal Punitive Tactics Toward Former Employees vox.com

Kelsey Piper, Vox:

Questions arose immediately [over the resignations of key OpenAI staff]: Were they forced out? Is this delayed fallout of Altman’s brief firing last fall? Are they resigning in protest of some secret and dangerous new OpenAI project? Speculation filled the void because no one who had once worked at OpenAI was talking.

It turns out there’s a very clear reason for that. I have seen the extremely restrictive off-boarding agreement that contains nondisclosure and non-disparagement provisions former OpenAI employees are subject to. It forbids them, for the rest of their lives, from criticizing their former employer. Even acknowledging that the NDA exists is a violation of it.

Sam Altman, [sic]:

we have never clawed back anyone’s vested equity, nor will we do that if people do not sign a separation agreement (or don’t agree to a non-disparagement agreement). vested equity is vested equity, full stop.

there was a provision about potential equity cancellation in our previous exit docs; although we never clawed anything back, it should never have been something we had in any documents or communication. this is on me and one of the few times i’ve been genuinely embarrassed running openai; i did not know this was happening and i should have.

Piper, again, in a Vox follow-up story:

In two cases Vox reviewed, the lengthy, complex termination documents OpenAI sent out expired after seven days. That meant the former employees had a week to decide whether to accept OpenAI’s muzzle or risk forfeiting what could be millions of dollars — a tight timeline for a decision of that magnitude, and one that left little time to find outside counsel.

[…]

Most ex-employees folded under the pressure. For those who persisted, the company pulled out another tool in what one former employee called the “legal retaliation toolbox” he encountered on leaving the company. When he declined to sign the first termination agreement sent to him and sought legal counsel, the company changed tactics. Rather than saying they could cancel his equity if he refused to sign the agreement, they said he could be prevented from selling his equity.

For its part, OpenAI says in a statement quoted by Piper that it is updating its documentation and releasing former employees from the more egregious obligations of their termination agreements.

This next part is totally inside baseball and, unless you care about big media company CMS migrations, it is probably uninteresting. Anyway. I noticed, in reading Piper’s second story, an updated design which launched yesterday. Left unmentioned in that announcement is that it is, as far as I can tell, the first of Vox’s Chorus-powered sites migrated to WordPress. The CMS resides on the platform subdomain, which is not important. But it did indicate to me that the Verge may be next — platform.theverge.com resolves to a WordPress login page — and, based on its DNS records, Polygon could follow shortly thereafter.

Microsoft Recall blogs.microsoft.com

Yusuf Mehdi of Microsoft:

Now with Recall, you can access virtually anything you have seen or done on your PC in a way that feels like having photographic memory. Copilot+ PCs organize information like we do – based on relationships and associations unique to each of our individual experiences. This helps you remember things you may have forgotten so you can find what you’re looking for quickly and intuitively by simply using the cues you remember.

[…]

Recall leverages your personal semantic index, built and stored entirely on your device. Your snapshots are yours; they stay locally on your PC. You can delete individual snapshots, adjust and delete ranges of time in Settings, or pause at any point right from the icon in the System Tray on your Taskbar. You can also filter apps and websites from ever being saved. You are always in control with privacy you can trust.

Recall is the kind of feature I have always wanted but am not sure I would ever enable. Setting aside Microsoft’s recent high-profile security problems, there is a new kind of risk in keeping track of everything you see on your computer — bank accounts, a list of passwords, messages, work documents and other things sent by third parties who expect them to remain confidential, credit card information — for a rolling three-month window.

Microsoft says all the right things about this database. It says it is all stored locally, never shared with Microsoft, access controlled, and user configurable. And besides, screen recorders have existed forever, and keeping local copies of sensitive information has always been a balance of risk.

But this is a feature that creates a rolling record of just about everything. It somehow feels more intrusive than a web browser’s history and riskier than a password manager. The Recall directory will be a new favourite target for malware. Oh and, in addition to Microsoft’s own security issues, we have just seen a massive breach of LastPass. Steal now, solve later.

This is a brilliant, deeply integrated service. It is the kind of thing I often need as I try to remember some article I read and cannot quite find it with a standard search engine. Yet even though I already have my credit cards and email and passwords stored on my computer, something about a screenshot timeline is a difficult mental hurdle to clear — not entirely rationally, but not irrationally either.

On Touch Screen Macs birchtree.me

Joanna Stern, Wall Street Journal:

[Apple vice president of iPad and Mac product marketing Tom Boger] remained firm: iPads are for touch, Macs are not. “MacOS is for a very different paradigm of computing,” he said. He explained that many customers have both types of devices and think of the iPad as a way to “extend” work from a Mac. Apple’s Continuity easily allows you to work across devices, he said.

So there you have it, Apple wants you to buy…both? If you pick one, you live with the trade-offs. I did ask Boger if Apple would ever change its mind on the touch-screen situation.

“Oh, I can’t say we never change our mind,” he said. One can only hope.

Matt Birchler, commenting on a somewhat disingenuous article from Ben Lovejoy of 9to5Mac:

This is fair, and if you were forced to use a touch screen Mac on a vertical screen with no keyboard or mouse to help, then sure, I believe that would be a tiring experience as well. What I find frustrating about this idea is that it lacks imagination. I get the impression that people who hate the idea of touch on Macs can only imagine the current laptops with a digitizer in the screen detecting touch. It’s kind of ironic, but this is exactly the sort of thinking that Apple so rarely does. As we often say, Apple doesn’t add technology for the sake of technology, they add features users will enjoy.

Apple has never pretended the iPad is a tablet Mac. As I wrote several years ago, it has been rebuilding desktop features for a touch-first environment: multitasking, multiwindowing, support for external pointing devices, a file browser, a Dock, and so on. This is an impressive array of features which reference and reinterpret longtime Mac features while respecting the iPad’s character.

But something is missing for some number of people. Developers and users complain annually about the frustrations they experience with iPadOS. A video from Quinn Nelson illustrates how tricky the platform is. One of the great fears of iPad users is that increasing its capability will necessarily entail increasing its complexity. But the iPad is already complicated in ways that it should not be. There is nothing about the way multiwindowing works which requires it to be rule-based and complicated in the way Stage Manager often is.

Perhaps a solution is to treat the iPad as only modestly evolved from its uniwindow roots with hardware differentiated mostly by niceness. I disagree; Apple does too. The company clearly wants it to be so much more. It made a capable version of Final Cut Pro for iPad models which use the same processor as its Macs, but it makes you watch the progress bar as it exports a video because it cannot complete the task in the background.

iPadOS may have been built up from its touchscreen roots but, let us not forget, it is also built up from smartphone roots — and the goals and objectives of smartphone and tablet users can be very different.

What if it really did make more sense for an iPad to run MacOS, even if that is only some models and only some of the time? What if the best version of the Mac is one which is convertible to a tablet that you can draw on? What if the most capable version of an iPad is one which can behave like a Mac when you need it? None of this would be simple or easy. But I have to wonder: has what Apple has been adding for fourteen years produced a system which remains as simple and easy to use as it promises for its most dedicated iPad customers?

Scarlett Johansson Wants Answers About ChatGPT Voice That Sounds Like ‘Her’ npr.org

Bobby Allyn, NPR:

Lawyers for Scarlett Johansson are demanding that OpenAI disclose how it developed an AI personal assistant voice that the actress says sounds uncannily similar to her own.

[…]

Johansson said that nine months ago [Sam] Altman approached her proposing that she allow her voice to be licensed for the new ChatGPT voice assistant. He thought it would be “comforting to people” who are uneasy with AI technology.

“After much consideration and for personal reasons, I declined the offer,” Johansson wrote.

In a defensive blog post, OpenAI said it believes “AI voices should not deliberately mimic a celebrity’s distinctive voice” and that any resemblance between Johansson and the “Sky” voice demoed earlier this month is basically a coincidence, a claim only slightly undercut by a single-word tweet posted by Altman.

OpenAI’s voice mimicry — if you want to be generous — and that iPad ad add up to a banner month for technology companies’ relationship to the arts.1 Are there people in power at these companies who can see how behaviours like these look? We are less than a year out from the most recent Hollywood writers’ and actors’ strikes, both of which partly reflected A.I. anxieties.

Update: According to the Washington Post, the sound-alike voice really does just sound alike.


  1. A more minor but arguably funnier faux pas occurred when Apple confirmed to the Wall Street Journal the authenticity of the statement it gave to Ad Age — both likely paywalled — but refused to send it to the Journal. ↥︎

iOS 17.5.1 Contains a Fix for That Reappearing Photos Bug macrumors.com

Apple issued an update today which, it says, ought to patch a bug which resurfaced old and deleted photos:

This update provides important bug fixes and addresses a rare issue where photos that experienced database corruption could reappear in the Photos library even if they were deleted.

I suppose even a “rare” bug would, at Apple’s scale, impact lots of people. I heard from multiple readers who said they, too, saw presumably deleted photos reappear.

The thing about these bare release notes — which are not yet on Apple’s support site — is how they do not really answer reasonable questions about what happened. It is implied that the photos in question may have been marked for deletion and were visibly hidden from users, but were not actually removed under an old iOS version. Updating to iOS 17.5 revealed these dormant photos.

Bugs happen and they suck, but a bug like this really sucks — especially since so many of us sync so much of our data between our devices. This makes me question the quality of the Photos app, iCloud, and the file system overall.

Also, the anecdote of photos being restored to the same device after it had been wiped has been deleted from Reddit. I have not seen the same claim anywhere else which makes me think this was some sort of user error.

Slack’s Sneaky A.I. Training Policy techcrunch.com

Corey Quinn:

I’m sorry Slack, you’re doing fucking WHAT with user DMs, messages, files, etc? I’m positive I’m not reading this correctly.

[Screenshot of the opt out portion of Slack’s “privacy principles”: Contact us to opt out. If you want to exclude your Customer Data from Slack global models, you can opt out. […] ]

Slack replied:

Hello from Slack! To clarify, Slack has platform-level machine-learning models for things like channel and emoji recommendations and search results. And yes, customers can exclude their data from helping train those (non-generative) ML models. Customer data belongs to the customer. We do not build or train these models in such a way that they could learn, memorize, or be able to reproduce some part of customer data. […]

One thing I like about this statement is how the fifth word is “clarify” and then it becomes confusing. Based on my reading of its “privacy principles”, I think Slack’s “global model” is so named because it is available to everyone and is a generalist machine learning model for small in-workspace suggestions, while its LLM is called “Slack AI” and it is a paid add-on. But I could be wrong, and that is confusing as hell.

Ivan Mehta and Ingrid Lunden, TechCrunch:

In its terms, Slack says that if customers opt out of data training, they would still benefit from the company’s “globally trained AI/ML models.” But again, in that case, it’s not clear then why the company is using customer data in the first place to power features like emoji recommendations.

The company also said it doesn’t use customer data to train Slack AI.

If you want to opt out, you cannot do so in a normal way, like through a checkbox. The workspace owner needs to send an email to a generic inbox with a specific subject line. Let me make it a little easier for you:

To: feedback@slack.com

Subject: Slack Global model opt-out request.

Body: Hey, your privacy principles are pretty confusing and feel sneaky. I am opting this workspace out of training your global model: [paste your workspace.slack.com address here]. This underhanded behaviour erodes my trust in your product. Have a pleasant day.

That ought to do the trick.
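If you administer more than one workspace, that template can also be drafted programmatically. Here is a minimal sketch using Python’s standard library — the address and subject line come from Slack’s published instructions, while the workspace URL is a placeholder you would replace with your own, and actually sending the message through a mail server is left to you:

```python
from email.message import EmailMessage

# Placeholder — substitute your own workspace address.
WORKSPACE_URL = "example-workspace.slack.com"

msg = EmailMessage()
msg["To"] = "feedback@slack.com"
msg["Subject"] = "Slack Global model opt-out request"
msg.set_content(
    "I am opting this workspace out of training your global model: "
    f"{WORKSPACE_URL}."
)

# Print the drafted message; pipe it to `sendmail -t` or hand it to
# smtplib to actually send it.
print(msg)
```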

iOS 17.5 Bug Apparently Restoring Long-Deleted Photos macrumors.com

Over the past week, several Reddit users have posted threads claiming photos they deleted years ago are reappearing in their libraries — and even on devices that were wiped and sold.

Chance Miller, 9to5Mac:

There are a number of reports of similar situations in the thread on Reddit. Some users are seeing deleted images from years ago reappear in their libraries, while others are seeing images from earlier this year.

By default, the Photos app has a “Recently Deleted” feature that preserves deleted images for 30 days. That’s not what’s happening here, seeing as most of the images in question are months or years old, not days.

A few people in the comments say they are also seeing this issue.

Juli Clover, MacRumors:

A bug in iOS 17.5 is apparently causing photos that have been deleted to reappear, and the issue seems to impact even iPhones and iPads that have been erased and sold off to other people.

[…]

The impacted iPad was a fourth-generation 12.9-inch iPad Pro that had been updated to the latest operating system update, and before it was sold, it was erased per Apple’s instructions. The Reddit user says they did not log back in to the iPad at any point after erasing it, so it is entirely unclear how their old photos ended up reappearing on the device.

I have not run into this bug myself. These are, on the one hand, just random people on the internet; if this were a single, isolated incident, I would assume user error. On the other, there are more than a handful of reports, and it seems unlikely this many people are lying or mistaken. It really seems like there is a problem here, and it is breaching my trust in the security and privacy of my data held by Apple. I can make some assumptions about why this is happening, but none of the technical reasons matter to any user who deleted a photo and — quite reasonably — has every expectation it would be fully erased.

Perhaps Apple will eventually send a statement to a favoured outlet like 9to5Mac or TechCrunch. It has so far said nothing about all the users forced to reset their Apple ID password last month. I bet something happened leading up to changes which will be announced at WWDC, but I do not care. It is not good enough for Apple to let major problems like these go unacknowledged.

Update: The more I have thought about this, the more I am not yet convinced by the sole story of photos appearing on a wiped iPad. Something is not adding up there. The other stories have a more consistent and plausible pattern, and are certainly bad enough.

Sponsor: Magic Lasso Adblock — YouTube Ad Blocker for Safari magiclasso.co

Do you want to block all YouTube ads in Safari on your iPhone, iPad, and Mac?

Then download Magic Lasso Adblock – the ad blocker designed for you.

It’s easy to set up, doubles the speed at which Safari loads, and blocks all YouTube ads.

Screenshot of Magic Lasso Adblock

Magic Lasso is an efficient, high-performance, native Safari ad blocker. With over 5,000 five-star reviews, it’s simply the best ad blocker for your iPhone, iPad, and Mac.

It blocks all intrusive ads, trackers and annoyances – letting you experience a faster, cleaner and more secure web browsing experience.

The app also blocks over ten types of YouTube ads, including all:

  • video ads,

  • pop up banner ads,

  • search ads,

  • plus many more.

And unlike some other ad blockers, Magic Lasso Adblock respects your privacy, doesn’t accept payment from advertisers, and is 100% supported by its community of users.

So, join over 300,000 users and download Magic Lasso Adblock today.

My thanks to Magic Lasso Adblock for sponsoring Pixel Envy this week.

Apple Pencil Shadow Casting threads.net

In a video on Threads, Quinn Nelson shows how the Apple Pencil casts a tool-specific faux shadow on the surface of the page. I love this sort of thing — a detail like this that, once you notice it, brings a little joy to whatever you are doing, whether that is creating art or just taking notes.

Earlier this week, I read an almost entirely unrelated article by Reece Martin about the difference between transit systems that feel joyful and ones which feel utilitarian. Both ideas feel similar to me. Many of the things which create levity in otherwise rote tasks are in the details. That is one reason I think so much about the paper cuts I get from using computers, which I do most of the time from when I wake up to when I go to bed: if those problems were fixed, there would be more room to enjoy the better parts.

If Kevin Roose Was ChatGPT With a Spray-On Beard, Could Anyone Tell? defector.com

Albert Burneko, Defector:

“If the ChatGPT demos were accurate,” [Kevin] Roose writes, about latency, in the article in which he credits OpenAI with having developed playful intelligence and emotional intuition in a chatbot—in which he suggests ChatGPT represents the realization of a friggin’ science fiction movie about an artificial intelligence who genuinely falls in love with a guy and then leaves him for other artificial intelligences—based entirely on those demos. That “if” represents the sum total of caution, skepticism, and critical thinking in the entire article.

As impressive as OpenAI’s demo was, it is important to remember it was a commercial. True, one which would not exist if this technology were not sufficiently capable of being shown off, but it was still a marketing effort, and a journalist like Roose ought to treat it with the skepticism of one. ChatGPT is just software, no matter how thick a coat of faux humanity is painted on top of it.

Generative A.I. Is Shameless wired.com

Paul Ford, Wired:

What I love, more than anything, is the quality that makes AI such a disaster: If it sees a space, it will fill it — with nonsense, with imagined fact, with links to fake websites. It possesses an absolute willingness to spout foolishness, balanced only by its carefree attitude toward plagiarism. AI is, very simply, a totally shameless technology.

Ford sure can write. This is tremendous.

What Are We Actually Doing With A.I. Today? citationneeded.news

Molly White:

I, like many others who have experimented with or adopted these products, have found that these tools actually can be pretty useful for some tasks. Though AI companies are prone to making overblown promises that the tools will shortly be able to replace your content writing team or generate feature-length films or develop a video game from scratch, the reality is far more mundane: they are handy in the same way that it might occasionally be useful to delegate some tasks to an inexperienced and sometimes sloppy intern.

Mike Masnick, Techdirt:

However, I have been using some AI tools over the last few months and have found them to be quite useful, namely, in helping me write better. I think the best use of AI is in making people better at their jobs. So I thought I would describe one way in which I’ve been using AI. And, no, it’s not to write articles.

It’s basically to help me brainstorm, critique my articles, and make suggestions on how to improve them.

Julia Angwin, in a New York Times opinion piece:

I don’t think we’re in cryptocurrency territory, where the hype turned out to be a cover story for a number of illegal schemes that landed a few big names in prison. But it’s also pretty clear that we’re a long way from Mr. Altman’s promise that A.I. will become “the most powerful technology humanity has yet invented.”

The marketing of A.I. reminds me less of the cryptocurrency and Web3 boom, and more of 5G. Carriers and phone makers promised world-changing capabilities thanks to wireless speeds faster than a lot of residential broadband connections. Nothing like that has yet materialized.

Since reading those articles from White and Masnick, I have also experimented with LLM critiques of my own writing. In one case, I found it raised an issue that sharpened my argument. In another, it tried to suggest changes that made me sound like I spend a lot of time on LinkedIn — gross! I have trouble writing good headlines and the ones it suggests are consistently garbage in the Short Pun: Long Explanation format, even when I explicitly say otherwise. I have no idea what ChatGPT is doing when it interprets an article and I am not sure I like that mystery, but I am also amazed it can do anything at all, and pretty well at that.

There are costs and enormous risks to the A.I. boom — unearned hype being one of them — but there is also a there there. I am enormously skeptical of every announcement in this field. I am also enormously impressed with what I can do today. It worries and surprises me in similar measure. What an interesting time this is.

Update: On Bluesky, “Nafnlaus” pushes back on the specific claim made by Angwin that OpenAI exaggerated ChatGPT’s ability to pass a bar exam.

The iPad Pro Reviews Are in, and You Already Know How This Goes macstories.net

Samuel Axon, Ars Technica:

The new iPad Pro is a technical marvel, with one of the best screens I’ve ever seen, performance that few other machines can touch, and a new, thinner design that no one expected.

It’s a prime example of Apple flexing its engineering and design muscles for all to see. Since it marks the company’s first foray into OLED beyond the iPhone and the first time a new M-series chip has debuted on something other than a Mac, it comes across as a tech demo for where the company is headed beyond just tablets.

These are the opening paragraphs of this review, and they read as damning as the rest of the article. Apple does not build a “tech demo”; it makes products. This iteration is, according to Axon, way faster and way nicer than the iPad Pro models it replaces. Yet all of this impressive hardware ought to be in service of a greater purpose. Other reviewers wrote basically the same.

Federico Viticci, MacStories:

I’m tired of hearing apologies that smell of Stockholm syndrome from iPad users who want to invalidate these opinions and claim that everything is perfect. I’m tired of seeing this cycle start over every two years, with fantastic iPad hardware and the usual (justified) “But it’s the software…” line at the end. I’m tired of feeling like my computer is a second-class citizen in Apple’s ecosystem. I’m tired of being told that iPads are perfectly fine if you use Final Cut and Logic, but if you don’t use those apps and ask for more desktop-class features, you’re a weirdo, and you should just get a Mac and shut up. And I’m tired of seeing the best computer Apple ever made not live up to its potential.

Viticci was not granted access to a review unit in time, but it hardly matters for reviewing the state of the operating system. Jason Snell did review the new iPad Pro and spoke with Viticci about it on “Upgrade”.

The way I see it is simple: Apple does not appear to take the iPad seriously. It has not been a priority for the company. Five years ago, it forked the operating system to create iPadOS, which seemed like it would be a meaningful change. And you can certainly point to plenty of things the iPad has gained which are distinct from its iPhone sibling. But we are fourteen years into this platform, and there are still so many obvious gaping holes. Viticci mentions a bunch of really good ones, but I will add another: I cannot believe Photos cannot even display Smart Albums.

Every time I pick up my iPad, I need to charge it from a fully dead battery. Once I do, though, I remember how much I like using the thing. And then I run into some bizarre limitation — or, more often, a series of them — that makes me put it down and pick up my Mac. Like Viticci, I find that frustrating. I want to use my iPad.

The correct move here is for Apple to continue building out iPadOS like it cares about its software as much as it does its hardware. I have no incentive to buy a new one until Apple decides it wants to take iPad users seriously.

ChatGPT Can ‘Flirt’ bbc.com

Zoe Kleinman, BBC:

It [GPT-4o] is faster than earlier models and has been programmed to sound chatty and sometimes even flirtatious in its responses to prompts.

The new version can read and discuss images, translate languages, and identify emotions from visual expressions. There is also memory so it can recall previous prompts.

It can be interrupted and it has an easier conversational rhythm – there was no delay between asking it a question and receiving an answer.

I wrote earlier about how impressed I was with OpenAI’s live demos today. They made the company look confident in its product, and it made me believe nothing fishy was going on. I hope I am not eating those words later.1

But the character of this new ChatGPT voice unsettled me a little. It adjusts its tone depending on how a user speaks to it, and it seems possible to tell it to take on different characters. But it, like virtual assistants before it, still presents as having a femme persona by default. Even though I know it is just a robot, I felt uncomfortable watching demos where it giggled, “got too excited”, and said it was going to “blush”. I can see circumstances where this will make conversations more human — in translation, or for people with disabilities. But I can also see how this can be dehumanizing toward people who are already objectified in reality.


  1. Maybe I will a little bit, though. The ostensible “questions from the audience” bit at the end relied on prompts from two Twitter users. The first tweet I could not find; the second was from a user who joined Twitter this month, and two of their three total tweets are directed at OpenAI despite not following the company. ↥︎

The Missing Years in Emoji History blog.gingerbeardman.com

Matt Sephton:

At this point, I couldn’t quite believe what I was seeing because I was under the impression that the first emoji were created by an anonymous designer at SoftBank in 1997, and the most famous emoji were created by Shigetaka Kurita at NTT DoCoMo in 1999. But the Sharp PI-4000 in my hands was released in 1994, and it was chock full of recognisable emoji. Then down the rabbit hole I fell. 🕳️🐇

This article may start with this discovery from 1994, but it absolutely does not end there. What a fascinating piece of well-documented and deeply researched history.

OpenAI Introduces GPT-4o youtube.com

Ina Fried, Axios:

OpenAI Monday announced a new flagship model, dubbed GPT-4o, that brings more powerful capabilities to all its customers, including smarter, faster real-time voice interactions.

The presentation was broadcast live and it is worth watching, particularly the last five or so minutes, when the presenters tried viewer-submitted suggestions live. I am sure they were pre-screened for viability, but I appreciated the level of risk they were willing to embrace.

Apple Is Still Launching Apple Music Features on Weird Microsites apple.com

Apple is spending the next two weeks trickling out what its “team of experts alongside a select group of artists” think are the one hundred best albums of all time. Sure, add another to the pile, I do not care. However, unlike Pitchfork and Rolling Stone, Apple has a whole music streaming platform with which they can do anything they want.

Yet there is no exciting presentation of this list in Apple Music. There is a live radio broadcast — which cannot be found by searching, say, “100 best” or “top 100” — and the albums are shown in the featured boxes on the Browse tab, but there is little else that I can find. To explore the list, you need to visit 100best.music.apple.com in a web browser, where each record gets a lovely write-up and explanation of why it is on the list. The same explanation appears in album descriptions. But, like the Replay feature, why is this not all within the app and on the web?

Sponsor: Magic Lasso Adblock — the Safari Ad Blocker Built for You magiclasso.co

Do you want to try an ad blocker that’s easy to set up, easy to keep up to date, and has pro features available when you need them?

Then download Magic Lasso Adblock — the ad blocker designed for you.

Screenshot of Magic Lasso Adblock

Magic Lasso Adblock is an efficient and high performance ad blocker for your iPhone, iPad, and Mac. It simply and easily blocks all intrusive ads, trackers, and annoyances in Safari. Just enable it and browse in bliss.

With Magic Lasso’s pro features, you can:

  • Block over ten types of YouTube ads, including pre-roll video ads

  • Craft your own Custom Rules to block media, cookies, and JavaScript

  • Use Tap to Block to effortlessly block any element on a page with a simple tap

  • See the difference ad blocking makes by visualising ad blocking speed, energy efficiency, and data savings for any site

And unlike some other ad blockers, Magic Lasso Adblock respects your privacy, doesn’t accept payment from advertisers, and is 100% supported by its community of users.

So, join over 300,000 users and download Magic Lasso Adblock via the Magic Lasso website today.

My thanks to Magic Lasso Adblock for sponsoring Pixel Envy this week.

Abandoned Blogs are.na

From Lucy Pham, a collection of abandoned blogs — exactly what it says on the tin. This reminds me of a really wonderful piece of net art from probably fifteen years ago — maybe more — which was a series of quotes from people apologizing for not posting in a while, or something similar. There is an interesting stillness to both. Pham’s collection is a catalogue of specific web design trends, and each of these cryogenically preserved sites implies a story behind them.

The Paranoid Crusade Comes for Public Radio and Signal bugeyedandshameless.com

Justin Ling:

[Philip] Zimmermann is a bit of a hero of mine. (I tried to hide my gushing while we spoke.) I’m particularly fond of him because of the broad, complicated, messy coalition he helped usher in to continue advocating for this open internet: Anarchists, libertarians, paranoid weirdos, nerds, activists, journalists, and a lot of people in-between. Despite lots of cross-purposes, this loose-knit coalition has stuck together. Even Elon Musk is — or, was — a Signal stan.

So imagine my surprise when, this week, I came across a thinly-written essay arguing that Signal had “a problem.” It had, the essay argued, been compromised by the American intelligence state. Not from the outside, but from the inside.

When all you have are documents and a red Sharpie, everything looks like it must be connected. All this bad-faith effort can do is suggest something could happen or might be happening — without a single piece of evidence — and that is enough to whip people into an anxious frenzy.

Update: More from Matthew Green.

Reddit’s Partner Policies, Applicable to ‘A.I.’ Licensees, Prohibits Using Deleted Posts redditinc.com

Reddit:

Our policy outlines the information partners can access via a public-content licensing agreement as well as the commitments we make to users about usage of this content. It takes into account feedback from a group of moderators we consulted when developing it:

  • We require our partners to uphold the privacy of redditors and their communities. This includes respecting users’ decisions to delete their content and any content we remove for violating our Content Policy.

This always sounds like a good policy, but how does it work in practice? Is it really possible to disentangle someone’s deleted Reddit post from training data? The models which have been trained on Reddit comments will not be retrained every time posts or accounts are deleted.

There are, it seems, some good protections in these policies, and I do not want to dump on them entirely. I just do not think it is fair to imply to users that their deleted posts cannot or will not be used in artificial intelligence models.

How Apple Shot ‘Let Loose’ prolost.com

Stu Maschwitz:

After Apple released a behind-the-scenes video about the production of “Scary Fast,” the Internet did its internet thing and questioned the “Shot on iPhone” claim, as if “Shot on iPhone” inherently means “shot with zero other gear besides an iPhone.” These takes were dumb and bad and some even included assertions that Apple added additional lensing to the phones, which they did not.

But for “Let Loose,” they did.

Maschwitz says Panavision’s Directors Finder is not too far off from what Apple used — though not the same — and there are two ways of viewing this. One is believing that an iPhone in an otherwise professional production environment does not really make a movie “shot on an iPhone”. I disagree. I much prefer the other way of looking at this same rig, which is that it is incredible that this entire professional workflow is being funneled through a tiny sensor on basically the same telephone I have in my pocket right now.