Search Results for: "artificial intelligence"

Shane Goldmacher, New York Times:

Former President Donald J. Trump has taken his obsession with the large crowds that Vice President Kamala Harris is drawing at her rallies to new heights, falsely declaring in a series of social media posts on Sunday that she had used artificial intelligence to create images and videos of fake crowds.

The A.I.-generated crowds claim is something I had seen bouncing around the fringes of X — and by “fringe”, I mean accounts which have paid to amplify their posts. I did not expect a claim this stupid to become a mainstream argument. But then I remembered what the mainstream looks like these days.

This claim is so stupid because you do not need to rely on the photos released by the campaign. You can just go look up pictures for yourself, taken from a bunch of different angles by a bunch of different people, with consistent lighting, logical crowds, and realistic hands. There are hundreds of them, and videos too. A piece of supposed evidence for the fakery is that Harris’ plane does not have a visible tail number, but there are — again — plenty of pictures of that plane which show no number. The U.S. Air Force made the change last year.

I know none of the people promoting this theory are interested in facts. They began with a conclusion and are creating a story to fit, in spite of evidence to the contrary. Still, it was equal parts amusing and worrisome to see this theory spun from whole cloth in real time.

Katie McQue, the Guardian:

The UK’s National Society for the Prevention of Cruelty to Children (NSPCC) accuses Apple of vastly undercounting how often child sexual abuse material (CSAM) appears in its products. In a year, child predators used Apple’s iCloud, iMessage and Facetime to store and exchange CSAM in a higher number of cases in England and Wales alone than the company reported across all other countries combined, according to police data obtained by the NSPCC.

Through data gathered via freedom of information requests and shared exclusively with the Guardian, the children’s charity found Apple was implicated in 337 recorded offenses of child abuse images between April 2022 and March 2023 in England and Wales. In 2023, Apple made just 267 reports of suspected CSAM on its platforms worldwide to the National Center for Missing & Exploited Children (NCMEC), which is in stark contrast to its big tech peers, with Google reporting more than 1.47m and Meta reporting more than 30.6m, per NCMEC’s annual report.

The reactions to statistics about this particularly revolting crime are similar to reactions to any crime figures: higher and lower numbers can each be read as positive or negative. More reports could mean better detection or more awareness, but they could also mean more instances; it is hard to know. Fewer reports might reflect less activity, a smaller platform or, indeed, undercounting. In Apple’s case, it is likely the last. It is neither a small platform nor one which prohibits the kinds of channels through which CSAM is distributed.

NCMEC addresses both these problems and I think its complaints are valid:

U.S.-based ESPs are legally required to report instances of child sexual abuse material (CSAM) to the CyberTipline when they become aware of them. However, there are no legal requirements regarding proactive efforts to detect CSAM or what information an ESP must include in a CyberTipline report. As a result, there are significant disparities in the volume, content and quality of reports that ESPs submit. For example, one company’s reporting numbers may be higher because they apply robust efforts to identify and remove abusive content from their platforms. Also, even companies that are actively reporting may submit many reports that don’t include the information needed for NCMEC to identify a location or for law enforcement to take action and protect the child involved. These reports add to the volume that must be analyzed but don’t help prevent the abuse that may be occurring.

Not only are many reports not useful, they are also part of an overwhelming caseload which law enforcement struggles to turn into charges. Proposed U.S. legislation is designed to improve the state of CSAM reporting. Unfortunately, the wrong bill is moving forward.

The next paragraph in the Guardian story:

All US-based tech companies are obligated to report all cases of CSAM they detect on their platforms to NCMEC. The Virginia-headquartered organization acts as a clearinghouse for reports of child abuse from around the world, viewing them and sending them to the relevant law enforcement agencies. iMessage is an encrypted messaging service, meaning Apple is unable to see the contents of users’ messages, but so is Meta’s WhatsApp, which made roughly 1.4m reports of suspected CSAM to NCMEC in 2023.

I wish there were more information here about this vast discrepancy — a million reports from just one of Meta’s businesses compared to just 267 reports from Apple to NCMEC for all of its online services. The most probable explanation, I think, can be found in a 2021 ProPublica investigation by Peter Elkind, Jack Gillum, and Craig Silverman, on which I previously commented. The reporters revealed WhatsApp moderators’ heavy workloads, writing:

Their jobs differ in other ways. Because WhatsApp’s content is encrypted, artificial intelligence systems can’t automatically scan all chats, images and videos, as they do on Facebook and Instagram. Instead, WhatsApp reviewers gain access to private content when users hit the “report” button on the app, identifying a message as allegedly violating the platform’s terms of service. This forwards five messages — the allegedly offending one along with the four previous ones in the exchange, including any images or videos — to WhatsApp in unscrambled form, according to former WhatsApp engineers and moderators. Automated systems then feed these tickets into “reactive” queues for contract workers to assess.
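If it helps to picture the mechanism, here is a minimal sketch of that reporting flow in Python, with toy names and structures of my own invention rather than anything from WhatsApp’s actual client:

```python
# Toy model of the reporting flow described above: flagging a message bundles
# it with up to four preceding messages for human review. Everything here is
# a hypothetical stand-in, not WhatsApp's real code or API.
from dataclasses import dataclass


@dataclass
class Message:
    sender: str
    text: str


class Chat:
    def __init__(self) -> None:
        self.history: list[Message] = []

    def receive(self, message: Message) -> None:
        self.history.append(message)

    def report(self, index: int) -> list[Message]:
        """Return the reported message plus up to four messages before it."""
        start = max(0, index - 4)
        bundle = self.history[start:index + 1]
        # In the real system, this bundle is what ends up in a "reactive"
        # review queue in readable form; here we simply return it.
        return bundle


chat = Chat()
for i in range(6):
    chat.receive(Message(sender="contact", text=f"message {i + 1}"))

# Reporting the sixth message surfaces messages two through six.
print([m.text for m in chat.report(5)])
```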

WhatsApp allows users to report any message at any time. Apple’s Messages app, on the other hand, only lets users flag a sender as junk and, even then, only if the sender is not in the user’s contacts and the user has not replied a few times. As soon as there is a conversation, there is no longer any reporting mechanism within the app as far as I can tell.

The same is true of shared iCloud Photo albums. It should be easy and obvious how to report illicit materials to Apple. But I cannot find a clear mechanism for doing so — not in an iCloud-shared photo album, and not anywhere evident on Apple’s website, either. As noted in Section G of the iCloud terms of use, reports must be sent via email to abuse@icloud.com. iCloud albums use long, unguessable URLs, so the likelihood of unintentionally stumbling across CSAM or other criminal materials is low. Nevertheless, it seems to me that notifying Apple of abuse of its services should be much clearer.

Back to the Guardian article:

Apple’s June announcement that it will launch an artificial intelligence system, Apple Intelligence, has been met with alarm by child safety experts.

“The race to roll out Apple AI is worrying when AI-generated child abuse material is putting children at risk and impacting the ability of police to safeguard young victims, especially as Apple pushed back embedding technology to protect children,” said [the NSPCC’s Richard] Collard. Apple says the AI system, which was created in partnership with OpenAI, will customize user experiences, automate tasks and increase privacy for users.

The Guardian ties Apple’s forthcoming service to models able to generate CSAM, which it then connects to models being trained on CSAM. But we do not know what Apple Intelligence is capable of doing because it has not yet been released, nor do we know what it has been trained on. This is not me giving Apple the benefit of the doubt. I think we should know more about how these systems are trained.

We also currently do not know what limitations Apple will set for prompts. It is unclear to me what Collard is referring to in saying that the company “pushed back embedding technology to protect children”.

One more little thing: Apple does not say Apple Intelligence was created in partnership with OpenAI; the ChatGPT integration is essentially a plugin. It also does not say Apple Intelligence will increase privacy for users, only that it is more private than competing services.

I am, for the record, not particularly convinced by any of Apple’s statements or claims. Everything is firmly in we will see territory right now.

Cristina Criddle, Financial Times:

Artificial intelligence-generated “deepfakes” that impersonate politicians and celebrities are far more prevalent than efforts to use AI to assist cyber attacks, according to the first research by Google’s DeepMind division into the most common malicious uses of the cutting-edge technology.

The study said the creation of realistic but fake images, video and audio of people was almost twice as common as the next highest misuse of generative AI tools: the falsifying of information using text-based tools, such as chatbots, to generate misinformation to post online.

Emanuel Maiberg, 404 Media:

Generative AI could “distort collective understanding of socio-political reality or scientific consensus,” and in many cases is already doing that, according to a new research paper from Google, one of the biggest companies in the world building, deploying, and promoting generative AI.

It is probably worth emphasizing this is a preprint published to arXiv, so I am not sure how much faith should be placed in its scholarly rigour. Nevertheless, when in-house researchers are pointing out the ways in which generative A.I. is misused, you might think that would be motivation for their employer to act with caution. But you, reader, are probably not an executive at Google.

This paper was submitted on 19 June. A few days later, reporters at the Information said Google was working on A.I. chatbots with real-person likenesses, according to Pranav Dixit of Engadget:

Google is reportedly building new AI-powered chatbots based on celebrities and YouTube influencers. The idea isn’t groundbreaking — startups like Character.ai and companies like Meta have already launched products like this — but neither is Google’s AI strategy so far.

Maybe nothing will come of this. Maybe the reporting is already outdated; Google’s executives may have looked at the research produced by its DeepMind division and concluded the risks are too great. But you would not get that impression from a spate of stories which suggest the company is sprinting into the future, powered by the trust of users it spent twenty years building and a whole lot of fossil fuels.

With apologies to Mitchell and Webb.

In a word, my feelings about A.I. — and, in particular, generative A.I. — are complicated. Just search “artificial intelligence” for a reverse chronological back catalogue of where I have landed. It feels like an appropriate position to hold for a set of nascent technologies so sprawling that they seem to imply radical change.

Or perhaps, like the promise of so many other new technologies, that radical change will turn out to be illusory as well. Instead of altering the fundamental fabric of reality, maybe generative A.I. will be used to create better versions of features we have used for decades. This would not necessarily be a bad outcome. I have used this example before, but the evolution of object removal tools in photo editing software is illustrative. There is no longer a need to spend hours cloning part of an image over another area and gently massaging it to look seamless. The more advanced tools we have today allow an experienced photographer to make an image they are happy with in less time, and lower the barriers for newer photographers.

A blurry boundary is crossed when an entire result is achieved through automation. There is a recent Drew Gooden video which, even though not everything resonated with me, I enjoyed.1 There is a part in the conclusion which I wanted to highlight because I found it so clarifying (emphasis mine):

[…] There’s so many tools along the way that help you streamline the process of getting from an idea to a finished product. But, at a certain point, if “the tool” is just doing everything for you, you are not an artist. You just described what you wanted to make, and asked a computer to make it for you.

You’re also not learning anything this way. Part of what makes art special is that it’s difficult to make, even with all the tools right in front of you. It takes practice, it takes skill, and every time you do it, you expand on that skill. […] Generative A.I. is only about the end product, but it won’t teach you anything about the process it would take to get there.

This gets at the question of whether A.I. is more often a product or a feature — the answer to which, I think, is both, just not in a way that is equally useful. Gooden shows an X thread in which Jamian Gerard told Luma to convert the “Abbey Road” cover to video. Even though the results are poor, I think it is impressive that a computer can do anything like this. It is a tech demo; a more practical application can be found in something like the smooth slow motion feature in the latest release of Final Cut Pro.

“Generative A.I. is only about the end product” is a great summary of the emphasis we put on satisfying conclusions instead of necessary rote procedure. I cook dinner almost every night. (I recognize this metaphor might not land with everyone due to time constraints, food availability, and physical limitations, but stick with me.) I feel lucky that I enjoy cooking, but there are certainly days when it is a struggle. It would seem more appealing to type a prompt and make a meal appear using the ingredients I have on hand, if that were possible.

But I think I would be worse off if I did. The times I have cooked while already exhausted have increased my capacity for what I can do under pressure, and lowered my self-imposed barriers. These meals have improved my ability to cook more elaborate dishes when I have more time and energy, just as those more complicated meals also make me a better cook.2

These dynamics show up in lots of other forms of functional creative expression. Plenty of writing is not particularly artistic, but the mental muscle exercised by trying to get ideas into legible words is also useful when you are trying to produce works with more personality. This is true for programming, and for visual design, and for coordinating an outfit — any number of things which are sometimes individually expressive, and other times utilitarian.

This boundary only exists in these expressive forms. Nobody, really, mourns the replacement of cheques with instant transfers. We do not get better at paying our bills no matter which form they take. But we do get better at all of the things above by practicing them even when we do not want to, and when we get little creative satisfaction from the result.

It is dismaying to see so many A.I. product demos show how they can be used to circumvent this entire process. I do not know if that is how they will actually be used. There are plenty of accomplished artists using A.I. to augment their practice, like Sougwen Chen, Anna Ridler, and Rob Sheridan. Writers and programmers are using generative products every day as tools, but they must have some fundamental knowledge to make A.I. work in their favour.

Stock photography is still photography. Stock music is still music, even if nobody’s favourite song is “Inspiring Corporate Advertising Tech Intro Promo Business Infographics Presentation”. (No judgement if that is your jam, though.) A rushed pantry pasta is still nourishment. A jingle for an insurance commercial could be practice for a successful music career. A.I. should just be a tool — something to develop creativity, not to replace it.


  1. There are also some factual errors. At least one of the supposed Google Gemini answers he showed onscreen was faked, and Adobe’s standard stock license is less expensive than the $80 “Extended” license Gooden references. ↥︎

  2. I am wary of using an example like cooking because it implies a whole set of correlative arguments which are unkind and judgemental toward people who do not or cannot cook. I do not want to provide kindling for these positions. ↥︎

Javier Espinoza and Michael Acton, Financial Times:

Apple has warned that it will not roll out the iPhone’s flagship new artificial intelligence features in Europe when they launch elsewhere this year, blaming “uncertainties” stemming from Brussels’ new competition rules.

This article carries the headline “Apple delays European launch of new AI features due to EU rules”, but it is not clear to me these features are “delayed” in the E.U. or that they would “launch elsewhere this year”. According to the small text in Apple’s WWDC press release, these features “will be available in beta […] this fall in U.S. English”, with “additional languages […] over the course of the next year”. This implies the A.I. features in question will only be available to devices set to U.S. English, and acting upon text and other data also in U.S. English.

To be fair, this is a restriction of language, not geography. Someone in France or Germany could still want to play around with Apple Intelligence stuff even if it is not very useful with their mostly not-English data. Apple is saying they will not be able to. It aggressively region-locks alternative app marketplaces to Europe and, I imagine, will use the same infrastructure to keep users out of these new features.

There is an excerpt from Apple’s statement in this Financial Times article explaining which features will not launch in Europe this year: iPhone Mirroring, better screen sharing with SharePlay, and Apple Intelligence. Apple provided a fuller statement to John Gruber. This is the company’s explanation:

Specifically, we are concerned that the interoperability requirements of the DMA could force us to compromise the integrity of our products in ways that risk user privacy and data security. We are committed to collaborating with the European Commission in an attempt to find a solution that would enable us to deliver these features to our EU customers without compromising their safety.

Apple does not explain specifically how these features run afoul of the DMA — or why it would not or could not build them to clearly comply with the DMA — so this could be fear-mongering, but I will assume it is a good-faith effort at compliance in the face of possible ambiguity. I am not sure Apple has earned the benefit of the doubt, but that is a different matter.

It seems like even the possibility of lawbreaking has made Apple cautious — and I am not sure why that is seen as an inherently bad thing. This is one of the world’s most powerful corporations, and the products and services it rolls out impact a billion-something people. That position deserves significant legal scrutiny.

I was struck by something U.S. FTC chair Lina Khan said in an interview at a StrictlyVC event this month:

[…] We hear routinely from senior dealmakers, senior antitrust lawyers, who will say pretty openly that as of five or six or seven years ago, when you were thinking about a potential deal, antitrust risk or even the antitrust analysis was nowhere near the top of the conversation, and now it is up front and center. For an enforcer, if you’re having companies think about that legal issue on the front end, that’s a really good thing because then we’re not going to have to spend as many public resources taking on deals that we believe are violating the laws.

Now that competition laws are being enforced, businesses have to think about them. That is a good thing! I get a similar vibe from this DMA response. It is much newer than antitrust laws in both the U.S. and E.U. and there are things about which all of the larger technology companies are seeking clarity. But it is not an inherently bad thing to have a regulatory layer, even if it means delays.

Is that not Apple’s whole vibe, anyway? It says it does not rush into things. It is proud of withholding new products until it feels it has gotten them just right. Perhaps you believe corporations are a better judge of what is acceptable than a regulatory body, but the latter serves as a check on the behaviour of the former.

Apple is not saying Europe will not get these features at all. It is only saying it is not sure it has built them in a DMA-compliant way. We do not know anything more about why that is the case at this time, and it does not make sense to speculate further until we do.

Takeshi Narabe, the Asahi Shimbun:

SoftBank Corp. announced that it has developed voice-altering technology to protect employees from customer harassment.

The goal is to reduce the psychological burden on call center operators by changing the voices of complaining customers to calmer tones.

The company launched a study on “emotion canceling” three years ago, which uses AI voice-processing technology to change the voice of a person over a phone call.

Penny Crosman, the American Banker:

Call center agents who have to deal with angry or perplexed customers all day tend to have through-the-roof stress levels and a high turnover rate as a result. About 53% of U.S. contact center agents who describe their stress level at work as high say they will probably leave their organization within the next six months, according to CMP Research’s 2023-2024 Customer Contact Executive Benchmarking Report.

Some think this is a problem artificial intelligence can fix. A well-designed algorithm could detect the signs that a call center rep is losing it and do something about it, such as send the rep a relaxing video montage of photos of their family set to music.

Here we have examples from two sides of the same problem: working in a call centre sucks because dealing with usually angry, frustrated, and miserable customers sucks. The representative probably understands why some corporate decision made the customer angry, frustrated, and miserable, but cannot really do anything about it.

So there are two apparent solutions here — the first reconstructs a customer’s voice in an effort to make them sound less hostile, and the second shows call centre employees a “video montage” of good memories as an infantilizing calming measure.

Brian Merchant wrote about the latter specifically, but managed to explain why both illustrate the problems created by how call centres work today:

If this showed up in the b-plot of a Black Mirror episode, we’d consider it a bit much. But it’s not just the deeply insipid nature of the AI “solution” being touted here that gnaws at me, though it does, or even the fact that it’s a comically cynical effort to paper over a problem that could be solved by, you know, giving workers a little actual time off when they are stressed to the point of “losing it”, though that does too. It’s the fact that this high tech cost-saving solution is being used to try to fix a whole raft of problems created by automation in the first place.

A thoughtful exploration of how A.I. is really being used which, combined with the previously linked item, does not suggest a revolution for anyone involved. It looks more like a cheap patch on society’s cracking dam.

Kif Leswing, CNBC:

Nvidia, long known in the niche gaming community for its graphics chips, is now the most valuable public company in the world.

[…]

Nvidia shares are up more than 170% so far this year, and went a leg higher after the company reported first-quarter earnings in May. The stock has multiplied by more than ninefold since the end of 2022, a rise that’s coincided with the emergence of generative artificial intelligence.

I know computing is math — even drawing realistic pictures really fast — but it is so funny to me that Nvidia’s products have become so valuable for doing applied statistics instead of for actual graphics work.

Renee Dudley and Doris Burke, reporting for ProPublica which is not, contrary to the opinion of one U.S. Supreme Court jackass justice, “very well-funded by ideological groups” bent on “look[ing] for any little thing they can find, and try[ing] to make something out of it”, but is instead a distinguished publication of investigative journalism:

Microsoft hired Andrew Harris for his extraordinary skill in keeping hackers out of the nation’s most sensitive computer networks. In 2016, Harris was hard at work on a mystifying incident in which intruders had somehow penetrated a major U.S. tech company.

[…]

Early on, he focused on a Microsoft application that ensured users had permission to log on to cloud-based programs, the cyber equivalent of an officer checking passports at a border. It was there, after months of research, that he found something seriously wrong.

This is a deep and meaningful exploration of Microsoft’s internal response to the conditions that created 2020’s catastrophic SolarWinds breach. It seems that both Microsoft and the Department of Justice knew about the weakness well before anyone else — perhaps as early as 2016 in Microsoft’s case — yet neither did anything with that information. Other things were deemed more important.

Perhaps this was simply a multi-person failure in which dozens of people at Microsoft could not see why Harris’ discovery was such a big deal. Maybe none of them could foresee this actually being exploited in the wild, or there was a failure to communicate some key piece of information. I am a firm believer in Hanlon’s razor.

On the other hand, the deep integration of Microsoft’s entire product line into sensitive systems — governments, healthcare, finance — magnifies any failure. The incompetence of a handful of people at a private corporation should not result in 18,000 infected networks.

Ashley Belanger, Ars Technica:

Microsoft is pivoting its company culture to make security a top priority, President Brad Smith testified to Congress on Thursday, promising that security will be “more important even than the company’s work on artificial intelligence.”

Satya Nadella, Microsoft’s CEO, “has taken on the responsibility personally to serve as the senior executive with overall accountability for Microsoft’s security,” Smith told Congress.

[…]

Microsoft did not dispute ProPublica’s report. Instead, the company provided a statement that almost seems to contradict Smith’s testimony to Congress today by claiming that “protecting customers is always our highest priority.”

Microsoft’s public relations staff can say anything they want. But there is plenty of evidence — contemporary and historic — showing this is untrue. Can it do better? I am sure Microsoft employs many intelligent and creative people who desperately want to change this corrupted culture. Will it? Maybe — but for how long is anybody’s guess.

Canadian Prime Minister Justin Trudeau appeared on the New York Times’ “Hard Fork” podcast for a discussion about artificial intelligence, election security, TikTok, and more.

I have to agree with Aaron Vegh:

[…] I loved his messaging on Canada’s place in the world, which is pragmatic and optimistic. He sees his job as ambassador to the world, and he plays the role well.

I just want to pull some choice quotes from the episode that highlight what I enjoyed about Trudeau’s position on technology. He’s not merely well-briefed; he clearly takes an interest in the technology, and has a canny instinct for its implications in society.

I understand Trudeau’s appearance serves as much to promote his government’s efforts in A.I. as it does to communicate any real policy positions — take a sip every time Trudeau mentions how we “need to have a conversation” about something. But I also think co-hosts Kevin Roose and Casey Newton were able to get a real sense of how the Prime Minister thinks about A.I. and Canada’s place in the global tech industry.

Albert Burneko, Defector:

“If the ChatGPT demos were accurate,” [Kevin] Roose writes, about latency, in the article in which he credits OpenAI with having developed playful intelligence and emotional intuition in a chatbot—in which he suggests ChatGPT represents the realization of a friggin’ science fiction movie about an artificial intelligence who genuinely falls in love with a guy and then leaves him for other artificial intelligences—based entirely on those demos. That “if” represents the sum total of caution, skepticism, and critical thinking in the entire article.

As impressive as OpenAI’s demo was, it is important to remember it was a commercial. True, one which would not exist if this technology were not sufficiently capable of being shown off, but it was still a marketing effort, and a journalist like Roose ought to treat it with the skepticism of one. ChatGPT is just software, no matter how thick a coat of faux humanity is painted on top of it.

Reddit:

Our policy outlines the information partners can access via a public-content licensing agreement as well as the commitments we make to users about usage of this content. It takes into account feedback from a group of moderators we consulted when developing it:

  • We require our partners to uphold the privacy of redditors and their communities. This includes respecting users’ decisions to delete their content and any content we remove for violating our Content Policy.

This always sounds like a good policy, but how does it work in practice? Is it really possible to disentangle someone’s deleted Reddit post from training data? The models which have been trained on Reddit comments will not be retrained every time posts or accounts get deleted.

There are, it seems, some good protections in this policy, and I do not want to dump on it entirely. I just do not think it is fair to imply to users that their deleted posts cannot or will not be used in artificial intelligence models.

Sherman Smith, Kansas Reflector:

Facebook’s unrefined artificial intelligence misclassified a Kansas Reflector article about climate change as a security risk, and in a cascade of failures blocked the domains of news sites that published the article, according to technology experts interviewed for this story and Facebook’s public statements.

Blake E. Reid:

The punchline of this story was, is, and remains not that Meta maliciously censored a journalist for criticizing them, but that it built a fundamentally broken service for ubiquitously intermediating global discourse at such a large scale that it can’t even cogently explain how the service works.

This was always a sufficient explanation for the Reflector situation, and one that does not require any level of deliberate censorship or conspiracy against such a small target. Yet many of those who boosted the narrative that Facebook blocks critical reporting cannot seem to shake it. I got the above link from Marisa Kabas, who commented:

They’re allowing shitty AI to run their multi-billion dollar platforms, which somehow knows to block content critical of them as a cybersecurity threat.

That is not an accurate summary of what has transpired, especially if you read it with the wink-and-nod tone I infer from its phrasing. There is plenty to criticize about the control Meta exercises and the way in which it moderates its platforms without resorting to nonsense.

Even though it has only been a couple of days since word got out that Apple was cancelling development of its long-rumoured though never confirmed car project, there has been a wave of takes explaining what this means, exactly. The uniqueness of this project was plenty intriguing because it seemed completely out of left field. Apple makes computers of different sizes, sure, but the largest surface you would need for any of them is a desk. And now the company was working on a car?

Much reporting during its development was similarly bizarre due to the nature of the project. Instead of leaks from within the technology industry, sources were found in auto manufacturing. Public records requests were used by reporters at the Guardian, IEEE Spectrum, and Business Insider — among others — to get a peek at its development in a way that is not possible for most of Apple’s projects. I think the unusual nature of it has broken some brains, though, and we can see that in coverage of its apparent cancellation.

Mark Gurman, of Bloomberg, in an analysis supplementing the news he broke of Project Titan’s demise. Gurman writes that Apple will now focus its development efforts on generative “A.I.” products:

The big question is how soon AI might make serious money for Apple. It’s unlikely that the company will have a full-scale AI lineup of applications and features for a few years. And Apple’s penchant for user privacy could make it challenging to compete aggressively in the market.

For now, Apple will continue to make most of its money from hardware. The iPhone alone accounts for about half its revenue. So AI’s biggest potential in the near term will be its ability to sell iPhones, iPads and other devices.

These paragraphs, from perhaps the highest-profile reporter on the Apple beat, present the company’s usual strategy for pretty much everything it makes as a temporary measure until it can — uhh — do what, exactly? What is the likelihood that Apple sells access to generative services to people who do not have its hardware products? Those odds seem very, very poor to me, and I do not understand why Gurman is framing this in the way he is.

While it is true a few Apple services are available to people who do not use the company’s hardware products, they are mostly media subscriptions. It does not make sense to keep people from legally watching the expensive shows it makes for Apple TV Plus. iCloud features are also available outside the hardware ecosystem but, again, that seems more like a pragmatic choice for syncing. Generative “A.I.” does not fit those models and it is not, so far, a profit-making endeavour. Microsoft and OpenAI are both losing money every time their products are used, even by paying customers.

I could imagine some generative features coming to Pages or Keynote at iCloud.com, but only because they were also added to native applications that are only available on Apple’s platforms. But Apple still makes the vast majority of its money by selling computers to people; its services business is mostly built on those customers adding subscriptions to their Apple-branded hardware.

“A.I.” features are likely just that: features, existing in a larger context. If Apple wants, it can use them to make editing pictures better in Photos, or make Siri somewhat less stupid. It could also use trained models to make new products; Gurman nods toward the Vision Pro’s Persona feature as something which uses “artificial intelligence”. But the likelihood of Apple releasing high-profile software features separate and distinct from its hardware seems impossibly low. It has built its SoCs specifically for machine learning, after all.

Speaking of new products, Brian X. Chen and Tripp Mickle, of the New York Times, wrote a decent insiders’ narrative of the car’s development and cancellation. But this paragraph seems, quite simply, wrong:

The car project’s demise was a testament to the way Apple has struggled to develop new products in the years since Steve Jobs’s death in 2011. The effort had four different leaders and conducted multiple rounds of layoffs. But it festered and ultimately fizzled in large part because developing the software and algorithms for a car with autonomous driving features proved too difficult.

I do not understand on what basis Apple “has struggled to develop new products” in the last thirteen years. Since 2011, Apple has introduced the Apple Watch, AirPods, Vision Pro, migrated Macs to in-house SoCs causing an industry-wide reckoning, and added a bevy of services. And those are just the headlining products; there are also HomePods and AirTags, Macs with Retina displays, iPhones with facial recognition, and a range of iPads that support the Apple Pencil, itself a new product. None of those things existed before 2011.

These products are not all wild success stories, and some of them need a lot of work to feel great. But that list disproves the idea that Apple has “struggled” with launching new things. If anything, there has been a steady narrative over that same period that Apple has too many products. The rest of this Times report seems fine, but this one paragraph — and, really, just the first sentence — is simply incorrect.

These are all writers who cover Apple closely. They are familiar with the company’s products and strategies. These takes feel like they were written without any of that context or understanding, and it truly confuses me how any of them finished writing these paragraphs and thought they accurately captured a business they know so much about.

Mark Gurman, Bloomberg:

Apple Inc., racing to add more artificial intelligence capabilities, is nearing the completion of a critical new software tool for app developers that would step up competition with Microsoft Corp.

The company has been working on the tool for the last year as part of the next major version of Xcode, Apple’s flagship programming software. It has now expanded testing of the features internally and has ramped up development ahead of a plan to release it to third-party software makers as early as this year, according to people with knowledge of the matter.

“Racing”, in the sense that it has been developing this for at least a year, and its release will likely coincide with WWDC — if it does actually launch this year. Gurman’s sources seem to be fuzzy on that timeline, only noting Apple could release this new version of Xcode “as early as this year”, which is the kind of commitment to a deadline a company makes if it is, indeed, “racing”.

Sixth paragraph:

Apple shares, which had been down as much 1.5%, briefly turned positive on the news. They were little changed at the close Thursday, trading at $183.86. Microsoft fell less than 1% to $406.56.

Some things never change.

Justin Ling, on January 12:

In a hour-long special, I’m Glad I’m Dead, [George] Carlin returns to talk reality TV, AI, billionaires, being dead, mass shootings, and Trump.

It premiered to horrified reviews. Carlin’s daughter called the special an affront to her father: “Humans are so afraid of the void that we can’t let what has fallen into it stay there,” she wrote on Twitter. Major media outlets breathlessly reported on the special, wondering if it was set to harken in a new era of soulless automation.

This week, on a very special Bug-eyed and Shameless, we investigate the Scooby Doo-esque effort to bring George Carlin back from the dead — and prank the media in the process.

Ling was one of the few reporters I saw who did not take at face value the claim that the special was a product of generative “artificial intelligence”. Just one day after exhaustive coverage of its release, Ling published this more comprehensive investigation showing how it was clearly not a product of “A.I.” — and he was right. That does not absolve Dudesy of creating this mockery of Carlin’s work in his name and likeness, but the technological story is simply false.

Cory Doctorow:

The modern Mechanical Turk — a division of Amazon that employs low-waged “clickworkers,” many of them overseas — modernizes the dumbwaiter by hiding low-waged workforces behind a veneer of automation. The MTurk is an abstract “cloud” of human intelligence (the tasks MTurks perform are called “HITs,” which stands for “Human Intelligence Tasks”).

This is such a truism that techies in India joke that “AI” stands for “absent Indians.” Or, to use Jathan Sadowski’s wonderful term: “Potemkin AI”:

https://reallifemag.com/potemkin-ai/

This Potemkin AI is everywhere you look. […]

Doctorow is specifically writing about human endeavours falsely attributed to machines, but the efforts of real people are also what makes today’s so-called “A.I.” services work, something I have often highlighted here. There is nothing wrong, per se, with human labour powering supposed automation, other than the poor and unstable wages those workers are paid. But there is a yawning chasm between how these products are portrayed in marketing and at the user interface level, the sight of which makes investors salivate, and what is happening behind the scenes.

By the way, I was poking around earlier today trying to remember the name of the canned Facebook phone and I spotted the Wikipedia article for M. M was a virtual assistant launched by then-Facebook in 2015, and eventually shut down in 2018. According to the BBC, up to 70% of M’s responses were from human beings, not software.

Camilla Hodgson, Financial Times (syndicated at Ars Technica):

AI models are “trained” on data, such as photographs and text found on the internet. This has led to concern that rights holders, from media companies to image libraries, will make legal claims against third parties who use the AI tools trained on their copyrighted data.

The big three cloud computing providers have pledged to defend business customers from such intellectual property claims. But an analysis of the indemnity clauses published by the cloud computing companies show that the legal protections only extend to the use of models developed by or with oversight from Google, Amazon and Microsoft.

Ira Rothken, Techdirt:

Here’s the crux: the LLM itself can’t predict the user’s intentions. It simply processes patterns based on prompts. The LLM learning machine and idea processor shouldn’t be stifled due to potential user misuse. Instead, in the rare circumstances when there is a legitimate copyright infringement, users ought to be held accountable for their prompts and subsequent usage and give the AI LLM “dual use technology” developers the non-infringing status of the VCR manufacturer under the Sony Doctrine.

It seems there are two possible points of copyright infringement: input and output. I find the latter so much more interesting.

It seems, to me, to depend on how much of a role machine learning models play in determining what is produced, and I find that fascinating. These models have been marketed as true artificial intelligence but, in their defence, are often compared to photocopiers — and there is a yawning chasm between those perspectives. It makes sense for Xerox to bear zero responsibility if someone uses one of its machines to duplicate an entire book. Taking it up a notch, I have no idea if a printer manufacturer might be found culpable for permitting the counterfeiting of currency — I am not a lawyer — but it is noteworthy that anti-duplication measures have been present in scanners and printers for decades, yet Bloomberg reported in 2014 that around 60% of fake U.S. currency was made on home-style printers.

But those are examples of strict duplication — these devices have very little in the way of a brain, and the same is true of a VHS recorder. Large language models and other forms of generative “intelligence” are a little bit different. Somewhere, something like a decision happens. It seems plausible an image generator could produce a result uncomfortably close to a specific visual style without direct prompting by the user, or it could clearly replicate something. In that case, is it the fault of the user or the program, even if it goes unused and mostly unseen?

To emphasize again: I am not a lawyer, while Rothken is, so I am just talking out of my butt. All I want to highlight is that these tools are raising some interesting questions. Fascinating times ahead.

When I was much younger, I assumed people who were optimistic must have misplaced confidence. How anyone could see a future so bright was a complete mystery, I reasoned, when what we are exposed to is a series of mistakes and then attempts at correction from public officials, corporate executives, and others. This is not conducive to building hope — until I spotted the optimistic part: in the efforts to correct the problem and, ideally, in preventing the same things from happening again.

If you measure your level of optimism by how much course-correction has been working, then 2023 was a pretty hopeful year. In the span of about a decade, a handful of U.S. technology firms have solidified their place among the biggest and most powerful corporations in the world, so nobody should be surprised by a parallel increase in pushback for their breaches of public trust. New regulations and court decisions are part of a democratic process which is giving more structure to the ways in which high technology industries are able to affect our lives. Consider:

That is a lot of change in one year and not all of it has been good. The Canadian government went all-in on the Online News Act which became a compromised disaster; there are plenty of questions about the specific ways the DMA and DSA will be enforced; Montana legislators tried to ban TikTok.

It is also true and should go without saying that technology companies have done plenty of interesting and exciting things in the past year; they are not cartoon villains in permanent opposition to the hero regulators. But regulators are also not evil. New policies and legal decisions which limit the technology industry — like those above — are not always written by doddering out-of-touch bureaucrats and, just as importantly, businesses are not often trying to be malevolent. For example, Apple has arguably good reasons for software validation of repairs; it may not be intended to prevent users from easily swapping parts, but that is the effect its decision has in the real world. What matters most to users is not why a decision was made but how it is experienced. Regulators should anticipate problems before they arise and correct course when new ones show up.

This back-and-forth is something I think will ultimately prove beneficial, though it will not happen in a straight line. It has encouraged a more proactive dialogue for limiting known negative consequences in nascent technologies, like avoiding gender and racial discrimination in generative models, and building new social environments with less concentrated power. Many in the tech industry love to be the disruptor; now, the biggest among them are being disrupted, and it is making things weird and exciting.

These changes do not necessarily need to come from regulatory bodies. Businesses are able to make things more equitable for themselves, should they so choose. They can be more restrictive about what is permitted on their platforms. They can empower trust and safety teams to assess how their products and services are being used in the real world and adjust them to make things better.

Mike Masnick, Techdirt:

Let’s celebrate actual tech optimism in the belief that through innovation we can actually seek to minimize the downsides and risks, rather than ignore them. That we can create wonderful new things in a manner that doesn’t lead many in the world to fear their impact, but to celebrate the benefits they bring. The enemies of techno optimism are not things like “trust and safety,” but rather the naive view that if we ignore trust and safety, the world will magically work out just fine.

There are those who believe “the arc of the universe […] bends toward justice” is a law which will inevitably be correct regardless of our actions, but it is more realistic to view that as a call to action: people need to bend that arc in the right direction. There are many who believe corporations can generally regulate themselves on these kinds of issues, and I do too — to an extent. But I also believe the conditions by which corporations are able to operate are an ongoing negotiation with the public. In a democracy, we should feel like regulators are operating on our behalf, and much of the policy and legal progress made last year certainly does. This year can be more of the same if we want it to be. We do not need to wait for Meta or TikTok to get better at privacy on their own terms, for example. We can just pass laws.

As I wrote at the outset, the way I choose to be optimistic is to look at all of the things which are being done to correct imbalances and repair injustices. Some of those corrections are being made by businesses big and small; many of them have advertising and marketing budgets celebrating their successes to the point where it is almost unavoidable. But I also look at the improvements made by those working on behalf of the public, like the list above. The main problem I have with most of them is how they have been developed on a case-by-case basis which, while setting precedent, is a fragile process open to frequent changes.

That is true, too, for self-initiated changes. Take Apple’s self-repair offerings, which it seems to have introduced in response to years of legislative pressure. It has made parts, tools, and guides available in the United States and in a more limited capacity across the E.U., but not elsewhere. Information and kits are available not from Apple’s own website, but from a janky-looking third party. It can stop making this stuff available at any time in areas where it is not legally obligated to provide these resources, which is another reason why it sucks for parts to require software activation. In 2023, Apple made its configuration tools more accessible, but only in regions where its self-service repair program is provided.

People ought to be able to have expectations — for repairs, privacy, security, product reliability, and more. The technology industry today is so far removed from its hackers-in-a-garage lore. Its biggest players are among the most powerful businesses in the world, and should be regulated in that context. That does not necessarily mean a whole bunch of new rules and bureaucratic micromanagement, but we ought to advocate for structures which balance the scales in favour of the public good.

If there was one technology story we will remember from 2023, it was undeniably the near-vertical growth trajectory of generative “artificial intelligence” products. It is everywhere, and it is being used by normal people globally. Yet it is, for all intents and purposes, a nascent sector, and that makes this a great time to set some standards for its responsible development and, more importantly, its use. Nobody is going to respond to this perfectly — not regulators and not the companies building these tools. But they can work together to set expectations and standards for known and foreseeable problems. It seems like that is what is happening in the E.U. and the United States.

That is how I am optimistic about technology now.

Benjamin Mullin and Tripp Mickle, New York Times:

Apple has opened negotiations in recent weeks with major news and publishing organizations, seeking permission to use their material in the company’s development of generative artificial intelligence systems, according to four people familiar with the discussions.

This is very different from the way existing large language models have been trained.

Kali Hays, of Business Insider, in November:

Most tech companies seemed to agree that being required to pay for the huge amounts of copyrighted material scraped from the internet and used to train large language models behind AI tools like Meta’s Llama, Google’s Bard, and OpenAI’s ChatGPT would create an impossible hurdle to develop the tech.

“Generative AI models need not only a massive quantity of content, but also a large diversity of content,” Meta wrote in its comment. “To be sure, it is possible that AI developers will strike deals with individual rights holders, to develop broader partnerships or simply to buy peace from the threat of litigation. But those kinds of deals would provide AI developers with the rights to only a minuscule fraction of the data they need to train their models. And it would be impossible for AI developers to license the rights to other critical categories of works.”

If it were necessary to license published materials for training large language models, it would necessarily limit the viability of those models to those companies which could afford the significant expense. Mullin and Mickle report Apple is offering “at least $50 million”. Then again, large technology companies are already backing the “A.I.” boom.

Mullin and Mickle:

The negotiations mark one of the earliest examples of how Apple is trying to catch up to rivals in the race to develop generative A.I., which allows computers to create images and chat like a human. […]

Tim Bradshaw, of the Financial Times, as syndicated by Ars Technica:

Apple’s latest research about running large language models on smartphones offers the clearest signal yet that the iPhone maker plans to catch up with its Silicon Valley rivals in generative artificial intelligence.

The paper, entitled “LLM in a Flash,” offers a “solution to a current computational bottleneck,” its researchers write.

Both writers frame this as Apple needing to “catch up” to Microsoft — which licenses generative technology from OpenAI — Meta, and Google. But surely this year has demonstrated both how exciting this technology is and how badly some of these companies have fumbled their use of it — from misleading demos to “automated bullshit”. I have no idea how Apple’s entry will fare in comparison, but it may, in retrospect, look wise to have dodged this kind of embarrassment and the legal questions facing today’s examples.
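For a rough sense of what “LLM in a Flash” seems to be getting at, which is keeping model weights in storage and pulling only the needed slices into memory at inference time, here is a toy sketch in Python. The file name, shapes, and row selection are invented for illustration; this shows the broad memory-mapping idea, not Apple’s actual method:

```python
# Toy illustration of streaming weights from storage on demand instead of
# holding them all in RAM. File name, shapes, and values are invented; this
# is not the paper's technique, only the general concept.
import numpy as np

ROWS, COLS = 4096, 4096

# Write a stand-in weight matrix to disk once, as if it were too big for memory.
weights = np.memmap("weights.bin", dtype=np.float16, mode="w+", shape=(ROWS, COLS))
weights[:] = 0.01
weights.flush()


def project(x: np.ndarray, active_rows: np.ndarray) -> np.ndarray:
    """Multiply the input by only the rows expected to matter, reading
    them from disk on demand rather than loading the whole matrix."""
    on_disk = np.memmap("weights.bin", dtype=np.float16, mode="r", shape=(ROWS, COLS))
    selected = np.asarray(on_disk[active_rows])  # only these rows get paged in
    return selected @ x


x = np.ones(COLS, dtype=np.float16)
active = np.arange(64)  # pretend a predictor chose these rows as worth loading
print(project(x, active).shape)  # -> (64,)
```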

Bruce Schneier, Slate:

Knowing that they are under constant surveillance changes how people behave. They conform. They self-censor, with the chilling effects that brings. Surveillance facilitates social control, and spying will only make this worse. Governments around the world already use mass surveillance; they will engage in mass spying as well.

Corporations will spy on people. Mass surveillance ushered in the era of personalized advertisements; mass spying will supercharge that industry. Information about what people are talking about, their moods, their secrets — it’s all catnip for marketers looking for an edge. The tech monopolies that are currently keeping us all under constant surveillance won’t be able to resist collecting and using all of that data.

And Schneier on his blog, a republished transcript of a September talk at Harvard:

In this talk, I am going to make several arguments. One, that there are two different kinds of trust—interpersonal trust and social trust—and that we regularly confuse them. Two, that the confusion will increase with artificial intelligence. We will make a fundamental category error. We will think of AIs as friends when they’re really just services. Three, that the corporations controlling AI systems will take advantage of our confusion to take advantage of us. They will not be trustworthy. And four, that it is the role of government to create trust in society. And therefore, it is their role to create an environment for trustworthy AI. And that means regulation. Not regulating AI, but regulating the organizations that control and use AI.

If you only have time for one of these, I recommend the latter. It is more expansive and more thoughtful, and it makes me reconsider how regulatory framing ought to work for these technologies.

Both are great, however, and worth your time.