Big Tech

The alternative — a mass exodus of OpenAI’s top talent to Microsoft — would have been worse.
The seismic shake-up at OpenAI — involving the firing and, ultimately, the reinstatement of CEO Sam Altman — came as a shock to almost everyone. But the truth is, the company was probably always going to reach a breaking point. It was built on a fault line so deep and unstable that eventually, stability would give way to chaos.
That fault line was OpenAI’s dual mission: to build AI that’s smarter than humanity, while also making sure that AI would be safe and beneficial to humanity. There’s an inherent tension between those goals because advanced AI could harm humans in a variety of ways, from entrenching bias to enabling bioterrorism. Now, the tension in OpenAI’s mandate appears to have helped precipitate the tech industry’s biggest earthquake in decades.
On Friday, the board fired Altman over an alleged lack of transparency, and company president Greg Brockman then quit in protest. On Saturday, the pair tried to get the board to reinstate them, but negotiations didn’t go their way. By Sunday, both had accepted jobs with major OpenAI investor Microsoft, where they would continue their work on cutting-edge AI. By Monday, 95 percent of OpenAI employees were threatening to leave for Microsoft, too.
Late Tuesday night, OpenAI announced, “We have reached an agreement in principle for Sam Altman to return to OpenAI as CEO with a new initial board.”
As chaotic as all this was, the aftershocks for the AI ecosystem might have been scarier if the shake-up had ended with a mass exodus of OpenAI employees, as it appeared poised to do a few days ago. A flow of talent from OpenAI to Microsoft would have meant a flow from a company that had been founded on worries about AI safety to a company that can barely be bothered to pay lip service to the concept.
So at the end of the day, did OpenAI’s board make the right decision when it fired Altman? Or did it make the right decision when it rehired him?
The answer may well be “yes” to both.
OpenAI is not a typical tech company. It has a unique structure, and that structure is key to understanding the current shake-up.
The company was originally founded as a nonprofit focused on AI research in 2015. But in 2019, hungry for the resources it would need to create AGI — artificial general intelligence, a hypothetical system that can match or exceed human abilities — OpenAI created a for-profit entity. That allowed investors to pour money into OpenAI and potentially earn a return on it, though their profits would be capped, according to the rules of the new setup, and anything above the cap would revert to the nonprofit. Crucially, the nonprofit board retained the power to govern the for-profit entity. That included hiring and firing power.
The board’s job was to make sure OpenAI stuck to its mission, as expressed in its charter, which states clearly, “Our primary fiduciary duty is to humanity.” Not to investors. Not to employees. To humanity.
The charter also states, “We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions.” Yet it paradoxically adds, “To be effective at addressing AGI’s impact on society, OpenAI must be on the cutting edge of AI capabilities.”
This reads a lot like: We’re worried about a race where everyone’s pushing to be at the front of the pack. But we’ve got to be at the front of the pack.
Each of those two impulses found an avatar in one of OpenAI’s leaders. Ilya Sutskever, an OpenAI co-founder and top AI researcher, reportedly worried that the company was moving too fast, trying to make a splash and a profit at the expense of safety. Since July, he’s co-led OpenAI’s “Superalignment” team, which aims to figure out how to manage the risk of superintelligent AI.
Altman, meanwhile, was moving full steam ahead. Under his tenure, OpenAI did more than any other company to catalyze an arms race dynamic, most notably with the launch of ChatGPT last November. More recently, Altman was reportedly fundraising with autocratic regimes in the Middle East like Saudi Arabia so he could spin up a new AI chip-making company. That in itself could raise safety concerns, since such regimes might use AI to supercharge digital surveillance or human rights abuses.
We still don’t know exactly why the OpenAI board fired Altman. The board has said that he was “not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.” Sutskever, who spearheaded Altman’s ouster, initially defended the move in similar terms: “This was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity,” he told employees at an all-hands meeting hours after the firing. (Sutskever later flipped sides, however, and said he regretted participating in the ouster.)
“Sam Altman and Greg Brockman seem to be of the view that accelerating AI can achieve the most good for humanity. The plurality of the [old] board, however, appears to be of a different view that the pace of advancement is too fast and could compromise safety and trust,” said Sarah Kreps, director of the Tech Policy Institute at Cornell University.
“I think that the board made the only decision they felt like they could make” in firing Altman, AI expert Gary Marcus told me. “I think they saw something from Sam that they thought they could not live with and stay true to their mission. So in their eyes, they made the right choice.”
Before OpenAI agreed to reinstate Altman, Kreps worried that “the board may have won the battle but lost the war.”
In other words, if the board fired Altman in part over concerns that his accelerationist impulse was jeopardizing the safety part of OpenAI’s mission, it won the battle, in that it did what it could to keep the company true to the mission.
But had the saga ended with the coup pushing OpenAI’s top talent straight into the arms of Microsoft, the board would have lost the larger war — the effort to keep AI safe for humankind. Which brings us to …
Altman’s firing caused an unbelievable amount of chaos. According to futurist Amy Webb, the CEO of the Future Today Institute, OpenAI’s board had failed to practice “strategic foresight” — to understand how its sudden dismissal of Altman might cause the company to implode and might reverberate across the larger AI ecosystem. “You have to think through the next-order implications of your actions,” she told me.
It’s certainly possible that Sutskever did not predict the threat of a mass exodus that could have ended OpenAI altogether. But another board member behind the ouster, Helen Toner — whom Altman had castigated over a paper she co-wrote that appeared to criticize OpenAI’s approach to safety — did understand that was a possibility. And it was a possibility she was prepared to stomach, if that was what would best safeguard humanity’s interests — which, remember, was the board’s job. She said that if the company was destroyed as a result of Altman’s firing, that could be consistent with its mission, the New York Times reported.
However, once Altman and Brockman announced they were joining Microsoft and the OpenAI staff threatened a mass exodus, too, that may have changed the board’s calculation: Keeping them in house was arguably better than this new alternative. Sending them straight into Microsoft’s arms would probably not bode well for AI safety.
After all, Microsoft laid off its entire AI ethics team earlier this year. When Microsoft CEO Satya Nadella teamed up with OpenAI to embed its GPT-4 into Bing search in February, he taunted competitor Google: “We made them dance.” And upon hiring Altman, Nadella tweeted that he was excited for the ousted leader to set “a new pace for innovation.”
Pushing out Altman and OpenAI’s top talent would have meant that “OpenAI can wash its hands of any responsibility for any possible future missteps on AI development but can’t stop it from happening,” Kreps said. “The developments show just how dynamic and high-stakes the AI space has become, and that it’s impossible either to stop or contain the progress.”
Impossible may be too strong a word. But containing the progress would require changing the underlying incentive structure in the AI industry, and that has proven extremely difficult in the context of hyper-capitalist, hyper-competitive, move-fast-and-break-things Silicon Valley. Being at the cutting edge of tech development is what earns profit and prestige, but that does not lend itself to slowing down, even when slowing down is strongly warranted.
Under Altman, OpenAI tried to square this circle by arguing that researchers need to play with advanced AI to figure out how to make advanced AI safe — so accelerating development is actually helpful. That was tenuous logic even a decade ago, but it doesn’t hold up today, when we’ve got AI systems so advanced and so opaque (think: GPT-4) that many experts say we need to figure out how they work before we build more black boxes that are even more unexplainable.
OpenAI had also run into a more prosaic problem that made it susceptible to taking a profit-seeking path: It needed money. To run large-scale AI experiments these days, you need a ton of computing power — more than 300,000 times what you needed a decade ago — and that’s incredibly expensive. So to stay at the cutting edge, it had to create a for-profit arm and partner with Microsoft. OpenAI wasn’t alone in this: The rival company Anthropic, which former OpenAI employees spun up because they wanted to focus more on safety, started out by arguing that we need to change the underlying incentive structure in the industry, but it ended up joining forces with Amazon.
Given all this, is it even possible to build an AI company that advances the state of the art while also truly prioritizing ethics and safety?
“It’s looking like maybe not,” Marcus said.
Webb was even more direct, saying, “I don’t think it’s possible.” Instead, she emphasized that the government needs to change the underlying incentive structure within which all these companies operate. That would include a mix of carrots and sticks: positive incentives, like tax breaks for companies that prove they’re upholding the highest safety standards; and negative incentives, like regulation.
In the meantime, the AI industry is a Wild West, where each company plays by its own rules. OpenAI lives to play another day.
Update, November 22, 11:30 am ET: This story was originally published on November 21 and has been updated to reflect Altman’s reinstatement at OpenAI.
Big Tech

What we learned (and didn’t learn) from the big Google antitrust trial.
The Google search antitrust trial is expected to wrap up by Thanksgiving. And while we’ll have to wait until next year for a verdict, there are a few things we learned over the last two months of the first big test of the limits of Big Tech’s power.
The Department of Justice is accusing Google of using its monopoly over internet search to freeze out its competitors — real or potential. Instead of innovating and putting out a superior product that users prefer, as Google insists it does, the government says the company is resting on its laurels and paying off manufacturers, carriers, and browser developers to make Google the default search engine across countless devices and operating systems. That’s why, when you search for something on Safari or Firefox, ask Siri a question, or type something into the search widget that came pre-installed on your Samsung Galaxy’s home screen, Google is powering that search. And although you can always change it to a different search engine, the DOJ maintains that most people don’t know they can or don’t know how, creating an exclusionary barrier to entry.
Part of the problem is that Google pays billions of dollars every year for default placement, a price almost none of its competitors — if it really even has any — can afford. That helps Google make many more billions of dollars off the ads on those search results. Having as many people using Google Search as much as possible is what makes the company’s search engine so attractive to advertisers, and the majority of Google’s revenue comes from those search ads. The incredible amount of data Google collects from those trillions of searches also helps it monetize some of its other services and gives it a major competitive edge over other search providers. Knowing what everyone everywhere wants to know all the time has made Google one of the most valuable companies in the world.
Over the course of the trial, we learned a little more about the lengths Google has gone to in order to stay on top and boost revenue, and how hard it is for other search engines to gain a foothold. We don’t know as much as we could, though, because Google has worked just as hard to keep as much information as possible away from the public.
Is Google using its dominant search market position to illegally freeze out competition, giving users a worsening search experience and advertisers less bang for more bucks because there’s no other game in town? Or is Google simply offering the best experience possible, without the added hassle of having to wade through a pesky choice screen the first time users open a search app?
We’ll find out what a judge thinks in a few months. In the meantime, here’s what we learned in the landmark trial, the result of which may change your internet experience.
Midway through the trial, Judge Amit Mehta unredacted part of a slide that showed how much Google pays out on those default search agreements. And it’s a lot! In 2021, the most recent year available, Google gave $26.3 billion to companies like Apple, Verizon, Samsung, and Mozilla. Google’s search ad revenue that year, which is also on the slide, was $146.4 billion. (In 2014, the first year these numbers are available, Google paid $7.1 billion and made $46.8 billion.) Not a bad return on the company’s investment. It’s also a high bar that no competitor, except maybe Microsoft, could hope to reach — more on that later.
Google’s revenue-sharing deal with Apple was a major part of the trial because Apple is believed to get the bulk of what Google pays out in those agreements. Having a default search placement on Apple devices, which make up roughly half of the smartphone market in the US, is extremely important to Google. We’ve known for years that Google pays Apple for that default placement — this also stops Apple from developing its own search engine — but that’s about it. While Google tried to keep virtually everything about the deal away from the public, we still got a few new details.
In an apparent slip-up, Google’s own witness in the waning days of the trial told us how much of Google’s ad revenue Apple gets: 36 percent for searches done on its Safari browser. The monetary value of that 36 percent is still a mystery. Judge Mehta did not disclose how big Apple’s slice of the $26.3 billion pie is, allowing the DOJ only to say it’s “more than $10 billion.” But the New York Times, citing internal Google sources, put it at $18 billion.
We didn’t just find out some of Google’s secrets; a few things about Apple came out, too. Apple’s senior vice president John Giannandrea testified that his company talked to Microsoft about buying Bing in 2018. Apple ultimately decided against it, but not before using the possibility as leverage in its search default negotiations with Google, something Microsoft is still pretty sore about. Apple executive Eddy Cue testified that the company chooses Google to be the default search because it believes Google is the best for its users. But speaking of Bing …
Multiple Microsoft executives, including CEO Satya Nadella, testified that Microsoft really, really wanted to make Bing the default search on Apple devices, to the point where it was willing to lose billions of dollars a year for the privilege. Samsung and Verizon, the trial also revealed, essentially refused to even negotiate with Microsoft over changing their search defaults to Bing. Perhaps they were thinking of Mozilla’s experience switching from Google to Yahoo. Mozilla CEO Mitchell Baker testified that Yahoo offered more money and fewer ads, so Mozilla’s Firefox browser switched the default from Google to Yahoo in 2014. Mozilla switched back to Google a few years later, which Baker attributed to Google’s search being better for its users, echoing the point that Google emphasized in its defense.
When Microsoft was the dominant player in web browsers, Google didn’t think search engine defaults were so great, and said as much in newly revealed documents. In 2005, former Google lawyer David Drummond warned Microsoft, at that point just a few years removed from its own antitrust woes, that making Microsoft’s search engine the default on its (then market-leading) Internet Explorer browser would be a bad look to antitrust regulators, and Google might sue Microsoft over it.
We got a few glimpses of how Google milks or manipulates its search engine for additional revenue. In March 2019, the company was trying to figure out what to do about the possibility that it wouldn’t meet its search revenue targets due to a “softness” in search queries. An email from then-head of search, Ben Gomes, expressed concern over how his division was “getting too involved with ads” and that he was “deeply deeply uncomfortable” over the prospect of increasing the number of search queries (and therefore the number of ads served) by degrading the user experience. There’s no evidence Google actually did or asked for this, and Gomes testified that he was discussing things the company would never actually do. Gomes stepped down as head of search in 2020. He was replaced by Prabhakar Raghavan, who was previously the head of Google’s ad business.
Perhaps more damning was an admission from Jerry Dischler, Google’s current head of ads, that the company has tweaked search ad auctions in ways that may increase prices to advertisers by 5 or even 10 percent so that Google could meet its revenue goals. Dischler said Google didn’t tell advertisers about the changes. They know now!
“I think that that’s a critical fact,” Lee Hepner, legal counsel at the American Economic Liberties Project, an antitrust advocacy group, told Vox. “Not just because it’s kind of surprising that they’re doing this without advertisers’ knowledge, but also because it’s indicative of Google’s monopoly power in the search ad market if they are able to raise prices on advertisers without losing market share.”
Google told Vox that the company “invest[s] significantly in ads quality to continuously improve on our ability to show ads that are highly relevant to people, and helpful to what they’re searching for.” That includes, the company said, “implementing quality thresholds for advertisers, which help eliminate irrelevant ads and have been a widely known part of the auction for more than a decade.”
While the trial revealed more of the company’s inner workings than it might have liked, Google was able to keep a lot of things secret. A good amount of testimony took place behind closed doors, and many documents were redacted in whole or in part. An attempt to give the public remote access to the trial through an audio feed was denied.
There’s also a lot that we’ll never see because it doesn’t exist or is legally protected. Google executives sometimes turned their chat histories off to avoid leaving a paper trail, or copied attorneys on emails they didn’t need to be on to keep them protected under attorney-client privilege. They also made sure not to use certain words that would get the attention of antitrust regulators. Google said that it has “produced over four million documents, including thousands of chats” over the course of this case.
“You get the impression that Google’s strategy for avoiding antitrust scrutiny is not to avoid engaging in antitrust violations, but to avoid talking about engaging in those antitrust violations,” Hepner said.
If one Google antitrust trial isn’t enough for you, you’re in luck: Google’s currently fighting another antitrust lawsuit in California over its Play store, and the trial over the DOJ’s other antitrust lawsuit against Google, over its digital advertising business, should begin in March of next year.
Judge Mehta’s verdict should come out early next year. If he finds in favor of the DOJ, we’ll get the next phase of the trial, where the judge decides what Google’s punishment should be. That could be anything from barring Google from making default search agreements to ordering the breakup of the company. Whatever happens, the verdict surely won’t be the last word in the case. No matter who wins or loses, Google’s big antitrust case will likely be appealed, possibly up to the Supreme Court.
Update, November 16, 10:15 am: This article was published on November 16 and has been updated to include comments from Google about ad pricing and documents.
Big Tech

Which search engine do you use, and why is it Google? A judge will soon decide.
The first big trial of the modern Big Tech antitrust movement is here: On September 12, the Justice Department’s lawsuit against Google’s search engine monopoly began. What’s at stake? Oh, nothing much — just the future of the internet. Or maybe the future of antitrust law in the US. Maybe both.
This is the first antitrust trial that goes after a Big Tech company’s business practices since the DOJ took on Microsoft in the late ’90s, and it’s the first in a set of antitrust lawsuits against dominant tech platforms from federal and state antitrust enforcers that will play out in the next few months. Those include the DOJ and state attorneys general’s lawsuits against Google over its ad tech business, the FTC’s case against Meta over its acquisitions of Instagram and WhatsApp, and the FTC’s lawsuit against Amazon over its marketplace platform. Apple might even catch a lawsuit, too. The outcomes of these cases, starting with this one, will tell us if our antitrust laws, written decades before the internet existed and tried before an increasingly business-friendly justice system, can be applied to dominant digital platforms’ business practices now.
“If the DOJ loses, it becomes a very serious question of what’s it going to take,” Harold Feld, senior vice president at Public Knowledge, an open internet advocacy group, said. “Other than an act of Congress, is there any way that a court is going to apply the antitrust laws to these new business models and new technologies?”
That is to say, this case may change how much power those platforms have over us and how they’re allowed to wield it. And it all boils down to a simple question: Which search engine do you use, and why?
The first part of this isn’t in dispute. If you’re like 90 percent of Americans, it’s Google, which has been synonymous with internet search for decades. The “why” is where the fight is. Google says it’s because it’s the best search engine out there. The DOJ and attorneys general from almost every state and territory in the country say it’s because Google pays a host of companies — everyone from Apple to Verizon — billions of dollars a year to make its search the default on the vast majority of devices and browsers. While Google has refused to give the exact amount, it was revealed during the trial that it paid $26.3 billion in 2021 alone, and made $146.4 billion in revenue for search ads in that period. The majority of that money is believed to go to Apple.
Most of us probably take search engines for granted at this point, but they’re still a hugely important part of how the internet works. The proof is Google, which in just 25 years has grown into a $1.7 trillion company that owns major swaths of what we do online. It was all built on that search engine, which remains Google’s biggest revenue generator even now. Search ads were nearly 60 percent of the company’s revenue in 2022, to the tune of $162.45 billion. And that doesn’t count all the other ways Google can and does monetize its exclusive knowledge of what most of the world wants to know all the time.
Ironically enough, it was another tech company’s antitrust woes that helped Google emerge in the first place: Microsoft.
A few decades ago, your internet experience almost certainly began with Microsoft’s Internet Explorer, as was the case for up to 95 percent of internet users when the browser was at its early 2000s peak. But that market share didn’t happen because Internet Explorer was better, the DOJ contended in its 1998 antitrust lawsuit against the company. It was because Microsoft leveraged its dominance over computer operating systems to force its browser onto users.
Internet Explorer was bundled with Microsoft’s Windows operating system, and Microsoft ensured it was just about impossible to remove. Installing an alternate browser was technically possible but difficult, so most people didn’t bother. This killed off most of Internet Explorer’s competitors and gave Microsoft a monopoly over internet browsers that was similar to the one it enjoyed over computer operating systems. And that, the DOJ said, was an abuse of Microsoft’s monopoly power.
The US District Court for the District of Columbia agreed and ordered Microsoft to be broken up into two companies. But a higher court overturned part of that ruling, and the DOJ subsequently settled with Microsoft. The company got to stay in one piece, but it paid a price. While Microsoft was tied up in court, paying billions in fines, afraid to make any major moves that could incur more government wrath and no longer allowed to gatekeep the internet through its browser, new companies like Google emerged.
Now, the DOJ says, the cycle is repeating. But Google is the one that is using its dominance to freeze out competitors, and consumers are being denied the kind of innovation that put Google on the map in the first place.
“If the government’s allegations are to be believed, Google is doing exactly what Microsoft did in many respects,” said Gary Reback, an antitrust lawyer who was instrumental in convincing the DOJ to bring the case against Microsoft back then and tried to get the FTC to take on Google 10 years ago. “The major arguments — I’ve seen them all before — they were made by Microsoft, and they failed.”
The DOJ’s lawsuit was filed in October 2020, at the very end of Trump’s presidency, when anti-Big Tech sentiment was high and bipartisan. It came just a few weeks after the House’s long investigation into Amazon, Apple, Google, and Meta’s business practices, which led to a set of bipartisan, bicameral antitrust bills meant to address the unique ways digital platforms operate and maintain their dominance. Eleven states joined that suit; three more signed on a few months later. In December 2020, 35 states, the territories of Puerto Rico and Guam, and Washington, DC, filed their own lawsuit against Google over its search practices. Those two cases have been combined for this trial.
Microsoft has a place in this lawsuit, too, by the way: This time, it’s as a witness for the government. CEO Satya Nadella testified on October 2 that Google’s dominance has made it impossible for his company’s search engine, Bing, to truly compete — even as Microsoft has invested about $100 billion into its search engine to try. He said his company has tried to negotiate with Apple for years to break up its “oligopolistic” relationship with Google, offering the iPhone maker tens of billions of dollars to switch the search default from Google to Bing.
“Defaults are the only thing that matter,” Nadella said.
Apple, obviously, didn’t bite. Google’s argument is that Bing just isn’t as good as Google is. Even Windows users who have Microsoft’s Edge browser with its Bing default pre-installed prefer Google to Bing (though Bing’s market share is bigger on Windows PCs than it is elsewhere), and, as Nadella admitted, the most queried word on Bing is “Google.” Apple, Google says, is choosing the search engine it thinks is best for its customers — not the one that happens to pay it the most.
This isn’t to be confused with all the other antitrust lawsuits the government has filed against Google that address other parts of its business. One of those, about Google’s app store, was recently settled. Two others about Google’s ad tech business are winding their way through the courts. Here, we’re just looking at Google’s search arm, which is the foundation of the company but far from the only thing it does.
There are also a few things you won’t see in this case that used to be there. A few weeks ago, Judge Mehta threw out several of the plaintiffs’ claims. The states’ argument that Google harmed competitors like Yelp and Expedia by designing its search results to prominently feature its own services over theirs was tossed. The DOJ’s claims that Google’s agreements with manufacturers to give its services default placement on Androids and Internet of Things devices were exclusionary were also dismissed.
So we’re left with two claims. One is from the states’ case about Google’s search engine marketing tool, and it accuses the company of making certain features available to its search engine and not Microsoft’s Bing in order to give it an unfair advantage. But the core of this case is the second claim about Google’s default search agreements.
With so much of its revenue riding on the popularity and scale of its search product, Google is willing to spend a lot of money to ensure that it’s the default search in as many places as possible. The company shells out billions of dollars every year to browser developers, device manufacturers, and phone carriers for Google to be the default search engine almost everywhere. The exact amounts of those default search agreements have been redacted for this trial, but estimates put it at as much as $20 billion a year to Apple alone.
This paid placement, the DOJ says, has helped Google maintain its dominance and made it impossible for just about anyone else to compete. Very few companies have billions of dollars to throw around. Or, as the DOJ said, it’s “creating a continuous and self-reinforcing cycle of monopolization.”
And while it’s possible for users to switch to a different search engine, very few of them actually do. The DOJ is expected to say that’s because Google has locked up the best distribution channels. Using a competitor requires knowing that it’s even possible to do it in the first place as well as how to make the switch. There are also countless studies that will tell you how difficult it is to overcome consumer inertia. The vast majority of people just go with whatever’s there, which is why Google is paying to be there. Microsoft’s defense that people could install alternate browsers if they so chose didn’t work 25 years ago. The DOJ doesn’t think it should work now.
All this has hurt competitors, who can’t get a foothold in the market, according to the DOJ. It has impacted advertisers, who have to pay what Google is charging for those search ads because there’s no other game in town, and consumers, who don’t have much choice in search engines.
The lack of choice is also, the suit says, stifling innovation. There’s no pressure on Google to improve its product because there aren’t any companies trying to develop their own, possibly better, ones. The DOJ will likely argue that the quality of Google’s product has gone down as its dominance became more entrenched. One example could be all of those knowledge panels Google sticks on top of search results that direct users to other Google products, not to mention the presence of more and more search ads. The states’ case that this harmed third parties like Yelp was thrown out, but the DOJ could still say that it harms consumers who have to do more work to get to the search results they came to Google for in the first place.
There are other search engines, but they’ve struggled to gain market share. The aforementioned Bing currently has just 6.4 percent of the US market (Yahoo!, which uses Bing, is another 2.4 percent). There’s also DuckDuckGo, which has been trying to compete with Google as a privacy-preserving alternative. But it only has a fraction of the market, and it blames Google’s default search agreements for that.
“Even though DuckDuckGo provides something extremely valuable that people want and Google won’t provide — real privacy — Google makes it unduly difficult to use DuckDuckGo by default. We’re glad this issue is finally going to have its day in court,” Kamyl Bazbaz, spokesperson for DuckDuckGo, said in a statement.
DuckDuckGo, obviously, is an existing product. This case is also very much about the search engines that don’t exist and never will, the ones that you, the consumer, will never get to use. The DOJ will likely argue that’s because Google intentionally made the search engine barrier to entry too high. The co-founder of now-defunct search engine Neeva recently testified that his company, which had a subscription model rather than ad-based, couldn’t get the traction it needed in the face of Google’s monopoly.
For its part, Google maintains that it’s the most popular search engine because it’s the best one out there, giving its users the most meaningful and relevant results. The company says that the DOJ’s case is aimed at helping competitors — not consumers.
Google says the companies that choose its search to be the default on their products do so because it’s better, not because Google is paying them. And consumers use Google because it’s better, not because it happens to be there when they turn their new phones on or fire up their new computer’s browser for the first time.
“People don’t use Google because they have to — they use it because they want to,” Kent Walker, Google’s president of global affairs, said in a blog post. “Making it easier for people to get the products they want benefits consumers and is supported by American antitrust law.”
But why, you might ask, is Google paying anyone at all if it’s so great? Well, the company has long maintained that this is equivalent to a brand paying a grocery store for prime shelf space, something that is perfectly legal and happens all the time. (People who disagree with this will point out that occupying the only search engine slot on the vast majority of web browsers and devices is not quite the same thing as sitting on a shelf in a grocery store.) Google thinks it’s improving customer access to what it believes is the best product. And that, Google says, is good for consumers.
Google CEO Sundar Pichai took the stand on October 30 to say as much. He acknowledged that the default agreements are valuable to Google, but framed them as a promotional tool for the company.
But the DOJ referenced a Google executive’s notes from a 2018 meeting between Pichai and Apple CEO Tim Cook, which described them as wanting to “work as if we are one company.” Pichai said he didn’t remember saying that and doesn’t agree with it either, stressing that Apple is a competitor, not a partner. The government has also maintained that part of the reason why Google paid off Apple was to prevent the company from developing its own search engine. Pichai admitted that Google has, at times, had concerns that Apple could become a search competitor, but maintained that wasn’t the reason why it made those deals with the company.
Google also says it’s easy to switch to a different search engine — much easier, in fact, than it was to install a new browser back in the Microsoft lawsuit days. Apps can be downloaded in seconds, and it takes just a few clicks to change your search engine settings, as long as you know it’s possible and how to do it.
“While default settings matter (that’s why we bid for them), they’re easy to change. People can and do switch,” Walker said.
Google also says it’s continuously improving and innovating. Any perceived lack of competition (and the company says it has plenty of competition) hasn’t caused it to rest on its laurels.
“We invest billions of dollars in R&D and make thousands of quality improvements to Search every year to ensure we’re delivering the most helpful results,” Walker said.
Finally, Google has maintained that the market is more than just general search engines like Bing or DuckDuckGo, because general search engines aren’t the only way people look for things on the internet. They may also go directly to Reddit or Amazon, for example. So it has more competitors than the DOJ claims as well as a smaller market share. That’s probably not going to fly with the judge, but Google will give it a try anyway.
As Reback says, we saw many of these concepts litigated with the Microsoft case nearly three decades ago. So we should have case law that says some of the same or very similar practices Google is engaged in are illegal, right? Not necessarily.
Google has a few things going for it here. For one, it’s been more careful about how it phrases and frames things in internal documents than Microsoft was (assuming those internal documents exist — the DOJ has accused Google of withholding or destroying some of them). For another, the courts that will ultimately decide how to apply the law are different, too.
“Since Microsoft, there’s been a couple of Supreme Court decisions that are, by their attitude and their approach, tolerant of dominant firm behavior,” William Kovacic, who served as the chair of the FTC under George W. Bush, said. “Their attitude toward plaintiffs is not nearly so generous as the Court of Appeals was in the Microsoft case.”
No matter what the judge decides, it will be a while before we know the final outcome. The trial is expected to last about nine weeks, and Judge Mehta’s ruling won’t come out until next year. We’re sure to have a long appeals process after that. But whatever the outcome is, it may be hugely consequential, especially when viewed in combination with the other digital platform antitrust cases we have now (or likely will have soon) and the larger antitrust reform movement.
If Google loses, it faces the possibility of being broken up into smaller companies (an extreme, but not unheard of, measure that the DOJ is asking for) or forbidden from offering those search agreements. We could be looking at a much different Google, or we’ll get to see which search engine users pick when Google is not the default.
If the DOJ loses, there are a few ways to look at it. One is that this is proof that Google isn’t doing anything wrong and should be allowed to continue to operate as it always has, without being unfairly targeted by the government with its anti-Big Tech agenda.
But if you believe that the dominance and power of Google and its Big Tech brethren are a problem that needs to be solved, a DOJ loss would show that our antitrust laws and the courts charged with interpreting them aren’t equipped to deal with the realities of this digital economy and how its major players operate within it.
“If the government gets the door slammed on its face … if they try and they lose, then they can turn to Congress and say, ‘Well, our antitrust system is so cramped and limited that we can’t do the job. You’ve got to fix it,’” Kovacic said.
That could be what motivates Congress to pass antitrust laws that do account for dominant digital platforms. An internet that’s essentially controlled by a handful of companies may well open back up again — assuming it isn’t already too late.
Update, October 30, 5 pm ET: This story was originally published on September 9 and has been updated to include testimony from Microsoft CEO Satya Nadella, Neeva’s co-founder, and Google CEO Sundar Pichai. Google’s default payments in 2021 have also been added.
Microsoft may soon become Activision’s new owner. | Eric Thayer/Bloomberg via Getty Images
Big Tech

Microsoft now owns Activision Blizzard, after dodging roadblocks from several government agencies around the world.
Editor’s note, October 13, 11:10 am ET: After finally winning approval from British regulators, Microsoft has closed its purchase of Activision Blizzard. It’s unclear how this will affect Xbox owners and other gamers, but the completion of the deal represents a major blow to the FTC’s effort to rein in Big Tech. The original story, which was last updated on July 17, is below:
Microsoft’s $69 billion merger with Activision Blizzard seems all but inevitable now: Sony is finally playing along. Microsoft’s main rival and the acquisition’s chief opponent just signed a deal to keep Activision’s Call of Duty on its PlayStation consoles pending the merger’s completion. The deal is a significant sign that Sony believes the acquisition will happen.
The deal, which Microsoft Gaming CEO Phil Spencer announced on Twitter on Sunday, will require Microsoft to make Call of Duty titles available for PlayStation for the next 10 years, a Microsoft spokesperson confirmed to Vox. Sony’s stated fears that Microsoft would pull Call of Duty from PlayStation platforms were one of the Federal Trade Commission’s major arguments in its lawsuit to block the merger.
But the FTC’s gambit suffered a major blow last week when a federal judge denied its request for a preliminary injunction to stop the merger before the trial, which is scheduled to begin in August. The FTC’s appeal of the decision was denied a few days later. That gives the companies the green light to complete their merger, although it could be undone should the FTC win its lawsuit. At this point, however, it’s exceedingly unlikely that the FTC will continue its case at all; it usually drops lawsuits like this when it loses the preliminary injunction to block them.
“We’re grateful to the court in San Francisco for this quick and thorough decision and hope other jurisdictions will continue working towards a timely resolution,” Brad Smith, vice chair and president of Microsoft, said in a statement about the initial decision to deny the preliminary injunction.
“We are disappointed in this outcome given the clear threat this merger poses to open competition in cloud gaming, subscription services, and consoles,” FTC spokesperson Douglas Farrar told Vox at the time of the initial decision.
The FTC sued Microsoft and Activision Blizzard last December to stop their planned $69 billion merger, saying the deal would unfairly harm competition in a gaming market worth hundreds of billions of dollars. Microsoft will become the third-largest gaming company in the world, behind Tencent and Sony, if the deal goes through. But the agency didn’t have the authority to stop the acquisition from happening in the meantime, hence the injunction request. Judge Jacqueline Scott Corley said she didn’t think the FTC would win its case and so wouldn’t stop the companies from merging.
“The FTC has not shown it is likely to succeed on its assertion the combined firm will probably pull Call of Duty from Sony PlayStation, or that its ownership of Activision content will substantially lessen competition in the video game library subscription and cloud gaming markets,” the judge wrote.
It’s a big setback for an agency that has, under chair Lina Khan, intensely scrutinized mergers and acquisitions that will make big companies even bigger, giving them a larger share of a market with fewer competitors in it. The court losses show the uphill battle the agency faces going up against massive companies in a country whose courts typically favor businesses.
Big Tech isn’t the only industry the FTC has focused on, but its size and power — Apple, Microsoft, Google, Amazon, and Meta are in the top 10 largest companies in the world by market cap as of this writing — make it an obvious target, one Khan focused on in her pre-FTC work. Under her, the agency continued its lawsuit against Meta that seeks to unwind its acquisitions of Instagram and WhatsApp, recently sued Amazon over how difficult the company allegedly makes it to cancel Prime, and settled with Google over a deceptive advertising case. It has yet to win any major victories here, but such cases may take years, if not decades, to resolve. The FTC hasn’t challenged some Big Tech mergers, like Amazon’s acquisition of MGM, and it’s already lost a few other battles, like its case against Meta’s acquisition of VR game company Within. When the FTC lost a similar bid to get an injunction to prevent that merger, it dropped the case. It wouldn’t be at all surprising if it did the same now.
The weeklong hearing over the preliminary injunction touched on several parts of Microsoft’s business, but the big argument appeared to center on the Call of Duty franchise and whether Microsoft would continue to make it available for rival Sony’s PlayStation should it be allowed to acquire Activision. Corley said she believed the evidence showed that more consumers would get access to Call of Duty and other Activision games, rather than fewer. Microsoft has a deal to bring Call of Duty to Nintendo Switch consoles for at least 10 years if the merger closes, for example.
Should the FTC drop the case, Microsoft still has one boss left in its merger battle: the United Kingdom’s Competition and Markets Authority, which blocked it over concerns that it would harm the nascent cloud gaming market. But there are signs that it may have found a way to get the UK’s approval after all: the authority and Microsoft jointly asked to delay their hearing before the Competition Appeal Tribunal, which is hearing that request today. This suggests that the CMA might be amenable to approving the merger if Microsoft makes certain concessions. The European Union has already approved the merger.
Update, July 17, 12 pm ET: This story, originally published on July 11, has been updated to include comment from Microsoft and the FTC, the FTC losing its appeal, Sony’s Call of Duty deal, and the hearing before the Competition Appeal Tribunal.
An 1882 color lithograph shows American railroad entrepreneurs carving up the United States as European royalty watch from across the Atlantic Ocean. | Frederick Burr Opper/Stock Montage/Getty Images
Big Tech

Former FCC chair Tom Wheeler has a few ideas for how to regulate the “Digital Gilded Age.”
The 19th century probably isn’t the first thing that comes to mind when you hear about Big Tech’s harms and possible solutions. It’s probably not the second either, or even the 50th. But in his forthcoming book, Techlash: Who Makes the Rules in the Digital Gilded Age?, Tom Wheeler makes the case that maybe it should be.
The original Gilded Age describes a period at the end of the 19th century and the beginning of the 20th, when it became increasingly apparent that new, transformative technologies that had done so much for so many had done a lot more for a very few: the handful of men who owned or effectively controlled industries like steel, oil, and the railroads. The money and power they amassed often came at the expense of everyone else. Antitrust laws and federal agencies were created to stop abusive business practices. We still rely on them today.
But an increasing number of people think those institutions aren’t equipped to deal with who holds the money and power now: Big Tech companies like Amazon, Apple, Google, Meta, and Microsoft. They differ on what can or should be done about that, however, and so very little has been done. These companies continue to operate under the rules they’ve made for themselves, which aren’t necessarily in the best interests of the rest of us. Wheeler has a few ideas about the best way to approach governmental oversight in this Digital Gilded Age, as he calls it. And he knows a thing or two about it.
“My entire professional life has been about the intersection of public policy and new technology,” he says, including a stint as chair of the Federal Communications Commission (FCC) under President Barack Obama — a tenure that will likely be best known for instituting net neutrality to increase government oversight of broadband internet (which was later repealed under Trump, and now, under Biden, is being restored). He also describes himself as an “amateur historian.” All four of his books, including this one, are rooted in history.
But he doesn’t think regulators and lawmakers should respond to the digital economy the way they did to the industrial innovations of 150 years ago. This Digital Gilded Age has a few fundamental differences from the original one, and that, he says, means it has to be regulated differently, too. Ahead of the release of Techlash, which comes out October 15, Wheeler spoke to Vox about all of this. This interview has been edited for brevity and clarity.
How did this concept of two gilded ages come about?
The more you look at the original Gilded Age, the more you say, “Wow, does that sound familiar!” Our technology-driven environment today echoes the original Gilded Age, principally because at its root it is the innovators who make the rules without regard for the consequences. We’ve always celebrated the fact that innovators make the rules. They should; they can see the future that none of the rest of us can see. All the great advances in science, technology, business, the arts, came from people who said, “We won’t obey the rules because we see something different.”
But ultimately, that behavior runs into the rights of individuals and the public interest. We saw that in the original Gilded Age, and the Congress responded with antitrust legislation, with the creation of the first federal regulatory agency, with the creation of legislation to protect against unsafe products. And the result of that was incredible growth in the economy and in those companies, as well as protection for consumers and competition.
I think we’re at that same kind of a hinge moment today. The theme of this book is that unregulated tech has a damaging effect on basic things such as privacy, competition, and truth. And you can put in place an agile oversight that will protect the average American and promote innovation by promoting competition. You can then end up rebalancing the public and the private interest. We did it before and we can do it again.
What are some of the key similarities between that Gilded Age and this one?
They were technology-driven, and the application of that technology was defined by a handful of innovators who made the rules based on what was in their interest, rather than what was in the public interest. In both instances, wondrous new things were developed. But they brought with them negative consequences.
And lastly, they both have accelerated the pace of life. In the original Gilded Age, industrialization increased the pace of life from the agricultural-based economy that had existed before. And obviously, in the digital Gilded Age, that pace of life has increased even more.
But they’re not exactly the same.
The assets behave differently. Industrial assets were things you could stub your toes on. Digital assets are soft assets. Industrial assets were expensive; digital assets are inexpensive. Industrial assets were used once, and then they were gone. Digital assets can be used again and again. Industrial assets were rivalrous: if I have a ton of coal, you don’t. But digital assets can be shared. And industrial assets were exhaustible: you could only use something once. In the digital environment, you can use it again and again and again. Every time you sign up for Facebook, you’re using the same software that somebody else signed up with. Every time you download Microsoft Word, you’re using the same software. It’s inexhaustible.
So the fact that the assets themselves are different created an economic activity that was different. In the industrial era, it was a pipeline production economy. A hard asset moved through the process until it rolled off the assembly line as a finished product. In the digital era, it is a platform-pairing process, where you create digital assets by pairing them with other digital assets to produce a product.
What do those differences mean when it comes to regulation?
The question is not only how do we put guardrails around Big Tech, but also how do we do it in a way that continues to encourage innovation? Industrial-style regulation is like industrial-style management. When Congress went to create these industrial agencies that we know today, including the FCC, they looked at how the companies that were to be supervised were being managed. Industrial management was a top-down, rigid, rules-based approach. The guru of industrial management was a fellow by the name of Frederick W. Taylor, and his management techniques were called Taylorism. Basically, his idea was to squeeze all creativity out of everybody, and get them to do exactly the same thing by your rules. And that will give you the best industrial production.
It is just the opposite today. Companies are now managed with agile techniques: they’re constantly responding to new technology and to marketplace changes alike. What we need are new regulators that look like the new style of agile digital management that exists today.
You literally ran an agency, the FCC, that came out of that old model. Did your experience there inform your opinion now? Can agencies like the FCC even be agile?
I had to bring agility to an agency that was designed not to be agile. The net neutrality rule has what’s called the general conduct rule component, which builds in agility to deal with what might happen next. The privacy rule had agility in it to deal with changes. And what we did in cybersecurity was designed from the outset to be agile, because the bad guys who are trying to crack the security of networks are themselves incredibly agile. Unfortunately, all three of those were repealed in the Trump administration.
But, yes, I tried to practice this. Part of the reason why I wrote this book was to convey those learnings and what other solutions might be.
You use words like “sclerotic” and “rigid” to describe some of the things we have in place to deal (or not deal) with these matters. Can I assume that you don’t think Congress can or will do the job this time?
I think that they will, eventually. Here’s the important thing, another historical analogue: It was in 1867 that the Grange was founded by farmers as a countervailing force to the power of the railroads. Railroads were abusing farmers, and the farmers kept saying, “We need oversight.” But it wasn’t until 1887, 20 years after the Grange was founded, that the Interstate Commerce Commission was created. And it wasn’t until almost 20 years after that that Teddy Roosevelt finally helped to give it some real enforcement teeth. So the point of the matter is, this always takes a long period of time.
I think it needs to move faster now because of the even faster pace of life created by the new technology. I am optimistic that as the American people continue to express themselves about the need for oversight, Congress will respond.
Will it? Big Tech companies have a lot of money to spend on lobbyists and whatever else. It seems like that’s been pretty effective at convincing them not to do much of anything so far.
So everybody says, “Oh, Congress is so controlled by the special interests.” But special interests in the late 19th century owned members of Congress. And still, we had antitrust laws, we had consumer protection laws. I think that, today, we the people need to communicate. The reason things changed in the late 19th century was that we the people, folks like the Grange, folks like the progressive movement and populists, kept saying, “This is enough. We’ve got to do something about that.” And I think that’s the hope for us, in our time.
You give Teddy Roosevelt a lot of credit for Gilded Age reforms in the book. Who is our Teddy Roosevelt now?
There is no one today with Rooseveltian powers and instincts. I think that President Biden has done more to promote competition and oversight of digital activities than most other presidents in the digital era. And I would hope that someday he will be signing a digital protection agency bill. His appointments of people like [FTC chair] Lina Khan were inspired moves that are having real effects.
So, what does the ideal government oversight look like to you?
We need an agile agency that has a new approach and is built to look like the companies that it is supposed to oversee, rather than built to look like industrial companies that are not part of the digital economy. It’s reverse Taylorism. Where Taylor said we want to squeeze out every incentive for creativity, in digital companies transparency and creativity are the watchwords. We need to have those same concepts as the watchwords in the oversight of those digital companies.
One of your legacies at the FCC is net neutrality, which is back! Why do you think it’s become such a contentious issue? It seems reasonable to me that, as a technology becomes more and more essential to our lives, there should be more oversight ensuring that we all have access to it.
I think the real answer to that question is because the companies are afraid that it will lead to rate regulation, which they’re scared to death of. And, as you know, in our Open Internet Order, our net neutrality rule, we specifically said we’re not going to enforce rate regulation.
Why not have rate regulation?
The thought process that I went through in making that decision was: I can see how you can regulate a single factor like a voice line. I think it is entirely different to regulate a multifactor service like broadband that is constantly changing. So, it was in large part because A) we didn’t see the need for it today; B) we wanted to encourage broadband expansion; and C) we didn’t know how to do it. A broadband line delivers so much more than a voice channel, so how do you prioritize that? How do you decide that this is going to be priced this way, that’s going to be priced that way? And increasingly, broadband is competitive.
Having said that, I didn’t want to make a decision that would tie the hands of my successors down the road, who might need to do something about abusive pricing. I don’t think we’re experiencing that now. And so we said the power is there, but we’re not going to enforce it.
Apple Pay leads the tap-to-pay market, which has exploded in popularity in the last few years. | Jeff Chiu/AP
Big Tech

Tap-to-pay makes spending money fun, easy, and virtually invisible.
Apple released iOS 17 on September 18 and now that the new operating system is here, you can probably leave your wallet at home.
The latest version of iOS expands what you can do with Apple Wallet, including how you pay for stuff and how you can use your iPhone to show your government ID, bringing the physical wallet closer to being obsolete. It also marks a step forward in Apple’s steady march toward becoming a sort of bank. Now, the company offers the Apple Card, a high-interest savings account, and interest-free buy now, pay later loans with Apple Pay Later, which launched earlier this year. That launch came almost a decade after the initial rollout of Apple Pay, which let iPhone, iPad, and Apple Watch users buy things in stores by tapping their devices to a reader. With the latest update, Apple is continuing to make it clear that your smartphone isn’t just for calls, texts, and snapping a quick pic of dinner — it can handle everything related to your finances, too.
But it’s one thing for your phone to make video calls a piece of cake, and another for it to make spending money so easy.
When Apple Pay launched in 2014, one big criticism was that it tried to solve a nonexistent problem. Credit cards were already easy to use. Who really needed a tap-to-pay feature?
While it’s probably true that no one has thrown their hands up in utter confoundment at the prospect of swiping a credit card, the point of tap-to-pay technology wasn’t just to solve a problem for consumers. It can also grease the wheels of freer spending and help tech companies make money from these mobile transactions.
Today, the ability to shop with the tap of your phone is everywhere. Between 2019 and 2020, contactless payments soared by an impressive 172 percent; Visa reported in a recent earnings call that a third of in-person card transactions in the US are now tap to pay. The uptake is even higher in major metropolitan areas: In New York, where contactless pay for its sprawling subway system was introduced in 2021, the payment method accounts for almost half of all physical transactions now. Apple says that almost all US retailers accept Apple Pay, and according to tech research firm 451 Research, it’s the second most-used digital wallet after PayPal — pretty impressive considering it entered the market over a decade after PayPal. Tap to pay as a whole is a $300 billion industry in the US, with no signs of slowing down.
As mobile payments become more accessible, the act of consumption becomes more invisible. And that could spell trouble. Big tech companies like Apple are offering an onslaught of more frictionless ways to part with money. That also means they’re quickly becoming powerful arbiters of how we spend money, how much we spend, and what we spend on — all without facing the same strict regulations actual financial institutions, like banks, face.
The biggest draw of tap to pay is how easy it is, which may also be its biggest problem. Studies show that how much you’ll spend at a store hinges on how you’re paying. Cash is arguably the most restrictive and cumbersome; there’s an unbendable limit on what you can spend, and the money takes up physical space in your wallet. As credit cards overtook cash, research on consumer habits revealed that people are much more willing to fork over money in the form of a credit card, leading to people making larger purchases and even becoming better tippers.
It’s not merely that cash is more irritating. It’s more psychologically painful to pay with dollar bills and coins because there’s a tangible exchange taking place: the loss of countable, hard-earned money for the gain of some item. It makes you think twice about what you place in your shopping cart.
“When we lose something of value, it’s like a squirrel losing a nut and then feeling bad about the fact that he doesn’t have one more nut,” says Manoj Thomas, a professor of marketing at Cornell University.
The visibility of losing a nut matters because it’s not a rational thought process. Debit card spending patterns, for instance, are more akin to credit than cash. But even though the money is immediately withdrawn from your bank account when you use a debit card, you don’t actually see that you’re losing a tangible “nut,” so you’re less pained by the spending.
While tap to pay is still pretty new, there’s evidence that paying with your phone is even less painful than using a plastic debit or credit card. One 2019 study found evidence that people using mobile payments — not only tapping their phone to pay but scanning QR codes or other payment methods through the phone — were more likely to have higher “financial risk tolerance” and display costly credit card behavior, which includes paying late fees or only making minimum payments.
Another consequence of not using cash is that it’s harder to remember the damage. People who use cash more accurately recalled how much they spent than people who used credit cards or mobile pay, according to a University of Warwick preprint paper. Between contactless debit, PIN-verified debit, and cash payments, contactless had the worst recall. (Interestingly, PIN-verified credit and debit led to poorer recall than contactless credit, debit, and mobile payments.)
If just your debit card is linked to your phone, that also puts a hard cap on spending. But once you add your credit card to the tap-to-pay feature, you’re confronted with all the pitfalls of credit card swiping, which may even be amplified. What’s more, certain kinds of purchases become more common with credit cards or mobile payments. “What I found is that people spend more money on snacks, beverages — what’s typically considered discretionary purchases,” Thomas explained.
The advent of credit cards solved the problem of not having enough cash on hand at the moment, enabling people to make bigger purchases. Yet there were still plenty of scenarios where cash made more sense. Stores often had minimum amounts to swipe with a card to cover the transaction fees charged by credit issuers, so buying a stick of gum at the corner store required cash.
But now even that distinction has blurred. More retailers have embraced the use of credit cards (or no longer even accept cash), in part because so many customers now want to go cashless. With tap to pay, smaller purchases with credit cards have become more common. According to Mastercard, a whopping eight in 10 contactless payments in early 2020 were for purchases under $25, which it notes is “typically dominated by cash.” The Federal Reserve also found that tap to pay was used more often for smaller purchases than plastic credit cards, with an average value of $30.
Merchants also had another reason to adopt tap to pay and do away with credit card minimums: Customers tend to spend more overall if they’re not using cash. The dollar amount of an average purchase might be smaller with tap to pay, but the total number of purchases can increase. As Thomas put it, “Businesses are realizing that people spend a lot more when they use more abstract modes of payment.”
Tap to pay is just the dip of a toe in the ever-expanding waters of financial services that tech companies are rushing to offer. On top of Apple Pay, the Apple Card, and Apple Pay Later, Apple’s long-term plans include rolling out a suite of in-house financial services, as reported by Bloomberg last year. While it’s not clear yet how successful Apple Pay Later will be, the company has reason to be optimistic. Apple Pay is arguably the gateway to its increasingly lush ecosystem of financial features, and it has a healthy lead over competitor Google Pay as the iPhone continues to dominate the smartphone market. A little over half of smartphone users in the US have chosen an iPhone, and over 55 million people in the US now use Apple Pay, according to Insider Intelligence.
From activating Apple Pay — which the iPhone strongly prods new owners to do — it’s a small hop to a whole host of other current and future services that would make Apple the central vault of your personal finances. As my colleague Sara Morrison has reported, the iPhone is well on its way to becoming your bank.
Your phone isn’t only a place to store your credit or debit card information for mobile payments, but also the home for your savings account, your boarding passes, digital keys and passwords, vaccination cards, and even your driver’s license.
Apple’s fintech push is also coming alongside the launch of hardware subscriptions, which would let people pay a monthly fee to rent an iPhone. Most iPhones already aren’t purchased outright but financed through a trade-in program, installment plan, or other arrangement, and a hardware subscription would be especially appealing to people with not-so-great credit. All of this encourages people to spend — and to do so through Apple.
Apple is also renowned for a design philosophy that streamlines every aspect of the user experience, whether by removing buttons or simplifying software so that the functions are easy to understand and intuitive to use — just recall how audiences gasped when Steve Jobs showed off pinch-to-zoom on the original iPhone. This kind of ease of use is great when it comes to checking your voicemails or browsing your photo album, but becomes potentially problematic when it comes to spending money, something companies like Apple want us to do even if we can’t afford to. Apple Pay just requires a double-click of the iPhone’s side button and a glance at the screen for Face ID to confirm your purchase. The iPhone even gives you a quick buzz and makes a pleasing ding when the payment goes through. With the Apple Watch, you can even use a simple hand gesture to bring up tap to pay.
The availability and ease of use of Apple Pay has really paid off for the tech giant: Analysts estimate the company made about $1.9 billion last year from Apple Pay transaction fees charged to credit issuers.
That big number is why there’s a brewing battle over some of Apple’s policies. In a recently released report, the Consumer Financial Protection Bureau flagged Apple’s practice of blocking third-party developers from accessing its NFC chip, the tech that enables tap-to-pay in smartphones. Apple ensures that every iPhone owner who wants to use contactless payments goes through Apple’s payment service, and the CFPB contends that it’s essentially a form of regulation Apple is imposing on other companies.
“We only think this is going to become more critical going forward, as the shift from cash to cards to now mobile devices is estimated to increase and intensify,” a CFPB spokesperson tells Vox.
Apple’s singular dominance isn’t just bad for other competitors in the space; the lack of meaningful competition is ultimately bad for consumers, too, leading to fewer choices and possibly higher costs. The fintech industry — particularly with buy now, pay later programs — has been luring in customers with promises of convenience, easier access to credit, and lower interest rates than traditional finance. With its entrance into the sector, Apple threatens to heavily influence how fintech works and how reliant consumers become on credit and loans. Its foray into buy now, pay later is especially worthy of scrutiny, as the ease of spreading out payments also coaxes spending, considerably increasing sales for some retailers in recent years. Just five buy now, pay later companies loaned out $24 billion in 2021, an explosion of over 1,000 percent from 2019, according to a report by the CFPB. Meanwhile, credit card debt reached a historic high this summer, topping $1 trillion.
“People have recently been spending more,” says Bruce McClary, vice president of marketing at the National Foundation for Credit Counseling. “They’ve been using their credit cards more frequently for things that they might have otherwise paid for in cash several years ago.”
And they’re not just carrying higher balances, either. Delinquency rates for credit cards are back up to pre-pandemic levels. With student loan payments set to resume this fall, these are worrying signs.
The growing reliance on credit makes it all the more likely that tech giants’ entry into fintech will be attractive to consumers, even if using these financial products wouldn’t be in their best interest. Apple CEO Tim Cook claimed that it’s “helping people live a healthier day” through Apple Pay and the Apple Card, citing that the Apple Card has no annual fees and that its savings account has a high interest rate. But even if it wants to position itself as a more trustworthy bank or financial adviser than what we’re used to, the reality is that Apple is a tech company, not a bank. Banks are regulated financial institutions, while big tech companies are, well, not regulated. They have no fiduciary duty to customers.
“There’s a blurring of the lines between banking and commerce, and that is very concerning in itself,” says the CFPB spokesperson.
Still, if fintech companies wanted to urge some financial restraint, they could in theory resist making spending through your phone so slick, intangible, and addictive. With Apple Pay specifically, there’s something soothing or even pleasurable about the haptic buzz and little dopamine-inducing ding when a payment goes through. But payment platforms could “build in more points of friction so that the process of paying gets slowed down a bit — this is both psychologically and financially safer,” says Merle van den Akker, a behavioral economics expert and one of the authors of the University of Warwick preprint.
It’s pretty unlikely that any company would add obstacles to spending money through their platform when their plan for making money — ideally, a lot of money — depends on being as frictionless as possible.
California’s new Right to Repair Act can’t magically make Apple’s popular earbuds good for the environment.
On September 12, California’s State Assembly approved the Right to Repair Act. Once it’s signed into law by Gov. Gavin Newsom, makers of consumer electronics will be required to provide independent shops in the state with tools, spare parts, and manuals needed to fix the gadgets that they sell.
Advocates of Right to Repair, which included dozens of repair stores across the state, local officials, and environmental groups, hailed the move as a victory, the culmination of a years-long battle to force tech companies to allow regular people to easily repair their own devices. Even Apple, which had opposed the legislation for years, had a change of heart and officially supported Right to Repair in California at the end of August. The world’s richest maker of consumer electronics would finally be forced to make repair materials available for every shiny phone, tablet, laptop, and smartwatch it sells.
But some activists had a question: What does this mean for AirPods?
“If products have batteries, they should be easy to swap or easy to remove so that consumers and recyclers can separate them,” said Kyle Wiens, the CEO of product repair blog and parts retailer iFixit. “You just don’t see that with AirPod design.”
For years, Apple has made its commitment to the environment part of its powerful marketing machine. It has shown off robots capable of disassembling over a million iPhones in a year, and increasingly uses recycled materials to build most of its flagship devices. It claims that its spaceship-like Cupertino headquarters, whose gigantic circular roof is covered with hundreds of solar panels, is powered by renewable energy, and is spending millions to save mangroves and savannas in India and Kenya. At its September 12 event, where it launched a $1,200 titanium phone and a watch that isn’t too different from last year’s model beyond a brand-new “carbon neutral” logo on its plastic-free packaging, Apple reiterated its plans to go entirely carbon neutral by 2030 in a deeply polarizing skit starring Octavia Spencer as “Mother Nature.”
And yet, Apple sells tens of millions of AirPods each year, a product that critics have long pointed out is harmful for the environment.
Every single sleek earbud is a dense bundle of rare earth metals glued together in a hard plastic shell. Each one also contains a tiny lithium-ion battery that degrades over time like all batteries do, which means that eventually, all AirPods stop holding enough charge to be usable, sometimes in as little as 18 months.
That’s where the problem lies: Unlike iPhones, iPads, Apple Watches, and MacBooks, which can be opened up and have failing batteries swapped relatively easily, AirPods aren’t really designed to be cracked apart by you, repair shops, or recycling companies without destroying their shells in the process, or shedding blood trying to cut them open.
“It’s in the ‘insanely difficult’ category,” Wiens told Vox, “which is why you don’t have too many repair shops in the US trying to do this.”
This lack of repairability of AirPods raises an important issue: What does the Right to Repair law mean for a product that isn’t designed to be repaired?
“AirPods are too difficult to fix — that is clear,” said Jenn Engstrom, state director at CALPIRG, a California consumer rights nonprofit that has been pushing the state to implement Right to Repair legislation for years. “Right to Repair reforms ensure that you can’t make repairs proprietary. But for some devices, the design gets in the way even if you can access parts and manuals. We believe Right to Repair sets a basic expectation that a product should be fixable. But yeah, we can only repair what is repairable.”
Apple did not respond to multiple requests for comment.
In 2022, Apple launched its own Self Service Repair program. For a chunk of change and a whole lot of trouble, the company provides manuals, sells parts, and rents out official equipment to let people repair iPhones, Macs, and Apple displays. But when Right to Repair becomes law in California, the company will be required to do the same for every product it sells. The problem is that AirPods aren’t designed to be repaired at all.
“AirPods are an environmental catastrophe,” Wiens said. “They’re a product that I don’t think should exist in their current state. They’re almost impossible to recycle economically.”
Apple released AirPods in 2016, the same year it removed the headphone jack on iPhones, spawning an entire industry of truly wireless earbuds with tiny charging cases. At first, AirPods were the butt of jokes. Some people thought wearing a pair in public was a flex. The Guardian said that AirPods were “like a tampon without a string.” Then, they were everywhere.
As a feat of engineering, AirPods are, indeed, impressive. Each one packs in a sophisticated processor, microphones, drivers, optical sensors, and a motion accelerometer to detect when it’s in or out of your ear in a space less than 2 inches long. All these tiny components are jammed together and sealed inside sleek plastic casing designed to look smooth and seamless, making AirPods damn near impossible to open.
But a key reason AirPods are disposable is what powers them. Thanks to chemical reactions that take place when you charge and discharge them, the lithium-ion batteries that power AirPods and other modern electronics hold less and less charge over time. The ones in AirPods are also tiny, which means that while a new pair might run for up to six hours on a single charge, they might last less than 60 minutes after a couple of years of heavy use.
Apple didn’t provide a way to recycle a pair of AirPods when they were first released. Eventually, the company let people swap out a dying AirPod for a new one — for $49 apiece — if they were out of warranty, and then sent the old AirPods to one of the handful of recyclers it partners with. Apple also lets you mail in a pair of AirPods to be recycled responsibly instead of tossing them into the trash.
In 2019, however, after a viral, 4,000-word Vice essay called the wireless earbuds a “tragedy,” the notoriously secretive Apple pulled back the curtain on the AirPods recycling process. Wistron GreenTech, a Texas-based subsidiary of Taiwanese manufacturing giant Wistron that Apple hired to recycle AirPods, later told tech publication OneZero that AirPods couldn’t be opened by any kind of automated system. Instead, each device had to be manually pried apart by a worker with pliers and jigs. And because it cost more to open up a pair of AirPods than the value of the material extracted from it, Apple paid Wistron — and, presumably, its other recycling partners — a fee to cover the difference.
“It is not easy to fully repair broken AirPods, but we are able to reuse components for other units,” Rob Greening, a spokesperson for Decluttr, an online platform that lets people trade in old devices for cash or gift cards, told Vox.
When AirPods launched, iFixit gave them a repairability score of zero out of 10, noting that accessing any component was impossible without destroying the AirPods’ outer casing. At iFixit, Wiens said he bans employees from using AirPods at work. The company also has a workplace perk, he said, where it buys employees any headphones they want as long as they meet iFixit’s repairability criteria — which AirPods don’t.
Because Apple claims to “replace your AirPods battery for a service fee,” Wiens thinks that AirPods should be subject to California’s Right to Repair law, too. But because the earphones are not designed to be opened up, it’s unclear how.
“I’d sure like to see Apple’s recommended process for doing it,” Wiens said. “There is some possibility that Apple is smarter than everyone and has some secret way to do it, but we haven’t figured it out yet.”
AirPods are likely just a fraction of the 6.9 million tons of e-waste that the US generates each year. But they are symbolic of the larger environmental problems that products of their category cause.
In a 2022 paper called “AirPods and the Earth,” Sy Taffel, a lecturer at New Zealand’s Massey University whose research focuses on digital technology and the environment, argued that any right to repair legislation should prohibit the production of irreparable digital devices such as AirPods, as the right to repair an irreparable device is effectively meaningless.
“You can’t pop in a new battery in an old AirPod the same way you can pop in a new battery into an old iPhone,” Taffel told Vox. “So even getting a replacement from Apple doesn’t really ameliorate any of the environmental harms these things cause. It just means that as a consumer, you end up paying a bit less money than if you were going to buy a completely new set.”
Earlier this year, the European Parliament approved new rules mandating that consumer devices such as smartphones, tablets, and cameras have batteries that users can easily remove and replace. Taffel said he would like lawmakers to lay down similar rules for wireless earphones, including AirPods.
“There’s a reason the sustainability mantra is repair, reuse, reduce, recycle,” he said. “Recycling always comes last because recycling stuff takes a lot of energy. It’s not always feasible.”
Just over a decade ago, the primary battery-powered devices most people had were smartphones, tablets, and laptops. Today, we have smartwatches, wireless headphones, smart speakers, e-readers, and VR headsets. Next year, Apple will release its own high-end headset, the Vision Pro.
“The market capitalization of tech companies is partly based on the idea that they will continue to create new categories of digital devices that will be considered popular and will be widely sold,” Taffel said.
Unlike a pair of wired headphones that you could potentially use for decades, the pair of AirPods you buy today will run out of steam sometime in the next couple of years. At that rate, you will have bought half a dozen pairs of AirPods, tossing your old ones in the drawer, or in the trash. Or maybe you’ll have sent them in for recycling, forcing recycling companies to expend even more energy in the process.
“From an environmental perspective, we need to be doing less and less and less,” Taffel said. “But tech’s model is one of constant growth. There’s always more and more and more. Both these things are completely incompatible.”
All of this is the opposite of Apple’s increased emphasis on being environmentally responsible. Hanging on to your existing devices for as long as possible is one of the most effective ways to reduce your carbon footprint. But it’s also bad for Apple’s bottom line. Already, the company’s latest iPhones, which went on sale today, are backordered.
In Apple’s controversial skit, CEO Tim Cook promises “Mother Nature” that all Apple devices will have “a net zero climate impact” by 2030.
“All of them?” she asks.
“All of them,” Cook says.
“They better.”
“They will.”
The two stare at each other for a long moment. And when the tension reaches a crescendo, Mother Nature breaks it with a cheerful “Okay! Good! See you next year.”
Not once does anyone mention AirPods.
The Massachusetts senator explains why we need an FCC for Big Tech.
Stop me if you’ve heard this one before: Sen. Elizabeth Warren has an idea for a new federal agency that takes on some of the most powerful and valuable companies in the world, aiming to protect consumers from their abusive business practices.
You’d be forgiven for assuming I’m referring to the Consumer Financial Protection Bureau, the federal agency that Warren is largely and deservedly credited with creating. No, this is about a new bill, the Digital Consumer Protection Commission Act, which would create another new agency: the Digital Consumer Protection Commission, or DCPC. Whereas the CFPB took on Big Banking, the DCPC aims to take on Big Tech, with a dedicated and specialized agency empowered to promulgate and enforce new regulations. Warren believes it will address some of Big Tech’s greatest harms, which the US has thus far failed to rein in any other way.
“Big Tech giants exploit people’s data, invade Americans’ privacy, and crush competition,” Warren told me in an interview. “The tech industry has shut down every attempt to regulate it or impose liability on it.” She added: “Enough is enough. We cannot let a handful of unelected Big Tech billionaires govern our lives and govern our democracy.”
Warren’s push for another new agency comes as we’re seeing what appears to be plenty of justification for its existence. Digital privacy and Big Tech-focused antitrust bills have largely fizzled out, despite bipartisan support. Congress is currently mulling bills on children’s online safety and on TikTok as a potential national security threat, but those may never get close to passing. On the rare occasions when these companies are held to account, even the largest fines levied against them are little more than rounding errors, easily shrugged off as the cost of doing business.
We’re also seeing what may be the limits of what our existing agencies can do. The FTC’s attempt to block Microsoft’s acquisition of video game giant Activision was recently defeated in court. The Department of Justice’s antitrust trial against Google over its search engine dominance just began, kicking off a wave of Big Tech antitrust lawsuits that will test if and how existing antitrust laws can be applied to these digital platforms. If they can’t, perhaps that will finally make the case for a new agency that can.
“People get what’s wrong. And they also get that Congress is not responding,” Warren said, adding: “Congress is slow and deliberative. The agencies can be more nimble. But to do that, they need expertise.”
This isn’t the first bill to propose a separate agency to regulate Big Tech platforms. Sens. Michael Bennet (D-CO) and Peter Welch (D-VT) introduced one in May, called the Digital Platform Commission Act. Taken together, the two bills show a growing realization in Congress that the existing legislation and agencies aren’t enough to regulate Big Tech, and that it’s time to try something else.
It won’t be easy, to say the least, to create an agency in the face of Big Tech’s lobbying money and influence, Republican lawmakers’ general aversion to regulations and federal agencies, and the reluctance of some Democrats to regulate and possibly impede the progress of such an important industry, especially when it’s rooted in their home states.
But Warren’s bill has two things that Bennet and Welch’s bill does not. First, it’s bipartisan, with Sen. Lindsey Graham (R-SC) — of all people! — as a cosponsor. Second, it has Warren, who is one of the few people who can say she’s created a federal agency before.
Can she do it again?
Most of us know who Warren is because of the CFPB. She spelled out her idea for it in 2007, believing that Americans needed an agency that looked out for their best interests when it came to financial products — think credit cards, car loans, and mortgages — as the Consumer Product Safety Commission did for things like car seats, toys, and treadmills.
The financial crisis made the case that such an agency was needed and got the necessary political will behind it. The Dodd-Frank Act created the CFPB in 2010. Over the next year, Warren was charged with establishing and leading the agency as an adviser to the Obama administration. After being passed over for the role of the CFPB’s official director, she went back to her home state of Massachusetts and ran for Senate. Spoiler alert: She won. Now she’s in a position to help create a new agency and take on another consumer protection issue.
“She was pivotal to both the creation of the CFPB and to making it real,” said Raj Date, who was the agency’s deputy director at the time. “Her advocacy for the bureau’s creation — the media appearances, the tireless lobbying, all the op-eds — was so effective and so visible that that’s what people tend to remember from that era. But, to me, the more impressive and more surprising thing is just how successful of a chief executive and an entrepreneur she turned out to be.”
Warren gets a lot of the credit for the CFPB’s existence and, by extension, what it’s done for consumers since it opened its doors in 2011. The agency gets an average of 3,000 complaints a day, indicating that the general public still sees a need for its services. As of July 2023, it says it has returned $17.5 billion to Americans in the form of monetary compensation and consumer relief, and has levied $4 billion in fines for violations of consumer financial protection laws, including billions of dollars from repeat offenders Wells Fargo and Bank of America alone.
Despite these successes (or perhaps because of them), the CFPB continues to fight for its very existence. The Supreme Court is set to hear arguments in October on whether it can still be funded through the Federal Reserve. We’ve also seen, through the Trump administration, how much a president can shape the direction of even nominally independent regulatory agencies. Trump’s appointees quickly set about reversing much of what the CFPB did under Obama (including, for some reason, its very name). They had a much different definition of consumer protection, one that included protecting the rights of banks and getting rid of the “unwarranted regulatory burdens” on them.
The CFPB is now directed by Rohit Chopra, who has brought it back to its consumer financial protection roots. But that may only last until a Republican takes over the White House, assuming the Supreme Court doesn’t declare it unconstitutional first.
All of this is to say that even the agencies we already have don’t get an easy ride. Putting a new one in place will be infinitely harder. But with bills like Warren’s on the table, it’s looking more possible than ever.
In many ways, the Digital Consumer Protection Commission that Warren proposes is similar to the Federal Communications Commission and the Federal Aviation Administration, sector-specific agencies created to regulate new and powerful technologies.
“This bill isn’t trying to hard code a solution all at once,” Warren said. “It’s focused on creating a structure, an agency with the flexibility and the expertise to respond to problems as they arise.”
If you’ve been paying attention to Congress’s attempts to pass tech-specific regulation over the last few years, you might recognize a few of the new regulations that Warren’s bill would make these platforms subject to. Notably, they apply only to the very largest platforms, those with deep pockets and massive user bases. Amazon, Apple, Google, Meta, and Microsoft seem to meet the qualifications, as do TikTok and even Twitter/X. It’s tech companies of this size, Warren believes, that have become the most problematic and so need the most regulation. New Big Tech companies that emerge — think OpenAI — will also be covered if and when they meet the user and market capitalization minimums.
A big part of this bill is about competition and antitrust. It is, after all, an amendment to the Clayton Act, a foundational antitrust law, and comes to us from someone who has long scrutinized the power that a few massive companies have over many of our industries. The legislation would also outlaw some of the practices that the Big Tech antitrust bills tried to address, like self-preferencing or owning the marketplace on which a company’s own goods compete with third parties.
Online privacy is also covered here. Platforms can’t target ads to users based on data from third parties, must limit the personal data they process, and have to let users access and delete data they’ve collected about them, according to the bill. They must also protect user data from breaches or be subject to monetary penalties (some of which may be given to affected users). There are also provisions requiring that platforms ensure they are not promoting harmful content to users, something that child online safety bills and state laws have been trying to address.
A national security provision includes restrictions on ownership by citizens of “foreign adversaries” and rules about foreign processing of US citizens’ data. It also requires that platforms identify posts from bots and state their country of origin. This seems aimed at TikTok and its China-based owner, which some believe poses a national security issue that American laws don’t sufficiently address.
There’s also a section about transparency, which requires covered platforms to have clear terms of service, notify users if they’ve been banned or their content has been otherwise restricted and tell them why, and provide ways for users to appeal those moderation decisions. Users can also appeal to the DCPC if they feel they’ve been unfairly banned. This would address what has been a major issue for Republicans, who often accuse Big Tech of abusing its power to suppress conservative speech.
Finally, the DCPC will have the authority to issue licenses to operate to these companies — and revoke them.
“Banks operate with a license, airlines operate with a license,” Warren said. “The same should be true for the giants in Big Tech.”
The fact that Warren is working on this with Lindsey Graham, a Republican who hasn’t always had the nicest things to say about her (nor she of him), shows that Big Tech regulation can be a unifying issue. Any digital platform agency bill is going to need that bipartisan support, though it remains to be seen if there’s enough of it. Warren and Graham introduced the bill over a month ago, but it has yet to attract additional cosponsors.
“For years I have been trying to find ways to empower consumers against Big Tech,” Graham said in a statement. “A regulatory commission will give consumers a voice against Big Tech and the power to punish them when appropriate.”
Graham’s involvement shouldn’t be a huge surprise, either. He’s been very vocal about wanting to curb certain of Big Tech’s abuses and has sponsored several bipartisan bills to that effect. That includes EARN IT with Sen. Richard Blumenthal (D-CT), and the American Innovation and Choice Online Act, headed up by Sen. Amy Klobuchar (D-MN). Graham also told OpenAI CEO Sam Altman in a hearing last May that he wanted a new federal agency with licensing powers to oversee AI.
That doesn’t mean everyone is on board with this. Sarah Kreps, director of Cornell University’s Tech Policy Institute, isn’t convinced that a new agency is the best approach. There are existing agencies that can handle many of these issues, and she doesn’t think we’ve had the kind of crisis that demonstrates the need for something new. The financial crisis led to the CFPB, for example, and the Department of Homeland Security was created in the wake of 9/11. Kreps also wonders if an agency set up with the goal of checking Big Tech’s power might be unnecessarily antagonistic toward what is one of the biggest drivers of the US economy and technological supremacy.
“Convince me that Apple having the largest market capitalization in the world is an inherent problem for consumers. And if you can convince me that that’s the case, next convince me that the FTC can’t address that problem,” she said.
Kreps also thinks that our existing laws can deal with new and future Big Tech issues — or at least, she hasn’t seen proof that they can’t. The Google antitrust trial has only just begun, and lawsuits that could determine how copyright law applies to generative AI haven’t gone to trial yet.
“I don’t think that it’s inherently bad for the courts to be interpreting existing laws in new contexts,” she said.
Tom Wheeler, who served as the FCC chair under Obama and is author of the upcoming book Techlash, published an extensive study on whether a digital platform agency like the one Bennet and Warren’s bills propose is necessary.
“The digital economy is different from the analog economy, and needs to have regulators with digital DNA and focus,” Wheeler told Vox. The FTC has traditionally been more of an enforcement agency than a rulemaking one, he added, and this is an industry that needs rules set by the government rather than itself.
“I think that there comes a point in time when a critical mass occurs, and I think that is a growing bipartisan sense that, hey, we’ve got to do something,” Wheeler said, especially now that we appear to be on the cusp of massive disruption by generative AI.
As for whether Warren and Graham’s approach is the best way to do that, Wheeler says he’s more focused on the fact that bills like this are getting more people thinking and talking about creating a digital platform agency in the first place.
That sentiment is shared by Harold Feld, a senior vice president for consumer advocacy group Public Knowledge. He has literally written the book on the need for a digital platform agency, and sees this bill as part of what he hopes will be a wave of support for establishing an agency.
“The important thing here is a recognition that, yes, you need an expert agency for something that is this important and that is also very clearly an identifiable sector of the economy at this point,” Feld said. “It’s not something that really can just be tacked on as an additional feature of another agency.”
But Feld doesn’t think we’re getting a law that creates a digital platform agency anytime soon, including Warren’s. The fact that she and others are proposing them, however, makes it more likely that someday we will.
The creation of a new agency to regulate Big Tech will have to overcome what will likely be a lot of opposition from a Republican Party bent on deregulating as many industries as possible. Not all Democrats will support the idea, either. It was Democratic leaders in the House and Senate, after all, who chose not to give the Big Tech antitrust bills a floor vote. The tech industry will fight it too, especially the well-moneyed handful of companies the law applies to. NetChoice, an industry association that counts Amazon, Google, Meta, TikTok, and Twitter among its members, posted a scathing critique of the bill the day it was announced.
“We’re gonna have a fight over this, no doubt about it,” Warren said. “But this is a fight worth having.”
It’s also a fight she’s won before.
Asya Demidova for Vox It’s no accident — the intertwining of religion and technology is centuries old. Suppose I told you that in 10 years, the world as you know it will be over. You will live in a sort of paradise. You won’t get […]
Big TechIt’s no accident — the intertwining of religion and technology is centuries old.
Suppose I told you that in 10 years, the world as you know it will be over. You will live in a sort of paradise. You won’t get sick, or age, or die. Eternal life will be yours! Even better, your mind will be blissfully free of uncertainty — you’ll have access to perfect knowledge. Oh, and you’ll no longer be stuck on Earth. Instead, you can live up in the heavens.
If I told you all this, would you assume that I was a religious preacher or an AI researcher?
Either one would be a pretty solid guess.
The more you listen to Silicon Valley’s discourse around AI, the more you hear echoes of religion. That’s because a lot of the excitement about building a superintelligent machine comes down to recycled religious ideas. Most secular technologists who are building AI just don’t recognize that.
These technologists propose cheating death by uploading our minds to the cloud, where we can live digitally for all eternity. They talk about AI as a decision-making agent that can judge with mathematical certainty what’s optimal and what’s not. And they envision artificial general intelligence (AGI) — a hypothetical system that can match human problem-solving abilities across many domains — as an endeavor that guarantees human salvation if it goes well, even as it spells doom if it goes badly.
These visions are almost identical to the visions of Christian eschatology, the branch of theology that deals with the “end times” or the final destiny of humanity.
Christian eschatology tells us that we’re all headed toward the “four last things”: death, judgment, and heaven or hell. Although everyone who’s ever lived so far has died, we’ll be resurrected after the second coming of Christ to find out where we’ll live for all eternity. Our souls will face a final judgment, care of God, the perfect decision-maker. That will guarantee us heaven if it goes well, but hell if it goes badly.
Five years ago, when I began attending conferences in Silicon Valley and first started to notice parallels like these between religion talk and AI talk, I figured there was a simple psychological explanation. Both were a response to core human anxieties: our mortality; the difficulty of judging whether we’re doing right or wrong; the unknowability of our life’s meaning and ultimate place in this universe — or the next one. Religious thinkers and AI thinkers had simply stumbled upon similar answers to the questions that plague us all.
So I was surprised to learn that the connection goes much deeper.
“The intertwining of religion and technology is centuries old, despite the people who’ll tell you that science is value-neutral and divorced from things like religion,” said Robert Geraci, a professor of religious studies at Manhattan College and author of Apocalyptic AI. “That’s simply not true. It never has been true.”
In fact, historians tracing the influence of religious ideas contend that we can draw a straight line from Christian theologians in the Middle Ages to the father of empiricism in the Renaissance to the futurist Ray Kurzweil to the tech heavyweights he’s influenced in Silicon Valley.
Occasionally, someone there still dimly senses the parallels. “Sometimes I think a lot of the breathless enthusiasm for AGI is misplaced religious impulses from people brought up in a secular culture,” Jack Clark, co-founder of the AI safety company Anthropic, mused on Twitter in March.
Mostly, though, the figures spouting a vision of AGI as a kind of techno-eschatology — from Sam Altman, the CEO of ChatGPT-maker OpenAI, to Elon Musk, who wants to link your brain to computers — express their ideas in secular language. They’re either unaware or unwilling to admit that the vision they’re selling derives much of its power from the fact that it’s plugging into age-old religious ideas.
But it’s important to know where these ideas come from. Not because “religious” is somehow pejorative; just because ideas are religious doesn’t mean there’s something wrong with them (the opposite is often true). Instead, we should understand the history of these ideas — of virtual afterlife as a mode of salvation, say, or moral progress understood as technological progress — so we see that they’re not immutable or inevitable; certain people came up with them at certain times to serve certain purposes, but there are other ideas out there if we want them. We don’t have to fall prey to the danger of the single story.
“We have to be careful with what narratives we buy into,” said Elke Schwarz, a political theorist at Queen Mary University of London who studies the ethics of military AI. “Whenever we talk about something religious, there’s something sacred at play. Having something that’s sacred can enable harm, because if something is sacred it’s worth doing the worst things for.”
In the Abrahamic religions that shaped the West, it all goes back to shame.
Remember what happens in the book of Genesis? When Adam and Eve eat from the tree of knowledge, God expels them from the garden of Eden and condemns them to all the indignities of flesh-and-blood creatures: toil and pain, birth and death. Humankind is never the same after that fall from grace. Before the sin, we were perfect creatures made in the image of God; now we’re miserable meat sacks.
But in the Middle Ages, Christian thinkers developed a radical idea, as the historian David Noble explains in his book The Religion of Technology. What if tech could help us restore humanity to the perfection of Adam before the fall?
The influential ninth-century philosopher John Scotus Eriugena, for example, insisted that part of what it meant for Adam to be formed in God’s image was that he was a creator, a maker. So if we wanted to restore humanity to the God-like perfection of Adam prior to his fall, we’d have to lean into that aspect of ourselves. Eriugena wrote that the “mechanical arts” (a.k.a. technology) were “man’s links with the Divine, their cultivation a means to salvation.”
This idea took off in medieval monasteries, where the motto “ora et labora” — prayer and work — began to circulate. Even in the midst of the so-called Dark Ages, some of these monasteries became hotbeds of engineering, producing inventions like the first known tidal-powered water wheel and impact-drilled well. Catholics became known as innovators; to this day, engineers have four patron saints in the religion. There’s a reason why some say the Catholic Church was the Silicon Valley of the Middle Ages: It was responsible for everything from “metallurgy, mills, and musical notation to the wide-scale adoption of clocks and the printing press,” as I noted in a 2018 Atlantic article.
This wasn’t tech for tech’s sake, or for profit’s sake. Instead, tech progress was synonymous with moral progress. By recovering humanity’s original perfection, we could usher in the kingdom of God. As Noble writes, “Technology had come to be identified with transcendence, implicated as never before in the Christian idea of redemption.”
The medieval identification of tech progress with moral progress shaped successive generations of Christian thinkers all the way into modernity. A pair of Bacons illustrates how the same core belief — that tech would accomplish redemption — influenced both religious traditionalists and those who adopted a scientific worldview.
In the 13th century, the alchemist Roger Bacon, taking a cue from biblical prophecies, sought to create an elixir of life that could achieve something like the Resurrection as the apostle Paul described it. The elixir, Bacon hoped, would give humans not just immortality, but also magical abilities like traveling at the speed of thought. Then in the 16th century, Francis Bacon (no relation) came along. Superficially he seemed very different from his predecessor — he critiqued alchemy, considering it unscientific — yet he prophesied that we’d one day use tech to overcome our mortality “for the glory of the Creator and the relief of man’s estate.”
By the Renaissance, Europeans dared to dream that we could remake ourselves in the image of God not only by inching toward immortality, but also by creating consciousness out of inanimate matter.
“The possibility to make new life is, other than defeating death, the ultimate power,” Schwarz pointed out.
Christian engineers created automata — wooden robots — that could move around and mouth prayers. Muslims were rumored to create mechanical heads that could talk like oracles. And Jewish folktales spread about rabbis who brought to life clay figures, called golems, by permuting language in magical ways. In the stories, the golem sometimes offers salvation by saving the Jewish community from persecution. But other times, the golem goes rogue, killing people and using its powers for evil.
If all of this is sounding distinctly familiar — well, it should. The golem idea has been cited in works on AI risk, like the 1964 book God & Golem, Inc. by mathematician and philosopher Norbert Wiener. You hear the same anxieties today in the slew of open letters released by technologists, warning that AGI will bring upon us either salvation or doom.
Reading these statements, you might well ask: why would we even want to create AGI, if AGI threatens doom as much as it promises salvation? Why not just limit ourselves to creating narrower AI — which could already work wonders in applications like curing diseases — and stick with that for a while?
For an answer to that, come with me on one more romp through history, and we’ll start to see how the recent rise of three intertwined movements have molded Silicon Valley’s visions for AI.
A lot of people assume that when Charles Darwin published his theory of evolution in 1859, all religious thinkers instantly saw it as a horrifying, heretical threat, one that dethroned humans as God’s most godly creations. But some Christian thinkers embraced it as gorgeous new garb for the old spiritual prophecies. After all, religious ideas never really die; they just put on new clothes.
A prime example was Pierre Teilhard de Chardin, a French Jesuit priest who also studied paleontology in the early 1900s. He believed that human evolution, nudged along with tech, was actually the vehicle for bringing about the kingdom of God, and that the melding of humans and machines would lead to an explosion of intelligence, which he dubbed the omega point. Our consciousness would become “a state of super-consciousness” where we merge with the divine and become a new species.
Teilhard influenced his pal Julian Huxley, an evolutionary biologist who was president of both the British Humanist Association and the British Eugenics Society, as author Meghan O’Gieblyn documents in her 2021 book God, Human, Animal, Machine. It was Huxley who popularized Teilhard’s idea that we should use tech to evolve our species, calling it “transhumanism.”
That, in turn, influenced the futurist Ray Kurzweil, who made basically the same prediction as Teilhard: We’re approaching a time when human intelligence merges with machine intelligence, becoming unbelievably powerful. Only instead of calling it the omega point, Kurzweil rebranded it as the “singularity.”
“The human species, along with the computational technology it created, will be able to solve age-old problems … and will be in a position to change the nature of mortality in a postbiological future,” wrote Kurzweil in his 1999 national bestseller The Age of Spiritual Machines. (Strong New Testament vibes there. Per the book of Revelation: “Death shall be no more, neither shall there be mourning nor crying nor pain any more, for the former things have passed away.”)
Kurzweil has copped to the spiritual parallels, and so have those who’ve formed explicitly religious movements around worshiping AI or using AI to move humanity toward godliness, from Martine Rothblatt’s Terasem movement to the Mormon Transhumanist Association to Anthony Levandowski’s short-lived Way of the Future church. But many others, such as Oxford philosopher Nick Bostrom, insist that unlike religion, transhumanism relies on “critical reason and our best available scientific evidence.”
These days, transhumanism has a sibling, another movement that was born in Oxford and caught fire in Silicon Valley: effective altruism (EA), which aims to figure out how to do the most good possible for the most people. Effective altruists also say their approach is rooted in secular reason and evidence.
Yet EA actually mirrors religion in many ways: functionally (it brings together a community built around a shared vision of moral life), structurally (it’s got a hierarchy of prophet-leaders, canonical texts, holidays, and rituals), and aesthetically (it promotes tithing and favors asceticism). Most importantly for our purposes, it offers an eschatology.
EA’s eschatology comes in the form of its most controversial idea, longtermism, which Musk has described as “a close match for my philosophy.” It argues that the best way to help the most people is to focus on ensuring that humanity will survive far into the future (as in, millions of years from now), since many more billions of people could exist in the future than in the present — assuming our species doesn’t go extinct first.
And here’s where we start to get the answer to our question about why technologists are set on building AGI.
To effective altruists and longtermists, just sticking with narrow AI is not an option. Take Will MacAskill, the Oxford philosopher known as the “reluctant prophet” of effective altruism and longtermism. In his 2022 book What We Owe the Future, he explains why he thinks a plateauing of technological advancement is unacceptable. “A period of stagnation,” he writes, “could increase the risks of extinction and permanent collapse.”
He cites his colleague Toby Ord, who estimates that the probability of human extinction through risks like rogue AI and engineered pandemics over the next century is one in six — Russian roulette. Another fellow traveler in EA, Holden Karnofsky, likewise argues that we’re living at the “hinge of history” or the “most important century” — a singular time in the story of humanity when we could either flourish like never before or bring about our own extinction. MacAskill, like Musk, suggests in his book that a good way to avoid extinction is to settle on other planets so we aren’t keeping all our eggs in one basket.
But that’s only half of MacAskill’s “moral case for space settlement.” The other half is that we should be trying to make future human civilization as big and utopian as possible. As MacAskill’s Oxford colleague Bostrom has argued, the “colonization of the universe” would give us the area and resources with which to run gargantuan numbers of digital simulations of humans living happy lives. The more space, the more happy (digital) humans! This is where the vast majority of moral value lies: not in the present on Earth, but in the future in heaven… Sorry, I meant in the “virtual afterlife.”
When we put all these ideas together and boil them down, we get this basic proposition:
Any student of religion will immediately recognize this for what it is: apocalyptic logic.
Transhumanists, effective altruists, and longtermists have inherited the view that the end times are nigh and that technological progress is our best shot at moral progress. For people operating within this logic, it seems natural to pursue AGI. Even though they view AGI as a top existential risk, they believe we can’t afford not to build it given its potential to catapult humanity out of its precarious earthbound adolescence (which will surely end any minute!) and into a flourishing interstellar adulthood (so many happy people, so much moral value!). Of course we ought to march forward technologically because that means marching forward morally!
But is this rooted in reason and evidence? Or is it rooted in dogma?
The hidden premise here is technological determinism, with a side dash of geopolitics. Even if you and I don’t create terrifyingly powerful AGI, the thinking goes, somebody else or some other country will — so why stop ourselves from getting in on the action? OpenAI’s Altman exemplifies the belief that tech will inevitably march forward. He wrote on his blog in 2017 that “unless we destroy ourselves first, superhuman AI is going to happen.” Why? “As we have learned, scientific advancement eventually happens if the laws of physics do not prevent it.”
Have we learned that? I see no evidence to suggest that anything that can be invented necessarily will be invented. (As AI Impacts lead researcher Katja Grace memorably wrote, “Consider a machine that sprays shit in your eyes. We can technologically do that, but probably nobody has ever built that machine.”) It seems more likely that people tend to pursue innovations when there are very powerful economic, social, or ideological pressures pushing them to.
In the case of the AGI fever that’s gripped Silicon Valley, recycled religious ideas — in the garb of transhumanism, effective altruism, and longtermism — have supplied the social and ideological pressures. As for the economic, profit-making pressure, well, that’s always operative in Silicon Valley.
Now, 61 percent of Americans believe AI may threaten human civilization, and that belief is especially strong among evangelical Christians, according to a Reuters/Ipsos poll in May. To Geraci, the religious studies scholar, that doesn’t come as a surprise. Apocalyptic logic, he noted, is “very, very, very powerful in American Protestant Christianity” — to the point that 4 in 10 US adults currently believe that humanity is living in the end times.
Unfortunately, apocalyptic logic tends to breed dangerous fanaticism. In the Middle Ages, when false messiahs arose, people gave up their worldly possessions to follow their prophet. Today, with talk of AGI doom suffusing the media, true believers drop out of college to go work on AI safety. The doom-or-salvation, heaven-or-hell logic pushes people to take big risks — to ante up and go all in.
In an interview with me last year, MacAskill disavowed extreme gambles. He told me he imagines that a certain type of Silicon Valley tech bro, thinking there’s a 5 percent chance of dying from some AGI catastrophe and a 10 percent chance AGI ushers in a blissful utopia, would be willing to take those odds and rush ahead with building AGI.
“That’s not the sort of person I want building AGI, because they are not responsive to the moral issues,” MacAskill told me. “Maybe that means we have to delay the singularity in order to make it safer. Maybe that means it doesn’t come in my lifetime. That would be an enormous sacrifice.”
When MacAskill told me this, I pictured a Moses figure, looking out over the promised land but knowing he would not reach it. The longtermist vision seemed to require of him a brutal faith: You personally will not be saved, but your spiritual descendants will.
There’s nothing inherently wrong with believing that tech can radically improve humanity’s lot. In many ways, it obviously already has.
“Technology is not the problem,” Ilia Delio, a Franciscan sister who holds two PhDs and a chair in theology at Villanova University, told me. In fact, Delio is comfortable with the idea that we’re already in a new stage of evolution, shifting from Homo sapiens to “techno sapiens.” She thinks we should be open-minded about proactively evolving our species with tech’s help.
But she’s also clear that we need to be explicit about which values are shaping our tech “so that we can develop the technology with purpose — and with ethical boundaries,” she said. Otherwise, “technology is blind and potentially dangerous.”
Geraci agrees. “If a ton of people in Silicon Valley are going, ‘Hey, I’m in for this technology because it’s going to make me immortal,’ that’s a little bit terrifying,” he told me. “But if somebody says, ‘I’m in for this technology because I think we’re going to be able to use it to solve world hunger’ — those are two very different motives. It would impact the types of products you try to design, the population for which you are designing, and the way you try to deploy it in the world around you.”
Part of making deliberate decisions about which values animate tech is also being keenly aware of who gets the power to decide. According to Schwarz, the architects of artificial intelligence have sold us on a vision of necessary tech progress with AI and set themselves up as the only experts on it, which makes them enormously powerful — arguably more powerful than our democratically elected officials.
“The idea that developing AGI is a kind of natural law becomes an ordering principle, and that ordering principle is political. It gives political power to some and a lot less to most others,” Schwarz said. “It’s so strange to me to say, ‘We have to be really careful with AGI,’ rather than saying, ‘We don’t need AGI, this is not on the table.’ But we’re already at a point when power is consolidated in a way that doesn’t even give us the option to collectively suggest that AGI should not be pursued.”
We got to this point in large part because, for the past thousand years, the West has fallen prey to the danger of the single story: the story equating tech progress with moral progress that we inherited from medieval religious thinkers.
“It’s the one narrative we have,” Delio said. That narrative has made us inclined to defer to technologists (who, in the past, were also spiritual authorities) on the values and assumptions being baked into their products.
“What are alternatives? If another narrative were to say, ‘Just the dynamism of being alive is itself the goal,’” then we might have totally different aspirations for technology, Delio added. “But we don’t have that narrative! Our dominant narrative is to create, invent, make, and to have that change us.”
We need to decide what kind of salvation we want. If we’re generating our enthusiasm for AI through visions of transcending our earthbound limits and our meat-sack mortality, that will create one kind of societal outcome. But if we commit to using tech to improve the well-being of this world and these bodies, we can have a different outcome. We can, as Noble put it, “begin to direct our astonishing capabilities toward more worldly and humane ends.”
Meta founder Mark Zuckerberg and Twitter CEO Elon Musk have teased on social media this week that they’re willing to enter a cage fight. | Mandel Ngan and Alain Jocard/AFP via Getty Images Maybe they’re not? The boys are fighting. Or aren’t they? By “boys,” […]
Big TechMaybe they’re not?
The boys are fighting. Or aren’t they?
By “boys,” of course, we mean tech billionaires Elon Musk, who owns Tesla, SpaceX, and most recently Twitter, as well as Mark Zuckerberg, who founded Meta (formerly Facebook), which also owns Instagram and WhatsApp.
They are 51 and 39 years old, respectively — and back in June, we regretfully informed you that they were gearing up for a cage fight at an unconfirmed location (but possibly in Las Vegas?) on a to-be-decided date. Elon Musk signaled his interest in the match on Twitter; Zuckerberg, naturally, confirmed that he was in through Instagram.
There was always a question as to whether this match would even happen, despite the back-and-forth social media trash-talking between Musk and Zuckerberg, and Ultimate Fighting Championship (UFC) president Dana White’s assertion that “both guys are absolutely dead serious.” It would be a big spectacle of an MMA fight, but to an extent, Zuckerberg and Musk have already gotten the benefit of publicity by just talking about fighting.
Last week, Musk stirred up new interest by claiming that he had been in talks with Italy’s prime minister and culture minister and that the fight would take place at a historic site in Rome, sparking speculation that it would be the Colosseum. (The culture minister later said that the fight would not happen in Rome.) Musk, in his tweet, emphasized that the fight would be a philanthropic effort and proceeds would go to “veterans” and “pediatric hospitals in Italy.”
Now, Zuckerberg, who has been posting pictures of himself training and readying for the fight, has signaled that volleys between him and Musk have come to an end, posting on his Twitter-competitor app Threads that Musk wasn’t “serious” and it was “time to move on.” He said that Musk wouldn’t confirm a date — Zuck says he had proposed Aug. 26, which is just days away — and instead had asked for a “practice round” with Zuckerberg.
Their shared plea for attention could have been a means of distracting from news they might want to bury: Just before the cage fight news broke, Meta announced that it would cut off access to news on Facebook and Instagram in Canada following the passage of a law that requires such tech companies to compensate domestic media outlets when linking to their content. Meanwhile, Musk’s reputation has plunged in the last year — data from Morning Consult from late 2022 indicated that his net favorability had fallen by 13 points among US adults, and even Tesla’s reputation has been dinged by his behavior.
But entertaining a fight like this also just seems to be a reflection of Musk’s and Zuckerberg’s sheer vanity. The younger generation of MMA fans in particular are “willing to fanboy for billionaires,” said Nate Wilcox, the owner of Bloody Elbow, a news site that covers MMA and other combat sports. Musk has done stunts like this before to successfully win media attention, like smoking weed on Joe Rogan’s show or naming his dog the CEO of Twitter. And Zuckerberg is the kind of guy who reportedly cuts his hair to look like Augustus Caesar.
“I think that narcissism can’t be underestimated,” Wilcox said.
This entire idea might sound like a worrying fever dream, but it is in fact real — and the Musk versus Zuckerberg faceoff has somewhat of a history. What set the stage for their “beef” was a SpaceX rocket carrying a Facebook-owned satellite in 2016. The launch failed, and the satellite — which the company now known as Meta had been planning to use to provide internet service in parts of Africa — was destroyed.
Things have been arguably a little frosty since then. Musk is on record saying social media apps like Instagram negatively impact mental health. In recent months, Musk has said that one of his goals with Twitter is to maximize “unregretted user time” — perhaps a swipe at Meta.
Zuckerberg, for his part, has not tweeted in over a decade. In the aftermath of the 2018 Cambridge Analytica scandal, in which troves of Facebook user data had been misused by a private data firm linked to the Trump campaign, Musk deleted the Facebook pages for Tesla and SpaceX.
But while the two men aren’t exactly friends, the promised Musk-Zuckerberg fight was actually about the competition of two similar businesses, which ramped up after Musk entered the social media arena last year by (reluctantly) buying Twitter. Since then, Twitter’s declining stability, its ploys to charge users for features like identity-verifying blue check marks, and increasingly visible right-wing vitriol and hate speech (earlier this summer, Musk declared “cisgender” a slur on Twitter) have been the subject of nonstop complaining and mockery.
In March, tech newsletter Platformer reported that Meta was working on a Twitter-like, text-based social media app. A top Meta executive boasted that their version would be “sanely run,” nodding to countless reports of Musk’s seemingly impetuous decisions since taking over Twitter. Musk referenced this comment in the lead-up to the cage fight suggestion, tweeting, “I’m sure Earth can’t wait to be exclusively under Zuck’s thumb with no other options.”
It’s true that Meta is the much bigger company, with a market cap of almost $747 billion and 3.8 billion monthly active users across all of its apps. In contrast, Twitter’s market cap (before Musk took it private) was around $41 billion, and the platform had around 368 million monthly active users in 2022. But the rhetoric is also classic Elon, positioning himself as the champion of the everyman — one who promises to create an egalitarian, free-speech town square — standing up against tyrannical monarchs.
It’s difficult to predict whether Musk or Zuckerberg would emerge victorious.
“Whenever you have amateur, non-athletes trying to compete in fight sports, it’s always a crapshoot,” said Wilcox. “You don’t really know what to expect since you’ve never seen these people fight competitively before.”
In Zuckerberg’s case, the closest he’s come is the lowest level of amateur competition in jiu-jitsu, in which he has earned a white belt — the first rank of five in expertise. That might give him a slight edge over Musk, who seemingly has never done anything of the sort. Zuckerberg is also younger by 12 years, suggesting that he might be more agile. That has some people in the MMA world putting their bets on him.
But Musk (though he’s recently lost weight, reportedly due to his Ozempic prescription) is larger, and that can prove a big advantage in MMA. The Twitter owner himself acknowledged as much: “I have this great move that I call ‘The Walrus,’ where I just lie on top of my opponent & do nothing,” he tweeted. Musk weighs an estimated 187 pounds and Zuckerberg less than 154 pounds.
Still, putting money on either of them is a risky proposition. And while neither of them likely has the skill to knock the other out, it could still be an ugly fight, reminiscent of some celebrity boxing matches in the early 2000s, such as the particularly brutal beating that ’70s sitcom star Ron Palillo took from Saved by the Bell’s Dustin Diamond. Wilcox likened that battle to “the story of when the Romans put elephants in Gladiator cages with lions, and the elephants put up such a sad spectacle as they were mauled to death that the crowded Roman Colosseum actually had their stomach turn.”
If Musk and Zuckerberg were to duke it out under the UFC banner, the fight would have to be regulated, which would likely mean safety requirements, such as headgear, that would put an upper bound on how dangerous it could be.
“The only fight outcome I can really promise you is that both men will embarrass themselves and that if one of them has a distinct physical advantage over the other, it will not be pleasant to watch unless you enjoy watching beatings,” Wilcox said.
We’re all immersed in the hamster wheel of the attention economy, and the owners of two popular social media platforms know this. Tech billionaires have been treated like modern-day gods for decades now; their net worth is determined not just by the technologies they purport to “disrupt” but also by how cool, savvy, and genius their audience perceives them to be.
Take, for example, Elon Musk’s legions of loyalists, who seem to accept everything he tweets as gospel to live by. Long before the Twitter acquisition debacle, Musk had already attained a cult of personality not unlike the fervent fascination that has surrounded Apple founder Steve Jobs. (Jobs’s biographer, Walter Isaacson, is also working on a biography of Musk.) Over the years, Musk has also made abundantly clear that he wants to be seen as a shitposter, a casual internet troll who’s not taking any of this too seriously, and a cool guy who is definitely not mad about anyone insulting him (as evidenced by his abuse of the cry-laugh emoji).
In contrast, Zuckerberg has never enjoyed a vast tide of popularity, particularly after the 2018 Cambridge Analytica scandal. The Morning Consult study tracking Musk’s fall from favor also found that Zuckerberg had the lowest public favorability among the CEOs studied. The public has often perceived him as somewhat awkward and hard to relate to; he’s been the butt of several memes. Unlike Musk, he doesn’t have a habit of blurting out everything his prefrontal lobe tells him to. Zuckerberg’s more buttoned-up persona has likely saved him from further controversy, but it also means there simply aren’t Zuckerberg fanboys in the way that there are Musk fanboys.
Tech companies often soar to blistering heights dizzyingly fast — just look at what’s happened with AI over the past six months alone, and how many people now know ChatGPT creator Sam Altman, the CEO of OpenAI — but they can plummet just as quickly too. The world witnessed such a fall from grace last year when billionaire crypto darling Sam Bankman-Fried was arrested for fraud in the Bahamas, and again with Elizabeth Holmes, who has just begun serving an 11-year prison sentence.
The point is, Silicon Valley stars rise and fall at light speed, and much of it depends on hype, which in turn can be bolstered or muted by how likable — or, at the least, entertaining — a promising startup founder is. In retrospect, it might seem unbelievable that anyone ever believed Holmes’s out-of-thin-air nonsense, or that no one scrutinized Bankman-Fried and FTX sooner. But when the people spouting such consequential, expensive lies are powerful influencer-celebrities with a large audience and the media industry is primed to amplify their words, is it much of a surprise that fraudsters are treated with not just credulity, but adulation, raking in billions as a result?
Clout, in other words, is a considerable asset, especially for the CEOs and founders navigating the mercurial waters of the tech industry. Musk and Zuckerberg know this. When the attention is on them and they go viral, that usually makes them richer and more influential. Dangling an absurd cage match in our faces, they asked, “Are you not entertained?”
Update, August 14, 1:55 pm ET: This story was originally published on June 23, 2023, and has been updated to reflect comments from Meta founder Mark Zuckerberg that the fight would likely not happen and that people should “move on.”