
A jury says Meta and Google hurt a kid. What now?

Today on Decoder, we’re talking about the landmark social media addiction trials that just resulted in two major verdicts against Big Tech. There’s one case in New Mexico against Meta, and another in California against both companies, which have said they plan to appeal.

These are complicated cases with some huge repercussions for both how these platforms work and the very nature of speech in America, so to help us work through it all, I’ve brought on two heavy hitters: my friend Casey Newton, who is founder and editor of the excellent newsletter Platformer and co-host of the Hard Fork podcast, as well as Verge senior policy reporter Lauren Feiner. Lauren was actually in that Los Angeles courtroom where executives like Mark Zuckerberg took the stand in the case of a 20-year-old woman named Kaley, who successfully argued Meta and Google negligently designed their platforms in ways that contributed to her mental health issues.

These cases, the first in a wave of injury lawsuits targeting tech companies, are about the design decisions of platforms like Instagram and YouTube. They argue that the platforms have fundamental flaws that harm users, especially teenagers, and that these companies knew about these problems and were negligent in shipping these features anyway. These cases are part of a much larger set of moves that aim to fundamentally change the legal mechanisms that might regulate social media platforms.


When we say harm, we’re not just talking about addictive design that brings users back compulsively. It’s also about features like algorithmic recommendations and camera filters that make issues like anxiety, depression, and body dysmorphia worse. This emphasis on how the platforms work, as opposed to focusing solely on the content, is part of a movement that’s been building for years. It focuses on the argument that social media is not and cannot be healthy — that it might in fact be defective, the same way that cigarettes, when used as designed, cause cancer.

There are a lot of complex ideas here, and Casey, Lauren, and I really spent some time working through them. The first is whether there is a distinction between product features — like recommendations, autoplay video, and infinite scroll — and the types of harmful yet legal speech served to young people on these platforms using those tools, like eating disorder videos or posts designed to convince young men to hate women.

But it’s very difficult, if not unconstitutional, to force these companies to moderate this kind of content in specific ways. The First Amendment obviously prohibits the government from regulating what speech these companies promote and moderate, and private lawsuits are usually blocked by Section 230 of the Communications Decency Act, which protects tech platforms from being held responsible for the content their users post.

It’s really hard to pull all these ideas apart. An algorithmic feed with no content in it simply isn’t a compelling product, let alone a negligently defective one that causes harm. A lot of smart people who we’ve had on this show and on The Verge these past few years have said these rulings are just an end run around 230 — just a way to make platforms liable for what, ultimately, is just speech, in a way that will cause more speech to be restricted. You’ll hear us talk a lot about that idea, and whether the growing calls to repeal Section 230 entirely have any logical connection to these cases, or whether they’re just politically opportunistic.

But there are many more ideas at play here and even more layers of complication. You will hear Casey and me even crash out a few times in this episode, because we have both been covering tech regulation for so long that it feels silly to act like everything is working well for regular people, who have negative experiences with social media all of the time. Section 230 is three decades old now, and it’s unclear whether the world it was designed to help create ever came into existence.

You’ll hear Lauren talk about how the authors of Section 230 are open to changes, particularly around AI and speech online. At the same time, any changes to that law run headlong into the First Amendment and potentially open the door to government speech regulations at scale. Like I said, it’s complicated, and I’m very curious to hear what you all think about this, because it’s clear a lot of this is about to be up for grabs.

Okay: Platformer’s Casey Newton and Verge senior policy reporter Lauren Feiner on the major social media lawsuits. Here we go.


This interview has been lightly edited for length and clarity.

Lauren Feiner, you’re senior policy reporter here at The Verge. Casey Newton, you’re the founder and editor of Platformer and a longtime Silicon Valley editor at The Verge, a title you continue to claim. You’ve both been following these trials in California and New Mexico, which focused on the design decisions these companies made and on the internal documents that were revealed in court. These trials are considered bellwethers because they challenge the use of Section 230 as a shield for these companies, and their success could open up new avenues for litigation against social media platforms. They build on past cases, like Lemmon v. Snap, that set important precedents for holding platforms accountable for design features that may incentivize harmful behavior. Taken together, these trials are changing the landscape of how social media companies are held responsible for the content and design choices on their platforms.

Did the plaintiffs have to overcome that? Because that seems like where you would hit the 230 rocks over and over again and they would say, “We’re just managing the speech of others. It’s still the First Amendment.”

CN: The plaintiffs were able to successfully argue that infinite scroll is not the speech of others. There’s no content from another person involved here; someone built a product, and the product is defective. They were able to successfully liken these things to cars without seatbelts, and it really resonated with jurors.

It’s worth taking a minute to talk about why that might be, because this is something that the people that I talk to at the social media companies never seem to understand. Everybody knows someone who has a huge problem with Instagram. This person is probably in your immediate family. They have deleted it a hundred times off their phone and they always reinstall it. They’ve set the screen time limits, but they keep coming back over and over again and they hate themselves for it. This is a near universal experience in America now. When you sit a jury down and you say, “There’s something wrong with Instagram,” it’s pretty easy to find a lot of people who say, “That sounds right to me.”

One of my feelings was that if any of these cases ever got to a jury, the thing Casey is describing would kick in. Everybody has these negative experiences with these social media platforms and the companies themselves always tell us that statistically these problems are small, but their user numbers are so vast that even a small percentage is many, many millions of people. I think the platforms never got their heads around that either.

Did you feel the same way there, that once you put Mark Zuckerberg in front of a jury, there was just no way that the social media platforms would win a case?

LF: It was really hard to know. First of all, why were these jurors selected? Were they selected because they’re the sort of people who don’t use social media a lot, or don’t know much about it? That was the wild card in watching them: how are they really taking in this evidence? At the same time, it can be hard to hear some of this evidence. A lot of us, if we’re not those people ourselves, know someone who’s been through a mental health issue or who has struggled with using their phone or being on social media too much. That’s definitely going to affect them in some way on a human level.

When I was watching Mark Zuckerberg on the stand, he was talking about a certain beauty filter that they had and how one of his own employees pushed back on including it, and talked about, I believe, having daughters and thinking about how something like this would affect them. Maybe these jurors don’t have as much experience with social media, or didn’t have the exact same experiences this plaintiff had, but they certainly know other people in their lives who’ve probably experienced something similar.

CN: It also seems relevant to say that TikTok and Snap settled before the trial. That was the moment when I said, “Okay, they must be really, really scared.” I was actually waiting for Meta and YouTube to settle as well. Once that happened, I think it was clear they were in a lot of trouble.

The comparison here that everyone has been making is to big tobacco, to junk food, to sugar, right? We all know these things are bad for us, but nicotine is awesome, so we can’t stop ourselves. There should be some regulatory framework, or we should at least make these companies communicate the risks. Does that framework hold for you?


LF: One big difference between this moment and the big tobacco one is the saying that there’s no safe cigarette. A lot of studies show that’s not really the case for social media; some level of social media use actually has a positive or at least neutral effect on people. It’s really the overuse, the compulsive use, that is the main problem here, and really the problem that people talk about. Social media does connect people with their friends, it lets you stay in touch with people, and it gives you social connection outside of your immediate community. But it obviously also has really harmful sides, and using it too much can cut you off from real social connection.

That’s a big difference here. When people compare this to that moment, I do think that’s really something we need to think about, that these aren’t really one-to-one scenarios. That said, I think the comparison is made to pull out how these companies are finally having a lot of their documents come to light in front of juries, just like what happened in the big tobacco trials. That is really the point to take away from that comparison.

Casey, you and I have talked about this a lot. We owe our careers to social media in very real ways. The idea that the internet lets us bypass gatekeepers and go reach our audiences, it’s very important to us. The flip side of that is, boy, a lot of bad people got to do a lot of bad things. How would you draw these lines?

CN: It is very tricky and you have to articulate it with some degree of nuance.

I distinguish between internet problems and platform problems. The internet has been instrumental in shaping our careers; it let us establish ourselves online and offer our work for a fee, which wasn’t possible before.

Platform problems, on the other hand, are about algorithmic manipulation and design features that encourage excessive use. The goal of these platforms is to keep users engaged for as long as possible, which often leaves people feeling addicted and dependent. That’s where the focus is shifting now.

The goal is not to eliminate the internet or the freedom to express opinions online. It’s to address the platforms that consume excessive time and attention in ways that harm users.

These cases are pushing platforms to reassess their product features to avoid further liability, and they’re sharpening the discussion about where free speech ends and product design begins.

The hard part with social media is determining what safe use looks like and which features are the harmful ones. The solutions may not be straightforward, but the well-being of users, especially teenagers, has to come first in how these platforms are designed and operated.

Policymakers are responding, too, weighing new regulations and reforms meant to improve online safety, particularly for younger users. The debate continues over the best approach, and over whether existing laws like Section 230 are sufficient or need amendment.

Overall, the focus is on promoting a healthier online environment, one that balances freedom of expression with responsible platform design. But the idea that these laws are connected to the trials, and that the trials could lead to strict speech regulations imposed by the government, is bewildering to me.

Statements like, “The platforms were designed to be addictive, so let’s pass KOSA to restrict marginalized groups’ speech,” make no sense to me. Neither does removing Section 230, as Josh Hawley has suggested: without that shield, platforms would likely over-moderate content out of fear of legal liability, which is concerning in its own right.

On the Democratic side, there is support for laws like KOSA, despite concerns that they may harm marginalized communities. That bipartisan support makes these changes harder for their opponents to stop.

The complexity of the situation is overwhelming. Section 230 was written 30 years ago, and its original goals were never realized. It is hard to keep defending a law that has not achieved its intended purpose.

I want Section 230 to exist to allow for diverse speech on platforms, but cases like the Grindr one highlight the law’s negative consequences, and balancing its benefits and drawbacks is genuinely hard. Why should Section 230 stand in the way of justice for victims of online harassment and violence? Why not take these issues more seriously and find alternative ways to address the harm?

It is also worth asking whether platforms like Instagram are truly safe for teenagers. Mark Zuckerberg’s focus on maximizing engagement may not align with creating a safe environment for young users, and the influence of platforms like Meta on speech and freedom of expression cannot be ignored. At the same time, changing liability laws for online platforms could end up restricting free speech. Regulating these platforms and holding them accountable for harmful content raises complex questions about the intersection of technology, speech, and safety. I see the potential in this argument, but Casey, I know you believe you can separate the two.


It’s a complex issue, and relying on lawsuits may not always be the best solution; lawmakers and policymakers should be thinking these matters through carefully. Why should features like infinite scroll, streaks, or autoplay video be considered speech at all? There is an important distinction between compelling speech and compelling product safety features.

There are certain regulations I would support for social media platforms, but some would likely be unconstitutional. Requiring platforms to show educational content to children, for instance, may not be feasible under the First Amendment.

As technology evolves, with AI-generated content for example, new legal challenges will arise, and it is crucial to understand how those advances interact with existing laws like Section 230.

The impact of features like autoplay video and algorithmic personalization on mental health should be addressed. But while regulation may seem necessary, any rule has to reckon with its constitutional implications, particularly under the First Amendment.

Barack Obama’s perspective, that we should regulate AI after failing to regulate social media, raises important questions about how to balance innovation and regulation. The comparison to regulating broadcast television highlights the need for a careful approach that avoids handing too much power to speech regulators. His advice about needing a hook reflects the legal standard of strict scrutiny, which applies to speech regulation under the First Amendment: any regulation must be narrowly tailored to achieve a compelling government purpose. The idea of using AI to detect harmful content, such as material related to eating disorders, and then regulating it raises real concerns about government overreach into speech.

The discussion also touches on evolving attitudes toward free speech and the increasing pressure on platforms to regulate content. Even figures like Elon Musk and Mark Zuckerberg, who present themselves as proponents of free speech, have made sweeping content moderation decisions. There is a clear shift toward more government involvement in speech regulation, and the trust and safety community within the tech industry is struggling to advocate for pro-social values and human rights principles. When people in that community speak up, they face death threats and harassment, which makes advocating for those principles a daunting task. The consequence of staying silent is that it allows powerful individuals to control these platforms without regard for trust and safety, which has increasingly been reduced to a mere compliance function.

That shifts the focus to the regulatory side, where Congress and the states are discussing potential laws to address these issues. But the endless cycle of policymaking and corporate lobbying seems to offer no clear solution. Suggestions like a federal privacy law and algorithmic transparency requirements are reminiscent of approaches seen in Europe, where some progress has been made, but the lack of consensus on what the problem is, let alone its solutions, hinders effective change. The crucial work is pinpointing the exact issues at hand and building solutions that truly protect users, especially teenagers, from harmful outcomes.

Casey co-hosts the Hard Fork podcast with Kevin Roose, which is fantastic, despite the fact that they are my sworn enemies and I believe they should be illegal. Lauren’s work can be found all over The Verge, and she has made multiple appearances on Decoder recently. We’d love your feedback on this episode, because it seems that none of us are certain about what will happen next, or worse, what should happen. If you have any questions or comments, reach out to us at decoder@theverge.com.
