Why Big Tech Companies Continue to Lose Public’s Trust


May 6, 2021
COMMENTARY BY

Former Research Associate, Tech Policy Center

Annie was a research associate in the Center for Technology Policy at The Heritage Foundation.
Lauren Culbertson, head of U.S. public policy for Twitter, makes an opening statement during a hearing of the Senate Judiciary Subcommittee on April 27, 2021, in Washington, D.C. Tasos Katopodis-Pool / Getty Images

Key Takeaways

The Big Tech companies consistently hide behind their algorithms when called to account for inconsistent content moderation and account suspension.

None of the companies would admit that they thrive on users having these negative emotions and patterns, and most likely never will.

Big Tech is likely to face growing calls for legislation and/or regulation. That’s a consequence of their own making. 

The Big Tech companies consistently hide behind their algorithms when called to account for inconsistent content moderation and account suspension.

Such was the case on April 27, when the Senate Judiciary Subcommittee on Privacy, Technology, and the Law held a hearing titled “Algorithms and Amplification: How Social Media Platforms’ Design Choices Shape Our Discourse and Our Minds.”

Facebook, Twitter, and YouTube have been accused of intentionally designing their algorithms to promote and amplify certain content, while hiding or even suspending other content for no specified reason.

When questioned by members of the subcommittee about the design and recent updates to their algorithms, all three companies had different—and vague—answers.

These Big Tech companies didn’t build confidence among users—or members of the subcommittee—that they are using sound, fair, or consistent principles in designing or using those algorithms.

Monika Bickert, the vice president for content policy at Facebook, stated that the company’s algorithm is designed to rank content for the individual user and to “save the user time by putting content they would be most likely to interact with at the top of the newsfeed.”

Bickert said the algorithm looks at other factors, such as how often the user comments on or likes content from a particular source and whether they are more likely to engage with content that is in photo or video format.
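
To make that description concrete, here is a minimal, purely illustrative sketch of engagement-based feed ranking along the lines Bickert described. Every field name, weight, and data structure in it is hypothetical; nothing here reflects Facebook’s actual system.

```python
# Toy illustration of engagement-based feed ranking, loosely following the
# factors Bickert described (likes/comments on a source, preferred formats).
# All weights and field names are hypothetical and chosen only to make the
# idea concrete; this is NOT Facebook's actual algorithm.

from dataclasses import dataclass

@dataclass
class Post:
    source: str  # page or friend the post comes from
    fmt: str     # "photo", "video", or "text"

def engagement_score(post: Post, user_history: dict) -> float:
    """Estimate how likely the user is to interact with a post."""
    # How often the user has liked or commented on this source before.
    source_affinity = user_history.get("source_interactions", {}).get(post.source, 0)
    # Whether the user tends to engage with this content format.
    format_affinity = user_history.get("format_interactions", {}).get(post.fmt, 0)
    # Hypothetical weights; a real system would learn these from data.
    return 2.0 * source_affinity + 1.0 * format_affinity

def rank_feed(posts: list[Post], user_history: dict) -> list[Post]:
    """Put the posts the user is most likely to interact with at the top."""
    return sorted(posts, key=lambda p: engagement_score(p, user_history), reverse=True)

if __name__ == "__main__":
    history = {
        "source_interactions": {"friend_a": 12, "news_page": 1},
        "format_interactions": {"video": 8, "photo": 3, "text": 1},
    }
    feed = [Post("news_page", "text"), Post("friend_a", "video"), Post("friend_a", "photo")]
    for post in rank_feed(feed, history):
        print(post.source, post.fmt, engagement_score(post, history))
```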

She explained that Facebook would not want to produce a tool that would “annoy” the user because it is “not in [its] interest financially or reputationally to push people towards extreme content.”

But what is “extreme content” and who gets to decide what is “extreme” and what isn’t? It’s alarming that these Big Tech companies, which have shown an anti-conservative bias, are deciding what’s “extreme.”

Tristan Harris of the nonprofit Center for Humane Technology contradicted Bickert, saying it is indeed within these companies’ business models to push users toward “extreme” content: The more outrage, the more views, the more back-and-forth comments, the more money, the better for Big Tech.

YouTube is just one company that has been criticized for removing content that does not appear to violate any community standards. At the hearing, Alexandra Veitch, director of government affairs at YouTube, stated that its algorithm “has a responsibility to limit videos that even come close to borderline content.”

But Veitch did not define what qualifies as “borderline content.” YouTube gives no definitions or guidelines for users to know whether their content could be classified as “borderline.”

If YouTube also moderates content outside the rules it has issued to the public, that is plainly not transparent and will inevitably result in inconsistent treatment.

YouTube explains its “4Rs of Responsibility”: remove content that violates its policies, raise authoritative voices, reduce the spread of borderline content, and reward trusted creators.

Additionally, it uses machine learning to support recommendations across all of those pillars, drawing on users’ past searches to tailor suggested content more efficiently.
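
As a purely illustrative sketch, the “4Rs” can be thought of as adjustments layered on top of a relevance score. The flags and multipliers below are hypothetical and do not reflect YouTube’s real system; notably, the “borderline” flag stands in for exactly the undefined judgment call discussed below.

```python
# Toy sketch of how a "4Rs"-style scheme (remove, raise, reduce, reward)
# could be expressed as adjustments to a recommendation score. The labels
# and multipliers are hypothetical illustrations only.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Video:
    title: str
    base_score: float      # relevance from past searches / watch history
    violates_policy: bool  # "Remove"
    authoritative: bool    # "Raise"
    borderline: bool       # "Reduce" -- note: "borderline" itself is undefined
    trusted_creator: bool  # "Reward"

def recommendation_score(video: Video) -> Optional[float]:
    """Return an adjusted score, or None if the video would be removed."""
    if video.violates_policy:
        return None            # Remove: never recommended
    score = video.base_score
    if video.authoritative:
        score *= 1.5           # Raise authoritative voices
    if video.borderline:
        score *= 0.2           # Reduce spread of borderline content
    if video.trusted_creator:
        score *= 1.2           # Reward trusted creators
    return score

if __name__ == "__main__":
    videos = [
        Video("news briefing", 0.7, False, True, False, True),
        Video("fringe theory", 0.9, False, False, True, False),
        Video("banned upload", 0.9, True, False, False, False),
    ]
    for v in videos:
        print(v.title, recommendation_score(v))
```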

But YouTube is very vague regarding its “4Rs.” To be more transparent, the company would need to be specific and concise in defining what content will be removed and would also need to be clear about who is a “trusted creator”—and how a person can become one. 

Lauren Culbertson, the head of U.S. public policy at Twitter, sought to show that Twitter may be the most transparent of the companies, pointing to a machine-learning initiative that will examine the company’s algorithms and share its findings publicly.

Twitter has also recently launched Bluesky, an initiative aimed at creating open protocols that would give users more control.

Even with those initiatives, meaningful internal reform seems far off unless the companies are forced into it by law.

Sen. Josh Hawley, R-Mo., said the business model of all these companies is clearly addiction and advertising, designed to keep people online longer.

Sen. Chris Coons, D-Del., agreed, saying that when users follow their peers on these sites, it works like “viral bait.” In today’s world, that need for social validation is what keeps users coming back.

Coons added that “the goal of social media is attention-harvesting” and that the companies’ entire business model is based on increasing it. He suggested that we should instead create model humane standards for how social media should work.

Harris contended that the Big Tech companies are telling the public that we are worth more as humans when we are addicted, polarized, outraged, and disinformed, because those conditions mean their business models have been effective in capturing users’ attention.

None of the companies would admit that they thrive on users having these negative emotions and patterns, and most likely never will.

Unfortunately, the legal immunity provided by Section 230 of the federal Communications Decency Act of 1996 “combined with the monopoly [that the Big Tech companies have], allows them to censor, block, and ban whatever they want,” Sen. Chuck Grassley, R-Iowa, said in reference to the companies’ actually defining some of their terms.

Sen. Ben Sasse, R-Neb., said that Big Tech companies need a new business model that is not centered on ad revenue.

“Perhaps a subscription model, or a public interest model,” Harris said, citing Wikipedia as an example.

As long as the Big Tech companies profit from using people to unknowingly influence one another, “we are each going to be steered into a different reality and then pitted against each other,” he said.

Fundamentally, that’s detrimental to the nation.

The hearing on Big Tech was one of many that have been held on Capitol Hill, but Congress still hasn’t passed any bills addressing social media companies.

Because the tech titans didn’t build any more trust or confidence in their reliance on algorithms to identify, label, or remove content at last week’s hearing, Big Tech is likely to face growing calls for legislation and/or regulation. That’s a consequence of their own making. 

This piece originally appeared in The Daily Signal.
