From Netflix’s docudrama The Social Dilemma to daily exposés, we are learning more about how social media drives polarization by showing users only content that fits their “worldview,” spreading disinformation, distrust, and outrage. Increasingly, we hear calls to censor speech judged misleading or inflammatory, but that approach invites a host of problems.
There is another solution: If social media algorithms can create “echo chambers,” isolating us in increasingly radicalized groups, then why can’t other algorithms introduce us to a diversity of viewpoints, expanding the facts we know and strengthening spaces to exchange ideas? Why can’t social media help us see that we are indeed part of society? What we need is not just social media, but civil media.
While this may seem an obvious solution, let’s consider some of the ways it might work, some of the objections, and some complementary initiatives.
First, what would “civil media” look like? Imagine that conspiracy-prone users, such as Holocaust deniers, were identified and introduced to more evidence rather than more conspiracies; or that an individual who reads a conservative-flagged story would then be presented with a liberal one, and vice versa. Of course, if we hope to recover a middle ground for the exchange of ideas, it would also help if journalists toned down their disdainful or demonizing attitudes toward “the other side.”
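To make the mechanism concrete, here is a minimal sketch, in Python, of the kind of counter-exposure logic described above. Everything in it is a hypothetical illustration: the lean labels, the data structures, and the balance threshold are assumptions for the sake of argument, not a description of any platform’s actual system.

```python
from collections import Counter

# Hypothetical viewpoint labels, assumed to be attached to stories
# by some upstream classifier; the pairing below is illustrative.
COUNTER_LEAN = {"conservative": "liberal", "liberal": "conservative"}

def diversify_feed(reading_history, candidates, max_share=0.6):
    """Pick the next story, nudging a one-sided feed toward balance.

    reading_history: lean labels of stories the user has read.
    candidates: list of (title, lean) tuples available to recommend.
    max_share: if one lean exceeds this share of the history,
               recommend a story from the opposite lean instead.
    """
    counts = Counter(reading_history)
    total = sum(counts.values())
    if total:
        dominant, dominant_count = counts.most_common(1)[0]
        if dominant_count / total > max_share:
            wanted = COUNTER_LEAN.get(dominant)
            for title, lean in candidates:
                if lean == wanted:
                    return title, lean
    # Feed is already balanced (or empty): fall back to the
    # existing ranking, represented here by the first candidate.
    return candidates[0]

history = ["conservative", "conservative", "conservative", "liberal"]
stories = [("Story A", "conservative"), ("Story B", "liberal")]
print(diversify_feed(history, stories))  # ('Story B', 'liberal')
```

The point of the sketch is not the particular threshold but the design choice it embodies: the same profiling machinery that narrows a feed could just as easily widen it.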
A more robust solution would take advantage of “thick” communities, that is, communities with distinctive, even idiosyncratic cultures that contribute decisively to the identities of their members. It turns out that such communities tend to cultivate openness; in fact, it’s people without a strong sense of identity who are prone to radicalization and fearful of openness. So it’s not simply a question of breaking down walls, but of knowing how to build communities that foster diverse viewpoints and genuine dialogue. For primers on the research, readers can turn to David Brooks, Angela Duckworth’s Character Lab, and the Institute for Advanced Studies in Culture, among others.
Some people may be radicalized because they never encounter opposing viewpoints, or encounter only distorted and demonized versions of them. Others, however, avoid dialogue by choice. We cannot eliminate all polarization, but we may moderate its most destructive effects.
If such an online dialogical design sounds complex, that’s because it is; as the architects of the information age like to remind us, complexity is their business. Machine learning already entails algorithms that develop, test, measure, and evaluate different online engagement strategies, yielding detailed psychological profiles of users. Everyday experience teaches us that fostering civility is harder than provoking anger, but that is no argument against trying, and we already know a great deal about how to do it.
Perhaps a stronger objection is that no executive at a major tech company could seriously entertain this strategy and still be employed at the end of the week. Fake news simply sells better, a critic might object, and shareholders won’t tolerate the loss of profits.
But let’s consider some thoughtful responses.
First, regulation may be helpful here. Censorship entails questionable judgments and tens of thousands of moderators. Their work is immensely difficult, but necessary to detect cyber-bullying, child pornography, foreign influence on elections, and other cyber-crimes. But what about misinformation? At a time when federal elections are routinely characterized as matters of life or death and the most pressing issues (from COVID to police shootings to gender dysphoria) are subject to competing, ostensibly factual narratives, who can be trusted to select the stories that will shed light on reality?
A more prudent solution would include regulation that holds social media companies accountable for ensuring the diversity of voices encountered by users. More importantly, regulation can support, protect, and incentivize executives and corporations that take responsibility for promoting respectful argumentation and civil discourse.
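What would “diversity of voices” mean in auditable terms? One hypothetical measure, sketched below, is the normalized Shannon entropy of the viewpoint labels in a user’s feed; the labels, the scoring, and the idea of a regulatory threshold are all assumptions for illustration, not features of any existing law or platform.

```python
import math
from collections import Counter

def feed_diversity(feed_labels):
    """Normalized Shannon entropy of viewpoint labels in a feed.

    Returns 0.0 when every story carries the same label and 1.0
    when the labels are evenly mixed. The labels themselves are
    hypothetical annotations supplied by some upstream process.
    """
    counts = Counter(feed_labels)
    total = sum(counts.values())
    if total == 0 or len(counts) < 2:
        return 0.0
    entropy = -sum((c / total) * math.log2(c / total)
                   for c in counts.values())
    return entropy / math.log2(len(counts))  # scale to [0, 1]

# A regulator (hypothetically) might flag feeds below a threshold.
print(feed_diversity(["left"] * 20))                         # 0.0
print(feed_diversity(["left", "right", "center", "right"]))  # ~0.95
```

A single number cannot capture civility, of course, but a measurable floor of this kind would give regulators something to hold platforms to, and give responsible executives cover with shareholders.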
Will this force The New Yorker to publish the alt-right, or Breitbart the far left? After all, they are only content providers; freedom of speech entitles them to say what they wish within the law. Social media executives, responding to the prospect of regulation, have insisted that they aren’t content providers; they’re neutral platforms. But their algorithms are not neutral. In fact, they favor extremism. Social media companies should take responsibility for the technological tools they have produced, just as an industrial toolmaker is liable for the safety of its products, and here regulation can help.
A second argument stems from human motivation. Most of us have experienced the pleasure of being informed of the various views on an issue. In contrast to the narrowing algorithms of the Stanford Persuasive Technology Lab, which researches how to addict users to technology, we must research the dopamine released when we become better informed and exercise more impartial judgment.
Cultivating anger and hatred may be easier, but so is using cheap materials that result in unsafe automobiles. It’s bad business. Social media is quickly becoming a road hazard on the information superhighway. Cultivating better judgment among social media users may require some “speed limits,” but it will ensure a safer, more sustainable media environment for both the individual and society.
A third consideration: instead of crude surveys of how much advertising users will tolerate, companies could focus on what users care about, based on the challenges they face in their daily lives. Companies like IKEA have employed this strategy, showing that you can benefit people and still turn a profit.
The final and most important argument concerns education. Researchers like Michael Lamb have already shown how character education is needed for future software engineers. That’s an important start, but research shows it must begin earlier, during K–12 education. In this vein, Tom Harrison’s work at the Jubilee Centre for Character and Virtues is crucial. But the researchers at the Jubilee Centre are clear: cultivating teamwork and resilience is not enough; criminals benefit from resilience, too. We must also cultivate a robust sense of civic virtue, including justice.
Justice is a complex concept that has puzzled philosophers for ages. Besides, whose justice will be taught to students? Fortunately, not everything need be complicated or a matter of perspective. Perhaps Democrats and Republicans can agree on the principle that forgoing profits to restore civil society is a just and noble purpose. And when we can’t even agree to disagree, we should at least expand the spaces where we learn what we’re disagreeing about.
Social media once promised us an unprecedented age of democracy and liberty. But today it has revealed something more primal, and sometimes darker: human nature tends to be self-serving and tribal (no matter how “cosmopolitan” your tribe!). Selfish behavior among individuals and their “gangs” can be seen clearly from childhood onward, as witnessed on any ordinary playground. Fortunately, civilization counters the darker impulses of human nature with an alternative vision of human flourishing, embodied in ideals like justice, courage, self-control, and wisdom. This is how base desires are overcome by “the better angels of our nature.”
The young visionaries who launched social media are now older and wealthier. But the question is whether they have matured to the point where they can see the puerile tendencies of their technological designs. Are they ready to take on the responsibilities of adult citizens? Surely it’s high time they grew up. It’s high time we all did.
Matthew Post is Associate Dean of the Braniff Graduate School of Liberal Arts at the University of Dallas.