Digital environment

Outside the algorithm: How to communicate beyond social media

Image credit: Francisca Balbontín (CC BY)

Social media has changed, and with it, so have the challenges for civil society organizations. Amid hate speech, misinformation, and algorithms that limit reach, it is urgent to think about new forms of communication that prioritize the safety, inclusion, and sustainability of our voices. How do we adapt to this new digital environment?

I remember when opening a Twitter account was almost revolutionary. Back then, the platform promised something close to equality: a place where ideas, rather than hierarchies, defined the reach of your voice. For many civil society organizations, Twitter, Facebook, and Instagram (a little later) were essential tools for denouncing injustices, mobilizing communities, and bringing visibility to issues that traditional media overlooked. Over the years, what seemed like an inclusive space gradually crumbled into something very different.

For example, since Elon Musk took over X, formerly known as Twitter, the atmosphere on the platform has become unbearable. Hate speech is on the rise: racist, misogynistic, ageist, and homophobic insults have not only multiplied but become normalized. The return of accounts previously suspended for inciting violence has exacerbated these problems. What was once a haven for critical voices now feels like a hostile space, especially for those who defend human rights.

Meta is not far behind. In a recent twist, the company announced the end of its fact-checking program on Facebook, Instagram, and Threads. The move, presented as a step toward supposed “freedom of expression,” threatens to make these platforms even more fertile ground for misinformation. It comes just before Donald Trump’s return to the White House, at a pivotal political moment, and the timing does not seem like a coincidence.

This action ultimately removes one of the few tools that, at least in theory, attempted to mitigate the harm of false or malicious content. At a time when societies are facing challenges such as polarization and a crisis of trust in information, Meta’s decision could have devastating consequences. While a community-based rating system, such as the Community Notes model Meta plans to adopt instead, could increase confidence in fact-checking, there is also a risk that such systems could be manipulated.

The dilemma of leaving social media

In this scenario, civil society organizations face a dilemma. Social media, despite all its problems, remains the place where public conversation happens, where public figures such as presidents, legislators, and journalists exchange information and ideas, and where news is still discussed as it unfolds. Abandoning these platforms could mean losing the reach and relevance needed to influence global debates. Yet staying often means navigating an increasingly toxic space, where the risk of being harassed, doxxed, or attacked is constant. Is it possible to find a middle ground?

The Engine Room conducted a study, “Exploring a transition to alternative social media platforms for social justice organizations in the Majority World,” which found that mainstream social media platforms raise significant concerns around intrusive data collection, surveillance, and the spread of misinformation. The study also notes that alternative platforms such as Mastodon and Bluesky offer more community-based and less extractive experiences, allowing users to create and manage their own online spaces. Several organizations, including ours, have joined this shift, embracing technologies that return some control to their users.

Although the alternatives are promising, they are not yet ready to completely replace the major platforms because their reach remains limited. Migrating to them means convincing our audiences to do the same, which is not easy in an environment where “network effects” (or, if you prefer, inertia and convenience) keep people tethered to the traditional platforms. Yet these alternatives serve as a reminder that other forms of digital communication are possible, even if they are still a work in progress. The Guardian and La Vanguardia have dared to leave networks like X for good. Much earlier, in 2021, the cosmetics brand Lush stopped using social media, declaring itself “antisocial.”

Meanwhile, the question arises: can civil society organizations disconnect from today’s most popular social media platforms? The answer is not simple, but one thing is clear: diversifying communication channels is essential. We cannot rely exclusively on platforms that prioritize extraction and profit, pursue political agendas aimed at accumulating power, or impose authoritarian and conservative narratives about human rights.

Furthermore, civil society organizations usually publish organically, with little or no investment in advertising, which makes it even harder for their content to reach audiences. Email newsletters, updated websites, and collaborations with independent media outlets therefore remain valuable tools for reaching people beyond tweets and stories. It is also time to explore new formats: podcasts, documentary videos, and hybrid events that combine the digital with the in-person. In an environment where algorithms decide what content reaches whom, regaining direct access to our audiences becomes an act of resistance.

Regulation and accountability

However, these solutions are insufficient if they do not address the underlying problem: the lack of effective checks and balances on the power of technology platforms. As long as companies operate without accountability for their decisions and business models, digital violence and data exploitation will remain the norm among internet giants. This is where states must step in, not to censor, but to protect. We need public policies that guarantee safe digital environments, effective mechanisms for reporting harassment, and clear penalties for those who promote hate. We need regulation that ensures, in practice, that with greater power comes greater accountability.

For their part, technology platforms owe a significant debt to their users. Implementing anti-harassment policies, privacy protections, and transparency in content moderation should not be optional. Instead of prioritizing growth at any cost, it is time for these companies to take responsibility for building a digital ecosystem that does not perpetuate violence and abuse. They should not hide behind safety to justify censorship, nor behind freedom of expression to neglect their users.

Meta, for example, announced a few months ago that it would use its users’ personal data to train its artificial intelligence systems, without offering a clear way to opt out. In many jurisdictions, there is simply no way to refuse this use of personal data. Countries with more robust legislation, such as Brazil and the member states of the European Union, have managed to put regulations in place that limit or curb these abuses. But what about the rest of the world, where legal protections are weak or nonexistent?

The unequal capacity of states to address these issues deepens the digital divide. While some countries are making progress in defending digital rights, others remain vulnerable to the abuses of large corporations that operate without restrictions.

Beyond the algorithm

It is crucial to collectively rethink the design and dynamics of these platforms. Each time hate speech is normalized or cyber-surveillance goes unchecked, the digital environment deteriorates further. The burden of fixing this should not fall on users, but on those with the power to change the game: technology platforms and states.

Building an ethical and safe digital ecosystem will not be quick or easy. It requires creativity, patience, and political will, but the effort is worth it. In the end, communicating beyond the algorithm is not just a matter of survival for civil society organizations; it is a commitment to a future where voices are amplified, not silenced, and technology is a tool for inclusion, not exclusion.

Communicating in a hostile environment is not just a technical challenge; it is a matter of principle. Rejecting a digital model based on hate and exploitation opens up the possibility of transforming the way we connect, build, and fight in the digital world, creating fairer and safer spaces for everyone.