INSIGHTS & IDEAS

Reclaiming the Feed: Why Social Media Is a Development Issue

June 26, 2025

With foreign aid in retreat, the development community is rethinking how to move from aid dependency to country-led transformation. Yet social media, a defining force in how people connect and learn, remains largely absent from development discourse.

Public and policymaker opinions shape development decisions, but we have only begun to understand how social media platforms influence what people see, believe, and do. As AI policy and digital infrastructure rise on global agendas, policymakers and development partners have an opening to reshape online platforms for societal benefit. This means treating platforms as policy arenas in their own right. Here’s why—and what an agenda could look like.

Why the Platform Blind Spot Matters

Social media platforms are widely used to deliver services and information. Researchers have developed and tested many online interventions to shift behavior and uptake. Yet the development community often conceptualizes platforms as neutral tools, ignoring how algorithms might shape user opinions, decisions, and outcomes in politics and beyond.

In sectors like health, education, and agriculture, behavioral nudges and information campaigns have successfully improved outcomes like vaccine coverage and school attendance. But when messages rely on, or compete with, social media feeds, those feeds can blunt their impact. Take immunization: vaccine hesitancy and distrust in health care are high in some countries, with online misinformation fueling skepticism across regions, even as vaccines have saved 150 million lives in the past 50 years. Similar dynamics could undercut climate efforts like payments for ecosystem services, which depend on farmer trust. Platforms also affect health more directly: the rise of social media is correlated with an increase in suicide rates among adolescent girls.

Further, social media seems to be a factor in the growing gender ideology divide. In LMICs, norm-shifting programs have been shown to improve gender-progressive attitudes. Online content could undermine these benefits. In Europe, exposure to social media is associated with a rightward shift among rural young men and a leftward shift among urban young women. In rich and poor countries alike, the gender divide has implications for gender equity, fertility rates, and labor markets.

Cutting across these areas are opinions on aid. People in rich countries are less confident in foreign aid than people in LMICs, potentially hampering support. And in LMICs, leaders’ perceptions of different donors can influence which projects they choose to move forward. For instance, over a third of leaders from LMICs favor China as a partner for infrastructure projects, compared to 11 percent who prefer the US. Platforms might skew which aid narratives gain traction.

Perhaps most consequential are narratives on migration and trade. Public support for immigration in the US reached historic highs between 2018 and 2020, only to dip in recent years. The share of Americans who think immigration should decrease nearly doubled, from 28 to 55 percent, in the last four years. We recognize that cultural and identity threats play a role, but we are uncertain whether social media promotes, moderates, or simply reflects these narratives. Given the vast importance of trade and mobility to growth for developing countries, these opinions carry real political and economic consequences. Platforms may shape and amplify elite-driven stories on immigration and inequality, thereby hindering labor mobility and development.

We Don’t Have the Global Evidence We Need

Social media relies on an ad-driven business model, where algorithms are designed to maximize user engagement and attention. Sensational content tends to spread further, with out-group and moral-emotional language driving virality. An audit of Twitter’s ranking system found that it boosts emotionally charged, hostile content, despite user feedback that this content makes them feel worse. YouTube’s recommendation system, responsible for 70 percent of all YouTube views, similarly steers users toward more politically extreme content. Polarization raises the risk of democratic backsliding and can distort the policy environment, including by promoting intergroup antagonism, amplifying grievances, and fragmenting civic coalitions. Yet platform algorithms remain largely opaque. Companies might claim to make updates, but their proprietary “black box” design impedes independent scrutiny. Uncertainty over whether breakdowns in the information ecosystem are more a cause or a symptom of political trends is widespread.
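The dynamic described above can be sketched in a toy model. Everything here is illustrative, not any platform's real system: a feed ranked purely by predicted engagement will surface emotionally charged content whenever that content predicts clicks, regardless of how it makes users feel.

```python
# Toy illustration (assumed weights, not a real ranking model): if moral-emotional
# language predicts engagement more strongly than informativeness, an
# engagement-maximizing ranker pushes outrage content to the top of the feed.

posts = [
    {"id": 1, "moral_emotional": 0.9, "informative": 0.2},
    {"id": 2, "moral_emotional": 0.1, "informative": 0.9},
    {"id": 3, "moral_emotional": 0.6, "informative": 0.5},
]

def predicted_engagement(post):
    # Assumed model: emotional charge drives clicks harder than informativeness,
    # consistent with the platform audits cited above.
    return 0.8 * post["moral_emotional"] + 0.2 * post["informative"]

feed = sorted(posts, key=predicted_engagement, reverse=True)
order = [p["id"] for p in feed]  # most emotionally charged post ranks first
```

Changing the objective, for example by down-weighting moral-emotional language, reorders the same feed, which is one reason the ranking objective itself is a governance question.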

Studies from the US suggest that most users encounter little overt news content on platforms (though operational definitions differ on what counts as news). The lack of news content suggests the overall impact of social media use on political attitudes could be modest, except among a small subset of highly engaged users. But findings vary and are not necessarily generalizable to other contexts. One experiment found that Facebook’s algorithm suppresses counter-attitudinal news, even though exposure to ideologically cross-cutting media reduces polarization. Another shows that brief exposure to partisan media can erode trust in mainstream news. These studies, however, focus on public opinion rather than how platforms influence policymakers’ preferences and beliefs, which could be a more consequential channel for development. These dynamics warrant more cross-country research.

Bias blind spots are especially serious as generative AI becomes embedded in everyday technology. NGOs are already using LLMs to deliver personalized health, education, and agriculture guidance, expanding access while introducing new risks. A growing body of research finds that LLMs reflect political biases, underscoring the importance of further examining how technology affects the policy ideas and perspectives that users see.

Platforms Might Play by Different Rules in the Global South

Without meaningful transparency and accountability, some governments justify internet shutdowns as tools to curb misinformation and pressure tech companies. India leads the world in shutdowns. In 2021, Uganda banned Facebook after it removed state-linked accounts, and Nigeria suspended Twitter for several months. Kenya threatened Facebook with suspension after the company approved political ads flagged for hate speech ahead of the 2022 election. Officials argue that platforms ignore their concerns until bans are imposed, exposing a double standard in how companies operate and approach compliance across regions. Regulatory enforcement challenges are not limited to social media: in Nigeria, the creative sector has faced data limitations and tax noncompliance. And while company investments in fact-checking have long been weak in LMICs, they are now unraveling globally: Meta discontinued third-party fact-checking in the US earlier this year, replacing it with a crowdsourced system modeled on X.

Of course, platforms are not just vectors of harm. They enable service delivery, civic organizing, anti-corruption, and political competition. In global majority countries, where state capacity may be lower and media more constrained, platforms can play an even more central role in advancing public goods. But for the most part, we still know too little to inform platform governance decisions and tradeoffs.

What a Development-Centered Platform Governance Agenda Could Look Like

If platforms shape policy narratives, then policymakers must take a more active role in shaping platforms. This should go beyond content moderation to include greater transparency, routine harm assessment and response, civic education, and regulation. But in lower-income settings, the infrastructure for transparency, research, and redress remains limited.

Global efforts include the EU’s Digital Services Act, Taiwan’s participatory digital governance model, and Brazil’s now-shelved Law on Freedom, Responsibility and Transparency on the Internet. But many countries lack the market leverage and regulatory capacity to enforce compliance or to collaborate with global tech firms on equal terms. A development-focused platform governance agenda should start with this imbalance in mind, prioritizing regional cooperation, local agency, and public-interest digital infrastructure. Here are some ideas:

1. Independent Evidence on Algorithmic Biases and Impacts

We need rigorous, context-specific research to understand how platforms influence public discourse and opinions related to development. Research funders could support the use of algorithmic auditing methods, such as those used to detect racial and gender biases in ad delivery, to assess skews in organic and recommended content. Since most existing research on social media and polarization focuses on the US, more work is needed to understand these relationships in other countries.
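As an illustration of what such an audit might involve, the sketch below follows the basic logic behind ad-delivery bias audits: compare how often different controlled "sock-puppet" profiles are shown content carrying a given label. The profile groups, labels, and data here are all hypothetical; a real audit would collect feeds from automated accounts over time.

```python
# Hypothetical audit sketch: measure exposure skew across profile groups.
from collections import defaultdict

def exposure_rates(feed_log, label="sensational"):
    """Share of recommended items carrying `label`, per profile group."""
    totals = defaultdict(int)
    hits = defaultdict(int)
    for group, labels in feed_log:
        totals[group] += 1
        if label in labels:
            hits[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

# Illustrative log: (profile_group, content labels on the recommended item)
log = [
    ("rural_male", {"sensational", "politics"}),
    ("rural_male", {"sensational"}),
    ("rural_male", {"sports"}),
    ("urban_female", {"politics"}),
    ("urban_female", {"sensational"}),
    ("urban_female", {"lifestyle"}),
]

rates = exposure_rates(log)
skew = rates["rural_male"] - rates["urban_female"]  # positive = more exposure
```

A sustained gap between groups, replicated across accounts and time, is the kind of evidence that could ground claims about algorithmic skew in organic and recommended content.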

One opportunity is to crowdsource evidence from users. For example, the Allen Institute for AI’s WildChat subsidized user LLM interactions to build a dataset for model auditing. This method could be used to collect data on platform behavior in LMICs. Projects like MIT’s AI Risk Incident Tracker highlight the value of open reporting on algorithmic harms. IssueBench offers a way to test LLM bias in writing assistance prompts across hundreds of policy areas.

Partnerships between universities and NGOs could help embed platform and LLM research in development portfolios, such as CEGA’s Global Networks program. Impactful research will require a robust third-party audit ecosystem, adapting lessons from the Digital Services Act (DSA).

2. Digital Accountability Infrastructure

Accountability requires tools and systems that let citizens and policymakers understand and inform how platforms operate. One approach is middleware: third-party software that sits between users and platforms to enable feed customization and transparency. Proposed by Francis Fukuyama and colleagues, middleware offers a user-centered alternative to centralized content control. Taiwan’s deliberative civic tech model also demonstrates what is possible. Platforms like vTaiwan crowdsource public input and surface consensus using Pol.is-style tools. These open-source platforms have informed policies ranging from pandemic response to Uber regulation, demonstrating that participatory governance can scale.

Light-touch regulatory toolkits, modeled on those in the data protection space, could help governments with limited enforcement capacity. The AU’s 2022 Data Policy Framework supports cross-border data flows while safeguarding rights and security. Similar principles could guide platform governance: enabling participation without locking countries into external models or costly compliance.

Across the board, policymakers should support civic media and storytelling that counter cynicism, highlight local progress, and build empathy. Bad news bias often drowns out positive stories. Restoring trust in institutions and the possibility of change means acknowledging success.

Accountability also depends on access. Initiatives such as the Digital Public Goods Alliance and Smart Africa can support shared infrastructure, complemented by investments in broadband and energy access.

3. Labor Opportunities and Protections in Tech Supply Chains

From internet cafes in the Philippines to annotation offices in Kenya and dispersed micro-tasking platforms, tech companies rely on labor across the Global South to train models, label content, and support digital infrastructure. These jobs reflect broader growth in exportable services: trade in commercial services has reached $8 trillion annually, with rising demand for remote tech support and AI-assisted work.

As Charles Kenny puts it, growth today might be less about moving machines to workers and more about moving workers to clients. Unlike earlier waves of industrial technology, many AI applications are infrastructure-light at the point of use, rely on human labor (for now), and could extend expertise, giving them promise in lower-income settings where smartphones are widely available but other infrastructure lags.

But platform labor needs guardrails. Building on initial investigations, future work should map conditions across digital labor supply chains. Policymakers should require tech companies to disclose labor conditions, align on labor standards, and embed protections into trade and AI governance. With the right policies in place, platform work could help turn fragmented gig tasks into stable earnings, channeling global demand into local opportunities.

4. Regional Power and Coordination for a Race to the Top

Within regions, countries can pool resources and push for transparency on their own terms. As Ken Opalo writes, this requires more than symbolic agreement. African institutions, in particular, can advance outcome-focused regionalism that empowers capable states to lead, including on platform governance. Steps could include:

  • Coordinating regional asks on platform transparency, data access, and labor standards.
  • Aligning regional platform governance efforts with other AU digital strategies, such as the Digital Transformation Strategy, Data Policy Framework, and Continental AI Strategy, which offer guidance and flexibility for national adaptation.
  • Convening cross-regional exchanges among regulators, researchers, civil society, and companies.
  • Comparing emerging models like the DSA and Taiwan’s civic tech to inform local approaches.
  • Aligning rules on shared enforcement priorities like political ads, language access, and child safety.
  • Backing reforms in regional bodies (e.g., AU, ECOWAS, ASEAN) that reward agenda-setting leadership.

A “race to the top” in digital governance, anchored in regional cooperation and leadership, would do more than any national push. It would reframe platform policy as part of a broader project of regional agency and ambition.

From Passivity to Progress-Oriented Policy

Online platforms affect how people form views, interpret events, and imagine the future. To ignore how these systems function is to misunderstand the current terrain of development. The goal is not regulation for its own sake, but to use online platforms to accelerate inclusive development.


Julia Kaufman is a development policy researcher. She previously worked at the Center for Global Development on global health policy and development finance, and on global health research related to frontline healthcare in Rwanda and child mental health in Kenya. Most recently, she worked for the Asian Development Bank on private sector development impact. She holds an MPA from Princeton University and a BA in global health and international comparative studies from Duke University.


1 Comment

  1. Robert Kaufman

    Great insights on the often overlooked role that varied platforms play in shaping opinions about international relief and development. Those of us committed to inclusive development need to take a more active role in shaping these platforms. The opportunity could mean greater influence over policy. The cost of further inaction could be devastating.

