Corporate Language Harms Tech’s Trust & Safety Effectiveness

Abdullah Hasan Safir

Community Member

Davinia Santimano

Community Member

Corporate language in Trust & Safety (T&S) organizations can perpetuate systemic inequities by prioritizing dominant cultures and economic gain over minority groups and safety, reflecting patterns of power and exploitation reminiscent of colonialism. We presented and discussed this hypothesis earlier this year at a conference organized by Harvard’s Berkman Klein Center, alongside current and former T&S members of major tech companies. Integrity workers identified the ways in which corporate language masks inequities and suppresses dissenting voices within T&S organizations. In this piece, we demonstrate how the use of corporate language follows a historical pattern that replicates within T&S, bringing greater inequity to the world.

Language can work to establish and maintain power dynamics between those who control the means of power and those who don’t. It can function as a form of hegemony, asserting the superiority of one group over others through cultural, social, and political means (see Fanon, Said, or Spivak). Such hegemonic practices include making certain linguistic patterns (known in work culture as buzzwords or acronyms) the standard or ideal modes of communication, thereby imposing domination and hierarchy. Historically, colonial rulers used official, legal, and administrative languages to centralize control and facilitate governance over colonized populations. This created a barrier for those who did not speak the language, marginalizing them from full participation in political and economic life. Sometimes these powerful elites deliberately fostered divisions and tensions between different groups, exploiting the population through language to maintain control. We identified that marginalized workers who navigate the dominant language of leadership often remain marginalized: when language is used as a tool of hegemony, it becomes hard to wield in opposition, whether to support their team or their users’ voices.

A successful career in tech requires expertise in navigating language, including those buzzwords and key terms. The proliferation of corporate buzzwords has made language purposely ambiguous, complicating efforts to prioritize safety and equity. When that ambiguity is applied to those who seek to build with safety, equity, and care, decisions and prioritization become extractive and exploitative, because they must exist within the constraints of company profit and success. Other industries (Boeing, Tesla) have also seen this turn into real-world harm. When it comes to social media, though, the scale of harm is greater: examples include increases in teen suicide, dehumanization and violence, and much more. Protecting T&S from corporate language is therefore key to ensuring greater equity on these platforms and in this world.

Language reflects the capitalist logic of tech companies 

In T&S work, a term like ‘Return on Investment’ (ROI) often reframes safety and integrity as financial burdens rather than as central to product design. When positioned as a cost or expense, integrity is minimized, no longer an innovative lever for “Return” in the ROI calculation. A safe, trusted, and quality experience should be the “Return” for these products; instead, having experiences that are safe enough to view ads and generate profit becomes the actual “Return”. Framing T&S’s critical and complex policy enforcement as an expense forces teams to make impossible tradeoffs and risks employee disengagement and moral injury.

Language as a tool for erasure 

Corporate language is not just a tool to reinforce capitalist priorities; it also silences dissent within T&S organizations. It constrains the T&S workforce by rewarding employees for oversimplifying, shortening, and prioritizing, all while accepting false equivalencies. There are many parts of a company where creating simplicity and prioritizing well is important, but tech’s Trust & Safety organizations should not be bound by the same language. Designing and maintaining meaningful safety thresholds for all cultures, languages, and countries should be how innovative products are built for scale.

Take “Data-Driven Decision-Making”, a term that frames all decision-making as information-based objectivity rather than bias and subjectivity. When tech companies chose to operate at their current scale – billions of global users creating billions of pieces of content monthly – they had to justify broad platform-wide decisions. Using data to prioritize profit means choosing when to use data in aggregate and when to shift to smaller “sample sizes”. For instance, support for languages with smaller userbases was, and is, deferred with the excuse that there isn’t enough data to justify the investment. This allows platforms to pinpoint where moderation stays profitable and safe enough, and to group languages or countries into classes or levels of investment. These decisions, backed by data, can become entrenched across organizations for multiple quarters, even years. The term “data-driven decision-making” has been co-opted to justify profit and the prioritization of preferred languages, cultures, or countries.

Language as a tool of silencing

Diving deeper, picture a T&S employee who identifies that languages like Assamese or Amharic are not getting the same level of moderation support as English. In aggregate, those users and their content do not make up a statistically significant share to justify the expense of training moderators in those additional languages. The employee believes there is risk in not having a defined minimum investment for moderation in these languages and decides to speak up. They are directed to prioritize through corporate speak like “There’s just never enough time to do all the things we want to” or “You’re not prioritizing well if it doesn’t hurt.” This pushes them to pause and reconsider “the cost”.

With expense discipline a known priority for the company and organization, nuance and complexity now feel frivolous when arguing for resources for underrepresented cultures. More phrases like “Consider your audience”, “We have to be practical”, or “Pick your battles” can push the worker to drop it and stay silent. They have been shaped into accepting this level of inequity in service to the company and their own performance, reducing their future capacity for critical thinking and dissent while increasing their moral dissonance and path to burnout.

Conclusion 

Entrenched biases and an institutionalized disregard for the safety of marginalized groups are rewarded in these systems because of how narrowly the successful use of this language is defined within the company. This is hard to see because meaningful terms are used to replace human experience with company success. By rethinking how language is used, tech companies have the opportunity to create more equitable and just systems that benefit all users globally, not just those in dominant cultures.

Abdullah Hasan Safir

Community Member

Abdullah Safir is an AI ethics and critical design researcher and PhD candidate at the University of Cambridge.

Davinia Santimano

Community Member

Davinia Santimano has spent nearly two decades as a Product & Program Manager across multiple tech industries.