Center for Humane Technology

Source: Tom Gruber

About

Source: Website

Center for Humane Technology exists because we refuse to let powerful technologies develop without accountability. Our mission centers on one critical question: how can technology better serve society?

Operating across media, policy, and tech, CHT acts as the essential bridge between complex technological systems and the people they affect. We expose the hidden incentive structures driving today’s most consequential technologies — social media and artificial intelligence — before their misalignment becomes irreversible.

Our History

Our core question — “How can technology better serve society?” — holds a deep lineage at CHT.

In the early 2010s, while Tristan Harris was working as a Design Ethicist at Google, he began to notice the detrimental effects of attention-harvesting design, which was becoming increasingly prevalent on social media sites and digital platforms. Tristan observed that these design choices were deteriorating our ability to focus, weakening our relationships, and harming our mental health. Concerned about where this could lead society, he created the presentation "A Call to Minimize Distraction & Respect Users' Attention." The presentation went viral and launched the "Time Well Spent" movement.

In 2018, Tristan joined forces with fellow technologists Aza Raskin and Randima Fernando to found the Center for Humane Technology. Since its founding, CHT’s work has remained focused on the incentives that drive consequential technologies. When our organization was first formed, social media was the most consequential technology of our time — and CHT’s work centered on the widespread harms resulting from the race for attention.

In the 2020s, generative AI arrived with consequences equal to, if not greater than, those of social media. Because the incentive structures driving AI closely resemble those that shaped social media, CHT's analysis has naturally expanded to meet this urgent new chapter. Here, CHT offers the same diagnosis, applied to the latest wave of technology.

Today, our independent nonprofit is staffed by a team of experts able to identify and analyze the incentives driving these consequential technologies — and develop interventions that pave the path to a better future for society. We work to catalyze a world where technology supports and strengthens the very things that make us human.

At CHT, we are proud to drive meaningful change across all levels of society. Our work increases public awareness around critical tech issues, spurs policy that incentivizes better tech design, and empowers individuals and communities to discover their agency in today’s evolving tech landscape.

Invisible Incentives, Visceral Effects

Our expertise lies in analyzing how incentives drive technology design, and how those designs can either undermine or strengthen human well-being. Charlie Munger, Warren Buffett's business partner, once said, "Show me the incentive, and I'll show you the outcome." At CHT, that maxim holds deep truth. By examining the incentives driving modern technology, we are able to accurately diagnose what is happening with our current tech products — and even predict how technology may impact us in the future.

With technologists as our co-founders, CHT is keenly aware of how tech design reflects the incentives of a tech company, and how design goes on to impact our psychology. This "insider" expertise gives CHT a cutting-edge perspective on the critical drivers in this space.

CHT works to demystify this complex system of incentives so that stakeholders, policymakers, and the public at large can understand how, and why, tech is affecting them in adverse ways. CHT's legacy work includes dissecting how addictive design features on social media — including red notifications, algorithmic curation, intermittent reinforcement, and infinite scroll — work to manipulate human psychology and keep users on the platform for as long as possible.
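
To make one of these mechanisms concrete, here is a minimal illustrative sketch, in Python, of the variable-ratio reward schedule behind intermittent reinforcement. Every function name, parameter, and post list below is hypothetical, invented for this example; it is not any real platform's code. The point is the pattern itself: rewards that arrive unpredictably are far more habit-forming than rewards on a fixed schedule, the same principle slot machines rely on.

```python
import random

# Hypothetical sketch of intermittent reinforcement in a feed.
# build_feed, reward_rate, and the post lists are assumptions made
# for illustration, not drawn from any actual product.

def build_feed(ordinary_posts, high_engagement_posts, reward_rate=0.25):
    """Interleave ordinary posts with occasional, unpredictable 'rewards'."""
    feed = []
    rewards = iter(high_engagement_posts)
    for post in ordinary_posts:
        feed.append(post)
        # Variable-ratio schedule: each slot has a small random chance
        # of being followed by a highly engaging item, so the user never
        # knows when the next reward will appear.
        if random.random() < reward_rate:
            feed.append(next(rewards, post))
    return feed

print(build_feed(["update 1", "update 2", "update 3", "update 4"],
                 ["viral clip A", "viral clip B"]))
```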

These design features reveal a tech company's hidden incentives and the business models built on constant user engagement. Now AI is following the same dangerous playbook: companies are racing to deploy AI systems optimized for engagement and market dominance, not human wellbeing. The stakes have never been higher.

CHT is not against technology. We are against the misaligned incentives that distort the promises of technology in our society. By creating clarity around these hidden incentives, CHT offers individuals, families, and society at large the first step toward sparking change.

Why This Matters Now

AI and social media harms aren't future risks — they're present realities moving at unprecedented speed. While the exponential growth of AI offers great promise to society, its reckless rollout with disregard for collateral damage threatens to undermine the potential benefits. Every moment of inaction allows misaligned incentives to embed AI deeper into our lives, our communities, and our critical infrastructure, making it exponentially harder to change later.

The question isn’t whether technology will reshape society — it’s what incentives will drive that transformation.

Our work on social media has catalyzed measurable changes in public awareness, tech regulation, product design, and policy discourse. This approach helped shift the misaligned incentives behind poor social media outcomes, and we can do the same for artificial intelligence. By shifting the incentives driving AI development, we can change how AI impacts us all.

Contact

Contact form: https://www.humanetech.com/contact

AI in Society

Artificial intelligence is one of the most consequential technologies ever invented. The pace of AI development is staggering, and the rollout is reckless, driven by powerful economic and geopolitical incentives. The decisions we make today will impact our world for generations to come. To build a better future, we must first clarify the critical issues with artificial intelligence, and identify the key design choices that lay the foundation for a humane future with AI.

The Stakes

Tech companies are developing AI at breakneck speed, all in a race to attain market dominance and become the "first" to achieve artificial general intelligence (AGI).

But this race is highly volatile. While AI promises to enhance human cognition, eliminate drudgery, and accelerate humanity’s most important scientific, technical, and industrial endeavors, this same technology can simultaneously create unprecedented risks across our society, as well as supercharge existing digital and societal harms.

Massive economic and geopolitical pressures are driving the rapid deployment of AI into high-stakes areas — our workplaces, financial systems, classrooms, governments, and militaries. This reckless pace is already accelerating emerging harms, and surfacing urgent new social risks.

Meanwhile, since AI touches so many different aspects of our society, the public conversation is confused and fragmented. Developers inside AI labs have asymmetric knowledge of emerging AI capabilities, but the sheer pace of development makes it almost impossible for key stakeholders across our society to stay up-to-date.

Breaking Down the Problem

The quality of our future with AI depends on our ability to have deeper conversations, and to wisely develop, deploy, regulate, and use AI. At CHT, we break down the AI conversation into five distinct domains, each with different impacts on our social structures and the human experience.

Relationships, Community and Values
As AI becomes integrated into our personal lives and mediates our communications, this technology will reshape our relationships, our communities, and our social norms.

Work, Dignity and Meaning
The automation of human labor upends career trajectories, threatening not only our livelihoods, but our deepest life stories and the sense of purpose found through work.

Centralization of Power & Decentralization of Tools
Power dynamics are dramatically shifting as AI both centralizes economic and political influence in the hands of a select few, and radically decentralizes powerful and dangerous capabilities across society.

Breakdown of Shared Understanding
AI-generated content and algorithm-driven filter bubbles risk fracturing our shared sense of reality, fueling distrust, polarization, and a loss of confidence in what’s true.

Loss of Control
As we deploy increasingly powerful, inscrutable, and autonomous AI systems, we risk losing our collective ability to maintain meaningful human control over our economic and geopolitical systems.

Social Media in Society

Social media is one of the most popular and dominant technologies of our modern era. But it has also proven to be a destabilizing force, with corrosive effects on our mental health, our institutions, and even our sense of shared reality.

The Stakes

Social networking sites of the early 2000s were built with a simple goal in mind — connect people on the internet. But as these sites rose in popularity, their designs evolved into what we now call social media platforms. The design choices on these platforms were no longer just focused on connecting people; they were increasingly informed by attention-based business models and focused on driving engagement, keeping users on site, and accumulating as large a user base as possible.

CHT co-founder Tristan Harris was one of the first to publicly sound the alarm on the effect these design choices could have on our psychology. It was clear that the race to harvest our attention was creating, in Tristan's words, "a race to the bottom of the brain stem," along with an array of second-order societal harms. Social media's extractive technology began to take a mounting toll on our communities, our political and cultural discourse, and our mental health.

At CHT, we believe it is critical to course-correct social media, so that it can fulfill its earliest promise — providing people with a supportive environment where they can connect with one another. By bringing clarity to the ongoing challenges with social media, we empower society to discover its agency, especially as our younger generations come online.

Our Work

CHT remains one of the leading voices in the call to end manipulative design choices on social media. Our work focuses on educating the public about the nature of manipulative design, highlighting alternative design paths that provide better online experiences, and shepherding policy at the state and federal level to hold tech companies accountable for harms.

With our co-founders’ insider tech expertise, CHT remains uniquely positioned to demonstrate how social media design impacts people and society. Since social media companies are incentivized to keep users on site for as long as possible, their designs reflect these incentives — and they roll out features that capitalize on the human brain’s vulnerabilities. CHT demystifies the incentive structures that lead to attention-harvesting features — features that make it difficult, if not impossible, to stop scrolling, stop clicking, and stop engaging.

With these incentive structures named and clarified, CHT combines technical and policy expertise to transform these incentives and envision a better path forward with social media — one that supports our society, our institutions, our discourse, and our families.

CHT's success in this space includes the groundbreaking Netflix documentary "The Social Dilemma," which sparked a global conversation around the impact of manipulative design on our everyday lives; multiple congressional testimonies by CHT co-founder Tristan Harris on the nature of persuasive design; 42 lawsuits filed by state Attorneys General against Meta alleging that Facebook and Instagram include addictive design features aimed at children; and more.

The social media harms of the 2010s and early 2020s were preventable. But with effective interventions, we can still avert social media’s future harms, and step into a world with more humane social networking technology.

What We Can Keep Doing

Since the release of “The Social Dilemma,” a movement to make social media safer has grown globally. Countries from Australia to Brazil have passed laws to regulate social media platforms; 47 states in the U.S. have passed laws to make social media safer for kids; and countless coalitions, advocacy organizations, and even academic institutes are now dedicated to improving social media.

While this momentum has had great effect, more can be done. We don’t need to accept the current negative effects of social media throughout the rest of the 21st century. Technologies like social media and AI can — and should — increase our wellbeing, strengthen our democracies, and improve our shared information environment. To continue our journey with social media, we can:

  • Create Awareness: Educate people at all levels of society on the attention economy, attention-based business models, and the dangers of manipulative design. The more informed people are about the products they use, the more empowered they are to demand change.
  • Drive Policy: Advocate for policies that improve the impact social media has on young users, reduce polarization, and repair our information environment. The most effective policies incentivize tech companies to build better social media products.
  • Improve Tech Design: Support designs that improve social media, such as removing dark patterns and building algorithms that optimize for pro-social goals (a sketch follows this list). In doing so, tech companies can build products that reflect what the public really wants in a social media platform.
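
As a rough illustration of the design choice in the last bullet, the sketch below contrasts a ranker that scores posts purely on predicted engagement with one that blends in a quality signal. Every field name, weight, and score here is an assumption invented for this example, under the simplifying premise that a platform could define and measure such a quality signal; it is not any real platform's API or algorithm.

```python
# Hypothetical ranking objectives, for illustration only. The field
# names, weights, and scores are assumptions, not real platform code.

def engagement_score(post):
    # Pure engagement optimization: predicted clicks plus watch time.
    return post["predicted_clicks"] + post["predicted_watch_time"]

def prosocial_score(post, quality_weight=0.5):
    # Same engagement signal, tempered by a quality signal the platform
    # would have to define and measure (e.g. informational value).
    return ((1 - quality_weight) * engagement_score(post)
            + quality_weight * post["informational_quality"])

posts = [
    {"id": "outrage bait", "predicted_clicks": 0.9,
     "predicted_watch_time": 0.9, "informational_quality": 0.1},
    {"id": "useful explainer", "predicted_clicks": 0.5,
     "predicted_watch_time": 0.5, "informational_quality": 1.0},
]

# The two objectives rank the same posts in opposite orders.
print(max(posts, key=engagement_score)["id"])   # outrage bait
print(max(posts, key=prosocial_score)["id"])    # useful explainer
```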

Discuss

OnAir membership is required. The lead Moderator for the discussions is People Curators. We encourage civil, honest, and safe discourse. For more information on commenting and giving feedback, see our Comment Guidelines.

This is an open discussion on the contents of this post.
