2025 News

5 Truths From 2025
Sustainable Media Center, December 30, 2025

Truth 1: Attention is the real extractive industry

Truth 2: AI did not create the crisis — it exposed it

Truth 3: Schools moved faster than governments

Truth 4: Courts, not Congress, became the engine of accountability

Truth 5: Gen Z stopped waiting to be protected

Where this leaves us

Taken together, these truths point to something larger: The crisis we are living through is not primarily about misinformation, AI, or bad actors. It is about architecture.

We built systems that reward speed over reflection, scale over responsibility, and amplification over accountability.

Predictably, those systems failed us. 2025 was the year it became harder to deny that reality.

The next phase is not about debunking lies faster or banning one more feature. It is about redesigning the spaces where belief is formed, attention is traded, and trust is either earned or destroyed. That work is slower than outrage and harder than innovation theater.

Truth did not disappear. We buried it under systems that could not support it.

Now the question is simple: Do we want systems that support truth, or don’t we?

Wisdom in the age of AI
Project Liberty, December 23, 2025

The race for intelligence

As AI has moved from the margins to the mainstream, the drive to embed intelligence everywhere has accelerated.

  • Across the economy, AI is framed as an accelerant. Companies like Microsoft, Salesforce, and Notion promise faster, smarter work through AI-powered tools. Millions now rely on chatbots to draft essays, analyze data, and deploy agents that compress time, reduce friction, and deliver instant answers.
  • AI has the potential to drive research and scientific discovery, accelerating progress and leading to new breakthroughs.
  • AI could transform education and care. “Intelligent” systems are heralded as a way to personalize learning, expand access to mental health support, and address isolation and loneliness at scale.

Jonathan Haidt + GenZ Convo: Should Phones In Schools be Banned?
The Sustainable Media Substack, Steven Rosenbaum, December 23, 2025

The Sustainable Media Center did not bring Jonathan Haidt, Zach Rausch, and a room full of Gen Z leaders together to stage a debate or score points in the ongoing culture war over phones. We did it because the conversation about youth, technology, and mental health has gotten loud, repetitive, and oddly narrow. Too often, adults talk about young people. Less often, they talk with them. Almost never do they listen carefully enough to be changed by what they hear.

This roundtable was an attempt to do something different.

Jonathan Haidt, author of The Anxious Generation, and his research partner Zach Rausch from the Tech and Society Lab came in with years of data, patterns, and hard questions about what phones and social media are doing to kids, especially in school settings. Sitting across from them were high school students, college students, filmmakers, organizers, and youth advocates from groups like Design It For Us, the Log Off movement, and Reconnect. The age spread mattered. So did the power dynamics. This was not a panel where adults lectured and young people nodded politely. It was a working conversation.

TikTok has signed a deal to divest its U.S. entity to a joint venture controlled by American investors, per an internal memo seen by Axios.

Why it matters: A deal would end a yearslong saga to force TikTok’s Chinese parent ByteDance to sell the company’s U.S. operation to domestic owners to alleviate national security concerns.

Zoom in: The agreement is set to close on Jan. 22, per an internal memo sent by CEO Shou Chew.

  • Oracle, Silver Lake and Abu Dhabi-based MGX will collectively own 45% of the U.S. entity, which will be called “TikTok USDS Joint Venture LLC.”
  • Nearly one-third of the company will be held by affiliates of existing ByteDance investors, and nearly 20% will be retained by ByteDance.

Between the lines: The U.S. joint venture will be responsible for U.S. data protection, algorithm security, content moderation and software assurance, per the memo.

  • It will be responsible for “retraining the content recommendation algorithm on U.S. user data to ensure the content feed is free from outside manipulation.”
  • “A trusted security partner will be responsible for auditing and validating compliance with the agreed upon National Security Terms, and Oracle will be the trusted security partner upon completion of the transaction,” the memo notes.
  • Upon the closing, the U.S. joint venture “will operate as an independent entity with authority over U.S. data protection, algorithm security, content moderation and software assurance, while TikTok global’s U.S. entities will manage global product interoperability and certain commercial activities, including e-commerce, advertising, and marketing,” it adds.

By the numbers: The deal values TikTok U.S. at around $14 billion, a source confirmed to Axios.

Catch up quick: The White House and the Chinese government hammered out a deal in principle in September to sell TikTok’s U.S. operations to a joint venture controlled by a U.S. investor group led by Andreessen Horowitz, Silver Lake and Oracle.

Flashback: Trump first issued an executive order demanding that ByteDance sell its U.S. operations in 2020.

  • Congress passed a law in 2024 to ban the app unless it was sold.
  • The Supreme Court upheld that law in January, but Trump repeatedly postponed its enforcement through a series of executive orders while his administration tried to negotiate a sale.

Will Australia’s teen social media ban work?
Project Liberty, December 16, 2025

For Breanna Easton, social media is a lifeline. The 15-year-old lives on a farm in the Australian outback, 60 miles from her closest friends.

Australia’s new law banning social media use for kids under age 16, which went into effect last week, cut Easton off.

“Taking away our socials is just taking away how we talk to each other,” she said.

Breanna’s mom, Megan Easton, agrees that kids need to be protected, but remembers her own childhood in rural Australia. “We might be incredibly geographically isolated but we’re not digitally illiterate and we have taken great measures in our family to make sure that we educate our children appropriately for the world ahead of them. I do think that it is a bit of government overstepping.”

Last week, Australia became the first country to implement a nationwide social media ban.

A social media platform has filed lawsuits, Australian teens have flouted the rules by posting workarounds, parents have been able to blame the law when trying to enforce their own phone-free policies at home, and policymakers in other countries are watching closely.

In this newsletter, we look at Australia’s grand experiment in banning teens under 16 from social media. It’s been less than a week, but it’s not too early to explore the questions on everyone’s mind:

Is this the government overstepping, or is this an example of a national policy to protect teens that will become a global blueprint?

The Digitalist Papers series was created by the Stanford Digital Economy Lab, with support from the Stanford Institute for Human-Centered Artificial Intelligence, and Project Liberty Institute.

The Stanford Digital Economy Lab today released “The Digitalist Papers, Volume 2,” a collection of 21 essays exploring the implications of the transformative economic power of artificial intelligence, setting the stage for change comparable to the Industrial Revolution but with far greater speed and scope. At a moment when AI capabilities are advancing faster than institutions can adapt, the volume offers frameworks, scenarios, and open questions to help leaders prepare for the transitions ahead.

The first volume of the Digitalist Papers, published in September 2024, focused on AI’s impact on American democracy, with contributions from academics, entrepreneurs, and policy practitioners. The second volume shifts focus to the opportunities and risks of “transformative AI,” or TAI, which is expected to drive rapid and far-reaching changes in the global economy.

The rise of the Splinternet
Project Liberty, December 9, 2025

There are the tech stories that everyone is talking about—AI-induced delusions, the impacts of social media on mental health, and the blistering pace of the AI race—and then there are the tech stories that fly under the radar, but could have even bigger implications for the future of the internet.

This newsletter is about one of those stories.

The global, open internet is rapidly disappearing. In its place, a fragmented internet is emerging, where each country controls and manages its digital infrastructure, content, connectivity, and governance.

This is the era of “the splinternet,” where individual nations carefully curate and control their internet.

This past November, Project Liberty Institute (PLI), in partnership with Georgetown’s Tech and Public Policy (TPP) program, hosted a Workshop on Deliberation, Governance and Decentralized Social Networks at the McCourt School of Public Policy in Washington, DC. The event brought together a diverse group of practitioners, researchers and students to explore and assess the role AI-assisted deliberation might play in helping online communities govern themselves.

Democratic governance can be unwieldy and challenging to design. Fortunately, tools exist to help online communities deliberate the pros and cons of policy; one such tool is digital deliberation. Traditionally, deliberative forms of democracy have been time-consuming and expensive: conducted in person with a representative selection of participants, over days or weeks.

Technological advances, including AI applications, have moved deliberation into the 21st century. Today, deliberative decision-making can happen entirely online and produce meaningful results in hours – even minutes. Representativeness may still require up-front effort, but overall costs are relatively modest. Democratic governance is within reach of numerous online communities and platforms.

For all its promise, AI has yet to win the hearts and minds of most Americans.

New survey data from SSRS and Project Liberty Institute (PLI) show that majorities still take a negative view of AI’s impact on our ability to think creatively and form meaningful human relationships.

Following the publication of Project Liberty Institute’s official T20 policy brief, Sarah Nicole, Policy & Research Manager, joined the T20 delegation in Johannesburg, South Africa, on November 13 and 14.

Co-written with the Global Solutions Initiative, the Aapti Institute, Data Privacy Brasil, and the Equiano Institute, the policy brief “Catalysing Positive Digital Infrastructure Innovation: G20’s Role in Advancing Data Agency” feeds directly into the T20 Communiqué, a collection of high-impact recommendations for the G20 by the task forces, published during the T20 summit.

On November 13, 2025, the Project Liberty Institute (PLI), in collaboration with its strategic partners ReframeVenture, Omidyar Network and ImpactVC, convened one of the most significant investor gatherings to date on the future of responsible investment in artificial intelligence and data technologies. Held at Stanford University in Palo Alto, the Stanford Summit on Responsible Investment in Data & AI brought together a powerful cross-section of leading technologists and the investment ecosystem, including limited partners (LPs) and venture capitalists (VCs) representing more than $4 trillion in capital across the United States and Canada.

The event created a rare forum for asset owners, allocators, and governance leaders to discuss how capital can shape AI technologies in ways that advance human agency, uphold democratic values, and strengthen long-term market trust.

A new partnership to shape the future of responsible technology investment and digital infrastructure

On the occasion of the Principles for Responsible Investment (PRI) in Person 2025 conference — one of the world’s foremost UN-backed gatherings of investors representing more than $120 trillion in assets committed to responsible finance — the United Nations Human Rights B-Tech Project and the Project Liberty Institute announced a new partnership to provide a vision for responsible AI investment that does not undermine data agency. The announcement, made during an official side event to PRI in Person in Sao Paulo, comes at a pivotal moment, as responsible investment frameworks expand beyond their roots in climate to address the growing human rights challenges associated with AI and data governance.

The event also marks the release of a new paper, “The Investors Financing the AI Ecosystem: Roles and Leverage to Drive Responsible Innovation,” jointly authored by UN B-Tech and the Project Liberty Institute. The publication explores how investors can use their influence to align capital allocation with human rights and unlock greater long-term value creation in the process.

As part of a global initiative to advance responsible and impactful investment in AI, the Project Liberty Institute (PLI) deepened its engagement with Asian investors through a series of high-level meetings and events across Singapore and Japan this October.

Building on the work in 2024 with strategic partners ReframeVenture, Omidyar Network, and ImpactVC, these engagements aimed to broaden the Institute’s ongoing LP and VC processes on responsible AI and data investment—an initiative that has already involved investors with over $6 trillion in capital across Europe and North America.

PLI’s CEO Sheila Warren emphasized: “ASEAN, and Southeast Asia more broadly, are an innovation powerhouse—home to extraordinary entrepreneurial energy and forward-looking investors. For decades, the region has been ahead of the curve when it comes to the adoption of frontier technologies, and it is uniquely positioned to help shape an AI era that upholds individual agency and inspires human-centered business models. As such, this is a crucial region for PLI’s mission to recenter humanity in the global digital economy.”

Pictured: Olivier Clyti, Director of Strategy, CSR, Digital, InVivo, France; Giuseppe Guerini, President, Cooperatives Europe, Italy; J. Benoit Caron, General Director of the Consortium for Collective Enterprise Cooperation, Canada; Osamu Nakano, Vice Executive Director, Japan Workers’ Co-operative Union (JWCU), Japan

On October 27th and 28th, the Project Liberty Institute presented the findings from “How Can Data Cooperatives Help Build a Fair Data Economy? Laying the Groundwork for a Scalable Alternative to the Centralized Digital Economy,” at the Global Innovation Coop Summit.

Yet if the intention economy is to thrive it must enable individuals to control their own data. Berners-Lee favours the Fediverse, a nascent network of interconnected digital services and social media, including Bluesky, Mastodon and Matrix, that relies on open protocols. One such protocol is Solid, being commercialised by Berners-Lee’s company Inrupt, which enables users to control their own agentic data pods, or wallets, and grant access to trusted services.

Other developers, universities and organisations are also devising ways to reimagine the web’s infrastructure in the AI age. One of the best-funded is Project Liberty, a $500mn initiative backed by the American businessman Frank McCourt. This has helped develop the interoperable decentralised social networking protocol (DSNP) that enables users to delegate and revoke access to their data for every application. Project Liberty is now working with more than 170 partner organisations, with the protocol being used by about 14mn people, according to McCourt. “Agency should be returned to individuals,” he tells me.

Hailing from a five-generation construction company family, McCourt is convinced that fixing underlying infrastructure is often the most effective means of tackling surface problems. The best way to solve lead poisoning in water, for example, is by replacing dangerous pipes, not the sink and tap. Systemic change happens from the bottom up, rather than the top down.

On October 1–2, 2025, Project Liberty founder Frank McCourt and leaders from the Project Liberty Institute (PLI) joined Norrsken Impact Week, which gathered over 1,000 entrepreneurs, investors, and changemakers in Barcelona, Spain. Part of the broader Project Liberty ecosystem, PLI is an independent 501(c)(3) organization with an international partner network that includes Georgetown University, Stanford University, ETH Zurich, and other leading academic institutions and civic organizations. The Institute’s work focuses on advancing a better AI and data economy that gives people more voice, choice, and stake in the internet by engaging the whole stack of LPs, VCs, entrepreneurs, infrastructure, policymakers, academia, and the general public.

AI has entered the main stage of global markets, with trillions of dollars flowing into the technologies, companies, and infrastructures that shape this new era. The real opportunity of this pivotal moment lies in enabling entrepreneurs and investors to build scalable businesses by creating a human-centered digital future and tapping into tomorrow’s growth markets.

Norrsken, founded by the co-founder of Klarna, Niklas Adalberth, has become one of the world’s leading ecosystems for impact entrepreneurship, with houses in Stockholm, Kigali, Brussels, and Barcelona. Impact Week is Norrsken’s flagship gathering, convening hundreds of entrepreneurs and investors working on solutions to global challenges.

Who pays for the future of the web?
Project Liberty, September 3, 2025

We’re at the start of the next rebundling. New business models, like those outlined in Project Liberty Institute’s report on the Fair Data Economy, are emerging that balance data rights with innovation and growth.

 

Consider the following models:

License & syndication. Media companies license content directly to AI firms. In 2024, News Corp signed a $250 million deal with OpenAI to use its content for training and queries. The New York Times struck a deal with Amazon while continuing its lawsuits against Microsoft and OpenAI. The Associated Press, Financial Times, and Dotdash Meredith have inked similar agreements.

Pay-per-crawl & API access. Cloudflare’s pilot program lets publishers decide whether to allow, block, or charge AI crawlers each time they request content, using the standard HTTP 402 (Payment Required) response code to signal that a crawl requires payment.

Attribution-based revenue sharing. Perplexity shares ad revenue with publishers when its chatbot uses their content, with partners like the Los Angeles Times and Adweek. Zendy compensates academic publishers based on citation frequency in AI responses.

Digital asset business models. On the individual level, people are beginning to sell their personal data, and a range of digital ownership models are emerging, including the Frequency blockchain.

Closed content ecosystems. Paywalls and subscriptions protect content from scraping while generating direct revenue. The New York Times has doubled digital subscribers since 2020 to nearly 12 million. Platforms like Substack offer smaller creators similar protection.

Community-supported content. Wikipedia’s and Signal’s donation models and Patreon’s creator memberships show the enduring power of direct audience support. Patreon has 290,000 creators, who collectively earn $25 million every month from their fans.
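The pay-per-crawl model above amounts to simple decision logic on the crawler’s side. The sketch below assumes a publisher answers a crawl request with HTTP 200 (allow), 403 (block), or 402 with a quoted price; the header name `crawler-price` is a hypothetical placeholder for illustration, not Cloudflare’s actual header.

```python
# Minimal sketch of pay-per-crawl decision logic, assuming a publisher
# signals pricing via an HTTP 402 (Payment Required) response.
# The "crawler-price" header name is hypothetical.

def crawl_decision(status: int, headers: dict, budget_per_page: float) -> str:
    """Decide how an AI crawler should respond to a publisher's reply."""
    if status == 200:
        return "fetch"  # publisher allows the crawl outright
    if status == 402:
        price = float(headers.get("crawler-price", "inf"))
        # pay only if the quoted price fits the crawler's per-page budget
        return "pay-and-retry" if price <= budget_per_page else "skip"
    if status == 403:
        return "blocked"  # publisher blocks AI crawlers entirely
    return "skip"
```

For example, `crawl_decision(402, {"crawler-price": "0.01"}, budget_per_page=0.05)` returns `"pay-and-retry"`, while a quote above budget returns `"skip"`.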

When chatbots fuel delusions
Project Liberty, August 19, 2025

Three elements make AI chatbots particularly insidious in nudging people toward unreasonable conclusions:

  1. They are personalized. Chatbots engage in highly personal, one-on-one-like dialogue. They tailor replies to what has been shared in the conversation, and newer models can even remember selected details across sessions. This sense of personalization has led some people to become emotionally overreliant on chatbots—treating them as mentors, confidants, or even arbiters in their lives.
  2. They are also sycophantic. AI chatbots are trained to optimize for user satisfaction, which often means mirroring rather than challenging ideas—a design feature researchers call sycophancy. Instead of probing assumptions or offering critical pushback, chatbots tend to validate, agree with, and even praise a person’s contributions. The result is a conversational partner that feels affirming but can quietly reinforce biases, encourage overconfidence, and create self-reinforcing loops.
  3. They are “improv machines.” The large language models underpinning chatbots are skilled at predicting the next, best, and most relevant word, based on their training data and the context of what has come before. Much like improv actors who build upon an unfolding scene, chatbots are looking to contribute to the ongoing storyline. For this reason, Helen Toner, the director of strategy and foundational research grants at Georgetown University’s Center for Security and Emerging Technology (CSET), calls them “improv machines.”
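The “improv machine” idea can be illustrated with a toy next-word predictor: given what came before, pick the most likely continuation seen in the training data. Real LLMs do this over tokens with neural networks at vast scale; this bigram counter is only a conceptual sketch, with made-up training text.

```python
# Toy illustration of next-word prediction: count which word tends to
# follow each word, then always continue with the most frequent follower.
from collections import Counter, defaultdict

def train_bigrams(text: str) -> dict:
    """Count, for each word, how often each other word follows it."""
    words = text.split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model: dict, word: str) -> str:
    """Return the most frequent continuation seen after `word`."""
    return model[word].most_common(1)[0][0]

model = train_bigrams(
    "yes and the show goes on and the scene builds and the story continues"
)
```

Here `predict_next(model, "and")` returns `"the"`, because “the” follows “and” in every case in the training text; like an improv actor, the model simply extends the scene in the most expected direction.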

Welcome to the vibe coding revolution
Project Liberty, August 12, 2025

The vibe coding revolution

Vibe coding has the potential to unlock new forms of creativity and democratize access to software development.

Democratizing technology
The barrier to creating apps, websites, and even entire businesses has been significantly reduced as conversational AI chatbots replace the need for deep technical expertise in coding languages. Kids as young as eight years old can now vibe code.

Increased speed
Vibe coding increases the speed of development and prototyping. What used to take days now takes hours. What used to take hours now takes minutes. The time between idea and workable prototype has shrunk, and the experience has improved.

  • In educational settings, students can rapidly prototype ideas and receive immediate visual feedback, leading to a more engaging and motivational approach to learning.
  • In professional settings, 84% of developers are using AI coding tools in their workflows, according to Stack Overflow’s 2025 Developer Survey.

Greater creativity
Instead of spending time mastering the precise rules, structures, and syntax of programming languages (such as debugging semicolon placement and memorizing function signatures), people can now focus on computational thinking—the ability to break down complex problems, recognize patterns, and design logical solutions using technology. Builders can outsource the burdensome cognitive load of coding to software, allowing them to stay focused on the bigger picture.

The data privacy risks
The rise of vibe coding could lead to substantial data privacy risks. We might be at the dawn of an explosion of software created by individuals that lacks proper security protocols and data privacy settings. As we observed with 23andMe (by no means a small or vibe-coded company), the bankruptcy of a company could expose users to losing control of their data or it being sold.

Building a robust data privacy infrastructure is more complicated than vibe coding a website. As the number of solopreneur vibe-coded tools grows exponentially, so too could the gaps and vulnerabilities around data privacy and security.

// The risks of cognitive offloading

Tools that democratize access and accelerate development can also encourage us to hand over too much of our thinking to machines. In a July newsletter, we explored the implications of “cognitive offloading” when leaning on AI to do too much of our thinking for us. A similar disengagement occurs when AI tools handle the heavy lifting in the coding process.

Lisa Barceló, a staff data scientist at Gusto, a payroll software company, is one of the top users of Cursor on the data team.

“It’s a difficult balance to strike between what to offload and what to hold tightly,” she said. “There’s a temptation to outsource too much work to AI tools. But when we do, we abdicate our role as strategists and true data scientists.”

// The human role in building technology

With tools that help us to outsource the technical work to AI, how should the education of technologists like software engineers and data scientists change?

 

At the University of Washington, the curriculum is already evolving, says Magdalena Balazinska, head of the Paul G. Allen School of Computer Science & Engineering.

