News Release
SeedAI Commends the Work of the Biden-Harris Administration to Secure Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI
July 21, 2023

SeedAI and Partners Gear Up to Host the Largest Public Testing Event for Bias, Potential Harms, and Security Vulnerabilities in Leading AI Systems

WASHINGTON, D.C. - Today, SeedAI applauded the work of the Biden-Harris administration to secure voluntary commitments from the leading Artificial Intelligence (AI) companies working on cutting-edge generative models - including Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI - to develop responsible AI.

SeedAI has been working with these companies and other stakeholders across the public and private sectors to address the full spectrum of risk - from individual harms to existential threats - that concerns Americans nationwide. While a staggering amount of work remains, SeedAI is organized around spurring more of the tremendous progress this kind of collaborative commitment can achieve.

“We commend the Biden-Harris administration for securing the voluntary commitments from seven of the world’s leading AI companies to help move toward safe, secure, and transparent development of AI technology,” said Austin Carson, founder and president of SeedAI. “To date, our work around safety, security, and trust has led to unprecedented collaboration among these companies. As President Biden said today, ‘AI is an enormous promise and both a risk to our society, our economy, and our national security, but also has incredible opportunities.’ We welcome the administration’s continued support in encouraging the industry to uphold the highest standards to ensure that innovation doesn’t come at the expense of Americans’ rights and safety.”

In less than three weeks, SeedAI, the companies in today’s announcement, AI Village, Humane Intelligence, the Wilson Center Science and Technology Innovation Program (STIP), and others are collaborating on the Generative Red Team (GRT) Challenge, a first-of-its-kind event bringing people from all walks of life together to test leading AI technologies. The goal of the GRT Challenge is to advance AI and better understand the risks and limitations of this technology at scale, because open and extensive evaluation and testing bring us closer to inclusive development.

“As we see policymakers grappling with these enormous questions on AI safety, security, and the future of the workforce, this opportunity to engage with voices that bring new approaches to systems is essential,” said Kellee Wicker, director of STIP at the Wilson Center. “What we learn at the GRT Challenge will equip us for the next stage of technology policy.”

At the GRT Challenge, thousands of people - including hundreds of students and others from organizations traditionally left out of the early stages of technological change - will come together to test generative large language models (LLMs) for bias, potential harms, and security vulnerabilities.

“While external assessments of model safety are a good first step, they simply cannot achieve the transparency of large-scale public testing,” said Sven Cattell, founder of AI Village. “In security, we’ve seen many instances of critical vulnerabilities that were missed by private internal and external assessment teams. Allowing and enabling public good-faith safety researchers to assess these models is critical for the transparency and trust these AI-enabled systems need. The GRT Challenge opens these models to public research, and the diversity and breadth of its participants will surface harms that the private assessments did not.”

The GRT Challenge brings an ethical perspective to traditional red teaming by addressing societal impact - including categories to gather vulnerabilities in information integrity, multilingual harms, societal bias, and more.

“Our goal is to demonstrate the value of at-scale structured public feedback in identifying the broad range of potential intentional and unintentional misuses,” said Dr. Rumman Chowdhury, co-founder of Humane Intelligence. “We’ve found enthusiasm among our partner vendors for the value of our work in channeling expert external voices.”

The GRT Challenge builds upon the work SeedAI has completed this year, including an AI red teaming event in March at SXSW with Houston Community College and a similar exercise earlier this week with students at Howard University. SeedAI is committed to working with its partner Black Tech Street to engage HBCU students, who are instrumental to innovation in large-scale public testing. In addition to SeedAI’s work around inclusive testing, last year the organization announced the AI Across America program in conjunction with the co-chairs and vice chairs of the Congressional AI Caucus. AI Across America supports efforts in the public and private sectors to make AI education, training, and R&D available to communities across the country. The program brings together experts spanning relevant disciplines, geographic regions, and socioeconomic backgrounds for a series of forums in support of this initiative.

“We will continue to build upon the foundation we’ve created with the ecosystem of students, academic institutions, private companies, and nonprofit organizations,” added Carson. “We echo the President’s statement that this is a serious responsibility, and we have to get it right. We look forward to seeing how the Biden-Harris administration builds on this framework, the AI Bill of Rights, and the NIST AI Risk Management Framework with their upcoming executive order.”

The GRT Challenge will take place in the AI Village at DEF CON 31, August 10-13, 2023, at Caesars Forum in Las Vegas, Nev. To learn more about the GRT Challenge, visit https://www.airedteam.org/. To learn more about SeedAI’s work to create a more robust, responsive, and inclusive future for AI in America, visit www.seedai.org.

# # #

Media Contact: Kelly Crummey / 617-921-8089 / kelly@klccommunications.com