Anthropic Seeks to Fund Advanced AI Benchmark Development


Anthropic is launching a program to fund the development of new benchmarks to evaluate AI models’ performance and impact, including generative models like its own Claude.

Unveiled on Monday, Anthropic's program will provide payments to third-party organizations that can “effectively measure advanced capabilities in AI models,” according to a company blog post. Applications will be accepted on a rolling basis.

“Our investment in these evaluations aims to elevate the entire field of AI safety, providing valuable tools for the whole ecosystem,” Anthropic stated. “Developing high-quality, safety-relevant evaluations is challenging, and demand is outpacing supply.”

AI currently has a benchmarking problem. The most commonly cited benchmarks fail to capture how the average person uses the systems being tested. Some benchmarks, especially those predating modern generative AI, may not measure what they claim to.

Anthropic proposes creating challenging benchmarks focused on AI security and societal implications, built with new tools, infrastructure, and methods.

Anthropic Benchmarks Focusing on AI Security

The company calls for tests assessing a model's ability to carry out tasks such as conducting cyberattacks, enhancing weapons of mass destruction, and manipulating or deceiving people. For AI risks related to national security, Anthropic says it is committed to developing an “early warning system” for identifying and assessing those risks, though the blog post does not provide details.

Anthropic also aims to support research into benchmarks and “end-to-end” tasks probing AI's potential in scientific study, multilingual conversation, bias mitigation, and toxicity self-censorship.

To achieve this, Anthropic envisions new platforms that allow subject-matter experts to develop their own evaluations, as well as large-scale model trials involving “thousands” of users. The company has hired a full-time coordinator for the program and may purchase or expand promising projects.

“We offer a range of funding options tailored to each project's needs and stage,” Anthropic writes, without providing further details. “Teams will interact directly with Anthropic's domain experts from various relevant teams.”

Anthropic's effort to support new AI benchmarks is commendable, assuming sufficient resources are allocated. However, given the company's commercial ambitions in the AI race, it may be difficult to place complete trust in the effort.

Anthropic's Advanced AI Benchmark Development

Anthropic is transparent about wanting certain evaluations to align with its AI safety classifications, developed with input from third parties like the nonprofit AI research organization METR. This is within the company's prerogative, but it may require applicants to accept definitions of “safe” or “risky” AI they might not agree with.

Some in the AI community may also take issue with Anthropic's references to “catastrophic” and “deceptive” AI risks, such as nuclear weapons risks. Many experts argue there's little evidence suggesting AI will gain world-ending, human-outsmarting capabilities soon, if ever. Claims of imminent “superintelligence” may distract from pressing AI regulatory issues such as AI's hallucinatory tendencies.

Anthropic hopes its program will be “a catalyst for progress towards a future where comprehensive AI evaluation is an industry standard.” While many open, corporate-unaffiliated efforts to create better AI benchmarks may identify with this mission, it remains to be seen whether they will join forces with an AI vendor that is ultimately loyal to its shareholders.
