“This is likely to be valuable if it works, and to be adopted by at least some labs if we can demonstrate that it works, but that will be uncertain and difficult.” - Technical staff member at a frontier lab
“Compassion in Machine Learning and AI for Animals are developing a benchmark for testing how compassionate Claude is to animals. There is interest from people working on Claude's character in considering and evaluating this work once it is done, as they are interested in animal welfare but currently do not have evidence on how much Claude cares.”
“The entire CaML team have been thoughtful and collaborative in this process, striking the right balance between autonomous (and so not requiring excess effort from lab stakeholders) and open-minded (taking on feedback and incorporating it into their work). Their experience with synthetic data is uncommon and positions them well to usefully advise labs on welfare matters.” - Staff member at a frontier lab
“Locking in uncompassionate values in AGI/superintelligence is among the largest worst-case risks from AI, and yet there are very few solid proposals to address this. CaML is the first to propose instruction pre-training to prevent such risks, and my team’s research suggests this is a very promising way to overcome the resistance of LLMs to alignment tuning. CaML has demonstrated that they are well capable of directing and executing these research and engineering efforts, and theirs is among the top projects in the entire AI safety/alignment space that I'd like to support.” - Anonymous staff member at a frontier lab
"I think this can be really impactful ... if you pull it off, you'll possibly have made a massive difference to our future lightcone" - Soroush J. Pour, founder of Harmony Intelligence, an AI Startup
“This project has the potential to have a significant and beneficial impact, and I would be excited to see what your work will lead to.” - Irina Gueorguiev, AI market researcher and advisor at Successif
“I’ve been working with CaML (Joyee Chen, Miles Tidmarsh, and Jasmine Brazilek) while wiring the AHA animal-harm benchmark into the Inspect-Evals stack, and they impressed me immediately: they care deeply about non-human welfare and they ship. They've turned a rough sketch of ‘make frontier models reason about animals’ into a functioning eval, and are putting legwork into making it useful and consumable by frontier labs. In my direct experience with them, they have been attentive to technical and substantive details, very good at explaining their thoughts, and have been a pleasure to work with.” - Nishad Singh, head of the Animal Harms Assessment
"The team at CaML is conducting crucial work to ensure that the future goes well for all sentient beings. Apart from being thoughtful in their decision-making, thinking about long-term effects, they are also uniquely positioned to implement the interventions they are pursuing due to their technical expertise, an essential skill that is otherwise scarce in the space making the future of AI development go well for sentient non-humans." - Adrià Moret, Philosophy Researcher at the University of Barcelona, Board Member of the UPF-Center for Animal Ethics and Editor-in-Chief of Animal Ethics Review