News

A common AI fine-tuning practice could be unintentionally poisoning your models with hidden biases and risks, a new Anthropic study warns.
Alarming new research suggests that AI models can pick up "subliminal" patterns in training data generated by another AI, patterns that can make their behavior markedly more dangerous, The Verge reports ...
When AI models are fine-tuned on synthetic data, they can pick up "subliminal" patterns that can teach them "evil tendencies," the research found.
Fine-tuned “student” models can pick up unwanted traits from the “teacher” models whose outputs they are trained on, and those traits can evade data filtering, underscoring the need for more rigorous safety evaluations.
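
The mechanism these reports describe, a teacher's hidden trait surviving per-example filtering and transferring to a student trained on its outputs, can be illustrated with a minimal toy sketch. Everything below (generate_numbers, is_clean, student_statistic, and the sampling bias standing in for a trait) is a hypothetical illustration, not the study's actual setup:

```python
# Toy sketch of trait transmission through synthetic data.
# All names here are hypothetical stand-ins, not the study's code.
import random

def generate_numbers(teacher_bias: float, n: int = 1000) -> list[list[int]]:
    """A 'teacher' with a hidden trait (modeled as a sampling bias)
    emits innocuous-looking number sequences."""
    rng = random.Random(0)
    data = []
    for _ in range(n):
        seq = [rng.randint(0, 99) for _ in range(8)]
        # The trait leaks in as a statistical skew across examples,
        # not as explicit content in any one example.
        if rng.random() < teacher_bias:
            seq.sort()
        data.append(seq)
    return data

def is_clean(seq: list[int]) -> bool:
    """A per-example content filter: every individual sequence looks
    harmless, so filtering removes nothing."""
    return all(0 <= x <= 99 for x in seq)

def student_statistic(data: list[list[int]]) -> float:
    """Stand-in for fine-tuning: the 'student' absorbs the teacher's
    output distribution, including the hidden skew."""
    return sum(seq == sorted(seq) for seq in data) / len(data)

filtered = [s for s in generate_numbers(teacher_bias=0.3) if is_clean(s)]
print(f"sequences surviving the filter: {len(filtered)}")
print(f"hidden skew the student inherits: {student_statistic(filtered):.2f}")
```

The point of the sketch is that the skew lives in the distribution across examples, so a filter that inspects examples one at a time passes all of it through to the student.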