
Databricks releases DASF 2.0 AI security framework

Databricks has announced the second version of its Databricks AI Security Framework (DASF 2.0), which provides comprehensive guidance for AI risk management. The new version identifies 62 technical security risks and offers 64 recommended controls for managing AI model risks.

DASF 2.0 was developed jointly by Databricks' security and machine learning teams in collaboration with industry experts. The framework aims to bridge business, data, governance, and security teams, providing practical tools and actionable strategies for demystifying AI security and supporting effective implementation.

A key feature of the new version is deeper alignment with leading industry standards and AI risk assessment frameworks, including MITRE ATLAS, the OWASP LLM & ML Top 10, NIST 800-53, NIST CSF, HITRUST, ENISA's recommendations for securing machine learning algorithms, ISO 42001, ISO 27001:2022, and the EU AI Act.

In response to user feedback, the company also released a DASF companion document designed to help with practical implementation of the framework. This approach allows organizations to balance innovative AI development with the necessary risk management.

What sets DASF 2.0 apart is that it provides a comprehensive risk profile for AI system deployments, grounded in existing standards. The framework offers multi-layered controls that simplify AI risk management for organizations and can be applied to any data and AI platform.

Author: AIvengo
For 5 years I have been working with machine learning and artificial intelligence. And this field never ceases to amaze, inspire and interest me.
