Databricks releases DASF 2.0 AI security framework

Databricks announced the release of the second version of its Databricks AI Security Framework (DASF 2.0), which provides comprehensive guidance for AI risk management. The new version of the framework identifies 62 technical security risks and offers 64 recommended controls for managing them.

DASF 2.0 was developed jointly by Databricks’ security and machine learning teams in collaboration with industry experts. The framework aims to bridge business, data, governance, and security teams, providing practical tools and actionable strategies for demystifying AI security and ensuring effective implementation.

A key feature of the new version is deeper alignment with leading industry standards and AI risk assessment frameworks, including MITRE ATLAS, the OWASP Top 10 lists for LLM and ML security, NIST 800-53, NIST CSF, HITRUST, ENISA’s recommendations for securing machine learning algorithms, ISO 42001, ISO 27001:2022, and the EU AI Act.

In response to user feedback, the company also released a DASF companion document designed to help with practical implementation of the framework. This comprehensive approach allows organizations to balance innovative AI development with the necessary risk management.

What sets DASF 2.0 apart is that it provides a comprehensive risk profile for AI system deployments, grounded in existing standards. The framework offers multi-level controls that simplify AI risk management for organizations and can be applied to any data and AI platform of their choice.

Author: AIvengo
I have been working with machine learning and artificial intelligence for five years, and this field never ceases to amaze, inspire, and interest me.