Meta announced the Frontier AI Framework to ensure that the artificial intelligence (AI) it develops does not pose unacceptable risks. Under this framework, models are classified into three risk levels: medium, high, and critical.
A medium-risk model is one that does not raise the existing threat level. It can be released to the public, but with mitigations applied based on the risk assessment that has been carried out.
A high-risk model is one that could heighten the threat scenario, though still below the critical level. It will not be released to the public; access is limited to the core research team, with security protections in place to prevent it from being hacked and disseminated.
Finally, a critical-risk model is one that meets the criteria for at least one catastrophic scenario whose risk cannot be reduced even with mitigation plans, such as threats involving cyber, chemical, or biological attacks.
Development of a critical-risk model will be halted as soon as the threat is identified and will resume only if an adequate risk mitigation plan is found. Access to such a model is restricted to a small number of experts, with security measures in place to prevent it from being hacked and leaked.
Meta says that by sharing its approach, it hopes the development of more advanced AI systems can proceed more responsibly.