Anthropic has refused to give the US Defense Department unrestricted use of its AI for two big reasons: the company does not want its models used for large-scale monitoring of Americans or for autonomous weapons, and it believes AI systems are not yet reliable enough for such uses.
Why is there an ongoing dispute between Anthropic and the Pentagon?
Claude is the first AI system to be used on the military’s classified network. There is an ongoing dispute between the Pentagon and Anthropic because Anthropic is not allowing the Pentagon to use Claude without restrictions.
Anthropic has two big reasons for not wanting the Pentagon to use its AI models without any restrictions.
The company fears its AI model could be used for these two tasks
Anthropic fears that the Pentagon could use its AI tools for two purposes: mass domestic surveillance and autonomous weapons, neither of which the company wants.
This was not in the contract
Company CEO Dario Amodei says that the company’s contract with the Department of War (Department of Defense) did not cover using its AI models in this way, and he believes such uses should remain excluded.
Large scale surveillance through AI is not good
Anthropic worries that AI could be used to scan large datasets and monitor US citizens. The company sees this as a violation of democratic values and freedoms, and it does not want its AI models used for such purposes. It believes powerful AI systems can automatically collect people’s movements, browsing histories, and associations at a large scale. In the company’s view, large-scale surveillance through AI is bad for freedom and creates new threats.
AI should not be used for autonomous weapons
In addition, the company does not want its AI technology used in ‘killer robots’ (autonomous weapons), which can identify and kill targets without a human in the loop. The company believes existing AI is not yet reliable enough to manage these risks; such systems cannot be trusted to that degree and could prove dangerous.
However, autonomous weapons are already in use to some extent. The company says it will not knowingly provide a product that could pose a threat to American combatants or civilians. According to the company, such systems do not yet have the ability to monitor situations and make decisions like trained military personnel. For these two big reasons, the company has not struck a deal with the Pentagon, even at the risk of its AI system being banned from use.
