Week 7

https://www.csoonline.com/article/3664748/adversarial-machine-learning-explained-how-attackers-disrupt-ai-and-ml-systems.html

In a previous post, I talked about AI as a defense and its potential. Yet AI security is not perfect, and there are many ways attackers can abuse AI and ML systems.

Out of 7,500 global businesses surveyed, 35% are already using AI as a defense and 42% are experimenting with it. However, 20% say they have difficulty securing data with AI, and that is before considering the difficulty of integrating AI solutions into existing systems.

Additionally, 90% of companies are not prepared for "adversarial machine learning," a family of techniques used to attack machine learning systems themselves.

There are four types of adversarial machine learning attacks:

- Poisoning: the attacker manipulates the training data so the model learns the wrong behavior.

- Evasion: the attacker crafts inputs that an already trained model misclassifies.

- Extraction: the attacker reconstructs a copy of your AI system by observing what the system outputs for chosen inputs.

- Inference: the attacker figures out what data was used for training and exploits that knowledge.
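To make the evasion idea concrete, here is a toy sketch (my own, not from the article) of the classic "fast gradient sign" trick against a tiny logistic-regression model. The weights and input values are made up for illustration; the point is that a small, targeted nudge to the input flips the model's prediction.

```python
import numpy as np

# Toy "victim" model: logistic regression with fixed, made-up weights.
w = np.array([2.0, -1.0])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    # Classify as 1 if the model's confidence exceeds 0.5.
    return int(sigmoid(w @ x) > 0.5)

x = np.array([1.0, 0.5])   # clean input, classified as 1
y = 1                      # its true label

# Gradient of the cross-entropy loss with respect to the INPUT
# (not the weights -- the attacker perturbs the input).
grad_x = (sigmoid(w @ x) - y) * w

# Evasion step: nudge the input in the direction that increases the loss.
eps = 0.6
x_adv = x + eps * np.sign(grad_x)

clean_pred = predict(x)      # 1
adv_pred = predict(x_adv)    # 0 -- the small perturbation flips the label
```

The same idea scales up to image classifiers, where the per-pixel nudge is too small for a human to notice but still flips the model's answer.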
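Extraction can be sketched just as simply. Assuming the victim exposes a black-box prediction API and happens to be a plain linear model (a deliberately easy case I chose for illustration), an attacker can recover the weights by querying it and fitting a least-squares copy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden victim model: the attacker never sees these weights directly.
true_w = np.array([1.5, -2.0, 0.5])

def victim_predict(X):
    # Stand-in for a prediction API: the attacker only sees outputs.
    return X @ true_w

# The attacker sends chosen queries and records the responses...
X_queries = rng.normal(size=(50, 3))
y_out = victim_predict(X_queries)

# ...then fits their own model to the input/output pairs.
stolen_w, *_ = np.linalg.lstsq(X_queries, y_out, rcond=None)
# stolen_w is now essentially identical to true_w
```

Real extraction attacks against nonlinear models need far more queries and a surrogate network rather than least squares, but the principle is the same: enough input/output pairs leak the model.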

