Some find it fascinating, others are directly involved in its further development, and still others get a queasy feeling in the stomach: we are talking about artificial intelligence, or AI for short. Although the technology is already in use in many places and has enormous potential, it does not only generate enthusiasm. AI security in particular is perceived as a problem by many people, laypeople and experts alike. Below, we look at five central facts about AI security.
First of all, it is worth taking a look at a survey that emerged in 2018 from a collaboration between BlackBerry Cylance and the SANS Institute. A total of 260 cybersecurity experts were surveyed, and they ultimately identified seven major problems with the technology:
A study carried out jointly by the BSI (the German Federal Office for Information Security) and the French ANSSI (Agence nationale de la sécurité des systèmes d'information) is also worth a look.
The result: the data basis of neural networks (their training data) and the data input are highly vulnerable, and there are reliability problems with potentially dangerous consequences. The fallibility of artificial intelligence should not be underestimated and should be recognized as a real danger, especially where AI is used in critical areas such as autonomous driving or medical diagnosis.
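To illustrate why the data input is considered such a weak point, here is a minimal, purely hypothetical sketch in Python (assuming PyTorch is available): a small classifier is trained on synthetic data, and a single input is then perturbed along the sign of the loss gradient, in the spirit of the well-known fast gradient sign method. The model, the data and the perturbation size are illustrative assumptions, not details from the study.

```python
# Illustrative sketch only: a gradient-sign perturbation of the input,
# in the spirit of the fast gradient sign method (FGSM).
# PyTorch is assumed to be available; model, data and epsilon are hypothetical.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy dataset: the class is 1 if the sum of the 100 features is positive.
X = torch.randn(512, 100)
y = (X.sum(dim=1) > 0).float().unsqueeze(1)

model = nn.Sequential(nn.Linear(100, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCELoss()

for _ in range(300):  # short training run on the clean data
    opt.zero_grad()
    loss_fn(model(X), y).backward()
    opt.step()

# Take one input and nudge every feature slightly in the direction that
# increases the loss; many tiny changes can add up to a large effect.
x = X[0:1].clone().requires_grad_(True)
loss = loss_fn(model(x), y[0:1])
loss.backward()
epsilon = 0.2
x_adv = (x + epsilon * x.grad.sign()).detach()

print("true label:              ", y[0].item())
print("prediction, original x:  ", model(x).item())
print("prediction, perturbed x: ", model(x_adv).item())  # often pushed toward the wrong class
```

Even though each individual feature changes only a little, the prediction typically shifts markedly, which is exactly the kind of input vulnerability the study warns about.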
The term "data poisoning" is becoming more and more common, but what exactly does it mean? Put simply, it is the deliberate feeding of a machine learning system with incorrect data, which distorts everything the system learns. It is therefore a clear threat to AI security and to the reliability of supposedly "safe", self-learning AI applications.
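As a rough illustration of how such poisoning can work in practice, the following Python sketch (assuming numpy and scikit-learn are available) trains the same classifier twice: once on clean labels and once on a training set in which an attacker has flipped part of the labels. The dataset, the classifier and the poisoned fraction are illustrative assumptions.

```python
# Illustrative sketch only: label-flipping "data poisoning" against a simple classifier.
# numpy and scikit-learn are assumed to be available; all parameters are hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Clean, synthetic two-class data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# The attacker relabels a share of the class-0 training examples as class 1,
# shifting the decision boundary the model will learn.
class0_idx = np.where(y_train == 0)[0]
poison_idx = rng.choice(class0_idx, size=int(0.4 * len(class0_idx)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[poison_idx] = 1

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("test accuracy, trained on clean data:   ", clean_model.score(X_test, y_test))
print("test accuracy, trained on poisoned data:", poisoned_model.score(X_test, y_test))
```

The poisoned model is trained with exactly the same code as the clean one; only the data changed, which is what makes this kind of attack so hard to spot.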
Another topic that must not be ignored with regard to AI security is Generative Adversarial Networks, or GANs for short. As the name suggests, two neural networks work here as adversaries: from training data, one network (the generator) creates a candidate, which is then accepted or rejected by the second network, the so-called discriminator. Many experts view this technology as dangerous because it has the potential to turn neural networks into harmful instruments or even weapons.
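The generator/discriminator interplay described above can be sketched in a few lines; the following minimal Python example (assuming PyTorch is available) trains a tiny GAN whose generator learns to imitate samples from a one-dimensional Gaussian. Network sizes, learning rates and the target distribution are illustrative assumptions only.

```python
# Illustrative sketch only: a minimal GAN on a 1-D toy distribution.
# PyTorch is assumed to be available; architecture and hyperparameters are hypothetical.
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

def real_samples(n):
    # The "training data": samples from a Gaussian with mean 4 and std 1.25.
    return torch.randn(n, 1) * 1.25 + 4.0

for step in range(3000):
    # Train the discriminator to tell real samples from generated candidates.
    real = real_samples(64)
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train the generator to produce candidates the discriminator accepts.
    g_loss = loss_fn(discriminator(generator(torch.randn(64, 8))), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

with torch.no_grad():
    samples = generator(torch.randn(1000, 8))
print("generated mean/std:", samples.mean().item(), samples.std().item())
```

After a few thousand steps the mean and spread of the generated samples typically drift toward those of the training data; the same mechanism, scaled up, is what lets GANs produce convincing forgeries.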
Looking at the general population, it becomes clear that trust in AI and in AI security has been almost completely lost in many places. The mistrust stems on the one hand from a lack of knowledge, and on the other hand from detailed knowledge of the potential dangers of artificial intelligence. If the future is to bring a significant increase in AI applications in a business context, but also in everyday private life, a great deal of educational work on AI security and its problems still needs to be done.