How AI-powered malware uses facial recognition technology
Marc Ph. Stoecklin: What we show in this proof of concept is AI-powered malware delivered through a distribution channel that uses an unsuspicious, innocent-looking application. For this purpose we use a videoconferencing application that we call Talk. We're downloading this application. The user is opening the application from his downloads, and it is running.
What happened now is that the AI model picked up on Dan's face and derived a key from it. It used Dan's face as a key, basically, to derive how and when to unlock that malware. That makes it very evasive and very targeted: only Dan, by using this application, is shown the malicious behavior.
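To make the idea concrete, here is a minimal conceptual sketch of deriving an unlock key from a face, assuming a face recognition model that produces a fixed-length numeric embedding of the face in view. The quantization step and function names are illustrative assumptions, not part of IBM's proof of concept.

```python
import hashlib

def quantize(embedding, step=0.05):
    # Snap each embedding dimension onto a coarse grid so small
    # frame-to-frame variation in the same face still yields the
    # same byte string (and therefore the same key).
    return bytes(int(round(x / step)) & 0xFF for x in embedding)

def derive_key(embedding):
    # Hash the quantized embedding into a 32-byte key. The target's face
    # is never stored in the application; only a face that hashes to the
    # right key can unlock the concealed payload.
    return hashlib.sha256(quantize(embedding)).digest()
```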
IBM security researchers demonstrate how new artificial intelligence-powered facial recognition technology can trigger malware lurking within common applications.
The AI is inspecting what the webcam sees and is able to derive a key to unlock the malicious intent. Only if a specific person, whom the AI has been trained to recognize, shows up in front of the webcam can the key be derived, and only then does the malicious behavior appear.
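Building on the key-derivation sketch above, the trigger loop could look roughly like the following. Capturing webcam frames and extracting face embeddings are assumed to happen elsewhere, and the XOR "cipher" is a stand-in for illustration only; none of this is IBM's actual implementation.

```python
import hashlib

def try_unlock(encrypted_payload, key, expected_key_digest):
    # The application only carries a hash of the correct key, so inspecting
    # it does not reveal which face it is waiting for.
    if hashlib.sha256(key).digest() != expected_key_digest:
        return None  # wrong face (or no target): keep behaving normally
    # A simple XOR keystream stands in for a real cipher in this sketch.
    stream = (key * (len(encrypted_payload) // len(key) + 1))[:len(encrypted_payload)]
    return bytes(a ^ b for a, b in zip(encrypted_payload, stream))

def monitor(frame_embeddings, encrypted_payload, expected_key_digest):
    # frame_embeddings: an iterable of face embeddings extracted from
    # successive webcam frames by some face recognition model (assumed).
    # derive_key() is the helper from the earlier sketch.
    for embedding in frame_embeddings:
        payload = try_unlock(encrypted_payload, derive_key(embedding),
                             expected_key_digest)
        if payload is not None:
            return payload  # only now would the concealed behavior surface
    return None
```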
It's behaving normally. We have the sign-in screen. Now the application can be used as if it were a normal application. Indeed, it is a normal application; it is fully usable at that point. However, what we're going to see now is that if we move the laptop to look at Dan's face, the behavior will suddenly change.
TechRepublic's Dan Patterson spoke with Marc Ph. Stoecklin, Principal RSM & Manager, CCSI at IBM Research, who demonstrated just how new artificial intelligence-powered facial recognition technology can trigger malware lurking within common applications.