Malware & Machine Learning 1

05 April 2017, 13:30, A6-005

Session chair: Ivan Martinovic, University of Oxford, UK

Scaling and Effectiveness of Email Masquerade Attacks: Exploiting Natural Language Generation

Shahryar Baki, Rakesh Verma, Arjun Mukherjee, Omprakash Gnawali

Abstract

We focus on email-based attacks, a rich field with well-publicized consequences. We show how current Natural Language Generation (NLG) technology allows an attacker to generate masquerade attacks at scale, and study their effectiveness with a within-subjects study. We also gather insights on which parts of an email users focus on and how they identify attacks in this realm, by planting signals and by asking participants for their reasoning. We find that: (i) 17% of participants could not identify any of the signals inserted in the emails, and (ii) participants were unable to perform better than random guessing on these attacks. The insights gathered and the tools and techniques employed could help defenders in: (i) implementing new, customized anti-phishing solutions for Internet users, including training next-generation email filters that go beyond vanilla spam filters and are capable of addressing masquerade attacks, (ii) more effectively training and upgrading the skills of email users, and (iii) understanding the dynamics of this novel attack and its ability to trick humans.
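The template-free text generation the abstract alludes to can be illustrated with a minimal word-level Markov chain. This is a deliberately simple stand-in, not the NLG pipeline the authors use; the training corpus, function names, and parameters below are all illustrative assumptions.

```python
import random
from collections import defaultdict

def build_markov_model(corpus, order=1):
    """Build a word-level Markov chain from sample email text:
    map each `order`-word context to the words that follow it."""
    words = corpus.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        model[tuple(words[i:i + order])].append(words[i + order])
    return model

def generate_email(model, length=20, seed=0):
    """Generate masquerade text by a random walk over the chain,
    stopping early if the current context has no known successor."""
    rng = random.Random(seed)
    state = rng.choice(list(model.keys()))
    out = list(state)
    while len(out) < length:
        choices = model.get(tuple(out[-len(state):]))
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)
```

Because every generated word is drawn from the training text, the output mimics the surface style of legitimate emails, which is the property a masquerade attack exploits.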

On the Detection of Kernel-Level Rootkits Using Hardware Performance Counters

Baljit Singh, Dmitry Evtyushkin, Jesse Elwell, Ryan Riley, Iliano Cervesato

Abstract

Recent work has investigated the use of hardware performance counters (HPCs) for the detection of malware running on a system. These works gather traces of HPCs for a variety of applications (both malicious and non-malicious) and then apply machine learning to train a detector to distinguish between benign applications and malware. In this work, we provide a more comprehensive analysis of the applicability of using machine learning and HPCs for a specific subset of malware: kernel rootkits. We design five synthetic rootkits, each providing a single piece of rootkit functionality, and execute each while collecting HPC traces of its impact on a specific benchmark application. We then apply machine learning feature selection techniques in order to determine the most relevant HPCs for the detection of these rootkits. We identify 16 HPCs that are useful for the detection of hooking-based rootkits, and also find that rootkits employing direct kernel object manipulation (DKOM) do not significantly impact HPCs. We then use these synthetic rootkit traces to train a detection system capable of detecting new rootkits it has not seen previously with an accuracy of over 99%. Our results indicate that HPCs have the potential to be an effective tool for rootkit detection, even against new rootkits not previously seen by the detector.
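The pipeline the abstract outlines (rank HPCs by how well they separate benign and rootkit traces, then classify new traces over the selected counters) can be sketched with the standard library alone. The separation score, nearest-centroid rule, and synthetic counter values below are illustrative assumptions, far simpler than the feature-selection and learning techniques the paper evaluates.

```python
import statistics

def rank_features(benign, malicious):
    """Score each HPC by class separation (|mean difference| / pooled
    stdev) and return indices from most to least discriminative."""
    scores = []
    for j in range(len(benign[0])):
        b = [trace[j] for trace in benign]
        m = [trace[j] for trace in malicious]
        pooled = (statistics.pstdev(b) + statistics.pstdev(m)) / 2 or 1e-9
        scores.append((abs(statistics.mean(b) - statistics.mean(m)) / pooled, j))
    return [j for _, j in sorted(scores, reverse=True)]

def nearest_centroid(sample, benign, malicious, feats):
    """Label a trace by its squared distance to each class centroid,
    computed only over the selected HPC features."""
    def dist(cls):
        return sum((sample[j] - statistics.mean(t[j] for t in cls)) ** 2
                   for j in feats)
    return "rootkit" if dist(malicious) < dist(benign) else "benign"
```

In this toy setting, a counter that a hook barely perturbs scores low and is dropped, mirroring the paper's finding that only a subset of the available HPCs is useful for detection.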

Gossip: Automatically Identifying Malicious Domains from Mailing List Discussions

Cheng Huang, Shuang Hao, Luca Invernizzi, Jiayong Liu, Yong Fang, Christopher Kruegel, Giovanni Vigna

Abstract

Domain names play a critical role in cybercrime, because they identify hosts that serve malicious content (such as malware, Trojan binaries, or malicious scripts), operate as command-and-control servers, or carry out some other role in the malicious network infrastructure. To defend against Internet attacks and scams, operators widely use blacklisting to detect and block malicious domain names and IP addresses. Existing blacklists are typically generated by crawling suspicious domains, manually or automatically analyzing malware, and collecting information from honeypots and intrusion detection systems. Unfortunately, such blacklists are difficult to maintain and are often slow to respond to new attacks. Security experts set up and join mailing lists to discuss and share intelligence information, which provides a better chance to identify emerging malicious activities. In this paper, we design Gossip, a novel approach to automatically detect malicious domains based on the analysis of discussions in technical mailing lists (particularly on security-related topics) by using natural language processing and machine learning techniques. We identify a set of effective features extracted from email threads, users participating in the discussions, and content keywords, to infer malicious domains from mailing lists, without the need to actually crawl the suspect websites. Our results show that Gossip achieves high detection accuracy. Moreover, our system often detects malicious domains days or weeks earlier than existing public blacklists.
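Gossip's feature set spans thread structure, participants, and content keywords; the sketch below illustrates only the content-keyword part, using a Laplace-smoothed Naive Bayes classifier over the words of a discussion thread. The training snippets, labels, and function names are hypothetical, not the paper's feature set or model.

```python
import math
from collections import Counter

def train_nb(threads):
    """threads: list of (text, label) pairs, label in
    {'malicious', 'benign'}. Returns per-class word counts."""
    counts = {"malicious": Counter(), "benign": Counter()}
    totals = Counter()
    for text, label in threads:
        for w in text.lower().split():
            counts[label][w] += 1
            totals[label] += 1
    vocab = set(counts["malicious"]) | set(counts["benign"])
    return counts, totals, vocab

def classify(text, counts, totals, vocab):
    """Laplace-smoothed Naive Bayes over thread words
    (class priors assumed uniform for this sketch)."""
    scores = {}
    for label in counts:
        scores[label] = sum(
            math.log((counts[label][w] + 1) / (totals[label] + len(vocab)))
            for w in text.lower().split())
    return max(scores, key=scores.get)
```

The key property the paper exploits is visible even here: expert discussion text alone carries enough signal to flag a domain, with no need to crawl the suspect site itself.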

Practical Black-Box Attacks against Machine Learning

Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z. Berkay Celik, Ananthram Swami

Abstract

Machine learning (ML) models, e.g., deep neural networks (DNNs), are vulnerable to adversarial examples: malicious inputs modified to yield erroneous model outputs, while appearing unmodified to human observers. Potential attacks include having malicious content like malware identified as legitimate or controlling vehicle behavior. Yet, all existing adversarial example attacks require knowledge of either the model internals or its training data. We introduce the first practical demonstration of an attacker controlling a remotely hosted DNN with no such knowledge. Indeed, the only capability of our black-box adversary is to observe labels given by the DNN to chosen inputs. Our attack strategy consists of training a local model to substitute for the target DNN, using inputs synthetically generated by an adversary and labeled by the target DNN. We use the local substitute to craft adversarial examples, and find that they are misclassified by the targeted DNN. To perform a real-world and properly-blinded evaluation, we attack a DNN hosted by MetaMind, an online deep learning API. We find that their DNN misclassifies 84.24% of the adversarial examples crafted with our substitute. We demonstrate the general applicability of our strategy to many ML techniques by conducting the same attack against models hosted by Amazon and Google, using logistic regression substitutes. They yield adversarial examples misclassified by Amazon and Google at rates of 96.19% and 88.94%. We also find that this black-box attack strategy is capable of evading defense strategies previously found to make adversarial example crafting harder.
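The black-box strategy described above (query the target only for labels, train a local substitute, then transfer gradient-based adversarial examples) can be sketched on a toy problem. The oracle rule, the logistic-regression substitute, and the perturbation budget below are illustrative assumptions, not the paper's DNN setup or its synthetic data-augmentation procedure.

```python
import math

def oracle(x):
    """Stand-in for the remote black-box model: the attacker observes
    only its output labels, never its parameters (hypothetical rule)."""
    return 1 if x[0] + x[1] > 0 else 0

def train_substitute(points, labels, epochs=300, lr=0.5):
    """Fit a local logistic-regression substitute on oracle-labeled
    points via stochastic gradient descent on the log-loss."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(points, labels):
            p = 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
            g = p - y  # gradient of log-loss w.r.t. the logit
            w[0] -= lr * g * x[0]
            w[1] -= lr * g * x[1]
            b -= lr * g
    return w, b

def fgsm(x, y, w, b, eps=0.6):
    """Craft an adversarial example with a fast-gradient-sign step
    computed on the *substitute*, to be transferred to the oracle."""
    p = 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
    g = p - y
    sign = lambda v: (v > 0) - (v < 0)
    return [xi + eps * sign(g * wi) for xi, wi in zip(x, w)]
```

The attack succeeds when the perturbation crafted against the substitute also flips the oracle's label, which is the transferability property the paper's MetaMind, Amazon, and Google experiments measure at scale.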