05 April 2017, 16:35, A6-004
Session chair: Cong Wang, City University of Hong Kong, Hong Kong
Secure Wallet-Assisted Offline Bitcoin Payments with Double-Spender Revocation
Alexandra Dmitrienko, David Noack, Moti Yung
Bitcoin seems to be the most successful cryptocurrency so far, given its growing real-life deployment and popularity. While Bitcoin requires clients to be online to perform transactions, and a certain amount of time to verify them, many real-life scenarios demand offline and immediate payments (e.g., mobile ticketing, vending machines). However, offline payments in Bitcoin raise non-trivial security challenges, as the payee has no means to verify the received coins without access to the Bitcoin network. Moreover, even online immediate payments have been shown to be vulnerable to double-spending attacks. In this paper, we propose the first solution that enables secure Bitcoin payments in offline settings and in scenarios where payments need to be accepted immediately. Our approach relies on an offline wallet and deploys several novel security mechanisms to prevent double-spending and to verify coin validity in an offline setting. These mechanisms achieve probabilistic security, guaranteeing that the attack probability stays below a desired threshold. We provide a security and risk analysis, and model security parameters for various adversaries. We further eliminate remaining risks by detecting misbehaving wallets and revoking them. We implemented our solution for mobile Android clients and instantiated the offline wallet using a microSD security card. Our implementation demonstrates that smooth integration on a very prevalent platform (Android) is possible, and that offline and online payments can practically co-exist. We also discuss an alternative deployment approach for the offline wallet that does not leverage secure hardware, but instead relies on a deposit system managed by the Bitcoin network.
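The abstract's notion of probabilistic security, where the attack probability is kept below a desired threshold, can be illustrated with a toy calculation (this is a generic model, not the authors' actual mechanism): if each independent offline check catches a double-spend attempt with probability `p_detect`, an attacker evades `k` checks with probability `(1 - p_detect)**k`, which lets us bound the number of checks needed.

```python
import math

def min_checks(p_detect: float, max_attack_prob: float) -> int:
    """Smallest k such that an attack evading all k independent checks,
    i.e. (1 - p_detect)**k, stays below the desired threshold.
    Illustrative only; the paper's actual mechanism differs."""
    if not 0 < p_detect < 1 or not 0 < max_attack_prob < 1:
        raise ValueError("probabilities must lie strictly between 0 and 1")
    return math.ceil(math.log(max_attack_prob) / math.log(1 - p_detect))

# e.g. with 50% detection per check, 7 checks push the attack
# probability below 1%: min_checks(0.5, 0.01) == 7
```

The same trade-off drives the paper's parameter modelling: stronger adversaries (lower `p_detect`) require more verification effort for the same threshold.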
Privacy-preserving and Optimal Interval Release for Disease Susceptibility
Kosuke Kusano, Ichiro Takeuchi, Jun Sakuma
In this paper, we consider the problem of privacy-preserving release of function outputs that take private information as input. Disease susceptibilities are known to be associated with clinical features (e.g., age, sex) as well as genetic features represented by SNPs of individuals. Released outputs are not privacy-preserving if the private input can be uniquely identified by probabilistic inference using those outputs. To release useful outputs while preserving privacy, we present a mechanism that releases an interval as output, instead of a single output value. We assume adversaries perform probabilistic inference using released outputs to sharpen the posterior distribution of the target attributes. Our mechanism then has two significant properties. First, when the mechanism provides an output, the increase of the adversary’s posterior on any input attribute is upper-bounded by a prescribed level. Second, under this privacy constraint, the mechanism provides the narrowest (optimal) interval that includes the true output. Building such a mechanism is often intractable. We formulate the design of the mechanism as a discrete constraint optimization problem so that it is solvable in practical computation time, and we propose an algorithm based on dynamic programming to obtain the optimal mechanism. Applying our mechanism to release disease susceptibilities for obesity, we demonstrate that it performs better than existing methods in terms of both privacy and utility.
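The core optimization the abstract describes, finding the narrowest interval containing the true output subject to a bound on the adversary's posterior gain, can be sketched as a brute-force search (the paper uses dynamic programming for efficiency; the `leakage` function below is a hypothetical stand-in for the posterior-increase bound):

```python
def narrowest_interval(values, true_value, leakage, budget):
    """Among all intervals [values[i], values[j]] containing true_value,
    return the narrowest one whose privacy leakage stays within budget.
    `leakage(i, j)` is a hypothetical stand-in for the adversary's
    posterior gain when values[i..j] is released; the paper bounds
    this gain by a prescribed level."""
    t = values.index(true_value)
    best = None
    for i in range(t + 1):            # left endpoint at or below the truth
        for j in range(t, len(values)):  # right endpoint at or above it
            if leakage(i, j) <= budget:
                width = values[j] - values[i]
                if best is None or width < best[0]:
                    best = (width, values[i], values[j])
    return None if best is None else (best[1], best[2])
```

For instance, with a toy leakage model where narrower intervals leak more, `narrowest_interval([0, 1, 2, 3, 4], 2, lambda i, j: 1.0 / (j - i + 1), 0.5)` yields a width-1 interval around the true value. The quadratic scan above is what dynamic programming makes tractable at realistic domain sizes.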
Towards Extending Noiseless Privacy – Dependent Data and More Practical Approach
Krzysztof Grining, Marek Klonowski
In 2011, Bhaskar et al. pointed out that in many cases one can ensure a sufficient level of privacy without adding noise, by utilizing adversarial uncertainty. Informally speaking, this observation comes from the fact that if at least part of the data is randomized from the adversary’s point of view, it can effectively be used to hide other values. So far, the treatment of this idea in the literature has been mostly asymptotic, which greatly limited its adoption in real-life scenarios. In this paper, we aim to make utilizing adversarial uncertainty not only an interesting theoretical idea, but a practically useful technique, complementary to differential privacy, the state-of-the-art definition of privacy. This requires non-asymptotic privacy guarantees and a more realistic approach to the randomness inherently present in the data and to the adversary’s knowledge. We extend the concept proposed by Bhaskar et al. and present results for a wider class of data; in particular, we cover data sets that are dependent. We also introduce a rigorous adversarial model. Moreover, in contrast to most previous papers in this field, we give detailed (non-asymptotic) results, motivated by practical considerations. This required a modified approach and more subtle mathematical tools, including the Stein method, which, to the best of our knowledge, has not been used in privacy research before. Finally, we show how to combine adversarial uncertainty with the differential privacy approach, exploring the synergy between them to enhance the privacy already present in the data by adding only a small amount of noise.
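One simplistic reading of the synergy between inherent randomness and added noise can be sketched as a variance "top-up" (purely illustrative; the paper's actual analysis of dependent data via the Stein method is far more subtle): randomness already present in the data counts toward the protection target, so only the shortfall needs to be added as explicit noise.

```python
import math

def topped_up_noise_scale(target_eps: float, sensitivity: float,
                          inherent_std: float) -> float:
    """Scale b of Laplace noise to add so that inherent variance plus
    added variance reaches the variance of a full Laplace(sensitivity/eps)
    mechanism. Illustrative only; not the paper's actual guarantee."""
    target_var = 2 * (sensitivity / target_eps) ** 2  # Var[Laplace(b)] = 2 b^2
    shortfall = max(0.0, target_var - inherent_std ** 2)
    return math.sqrt(shortfall / 2)
```

With no inherent randomness (`inherent_std = 0`), this reduces to the standard Laplace scale `sensitivity / eps`; with enough inherent randomness, no noise needs to be added at all, which is the noiseless-privacy regime.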
BlindIDS: Market-Compliant and Privacy-Friendly Intrusion Detection System over Encrypted Traffic
Sébastien Canard, Aïda Diop, Nizar Kheir, Marie Paindavoine, Mohamed Sabt
The goal of network intrusion detection is to inspect network traffic in order to identify threats and known attack patterns. One of its key features is Deep Packet Inspection (DPI), which extracts the content of network packets and compares it against a set of detection signatures. While DPI is commonly used to protect networks and information systems, it requires direct access to the traffic content, which leaves it blind to encrypted network protocols such as HTTPS. So far, a difficult choice had to be made between the privacy of network users and security through the inspection of their traffic content to detect attacks or malicious activities. This paper presents a novel approach that bridges the gap between network security and privacy. It makes it possible to perform DPI directly on encrypted traffic, without knowing either the traffic content or the patterns of the detection signatures. The relevance of our work is that it preserves the delicate balance of the security market ecosystem. Indeed, security editors will be able to protect their distinctive detection signatures and supply service providers only with encrypted attack patterns. Service providers, in turn, will be able to integrate the encrypted signatures into their architectures and perform DPI without compromising the privacy of network communications. Finally, users will be able to preserve their privacy through traffic encryption while also benefiting from network security services. The extensive experiments conducted in this paper show that, compared to existing encryption schemes, our solution reduces the connection setup time for new users by 3 orders of magnitude, and the memory consumed on the DPI appliance by 6 orders of magnitude.
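The flavor of matching signatures against traffic without revealing either side can be illustrated with a toy keyed-token scheme (this is a deliberately simplified stand-in, not BlindIDS's cryptographic construction): both traffic and signatures are reduced to PRF tokens over sliding windows, and the DPI appliance compares only opaque tokens.

```python
import hmac
import hashlib

def tokenize(key: bytes, data: bytes, n: int = 8) -> set:
    """Slide an n-byte window over `data` and PRF each window with HMAC,
    so a middlebox holding only tokens never sees plaintext.
    Toy stand-in for the paper's encryption scheme."""
    prf = lambda chunk: hmac.new(key, chunk, hashlib.sha256).digest()
    return {prf(data[i:i + n]) for i in range(len(data) - n + 1)}

def dpi_match(traffic_tokens: set, signature_tokens: set) -> bool:
    # The DPI appliance compares opaque tokens only; it learns that a
    # signature matched, but not the traffic content or the signature.
    return not traffic_tokens.isdisjoint(signature_tokens)
```

For example, a signature tokenized from `b"evil-payload"` matches traffic tokenized from `b"GET /evil-payload HTTP/1.1"` under the same key, while a benign signature does not. A real scheme must additionally prevent the appliance from building a dictionary of tokens, which is one of the problems the paper's construction addresses.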
The goal of network intrusion detection is to inspect network traffic in order to identify threats and known attack patterns. One of its key features is Deep Packet Inspection (DPI), that extracts the content of network packets and compares it against a set of detection signatures. While DPI is commonly used to protect networks and information systems, it requires direct access to the traffic content, which makes it blinded against encrypted network protocols such as HTTPS. So far, a difficult choice was to be made between the privacy of network users and security through the inspection of their traffic content to detect attacks or malicious activities. This paper presents a novel approach that bridges the gap between network security and privacy. It makes possible to perform DPI directly on encrypted traffic, without knowing neither the traffic content, nor the patterns of detection signatures. The relevance of our work is that it preserves the delicate balance in the security market ecosystem. Indeed, security editors will be able to protect their distinctive detection signatures and supply service providers only with encrypted attack patterns. In addition, service providers will be able to integrate the encrypted signatures in their architectures and perform DPI without compromising the privacy of network communications. Finally, users will be able to preserve their privacy through traffic encryption, while also benefiting from network security services. The extensive experiments conducted in this paper prove that, compared to existing encryption schemes, our solution reduces by 3 orders of magnitude the connection setup time for new users, and by 6 orders of magnitude the consumed memory space on the DPI appliance.