Rafail Ostrovsky - Publications


Revisiting Lower and Upper Bounds for Selective Decommitments

Rafail Ostrovsky, Vanishree Rao, Alessandra Scafuro, Ivan Visconti

Abstract:

Dwork et al. posed the fundamental question of the existence of commitment schemes that are secure against selective opening attacks (SOA, for short). In [2] Bellare, Hofheinz, and Yilek, and Hofheinz in [13], answered it affirmatively by presenting a scheme based solely on the non-black-box use of a one-way permutation and requiring a super-constant number of rounds. This result, however, opened other challenging questions about achieving better round complexity and obtaining fully black-box schemes, i.e., schemes that use both the underlying primitives and the code of the adversary in a black-box manner.
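To fix ideas, the selective-opening setting can be sketched with a toy hash-based commitment scheme. This is an illustrative sketch only, not the paper's construction: `commit`, `verify`, and the adversary's subset `I` are hypothetical names, and hiding here rests on an informal random-oracle-style heuristic for SHA-256.

```python
import hashlib
import secrets

def commit(msg: bytes) -> tuple[bytes, bytes]:
    """Toy commitment: hash the message with fresh randomness.
    Returns (commitment, opening randomness). Illustrative only."""
    r = secrets.token_bytes(32)
    c = hashlib.sha256(r + msg).digest()
    return c, r

def verify(c: bytes, msg: bytes, r: bytes) -> bool:
    """Check that (msg, r) is a valid opening of commitment c."""
    return hashlib.sha256(r + msg).digest() == c

# Selective-opening scenario: the sender commits to n messages, the
# adversary adaptively picks a subset I of commitments to be opened,
# and SOA security demands that the *unopened* messages stay hidden
# even given the openings (message + randomness) for the subset I.
msgs = [b"m0", b"m1", b"m2", b"m3"]
transcript = [commit(m) for m in msgs]

I = {1, 3}  # adversary's chosen subset (hypothetical choice)
openings = {i: (msgs[i], transcript[i][1]) for i in I}
assert all(verify(transcript[i][0], m, r)
           for i, (m, r) in openings.items())
```

The subtlety the paper addresses is that opening a correlated subset can, in general, leak information about the unopened messages, so SOA security does not follow from ordinary hiding.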

Recently, in TCC 2011, Xiao ([23]) investigated how to achieve (nearly) optimal SOA-secure commitment schemes, where optimality is in the sense of both round complexity and black-box use of cryptographic primitives. The work of Xiao focuses on a simulation-based security notion of SOA. Moreover, the various results in [23] focus only on either parallel or concurrent SOA.

In this work we first point out various issues in the claims of [23] that actually re-open several of the questions left open in [2,13]. Then, we provide new lower bounds and concrete constructions that produce a very different state of the art compared to the one claimed in [23].

comment: TCC 2013, pp. 559-578


Fetch PDF file of the paper


Back to Publications List