We define the notion of private-output multiparty computation. This new notion encompasses the classical definition as a particular case, and allows a set of players to jointly compute the output of a common function in such a way that the execution of the protocol reveals no information (to an adversary controlling some of the players) about (some part of) the outputs, other than what follows from the description of the function itself. We then formally prove that essentially no function can be output-privately computed in the presence of an adversary who gains full access to the internal memory of the corrupted players. However, if the (computationally bounded) adversary is restricted to controlling only part of the state of the corrupted players, any function can be output-privately computed, assuming that enhanced trapdoor permutations exist and that public communication channels are available. Moreover, we prove that security is preserved under sequential composition. We note that partial access to the internal state of some of the players (either part of the time, e.g., forward security and intrusion resilience, or part of the space, e.g., secure CPU/memory) is an assumption that has been used in various settings to formalize limits on the attacker's capabilities that can be enforced via reasonable physical and architectural restrictions. However, previous models were devised for specific cryptographic tasks (e.g., encryption and signature schemes), whereas our formalization has a wider scope. We believe that the model we suggest may foster further studies of insider adversaries with partial control in the context of secure multiparty computation.
In the 3rd Yet Another Conference on Cryptography (YACC '06). Porquerolles Island, France, June 19-23, 2006.