
Online/Offline OR Composition of Sigma Protocols

Ciampi, Michele; Persiano, Giuseppe; Siniscalchi, Luisa; Visconti, Ivan
2016-01-01

Abstract

Proofs of partial knowledge allow a prover to prove knowledge of witnesses for k out of n instances of NP languages. Cramer, Schoenmakers and Damgård [10] provided an efficient construction of a 3-round public-coin witness-indistinguishable (k, n)-proof of partial knowledge for any NP language, by cleverly combining n executions of Σ-protocols for that language. Their transform assumes that all n instances are fully specified before the proof starts, and thus rules out the possibility of choosing some of the instances after the first round. Very recently, Ciampi et al. [6] provided an improved transform in which one of the instances can be specified in the last round. They focus on (1, 2)-proofs of partial knowledge with the additional feature that one instance is defined in the last round and can be adaptively chosen by the verifier. They left open the question of the existence of an efficient (1, 2)-proof of partial knowledge where no instance is known in the first round and, more generally, the question of constructing an efficient (k, n)-proof of partial knowledge where knowledge of all n instances can be postponed. Indeed, this property is currently achieved only by inefficient constructions requiring NP reductions [19]. In this paper we focus on the question of achieving adaptive-input proofs of partial knowledge. Via a new transform, we provide the first efficient construction of a 3-round public-coin witness-indistinguishable (k, n)-proof of partial knowledge in which all instances can be specified in the third round. Our construction enjoys adaptive-input witness indistinguishability. Moreover, the proof of knowledge property is preserved even if the adversarial prover selects the instances adaptively in the last round, as long as our transform is applied to a proof of knowledge belonging to the widely used class of proofs of knowledge described in [9,21]. Since knowledge of instances and witnesses is not needed before the last round, the first round can be precomputed, and in the online/offline setting our performance is similar to that of [10]. Our new transform relies on the DDH assumption (in contrast to the transforms of [6,10], which are unconditional).
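
For intuition about the starting point of this work, the sketch below illustrates the classical CDS-style OR-composition of Σ-protocols [10], instantiated with Schnorr's protocol for discrete logarithms. This is a minimal illustrative sketch under stated assumptions: it is not the paper's adaptive-input DDH-based transform, the group parameters are insecure toy values, and all function names are hypothetical. The idea it demonstrates is that the prover simulates the transcript for the instance whose witness it lacks and splits the verifier's challenge so that the two challenge shares sum to it.

```python
# Hedged sketch (NOT the paper's construction): classical CDS-style (1,2)-OR
# composition [10] of Schnorr Sigma-protocols over a toy group.
# Parameters and names are illustrative assumptions only. Requires Python 3.8+.
import secrets

# Toy Schnorr group: p prime, q prime dividing p-1, g of order q. NOT secure.
p, q, g = 1019, 509, 4

def schnorr_commit():
    r = secrets.randbelow(q)
    return r, pow(g, r, p)                      # randomness r, commitment a = g^r

def schnorr_respond(r, w, c):
    return (r + c * w) % q                      # z = r + c*w mod q

def schnorr_simulate(X, c):
    # Simulated accepting transcript for statement X = g^w without knowing w.
    z = secrets.randbelow(q)
    a = (pow(g, z, p) * pow(pow(X, c, p), -1, p)) % p   # a = g^z / X^c
    return a, z

def schnorr_verify(X, a, c, z):
    return pow(g, z, p) == (a * pow(X, c, p)) % p

def or_prove(X, b, w_b, challenge):
    # (1,2)-OR proof: the prover knows w_b with X[b] = g^{w_b}, but not the
    # other witness. Interaction is compressed into one call for brevity;
    # the commitments a[0], a[1] do not depend on `challenge`.
    c_other = secrets.randbelow(q)              # pre-chosen challenge share
    a_sim, z_sim = schnorr_simulate(X[1 - b], c_other)
    r, a_real = schnorr_commit()
    a = [None, None]; a[b], a[1 - b] = a_real, a_sim
    # Third round: split the verifier's challenge so the shares sum to it mod q.
    c_b = (challenge - c_other) % q
    z_real = schnorr_respond(r, w_b, c_b)
    c = [None, None]; z = [None, None]
    c[b], c[1 - b] = c_b, c_other
    z[b], z[1 - b] = z_real, z_sim
    return a, c, z

def or_verify(X, challenge, a, c, z):
    return (c[0] + c[1]) % q == challenge % q and all(
        schnorr_verify(X[i], a[i], c[i], z[i]) for i in (0, 1))

# Usage: the prover knows only the witness for X[0].
w0 = secrets.randbelow(q)
X = [pow(g, w0, p), pow(g, secrets.randbelow(q), p)]
ch = secrets.randbelow(q)                       # verifier's public coin
a, c, z = or_prove(X, 0, w0, ch)
assert or_verify(X, ch, a, c, z)
```

Note that in this classical composition both instances must be fixed before the first message is computed; lifting exactly that restriction, so that instances and witnesses may arrive only in the third round, is what the paper's DDH-based transform achieves.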
2016
ISBN: 9783662498958


Use this identifier to cite or link to this document: https://hdl.handle.net/11386/4670529

Citations
  • PMC: not available
  • Scopus: 20
  • Web of Science (ISI): 16