In class, I presented a variant of Pedersen's verifiable secret sharing
(VSS) scheme with the following features: (1) The dealer signs each share.
(2) If Alice receives a share that doesn't satisfy the "check equation",
she broadcasts an accusation against the dealer, and includes the signed
share.  (3) If someone tries to maliciously broadcast a false accusation
against the dealer, an honest dealer should respond by publishing the
correct share.  Then, third-party observers can decide who is cheating.
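
For concreteness, here is a minimal sketch of the "check equation" from
step (2), assuming the usual Pedersen-commitment form of the scheme: the
dealer broadcasts commitments E[j] = g^(a_j) * h^(b_j) to the coefficients
of his two degree-(t-1) polynomials and sends party i the share pair
(s_i, t_i).  The names below (p, g, h, E) are illustrative, not fixed by
my notes:

    # Check that share (s_i, t_i) for party index i is consistent with the
    # broadcast commitments E[0..t-1], working modulo the prime p.
    def share_is_valid(i, s_i, t_i, E, g, h, p):
        lhs = (pow(g, s_i, p) * pow(h, t_i, p)) % p
        rhs = 1
        for j, E_j in enumerate(E):
            rhs = (rhs * pow(E_j, i ** j, p)) % p   # E_j raised to the power i^j
        return lhs == rhs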

Someone asked an entirely reasonable question: Given that shares are
already signed, why bother with step (3)?  It looks like step (3)
is unnecessary.

However, step (3) is indeed necessary.  It turns out that it is (1)
that is unnecessary!

The idea of signing shares has a serious flaw.  A malicious dealer can
simply decline to send anything to Alice.  Then Alice has not received
her share, so she is being cheated -- but she has no way to convince
anyone else of this fact.  Signing doesn't really do much good.

Perhaps I can be forgiven for this confusion.  I went back to the
literature, and found that Pedersen's original Eurocrypt '91 paper
suggested using (1)+(2): signatures and accusations, but no revealing of
accusers' shares.  Later, Gennaro, Jarecki, Krawczyk, and Rabin noticed
the defect in Pedersen's scheme (see their Eurocrypt '99 paper), and they
presented a corrected protocol that replaces the signatures with a more
complex version of (3).  That's why the signatures weren't present in my
notes -- signatures don't adequately protect against malicious dealers.
I should have trusted my notes, not my memory.  I apologize for the error.

Ok, so say we omit the signatures.  Why do we bother with step (3)?
The reason: we're trying to build a verifiable secret sharing scheme
that will provide a consistent sharing, even if the dealer is malicious.
At the end of the sharing protocol, either all honest parties should
abort, or else all honest parties should succeed and the honest parties
(if there are at least t of them) should have a consistent sharing of
some secret x.  Of course, x is only guaranteed to remain secret, or
to be uniformly distributed, if the dealer is honest; if the dealer is
malicious, we still require a consistent sharing of some x, but x might
be arbitrarily chosen by the attacker and might not be secret at all.

I should perhaps elaborate on how Bob decides whether to abort or to
accept the protocol as having succeeded: (4) if Bob sees at least t
accusations, if the dealer fails to respond to an accusation, or if the
dealer publishes a share that doesn't satisfy the check equation, then
Bob aborts; otherwise, Bob accepts this as a successful sharing.
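
In code form, rule (4) might look like the following minimal sketch,
reusing the hypothetical share_is_valid check from above; `accusations`
(the set of parties who broadcast an accusation) and `published` (the
shares the dealer published in response) are illustrative names for what
Bob has seen on the broadcast channel:

    def bob_accepts(accusations, published, E, g, h, p, t):
        if len(accusations) >= t:
            return False                    # t or more accusers: abort
        for i in accusations:
            if i not in published:
                return False                # dealer stayed silent: abort
            s_i, t_i = published[i]
            if not share_is_valid(i, s_i, t_i, E, g, h, p):
                return False                # published share fails the check: abort
        return True                         # otherwise, accept the sharing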

Ok, so here's the reason why the dealer must publish the shares of
accusers.  Suppose we have exactly t honest parties; Alice is one of
the honest parties.  Also, we have a malicious dealer who gives bogus
values (or no values at all) to Alice, but otherwise behaves correctly.
Alice will accuse the dealer, but she'll be the only one.  If we don't
require the dealer to publish Alice's share, then all the other honest
parties will think the sharing protocol has succeeded, but when it comes
time to try to recover the secret, they won't be able to: they only
have t-1 shares of the secret (Alice has nothing that is of any help).
So if we leave out step (3), the problem is that honest parties might
accept even though they haven't received a consistent sharing that will
allow them to recover some secret.

This probably makes it obvious how step (3) fixes the problem.  In the
above example, the dealer must broadcast Alice's correct share (otherwise
all the honest parties will abort), and then the honest parties will be
able to recover the valid secret (since they have t-1 shares from the
other honest parties, plus Alice's published share).
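
To make the counting argument concrete, here is a minimal sketch of
standard Shamir recovery (Lagrange interpolation at x = 0).  It needs any
t share points, which is exactly what the honest parties have once
Alice's share is published, and exactly what they lack without step (3).
Nothing here is specific to the lecture variant; q is an illustrative
prime group order:

    # shares: a list of t pairs (i, s_i) with distinct, nonzero indices i.
    def recover_secret(shares, q):
        secret = 0
        for i, s_i in shares:
            lam = 1
            for j, _ in shares:
                if j != i:
                    lam = (lam * j * pow(j - i, -1, q)) % q   # Lagrange coefficient at 0
            secret = (secret + s_i * lam) % q
        return secret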

Summary so far: If we use (2), (3), and (4) (and omit the signatures),
we get a protocol that is perfectly secure for the purpose I talked
about in class, namely, verifiable secret sharing.  Also, we can use
it for distributed VSS: party i generates a secret x[i] and plays the
role of dealer to share x[i] among all the parties; then, we kick out
all the cheating dealers (those for whom the sharing protocol aborted)
and take a sum of the shares from the remaining parties; and, if at
least one of the parties behaves honestly, this will be a secure VSS.
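
In this distributed setting, "take a sum of the shares" just means each
party adds up, modulo the group order, the sub-shares it received from
the surviving dealers.  A minimal sketch, with illustrative names:

    # qual: the set of dealers whose sharing was accepted;
    # sub_shares[i]: my share of dealer i's secret x[i];
    # q: the (illustrative) prime group order.
    def my_share_of_joint_secret(qual, sub_shares, q):
        return sum(sub_shares[i] for i in qual) % q
    # The jointly shared secret is implicitly x = sum(x[i] for i in qual) mod q.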


It turns out that things get a little trickier when we take this scheme
and use it as a subroutine in a larger protocol.  One example is
distributed key generation, where the goal is for n parties to compute
a private key x and a public key y = g^x, where x is shared in a Shamir
t-out-of-n way across the n parties and kept secret from all coalitions
of fewer than t parties.  It's not too hard to extend Pedersen's scheme
to get a plausible protocol for this; for instance, we can have party
i publish y[i] = g^x[i], and then compute y as the product of y[i]'s.
However, if we're not extremely careful, such a scheme can easily end up
having subtle security bugs, where an adversary can bias the final public
key y by manipulating the set of disqualified players in clever ways.
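
For instance, the naive key-derivation step just described might be
sketched as follows (illustrative names; this is exactly the step whose
treatment of disqualified players turns out to be delicate):

    # qual: the dealers that survived disqualification;
    # y[i] = g^x[i] mod p, as broadcast by party i.
    def combine_public_key(qual, y, p):
        pk = 1
        for i in qual:
            pk = (pk * y[i]) % p
        return pk        # = g^(sum of x[i] over qual) mod p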

Consequently, the way we choose who to disqualify and what we do
when accusations are broadcast turns out to be delicate and important.
Many published suggestions have turned out to be broken.  I won't go into
further details, but you can read the following (very well-written) paper:
  Gennaro, Jarecki, Krawczyk, Rabin,
  "Secure Distributed Key Generation for Discrete-Log Based Cryptosystems",
  Eurocrypt '99.  http://www.research.ibm.com/security/dkg.ps
The bottom line: (3)+(4) turns
out to be the "right" thing to do when using Pedersen's VSS scheme as
a subprotocol within the distributed key generation protocol.

In short, my notes were right after all.  And now you know why I
presented it that way (with (2)+(3)): not only is what I showed you
sufficient for obtaining a secure VSS scheme, but it's also the "right"
thing to do if you want to use Pedersen's VSS scheme for other applications.
That's my story, and I'm sticking to it.