In class, I showed one natural candidate definition of security for
semi-honest 2-party computation, but I warned that it turns out not
to be strong enough.  Then, I showed a fixed definition.  Several folks
wanted to know what was wrong with the first attempt at a definition,
and I wasn't able to explain in class.  This note is intended to explain
what's wrong with the first attempt.  This is a bit of a footnote; if you
just take the correct definition on faith, that's fine by me, but I wanted
to give some explanation for those who are interested in the details.

Let me remind you of the broken definition:
  Defn #1: The protocol (A_1, A_2) is secure if there exists a pair of
  simulators S_1,S_2 so that for all input pairs x_1,x_2, we have
  (1) correctness: (y_1, y_2) ~ f(x_1, x_2), where y_i denotes A_i's
  output in the protocol; and,
  (2) privacy: view_A_1(...) ~ S_1(x_1, y_1) and
               view_A_2(...) ~ S_2(x_2, y_2).
Ok, so that might look reasonable, but it has a flaw.

Consider the following functionality: f(-,-) = (b,-) where b <- {0,1}
is a random bit.  Here is a silly protocol intended to implement this
functionality:
  A_1 picks b <- {0,1} randomly
  A_1 -> A_2: b
  A_1 outputs b
What do you think of this protocol?  Obviously, it satisfies the
correctness condition.  It also happens to satisfy the privacy condition
shown above.  For instance, A_2's view can be simulated by a simulator
S_2 that takes no inputs, flips a random coin, and outputs it.  This
is a valid simulator for A_2's view, since A_2's view is a random bit.
So if we used the broken definition above, this silly protocol would be
considered "secure".
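To make this concrete, here is a small sketch (in Python) that runs the
silly protocol many times and compares the distribution of A_2's view
against the output of the coin-flipping simulator.  The names
silly_protocol and S2 are just illustrative, not from any library:

```python
import random
from collections import Counter

def silly_protocol():
    """A_1 picks a random bit b, sends it to A_2, and outputs it."""
    b = random.randrange(2)
    view_A2 = b           # A_2's entire view is the received bit
    y1 = b                # A_1's output
    return view_A2, y1

def S2():
    """Defn #1 simulator for A_2: ignores its inputs, flips a fresh coin."""
    return random.randrange(2)

N = 100_000
real = Counter(silly_protocol()[0] for _ in range(N))
sim = Counter(S2() for _ in range(N))
# Both distributions are (statistically) uniform over {0,1}, so the
# privacy condition of Defn #1 holds: the silly protocol looks "secure".
```

Taken in isolation, A_2's view really is just a uniform random bit, so
no statistical test on the view alone can tell the protocol apart from
the simulator.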

But this silly protocol is indeed truly silly.  Intuitively, we should
be unhappy with it, because A_2 learns the value of A_1's output, something
that doesn't happen in the ideal model.  Obviously, something is wrong
with this protocol, and that means that something is wrong with any
definition that considers this silly protocol "secure".  That's why
Defn #1 above is broken.

The fix is as follows:
  Defn #2: The protocol (A_1, A_2) is secure if there exists a pair of
  simulators S_1,S_2 so that for all input pairs x_1,x_2, we have
  (1) correctness: (y_1, y_2) ~ f(x_1, x_2); and,
  (2') privacy': (view_A_1(...), y_2) ~ (S_1(x_1, y_1), y_2) and
                 (view_A_2(...), y_1) ~ (S_2(x_2, y_2), y_1).
Note that our revised definition will not accept the silly protocol as
"secure".  A_2's view is the bit b; also, A_1's output y_1 is this same
bit b.  Therefore, if we look at the second part of the privacy' condition,
it would require that (b,b) ~ (S_2(-),b).  But there is no way to construct
a simulator S_2 that can generate outputs of the required distribution:
intuitively, S_2 doesn't know b, so it can't output the required value.
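The same kind of sketch shows why the joint distributions in Defn #2
pull the two apart.  Again the function names are purely illustrative;
the point is that any simulator's output is independent of y_1:

```python
import random
from collections import Counter

def real_joint():
    """Real execution: A_2's view and A_1's output are the SAME bit b."""
    b = random.randrange(2)
    return (b, b)                     # (view_A_2, y_1)

def sim_joint():
    """Any simulator S_2: its coin is independent of A_1's output y_1."""
    y1 = random.randrange(2)          # the bit A_1 actually outputs
    return (random.randrange(2), y1)  # S_2's output can't depend on y1

N = 100_000
real = Counter(real_joint() for _ in range(N))
sim = Counter(sim_joint() for _ in range(N))
# real puts all its mass on (0,0) and (1,1); sim spreads roughly 1/4 on
# each of the four pairs.  The joint distributions are far apart, so
# Defn #2 rejects the silly protocol.
```

No choice of S_2 can fix this: the mismatched pairs (0,1) and (1,0)
never occur in the real execution, but any simulator independent of y_1
must produce them with noticeable probability.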

Hopefully this example illustrates why the revised definition is needed.
What went wrong in the silly protocol is that A_2's view was correlated
to A_1's output.  Defn #2 captures the idea that A_2 learns nothing
about A_1's output y_1, because it requires that if there is any
correlation between A_2's view and y_1, then the simulator S_2 can
generate outputs that are correlated in exactly the same way with y_1.
Defn #1 didn't require that S_2's outputs be correlated with y_1 in the
same way that A_2's view is correlated with y_1, and that's why Defn #1
is unsatisfactory.

Actually, as a footnote, I should mention that Defn #1 and Defn #2 seem
to be equivalent if the functionality f is a deterministic function.
It's just when f is randomized that you can get into trouble with
Defn #1.  If you were willing to restrict yourself to deterministic
functionalities, Defn #1 would be a perfectly reasonable definition.
Of course, in practice randomized functionalities are very natural,
and that's why we need Defn #2.

If you followed so far, here is a simple exercise to test whether you
grok this at a deep level.  Question: To prevent A_2 from learning a
value that is correlated somehow to y_1, we added y_1 to the distribution
under consideration.  But we would also like to prevent A_2 from learning
anything that is correlated somehow to x_1.  Do we need to add x_1 to
the distribution under consideration, too?  Do we need a Defn #3?

Answer (ROT13 encrypted):
Ab.  Gur pbaqvgvbaf ner erdhverq gb ubyq sbe nyy k_1,k_2, urapr
k_1 vf svkrq.  Vg qbrfa'g znxr frafr gb gnyx nobhg orvat pbeeryngrq
gb n svkrq inyhr; vs gurer vf nal yrnxntr bs vasbezngvba nobhg k_1
cerfrag va N_2'f ivrj, gung yrnxntr vf nyernql pncgherq ol gur cebonovyvgl
qvfgevohgvba bs ivrj_N_2(...).  Fb Qrsa #2 vf whfg svar.

Also, for reference, I will write out the generalization of the above
definition to security for k-party protocols, in the semi-honest model:
  Defn: The protocol (A_1, .., A_k) is secure if
  (1) correctness: for all k-tuples x_1,..,x_k, we have
  (y_1, .., y_k) ~ f(x_1, .., x_k); and,
  (2') privacy': for each non-trivial subset I of {1,..,k}, there
  exists a simulator S_I, so that for all k-tuples x_1,..,x_k,
  we have   (view_A_I(...), y_J) ~ (S_I(x_I, y_I), y_J).
  Here J = {1,..,k} \ I denotes the complement of I; x_I is the
  concatenation of values x_i for i in I; view_A_I(...) is the
  concatenation of the views of the parties A_i for i in I; etc.
This is a straightforward extension of Defn #2 above.