Let B = {b1, ..., bN}, where for each i, bi = ai % N (i.e., the remainder when ai is divided by N). Assume, for contradiction, that no nonempty subset of B sums to 0 mod N. In particular, bi can't be 0 for any i, since a lone 0 would itself be such a subset.
Let n(i) be the number of times i appears in B, for each positive integer i < N. Since no bi is 0, n(1)+n(2)+...+n(N-1) = N. [Note: from here on, all arithmetic is done mod N.]
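To make the definitions concrete (these numbers are just an illustration, not from the original problem): take N = 5 and a's 7, 12, 2, 9, 4. Then B = {2, 2, 2, 4, 4}, so n(2) = 3, n(4) = 2, n(1) = n(3) = 0, and indeed 3 + 2 = 5 = N.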
For each i with 0 < i < N and n(i) > 0, the values i, 2i, ..., n(i)*i are all achievable as sums of subsets of B (take 1, 2, ..., n(i) of the copies of i). Let Si = {i, 2i, ..., n(i)*i}.
The elements of Si must be distinct and nonzero: if k*i = m*i (mod N) with k < m, then (m-k)*i is in Si and equals 0, so m-k copies of i would be a subset of B summing to 0 mod N; and if some element of Si were itself 0, that would directly be a zero-sum subset. Moreover, this forces n(N-s) = 0 for all s in Si: otherwise B would contain an element congruent to N-s, and adding it to a subset summing to s would give a subset summing to 0 mod N. So, for each i, the n(i) distinct values N-s (each nonzero, since s is nonzero) give n(i) distinct j's such that n(j) = 0.
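Continuing the illustration: S2 = {2, 4, 6} = {2, 4, 1} mod 5 and S4 = {4, 8} = {4, 3} mod 5. Of course no actual B of size N can satisfy the assumption (that's exactly what's being proved), and the example shows where it breaks: s = 1 in S2 would force n(5-1) = n(4) = 0, yet n(4) = 2, and sure enough 2+2+2+4 = 10 = 0 mod 5.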
Now count: each Si holds n(i) distinct values drawn from the only N-1 nonzero residues available, and the sizes add up to n(1)+...+n(N-1) = N. Since N > N-1, the sets Si can't be mutually disjoint. This means there's some value shared by two different sets, say k*i = m*j (mod N) with i ≠ j.
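The claim being proved is easy to sanity-check by brute force for small N; here is a minimal sketch in Python (my own illustration, not part of the argument above; has_zero_subset is a name I made up):

from itertools import combinations, combinations_with_replacement

def has_zero_subset(b, n):
    # True if some nonempty subset of b sums to 0 mod n.
    return any(sum(c) % n == 0
               for size in range(1, len(b) + 1)
               for c in combinations(b, size))

# The theorem says every multiset of N residues contains such a subset;
# check exhaustively for N = 1..7.
for n in range(1, 8):
    for b in combinations_with_replacement(range(n), n):
        assert has_zero_subset(b, n), (n, b)
print("checked all residue multisets for N = 1..7")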