Paper-voted (and why I did so)

March 5, 2011

(This entry is written mostly for my non-cryptographer friends who wonder why I am always criticizing the Estonian e-voting scheme, but it may also be interesting for cryptographers all around the globe. All opinions are my own.)

Many people probably already know that Estonia has had Internet voting since 2005. Over the years, the number of voters has increased. In the parliamentary elections this week, more than 10% of the whole population (which means an even larger percentage of the eligible voters, and probably quite a large percentage of the people who actually went to vote) used Internet voting.

I paper voted.

Why? First of all, as a cryptographer, it is my job not to support insecure systems. Given the current situation in Estonia, paper-voting is the only thing that I can do, but I hope things will improve rapidly.

Here’s a short description of the Estonian Internet voting protocol:

One vote attempt:
You as a voter use either an ActiveX control or (if you use the "wrong" browser/…) a downloadable application to log in to a secure webpage. To authenticate yourself you use your ID card (these have been obligatory in Estonia for quite a few years). After logging in, you use the GUI to select a candidate. Your computer then encrypts your vote with RSA under the public key of the tallier. The result is also signed with your private key (which is likewise stored on your ID card). Your computer now sends the signed & encrypted vote to the vote collector server (VCS).
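To make the data flow concrete, here is a minimal sketch of that encrypt-then-sign step in Python, using the `cryptography` library. It assumes RSA-OAEP for encryption and RSA-PSS for the signature; the real client talks to the ID card's smart-card keys and uses its own message formats, so every name and parameter below is illustrative only.

```python
# Minimal sketch of the voter-side "encrypt under the tallier's key, then sign" step.
# Assumptions (not the real Estonian implementation): RSA-OAEP + RSA-PSS, in-memory keys.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

tallier_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
tallier_public = tallier_private.public_key()      # the published election key
voter_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # in reality lives on the ID card

vote = b"candidate-123"

# Voter's computer: encrypt the vote under the tallier's public key ...
ciphertext = tallier_public.encrypt(
    vote,
    padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None),
)

# ... and sign the ciphertext with the voter's private key.
signature = voter_private.sign(
    ciphertext,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# The pair (ciphertext, signature) is what gets sent to the vote collector server.
```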

Revoting:
You can always revote by using exactly the same procedure. The VCS only stores the latest vote of every voter. (This step was introduced to combat vote coercion/family voting — note that if everything else fails, you can go to the polling station and revoke your electronic vote by paper-voting.)

Vote counting:
After the election period ends, the VCS takes the latest encrypted vote of every voter and sends them to a second server that is behind a firewall. The second server removes all the signatures and then ‘burns’ the unsigned encrypted votes to a CD. A number of trusted people (including observers) then take this CD to a separate, well-guarded room. That room contains the third server, the tallier, which is *not* connected to the Internet at all. The CD is fed into the tallier, which reads all the encrypted votes. Since it knows the decryption key, it can decrypt them all, and it then outputs the number of votes given for every candidate.
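Continuing the same toy assumptions as in the sketch above, the tallier's job is conceptually just "decrypt each ciphertext with the election private key and count":

```python
# Sketch of the tallier's side, under the same illustrative assumptions as above:
# the signatures have already been stripped, so the input is just ciphertexts.
from collections import Counter

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

def tally(ciphertexts, tallier_private):
    """Decrypt every vote with the election private key and count per candidate."""
    oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    votes = [tallier_private.decrypt(ct, oaep) for ct in ciphertexts]
    return Counter(votes)   # e.g. Counter({b"candidate-123": 2, ...})
```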

This whole process also contains a number of organizational steps (like transfer of the CD to the tallier, or general auditing), but cryptographically that’s it.

So what’s wrong with it? For a cryptographer this is a rhetorical question, but let me reiterate some points. Basically, an e-voting system can be attacked by attacking the voters’ computers, the Internet connection, or the voting servers. Signing and encryption mostly take care of a fraudulent network (they obviously do not protect you against DDoS attacks and the like).

Voter computers are an obvious problem: most people are computer illiterate and are not able to check whether their computers are infected. Even if they have the newest antivirus (which we can’t be sure of), that antivirus might not detect a piece of new malware written specifically for *that* election and unleashed just before it. (Note: in Estonia e-voting lasts for three days.) Such malware could do a lot of damage: hijack the connection between you and the ID card (basically letting the ID card sign wrong votes), or the connection between the GUI and what actually happens inside the computer, etc. I would *not* be surprised if such a piece of software could be written by a high-school kid.

Vote servers are another problem: they can be attacked by a hostile (but invisible) takeover, or by an insider (the software provider, the hardware provider, the guy with a gun who is supposed to protect the servers, the cleaning lady…). To be completely certain nothing like that happens, one should either use 100% trusted providers etc., etc. (which is somewhat unlikely if an interested foreign power invests a few million euros to bribe everybody), or one should use cryptography. But first, why does it matter? Can’t we trust the election office? Rhetorically I could ask: do we trust politicians in general? Do we? Less rhetorically, there are simply too many potential threats here.

Given all this, when the Internet voting project started in Norway, government officials clearly stated that they want strong cryptography, and provable security in general. In particular, they do not just want to avoid attacks. They want to be able to prove that no attacks happened. Moreover, they want to be able to prove that the e-voting system is so secure that no attacks are possible at all (or at least that the set of possible attacks is strongly limited). If the e-voting system satisfies that property, they can in particular prove that they did *not* cheat. That means at least some kittens in Norway will get a good night’s sleep. And people’s trust in governance will increase.

We can imagine all kinds of attacks (and motivations behind those attacks) against Internet voting. A party would like to gain a few more votes. A script kiddie is angry at a politician whose daughter doesn’t like him. A foreign power wants its favorite party to gain a few more votes. A foreign power attacks the e-voting process just so that it will be clear that somebody attacked and thus the results can’t be trusted. Etc., etc. A script kiddie might spend a few days writing a virus that works on user computers. A foreign power might spend $10M to develop Stuxnet+.

How can we protect Internet voting then? Fortunately, there *are* protocols for that, very well known at least in the cryptographic community. It is known how to get security when only a minority of the voting servers is malicious; under some conditions it even suffices to have a single uncorrupted voting server. Deploying such protocols requires additional computing power (and time to implement the cryptography), but it is invisible to the voters.
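The post does not name a specific protocol, but one standard building block behind "no single server can cheat" is threshold secret sharing of the election key. Below is a toy Shamir sketch, my own illustration rather than the Estonian or Norwegian design: the key is split among n tally servers so that any t of them can reconstruct it, while t-1 colluding servers learn nothing.

```python
# Toy Shamir secret sharing over a prime field (illustration only).
import random

P = 2**127 - 1  # a Mersenne prime; fine for a toy example

def share(secret, n, t):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

key = 123456789                      # stand-in for the election decryption key
shares = share(key, n=5, t=3)        # distributed to 5 tally servers
assert reconstruct(shares[:3]) == key    # any 3 servers can reconstruct
assert reconstruct(shares[1:4]) == key   # ... any 3, not a fixed set
```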

Protecting against corrupted voter computers is more difficult. There are quite a few ways to do it, but most of them require the voters to perform some additional operations. As an example, Norway will use a scheme where every voter receives (by post) a code sheet that lists all candidates together with corresponding check codes. (The codes are unique for every voter.) Voting proceeds as usual, by using your computer and your favorite GUI, but afterwards you receive a single code by SMS. You can then verify that this is the same code printed on your code sheet next to your chosen candidate. If it is not, you can revote electronically (or go and paper-vote).
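Here is a toy sketch of the return-code idea, just to show the voter-side check. The real Norwegian protocol derives the codes cryptographically from the encrypted ballot and involves several independent servers, so everything below (the HMAC construction, the 4-character codes, the names) is an assumption for illustration.

```python
# Toy return-code check (illustration only, not the Norwegian protocol).
import hmac, hashlib, secrets

candidates = ["Alice", "Bob", "Carol"]

def make_codesheet(voter_secret):
    """Per-voter check codes, printed and mailed to the voter before the election."""
    return {c: hmac.new(voter_secret, c.encode(), hashlib.sha256).hexdigest()[:4]
            for c in candidates}

voter_secret = secrets.token_bytes(16)     # known to the code generator, not to the voter's PC
codesheet = make_codesheet(voter_secret)   # arrives by post

chosen = "Bob"                             # what the voter selects in the GUI
sms_code = hmac.new(voter_secret, chosen.encode(), hashlib.sha256).hexdigest()[:4]

# Voter's check: does the SMS code match the code printed next to "Bob" on the sheet?
assert sms_code == codesheet[chosen]
```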

There are a number of good things about the Norwegian system. First of all, receiving the code sheet and checking your codes is not obligatory — you only have to do it if you are paranoid, security-conscious, or just geeky. Thus Internet voting still rules in the sense of convenience. On the other hand, interested voters can verify that their votes were counted by following a relatively small number of steps.

Another thing that I like about the Norwegian solution is that, in their own words, the underlying math is simple enough to be understood by high-school students — and they plan to start teaching the protocol at high schools. (See the slides here.) Which is nice, since cryptography is not everything — for complete trust there have to be enough people who understand the system. And it’s always nice to teach people new math. :-)

The Norwegian system is not ideal, but it can be improved upon. There are also many other, somewhat similar (or completely different) techniques to protect Internet voting schemes against corrupted user computers. Unfortunately, delving into them would take quite a lot of time, and this margin here is too short.

Stable Crypto Conference Marriages

August 10, 2009

In my previous post, I argued that there are too many cryptographic conferences. One of the accompanying well-known problems is that if your paper gets rejected from (say) Eurocrypt, then you either have to wait (and take the risk again) 3 months for Crypto, or you can go for a slightly weaker conference. However, most of those “slightly weaker” conferences (TCC, FSE, PKC, FC, and next year even CT-RSA) are organized a few months before Eurocrypt. This actually means that you need to wait another (say) nine months. Call it the Spring Bundle.

Another well-known bundle is in December, with Asiacrypt organized in between 6+ smaller conferences (CANS, ITICS, ICICS, ICISC, Indocrypt, Inscrypt, and maybe some more imaginatively named conferences). The good thing about this Winter Bundle is that you can submit papers rejected from Asiacrypt directly to ICICS/ICISC/Indocrypt/Inscrypt. The bad thing is that those conferences are not slightly but much weaker than Asiacrypt.

For quite some time I have had the idea that the conferences belonging to one bundle should have a joint PC and a joint reviewing process, such that the best papers would be accepted to the top conference of the bundle (respectively Eurocrypt and Asiacrypt), while the other papers would somehow be distributed among the weaker conferences. (Taking into account that the papers should also be in scope: e.g., block-cipher papers should not go to TCC.) However, I have never been sure how to distribute papers among the smaller conferences, since at least the six small December conferences are all weak, some more than others.

I think Doron Zeilberger has a rather neat idea, albeit in the context of mathematical journals. Translated into our language: when submitting a paper, the authors choose an ordered list of preferred conferences. The conference PCs agree on either total rejection of the paper, or on an ordered list of how suitable the paper is for the different conferences. After that, one just runs a stable marriage algorithm on the pairs (papers, conferences). If your paper gets matched to conference X, you can’t complain, since you declared X to be acceptable (and your more preferred conferences rejected you). Analogously, the conference PCs can’t complain: if they don’t like a paper at all (or it’s out of scope), they can refuse to be matched with it.
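For concreteness, here is a small deferred-acceptance (Gale-Shapley style) sketch of this paper/conference matching, extended with per-conference capacities. All names, preferences, and capacities below are invented for the example.

```python
# Toy deferred-acceptance matching of papers to conferences with capacities.
def match(paper_prefs, conf_ranking, capacity):
    """paper_prefs: paper -> ordered list of acceptable conferences.
       conf_ranking: conf -> dict paper -> rank (lower = better);
                     papers a conference rejects outright are simply absent.
       capacity: conf -> number of slots."""
    next_choice = {p: 0 for p in paper_prefs}
    held = {c: [] for c in conf_ranking}            # tentative accepts per conference
    free = [p for p in paper_prefs if paper_prefs[p]]

    while free:
        paper = free.pop()
        if next_choice[paper] >= len(paper_prefs[paper]):
            continue                                 # list exhausted: stays unmatched
        conf = paper_prefs[paper][next_choice[paper]]
        next_choice[paper] += 1
        if paper not in conf_ranking[conf]:          # conference finds it unacceptable
            free.append(paper)
            continue
        held[conf].append(paper)
        held[conf].sort(key=conf_ranking[conf].get)  # best papers first
        if len(held[conf]) > capacity[conf]:
            free.append(held[conf].pop())            # worst tentative paper is bumped
    return held

# Example: two papers both prefer Eurocrypt, which has only one slot.
prefs = {"paperA": ["Eurocrypt", "PKC"], "paperB": ["Eurocrypt"]}
ranks = {"Eurocrypt": {"paperA": 2, "paperB": 1}, "PKC": {"paperA": 1}}
print(match(prefs, ranks, {"Eurocrypt": 1, "PKC": 1}))
# -> {'Eurocrypt': ['paperB'], 'PKC': ['paperA']}
```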

This would take care of several possible conflicts. Maybe you as an author cannot travel to (say) TCC this time – then you simply omit TCC from your list. If you think FC and CT-RSA are too weak for you and you want to gamble, you can put only Eurocrypt and the suitable IACR workshop on your list. If you are a real gambler, you will list only Eurocrypt.

Obviously, if the authors are not happy with their allocation, they can withdraw their paper, but this already happens sometimes now. Moreover, if you know you wouldn’t like conference X, why list it in the first place?

Anyhow. Such a system would increase the publication rate quite a bit. However, I still think it is necessary to increase the number of accepted papers at the top conferences, since both bundles are simply too big.

There’s another point in Doron’s opinion piece: that there shouldn’t be any journals at all; everybody should just publish on arXiv and let the readers be the referees (e.g., just count the impact, the number of citations). Although I strongly agree in the case of journals, I am not sure whether this would be implementable in our conference system, where getting there (going to the conference and meeting people) is also fun. Still, I think counting citations is a much more relevant way of assessing a researcher’s impact than counting his or her tier-1 conference papers. (See my previous post for more.)

For another take on this issue, see also PhD Comics. Doron’s earlier opinion piece was even better.

EDIT: Two more comments. First, in the current system your paper might be the best of the rejected papers (you just don’t know it). You will then submit it to, say, ICICS. If ICICS happens to have no competent reviewers in your subarea, the paper will get rejected again, which can be especially depressing. With this kind of joint reviewing you may get rid of this problem, too.

Second, such a joint system would also increase the average quality of the top conferences, even if the number of accepted papers were increased. Risk-averse authors (or authors whose paper has already been rejected from the last big conference) might want to decrease the risk by submitting the paper to a smaller conference. But maybe it would have been accepted to the top conference this time, too – nobody knows. With the joint system, Eurocrypt would get the bestest papers of the spring (assuming the authors prefer Eurocrypt to, say, FSE or TCC, of course). This would considerably reduce the gambling element: either risk a lot by submitting to a top conference (which may reject the paper), or submit to a non-top conference (which may accept it, but then it doesn’t look so good on the CV, or which may reject it for lack of good reviewers).

Time for Crypto to Grow Up

July 26, 2009

This time in Fortnow’s blog – a long discussion about whether one should reform or abolish conferences as the primary publication venue in computer science. 40+ comments at the moment.

His entry and the comments are mostly concerned with the situation in complexity theory/TCS. I think one ought to ask the same question about crypto, especially since our field has an overabundance of conferences. Another problem is that the top conferences (Eurocrypt, Crypto, maybe Asiacrypt) only accept 30-34 papers yearly, while the number of quality papers is much larger. Moreover, the last 10 or so accepted papers are always somewhat random. So when your paper is good but gets rejected, you have to wait 3 more months for the next top conference, and then 3 more.

On the other hand, you have 20 small conferences that are almost all equally bad. (Not FSE, TCC, PKC, CT-RSA, FC, CHES, but the rest of them.) Most of them get a tiny share of good papers, since some top researchers do not want to wait for 3 months, or just want to travel, or have some other non-science-related reason. In general, if you are an established researcher it does not pay off to sit in the conference room at those conferences, since you do not learn anything new; you just do some sightseeing. (If you are lucky, somebody you know might also participate!) And being on the PC of such a conference is usually just a pain.

So I think there is some room for reform. My own recommendation would be to increase the number of papers accepted to all IACR conferences and workshops by 50%. (As said, the last 10 are always random* – and the first 10 non-accepted papers are sometimes better.) This would considerably increase the rate at which crypto results get published. It would also mean that fewer top papers end up at third-tier conferences, which would also be good: hopefully the number of third-tier conferences would then decrease.

Moreover, to compare with the past: Eurocrypt 1990 got 85 submissions and accepted 42 papers, while Eurocrypt 2004 got 206 submissions and accepted 36. For Crypto, the corresponding numbers are 43 accepted (of 93 submitted) in 1990 and 33 (of 211) in 2004. Thus, while the number of submissions has increased about 2.5 times, the number of accepted papers has actually decreased(!), and the acceptance rate has dropped from close to 50% to well under 20%. In addition, in 1990 the number of active research groups in cryptography was maybe 10 times smaller than now. This means that the competition for Crypto/Eurocrypt slots has increased dramatically.

* I’ve actually done some statistics. I might write a separate post on that, but of the Crypto 1998 papers, around 10 have been cited fewer than 30 times, while the top paper, by Cramer and Shoup, has been cited 600+ times. Other conferences follow a similar pattern, though the top paper does not always have that many citations. OTOH, there are many papers in third-tier conferences that have more citations than the mean Crypto/Eurocrypt paper of the same year.

