TLS Client Authentication

I decided to do a prototype for an electronic identification scheme, so I investigated how to do TLS client authentication with a Java/Spring server side (you can read on even if you’re not a Java developer – most of the post is Java-agnostic).

Why TLS client authentication? Because that’s the most standard way to authenticate a user who owns a certificate (on a smartcard, for example). Of course, smartcard certificates are not the only application – organizations may issue internal certificates that users store on their machines. The point is to have an authentication mechanism that is more secure than a simple username/password pair. Usability is a problem, especially with smartcards, but that’s beyond the scope of this post.

So, with TLS clientAuth, in addition to the server identity being verified by the client (via the server certificate), the client identity is also verified by the server. This means the client has a certificate issued by an authority that the server explicitly trusts. Roughly speaking, the client has to digitally sign a challenge in order to prove that it owns the private key corresponding to the certificate it presents. (This process is also known as “mutual authentication”.)
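To show where that switch lives in Java terms, here’s a minimal sketch at the plain JSSE level – not how a servlet container is configured, but the same underlying mechanism; assume the default SSLContext has already been set up with the server key material and the client CA trust store:

import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLServerSocket;

public class MutualTlsSketch {
    public static void main(String[] args) throws Exception {
        // assumes the default SSLContext has been initialized (e.g. via the
        // javax.net.ssl.keyStore and javax.net.ssl.trustStore system properties)
        // with the server key and the CA that signs the client certificates
        SSLContext sslContext = SSLContext.getDefault();
        SSLServerSocket serverSocket =
                (SSLServerSocket) sslContext.getServerSocketFactory().createServerSocket(8443);

        // require the client to present a certificate and prove possession of its
        // private key during the handshake; setWantClientAuth(true) makes it optional
        serverSocket.setNeedClientAuth(true);
    }
}

setNeedClientAuth(true) corresponds to clientAuth="true" in Tomcat terms, and setWantClientAuth(true) to "want".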

There are two ways to approach that. The first, and most intuitive, is to check how to configure Tomcat (or whatever your servlet container is). The Spring Security X.509 authentication page gives the Tomcat configuration at the bottom. The “keystore” is the store that holds the server certificate (and its private key), and the “trustStore” is the store that holds the root certificate of the authority used to sign the client certificates.
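For illustration, here’s a rough, hypothetical sketch of the same settings expressed through embedded Tomcat’s Java API (the attribute names mirror the JSSE Connector attributes from the Tomcat documentation; paths, ports and passwords are placeholders):

import org.apache.catalina.LifecycleException;
import org.apache.catalina.connector.Connector;
import org.apache.catalina.startup.Tomcat;

public class ClientAuthTomcatSketch {
    public static void main(String[] args) throws LifecycleException {
        Tomcat tomcat = new Tomcat();

        Connector connector = new Connector();
        connector.setPort(8443);
        connector.setScheme("https");
        connector.setSecure(true);
        connector.setProperty("SSLEnabled", "true");
        // the keystore holds the server certificate and its private key
        connector.setProperty("keystoreFile", "/path/to/server-keystore.jks");
        connector.setProperty("keystorePass", "changeit");
        // the truststore holds the CA certificate used to sign the client certificates
        connector.setProperty("truststoreFile", "/path/to/truststore.jks");
        connector.setProperty("truststorePass", "changeit");
        // "true" = client certificate required, "want" = optional
        connector.setProperty("clientAuth", "true");

        tomcat.getService().addConnector(connector);
        tomcat.start();
        tomcat.getServer().await();
    }
}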

However, that configuration is applicable only if you have a single servlet container instance exposed to your users. In production, though, you’ll most likely have a number of instances/nodes running your application behind a load balancer, and TLS is usually terminated at the load balancer, which then forwards the decrypted requests to the servlet containers over plain HTTP. In that case your options are either not to terminate TLS at the load balancer, which is most likely not a good idea, or to somehow forward the client certificate from the load balancer to your nodes.

I’ll use nginx as an example. Generating the keypairs, certificates, certificate signing requests, signed certificates and keystores is worth a separate post. I’ve outlined what’s needed here. You need openssl and keytool/Portecle and a bunch of commands. For production, of course, it’s even more complicated, because for the server certificate you’d need to send a CSR to a CA. Having done that, in your nginx configuration, you should have something like:

server {
   listen 443 ssl;
   server_name yourdomain.com;

   ssl_certificate server.cer;
   # that's the private key
   ssl_certificate_key server.key;
   # holds the certificate of the CA that signed the client certificates you trust (= trustStore in Tomcat)
   ssl_client_certificate ca.pem;
   # whether client authentication is required ("on") or optional ("optional"); clientAuth="true" vs "want" in Tomcat
   ssl_verify_client on;

   location / {
      # proxy_pass configuration here, including X-Forwarded-For headers
      proxy_set_header X-Client-Certificate $ssl_client_cert;
   }
}

That way the client certificate will be forwarded as a header (as advised here). This looks like a hack, and it probably is, because the client certificate is not exactly a small string. But that’s the only way I can think of.

There is one small issue with that, however (and it’s the same for the Tomcat solution as well) – if you enable client authentication for your entire domain, you can’t have fully unprotected pages. Even if authentication is optional (“want”), the browser dialog from which the user selects a certificate will still be triggered, no matter which page the user opens first. The good thing is that a user without a certificate will still be able to browse the pages that are not explicitly protected in code. But for a user who has a certificate, opening the home page will pop up the dialog, even though they might not want to authenticate at all. There is a way to handle that, though.

I’ve actually seen per-page client authentication done with Perl, but I’m not sure it can be done with a Java setup. Well, it can, if you don’t use a servlet container and instead handle the TLS handshakes yourself, but that’s not desirable.

Normally, you’d need the browser authentication dialog only for a single URL: “/login”, or, as in my case with my fork of the OpenID Connect implementation MitreID, the “/authenticate” endpoint (the user gets redirected to the identity provider’s /authenticate URL, where normally they would have to enter a username/password, but in this case they just have to select the proper certificate). What can be done is to serve that particular endpoint from a subdomain. That would mean having another “server” section in the nginx configuration for the subdomain, with ssl_verify_client on, while the regular domain remains without any client certificate verification, as sketched below. That way, only requests to the subdomain will trigger client authentication.
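A hypothetical sketch of that setup, reusing the configuration from above (the subdomain name is made up, and the server certificate must of course also cover it, e.g. via a wildcard or an additional SAN entry):

# client-certificate-protected subdomain, used only for the authentication endpoint
server {
   listen 443 ssl;
   server_name id.yourdomain.com;

   ssl_certificate server.cer;
   ssl_certificate_key server.key;
   ssl_client_certificate ca.pem;
   ssl_verify_client on;

   location /authenticate {
      # proxy_pass configuration here, as in the main server section
      proxy_set_header X-Client-Certificate $ssl_client_cert;
   }
}

# the regular domain stays without any client certificate verification
server {
   listen 443 ssl;
   server_name yourdomain.com;

   ssl_certificate server.cer;
   ssl_certificate_key server.key;

   location / {
      # proxy_pass configuration here
   }
}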

Now, how do we do the actual authentication? The OpenID Connect implementation mentioned above uses Spring Security, but it can be anything. My implementation supports both cases mentioned above (Tomcat alone, and nginx in front of Tomcat). That makes the application load-balancer-aware, but you can safely pick one approach and drop the other half of the code.

For the single-Tomcat approach, the X509Certificate is obtained simply with these lines:

    X509Certificate[] certs = (X509Certificate[]) request.getAttribute("javax.servlet.request.X509Certificate");
    if (certs != null && certs.length > 0) { // make sure the container passed a certificate chain
        X509Certificate userCertificate = certs[0]; // the first entry is the client's own (leaf) certificate
    }

For the nginx-in-front approach it’s a bit more complicated. We have to get the header, restore it to proper PEM form and then parse it. (Note that I’m not using the Spring Security X.509 filter, because it supports only the single-Tomcat approach.)

String certificateHeader = request.getHeader("X-Client-Certificate");
if (certificateHeader == null) {
    response.sendError(HttpServletResponse.SC_UNAUTHORIZED);
    return;
}
// the load balancer (e.g. nginx) forwards the certificate in a header, with the original
// new lines replaced by runs of whitespace (two or more spaces, or sometimes tabs), so restore them
String certificateContent = certificateHeader
        .replaceAll("\\s{2,}", System.lineSeparator())
        .replaceAll("\\t+", System.lineSeparator());
CertificateFactory certificateFactory = CertificateFactory.getInstance("X.509");
X509Certificate userCertificate = (X509Certificate) certificateFactory.generateCertificate(
        new ByteArrayInputStream(certificateContent.getBytes("ISO-8859-1")));

The “hackiness” is now obvious: nginx sends the certificate PEM-encoded, but on a single line. Fortunately, the original line breaks are replaced by some sort of whitespace (in one case it was spaces, in another it was tabs, on a Windows machine), so we can restore the original PEM format without even needing to know that a PEM line is 64 characters long. Other versions of nginx, or other servers, may not insert whitespace at all, in which case the splitting into 64-character lines has to be done manually (see the sketch below). Then we use an X.509 certificate factory to create a certificate object.
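If no usable whitespace is left at all, that manual re-wrapping could look something like the following sketch (a hypothetical helper, not part of the implementation above):

import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.security.cert.CertificateException;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

public class PemRewrapper {

    private static final String BEGIN = "-----BEGIN CERTIFICATE-----";
    private static final String END = "-----END CERTIFICATE-----";

    /**
     * Rebuilds a certificate from a PEM whose line breaks were lost,
     * by extracting the base64 body and re-wrapping it at 64 characters.
     */
    public static X509Certificate parseFlattenedPem(String flattened) throws CertificateException {
        // keep only the base64 body between the BEGIN/END markers
        String body = flattened
                .replace(BEGIN, "")
                .replace(END, "")
                .replaceAll("\\s", "");

        StringBuilder pem = new StringBuilder(BEGIN).append('\n');
        for (int i = 0; i < body.length(); i += 64) {
            pem.append(body, i, Math.min(i + 64, body.length())).append('\n');
        }
        pem.append(END).append('\n');

        CertificateFactory factory = CertificateFactory.getInstance("X.509");
        return (X509Certificate) factory.generateCertificate(
                new ByteArrayInputStream(pem.toString().getBytes(StandardCharsets.US_ASCII)));
    }
}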

That’s basically it. Then we can use this clever “trick” to extract the CN (Common Name), or any other uniquely identifying field, from the certificate, and use it to load the corresponding user record from our database.
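One way to do that, shown here as a hypothetical helper rather than necessarily the exact approach linked above, is to parse the subject DN with LdapName and pick out the CN RDN:

import java.security.cert.X509Certificate;
import javax.naming.InvalidNameException;
import javax.naming.ldap.LdapName;
import javax.naming.ldap.Rdn;

public class CertificateSubjects {

    /** Extracts the CN (Common Name) from the certificate's subject DN, or returns null if absent. */
    public static String extractCn(X509Certificate certificate) throws InvalidNameException {
        LdapName subject = new LdapName(certificate.getSubjectX500Principal().getName());
        for (Rdn rdn : subject.getRdns()) {
            if ("CN".equalsIgnoreCase(rdn.getType())) {
                return rdn.getValue().toString();
            }
        }
        return null;
    }
}

The same loop works for any other RDN type (a serial number, an email, etc.), depending on what uniquely identifies the user in your scheme.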

That’s it, or at least what I got out of my proof-of-concept. It’s a niche use-case, and smartcard-to-computer communication is a big usability issue, but for national secure e-id schemes, for e-banking and for internal applications it’s probably not a bad idea.