______________________________________________________________________

DRAFT TRANSCRIPT

SIG: Routing
Date: Wednesday 1 March 2006
Time: 2.00pm
Presentation: APNIC resource certification update
Presenter: George Michaelson

______________________________________________________________________

GEORGE MICHAELSON: This is a rather horrible slide pack, especially for a workshop, but it's a fairly good reflection of the work we've been doing. I'm going to cover what our current goals are here - I've got quite a lot of stuff - some examples of the activity, and where we think we're going to go next.

The immediate short-term goals were to try and get something that was a demonstrator, something that would get us off first base, with a prime focus on using free and open source software. We badly wanted to avoid re-implementing anything. We have a body of code that we develop in-house and it's built around Perl and the mod_perl extension to Apache 2. It's quite an investment we've made over the last two and a half years and it's worked very well for us, so we were looking to make use of that same mechanism. That means whatever we built had to fit in with the particular way we're developing code at APNIC. We're also interested in trends in the wider community: we're looking at things like REST as a way of doing things, and we were looking at XML encoding because there is so much code out there that can handle XML. If you can push the load onto the client side - make things happen on people's own iron rather than yours - that lets you get rid of quite a few of the problems.

We've been doing the bootstrap work to support the basic infrastructure. I've got to say the infrastructure work is really primitive stuff: how do we handle a cert, where do we store it, how do we make statements out of it. It's not up at the service level where we say "this is a valid cert" - it's much lower than that. But we've been able to learn from experience in the bootstrap phase.

We actually found there's quite good code out there. There's a library called Convert::ASN1. It maps the ASN.1 into a Perl hash structure, and it had a module for parsing certificates. That was just a test demonstrator, but it's quite good. Although it's very badly documented, the code worked very well and we were able to use it to understand how to construct ASN.1 sequences and interact with them.

We targeted OpenSSL deliberately because we thought it was the most successful. We thought hard about going into the OpenSSL code itself, but it is frighteningly complicated. It's bizarre what Tim and Eric do to make SSL happen, and their interface is very confusing. It has abstractions for everything - it will make your tea and cut your hair - and it is poorly documented. We steered clear of that. Instead, we're looking at using its command line interface. We have the basic functions: we can issue a CRL and do basic things very straightforwardly.

Verification is built into the tool. If you present a well-formed certificate and CRLs, you can ask, "Is this cert OK?" The problem is they've written the verification in a way that doesn't understand extensions. If you think about it, can you write a generic tool that will understand a totally arbitrary extension? You can't. What we've basically had to recognise is that, for us at this stage, this is an inherently two-phased process: we can verify the crypto side with the tool, and the resource extensions we'll have to check out of band by looking at them ourselves. The other thing we get from it is tools to convert between the PEM ASCII-encoded form and the binary DER form.
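A rough sketch of the command-line operations being described here - dumping a certificate, converting between the PEM (ASCII) and DER (binary) encodings, issuing a CRL, and running the built-in verification. The file names are illustrative and the exact flags vary a little between OpenSSL releases:

    # dump a certificate in human-readable form
    openssl x509 -in member-cert.pem -text -noout

    # convert between the PEM (ASCII) and DER (binary) encodings
    openssl x509 -in member-cert.pem -outform DER -out member-cert.der
    openssl x509 -in member-cert.der -inform DER -out member-cert-copy.pem

    # issue a CRL from the CA's config and database
    openssl ca -config ca.cnf -gencrl -out ca-crl.pem

    # verify the chain; appending the CRL to the trust file is one way
    # of letting -crl_check find it
    cat ca-cert.pem ca-crl.pem > ca-bundle.pem
    openssl verify -CAfile ca-bundle.pem -crl_check member-cert.pem

As the talk notes, this only checks the X.509 and CRL side; the resource extensions still have to be examined separately.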
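And for the Convert::ASN1 library mentioned above, a minimal sketch of the decode/encode round trip it provides. The "Extension" template is just the generic X.509 extension wrapper, the input file name is made up, and the OID shown is the RFC 3779 AS number extension; this is a sketch, not APNIC's actual code:

    use strict;
    use warnings;
    use Convert::ASN1;

    # describe the generic X.509 extension wrapper in ASN.1 notation
    my $asn = Convert::ASN1->new;
    $asn->prepare(q<
        Extension ::= SEQUENCE {
            extnID     OBJECT IDENTIFIER,
            critical   BOOLEAN OPTIONAL,
            extnValue  OCTET STRING
        }
    >) or die $asn->error;
    my $ext = $asn->find('Extension') or die $asn->error;

    # read a DER-encoded extension from a file (illustrative name)
    my $der = do {
        open my $fh, '<', 'extension.der' or die $!;
        binmode $fh;
        local $/;
        <$fh>;
    };

    # DER in, Perl hash out
    my $decoded = $ext->decode($der) or die $ext->error;
    print "OID:      $decoded->{extnID}\n";
    print "critical: ", ($decoded->{critical} ? 'yes' : 'no'), "\n";

    # Perl hash in, DER out - extnValue itself carries the DER-encoded
    # RFC 3779 payload
    my $der_out = $ext->encode(
        extnID    => '1.3.6.1.5.5.7.1.8',    # sbgp-autonomousSysNum
        critical  => 1,
        extnValue => $decoded->{extnValue},
    ) or die $ext->error;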
You get opportunities to look at things there, and the Perl is a big advantage to us. But there is that problem that it's undocumented and pretty complicated.

A quick overview of the way it works: OpenSSL has an MS-DOS-style config file. You give each block a name and do variable-value assignments. We found there's an option to add an extra config file, so we decided we'd put our extensions in an external config and pass it in. There's a weird hack where you put in any flags about its criticality or importance or what colour of milk you drink, then say the rest is a DER-encoded sequence and embed it as hex. So if you've got ASN.1 and can put it into a sequence, you can bang any extension into the framework and use it when constructing the signing.

An example would be an arbitrary member - they have one ASN, 17814, and they've got a /20 of space as well. Using this mechanism, this is what it would look like encoded in a config file to run through OpenSSL. We've defined an arbitrary extension name in the configuration file, and we have the mandatory components the RFC says we have to have. We must set the CA bit because of how the certificates have to be used, the subject key identifier has to be put in as a hash, the authority key identifier has to be present, and there's a key usage string. There's a bunch of functions you can do. The "critical" flag here is the interesting one: it has the behaviour of stopping people doing things with your cert, it puts restrictions on. I will say I have a suspicion there are implementations out there that ignore these flags and do things with or without them. In a community of well-behaved people, I think the flags are quite good. Sorry, Steve, is that possibly me being naughty, or fair? They're only mandatory if you play by the rules.

STEVE KENT: I suspect that what most widely used applications do on seeing a critical extension is not what the standard requires.

GEORGE MICHAELSON: I noticed that it said you couldn't use X, Y, Z after you'd used it. If you're happy with that, good for you. People think there are things you can do.

The important stuff is the second set. These are the mandatory extensions that aren't in the standard set of attributes. There's an OID tag, and there are two which are defined by the RFC - the IPv4 address and IPv6 address components - both of which are critical. If your ASN.1 is as good as mine was 20 years ago: there is something like a count, and it encodes a representation of that number of prefixes.

To give you an example of what we get from the Perl: in the cert parsing program, we do a print of the structure against the OID, and the PDU is the string equivalent of the stuff we were seeing before. Does my voice not reach this microphone from over here? You get a hash which consists of the high-level identifier that's in the ASN.1 and its decoded value - that's the payload - and this is the instance of data we were looking at. This instance is the policy identifier and this is its payload value. That's an assigned number that's managed by IANA. I think Steve may run the registry that assigns those? Yes or no. Russ does it. Russ assigned that number.

So the creation act is a command line. You're calling the CA function, passing in your own config and an extension file, and saying which section to use inside that. And it's just absolutely normal command line arguments for certificate processing. You could look on 100 web pages for "How do I make my own certificate?" and see this sequence with very small variations.

We basically copped out in the short term. We weren't comfortable with the ASN.1 part of encoding these objects.
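To make the shape of that concrete, a sketch of what such an extension section and the signing call can look like. The four standard extensions and the arbitrary-extension syntax (OID = criticality, DER:hex) are OpenSSL's normal x509v3 config mechanism, and the two OIDs are the RFC 3779 ones; the section and file names are invented for the example, and the hex payloads, which would be the DER-encoded resource data for AS17814 and the /20, are elided:

    [ resource_cert_ext ]
    basicConstraints        = critical, CA:TRUE
    subjectKeyIdentifier    = hash
    authorityKeyIdentifier  = keyid
    keyUsage                = critical, keyCertSign, cRLSign

    # RFC 3779 resource extensions, injected as arbitrary extensions
    # sbgp-ipAddrBlock: the member's /20 as DER, hex bytes elided
    1.3.6.1.5.5.7.1.7 = critical, DER:30:...
    # sbgp-autonomousSysNum: AS17814 as DER, hex bytes elided
    1.3.6.1.5.5.7.1.8 = critical, DER:30:...

The signing itself is then an ordinary CA call that points at that section:

    openssl ca -config ca.cnf \
        -extfile resource-ext.cnf -extensions resource_cert_ext \
        -in member-request.pem -out member-cert.pem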
It's not that we don't think ASN.1 would work, but we're standing one step back. We modelled the resource signing phase as constructing an SHA-1 signature across a body of text. We're doing detached signatures, so we've bypassed the question of what the signed object looks like and how you manipulate it - that's still coming. We can sign anything; we have tested signing an RPSL object. There is a problem in the toolkit: if you want to apply the verify function, you must have the public key in its ASN.1 form. I don't know if you've ever seen how OpenSSL presents it - it's made available as a text dump of the elements followed by a body. You have to do the conversion into ASN.1 if you want to use its command tool to do a verify. It doesn't tell you that anywhere in its doco, which was painful. We're expecting to have to put some tools into our own facilities to provide this to people, to help you.

If you want to see one of these babies, this is what an example certificate looks like. Can you all see that and read that? Great. We're deliberately using a name which is quite clearly a "do not use this in the wild" name. We have short-life validity; we wanted to get into a cycle of having aged certificates that people could test against. We have a whole bunch of mandatory components. And down here, you have a typo, because the people that submitted the code into OpenSSL got one finger wrong - so instead of SBP you've got SQP. Almost every OpenSSL out there will show that string until a minor upgrade goes through. You cannot actually see the elements; they present as arbitrary data. We expect we'll have to write code and submit it to the OpenSSL community to present this in a structured manner and show people what the elements are, so you can do extraction and manipulate it yourself. Then you get the certificate as a bunch of text at the end.

The current status: what we did is take all of the top-level resources that we've given out to our membership and generate certificate instances for all of them. Our own file is about 8 to 10 k of text, and that is a single certificate that covers the entire space we have responsibility for. We can do somewhere around 1,000 signings in 30 minutes - not that I would suggest in your wildest dreams you should re-sign the entire state of the world in one hit, that's silly, but if you have to, it's not expensive. On a Dell 1750, which is a 2-gigahertz CPU, you could do it. Most of your time is spent doing I/O; the crypto component of it is very small.

I don't know why I keep seeing the same numbers come up, and I'm wondering if every single certificate in the world has the same prime number. Isn't that bad?

SPEAKER FROM THE FLOOR: It's the exponent.

GEORGE MICHAELSON: It surprised me. We're told the way this stuff works is that you have huge numbers that are relatively prime.

SPEAKER FROM THE FLOOR: The public exponent that's used, e, can be any one of a variety of things. Two to the 16th plus one is a good choice.

GEORGE MICHAELSON: Basically, at this point, if you say, "Trust me," it's OK.

SPEAKER FROM THE FLOOR: It's a good number.

GEORGE MICHAELSON: I dropped $200 worth of glassware one day and they were cross with me, so I didn't do too well. I went to the beach 100 times in one year, so.

The other thing about this is we've deliberately made the certificate names a blind. We haven't given people institutional names - you can't tell which one is your telco. We've created this space using names based on an arbitrary number field.
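Going back to the detached-signature exercise described a little earlier, the whole workflow is only a few commands with a reasonably recent OpenSSL. File names are illustrative; pulling the public key straight out of the certificate with -pubkey is one way of getting it into the structured form the verify step insists on:

    # extract the subject public key from the certificate, in PEM form
    openssl x509 -in member-cert.pem -pubkey -noout > member-pub.pem

    # detached SHA-1 signature across an RPSL object
    openssl dgst -sha1 -sign member-key.pem \
        -out route-object.sig route-object.txt

    # verify the detached signature with the extracted public key
    openssl dgst -sha1 -verify member-pub.pem \
        -signature route-object.sig route-object.txt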
For those names we chose a completely random prefix, FC00, and thought it was a nice number space to use. Every one of these certificates has a 40-bit centrally assigned unique local identifier magically associated with it as a value. You can work out who it is, because you can plug the AS or the IP into whois to find out who has got it, but we're not making a data dump. These are flattened, and we think there is potential in deliberately having certificates with anonymous names. There are many reasons to think about the names that get attached to these things - this is important. We're playing with an effectively artificial, flat name space. We thought it was useful.

We have made the configuration files and the private keys available. Never, ever give people your private keys. However, we are. If you want to play with this stuff you need the private key in order to do the signings. Our database - I've put the URL up there - has, for every certificate, the private key, so that you can download these, pretend to be that person, and use that certificate to do signings and tests. When we go live, we won't do that.

I think that's about it. We're just starting to come up to speed on our own framework to manipulate these things. We're about three months behind because of some other stuff, but we'll have our facilities able to manipulate these. And the next steps: we've got test certs from BBN - Charles Gardiner has been amazing, he's been checking off stuff. We have a nice body of code. We want to put up this little demonstrator to give people a chance to pick some numbers, hold a certificate, do validations, even make some bogus certs and verify that you can show they're not valid, and put up a timeline to get this stuff out the door. A lot of this is subject to other work. That's it, folks.

APPLAUSE

RANDY BUSH: Randy Bush.

GEORGE MICHAELSON: How are you?

RANDY BUSH: I'm good. I want to thank the RIRs, and particularly APNIC since this is my home region, for getting this done, because this is long lead-time stuff. There is going to be some Monday morning when we wake up to serious routing attacks, and this stuff has to have all been done beforehand. The vendors aren't solving the problem for us; you're doing the substrate that has to be done first. Thank you.

GEORGE MICHAELSON: As crash dummy here, I'd say that it's really comforting that the open source tools are going towards our goal. Some of the behaviours are a little worrying, but it's not a big step to give them back something that improves the quality of what they're doing. I'd be fairly confident that what we get out the door may not be the fastest or the best, but it will fly. I don't think the community is going to have a problem here. I think we can fulfil the role that's needed.

PHILIP SMITH: Thank you very much, George. So this brings us to the end of the first session of the routing SIG. After the break, we have three sessions to entice you back, and I hope you'll come back for the second half in about 20 minutes' time. Thank you.