______________________________________________________________________ DRAFT TRANSCRIPT DNS operations SIG Thursday 2 September 2004 2.00pm ______________________________________________________________________

JOE ABLEY: Good afternoon. Welcome to the DNS SIG meeting. The agenda has been on the board here on this web browser for a little while. I'll go through it again. A few people will be talking. Some talking more than others, interleaved with the same people you've already heard. We'll start off with a quick one from me reviewing open action items. Can you display it on there?

(Pause while presentation uploads)

JOE ABLEY: We have one action item that's open from two meetings ago and was carried forward at the last meeting. We have some closure on this action item in this meeting. Secretariat to implement proposal "Lame delegation cleanup revised" prop-004-v0301. George will talk about what's happened in response to that a little bit later on. That's all I really have to say in this part. Next it's Terry.

TERRY MANDERSON: (Pause while presentation loads) Good afternoon. What I'm going to be covering is the new data collection method that we've put into place for our measurements of DNS load on our primary and secondary name servers, then give you some of the graphs and statistics we have on our primary and secondary load. We can then lead into some of the conclusions based on that and leave some room for open discussion if it's available. I think we'll have time for open discussion.

Approximately in May this year we started a new statistics gathering process. The benefits there are that we have improved accuracy through the simple fact that we now count absolutely every query that comes in and every answer that leaves our nameservers. Not only that, it allows us to do some more in-depth analysis of the responses, so we can do some per-zone inspection of the information. And we can also do some query response analysis. I say that we've implemented a new statistics gathering process - our historical method was running up until May 2004 and in fact is still running now - but the two measures aren't fully compatible. Under the old method we take a measure every five minutes and take an average, so the new method does not completely replace the old method. It augments it.

When we are discussing primary load we're thinking of the three primary nameservers - ns1 in Brisbane, ns3 in Tokyo, ns4 located in Hong Kong. It's important to note that all of these servers are authoritative for exactly the same zones. It's also important to note that the zones on these machines are simply in-addr.arpa zones and ip6.arpa zones, as you would expect from APNIC. Taking a look at our primary server graphs, it seems pretty obvious that ns3 located in Japan - you can see that by the light grey line - is certainly taking a substantial load in the Asia Pacific, in fact more than Hong Kong and much more than Brisbane. It's also interesting to note the differences between the blue line, which is our successful referral, and our green line, which is NXDOMAIN - in Brisbane you can see there's a gap there, whereas in Hong Kong there's very, very little gap, very little divergence between the two. When we are considering primary zones we have our 15 in-addr.arpa zones. When we're considering a "good" zone we're considering something that has low NXDOMAIN but high referral. We're a delegation point. We're not actually looking for success. We're looking for - we told you the right place to go.
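To make that distinction concrete, here is a hedged sketch (the queried range and the answering data are purely illustrative, not real registrations; ns1.apnic.net is the Brisbane server Terry refers to):

  # a delegated range: the server answers with a referral - NS records in the
  # authority section and rcode NOERROR - "we told you the right place to go"
  $ dig +norecurse @ns1.apnic.net 4.3.2.202.in-addr.arpa. PTR
  ;; ->>HEADER<<- opcode: QUERY, status: NOERROR ...
  ;; AUTHORITY SECTION:
  3.2.202.in-addr.arpa.  IN  NS  ns.example.net.

  # an unregistered range: the same kind of query comes back NXDOMAIN
  $ dig +norecurse @ns1.apnic.net 4.3.99.202.in-addr.arpa. PTR
  ;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN ...

A "good" zone in Terry's sense is one where most answers look like the first case rather than the second.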
Our top five primary zones: the top hit is 61.in-addr.arpa, closely followed by 218, 221, 202 and 219. You can see some of the spikes along the lines here. We're not sure why we're getting those spikes. We can make some assumptions but I think it would be invalid to live by those assumptions.

When we're considering secondary load we have two secondary servers in the Asia Pacific that we actively use - sec1.APNIC.net and sec3.APNIC.net. Our secondary servers act as secondary for 13 ccTLDs, 88 member in-addr.arpa zones - that's 88 zones from members - 22 in-addr.arpa zones from other RIRs, and 32 ip6.arpa zones, all from the RIRs. When we're considering our total load on the nameservers, again - it's not at all surprising - a huge proportion of the query response ends up in Japan. It seems pretty obvious that Japan's equipment is well connected.

Our top five secondary zones. This is quite interesting, actually. The 200.in-addr.arpa zone is a secondary from LACNIC. It averages 145 queries and answers per second. Now, that gives us a couple of really great points we can make from here: secondarying the other RIRs' zones is extremely beneficial to the Asia Pacific area because we're obviously serving queries on behalf of the other RIRs and making response times a little bit healthier in the Asia Pacific. LACNIC takes out No. 1 with RIPE taking out 2, 3, 4 and 5, with 213, 217, 71 and 62. You can see one really large spike in 200. We'd love to know what the explanation for that is. I've asked the other RIRs and I'm waiting for a response. Maybe they have some ideas.

In terms of conclusions we can make, the first one's about capacity planning and server load. The primary and secondary servers are running extremely well, completely free from strain. There are no load issues on the machines. There are no contention issues. APNIC can certainly house additional member secondary services for any of the arpa space. Based on current measures provided through some inter-RIR coordination, we're able to handle DNSSEC. It's also quite apparent that the nameservers generate less than 10 meg of traffic combined, essentially, so if we happen to lose one particular set of infrastructure - either Hong Kong, Japan or Brisbane - the others will be quite sufficient to take the remainder of the load. Is there any discussion on the statistics? OK, thank you.

JOE ABLEY: Thank you, Terry. Next up we have George. When he's ready!

GEORGE MICHAELSON: (Pause while presentation loads) Those of you who have been coming to these meetings long enough will remember my laptop was the one that wouldn't connect to the projector. I have a new laptop.

BILL MANNING: Does it still refuse to connect to the projector? Must be a new projector.

GEORGE MICHAELSON: I'm going to talk about the implementation status of the project to do sweeps of lame DNS entries. To summarise the problem - these slides, by the way, are essentially the same material that's been shown in the last two meetings, but it's just to give everyone some context. The lame DNS problem is something that happens across the whole of the Internet. It's an effect on clients binding to services. It doesn't only happen in one particular address range or one particular name space. For our purposes it causes timeouts in reverse address lookups. So a receiving party somewhere in the network is taking a transaction, a binding from someone, and is attempting to resolve the source address, find out what their name is.
If there's no functioning delegation then that delay is apparent to the user, because the server blocks waiting for that DNS transaction to complete. You also get an increase in DNS traffic through the effect of servers repeatedly having to say "Do you know this guy?" and, instead of getting a clean answer that says "No, not delegated", they get told to go and ask people that aren't able to serve the data. So they repetitively keep saying "Do you know this?" around the network and you get terrific rises in traffic. These are requests that are guaranteed to fail, but the failure depends on timeout. Operators of critical infrastructure have asked us to try and see what we can do to reduce this load. The volume of this traffic was enough that it was showing up on their graphs and they thought it would be nice if we could cut it down. It's interesting that it's affecting unrelated third parties. It's one of those problems where the only people that can fix it are the guy who manages the reverse for a given allocation or assignment, or us as the authority that houses it, but the people affected by the problem could be anywhere in the network. Although we don't like to step in and change people's delegation state, we kind of have an obligation to do this unless administrators are able to show that they're going to repair it. It's something we do in the final analysis, in the extreme, when we've gone through all the other processes.

The proposal was originally tabled at AMM 16 and amended at the APNIC 17 meeting. The way it stands is we have a role to identify lameness. We have two points of test in the network, one in Australia and one in Japan. We do a test on the status of every reverse delegation that's registered with us. If we start to see problems we enter a 15-day period where we're continuously testing and counting down what's happening. If there's no problem it carries on as normal. After that 15 days we go into a 45-day period where we start to tell people we think you have a problem. If they're not going to fix this, if they don't respond or the mail bounces because they've changed contact points but haven't updated the Whois, then at the end of that period we'll do the delisting.

We had some deliverables put on us from this. We were going to look at developing some tools to improve maintenance of reverse DNS. If you were in the previous meeting, the Database SIG, where Elly presented on improvements in reverse DNS management, that is one of the areas we've taken care of through a design improving initial delegation; but in this space we've also made improvements inside MyAPNIC to add an improved GUI for people to manage their reverse DNS. We think this will cut down on the workload of fixing this problem and managing the subspaces that don't line up cleanly with an octet boundary. Those of you that have CIDR blocks that aren't exactly a /16 or a /24 will know you're sometimes having to manage very large sets of records. The system's been implemented in SQL. And that is actually in testing. It's functioning but not yet releasing outcomes into the DNS, but we are running that service. Terry, myself and another one of the ops group receive one email every hour from this service reminding us it's running and comparing its state against the current DNS system. We've also got an integrated data management model we're trying to promote where we don't develop things in isolation but we're looking at a much more cohesive system for managing resources at APNIC.
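As an aside, a minimal sketch of the kind of probe those test points can run (zone and server names here are hypothetical): ask each listed nameserver, without recursion, for the SOA of the zone it is supposed to serve.

  $ dig +norecurse @ns.example.net 10.202.in-addr.arpa. SOA

  # healthy: the reply has the aa (authoritative answer) flag set and an SOA
  #          record in the answer section
  # lame:    REFUSED or SERVFAIL, an answer without the aa flag, or no
  #          response at all from that server

Run from two well-connected points, as George describes, a server that fails persistently from both is what gets carried forward into the 15-day and 45-day process.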
The lame status checks, the things we do to manage them, are all going to interoperate cleanly with the future system. We're also working on support systems for staff. These are still in a design phase. We've had some discussions with the hostmaster department about how they'd like to manage their communications obligation and we'll be developing that as soon as we can. The delivery timeframe at this point is that we expect to go live with this in the last quarter, so some time October to December.

OK, to give you a view of the database, we've been running this service now for some time. This is a snapshot of the database view of what we know about the status of the nameservers; nothing yet has been disabled. We have 168,000 total nameserver records which are referring to about 62,000 domains. So you'll see that the multiple there is sort of around the 2 to 3, 2.5 to 3.5 nameservers per domain. The minimum to get through the system is 2, but people want more redundancy in their DNS and they're adding in extra names. We've got 12,500 nameservers listed in the 15-day period. We're finding these have problems and are still counting down their status. They refer to about 9,500 domains. So the numbers there have shifted. Where previously it was 3:1 the number of nameservers to the number of domains, it's not quite like that when you start to look at the problem cases. We've got about 11,500 nameservers in the 45-day period, so these things have been persistently lame for at least 15 days and some for longer in our test cycle. They refer to about 7,000 domains in total. We haven't gone live with the final outcome. So the system is in this state; at the point where we agree this is ready and able to go live, that's when the bottom state, disabled by APNIC, would start counting up, because we'd have records that we had formally turned off.

Just to summarise that, about 12% of the nameservers are lame. 24,000. That's an interesting number. We've talked at different times about the scale of this problem in different regions. LACNIC, you might remember, have consistently had a very, very good track record here because their registry management system aggressively polices this behaviour, so they've tended to the lower end. RIPE, APNIC and ARIN, I think, are still coming to terms with this, although ARIN, having the track record of implementing their own lame delegation policy, may well have improved on this. This figure is slightly lower than when I last reported on it. We thought the figure could be as high as 15% but it seems to have come down a bit. Something to note - these are not all persistently broken. There is a connectivity issue still in the global net, so some of these are flapping, kind of like in BGP - they're up, down, there, not there. We're deliberately testing in two places in the network so we can tell the difference here. It's the ones we can see that are persistently unavailable from two well connected points in the net that we carry forward into the process. The other point that's interesting here is that we do have an idea that there's a top list of hosts, nameservers, which are hosting very many domains and are lame. If we think about that previous number where we're saying there's 6,000 domains that have a problem here, that's a lot of communications for the hostmaster team. It's too big a problem really for them to deal with in one go.
So if we're thinking about staging how we talk with the managers of domains in our community, the ones we're going to talk to first are going to be the people that run hosting services that have lots and lots of domains against them. I won't be putting a list up here - we have talked about whether we should do a wall of shame or something, but we think there's probably a privacy issue here, so rather than exposing to the community the list of people that have problems we're going to focus for now on speaking to them directly. I would imagine the NIRs might be interested in us talking with you, because it's likely some of these might be current or future members, so if you would like to share information for your own processes I think we could do that, but we're not looking at a wider public process here; we're going to deal with people individually.

This is an example of the GUI we're doing in MyAPNIC. We have this new feature called a reverse DNS record. You can use CIDR notation to refer to your ranges. You can put multiples in. Imagine you've just moved your infrastructure around: you used to have lots of different nameservers, but everything has been collected together onto two new machines and you want to update a couple of hundred records. It's not a problem. You can use this system and refer to them by their CIDR range and say I want to update the DNS for all this space. You can put in the list of nameservers - it requires two, but you can have more. You can specify which maintainer you want to use for this. This GUI will go out and create in the Whois system all the child records you need to maintain this properly. If you need to make one or two different you can do that, and you can bring them back together again. It should help make the service easier to use. That's it, thanks.

JOE ABLEY: Are there any questions about that presentation from George? No. I have a list of six housekeeping issues here but I think I'm going to let the George and Terry show continue and I'll maybe take a break for the housekeeping before Bill talks. We've found a pair of glasses here on the table. If anybody's lost their glasses and is unable to see the slides or find the table... they're down here. Next up we have Terry again. Over to you, Terry.

TERRY MANDERSON: Yes, the George and Terry show continues. As George mentioned in his presentation, we're implementing a new DNS generation system. Not necessarily for the complete benefit of lames, but it certainly does benefit them. I'm just going to take a look at what we currently do for DNS generation, that is, what we do to take the delegation information our members provide to us and generate that into zone files. We'll also take a look at the new process and what the benefits of that are. Then implementation status, future considerations and obviously again leaving some time for any discussion.

Our current process. As all the members currently recognise, they enter a domain object, either through MyAPNIC or auto-dbm. APNIC takes that domain object and exports it to zone files. Probably about 8 months or so ago that was the entirety of the process; it was extremely lightweight and it worked extremely well. However, a project called ERX came along. That transfers responsibility of early registrations to the in-region RIR. That has implications for DNS. It essentially means that the RIR which is the majority shareholder of a given range is still the custodian of the reverse DNS.
So we now have a process where we have to exchange zone information between the RIRs. You can see that just there (Indicates). Additionally, we have something called direct allocation. That's an APNIC policy that's been implemented. The NIRs have the ability to receive direct allocations from APNIC, and additionally it means they then have a process, in a shared DNS space, to update records directly with APNIC. We end up having this rather heavy zone merge structure right in the centre that then produces our zone files, which are then output to the DNS servers that you all see and love.

Our current process. Zones are a composite of Whois data and our zonelets, or zone fragments, from the NIRs and the RIRs. At this point in time zone generation takes about 27 minutes. That's at the best of times. It can expand out to 35, 40 minutes. This is mostly due to data fetch delays and a serialisation problem. At this point in time we have to take in one set of zones, make sure they're right, take in the next set and so forth. We have to take in the RIR zones first, then the various NIR zones. Zones are completely valid when they're created, but they're dirty. We have /24 records in Whois, but also a covering /16 domain object. So the /24 delegation is, in DNS terms, simply not going to be adhered to.

We're playing with the lights. OK. I didn't want to see you anyway! (Laughter)

GEOFF HUSTON: Ditto.

TERRY MANDERSON: Sorry, Geoff, you don't get an opportunity. The /16 takes control and the /24 delegation is simply not required in the DNS; it's dirty, it doesn't need to be there. Additionally, we have some manual processes involved here. It's not ideal to have manual processes in such a mechanistic world, so the idea is to remove those as soon as possible. It isn't particularly scalable either. When we start considering the world of ip6.arpa things are going to get big really quickly. So we don't have a scalable process there.

Our new process. As George mentioned, we have a DNS database. We still take our Whois records, we still take our RIR ERX records and NIR records, but we put them straight into a DNS database. Once they're in the DNS database we've proved they're valid and healthy, and we can publish them to our zone files and DNS servers. We have a DNS database sitting between our collection process and our generation process. Benefits: all our inputs are now prevalidated. Zone generation is now under a minute. So, as you'd expect, when we take away the collection process our zone generation time is going to decrease. We also have zone management improvements. And as George has also pointed out there are lame delegation synergies. We see a lame delegation, we can then mark a flag in the DNS database and there you go. It's as simple as that.

Future flexibility - DNSSEC. One of the reasons - not one of the main reasons, but one of the reasons - we'd really like to get our zone generation under a minute is that current RIR coordination is suggesting the zone signing process is quite weighty, in the order of 90 minutes or more. APNIC has a current commitment of a two-hour delegation turnaround. When you start considering that we have a current method that takes 40 minutes and you're going to add another 90 minutes on top of that, we're getting extremely close to stepping over that boundary and we don't want to do that. We want to maintain member turnaround and member response. So we have the future flexibility there for DNSSEC. It also gives us zone consistency.
That means every time a zone is created, with or without a particular record, we know what the order is going to be and how it's going to be formed. Our zone files are now clean. We've removed those superfluous records, the superfluous delegations that really don't get noticed. Certainly if our member says "I want the /24 to be there", OK, that's a good decision, but we'll remove the /16 to make it work properly for you. Of course, we remove the manual processes involved. That is, something happens in hostmaster, they decide to click on something to give a faster delegation, our member changes something in MyAPNIC - there is no manual interaction here.

On implementation status, we have a 95% functioning system. We make zones but currently do not yet publish them out to the nameservers. We currently have no GUI management interface - right now, no management interface. We're in the process of zone state comparative testing. As George mentioned about lame DNS, we're testing this against our current system. We have a few tests we need to do to give ourselves confidence in the system. We need to make sure that before and after, a delegation is what we expect it to be. We need to make sure we can filter out invalid data, that we don't get any ugly delegations in our zone files. We have an expected deployment date - end of November this year.

Future considerations: DNSSEC support - these are things we need to discuss internally and with our members, certainly with the RIRs and NIRs, but some of the problem areas about having DNS records stored in a database are how do we re-sign the zone - do we export it out first, re-sign it, put it back in? All these things we need to consider. In-addr.arpa glue - is there benefit for members to have this? Direct update from our stakeholders, based on our per-delegation records. At this point in time when you update a domain you have to submit a new Whois record, or you go through the MyAPNIC process, and you end up updating an entire chunk. This will in time allow us to do a per-delegation alteration. So if you only want to change one nameserver, that's all - we can just do that, that's easy. On a longer term view, we are considering more instantaneous methods of updating such as dynamic DNS and so forth. Does anyone have any questions?

RAM NARULA: My name is Ram Narula. I'm just wondering, is there a possibility of linking the database to a DNS server without it having its own zone files?

TERRY MANDERSON: The question was - is there a possibility of linking... there's always that possibility.

RAM NARULA: Without having...

TERRY MANDERSON: Yes, always that possibility.

RAM NARULA: And why isn't it part of the plan? What's good and bad about that?

TERRY MANDERSON: One step at a time. That's all. We're happy to consider it. I don't see a problem with that, but one step at a time. Any other questions? OK, thank you.

JOE ABLEY: Thank you, Terry. The next two presentations - a double whammy from George Michaelson.

GEORGE MICHAELSON: (Pause while presentation loads) Hi again. I'd like to give the regular report on deployment issues and management issues in the root service within the region. Since we now have a role in coordinating activity we feel it's good to give a standing report to the group about what's been going on. To give you an overview of the kind of things we do, the overall tone is we're trying to facilitate improvements in root services in the region.
We feel we can be a coordination point and we are in dialogue with operators who have services in region, notably the F, K, I and M-root operators, and we host discussions during APNIC meetings. These are corridor conversations, small private conversations, to see whether there are things we can do. We also have a role in funding and coordination of sponsorship, so we can help people with hardware, hosting, the maintenance costs. It depends on individual circumstances. There are actually communities who have issues with receiving funding assistance and donations, so we have to be careful sometimes not to overcommit as well as wanting to offer as much assistance as we can. We also undertake some formal agreements. We have memorandums of understanding with the root operators. The ones we've signed agreements with are F-root and I-root. We have a signed memorandum of understanding with local hosts so everybody has a clear understanding of what their obligations and expectations are. We have a very long standing relationship with the RIPE NCC; they're the operator of K-root. We are not a root operator. We have no root operator responsibilities. So this is a coordination role, a facilitating role, a discussion point, assistance - it's that kind of activity.

To give you an idea of what we've been doing - the project went live in 2003 and we had five deployments at that time that were all F-root instances: deployments, in order, in Hong Kong, Korea, China, Taiwan and Singapore. In 2004, so far, we've also done five deployments. A node in Australia, an F-root; another node in Hong Kong, which was I; a node in Thailand which was I; a node in Indonesia, F; and a node in Malaysia which was an I node. We have more that we expect to do this year. There are two deployments of K-root we expect to complete this year. I don't want to give details here, it is very much subject to timing and local considerations, but there's some quite significant work going on there. And we are expecting another node of I to be deployed in Indonesia shortly. This was originally scheduled to happen at the same time as the F node but there were some staff issues that prevented Autonomica from attending. We have some more diffuse discussions taking place. This is about adding extra countries, regions, economies into our coverage; it is also about increasing the diversity. We complement deployments with additional nodes. We have two nodes in Hong Kong. It's quite a good idea to do that. It doesn't necessarily mean we've let down other regions; that might be something that's got wider benefit to everybody.

There's also a very nice map that Shiaki from APNIC has drawn that gives you a better sense of where the deployments are. What's missing is the sense of the coverage of these nodes, their reach. We want to think of ways of showing that, but it's an interesting diagram. A question people often ask is: where is my root service coming from? It's an interesting question with interesting answers. For the F nodes in particular there's quite a reliable way you can do this. You can get a list of the nodes available by looking at their web page, but you can use a utility - dig or any other utility that queries the DNS - and ask the F-root server itself, where are you getting me from? If you're in a location that has a local node which is on their web page you should see that. So if you're in New Zealand you should expect to see your F-root service is delivered to you from Auckland.
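A hedged sketch of that identity query (this is the query George goes on to describe; f.root-servers.net is the real F-root service name, and the answer shown is just the example he quotes):

  $ dig +short @f.root-servers.net hostname.bind chaos txt
  "sfo2b.f.root-servers.org"

The string that comes back names the particular instance that answered you; you can compare it against the list of nodes published on the F-root operator's web page.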
If you're in Korea you should see that your F-root comes from Seoul. We have a few problems with that - we'll talk about those later. For any of these roots you can use the traceroute command and see what path you're taking to that server. Irrespective of where your root is coming from, APNIC encourages people to think about this: if you know you have international connectivity that will get you to a node that would be useful for your community, and it could be supplied efficiently in region, and the benefits are there and the engineering is good, we think that's great. We would encourage you to talk with root operators, talk to us, and find ways of getting access to service.

Here's an example of dig. You wouldn't believe the things you can do - I've been using the dig command for five, ten years and I only learned about this +short option two days ago. It gets rid of a very large amount of debug output that you don't need to see and I don't need to see. So if you type a command like this with dig, the response you get back is a string - this string, sfo2b.f.root-servers.org. It's the .org name that comes up, but that's a different issue. That's San Francisco. You would see that listed on F's page of servers. People have known for a while that having to use a special query type to get this is a bit of a difficult issue. Not every server implements support for this hostname.bind query in the chaos class, so having something that would work in all servers and was more standards compliant is a really nice idea. ISC have been working very hard on this. Suzanne Woolf and Rob Austein have been writing a draft to consider using the general query response, with flags and fields that can be passed back, so if you ask an ordinary question you can also say "where are you?" and get back that information. That would be more useful than a special command like this because it's closer to the real query path your services are using; people could include that in routine queries. So rather than having to do things at the shell you can maybe do diagnostics inside normal activity. I think that's a very nice idea. There's some concern there about not wanting to just say where the machine is. People are talking about blinding techniques to return an opaque 48-bit identifier, and they're talking about embedding web addresses, and all sorts of fun magic going on, but the draft is very readable. I'd encourage people to look at that. I'm sure people in the ccTLD space or the NIRs would be interested in this. If you have a lot of DNS this is good diagnostic stuff.

Another question is - what are the benefits? You have to take this table with a pinch of salt. The point here is that this is a before-and-after list from research work that's been done by someone in the APJII community. This paper - I've given the URL here - was presented at their open policy meeting. It's a very nice paper, very well written. A good body of work. They took all these locations inside their community and they found out where F-root was coming from and what the delay was like. You can see here, in the list of locations, that the delay paths are typically the length of the undersea cable. There is a 200 millisecond hop there that's pretty aggressive compared to some of the others, but overall the delay for getting to root service was around the half-second mark. After deployment that round trip time has dropped markedly, a two order of magnitude drop for some people. That's very significant. But that's only the time to get queries served by the root.
So general DNS queries - getting real answers about real things on the edge of the DNS cloud - haven't necessarily got a lot better. You have to look at this more realistically. The thing is that a lot of DNS queries that go into the root are queries you want to get rid of as quickly as possible - things that are broken in the global name space. Having that response come back quickly to say "no, that's junk, don't go there", that's good. There are also potential traffic savings for people; this is stuff that's not going on and clogging up links. I think Joe would say it's probably well under the radar given the bandwidth of most cables around here. The other interesting thing is to see query loads. ISC, through a related organisation they have, OARC - I can't tell you what the acronym stands for - have an interest in DNS in general. It's a membership body that helps support development of activities. They have good graphics and statistics on DNS load. I extracted some kind of eyeball figures based on the graphs to give you an idea of what the query loads are that are being serviced by nodes in our region. The average packets per second load on the interfaces - which is close to, but not the same thing as, query load - shows quite respectable service within the region. You can see the average load is comfortably in the hundreds, and peak loads in locations like Beijing and Seoul are really very respectable. These have been as high as a thousand, 2,000 queries a second of local traffic. They're quite significant loads that are being serviced in region. That's it. Thanks.

JOE ABLEY: You're up next too.

GEORGE MICHAELSON: Oh, OK. So I'd better wait for the next speaker to get ready, hadn't I. This next one is a presentation that will very possibly trigger either Bill or Doug, or Bill and Doug, to get up and say "no, that's completely wrong" and correct me.

(Minties being thrown at George)

GEORGE MICHAELSON: After the presentation. Not before. Did I start the wrong slide set? I did. This one. (Bill laughs) We will now return you to your normal service. This is an informational slide set to talk about some issues that follow on from the v6 servers that are going to be added into the root and the effect that might have on the in-addr tree. Just to give you an overview here, if you don't realise it, in-addr.arpa is delegated back to the root name service. It isn't a direct delegation - it's not that the single thing in-addr.arpa is delegated - but if you follow the chain of delegation down, arpa is delegated back to the root and in-addr under arpa is delegated back to the root. The management responsibilities there are quite nice. The delegation says that the responsible NS is the A-root service, but it is quite consistently saying, when you query any of the roots, they supply this data. The zone's managed by ARIN under a relationship they have. It's not something distributed throughout the whole of the RIR community. It's a job ARIN have, to maintain this zone. It's a file upload they do; it's then published; it's then distributed through the normal root management. The nameservers that are serving arpa and in-addr.arpa are going to have v6 state associated with them that's also visible in glue; it's available in the cache state for the root. That has inheritance behaviours; there's information acquired by people along the way. That has implications for what may happen for us in v6 if we're running v6 native services. It might change the behaviour of the DNS. So, the first thing you want to ask is: are there any risks here?
There's a bigger question about the risks of adding v6 in the root as a whole. I think Bill has already touched on that in the plenary, and maybe he'll summarise it in this meeting as well, so I won't talk about that at all. But there is a very small issue there with glue - the size of glue, the size of queries. An assessment has been going on to make sure this is safe and trustable. The general feeling I'm getting from sitting in on meetings and listening to discussions is that if there are risks they're outweighed by the benefits. They're contained and small. People are looking at small, minimum-impact changes; there should be only one, maybe two, IPv6 listings visible that are going to start appearing. So if you run a nameserver that has v6 you may find that different paths, query paths, are now used to satisfy your chain to the root. If your software thinks 6 is preferable you may discover the RTT - the thing used to select your root - is changing here, because v6 RTT is going to be quite different to v4 RTT. It will work, but it may be that the weightings of the RTT are going to change the behaviour of which root you select.

What are the rewards here? The big reward is for people who've got a strong commitment to v6, who for whatever reason are running v6 only - there could be reasons you want to do this. I know there are a lot of issues about having to run v4 as well as v6; we saw that from the IPv6 working group and a presentation showing large deployments, and the consistent message was we still have to run v4 for various reasons. One of those reasons, DNS, may no longer be true. If you are v6 only, even for serving v4 data, you may now have a valid full path to v6-serving NSs.

What are we going to get as an impact? My basic assessment here is there's really no downside risk for us. We've served and have been serving v6 reverse for quite some time, hosted in Brisbane and in Japan, and we've been measuring that across the whole life of its deployment. That's something Randy Bush put on me as an initiative 2.5 years ago from a corridor conversation. He said it would be a nice idea to do some measurements before things have taken off and track that early stage in the life of the curve. We've got a good sense of what our v6 query load is. It's there but it's small. It's measured in queries per minute rather than per second. So we don't believe that we're going to see a high traffic problem or a service quality problem, because based on the behaviour we've already seen we think our part of the chain is looking OK. In particular, because we host in Japan and we are at NSPIXP6 and the related 6-enabled exchanges, we believe we will have very, very good interconnectivity to one of the roots that we suspect will be visible in a v6 flavour. I'm trying not to overstate here, but it's likely one would see this node in Japan well. Of course we'll continue to monitor this, we'll keep our eye on this, and if any of you see problems that stem from this delegation act please contact us. Those are dual stack servers. As Bill will no doubt tell us, sometimes with these things, underneath the covers, things are split into two halves. I don't know if he's gonna go there or not. That's it, thank you.

JOE ABLEY: Do we have any observations, questions about George's presentations? OARC, the ISC thing George mentioned, stands for Operations, Analysis and Research Centre.

GEORGE MICHAELSON: And they are a fine bunch of people.
JOE ABLEY: While Bill's getting ready to close up the meeting with another couple of presentations I'll go through the housekeeping notes. Our first note here is that our meeting hosts are Telecom Fiji and Connect. Today's meeting is sponsored by JPNIC. The fellowship program is sponsored by Softbank BB, and there are also sponsors for the wireless connection and Internet bandwidth. We have an onsite noticeboard available for any meeting information anybody would like to share. As of today - last night - the opening plenary and IPv6 presentations are available in the video archive on the APNIC 18 web site. We have a MyAPNIC demo all day in the help desk area if anybody's interested in watching that. And the help desk is available, and if anybody needs to go and talk to the APNIC hostmasters at any time they should feel more than welcome. One additional note not on this list which I forgot to mention to start with - the DNS SIG has a mailing list. Which I presume is a surprise to most people here, because it sees very low volumes of traffic. We measure that in messages per year. There are lots of discussions going on, at this meeting and at all the other meetings, in corridors - in the pool, in fact, here - about the DNS. It would be nice to get some of those discussions available for participation from members not here at this meeting. If anybody is feeling reluctant to post to the mailing list - for any reason - they shouldn't feel that reluctance. Anybody who opens issues on the mailing list will receive my eternal gratitude. If anybody needs information about how to subscribe to the mailing list please send me mail or contact me here and we can give that information out. So, that's all I have. Bill Manning.

BILL MANNING: So Joe is soliciting spam for the mailing list.

RANDY BUSH: I was just about to send some.

BILL MANNING: OARC was named after Peter Jackson had a successful movie, so maybe those aren't really the nicest folks you think. Corrupted, debased elves. Anyway... I'm going to channel Steve Crocker. This is a presentation he made in Kuala Lumpur and then again in San Diego. He's done this on and off. He was unable to make this meeting because of other commitments. I'm gonna give his presentation and talk a little bit about DNSSEC. How many people know what DNSSEC is? How many people don't know what DNSSEC is? How many people are asleep? OK. Geoff, that one's for you. There's some additional data in here from Verisign. DNSSEC is essentially adding cryptographic signatures to the DNS. What we're attempting to do is - I hate the word "security" - so we're attempting to provide authentication and some additional information, security attributes here. You're protecting the integrity of the query result. So if you ask the question, the answer you get back should in fact be the accurate answer. It's not been tampered with in somebody's cache; in transmission it hasn't been corrupted. That's what DNSSEC gives you. You have end systems - end systems have the ability to check the signature chain up to the root. So they can take the answer, say "I believe I can validate this from the root", and it actually walks a signature chain. Sort of like a chain of custody, if you will. This is a key Internet infrastructure strengthening step, alongside issues dealing with routing and DDoS suppression. As early as the early 90s these threats were identified and the protocol design was started. Like a lot of difficult problems this takes a long time to complete. It's been around for greater than ten years.
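Concretely, "signatures in the DNS" means answers from a signed zone carry signature records a validating resolver can check. A hedged sketch (hypothetical zone, output heavily trimmed):

  $ dig +dnssec www.example.net A

  ;; ANSWER SECTION:
  www.example.net.  3600  IN  A      192.0.2.1
  www.example.net.  3600  IN  RRSIG  A 5 3 3600 ( ...signature... )

The resolver checks the RRSIG against the zone's published key (DNSKEY), that key against its parent's signature, and so on - the signature chain up to the root that Bill describes.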
It's been declared finished three times, to my memory. People say "It's done", then the operational people get hold of it and start kicking the tyres and go, "Wait a minute, this may be mathematically correct but operationally we can't deploy it. It's not a deployable solution." So the thing goes back in the hopper. This time for sure, right? We think - the fourth time - we hope we've got it right this time. So instead of focusing on the specifications and implementation of stuff we're starting to say, if we think we've got it right, what is it going to take to deploy DNSSEC and make it useful for the community? There is specification and design, implementation stuff that needs to be done, testing of those implementations, productisation - Terry and George talked a little bit about, from a regional registry perspective, how they're going to incorporate DNSSEC in their operations. Education and marketing - if that's available, how are you guys gonna use DNSSEC? Then how is it adopted: training needs to happen, real operations need to start up, then incident handling. If you look at the check marks and bullet points there, we're very early in this process. The specifications, we believe, are done, implementations are starting to emerge and testing is starting to happen, but the rest of this stuff is still a long way down the road. Lots of work to be done.

Steve cuts this into what he calls epochs. The first one, or early one, is empty, which is the current status. There isn't really any DNSSEC-signed data in the DNS today. Then we move from that into what he considers isolated - some people say, "I have this tiny zone signed and I've had it signed for ten years." That's not really important. Then there's sparse, which is a large number but a small fraction of the overall. It might be that sparse would be when the regional registries start signing the reverse maps but nobody in the forward zones signs. That might be sparse. Dense would be when there's a large fraction - greater than 50% of the delegations are signed. Complete is - he says "some day". I think we finally persuaded Steve that that's mythical. You're never gonna get every single zone signed, or probably not in our lifetime. So the difficulty here is how do we manage the isolated and sparse periods in the transition and how do we encourage adoption?

Part of this is an ICANN role. ICANN in fact has a pivotal role here, because it focuses on the root and the root zone. The way the architecture currently is, the architected process requires the US Department of Commerce, the IANA and the root servers' cooperation, and a new set of procedures on how the root keys are handled, how the zone is signed and how that information is made available to the end users. The Security and Stability Advisory Committee has looked at some of the deployment issues and decided this is a bigger thing than they can handle, so they spun off a new project called the DNSSEC deployment project - or road map. Steve has structured this as a virtual program management. It's a series of conference calls and people agreeing to do some of the work. There is government funding. We're gonna look at some of the major players and objectives. This virtual program management thing is basically to build, refine and publish a road map so people know where we are. There is a common place people can look and say, where are we in this DNSSEC deployment activity? There is a way to measure progress, to say here are things that need to be done, we check things off, we make progress.
This is an attempt to actually be able to clearly identify where we are and be able to move forward. So we actually have a road map. This is not intended to be closed. If you want to participate Steve is more than happy to have you volunteer to help out and sign up. The major operating components here fall into these categories. The end systems - this involves people like Microsoft, Symbian, Nokia, who build the end products. The nearest DNS resolver - recursive resolvers, those things that may be in your ISP, or if you're in a situation like you are in the Pacific Islands you may in fact have a cache for the island that everybody is expected to use. The secondaries - caches and secondary servers need some work done. The authoritative servers - most of the work has been done there. And we're looking at registries, top level domains and registrars.

The first issue, perhaps the most visible one when people look at this, is the root key. The root key should sign the root zone. The root key, or something like the root key, needs to be available in all of the end systems so they can perform that validation. So we have to know how to distribute that root key, or that root key-like thing, from whoever's holding the root down to the end system. Each resolver has to know about that key. Who controls it? Ah, there's not much debate over that any more. It's pretty clear whoever edits the root zone needs to hold the key, or it's part of that process. Then, because all keys will be compromised eventually, we need to know how to roll the key over in the event of compromise.

Then, from the end system's perspective: what do they do when DNSSEC is only sparsely available? If I'm expecting, as an end system, to be able to validate from the root down and most of the hierarchy's not signed, what do I do? That raises the issue of a thing called trust anchors. Within DNSSEC you can say, since I can't get from the root down, I know I can get from this point. I can get from FJ. Or I can get from yahoo.com. These become what are called secure entry points. Those will probably be most visible in these early epochs. From an end system perspective you then start having to keep and maintain multiple keys. This puts pressure on the end systems.

Then there are privacy issues. This showed up very late in the process, with one of the registries saying, "We want to protect our data." You say, what do you mean, you answer queries, don't you? They said, yeah, but we don't want people getting bulk copies of our data. Well, OK, you turned off zone transfers, but you make it available through FTP. We're gonna turn that off too because we don't want people to get this. With DNS security, when you make a query it will tell you: this is the answer, and this is the next record in the zone file. So that makes zone walking trivial. Registrars and registries need to be aware of this and determine whether or not this is a real threat for them. In Europe a couple of them seem to think this is a real problem.

Funding. Once again the US government steps up to the plate and does Internet research and core funding, except this time instead of the Defense Department it's the Department of Homeland Security. We're looking for funding from other places. Russ Mundy, Steve Crocker and the National Institute of Standards and Technology are taking the US lead. In Europe it's Johan Ihren and Olaf Kolkman and a cast of thousands. We're not seeing a whole lot of energy out of the AP region. Most of it is coming out of a project from Jun Murai.
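To illustrate the zone-walking concern Bill mentions: in a signed zone, a query for a name that does not exist comes back with a record naming the next real entry, so the denial itself leaks the zone's contents. A hedged sketch (hypothetical signed zone, trimmed output):

  $ dig +dnssec does-not-exist.example.net A

  ;; AUTHORITY SECTION:
  alpha.example.net.  3600  IN  NSEC  bravo.example.net. A RRSIG NSEC

The NSEC record says nothing exists between alpha and bravo, so by repeating the query just beyond each name returned you can enumerate every name in the zone.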
The major groups here, and some of the objectives, are the IANA and the root server operators. We have had some discussions already on that. Steve makes a distinction between gTLDs and ccTLDs; I think that's partly due to some of the historical contractual stuff. DNS software vendors are starting to get involved, and then major organisations. That's pretty much Steve's stuff.

From a registry perspective, Verisign looked at this and said this is what we have to do - if you don't understand registrant, registrar and registry you can join me, right. I don't really understand them even though it's been explained to me hundreds of times. A registrant is somebody who goes off and gets a delegation. In this case it may be an APNIC customer. You're going to have to generate a public/private key pair for the zone, for the delegation that you have. You're gonna generate a key pair for you. You sign your zone, your delegation, with the private key. You have to keep that private key private. You don't share it with anybody. Then you send the zone public key to the registrar. In this case that may be APNIC. They've handed you a delegation, a block of address space, and you set up DNS servers. You're going to have to give APNIC that public key. Terry said that they are trying to figure out how to accept those keys from their customers, right, Terry?

TERRY MANDERSON: Mmm.

BILL MANNING: Can I send them to you in email? Is it as secure as email is? Fax?

TERRY MANDERSON: Can I type that in for you later on?

BILL MANNING: Tell it to you in the hallway, put it on a sticky note. None of those are really acceptable. If you think about this and have input, Terry I'm sure would be happy to hear from you about how you would want to do that.

GEOFF HUSTON: He just said he accepted sticky notes.

BILL MANNING: From me. Then he's gonna throw my sticky note away and make something up and I'm gonna complain. Because APNIC doesn't follow the registry/registrar model - the registrar then sends the registrant's key to the registry, so if you're running in a .com or .net-type arrangement you may send things to GoDaddy and that has to be sent to Verisign.

GEORGE MICHAELSON: We will have to implement behaviour that very closely mimics this.

BILL MANNING: George says APNIC does exactly this.

GEORGE MICHAELSON: Yes. Although I don't think we would call the relationship registry/registrar, the NIR relationship has put us in positions where data that the NIRs maintain is maintained in our framework. But they are the primary agency that performs updates and changes on behalf of larger numbers of people. I think that's functionally similar. Those people will wish to do certain things with them; they will then do high trust things with us.

BILL MANNING: This thing where it says registrar, we say NIR there?

GEORGE MICHAELSON: In this slide set it's very likely a good fit. And we cover both - we have to differentiate between our role as a service to the wider community and our role to membership, so again - so again it gets more complicated. Carry on, good talk. You don't need this.

BILL MANNING: I do. I have to get feedback from somewhere. Geoff is asleep, right. The registrar sends the key to the registry. So there's a key handoff there. Is that a secure handoff? Does the registrant know or care? Right. Then the registry takes that key and builds a hash for it in the zone. Then the registry signs the zone and publishes the TLD zone.
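As a rough sketch of the registrant steps Bill walks through, using the BIND 9 tools (the zone name and key size here are purely illustrative, not a recommendation):

  # 1. generate a public/private key pair for the delegation you hold
  $ dnssec-keygen -a RSASHA1 -b 1024 -n ZONE 10.202.in-addr.arpa

  # 2. include the public half (the K*.key file) in the zone, then sign the
  #    zone with the private key - which never leaves your own systems
  $ dnssec-signzone -o 10.202.in-addr.arpa db.10.202.in-addr.arpa

  # 3. hand only the public key up the chain - to the NIR/registrar, and from
  #    there to the registry - by whatever secure channel is eventually agreed

How that last handoff happens securely is exactly the open question Bill and Terry are joking about.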
So there's a whole series here of new things that people have to worry about: generating the keys, making the keys available for my own use - the private key - and protecting the private key so it's not exposed. If the private key is exposed - I'm gonna pick on Doug, right, 'cause Doug is a notorious black hat, hacker of the Internet extraordinaire - if he gets your private key he can steal your resources. He can fabricate a request to APNIC with the appropriate private key that says "No, I think that I really want to move my nameservers here" or, even better, "I'm finished with these resources, please recover these numbers, I'm done with them." This would be bad. So you have to protect your private key. Then, if you pass your public key through an intermediary, you have to make sure that you have appropriate contractual language in place so that there are service level agreements in case they compromise the key in the handoff. If you haven't thought about key management issues - APNIC is certainly in the position where they're starting to look at them now, and what those things are and how that's going to affect people - I would suggest you think about it and have that dialogue with APNIC. So that's DNSSEC from a very high level. Any questions? No questions. Are you guys afraid? They're asleep. OK. 15 minutes.

A brief summary of what I started to do yesterday - how many people saw something like this yesterday from me? OK. You guys may be excused, the key is outside! This has to do with IPv6 data, with adding IPv6 support in the root zone. It's very clear that IPv6 data is distinct from transport. The DNS will gladly take AAAA records; it's just data. And they'll publish it over IPv4 transport, which is what they do now. The problem with this is that a resolver - if I bring up a system that only talks IPv6 and I query for things, I'm looking for AAAA records, I won't be able to find any, because the only nameservers that have that data talk on IP version 4. So to some degree, if I only talk on one transport, the entire DNS hierarchy may disappear, or big chunks of it, because things may all be on the other transport. For people that want to encourage IPv6 adoption this is a real problem, so we need to have some level of coordination in the DNS deployment for adding IPv6 transport to the DNS system. BIND, as the predominant implementation, has some really cool things to make that happen. Other DNS implementations do not do this currently. You need to be careful. One of the key things here is that when people adopt IPv6 they think the DNS works with the IPv6 UDP packet size. As was discovered a year or so ago, DNS doesn't do this. It has the minimum UDP size for IPv4 hardcoded. RFC 1035 limits the message size to 512 octets. It's a hard protocol limit. To get around that there was a thing defined called EDNS0 which lets the sender and receiver negotiate larger sizes, and many of the servers these days are EDNS0 capable. I think APNIC's servers are running EDNS0-capable software, but the resolvers, the things in your end systems, probably are not. And that's probably not gonna change in the short or medium term. So for IPv6 deployment the 512-octet message-size limit is a practical limit. We have to fit the answers in 512 bytes. We're gonna look at some of the issues here. I'm going to briefly run through what happened when ARIN decided to do their deployment. There's a 512 hard limit. UDP fragmentation is operationally bad.
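A quick way to see that limit in practice with dig (the figure shown is only illustrative; the last line of dig's output reports the size of the reply it received):

  # plain DNS, no EDNS0: whatever doesn't fit in 512 octets is truncated (tc bit)
  $ dig +norecurse +noedns @a.root-servers.net . NS
  ;; MSG SIZE  rcvd: 436

  # with EDNS0 the client advertises a larger buffer, so bigger answers fit in UDP
  $ dig +norecurse +bufsize=4096 @a.root-servers.net . NS

The same check against your own delegation shows how close its NS set plus glue sits to the 512-byte ceiling; answers that overflow it either get truncated or fragmented, which is where the operational trouble comes in.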
A lot of people have intrusion detection systems or firewalls that, when they see fragmented UDP packets, will throw them away, because generally they think they're bad. If they throw away those packets your DNS queries will not be received. The servers won't answer what they never hear. DNS appears to be broken. These things are effectively IPv6 hostile. So you have to ask the question: how many servers can I define before I start fragmenting my UDP packets, before I will in fact hit these limits? Randy talked a little bit about this yesterday in the APOPS meeting, where he was talking about the fact that there are these operational exigencies that you have to deal with even though we want a better service. Right now this is stuff we have to live with.

So the question is, does size matter? Apparently size does matter. I've seen zones that look something like this, where I've got a delegation here and it's got a small number - in this case I have six nameservers. But the nameservers themselves have multiple records, multiple glue records associated with them. I have 18 glue records for those 6 nameservers. Will that fit into a single 512-byte packet? The answer is no. Some of those are gonna get cut off. Which ones get cut off depends on what version of software runs in the caching server or in your resolver. It's effectively random. Then it says, if I have that many and I'm gonna cut off some of them, while it's all A records I'm fine. But as soon as I turn on IPv6 and I want to move things over IPv6 transport it means I have to have AAAA records. Which of these A records am I gonna drop? Because some of those will get dropped and we really don't know which. Looking at some of the stuff that happened in BIND: BIND says if I ask a question on v4 I will answer on v4 transport; the next thing they did is, if you ask the question on IPv6, look at the answer, and if I have AAAAs I will put those at the beginning of the packet and then put the rest of the data at the end. So if truncation occurs you still get the answer you want over the transport you asked on.

Based on our testing within the Root Server System Advisory Committee we made a recommendation to ICANN that said: please add v6 glue, for TLDs only. The IETF had this recommendation, which Akira Kato wrote, that said if you have something too large truncation will occur, and here is a set of perl scripts that will allow you to calculate where the truncation happens. The RSSAC recommendation was made on December 12, 2003, and in July of 2004, after an open process, ICANN approved it. Things happened from there. The DNS response-size draft doesn't talk about IPv6 per se, because there are problems in the v4 space as well where there are too many records to fit. It basically says: when you expose a moderate or high number of authoritative servers there's a problem approaching that 512-byte limit, and here's how you can address that. If you want to deploy v6 it really depends on your current environment. Do you have a one-to-one overlap of v6 and v4? Do I have v6 and v4 at the same bandwidth and the same locations everywhere, or are they different? Then you have to look at this and say: turning on IPv6 seems to be relatively easy, but there are some subtle interactions that may impact production performance. So if you want to minimise impact on your production services while turning on v6 you might want to do something else. Answering those two questions will help. ARIN looked at this.
In ARIN's case their v6 transport did not overlap their v4 - they were different. ARIN, like APNIC, outsourced some of their stuff to different locations, and some of those locations had much better v4 than v6. So you look at the recommended dual-stack service from some of the IETF recommendations and it's not entirely practical, so you may have to virtualise it. In ARIN's case they had 365-days-a-year, 24-hours-a-day, 7-days-a-week production requirements. They couldn't afford downtime. Then they said, if we want to add IPv6 we can put that in as part of our normal upgrade life cycle; as we upgrade our hardware we can add IPv6 capability. In ARIN's case they had a co-location area with high bandwidth - a couple of hundred megabit connections in a co-lo area, not in their offices - but there was no v6 capability there, and they had a single name with an A record for most of their production service. They set up another machine in their offices, same name but with a AAAA record. This was basically 100 megabits; there was a T1 or something like that. They did the slaving of the data between those two over IPv4, over different sets of addresses. It was transparent to the Internet at large. This says that the same data is available under the same name but over two different transports. This is a virtual dual stack - or practically, it's a non-dual-stack DNS. You can have one server with both an A and a AAAA record, or you can have one server name on two different machines. ARIN picked one name with two different machines. Applications treat these things differently. BIND looks for A then for AAAA for all NS records. So ARIN looked at that and said the production requirements don't allow a single name on a single machine. Their customers want to get to the nameserver tinnie.arin.net, so naming one machine tinnie and another tinnie6 didn't make any sense; they needed to use the same name because of their customers. They couldn't control whether their customers were coming in on v4 or v6. To that second point, the v6 users should be presented with the same environment: I can't control whether you're going to use v4 or v6, therefore I need to provide a consistent environment. That's BIND - BIND looks at A before AAAA. Then they use another service, SSH, pretty popular. SSH prefers AAAAs over As. So if you're on a separate machine and you do an SSH to tinnie.arin.net, you're going to go to the machine that runs IPv6 first. During management, if you're trying to debug a problem you're going to look at the wrong log. If you're deleting stuff, you're deleting stuff off the wrong machine. This is that subtle application interaction which is problematic, or can be. If you look at this, if the machine that holds the A record - the IPv4 machine - is running other services that cannot be brought up on IPv6, you have to separate those services physically on different hardware or separate them by domain names. ARIN separated by hardware. This was part of their life cycle upgrade, so they brought on new hardware and brought up the new services there. In brief summary: new hardware is generally v6 capable; at least in North America, v6 transport from commercial vendors is sporadic - some people have it, some people don't; and you can deploy v6 stuff without impacting production services, which is important.
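A small illustration of the application behaviour described above: with the one-name-two-machines layout, whichever address family an application asks the resolver for determines which physical box it ends up on. This sketch uses only the Python standard library; the hostname is a placeholder for any name published with both an A and a AAAA record, and the BIND/SSH labels simply echo the preferences mentioned in the talk.

import socket

HOST = "ns.example.net"   # hypothetical name with both an A and a AAAA record

for family, label in ((socket.AF_INET, "IPv4 first (the BIND-style preference)"),
                      (socket.AF_INET6, "IPv6 first (the SSH-style preference)")):
    try:
        infos = socket.getaddrinfo(HOST, 53, family, socket.SOCK_DGRAM)
        addresses = sorted({info[4][0] for info in infos})
        print(label, "->", ", ".join(addresses))
    except socket.gaierror as err:
        print(label, "-> no address:", err)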
And the v6 users do not have to look like they're marginalised because they're in a separate enclave with separate names; they have the same names and the same services. So the recommendations that came out of this from ARIN were to use the latest acceptable versions of software, use the same physical media, and get in early. George has been doing that - he's talking about queries per minute, right? So the bandwidth requirements are low, so when you put services up on v6 on low bandwidth you don't have to manage that. If you look at the other implementation choice - running v4 and v6 on exactly the same machine - the presumption is that you have working v4 and v6 on those machines and, throughout your services and applications, that there will be consistent, homogeneous dual stacks throughout your organisation.
RANDY BUSH: Why? How does A lead to B?
BILL MANNING: These are different things. A doesn't lead to B.
RANDY BUSH: Why B? The purpose of running dual stack -
JOE ABLEY: Microphone. (Pause)
RANDY BUSH: The purpose for running dual stack on this server is so that queries can go there in v4 or v6, so I should be able to have v4-only hosts and mixed hosts, et cetera, throughout my organisation.
BILL MANNING: Does all of the intermediate infrastructure, bandwidth and service provider have equivalent v4 and v6?
RANDY BUSH: No, it just has to have whatever I'm using, and certainly I doubt very many people are sticking v4-only servers in v6-only networks.
BILL MANNING: That's not common these days.
RANDY BUSH: It doesn't work, is why not. I don't buy assumption 2.
BILL MANNING: OK. That's fine. Other people do. I guess it's an assumption -
RANDY BUSH: ... passed a law saying pi was equal to 3.
BILL MANNING: It's a working assumption. Another particular issue here is untested application interaction when presented with dual-stack operating systems. The BIND and SSH examples from ARIN: if I'm running dual stack, some applications may prefer v6 over v4, some the other way, so I may in fact get different results. That places unwanted pressure on production systems if you adopt IPv6 capability at the outset as a standard dual stack, because you're going to have these subtle changes and anomalies. Dual stack is in fact a legitimate way to do it if the preconditions merit what you want to do. It didn't work that way for ARIN. Coordination with others: I'm going to step away from the ARIN example and look at some of the other stuff. Remember that the Department of Commerce is one of the players with the root zone. They've come out with a public statement that says IPv6 is an important technology and needs to be carefully considered and studied as it gets deployed. There was a statement made, and an application, for adding v6 support for the .EDU TLD. This is principally because they are concerned about stability. They don't really care much about anything else as long as a stable DNS comes out of this and the end users see a consistent set of responses. Because of that demand, ICANN was burdened with documenting procedures for adding v6 support in the root zone for TLDs. ICANN did that. RSSAC made the recommendation, there was some discussion with Commerce, and ICANN picked up the token and said, I guess we have to write the procedures. Those procedures were written and approved as of 13 July 2004, six months after the original recommendation from RSSAC that said go for it. They've been implemented. You can now find IPv6 AAAA records in the root zone for at least three TLDs.
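As a quick way to see that v6 glue for TLDs really is appearing, the sketch below (again assuming the dnspython package; the TLD list is purely illustrative, not the three TLDs referred to in the talk) asks a root server for a TLD's NS set and counts any AAAA records returned in the additional section of the referral.

import dns.message
import dns.query
import dns.rdatatype

ROOT_SERVER = "198.41.0.4"   # a.root-servers.net

for tld in ("jp.", "kr.", "fr."):
    query = dns.message.make_query(tld, "NS")
    response = dns.query.udp(query, ROOT_SERVER, timeout=5)
    aaaa_glue = [rr
                 for rrset in response.additional if rrset.rdtype == dns.rdatatype.AAAA
                 for rr in rrset]
    print(tld, "AAAA glue records in the referral:", len(aaaa_glue))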
I understand there's a large backlog that is being processed to add that. If you look at the root zone now there will be AAAA records there.
DOUG BARTON: There's lots more than three.
BILL MANNING: There were the three as of KL. I haven't tracked it.
DOUG BARTON: It's going very fast. We're very happy.
BILL MANNING: Lots of AAAAs are showing up. George was talking about something down here - the root servers have a slightly different problem than the TLDs, in that the root servers answer what are called priming queries. Because they answer priming queries they have to be extraordinarily careful about what they hand back, because there is this broad range - there's probably 25 years' worth of DNS software out there that the root nameservers have to deal with. Some software, when presented with information it has never seen before, freezes the operating system, so if the root servers handed back a AAAA to something that didn't know better, it would freeze. This is unacceptable. We have to do a lot more homework before we make recommendations about adding v6 records to the root servers themselves. So we see v6 in the root zone for top-level domains; the root nameservers are at least six months out. There's a set of documents. How much time have we got left?
JOE ABLEY: Minus 6 minutes.
BILL MANNING: OK. I had a whole bunch more stuff here. So at the end of the day you have this sort of generic recommendation. The best thing we can do as DNS operators is have authoritative nameservers for every zone available over all transports. So if you have a zone and you want to turn on IPv6, make sure you do the entire zone, that everything is equitable, that the name space is the same on v6 and v4. This is to maintain coherency for the end user. The end user needs to see the same response regardless of the transport they ask the question on. It also means that full-service resolvers need to be virtually dual stacked. You have to run current software to take advantage of the... sensitivity to transport that BIND is now capable of. You might want to consider accelerating your life cycle process as you swap out older hardware and bring on new hardware, and as you bring in the new hardware, test it with v6 first. Doing it this way, putting things up and not using this other technique called bridging - which I didn't cover - makes everybody's life a whole lot easier. Otherwise it becomes a troubleshooting nightmare. Thank you so much. Tea might still be available.
JOE ABLEY: Thank you, Bill. If there are no questions - are there any questions? Comments, observations? Then thank you very much for coming. Enjoy the tea break. Thanks.
(APPLAUSE)
Time: 3.36pm