______________________________________________________________________

DRAFT TRANSCRIPT

DNS operations SIG
Thursday 8 September 2005
4.00pm

______________________________________________________________________

GEORGE MICHAELSON: Hi. Welcome to the DNS operations SIG. Just before we kick off, I've got the usual housekeeping notes. Today's sponsor is CNNIC, silver sponsor. We have just had morning and afternoon tea so I am not going to tell you about the food. If you're interested in the MyAPNIC demo, it is available all day at the helpdesk; the helpdesk is available at break times too. Please check the online noticeboard for updates. Let's kick off. We have a very brief agenda today. I'll be giving a brief review of the action items. Sanjaya is going to talk about the IPv6 reverse plan, I'm going to give updates on lame DNS, and Mathias Koerber from Nominum is going to discuss scaling the DNS. I would like to give apologies from Joe Abley, who is the real SIG chair; his normal replacement, Joao, also can't be here, so I'm chairing the SIG. I'm going to kick off with the action item review. It is simple - we don't have any. That's not strictly true, because we do have a continuing report that comes out of APNIC 16, prop-004-v001, the lame delegation clean-up. I think it is just a continuing activity. I propose a hand-over to Sanjaya, who is going to talk about ip6.int deprecation.

SANJAYA: Have you got the presentation on your laptop?

GEORGE MICHAELSON: No, I have not got the presentation on my laptop. That's just me. I prepared a slide.

SANJAYA: Hello. My name is Sanjaya. I'm from the APNIC Secretariat. I'm here to present a proposal to deprecate the ip6.int reverse DNS service at APNIC. I'll cover the background and the situation in the other RIRs, the proposal and some considerations to think about, the effects on APNIC members, some references, and question and answer.
The background - use of the ip6.int domain has actually been deprecated since 2001, in RFC 3152, written by Randy Bush. Then, in August 2005, there's a new BCP 109, written by Geoff Huston, that states the Regional Internet Registries are advised that maintenance of delegation of entries in ip6.int is no longer required as part of infrastructure services in support of Internet standards conformant IPv6 implementations as of 1 September 2005. So this RFC states that since September 2005 the RIRs do not have to provide the ip6.int reverse DNS service anymore, but it doesn't say that we must stop; it's just that it is no longer required. APNIC itself has actually stopped accepting new updates for a year already but, however, there are still some queries coming in for this domain - about five queries per minute. It's not high at all, but it's still there. If you look at the ip6.int records in our DNS, there are 54 records, 23 of which have a corresponding ip6.arpa record, so that's fine, but we still have some 31 entries with no corresponding ip6.arpa record. So basically, if we remove all the records, these are the people who will suffer.

RANDY BUSH: You have a distribution -

GEORGE MICHAELSON: Microphone, Randy.

RANDY BUSH: Do you have the distribution of the queries across those two classes?

SANJAYA: No. Sorry.

GEORGE MICHAELSON: Yes, but not ready for you.

SANJAYA: We don't have it in the presentation. George, off the top of your head, can you remember what the query rate for ip6.arpa is?

GEORGE MICHAELSON: It's in the region of three queries a minute. You mean for the specific domains?

RANDY BUSH: Yes.

GEORGE MICHAELSON: You would like to see the distribution across the specific set?

RANDY BUSH: Between these two. In other words, if no queries are coming in for the 31 with no corresponding ip6.arpa records, who cares?
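As background for readers: the mapping between an IPv6 address and its ip6.arpa reverse DNS name (the nibble format that replaced ip6.int) is purely mechanical. A minimal sketch using only the Python standard library:

```python
import ipaddress

def reverse_name(addr: str) -> str:
    """Return the nibble-format reverse DNS name for an IPv6 address.

    Each of the 32 hex nibbles of the fully expanded address becomes one
    dot-separated label, in reverse order, rooted under ip6.arpa - the
    convention that replaced the deprecated ip6.int (RFC 3152 / BCP 109).
    """
    return ipaddress.IPv6Address(addr).reverse_pointer

print(reverse_name("2001:db8::1"))
```

The 32 one-nibble labels are what make ip6 reverse zones so much deeper than their IPv4 in-addr.arpa counterparts.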
GEORGE MICHAELSON: If we can show no query traffic - we don't have a detailed analysis, but you're right, it would be easy to do this.

RANDY BUSH: I'm not complaining or asking for a refund. It would tell us if we're damaging anybody.

SANJAYA: You're suggesting we look at whether there's traffic going to those 31 records. Good point. The situation in the other RIRs - LACNIC consulted its community at its last meeting, and I think only two members were affected, and it basically got a go-ahead to cease ip6.int support, so they're now planning to stop the service. We expect ARIN, RIPE and AfriNIC will present the issue to their communities at a later date. So what's the proposal here? We propose that APNIC stop devoting resources to support the operation of this deprecated domain. The cut-off date is to be determined jointly with the other RIRs, basically. We can do it any time, because since 1 September we no longer have to provide it, but we would like to do it in good order and, if possible, in conjunction with the other RIRs. These are the steps to ensure an orderly cut-off. We would like to notify parties who are still sending ip6.int queries to our servers - we'll try to identify the parties, contact them and ask them to stop. We'll send a public announcement and then ask the ip6.int root to remove the APNIC delegation on the cut-off date. I think we were told by Bill Manning that he will need about one week of lead time to do that - one week ahead we should call him and he'll take the APNIC delegations down - and then we will remove the entries on the cut-off date and it is gone. Then we will report the project status at APNIC 21, which is at the end of February 2006. By that time we may or may not have implemented this; it depends on the coordination with the other RIRs. We will report the status at the next meeting. So what are the advantages and disadvantages?
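Randy's suggestion above - checking whether any query traffic actually hits the 31 records that have no ip6.arpa counterpart - amounts to a simple log classification. A hypothetical sketch; the log format and record sets here are invented for illustration, not APNIC's actual data:

```python
from collections import Counter

def classify_queries(queried_names, has_arpa, orphans):
    """Split ip6.int query counts into queries for domains that already
    have an ip6.arpa counterpart versus queries for orphan domains that
    would lose reverse DNS entirely if ip6.int were removed."""
    counts = Counter(queried_names)
    migrated = sum(n for name, n in counts.items() if name in has_arpa)
    at_risk = sum(n for name, n in counts.items() if name in orphans)
    return migrated, at_risk

# Toy data standing in for a day of server query logs:
log = ["a.ip6.int", "a.ip6.int", "b.ip6.int"]
migrated, at_risk = classify_queries(log,
                                     has_arpa={"a.ip6.int"},
                                     orphans={"b.ip6.int"})
print(migrated, at_risk)
```

If `at_risk` turns out to be near zero, removing the orphan records damages nobody, which is exactly the point of Randy's question.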
The advantages of stopping the service: no more confusion for network operators and end users - it has been deprecated for such a long time, I think it is time to draw the line and say, "No more" - we'll use ip6.arpa for reverse mapping domains, and this will free up some resources to handle a lot of more useful services. The disadvantage is, of course, that legacy IPv6 applications that rely on ip6.int will not get valid DNS results. Effects on APNIC members: members using legacy IPv6 protocol stacks should migrate to a version that supports ip6.arpa, and members receiving ip6.int delegation from APNIC should also cease operating their ip6.int domain. NIRs - same thing: those who have received ip6.int delegation from APNIC should cease operating their ip6.int domain. These are the references that I referred to earlier. (Refers to slide). Questions we received - actually, most responses we got after publishing this proposal were supportive of the idea, but there were still some questions about how we contact the parties querying ip6.int. Since there are only about five queries per minute, we feel there's a good possibility that not too many parties are involved, so once we identify the IPs, we can quickly tell them to stop doing it and, once we can push it down to, I don't know, probably less than one query per minute, we can safely say that if we turn it off there should be no major operational impact.

RANDY BUSH: Sanjaya, aren't those queries coming from services out there in the whole universe that people from APNIC are touching? In other words, somebody in APNIC v6 space touches a server in Alaska and that server is issuing the look-up to see who that IPv6 query comes from?

SANJAYA: Could be.

RANDY BUSH: So what you're about to do here is a survey of the distribution over the world of bad antique software multiplied by the distribution of poor Whois records? I am not optimistic.
I mean, it's kind of you to try.

GEORGE MICHAELSON: Can I make another observation here, Randy, which is that we also know some of the queries are pump priming - delegation priming queries we're seeing from recipients of domains from us. I've been in discussion with some people in Japan and, the last I heard, there was still a fairly fundamental split between corporate and government people, who have already upgraded and are quite comfortably looking up ip6.arpa, and potentially large home network communities expecting ip6.int, and I think the traffic is more the pump priming for those deployed services than it is random requests from out of region that then do a back trace. That's why we think there is some validity in doing this, because we can get to the ISPs and say, "You really have to do something." But I do think there's a kernel of truth in what you're saying.

RANDY BUSH: Track it, would you? We'll learn something.

GEORGE MICHAELSON: We have been doing a long baseline measure at a coarse level and I think it is time to do a fine-grained measure. Yeah, we'll do that.

SANJAYA: Yes, that's all. Any other questions?

GEORGE MICHAELSON: Given that the highest impact is likely to be on JPNIC members, based on what we believe we know, it would be useful to know if someone in JPNIC is going to take account of these action items and propagate them within that community. I think it is probably incumbent on Sanjaya and myself to contact the appropriate people in JPNIC and let them know, should this proposal go ahead, so we can put in place the formalities they need to notify their downstream membership about services and delegation.

SANJAYA: Any comments from the JPNIC representatives?

TOSHIYUKI HOSAKA: We have discussed this proposal within JPNIC and we think it is reasonable, so we can support this proposal.

SANJAYA: OK. So if we were to call upon JPNIC to help us identify the affected parties, you could help us do that?

TOSHIYUKI HOSAKA: Yes.
SANJAYA: Thank you.

GEORGE MICHAELSON: If there are no further comments, I would like to get a sense of consensus from this group about this proposal, so can I see a show of hands from people who are in favour of us adopting this proposal? And are there people who have concerns? Is there anyone who would like to register a concern with this proposal? Well, judging from that, I would say we have fairly good consensus from the interested parties.

SURESH RAMASUBRAMANIAN: Are you aware of any resolvers currently in production that still use ip6.int? Resolver software?

GEORGE MICHAELSON: When you say "currently in production" - if you mean currently being produced, sold, released, the head releases, the stable releases - we canvassed Microsoft, Solaris, I believe someone contacted HDBS, 3BSD, Red Hat, and all have converted their supported software platforms to ip6.arpa, but I do believe there is a substantial body of people remaining on older releases. Anybody running older Windows 2000 is likely to be doing ip6.int look-ups. This deprecation is now four years old and I think there are limits to the obligation we have to support that part of the community. OK, I'm going to give a presentation on the status of the lame DNS policy. This proposal arose from AMM 16, with amending modifications at AMM 17, so it's going back quite a bit now. The aim was to try to improve the quality of DNS service by having two points in the network from which we would test for lameness in delegated space - one in Australia and one in Japan - and implementing a test cycle over all the domain names registered through our servers. For any we find to be lame, we have a 15-day period to see that they're consistently lame. If nothing is fixed in that time, we go into a notification period and give the domain management 45 days to rectify the problem and, if that is not corrected, we disable the delegation.
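The 15-plus-45-day policy George outlines can be modelled as a small state machine. A hypothetical sketch - the day thresholds come from the policy as described, everything else is invented for illustration:

```python
def policy_state(days_lame: int) -> str:
    """Map the number of consecutive days a delegation has tested lame
    to its stage under the clean-up policy: 15 days of consistent
    lameness to confirm, then a 45-day notification window, and only
    then is the delegation disabled."""
    if days_lame < 15:
        return "monitoring"    # could still be a transient failure
    if days_lame < 15 + 45:
        return "notified"      # holder contacted, 45 days to rectify
    return "undelegated"       # delegation disabled

print(policy_state(10), policy_state(30), policy_state(60))
```

The 60-day end-to-end delay this implies is exactly the "slow time to effect a change" George mentions later in his summary.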
This policy went into implementation towards the end of 2004. We started contacting people at the end of November. We had some delay because of what we're calling due diligence: we were concerned to make sure we didn't spam the whole forest of domain holders, and we wanted to make sure the software was working well and the contact was appropriate and done the right way. Ticket tracking against the lame domains really started in the last week of November. That process of just notifying people had, in itself, very good outcomes: we had notifications being resolved seven days later. There was some initial effect, quite good. We have reached the point now where it typically takes two days after we contact someone to have a fixed result, if it is going to be fixed. Typically, the domains we don't hear back from stay the same way. We actually started undelegating domains in January 2005. But we delayed the application of this policy for admins that had more than five lame DNS delegations, which in the implementation meant they would be receiving more than five emails. We decided to write some modifications to our systems to make a better process for dealing with these people, because once you get above five, the chances are you have 25 or 55 or 125, and we really didn't think we should be sending out 125 of these emails, so we worked with the hostmaster department on a mechanism to use our ticket system to track these people. Once we got into the second quarter of this year we actually started contacting them and resolving lame issues, and this is where we started to see a really significant improvement in fixing the problem. The second thing is that we haven't yet implemented this policy for IPv6. There are several reasons for this. The fact that IPv6 is still in a somewhat disconnected state means that our testing methodology is much more likely to incorrectly identify a v6 domain as lame because of connectivity issues.
We think there are some quite interesting access control lists floating around out there in the v6 world, so that can get in the way. And there aren't that many delegations - you saw from the previous slide that we have a total of 50 or 60 active delegations. This is a space where it is a very small problem and, given that we think there are still lab and testbed deployments, we don't want to do things to people that potentially interfere with the activities we're measuring. Here are some pictures to give you a sense of the policy's effectiveness. The graph has three lines. The bottom orange/yellow line is the count of lame domains. You can see that from the midpoint of 2002 through to 2004 there were about 10,000 lame domains, and this was an increasing trend. At the beginning of the policy deployment we have a quite nice drop, which I think was probably the effect of telling people to look at the problem. Then things were creeping up again. When you reach the point where we actually start implementing the policy and disconnecting, you can see there's quite a radical drop until we reach a baseline of around 5,000. The red line is the count of domains that have no errors. There's a bit of a gap here because we had problems with the measuring systems. During the lifetime of the project, we actually had an increase in the total number of registered domains, which is natural because we have been deploying more network resources. That is a slightly steeper line than the rate of lame increase. If you want to draw an analogy with routing goodput, the number of valid domains has increased and the number of lame domains may also have been increasing, but not as quickly, so overall things were getting better. The blue line is the increase we have been able to achieve since we actively deployed this project and got rid of lame domains. That has given us a quite significant improvement, so the total number of valid domains is around 55,000.
With 5,000 lame domains, that means the total is around 60,000. An interesting question is why we have such a persistent body of lame domains that we can't clear up. I am prepared to be corrected by Terry, who actually implemented the systems doing the checking, but I believe the reason we have this situation is that the very large number of lame domains we're left with are unstable: they are lame for maybe seven days in 14, then they come up again, and a few days later they go lame again, and because the policy is built around consistent lameness, it is very hard to deal with the problem, because the meter resets to zero once we see them valid again. These are the domains that are unstable and don't have reliable DNS or have some kind of problem. It seems scary that there are 5,000 of them against 60,000 delegations but, as an error rate, maybe this is just the way it is for the time being until we can resolve the issues. We might need to think about providing more DNS secondary services or assisting people to get more stable DNS. Another view of this is to look at the percentage. I focused on the 2004 to 2005 period: the black line is the corrected error rate, so this is the number of domains that are lame, and the point where the red line breaks off is where we start to apply undelegation. The red line is the underlying uncorrected rate. If we hadn't undelegated these domains, this is the amount of lameness we would have had, somewhere around 15% to 16%. The effect of the policy is to reduce this below 10%. Sorry about the colours. The green line has an initial sharp step, which is the first batch of names we were able to remove, then you have the period where we were re-writing the system to start coping with the greater-than-five cases, and then another sharp knee where we start to apply the policy.
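The flapping behaviour described above - a domain that is lame for a week, recovers, then goes lame again - never accumulates the unbroken run of lame days the policy requires, so it never reaches undelegation. A hypothetical simulation of that meter-reset logic:

```python
def max_consecutive_lame(daily_results):
    """Given per-day lameness test results (True = tested lame), return
    the longest unbroken run of lame days. The policy meter resets to
    zero whenever the domain tests valid again."""
    longest = run = 0
    for lame in daily_results:
        run = run + 1 if lame else 0
        longest = max(longest, run)
    return longest

# A flapping domain: lame seven days in every 14, over about two months.
flapping = ([True] * 7 + [False] * 7) * 4
print(max_consecutive_lame(flapping))           # peaks at 7, under the 15-day bar
print(max_consecutive_lame([True] * 20) >= 15)  # a consistently lame domain qualifies
```

This is why the persistent 5,000-domain residue survives: flapping domains stay under the 15-day consistency threshold indefinitely.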
You have one sharp knee and another sharp knee, and then you pretty much have the stable region where we're delegating and undelegating as we detect a problem. This is likely to be the future state of the problem. There is a steady state, a background level of domains we're likely to go on undelegating. I hope we can do better, but I suspect this is how it's going to be. Some feedback we had from the hostmasters - this is really a repeat of information presented at the last meeting. Most of the problems and resolutions were about what happens to a company over the lifetime of holding the resource. You don't interact with reverse DNS often, so chances are you forget the password to manage the system, and until we get back in touch with people there isn't much reason to recover it. The process of getting in touch with people triggered them into admitting they didn't know how to manage the space, the staff had moved on, they had lost the password. We were able to help with that. A lot of people don't know how to make their nameservers authoritative; we had to give information on how to run authoritative nameservers. We're left with the reliability problems, if you like - flapping of DNS status. That's going to go with the consistent background lameness I think we're going to have to deal with. To summarise, we have taken lameness at about 18%, brought it back to 16%, then got it to 8% once we took care of the people with a significant number of lame servers. It takes about 60 days to effect a change if you look at the policy - 15 plus 45 days. This means there's an ongoing process here with a slow cycle time.

EDWARD LEWIS: Is that the percentage of zones that are lame or servers that are lame?

GEORGE MICHAELSON: Zones. There are actually interesting figures on which lame servers are serving lame data. You're right that you can do interesting things there - you can see one nameserver that is lame for hundreds of domains - so there are some issues in tackling this from the nameserver view.
So IPv6 has yet to be included but, overall, I think we can say this policy has been effective. However, if you take away the effect of actively undelegating, we're left with a 16% base uncorrected lameness problem, so there's an ongoing support, training and communication issue we have to follow up in the community. And, apart from Ed, are there any questions?

MATHIAS KOERBER: This is for domains that are already delegated. Do we have similar statistics on reverse domains that should be delegated - that never had reverse delegation set up but are actually actively used on the net?

GEORGE MICHAELSON: No, because although it's unpleasant, they are not technically lame. If they've never been delegated, they terminate quickly with the correct NXDOMAIN high in the tree, and we're not currently counting how many people are attempting look-ups in those domains. I have noticed the number of NXDOMAINs has risen quite markedly. It is possible that's a side effect of undelegating people.

MATHIAS KOERBER: Do you have any other statistics on those things? I have noticed in the last two years, especially in Malaysia, that a number of prefixes are actively used but don't seem to be getting reverse delegation from APNIC to the actual provider, for whatever reason, which creates some end-user problems.

GEORGE MICHAELSON: We haven't been very proactive as a registry in encouraging people to do reverse DNS. The hostmasters actually do a lot when they register new resources - they are quite proactive in saying, "Here's how you do reverse DNS" - but we don't have an exercise to go back to existing resource holders and say, "How about it?" That's correct. That's not something we're doing. If you're saying there's an interest in figures there, maybe that's something we should do something about.

MATHIAS KOERBER: You only need a nameserver to see the NXDOMAIN queries to show the problem.
GEORGE MICHAELSON: If you think that would be of general interest to the SIG, I would be happy to take an action item to look at more figures on this and report back.

MATHIAS KOERBER: OK.

GEORGE MICHAELSON: OK, so moving on through the agenda, we now have Mathias, who is going to talk about scaling the DNS.

MATHIAS KOERBER: My name is Mathias Koerber and I'm from Nominum. We were encouraged to make this part of the SIG even though it is not really a policy item. I apologise if people think this is inappropriate. I'll try to keep this short. Quick introduction - my name is Mathias Koerber. I'm a senior consultant engineer with Nominum. I've been with SingNet and affiliated with SGNIC in the past. I've been with Nominum before, dropped out for two years and recently rejoined. I have a few slides on Nominum; you will find them in the handouts with the presentations, so I'm not going to go into them in detail. This is just for background. Our work - we're providing high-performance DNS as a commercial product. We find that quite a number of recent and not so recent developments are posing increased demands on the DNS, for example higher query loads, among other things. Part of this is due to legitimate reasons, part because of mistakes and accidents, and part due to malicious abuse like viruses spreading, people trying to get into networks, and denial-of-service attacks. So, on the malicious side we have spreading viruses, phishing attacks, and denial-of-service attacks against the nameservers and other parts of the networks. We often see that misconfigurations create quite a large load on the nameserver infrastructure - I think one of the most common examples is escaping SRV updates from Windows clients. Some zones have unnecessarily low TTLs, so clients have to requery the data even though the data never changes. Legitimately, there are lots more links, which creates load - for example, my e-mail client queries things, tries to download new items and creates DNS queries for all of that.
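The low-TTL point above is easy to quantify: a record's TTL caps how long a caching resolver can reuse an answer, so for a constantly requested name, halving the TTL roughly doubles the upstream query rate. A toy illustration (the numbers are invented for the example):

```python
import math

def upstream_queries(ttl_seconds: int, window_seconds: int) -> int:
    """Minimum number of upstream lookups a caching resolver must make
    to keep one constantly requested record answered over a time window:
    one fetch per TTL expiry."""
    return math.ceil(window_seconds / ttl_seconds)

hour = 3600
print(upstream_queries(3600, hour))  # TTL of one hour: one fetch per hour
print(upstream_queries(60, hour))    # TTL of 60s: 60 fetches for unchanged data
```

This is the mechanism behind Mathias's point: needlessly low TTLs multiply query load even when the data never changes.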
And especially with e-mail, we find that RBL checks - just legitimate checks - virus checks, spam checks, all of those create quite a high DNS query load. Here are more statistics: 75% of ISP DNS requests are MX-related, and each one of those triggers additional DNS lookups for all these checks. The effect of this on caching nameservers is markedly increased memory use and increased CPU usage. We've seen at a number of operators that nameservers tend to go to nearly 100% CPU during the peak times, and that also translates into low cache efficiency, which translates into a higher rate of dropped queries and higher latencies, which translates into worse customer experiences and less available headroom to deal with high-traffic situations. We also see that the data registered in the DNS is markedly increasing. Multilingual domains are being added in addition to English ones. New technologies - IPv6 has much larger resource record sizes than IPv4, for the address records alone. Zone depth - in ENUM you have zone depths of up to 13 levels down. DNSSEC markedly increases the size of the zone in the DNS and, for applications like ENUM, if you have a few hundred thousand phone numbers in a country and you want to put them in ENUM, you wind up with up to 7 million resource records that need to be served. This creates larger zones, larger resource record sets and larger queries, and again reduced caching efficiency, higher latencies and more recursion, and verification of glue and DNSSEC takes a longer time. Authoritative nameservers have higher memory requirements. In quite a number of cases we find that, if you grow your zone because you put lots more data in it, your nameserver may not be able to load that data into memory any more and you have to add a number of additional nameservers. It also means that starting up or restarting your nameserver after editing takes longer, which causes outages. Zone transfers to secondary nameservers will grow.
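The ENUM zone depth mentioned above comes straight from the ENUM naming convention: each digit of an E.164 phone number becomes its own DNS label, reversed, under e164.arpa. A minimal sketch (the example phone number is made up):

```python
def enum_domain(e164_number: str) -> str:
    """Convert an E.164 phone number (e.g. "+6565551234") into its ENUM
    domain name: digits reversed, one label per digit, rooted under
    e164.arpa. One label per digit is what produces the very deep zone
    trees Mathias describes."""
    digits = [c for c in e164_number if c.isdigit()]
    return ".".join(reversed(digits)) + ".e164.arpa"

print(enum_domain("+6565551234"))
```

With one label per digit, a national numbering plan of ten-digit numbers yields names a dozen labels deep, and a few hundred thousand subscribers yield the millions of resource records the talk mentions.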
Incremental zone transfers mitigate this somewhat but, if a secondary needs to catch up from an older zone, it has to fall back to a full zone transfer, and that can take some time and load the network. All this translates into reduced performance. We also see a lot of frequent updates: increased use of dynamic DNS, mobile clients and DHCP servers, and registration systems are now using dynamic DNS or similar processes to automatically update new records into the DNS, which then need to be propagated to secondary nameservers via zone transfers. This is due to faster domain registration - all the NICs out there would like to be able to sell a domain and make it immediately active rather than waiting for a once- or twice-daily update. Number portability for ENUM creates similar issues. This creates increased demands on master nameservers: prerequisite checking, integrity checking and, if you do larger updates, atomic updating of changes into zones. The nameservers need to do this faster. Frequent reloading of changed zones creates possible service interruptions and memory pressure - most nameservers keep a second copy of the new version of the zone until it has been properly loaded, and only then will they throw the old zone out. That means you need about twice the memory just to be able to serve the zones. There is increased master-to-slave traffic: NOTIFY increases the volume of updates, incremental and full transfers will increase and, if you introduce lower TTLs, that creates additional traffic on the query side. New technologies like ENUM and SIP demand fast connection establishment, which means that the DNS must be able to resolve those queries very fast. We receive requests for DNS resolution times of less than 5 milliseconds, which most nameservers these days cannot even handle. DNSSEC verification also takes time. If you even think about combining the two technologies, you need to be very, very efficient and the infrastructure needs to be very well designed. So what solutions do we have?
You can add more caching nameservers. You can split your customer base across several servers, which means you have to lay out quite a bit of hardware and administrative effort to keep it running. You can use anycast to support this. You can use layer 4 switch infrastructure. There are possibly some other solutions. The disadvantages are higher cost, administrative overhead, and resolver limits - for example, a normal Windows resolver can only handle two nameservers in its configuration; UNIX can usually only handle three. And there are optimised nameservers, which is where I have to put in my plug. Our nameservers at Nominum are very much optimised; we can provide several orders of magnitude higher performance. But we feel that we haven't yet looked at all the issues that the DNS is going to have in the future, where we can improve our products. Here is just a graph of latency for our ANS product versus BIND. If you want to talk to us in detail about the graphs and so on, you can talk to me later, or to Matt Daly, who is in the audience at the back. Here's a slide comparing BIND and ANS for ENUM. If you do large data sets, you already need either lots of machines or improved nameservers. So we have identified and/or resolved a number of problem areas. We are sure there are others, existing now that we may not be aware of, or that will arise due to new technologies - and certainly regional requirements or regional idiosyncrasies. We don't know what those are and that is why I'm standing here. I would like to ask: what other areas do you expect to create a crunch in the DNS? Increased registrations - I can think of interoperation, I can think of issues like internationalised domain names. Look at Korea: they have lots of English ASCII domain names and, in the past, a lot of Korean domain names. If the same thing happens in China, the amount of data in the DNS is going to grow.
Is that going to prove a problem for DNS operations in the region? And to what extent? So I would like to find out, either in this session or in separate face-to-face discussions afterwards, your experiences and your concerns, now and in the future. Are there any other areas where the DNS is going to crunch that we could possibly help with? What expectations would you have of solutions that could mitigate this? Are these concerns Asia-Pacific specific? How should that be addressed, and is this something that should be discussed in this SIG in future, or possibly in another interest group focused more on DNS technology? Is this possibly something for APRICOT? I'm basically standing here just asking, "Please tell us where your concerns are." I will be around all day tomorrow during the APNIC Member Meeting. You can pull me aside for any discussions; we can have a coffee or a beer on the side and discuss. I believe the same will be true for Matt Daly, our regional sales director for Asia, who is sitting in the back there. Or you can drop me an e-mail or call me on this number (refers to slide). I'm willing to talk to you at any time - ideally not at night, during the daytime, but since I'm based in Singapore that should not be much of a problem for the Asian community. That's basically my part of the presentation. So I would now like to throw the floor open for people who have any input, if you want to ask questions or provide input in this forum. If not, I'm very willing to take it offline and do it on a one-by-one basis. Thank you.

GEORGE MICHAELSON: The microphones are open.

SURESH RAMASUBRAMANIAN: Not exactly DNS-related, but it would be of great - as you said - most of the zones now have instant updates at the root. Once you create a domain, it's online within the next few minutes, almost. The problem is that Whois updates are currently lagging - it's maybe done once or twice a day for some zones.
That is giving us at least some security problems, where we have got spammers who create a domain, spam and then throw the domain away. Is there any initiative around to have the Whois database keep pace with the rapid update of DNS? And the second thing - a lot of the things you mentioned about transfers for large zones being slow and zone loading taking a lot of time - that puts a limit on the zones stored in a database, for example. Using a fast database format like CDB, or even SQL as some nameservers do, seems to solve a lot of this.

MATHIAS KOERBER: I can't say anything on the Whois policy; I'll leave that to the rest of the floor to answer. Regarding databases for nameservers: our nameserver is using a database back end, but the problem with normal databases is that they make nameservers slow. If you put this information into a database, you create the problem that every lookup has to go to the database, and the performance will again decrease. Our nameserver uses a proprietary database that is very fast and overcomes most of those problems, but normal Oracle-style databases will not solve that. So it's not a universal solution. Does anybody have any input on the Whois question - whether there is any Whois integration for keeping Whois updated to match the fast registration of domain delegations? Because I definitely don't know; I don't run the operations.

TERRY MANDERSON: Hi. My name's Terry Manderson. I'm the senior SysAdmin for APNIC and generally responsible for the majority of systems at APNIC, including the DNS servers. We have a process in place at the moment that is evaluating our registry process, which includes generation of information into Whois and also information into DNS. We are investigating the likelihood and stability of generating dynamic updates from the registry into DNS. This is still a work in progress and, until sufficient work is done, it will stay as is for now.

SURESH RAMASUBRAMANIAN: Thank you.

TERRY MANDERSON: You're welcome.
MATHIAS KOERBER: Are there any other concerns or comments regarding future DNS issues that could prove problems - or new technologies and developments that could create problems for the DNS, especially in performance - that we want to throw on the floor? CHAMPIKA WIJAYATUNGA: There's a question on Jabber. GEORGE MICHAELSON: While he's waiting for the typing to finish, I'd like to make an observation, which is that I think we've tackled the root very well. I think the considerations of reliability and service availability at the top of the tree are essentially a non-problem now. In this region, we've had three years of investment. We now have 16 root server instances that we've directly supported and another nine that have happened anyway. So the total amount of root coverage in the Asian region is already more than previously existed when you just had pre-anycast roots in America. So the top of the tree is getting good attention. I think the country-code domains are getting good attention. There was quite a lot of discussion in the DNS Ops working group at IETF about raising the bar on significant domains. In fact, the observation is that it's getting harder for people to offer casual services to assist country codes, because ICANN now expects that they implement full service delivery with high reliability for all of their secondaries, and they're quite uncomfortable if even one secondary closes down - which, in some ways, I think, is overkill, because the impact of a temporary outage of one secondary is really quite contained. That's why you have multiple nameservers. Persistent non-availability is obviously bad. But that's a slightly separate comment. It does seem to me that, if you get further down the tree, into the large ISPs and backbones that are providing resolver services, that's really where the problems lie now. The provision of authoritative nameservers isn't such a problem for most people, but the query side, I think, still has many concerns. MATHIAS KOERBER: I would mostly agree.
I think that, with technologies like ENUM, we will see large authoritative nameservers being required at large service providers like telcos, possibly on a national level, which will raise the bar on the provisioning side. In most cases, currently, the crunch seems to lie on the caching side, especially for larger networks, which is one of the areas that I'm interested in. Champika, do you have the question? JABBER QUESTION (Read by Champika Wijayatunga): This is basically an observation, not really a question. What he says is, "In your presentation, you outline the driving factors behind the growth of the usage of DNS. Since these factors have increased quickly, almost exponentially, over the last 10 years, it's very hard to properly guess what the traffic will stabilise at and, thus, the scale of any DNS server infrastructure. So, at the moment, DNS operators are playing catch-up and have no other alternative than to keep throwing iron at the problem. Once it all stabilises, then we'll be able to decide where the general direction of DNS should be headed. But apart from that, the presentation was very self-evident." That's what he says. MATHIAS KOERBER: I would like to add to that that the comment about keeping on throwing iron at the problem seems to me to be the wrong thing to do. You stay in the catch-up mode of operation. That being said, even while the operators throw iron or other solutions at these problems, new technologies keep moving ahead, and solving this problem only as it currently stands, rather than looking at the future, is possibly the wrong thing to do. So this is the reason why I'm standing here and asking - we're not asking for specific proposals on how to solve something. We are asking right now, "Where do you see the concerns coming from?"
Even if you just see a black cloud on the horizon and say it could possibly create a problem, we would like to hear about it, so we can investigate and see whether we work in-house in our development or with other partners - with the IETF, with the operators, with all the players - to see what can be done to keep the DNS performing without having to fall back on throwing more and more iron at problems, which only makes the problem even more complex. I'll say thank you for that comment. I agree, but I think we should not only solve the current problems now. We need to look further and decide where the crunch areas are coming, so we are prepared when they hit us. OK. So, without any further questions or comments, I'd like to say thank you. I'd like to make one housekeeping comment about our iPod draw. We have an iPod lucky draw, which will be drawn tomorrow after lunch. On some of your desks, you should still find the entry form. If you haven't done so, please fill in your contact details and drop the form into the box at the registration desk. Tomorrow we will have the draw for the iPod, and we would like to encourage you to participate in that. Thanks. GEORGE MICHAELSON: Well, that brings us to the end of our agenda. So, unless there are any other matters, I'd like to call this SIG closed. Suresh is coming to the microphone. SURESH RAMASUBRAMANIAN: I'm not asking a question. GEORGE MICHAELSON: You're still allowed to talk if you'd like to. SURESH RAMASUBRAMANIAN: No. GEORGE MICHAELSON: OK. Then I think we can take this as closed. I will take the action to provide some more detailed statistics on the breakdown of ip6.int lookups for Randy. I might go offline and talk to him to get a sense of the kinds of things he'd like measured, and also follow up Mathias's observation about undelegated domains, and hopefully APNIC will be able to present something at the next meeting. Thank you very much for your attendance. APPLAUSE
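The statistics George takes as an action - splitting ip6.int queries by whether the queried domain also has a corresponding ip6.arpa delegation, as Randy asked earlier - could be gathered with a simple tally over the nameserver's query logs. A minimal sketch, in which the log line format, the parsing and the delegation set are all assumptions for illustration, not APNIC's actual data:

```python
# Sketch: classify ip6.int queries by whether the queried domain also
# has a corresponding ip6.arpa delegation (per Randy Bush's question).
# The log format and the delegation set are illustrative assumptions.
from collections import Counter

# Domains under ip6.int whose counterpart also exists under ip6.arpa.
has_arpa = {"8.b.d.0.1.0.0.2.ip6.int."}

def classify(query_log_lines):
    counts = Counter()
    for line in query_log_lines:
        # Assume the queried name is the last token on each log line.
        name = line.strip().split()[-1]
        if not name.endswith("ip6.int."):
            continue
        # Walk up the labels to find a delegated parent zone, if any.
        labels = name.split(".")
        matched = any(".".join(labels[i:]) in has_arpa for i in range(len(labels)))
        counts["with ip6.arpa" if matched else "ip6.int only"] += 1
    return counts

sample = [
    "query: x.8.b.d.0.1.0.0.2.ip6.int.",
    "query: y.f.f.0.1.0.0.2.ip6.int.",
]
print(classify(sample))
```

If, as Randy suggests, the tally showed no queries landing in the "ip6.int only" bucket, removing the 31 entries without ip6.arpa counterparts would affect nobody in practice.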