DNS operations SIG, Thursday 26 February, 2:00 pm - 3:30 pm

JOE ABLEY: OK, hello, everybody. Welcome to the DNS operations SIG. There are a couple of little operational things to talk about first. I have to promote the onsite notice board, which is a wonderful thing. Jabber is available; if you want to participate using Jabber, you can. This evening is the APRICOT closing event and we are to gather in the lobby of this hotel at 6:00 pm.

In the meeting today, we have a brief summary of an open action item for this SIG, which I will give in a second. After that, we have five presentations. We have one extra one that's not on the web page: New Zealand Registry Services are going to give us an update to start with. Then we'll have an update on K-Root, a presentation about F-Root deployments in the Asia Pacific region, George Michaelson will give an update about lame delegation handling in the Asia Pacific, and then Joao Damas will give an overview of OARC.

It's also, I suppose, a good time to mention that we have a mailing list for this DNS SIG. I think, in the last 12 months, we have had approximately zero mailings on the list. No-one should be frightened about using it. It's a friendly, if silent, list. If anybody has any questions at all - training-type questions, opinions about how things might change, any questions at all relating to the DNS - it's a very good place to ask them.

The first thing on the agenda is a brief statement of the open action items we have. There is one action item, DNS-16-001 - "APNIC Secretariat to implement proposal 'lame delegation cleanup revised'" - and the status of this action item will be covered in George's presentation, which will be the last one this afternoon in this SIG. So that's it for the open action items.

The first presentation we have today is Nick Griffin from New Zealand Registry Services, who is going to give us an update on New Zealand Registry Services.

JOE ABLEY: We apologise for this disruption in service.

NICK GRIFFIN: Good afternoon. I'm going to try to run through this reasonably quickly. There are a few slides here and I'm happy to take questions either during or after the session. I've pulled together, at fairly short notice, an outline of the policies and the structure that we operate under in New Zealand - and I have to blame a few beers and being asked to do this late last night.

Here we go. Around 2000/2001, Internet New Zealand undertook what's now known as the Hine Report, where they reviewed the whole structure of how the .nz model should operate, and that is pretty much how we operate today, as defined in that report. NZRS is owned by Internet New Zealand; we were established in 2002 and we went live in October. Internet New Zealand funded the development of the SRS and it was well under way prior to the establishment of NZRS. Today we've got 145,000 names, so we've grown reasonably quickly for us in that time.

Now, I will admit I've stolen a whole lot of the domain name commissioner's policy notes here. There is no legislation in New Zealand regarding domain names. We operate the Shared Registry System, as it says. The registrars - of which we now have 44, and more are still signing on, not quite every day but fairly regularly - are governed by a series of policies which they have to sign up to. The domain name commissioner maintains those.
The main thing here is that Internet New Zealand set up a committee called the .nz oversight committee, the NZOC, which has the responsibility for the .nz space. The domain name commissioner reports through to the NZOC and is an operational arm of Internet New Zealand. We, NZRS, are a subsidiary of the Internet Society and we have an SLA with the domain name commissioner for performance. So this is briefly how the three organisations or groups hang together. The NZOC are not solely Internet New Zealand members - they are independent people from government, from society and outside, the Consumers' Institute - so there's a wide spectrum of knowledge and experience at that table.

The .nz space, as you'll see there, is a three-way responsibility under Internet New Zealand. I report through to her, and the registrars have the authorisation agreement; they are not allowed to come on to the SRS until they are authorised by the domain name commissioner. Once they have done that, we get them to sign a connection agreement with NZRS and we let them start to play in our test environment.

The domain name commissioner, Debbie Monaghan - she's got responsibility, as I say, for authorising registrars and for compliance checks. She's just brought on another person whose sole job is doing compliance; that particular role has grown over time and now she's got somebody doing that. And developing policies, obviously: starting at ground zero just over two years ago, she defined the policies and she's now in the process of reviewing each of them over a 12- to 18-month window. She handles complaints - complaints from registrars and from NZRS.

And one of the points is protection of registrants' rights. One of the things in the New Zealand market is that the SRS and the policies are quite strongly focused on protecting registrants. We have a registrar advisory group made up of six of the registrars, who get voted on by the registrars. You can transfer at any time from one registrar to another. There is no cost to move away from a registrar; there may be an entry fee to go to the new one. They have to provide what's called a UDAI, which is your identification, and that allows you to move, and you can ask them at any time for a new one if you desire.

If you say you want to take a name for six months, and that's within our normal terms, then, as soon as the registrar takes your money, they have to pay our organisation and we hold that in account accordingly. Again, that is to help protect the registrant's rights.

Domain names renew automatically. This helps the registrant: if a registrar has had a problem for a week - which has happened on at least one occasion that we know of - that registrar's names, if they were due for renewal, will not fail and will not fall off the DNS. Again, that's another one of the protections.

There are restrictions on when a domain name can be cancelled. We have a 90-day pending release: if you cancel your name, you can get it back within 90 days and, after that, it goes to the general pool and anyone can claim that name.

There's been quite a strong push - and the domain name commissioner has had various quite heavy to-dos with various registrars over the time, less so now - over who actually owns a domain name. That's pretty much all sorted now. It's very clear that the registrant takes a name and it belongs to them; it does not belong to their registrar.
And, even if a name is gifted to you by an organisation, there needs to be some evidence that they had the right to it. She approves all the terms and conditions - or the minimum terms and conditions - for each of the registrars. They can have other terms and conditions as well.

You can take a name for anything from one month to 120 months - that's 10 years - and we have a minimum billing period of one month. The majority of the registrars are still doing 12 months, but there are quite a few that are actually doing anything from one month right up to 120. I didn't have a chance to check how many monthly registrations we've got at the moment - it's less than 1% of all the names - but I think it is a facility that some of the corporates will be using.

A registrar does not have to accept your transfer. They can refuse you - you might not comply with their code of ethics, for example, or for whatever reason. They don't have to. And sanctions - I don't know whether any sanctions have been imposed, but the ability of the DNC to impose sanctions is there. And she's covered the marketplace for liability insurance, so that's something that she's been able to do for all the registrars.

We have an SLA with the domain name commissioner, with a system availability of 99.9%, and we've achieved that in nine out of the previous 12 months. The SRS has not failed us in any way. I have a lot of reporting that I have to do to the domain name commissioner, which then goes to the NZOC, etc. We also publish a newsletter every month with our outages, performance and statistics - it's a joint NZRS and DNC newsletter and, if you want it, I've got the web page for that.

The NZOC, as I said earlier, has the delegated responsibility and they set the policy. The domain name commissioner obviously writes it, they approve it, and they will approve any sanctions before they are undertaken.

I've just thrown this one in at the last minute. The SRS operates out of Albany in Auckland and also at one of the sites in Wellington - that diagram is actually not quite up to date; we've got an additional line between Auckland and Wellington in there as well. With the SRS, a transaction has to be updated on a minimum of two of the servers before it is completed. We have three online at any given time and we've got one spare as well. That gives us redundancy and, I think, the robustness of the system and the architecture has proven to be quite good. We've got servers in Auckland, Wellington and Avalon, plus UltraDNS. As I said, we've got system availability of over 99.9%.

Last year, we launched the 2LD .geek.nz. That was the first 2LD we had launched on the SRS; it was extremely successful and we're very pleased with how that went. We're looking at knocking on the door of 150,000 names before too long. We've got 44 registrars in production and, in January, we made a joint announcement that the domain name fee is going to drop from NZ$2 to NZ$1.75, coming in on 1 July. That's a sign from the three parties involved that we've actually achieved many of the things that were requested or desired when we started this operation of the SRS and, as things go, hopefully we will continue that, but we'll see. It's obviously cost and market driven.
And, on 30 January, we also completed another one of the goals that was defined when Internet New Zealand undertook the change to a competitive marketplace. They had the SRS designed using Open Source technology and tools, with a strong desire that the SRS itself then be made Open Source as well and, on 30 January, we did that - it's out there now. For those people that can't get to it, I've got a couple of CDs as well.

Looking forward, we're going to continue modifying the SRS to meet registrar requirements and any changes in policy that may come up, and tuning it, of course. We're currently going through an RFI/RFP process for all our outsourced services, we're preparing to add IPv6 glue to the SRS, we'll be implementing TSIG and I've just presented a paper to the Internet Society on DNSSEC. The domain name commissioner currently has a dispute resolution process and a 2LD under review and, on Tuesday, I think, she put up a zone transfer policy review.

Q: The 2LD policy is also up for review, isn't it?

NICK GRIFFIN: Yes, I got this from her website. I've actually put up all the relevant websites. The domain name commissioner maintains all the policy. Internet NZ has a much wider brief, of which .nz is one part, and NZ Registry Services - that's us. And the Open Source repository is down at the bottom there; if anyone wishes to go there and pull down a copy, feel free. And add to it as well, of course. That's it.

JOE ABLEY: Excellent.

NICK GRIFFIN: Any questions?

ABHISAK CHULYA: I noticed that in October 2002 you had one registrar and 116,000 names. Now that you have 44 registrars, you're only up to 145,000. It seems to me that, with so many more registrars, you should have many more names. Is there a reason why you needed to expand to so many registrars for such a small increase?

NICK GRIFFIN: Because the role of the domain name commissioner and NZRS is not to go out and intentionally grow the market - that's not our role. The desire was to create a competitive marketplace. Any organisation can apply to become a registrar. There is a low entry fee - you make a $2,000 one-time payment to the domain name commissioner and she then works through the process to see whether you meet all her requirements. Unlike some of the other registries, some of our registrars are very small and run with what I would call a social goal. There's one particular one, for example, of whom I often say that, if he's in his milking shed, he can see all his customers - he's just dealing with his village, if you like. We've got one who has only got 20 or 30 names but he's providing a social service. We've got a few of those and then, of course, we've got the larger ones - the telcos, the ISPs, the standard-type players.

JOE ABLEY: Any other questions? Excellent, thank you very much. Next up, we have a presentation about K-Root operations.

ANDREI ROBACHEVSKY: Good afternoon. My name is Andrei Robachevsky. I'm from the RIPE NCC. You probably know that the RIPE NCC is the operator of one of the root servers and that's why I'm making this presentation. Before going into the operational update, I'd like to go briefly through the root server system in general. Many of the things I will say are probably well known to you but, nevertheless, I will go briefly through them and then I will go into more detail about the operation of the K-Root server. So what is the root server system?
Well, the root server system provides name service for the root zone, and this name service is provided by 13 nameservers operated by 12 operators, I believe. 13 is a kind of hard limit, and this is related to the fact that DNS responses are restricted by default to 512 bytes; if this limit is exceeded, most clients will retry using TCP, which carries an overhead and results in lower performance of the service, and that is definitely not desirable for a root server.

All root servers are equal. The only thing that is different between them is their letters - the first letters; they share the same suffix - but, in terms of DNS, they're all equal. And though more than 50% of our clients come to us less than eight times per week, no name resolution can start from scratch without contacting one of the root servers.

So who are they, these mysterious root server operators? Each of them started before 1997. You can, for instance, go to this website - www.root-servers.org - and see much more.

Well, this is a pretty outdated slide, I would say. It presents the pre-anycast era, where you can see 13 primary locations - I say 'primary' because, right now, we have many more global sites; I'll demonstrate this later. As you can see, 10 primary root servers are located in the United States, two in Europe and one in the Asia Pacific, in Japan.

One slide on the evolution of the root server system. Since its deployment in the mid-'80s until approximately 2000, the system was comprised of two main things. One was the primary server, which was renamed a.root-servers.net, and there were 12 secondary servers. Changes to the zone were introduced by loading them into a.root-servers.net and then distributing them to the rest. That's this slide. The new, enhanced system was completed by 2002 and the main enhancements were, first of all, a hidden distribution master, so all 13 publicly visible servers became equal and the distribution master distributes the zone contents. Access to the hidden master is restricted to the 13 servers that I mentioned. Everything comes from the distribution master and, in principle, any server can fall back to its neighbour to update zone contents in case the distribution master is unreachable.

The system we have now is notable for wide deployment of anycast. What is anycasting? Anycasting can be described as a flow between a client and the nearest of a set of servers identified by the same IP address - as opposed to unicast, where the flow is between a client and a single server, or multicast, where the flow goes from a single location to multiple locations. Anycast allows you to clone a server. This is very important, so I highlighted it in red: each of the server's clones has the same operator, the same IP address belonging to that operator and identical data. If these conditions are not met, it's not a clone.

These are the benefits. One immediate benefit is very simple: this technology doesn't require any change at the infrastructure or application level. At the same time, it brings very important benefits, especially for the root server system. The first one is distribution, because you remember this limit of 13 that didn't allow expanded geographical distribution of root servers; with anycast, if we are talking about clones, we don't have this limit. It's resilient - in the first place, resilient against attacks. And, of course, performance, because the flow goes to the nearest server and therefore gives the lowest latency. And redundancy, of course.
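[To make the 512-byte point concrete, here is a minimal sketch - an illustration using the dnspython library, not something from the presentation - that sends a plain UDP "priming" query for the root NS set and reports how large the answer is and whether it was truncated. The 13-server limit exists so that this response, names and glue included, fits into a single small UDP packet.]

```python
import dns.flags
import dns.message
import dns.query

# Plain UDP query for the root NS set, without EDNS0, so the classic
# 512-byte UDP limit described above applies.
query = dns.message.make_query(".", "NS")
response = dns.query.udp(query, "193.0.14.129", timeout=5)  # 193.0.14.129 is K-root

print("response size:", len(response.to_wire()), "bytes (classic UDP ceiling is 512)")
print("truncated (client would fall back to TCP):", bool(response.flags & dns.flags.TC))
for rrset in response.answer + response.additional:
    print(rrset)
```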
Well, this is kind of an animation. This is the pre-anycast era and this is the anycast era - and this slide is outdated because it shows the end of 2003; now we have many more servers and more are coming. Already you can see the geographical spread of root servers has significantly increased.

These are the key milestones for K-Root. The RIPE NCC started operating this server in 1997. At that time, it was placed at the LINX, which was, and still is, a very good point in London. It has been running NSD since February 2003. That decision was made because NSD is high-performance software but also different from the software run by other root servers, so it increases software diversity in the root server system - and, in fact, diversity is a key factor for the system, starting with the root server operators and ending with software diversity. We started deploying anycast in July 2003, when an instance was deployed in Amsterdam, at the Amsterdam Internet Exchange. This year, we plan wider anycast deployment, with three to five global nodes - and we're looking at that in the Asia Pacific region - and 10 to 15 local nodes. The first local node was deployed in January in Frankfurt, in Germany.

So what do we understand by 'local' and 'global' nodes? For a local node, the main objective is to improve access to this server for the local community. Another objective is to isolate the impact of external DDoS attacks: if a DDoS attack happens, this local community is not affected and, at the same time, if the DDoS attack happens inside this local community, then the rest are not affected. Location: well-connected points with a significant ISP community. It brings benefits for the members of the interconnection point as well as to the whole system in general, because it improves the resilience of the whole system. The model for local nodes is that they are hosted by a neutral party, fully funded by that party, with an open peering policy. At the same time, all operations are performed exclusively by the RIPE NCC; we are the operators.

The model is slightly different for a global node. Ideally, they should be located at topologically equidistant places, and there are not so many places where you can get good global connectivity. The idea is that it should be globally reachable, so that, when we put in a global node, we encourage or stimulate as much reachability or visibility of this node as possible. It's more powerful than a local node because it should sustain much higher levels of traffic, but the model of management is the same: the RIPE NCC has sole administrative control. The funding model is slightly different because there is no distinguished beneficiary of such an installation - the global community benefits - so the RIPE NCC is willing to share costs or fully fund it. We are looking for three to five locations around the globe, specifically in the Asia Pacific and the Americas, and we are looking for places with excellent global connectivity.

K-Root current status - this is a snapshot from the K-Root website, where you can see this information and much more. Right now, all locations fit in one snapshot, on one slide. We hope that, soon, one would need to use a scroll bar to look through them. Some statistics: three nodes, in London, Amsterdam and Frankfurt, and the aggregated load. Interestingly, Frankfurt, practically a local node, has had practically no impact on the statistics for London and Amsterdam, which is probably not a surprise, because the traffic attracted to Frankfurt obviously came from elsewhere.
Despite all the mystery around root servers and their operations, a lot of useful information is publicly available. I've included some URLs on this slide, so you can have a look at them. Specific information about K-Root can be found at these URLs - on our website and in some documents we've published. One has general requirements and guidelines; it mainly addresses local node deployment, but some things are relevant for global node deployment as well. If you have any questions, or you would like to suggest your location as a location for a global node, please don't hesitate to contact this e-mail address. Thank you.

JOE ABLEY: Are there any questions? No? Good. Thank you very much. My next esteemed presenter is George Michaelson from APNIC, who is going to talk about the F-Root nameserver deployment in the Asia Pacific.

GEORGE MICHAELSON: Hi, everyone. I'd like to talk about the APNIC perspective on root server deployments in the region. I will probably cover some material that Andrei has also talked about, but I hope that there will be some new information as well as overlap.

OK, so why are we interested in helping people to deploy anycast nodes? Well, one reason is about defending the root nameserver system: the more anycast nodes we get in this region, the more we collectively help protect the root against denial of service attacks. You can't prevent them, but you can provide mitigation. We can give better resilience against attacks in this region if we can deploy services in this region.

The second main reason is that there's a service quality issue which affects us locally in the AP region. If we have servers distributed within our own area, then DNS queries which have to go through the root will be very much quicker - and I mention on a slide later that this can be 10 times or more quicker. That doesn't mean that your whole Internet connection is 10 times quicker, but it does mean that start-up times and the delays you see because DNS look-ups are not working properly will reduce significantly.

The third reason is about risk management. Although they are very unlikely, there are some kinds of connectivity failure that can happen and, if they go on long enough - three or four days - they can severely impact the service quality inside your network. Obviously, if you're cut off from the rest of the world, you can't get to the rest of the world to do things, but you might have a lot of infrastructure inside your own regional network that in all other circumstances would keep working properly but, if it can't see a view of the root, it can actually start to have lots of problems internally. You can have a rise in traffic and you can have services stop working so, although it's a very low risk, in terms of risk management this is something that is quite low-expense and has a huge benefit in protecting you. This would be particularly true for emerging Internet economies where you might only have a small number of connections - maybe only one fat connection - to the rest of the world. If you're an island nation or in a location where you just have to have a small number of physical connections, this kind of activity could provide quite a lot of benefit to you.

Increasing resistance to denial of service attacks has been a prime goal. We were approached by the root operators and this was one of the things that was mentioned as a key goal.
Typically, they're looking to go to places with rich interconnect, because these are the places that could potentially generate a large amount of traffic. If you imagine that you've got a large cable roll-out and a lot of cable customers with inadequate firewalling, these people are exposed to worms and viruses and, at the point where they then mount a collective attack on the root infrastructure, because they're cable customers, they can generate a very large amount of traffic. So, if we're going to try to deal with that problem, there is some desire to go to a well-connected location. The other thing is that it's good if we can get a lot of different paths to reach the site. It means there isn't going to be a single point that attack traffic can choke off; it means there aren't single points of failure. The net effect of having as many points as possible is that, if there is a DoS attack, all of the load gets spread out across the network.

For the service quality issue, when we look at some of the behaviours, we can see a marked improvement in DNS service. The node that was deployed in Beijing has been reported as improving the RTT, the round trip time, 15 times. When people are doing DNS look-ups in Chinese networks and have to send a query off to the root, that process is now 15 times faster. If we can get this kind of technology deployed in our region, it means we can also get people looking into their networking practices and achieving other outcomes that avoid use of their expensive componentry. So, for people who are, again, perhaps in a developing Internet economy, having access to technology that keeps traffic domestic helps them scale their services without commissioning expensive offshore links. We think that, as we get this technology deployed, it's like anything else we do in the Internet - the more you talk about technology, the more you involve people in the deployment of the technology, the more you build cooperation within the community. This seems like good development and good practice for everyone.

An observation here is that the Hong Kong location, where we deployed the F-Root, has very diverse connectivity within the AP region. We've had several people coming to us saying that they're aware that their providers have transit links that go via Hong Kong and, although we know that, in traffic terms, it isn't one of the more significant locations, in connectivity terms we think it actually is. So deployment of services in places that maybe don't seem to be big bandwidth locations, that don't get talked about a lot, can still deliver very good outcomes for regional networking across a much bigger footprint.

In terms of protecting countries against loss of external connectivity, the classic problem is cable failure. If you're aware that you have one physical connection to the rest of the world, even though your apparent network providers may say they have different paths, the chances are they're sharing the same physical infrastructure, and deep-sea anchor damage to cables - like backhoe attenuation - is extremely expensive to fix. There have been occasions in the past, for instance, where the Australia-New Zealand link has been cut for a significant period of time. There are locations in our region where this kind of catastrophic failure could stretch out for a week or longer. Having a root instance in the region reduces the impact of that.
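[As a rough illustration of that round-trip improvement, here is a minimal sketch - using the dnspython library, not part of the presentation - that times a single UDP query against F-Root's well-known address, 192.5.5.241. Run from networks near to and far from an anycast instance, the difference in milliseconds is the effect being described.]

```python
import time

import dns.message
import dns.query

query = dns.message.make_query(".", "SOA")
start = time.monotonic()
response = dns.query.udp(query, "192.5.5.241", timeout=5)  # F-root's well-known address
elapsed_ms = (time.monotonic() - start) * 1000.0

print(f"round trip to the F-root instance serving this network: {elapsed_ms:.1f} ms")
```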
OK, so APNIC's role in this service - we're trying to facilitate general improvement in the region. We've got a lot of good relationships in our region with the membership and the ISPs, and we think we should leverage this. It's a good use of your investment in us as a regional body to try to put some value back into the network. We're viewing ourselves as a coordination point in the region and we have regular ongoing discussions with operators from F-Root, K-Root, I-Root and M-Root in Japan. It's important to note that WIDE have been operating in our region and we respect the degree of expertise they bring to developments here. Some of the activity we do is separate from things they're doing, but we like to talk to them about regional deployments and have good experiences and shared knowledge of what we're doing. We like to do things like hosting discussions here. We think it's useful to have root operators come to meetings in our region and give an opportunity for people to talk.

We've been putting funding and support towards the deployments. We've met hardware costs and some of the maintenance costs - this depends on circumstances. We would love to encourage people to put investment into this technology as well and, if it's appropriate, we can help with financial services. We've got formal agreements, memorandums of understanding, with the root operators. We have signed agreements with ISC and with Autonomica, and we make memorandums with the local hosts as well, so there's a clear understanding. We don't have a memorandum covering root services with the RIPE NCC, but we do have a very longstanding relationship with them and we understand each other's operations. We're not, at this time, a root operator - this isn't a responsibility that we're looking for. This is a coordination role.

To give you a sense of the timeline: in November, we signed our memorandum with ISC and published our first expression of interest. In January, we deployed a node in Hong Kong. In September, we deployed a node in Seoul hosted by KRNIC; that's had an extremely significant effect. In October, at the Chicago NANOG, we signed a memorandum with Autonomica. In November, a node went live hosted by CNNIC and China Telecom, with connectivity and exchange provision through China Netcom Corporation. In December, we did two deployments in the one month: a node in Taipei hosted by Hanet and a node in Singapore hosted by NUS/SOX. In January, we brought an Australian node online in Brisbane. In 2004, we expect further deployments. We'll be balancing different goals, picking locations and looking at the size of the deployment and the issues at selected locations. As you've seen from Andrei's slides, K are very interested in deploying a node here. All of the deployments so far are local nodes, but this would be a node adding resiliency for the global Internet community and would be a significant investment in regional infrastructure. We have planning for the deployment of the I node at the Hong Kong location timetabled for March, so there will be another instance popping up quite soon. We expect to publish another call for interest. We expect to have an ongoing coordination role and we're comfortable to report back regularly to this forum.

OK, so where are you getting your root services from? Well, the F-Root nodes implement the CHAOS-class feature: there's a specific name you can query to find out which F server instance you're talking to. This is a very nice feature that is not implemented by all root operators at this time.
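[A minimal sketch of that identity check - an illustration using the dnspython library, not part of the presentation - sending a CHAOS-class TXT query for hostname.bind to F-Root's address:]

```python
import dns.message
import dns.query

# CHAOS-class TXT query for hostname.bind; the answer comes from the
# particular anycast instance that receives the packet.
query = dns.message.make_query("hostname.bind", "TXT", "CH")
response = dns.query.udp(query, "192.5.5.241", timeout=5)  # F-root

for rrset in response.answer:
    print(rrset)  # the TXT string names the instance that answered
```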
Querying that name shows you which instance you're actually reaching. If you're in New Zealand, the F-Root answer will show you which instance you reach; in Korea, it should show an F-Root service from Seoul. For any root, you can also use traceroute. We encourage you to talk to people about improving peering and finding ways to get to this critical infrastructure.

I'd like to give you some detail about what the nodes look like in technology terms. Do we still have the laser pointer? Laser pointers always go walkies. I'm going to keep this one.

OK, an F-Root node is deployed at an exchange facility. It has some amount of connectivity into the general Internet, although there are generally restrictions on that. It has two physically separate routers which provide the connectivity to the exchange. Each of the routers is connected through an ISP for management transit; this isn't a path that gets used for data service, it's how the root operator manages the infrastructure. The routers then have connections to a switching fabric, which is two distinct switches that are cross-connected, and there are then three hosts which connect to this backbone: an A node, a B node and a Z node. Each host physically has to be connected to one of the switches, so you can see that there are two on one device and one on the other.

If we drill down a bit, the routing layer for an F node has typically been a Cisco 7000 series or a Juniper M5; they have at least two external ports. The switching layer has been a Cisco 2900 or 3500 class switch, and we've also been using Extreme switches. It typically runs as a 100Mbit service. For the hosts, we chose to recommend Dell 1750-class machines - a very nice rack unit that's well engineered - and we've been able to deploy these quite successfully in the region, but we have nodes that have been deployed using IBM servers. I would stress that none of the vendor labels are meant to be a lock-in. We acknowledge some assistance in equipment from Cisco, who made a very large donation to the root services last year.

So the nodes have two independent routing paths but one switch fabric, and the management host acts as a console machine for all devices. The routers have connectivity to the exchange and are directly connected. You'll have noticed there was no cross-connect. People ask why it's not a fully redundant, fully cross-connected service. The thing is that the benefit from this is really quite marginal, and it adds a lot of complexity to have these things deployed fully redundant. Essentially, the message at the bottom - a node can fail, but there are many of them - is really the answer. These things are as complex as they need to be, but without unnecessary complexity. The nodes are very much larger than is needed; they will scale quite comfortably to large increases in service demand in the region.

This is what a deployment actually looks like. This is the front of the node and you can see the switching fabric here. Actually, no, this is the back of the Chinese node. These are the host instances and you can see here that we have a routing layer and a switching layer, then we have a black box which is the console integration for serial consoles, and then we have the hosts. We've got a data path running up one side, power on the other, everything labelled. It's a very small physical investment. You can fit this into considerably less space if you put less air gap between the units; we could probably fit two or three root instances in a single rack.
For people with lower-cost telco housing and not a lot of rack space, don't see this as something which would consume a huge amount of space. We can do these deployments in quite small locations.

The routing architecture for an F node has one AS for service delivery, with a route announced on to the IX. They announce a specific address space, 192.5.5.0. There's a node-specific AS here, but they have a consistent origin AS which goes across all of the anycast nodes. There is another AS which gets used for management. This point here is very important: the management path can't be on the exchange fabric because, if the local node comes under a denial of service attack, then at the point where you need to come through the exchange fabric to work on the node, you can't get packets through, because the attack traffic is saturating it. So the management interface is deliberately taken off the exchange to guarantee an access path. Having two routers is really useful when you actually want to make changes to the service. You can work on one router while the other is offline; you can have one router always available to provide service. You can take out a host in order to do a host upgrade. You don't have to withdraw service from a site. So the duplication, although it's not fully redundant, allows a lot of engineering work to be done remotely.

The prefix announcement is a NO_EXPORT announcement, so the propagation is to IX participants. Now, in the APNIC region, we think that there are benefits to be had from having wider visibility of this service and, although F deploy the node with a NO_EXPORT restriction, if we discuss this with them and come to an understanding about the limit of the horizon that will be deployed, it's possible to deploy a bigger service. If there's a clear community benefit here, this is worth doing. If we get a failure at a given node - let's say there's a denial of service attack and the node is under attack - traffic is not going to go to another node in our region; it goes back to the central facility. The global nodes are implemented in the core architecture of the Internet, at places that have a large amount of capacity, so failure doesn't redistribute within our region.

If you want to find information about F deployment, we have the F root-servers web page. There's a very good technical document at the ISC website - I think this document, ISC-TN-2003-1, gives a very good overview - and the peering page for ISC documents practices for doing exchange peering for the root, but also for the AS112 project. Or is that not there? That's not there, right. Thank you.

JOE ABLEY: Are there any questions for George?

ABHISAK CHULYA: If the F-Root global node is down, will the local nodes be affected?

GEORGE MICHAELSON: If the global node is down, will the local F nodes be affected? Joe?

JOE ABLEY: I work for ISC, so I'm the other half of this deployment. We have two global nodes in California. If, for some reason, there's a catastrophic failure and both global nodes go down, then any region that is served by a local node, like Hong Kong, will continue to get F-Root service. Anywhere you can see a local node, you are insulated from any problems in the global nodes.

ABHISAK CHULYA: I saw at the beginning that you already have a local node in Hong Kong and China. Now you're going to put another in Hong Kong. Is there a reason why we need two local nodes?
GEORGE MICHAELSON: The second node which is being deployed in Hong Kong is I-Root, and it provides additional resiliency for I, for Autonomica, because attacks on the root don't necessarily go to a random root server; they might specifically target a named letter. When clients do a look-up, they go through specific processes. If you have an F in Hong Kong, in some cases you don't need another letter but, in terms of a defensive strategy, it's a good idea to have multiple instances because, if there are attacks going on, it provides additional cover. There are locations where it makes a lot of sense to have multiple roots. In Japan, for instance, it would be plausible to have four of the 13 letters providing service in that region. In Korea, I think it would be sensible to have two or three. In mainland China, I think you could make a good case for two or three. Hong Kong, because of its interconnectivity in the region, may be a good case for people to deploy into initially; they can then have routing policy that allows the possibility of reaching into South Asia and, based on behaviours, look at further deployments.

The other aspect is that, once you have a service deployed somewhere and you have a rack, it's easier to do another deployment there - all of the infrastructure planning has already been done. When a new party is coming into the region, it made sense to recommend that one of their deployments be in a location we had already serviced; it means we can just get things started. But we don't expect to do only that one deployment and, in particular, there are other relationships that we could explore in Hong Kong. There are other relationships we could explore in Singapore too. When we do something like this, it is not meant to cut off the idea that we could talk with other people, and all of the root operators that we are talking to are expecting to do several deployments in this region. So, yes, we're doing one in Hong Kong, but we expect to do more in other places as well.

GEORGE MICHAELSON: I'm now called 'variety of cakes'. I changed my name at lunchtime. Is that a fruitcake?

I'd like to give you some background on the lame delegation work. It's worth covering some of the background material. Why are we looking at this area? Lame reverse DNS delegations can cause quite a few problems in the Internet. You get service delays for clients: imagine you're the receiving party in an Internet transaction with someone who has broken reverse DNS. You try to go back to find that calling party's identity, and it doesn't work. You can also get refusal of service - quite a few sites won't accept your connection if you don't have functioning reverse DNS. There's also an increase in DNS traffic overall. If you think about the DNS system, resolvers are trying to answer reverse queries that can't succeed; they then get into a cycle, asking from the root DNS authority all the way down: who can tell me about this address? This is a request that is just going to fail. It causes traffic, it causes delays and it increases load. This load is measurable at critical infrastructure points. The root operators are dealing with a majority of queries that are inevitably not going to work, and lame delegations are a measurable percentage of this. We have received requests to look into this and manage the problem. The reverse DNS problems affect users in the network, but they can also affect unrelated third parties.
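[As a rough sketch of what such a lameness test involves - this is an illustration using the dnspython library and is not APNIC's actual test harness; the example zone name is purely illustrative - each nameserver delegated for a reverse zone is asked directly for that zone's SOA, and any server that is unreachable or does not answer authoritatively is treated as lame.]

```python
import dns.flags
import dns.message
import dns.query
import dns.resolver

def lame_servers(zone: str) -> list:
    """Return the delegated nameservers of `zone` that do not answer
    authoritatively for it (unreachable or non-authoritative = lame)."""
    lame = []
    for ns in dns.resolver.resolve(zone, "NS"):
        name = str(ns.target)
        try:
            addr = next(iter(dns.resolver.resolve(name, "A"))).address
            query = dns.message.make_query(zone, "SOA")
            query.flags &= ~dns.flags.RD              # ask the server directly, no recursion
            reply = dns.query.udp(query, addr, timeout=5)
            if not (reply.flags & dns.flags.AA):      # no authoritative answer
                lame.append(name)
        except Exception:                             # timeout, refusal, missing address
            lame.append(name)
    return lame

# Illustrative zone only - substitute a reverse zone you are responsible for.
print(lame_servers("202.12.29.in-addr.arpa"))
```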
The people who are going to see this problem may not be the people administering the network space. If you are suffering a problem because you're using an address, or interacting with one, that has broken reverse DNS, there's nothing you can do individually to fix it. The responsibility to fix it has to come down through the process. So, if network administrators are not taking action to fix their DNS configurations, then we wind up with the responsibility to take action to try to improve the DNS problem. What we have to do is to disable - undelegate - the misconfigured servers.

The proposal was that we would identify lameness. We have two points of test, in Australia and Japan. We have a test period of 15 days and a notification period of 45 days. If, after 60 days, the problem has not been fixed, then we would disable the delegation. Now, the proposal was that we would do an implementation three months after it was accepted by the community. Unfortunately, we haven't met this deliverable. We've held over work on deployment because we have also been in a cycle of updating our DNS management system. We've been looking at much better web tools and we've been looking at an SQL database. These changes will provide an infrastructure to manage DNS status and to delist things, which will be much easier for APNIC to use in managing this process, but also easier for users in the general network to interact with. And a particular problem that has been brought up inside APNIC is that this proposal actually carries a very large work requirement for staff, because it includes a contact obligation - a cycle of communication with the domain holders - and that meant the hostmasters in APNIC were going to have to take on a lot of it. We need to give them tools and information systems to make it easy for them to take on this work. We have rescheduled delivery of this activity into the second quarter of this year and we will report back at the next APNIC meeting.

To give you an idea of the scope of the problem, I've drawn up a slide of our ongoing measurement, because we have continued to track DNS lameness within our region over the life of this proposal. I'm sorry about the numbers being unclear. You can see that around 70% to 80% of the delegations in the region are valid. Around 10% to 15% have one or more of their servers not responding correctly. And around 15% to 20% are completely lame and are a continuing problem for all users. Unfortunately, you can also see here that there's been a very slight drop in quality - the completely lame proportion has reached 20%, which is closer to the figures from earlier in 2002. This problem is still very real and we still have to address it. That's it, thanks.

JOE ABLEY: Thank you, George. Are there any questions about that presentation? Very good. Our next speaker, the last speaker, is Joao Damas. He will give an overview of OARC.

JOAO DAMAS: I'm going to talk about OARC, which is a new program. OK, some background first. The idea for OARC started in the US; it came up as part of efforts in infrastructure security around the Department of Homeland Security. The idea in principle was good. The problem was that those efforts were very specific to the US, which didn't seem a great fit for something like the Internet. So we put up some ideas and did something a bit different.

These are the key functions of what we intend to do in OARC. The first one that comes to everyone's mind is incident response.
So, when there are attacks or compromises or security breaches in software, this will be a platform for information sharing so that the problem can be addressed and the solution can be coordinated. And there is more. Operational characterisation: basically, if you want to know whether there is something strange or abnormal in the system, first you have to know what the norm for the system is - a collection of data about normalcy. We also provide infrastructure for testing the different implementations that are available: how do they work with each other and how closely do they stick to the standards? Analysis: beyond establishing what a normally working DNS looks like, it's nice to be able to use the collected data to do further analysis, and also to collect data during incidents such as attacks so that, once the attack has gone, you can do some study of the attack patterns, origins, whatever. Finally, outreach: trying to put together guidelines and documents that will assist operators in making sure their systems are well configured.

So, relating to the function of information exchange: it's clear that vendors and the people that run the service do need to share information, and maybe they need a bit of assistance initially, and a space to do that, before they can address problems in public. Operators need to exchange information too. OARC is run by us, the Internet Systems Consortium, and, for the purpose of providing analysis and research on the collected data, ISC is partnering with CAIDA. CAIDA is well known for doing new types of analysis on the Internet in general, and on the DNS in particular. We will participate as operators, but the whole purpose of having something that enables you to share data is that the people who have that data will share it; otherwise it's going to remain pretty empty. So this is done by having operators and vendors join OARC, becoming members. The first category is root and TLD operators; then very popular sites, government institutions that have responsibility for DNS operations, and research and analysis institutions. If this is the kind of thing you see value in joining - and we hope you will, if you are involved in DNS operations and they are critical to your organisation - then we hope you'll join. There is information available on the OARC website.

JOE ABLEY: No questions? That's the last presentation we had scheduled. Does anybody have anything else they'd like to share? No. In that case, thank you all very much for coming. And I'll see you again next meeting.

APPLAUSE