
DNS operations SIG transcript


APNIC 25
DNS operations SIG 1600-1730
Thursday 28 February 2008

DNS operations SIG charter and group activities

ED LEWIS:

So, um, I'm going to begin with the housekeeping notes, since that doesn't need to be recorded for posterity.

Number one, we would like to acknowledge Google for sponsoring our Policy SIG sessions. How many here went to the Policy SIG sessions? OK, so we have to acknowledge Google, then.

You can participate in the sessions using Jabber chat. Directions are available on the APNIC website.

Number three - the APRICOT closing event sponsored by Cisco is held this evening from 1900 to 2100. It's going to be at the Chung Shan Hall. Transportation will be provided. The first bus leaves at 1745 from the car park, the entrance at the back of the hotel.

APNIC has an informal dinner tomorrow night from the same time, 1900 to 2100 hours. Transportation will be provided again, and if you're interested in joining us, the cost is NT dollars 600. Payment is required on Friday. More information will be available at the Helpdesk. The Helpdesk, I believe, is downstairs in B-2. OK. You go down the staircase of B-2. It's right about there.

Ah, here. Helpdesk is now available in the Osmanthus Room, level B-2, during breaks. I can't say that word. I think it's - if you go down the staircase - if you go down the stairs, not the moving stairs, the stairs that don't move, you know, the ones, yeah, those, it's right there.

And number six, win a digital photo frame by filling out a survey online. Survey forms are available on the APNIC meeting's website. Closing date is afternoon tea today, which I believe just ended. Yes, I think it was an all-black frame, too, just beautiful. No insignia or anything. All black. The winner will be announced during the last session of today, which is now, but I don't know who it is so we're not going to do that here.

So just in time engineering.

OK, so that's it for the announcements. And now we have the DNS SIG. And I have an agenda up here. Firstly we're going to have a hopefully short discussion on the charter. I would like to maybe have a longer discussion but I think it's more important to have our content presented first. The first is going to be a talk by Mohamed Dikshie Fauzie on IPv6 DNS operation in the SOI-Asia project, which is I believe part of the WIDE Project.

The second is a report on .CN's DNS operational status by Cathy Zhang. And then Bill Manning will talk about DNS continuity of operations, and Joao Damas will finish with a discussion of BIND 10.

But I wanted to put up - this is the charter. I'm not going to read through all of it. Basically, the charter says that this group is concerned with discussions over the reverse map of the DNS, operations of the reverse map. One thing that we're interested in is any policy discussion that would affect APNIC. I think it's been quite a while since we've had any APNIC policies presented in the DNS area. And just basically general information about the DNS.

And the first line is the URL, which has this information. The mailing list that we have is - information is given here - we have a SIG DNS mailing list. The first line that we have up there is, I believe, the archive for the mailing list. The second part is the information for joining the mailing list. First let me ask - how many people here are subscribed to that mailing list? OK, how many people are not subscribed to that mailing list? How many people did not raise their hand when I just asked the question? OK! How many people here - obviously you have to be on the mailing list - but how many people actively read the mailing list? Pretty much the same ones, I guess, who are on it. The reason why I'm asking that is that - well, actually before we go into that description, we also had a person volunteer to be co-chair for this group.

I think we've only ever had one chair at a time. And I believe - I know that there's - we have a procedure now and I think this is a question for Sunny, if there is something we need to wait now for another period, but is it worth considering having a co-chair now? What I plan to do is we talked about it here, at least bring it up here, but on the mailing list bring this up again for those who are on the list and not here present. Sunny has given me a look that looks a little bit suspicious.

SRINIVAS CHENDI: It's not recording, is it? The process is - Ed, if you are calling for a co-chair, that's OK. But you have to put that on the mailing list so it will be open and you can receive nominations. But if you have anyone interested here right now, you can obviously invite that person to come to the next APNIC meeting.

ED LEWIS:

We actually have one here. The reason why this has come up is I met the person here, Sam.

SRINIVAS CHENDI:

Oh, right, yes. Yes.

ED LEWIS:

Sam Sargeant. He has expressed a willingness to be co-chair and the question is now - this will go to the mailing list but I wanted to bring this up here to ask: is that the right way to do this? Go to the mailing list and see if anybody has objections to Sam. He's a nice guy, actually. Talk to him sometime. Bill, no?

BILL MANNING:

How many people are on the mailing list?

ED LEWIS:

I don't know.

BILL MANNING:

Is it 20? Is it 2,000? How many people?

ED LEWIS:

I don't know.

BILL MANNING:

Because if we have a quorum of users here, then I don't have a problem with the process. Don't you dare take that picture.

LAUGHTER

BILL MANNING:

That will be two dollars.

If we have a quorum, then I wouldn't have a problem with the vote. If we don't have a quorum, then, yeah, we've got to take it to the list.

ED LEWIS:

Well, we don't know how many people are on the mailing list.

SRINIVAS CHENDI:

I don't know. If you give me some time, I can check it out.

ED LEWIS:

Yeah, it's findable, but right now I don't think any of us knows.

SRINIVAS CHENDI:

But the election is OK with me if you want to. If you have someone here right now in this room, vote for it, post it on the mailing list, and if there are any objections to that, you take it on from there.

ED LEWIS:

OK, so pending approval, we will have a co-chair going forward. So thanks. OK.

We can get him to talk to it now.

SRINIVAS CHENDI:

Do you want to do it?

ED LEWIS:

No. I won't surprise him like I was surprised last year by George. Last year when I became chair, GGM didn't tell me that, "Oh, by the way, tomorrow you give a summary of the meeting." I wasn't even awake during the meeting.

So let me make sure I covered what I had here. The point is we didn't have a lot of activity but still we have a co-chair in waiting. Actually, the reason why I hadn't pushed for a co-chair was we hadn't had much activity. I counted about 20 messages on the list that weren't announcement messages - 20 messages that were basically something original, which were pretty much my travels of last year. I report all the events I went to that involve DNS. About five or so people replied to that or added to what I wrote. There were a couple of other announcements but there was no other discussion on the mailing list.

So the question I wanted to raise up here - and this could be a discussion, but unfortunately I don't have time on the agenda for that and I don't want to spend the time on it but I want to review the benefits of the SIG. Are we getting what we want out of it? Is this SIG addressing the right kind of discussion that we want to have?

And probably I'll take this to the mailing list and see if a few people pop up and say, "I'd like to see this on the mailing list," or suggest something else for the list. Do we think the charter is the right charter to have? Are we restricting ourselves to the right kind of discussions, and particularly, are there times we want to come up with new APNIC policies regarding the reverse map? So that's a question that I would like people to at least think about and we'll try to discuss; see where we want to go with this.

Right now, it's been a year since I've been chair and I'd really hoped to get more discussion. Because, frankly, I'm not normally in the AP region. I'm usually in the North American region. It would help me. I was hoping to get more input from what happens here so I learn more about this region. That's what I'd like to see. I'd like to learn what's happening here. That's my motivation. It doesn't need to be the motivation of the list, but that's what I was hoping to see and that hasn't really happened. I'm going to continue putting out reports when I go to events that involve DNS, but I'd like to see if people feel there's another use for this mailing list, if there is something they want to discuss. Maybe what I might do is look at other DNS operations lists that I'm involved with and see if the issues at play there are also at play here.

In fact, if anybody here is on other DNS operations lists, and you feel like you want to transfer discussion from one area to this region, please bring that up. There's a DNS operations list run by FC, there's some work that goes on in the European Centre group that might be applicable. I don't know.

Lame delegation. It was here - where did it go? This might be my last slide. It is the last slide. That's a discussion I would like to have if we have time; I want to get back to it. We should get on with the presentations that have content and then we can go forward with that. So Dikshie, you're up. You're first.

Where are the other microphones? Did we lose a microphone? We had two up here. There was another one.

IPv6 DNS operation in SOI-Asia project

MOHAMED DIKSHIE FAUZIE:

OK, good afternoon, ladies and gentlemen. My name is Mohamed Dikshie Fauzie, from Indonesia, but I'm currently a graduate student in Japan.

So I will give a presentation on the School of Internet Asia (SOI-Asia) project.

This is the outline of my talk: what is AI3, SOI-Asia, an SOI-Asia operation overview, deployment strategy, a DNS working example and a data example.

So what is the AI3 project? It is a satellite-based R&D network in Asia. You can get information about the project on this website. It has been operating since 1996.

ED LEWIS:

Sorry. I turned the laser off.

MOHAMED DIKSHIE FAUZIE:

Sorry. So the AI3 topology has bi-directional links, meaning both uplink and downlink are supplied via satellite. We have many partners in Asian countries - Indonesia, Philippines, Malaysia, Vietnam, and Nepal. We also have uni-directional links; uni-directional means downlink only, from the satellite. And we have partners on the uni-directional links as well.

So what is the SOI-Asia project? This project contributes to higher education development in Asian countries through the utilisation of Internet and digital technology, and we have 27 university and research institute partners in 13 countries. We have deployed a receive-only satellite earth station at every partner site to deliver real-time lectures - not only from Japan - and also archived lectures. You can get our project information at this URL.

So this is the SOI-Asia overview. Suppose we have a lecture site here; the lecture site need not be in Japan, it can be anywhere. We have experience delivering real-time lectures from the USA and from France as well. The lecture site sends the real-time lecture to a relay at Keio University. This relay uses DVTS for high-quality video and audio transmission. Then, from the gateway, we send that data to our partners in Asian countries using UDLR - uni-directional link routing. At Layer 3 we utilise IPv6 multicast. We are running that and we also run MSSN.

OK, this is our IPv6 operation. We have had much experience with IPv6 since 1997, because we are part of the WIDE Project in Japan. On the education side, every year we have operator workshops to refresh our knowledge of IPv6 in Asian countries. We also developed MTM, software to deliver lecture material via IPv6 multicast. At SOI-Asia receive-only sites, we use IPv6 for video and audio content and Windows technology, MTM for delivering lecture material, and LivePresenter for web-based presentation. Feedback from partners in Asian countries to the lecture site can be done via BBS, as well as video conferencing and audio, and also chat and messenger. We also moved to a higher level of IPv6-only operation on 07/07/07 - last year, in July.

So where do we apply IPv6? Here. In this project, our goal is to deliver the lecture from the Keio University gateway through the satellite to our partners, while the return link from our partners in Asian countries goes back to our gateway in Japan. At Keio University, IPv6 is encapsulated in IPv4, so we apply IPv6-only in one direction, from Japan towards the Asian countries. At the lecturer's site we run dual-stack IPv6 and IPv4, but we only use IPv6, since DVTS also runs over IPv6.

So this is our general strategy for deploying IPv6. First is NAT-PT deployment: our goal is to keep increasing IPv6 traffic, but many websites are not running IPv6, so we deploy NAT-PT. We also deploy a DNS proxy, a nameserver, and an IPv6-enabled Squid proxy, which we provide as a box.

So this is our deployment in place. The SFC site, above the red line, is our gateway in Japan. Below the red line is a typical network at a partner's site. We have the udl-rr receive-only router at the partners, and the partners also have servers. All our partners must install the totd DNS proxy and also Squid. At the gateway site in Japan we have two routers - sfc-gate2 for NAT-PT and sfc-gate for IPv6 only - and we also have sfc-cache and a DNS server on our site.

So this is the DNS working example from my presentation. Suppose a host at one of our partner sites in an Asian country asks about the availability of a AAAA record. The host asks its server, totd. totd then forwards the request to BIND, and BIND goes back to our DNS server at the SFC site in Japan. If totd receives the result that the name has an A record but no AAAA record, totd gives a fake IPv6 address to the host, and that traffic is routed through the NAT-PT router. So if it is routing to yahoo.com, it goes through sfc-gate2. If the host routes to, for example, a website that does have a AAAA record, it goes through sfc-gate. OK?
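The totd behaviour described above - handing the host a fake AAAA record when a name has only an A record, so NAT-PT can translate the traffic - boils down to embedding the IPv4 address in the low 32 bits of a translation prefix. A minimal sketch, where the /96 prefix is a hypothetical example (SOI-Asia's actual prefix is not given in the talk):

```python
import ipaddress

def synthesize_aaaa(ipv4_addr: str, prefix: str = "2001:db8:ffff::/96") -> str:
    """Embed an IPv4 address in the low 32 bits of a /96 translation
    prefix, as a totd-style DNS-ALG does when a name has an A record
    but no AAAA record. NAT-PT later translates traffic sent to the
    resulting 'fake' IPv6 address back to the real IPv4 destination."""
    net = ipaddress.IPv6Network(prefix)
    v4 = int(ipaddress.IPv4Address(ipv4_addr))
    return str(ipaddress.IPv6Address(int(net.network_address) | v4))

# A host asking for a v4-only site gets a synthetic AAAA:
print(synthesize_aaaa("192.0.2.10"))  # 2001:db8:ffff::c000:20a
```

The host then believes the destination is IPv6-reachable and routes via the NAT-PT gateway, which is what steers yahoo.com-style traffic through sfc-gate2 in the example.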

This is an example now - the number of clients that access our nameservers. We are quite happy that many IPv6 addresses access our nameservers. This is using the FT from one record. So this is our data, for example. Last night I tried to fix this picture, but it is not good yet. This is the IPv6 address 2001::130:110:1:43; the access is the query rate per second, this is the client access, and this bar is for our nameservers - this is the legend. Green means NOERROR, red means FORMERR and SERVFAIL, blue is NXDOMAIN and purple is REFUSED. And we have a lot of IPv6 addresses here, coming from our partners in Asian countries.

We still see some IPv4 addresses, but they come from other sites in Japan. So, in this case, our goal to increase IPv6 traffic will be achieved.
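The per-client response-code breakdown in the graphs (NOERROR, FORMERR, SERVFAIL, NXDOMAIN, REFUSED) is straightforward to produce from a query log. A rough sketch, assuming a hypothetical "client rcode" log layout rather than any real server's format:

```python
from collections import Counter, defaultdict

def tally_rcodes(log_lines):
    """Count DNS response codes per client address from a
    hypothetical whitespace-separated 'client rcode' log."""
    per_client = defaultdict(Counter)
    for line in log_lines:
        client, rcode = line.split()
        per_client[client][rcode] += 1
    return per_client

# Made-up sample entries in the assumed format:
log = [
    "2001:db8::1 NOERROR",
    "2001:db8::1 NXDOMAIN",
    "10.0.0.5 SERVFAIL",
    "2001:db8::1 NXDOMAIN",
]
stats = tally_rcodes(log)
print(stats["2001:db8::1"]["NXDOMAIN"])  # 2
```

A tally like this is exactly what surfaces the puzzle discussed below: one address generating mostly NXDOMAIN answers stands out immediately.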

SPEAKER FROM THE FLOOR:

(Inaudible)

ED LEWIS:

Bill's question is where the net-10 address comes from, the fourth one down. The fourth address down.

MOHAMED DIKSHIE FAUZIE:

OK, yes. This is from our Asian partners also. I don't know why - we have seen these addresses on a router, a partner's router. So I don't know why; the partner's routers were using IPv4, perhaps.

Because we don't have IPv4 addresses for our partners, we employ net-10 IPv4 addresses for them. That's why it comes from there.

SPEAKER FROM THE FLOOR:

(Inaudible)

MOHAMED DIKSHIE FAUZIE:

Yeah, it clearly is maximum, but only four queries per second.

ED LEWIS:

Bill's comment there was the query rate was kind of low.

That's fine.

MOHAMED DIKSHIE FAUZIE:

It's only about four queries per second.

OK, regarding IPv6 queries sent to the root servers: we also looked at our nameserver's queries to the root servers. This is an example from a few weeks ago.

OK, thank you very much for the attention.

ED LEWIS:

OK, first, questions for Dikshie? Actually, I have a question, if there is - on the same slide that Bill was - two slides back. Back up.

It looks like the IPv6 ones have SERVFAIL but not the IPv4. In the first one, it seems like you have quite a bit of IPv6 -

MOHAMED DIKSHIE FAUZIE:

Yeah, this is my question too, and I'm still interested in why we're getting many SERVFAILs and why so many addresses get NXDOMAIN. I investigated this IP address. It generates queries, but the nameserver cannot find the domain, so it returns an NXDOMAIN, and I still don't know what application is running on these routers.

ED LEWIS:

The majority of those queries are NXDOMAIN in that one address. So...

OK, any other questions for Dikshie? If not, let's thank Dikshie for his presentation.

APPLAUSE

And move on to Cathy Zhang to talk about .CN's DNS operations.

.CN DNS operation status

CATHY ZHANG:

Good afternoon, everyone. I'm Cathy. It's my first time to come here for APRICOT. Nice to meet all of you.

Thank you for giving me the opportunity to give a presentation about the .CN DNS operation status on behalf of CNNIC.

Our goals for the operation and maintenance of the DNS service consist of four main aspects. The first is high availability: trying to let users get the information any time they want. The second is short response time: we'd like to provide a high-performance service, so we want a shorter response time for the DNS service. The third is immediate discovery of faults. It's hard to avoid all faults of the hardware and software, so when one happens, the most important thing is to be aware of it, simply by a real-time monitoring system. The last is to gather statistics about the service system. I think we always want to know more about our system - how many queries it processes per day, what kinds of resource records are queried the most, what the specific proportions are, and so on.

In order to provide a highly available service, we build the service system with online servers and back-up servers. We put the online servers in the IDCs of different ISPs, and even in an IDC abroad, which allows users from different ISPs not only to use the service but also to have shorter response times. And if the servers of one ISP are all down, which hardly ever happens, users can use the servers of other ISPs as a substitute. So why do we need the back-up servers? We know that if the data on the master server is wrong, then once a zone file transfer occurs, the data on all of the servers will be wrong. So we need another independent system, using another master server, a different zone file transfer mode, and different software, to help address such errors and avoid the errors that may be introduced by the DNS software.

This is a graph of the current distribution of .CN DNS servers. There are seven nodes in total. Five nodes are located in different provinces of China, hosted by different ISPs; one node is in Korea and one node in Germany. The node in Germany has been in place for only one month. For closer nodes, the response time can be shorter.

And this is a graph of the zone file transfer architecture. The online system and the back-up system share the same database, but they use different master servers. The zone file transfer of the online system is realised by the inherent mechanism of the DNS software, and the back-up system uses file transfer technology. They also use different DNS software. This heterogeneity is an important factor in guaranteeing that the two systems won't go down for the same reason.
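With two independent masters and two distribution paths, one cheap cross-check is to compare the SOA serial each system publishes: if they diverge by more than a small lag, one path has likely stalled or is serving stale data. A sketch under stated assumptions - the serial values and lag threshold are illustrative, and real DNS serials use wrap-around arithmetic (RFC 1982), which this simple version ignores:

```python
def serials_consistent(online_serial: int, backup_serial: int,
                       max_lag: int = 2) -> bool:
    """Cross-check the SOA serials published by two independently
    fed DNS systems. A large gap suggests one distribution path has
    stalled. Simplified: ignores RFC 1982 serial wrap-around."""
    return abs(online_serial - backup_serial) <= max_lag

# YYYYMMDDnn-style serials, purely illustrative:
print(serials_consistent(2008022801, 2008022800))  # True  (one update behind)
print(serials_consistent(2008022801, 2008022401))  # False (days of drift)
```

A monitor running this periodically turns the heterogeneity into a fault detector: the two systems are unlikely to publish the same wrong serial for the same reason.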

The next part is about how to detect abnormalities in the DNS service system as fast as possible. The basic work is to install some monitors in the equipment room; then you can learn the current status from the real-time system. Besides that, we use a detection platform to monitor the equipment and the services. You can set the detection period. When the detection runs, it checks the specified data and the returned value, and you can configure a corresponding operation for each return value. The operation can be changing the graphical interface, sending an email about the event, or generating a voice alarm.

The monitoring system can check the hardware information. You can configure the alert threshold to a proper value; this can help catch some possible problems before they really happen. Another common detection is via the application's port: if the port is down, there is certainly something wrong with the application. There is also higher-level detection of the application, with scripts. You can use your own scripts to get the reply from the server and check its content to make sure the service is running smoothly.
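The port-level probe mentioned above is the simplest tier of this monitoring. A minimal sketch (the host, port, and timeout here are placeholders, not CNNIC's configuration):

```python
import socket

def port_is_up(host: str, port: int, timeout: float = 2.0) -> bool:
    """Cheapest service probe: if the port does not accept a TCP
    connection within the timeout, treat the application as down."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

As the talk notes, a real monitor pairs this with a scripted query whose answer content is verified - a port can accept connections while the service behind it returns garbage, which only the script-level check catches.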

In addition to the monitoring system, we also have a web interface which gives real-time information about the DNS service, such as the current query number from each node.

Here is the image we get from the monitors, and here is the graphical interface of the monitoring system. Each spot represents a monitored item. If the monitored item is OK, the spot's colour is green, and if an unexpected error happens, the spot turns red.

This is the real-time query number of the .CN DNS servers and some other information about the DNS service. You can find the current query number of each node on the right of the webpage.

The last part is about the statistics of the .CN TLD DNS service. Queries reach the TLD DNS servers only when clients need the TLD-level delegation, but the query log on these servers can still reflect information about the domain names and the servers, to some extent. The log records the date, time, client IP address and port, the record type and the queried domain name of each query received by the DNS server. The domain name field contains a lot of information to be dug up. We collect the query log on each TLD DNS server every day and analyse the log files to extract the information we are interested in. Next, I will show you some of the data we analyse every day.
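Splitting each log line into the fields just described is the first step of that daily analysis. A sketch assuming a simple whitespace-separated layout - the actual CNNIC log format is not given in the talk:

```python
def parse_query_log(line: str) -> dict:
    """Split one query-log line into the fields described in the
    talk: date, time, client IP, port, record type, queried name.
    The whitespace-separated layout here is an assumption."""
    date, time, client, port, rtype, name = line.split()
    return {"date": date, "time": time, "client": client,
            "port": int(port),
            "type": rtype,
            # Normalise names for counting: lowercase, no trailing dot.
            "name": name.lower().rstrip(".")}

# A made-up example line in the assumed format:
rec = parse_query_log("2008-01-15 10:23:45 203.0.113.9 40123 A www.example.com.cn.")
print(rec["type"], rec["name"])  # A www.example.com.cn
```

Everything that follows - record-type shares, the www. ranking, per-suffix classification - is just aggregation over these parsed records.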

As of January 2008, the number of registered domain names is up to 9 million, and the daily query number has increased markedly over the last three years. Recently, it's about 900 million queries per day.

Of course, the query number for each node won't be the same, and according to the statistics there is a real tendency: the number of users is probably the most significant factor. If the total query number of a node is large, it should be allocated more servers.

In the query log, there is a field giving the type of the requested record. The statistics on this field tell us what kinds of records the users are most interested in. Of course, A records take the largest portion; the second to fourth are AAAA, MX and PTR records. Among the other fields of the query, the domain name field carries the richest information. From the last segment of the domain name, we can decide what kind of domain name it belongs to - for example, .CN, .arpa, or Chinese domain names - and for .CN we can classify the domain names into smaller categories.

We also count the number of domain names starting with www., so we can produce a list of the top www. domain names, which tells us which websites are accessed most often by users.
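That www. ranking is a simple frequency count over the queried names. A sketch with made-up sample names:

```python
from collections import Counter

def top_www_names(names, n=3):
    """Rank queried names that start with 'www.', as in the
    statistics described above. Sample names are invented."""
    counts = Counter(name for name in names if name.startswith("www."))
    return counts.most_common(n)

names = ["www.a.com.cn", "www.a.com.cn", "mail.a.com.cn", "www.b.cn"]
print(top_www_names(names))  # [('www.a.com.cn', 2), ('www.b.cn', 1)]
```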

There are 41 second-level domains of .CN in total, plus a special .CN domain. From the registered domain names graph, we can see that direct .CN domain names are the most numerous, and the second to fourth are com.cn, net.cn and org.cn. From the query number graph, we can see that though registered net.cn domain names make up only 4% of all registered .CN domain names, they account for 13% of all queries. The figures for direct .CN are 61% and 44%. This may indicate that more .CN domain names are unused, or are not as popular as the net.cn domain names at this time. .CN has only recently been open for registration by individuals, so it needs more time for people to accept it as the majority-used domain name. Most of the statistics are retrieved from the domain name field, but there is another field in the query log which is a useful factor: the IP address field. With an IP-address map, we can know which region or country a certain IP address belongs to, and analyse the client IP address distribution of the DNS queries. That will be future work for the analysis of the DNS logs. That's all. Thank you.
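The registered-share versus query-share figures above imply a per-name query intensity: dividing query share by registration share shows how heavily an average name in each category is queried relative to the .CN average. A quick check of the arithmetic (the helper name is just for illustration):

```python
def query_intensity(reg_share: float, query_share: float) -> float:
    """Queries-per-name relative to the zone average: the ratio of
    a category's query share to its registration share."""
    return query_share / reg_share

# net.cn: 4% of registrations, 13% of queries -> ~3.25x average load.
print(round(query_intensity(0.04, 0.13), 2))  # 3.25
# direct .CN: 61% of registrations, 44% of queries -> ~0.72x average load.
print(round(query_intensity(0.61, 0.44), 2))  # 0.72
```

This makes the speaker's point concrete: each net.cn name draws several times the query load of an average direct .CN name.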

ED LEWIS:

OK. Questions? Do we have any questions?

CATHY ZHANG:

Could you please speak slowly? I apologise for my lack of English skills.

SPEAKER FROM THE FLOOR:

That's OK.

You mentioned that you run different software on different DNS servers, especially your back-up servers. Can you tell us what DNS software you use? What software you run on the different servers?

CATHY ZHANG:

For the online systems we use BIND, and for the back-up servers we use NSD. Is that OK?

SPEAKER FROM THE FLOOR:

Another question. Your slide that had the different percentages of query types. You had A records, MX records. Could you bring that up again please?

CATHY ZHANG:

Which one?

SPEAKER FROM THE FLOOR:

That one there.

CATHY ZHANG:

Yeah.

SPEAKER FROM THE FLOOR:

Is it just me or do the numbers not add up? Wait - they don't amount to 100%. Am I missing something?

ED LEWIS:

Maybe the A record number is small.

CATHY ZHANG:

This one should be 71%. It's a mistake.

SPEAKER FROM THE FLOOR:

I see. Thank you.

SPEAKER FROM THE FLOOR:

Is it really?

ED LEWIS:

It's the Bill filter.

LAUGHTER

ED LEWIS:

You have to treat it nicely, Bill.

BILL MANNING:

I'm going to use this. Because it has a wire.

This is Bill Manning. The amount of infrastructure that you have put together is very impressive - the servers, the redundant servers, back-up servers, other software, other links. And you have presented analysis of the data you have collected on the DNS queries you have received. What kinds of failures have occurred? Have you ever had to use your back-up?

CATHY ZHANG:

I have mentioned it before.

We know that there should be a master server for all the DNS servers and sometimes the data on the master server may be wrong.

BILL MANNING:

How many times?

CATHY ZHANG:

Until now, it hasn't happened yet.

BILL MANNING:

OK. So you've had no data failures?

OK, so you have had no data failure where your database is corrupted and you've had to use the back-up?

Have you had equipment fail, where you have had to use the back-up?

CATHY ZHANG:

Um... the back-up system is for emergencies and, in fact, it hasn't really happened yet, but we have practised the - how to say? - the switch between the online servers and the back-up servers.

We practise this, not because of a real problem.

BILL MANNING:

OK. Thank you.

ED LEWIS:

Thanks.

Any more questions?

I have one question. The server that's in Germany, does that have its own unique IP address? Or do you use - do you share IP anycast? Do you do that or is it all unique addresses?

CATHY ZHANG:

As far as I know it uses anycast but it has its own IP address.

ED LEWIS:

OK. Thanks.

Alright. No more questions? Let's thank Cathy for her presentation.

APPLAUSE

DNS continuity of operations

OK, next on the agenda is Bill - Bill Manning - to talk about DNS continuity of operations.

Do you want to start asking questions now?

Please don't point it at other people.

No, it's not. This one has a switch. The phrase is do not look at laser with remaining good eye.

BILL MANNING:

So I'm actually going to look at this as nameserver evolution, or from the context of name service evolution. Um, this is in the context of me as an operator of one of the root nameservers. We're going to look a little bit at the DNS from a historical view. From 1980 to about 1989, the question wasn't really how well you could make the DNS work; it was, does it work at all? This was when it was relatively new, it wasn't widely deployed, and there were other alternatives available for doing name resolution. So it was not clear that DNS was going to win. Nobody really cared who ran it, because it was a bunch of engineers and academics. Um, and so there were huge changes in the software; the specifications were very loose, and specifications were still being written. Software was still being written. The capabilities were still being discovered, augmented and changed. This was, in effect, the Cambrian era, when there was a lot of development, a lot of things tried that didn't survive.

The second decade, 1990 to 2000, the DNS kind of grew up. It became the de facto naming abstraction tool for the Internet; instead of remembering IP addresses, people could use names. As more people became dependent on the DNS, the rate of independent change dropped. You saw a lot less interesting stuff happening in the DNS.

Capacity augmentation was key. This was how do we add more compute cycles, how do we answer more services, we need more bandwidth. And the user base changed from engineer - from academic circles and engineering labs into more or less production use, where companies and countries, economies, became dependent upon the DNS working.

Required changes were a lot harder to make, requiring much more coordination and planning. You couldn't get a bunch of guys together around a beer saying, "You know, we should change this stuff," and then go change it after the beer. You couldn't do that much any more. There was planning for some significant changes and some steps were taken. The significant change was that, by the end of 2000, most of the world had IP connectivity. In this time frame, nearly all of the root nameservers were in the United States; only one of them was outside the US. And we had IP version 6 to plan for, DNSSEC to plan for, and a much broader base of users to plan for.

In that time frame, with the planning that was done, we added a couple of new operators and we started to experiment with other technologies like anycast. So in this century, we are now expanding the system under a set of given restraints. We can't make significant changes on the fly. We do have to augment the system to support new protocols. The anycast services are potentially operationally challenging, because it's not just one machine or a couple of machines in a machine room; it's now possibly hundreds of machines, geographically dispersed, so you have to have a separate control network for those things. There is the complexity of large installations - if you remember the pictures on the monitors, there were racks full of equipment, and that complexity needs separate monitoring. New types of attacks are occurring; Ed can empathise with this. The specifications weren't really specifications - they were suggestions, more or less - and a lot of interpretation occurred in how they were implemented, which allows attacks to occur.

So we're going to look at the root system and some of the ideas about attacks here.

There are now 140-plus root servers and counting; we keep adding more of them. What we do is cryptographically sign data transfers between them, so that we have an assurance of the integrity of the data. You guys do that in CN? Hello? No? Sorry. I suspect they do. And then there's tight engineering coordination and cooperation between the operators. The operators are in 12 different, radically different, organisations that, except for this, probably wouldn't talk to each other. But we have three meetings a year, and it's done for technical coordination. There's a relationship with ICANN, in which we make sure that ICANN knows what we're doing and ICANN lets us know what they're doing. The change-control processes that are in place prevent a rapid response. So if a really significant protocol change came, or there were catastrophic events, rapid response would be problematic.
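The signed data transfers Bill describes are, in practice, typically done with TSIG - an HMAC computed over the transfer with a key shared by each pair of servers. A raw-HMAC sketch of the integrity idea only (the key is a placeholder, and this is not the DNS wire protocol):

```python
import hashlib
import hmac

SHARED_KEY = b"example-shared-secret"  # per-pair key; placeholder value

def sign(zone_data: bytes) -> bytes:
    """Compute an HMAC over a zone transfer payload. Root operators
    use TSIG (HMAC over DNS messages) for this; a bare HMAC shows
    only the integrity idea, not the actual protocol."""
    return hmac.new(SHARED_KEY, zone_data, hashlib.sha256).digest()

def verify(zone_data: bytes, mac: bytes) -> bool:
    """Constant-time check that the payload was not altered in transit."""
    return hmac.compare_digest(sign(zone_data), mac)

mac = sign(b". IN SOA ...")
print(verify(b". IN SOA ...", mac))       # True
print(verify(b". IN SOA tampered", mac))  # False
```

The point is the assurance Bill names: a recipient can detect any modification of the zone data between the distribution master and the serving node.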

Now, rapid response to what? Attacks happen. They don't break the Internet! The DNS is a very robust protocol, a very, very robust protocol. The clients cache data, and that includes lists of the TLD servers - so if the roots went away, if all of the root nameservers disappeared off the planet, the Internet would still work. DNS would still resolve for weeks. And it would not take us weeks to reconstitute the root name service. But it's an attractive target because it's at the apex of the DNS hierarchy. And the media pays attention.
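Bill's point about caching can be made concrete: TLD NS records carry multi-day TTLs, so a resolver's cache keeps answering long after the roots disappear. A minimal toy cache sketch, where the 172800-second (two-day) TTL is typical of NS records in the root zone and the server name is illustrative:

```python
import time

class Cache:
    """A minimal DNS-style cache: entries live until their TTL expires."""

    def __init__(self):
        self.entries = {}  # name -> (value, expiry timestamp)

    def put(self, name, value, ttl, now=None):
        now = time.time() if now is None else now
        self.entries[name] = (value, now + ttl)

    def get(self, name, now=None):
        now = time.time() if now is None else now
        item = self.entries.get(name)
        if item is None or now > item[1]:
            return None  # expired or never cached: would need the root
        return item[0]

cache = Cache()
t0 = 0
# A resolver learns the .com NS set once, then serves it from cache.
cache.put("com.", ["a.gtld-servers.net."], ttl=172800, now=t0)

# A full day after every root server "vanishes", resolution still works.
assert cache.get("com.", now=t0 + 86400) == ["a.gtld-servers.net."]
# Only after the TTL runs out would the resolver need a root again.
assert cache.get("com.", now=t0 + 200000) is None
```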

Anycast gives decent protection. If you go look at some of the reports done to ICANN and to the RSSAC committee, you'll see that when attacks on the roots have occurred, anycast basically absorbs the attack locally and the rest of the system survives, so we minimise, we compartmentalise the attack. And we work very closely with software vendors, Internet service providers, law enforcement and computer emergency response teams, so if something happens, they expect us to contact them. If we don't contact them, they'll contact us. They know where we are.

With all this in place, because of the way the DNS is designed, successful attacks can still be launched. It is possible to take out the DNS and it's not hard. What we lack here is a good continuity of operations plan. How do we keep running in the face of a successful attack?

And part of this is the infrastructure we built up. The diagrams you saw with the CNNIC architecture look very similar to any sort of current state-of-the-art system. You have multiple sets of servers, multiple distribution channels. It works great until it leaves your authoritative servers. When the data leaves your authoritative servers and goes into somebody else's DNS servers, you've lost control. Alright? So we need to worry about those kinds of things. How do you ensure the integrity of your information, not just the service, in the future, looking forward? We're going to do more anycast. It's a good technique, and more and more people are doing it. You guys are still anycasting heavily, right? And you've got roll-out plans that call for more nodes into the future, no plans to stop any time real soon. OK.

So it's going to more corners of the world that are still underprovisioned. People are going to do it. It probably should be done not only for roots, but for TLDs. In this particular community or area, the Asia Pacific TLD group is contemplating doing coordinated anycast distribution. When you start thinking about those things, there's IP version 6 - you can now get v6 transport to root nameservers and they will resolve, which means that your TLDs can do IPv6, and then you need to worry about your children and their capability, and the caches being able to support IPv6. So the problem is not so much the authoritative servers, it's the children.

DNSSEC - there is some discussion still about DNSSEC, right? Ed has an opinion and he will share it with you. But there are reasons for DNSSEC. And then people are doing more and more traffic analysis. We saw some - you know, CN has done a very good job in doing their traffic analysis. And the work done with DSC is a very good tool for doing traffic analysis. What that tells you is what you're currently doing, and you can infer when you're sick, but it doesn't really suggest preventative measures. So if you see bad things happening, you know they're happening, but there aren't really good mechanisms in place now to go fix the problems necessarily. And sometimes you can't fix them because they're outside of your span of control.

So as we start looking at these things, this is the DNS that is designed for a fixed-line, tethered environment where you can always reach your authoritative nameservers. The Internet technologies that I see coming forward are for mobility, for things that are not connected all the time. And so how is name resolution going to work in that kind of arena?

If you think going forward - and our assumptions are changing - with mobility instead of anchored nodes, how do you integrate those into the DNS? We need to ensure verifiable, accurate data with a level of integrity that we don't have now, particularly as the opportunity comes for more and more people to claim that they're Ed Lewis, because we all want to be like Ed, right?

There is a question about the single name space. It's a good idea. It's worked really, really well. But there are also some valid arguments for disjoint name spaces, and the original DNS protocols allowed for that, but we have grown away from it. The implementation choices for the DNS reflect 1980s design constraints, where compute cycles were relatively inexpensive and bandwidth was expensive. I think that's kind of inverted now. Bandwidth is relatively inexpensive and compute cycles are almost free. I have more compute cycles in my iPod than in the VAX-11/780 that I was using 25 years ago. And then there's the question of whether all addressable devices need a name. With IPv6 we're going to get several orders of magnitude more addressable things on the Internet.

Does each of them need a name? We don't know that. If they do, then, when the query loads go up accordingly, things are going to get interesting. So as we look at providing continuity of operations, don't just think about continuity of operations in today's environment, but look forward to what the Internet is going to grow into.

There are other things here. There are synthetic devices. It may be that you actually want to group clusters of devices together and give them a name. You may also want to have other bindings, so instead of a name to an address, you may want to go from a name to a key, or a key to an address, to support some of the mobility protocols. You want to minimise single points of failure. Denial-of-service attacks are bad. We can't get rid of them. They work best if there's a single point of failure. So having non-single points of failure is really good. That's one of the back-up strategies we had, that shadow system. Third-party caches are really bad. A lot of the issues we're seeing today are where people intentionally manipulate a third-party cache to give you false information.

And the simple delegation hierarchy is based on a database design which may not be the most appropriate as we go forward.

So those are thoughts for the future and maybe, hopefully, Joao, you've got some points you're going to bring up next that may talk to some of this, right?

So, questions? Comments?

SPEAKER FROM THE FLOOR:

I've got a question. Who or what is the source of root server traffic? Do you have any idea? I assume you are a root operator, so you have data.

BILL MANNING:

OK, so let me sort of re-ask the question. The question is what kinds of traffic constitutes garbage at the root?

A number of studies over several years have shown that more than 90% of the traffic at the root nameservers is essentially for stuff that can't be resolved. So it's stuff that should never reach the roots. It should be thrown away. But nobody else knows how to deal with it.

ED LEWIS:

For example, dot-local look-ups or look-ups that seem to be repeated from the same address.

BILL MANNING:

Ed knows this from operating his root nameservers. No. He's read some other presentations. So, yes, so bogus things - things that don't exist in the name space, like .local. Other things that don't exist. There are underscore queries - I can't even remember what it is - _tcp - basically Microsoft Exchange stuff. Microsoft believes that their DNS works inside an enterprise that is partitioned from the Internet. And they maintain that fiction vehemently. When, in reality, people take Microsoft software and they run it unprotected on the Internet.
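The kinds of junk Bill and Ed describe are easy to spot even with crude heuristics. A hypothetical sketch, with made-up query names and a deliberately tiny whitelist of real TLDs; real studies of root traffic use far more careful criteria:

```python
from collections import Counter

def classify(qname: str) -> str:
    """Crudely classify a query name as it might be seen at a root server."""
    labels = qname.rstrip(".").split(".")
    tld = labels[-1].lower() if labels and labels[-1] else ""
    if tld in {"local", "localdomain", "lan", "home", "workgroup"}:
        return "bogus-tld"    # names that can never exist in the root
    if any(l.startswith("_") for l in labels):
        return "underscore"   # _ldap._tcp-style service lookups leaking out
    if tld in {"com", "net", "org", "jp", "cn", "nz"}:
        return "plausible"    # a tiny whitelist, just for this sketch
    return "other"

# Invented sample of query names, standing in for a capture file.
queries = [
    "www.example.com.", "myhost.local.", "_ldap._tcp.corp.example.",
    "printer.lan.", "ns1.example.nz.", "asdfgh.local.",
]
tally = Counter(classify(q) for q in queries)
assert tally["bogus-tld"] == 3
assert tally["underscore"] == 1
assert tally["plausible"] == 2
```

Even this toy classifier flags most of the invented sample as junk, which mirrors the "more than 90% garbage" figure from the studies Bill cites.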

SPEAKER FROM THE FLOOR:

How do you feel about Microsoft, Bill?

BILL MANNING:

I love Microsoft. Microsoft is a great company. It provides me with employment, indirectly. But those kinds of things happen because they make those assumptions - about things that should never be on the public Internet, but are. We see a lot of the traffic as being stuff that comes from Microsoft, comes from other companies, other applications, that expect to be behind a firewall.

So there's a lot of junk there. If we didn't have the kinds of junk showing up at the roots, we could provision a lot less.

More questions?

They don't have any questions. So I have a question. How many people here run an authoritative root nameserver? Three, four, five. OK.

What would happen if I was to cause disk errors on your RAID array and cause your database to be corrupted? How would you recover your DNS authoritative name service? Do you know?

ED LEWIS:

First, I'd fire you.

BILL MANNING:

Yeah, yeah. But that's not going to help you.

If you don't have a plan for how to recover from data corruption, or how to quickly turn over the authoritative data in your nameservers, you probably should think more about your continuity of operations. If you have a power outage and your data centre goes dark, have you tested your back-up? How often do you test your back-up? "I tested my back-up four years ago. It worked great." That's not enough, right?

And what does it mean to test your back-ups? Some people actually flip over and use their back-ups live in production, which is a really good idea. Right? So the idea of building and constructing a very good DNS service really isn't done properly until you've got a contingency plan in place. And if you've done it, then you need to tell all the people who get service from you that they need to do a better job too.
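Bill's admonition about actually testing back-ups can be reduced to a minimal, regularly runnable check: record a checksum at backup time, then periodically restore and compare. A sketch, with the file names and zone contents invented for illustration:

```python
import hashlib
import tempfile
from pathlib import Path

def checksum(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def restore_and_verify(backup: Path, target: Path, expected: str) -> bool:
    """Restore a zone file from backup and confirm it matches the checksum
    recorded at backup time - the minimum test worth running regularly,
    not once every four years."""
    target.write_bytes(backup.read_bytes())
    return checksum(target) == expected

tmp = Path(tempfile.mkdtemp())
live = tmp / "example.zone"
backup = tmp / "example.zone.bak"

live.write_text("$ORIGIN example.\n"
                "@ IN SOA ns1 admin 2008022801 3600 900 604800 86400\n")
backup.write_bytes(live.read_bytes())
recorded = checksum(backup)

live.write_text("CORRUPTED")  # simulate the disk errors Bill threatened
assert checksum(live) != recorded
assert restore_and_verify(backup, live, recorded)
assert checksum(live) == recorded
```

A real continuity plan also covers serving the zone while the restore happens, but even this much catches silently rotten back-ups.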

That's my admonition.

Thank you. Joao, I think it's you.

ED LEWIS:

Thank you, Bill. Let's thank Bill for speaking once again.

And after Joao speaks, I was informed that we can actually go ahead with our co-chair election. So after Joao is done, please stick around for a very brief election run. So... thanks.

BIND 10

JOAO DAMAS:

OK, so I'm just going to give a few notes on something new at the ISC which is basically a new version of BIND.

So most people who run BIND as their nameserver choice these days are running BIND 9, and that's OK, but we are still fighting to convince people who are using BIND 8 that it is actually not such a good idea. BIND 8 has many design flaws and you'd be better off moving away from it.

On top of that, we are actually not going to do any more development on it, not even security patches.

So BIND 9. BIND 9 does a lot. It is protocol complete as far as the DNS is concerned. It has good performance for most things. There might be a few cases where you need more performance but those are really, really rare. It's open source. It's available everywhere and it's free.

So given those characteristics, what possesses us to go and try to do something new? Well, BIND 9 has many good things, but it also has a few things that could be done better. For one, it's too big, it's too monolithic. When you build it, it does everything DNS-related, whether you want it or not. That, in itself, has caused some configuration problems here and there if you're not aware of how it operates by default. And the other thing is that even though it is open source and you can look at the source and reuse it wherever you want, it's really become such an elephant that it's not easily adaptable by anyone out there. You have to invest a lot of effort to get to know it in order to extract the parts that you might want or modify its behaviour to suit your needs.

The configuration and management is looking a little bit old. We still use static configuration files controlled by kill signals, and store data mostly in static text files. It fits the bill, but I think more and more people need to have more diverse options. And one thing about a piece of software that has been around for nine years now: it has grown organically over the years and, as with any piece of software, when this happens, the cleanness of the initial design is pretty much gone. There are too many temptations along the way to take short cuts, etc.

So those are the arguments for trying to start doing BIND 10. We have found that, um, these shortcomings I mentioned end up resulting in increased cost for the people who operate the nameservers, because they have to extend it, work around it, or adapt their systems, which would otherwise be fully functional, to be able to work with BIND.

So given that those are the reasons, and, as you saw, we are going to be concentrating more on the management and operations part of BIND, perhaps addressing a little bit of what Bill was referring to, rather than the DNS protocol, which is pretty well supported right now, how are we going to go about it? Well, basically we need two things. One is funding and the other thing we want is input from you. My goal here is to try to explain the goals we have for BIND 10, the ones we have identified so far, and see if that resonates with any of you. If you have come across problems in using BIND, or just inconveniences, perhaps let us know. Because now is the time when everything is changeable.

So first and foremost, the biggest thing we want to address is a more modern configuration. We will be building a backbone for communication between the servers - different types of DNS servers, or DNS and DHCP servers, depending on what you use internally - and have the servers themselves be the things that ride on the backbone. And the backbone is not necessarily constrained to one single machine, so that you can easily build clusters and manage them. But it is self-contained, and the idea also is that it is well structured and documented so that anyone else can use it for other projects.

One clear shortcoming that current BIND 9 has is that it only properly supports one storage back-end. I'm not referring to the disk back-end but rather how things are stored in memory. Trying to use anything else is rather hard. There is an API defined that allows you to build modules on top of it, but it's not complete and hasn't seen development for a long time. There are things out there, programs like DLZ - dynamically loadable zones - which are not produced by ISC and give you support for SQL back-ends, but it misses too many features, basically because it is hard to make it complete. We want to address that.
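The pluggable storage Joao describes would presumably look like an interface the nameserver codes against, with in-memory, SQL, or other implementations behind it. An entirely hypothetical Python sketch of the shape of such an API - not BIND's actual internals:

```python
from abc import ABC, abstractmethod

class ZoneBackend(ABC):
    """A hypothetical pluggable storage API: the nameserver only ever
    calls lookup(); each back-end decides how records are stored."""

    @abstractmethod
    def lookup(self, name: str, rtype: str) -> list:
        """Return the rdata for (name, rtype), or an empty list."""

class MemoryBackend(ZoneBackend):
    """The one back-end BIND 9 effectively supports today: in-memory."""

    def __init__(self):
        self.records = {}

    def add(self, name: str, rtype: str, rdata: str) -> None:
        self.records.setdefault((name, rtype), []).append(rdata)

    def lookup(self, name: str, rtype: str) -> list:
        return self.records.get((name, rtype), [])

# An SQL-backed class implementing the same interface could be swapped
# in without the query-answering code noticing - which is the point.
be = MemoryBackend()
be.add("example.nz.", "A", "192.0.2.1")
assert be.lookup("example.nz.", "A") == ["192.0.2.1"]
assert be.lookup("example.nz.", "AAAA") == []
```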

It can be used to build clusters. That's one definite goal. As Bill mentioned in his slides several times, anycast is going to be more and more deployed, more and more used to get around the constraints of DNS. And it would be nice to have a way to control that - where the nameservers themselves are aware of what they are up to and not just hanging out there like they used to be.

Modular in functionality - what do we mean by this? Right now, when you compile BIND 9, you end up with something that is a recursive nameserver, and perhaps that's not what you want. Perhaps you really only want an authoritative nameserver; if you are going to switch off the functionality, you might as well not have the code in the first place. It is also a generic nameserver, so it doesn't address the needs of people who need to use it, for instance, in embedded devices like routers. And we would like to at least provide a protocol-compliant option for the people who build nameservers into their devices - routers, for instance. There are many of them out there and unfortunately a lot of them are broken. So providing a solution for those people would be nice.

And then make it more usable. What is the point of having an open source product if it is so complicated it cannot be used by anyone else out there without a massive effort to understand it? So we'll be breaking it down into little pieces that can be more easily chewed.

So what can you do? What am I here to ask you? Well, send us any requests, any ideas - it doesn't matter at this point how wacky they may seem. We'll put them all together and eventually pick the ones that will get implemented. These, of course, will depend on what sort of funding we get. And then once you go out, tell others. If you know anyone operating nameservers out there who has been using BIND and has any sort of idea of, "I wish they would do this," well, this is the perfect time to send the comments in. We're trying to make it not only open source but a more community-based project. We have been a bit isolated lately.

And that was the goal of this presentation.

ED LEWIS:

Alright. Questions for Joao?

SPEAKER FROM THE FLOOR:

So what sort of time frame do you think until we'll be seeing BIND 10? How long do you think this process will take?

JOAO DAMAS:

To have it feature-complete will take, we think, around three years, but the idea is not to go away and not have anyone hear from us for three years. So the idea is to start by addressing the backbone thing, as that is going to be the basis underlying everything, deliver that with the basic nameserver on top of it as a first layer, and then start doing point releases as we go along, because that's the only way we see of having live feedback on what people want to see in it.

SPEAKER FROM THE FLOOR:

Do you anticipate, as you're going along and having those point releases, the code being open the whole time? Being an open-source project from that point of view?

JOAO DAMAS:

Yes.

ED LEWIS:

Any more questions?

SPEAKER FROM THE FLOOR:

Yeah, I've got one.

With BIND 10 it sounds like you're writing a whole new set of software pretty much from scratch, is that right?

JOAO DAMAS:

Yes and no. We are reassembling it, but there are definitely pieces of code from BIND 9 that don't need any rewrite at all. The code that, once you have the data to answer a query, actually marshals that data into a DNS packet - there's no need to change that. It's well defined. It operates well. It has been tested. It would be silly to throw it away.

On the other hand, the code that gathers statistics can all be gone. As it is, it doesn't quite do the job properly. It's hard to expand. The same goes for the multithreading model, for instance. Until very recently, the effect of running BIND on more than one CPU was that it would run slower than it did on one CPU. We fixed that, but to get better things we actually need to look at the model per se. Individual functions and methods will be portable, but the whole architecture will need to be different.

ED LEWIS:

OK. Any more questions? If not, I'll - excuse me. If not, we have one more item. Sunny has informed me that there was a call for co-chairs for this group, starting a year ago. We're still in the process, so we could look for a co-chair, and because of that we can actually have an election here under the process that is in place for putting in co-chairs and chairs. And to make this more to the point: each position is for two years. The chair is two years, the co-chair is two years, staggered. So right now I'm entering the second year of the chair position, and it's just the right time to have someone come in as co-chair. So what that means is that we can actually hold the election now for the people in the room, and I guess what I'll open to the floor is: does anybody have any questions for our candidate?

We have one candidate. We don't have two, we only have one. So it's pretty much do we want him or not?

Co-chair election

SPEAKER FROM THE FLOOR:

Don't go away. Because if we have this one candidate, he's a mystery to me. I've never seen or heard of him before in my life. Which rock did you crawl out from under, and what justification do you have for trying to do this? Oh, OK, based in Wellington. If that's your criteria, then sure. No. Tell us why.

SAM SARGEANT:

My name's Sam Sargeant. I'm technical director of a hosting company based in Wellington called One Squared. I'm also a councillor of InternetNZ, which is the ccTLD operator for .nz. I'm involved somewhere in NZNOG, the New Zealand Network Operators Group. I haven't been to many APRICOTs - any APRICOTs before. This is my first one.

I did a presentation at NZNOG last year about DNS, which Gaurab saw, and he suggested at the time that I run for the chair position. I wasn't able to travel to APRICOT that year and so wasn't available to do so. I'm keen to help out with the industry. I think this group is important on one hand, yet lacking in purpose on the other. We need something. We need a forum to talk about DNS issues; however, it just works. It's not a big problem for us, it just keeps ticking over in terms of the reverse mandate.

There's certainly room to see what else is going on in the region, see what's happening. I'm eager to help out, coordinate where possible. I'm not looking for anything more than that. Yeah. That's me.

Does that answer your question, Bill?

BILL MANNING:

No.

SAM SARGEANT:

What entitles me? Why should I?

BILL MANNING:

To answer my question, you'll have to (inaudible)

ED LEWIS:

Don't worry, he is really bad. I'll stand a little further away.

Are there any more ways you want to grill our candidate? Because you're stuck with him for two years if he gets elected.

BILL MANNING:

(Inaudible)

ED LEWIS:

Only one more year. Then you're going to run, Bill, right?

BILL MANNING:

Is that a directive? (Inaudible)

ED LEWIS:

A year from now I'll let you know. Any more questions for Sam?

SPEAKER FROM THE FLOOR:

Yes. You mentioned all your political experience. What sort of actual experience do you have with DNS in itself?

SAM SARGEANT:

OK, so I run the (inaudible) but I also run the nameservers. I deal quite a lot with government and their operations in New Zealand. I run the (inaudible) hosting business, which does DNS services. It's New Zealand so to be honest it's a very small market. Our problems are much smaller than you'll experience in the rest of the AsiaPac region. But it's still relevant.

BILL MANNING:

And how much activity do you have across the region beyond New Zealand? Do you get involved with other parts of the area? Do you plan to?

SAM SARGEANT:

I certainly plan to. That's why I'm here now.

BILL MANNING:

OK.

ED LEWIS:

Any more questions? Because if not, we'll go into the election, and I will ask for, I guess, three votes. The first one will be who is for his candidacy as co-chair. The second will be who is against him being co-chair, and the third is abstaining.

SPEAKER FROM THE FLOOR:

Can you ask if anyone is (inaudible) as well?

ED LEWIS:

Thank you. OK, yes, OK. Before we go into that, is there anybody else here who would like to be considered on the spot? Bill?

BILL MANNING:

Not till next year. You told me next year.

ED LEWIS:

I tell you what, Bill. You could run this year if you want.

The question is how much of this requires English as - basically English skills?

Actually I would turn that to Sunny. Is there a requirement from APNIC? I think that is not covered in the job description.

SRINIVAS CHENDI:

You must attend at least one APNIC meeting in a year. You must be present at an APNIC meeting.

ED LEWIS:

OK. OK, I understand. So I guess we'll go ahead right now with the vote that I just described. Three questions. For his being co-chair. Second question against his being co-chair and the third will be abstaining or just don't want to give an answer.

I guess you could even just not give an answer or say anything. But either way...

So the first count is going to be raise your hand if you would like to see Sam serve as co-chair of the group.

SPEAKER FROM THE FLOOR:

Raise your hand. Raise your hands.

ED LEWIS:

People in the room. People in the room, yes. 10 for?

How many people would rather not have him serve as co-chair, which means either we will not have one at this meeting for a while or someone else could still run before we close the meeting right now. But how many would like him not to be co-chair for whatever reason? I can not be voted off the island yet, Bill. Next year.

OK, so the count - raise your hands if you do not want him to serve as co-chair.

I hate to make it sound so personal, but...

OK, I see no hands, even budging, so we'll put down a zero for that.

And how many people want to abstain, just say that they don't want to cast a vote either way for the time being? We have two. You can, you know, don't feel shy now. There's at least two, so it's not going to be just you. OK, so I guess two is the number for abstaining. So I get 10, 0, 2, which means Sam is saddled with the co-chair position, which means 7:30 tomorrow morning we have a meeting.

SAM SARGEANT:

Thank you. Thank you very much.

ED LEWIS:

Thank you.

APPLAUSE

And so that will conclude our meeting for today and the buses for the social start in 14 minutes, so I hope to see you there.

OK. Thanks.