I was reading about the new root name server going up in Russia on Daily Daemon News and noticed something I hadn't paid attention to before:
ISC operates one of the 13 root DNS servers as a public service to the Internet. ISC has operated F.root-servers.net for IANA (Internet Assigned Numbers Authority) since 1993. F answers more than 272 million DNS queries per day, making it one of the busiest DNS servers in the world. F is a virtual server made up of multiple systems and runs ISC BIND 9 as its DNS server.
Interesting. How long will it be until the number of web searches outnumbers the number of DNS lookups handled by the root DNS servers?
According to SearchEngineWatch.com, Google handles 250 million queries per day (as of February 2003). We know that number is only increasing, but I don't know how quickly. Anyone got a good reference for that number?
More importantly, when (or if?) the number of web searches surpasses the number of DNS queries handled by the root servers, what will that say about the need for a good domain name? Maybe movie previews will all end with the phrase "Google keyword: ..." instead of AOL keywords.
Posted by jzawodn at November 19, 2003 06:40 AM
I'm not sure... Is it possible to have more searches than queries to the root servers, or are you just talking about one specific root server?
Maybe I'm just not getting the question though, it HAS been known to happen ;-)
If DNS responses are cached, then you could have more searches than DNS queries.
Please remember that the F root no longer handles any actual zones these days; all it does is return glue records for the actual authoritative gTLD servers (VeriSign in the case of com/net, UltraDNS for org, etc.).
That's just one root server out of the many, though. But you're right: the net would be a very different place without Google.
High search engine rankings are directly connected to good domain names, so we can't underestimate the importance of a good domain name.
Maybe in the future, new techniques will make the quality of the URL less relevant...
Commercial:
"Coming to a theatre near you...
Finding Nemo 2
Google keyword: Nemo"
One week later:
Bob in IT to Joe in Ads:
"Hey, I have some bad news. SaveDolphins.org now holds the top ten results for Nemo on Google."
As David Dunham pointed out, DNS caching is the key here. It is only very, very rarely that any particular DNS resolver (or DNS resolving cache) has to contact the root servers: a single query for google.com will return the addresses for the servers authoritative for the '.com' part of the namespace (purists would call it 'com.'). The cached result will be reused later for all queries for anything under 'com' - cnn.com, aol.com, zawodny.com, pensee.com, you get the idea :)
Of course, the root servers handle queries for the whole DNS namespace, not just the 'com' zone, and certainly not just 'google.com'. Still, since the responses are cached for two days, and there are only so many top-level domains in the DNS structure, the total number of queries to the root servers per day should not be expected to increase very fast (of course, it will increase as more and more hosts are connected to the Internet, even if some of the resolvers are chained in a hierarchy of their own). The number of queries to Google, or any other reasonably popular search engine, on the other hand, will probably keep on going up steadily or by leaps and bounds in the foreseeable future, as more *people* are connected to the Internet.
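The caching argument above can be made concrete with a small sketch. This is a hypothetical simulation, not real resolver code: the referral targets and two-day TTL are illustrative stand-ins, and a real resolver is far more involved. The point is just that once a resolver caches the '.com' referral, every later query under '.com' skips the root entirely.

```python
import time

# Hypothetical sketch of why caching keeps root-server load low: a stub
# resolver asks the root only once per TLD until the cached referral
# expires. Referral names and data here are illustrative, not real
# root-zone contents.
ROOT_REFERRALS = {"com": "gtld-servers.net", "org": "tld-servers.ultradns.net"}
TTL_SECONDS = 172800  # the two-day caching interval mentioned above

class CachingResolver:
    def __init__(self):
        self.cache = {}        # tld -> (referral, expiry timestamp)
        self.root_queries = 0  # how many times we bothered the root

    def referral_for(self, hostname, now=None):
        now = time.time() if now is None else now
        tld = hostname.rsplit(".", 1)[-1]
        hit = self.cache.get(tld)
        if hit and hit[1] > now:
            return hit[0]                      # cache hit: root untouched
        self.root_queries += 1                 # cache miss: ask the root
        referral = ROOT_REFERRALS[tld]
        self.cache[tld] = (referral, now + TTL_SECONDS)
        return referral

resolver = CachingResolver()
for name in ["google.com", "cnn.com", "aol.com", "zawodny.com"]:
    resolver.referral_for(name)
print(resolver.root_queries)  # 1 -- only the first .com lookup hit the root
```

Four lookups, one root query: that ratio is why root traffic grows with the number of resolvers and TLDs rather than with the number of end-user lookups.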
Of course, this is all MHO, and should be taken with as big a lump of salt as you choose to apply :)
Yup, the caches will make a huge difference, but an interesting number match all the same.
btw, perhaps you could get an IP address to match your phone number, and get another handy personalized service "Jeremy" ;-)
HTTP has caching as well; Google can — and should — be cacheable (it isn't; see here). The hit rate probably won’t be as high as that for DNS (because the cached objects are much larger), but that shouldn’t stop them from making it cacheable.
Google search results can't be cacheable because the ads will change on every request.
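In HTTP terms, the ad problem comes down to cache directives: a page whose content changes on every request has to be marked private or uncacheable for shared caches, unless the changing part (the ads) is split out to a separate URL. A rough sketch of how a shared cache might decide, following the general Cache-Control rules in HTTP/1.1 (RFC 2616); the header values are illustrative, not what google.com actually sends:

```python
# Simplified model of a shared cache's cacheability decision based on
# Cache-Control directives (per the general rules of RFC 2616). Real
# caches also consider Expires, Vary, method, status code, etc.
def is_cacheable_by_shared_cache(cache_control: str) -> bool:
    directives = {d.strip().split("=")[0] for d in cache_control.split(",")}
    # Any of these directives forbids a shared cache from storing/reusing it.
    return not ({"no-store", "no-cache", "private"} & directives)

# A static result page could be marked cacheable...
print(is_cacheable_by_shared_cache("public, max-age=300"))  # True
# ...but per-request ads force the whole page to be uncacheable,
# unless the ads are fetched from a distinct URL of their own.
print(is_cacheable_by_shared_cache("private, max-age=0"))   # False
```

So the ads don't make caching impossible in principle; they just mean the cacheable and uncacheable parts would have to be served separately.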
Looks like you are talking about a system similar to dotDNS. dotDNS is this and much more, but the use of a lookup service like Google is a part of dotDNS.
Google has value ONLY because the common, non-proprietary infrastructure below its level supports it and other services and applications that anyone can offer and that users can select, or not.
That's why VeriSign's Site Finder, and other possible manipulations of the standard DNS services that provide an orderly namespace, are so interesting. They can set that value to 0.
Whoa there, partner... it says that the F root server is made up of a network of machines, so theoretically the 13 root servers ('a' through 'm') could handle all the queries there could ever be, because each DNS name points to multiple servers.
Even if it were one machine, switching from BIND to djbdns would buy it another order of magnitude of performance. :)
The Google keyword advertising thing has already happened. Over a year ago, in fact.
In October of 2002 Wharton ran ads where they decided that instead of reading out a URL, they would just tell people to Google them. No word on how successful the campaign was, though. They basically admitted that if the Google dance screwed them up, they could always pull the ads off the air.
I found this, which says that in May it was 260 million. So divide by pi and square the root... aw hell, you do the math...
http://www.wired.com/wired/archive/11.05/google.html
:)
-mE
DNS and Google are different in that DNS is centrally controlled and maintained. Google uses a crawler to examine web sites and build a searchable database, interpreting what it finds rather than imposing rules on its structure. Google returns names, which are then fed to the DNS system to get the IP address. However, Google could instead return an Internet handle consisting of an IP address and the name the site uses internally. Browsers could store handles in bookmarks, and e-mail clients could store handles in address books. Web sites could publish resource and e-mail tags for the crawler to scan; each would consist of the relevant handle plus information for the search to use. The tags should also contain the handle of the site's (or e-mail account's) previous location, so the crawler could build up a redirect file.
Tags would be constructed in accord with general guidelines, probably in XML format, and Google and its competitors would compete to build the best searches on the data supplied. DNS would become less relevant. Gone would be the simple one-line names to put on business cards. Gone would be the disputes over who gets to use which name, and the need to find an obscure variation because the reasonable name is already taken.
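To make the "resource tag" idea above slightly more concrete, here is a purely hypothetical sketch of what such an XML tag might look like and how a crawler could read it. Every element and attribute name here is invented for illustration; the comment only speculates that the format would "probably" be XML.

```python
import xml.etree.ElementTree as ET

# Hypothetical resource tag: an Internet handle (IP address plus the
# site's internal name) and the handle of the site's previous location,
# which a crawler could use to build its redirect file. All names and
# addresses are invented examples.
tag = ET.fromstring("""
<resource>
  <handle ip="203.0.113.7" name="example-store"/>
  <previous ip="198.51.100.4" name="example-store"/>
  <keywords>books, music</keywords>
</resource>
""")

handle = tag.find("handle")
previous = tag.find("previous")
print(handle.get("ip"), handle.get("name"))  # 203.0.113.7 example-store
print(previous.get("ip"))                    # the old address, for redirects
```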
DNS records are managed by server admins =) and they may need to be refreshed very often.
Funny that you title your post as you have... http://code.google.com/speed/public-dns/
Google DNS is really scary.
Think about all the privacy concerns. If you think they didn't know enough about you beforehand, now they will have access to virtually every single thing you do on a computer if you use their service. True, your ISP currently has that information, but I doubt your ISP has the same interest in using it for advertising purposes that Google does; at the very least, I don't want one company having all of my information.
The article at groovyPost talks a little bit about the privacy concern, but the other updates Google released this week are pretty cool too. You can read it here:
http://www.groovypost.com/howto/geek-stuff/week-end-grab-bag-google-updates-galore/
I think OpenDNS will stay strong, if not stronger from this.