So I thought, "Well, you know, we're never going to get commercial networking until the business community sees that commercial networking is actually a business possibility." Given that my title at Google is Chief Internet Evangelist, I feel like there is this great challenge before me. We have three billion users, and there are seven billion people in the world, which means we have four billion people left to convert. So I went to the US government, specifically to a committee called the Federal Networking Council, since it had the program managers from the various agencies, and they had been funding internet research.
I said, "Would you give me permission to connect MCI Mail, a commercial e-mail service, to the internet as a test?" Of course, my purpose was to break the rule that said you couldn't have commercial traffic on the backbone. They kind of grumbled for a while, and then they said, "Well, OK. Do it for a year." So we build it, we hook it up, we start traffic flowing between MCI Mail and the internet, and we announce this.
And, of course, there were a whole bunch of other commercial e-mail service providers that were disconnected from each other. So they all said, "Well, those guys from MCI shouldn't have this privilege." It was pretty dramatic, and it broke many different barriers.
Two years later — well, it was '88, '89 — three commercial internet service providers came into being in the wake of that demonstration.

Wired: So from the beginning, people, including yourself, had a vision of where the internet was going to go. Are you surprised, though, that at this point the IP protocol seems to beat almost anything it comes up against?
Cerf: I'm not surprised at all, because we designed it to do that. This was very conscious. Right at the very beginning, when we were writing the specifications, we wanted to make this a future-proof protocol. The tactic we used to achieve that was to say that the packets of the internet protocol layer didn't know how they were being carried.
And they didn't care whether it was a satellite link or mobile radio link or an optical fiber or something else. We were very, very careful to isolate that protocol layer from any detailed knowledge of how it was being carried.
Plainly, the software had to know how to inject it into a radio link, or inject it into an optical fiber, or inject it into a satellite connection. But the basic protocol didn't know how that worked. And the other thing that we did was to make sure that the network didn't know what the packets had in them.
We didn't encrypt them to prevent the network from knowing — we just didn't make it have to know anything. A packet is just a bag of bits as far as the net is concerned. We would hear people saying, "The internet will be replaced by X.25," or "The internet will be replaced by frame relay," or "The internet will be replaced by ATM," or "The internet will be replaced by add-drop multiplexers." Of course, the answer is, "No, it won't."
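The "bag of bits" idea can be made concrete with a toy sketch. Nothing here is real TCP/IP code; the datagram class and link functions are invented for illustration, assuming only that the network layer hands an opaque blob to whatever medium carries it:

```python
# Toy illustration: the network layer carries a payload it never interprets,
# and each link carries the whole datagram without looking inside it.

from dataclasses import dataclass

@dataclass
class Datagram:
    src: str          # source address
    dst: str          # destination address
    payload: bytes    # opaque to the network layer: just a bag of bits

def send_over_radio(frame: bytes) -> None:
    """Stand-in for a radio link: it sees only bytes, never structure."""
    print(f"radio link carried {len(frame)} bytes")

def send_over_fiber(frame: bytes) -> None:
    """Stand-in for an optical link: identical contract, different medium."""
    print(f"fiber link carried {len(frame)} bytes")

def forward(dgram: Datagram, link) -> None:
    # The network layer frames the datagram as bytes; it neither knows
    # nor cares how the link will carry that blob.
    frame = dgram.src.encode() + b"|" + dgram.dst.encode() + b"|" + dgram.payload
    link(frame)

# The same datagram travels unchanged over either medium.
d = Datagram("10.0.0.1", "10.0.0.2", b"any application data at all")
forward(d, send_over_radio)
forward(d, send_over_fiber)
```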
And that was by design. I'm actually very proud of the fact that we thought of that and carefully designed that capability into the system.

Wired: Right. Are you concerned about the growth of things like Deep Packet Inspection and telecoms' interest in having more control over their networks?
Cerf: First of all, the DPI thing is easy to defeat: all you have to do is use end-to-end encryption. I don't object to DPI when you're trying to figure out what's wrong with a network.
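A minimal sketch of that countermeasure, using only Python's standard library (example.com stands in for any server): once the TLS handshake completes, an on-path inspector can see addresses and packet sizes, but not content.

```python
# Wrap a TCP connection in TLS so a middlebox doing deep packet
# inspection sees only ciphertext.

import socket
import ssl

context = ssl.create_default_context()  # verifies the server certificate

with socket.create_connection(("example.com", 443)) as raw_sock:
    # After the handshake, everything on the wire is encrypted end to end.
    with context.wrap_socket(raw_sock, server_hostname="example.com") as tls:
        tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        print(tls.recv(200))  # first bytes of the (decrypted) response
```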
I am worried about two things. One is the network neutrality issue. That's a business issue: it has to do with the lack of competition in broadband access and, therefore, the lack of discipline that competition would otherwise impose on the market. There is no discipline in the American market right now because there isn't enough facilities-based competition for broadband service. And although the FCC has tried to introduce net neutrality rules to prevent abusive practices like favoring your own services over others, it has struggled: there has been more than one court case in which it was asserted that the FCC didn't have the authority to punish ISPs for abusing their control over the broadband channel.
So I think that's a serious problem. The other thing I worry about is the introduction of IPv6, because technically we have run out of internet addresses — even though the original design called for a 32-bit address, which allowed for about 4.3 billion (2^32) addresses.
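The arithmetic behind that worry is simple; this illustrative snippet computes both address spaces:

```python
# A 32-bit address yields about 4.3 billion endpoints;
# IPv6's 128-bit space is astronomically larger.

ipv4_space = 2 ** 32
ipv6_space = 2 ** 128

print(f"IPv4: {ipv4_space:,} addresses")    # 4,294,967,296
print(f"IPv6: {ipv6_space:,} addresses")    # roughly 3.4e38
print(f"IPv6 is {ipv6_space // ipv4_space:,} times larger")   # 2^96 times
```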
IP packets can be sent and received in a variety of ways that streamline workflows of all types, and they potentially enable more productivity with fewer people. Early AoIP networks were plagued by dropouts, pops, and clicks, and most devices were limited to a paltry 10 Mbps of bandwidth. Some thought IP packets would never be fast enough to deliver real-time audio.
With modern advances in network connectivity, we now know it can be done. Network speed is no longer a practical limiting factor: gigabit speeds and modern switching technology ensure virtually zero packet loss under real-life conditions on copper or fiber optic networks, while providing ample bandwidth for hundreds of channels of audio and other data at each node.
Switched networks isolate traffic at each port, permitting thousands of channels to coexist on a single network without conflicts, dropouts, or errors. IP has helped make sense of it all and, really, saved the day by making our lives a whole lot easier. And the best part: an IP network will remain relevant and easily upgradable for years to come.
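As a toy illustration of the idea, here is a minimal sketch that chops PCM audio into sequence-numbered UDP packets. Real AoIP systems such as AES67 or Dante use RTP framing, PTP clock synchronization, and multicast; the destination address and port below are hypothetical.

```python
# Send 16-bit PCM audio as small, sequence-numbered UDP packets.

import socket
import struct

DEST = ("192.168.1.50", 5004)        # hypothetical receiver
SAMPLES_PER_PACKET = 48              # 1 ms of 48 kHz mono, 16-bit PCM

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_audio(pcm: bytes) -> None:
    """Split a PCM buffer into sequence-numbered UDP packets."""
    frame_bytes = SAMPLES_PER_PACKET * 2           # 2 bytes per sample
    for seq, off in enumerate(range(0, len(pcm), frame_bytes)):
        header = struct.pack("!I", seq)            # 4-byte sequence number
        sock.sendto(header + pcm[off:off + frame_bytes], DEST)

# One second of silence as stand-in audio data.
send_audio(b"\x00\x00" * 48000)
```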
A computer or device called a router (a name changed from gateway to avoid confusion with other types of gateways) is provided with an interface to each network and forwards packets back and forth between them. Requirements for routers are defined in RFC 1812.
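What "forwards packets" means in practice is a longest-prefix match of the destination address against a forwarding table. A minimal sketch, with made-up prefixes and interface names:

```python
# Choose an outgoing interface by longest-prefix match.

import ipaddress

routing_table = [
    (ipaddress.ip_network("10.0.0.0/8"), "eth0"),
    (ipaddress.ip_network("10.1.0.0/16"), "eth1"),   # more specific route
    (ipaddress.ip_network("0.0.0.0/0"), "eth2"),     # default route
]

def next_interface(dst: str) -> str:
    """Pick the interface whose prefix matches dst most specifically."""
    addr = ipaddress.ip_address(dst)
    matches = [(net, ifc) for net, ifc in routing_table if addr in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_interface("10.1.2.3"))    # eth1 (longest matching prefix)
print(next_interface("10.9.9.9"))    # eth0
print(next_interface("8.8.8.8"))     # eth2 (default route)
```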
The Internet Protocol Suite, like many protocol suites, may be viewed as a set of layers. Each layer solves a set of problems involving the transmission of data and provides a well-defined service to the upper-layer protocols, based on services from the lower layers. Upper layers are logically closer to the user and deal with more abstract data, relying on lower-layer protocols to translate data into forms that can eventually be physically transmitted.
This model was not intended to be a rigid reference model into which new protocols must fit in order to be accepted as a standard.
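The layering can be sketched as nested encapsulation: each layer wraps what it receives from above with its own header and never inspects the rest. The header formats below are invented for illustration; real protocols define exact bit layouts.

```python
# Data descends the stack, gaining one header per layer on the way down.

def application_layer(message: str) -> bytes:
    return message.encode("utf-8")

def transport_layer(segment_payload: bytes, port: int) -> bytes:
    return f"PORT={port}|".encode() + segment_payload

def network_layer(packet_payload: bytes, dst: str) -> bytes:
    return f"DST={dst}|".encode() + packet_payload

def link_layer(frame_payload: bytes) -> bytes:
    return b"FRAME|" + frame_payload

wire = link_layer(
    network_layer(transport_layer(application_layer("hello"), 80), "10.0.0.2")
)
print(wire)  # b'FRAME|DST=10.0.0.2|PORT=80|hello'
```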