Some NSFNET reflections
by Hans-Werner Braun, Co-Principal Investigator (1987-1991), NSFNET Backbone
November 13, 2007
The NSFNET did not start with the Backbone award to Merit (University of Michigan) in November 1987; it was significantly predated by the 56kbps NSFNET backbone, built around mid-1986 to interconnect six NSF-funded supercomputing sites, i.e., the five then-new supercomputer centers at
- General Atomics -- San Diego Supercomputer Center, SDSC
- University of Illinois at Urbana-Champaign -- National Center for Supercomputing Applications, NCSA
- Carnegie Mellon University -- Pittsburgh Supercomputer Center, PSC
- Cornell University -- Cornell Theory Center, CTC
- Princeton University -- John von Neumann National Supercomputer Center, JvNC
plus the existing supercomputing facilities at the National Center for Atmospheric Research (NCAR). That does not mean I was uninvolved before becoming Co-Principal Investigator on the NSFNET Backbone award to Merit. Two new NSF-funded network infrastructure activities extended to the Computing Center at the University of Michigan, where I worked at the time: NCAR's University Satellite Network (USAN) project and SDSC's SDSCnet connection, both via geostationary satellite.
This 1986 photo shows the roof of the Computing Center at the University of Michigan, with the USAN satellite antenna on the left and the SDSCnet antenna on the right.
Being responsible for the local Michigan site allowed me to attend many related meetings. One of the more important ones was a meeting at NCAR between staff of the supercomputing centers and Dennis Jennings of the NSF, to discuss plans for a national backbone network based on the Internet Protocol. At that time, considering IP was somewhat gutsy. The federal government had just issued a mandate not to use IP, but to embrace GOSIP (the OSI protocol suite) instead. The days of the Internet, whose applications were generally confined to a United States Department of Defense network-research context via the ARPAnet, seemed to be drawing to a close.

Even network protocol use among the supercomputing centers was inconsistent. SDSCnet embraced the Department of Energy MFEnet protocols, and USAN used Ethernet bridging. Both were problematic: MFEnet was not interoperable with other networking protocols, while bridged networks were hard to manage (we had "ARP wars in the sky"). And, for that matter, Merit used home-grown protocols for its own network across multiple universities. Without NSF's decision to embrace the Internet Protocol as the common denominator for an interoperable platform, it is unlikely that the Internet would have developed into the global cyberinfrastructure it is today, at least not that early on. My speculation is that phone companies would instead have come to dominate the landscape, using X.25 or other OSI protocols for data communications, perhaps alongside islands of non-interoperable network protocols. But one never knows what would have happened.
[Figure: Topology of the original 56kbps NSFNET backbone]
Glenn Ricart and I were sitting in on the NCAR meeting with Dennis Jennings, since we were really waiting for a different meeting to happen and kind of did not have anything better to do. I really misbehaved during the meeting, as I just could not accept the technology the supercomputing centers wanted to use for the 56kbps NSFNET backbone. I thought they should use LSI11 nodes with Dave Mills' Fuzzball software, as they were the most evolved Internet Protocol engines I knew of at the time. But I was not gutsy enough to specifically mention Fuzzballs. Glenn Ricart did, though, and to our surprise the group adopted that notion.
This photo shows the specific machine on which almost all the software for the original 56kbps NSFNET backbone was compiled, before it was transferred to and loaded onto the backbone nodes.
| Original 8" floppy disks for the 56kbps NSFNET backbone nodes |
Fuzzball-based 56kbps NSFNET backbone node |
|
The responsibility for the 56kbps NSFNET backbone was awarded by the NSF to UIUC (Ed Krol), with operations residing at Cornell University (Alison Brown and Scott Brim). When the links and nodes were deployed, the people responsible had problems making them work, so I made it work, and from then on ran the 56kbps NSFNET backbone, unofficially but broadly known, via the USAN satellite link from the University of Michigan until 1988, when it was replaced with the backbone network that was part of the new award to Merit.

I recall the 56kbps NSFNET time as extremely stressful, because I had no support staff and no one else who understood the infrastructure at its systemic level that I could have talked to. That does not mean others were not helpful. Dave Mills was just great in supporting the software on the backbone nodes (and he should get a lot more credit than he does for much of his pioneering Internet work), Scott Brim and his group worked on statistics and on software for a routing processor (which turned into gated), and Ed Krol represented NSFNET centrally and provided reports. But all of that just seemed like components to me. The unpleasant scare was in the system as a whole!

The award to Merit via the University of Michigan happened in November 1987, with Eric Aupperle as Project Director and me as Co-Principal Investigator. The award was met by the Internet community with teasing and some outright hostility. Some believed the task at hand was impossible (some would rather have gone "trout fishing" than expect us to succeed; one person told me the project would make me end up with a screaming fit or go quietly insane), while others were upset about the IBM and MCI project partners, which were largely unknowns in the Internet realm. Some of it was perhaps justified. Early on, when we talked with MCI about wanting unchanneled T1 (1.5Mbps) links for the infrastructure (rather than multiplexing them down to 56kbps voice channels), they thought we were crazy. When we then told them that a few years later we would want unchanneled DS3 (45Mbps), they thought we were completely insane. But, to their credit, they worked with us through those issues, and IBM made the backbone nodes work, delivering the new NSFNET backbone on the date promised: July 1, 1988.

Evidence for the delivery of the new backbone is a summary of node reachability, sent on June 11, 1988 to the mailing list (Network Working Group, NWG) that included NSFNET backbone-related staff, NSF, and staff of the to-be-connected regional networks:
From hwb Sat Jun 11 21:38:26 1988
To: nwg@merit.edu
Subject: No Trout Fishing, Bob!
netstat -rn
Routing tables
Destination Gateway Flags Refcnt Use Interface
129.140.1 129.140.17.17 UG 0 361 lan0
129.140.2 129.140.17.17 UG 0 0 lan0
129.140.3 129.140.17.17 UG 0 701 lan0
129.140.4 129.140.17.17 UG 0 0 lan0
129.140.5 129.140.17.13 UG 0 0 lan0
129.140.6 129.140.17.13 UG 0 0 lan0
129.140.7 129.140.17.11 UG 0 0 lan0
129.140.8 129.140.17.12 UG 0 0 lan0
129.140.9 129.140.17.12 UG 0 0 lan0
129.140.10 129.140.17.15 UG 0 0 lan0
129.140.11 129.140.17.13 UG 0 236 lan0
129.140.12 129.140.17.13 UG 0 0 lan0
129.140.13 129.140.17.13 UG 0 956 lan0
129.140.14 129.140.17.13 UG 0 0 lan0
129.140.15 129.140.17.11 UG 0 0 lan0
129.140.16 129.140.17.11 UG 0 0 lan0
129.140.17 129.140.17.1 U 12 191714 lan0
129.140.45 129.140.17.17 UG 0 0 lan0
129.140.46 129.140.17.17 UG 0 0 lan0
rcp-17-1(18):
and the announcement about letting operational traffic onto the network on June 30, 1988:
Date: Thu, 30 Jun 88 20:02:36 EDT
From: Hans-Werner Braun
To: nwg@merit.edu
Subject: Network to be operational.
Cc: hwb
The NSFNET Backbone has reached a state where we would like to more
officially let operational traffic on. There are some sites who are
doing that inofficially for a while already. We would like to still
view this phase as "operational testing" for the near term future
and will require some maintenance windows to update software and such.
We will try to have those times generally at off hours and coordinated
by our Network Operation Center. The routing phasein from our site is
coordinated by Susan Hares, who is the Routing Coordinator we proposed
in the response to the NSFNET solicitation. I would like to encourage
sites to closely coordinate the phaseins with her and it would be
extremely helpful to inform her of changes, additions and such. While
she can be reached at skh@merit.edu we would prefer if you would send
updates and questions to nsfnet-admin@merit.edu (for now) so that
several people see it and routing problems get resolved sooner (like at
off hours or when Susan is not around). Also problems and suggestions
could for now be sent to the nsfnet-admin email address. In any case,
Susan will call individual sites to coordinate the phasein of the new
backbone.
Generally, problem reports at this point of time should be called in
to 313-764-9430 which is a Merit receptionist desk. Someone there can
then route the call to the NOC or wherever else is appropriate. We
also have set up a mailing list nsfnet-noc@merit.edu.
-- Hans-Werner
Later in July 1988, I sent out email about discontinuing the old 56kbps Fuzzball-based NSFNET backbone:
Date: Mon, 18 Jul 88 21:05:44 EDT
From: Hans-Werner Braun
To: nwg@merit.edu
Subject: Cooling Fuzzballs.
Cc: hwb
On Wednesday, 20 July 1988, 13:00EDT we would like to physically turn the
Fuzzballs off to see whether we have remaining issues with regional
routing. It is important that they are really off, in case people still
listen to Hello packets and/or are using static routes. The phone lines will
then stay for another few days, during which we can turn the Fuzzballs back
on, if there is a requirement which cannot be resolved immediately. The
phone lines will, I think, go around the 25th, after which we can turn
the Fuzzballs back on, if desired at some sites (as time servers, for
example). Dave Mills has expressed interest to leave the nodes where they
are, if only as an Internet research tool and/or servers for accurate time.
They will obviously NOT be supported by Merit/IBM/MCI at that time, but I
think Dave Mills will gladly do care and feeding, software wise.
-- Hans-Werner
Another difficulty around that time was that control of the Internet's evolution rested pretty much with the United States Department of Defense, via groups such as their Internet Engineering Task Force. To address those concerns, Scott Brim and I met with people in the Pentagon to convince the DoD to at least open up the IETF to a larger community, specifically the NSFNET and its associated regional networks. To our surprise, one meeting was all it took, and they agreed, which led to a rapid expansion of the IETF with a lot of involvement from many constituents over time.

NSFNET introduced a complexity into the Internet which the existing network protocols could not handle. Up to the NSFNET, the Internet consisted basically of the ARPAnet, with client networks stubbed off the ARPAnet backbone. That is, the hierarchy between so-called Autonomous Systems (AS) was linear, with no loops or meshes, and the Exterior Gateway Protocol (EGP) used for inter-AS routing carried only the AS number of the routing neighbor. This made it impossible to detect loops in an environment where two or more separate national backbones with multiple interconnections existed, specifically the ARPAnet and the NSFNET. I determined that I needed an additional "previous" AS number in the inter-AS routing to support a meshed Internet with many administrations for its components. Meetings with various constituents did not get us anywhere, and I needed it quickly, rather than creating a multi-year research project. In the end, Yakov Rekhter (IBM/NSFNET) and Kirk Lougheed (Cisco) designed a superset of what I needed on three napkins on the side of an IETF meeting: it carried not just the "previous" AS number, but all previous AS numbers that an IP network number route had encountered since its origin. This protocol was called the Border Gateway Protocol (BGP), and versions of it are in use to this day to hold the Internet together. BGP used the Transmission Control Protocol (TCP) to make itself reliable. The use of TCP, as well as general "not invented here" sentiment, caused great friction with the rest of the Internet community, which we somewhat ignored as we had a pressing need, and soon, with NSFNET, Cisco, and gated implementations at hand, the Internet community did not have much of a choice. Eventually, and after long arguments, BGP was adopted by the IETF. (A small sketch of the AS-path loop-detection idea follows after the next paragraph.)

While the 1988 NSFNET backbone ran on a T1 substrate, the links were multiplexed via a switch into three 448kbps circuits. This created a connectivity-richer logical network on top of the physical T1 network. The reason for this was initial performance limits of the IBM backbone nodes, specifically their Almaden line cards, until they were replaced about a year later with full-T1-capable ARTIC cards.
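The key idea, carrying the entire AS path with each route so that a receiving AS can recognize its own number and discard a looping announcement, can be shown in a minimal sketch. This is not BGP code, just a toy illustration of the loop-detection principle; the function names and AS numbers are mine.

```python
# Minimal sketch (not actual BGP code) of why carrying the full AS path
# detects routing loops that a single "previous AS" number cannot.

def accept_route(local_as, as_path):
    """Reject a route whose AS path already contains our own AS number."""
    return local_as not in as_path

def propagate(local_as, as_path):
    """Prepend our AS number before announcing the route to a neighbor."""
    return [local_as] + as_path

# A route originated by AS 200 passes through AS 100 and AS 300 and is
# then offered back to AS 100. With the full path visible, AS 100 sees
# itself in the path and drops the route.
path = propagate(300, propagate(100, [200]))   # [300, 100, 200]
print(accept_route(100, path))                 # False: loop detected

# Seeing only the immediate "previous AS" (EGP-style), AS 100 would see
# just AS 300 and could not tell the route had already passed through it.
print(accept_route(100, path[:1]))             # True: loop goes unnoticed
```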
[Figure: Hand-drawn draft from 1987 about the mapping of the 448kbps circuits into the T1 NSFNET backbone network]
[Photo: A pair of 448kbps Almaden cards from the T1 NSFNET backbone]
[Figure: Physical topology of the initial T1 NSFNET backbone, before being multiplexed into the 448kbps logical topology]
[Figure: Resulting logical topology of the initial T1 NSFNET backbone]
[Photo: T1 NSFNET backbone node at the Network Operations Center with the IDNX switch on the right]
[Photo: Closeup of the T1 NSFNET backbone node at the Network Operations Center]
[Photo: View of the NSFNET Network Operations Center in the Computing Center building at the University of Michigan. The machine room is seen in the back on the right.]
After the transition to the fully-T1-capable ARTIC line cards, the separation between physical and logical topologies was abandoned, as no multiplexing was necessary anymore, and the switches were removed at that time. The following email reflects some of the transition strategy, with the numbers in the diagrams being node identifiers of the backbone routers:
Date: Fri, 30 Jun 89 15:44:04 EST
From: Hans-Werner Braun
Message-Id: <8906302044.AA03284@mcr.umich.edu>
To: nwg@merit.edu
Subject: Reconfiguration updates.
As of the 8 June 1988 we completed Phase A of the reconfiguration, resulting
in the following fully redundant T1 topology. The numbers are NSS node numbers.
+---14 +-------------17----8---+
| | | | | |
| | | | | |
| | | | | |
| 13---15----7---16---12----5---10 |
| | | | | |
| | | | | |
| | | | | |
+----6---------+ | 11--------9
| |
+-------------------+
Today the first link of the reconfiguration Phase B was installed:
+-------------------+
| |
+---14 | +-------------17----8---+
| | | | | | |
| | | | | | |
| | | | | | |
| 13---15----7---16---12----5---10 |
| | | | | |
| | | | | |
| | | | | |
+----6---------+ | 11--------9
| |
+-------------------+
=======================================================================
The remaining four steps of the reconfiguration are tentatively
planned
to begin on the 5 July 1989:
=======================================================================
Step 1 remove Boulder to San Diego
install Boulder to Houston
+-------------------+
| |
+---14 | +-------------17----8---+
| | | | | | |
| | | | | | |
| | | | | | |
| 13---15----7---16---12----5---10 |
| | | | | |
| | | | | |
| | | | | |
+----6-------------------+ 11--------9
| |
+--------------+
Step 2 remove Pittsburgh to Houston
install San Diego to Houston
remove San Diego to Champaign
install Seattle to Champaign
+-------------------+
| |
| +-------------17----8---+
| | | | |
| | | | |
| | | | |
+---13---15----7---16---12----5---10 |
| | | | |
14----|---------|---------+ |
| | | |
+----6 | |
| | |
+--------11-----------------------9
Step 3 remove Ann Arbor to Denver
install Ann Arbor to Ithaca
even exchange College Park to Princeton
even exchange Ann Arbor to Princeton
(for routing DRS capable Cross Connects)
+------------------17----8---+
| |\ | |
| | \ | |
| | \ | |
+---13---15----7---16---12----5---10 |
| | | | |
14----|---------|---------+ |
| | | |
| | | |
| | | |
+----6--------11-----------------------9
Step 4 remove Princeton to Ithaca
install Ithaca to College Park
remove Ann Arbor to Pittsburgh
install Pittsburgh to Princeton
+------------------17----+
| | |
| 8----|---+
| | | |
+---13---15----7---16---12----5---10---+
| | | | |
14----|---------|---------+ |
| | | |
| | | |
| | | |
+----6--------11-----------------------9
[Photo: ARTIC card for the non-multiplexed T1 NSFNET backbone]
[Figure: Non-multiplexed T1 NSFNET backbone after the 1989 transition]
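The reconfiguration message above emphasizes keeping the backbone "fully redundant," i.e., no single T1 link failure should partition the network. Here is a minimal sketch of how such a property can be checked for a candidate topology; the adjacency used below is a made-up four-node example, not the actual NSS wiring from the diagrams.

```python
# Sketch of a "full redundancy" check: the network must stay connected
# after the failure of any single link. The example graph below is
# illustrative only, not the actual NSS adjacency shown above.

def connected(nodes, links):
    """Simple reachability check: can every node be reached from one start node?"""
    adj = {n: set() for n in nodes}
    for a, b in links:
        adj[a].add(b)
        adj[b].add(a)
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(adj[n] - seen)
    return seen == set(nodes)

def fully_redundant(nodes, links):
    """True if removing any single link still leaves all nodes connected."""
    return all(
        connected(nodes, [l for l in links if l != failed])
        for failed in links
    )

# Hypothetical four-node ring: survives any single link failure.
nodes = {5, 7, 12, 16}
ring = [(5, 12), (12, 16), (16, 7), (7, 5)]
print(fully_redundant(nodes, ring))       # True
print(fully_redundant(nodes, ring[:3]))   # False: a chain partitions on one cut
```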
During 1990 the plans for the new 45Mbps (DS3) backbone upgrade matured, with additional funding from the National Science Foundation:
Date: Sun, 9 Dec 90 20:08:42 EST
From: Hans-Werner Braun
Message-Id: <9012100108.AA04838@mcr.umich.edu>
To: regional-techs@merit.edu
Subject: T3 NSFNET update.
I learned today that during the IETF meeting a few people expressed concerns
about the lack of information flow regarding the NSFNET T3 upgrade. There has
been no real intention not to keep people informed, it is that we are quite
busy getting the work done and also there have been quite a few open issues
where things could change very easily. However, given that the concern had been
expressed, let me try to summarize where we stand today.
We are in the midst of implementing an eight node T3 network right now. This
is done as an overlay on top of the T1 network, rather than an integration
between the two. Lateron we hope to be able to upgrade the remaining T1 node
sites to T3 and phase out the T1 network.
The T3 architecture that we implement consists of major nodes (Core Nodal
Switching System, C-NSS) co-located in MCI junction places. Junctions are
major Points Of Presence (POP). The intention there is to have a very robust
central infrastructure. Robustness includes things like availability of
bandwidth, uninterrupted power and constant availability of operators. This
central infrastructure forms a two dimensional cloud where the internal
connectivity may change over time. People will be able to use things like
traceroute to figure the internal connectivity, but there is no guarantee
that the connectivity will stay the same over longer periods of time. NSF
in fact liked this kind of a model and requested from Merit to present the
network in that fashion to the outside.
The cloud will initially contain eight C-NSS connected to each other via
clear channel T3 links. Eight Exterior NSS (E-NSS) at individual sites will
connect into this cloud. Some C-NSS connect to more than one E-NSS, some C-NSS
do not connect to any E-NSS and just allow for more robust infrastructure.
The T3 NSS are based on IBM's RS-6000 technology. Specific cards, which will
evolve over time, have been developed to connect to the T3 links as well as
T1 links, Ethernets and FDDI rings. The T3 nodes are rack mounted.
The C-NSS will be located in the Boston, Washington DC, Cleveland, Chicago (2),
Houston, Los Angeles and San Francisco areas. The eight E-NSS will be in
Boston, Ithaca, Pittsburgh, Ann Arbor, Chicago, Urbana-Champaign, San Diego
and Palo Alto.
One of our major obstacles was the early availability of specific cards in the
nodes. We knew that we could not implement all nodes at the same time and
therefore had to stage things. We also had to implement a T3 testbed to
do system integration and testing. The T3 testbed today consists of two C-NSS
(in Michigan and New York) as well as E-NSS in Ann Arbor, Richardson (TX, MCI)
and Milford (CT, IBM (T1)). A further node should be operational soon at IBM in
Yorktown (NY). For the operational network it turned out to be easier (fewer
sites) to start from the west coast. At this point of time San Diego (SDSC),
Urbana-Champaign (NCSA) and Ann Arbor (Merit/NOC) have E-NSS working on the
T3 network and connected to the co-location places in Chicago and Los Angeles.
The Houston C-NSS is likely to start working this week. There is a chance
that we will get the C-NSS and the E-NSS in the bay area working before the
end of the year. That would then allow for half of the T3 sites to be running
before the year is over. We expect to be able to install the remaining nodes
in January. Besides Ann Arbor, the first E-NSS on the T3 NSFNET was installed
at SDSC perhaps a week or two ago and the one at NCSA is working since Friday.
At this point of time there is still some significant work going on to make
further improvements to particularely the software in the nodes and, with
the exception of the node in Ann Arbor, none of the E-NSS on the T3 backbone
are connected to the external network yet.
Hans-Werner
and:
Date: Thu, 20 Dec 90 13:28:07 EST
From: Hans-Werner Braun
Message-Id: <9012201828.AA12284@mcr.umich.edu>
To: regional-techs@merit.edu
Subject: Update on the T3 status.
It seems timely to send out an update as to where we are on the T3 network.
At this point of time E-NSS equipment is in San Diego, Palo Alto,
Urbana-Champaign and Ann Arbor for the operational network and connected.
Five C-NSS are in place in the San Francisco, Los Angeles, Houston and
Chicago areas (two in Chicago). All are up and running on T3 links, except
that the Palo Alto connection to the C-NSS is still running some T3 tests.
We have done significant testing for dynamic routing and network management.
On Tuesday we had the first coast to coast demonstration from an exterior
workstation at the San Diego Supercomputer Center to a machine in White Plains
(New York) where the serial links were running at T3 speeds. There was an
Ethernet in the middle in Ann Arbor. To get coast-coast at this point of
time we had to utilize our test network which exists between Merit, IBM and
MCI locations. The workstation at SDSC had an EGP session with the E-NSS and
we exchanged a "symbolic first file and email message."
We hope to very soon be in a preproduction mode, followed by initial
operational traffic on the new T3 network.
Hans-Werner
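The first of the two T3 messages above describes a two-tier structure: C-NSS meshed over clear-channel T3 links at MCI junctions, with E-NSS at the connected sites homing into that core cloud. Purely as an illustration of that split, here is a small data-model sketch; the location names come from the email, but the specific E-NSS-to-C-NSS homings are assumptions made up for this sketch, not the documented 1990 configuration.

```python
# Illustrative model of the two-tier T3 design described above: a core
# "cloud" of C-NSS and site-attached E-NSS. The C-NSS and E-NSS locations
# are taken from the email; the homing of each E-NSS onto a C-NSS below
# is an assumption for illustration only.

C_NSS = {"Boston", "Washington DC", "Cleveland", "Chicago-1", "Chicago-2",
         "Houston", "Los Angeles", "San Francisco"}

E_NSS_HOMES = {            # E-NSS site -> assumed home C-NSS
    "Boston": "Boston",
    "Ithaca": "Boston",
    "Pittsburgh": "Cleveland",
    "Ann Arbor": "Chicago-1",
    "Chicago": "Chicago-2",
    "Urbana-Champaign": "Chicago-1",
    "San Diego": "Los Angeles",
    "Palo Alto": "San Francisco",
}

def validate(core, homes):
    """Each E-NSS must home into an existing C-NSS; a C-NSS may serve
    several E-NSS or none at all, exactly as the email allows."""
    return all(home in core for home in homes.values())

print(validate(C_NSS, E_NSS_HOMES))   # True: a consistent two-tier layout
```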
Near the end of 1990, while I was visiting SDSC in San Diego, we demonstrated a multi-node 45Mbps backbone capability:
Date: Sun, 30 Dec 90 23:02:01 EST
From: Hans-Werner Braun
Message-Id: <9012310402.AA12061@orion7.merit.edu>
To: regional-techs@merit.edu
Subject: I thought this message from Gerard Newman may be of interest...
Date: Sun, 30 Dec 90 02:19:27 GMT
From: gkn@M5.Sdsc.Edu (Gerard K. Newman)
Subject: T3!
To: steve@nsf.gov, moreland@M5.Sdsc.Edu, kael@M5.Sdsc.Edu, hwb@M5.Sdsc.Edu
Organization: San Diego Supercomputer Center
As of a few minutes ago H-W and I here at SDSC, and Bilal and Elise at
MERIT managed to get "production" traffic routed over the T3 backbone.
At present, SDSC is routing traffic for the following nets over the T3
backbone:
35 MERIT
128.174 NCSA
130.126 NCSA
140.222 T3 backbone
(nevermind that the packets take a hop thru a lowly Sun-3/110 -- that
will be fixed in a better way in the near term).
To wit:
y1.gkn [13]% ping ncsa2.ncsa.uiuc.edu
PING ncsa2.ncsa.uiuc.edu: 56 data bytes
64 bytes from 128.174.10.44: icmp_seq=0. time=81. ms
64 bytes from 128.174.10.44: icmp_seq=1. time=79. ms
64 bytes from 128.174.10.44: icmp_seq=2. time=86. ms
64 bytes from 128.174.10.44: icmp_seq=3. time=89. ms
64 bytes from 128.174.10.44: icmp_seq=4. time=84. ms
64 bytes from 128.174.10.44: icmp_seq=5. time=86. ms
----ncsa2.ncsa.uiuc.edu PING Statistics----
6 packets transmitted, 6 packets received, 0% packet loss
round-trip (ms) min/avg/max = 79/84/89
y1.gkn [14]% telnet ncsa2.ncsa.uiuc.edu
Trying 128.174.10.44...
Connected to ncsa2.ncsa.uiuc.edu.
Escape character is '^]'.
Cray UNICOS (u2) (ttyp003)
For help, send e-mail to consult@ncsa.uiuc.edu or call (217)244-1144
On nights and weekends, call (217)244-0710