Wednesday, December 2, 2015

Bandwidth



In computing, bandwidth is the bit-rate of available or consumed information capacity expressed typically in metric multiples of bits per second. Variously, bandwidth may be characterized as network bandwidth, data bandwidth, or digital bandwidth.
This definition of bandwidth is in contrast to the fields of signal processing, wireless communications, modem data transmission, digital communications, and electronics, in which bandwidth is used to refer to analog signal bandwidth measured in hertz, meaning the frequency range between the lowest and highest attainable frequencies while meeting a well-defined impairment level in signal power.
Network bandwidth capacity
The term bandwidth sometimes defines the net bit rate (also called peak bit rate, information rate, or physical layer useful bit rate), channel capacity, or the maximum throughput of a logical or physical communication path in a digital communication system. For example, bandwidth tests measure the maximum throughput of a computer network. The maximum rate that can be sustained on a link is limited by the Shannon-Hartley channel capacity for these communication systems, which depends on the bandwidth in hertz and the noise on the channel.
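As a worked illustration of the Shannon-Hartley limit mentioned above, the following Python sketch computes C = B * log2(1 + S/N); the telephone-channel figures used here (3.1 kHz of bandwidth, 30 dB signal-to-noise ratio) are illustrative assumptions, not values taken from the text.

 import math

 def shannon_capacity_bps(bandwidth_hz, snr_linear):
     """Shannon-Hartley channel capacity: C = B * log2(1 + S/N), in bit/s."""
     return bandwidth_hz * math.log2(1 + snr_linear)

 # Illustrative example: a 3.1 kHz analog telephone channel with a 30 dB SNR.
 snr_linear = 10 ** (30 / 10)                      # 30 dB -> a power ratio of 1000
 capacity = shannon_capacity_bps(3100, snr_linear)
 print(f"Theoretical capacity: {capacity / 1000:.1f} kbit/s")   # about 30.9 kbit/s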
Network bandwidth consumption
Bandwidth in bit/s may also refer to consumed bandwidth, corresponding to achieved throughput or goodput, i.e., the average rate of successful data transfer through a communication path. This sense applies to concepts and technologies such as bandwidth shaping, bandwidth management, bandwidth throttling, bandwidth cap, and bandwidth allocation (for example the bandwidth allocation protocol and dynamic bandwidth allocation). A bit stream's bandwidth is proportional to the average consumed signal bandwidth in hertz (the average spectral bandwidth of the analog signal representing the bit stream) during a studied time interval.
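A minimal sketch of the bandwidth-shaping and throttling idea mentioned above, using a token bucket: tokens accrue at the permitted average rate, and a packet is only released when enough tokens are available. The class name, rate, and packet sizes are illustrative assumptions, not any particular product's implementation.

 import time

 class TokenBucket:
     def __init__(self, rate_bps, burst_bits):
         self.rate = rate_bps          # long-term permitted rate
         self.capacity = burst_bits    # maximum burst size
         self.tokens = burst_bits
         self.last = time.monotonic()

     def allow(self, packet_bits):
         """Return True if the packet may be sent now, consuming tokens."""
         now = time.monotonic()
         self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
         self.last = now
         if self.tokens >= packet_bits:
             self.tokens -= packet_bits
             return True
         return False    # caller should queue or drop the packet

 bucket = TokenBucket(rate_bps=1e6, burst_bits=15_000)   # shape to about 1 Mbit/s
 print(bucket.allow(12_000))   # True: fits in the initial burst allowance
 print(bucket.allow(12_000))   # False: must wait for tokens to accumulate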

Channel bandwidth may be confused with useful data throughput (or goodput). For example, a channel advertised at x bit/s may not necessarily transmit data at rate x, since protocols, encryption, and other factors can add appreciable overhead. For instance, much Internet traffic uses the Transmission Control Protocol (TCP), which requires a three-way handshake for each transaction. Although in many modern implementations the protocol is efficient, it does add significant overhead compared to simpler protocols. Also, data packets may be lost, which further reduces the useful data throughput. In general, any effective digital communication needs a framing protocol; overhead and effective throughput depend on the implementation. Useful throughput is less than or equal to the actual channel capacity minus implementation overhead.
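To make the overhead point concrete, here is a rough Python estimate of the best-case goodput of a TCP/IP transfer over Ethernet; the per-packet header and framing byte counts are standard textbook assumptions, and real transfers (handshakes, ACKs, retransmissions) would do somewhat worse.

 def estimated_goodput_bps(link_rate_bps, payload_bytes=1460, overhead_bytes=40 + 38):
     """Best-case goodput: the payload's share of each frame times the link rate.

     Assumed per-packet overhead (illustrative): 20 B IP + 20 B TCP headers plus
     38 B of Ethernet framing (header, FCS, preamble, inter-frame gap).
     """
     frame_bytes = payload_bytes + overhead_bytes
     return link_rate_bps * payload_bytes / frame_bytes

 print(f"{estimated_goodput_bps(100e6) / 1e6:.1f} Mbit/s")   # about 94.9 Mbit/s on a 100 Mbit/s link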

Asymptotic bandwidth
The asymptotic bandwidth (formally asymptotic throughput) for a network is the measure of maximum throughput for a greedy source, for example as the message size or the number of packets per second from a source approaches infinity.

Asymptotic bandwidths are usually estimated by sending a number of very large messages through the network and measuring the end-to-end throughput. Like other bandwidths, the asymptotic bandwidth is measured in multiples of bits per second.
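The following sketch mimics that estimation procedure with a simple timing model (a fixed per-message latency plus size divided by a peak rate); the latency and peak-rate numbers are assumptions standing in for real measurements, and the computed throughput climbs toward the asymptotic bandwidth as the messages grow.

 def transfer_time_s(message_bits, latency_s, peak_bps):
     """Assumed timing model: fixed per-message overhead plus serialization time."""
     return latency_s + message_bits / peak_bps

 peak_bps = 1e9       # assumed asymptotic rate of the path (1 Gbit/s)
 latency_s = 2e-3     # assumed fixed per-message overhead (2 ms)

 for size_bits in (1e4, 1e6, 1e8, 1e10):
     throughput = size_bits / transfer_time_s(size_bits, latency_s, peak_bps)
     print(f"{size_bits:.0e} bits -> {throughput / 1e6:8.1f} Mbit/s")
 # Output rises from about 5 Mbit/s toward 1000 Mbit/s, the asymptotic bandwidth.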

Multimedia bandwidth
Digital bandwidth may also refer to multimedia bit rate or average bitrate after multimedia data compression (source coding), defined as the total amount of data divided by the playback time.
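For example, the average bit rate of a compressed recording follows directly from that definition; the file size and duration below are made-up values used only to show the arithmetic.

 # Average multimedia bit rate = total compressed size / playback time.
 size_bytes = 45 * 1024 * 1024        # an assumed 45 MiB audio file
 duration_s = 40 * 60                 # an assumed 40 minutes of playback
 avg_bitrate_bps = size_bytes * 8 / duration_s
 print(f"Average bitrate: {avg_bitrate_bps / 1000:.0f} kbit/s")   # about 157 kbit/s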

Bandwidth in web hosting
In Web hosting service, the term bandwidth is often[6] incorrectly used to describe the amount of data transferred to or from the website or server within a prescribed period of time, for example bandwidth consumption accumulated over a month measured in gigabytes per month. The more accurate phrase for this meaning, a maximum amount of data transferred per month or other given period, is monthly data transfer.

A similar situation can occur for end-user ISPs as well, especially where network capacity is limited (for example in areas with underdeveloped Internet connectivity and on wireless networks).
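The two meanings are easy to relate numerically: a monthly data-transfer allowance corresponds to a fairly modest sustained bit rate. The 500 GB cap below is an illustrative assumption, not a figure from the text.

 # Sustained bit rate that would exactly exhaust a monthly data-transfer cap.
 cap_gb_per_month = 500
 seconds_per_month = 30 * 24 * 3600
 sustained_bps = cap_gb_per_month * 1e9 * 8 / seconds_per_month
 print(f"{sustained_bps / 1e6:.2f} Mbit/s sustained uses the whole cap")   # about 1.54 Mbit/s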

ASCII


From Wikipedia, the free encyclopedia
ASCII chart from a 1972 printer manual (b1 is the least significant bit)
ASCII (/ˈæski/ ass-kee), abbreviated from American Standard Code for Information Interchange,[1] is a character-encoding scheme (the IANA prefers the name US-ASCII[2]). ASCII codes represent text in computers, communications equipment, and other devices that use text. Most modern character-encoding schemes are based on ASCII, though they support many additional characters. ASCII was the most common character encoding on the World Wide Web until December 2007, when it was surpassed by UTF-8, which includes ASCII as a subset.[3][4][5]
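A quick Python check of the backwards compatibility noted above: the 128 ASCII code points encode to the same single bytes under UTF-8 (the sample string is arbitrary).

 text = "ASCII chart"
 print(text.encode("ascii") == text.encode("utf-8"))   # True: identical byte sequences
 print(list(text.encode("utf-8"))[:5])                 # [65, 83, 67, 73, 73]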
ASCII developed from telegraphic codes. Its first commercial use was as a seven-bit teleprinter code promoted by Bell data services. Work on the ASCII standard began on October 6, 1960, with the first meeting of the American Standards Association's (ASA) X3.2 subcommittee. The first edition of the standard was published during 1963,[6][7] underwent a major revision during 1967,[8][9] and experienced its most recent update during 1986.[10] Compared to earlier telegraph codes, the proposed Bell code and ASCII were both ordered for more convenient sorting (i.e., alphabetization) of lists, and added features for devices other than teleprinters.
Originally based on the English alphabet, ASCII encodes 128 specified characters into seven-bit binary integers as shown by the ASCII chart above.[11] The characters encoded are numbers 0 to 9, lowercase letters a to z, uppercase letters A to Z, basic punctuation symbols, control codes that originated with Teletype machines, and a space. For example, lowercase j would become binary 1101010 and decimal 106. ASCII includes definitions for 128 characters: 33 are non-printing control characters (many now obsolete)[12] that affect how text and space are processed,[13] and 95 printable characters, including the space (which is considered an invisible graphic).
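The lowercase j example above can be reproduced in a couple of lines of Python, which print each character's decimal code and its seven-bit binary form (the extra sample characters are arbitrary).

 for ch in ("j", "J", "0", " "):
     code = ord(ch)                                   # the character's code point
     print(f"{ch!r}: decimal {code:3d}, binary {code:07b}")
 # 'j' prints as decimal 106, binary 1101010, matching the example in the text.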

Archive

archive


noun
1.
Usually, archives. documents or records relating to the activities, business dealings, etc., of a person, family, corporation, association, community, or nation.
2.
archives, a place where public records or other historical documents are kept.
3.
any extensive record or collection of data:
The encyclopedia is an archive of world history. The experience was sealed in the archive of her memory.
4.
Digital Technology.
  1. a long-term storage device, as a disk or magnetic tape, or a computer directory or folder that contains copies of files for backup or future reference.
  2. a collection of digital data stored in this way.
  3. a computer file containing one or more compressed files.
  4. a collection of information permanently stored on the Internet:
    The magazine has its entire archive online, from 1923 to the present.
verb (used with object), archived, archiving.
5.
to place or store in an archive:
to vote on archiving the city's historic documents.
6.
Digital Technology. to compress (computer files) and store them in a single file.
Origin of archive
1595-1605
1595-1605; orig., as plural < French archives < Latin archīva < Greek archeîa, orig. plural of archeîon public office, equivalent to arch(ḗ) magistracy, office + -eion suffix of place
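As a minimal sketch of sense 6 of the verb above (compressing computer files and storing them in a single file), the following Python uses the standard zipfile module; the file names and contents are hypothetical placeholders.

 from pathlib import Path
 import zipfile

 # Create two small placeholder files so the example is self-contained.
 Path("minutes-1995.txt").write_text("Meeting minutes...\n")
 Path("ledger-1995.csv").write_text("date,amount\n")

 # Compress both files into a single archive file.
 with zipfile.ZipFile("records.zip", "w", compression=zipfile.ZIP_DEFLATED) as archive:
     archive.write("minutes-1995.txt")
     archive.write("ledger-1995.csv")

 # Listing the archive shows both files stored in one compressed file.
 with zipfile.ZipFile("records.zip") as archive:
     print(archive.namelist())    # ['minutes-1995.txt', 'ledger-1995.csv']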

Sunday, November 22, 2015

ARPANET

ARPANET – The First Internet


I was in charge of the software, and we were naturally running a bit late. September 1 was Labor Day, so I knew I had a couple of extra days to debug the software. Moreover, I had heard BBN was having some timing troubles with the software, so I had some hope they'd miss the ship date. And I figured that first some Honeywell people would install the hardware -- IMPs were built out of Honeywell 516s in those days -- and then BBN people would come in a few days later to shake down the software. An easy couple of weeks of grace.
BBN fixed their timing trouble, air shipped the IMP, and it arrived on our loading dock on Saturday, August 30. They arrived with the IMP, wheeled it into our computer room, plugged it in and the software restarted from where it had been when the plug was pulled in Cambridge. Still Saturday, August 30. Panic time at UCLA.
- Stephen D. Crocker, The Request For Comments Reference Guide.
The ARPANET was the first wide area packet switching network, the "Eve" network of what has evolved into the Internet we know and love today.
The ARPANET was originally created by the IPTO under the sponsorship of DARPA, and conceived and planned by Lick Licklider, Lawrence Roberts, and others as described earlier in this section.
The ARPANET went into labor on August 30, 1969, when BBN delivered the first Interface Message Processor (IMP) to Leonard Kleinrock's Network Measurements Center at UCLA. The IMP was built from a Honeywell DDP 516 computer with 12K of memory, designed to handle the ARPANET network interface. In a famous piece of Internet lore, on the side of the crate, a hardware designer at BBN named Ben Barker had written "Do it to it, Truett", in tribute to the BBN engineer Truett Thach who traveled with the computer to UCLA on the plane.
The UCLA team responsible for installing the IMP and creating the first ARPANET node included graduate students Vinton Cerf, Steve Crocker, Bill Naylor, Jon Postel, and Mike Wingfield. Wingfield had built the hardware interface between the UCLA computer and the IMP, the machines were connected, and within a couple of days of delivery the IMP was communicating with the local NMC host, an SDS Sigma 7 computer running the SEX operating system. Messages were successfully exchanged, and the one-computer ARPANET was born. A picture of Leonard Kleinrock with the first ARPANET IMP is shown below.

Leonard Kleinrock with the first Interface Message Processor (IMP)

The first full ARPANET network connection was next, planned to be with Douglas Engelbart's NLS system at the Stanford Research Institute (SRI), running an SDS-940 computer with the Genie operating system and connected to another IMP. At about 10:30 PM on October 29, 1969, the connection was established over a 50 kbps line provided by the AT&T telephone company, and a two-node ARPANET was born. As is often the case, the first test didn't work flawlessly, as Kleinrock describes below:
At the UCLA end, they typed in the 'l' and asked SRI if they received it; 'got the l' came the voice reply. UCLA typed in the 'o', asked if they got it, and received 'got the o'. UCLA then typed in the 'g' and the darned system CRASHED! Quite a beginning. On the second attempt, it worked fine!
- Leonard Kleinrock, The Birth of the Internet.
The Culler-Fried Interactive Mathematics Center at the University of California at Santa Barbara was the third site added to the ARPANET, running on an IBM 360/75 computer using the OS/MVT operating system. The fourth ARPANET site was added in December 1969 at the University of Utah Graphics Department, running on a DEC PDP-10 computer using the Tenex operating system. These first four sites had been selected by Roberts to constitute the initial ARPANET because they were already DARPA sites, and he believed they had the technical capability to develop the required custom interface to the IMP.
Over the next several years the ARPANET grew rapidly. In July, 1975, DARPA transferred management and operation of the ARPANET to the Defense Communications Agency, now DISA. The NSFNET then assumed management of the non-military side of the network during its first period of very rapid growth, including connection to networks like the CSNET and EUnet, and the subsequent evolution into the Internet we know today.
Milestones. Some of the milestones in the early history of the ARPANET are summarized below:
  • East Coast. In March, 1970, the consulting company Bolt, Beranek & Newman joined the ARPANET, becoming the first ARPANET node on the US east coast.
  • Remote Access. In September, 1971, the first Terminal Interface Processor (TIP) was deployed, enabling individual computer terminals to dial directly into the ARPANET, thereby greatly increasing the ease of network connections and leading to significant growth.
  • 1972. By the end of 1972 there were 24 sites on the ARPANET, including the Department of Defense, the National Science Foundation, NASA, and the Federal Reserve Board.
  • 1973. By the end of 1973 there were 37 sites on the ARPANET, including a satellite link from California to Hawaii. Also in 1973, the University College of London in England and the Royal Radar Establishment in Norway became the first international connections to the ARPANET.
  • 1974. In June, 1974, there were 62 computers connected to the ARPANET.
  • 1977. In March, 1977, there were 111 computers on the ARPANET.
  • 1983. In 1983, an unclassified, military-only network called MILNET split off from the ARPANET, remaining connected only at a small number of gateways for exchange of electronic mail that could be easily disconnected for security reasons if required. MILNET later became part of the DoD Defense Data Network, or DDN.
  • 1985. By the middle of the 1980s there were ARPANET gateways to external networks across North America, Europe, and Australia, and the Internet was global in scope. Marty Lyons has created a map of the existing network gateways from 18 June 1985.
  • 1990. The ARPANET was retired in 1990. Most university computers that were connected to it were moved to networks connected to the NSFNET, passing the torch from the old network to the new.
Resources. The following site provides more information about the ARPANET.
  • Michael Hauben's History of the ARPANET

THE NSFNET

                       National Science Foundation Network -- NSFNET

Some NSFNET reflections
by Hans-Werner Braun
Co-Principal Investigator (1987-1991), NSFNET Backbone

November 13, 2007
The NSFNET did not start with the Backbone award to Merit (University of Michigan) in November 1987; it was significantly predated by the 56kbps NSFNET backbone built around mid-1986 to interconnect six NSF-funded supercomputing sites, i.e., the five then-new supercomputer centers at
  • General Atomics -- San Diego Supercomputer Center, SDSC
  • University of Illinois at Urbana-Champaign -- National Center for Supercomputing Applications, NCSA
  • Carnegie Mellon University -- Pittsburgh Supercomputer Center, PSC
  • Cornell University -- Cornell Theory Center, CTC
  • Princeton University -- John von Neumann National Supercomputer Center, JvNC
plus existing supercomputing facilities at the National Center for Atmospheric Research (NCAR).
But that does not mean that I was not involved prior to my being Co-Principal Investigator on the NSFNET Backbone award to Merit. Two new NSF-funded network infrastructure activities extended to the Computing Center at the University of Michigan, which is where I worked at that time. Those were NCAR's University Satellite Network (USAN) project and SDSC's SDSCnet connection, both via geostationary satellite.

This 1986 photo shows the roof of the Computing Center at the University of Michigan with the USAN satellite antenna on the left, and the SDSCnet antenna on the right.


My being responsible for the Michigan-local site allowed me to attend many related meetings, one of the more important ones being a meeting at NCAR between staff of the supercomputing centers and Dennis Jennings of the NSF, which discussed plans for a national backbone network based on the Internet Protocol. At that time, considering IP was somewhat gutsy. The federal government had just issued a mandate not to use IP, but to embrace GOSIP (the OSI protocol suite) instead. This made the days of the Internet, with its applications generally confined to a United States Department of Defense network-research context via the ARPAnet, seem to be drawing to a close. Even the network protocol use by the supercomputing centers was inconsistent: SDSCnet embraced the Department of Energy MFEnet protocols, and USAN used Ethernet bridging. Both were problematic, as MFEnet was non-interoperable with other networking protocols, while bridged networks were hard to manage (we had "ARP wars in the sky"). And, for that matter, Merit used home-grown protocols for its own network across multiple universities. Without NSF's decision to embrace the Internet Protocol as the common denominator for an interoperable platform, it is unlikely that the Internet would have developed into the global cyberinfrastructure that it is today, or at least not that early on. My speculation is that phone companies would instead likely dominate the landscape, using X.25 or other OSI protocols for data communications, and perhaps there would be islands of non-interoperable network protocols. But one never knows what would have happened.

Topology of the original 56kbps NSFNET backbone


Glenn Ricart and I were sitting in at the NCAR meeting with Dennis Jennings, since we were really waiting for a different meeting to happen and kind of did not have anything better to do. I really misbehaved during the meeting, as I just could not accept the technology the supercomputing centers wanted to use for the 56kbps NSFNET backbone. I was thinking they should use LSI11 nodes with Dave Mills' Fuzzball software, as they were the most evolved Internet Protocol engines I knew of at the time, but I was not gutsy enough to specifically mention Fuzzballs. Glenn Ricart did, and to our surprise, the group adopted that notion.

This photo shows the specific machine on which almost all the software for the original 56kbps NSFNET backbone was compiled, before it was transferred to and loaded onto the backbone nodes.



Original 8" floppy disks for the 56kbps NSFNET backbone nodes



Fuzzball-based 56kbps NSFNET backbone node


The responsibility for the 56kbps NSFNET backbone was awarded by the NSF to UIUC (Ed Krol), with operations at Cornell University (Alison Brown and Scott Brim). When the links and nodes were deployed, the people responsible had problems making them work, so I made it work, and from then on ran the 56kbps NSFNET backbone (unofficially, though this was broadly known) via the USAN satellite link from the University of Michigan until 1988, when it was replaced by the backbone network that was part of the new award to Merit. I recall the 56kbps NSFNET time as being extremely stressful for me, because I had no support staff and no one else who understood the infrastructure at a systemic level whom I could have talked to. That does not mean others were not helpful. Dave Mills was just great in supporting the software on the backbone nodes (and he should get a lot more credit than he does for much of his pioneering Internet work), Scott Brim and his group worked on statistics and on software for a routing processor (which turned into gated), and Ed Krol represented NSFNET centrally and provided reports. But all of that just seemed like components to me; the unpleasant scare was in the system!
The award to Merit via the University of Michigan happened in November 1987, with Eric Aupperle as the Project Director and me as the Co-Principal Investigator. The award was met by the Internet community with teasing and some outright hostility. Some believed the task at hand was impossible (some wanted to go "trout fishing" rather than expect us to succeed; one person told me the project would make me end up with a screaming fit or go quietly insane), while others were upset about the IBM and MCI project partners, who were largely unknowns in the Internet realm. Some of it was perhaps justified.
Early on, when we talked with MCI about wanting unchanneled T1 (1.5Mbps) links for the infrastructure (rather than multiplexing them down to 56kbps voice channels), they thought we were crazy. When we then told them that a few years later we would want unchanneled DS3 (45Mbps), they thought we were completely insane. But, to their credit, they worked with us through those issues, and IBM made the backbone nodes work, to deliver the new NSFNET backbone on the date promised: July 1, 1988.
Evidence for the delivery of the new backbone is a summary of node reachability sent on June 11, 1988 to the mailing list (Network Working Group, NWG) that included NSFNET backbone-related staff, NSF, and staff of the to-be-connected regional networks:
 From hwb Sat Jun 11 21:38:26 1988
 To: nwg@merit.edu
 Subject: No Trout Fishing, Bob!
  
 netstat -rn
 Routing tables
 Destination          Gateway              Flags    Refcnt Use        Interface
 129.140.1            129.140.17.17        UG       0      361        lan0
 129.140.2            129.140.17.17        UG       0      0          lan0
 129.140.3            129.140.17.17        UG       0      701        lan0
 129.140.4            129.140.17.17        UG       0      0          lan0
 129.140.5            129.140.17.13        UG       0      0          lan0
 129.140.6            129.140.17.13        UG       0      0          lan0
 129.140.7            129.140.17.11        UG       0      0          lan0
 129.140.8            129.140.17.12        UG       0      0          lan0
 129.140.9            129.140.17.12        UG       0      0          lan0
 129.140.10           129.140.17.15        UG       0      0          lan0
 129.140.11           129.140.17.13        UG       0      236        lan0
 129.140.12           129.140.17.13        UG       0      0          lan0
 129.140.13           129.140.17.13        UG       0      956        lan0
 129.140.14           129.140.17.13        UG       0      0          lan0
 129.140.15           129.140.17.11        UG       0      0          lan0
 129.140.16           129.140.17.11        UG       0      0          lan0
 129.140.17           129.140.17.1         U        12     191714     lan0
 129.140.45           129.140.17.17        UG       0      0          lan0
 129.140.46           129.140.17.17        UG       0      0          lan0
 rcp-17-1(18): 
and the announcement about letting operational traffic onto the network on June 30, 1988:
 Date: Thu, 30 Jun 88 20:02:36 EDT
 From: Hans-Werner Braun 
 To: nwg@merit.edu
 Subject: Network to be operational.
 Cc: hwb
 
 The NSFNET Backbone has reached a state where we would like to more
 officially let operational traffic on. There are some sites who are
 doing that inofficially for a while already. We would like to still
 view this phase as "operational testing" for the near term future
 and will require some maintenance windows to update software and such.
 We will try to have those times generally at off hours and coordinated
 by our Network Operation Center. The routing phasein from our site is 
 coordinated by Susan Hares, who is the Routing Coordinator we proposed
 in the response to the NSFNET solicitation. I would like to encourage
 sites to closely coordinate the phaseins with her and it would be
 extremely helpful to inform her of changes, additions and such. While
 she can be reached at skh@merit.edu we would prefer if you would send
 updates and questions to nsfnet-admin@merit.edu (for now) so that
 several people see it and routing problems get resolved sooner (like at 
 off hours or when Susan is not around). Also problems and suggestions
 could for now be sent to the nsfnet-admin email address. In any case,
 Susan will call individual sites to coordinate the phasein of the new
 backbone. 
 
 Generally, problem reports at this point of time should be called in
 to 313-764-9430 which is a Merit receptionist desk. Someone there can
 then route the call to the NOC or wherever else is appropriate. We
 also have set up a mailing list nsfnet-noc@merit.edu.
 
        -- Hans-Werner
Later in July 1988 I sent email out about discontinuing the old 56kbps Fuzzball-based NSFNET backbone:
 Date: Mon, 18 Jul 88 21:05:44 EDT
 From: Hans-Werner Braun 
 To: nwg@merit.edu
 Subject: Cooling Fuzzballs.
 Cc: hwb
 
 On Wednesday, 20 July 1988, 13:00EDT we would like to physically turn the
 Fuzzballs off to see whether we have remaining issues with regional 
 routing. It is important that they are really off, in case people still
 listen to Hello packets and/or are using static routes. The phone lines will
 then stay for another few days, during which we can turn the Fuzzballs back
 on, if there is a requirement which cannot be resolved immediately. The
 phone lines will, I think, go around the 25th, after which we can turn 
 the Fuzzballs back on, if desired at some sites (as time servers, for
 example). Dave Mills has expressed interest to leave the nodes where they
 are, if only as an Internet research tool and/or servers for accurate time.
 They will obviously NOT be supported by Merit/IBM/MCI at that time, but I
 think Dave Mills will gladly do care and feeding, software wise.
 
        -- Hans-Werner
Another difficulty around that time was that control of the Internet's evolution was pretty much with the United States Department of Defense, via groups such as their Internet Engineering Task Force. To address those concerns, Scott Brim and I met with people in the Pentagon to convince the DoD to at least open up the IETF to a larger community, specifically the NSFNET and its associated regional networks. To our surprise, one meeting was all it took, and they agreed, which led to a rapid expansion of the IETF with a lot of involvement from many constituents over time.
NSFNET introduced a complexity into the Internet which the existing network protocols could not handle. Up to the NSFNET, the Internet consisted basically of the ARPAnet, with client networks stubbed off the ARPAnet backbone. I.e., the hierarchy between so-called Autonomous Systems (AS) was linear, with no loops/meshes, and with the Exterior Gateway Protocol (EGP) used for inter-AS routing carrying only the AS Number of the routing neighbor. This made it impossible to detect loops in an environment where two or more separate national backbones with multiple interconnections exist, specifically the ARPAnet and the NSFNET. I determined that I needed an additional "previous" AS Number in the inter-AS routing to allow supporting a meshed Internet with many administrations for its components. Meetings with various constituents did not get us anywhere, and I needed it quickly, rather than creating a multi-year research project. In the end, Yakov Rekhter (IBM/NSFNET) and Kirk Lougheed (Cisco) designed a superset of what I needed on three napkins alongside an IETF meeting: not just the "previous" AS Number, but all previous AS numbers that an IP network number route had encountered since its origin. This protocol was called the Border Gateway Protocol (BGP), and versions of it are in use to this day to hold the Internet together. BGP used the Transmission Control Protocol (TCP) to make itself reliable. The use of TCP, as well as general "not invented here" sentiment, caused great problems with the rest of the Internet community, which we somewhat ignored as we had a pressing need, and soon, with NSFNET, Cisco, and gated implementations at hand, the Internet community did not have much of a choice. Eventually, and after long arguments, BGP was adopted by the IETF.
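A minimal sketch of the loop-detection idea described above, not of BGP itself: each announcement carries the full list of AS numbers it has traversed, so a receiving AS can simply refuse any route whose path already contains its own number. The AS numbers and helper functions here are hypothetical illustrations.

 def accept_route(local_as, as_path):
     """Reject an announcement whose AS path already contains our own AS number."""
     return local_as not in as_path

 def announce(as_path, sender_as):
     """Prepend the sending AS number, as a speaker does when re-announcing a route."""
     return [sender_as] + as_path

 path = announce([690], 1800)        # a route originated in AS 690, re-announced by AS 1800
 print(accept_route(690, path))      # False: AS 690 sees itself in the path, i.e. a loop
 print(accept_route(2914, path))     # True: a third AS can accept the route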
While the 1988 NSFNET backbone ran on a T1 substrate, the links were multiplexed via a switch into three 448kbps circuits. This created a connectivity-richer logical network on top of the physical T1 network. The reason for this was an initial performance limit of the IBM backbone nodes, specifically their Almaden line cards, until they were replaced about a year later with full-T1-capable ARTIC cards.
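A quick back-of-the-envelope check of that multiplexing, assuming the standard channelized-T1 payload of 24 x 64 kbps channels: three 448 kbps circuits fit comfortably inside one T1.

 t1_payload_kbps = 24 * 64            # 1536 kbps of usable T1 payload
 circuit_kbps, circuits = 448, 3      # the three logical circuits described above
 used_kbps = circuits * circuit_kbps
 print(f"{used_kbps} of {t1_payload_kbps} kbps used, {t1_payload_kbps - used_kbps} kbps spare")
 # -> 1344 of 1536 kbps used, 192 kbps spare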

Hand-drawn draft from 1987 about the mapping of the 448kbps circuits into the T1 NSFNET backbone network



A pair of 448kbps Almaden cards from the T1 NSFNET backbone



Physical topology of the initial T1 NSFNET backbone, before being multiplexed into the 448kbps logical topology



Resulting logical topology of the initial T1 NSFNET backbone



T1 NSFNET backbone node at the Network Operations Center with the IDNX switch on the right



Closeup of the T1 NSFNET backbone node at the Network Operations Center



View of the NSFNET Network Operations Center in the Computing Center building at the University of Michigan. The machine room is seen in the back on the right.


After the transition to the fully-T1-capable ARTIC line cards, the separation between physical and logical topologies was abandoned for obvious reasons, as no multiplexing was necessary any more, and the switches were removed at that time. This email reflects some of the transition strategy, with the numbers in the graphs being node identifiers of the backbone routers:
 Date: Fri, 30 Jun 89 15:44:04 EST
 From: Hans-Werner Braun 
 Message-Id: <8906302044.AA03284@mcr.umich.edu>
 To: nwg@merit.edu
 Subject: Reconfiguration updates.
 
 As of the 8 June 1988 we completed Phase A of the reconfiguration, resulting 
 in the following fully redundant T1 topology. The numbers are NSS node numbers.
 
        +---14         +-------------17----8---+
        |    |         |              |    |   |
        |    |         |              |    |   |
        |    |         |              |    |   |
        |   13---15----7---16---12----5---10   |
        |    |         |         |    |        |
        |    |         |         |    |        |
        |    |         |         |    |        |
        +----6---------+         |   11--------9
             |                   |
             +-------------------+
 
 Today the first link of the reconfiguration Phase B was installed:
 
                  +-------------------+
                  |                   |
        +---14    |    +-------------17----8---+
        |    |    |    |              |    |   |
        |    |    |    |              |    |   |
        |    |    |    |              |    |   |
        |   13---15----7---16---12----5---10   |
        |    |         |         |    |        |
        |    |         |         |    |        |
        |    |         |         |    |        |
        +----6---------+         |   11--------9
             |                   |
             +-------------------+
 
 

 The remaining four steps of the reconfiguration are tentatively planned to
 begin on the 5 July 1989:
 
 
 Step 1         remove   Boulder to San Diego
                install  Boulder to Houston
 
                  +-------------------+
                  |                   |
        +---14    |    +-------------17----8---+
        |    |    |    |              |    |   |
        |    |    |    |              |    |   |
        |    |    |    |              |    |   |
        |   13---15----7---16---12----5---10   |
        |    |         |         |    |        |
        |    |         |         |    |        |
        |    |         |         |    |        |
        +----6-------------------+   11--------9
                       |              |
                       +--------------+
 
 Step 2         remove   Pittsburgh to Houston
                install  San Diego to Houston
 
  
                remove   San Diego to Champaign
                install  Seattle to Champaign
 
                  +-------------------+
                  |                   |
                  |    +-------------17----8---+
                  |    |              |    |   |
                  |    |              |    |   |
                  |    |              |    |   |
        +---13---15----7---16---12----5---10   |
        |    |         |         |             |
       14----|---------|---------+             |
        |    |         |                       |
        +----6         |                       |
             |         |                       |
             +--------11-----------------------9
  
 Step 3         remove   Ann Arbor to Denver
                install  Ann Arbor to Ithaca
  
                even exchange   College Park to Princeton
                even exchange   Ann Arbor to Princeton
                (for routing DRS capable Cross Connects)
 
                  +------------------17----8---+
                  |                   |\   |   |
                  |                   | \  |   |
                  |                   |  \ |   |
        +---13---15----7---16---12----5---10   |
        |    |         |         |             |
       14----|---------|---------+             |
        |    |         |                       |
        |    |         |                       |
        |    |         |                       |
        +----6--------11-----------------------9
  
 Step 4         remove   Princeton to Ithaca
                install  Ithaca to College Park
  
                remove   Ann Arbor to Pittsburgh
                install  Pittsburgh to Princeton
 
                  +------------------17----+
                  |                   |    |   
                  |                   8----|---+   
                  |                   |    |   |
        +---13---15----7---16---12----5---10---+
        |    |         |         |             |
       14----|---------|---------+             |
        |    |         |                       |
        |    |         |                       |
        |    |         |                       |
        +----6--------11-----------------------9

ARTIC card for the non-multiplexed T1 NSFNET backbone



Non-multiplexed T1 NSFNET backbone after the 1989 transition


During 1990 the plans for the new 45Mbps (DS3) backbone upgrade matured with additional funding from the National Science Foundation.
 Date: Sun, 9 Dec 90 20:08:42 EST
 From: Hans-Werner Braun 
 Message-Id: <9012100108.AA04838@mcr.umich.edu>
 To: regional-techs@merit.edu
 Subject: T3 NSFNET update.
 
 I learned today that during the IETF meeting a few people expressed concerns
 about the lack of information flow regarding the NSFNET T3 upgrade. There has
 been no real intention not to keep people informed, it is that we are quite
 busy getting the work done and also there have been quite a few open issues
 where things could change very easily. However, given that the concern had been
 expressed, let me try to summarize where we stand today.
 
 We are in the midst of implementing an eight node T3 network right now. This
 is done as an overlay on top of the T1 network, rather than an integration
 between the two. Lateron we hope to be able to upgrade the remaining T1 node
 sites to T3 and phase out the T1 network.
 
 The T3 architecture that we implement consists of major nodes (Core Nodal
 Switching System, C-NSS) co-located in MCI junction places. Junctions are
 major Points Of Presence (POP). The intention there is to have a very robust
 central infrastructure. Robustness includes things like availability of
 bandwidth, uninterrupted power and constant availability of operators. This
 central infrastructure forms a two dimensional cloud where the internal
 connectivity may change over time. People will be able to use things like
 traceroute to figure the internal connectivity, but there is no guarantee
 that the connectivity will stay the same over longer periods of time. NSF
 in fact liked this kind of a model and requested from Merit to present the
 network in that fashion to the outside.
 
 The cloud will initially contain eight C-NSS connected to each other via
 clear channel T3 links. Eight Exterior NSS (E-NSS) at individual sites will
 connect into this cloud. Some C-NSS connect to more than one E-NSS, some C-NSS
 do not connect to any E-NSS and just allow for more robust infrastructure.
 
 The T3 NSS are based on IBM's RS-6000 technology. Specific cards, which will
 evolve over time, have been developed to connect to the T3 links as well as
 T1 links, Ethernets and FDDI rings. The T3 nodes are rack mounted.
 
 The C-NSS will be located in the Boston, Washington DC, Cleveland, Chicago (2),
 Houston, Los Angeles and San Francisco areas.  The eight E-NSS will be in
 Boston, Ithaca, Pittsburgh, Ann Arbor, Chicago, Urbana-Champaign, San Diego
 and Palo Alto.
 
 One of our major obstacles was the early availability of specific cards in the
 nodes. We knew that we could not implement all nodes at the same time and
 therefore had to stage things. We also had to implement a T3 testbed to
 do system integration and testing. The T3 testbed today consists of two C-NSS
 (in Michigan and New York) as well as E-NSS in Ann Arbor, Richardson (TX, MCI)
 and Milford (CT, IBM (T1)). A further node should be operational soon at IBM in
 Yorktown (NY). For the operational network it turned out to be easier (fewer
 sites) to start from the west coast. At this point of time San Diego (SDSC),
 Urbana-Champaign (NCSA) and Ann Arbor (Merit/NOC) have E-NSS working on the
 T3 network and connected to the co-location places in Chicago and Los Angeles.
 The Houston C-NSS is likely to start working this week. There is a chance
 that we will get the C-NSS and the E-NSS in the bay area working before the
 end of the year. That would then allow for half of the T3 sites to be running
 before the year is over. We expect to be able to install the remaining nodes
 in January. Besides Ann Arbor, the first E-NSS on the T3 NSFNET was installed
 at SDSC perhaps a week or two ago and the one at NCSA is working since Friday.
 
 At this point of time there is still some significant work going on to make
 further improvements to particularely the software in the nodes and, with
 the exception of the node in Ann Arbor, none of the E-NSS on the T3 backbone
 are connected to the external network yet.
 
 Hans-Werner
and:
 Date: Thu, 20 Dec 90 13:28:07 EST
 From: Hans-Werner Braun 
 Message-Id: <9012201828.AA12284@mcr.umich.edu>
 To: regional-techs@merit.edu
 Subject: Update on the T3 status.

 It seems timely to send out an update as to where we are on the T3 network.

 At this point of time E-NSS equipment is in San Diego, Palo Alto,
 Urbana-Champaign and Ann Arbor for the operational network and connected.
 Five C-NSS are in place in the San Francisco, Los Angeles, Houston and
 Chicago areas (two in Chicago). All are up and running on T3 links, except
 that the Palo Alto connection to the C-NSS is still running some T3 tests.

 We have done significant testing for dynamic routing and network management.

 On Tuesday we had the first coast to coast demonstration from an exterior
 workstation at the San Diego Supercomputer Center to a machine in White Plains
 (New York) where the serial links were running at T3 speeds. There was an
 Ethernet in the middle in Ann Arbor. To get coast-coast at this point of
 time we had to utilize our test network which exists between Merit, IBM and
 MCI locations. The workstation at SDSC had an EGP session with the E-NSS and
 we exchanged a "symbolic first file and email message."

 We hope to very soon be in a preproduction mode, followed by initial
 operational traffic on the new T3 network.

 Hans-Werner
Near the end of the year 1990, while I visited SDSC in San Diego, we demonstrated a multi-node 45Mbps backbone capability:
 Date: Sun, 30 Dec 90 23:02:01 EST
 From: Hans-Werner Braun 
 Message-Id: <9012310402.AA12061@orion7.merit.edu>
 To: regional-techs@merit.edu
 Subject: I thought this message from Gerard Newman may be of interest...

 Date:    Sun, 30 Dec 90 02:19:27 GMT
 From: gkn@M5.Sdsc.Edu (Gerard K. Newman)
 Subject: T3!
 To: steve@nsf.gov, moreland@M5.Sdsc.Edu, kael@M5.Sdsc.Edu, hwb@M5.Sdsc.Edu
 Organization: San Diego Supercomputer Center

 As of a few minutes ago H-W and I here at SDSC, and Bilal and Elise at
 MERIT managed to get "production" traffic routed over the T3 backbone.

 At present, SDSC is routing traffic for the following nets over the T3
 backbone:


        35              MERIT
        128.174         NCSA
        130.126         NCSA
        140.222         T3 backbone

 (nevermind that the packets take a hop thru a lowly Sun-3/110 -- that
 will be fixed in a better way in the near term).

 To wit:

 y1.gkn [13]% ping ncsa2.ncsa.uiuc.edu
 PING ncsa2.ncsa.uiuc.edu: 56 data bytes
 64 bytes from 128.174.10.44: icmp_seq=0. time=81. ms
 64 bytes from 128.174.10.44: icmp_seq=1. time=79. ms
 64 bytes from 128.174.10.44: icmp_seq=2. time=86. ms
 64 bytes from 128.174.10.44: icmp_seq=3. time=89. ms
 64 bytes from 128.174.10.44: icmp_seq=4. time=84. ms
 64 bytes from 128.174.10.44: icmp_seq=5. time=86. ms

 ----ncsa2.ncsa.uiuc.edu PING Statistics----
 6 packets transmitted, 6 packets received, 0% packet loss
 round-trip (ms)  min/avg/max = 79/84/89
 y1.gkn [14]% telnet ncsa2.ncsa.uiuc.edu
 Trying 128.174.10.44...
 Connected to ncsa2.ncsa.uiuc.edu.
 Escape character is '^]'.


 Cray UNICOS (u2) (ttyp003)


     For help, send e-mail to consult@ncsa.uiuc.edu or call (217)244-1144
                On nights and weekends, call (217)244-0710