Technical Briefing and Related Capacity Overview

Internal Report


FCC - International Bureau

Planning and Negotiation Division

June 1996

Emerging Wireline Communications Technologies


Table of Contents

I. Introduction

II. Wireline Communications Technologies

A. Signaling System #7 (SS7)

1. Introduction

2. Description

3. Status

4. Current and future applications

5. Parties in Interest

6. Other Implications

B. Integrated Services Digital Network (ISDN)

1. Introduction

2. Description

3. Status

4. Current and future applications

5. Parties in Interest

6. Other Implications

C. Packet Switching (X.25)

1. Introduction

2. Description

3. Status

4. Current and future applications

5. Parties in Interest

6. Other Implications

D. Frame Relay

1. Introduction

2. Description

3. Status

4. Current and future applications

5. Parties in Interest

6. Other Implications

E. Switched Multimegabit Data Service (SMDS)

1. Introduction

2. Description

3. Status

4. Current and future applications

5. Parties in Interest

6. Other Implications

F. Asynchronous Transfer Mode (ATM)

1. Introduction

2. Description

3. Status

4. Current and future applications

5. Parties in Interest

6. Other Implications

G. Fiber Optics & Fiber Distributed Data Interface (FDDI)

1. Introduction

2. Description

3. Status

4. Current and future applications

5. Parties in Interest

6. Other Implications

H. Synchronous Optical Network (SONET)

1. Introduction

2. Description

3. Status

4. Current and future applications

5. Parties in Interest

6. Other Implications

I. Internet

1. Introduction

2. Description

3. Status

4. Current and future applications

5. Parties in Interest

6. Other Implications

J. Internet Voice, Video, and Audio

1. Introduction

2. Description

3. Status

4. Current and future applications

5. Parties in Interest

6. Other Implications

K. JAVA

1. Introduction

2. Description

3. Status

4. Current and future applications

5. Parties in Interest

6. Other Implications

III. Capacity Overview

1. Introduction

2. Scope

3. International vs. National

4. International Deployed Capacity

5. ISDN Capacity

6. Internet Capacity

7. ATM Capacity

8. Conclusion

IV. Appendix - LAN, MAN, WAN, and Other Technical Terms


Telecommunications worldwide have been growing at an average of 10 to 15% annually, with total revenues reaching more than $514 billion in 1994; U.S. companies account for 35% of the world market. In the last few years alone, telecommunications technology has been changing at an unprecedented pace, and more and more new products and services are being offered globally every day. This impressive growth of telecom products, along with the large market share held by U.S. companies, makes it important for the U.S. government to stay on top of these technology changes so that it can provide a timely response in this increasingly competitive global telecom market.

Last year, the International Bureau's Planning staff began a technology study whose goal was to produce a comprehensive yet easy-to-understand analysis of the emerging wireline communications technologies and their implications for the international communications marketplace. The report was designed to provide Commission staff with background on and an understanding of most emerging technologies, as well as an update on the latest developments in each profiled technology. Our goal is to provide a single document with the critical information staff members need to understand the basic elements of these new technologies, thereby enabling them to become more familiar and comfortable with them.

The technologies covered in this report include:

1) Signaling System #7 (SS7)

2) Integrated Services Digital Network (ISDN)

3) Packet Switching (X.25)

4) Frame Relay

5) Switched Multimegabit Data Service (SMDS)

6) Asynchronous Transfer Mode (ATM)

7) Fiber Optics & Fiber Distributed Data Interface (FDDI)

8) Synchronous Optical Network (SONET),

and three computer-related technologies:

9) Internet

10) Internet Voice, Video, and Audio

11) JAVA

Wireline technology was chosen for the first study because of its wide service dimensions and the broad impact of these technologies on the general public. Moreover, some of the basic telecommunications technologies, and the standards behind them, can be implemented in both the satellite and wireless industries. An analysis of emerging telecom technologies affecting the wireless and satellite industries will be produced at a later date.

SS7 was chosen because it provides the basic signaling and control functions for all data communications, from broadband services to wireless communications, and thus is integral to an understanding of almost all emerging telecom technologies. ISDN, the first digital service available to residential customers, still plays a critical role in the increasingly popular Internet world. Since all of these switching technologies require a transmission network to provide complete services, a basic overview of fiber optic technology and of SONET, the latest and most efficient high-capacity transport system, has been included.

In addition to the traditional telecom services, another new form of communications has surged as the hottest alternative of the past two to three years: the Internet. The Internet can transmit information from anywhere to anywhere. It is another way of passing information, or of providing telecommunications services to the general public. In light of recent developments in voice and video services on the Internet, a separate section on Internet Voice, Video, and Audio is included. The pending Commission rulemaking (RM 8775) on voice services over the Internet is important to all parties, especially in the area of the international accounting settlement process.

Although Java is not a wireline technology but a software product, an analysis of JAVA was included because of JAVA's great impact on the Internet/Intranet communications marketplace.

Exhibit A provides a quick understanding of the relationships between the technologies covered. Exhibit A also lists brief characteristics of each technology, its strengths and weaknesses, how it competes with other similar technologies, and the current stage of development of that particular technology.

The fast-evolving telecommunications and computer technologies, combined with the convergence of these two industries, make it all the more important for the FCC to understand the various technologies and any areas of crossover. In May, interested parties filed initial comments on RM 8775 indicating that several small software companies were providing voice services over the Internet. A month later, when the reply comments were filed, the whole landscape of the Internet voice business had changed completely. IBM, Microsoft, Intel, Netscape, AT&T, Nortel, and many other major companies have engaged in the development of standards for voice on the net. It will be interesting to see how the Commission resolves the problem.

AT&T is currently teaming up with KDD to conduct a beta test of a remote medical diagnosis service using 155 Mbps ATM switching over the TPC-5 fiber cable (the highest-capacity commercially available cable, at 5 Gbps) to connect two hospitals in Japan with two hospitals on the U.S. west coast.

JAVA is another example: Sun just announced that it will release, at the end of this year, a sub-$25 picoJava chip, a microprocessor designed for cellular phones, printers, and other consumer electronics. We would have to make major updates to our report if we waited another month or so. This type of announcement is becoming more frequent than ever before.

We did not intend to cover other important issues, such as pricing, service provision, and regulatory implications. Some of these technologies require bilateral agreements (basic switching services, such as ISDN, SMDS, and ATM, if available in the public networks), while others are multilateral in essence (such as Frame Relay). The complexity of these issues meant that we could not cover them adequately in this first report. Additionally, pricing, service provision, and other regulatory issues often require a joint U.S. government approach and are not so easily completed in a short amount of time. It is our hope that this report will provide the necessary background and information for Commission staff to provide better services to the general public.

II. Wireline Communications Technologies

A. Signaling System #7 (SS7)


Signaling has been an integral part of communications since the beginning of telephony. Early signaling methods were limited, not only because they were analog and thus had a limited number of values, but also because they used the same circuit for both signaling and voice. Removing the signaling from the voice network dramatically improved the speed of call setup and teardown. Voice and data circuits could now be reserved solely for when a connection was needed, rather than maintaining a connection even when the destination was busy. SS7 also provides additional control functions that are needed to maintain connections and data communications. As a result, phone companies do not have to increase their number of circuits as frequently as before, which provides significant cost savings by increasing trunk efficiency. It has been estimated that 50% of the RBOCs' profits derive from SS7 functions.


SS7 began as a way to access 800 number databases. Shortly after the successful 800 number implementation, the SS7 network was expanded to provide other services, including 900 numbers, 911 service, custom calling features, caller identification, and many other services yet to be offered. It also allows for several enhancements, such as fraud control; increased 700 and 900 number capabilities; caller identification; specialized ringing; call store and forward capabilities; call ring-back; and transaction capabilities, i.e., sending a data file along with the call (used by mail order companies to identify callers and provide marketing and past purchase history).

SS7 is the prevalent signaling system for telephone networks for setting up and clearing calls and furnishing services such as 800 operations. It uses out of band signaling to route calls. When a caller enters an SS7 network, a query is launched to see how it is routed, who should receive it, or what other customized features are in the database. The response message provides routing instructions, billing information, and any other information listed in the database. The switching equipment then routes the call using conventional methods.
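The query-and-route sequence above can be sketched in miniature. The following is an illustrative Python sketch, not actual SS7 software; the database contents and function names are invented for the example.

```python
# Toy model of out-of-band call setup: the switch consults a signaling
# database first, then routes the call using the returned instructions.
# All records and names here are hypothetical.
ROUTING_DB = {
    "8005550100": {"route_to": "5135550199", "billing": "called-party"},
}

def setup_call(dialed):
    record = ROUTING_DB.get(dialed)      # the out-of-band query
    if record is None:
        return {"status": "no-route"}
    # The response carries routing and billing instructions; only now
    # would the switch reserve an actual voice circuit.
    return {"status": "routed",
            "trunk": record["route_to"],
            "billing": record["billing"]}
```

The point of the sketch is the ordering: the database lookup happens on the signaling network before any voice circuit is committed, which is why trunks are never tied up for calls that cannot complete.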

SS7 messages fall under several categories according to usage; however, three major sections can be identified: 1) Operations, Maintenance and Administration (OMAP); 2) Transaction Capabilities Application (TCAP); and 3) ISDN User Part (ISUP). OMAP deals with the administrative and maintenance functions of the call and is sometimes called the Network Interconnection Part. TCAP provides the data information needed, as well as the type of call. ISUP provides the signaling functions needed to support the control of the circuit switched network connections used in the setup and teardown of calls. Cellular networks add an additional section, called the Mobile Application Part, to interconnect cellular systems using SS7 with systems using alternative protocols.

The primary benefit of SS7 is that it provides an infrastructure, almost an overlaying network, allowing the rapid, millisecond exchange of information. Its main selling feature is the capability to send messages between switches on different systems without setting up a circuit between the two systems. SS7 accomplishes this because it is a specialized high-speed packet switched network that allows switches to talk much faster, resulting in quicker call setup and teardown. This enables the network to provide a very rapid exchange of information between any two points and to support unlimited services as well as better efficiencies. As a result, it allows for the remote activation of features on these systems.

Before SS7, it took approximately nine seconds for a call to go through; now it takes only three seconds. In cellular systems, SS7 enables subscribers to roam seamlessly from one cellular network to another by allowing the cellular providers to quickly and easily access each other's databases and share subscriber information. SS7 also makes identifying problems much easier, because the signal can be captured and the message quickly analyzed to pinpoint the fault.

The secret of SS7's success lies in the message structure of its protocol and its network topology. SS7 was derived from an earlier Common Channel Interoffice Signaling system, called SS6, developed by the ITU with fixed-length signal units. SS7, because of its use of variable-length signal units, provides more versatility and flexibility than SS6.


SS7 has been in use since the mid-1980s and thus is often not classified as an emerging technology. The issues and concerns affecting new technologies often do not pertain to SS7 as much as they do to other technologies; SS7 issues are now thought of more as software deployment issues than as technological ones. The introduction of SS7 enables new technologies to be supported within the Public Switched Telephone Network (PSTN). Moreover, SS7 allows phone companies to meet their goals of providing seamless service, regardless of the information being sent through the network. ISDN, for example, is one of the new technologies that must use SS7 features to complete its service.

The telephone network follows a certain hierarchy for all its digital transmissions. This hierarchy is a method of expressing the capacity of the various facilities under its control. In the US, the SS7 network uses data links, called DS-0s, operating at 56 kbps, while the rest of the world uses data links operating at 64 kbps. US telephone providers maintain that they require 8 kbps to ensure the integrity of the data link and for control over transmission equipment. Preparing for future broadband applications such as ATM networks, the signaling system will shortly increase to 1.544 Mbps in the U.S. and to 2.048 Mbps internationally. It is expected that these changes will result in modifications to the current SS7 protocol so that it can support these new high-speed data links. The ultimate goal is to carry all kinds of information, regardless of the bandwidth necessary.
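The link rates quoted above can be checked with simple arithmetic. The figures come from the text; the variable names are ours.

```python
# US SS7 links run at 56 kbps because 8 kbps of each 64 kbps DS-0 is
# reserved for link integrity and equipment control; international
# links use the full 64 kbps channel.
CHANNEL_KBPS = 64
US_OVERHEAD_KBPS = 8

us_link_kbps = CHANNEL_KBPS - US_OVERHEAD_KBPS   # 56 kbps in the US
intl_link_kbps = CHANNEL_KBPS                    # 64 kbps elsewhere

# The planned high-speed signaling links correspond to a full T-1 / E-1:
t1_kbps = 24 * CHANNEL_KBPS + 8   # 24 channels plus 8 kbps framing = 1544
e1_kbps = 32 * CHANNEL_KBPS       # 32 timeslots = 2048
```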

Current and future applications

SS7 applications were created in response to user demands. First there was ISDN; then came portable 800 numbers; later, cellular providers began searching for an easy way of tying their networks together so that subscribers could roam from one cellular network to another without special roaming numbers. SS7 plays an integral role in helping to create an Intelligent Network (IN). The IN provides a series of services and features that can be modified at any time through simple procedures, rather than through expensive programming by certified technicians, as is the case today. The IN relies on the SS7 network, which forms the backbone and provides the basic infrastructure needed for effective and efficient running of the network.

Advanced Intelligent Networks (AIN) provide many components not found in earlier versions of the IN. AIN does not define the features and services to be included; these are chosen by the customer. As a result, it encompasses only technologies or protocols that function on a higher level. One of the key components of any AIN is the Service Creation Environment (SCE); it defines the look and feel of the software that programs the switches used to provide new services. AIN administrators can then custom design services to meet the client's needs by checking or clicking on different network capabilities, rather than writing programming code. In the near future, this capability to pick and choose network capabilities will be brought directly to the client's home or office. As a result, large and small companies will be able to custom design their own networks and services on their premises without relying on assistance from phone companies.

Parties in Interest

SS7 operates on both terrestrial and satellite links; wireline and wireless communication companies have integrated SS7 technology into their networks and are its prime users. Some 80% of the US population has access to SS7 services. The SS7 network is used solely for the purpose of switching data messages pertaining to the business of connecting telephone calls and maintaining the signaling network. SS7 has been a great success and has been implemented in PSTNs by practically all global carriers. Its success was assured because its predecessors were woefully inadequate for supporting control signaling in telephone networks. Some features of SS7 can be found in other systems, such as GSM and even satellite signaling.

Other Implications

The Telecommunications Standards committee of the International Telecommunications Union provides standards that allow end-to-end compatibility between international networks, regardless of the country of origin. It first published SS7 standards in 1980. Countries create their own national standards, based upon ITU-TS standards, to meet the requirements of their own networks. As a result, although the ITU's SS7 standards have been accepted by every country using SS7, not every country's network is the same; the US uses the standards developed by the American National Standards Institute (ANSI), the main developer of the North American version of SS7 (Bellcore, the research and development arm of the seven Bell Operating Companies, assisted in the development of these standards). The differences between ANSI standards and the ITU-TS standards are mainly in network management procedures and in addressing.

Compatibility problems between the various SS7 protocols have been solved through the use of translation software. This software allows any SS7 network to understand another country's SS7 protocols. SS7 currently has no competition from any of the emerging communications technologies and remains the most common protocol for out-of-band signaling in the telecom industry.

B. Integrated Services Digital Network (ISDN)


ISDN, or Integrated Services Digital Network, is simply a set of standards for the transmission of simultaneous voice, data, and video information over fewer channels than would otherwise be needed. It is an extension of the public telephone network with better voice quality, higher data speeds, a low error rate, faster call setup times, and greater flexibility. In the past, video, audio, voice, and data services required separate networks. Video was distributed on coaxial lines, audio over balanced lines, voice used copper cable pairs, and data services required coaxial or twisted pair cables. This multiple network environment was expensive to install and difficult to maintain. ISDN integrates all services over the same network, digitally, and offers features such as on-demand networking, automatic bandwidth adjustment, and quick connectivity.(1)


ISDN technology allows three separate digital messages over a single pair of copper wires. Two of these paths, called "B" channels, carry voice or data, while the third path, the "D" channel, is used for signalling and/or X.25 packet networking. The most common ISDN system provides one signalling channel and two voice channels; however, some ISDN systems, particularly outside North America and Japan, have as many as 31 channels.(2)

There are currently three different types of ISDN: Basic Rate Interface (BRI), Primary Rate Interface (PRI), and Broadband ISDN (B-ISDN). The first two types are also referred to as narrowband ISDN services and are the heart of ISDN service offerings. Broadband ISDN is the third and final type. However, it is ISDN in name only; it is completely different from narrowband ISDN, and its switching technology, transmission protocols, architecture, and platforms are all different. We therefore will not cover B-ISDN here; however, we will cover broadband applications in our ATM technology section.

Basic Rate Interface is an ISDN offering that allows three digital signals, two 64 kbps "B" channels and one 16 kbps "D" channel, to be carried over a single pair of copper wires. This allows a single telephone circuit to have multiple uses: voice and fax, or voice and an online service.

Every phone call carries a certain amount of information, such as where it originated, the type of data, and routing information. Phone companies call this data signalling. Signalling information that moves outside of the call's physical circuit is called out-of-band signalling.(3)

ISDN takes advantage of this technology and creates separate links for call set up and control. A phone or PC can be programmed to analyze this signalling information and make "intelligent" decisions about where to route this call.(4)

For instance, incoming data calls, i.e., faxes, can be directly sent to a fax machine or to an ISDN modem connected to a PC. It also allows a small office to have one line, but different phone numbers. As a result, calls could be routed to different numbers, thereby allowing a small office to operate with fewer lines installed.

Since ISDN brings the digital signal to the home or desktop, it can be used to connect a small branch or home office to a remote network and function as if it were in the same room. A technology called BONDING (bandwidth on demand) enables the two "B" channels to be combined into one, practically doubling the amount of bandwidth available, to 112 kbps (16 kbps is lost because of differing RBOC standards).
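The channel arithmetic above works out as follows; this is a minimal sketch using only the figures in the text.

```python
# BRI carries two 64 kbps "B" channels plus one 16 kbps "D" channel
# over a single copper pair.
B_KBPS, D_KBPS = 64, 16

bri_total_kbps = 2 * B_KBPS + D_KBPS       # 144 kbps on one copper pair
bonded_kbps = 2 * B_KBPS                   # 128 kbps with both B channels combined
bonded_effective_kbps = bonded_kbps - 16   # 112 kbps after the RBOC-standards loss
```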

Since ISDN sends data at high speeds over ordinary phone lines, it is viewed as being able to provide users with an inexpensive way to increase their amount of bandwidth coming into the home. Additionally, efficiency is increased because of ISDN's quick call set up times (just a few seconds vs. the typical 20 or more seconds for analog modems).(5)

Primary Rate Interface (PRI) is basically for business users, who typically have a relatively large amount of traffic. It often uses what US phone companies call T-1 circuits. PRI is an ISDN circuit transmitting at T-1 speed, 1.544 Mbps.(6)

Many businesses own or lease T-1 lines from a local or long distance provider and then use a device called a multiplexer to split the circuit into many different channels.

In North America and Japan, PRI allows for 24 voice-grade channels (one "D" channel for signalling and 23 "B" channels for voice/data), while in other countries PRI has 31 channels, usually divided into 30 "B" channels and one "D" channel. In North America and Japan, both the terminal and the network adapters, which plug the terminal into PBXes or other communications equipment, are provided by the user, while in Europe they are located and owned by the phone company.(7) Additionally, North America and Japan use a T-1 interface, while other areas use an E-1 interface.(8)

As a result, these two different ISDN networks cannot communicate.
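The two regional channel plans can be summarized in a small helper function. This is illustrative only; the region labels and the function name are ours, not part of any ISDN specification.

```python
def pri_plan(region):
    """Return the PRI channel plan described in the text for a region."""
    if region in ("north-america", "japan"):
        # T-1 interface: 23 "B" channels + 1 "D" channel at 1.544 Mbps
        return {"b_channels": 23, "d_channels": 1, "line_rate_mbps": 1.544}
    # E-1 interface elsewhere: 30 "B" channels + 1 "D" channel at 2.048 Mbps
    return {"b_channels": 30, "d_channels": 1, "line_rate_mbps": 2.048}
```

Because the channel counts, line rates, and even equipment ownership differ, a T-1 PRI and an E-1 PRI cannot interoperate directly, which is the incompatibility the paragraph above describes.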

PRI, by significantly cutting down the time for setup and completion of calls, increases the efficiency of a customer's voice and data equipment. For instance, if PRI determines that a call cannot be completed, it does not keep trying to make the connection; instead, it frees the resources for other use. PRI can also modify its structure to accommodate changes in traffic patterns. It is typically used for connections between a private branch exchange, a telephone exchange operated by the customer of a telephone company, and the central office of the local or long distance telephone provider.(9)

PRI technology allows remote calls to connect as quickly and use the same special features as calls within a building. It also allows channels to be aggregated in increments to provide multi-rate wide band connections for video conferencing between offices or for connecting LANs at different sites.(10)

Basic ISDN and PRI are called narrowband ISDN because they limit transmissions to less than 2 Mbps (1.5 Mbps in the US and Japan).


ISDN was first developed in the councils of the CCITT, now called the ITU-T, in the late 1970s, yet in 1996 the technology has still not been fully implemented. The problems with implementing ISDN technology revolve more around politics than around technology. Since US ISDN had no leader to orchestrate an industry-wide conversion to its technology, major equipment and switch manufacturers adopted their own ways of implementing ISDN in their networks and switches. Finally, in the mid-1990s, representatives from all sections of the US industry met and created a list of requirements necessary for creating a national ISDN standard. This standard is called NISDN-1. This standard, however, works only for BRI and not for PRI, which is what most businesses use.(11)

Although Bellcore has advocated the use of NISDN-1, IXCs feel that this standard is inadequate for their needs and have chosen not to adhere to it.(12)

More work is in progress; the technical specifications in NISDN-2 and NISDN-3 aim to alleviate these concerns and to reach a national standard.

Many of the enhanced services that ISDN provides can dramatically improve communications between computers; however, the lack of a critical mass of customers using ISDN remains a major stumbling block in the proliferation of digital end-to-end services. Moreover, the availability and cost of the terminal adaptor equipment needed to facilitate communication between computers using ISDN lines and those using analog lines limit the attractiveness of residential ISDN.

ISDN in Europe has been relatively successful compared to the U.S. experience. In 1989, 26 telecom regulators representing over 20 European nations adopted a common standard for ISDN; this standard was fully implemented and operational by December 1993.(13)

Prior to December 1993, each country adopted its own ISDN standard; all European countries are in the process of converting their users from the old national standards to the new Euro-ISDN standard. Since December 1994, Euro-ISDN has allowed for out-of-band signalling and for both packet and circuit switching.

Germany, and Deutsche Telekom in particular, is the world's most successful ISDN operator, with an installed base of over half a million customers and over six years of experience in the market. It has achieved this success through heavy subsidization of ISDN equipment and service. It has also prioritized the linking of customers across international boundaries. In 1996, Deutsche Telekom announced a further reduction in all ISDN tariffs and is heavily subsidizing the equipment costs for companies that convert from the national standard to the new Euro-ISDN standard. Germany's implementation of the Euro-ISDN protocols regarding SS7 is helping to resolve the myriad ISDN interconnection problems with North America.

In Japan, the growth of ISDN is confined to the business sector. Two companies offer ISDN service: Nippon Telephone and Telegraph (NTT) and Kokusai Denshin Denwa (KDD). International service has been provided by KDD since its introduction in 1989 and is available to both North American and European countries.

Current and future applications

One of the most common uses for ISDN is in linking LANs to each other and to the outside world. ISDN is often a cost-effective way of temporarily linking LANs to each other, to remote hosts, or to individual non-LAN users or locations. It is not, however, designed to replace LANs or even to bridge them into larger LANs or WANs.

A popular ISDN application is a piece of software that dramatically cuts the cost of ISDN service to companies by using "spoofing" techniques. Spoofing saves both time and money by allowing LANs to remain logically connected to ISDN lines without incurring the huge costs of keeping those lines open. Spoofing works by dropping the phone connection after a certain length of time and automatically reconnecting it whenever a user reaccesses the LAN. The dropping and reconnection are transparent to the user because the connect time for ISDN is almost instantaneous.(14)
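The spoofing behaviour can be sketched as a small state machine. This is a hypothetical illustration of the idea, not any vendor's product; the class and method names are invented, and time is passed in explicitly to keep the sketch testable.

```python
class SpoofedLink:
    """Drop the ISDN 'call' after an idle timeout and silently redial
    on the next access. Illustrative sketch only."""

    def __init__(self, idle_timeout=60.0):
        self.idle_timeout = idle_timeout
        self.connected = False
        self.last_used = None
        self.redials = 0          # count of transparent reconnections

    def access(self, now):
        # 'now' is seconds since start; a real product would use the clock.
        if not self.connected or now - self.last_used > self.idle_timeout:
            self.connected = True  # near-instant ISDN call setup hides this
            self.redials += 1
        self.last_used = now
        return "connected"

link = SpoofedLink(idle_timeout=60.0)
link.access(0)      # first dial
link.access(30)     # within the idle window: no redial needed
link.access(200)    # idle too long: line was dropped, redial transparently
```

The user sees a permanently connected LAN, while the carrier only bills for the short intervals when the line is actually up.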

Video conferencing is another popular ISDN application among the business community. Domestic and international video conferencing using ISDN results in large cost savings, since it eliminates the need for travel or expensive video conferencing equipment. Throughout Europe and Japan, companies are using ISDN for one-to-one desktop video conferencing applications.(15)

There are several projects throughout the US in which companies and/or communities are experimenting with ISDN. The Cincinnati Public Schools are linked together through an ISDN public network which includes both BRI and PRI. Pacific Bell has entered into a pilot project with California school districts to demonstrate ISDN's potential by donating a year's free ISDN access to over 8,600 schools, libraries, and community colleges. Additionally, as the popularity of interactive applications involving the Internet grows, applications that use ISDN will grow as well. ISDN, by design, is well suited for multimedia applications.

Parties in Interest

In the US, ISDN was developed by committees, common carriers, and trade associations. All of the Regional Bell Operating Companies, as well as all long distance carriers and alternative carriers, are actively implementing ISDN in their networks or providing ISDN services to their subscribers; however, their pace of implementation is slow.

Bellcore is responsible for coordinating all ISDN standards information and for drawing up specifications for North American National ISDN Standards. The North American ISDN Users Forum, an organization of ISDN-interested parties and coordinated by the Department of Commerce's National Institute of Standards and Technologies, plays an important role in developing and implementing ISDN technology.

Major European telecom operators are at various stages of promoting ISDN, with Germany and France as the leading countries. In Asia, Japan, Hong Kong, Singapore, and Australia are among the active ISDN players.

Other Implications

If the critical differences between the services offered by the IXCs (AT&T, MCI) and those offered by the RBOCs (PacBell, NYNEX) and other LECs are not resolved, ISDN's benefits may never reach their intended users. Hopefully, Bellcore's technical specifications in NISDN-2 and NISDN-3 will alleviate these concerns and result in the IXCs' adoption of these standards.(16)

Until the interoperability and other technical problems in connecting LANs to leased or owned T-1 lines or digital switches are solved, ISDN use will remain minimal.

In addition, as ISDN equipment vendors continue to discover creative ways of exploiting the bandwidth, the appeal to users broadens. A system that allows easy access to online services (such as the Internet) will allow home office workers, students, and others to replace their modems and download complex files quickly and cost-effectively.

However, ISDN is a transitional technology, and its usefulness will diminish once broadband communications are fully commercialized in the public network service area. It is broadband applications that will shape the future digital world of mass communications.

C. Packet Switching (X.25)


Packet switching is the sending of data in packets through a network to a remote location. The data to be sent is subdivided into individual packets, with each packet having a unique identification and carrying its own destination address. Although each packet may take a different route and possibly arrive in a different order than the one in which it was sent, the packet ID lets the data be reassembled and placed in the proper sequence. Packet switching is the process of routing and transferring data in packet form so that a channel is occupied only during the transmission of the packet. Upon completion, the channel is available to transmit other packets. Because it uses the network only while data is being transmitted, a packet switched network provides more efficient data transport than a circuit switched network.

Description

A packet is a group of bits that is switched as a unit. A packet contains user data, destination and source information, control information, and error detection bits arranged in a particular format. Each packet is formed by segmenting user message information or data of variable length. The information embedded in packets is sufficient for switches to route them through networks. A packet header, which precedes the user data, contains control information and provides user identification for synchronization, routing, and sequencing of a transmitted data packet. Packets can vary in length but are usually limited to 1024 bits.
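The packet structure and reassembly just described can be sketched in a few lines of code. This is an illustrative model only, not any carrier's actual wire format; the field names are hypothetical, and the 1024-bit payload limit follows the text above.

```python
# Illustrative sketch of the packet format described above (not any
# carrier's actual wire format). Field names are hypothetical.

MAX_PAYLOAD_BITS = 1024  # packets are usually limited to 1024 bits

class Packet:
    def __init__(self, source, destination, sequence, payload: bytes):
        assert len(payload) * 8 <= MAX_PAYLOAD_BITS
        self.source = source            # routing: where the packet came from
        self.destination = destination  # routing: where it is going
        self.sequence = sequence        # sequencing: restores original order
        self.payload = payload          # one segment of the user's message

def segment(message: bytes, source, destination):
    """Segment a variable-length user message into packets."""
    chunk = MAX_PAYLOAD_BITS // 8
    return [Packet(source, destination, i, message[i*chunk:(i+1)*chunk])
            for i in range((len(message) + chunk - 1) // chunk)]

def reassemble(packets):
    """Packets may arrive out of order; the sequence number restores it."""
    return b"".join(p.payload for p in sorted(packets, key=lambda p: p.sequence))
```

Even if the network delivers the packets in reverse order, `reassemble` reconstructs the original message from the sequence numbers.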

For routing control, conventional packet switching includes explicit source and destination address information in each packet header. This permits networks to route each packet independently from source to destination. Transmitting switches store packets until an acknowledgment is received that each packet has arrived without errors at the destination switch. If a packet is received with errors, the destination switch returns a negative acknowledgment to the transmitting switch, which then retransmits the packet.
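The store-and-retransmit behavior described above can be sketched as follows. This is a minimal toy model, not a real switch implementation: the byte-sum checksum stands in for the actual error-detection bits, and `flaky_link` is a hypothetical stand-in for the network and destination switch.

```python
# Minimal sketch of the acknowledgment scheme described above: the
# transmitting switch keeps a copy of each packet and retransmits it
# until the destination acknowledges error-free receipt.

def checksum(data: bytes) -> int:
    """Toy stand-in for the packet's error detection bits."""
    return sum(data) % 256

def send_reliably(packet: bytes, link):
    """Retransmit until the destination returns a positive acknowledgment.

    `link(frame)` models the network and destination switch: it returns
    True (ACK) if the checksum verified, False (NAK) if not.
    """
    frame = packet + bytes([checksum(packet)])
    while not link(frame):   # NAK received: retransmit the stored copy
        pass
    # ACK received: the stored copy may now be discarded

# A destination switch whose link corrupts the first delivery:
attempts = []
def flaky_link(frame):
    attempts.append(frame)
    data, check = frame[:-1], frame[-1]
    if len(attempts) == 1:
        data = b"X" + data[1:]           # simulate a transmission error
    return checksum(data) == check       # ACK only if received error-free

send_reliably(b"hello", flaky_link)
print(len(attempts))   # 2: one errored delivery, one retransmission
```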

Unlike circuit switched voice networks, where signaling is invoked once to establish a connection for the duration of the entire call, in a packet network each packet is examined for source and destination address information, so the final throughput of a packet network is limited only by the processing capabilities of the packet switches. Frame Relay, a fast packet switching technology, improves on conventional packet switching by eliminating some of the header's functionality, thereby improving data transmission speed and throughput capacity.

A connection-oriented packet switched network transport service establishes logical connections in response to station equipment requests. All packets entering the network are delivered to data terminating equipment in the order they were received. This service is referred to as virtual circuit service, since message transmission is logically identical to transport over ideal circuit switched facilities.

There are two different internal packet switched network designs. One creates a single route or path through the network for all packets to follow, while the other numbers packets and routes them through the network on different paths, determined on a packet-by-packet (connectionless) basis; this is known as datagram operation. Datagram operation handles each packet independently and may not deliver packets to the terminating switch in order or free of errors. The benefit of the datagram approach is that it allows the network to dynamically find the best available route and minimizes the impact of line failures or persistent congestion. ARPANET, the first operational packet switched network, is a datagram network. TYMNET and most other packet networks today operate on a virtual-circuit basis.

Status

Packet switching was developed in the 1970s for long-distance data communications as an alternative to circuit switching. Interoperability among terminal equipment and networks is one of the key elements for the success of packet switching.

Packet switching networks, designed to sustain error-free service using older, poorer quality, lower bandwidth analog circuits, have provided years of reliable data communications. The lack of processing capabilities on the attached Data Terminal Equipment (DTE) devices (dumb terminals) to support error control and other higher-level protocol services led to the introduction of the X.25 packet switching network, where the networks themselves provide those functions.

Conventional X.25 packet switching is defined by a recommendation of the CCITT, the standards-setting body of the International Telecommunication Union (now called the ITU-T); the recommendation defines the interface between an end user device (data terminal equipment) and the network. X.25 establishes the procedures for two packet-mode terminal equipments to communicate with each other through a network. The procedures include functions such as identifying the packets of specific user terminals or computers, acknowledging and rejecting packets, initiating error recovery, flow control, and other services.

Packet switching operations produce variable end-to-end message delays due to the comprehensive error detection and correction service performed in the network. These variable delays and limited bandwidth are unacceptable in high speed LAN, LAN interconnections, voice, and video applications.

In 1974 the ITU-T first issued the X.25 Recommendation; five revisions have occurred since then, in 1976, 1978, 1980, 1984, and 1988. Packet switching, developed in the 1970s, is no longer appropriate to fulfill the requirements of modern applications. It was designed to support user traffic on error-prone networks, with the supposition that most user devices were relatively unintelligent, as opposed to today's networks, which are highly intelligent.

Current and Future Applications

X.25 is now the predominant interface standard for wide area packet networks. Its usage continues to grow throughout the world, particularly in the less developed countries; it is available in off-the-shelf products, and it is a cost-effective service for bursty, slow-speed applications. In Europe, X.25 is still very popular although Frame Relay has picked up momentum. In the U.S., Frame Relay is more popular than X.25.

Recently, companies in the U.S. and in Europe have been combining X.25 with ISDN service. This new range of products comprises two components: the service features "access to packet-switching networks via the D channel of the ISDN" or "via the B channel" for ISDN users, and the "special access for packet data" as a link between the ISDN and the X.25 network. The capability of ISDN access to transmit packet data allows new applications; electronic cash, booking systems, telecontrol applications, and database queries are sample applications of this type. The D-channel access is the solution for all interactive applications where the quantities of data to be transmitted are relatively small and time is not a critical factor.

X.25 also is one of the main protocols for carrying international Internet traffic. It is very common to find X.25 as the service protocol for links between the U.S. and European or Asian countries.

In general, however, X.25 will likely see decreasing usage as the new emerging technologies and physical interfaces mature.

Parties in Interest

As the first interface standard for wide area packet networks, X.25 is widely available from most international carriers worldwide. Some research networks also use X.25. EuropaNET, an international network linking European national university networks and offering full connectivity to the global Internet, provides fully-managed TCP/IP and X.25 connectivity to its users. Most router vendors are also interested in X.25. For example, Morning Star Technologies Inc., a wide-area communications products provider, allows most UNIX systems to communicate with other devices that are attached to public and private packet switched networks. Novell's new NetWare MultiProtocol Router 3.0 products also support X.25 networks.

Other Implications

With new technology, the operating environment is much more reliable than before: digital and broadband facilities are more widely available, transmission is nearly error-free, and end-user equipment is intelligent. Since fast packet switching technologies such as Frame Relay, SMDS, and ATM will gradually replace X.25, we do not consider X.25 packet switching one of the main technologies for the next century.

D. Frame Relay

Introduction

Frame Relay is a specialized type of packet switching that uses smaller packets and thus requires less error checking than traditional forms of packet switching. It has become the transmission standard for sending data over public or private leased phone lines. Data is broken down and placed in frames, each of varying length. Frame Relay, by temporarily seizing extra bandwidth,(17) can send large files quickly and efficiently and is excellent at sending high-speed, bursty data over wide area networks. It offers lower costs and higher performance for those applications in contrast to traditional point-to-point services. With frame relay, a pool of bandwidth is made instantly available to any of the concurrent data sessions sharing the circuit whenever a burst of data occurs. An addressed frame is sent into the network, which in turn interprets the address and sends the information to its destination at up to 2.048 Mbps.

Frame relay networks use bandwidth only when there is traffic to send. Frame relay is available to the end user at five speeds: 56/64 kilobits per second, 256 kbps, 1.024 Mbps, 1.5 Mbps, and 2.048 Mbps (in Europe). Frame relay is often used only for data communication, since voice traffic is highly sensitive to variations in the transmission delay common to packet networks. For voice to be supported satisfactorily in a packet network, each packet must have a time-stamp that is monitored by the network; Frame Relay lacks such a mechanism.(18) Small variations in transmission are usually not as critical to data traffic as they are for voice.

Description

Frame Relay is a high-speed switching technology that achieves 10 times the packet throughput of existing X.25 networks, both by eliminating two-thirds of the X.25 protocol complexity and by adding out-of-band signaling. It differs from regular packet switching because it functions at a different layer (layer two) of the industry's seven-layer communications model; conventional packet switching operates up to layer three. The higher the layer, the more functions must be performed. Since Frame Relay performs fewer functions, it is faster.
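The reduced per-frame work is visible in the Frame Relay frame itself. As an illustration, the sketch below parses the standard two-byte Frame Relay address field; the DLCI circuit identifier and the FECN/BECN/DE congestion bits come from the ITU-T Q.922 frame format rather than from this report, and extracting them is essentially all the address processing the network performs on each frame.

```python
# Sketch of parsing the two-byte Frame Relay address field (Q.922
# format): a 10-bit DLCI circuit identifier plus congestion and
# discard-eligibility bits. Compared with X.25, this is roughly all
# the per-frame processing a Frame Relay network performs.

def parse_fr_header(b0: int, b1: int) -> dict:
    return {
        "dlci": ((b0 >> 2) << 4) | (b1 >> 4),  # 6 high bits + 4 low bits
        "cr":   (b0 >> 1) & 1,                 # command/response indicator
        "fecn": (b1 >> 3) & 1,                 # forward congestion notification
        "becn": (b1 >> 2) & 1,                 # backward congestion notification
        "de":   (b1 >> 1) & 1,                 # discard eligible under congestion
    }

# DLCI 100 with the DE bit set: 100 = 0b0001100100
#   b0 = high 6 bits (000110) << 2, EA=0        -> 0x18
#   b1 = low 4 bits (0100) << 4, DE=1, EA=1     -> 0x43
hdr = parse_fr_header(0x18, 0x43)
print(hdr["dlci"], hdr["de"])   # 100 1
```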

Frame Relay was made possible by two technology developments: (1) widely deployed optical fiber-based transmission, which reduced transmission errors substantially; and (2) upgraded, intelligent customer premises equipment (CPE), which transferred error-recovery functionality from the network to the customer nodes.

As its name implies, Frame Relay relays the frames one behind the other in a predetermined path. Frame relay is a switched service positioned to improve communications performance through reduced delays, more efficient bandwidth utilization and decreased equipment cost.

At the present time, Frame Relay pricing varies considerably among service providers, but ballpark savings relative to private line alternatives of 30% to 40% can reasonably be expected. Frame Relay service providers typically price their service on a per port basis, independent of the type of access service employed.

Status

Frame Relay was first introduced 10 years ago as part of ISDN. Because of its high performance and efficient bandwidth utilization, Frame Relay has become the data technology of choice for organizations around the world implementing networks at speeds of T1/E1 (2 Mbps) and below. It has gradually replaced most X.25 deployments worldwide, and will reach an estimated $1.6 billion in revenues in the U.S. market in 1996, with increasing popularity in the European market. However, new generation applications and the growing demand on corporate information systems are creating the need for more bandwidth, especially in high traffic areas of the network. Thus, the original interface limitation of a maximum access rate of 2.0 Mbps makes Frame Relay a less attractive technology for these high bandwidth seekers.

The Frame Relay Forum, a group of more than 300 member companies worldwide with common interests in Frame Relay technology, recently amended the User-to-Network Interface (UNI) and Network-to-Network Interface (NNI) Implementation Agreements (IAs) to meet users' demand for access speeds up to 45 Mbps (DS3). In March 1996, LDDS Worldcom announced a new service offering of enhanced Frame Relay at up to 45 Mbps for its domestic and international customers and is the first carrier to implement this major development. According to LDDS, customers do not need to upgrade their existing Frame Relay networks to ATM switches to enjoy the same high speed data communications services that ATM offers.

In the coming months, service providers will migrate their frame-based networks to ATM backbones to carry the large installed base of Frame Relay users, and at the same time, providers will offer ATM as a connection option. This migration of network technology will continue the evolution towards widespread adoption of Frame Relay to ATM interworking standards, bringing greater bandwidth, higher performance networking and scalability for a wide variety of user applications.

Current and Future Applications

Frame Relay has evolved from a single application technology to one with a broad spectrum of uses. Initially driven by LAN-to-LAN connection, it also provides services for information systems applications, client-server computing, CAD/CAM applications, graphics applications, and other applications that generate bursty traffic. Corporations are recognizing that Frame Relay, as a virtual private line replacement, offers a reliable, high performance and cost-effective alternative for their mission-critical SNA (Systems Network Architecture, IBM's mainframe networking architecture) applications. Moreover, since the change is transparent to host applications (similar management tools and practices), moving SNA networks to Frame Relay can reduce monthly WAN lease costs substantially.

Recently, many non-traditional uses for frame relay have begun to emerge. Explosive Internet usage is fueling Frame Relay's growth as Internet service providers purchase Frame Relay services to connect their networks. Some wireless Internet providers now use Frame Relay as their backbone networks. As frame relay expands, users will gain the ability to incorporate non-data traffic over Frame Relay networks.

Parties in Interest

Both LAN bridge/router and T1 equipment vendors are striving to address the Frame Relay market. T1 vendors include AT&T, Network Equipment Technologies (NET), Newbridge Networks, Netrix, Northern Telecom, StrataCom, and Timeplex. StrataCom offers a proprietary, fast packet switching product now being tested in several field trials. DEC, Cisco, Vitalink, Wellfleet, and RAD Data Communications Inc. are LAN equipment vendors modifying existing customer premises products for Frame Relay application. Cascade Communications' B-STDX 9000 (a multiservice switch simultaneously supporting Frame Relay, SMDS, ATM, and ISDN on a single platform) and Cascade 500 ATM switch provide public carriers and corporations with ATM migration solutions.

All public carriers have a variety of Frame Relay offerings, domestically and internationally. In late March 1991, Williams Telecommunications Group [(WilTel) now owned by LDDS WorldCom] became the first carrier to announce general availability of a public Frame Relay service. LDDS's Frame Relay service is based on StrataCom's IPX-32 fast packet multiplexers, installed in its nationwide 11,000-route-mile digital microwave and fiber optic backbone network. In late 1991, Sprint introduced its public Frame Relay service, but it continues to support its standard X.25 based packet switching service, formerly known as Telenet. AT&T's Interspan frame relay service has been available since late 1992 and uses StrataCom's IPX-32 product as the access node of its high-speed network. MCI also has increased its Frame Relay service offerings worldwide. BT North America, MFS, LDDS Worldcom, CompuServe, and Graphnet are among other frame relay service providers.

BT North America offers service in 116 cities and three countries. It is interesting to note that although frame relay was to be a new ISDN service, no U.S. ISDN service provider has announced Frame Relay as a call-by-call additional mode for its existing ISDN service.

Other Implications

To date, each Frame Relay service provider's network is proprietary. Like X.25, Frame Relay is merely an interface specification, leaving internal network design issues to service providers. Consequently, interoperability among different Frame Relay networks has yet to be addressed. Frame Relay to ATM interworking standards, in particular, are going to play an important role. High bandwidth demand driven by ever-growing Internet and intranet networks will cause more and more migration in the coming years. Again, capacity and performance will be the main issues for the near future.

E. Switched Multimegabit Data Service (SMDS)

Introduction

Switched Multimegabit Data Service (SMDS) is a connectionless, cell-switched data transport service that offers total end-to-end applications solutions. It is targeted at the high performance, switched interconnection of LANs, thereby allowing communications to take place between customers without the need for a time-consuming service order process with the carrier service provider. SMDS is similar to frame relay in many respects, particularly in its protocol for transferring data traffic and its fast packet-switching technology.

SMDS is the first protocol to provide connectionless service for a broadband public network. The switch reads addresses and forwards cells one-by-one over any available path to the desired endpoint. SMDS addresses ensure that the cells arrive in the right order. The benefit of this connectionless "any-to-any" service is that traffic travels over the least congested routes; since there is no need for a pre-defined path between devices, the result is faster transmission, increased security, and greater flexibility to add or drop network sites.

Description

SMDS was created by Bellcore and was designed for local intra-LATA and wide area network (WAN) services. Its connectionless nature is what distinguishes it from other similar technologies. Bellcore designed SMDS as a public broadband service for the RBOCs. It uses a single switching and multiplexing mechanism to interconnect LANs, including very high-speed LANs such as fiber-distributed data interface (FDDI) networks at 100 Mbps, into a wide geographical area network such as a Metropolitan Area Network (MAN). Bellcore, using its knowledge and expertise in telecommunications, was able to cover every aspect of the network interfaces, from subscriber network to interswitching system to intercarrier, while allowing the local exchange network to be connected using a standard interface to any other network.

Since SMDS is connectionless, it is easy for users to build full mesh networks in which each site is connected to all other sites. This mesh connectivity requires fewer access lines and less terminating equipment than a dedicated private-line network, thereby achieving significant savings. With DS3 (45 Mbps) access rates, SMDS makes dispersed locations and remote sites look like they are on the same LAN.

SMDS offerings are flexible and can coexist with dedicated facilities. Network managers can connect their existing networks to an SMDS carrier switch via a Subscriber Network Interface (SNI) over a T-1 or T-3 circuit. T-1 SNIs are used to access 1.17 Mbps SMDS offerings; with T-3 SNIs, users can access 4, 10, 16, 25, or 34 Mbps offerings. A fractional T-3 circuit can be used to access intermediate-speed SMDS offerings.

SMDS also can provide call screening, verification, and blocking, enabling SMDS services to function as virtual private networks. As a result, customers can use SMDS as an alternative to private networks. Subscribers can either deploy SMDS for full mesh connectivity or use SMDS' address screening features to limit transmissions within a closed user group.

Since SMDS can offer different access classes, it can provide the throughput needed to satisfy individual company needs. Unlike private lines or virtual connections, it allows for the easy expansion of existing networks, since new sites can quickly be added to an SMDS net without reconfiguring the entire network. Additions to an SMDS network simply require an update to a screening database on the SMDS switch.

Other benefits subscribers can realize with SMDS include bandwidth on demand, burst-speed bandwidth without preplanning, network security and privacy, billing features for network operators, most of all, its ready availability for high-speed LAN performance.

The separation of the technology independent SMDS service layer from the technology dependent access layers allows SMDS to be supported by many different switching technology platforms and different network interface technologies. The latest SMDS access interface uses the public network-based, multi-service ATM user-to-network interfaces.

Status

SMDS was first deployed in December 1991 and now has over 1,500 connections. Analysts expect this to increase to more than 10,000 connections by the year 2000.

SMDS, because of its high speed access rates (up to 45 Mbps), provides a level of service that most frame relay networks will never provide, and it is providing this service before full ATM deployment.

The recently announced low-speed SMDS access - 56 Kbps, 64 Kbps, and increments on Nx56/64 kbps will allow both small and large companies to take advantage of SMDS' service features.

SMDS, like ATM, is based on fixed-size, 53-byte cells: a 48-byte payload and a 5-byte address header. Since both share compatible control headers, transferring applications from SMDS to ATM is relatively easy. SMDS is the only available connectionless ATM service.

The ATM Forum works closely with the SMDS Interest Group to develop interoperability specifications for both SMDS and ATM.

Current and Future Applications

Initially, SMDS service focused on intra-LATA MAN applications. Average LAN-to-MAN-to-LAN traffic may be only 0.5 Mbps, but critical applications often require peak capacities of 10 Mbps or more. In a conventional network the user would be required to lease an expensive DS3 circuit to meet the one to two percent of the time the extra capacity is required.

SMDS's LAN-like performance features make it a natural fit as a backbone network for seamlessly interconnecting Ethernet, Token-Ring, FDDI, and ATM LANs over extended geographic areas. Like ATM, SMDS provides high speed access services for Internet providers. In particular, Internet providers have found SMDS very useful in managing large, complex connectionless networks where the mesh structure of the network can become very complicated.

Due to its design as a Wide Area Network (WAN) switching service, SMDS can also provide international links and transfer data internationally. MCI has established an international SMDS trunk to the UK which serves England and Ireland. European SMDS customers can send and receive data from the U.S. SMDS network.

Parties in Interest

In the U.S., GTE and most RBOCs, except NYNEX, offer various SMDS services. MCI, Sprint, WilTel, and PACKETS provide interexchange points of presence across the country. SMDS has immense growth potential not only in the United States but also around the world. In late 1993, British Telecom, in a joint venture with the academic community, introduced a commercial SMDS service called SuperJANET, which offers 10 Mbps access over E3 lines for medical image transfer, distance learning, multimedia information services, and other applications. In Germany, the SMDS service is called Datex-M and provides a new future-oriented broadband service for the interconnection of LANs; it allows customer traffic to increase from 2 to 4 Mbps and even higher. There are several other major European countries with SMDS networks, including France, Italy, Switzerland, Sweden, Belgium, Austria, Ireland, Denmark, the Netherlands, and Portugal.

In South America, Telebrasilia in Brazil offers minimal SMDS service. In Japan there is a potential SMDS trial scheduled for later this year.

Equipment vendors also play a vital role in this new service. More than 40 manufacturers of network and computing equipment now offer SMDS products - bridges, routers, interface cards, access servers, protocol analyzers, and switches - both in the U.S. and Europe.

Other Implications

SMDS got a late start because of the long lead time it took to develop and implement the SMDS InterCarrier Interface. As a result, Frame Relay gained market momentum that has been hard to stop, making it difficult for SMDS to gain additional market share. ATM, on the other hand, has had great media coverage but still faces many tough technical issues. SMDS is proving to be an excellent data service for highly meshed data networks that need access line speeds between 56 Kbps and 34 Mbps. SMDS also provides a better service for dynamic networks that are growing or changing, and for inter-enterprise network connectivity. In the future, however, carriers will eventually use ATM as a common switching platform to carry a variety of fast packet services such as SMDS and Frame Relay. Already, a few RBOCs are planning or implementing SMDS using ATM switches rather than service-specific switches. SMDS should prove to be the connectionless service for ATM networks, with its unique any-to-any connectivity among diverse and dynamically changing communities of interest.

F. Asynchronous Transfer Mode (ATM)

Introduction

ATM, Asynchronous Transfer Mode, is a switching, multiplexing and transmission technique which uses short, fixed-length packets called cells to transport information. The ATM cell contains a 5-byte header and a 48-byte payload to transfer information regardless of the underlying type of transmission.

The word "asynchronous" is used because ATM allows asynchronous operation between the sender's clock and the receiver's clock. The difference between the two clocks can be easily accommodated by inserting and removing empty or unassigned cells (cells that do not contain any information).(19) ATM does not care what form the information takes; it simply cuts the data into equal-sized packets or cells, attaches a header, and routes each cell to the proper destination. ATM cell routing is based on the principle of logical channels with dual identification: the cell header contains the identifier of the basic connection to which the cell belongs, called the virtual channel identifier (VCI), and the identifier of the group of VCs to which the connection belongs, called the virtual path identifier (VPI). With the VCI and VPI, along with other information in the cell header such as the interface format (user-to-network interface, or UNI, and network-to-node interface, or NNI), the ATM cell can be routed to its destination through the network.
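The dual VPI/VCI identification can be made concrete by decoding a cell header. The sketch below parses the 5-byte header in its UNI layout (4-bit generic flow control, 8-bit VPI, 16-bit VCI, 3-bit payload type, 1-bit cell loss priority, 8-bit header error check); the layout is a fact of the standard ATM format rather than a detail from this report, and the example VPI/VCI values are hypothetical.

```python
# Sketch of decoding the 5-byte ATM cell header at the UNI. At the
# NNI there is no GFC field; those 4 bits extend the VPI to 12 bits.

def parse_atm_uni_header(h: bytes) -> dict:
    assert len(h) == 5
    return {
        "gfc": h[0] >> 4,                                # generic flow control
        "vpi": ((h[0] & 0x0F) << 4) | (h[1] >> 4),       # path: group of VCs
        "vci": ((h[1] & 0x0F) << 12) | (h[2] << 4) | (h[3] >> 4),  # channel
        "pt":  (h[3] >> 1) & 0x07,                       # payload type
        "clp": h[3] & 1,                                 # cell loss priority
        "hec": h[4],                                     # header error check
    }

# Hypothetical cell on VPI 1, VCI 32:
hdr = parse_atm_uni_header(bytes([0x00, 0x10, 0x02, 0x00, 0x00]))
print(hdr["vpi"], hdr["vci"])   # 1 32
```

A switch needs to look at only these few header bits, not the 48-byte payload, to forward the cell; this is what lets cell routing be handled entirely in hardware.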

ATM is related to both circuit and packet modes. The simplicity of the protocol means that the transfer of cells to the network nodes can be handled entirely by the hardware, allowing the payload of the cell to be carried transparently, similar to circuit switching. Since ATM retains all the flexibility of packet mode, conveying only required information, it allows various bit rates of information flow and provides dynamic bandwidth allocation.

Description

ATM, like Frame Relay, is another form of fast packet switching technology. It chops traffic into fixed-size cells and mixes these information cells, often from different sources, into a single large transmission pipe through a process called statistical multiplexing.(20) This process allows the transmission pipe to reach an optimal utilization level. ATM, through its adaptation layers, supports five different service classes: constant bit rate (circuit emulation), variable bit rate video and audio, connection-oriented data transfer (frame relay service), connectionless data transfer (SMDS), and high-speed data transfer (such as TCP/IP, the Internet protocol). Traffic from several sources (voice, data, and video) is multiplexed on a single link. Because the information is packetized into ATM cells, the ATM switch multiplexes only cells carrying valid information and discards empty or invalid cells, thereby reducing the bandwidth consumed to the minimum. ATM is more efficient in its use of bandwidth, particularly in broadband applications.
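The statistical multiplexing described above can be illustrated with a toy model: cells from several bursty sources share one pipe, and idle (empty) slots are simply not carried. The source streams and cell labels are invented for illustration; real multiplexers work on timescales and queue disciplines far beyond this sketch.

```python
# Toy illustration of statistical multiplexing: cells from several
# sources share one transmission pipe, and only cells carrying valid
# information are forwarded; idle slots consume no link bandwidth.

from itertools import zip_longest

IDLE = None  # an unassigned/empty cell slot

def statistical_mux(*sources):
    """Interleave cell streams onto one pipe, discarding idle slots."""
    pipe = []
    for slot in zip_longest(*sources, fillvalue=IDLE):
        pipe.extend(cell for cell in slot if cell is not IDLE)
    return pipe

voice = ["v1", IDLE, "v2", IDLE]   # bursty: talk spurts with silences
data  = [IDLE, "d1", "d2", "d3"]
video = ["x1", "x2", IDLE, IDLE]

pipe = statistical_mux(voice, data, video)
print(pipe)        # ['v1', 'x1', 'd1', 'x2', 'v2', 'd2', 'd3']
print(len(pipe))   # 7 cells on the pipe instead of 12 fixed time slots
```

Where fixed time-division multiplexing would reserve 12 slots (3 sources x 4 periods), the shared pipe carries only the 7 cells that actually hold information.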

Cell relay's fixed-size cells simplify processing and queue management. Since its network nodes are much simpler than those required by frame relay's variable-length packets, cell relay saves transmission time and yields a faster network.

The size of the cell or packet also affects transmission speed: the larger the packet, the higher the delay; the smaller the packet, the higher the overhead ratio. In the early design stage, there was a long debate in the ITU-T committee over the choice of ATM cell size. Finally, the ITU-T adopted a compromise 48-byte payload, between the 32 bytes favored by Europe, whose shorter packetization delay avoids the need for echo cancelers for voice, and the 64 bytes favored by the U.S. and Japan, whose networks already deploy echo cancelers, for transmission efficiency.
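The tradeoff the committee weighed can be put in numbers. With the 5-byte header fixed, each candidate payload size implies a header overhead ratio and a voice packetization delay (the time to fill one cell with 64 kbps PCM voice, i.e. 8,000 bytes per second); the arithmetic below is standard, though the exact figures debated in committee are not taken from this report.

```python
# The cell-size tradeoff in numbers: header overhead vs. the delay
# to fill one cell with 64 kbps PCM voice (8,000 bytes per second).

HEADER = 5          # bytes of header per cell
VOICE_RATE = 8000   # bytes per second (64 kbps PCM voice)

for payload in (32, 48, 64):
    overhead = HEADER / (HEADER + payload)       # share of link spent on headers
    fill_delay_ms = payload / VOICE_RATE * 1000  # time to fill one cell with voice
    print(f"{payload}-byte payload: {overhead:.1%} overhead, "
          f"{fill_delay_ms:.0f} ms fill delay")
# 32-byte payload: 13.5% overhead, 4 ms fill delay
# 48-byte payload: 9.4% overhead, 6 ms fill delay
# 64-byte payload: 7.2% overhead, 8 ms fill delay
```

The compromise 48-byte payload sits between the two: less overhead than 32 bytes, less voice delay than 64 bytes.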

In any packet switching paradigm, the bits in the header carry certain functions but represent an overhead cost of transmission, reducing the amount of bandwidth available for transporting data. For example, conventional packet switching performs packet retransmission, frame delimitation, and error checking; Frame Relay performs only the last two functions; ATM performs none of these functions. One of the main advantages of ATM is the reduced functionality of the ATM network made possible by a smaller header, which simplifies the switching and processing functions in the network.

Current and Potential Applications

ATM is designed as a serial, high-speed interface, capable of taking advantage of technology advances to reach multi-Gbps transfer rates. It can be applied to different environments, such as LANs, WANs, and public networks. Also, ATM supports synchronous as well as asynchronous traffic, meaning voice and image; it is therefore well suited to the CATV environment.

The first applications of ATM are for high speed LAN interconnect or backbone configurations and for power users running ATM to the desktop. A number of ATM WAN switch announcements made in 1993 addressed traffic management and congestion control, providing another wave of ATM applications. Because of its capability of switching multiple bit streams, ATM eventually will be able to provide multimedia access to all LAN users at the appropriate speed for their needs. Other applications such as video-on-demand, distance learning, and interactive audiovisual services are also suited for ATM-based services. In addition, the increasingly popular Internet and intranet services, which currently use leased private lines as network backbones, will be perfect candidates to migrate to ATM networks.

Status

ATM concepts began in the early 1980s, when researchers were seeking the most suitable technique for switching high bit rate channels at more than 100 Mbps with short delay. The first feasible model of a complete ATM transfer system was announced by CNET, the French telecommunications research center, in 1985. In 1988, the ITU-T approved Recommendation I.121, which ratified the choice of ATM as the target mode for broadband networks for all types of information, including low bit rate information such as voice. Since then, a series of intense standardization efforts has been carried out, leading to a first series of ITU-T recommendations in 1991. In September 1991, the ATM Forum was established with the objective of accelerating the use of ATM products and services through a rapid convergence of interoperability specifications. The ATM Forum now has more than 700 members and has a significant influence on ATM standards and specifications.

The first ATM products appeared on the market in 1992, designed to solve LAN computer overload problems. Since then, the range of products has continued to grow and spread to WANs, offering high bit rate interconnection between LANs at different sites, and to ATM interface cards for workstations. The first public ATM network appeared in 1993.

After a series of pilot programs conducted in 1993 and 1994 in both the U.S. and Europe, ATM is now entering its commercial stage. However, more specification and standards work is still needed.

Although ATM has very clear advantages over other techniques, its requirement that information be placed in cells before transmission, with the reverse operation performed at the receiving end, creates additional cost for its users. Any decision to adopt ATM must be made as a business decision, and ATM will be evaluated against competing products and services. At this early stage, ATM is relatively expensive compared with competing technologies such as Frame Relay and FDDI. It will take some time before ATM technology can be fully commercialized and generate profits from its service offerings.

Parties in Interest

All major carriers, LECs, IXCs, and CAPs are interested in ATM. ATM services are available today from MFS Datanet, Sprint, WilTel, AT&T, MCI, Pacific Telecom, Nynex, France Telecom, British Telecom, Mercury, DBP Telekom, Telecom Finland, and Helsinki Telephone. On the vendor side, AT&T, Northern Telecom, TRW, Newbridge, Cisco, StrataCom, Cascade, Alcatel, Siemens, Fujitsu, NEC, Loral Data Systems, Wellfleet, and DSC have announced or released ATM WAN switches for the public network sector, and many are already deployed in operational networks.

Other users are big corporations and organizations. The U.S. Geological Survey, Department of the Interior (DOI), for example, has deployed an ATM network, DOINET, made up of 13 switches and 19 T1 and MAN circuits at major DOI sites throughout the country to provide integrated data, voice, and video information. Another highly visible project, the Time Warner video-on-demand trial in Orlando, Florida, uses ATM to the set-top. A number of telecommunications trials in North America, Europe, and Japan use ATM up to the head-end or distribution node.

Alcatel, the French telecommunications equipment giant, has ATM products on every continent. As of May 1995, its 21 operating companies had 59 ATM sites with a wide range of offerings, from public network switches (80 Gbit/s) to WANs, LANs, and network management.

In general, the U.S. telecommunications market is typically ahead of Europe in implementing a new technology. In the case of ATM, the U.S. lead time is roughly 12 to 18 months.

Other Implications

Since ATM is a global switching technique capable of transporting multi-Gbit/s bit rates, it will ease the capacity "bottlenecks" between networks, particularly in the distribution network (currently limited to 64 kbit/s to 2 Mbit/s rates). ATM will also provide ample capacity for services requiring high bandwidth, such as video-on-demand. In addition, ATM does not discriminate between cells based on their content unless specifically told to do so. Consequently, as ATM evolves it will eliminate the need for separate voice, video, and data networks (service convergence) while simultaneously blurring the distinction between LANs, MANs, and WANs.

G. Fiber Optics & Fiber Distributed Data Interface (FDDI)


Fiber Optics is a technology in which light is used to transport information from one point to another. More specifically, optical fibers are thin filaments of glass through which light beams are transmitted over long distances, carrying enormous amounts of data. Modulating light on thin strands of glass produces major benefits: high bandwidth, relatively low cost, low power consumption, small space requirements, total insensitivity to electromagnetic interference, and strong resistance to tapping. These benefits appeal to anyone who needs vast transmission capacity, to the military, and to anyone who runs a factory with a great deal of electronic machinery.

Fiber Distributed Data Interface (FDDI) is a set of ANSI/ISO standards that, when taken together, define a 100 Mbit/s, timed-token protocol Local Area Network that uses fiber optic cable as the transmission medium. The standards define Physical Layer Medium Dependent, Physical Layer, Media Access Control, and Station Management entities.
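FDDI's timed-token access method can be pictured with a small sketch: only the station holding the token may transmit, and it then passes the token to its downstream neighbor on the ring. The Python sketch below illustrates the token-rotation idea only; the station names are hypothetical, and FDDI's token-rotation timers and dual counter-rotating rings are omitted.

```python
from collections import deque

# Toy sketch of FDDI-style token passing: only the station holding the
# token may transmit; it then passes the token to the next station on
# the ring. (Timers and the dual counter-rotating rings are omitted;
# station names are hypothetical.)

def rotate_token(stations, rounds=1):
    """Yield stations in the order they receive the token."""
    ring = deque(stations)
    for _ in range(rounds * len(ring)):
        yield ring[0]        # the token holder may transmit now
        ring.rotate(-1)      # pass the token downstream

print(list(rotate_token(["A", "B", "C"])))   # each station transmits in turn
```

Because every station is guaranteed the token once per rotation, access to the ring is deterministic rather than contention-based, which is what distinguishes token protocols from Ethernet-style media access.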


Contemporary fiber optic networks can transmit voice, video and data at speeds 10 to 100 times faster than the standard copper wiring that has been used in telecommunications for over a century. What distinguishes fiber from copper is information density or bandwidth; one can pack more bits per unit of cable into a fiber. A single fiber could, in theory, transport 25 terabits each second, an amount sufficient to carry simultaneously all the telephone calls in the U.S. on Mother's Day.

However, fiber optics has realized only a small fraction of this potential. It is held back by the tendency of a pulse representing a digital 0 or 1 to lose its shape over long distances, as well as by the absence of optical components that can process information at these blazing speeds.(21) Research directed at these limitations has resulted in the development of erbium-doped fiber amplifiers (EDFAs), which permit a signal to be "pumped" up using a laser light source thousands of kilometers away at one of the cable head ends.

The third generation of undersea fiber optic cables now entering service can carry approximately 320,000 virtual voice channels. This represents an order of magnitude increase from the second generation of cables (operating at 560 Mbits/s) which, in turn, provided a tenfold increase in capacity over first generation cables. Recent trials and experiments by AT&T, Alcatel and KDD suggest that the next generation of cables, to be deployed between the years 2000 and 2005, will increase capacity by at least another order of magnitude to 50 Gbits/s and possibly to 80 Gbits/s or more. That will be enough to transmit at least 3.5 million simultaneous telephone calls or several hundred thousand channels of compressed video services.
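The voice-channel figures quoted above can be checked with back-of-envelope arithmetic. In the sketch below, the 64 kbit/s PCM rate is the standard uncompressed voice channel, while the roughly 14 kbit/s compressed-voice rate is an assumption chosen to illustrate how 50 Gbit/s yields the 3.5 million call figure.

```python
# Back-of-envelope check of the cable capacity figures above.
# The 64 kbit/s PCM voice rate is standard; the ~14 kbit/s compressed
# rate is an assumption chosen to illustrate the 3.5 million call figure.
GBPS = 1_000_000_000

cable_rate = 50 * GBPS                 # next-generation cable rate, per the text
pcm_voice = 64_000                     # uncompressed 64 kbit/s voice channel
compressed_voice = 14_000              # assumed compressed voice channel

print(cable_rate // pcm_voice)         # 781250 uncompressed calls
print(cable_rate // compressed_voice)  # over 3.5 million compressed calls
```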

The enormous capacity of the next generation of fiber optic cables will result from two technologies -- optical soliton transmission and wavelength division multiplexing (WDM) -- which leverage the benefits of earlier breakthroughs, such as optical amplifiers. Solitons are unique pulses of light which maintain their shape and intensity at very high bit rates over great distances. By coupling soliton technology with WDM, the aggregate transmission capacity of any given fiber optic cable may be increased severalfold.

The commercial impact of these developments will be felt well before the next generation of cables is in the water. First, WDM technologies will permit some cable owners to upgrade capacity merely by changing the equipment at the cable head ends, which could result in four- to eight-fold capacity increases. Second, WDM will be more flexible, and thus more attractive to investors, since it can be used to create different virtual (frequency-specific) channels on a cable so that the cable can be partitioned among carriers or countries without reducing the cable's overall capacity. As soliton WDM technology moves into commercial production, inter-continental calls may cost less than a local call.(22)


One of the problems with fiber is a phenomenon known as absorption.(23) As photons travel through fiber, some of them encounter impurities in the glass core or interact with the glass itself and are turned into other forms of energy, such as heat. As a result, a pulse can be thought of as being slowly absorbed by the fiber, and it is necessary to periodically amplify the signal.

Erbium-doped Fiber Optical Amplifier: One of the devices used to renew the strength of the signal is an erbium-doped fiber optical amplifier (EDFA).(24) These are short lengths of optical fiber treated (or doped) with a rare element (erbium), into which a laser pumps light, exciting the erbium ions and thus boosting the power of lightwave signals without the need for optoelectronic conversion and subsequent electronic processing and amplification as in conventional lightwave repeaters.(25)


EDFAs, which have recently been deployed in commercial networks, demonstrate superior performance for very high speed networking: unlike electronic amplifiers, they can amplify a signal carrying data at transmission speeds greater than 50 gigabits per second,(26) and they can simultaneously accommodate many wavelength division multiplexed (WDM, see below) channels. Thus, they offer a simple and cost-effective means to access the vast transmission capacity inherent in single-mode optical fibers, allowing networks to have significantly increased transmission capacity, network functionality and reliability, and operational flexibility.(27)


The bandwidth of a fiber is determined by the amount of light it can carry. Despite the fiber's huge potential, the information-carrying capacity of a network that uses a light-wave broadcasting scheme can still become exhausted. Adjacent wavelengths of light can transport only a limited number of video transmissions without one signal's interfering with another. To avoid conflicting signals, a "guard band" in an unused portion of the optical spectrum must be interspersed between each of the wavelengths that conveys information. The presence of the guard bands diminishes the useful bandwidth.(28)


Optical Demultiplexer: In an optical demultiplexer, a device known as a nonlinear optical loop mirror is capable of processing the optical signal to multiplex, demultiplex, switch or even store information. For demultiplexing, it receives light pulses from a fiber transporting a 40-gigabit-per-second stream of data. In the loop mirror, which is a circular strand of fiber with special material properties, the optical signal interacts with another series of light pulses that have been injected into the device by a laser. The interaction of these different trains of light pulses causes a signal to emerge, and it transports 10 gigabits of the data into a new fiber.

At the same time, the original signal -- now carrying the remaining 30 gigabits per second of data -- returns to the fiber from which it entered the mirror. If demultiplexing is not desired, the light returns to the original fiber unaltered, still transporting the full 40 gigabits per second of data.

The optical loop mirror can also serve as a digital-processing device. In demultiplexing the signal, the loop mirror either modifies the signal or leaves it unchanged -- an on-or-off state identical to the 0 or 1 of digital logic.
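Setting aside the optics, the demultiplexing behavior described above amounts to picking off one of four interleaved time slots from the 40 Gbit/s stream, leaving the other three slots on the original fiber. The toy Python sketch below illustrates only that interleaving arithmetic; the bit pattern and slot count are illustrative.

```python
# Illustrative sketch (ignoring the optics): demultiplexing a 40 Gbit/s
# stream into a 10 Gbit/s tributary amounts to selecting one of four
# interleaved time slots; the rest of the stream stays on the fiber.

def demultiplex(stream, n_slots=4, slot=0):
    """Return (selected tributary, remainder) from an interleaved stream."""
    selected = [b for i, b in enumerate(stream) if i % n_slots == slot]
    remainder = [b for i, b in enumerate(stream) if i % n_slots != slot]
    return selected, remainder

bits = [1, 0, 1, 1, 0, 1, 0, 0]   # toy stand-in for the 40 Gbit/s stream
tributary, rest = demultiplex(bits)
print(tributary)                  # the 10 Gbit/s branch
print(rest)                       # the remaining 30 Gbit/s
```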

Wavelength Division Multiplexing (WDM) Networks: The increased need for new services will require ever higher capacity transmission systems. WDM is generally seen as a way to fully exploit the wide bandwidth available on single-mode optical fibers. The basic idea behind WDM networks is to divide up the bandwidth of a fiber into multiple channels, and then arrange for hosts that want to communicate to rendezvous on a particular channel.(29) Each transmitter on such a network contains a laser that can be adjusted to dispatch a signal at a certain wavelength, or color, of light. WDM is analogous to radio broadcasting, which multiplexes radio signals along multiple channels.

WDM enhances network flexibility by using new wavelengths to respond to unforeseen increases in traffic or to specific requirements, such as leased lines, video, etc. The present stage of knowledge allows an input optical channel separation of about 1.5-2 nm.
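Given the 1.5-2 nm channel separation quoted above, a rough channel count follows from dividing the usable amplifier window by the spacing. The roughly 30 nm window assumed in the sketch below is an illustrative figure, not one taken from the text.

```python
# Rough WDM channel count for the channel separations quoted above.
# The 30 nm usable amplifier window is an illustrative assumption.
window_nm = 30.0

for spacing_nm in (1.5, 2.0):
    channels = int(window_nm // spacing_nm)
    print(f"{spacing_nm} nm spacing -> {channels} channels")
```

Tighter spacing (dense WDM, discussed below) raises the channel count in direct proportion, which is why packing channels closer together is one of the two design challenges identified in the text.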

Two key challenges arise in designing WDM networks. One is to minimize the amount of time spent deciding which systems will communicate on which channels and to maximize the time spent transmitting user data. The second is to make effective use of the fiber bandwidth, both by packing the different frequency channels as close together as possible (a technique known as dense WDM) and by improving the effective tuning ranges of optical devices.

To address these challenges, techniques are being developed for the key components of WDM systems, such as narrow linewidth laser diodes, wavelength tunable receivers and filters, and wavelength multiplexers/demultiplexers. A WDM system also requires an optical amplifier, and in order to propagate several wavelengths, the amplifier will preferably be a fluoride-based erbium-doped fiber amplifier (EDFA), which is still under study. Most of the components involved in WDM techniques are still at the laboratory stage. Wideband optical amplification in fluoride-based fibers, semiconductor amplifier-based wavelength conversion, and tunable add/drop filters are all promising technologies for extensive use of wavelength division multiplexing in high speed transmission systems (10 Gbit/s) and for flexible routing purposes in future all-optical transport networks.(30)



FDDI has become the standard for 100 Mbit/s networks using optical fiber as the communications medium. FDDI supports all major protocols in use today, including TCP/IP; however, its long-term prospects are limited. FDDI vendors have been battling high costs and a lack of standards for several years, allowing ATM developers time to improve their product and reduce its costs. FDDI will continue to be the technology of choice for large bandwidth needs until ATM developers can reduce ATM's costs, create better standards, and erase the interoperability problems with TCP/IP protocols. FDDI is currently more popular as a LAN backbone than as a device connection. Developers are already at work on FDDI II. Since FDDI stations also act as repeaters (that is, they regenerate data), any data sent to inactive stations will be lost. Future developments include enhanced station management functions, support for data transmission over category 5 unshielded twisted pair cables, and support for FDDI over 50 micron fiber optic cables.

Parties in Interest

The parties in interest include telecom equipment manufacturers, large data communication companies such as MFS Datanet, companies who manage large network connection sites, and Internet backbone providers. MFS is the only Internet service provider to own or control fiber optic local loop, intercity, and undersea facilities in the US, UK, France, Germany, and Sweden. MFS has 213,000 fiber miles throughout its network and operates an FDDI bridged service that allows customers to interconnect their FDDI LAN backbone networks at either 45 Mbps or 100 Mbps.

Other Implications

H. Synchronous Optical Network (SONET)


Synchronous Optical Network (SONET) is an international optical transmission standard and is part of a larger suite of telephony standards known as the Synchronous Digital Hierarchy (SDH), standardized by the Comite Consultatif International Telegraphique et Telephonique (CCITT), now renamed the ITU-T. SONET is a fiber optic transmission standard which primarily replaces copper wiring with fiber. The goal is to achieve higher transmission rates in telephone trunks (by signalling at higher rates), while also using fewer wires (because one fiber can replace several copper lines) and employing a more flexible signalling protocol than is used for copper. This is in contrast to WDM networks, which attempt to use the special properties of optical fibers to build new types of data networks.(31)


However, SONET is the transmission protocol used in telephone company fiber and, as a result, is likely to be one of the most commonly used transmission protocols over fiber. Interest in SONET, both among telecommunications service providers and -- more recently -- end users, has been increasing. This demand stems from incremental growth in voice traffic, from new data communication applications requiring substantial bandwidth (such as LAN interconnection or medical imaging), and from existing data applications that are significantly increasing communications traffic.


SONET was initiated by Bellcore on behalf of the Regional Bell Operating Companies to attain multi-vendor interworking, to be cost effective for existing services on an end-to-end basis, to create an infrastructure to support new broadband services, and to enhance operations, administration, maintenance and provisioning (OAM&P). SONET, according to Northern Telecom, offers many advantages over asynchronous transport, such as the opportunity for back-to-back multiplexing, digital cross-connect panels, easy evolution to broadband transport, compatibility with evolving operations standards, enhanced performance monitoring, and extension of OAM&P capabilities to end users.

Traditional transmission standards are based on the digital signal (DS) hierarchy. DS transmission relies on electrical interfaces to transmit information and the maximum-speed interface is DS3 or 45 Mbps. The DS hierarchy is asynchronous, meaning that extra bits must be inserted into digital signal streams to bring them up to a common speed.

SONET, however, extends the standard electrical interfaces to optical signals, which are transmitted using laser-light pulses. SONET is "synchronous," meaning it uses external timing to ensure all fiber-optic equipment is synchronized to the same frequency. And like a larger water hose that lets more water flow through, SONET provides increased bandwidth. It comes in 155-megabit increments that can be easily aggregated to create 600 Mbps or larger signals, so SONET networks can be readily configured to support whatever bandwidth is required.
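The aggregation described above follows from SONET's rate hierarchy: every line rate is an integer multiple of the 51.84 Mbit/s STS-1 base rate, which is how 155 Mbit/s building blocks combine into 622 Mbit/s and larger signals. A quick sketch of the arithmetic:

```python
# SONET line rates are integer multiples of the 51.84 Mbit/s STS-1 base
# rate; OC-3 (155.52 Mbit/s) building blocks aggregate into OC-12
# (622.08 Mbit/s) and larger signals.
STS1_MBPS = 51.84

for n in (1, 3, 12, 48):
    print(f"OC-{n}: {STS1_MBPS * n:.2f} Mbit/s")
```

The "155-megabit increments" and "600 Mbps or larger signals" in the text correspond to the OC-3 and OC-12 rates in this hierarchy.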

For carriers as well as end-users, SONET also simplifies provisioning and enhances operational flexibility through robust fractional services. SONET also improves reliability and management of the network, since it requires fewer network cross-connects and provides more information about the quality of the network.

Reliability and network-management capabilities are other important benefits of SONET. The SONET standard provides for end-to-end monitoring of the communications network, which makes identification and repair of faulty network elements both more straightforward and faster. Further, it means that carriers can guarantee transmission performance and that users can readily verify compliance without resorting to network test equipment and off-line testing, thereby enhancing network maintainability.

In conjunction with digitally switched networks, SONET supports software control of traffic. Combined with SONET's error tracking and reporting, dynamic routing allows network managers to design robust, error-detecting networks that can offer "self-healing" capabilities. And, because SONET networks rely on digital multiplexers, end-user services can be provisioned remotely under software control instead of requiring someone to go on-site to make a physical cross-connect.

Because routing and bandwidth are under software control, service can be provisioned in hours or minutes, rather than the days or weeks many telecommunications managers have been accustomed to waiting. Further, because connections are digital and programmable, SONET provides the capability for automatic restoration of service when network problems do occur.

Another key strength of SONET is its ability to integrate and manage different types of traffic on a single fiber. Because it is an "open" standard, SONET invites competition among customer-premise equipment (CPE) suppliers. With SONET, carriers (and eventually end users) can mix and match circuit equipment because, unlike most systems today, different manufacturers' equipment will interoperate. This provides operational and equipment savings, as well as creating more competition among equipment manufacturers. For end users, this new degree of equipment compatibility means that interconnecting network segments supported by different carriers will become simpler and faster and will result in more flexible and survivable networks.(32)



The main area in which SONET is currently suffering is interoperability, which suffers because vendors implement different features of the standards at different rates. The timeliness of vendors in implementing these standards is therefore very important.(33)


SONET operations interworking is defined as the ability of the various elements of the SONET network to communicate with each other and with the management systems to provide the necessary remote and local network operations functions. These functions include network management (fault, performance and configuration management), software management (software download, network element database backup and restore), network element native user interface access (remote log-in), and central office alarms (the ability to activate CO audible and visual alarms for remote failures). To accomplish the goal of interchangeability (centralized operation in an end-to-end network), standards organizations must specify common building blocks and vendors must implement the resulting standards.(34)


Current and Future Applications

Many feel that teaming up SONET with Asynchronous Transfer Mode (ATM) would make a dynamic partnership. ATM would give SONET an additional means of using the SONET structure effectively.

A major reason why ATM and SONET pair off so well is that they were designed to do just that. Both are standards in the Open Systems Interconnection (OSI) protocol stack and both were created to advance broadband ISDN (Integrated Services Digital Network) for the future.

ATM enables a user to stuff all types of traffic into a high-speed SONET pipe -- while the user pays only for the bandwidth that is actually used. SONET digital cross-connect systems complement ATM's bandwidth-on-demand capabilities by permitting carriers and users to reconfigure bandwidth almost instantly with matrix provisioning features. Also, SONET multiplexing capabilities ensure that the ATM cells always travel at the highest available speeds. Finally, traffic traveling at these high bandwidths is often critical business voice, video, and data traffic. Although SONET is generally known for its self-healing rings that guarantee its survivability, ATM and SONET together may offer customers even more traffic protection, since ATM would enhance the speed with which SONET detects problems.
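One fixed cost of carrying traffic as ATM cells over a SONET pipe is the cell header: each 53-byte ATM cell carries only 48 bytes of payload. The arithmetic, as a quick sketch:

```python
# Each 53-byte ATM cell carries 48 bytes of payload behind a 5-byte
# header, so a fixed share of any SONET pipe goes to cell headers,
# regardless of the traffic type.
CELL_BYTES = 53
PAYLOAD_BYTES = 48

overhead = (CELL_BYTES - PAYLOAD_BYTES) / CELL_BYTES
print(f"cell header overhead: {overhead:.1%}")   # about 9.4% of the line rate
```

This overhead is the "additional cost" of cell-based transport noted earlier in the ATM discussion, and it applies uniformly whether the cells carry voice, video, or data.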

Despite these advantages, neither technology is diminished if used with some other partner. While ATM and SONET are complementary, they are not necessarily automatic partners, and many customers simply find that the ATM/SONET partnership is too expensive. The need to protect data is usually the deciding factor in whether to buy SONET with ATM, most carriers say, and at least one says more customers are viewing all of their network traffic as vital these days. Cost is a concern, but reliability is becoming a larger issue.

Parties in Interest

One of the main parties in interest is the SONET Interoperability Forum. This forum is composed of several working groups dedicated to identifying and resolving SONET network and equipment interoperability issues that are preventing widespread deployment of SONET to meet current and future growth in bandwidth demand. The SONET Interoperability Forum is hosted by the Alliance for Telecommunications Industry Solutions. Other parties in interest are the Regional Bell Operating Companies, AT&T, MCI, MFS Datanet, LDDS WorldCom, and Sprint.

Other Implications

The future for SONET looks bright especially because of its ability to integrate products from different vendors and its capability to heal itself once a break in the fiber occurs. Additionally, it provides the necessary infrastructure to support new broadband services and enhanced operations, administration, maintenance, and provisioning.

I. Internet



The Internet is a "network of networks" enabling individuals to communicate with one another almost instantaneously via computer, regardless of their geographic location. Internet traffic can be carried over satellites, fiber optic cables, analog cables, and telephone lines. The Internet was developed in the late sixties by the Advanced Research Projects Agency (ARPA) to link together researchers and other high-tech defense contractors; it provided a mechanism for the scientific, university, and governmental communities to exchange computer communications.(35)

Transmission Control Protocol/Internet Protocol (TCP/IP) was developed to provide a standard protocol for ARPAnet (the Internet's predecessor) communication. This protocol provides a common language for interoperation between networks that use a variety of local protocols (e.g., Ethernet, NetWare). It also contains all the signalling necessary to route a packet. TCP/IP can also run over a variety of telecom services, such as packet switching, switched voice, private lines, frame relay, ISDN, and cable TV.(36)

Internet traffic consists of packets of data transmitted along leased phone lines and directed by powerful computers known as routers. The Internet was created as a best-effort packet delivery service, and thus there is no mechanism to guarantee that a packet will be delivered in a timely fashion. A temporary shortage of capacity because of congestion can result in packets being dropped from the system, similar to a busy signal in telephone service. The software and routers notify senders of delivery problems, and the senders attempt to resend the packets at a later time.
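The best-effort behavior described above can be sketched in miniature: a congested hop drops packets, and the sender simply retransmits until one gets through. Everything in the sketch below (the 20% drop rate, the function names, the fixed random seed) is an illustrative assumption, not a description of any real router or protocol implementation.

```python
import random

# Toy simulation of best-effort delivery: a congested hop drops packets,
# and the sender retransmits until one gets through (much as TCP does
# over best-effort IP). Drop rate, names, and seed are illustrative.

rng = random.Random(1)   # fixed seed so the sketch is reproducible

def congested_link(packet, drop_rate=0.2):
    """Deliver the packet, or return None when congestion drops it."""
    return None if rng.random() < drop_rate else packet

def send_reliably(packet, max_tries=10):
    """Retransmit until delivery; return the number of attempts used."""
    for attempt in range(1, max_tries + 1):
        if congested_link(packet) is not None:
            return attempt
    raise TimeoutError("link persistently congested")

print(send_reliably(b"data"))
```

The sketch shows why a best-effort network can still support reliable applications: reliability is supplied by retransmission at the endpoints rather than by guarantees inside the network.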


Internet access may be obtained through use of equipment owned by public libraries, universities, commercial online service providers such as America Online or CompuServe, or Internet Service Providers (ISPs), such as Digex or PSINET. In addition to gaining access to innumerable scientific, educational, and entertainment archives, whether through Gopher, File Transfer, or the World Wide Web, users may transmit electronic mail (e-mail), join or initiate newsgroups to discuss topics of interest, or participate in, or establish, bulletin boards to share ideas about particular topics. The World Wide Web functions by linking distant documents, pictures, or sound files to each other through a system known as "hypertext." Simply clicking on the highlighted text with a mouse causes the software to travel the Internet to the connected document, wherever it may be, without the need to enter the complicated commands that earlier cross-referencing systems required.

The Internet has become a major tool for distributing information about a company's products or services and enables companies to provide better customer support to their clients. The Federal Express site, for example, allows users to track their packages and verify that they were delivered. The FCC's web site has dramatically improved its outreach and customer service to citizens and communications companies; corporations and individuals can download forms, rulings, and orders directly from the FCC Internet site at a tremendous savings to the FCC. As a result, it has improved the efficiency and productivity of FCC staff.

The Internet has also spawned the electronic commerce business. There is a virtual stock market where people can buy and sell stock over the Internet, and there are virtual banks that give people digital cash to purchase products from Internet vendors. Digital cash supplies a missing element of electronic commerce, since it provides users with the security they need to purchase products and services over the Internet.

The popularity of the Web for searching and storing common information has led to the creation of private internets, or Intranets. Corporations use internal webs for a variety of purposes, ranging from human resources to corporate policy. The Web allows companies to quickly and easily deploy cross-platform applications at low cost. The FCC Intranet, for example, enables internal documents to flow electronically and allows staff in different bureaus to access the same documents simultaneously. It also supports business discussion on the network, which in turn has increased staff productivity by eliminating physical trips to various offices.

Intranets have grown tremendously in the past six months; more than 80% of web application development is occurring within organizations on internal networks. Large companies are buying corporate licenses for web browsers, allowing tens of thousands of employees to access the company's internal and/or external networks. More than 50% of all browsers sold by Netscape are for Intranet use. Hypertext Markup Language (HTML), from a development perspective, offers a much simpler graphical interface than many other windowed applications. HTML provides a powerful mechanism for integrating information systems and database applications into document form with hyperlinks, search engines, and online forms. The ability to share information with the general public is of great value to all companies.

The HTML standard is evolving quickly, as are vendor extensions. Netscape tables allow servers to present nicely formatted row-column information to users; Netscape frames enable multiple, scrollable HTML document windows to reside on a single screen, such as a list of recent NPRMs or orders on one side with a detailed summary of the selected order or NPRM on the other; Virtual Reality Modeling Language (VRML) makes 3-D applications viewable via local browsers; and lastly, Java is a technology that enables client-side processing instead of web-server processing (more details on Java in the following section).

In the US, the Internet is composed of three layers: PCs or local area networks (LAN), regional or midlevel networks, and Internet backbones. PCs or LANs connect with midlevel or regional networks, mostly ISPs, who in turn connect with one or more Internet backbones. A backbone is an overarching network connecting many regional networks and which generally does not directly serve any users or LANs.(37)

US backbones connect with other backbone networks around the world. Most backbone and regional network traffic travels over leased phone lines (mostly fiber optic) using packet switching technology rather than the circuit switching technology used by telephones. Other hardware consists of switches and routers. Backbones rely heavily on routers to manage a few high-capacity lines, mostly T-1 or higher, i.e., T-3 (45 Mbps) or greater. This preference is evident in the structure of the backbones; for example, packets which enter the system at Cleveland and leave at New York must pass through two routers, one at the entrance and one at the exit.(38)

Although the backbones are relatively fast, the access roads still use mostly T-1 connections.

The US's primary backbone had been NSFNET; however, on April 30, 1995, NSFNET ceased operations as its funding disappeared, and private companies assumed control over the backbones or created alternative backbones. Today the term backbone has a new definition: an ISP that has its own infrastructure, purchasing its capacity from multiple telecom carriers, is considered a backbone provider. There are some 14 other backbones in the U.S. This new "private" Internet is becoming less hierarchical and more interconnected. MCI, since it helped create NSFNET, is the largest carrier of Internet traffic, carrying over 40% of all Internet traffic.(39)

Other backbone carriers are Sprint, ANS, MFS/UUNET, PSINET, AGIS, and BBN PLANET.

The increasing number of super-regionals is erasing the separation between mid-level networks and backbones. Today, many old regionals are connecting directly to each other through network access points (NAPs) and becoming new national providers. These large national networks usually have their own infrastructures and are therefore backbone providers. Traffic now flows through a chain of nationals without any telco backbone transport.(40)

Outside the US, a similar setup exists. Western European countries have national networks attached to various European backbones (E-Bone, Dante), but their backbones are immature and often inefficient. Connections between countries are often slow or of inferior quality. As a result, many networks avoid some of these backbones, such as the E-Bone, entirely and route their traffic through the US to the desired country.(41) Backbones in Switzerland and in the UK are more established and carry a significant amount of Internet traffic. New NAPs are constantly being created, and these either are connected to the major European or Asia-Pacific backbones or connect directly to US providers.


In the mid-eighties, the National Science Foundation (NSF) created NSFNET to provide connectivity to its supercomputer centers and other general services.(42)

NSFNET adopted the TCP/IP protocol and provided a high-speed backbone for the developing Internet. ARPA and NSF forbade the transmission of commercial messages over their networks. As companies large and small began to connect to the Internet during the 1980s, the need for commercial access providers grew. The introduction of Internet-based multimedia in 1993 transformed the Internet from a tool of the technically elite into a global communications vehicle. Internet and Web browsing tools bring vast amounts of information to anyone with a PC, a modem, and a Web browser.

The Internet host community has grown over 95% from January 1995 to January 1996 and over 350% since January 1994, and should exceed 200 million by the year 1999.(43)

The Internet's pace of growth presents a new challenge to traditional wireline and wireless networks.

ISPs provide the capacity required to transmit and receive data over the Internet. Users are charged either a flat fee for unlimited usage or a minimum fee plus usage charges; additional fees are charged for enhanced service options. ISPs also provide computer networking services that allow communications with other computers. Large ISPs sell both wholesale and retail. Since Internet service is a commodity, providers must work to differentiate themselves with features like 24-hour technical support, nationwide access, and special pricing plans for additional services. Some providers primarily serve large corporations, while others target the consumer market.

Traffic flows from one ISP to another under a series of peering and/or transit agreements, which function much like delivery contracts between companies; without them, ISPs would have no incentive to accept and deliver each other's traffic. Transit agreements are contracts that allow an ISP to temporarily send packets over another ISP's network in cases of emergency or when the primary network cannot handle the traffic. ISPs desiring to exchange packets meet at NAPs or other specified meeting points, according to their peering or transit arrangements. Metropolitan Washington's closest meeting point is one of the oldest and is located in Maryland. There are eight NAPs throughout the US, four in Europe, one in Singapore, and one in Japan; additionally, there are two international frame relay hubs. Most of the NAPs are connected to FDDI rings, with the oldest connected by SMDS.

Parties in Interest

The parties in interest for the Internet include the backbone providers, such as Sprint, ANS, MFS/UUNET, PSINET, AGIS, BBN PLANET, and MCI. They also include the large global alliances, Global One and Concert, as well as AT&T. Other parties include MFS Datanet, Network Wizards, and Merit Network, which helps manage the Internet for the National Science Foundation. Moreover, any discussion of the parties must include universities and government agencies, as well as the numerous Internet service providers who connect consumers and most businesses to the Internet.

Additionally, numerous equipment and communications companies, such as Cisco and Newbridge Networks, provide the necessary hardware for ISPs and backbone providers; these same companies often also supply hardware and software to the Intranet market. The leading software companies for the Internet can be grouped into several areas: browsers (Microsoft Internet Explorer and Netscape), plug-ins (video, JPEG, audio), and complementary applications such as Java.

Other Implications

No one is sure exactly where the Internet is headed because of its tremendous growth, over 95% in the past year alone. Because of the Internet's lack of structure, any company, large or small, can develop an application that achieves widespread use in a very short time and changes the Internet's focus, creating great uncertainty about the Internet's future development. The current battle between Netscape and Microsoft for dominance is one example of this uncertainty.

Furthermore, most Internet users do not have the right computers or fast enough connections to take full advantage of the Internet. The Internet is often criticized for its slowness; however, this is usually the result of a slow connection from the ISP (due to poor infrastructure planning) rather than a problem with the Internet itself. Applications are constantly being developed that will help resolve most of these issues where bandwidth is the real problem. For example, downloading audio from the Internet used to take an extremely long time; new applications such as RealAudio have cut this loading time considerably.



The previous Internet section dealt exclusively with the structure of the Internet and did not discuss at any length the applications that third parties, particularly the computer software industry, have created that use the Internet as a form of delivery. This section expands upon the Internet and discusses several current and future voice, video, and audio applications that take advantage of the structure and pricing of the Internet. It also includes a look at the market players and the implications of these applications for international communications.


In recent months several software programs have become available that allow individuals, provided they have the right software, hardware, and Internet connection, to speak in real time over the Internet person to person, for no more than the cost of their Internet connections. Telephone-quality speech requires about 8 kbps of bandwidth; a typical modem connection, however, offers only about 14.4 kbps, roughly 1.8 kilobytes of non-compressible data per second. Some Internet voice applications are capable of transmitting voice at much higher rates, 32 kbps, but such speeds are unattainable for most people without extremely powerful computers and T-1 lines. The two solutions to this problem are either to obtain larger bandwidth, e.g., a T-1 line, or to compress the sound information before transmitting it. Audio conferencing programs work by encoding analog speech into a digital stream of data and then sending that data over the Internet. Because of the cost of installing or leasing a dedicated T-1 line for video and audio, compression has proved to be the popular option.(44)
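The bandwidth gap described above can be made concrete with a small back-of-the-envelope calculation. This is an illustrative sketch only: the 8,000 Hz, 8-bit figures are standard telephone-quality sampling parameters, and the class and method names are ours.

```java
// Illustrative arithmetic for voice over a dial-up modem.
public class VoiceBandwidth {
    // Raw bit rate of uncompressed telephone-quality speech, in bits per second.
    static int rawSpeechBps(int sampleRateHz, int bitsPerSample) {
        return sampleRateHz * bitsPerSample;
    }

    // Compression ratio needed to squeeze that stream through a given link.
    static double neededCompression(int rawBps, int linkBps) {
        return (double) rawBps / linkBps;
    }

    public static void main(String[] args) {
        int raw = rawSpeechBps(8000, 8);            // 64,000 bps uncompressed
        System.out.println("raw speech: " + raw + " bps");
        // A 14.4 kbps modem needs the stream compressed by roughly 4.4x.
        System.out.printf("compression needed over 14.4 kbps modem: %.1fx%n",
                neededCompression(raw, 14400));
    }
}
```

The calculation shows why compression, rather than raw bandwidth, became the popular option: even before Internet congestion is considered, an uncompressed telephone-quality stream is several times larger than a dial-up link can carry.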

Voice over the Internet has the added problem of requiring a full-duplex connection rather than the half-duplex mode most audio sound cards use. There are a variety of methods for encoding and compressing sound data; however, no single standard has been established. Some compression techniques work best for low-bandwidth connections; others feature interpolation, a technique that automatically fills in gaps in conversations; still others are optimized for higher connection speeds. Five compression standards are in use today: 1) proprietary, 2) ADPCM or PCM, 3) True Speech, 4) RTP (Real-Time Protocol)/VAT, and 5) GSM (Global System for Mobile Communications).(45)

Proprietary. Software that relies on a proprietary compression protocol, such as Vocal-Tech's, is not compatible with any other software. Users of Vocal-Tech thus cannot send voice over the Internet unless the receiving party has the same software; unless both parties use the same compression protocol, they cannot communicate.

ADPCM (Adaptive Differential Pulse Code Modulation) and PCM. Pulse code modulation is the most common technique for digitizing sound. It is the native format used by WAV and AIFF files for representing sound; however, since it does not compress the sound tightly enough, further compression algorithms are needed. ADPCM is an improvement upon PCM: it compresses sound more tightly, saving storage space. It is also extremely flexible and comes in different formats. Microsoft uses this standard as one of its sound algorithms in Windows 95.
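The idea behind differential coding can be sketched as follows. This is a deliberately simplified, fixed-step delta codec, not the actual ADPCM standard (which adapts its quantizer step size to the signal); it only illustrates why storing sample-to-sample differences saves space for smooth signals.

```java
// Simplified differential codec in the spirit of ADPCM (illustration only).
public class DeltaCodec {
    // Encode: first sample verbatim, then successive differences.
    static int[] encode(int[] samples) {
        int[] out = new int[samples.length];
        if (samples.length == 0) return out;
        out[0] = samples[0];
        for (int i = 1; i < samples.length; i++) {
            // For smooth audio, neighboring samples are close, so the
            // differences are small numbers that fit in fewer bits.
            out[i] = samples[i] - samples[i - 1];
        }
        return out;
    }

    // Decode: a running sum restores the original samples exactly.
    static int[] decode(int[] deltas) {
        int[] out = new int[deltas.length];
        int acc = 0;
        for (int i = 0; i < deltas.length; i++) {
            acc += deltas[i];
            out[i] = acc;
        }
        return out;
    }
}
```

Real ADPCM goes further by quantizing each difference with a step size that adapts to the signal, which is where the actual compression gain comes from.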

True Speech. DSP Group's True Speech compression technique compresses sound to the smallest size possible. It reduces the signal to 8,000 Hz, or 1 KB per second, an eighth the size and rate of the smallest PCM audio files and a quarter the size of the smallest ADPCM file. True Speech sacrifices voice quality for small files; this tight compression results in poor quality.

RTP/VAT. VAT was the first working conferencing standard, requiring only a Unix VAT program, an IP connection, and sound hardware. RTP adds capabilities both to control call quality and to allow users of different applications to communicate.(46)

GSM. GSM, the European cellular telephone standard, is emerging as one of the most popular standards for compressing speech. Most audio software for Windows PCs, including the popular RealAudio, accepts GSM-encoded audio. Its quality and ubiquity make GSM the best compression standard not only for mobile telephony but also for transmitting voice over the Internet.

Since the Internet was not created for isochronous speech, some data is lost or delayed during transmission. In one-way audio this is not a significant problem, but in two-way voice it is. Today, most applications and modem software can correct or ameliorate this problem through sophisticated error correction techniques. The primary factors affecting the sound quality of Internet voice are the user's connection speed to the Internet and the level of congestion on the Internet, i.e., the available bandwidth at transmission time. Some applications have the potential to provide higher-quality voice connections than a regular telephone because they use 16 or 32 kbps signals instead of 8 kbps. Depending on the application, the user may or may not experience a noticeable delay in speech; with a fast connection to the Internet, the delay may be anywhere from .01 to .05 seconds.

In addition to the right software and compression standard, knowledge of the target Internet address, the receiving account, is another important element of Internet telephony and video applications. A static SLIP (Serial Line Internet Protocol) or PPP (Point-to-Point Protocol) account is necessary for routing traffic through a dial-up modem over the Internet. Static addresses are assigned permanently to the user, while dynamic IP addresses are assigned each time the user logs on to the Internet provider. A static IP address is not necessary for RealAudio or Web chats. A 14.4 kbps modem is the bare minimum for good-quality sound; however, a 28.8 kbps modem is the recommended minimum. Also, unless the user's sound card supports full-duplex transmission, only one person can speak at a time. Full duplex allows a person to send and receive sound at the same time; that is, you can speak and hear the other person simultaneously. One of the major problems with many of the early voice software programs was that they supported only half-duplex transmission.(47)


Initial applications for the Internet focused on moving data from one place to another; recently, however, two-way voice and video over the Internet have gained some recognition. Other applications that can be used over the Internet are multicasting, commercial wireline broadcast "radio" stations, and electronic commerce. Multicasting uses the M-Bone, a section of the Internet that allows one site to broadcast to many users at once. The M-Bone is often used for conferences and broadcasts of live musical or video events. These new capabilities elevate enhanced services far beyond the traditional role of data-oriented communications to a new hybrid communications paradigm of data, audio, and video.

Since the Internet was not designed for isochronous communication, there are serious problems and obstacles to overcome in imposing a "foreign" structure on the Internet, namely a structure for "voice" rather than for "data." This technology will expand rapidly once sound quality improves and a national compression standard is adopted. More sophisticated error correction techniques will help mask the inherent problems of compressing and transmitting speech. Until then, voice over the Internet will remain on the fringes and will not be a major threat to traditional telephone service providers.

Current and Potential Applications

The current application offerings vary, but often include voice, voice mail or email if the party is not at the computer, audio and/or video conferencing, multicasting, and encryption. Some applications offer caller ID or other call screening features, while others have fully integrated Web browsers or Internet tools along with their voice applications.

CU-SeeMe, a product of Cornell University, provides both audio and video conferencing over the Internet or other IP (Internet Protocol) networks. It runs on both Mac and Windows platforms and requires no special equipment for video reception beyond a network connection and a monitor. Video transmission, however, requires a camera and a digitizer. One of CU-SeeMe's main advantages is that it offers the best compatibility with other voice or audio applications. Another is that users can either connect directly to each other or enter a multi-person conference through public or private reflectors; reflectors are needed only when speaking with more than one person at a time. CU-SeeMe focuses on low-end, widely available computing platforms, thereby opening networked video conferencing to users of low-cost desktop computers. The first public reflector site opened in 1993, and reflectors can now be found everywhere from grade schools to national laboratories in over 40 countries around the world.

Participants in CU-SeeMe conferences transmit packets to other participants advertising their interests, such as audio, video, or slides. Reflectors examine these requests and then forward them to the participant in question. Since the protocol used by CU-SeeMe requires participants to exchange information dynamically, it works best for conferences of fewer than 30 participants. The video encoding technology used by CU-SeeMe has proven surprisingly robust against packet loss; often, the only observable effect is a reduction in the number of frames received.(48)

CU-SeeMe is available at no cost; the cost of other applications, especially voice over the Internet, ranges from free to over a hundred dollars. In addition to the software, one needs an Internet connection, sound and video cards, speakers, a microphone, and, for video applications, a camera. Moreover, a user must have a permanent connection to the Internet, that is, one with a fixed or static IP number, through either a SLIP or PPP account, to use CU-SeeMe or any of the Internet telephony products.

RealAudio and Web chat software are available at no charge. Progressive Networks' RealAudio client-server software system enables Internet and on-line users equipped with conventional multimedia personal computers and voice-grade telephone lines to browse, select, and play back audio or audio-based multimedia content on demand, in real time. This is a real breakthrough compared to the typical download times of conventional on-line audio delivery, in which audio downloads at a rate five times longer than the actual program: the listener must wait 25 minutes before hearing just five minutes of audio. With the RealAudio Player, one simply clicks on a RealAudio link in a Web browser and the audio begins playing instantly, without download delays. It works like a CD player: one can pause, rewind, fast-forward, stop, and start. RealAudio features include music-quality audio (requiring a 28.8 kbps or faster connection) and live RealAudio cybercasts, including concerts, breaking news, and other live events. Since April 1996, over 4 million RealAudio players have been downloaded.
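The arithmetic behind the download figures above can be sketched in a few lines. This is an illustrative calculation using the text's five-times-program-length figure; the class and method names are ours.

```java
// Wait time under the download-then-play model described in the text.
public class StreamingWait {
    // Seconds the listener waits before any audio plays, when the whole
    // clip must download first at 'slowdownFactor' times program length.
    static int downloadWaitSec(int programSec, int slowdownFactor) {
        return programSec * slowdownFactor;
    }

    public static void main(String[] args) {
        int fiveMinutes = 5 * 60;
        // Conventional delivery: a 25-minute wait for a 5-minute clip.
        System.out.println(downloadWaitSec(fiveMinutes, 5) / 60 + " min");
        // Streaming delivery: playback begins after only a brief buffer fill,
        // which is the advantage RealAudio exploits.
    }
}
```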

Web chat is a type of interactive Web page where people can converse in real time. The FCC has used chat rooms as a form of outreach to its clients; chats have been held on various speeches and rulings.

On February 8, 1996, IBM announced that it had developed a new compression standard for Internet telephony. IBM's standard incorporates many of the capabilities of GSM technology along with some of IBM's own compression products. The new standard relies heavily on simultaneous voice and data (SVD) capabilities; SVD lets voice and data transmissions be made at the same time while the phone system conforms to the GSM digital phone standard.(49)

Recently, over 100 top PC and consumer electronics companies, including Microsoft, Intel, Gateway, Netscape, and Philips, have joined together to create a new standard, one that will make it easier for computers to exchange voice, data, and video over the Internet.

Parties in Interest

Initially, the parties developing and creating software for Internet telephony were small companies, most of them start-ups. IBM's recent announcement of a consortium of technology developers and suppliers to resolve the interoperability question is the most significant event to occur in Internet telephony. The new alliance headed by Microsoft and Intel represents another fundamental shift in the use of Internet telephony and other applications. Netscape is already testing a beta version of its browser with Internet telephony features built in, and other browsers will likely follow.

Vocal-Tech has licensed its software and proprietary compression techniques to Motorola, Cirrus Logic, and Boca Research; all three companies will bundle the compression techniques with their own products. Additionally, many ISPs have stated that they will provide Internet telephony features to their subscribers; America Online's Global Network Navigator, Netcom, and others have licensed Vocal-Tech's software. Digiphone, another provider, has established distribution agreements with various US and international distributors. Vocal-Tech, Digiphone, and the other Internet phone providers will continue to line up support among the major manufacturers and providers of PC and Internet products.

Other Implications

Before the end of the year, software will be developed that allows calls to be made over the Internet without much of the restriction on computer, Internet connection, and software that applies today. Currently, five companies are working to resolve the problems associated with routing calls directly to a user's phone. This technology will propel Internet telephony to new heights: companies serving as gateways will process, for a fee, requests from other users and transmit the call directly to the receiving person's phone number. This new adaptation of Internet telephony will have a dramatic effect on international communications, because many people are willing to sacrifice voice quality and performance guarantees for low cost.

With major firms such as IBM, Microsoft, and Motorola entering this new field, Internet telephony could pose a threat to basic voice services and thus to international communications. If IBM succeeds in creating a global standard for Internet telephony, and if the consortium and the standard it develops are open, the remaining interoperability questions and problems will disappear quickly. However, if other major technology companies, such as Microsoft or AT&T, create alternative standards, there will be chaos in the market, reducing the likelihood of mass penetration of Internet telephony.

The growth in Internet telephony will likely result in a review of current interconnection regulations and policies. If the LECs can no longer determine the quantity and type of calls that enter their networks, then the whole system of cost-based access charges is called into question: how are costs determined, and what charges can be assessed? Internet telephony will likely force the industry and the FCC to rethink how access charges and interconnection are regulated.

Local exchange carriers are already filing petitions with the FCC both to revisit access charges and to reconsider which technologies are classified as enhanced services. On March 4, 1996, America's Carrier Telecommunications Association (ACTA) petitioned the FCC to regulate Internet telephony, asking the FCC to declare certain software companies to be telecom carriers. Several groups have filed opposing petitions, most notably the "Voice on the Net" (VON) Coalition. VON argues that an unregulated Internet is in the public interest and helps to foster innovation and competition: "ACTA is asking the FCC to stifle this competition, by regulating it." Moreover, products from VON-affiliated companies are not limited to the Internet; the same equipment and software can be used by corporations over their private internets, or Intranets.

The growth of Internet telephony, however, will require ISPs to increase capacity more quickly than planned. Although ISPs will most likely pass the costs of this increased capacity on to subscribers, competition among ISPs will likely keep the flat fees charged to users relatively low. The real increase in costs to Internet users will likely result from the adoption of the next generation of TCP/IP. The new protocol might also include billing capabilities, allowing ISPs to bill users of real-time applications, e.g., voice and concerts, for the traffic they generate. The Audio/Video Transport working group, a subgroup of the Internet Engineering Task Force (the voluntary group that runs the Internet), has stated that it plans to develop mechanisms to provide "low-delay service and guard against unfair consumption of bandwidth by audio/video traffic."(50)

The countries most affected by Internet telephony are likely to be those with easy access to the Internet and very high telephone rates, where the Internet can substitute for telephone service.

The Internet has the potential to challenge the fundamental models of the communications environment for common carriers, cable, satellite, and wireless communications;(51) however, that will happen only when the technical problems identified above are resolved. It is difficult to predict the future of Internet telephony, video, and audio at this moment because new developments happen every day; it is similarly difficult to predict the Internet's overall future and development.



As the unprecedented growth of Internet and World Wide Web users demonstrates, sharing information among desktop computers nationwide, or even internationally, is becoming more and more important to today's business. The software that directs information flows from network to network is the key to this new wave. However, computer networks differ from one another and need special protocols to communicate across networks, which can slow information transfer. Java, a new programming language introduced by Sun Microsystems last summer and designed to be neutral with respect to computer operating systems, has become the most promising language for this demanding multi-network information processing.


Sun's technical white paper describes Java as "a simple, object-oriented, distributed, interpreted, robust, secure, architecture neutral, portable, high-performance, multithreaded, and dynamic language."

Java is basically a C++-like programming language. Since C++ is today's industry-standard practice for object-oriented programming,(52) Sun intended to keep Java as close to C++ as possible, so that the new language would be simple and require little special training.

Unlike C++, Java eliminates many rarely used and hard-to-understand features, such as operator overloading, that complicate C++ programming. One of the complex features of C++ applications is storage management: the allocation and freeing of memory. Because Java automatically recovers used and discarded ranges of memory, it makes the programming task much easier and can cut down on bugs considerably.
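The storage-management contrast can be illustrated with a short sketch; the class name and the computation are our own hypothetical example.

```java
// Automatic storage management in Java: no free() or delete anywhere.
public class GarbageSketch {
    static int sumOfLengths(int n) {
        int total = 0;
        for (int i = 0; i < n; i++) {
            // A new String object is allocated on each pass. Once the loop
            // moves on, nothing references it any longer, and the garbage
            // collector reclaims the memory automatically. In C++ the
            // programmer would have to free this storage explicitly.
            String tmp = "sample-" + i;
            total += tmp.length();
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sumOfLengths(3));
    }
}
```

The point is not the computation itself but what is absent: the class of bugs caused by forgetting to free memory, or freeing it twice, cannot occur.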

Most Java applications are small; one of Java's goals was to run software on small stand-alone machines. Applets are small programs written in Java that do not require a compile-link-load-test-debug cycle; they can be compiled and run directly by each user.

Java has an extensive library of routines for coping easily with TCP/IP protocols like HTTP and FTP. Java applications can open and access data across the network via URLs, and applications are distributable among their users.
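A minimal sketch of the class-library support for URLs, using the standard java.net.URL class. The address shown is hypothetical, and no network access is performed here; the class simply parses the address into its components.

```java
import java.net.MalformedURLException;
import java.net.URL;

// Parsing a URL with the standard Java class library (no network access).
public class UrlSketch {
    // Return the host portion of a URL string, or "" if it is malformed.
    static String hostOf(String spec) {
        try {
            return new URL(spec).getHost();
        } catch (MalformedURLException e) {
            return "";
        }
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical address, used only to show the parsed components.
        URL u = new URL("http://www.example.gov/reports/briefing.html");
        System.out.println(u.getProtocol()); // the scheme, e.g. http
        System.out.println(u.getHost());     // the server name
        System.out.println(u.getPath());     // the document path
        // u.openStream() would fetch the document itself over HTTP.
    }
}
```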

Since Java is intended for use in distributed environments, security features have been built into the language. Java enables the construction of virus-free, tamper-free systems. Sun has detected a few security flaws in the Java programming language, but these flaws have been corrected.

Java's most important feature is its architecture neutrality. It was designed to support applications on networks, an environment comprising a variety of operating system architectures. Its cross-platform portability has made it an instant success among personal computer users.

Finally, because of its multithreading capability, the ability to write programs that deal with many things happening at once, Java can interact with the Web server and achieve real-time behavior. Multithreading enables Java to process different tasks independently and continuously.
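Multithreading can be sketched with the standard java.lang.Thread class; the division of work between the two threads is our own illustrative example.

```java
// Two threads running concurrently, then joined: the mechanism behind
// "many things happening at once" in the Java environment.
public class ThreadSketch {
    // Sum the integers in [from, to).
    static long sumRange(long from, long to) {
        long s = 0;
        for (long i = from; i < to; i++) s += i;
        return s;
    }

    public static void main(String[] args) throws InterruptedException {
        long[] partial = new long[2];
        // Each thread computes half of the total independently.
        Thread a = new Thread(() -> partial[0] = sumRange(0, 500));
        Thread b = new Thread(() -> partial[1] = sumRange(500, 1000));
        a.start();
        b.start();
        a.join();   // wait for both halves to finish
        b.join();
        System.out.println(partial[0] + partial[1]);
    }
}
```

In an interactive application the same mechanism lets one thread fetch data over the network while another keeps the user interface responsive.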

Current and Potential Applications

The HotJava browser is the first major end-user application created with the Java programming language. It shows the power of the Java environment and provides a platform for distributing Java programs across the Internet, the most complex, distributed, and heterogeneous of networks.

Last December, the announcement of JavaScript, an open, cross-platform object scripting language for enterprise networks and the Internet and a joint product of Sun and Netscape Communications Corporation, propelled Java into the mainstream of the rapidly growing Internet community.

JavaScript, originally developed by Netscape, is an easy-to-use object scripting language designed for creating live on-line applications that link together objects and resources on both clients and servers. While Java is used by programmers to create new objects and applets, JavaScript is designed for use by HTML page authors and enterprise application developers to dynamically script the behavior of objects running on either the client or the server.

Before Sun introduced Java, most Web interactivity was accomplished via CGI (Common Gateway Interface) scripting. This type of scripting uses a guestbook format: users type entries into a text field and then submit the information via their browser back to a host server. The host server passes the information to an external program running on the Web server's machine, and the output of this external program is then passed from the server back to the browser. CGI scripts must execute at least one round trip from the browser to the server and back. HotJava can run applets inside an HTML page, turning the static Web into a dynamic medium. When a Java-compatible browser accesses a Java-powered page, an applet is copied to the browser's machine and executes there, without going out to the server's machine. This local execution makes possible a much greater level of Web interaction; it lets users add both the content and the code necessary to interact with that content.

Also, with Java-supporting browsers, interactive graphical applications can achieve high performance because multiple concurrent threads of activity are supported by the multithreading built into the Java environment. This unique feature can provide the multimedia richness of a CD-ROM over corporate networks and the Internet. Java was the first way to include inline sound and animation in a Web page. One example: a multimedia weather forecast applet written in Java can be scripted by JavaScript to display appropriate images and sounds based on the current weather readings in a region.

Although Java is designed for the Internet, its application is not limited to the Web. Java is also a programming language that lets users do almost anything a traditional programming language can. For example, a Java applet can provide a user interface that lets users interact with certain components while a simulation sequence is running.(53) In addition, some universities have begun using Java as the primary language in their computer science departments.


Although Java is still in its early development stage, only a year old, its products include the Java Applet Viewer, the Java Compiler, a prototype debugger, the Java Virtual Machine (JVM), and class libraries for graphics, audio, animation, and networking. In May 1996, Sun introduced several new products and services, both to take full advantage of its new Java technology and to increase its share of the corporate Intranet market. The product line consists of Java WorkShop (an integrated Java development environment); Internet Workshop (a universal client/server development environment); Joe (object software for enterprise communications); Solstice Internet Mail (platform-independent mail software); SunScreen SPF-100G (an international version of Sun's popular security package for Intranets); Solstice FireWall-1 2.0 (Internet security software); the Netra 3.0 family of Internet servers; and Internet Practice (a pair of worldwide consulting services).(54)


Sun, capitalizing on its success with Java, set up a separate operating unit called JavaSoft to concentrate solely on developing new products and applications, and to provide support for Java technology and products.

Although Java has generated a great deal of industry interest and has been well received, its development is far from complete. The performance of Java applications is hampered by the bytecode interpretation requirement. Also, not all promised features, such as full platform independence, are available to current users.

Parties in Interest

Wildly popular because of its interactive multimedia capabilities and architecture independence, Java has been widely adopted by the Internet community and by major computer and telecom corporations. Besides Netscape, more than 30 major corporations have endorsed Java, including Microsoft, America Online, Inc., AT&T, Borland International, DEC, Hewlett-Packard, IBM, Novell, Silicon Graphics, Inc., Sybase, Inc., and Toshiba. In March 1996 Microsoft announced its first Java-supporting Internet browser, Jakarta; however, Microsoft still uses its own software, Visual Basic Script, in its Internet Explorer.

Sun states that about 1,500 Java applets and Java-enhanced development tools will soon be released and that an additional 10,000 to 20,000 applications are in different stages of development. Furthermore, general-purpose and client/server applications are also under way among software-development tool vendors.

Java, because of its neutral architecture and its ability to integrate Web applications with the existing computing environment, can preserve an organization's large investments in computers and hardware.

Other Implications

Java has bridged the old computing world and the new multimedia communications world; its applications can run as computer programs or as interactive video programs, blurring the distinctions between computing, telephony, and video services. Java's ability to run over different networks regardless of platform raises questions: Who owns the networks? Do these networks have national boundaries? Should international users pay for their communication time?

Capacity will be another issue for Java. Java applications will require substantial bandwidth if they are to take full advantage of multimedia functionality. As a result, most Internet providers will have to upgrade their networks to accommodate this requirement. Currently, most Internet providers operate at T-1 speed. One major complaint of current Internet users is speed; at times access can be intolerably slow. The new ATM technology may be a solution to Java-related capacity problems.
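A back-of-the-envelope sketch puts the T-1 bottleneck in perspective. The figures below are illustrative assumptions (raw line rates only, ignoring protocol overhead), not measurements from any particular provider:

```python
# Back-of-the-envelope check on the T-1 bottleneck described above
# (raw line rates only; real throughput is lower after overhead).
T1_KBPS = 1_544          # T-1 line rate in kbps
MODEM_KBPS = 28.8        # fastest common dial-up modem rate in 1996

# How many subscribers can download at full modem speed at once?
concurrent_users = int(T1_KBPS / MODEM_KBPS)
print(concurrent_users)  # ~53 simultaneous full-rate modem streams per T-1
```

A provider whose entire backbone feed is one T-1 can thus saturate with only a few dozen active multimedia users, which is why the text anticipates network upgrades.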

III. Capacity Overview


In the first part of this report we briefly explained the most important emerging telecommunications technologies of the recent past. We have not attempted to cover every issue raised by each individual technology, but rather to examine the most significant question: can current and planned capacity accommodate the new technologies? Several other important issues, such as regulatory treatment, national boundaries, pricing, and resource allocation, have not been covered in this initial report.


Among the ten subjects discussed in the first part: Packet Switching and Frame Relay, the narrow-to-medium bandwidth technologies, are already widely available worldwide; Java, as computer software, has no capacity problem of its own; Fiber Optics and SONET, the transmission technologies that create the capacity needed to support new switching technologies, likewise raise no capacity issue; and SS7, the international signaling protocol, provides critical value-added services to public switched networks but also poses no capacity issue. We will therefore address capacity only for ISDN, ATM, and the Internet. Because SMDS is a transition service that fills the gap between Frame Relay and ATM and has not been well received by users as a separate technology, it will be subsumed in the ATM section.

International vs. National

Measuring international capacity differs from measuring domestic capacity. International capacity is the available transmission capacity of deployed undersea fiber optic cables, which can be measured in 64 kbps circuits or in DS-3 (45 Mbps) units. It is a predetermined capacity, in most cases fixed by international agreement. The latest undersea fiber optic cable technology offers 5 Gbps with optical amplifiers. Because laying cable across the ocean is costly and distance-sensitive, the number of cable pairs laid is limited to what reasonable demand forecasts justify.
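The two units of measure mentioned above are related through the standard North American digital hierarchy. A minimal sketch of that arithmetic follows; the figures are nominal (framing overhead is ignored), so they are illustrative rather than exact:

```python
# Nominal digital-hierarchy arithmetic behind the units used in this section
# (ignores framing overhead, so figures are illustrative rather than exact).
DS0_KBPS = 64            # one 64 kbps voice-grade circuit
DS0_PER_DS1 = 24         # a DS-1 (T-1) carries 24 DS-0s
DS1_PER_DS3 = 28         # a DS-3 carries 28 DS-1s

def ds0_per_ds3():
    """Number of 64 kbps circuits in one DS-3 (45 Mbps) unit."""
    return DS0_PER_DS1 * DS1_PER_DS3

def ds3_units(cable_gbps):
    """Approximate DS-3 units in a cable of the given line rate (Gbps)."""
    return int(cable_gbps * 1_000 / 45)

print(ds0_per_ds3())     # 672 circuits per DS-3
print(ds3_units(5))      # a 5 Gbps cable is on the order of 111 DS-3 units
```

This is why a single 5 Gbps cable represents tens of thousands of voice-grade circuits.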

Domestic capacity, by contrast, cannot readily be counted in 64 kbps circuits or DS-3 units, since most carriers lay their underground cables in bundled pairs; depending on the carrier, the laid cable could be 32 pairs of OC-3 (155 Mbps), or higher rates such as OC-12 (622 Mbps) or OC-48 (2.488 Gbps). Cable routes are also difficult to track, so there is no way to count how much capacity each country has, even in the U.S. The only measurement of fiber cable capacity is deployed mileage, a general proxy for comparison purposes. Even then, redundant paths and routes make exact measurement very difficult.(55)


There is another, lesser difference between international and domestic capacity. Domestically, if unexpected demand arises, the carrier can always install new terminal equipment based on more advanced technology at the cable terminals (for example, replacing OC-3 equipment with OC-12), and insert more repeaters along the underlying cable. Technology and capital cost, rather than capacity, are the issues to be considered.

Until very recently, this could not be done for international undersea cable, because adding repeaters undersea is not economical. With the recent commercialization of the WDM technique, however, carriers can now add WDM equipment at both cable head ends; depending on the number of colors (wavelengths) the carrier chooses, the existing cable capacity is multiplied accordingly. WDM is by far the newest technology by which, beyond raw line speed, fiber optic cables can provide enormous capacity, far exceeding any demand growth projection.
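The WDM multiplier works exactly as described: each added color carries a full copy of the cable's per-channel line rate. A minimal sketch, where the 8-color upgrade is a hypothetical example rather than any announced system:

```python
# Sketch of the WDM capacity multiplier described above: each color
# (wavelength) carries the full per-channel line rate of the cable.
def wdm_capacity_gbps(per_wavelength_gbps, num_wavelengths):
    """Aggregate capacity of one fiber pair carrying several wavelengths."""
    return per_wavelength_gbps * num_wavelengths

# A hypothetical upgrade of a 5 Gbps cable head end to 8 colors:
print(wdm_capacity_gbps(5, 8))   # 40 Gbps over the same deployed fiber
```

The key point for capacity planning is that the upgrade happens at the head ends only; the undersea plant itself is untouched.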

Due to its complexity and uncertainty, we will not examine national fiber cable capacity. We further assume that the telecom market is efficient and that carriers can make their own business judgments on deployment plans to meet their customers' needs.

International Deployed Capacity

Table 1 shows the trans-oceanic cables from 1988, when the first fiber optic cable, TAT-8, entered service, through the end of 1999, the date of the last currently planned cable (Africa One).

In the Atlantic Ocean, the compound annual growth of capacity from 1988 to 1995 is almost 53%, far exceeding the annual international traffic growth of 10-15% over the same period. The latest TAT-10 and TAT-11 cables, which run from the U.S. to Germany and France, respectively, are still open for ownership. CANTAT-3, another high-capacity cable on the Canada-Iceland-Denmark-Germany route, is relatively new and should have ample unused capacity. The arrival of TAT-12/TAT-13 will double capacity again by the end of this year, adding another 192 DS-3 units. That alone could provide 192 new two-way video channels on a 24-hour basis. With ATM and SMDS switching terminals, this additional capacity should easily carry several hundred multimedia applications in each direction.

The same observation holds in the Pacific Ocean. By fall 1996, TPC-5 will be commercially activated between the U.S. and Japan and will double that region's capacity. With the newest technology, TPC-5 can carry traffic at 5 Gbps, ten times more than the previous cable laid in the Pacific Ocean.

The Indian Ocean is by far the least cabled region; however, after the new FLAG and Africa One cables enter service, the whole world will be seamlessly connected by undersea fiber optic cables. Since Africa One will land in 40 or more countries, AT&T plans to use the latest WDM technology on it to provide a secure, redundant network that can survive any single landing-point failure.

Table 1, however, does not include any additional capacity gained either by adding WDM equipment (other than on Africa One) or by using compression technology to increase voice-channel gain by a factor of four or more (one voice circuit can carry five voice channels simultaneously through time division multiplexing).

Tables 2 through 4 show sub-regional undersea fiber optic cables deployed since 1988. Asia-Pacific regional cable capacity has grown to more than ten times its original level in the past six years. South America is another fast-growing region, with six-fold growth over the same period. European countries, having developed earlier, still have a large base; additional inland cables laid on the continent are not captured in our database count.

One additional point about international fiber cable is its shorter planning cycle. The planning cycle can now take a year, or even less than 12 months, for a 560 Mbps cable, compared with the 4 to 5 years required for TAT-8, the first trans-oceanic fiber optic cable. This adds to the benefit of waiting for the latest available technology until actual demand is realized.

ISDN Capacity

The growing number of Internet users has created a large demand for technology that can provide fast access to multimedia and other Internet information. Traditional copper-wire telephone lines are no longer adequate for moving content from one computer to another; downloading graphic files from the Internet over a regular analog phone line often takes minutes, even with a 28.8 kbps modem.
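The download times behind that complaint can be sketched directly from the line rates. The 1 MB file size is an illustrative assumption, and the calculation uses raw line rates only (no protocol overhead or compression), so the times are rough:

```python
# Rough download-time comparison behind the claim above (raw line rates
# only, ignoring protocol overhead and compression).
def download_seconds(file_kilobytes, line_kbps):
    """Seconds to move a file of the given size at the given line rate."""
    return file_kilobytes * 8 / line_kbps   # 8 bits per byte

one_mb = 1_000   # a 1 MB graphic file, expressed in kilobytes
print(round(download_seconds(one_mb, 28.8)))   # ~278 s on a 28.8 kbps modem
print(download_seconds(one_mb, 128))           # 62.5 s on 128 kbps ISDN (2B)
```

Even dual-channel ISDN only brings a sizable graphic down from minutes to about a minute, which frames the later discussion of ATM and broadband alternatives.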

ISDN, at 64 kbps or 128 kbps, is the transition technology available to the telcos to handle increasing Internet connectivity. However, actual ISDN deployment among U.S. LECs is not very impressive. According to a study by Morgan Stanley, even though the RBOCs and GTE claimed that ISDN is available to most customers (U.S. West 59%, SBC 66%, Pacific Bell 87%, NYNEX 76%, GTE 18%, BellSouth 64%, Bell Atlantic 90%, and Ameritech 80%), Morgan Stanley still believed that the service is difficult to obtain in most places. Further reports indicate that the difficulty stems from high installation rates and usage fees (rather than flat fees) and, even more commonly, long waiting periods.

According to the FCC report, actual ISDN users are far fewer than ISDN-capable lines. Tables 5A and 5B show the actual ISDN customers at BRI and PRI rates, respectively, for the last three years (1992-1994).

Similar information in the same table covers the top 15 European countries. Germany is by far the largest ISDN-deployed country, with more than 500,000 BRI customers and 28,000 PRI customers, compared with 224,450 and 3,200, respectively, for the reporting U.S. companies in 1994; the U.S. PRI customer base in particular is much smaller than Germany's.

One of the main reasons for Germany's success is its rates. Table 6 shows a comparison of rates among European countries for both BRI and PRI. Besides rates, Germany has other incentive programs to promote ISDN.

A recent announcement by Microsoft that it will provide an on-line ISDN ordering process should help the LECs ease their order processing. Some Bell companies have reduced their rates (Bell Atlantic), and some have started offering flat rates. These steps should make ISDN more affordable and easier to acquire.

Again, this is not a capacity issue; it is an administrative or policy issue of service offerings.

Internet Capacity

As we mentioned in an earlier section, ISPs provide the capacity required to transmit and receive data over the Internet. Users are charged a flat fee for unlimited usage, with additional fees for additional service options. Traffic flows from one ISP to another under a series of transit and/or peering agreements. These agreements are similar to delivery contracts between companies; without them, ISPs would have no incentive to accept and deliver each other's traffic. They are extremely important to the functioning of the Internet, since without them traffic could not flow.

Transit agreements are similar; they are contracts that allow ISPs to temporarily send packets over another ISP's network in cases of emergency, or when the primary network cannot handle the traffic. ISPs desiring to exchange packets meet at MAEs, NAPs, or other specified meeting points, according to their peering or transit arrangements.

An ISP's capacity is directly related to the equipment it uses to interconnect its network with others, or with facilities that connect other networks to each other; in particular, to the switches and routers the ISP uses to connect its network to the MAE or NAP.

The MAE in metropolitan Washington uses an FDDI ring with a 100 Mbps switch to carry traffic throughout the network. Although the current add-on cards can handle only 2 Mbps each, additional cards can be added to increase capacity when necessary. Capacity is limited only by the number of OC-3s attached to the ring, and as newer and faster switches come on the market, the OC-3s can be replaced. As a result, there is effectively no limit to the Internet traffic that can be carried.

Internet congestion problems are not the result of a capacity shortage but of how a particular system is structured. Often the system was engineered to carry voice traffic, not data traffic, so network protocols that are not optimized for data are used. Simply upgrading a T-1 line to a T-3 will not by itself improve the network's performance. These systems can be reengineered and their performance increased dramatically; however, this often means the network must be taken down and made unavailable while the work proceeds.

As mentioned in an earlier section, the U.S.'s primary backbone before April 30, 1995 was the NSFNET. On that date, NSFNET ceased operations as its funding disappeared, and private companies assumed control of the backbones or created alternative ones. Today some 14 other backbones exist. MCI, having helped create NSFNET, is the largest carrier of Internet traffic, carrying over 40% of it, and uses a 34 Mbps link to hub traffic from the U.S. to the U.K. (through BT's network).(56)

Other backbone carriers include Sprint (another 34 Mbps link from the U.S. to London and to Norway), ANS, UUNET, PSINET, AGIS, and others.

Outside the U.S. a similar setup exists. Western European countries have national networks attached to various European backbones (E-Bone, Dante), but these backbones are immature and often inefficient; connections between countries are often slow or of inferior quality. In Europe, only MFS, a U.S. company, provides high-speed, large-bandwidth service through Frame Relay or ATM networks. MFS operates Frame Relay networks in Frankfurt, Zurich, Stockholm, and Paris, and an ATM network connecting London with New York. MFS's Frame Relay hubs are interconnected with each other and with the ATM network in London. MFS's European backbones are optimized to carry Internet, voice, and data traffic.

These networks should not experience capacity problems, because they have been engineered for data. As discussed under international deployed capacity, there are plenty of unused DS-3 units (45 Mbps, the typical backbone capacity needed for international Internet traffic flow).

Congestion throughout the Internet is caused by a misallocation of resources, not by any shortage of capacity. Interconnection between ISPs is done through fee-free peering arrangements. In the past, smaller ISPs could buy transit rights, which allowed non-affiliated ISPs to carry another network's traffic. Today, however, paid transit agreements are rarely made, and ISPs peer only with other ISPs of similar size. As a result, smaller ISPs without a nearby peering or transit arrangement are forced to route traffic over certain paths only, resulting in delays.

Current Internet arrangements are far from settled. Increasing the number of interconnections raises a host of other issues, such as who will pay for the capacity and what type of interconnection agreements among carriers or countries will be written. We will revisit this issue when new and improved interconnection agreements are created.

ATM Capacity

Since ATM is still at a very early stage of development, most of its current applications are in the private sector, particularly LAN interconnection. The public network interface standard has not been completed, so ATM is not yet available for public switched services. MFS is so far the only carrier to have deployed commercial international ATM service in the U.S., for private connections. Finland Telecom is at present the only European carrier providing ATM commercially.

Although ATM can carry voice, data, and video, it will likely require new bilateral agreements and thus will not be accessible through the public switched networks. This is in spite of its high-bandwidth switching capability, at rates up to Gbps, which will actually save network bandwidth and ease capacity constraints, allowing for broadband services. Because of its many advantages, ATM will be the network of choice for ISPs.


Ongoing development of broadband technology, particularly switching technology such as ATM, indicates that future multimedia telecommunications needs should be seamlessly matched by the supporting transmission capacity. As we have seen with fiber technology, more and more services will be deployed in the future. Demand is usually the driving force behind advances in technology, not the other way around; Java is a perfect example, having been created to solve C++ programming problems. Having examined current and oncoming international capacity, we can be confident that no capacity constraints limit the deployment of new technologies. In the coming Internet era, even with more bandwidth-hungry applications, there will always be a way to solve bandwidth problems. However, in the area of Internet telephony, the telecom industry will face the challenge of drawing the borderline between computing and telecommunications in the coming years. One thing that is sure is that we can expect some degree of convergence of these two exciting industries.

1. LDDS Telecom Glossary.

2. ISDN FAQ, as seen at Http://www.ocn.com/ocn/isdn/faq1/faq_toc.html.

3. LDDS-Worldcom Telecom Library, In Perspective, "The Lessons of ISDN" and "An ISDN/PRI Primer," Spring 1993 and Fall/Winter 1992, Http://www.wiltel.com/perspect.

4. Landwehr, John, The Golden Splice: Beginning a Global Digital Phone Network, Northwestern University, gopher://noc.macc.edu/70/00/isdn/papers/isdn.paper, p.2.

5. Op. cit. supra at 2, 1.08.

6. Op. cit. supra at 6.

7. ISDN Primer, Advanced Computer Communications (Http://www.sys.acc.com), Santa Barbara, CA.

8. Op. cit. supra at 2, 1.02.

9. Op. cit. supra at 4, pp.3-4.

10. Stewart, Ian, "ISDN--It Sure Delivers Now," Global Telephony, Intertec Publishing Corporation, January 1995.

11. Op. cit. supra at 1.

12. ISDN User Guide, Pacific Bell (http://www.pacbell.com/isdn/book/isguide-8.html).

13. Op. cit. supra at 11.

14. Op. cit. supra at 1.

15. Op. cit. supra at 10.

16. Balaji Kumar, Broadband Communications: A Professional's Guide to ATM, Frame Relay, SMDS, SONET, and BISDN. McGraw-Hill, 1995, p.185.

17. Business Week, June 26, 1995.

18. Harry Newton, Newton's Telecom Dictionary. Flatiron Publishing, Inc.: New York, 1994, pp.471-2.

19. Balaji Kumar, Broadband Communications. McGraw-Hill, Inc., 1995, p.141.

20. Multiplexing: a technique that enables a number of communications channels to be combined into a single broadband signal and transmitted over a single circuit. At the receiving terminal, demultiplexing of the broadband signal separates and recovers the original channels. Multiplexing makes more efficient use of transmission capacity to achieve a low per-channel cost.

21. Craig Partridge, Gigabit Networking, Addison-Wesley Publishing Co.: Reading, MA, 1994; and Vincent W.S. Chan, "All-Optical Networks," in Scientific American, September 1995, pp.72-76.

22. Gregory C. Staple, ed., TeleGeography 1995: Global Telecommunications Traffic, Statistics and Commentary. TeleGeography, Inc., October 1995, pp.84-87.

23. Craig Partridge, Gigabit Networking.

24. Tingye Li, "Optical Amplifiers Transform Lightwave Communications," in Photonics Spectra, January 1995, pp.115-117.

25. For a more technical explanation and a detailed history of the development of EDFAs, see: Emmanuel Desurvire, "The Golden Age of Optical Fiber Amplifiers," in Physics Today, January 1994, pp.20-27.

26. Vincent W.S. Chan, "All-Optical Networks."

27. Op. cit. supra at 4.

28. Op. cit. supra at 6.

29. Op. cit. supra at 3.

30. M. Wehr, "Wavelength division multiplexing in transmission networks," in Commutation & Transmission, published by Sotelec, no.2, 1995.

31. Craig Partridge, Gigabit Networking. Addison-Wesley Publishing Co.: Reading, MA, 1994.

32. All SONET information from "In Perspective."

33. "Tying up Sonet's loose ends," in Telephony, 2 October 1995, pp.50-51.

34. Srinivasan Ravikumar, Steven H. Hersey, and Philip M. Francisco, "The Building Blocks for SONET Success," in Telephony, 2 October 1995, pp.40-48.

35. Jeffrey K. MacKie-Mason and Hal Varian, "Economic FAQs About the Internet." University of Michigan, School of Public Policy.

36. Ibid.

37. Ibid.

38. Ibid.

39. Ibid.

40. Ibid.

41. Ibid.

42. Ibid.

43. Internet Domain Survey, January 1996, Network Wizards, URL: Http://www.nw.com.

44. Kevin Savetz and Andrew Sears, "How Can I Use the Internet as a Telephone," Http://www.northcoast.com/~savetz/voice-faq.html.

45. Ibid.

46. Ibid.

47. Ibid.

48. Tim Dorcey, "CU-SeeMe Desktop Video Conferencing Software," Cornell University, Connexions, Volume 9, Number 3, March 1995.

49. "IBM Moves for Internet Telephone Software Service," Telecomworldwire, February 8, 1996, WestLaw Allnewsplus.

50. Greg Staple, ed., TeleGeography; Zachary Schrag, "The Internet Becomes an Industry," TeleGeography, pp.53-65.

51. Kevin Savetz and Andrew Sears, "How Can I Use the Internet as a Telephone," Http://www.northcoast.com/~savetz/voice-faq.html.

52. Object-oriented programming takes the approach of asking "what" the intent of the program is, not "how" to program it. Its goals are to find the objects and their connections, and then define the operations that carry out the tasks for those objects. Object-oriented software aims to be robust and easily reused, refined, tested, maintained, and extended.

53. Internet Advisor, Premiere Issue 1996, "Brewing Up Applications with Java," by Paul Phillips.

54. JavaWorld, May 1996.

55. Jonathan M. Kraushaar, "Fiber Deployment Update: End of Year 1994," Industry Analysis Division, Common Carrier Bureau, FCC, July 1995, p.5.

56. Ibid.