David G. Messerschmitt
IEEE Proceedings, August 1996.
Copyright © 1996 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE. See http://www.ieee.org/copyright/reqperm.htm for instructions on how to obtain permission.
This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
As has been widely recognized for some time, the computing and telecommunications technologies are converging. This has meant different things at different times. In this review paper, we describe the current state of convergence, and speculate about what it may mean in coming years. In particular, we argue that as a result of the horizontal integration of all media (voice, audio, video, animation, data) in a common network and terminal infrastructure, telecommunications and networked-computing applications are no longer distinguishable. Since the old terminology is no longer meaningful, we attempt to codify networked applications in accordance with their functionality and immediacy. As application functionality is increasingly defined in software, with commensurate cost-effective programmable terminals and means for distributing applications over the network itself, we argue that user-to-user applications will be greatly affected, moving into the rapid-innovation regime that has characterized user-to-information-server applications in the recent past. Finally, we identify a number of areas in which different technical approaches and design philosophies have characterized telecommunications and computing, discuss how these approaches are merging, and point out areas of needed research. We do not address complementary forms of convergence at the application or industrial level, such as convergence of the information and content-provider industries, but rather restrict attention to the infrastructure and technology.
Why should we care? The convergence has had, and will continue to have, a profound impact on technology, industry, and the larger society. The traditional fields of telecommunications and computing have each already been irreparably changed by the other, and, as we argue below, will be even more substantially recast in the future. We argue that much more profound changes are forthcoming, changes no less weighty than the rapid disintegration of the vertically integrated industrial model (from silicon to applications). Finally, while computing in the absence of communications has led to new applications and made substantive changes to leisure and work life, computing in conjunction with communications will have a profoundly greater impact on society. This is because communications is at the heart of what makes a society and a civilization, and the convergence with computing will revolutionize the nature of that communications.
Recently, the infrastructure and applications for these technologies have become seriously blurred. In both the network (embodied in the Internet and asynchronous transfer mode, or ATM) and the desktop computer, data has become integrated with continuous media (audio and video), enabling so-called multimedia applications. Applications are becoming blurred as well: accessing bank records using a DTMF telephone and voice response unit, or using a networked computer, differs as to medium but not basic functionality. Thus, the classical terminology of telecommunications and computing is no longer as useful, and possibly even delusory. In light of this, it is appropriate to define a more transparent classification of networked applications that is media-blind and focuses on the functionality provided to the user.
As an aid to understanding, we adopt the three-level model of Figure 1, similar to that proposed in . We define an application as a collection of functionality that provides value to a user (a person). In this paper we are concerned with networked applications, meaning applications whose functionality is distributed across a telecommunications and computing environment. Examples of networked applications are electronic mail, telephony, database access, file transfer, World Wide Web browsing, and video conferencing. A service is defined as functionality of a generic or supportive nature, provided as part of a computing and telecommunications infrastructure, that is available for use in building applications. Examples of services are audio or video transport, file-system management, printing, electronic payment mechanisms, encryption and key distribution, and reliable data delivery. Bitways are network mechanisms for transporting bits from one location to another. Examples of bitways with sufficient flexibility for integrated multimedia applications are Asynchronous Transfer Mode (ATM) or internets interfaced with the Internet Protocol (IP).
Each user in a networked application interacts with a local terminal, which communicates in turn with remote computers or terminals across the network.
We also separate networked applications into two classes with respect to the temporal relationship in the interaction of the user with a server or with another user:
One useful test is whether the user concentrates solely on the application (immediate) or typically moves to another task in the middle of an interaction (deferred). Immediate applications would sometimes be called synchronous  or real-time , and deferred would sometimes be termed asynchronous  or messaging .
[Table of example networked applications omitted in transcription; one surviving entry is video on demand.]
Networked applications are physically realized by terminal nodes (or just terminals) interconnected by bitways. Functionally there are two basic architectures available for networked services, as illustrated in Figure 2:
Often the peer or client functions will be realized in software on a desktop computer; alternatively, they may be realized in dedicated-function terminals (like a telephone or a video conference set). For simplicity, we will refer to "peers", "clients", and "servers", without the associated terms "terminal" or "computer". Note that the terminal/bitway division is (primarily) a physical partitioning of functionality between terminals at the edges of the bitway and the bitway itself. The three-level architecture of Figure 1, in contrast, is a logical separation of functionality, where application functionality will typically reside physically in the terminals, and services functionality may reside in the terminals or somewhere within the bitway.
Although clients and peers serve similar user-interface functions, there are some basic differences. Typically many clients will connect to a single server, whereas a peer must be prepared to connect to any other peer. In some applications, like multi-way video conferencing, a peer may be connected to more than one other peer simultaneously. To establish a new instance of an application, a server must always be prepared to respond to an establishment request from a client (but doesn't originate requests), whereas a client may originate establishment requests (but isn't prepared to respond to them). A peer must be able to either originate or respond to establishment requests, and in this sense is a hybrid between a client and a server. A client can rely on the server for some functionality, whereas a peer must be self-contained. The biggest differences are in scalability to large numbers of users, interactive delay, and interoperability (see Section 4.2).
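The establishment-role asymmetry just described can be summarized in a few lines of code (a schematic sketch; the class names are ours, not the paper's):

```python
# Sketch of the establishment-role asymmetry among clients, servers, and peers.
# All names here are illustrative, not taken from the paper.

class Server:
    """Responds to establishment requests but never originates them."""
    can_originate = False
    can_respond = True

class Client:
    """Originates establishment requests but is not prepared to respond."""
    can_originate = True
    can_respond = False

class Peer:
    """Hybrid between client and server: can originate or respond."""
    can_originate = True
    can_respond = True

def can_establish(caller, callee):
    """A session can be set up only if one side originates and the other responds."""
    return caller.can_originate and callee.can_respond

assert can_establish(Client(), Server())      # classic client-server
assert can_establish(Peer(), Peer())          # peer-to-peer, either direction
assert not can_establish(Server(), Client())  # a server never originates
```

The model also makes plain why a peer must be self-contained: unlike a client, it cannot assume a responding party will supply missing functionality.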
Computer technology has its genesis in basic technology arising from telephony; namely, the relays used in telephone switches. Subsequently, both computing and telecommunications exploited underlying advances in electronics and optoelectronics (the latter in the case of communications). More to the point, functional as opposed to technological convergence occurred with the advent of stored-program control for telephony switches and the development of digital representation of telephony signals (through quantization and analog-to-digital conversion in so-called pulse-code modulation) in the 1950s.
These two developments presaged two profound shifts in telecommunications. First, computers became common as control and signalling points in telephony networks, enabling more functionally complex telecommunications services, and second, digital representations of audio, image, and video signals allowed them to be stored and manipulated by standard computational hardware. While the first factor has resulted in a major shift toward the automation of the telephone networks, it has had relatively little influence on the computer industry. The second development has had far wider implications outside communications, such as compact digital audio, digital HDTV, and the extremely flexible manipulation of signals by standard or custom digital hardware (the latter called digital signal processing). Only today is this technology joining the computing mainstream, as enabled by the increasing performance of desktop computers.
Two seminal developments were the desktop computer and networks devoted specifically to communication among computers, first the "local area network" and later the "wide area network" (two early examples of which are Systems Network Architecture (SNA) and the ARPANET, the latter having evolved into the Internet). Early examples of applications enabled specifically by the networked computer include electronic mail, file transfer, concurrent databases, and recently the World Wide Web (WWW). The stand-alone desktop computer had previously enabled its own set of high-value applications, such as desktop publishing, spreadsheets, and other personal-productivity applications. The networked computer provides a ready large-scale market for new applications, thereby reducing the barriers to entry for new application developers.
Computer networking, like the control computer before it, was widely adopted in the telecommunications industry as the basis of signalling and control. This signalling function was originally realized in-band on the same voice channel, but was replaced by a signalling computer network called common-channel interoffice signalling (CCIS) . CCIS enabled the advance from simple circuit-connection functions to much more advanced features (like caller identification), and ultimately will provide terminal-to-terminal signalling capabilities (a basis for dynamic deployment, see Section 3.7).
Up to this point, there remained an infrastructure for computing that emphasized data-oriented media (graphics, animation), and a relatively separate telecommunications infrastructure that focused on continuous-media signals (voice and video). These converged in a relatively superficial way, at the physical and link layers, where telephone, videoconferencing, and computer networks shared a common technology base for the physical-layer transport of bits across geographical distances. The telecommunications industry made extensive use of computer and software technologies in implementing the configuration and control of the network. The computer industry made use of the telecommunications infrastructure to network computers, enabling networked applications. However, it is fair to say that the disciplines remained intellectually separate, sharing common hardware and communications media but pursuing distinct agendas and possessing distinct cultures.
There are a number of inventions embodied in computing, but arguably the most important is programmability. The expanding importance of programmability flows from extraordinary advances in the cost/performance of the underlying electronics and communications technologies. In the context of any single application (like control, voice, audio, video, etc.), the performance requirements in relation to the capabilities of the underlying technology pass through three stages:
The final stage -- a software-defined solution -- has an important implication; namely, the basic functionality need not be included or defined at the time of manufacture, but rather can be modified and extended later. This property -- that the basic functionality can change and advance over time -- is the key to the triumph, for example, of the personal computer over the stand-alone word processor.
The advances in underlying technology are such that software-defined implementations are cost effective for audio as well as virtually all data media, and as time passes will become viable as well for video at increasing temporal and spatial resolution. Thus, the programmable implementation can be expected to spread to all corners of the computer and communications world (although there will always remain high-performance functions that are implemented directly in hardware).
The modern trend is toward adaptability, a capability that (usually) builds on programmability and adds the ability to adjust to the environment. For example, in a heterogeneous environment it is helpful for each element to adapt to the capabilities of other system elements (bandwidth, processing, resolution, etc.).
There are two architectural models for provisioning networked applications, as illustrated in Figure 4. In the most extreme form of vertical integration, a dedicated infrastructure is used to realize each application. The premier example is the public telephone network, which was originally designed and deployed specifically for voice telephony. In contrast, the horizontal integration model is characterized by:
A key advantage of the horizontal model is that it allows the integration of different media within each application, as well as different applications within the bitway. (For this reason, this is often called an integrated-services network in the telecommunications industry.)
An important feature of horizontal integration is the open interface, which has several properties: it has a freely available specification, wide acceptance, and allows a diversity of implementations that are separated from the specification. Another desirable property is the ability to add new or closed functionality. Open interfaces enforce modularity and thus allow a diversity of implementations and approaches to coexist and evolve on both sides of the interface. Some of the most important open horizontal interfaces in the computer industry are illustrated in Figure 5. ATM is a protocol designed specifically to accommodate a diverse mix of traffic types. The Internet Protocol is an open standard for interconnecting bitways below it, where those bitways may incorporate a diverse set of technologies (including ATM). IP also allows a diverse set of media types and applications to reside above it. Another critical interface is the operating-system application program interface, which allows a diverse set of applications to coexist on the same bitways and services infrastructure, while hiding as much as feasible the details of that infrastructure. Horizontal interfaces also exist for control and signalling (e.g. control of telephony network features from a desktop computer application via the telephone application-program interface (TAPI), which supports computer-telephony integration).
Open horizontal interfaces are not completely successful at isolating horizontal functional layers. For example, one open interface is dependent on the suite of primitive functions offered by a lower interface, a phenomenon called protocol dependence. David Clark has defined a special type of open interface called a spanning layer, which adds the characteristic that the extent of its adoption is nearly ubiquitous . A specific spanning layer called the "open data network bearer service" is proposed in . Spanning layers are particularly useful because higher interfaces can presume their existence and the services they provide, thus effectively isolating the design of the horizontal layers above and below.
The computer industry is well along in the evolution to horizontal integration. The networked desktop computer resulted in the division of the industry into distinct horizontal segments (hardware, network, operating system, and application). Today, we are in the process of integrating non-data media such as audio and video into this same environment, supported at both the bitway level (LANs and the Internet) and on the desktop. The telecommunications industry was once vertically integrated, with a focus on provisioning a single application with a dedicated network, such as voice telephony, or video conferencing, or cable television. Today this industry is also moving toward architectural horizontal integration at the bitway level with ATM bitways that flexibly mix different media; however, it remains largely vertically integrated at the services and applications layers, as bitway providers aspire to value-added applications such as video on demand and differentiated terminals such as "set-top boxes".
We hypothesize that powerful economic and technological forces are driving us toward horizontal integration. Advances in technology have already resulted in the integration of different media in both the bitway (such as ATM or the Internet) and in the terminals (such as desktop computers). This level of horizontal integration offers the service provider substantial administrative benefits, relative to the alternatives of separate or overlay bitways, and adds value to the user, since different media can easily be incorporated into multimedia applications.
The separation of applications from bitways and services best serves the user by encouraging a diversity of applications, including many defined for specialized as well as widely popular purposes. Vertical integration discourages this diversity, because a dedicated infrastructure demands a large market and because users don't want to deal with multiple providers. Horizontal integration lowers the barriers to entry for application developers, since most of the infrastructure (bitways, services, and even programmable terminals) is already available. Applications can be defined in software and coexist in the same programmable terminals with other applications, reducing the distribution cost and the incremental cost of a new application. Finally, it is unlikely that a single company can accumulate the range of expertise required to provide the best solutions across such a wide range of media and technologies.
Open interfaces offer vendors a large and immediate market for new applications. The resulting diversity of applications increases the utility of the open interface to the user. This positive reinforcement leads eventually to a dominant open interface, to be displaced only by a new interface that offers significant functional or performance advantages. The same inherent value of application diversity does not apply to bitways and services. They are generic and widely applicable to different applications, difficult to differentiate except in terms of cost and performance, and are capital intensive and benefit from economies of scale.
The computer industry is far along in the evolution to horizontal integration. The desktop computer freed the user of the constraints from the computer center bureaucracy and lowered the barriers to entry of application developers, which in turn offered greater value to the user. Our speculation is that the telecommunications industry will be pushed by market forces in the same direction, even though many companies would doubtless prefer vertical integration and closed solutions.
Here we use the term "untethered" to refer to wireless access to a bitway, "nomadic" to refer to geographic flexibility in accessing a bitway, and "mobile" to refer to bitway access while the user is in motion. In a sense these three concepts build upon one another, though not strictly: mobile services are by definition nomadic and necessarily untethered, whereas nomadic services need not be untethered. The three concepts lead to different but overlapping sets of challenging technological issues.
Nomadic telephony has long been available in the form of extension and pay telephones. (Perhaps because there is no computing "service provider", an analogous infrastructure is yet to appear in networked computing.) Untethered telephony has been offered for some time by the cordless phone , and later, mobile telephony arose in the extraordinarily successful deployment of cellular telephone systems . Computing has remained fixed-location for some time, although one might view networked client-server computing as nomadic in the sense of making an application executed on a server available to a nomadic user, should they be able to find a bitway access point. The laptop computer has supported the nomadic and even mobile computer user (although alas not the networked computer user, except to the extent such networking can be accomplished over the telephone). Recently, there is beginning to develop an infrastructure supporting nomadic and mobile networked computers .
Nomadic and mobile services and applications have been so successful because a fixed-location constraint is a mismatch to the roving nature of human activity (indeed, even within the office or residence). To the extent services and applications can be provisioned in a cost-effective mobile (or even untethered) fashion, experience has shown that users will choose this option. Thus, it is clear that nomadic and mobile telecommunications and computing are extremely important for the future, while offering many serious technological challenges.
Nomadicity and mobility provide another point of convergence: the issues raised by mobile telecommunications and by mobile networked computing are similar. Both require the dynamic migration of resources (connections, internal state, processes, reserved memory and bandwidth, etc.), and both raise serious issues related to QoS (uninterrupted service, inability to reserve resources in advance without regard to location). Since telecommunications has addressed these difficult issues for some time, there is an excellent opportunity for cross-fertilization to nomadic and mobile computing.
Beyond a couple of applications of universal interest -- voice telephony and video conferencing -- user-to-user applications are much fewer in number than user-to-information-server applications (although nevertheless very successful). These universal user-to-user applications have previously used the dedicated telephone network but are migrating to the Internet, for example with CU-SeeMe. A less familiar example is groupware and collaborative computing, where two or more users can perform shared functions on a document or database, as in a collaborative design project. Still other applications, such as telepresence and telemanipulation, are important in military, outer-space, and dangerous environments, but potentially also of importance in medicine. In contrast, there are a large and expanding number of user-to-information-server applications, such as the World Wide Web (WWW).
Why are user-to-user applications so few in number? One possibility is that this is inherent; another is that this class of applications has been overlooked by the application software industry; yet another is that the human-factors aspects are not sufficiently developed. Another is the requirement for a cumbersome and time-consuming standardization process if two or more vendors are to achieve interoperability in a given application. In our view, none of these reasons is as important as a fundamental obstacle to the commercial exploitation of user-to-user applications that economists call direct network externality. This property of networked applications, which distinguishes them from most other market goods, is that the value of an application to a particular user grows with the number of other users that have an interoperable application available; that is, with the community of interest willing and able to participate in that application. In consequence, early adopters derive very little value, which is an economic barrier to a vendor attempting to establish such an application. (Who is the first user to buy a video conferencing application if there are no other users with whom to conference?)
User-to-user applications display strong network externalities. In contrast, user-to-information-server applications have a weaker network externality that makes them much easier to establish in the marketplace. This is because, once an information server is made available on the network, the first user derives the same value as later users.
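A toy value model makes the contrast concrete (the linear growth assumption and the numbers are ours, purely illustrative): in a user-to-user application, each user's value grows with the number of other interoperable users, while the first user of an information server already derives full value.

```python
def user_to_user_value(n_users, value_per_partner=1.0):
    # Each user can interact with the (n_users - 1) other users, so
    # per-user value grows with community size (direct externality).
    return max(n_users - 1, 0) * value_per_partner

def user_to_server_value(n_users, server_value=10.0):
    # Once the server is on the network, every user -- including the
    # very first -- derives the same value from it.
    return server_value if n_users >= 1 else 0.0

# The early adopter's dilemma: the first user of a conferencing
# application has no one to conference with.
assert user_to_user_value(1) == 0.0
assert user_to_server_value(1) == 10.0
# User-to-user value catches up only as the community grows.
assert user_to_user_value(100) > user_to_server_value(100)
```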
Network externality can be partly overcome by a good mechanism for distribution of application software. If a user-to-user application can be distributed to a large number of users virtually simultaneously, interoperability and a community of available users is guaranteed, even for early adopters. For software-defined applications, this is technically feasible, since an application can be distributed over the network itself. As shown in Figure 6, the user obtains a binary executable for a client or peer application over the network itself as a prelude to participating in the application. Developers of user-to-information-server applications like World-Wide Web browsers , document viewers , and audio and video players are distributing new versions of those applications over the network; in fact, they are bypassing many externality issues by distributing them for free (hoping to derive revenue from the interoperable server software), thus establishing a community of interest quickly. By bypassing traditional slow distribution channels, the velocity of innovation in these applications has been increased dramatically. Since user-to-user applications have a much stronger network externality, network distribution has the potential to make a much bigger impact on this class of applications.
The virtual machine is illustrated in Figure 7. A layer of software is inserted between the operating system and the application that separates the application from the specifics of the operating system and hardware platform. The virtual machine open interface defines a general instruction set, as well as APIs to resources like network services, all in an OS-independent way. It supports transportable computation, meaning that even though the program representing application functionality is stored in one node (typically peer or server), that program can be transported to and executed on another node (typically peer or client).
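In miniature, transportable computation looks like this: a program representation arrives from another node and is executed locally against an OS-independent interface. In the hedged Python sketch below, a source string stands in for the executable obtained over the network, and a tiny dictionary stands in for the virtual machine's APIs; all names are illustrative, and a real system needs verification and sandboxing far beyond this.

```python
# A toy "virtual machine" boundary: the transported applet sees only
# the API we hand it, not the host operating system. Illustrative only.

APPLET_SOURCE = """
def run(api):
    # The applet uses only the OS-independent API it was handed.
    api['display']('hello from a transported applet')
    return api['add'](2, 3)
"""

def execute_transported(source, api):
    namespace = {}
    exec(compile(source, '<network>', 'exec'), namespace)  # "load" the applet
    return namespace['run'](api)                           # run its entry point

output = []
api = {'display': output.append, 'add': lambda a, b: a + b}
result = execute_transported(APPLET_SOURCE, api)
assert result == 5
assert output == ['hello from a transported applet']
```

The key property mirrored here is that the applet's functionality is defined at the sending node but bound to local resources only at the receiving node.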
Transportable computation offers four important advantages:
Dynamic deployment does not exploit the full power of transportable computation, which is embodied in the more general concept of an intelligent agent. An intelligent agent is a transportable program that includes four attributes and capabilities :
The capabilities of the intelligent agent open up a number of possibilities. Intelligent-agent technology originated in artificial intelligence, where one can imagine sophisticated human-like qualities such as adaptation to the environment and higher-level cognitive functions. Here, we can conceptualize more mundane applications that provide useful generalizations of user-driven information retrieval or of functions as basic as electronic mail. In this application domain, agents can act as "itinerant assistants" that are not restricted to particular servers, but cruise the network gathering or disseminating information. Such itinerant agents represent a different dimension of mobility: rather than the user being mobile, the user is represented by a mobile agent.
As networked applications become more sophisticated, especially as enabled by the interoperability and scalability benefits of dynamic deployment, we expect the application types and architectural models to become increasingly mixed. Typical collaborative applications will combine user-to-user and user-to-information-server functionality, as in a collaborative design involving two or more users and a common information server (storing the design being modified). The compelling performance benefits (see Section 4.1) of the peer-to-peer architecture for the user-to-user interactions suggest that peer-to-peer messaging will enjoy increasing popularity in such applications. An example of a resulting mixed architectural model is shown in Figure 11 for three users (with associated mixed client and peer functionality) and a single information server. All client/peer and server terminals or hosts can include repositories of applets, yielding the flexibility to locate computation wherever it achieves the best responsiveness and lowest latency and can access the data it needs, while ensuring interoperability.
The dynamic deployment of interwoven user-to-user and user-to-information-server multimedia applications in a horizontally integrated terminal and network environment represents the pinnacle of convergence. Networked applications that freely mix the constituent elements traditional to telecommunications and computing will become commonplace. At this point, there no longer exist any technological or intellectual differences that distinguish telecommunications from computing, and the dynamism and rate of progress in user-to-user applications becomes as great as has recently been experienced in user-to-information-server applications. As the availability of appropriate networked terminals becomes widespread (for example, Internet-connected personal computers with multimedia capabilities), this pinnacle of convergence will soon be upon us.
Arguably the greatest distinction between telecommunications and computing has been in performance metrics. A model of service provided by many computer systems and computer networks is best-effort, which can be described as "always strive to achieve better performance through more advanced technology or improvements in architecture, but there is no absolute performance standard; we are never satisfied". Since best-effort service does not take account of application needs, it is a "resources are cheap" model, in which applications may be provided considerably greater performance than they need, possibly at the expense of other applications that receive fewer resources than they need. In fact, in the case of limited resources, most best-effort systems strive to achieve "fairness", attempting to apportion those limited resources according to some equality criterion. An early example of best-effort service (uncharacteristically in telecommunications) is digital speech interpolation (DSI), which statistically multiplexes speech sources, apportioning the available bit rate equally among the stochastically varying active sources. Because speech quality deteriorates gracefully with increasing traffic load, high traffic can be accommodated, at the expense of no guarantee on the quality of speech reproduction for any customer. Best effort is the design philosophy of the present Internet. For example, fair queueing allocates bitway bandwidth in packet networks equitably among competing sources during periods of congestion.
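The fairness notion behind fair queueing can be sketched as max-min fair allocation (our illustration of the allocation criterion only, not a packet-level scheduler): each unsatisfied source repeatedly receives an equal share of the remaining capacity, with sources whose demands are small returning their excess to the pool.

```python
def max_min_fair(demands, capacity):
    """Allocate `capacity` among `demands` so that no source gets more
    than it asked for, and leftover capacity is split equally among
    the still-unsatisfied sources (max-min fairness)."""
    allocation = [0.0] * len(demands)
    unsatisfied = list(range(len(demands)))
    remaining = float(capacity)
    while unsatisfied and remaining > 1e-12:
        share = remaining / len(unsatisfied)
        still_unsatisfied = []
        for i in unsatisfied:
            grant = min(share, demands[i] - allocation[i])
            allocation[i] += grant
            remaining -= grant
            if allocation[i] < demands[i] - 1e-12:
                still_unsatisfied.append(i)
        unsatisfied = still_unsatisfied
    return allocation

# Three sources wanting 2, 8, and 20 Mb/s compete for 12 Mb/s: the
# small source is fully satisfied, the others split what remains.
assert max_min_fair([2, 8, 20], 12) == [2.0, 5.0, 5.0]
```

Note how this embodies the "equality criterion" of best effort: under overload, no source has any guarantee, only an equal claim on whatever capacity exists.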
With rare exceptions like DSI, telecommunications has focused on quality of service (QoS) guarantees, which can be described as "reliably achieve a level of performance that the user finds acceptable, but no better than that". QoS thus reflects a "resources are expensive" model, in which resources must be conserved. Because bitways support a variety of applications, each with a different standard of what the user finds acceptable, it is usually assumed that bitways provide variable QoS (a different QoS for each application). This requires resource-allocation mechanisms that adjust resources (such as bandwidth, buffer space, etc.) to the provisioned QoS. It is inherent in QoS that there must be pricing mechanisms that distinguish different QoS levels; otherwise, an application will always choose the highest available QoS. Resource-allocation, pricing, and billing mechanisms add a significant level of complexity to the bitway. Further, provisioning run-time variable QoS adds processing mechanisms that may actually slow down the bitway, since switching electronics is a significant bottleneck in today's bitways and there is often an inverse relationship between speed and complexity in electronics.
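By contrast with best effort, the resource-conserving QoS philosophy implies admission control: a new flow is admitted only if its guarantee can be honoured without violating existing reservations. The sketch below is a minimal illustration (the names and the single-link, rate-only model are our assumptions; real controllers must also handle burstiness, delay bounds, and pricing).

```python
class Link:
    """Admission control for guaranteed bandwidth on one bitway link.
    Illustrative sketch only; not drawn from the paper."""

    def __init__(self, capacity_mbps):
        self.capacity = capacity_mbps
        self.reservations = {}  # flow id -> guaranteed rate (Mb/s)

    def reserved(self):
        return sum(self.reservations.values())

    def admit(self, flow_id, rate_mbps):
        # Refuse rather than silently degrade flows that already hold
        # guarantees -- the opposite of the best-effort model.
        if self.reserved() + rate_mbps > self.capacity:
            return False
        self.reservations[flow_id] = rate_mbps
        return True

link = Link(capacity_mbps=100)
assert link.admit('video-1', 60)      # fits: 60 of 100 reserved
assert link.admit('voice-1', 30)      # fits: 90 of 100 reserved
assert not link.admit('video-2', 20)  # would overcommit: refused
assert link.reserved() == 90
```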
Another related difference in approach is one of trust. Perhaps because of its QoS objectives and related pricing, telecommunications has erected defenses against hostile users, for example deploying policing mechanisms at network access points. Networked computing has placed more trust in the users, for example building flow-control mechanisms into protocol suites but not enforcing them within the bitway.
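A representative policing mechanism at a network access point is the token bucket, which admits traffic conforming to a negotiated average rate and burst size and drops (or marks) the excess. The sketch below is illustrative only; actual policers, such as ATM's generic cell rate algorithm, differ in detail:

```python
class TokenBucketPolicer:
    """Police a source at the network access point: packets conforming
    to a negotiated rate and burst size pass; excess traffic is dropped.
    Illustrative sketch, not a production traffic-contract enforcer."""

    def __init__(self, rate, burst):
        self.rate = rate      # token refill rate, bytes per second
        self.burst = burst    # bucket depth (maximum burst), bytes
        self.tokens = burst   # bucket starts full
        self.last = 0.0       # time of last arrival, seconds

    def conforms(self, arrival_time, size):
        # Refill tokens for the elapsed interval, capped at the burst size.
        elapsed = arrival_time - self.last
        self.tokens = min(self.burst, self.tokens + elapsed * self.rate)
        self.last = arrival_time
        if size <= self.tokens:
            self.tokens -= size
            return True       # packet conforms to the contract: admit it
        return False          # packet violates the contract: drop it

p = TokenBucketPolicer(rate=1000.0, burst=1500.0)
print(p.conforms(0.0, 1500))   # True: the initial burst is allowed
print(p.conforms(0.1, 1500))   # False: only 100 bytes of tokens refilled
print(p.conforms(1.5, 1400))   # True: the bucket has refilled
```

The key point for the trust discussion is that enforcement happens inside the bitway, at the access point, rather than being delegated to the user's protocol stack.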
These different philosophies have been driven by their different applications. In particular, telecommunications has focused on continuous media like audio and video, where improvements in performance beyond a certain level are not perceived by the user. Further, the focus has been on immediate applications like telephony or broadcast television with broad appeal, rather than high-performance applications for smaller customer groups. In computing, on the other hand, there are always technology-driving applications that stress the available technology. Further, networked-computing applications have typically been deferred, and thus have not required performance guarantees. Consistent with horizontal integration, networks of the future will integrate deferred and immediate networked applications. Thus, there has been considerable effort in mixing the QoS and best-effort service models, including in today's premier horizontally integrated bitway technologies, ATM and the Internet IP.
Three QoS performance attributes of a bitway that we can consider guaranteeing are rate, latency, and reliability.
There are significant variations in requirements across applications for all three of these performance attributes, and there are typically divergent assumptions made in telecommunications and networked computing. These differing assumptions have resulted in different technological solutions that must be reconciled in horizontal integration. We now review each of these three performance attributes in turn.
A basic distinction can be made between two basic types of bit streams: continuous media (such as audio and video) and sporadic media (such as computer data).
In terms of rate characteristics, continuous media can be represented by a continuous stream of bits with variable or constant bitrate, whereas sporadic media may have periods of very high bitrate interspersed with dormant periods.
The telecommunications infrastructure traditionally focused on the continuous-media extreme, fixed-bitrate (circuit) transport with no statistical multiplexing, whereas computer networking has focused on sporadic media with extremes of statistical multiplexing advantage. Circuit switching avoids congestion losses, but is forced to perform admission control in the form of blocking at establishment during traffic overloads. Computer networking has not used admission control, offering service to all comers, but has utilized best-effort techniques to divide the available capacity among all services. As mentioned below, both communities appear to be evolving toward a horizontally integrated bitway infrastructure supporting both service models.
Quite distinct transport requirements apply to immediate and deferred applications. For immediate applications, interactive latency is often a critical element of subjective quality; thus, transport latencies are often required to be both short (tens or hundreds of milliseconds) and guaranteed. The desire for low latency in such applications is a key reason for the choice of a short packet size in ATM, as this reduces the time required to accumulate a packet at the bitway access point for a low-bitrate service such as voice. Guaranteed latency is particularly important for immediate applications built on continuous-media services, such as voice telephony and video conferencing. These services typically require a synchronous reconstruction with strict temporal requirements, and thus any data arriving with excess latency is not used, just as if it had been lost. This has led to attempts to ensure bounded delays in packet networks. Other immediate applications have less critical latency requirements; for example, video on demand may allow multiple-second delays.
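The packet-size argument can be made concrete with a little arithmetic. A 64 kb/s voice source fills the 48-byte payload of an ATM cell in 6 ms, whereas filling a large data-style packet would by itself consume much of an acceptable interactive latency budget:

```python
def packetization_delay_ms(payload_bytes, source_rate_bps):
    """Time to accumulate one packet payload at the bitway access point,
    in milliseconds, for a source of the given constant bitrate."""
    return payload_bytes * 8 / source_rate_bps * 1000.0

# 64 kb/s voice filling a 48-byte ATM cell payload: 6 ms per cell.
print(packetization_delay_ms(48, 64_000))     # 6.0
# The same source filling a 1500-byte packet: 187.5 ms, already a
# large fraction of an acceptable end-to-end latency budget.
print(packetization_delay_ms(1500, 64_000))   # 187.5
```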
A primary advantage of the peer-to-peer architecture (when compared to the client-server architecture in user-to-user applications) is low latency, which is one reason it has been widely applied to immediate user-to-user applications in telecommunications. Client-server adds not only server delay, but also possibly excess propagation delay due to more circuitous routing.
Statistical multiplexing accommodates streams with aggregate peak bitrates larger than the available bandwidth, and is therefore extremely efficient for sporadic media. A side effect of statistical multiplexing is latency associated with the buffering required to accommodate high instantaneous bitrates. In addition, sporadic media often require reliable delivery, which can only be achieved over unreliable transport bitways through multiple transmissions, with the side effect that latency cannot be guaranteed. Fortunately, sporadic media can tolerate the larger latencies imposed by statistical multiplexing and reliability.
For the future, horizontal integration requires a high degree of flexibility in accommodating both continuous and sporadic media. Similar challenges occur in the computer operating system, where additional latency is added through the statistical sharing of processing and memory resources, running counter to the latency requirements of continuous media. These are challenging issues, since the techniques usually associated with statistical speedups (caching, paging, queueing) are often at odds with performance guarantees.
One very attractive feature of transportable computation is the ability to finesse the latency issue by performing application functionality locally, avoiding bitway round-trip delays.
Reliability in transport is adversely affected by congestion, which may cause loss through buffer overflow, and by bit errors caused by noise or interference in transmission (which may cause loss if they occur in the packet headers, or corruption if they occur in the packet payload). The techniques available for improving reliability, including forward error-correction coding, diversity, and acknowledgment-and-retransmission protocols, have the fundamental side effect of increasing latency.
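The latency side effect of retransmission can be quantified with a simple model: under independent packet loss with probability p, the mean number of transmissions until success is 1/(1-p), and for a stop-and-wait protocol each failed attempt costs roughly one round trip. This is an illustrative model only, not a description of any particular protocol:

```python
def expected_transmissions(loss_prob):
    """Mean number of transmissions until success, assuming each
    transmission is lost independently with probability loss_prob."""
    return 1.0 / (1.0 - loss_prob)

def expected_arq_delay_ms(rtt_ms, loss_prob):
    """Rough mean delivery delay for a stop-and-wait ARQ protocol:
    each attempt costs about one round trip before the next try.
    The worst case is unbounded, so latency cannot be guaranteed."""
    return rtt_ms * expected_transmissions(loss_prob)

# Mean delay over a 50 ms round trip at increasing loss rates.
for p_loss in (0.0, 0.1, 0.5):
    print(p_loss, expected_arq_delay_ms(50.0, p_loss))
```

The mean grows only modestly with loss, but because the number of retransmissions is unbounded in the tail, reliable delivery by retransmission is fundamentally incompatible with a hard latency guarantee.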
As with rate and latency, there is a wide gulf in reliability guarantees between the approaches traditionally used in telecommunications and computer networking. Continuous media, since they represent an analog signal, can tolerate reasonable levels of loss and corruption with adequate subjective quality. On the other hand, these media often have critical latency requirements. Thus, telecommunications has focused on transport techniques like circuit switching that guarantee latency but not reliability. Computer networking, on the other hand, has typically dealt with sporadic media and thus has focused on transport techniques such as packet switching and statistical multiplexing, appending transport protocols (like TCP/IP) that guarantee reliable delivery at the expense of indeterminate delay. Horizontal integration at the bitway level requires an interesting mix of these service models.
Advocates of best-effort transport argue that mechanisms for controlling QoS will slow the bitrates supported by the bitways, since switching electronics is a bottleneck, and in addition the associated infrastructure required for signaling and billing will add significant costs. Thus, it is argued, a scalable best-effort bitway will provide adequate performance near-term for the lowest cost by simply provisioning adequate resources, possibly accompanied by admission control to ensure that those resources are adequate under worst-case traffic conditions.
Whether or not this best-effort argument is valid, it is clear that given geometric advances with time in processing, storage, and bandwidth, many performance issues will rapidly disappear. Research should focus on serious fundamental limitations or bottlenecks that are not mitigated by technology advances. We can easily identify two such bottlenecks: the propagation delay, which is bounded below by the speed of light, and the bandwidth of wireless access links, which is limited by available spectrum and interference.
At the same time, both processing power and bandwidth in backbone bitways advance geometrically. Disturbingly, the two lasting bottlenecks are largely ignored, while most attention is focused on bandwidth efficiency and other less critical issues. For example, video compression research focuses almost entirely on minimizing bit rate (a resource increasingly plentiful in fiber bitways and storage systems) while ignoring the resulting stringent reliability requirements (a scarce resource on interference-dominated wireless access links) and the signal-processing delay. Similarly, there is a troubling tendency to solve interoperability problems in heterogeneous environments by utilizing conversions or transcoding, operations that can introduce significant delay (as well as interfere with security and privacy by precluding encryption). Most research in terminal-to-network coordination is focused on congestion mechanisms in backbone bitways, while neglecting the more fundamental interference-related impairments in wireless access links.
Similarly, information theory focuses on fidelity, providing fundamental limitations on the throughput of physical channels with high fidelity and the maximum fidelity that can be achieved for a given bit rate in a signal's digital encoding. For the most part, information theory ignores delay (in many aspects it explicitly allows delay and complexity to be unbounded as a key assumption, with notable exceptions like error exponent bounds). On the other hand, queueing theory, which has been applied extensively to both computer networks and computer systems, focuses on delay and loss due to congestion, but offers no insights on fidelity. A key issue in convergence is uniform and unified ways of dealing with delay, loss, and corruption at the practical as well as theoretical levels. Particularly challenging, as mentioned before, is the problem of integration of different media and applications with variable QoS (delay and reliability) requirements.
Associated with QoS are numerous other issues where the traditional signal processing and communications theory communities can make a strong contribution. Among them are the relationship of quantifiable transport impairments to subjective quality, the aggregation of impairments in concatenated transport media, and various optimization questions related to the allocation of end-to-end impairments to individual facilities. Also of great interest are negotiation strategies between network and terminals to arrive at acceptable solutions, and the mechanization of these negotiations.
A powerful force underlying both telecommunications and computing is the exponential increase with time in the processing power of electronics, the bandwidth provided by photonics, and the capacity of storage systems. These advances have a strong tendency to overwhelm performance issues, given the passage of reasonable time. Nevertheless, at any given time, it is important to be able to accommodate whatever performance level or number of users necessary by simply adding resources to the system, as opposed to replacing the technology for higher performance. An architecture with this property is scalable. A desirable form of scalability is a resource cost that is at most linear in some measure of performance or usage. Scalability and technology advances together represent a powerful force: at any given time we can accommodate any number of users or achieve any performance at a cost roughly constant per user or proportional to performance, and over time the cost-performance (if we are willing to replace the hardware) improves geometrically.
Scalability has always been an overriding requirement in telecommunications, because of the desire to serve ever larger numbers of users in a common networked system. With network externality, the utility to each user increases with the number of users, and there may actually be economies of scale so that the cost per user decreases with the number of users, resulting in extremely favorable economics. (In addition, cross subsidies have also been used to achieve "universal" service in the telephone network.) Pre-networked computing, on the other hand, has focused on the single-user model, where scalability is not an issue. For networked computers, a strength of the peer-to-peer architecture is its inherent scalability. The server in the client-server architecture, however, represents an obstacle to scalability, both with respect to bitway bandwidth and processing power, unless the server is itself a parallel processor with scalability properties or can be mirrored indefinitely.
An example where scalability is a dominant consideration is communicating a single source simultaneously to multiple sinks, as illustrated in Figure 12. Examples are multi-party video conferencing (where each participant wishes to see all the other users) and remote learning (where each student wishes to see a common lecture). An obvious approach requiring no special measures in the bitways is for the source to simulcast to each of the sinks over separate streams. Simulcast is fine for a small number of sinks, but is not scalable to a large number of sinks because, as the number of sinks increases, either the source processing power or the access bitway bitrate will eventually be exceeded. A scalable approach is multicast, in which the source generates a single stream common to all sinks, and that stream is appropriately replicated within the bitway. Bitways supporting multicast are fundamentally different from unicast bitways, and are a topic of intense research for the Internet (the multicast backbone), ATM bitways, and ATM-based internets. An alternative architecture is to add servers to the network which perform the splitting function (as, for example, the reflectors in CuSeeMe), but this approach is also not scalable.
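The scalability contrast between simulcast and multicast shows up directly in the source's access-link requirement, which grows linearly with the number of sinks under simulcast but is constant under multicast. A trivial sketch makes the point (the rates are illustrative):

```python
def simulcast_access_rate(stream_rate, n_sinks):
    """Simulcast: the source sends a separate copy of the stream to
    each sink, so its access-link rate grows linearly with n_sinks."""
    return stream_rate * n_sinks

def multicast_access_rate(stream_rate, n_sinks):
    """Multicast: one stream leaves the source and is replicated
    inside the bitway, so the access rate is independent of n_sinks."""
    return stream_rate

# A 1.5 Mb/s video stream delivered to 2, 10, and 1000 sinks.
for n in (2, 10, 1000):
    print(n, simulcast_access_rate(1.5, n), multicast_access_rate(1.5, n))
```

At 1000 sinks, simulcast demands 1.5 Gb/s at the source access link (or the equivalent in source processing), while multicast still demands only 1.5 Mb/s; this is the sense in which only multicast is scalable.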
All networked applications require some level of coordination among the terminals (peers, clients, and servers) participating in the application, and between those terminals and the network. This coordination can occur during the setup phase, using so-called establishment protocols (in computer communications) or signaling (in telecommunications). The distinction in terminology arises in part from the tendency to perform signaling functions over the same port and network as is used for data in computer communications (in-band signaling), and over a logically separate signaling network (out-of-band signaling) in telecommunications (as, for example, the modern Signaling System No. 7). As telecommunications moves toward horizontally integrated packet networks, there is debate as to whether to employ the in-band or out-of-band model. Signaling is usually applied to the configuration of terminals and network, including the resource reservations that may be required for QoS guarantees and establishing the state in the bitway required to maintain connections (see Section 4.4). This coordination can also occur dynamically during the execution phase of the application, called a session (computer communications) or call (telecommunications), through some form of flow control or other control mechanism.
Consider the coordination needed between a terminal originating a bit stream (called the source), the network carrying that stream, and the destination of that stream (called the sink). Both computer communications and telecommunications have used a network-reactive signaling model, in which the source makes a configuration request through the signaling channel and the network reacts to this request to perform the appropriate internal configuration. The network may also decline if it cannot provision the necessary resources, called admission control (computer communications) or blocking (telecommunications).
Network-reactive signaling is not the only way to perform configuration at establishment; in fact, enhanced mechanisms may be needed in the future. Consider, for example, configuration of the bitrate needed for a given service. Bitways such as ATM will be capable of provisioning a wide range of bitrates, and yet may or may not have wireless access at one or both ends. A broadband bitway with or without wireless access may have quite distinct capabilities, and the source may therefore have to configure itself in response to the network. This requires either source-reactive signaling, or better yet a two-way negotiation between source and bitway. Another important example is pricing. If bitways price their service based on resources consumed, finding the desired trade-off of resources vs. price will require a negotiation, auction, or other two-way interaction.
It is also possible to coordinate a source and bitway dynamically during a session using flow control. This coordination approach is common for best-effort bitways, and is especially natural for reliable delivery protocols (like TCP/IP) since unacknowledged packets are an excellent estimate of traffic excesses. However, as bitway bitrates increase, propagation delay will remain constant, making flow control progressively less effective (due to the delay in receiving feedback from congestion bottlenecks coupled with more rapid variations in congestion). For the future, an interesting alternative for continuous media is to use a scalable source coding, which presents a set of N layers made visible to the bitway. The convention is that if the sink has available only layers (1,k), it can construct an increasingly accurate and subjectively pleasing representation as k increases. If the granularity of the layers is small and there are a large number of layers, there is no need for flow control since the bitway can simply throw away the highest layers as necessary. Scalable audio coding was used successfully two decades ago for voice transmission, and scalable video coders have recently been proposed.
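The layer-discarding idea can be sketched as follows: given a scalable coding presented to the bitway as cumulative layers 1..N, a congested link simply keeps the longest prefix of layers that fits the bandwidth it can offer, with no feedback to the source. This is a minimal sketch; the layer rates and the greedy prefix rule are illustrative assumptions:

```python
def drop_layers(layer_rates, available_rate):
    """Given a scalable coding with cumulative layers 1..N
    (layer_rates[k] is the bitrate of layer k+1, in Mb/s), keep the
    longest prefix of layers (1, k) that fits the available rate.
    The bitway discards the higher layers; no flow control needed."""
    kept, total = 0, 0.0
    for rate in layer_rates:
        if total + rate > available_rate:
            break
        total += rate
        kept += 1
    return kept, total

# Four layers of 1 Mb/s each; a congested link offers only 2.5 Mb/s,
# so layers 3 and 4 are discarded and the sink decodes layers (1, 2).
print(drop_layers([1.0, 1.0, 1.0, 1.0], 2.5))  # (2, 2.0)
```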
Coordination issues become much more serious for bitways supporting multicast connections (Figure 12). It is neither scalable nor reasonable to expect a source to deal with a multiplicity of downstream bitway links and sinks, including some dynamically entering or leaving the session. Experimental multicast source coders for continuous media have thus of necessity been scalable. The most interesting approaches to configuration are sink (rather than source) driven. In a typical approach, the sink subscribes to layers (1,k), makes an estimate of the resulting reliability (say by counting lost packets), and chooses to either increase or decrease k based on that estimate.
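Such a sink-driven adaptation loop might be sketched as follows; the loss thresholds are illustrative assumptions, not taken from any particular published protocol:

```python
def adapt_subscription(k, loss_rate, n_layers,
                       drop_threshold=0.05, add_threshold=0.01):
    """Receiver-driven adaptation sketch: the sink subscribes to
    layers (1, k), measures its packet loss over some interval, and
    adjusts k without any coordination with the source."""
    if loss_rate > drop_threshold and k > 1:
        return k - 1          # congestion: unsubscribe the highest layer
    if loss_rate < add_threshold and k < n_layers:
        return k + 1          # headroom: probe by adding one more layer
    return k                  # in between: hold the subscription steady

# A sink starts at k = 3 of 5 layers and reacts to measured loss.
k = 3
for measured_loss in (0.12, 0.08, 0.0, 0.0):
    k = adapt_subscription(k, measured_loss, n_layers=5)
    print(k)                  # 2, 1, 2, 3
```

Note that all intelligence resides in the sink: the source emits a fixed set of layers, which is what makes the scheme scalable to many heterogeneous sinks.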
Terminal-to-network coordination is an area of great divergence between the traditional approaches of the computing and telecommunications communities. It is also arguably an area of great challenge for the future, with many competing approaches, as well as new requirements such as multicast. Scalable source coding and reliability measurement will be a profitable area for research.
Telecommunications has traditionally focused on connection-oriented transport, where information is constrained to traverse the same route from source to destination. This approach enables resource allocation along that route to actively control QoS. Computer networks sometimes use connectionless transport, where information is routed dynamically according to congestion and availability. This approach is a natural outgrowth of the "efficiency by statistical means" orientation, and has many advantages, such as robustness to failure, ability to dynamically route around points of congestion, and the absence of state in the bitway (a tremendous simplification to the software). On the other hand, it makes QoS guarantees difficult to realize.
This distinction is narrowing. The Internet, while considering various forms of QoS guarantees for the future, as well as new services like multicast, is adding functionality and state information to the bitway, defining what are in effect connections. ATM retains the notion of a "virtual circuit", or the fixed route that packets traverse, while not constraining that virtual circuit to be fixed throughout a session. This architecture enables faster routing and reduces the addressing overhead per packet (since only local addressing is required), which is important because of the small packets. There is a desire to realize connectionless IP service on public wide-area ATM networks, which requires a layer of connectionless routers interconnected by ATM virtual circuits. Issues of mobility raise another layer of complications for connection-oriented protocols, since connections must be destroyed and re-established dynamically during a session.
The evolving approaches to connections (or lack thereof) can only be described as chaotic at present, although as driven by QoS considerations there is a definite trend toward connection-oriented protocols.
Telecommunications has come full circle. The earliest electromechanical relay switches relied on a self-routing strategy for telephone calls, but with the advent of stored-program control a more centralized configuration strategy was followed based on an out-of-band signaling network, where the control and knowledge of service semantics reside primarily in the switches. As telephone switch software has suffered from inflexibility and runaway complexity, the problems of centralized control have become evident. More recently, ATM bitways have generally adopted a "command and control" approach, utilizing for example the legacy SS7 signaling network for control of ATM switches, even though ATM could be quite amenable to a distributed control approach similar to the Internet.
The computer communications community has followed a diametrically opposite approach, in which control within the network is consciously minimized. In the Internet, the network typically does not store any state of particular TCP connections, but rather distributes that state information into the stream of packets passing through the network. Routing tables do reside in the network, but they are updated through a distributed adaptive algorithm. This philosophy has been successfully extended to multicast connections in the Mbone. A considerable burden is put on the terminal nodes to retain knowledge of connections, perform flow control, and ensure reliability through ARQ protocols, consistent with the rapidly declining cost of the required processing. Cognizance of application semantics is strictly reserved for the terminals. This approach has proven quite effective at containing complexity, and also at maintaining flexibility for ready deployment of new applications, since no upgrades to the network itself are needed. Running counter to this is the client-server architecture used to realize user-to-user applications (see Figure 4), which can be considered a centralized control at the application layer (with the important distinction that the servers are administered separately from the bitway).
Regardless of the network control, application functionality will migrate to the terminals. This is consistent with the increasing cost-effectiveness of terminal intelligence, and offers compelling advantages in flexibility and rapid innovation. This trend will accelerate as network deployment becomes widespread. It raises a number of issues relative to the transition that may occur, especially in the traditional telecommunications infrastructure. One approach is to encapsulate the existing centrally controlled telephone network for interface to computer applications, as in the Telephony Services API and Windows Telephony API. Another approach would be to migrate to ATM bitways, which accommodate more directly a distributed control model.
Since telecommunications has traditionally provisioned a small set of functionally simple "universal" applications, it has focused on interconnection as a basic issue. The goal has been to attract as many customers as possible, and fully interconnect them utilizing standardized protocols. Networked computing, on the other hand, focusing on a large number of functionally complex applications, has placed more emphasis on interoperability. How can the distributed pieces of a networked application interact properly in accordance with their shared functionality and communication protocols?
Looking to the future, interoperability will be an increasing issue for the converging infrastructure. Approaches to interoperability that avoid cumbersome standardization at the application layer are immature, as there are competing approaches with different strengths and weaknesses. The distributed operating system attempts to make a distributed collection of processors appear as one entity, whereas distributed object-based programming models explicitly highlight the distributed environment by structuring the distributed application as a set of autonomous interacting agents or objects. The virtual machine (Section 3.7) follows the object model, but with the twist of transportable computation. All these approaches fall within the category of middleware, although the boundary between operating system and middleware is fluid. It seems that the distributed operating system model is an option only for coordinated "intranets" (internets under the control of a single organizational entity), while the strength of the virtual machine is its applicability to the general public network. However, a great deal of research is needed to establish the best approaches, presumably merging the best features of these disparate models and defining new ones.
In a software implementation, there are alternative implementation styles that also have significant impact on issues like application deployment. The highest-performance approach uses embedded computing, in which a processor dedicated to a single function or application is embedded within a larger system, with a minimal operating system, a highly optimized special-purpose instruction set, optimized code (perhaps even written in assembly language), etc. Such tuned software implementations have been used extensively for digital signal processing functions in telecommunications.
Where lower performance is acceptable, a software implementation on a general-purpose (often desktop) computer can serve a variety of functions simultaneously. This approach is very flexible, but current desktop operating systems typically do not support resource reservation for a given application to guarantee, for example, real-time performance. (There is no fundamental reason they can't, however.) On the other hand, as processor speeds increase in relation to the application, a point is reached at which it no longer matters (desktop computers are completely adequate for audio applications today).
Perhaps the role of embedded computing will be reduced in the future with advances in technology. However, once again the role of communications in computing looms large. When a computer is networked at sufficient speed, the need for aggregating within it a variety of functions like memory, storage, etc. becomes less compelling, because some of those functions become available on the network at sufficient levels of performance. Thus, one can envision in the future embedded computers that serve the single function of running dynamically distributed applications with a minimum of local storage and peripherals. In a sense, this is a hybrid of the two models of computing, since such a computer would be dedicated to running a single interpreter (and hence is embedded) very efficiently, and at the same time able to serve a variety of applications (represented by the interpreted programs).
Once again, we see technology taking a full circle. At one time there were dedicated computer designs for word processing, computer-aided design, etc. Advances in technology obsoleted this approach, as users preferred a general-purpose machine. In the future, it is possible that dedicated interpreter engines for dynamically distributed network applications will reappear, partially obsoleting the general-purpose computer.
Historically, both computers and communications networks were relatively homogeneous entities. The modern digital telephone network, for example, at its heart provisions a single service, the 64 kb/s connection-oriented bit stream. Likewise, most terminals (telephones) perform a basic analog voiceband channel function. Before networking of computers, the application developer only had to worry about a single homogeneous platform.
We are entering a challenging age of heterogeneity. Heterogeneity will occur at several levels: in the terminals, in the transport bitways, and in the services and applications themselves.
Due to network externality, there is a strong economic push toward universal interoperability among terminals, at least for the most common services and applications, irrespective of the details like terminal type or capability, terminal manufacturer, bitway, etc. The user wants applications to operate seamlessly across this infrastructure, configuring themselves to the infrastructure. This problem is most serious for continuous-media services, where the issue is not simply functional interoperability, but also matching resources to achieve QoS guarantees and required processing performance levels.
Historically, the telecommunications industry has pursued an end-to-end application in a vertically integrated architecture, like telephony or video conferencing. Where heterogeneity has existed in telecommunications, the approach has been to partition the subsystems at the service level. For example, wireless cellular telephony is assembled by concatenating a wireless voiceband telephone channel with a wired voiceband channel; in other words, in the base station, a voiceband telephone channel is the assumed application. Looking ahead to horizontal integration, where there will be many different services co-existing within the same facilities, this approach will not work. It will not be possible to embed within the bitways assumptions about the services being carried, without introducing a large element of complexity and inflexibility.
The different path that is necessary for the future is to modularize bitways from the services and applications insofar as possible, with coordinated resource-allocation. The services and applications will need to adapt to a variety of heterogeneous terminal and transport configurations, as well as resource allocations, and conversely the transport and terminals will need to attempt to accommodate the differing needs of a variety of services and applications. All constituent fields will need to concentrate less on point solutions to narrowly defined problems, and more on coordination to achieve objectives like interoperability and QoS on an end-to-end system-level basis.
Networks of the future will need to satisfy a variety of requirements, which are unfortunately interrelated and interdependent. Among them, we can cite objectives discussed above, such as interoperability, scalability, QoS guarantees, and complexity management.
All of these important objectives interact, and are sometimes at cross purposes. Finding a reasonable compromise among these objectives will require carefully crafted architectural concepts. A key question is what horizontal interfaces should be established. Another question is how we avoid a proliferation of multiple interfaces that have not only different syntactical structure (a minor problem), but also present different semantic models of the underlying functionality. (For example, can we define parameterized QoS models that fit universally across radically different transport media like congestion-dominated backbone bitways and interference-dominated wireless access links?)
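As a thought experiment, a parameterized QoS model spanning the three attributes discussed earlier (rate, latency, reliability) might look like the following sketch. The field names and the componentwise "satisfies" ordering are hypothetical, intended only to make the question concrete, not a proposal for such a universal model:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class QoS:
    """A hypothetical parameterized QoS model over the three attributes
    discussed above; whether one parameterization can span both
    congestion-dominated backbones and interference-dominated wireless
    access links is exactly the open question posed in the text."""
    rate_bps: float     # sustained bitrate guarantee
    latency_ms: float   # end-to-end delay bound
    loss_rate: float    # tolerable fraction of lost or corrupted data

def satisfies(offered: QoS, required: QoS) -> bool:
    """An offered QoS meets a requirement if it is at least as good
    in every attribute (more rate, less latency, less loss)."""
    return (offered.rate_bps >= required.rate_bps
            and offered.latency_ms <= required.latency_ms
            and offered.loss_rate <= required.loss_rate)

# A voice service requirement tested against one link's offering.
voice = QoS(rate_bps=64_000, latency_ms=150, loss_rate=1e-2)
link = QoS(rate_bps=1_500_000, latency_ms=40, loss_rate=1e-3)
print(satisfies(link, voice))  # True
```

Even this toy model exposes the semantic problem: on a wireless link, "loss_rate" is dominated by interference rather than congestion, so the same parameter names may carry quite different meanings across transport media.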
Once such architectural concepts are established, there are numerous detailed research issues that are stimulated in areas like compression, error-correction coding and modulation, and encryption. In particular, the nature of the overall network design problem forces much greater attention to architectural issues, and much greater influence of architectural issues on detailed research areas like signal processing and networking. This is a systematic way of coordinating the activities in these detail areas to meet the many interacting objectives mentioned above.
There is inadequate research bridging the signal processing and networking worlds, and likewise inadequate research bridging the backbone and wireless access worlds. Today the important constraints introduced by the wireless access bottleneck are largely unrepresented in the design of backbone networks.
One impact of the coming heterogeneity at the application, transport, and terminal levels is the critical importance of complexity management. Complexity management has traditionally been a dominant consideration in the design of software systems, but is now also a dominant consideration in the larger context of large-scale systems including hardware, software, and physical channels. A whole host of techniques, many of them developed in the context of software system engineering, become important, such as architecture, modularity, and abstraction. More than anything, complexity management is a manner of thinking about system design. There is a need for the infusion of this complexity-management thinking throughout the domain of communications and computing, not just software design.
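The techniques named above (modularity, abstraction) can be illustrated in miniature: a service layer programs only against an abstract bitway interface, so physically different channels can be substituted without touching the layer above. The classes and the crude corruption model are invented for illustration.

```python
# Illustrative sketch of complexity management through abstraction:
# each layer depends only on the interface below it.

class Bitway:
    """Abstract interface every transport implementation must export."""
    def send(self, payload: bytes) -> bytes:
        raise NotImplementedError

class FiberBitway(Bitway):
    def send(self, payload):
        return payload                   # effectively error-free channel

class WirelessBitway(Bitway):
    def send(self, payload):
        return payload[:-1] + b"?"       # crude model: last byte corrupted

class Service:
    """A service layer adding a checksum, unaware of bitway internals."""
    def __init__(self, bitway: Bitway):
        self.bitway = bitway

    def transfer(self, payload: bytes) -> bool:
        # Append a one-byte checksum, send, and verify on "receipt".
        received = self.bitway.send(payload + bytes([sum(payload) % 256]))
        return sum(received[:-1]) % 256 == received[-1]
```

The service detects the wireless corruption and passes the fiber transfer without containing a single line specific to either medium; that containment of detail behind an interface is the essence of the complexity-management techniques discussed here.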
In the environment of converged telecommunications and computing, the old-style design problem embodied in one organization presenting a complete end-to-end turnkey solution is gone. Rather, many vendors are participating, in effect, in the collective design of the infrastructure of the future. Such designs must take into account numerous external considerations, such as network externality, standards (or the lack thereof), interoperability, adaptability, and etiquette. The lowered barriers to application development embodied in the migration from vertical to horizontal architectures have played, and will continue to play, an important role in industrial organization. Considerations such as these play a seminal role in the design of products, and should also have a larger presence in research and engineering education.
New developments like platform independence, network deployment, and dynamic deployment will create an environment in which innovation in user-to-user applications has characteristics similar to user-to-information-server applications; namely, a rapidly evolving and fragmented application space. As in client-server computing, this will be a fertile field for research.
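Dynamic deployment, in the spirit of the mobile-code systems of this period (Java applets, Safe-Tcl, Telescript), can be sketched minimally: application code arrives over the network and is instantiated on the terminal at run time. Here the "network" is simulated by a string, and the namespace restriction is only a gesture at the authentication and sandboxing a real system would require; everything in the sketch is hypothetical.

```python
# Illustrative sketch of dynamic deployment: code received over the
# network is executed locally, making the application available on a
# terminal that never had it installed.

RECEIVED_SOURCE = '''
def greet(peer):
    return "hello, " + peer
'''

def deploy(source: str) -> dict:
    """Execute received code in a fresh, restricted namespace."""
    namespace = {"__builtins__": {}}   # crude sandbox: no built-ins exposed
    exec(source, namespace)
    return namespace

app = deploy(RECEIVED_SOURCE)
# The peer-to-peer application is now usable on this terminal:
# app["greet"]("bob") returns "hello, bob"
```

The significant property is that the terminal is a generic programmable platform: the user-to-user application is defined entirely in software distributed over the network itself, which is precisely what moves such applications into the rapid-innovation regime described above.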