Abstract
Two scaling problems face the Internet today. First, it will be years before terrestrial networks are able to provide adequate bandwidth uniformly around the world, given the explosive growth in Internet bandwidth demand and the amount of the world that is still unwired. Second, the traffic distribution is not uniform worldwide: Clients in all countries of the world access content that today is chiefly produced in a few regions of the world (e.g., North America). A new generation of Internet access built around geosynchronous satellites can provide immediate relief. The satellite system can improve service to bandwidth-starved regions of the globe where terrestrial networks are insufficient and supplement terrestrial networks elsewhere. This new generation of satellite system manages a set of satellite links using intelligent controls at the link endpoints. The intelligence uses feedback obtained from monitoring end-user behavior to adapt the use of resources. Mechanisms controlled include caching, dynamic construction of push channels, use of multicast, and scheduling of satellite bandwidth. This paper discusses the key issues of using intelligence to control satellite links, and then presents as a case study the architecture of a specific system: the Internet Delivery System, which uses INTELSAT's satellite fleet to create Internet connections that act as wormholes between points on the globe.
Introduction
Satellites have been used for years to provide communication network links. Historically, the use of satellites in the Internet can be divided into two generations. In the first generation, satellites were simply used to provide commodity links (e.g., T1) between countries. Internet Protocol (IP) routers were attached to the link endpoints to use the links as single-hop alternatives to multiple terrestrial hops. Two characteristics marked these first-generation systems: they had limited bandwidth, and they had large latencies that were due to the propagation delay to the high orbit position of a geosynchronous satellite.
In the second generation of systems now appearing, intelligence is added at the satellite link endpoints to overcome these characteristics. This intelligence is used as the basis for a system for providing Internet access engineered using a collection or fleet of satellites, rather than operating single satellite channels in isolation. Examples of intelligent control of a fleet include monitoring which documents are delivered over the system to make decisions adaptively on how to schedule satellite time; dynamically creating multicast groups based on monitored data to conserve satellite bandwidth; caching documents at all satellite channel endpoints; and anticipating user demands to hide latency.
This paper examines several key questions arising in the design of a satellite-based system:
Can international Internet access using a geosynchronous satellite be competitive with today's terrestrial networks?
What elements constitute an "intelligent control" for a satellite-based Internet link?
What are the design issues that are critical to the efficient use of satellite channels?
The paper is organized as follows. The next section, Section 2, examines the above questions in enumerating principles for second-generation satellite delivery systems. Section 3 presents a case study of the Internet Delivery System (IDS), which is currently undergoing worldwide field trials.
Issues in second-generation satellite link control
We discuss in this section each of the questions raised in this paper's introduction.
Can international Internet access using a geosynchronous satellite be competitive with today's terrestrial networks?
The first question is whether it makes sense today to use geosynchronous satellite links for Internet access. Alternatives include wired terrestrial connections, low earth orbiting (LEO) satellites, and wireless wide area network technologies (such as Local Multipoint Distribution Service or 2.4-GHz radio links in the U.S.).
We see three reasons why geosynchronous satellites will be used for some years to come for international Internet connections. The first reason is obvious: it will be years before terrestrial networks are able to provide adequate bandwidth uniformly around the world, given the explosive growth in Internet bandwidth demand and the amount of the world that is still unwired. Geosynchronous satellites can provide immediate relief. They can improve service to bandwidth-starved regions of the globe where terrestrial networks are insufficient and can supplement terrestrial networks elsewhere.
Second, geosynchronous satellites allow direct single-hop access to the Internet backbone, bypassing congestion points and providing faster access time and higher net throughputs. In theory, a bit can be sent the distance of an international connection over fiber in a time on the order of tens of microseconds. In practice today, however, international connections via terrestrial links are orders of magnitude slower. For example, in experiments we performed in December 1998, the mean round-trip times between the U.S. and Brazil (vt.edu to embr.net.br) over terrestrial links were 562.9 msec (via teleglobe.net) and 220.7 msec. In contrast, the mean latency between the two routers at the two endpoints of a satellite link between Bangladesh and Singapore, measured in February 1999, was 348.5 msec. Moreover, a geosynchronous satellite has a sufficiently large footprint over the earth that it can be used to create wormholes in the Internet: constant-latency transit paths between distant points on the globe [Chen]. The mean latency of an international connection via satellite is thus competitive with today's terrestrial-based connections, and the variance in latency can be lower.
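The latency of a geosynchronous hop follows directly from orbital geometry. As a rough back-of-envelope check (not taken from the paper's measurements), the sketch below computes the minimum one-way propagation delay for a single geosynchronous hop, assuming the nominal GEO altitude and ignoring slant range, queueing, and terrestrial tails; it matches the "quarter-second" figure cited later in this paper.

```python
# Back-of-envelope check (not from the paper's measurements): minimum one-way
# propagation delay over a single geosynchronous hop, assuming the satellite
# sits at the nominal GEO altitude directly above the path and ignoring slant
# range, queueing, and terrestrial tails.
SPEED_OF_LIGHT_M_S = 299_792_458
GEO_ALTITUDE_M = 35_786_000          # nominal geosynchronous altitude

one_way_delay_s = 2 * GEO_ALTITUDE_M / SPEED_OF_LIGHT_M_S   # ground -> satellite -> ground
print(f"one-way GEO hop: {one_way_delay_s * 1000:.0f} ms")  # roughly 240 ms, about a quarter second
```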
As quality-of-service (QoS) guarantees are introduced by carriers, the mean and variance in latency should go down for international connections, reducing the appeal of geosynchronous satellites. However, although QoS may soon be widely available within certain countries, it may be some time until it is available at low cost between most countries of the world.
A third reason for using geosynchronous satellites is that the Internet's traffic distribution is not uniform worldwide: clients in all countries of the world access content (e.g., Web pages, streaming media) that today is chiefly produced in a few regions of the world (e.g., North America). This implies that a worldwide multicast architecture that caches content on both edges of the satellite network (i.e., near the content providers as well as near the clients) could provide improved response time to clients worldwide. We use this traffic pattern in the system described in the case study (Section 3).
One final point of interest is to ask whether LEO satellites that are being deployed today will displace the need for geosynchronous satellites. The low orbital position makes the LEO footprint relatively small. Therefore, international connections through LEOs will require multiple hops in space, much as today's satellite-based wireless phone systems operate. The propagation delay will eliminate any advantage that LEOs have over geosynchronous satellites. On the other hand, LEOs have an advantage: they are not subject to the constraint in orbital positions facing geosynchronous satellite operators. So the total available LEO bandwidth could one day surpass that of geosynchronous satellites.
What elements constitute an "intelligent control" for a satellite-based Internet link?
The basic architecture behind intelligent control for a satellite fleet is to augment the routers at each end of a satellite link with a bank of network-attached servers that implement algorithms appropriate for the types of traffic carried over the links. We use certain terminology in our discussion. First, given the argument above for asymmetric traffic, our discussion is framed in terms of connecting content providers (in a few countries) to end users (in all countries). In some cases (e.g., two-way audio), however, the traffic may be symmetrical. Second, we refer to the content-provider endpoint of a satellite link as a warehouse, and the end-user endpoint as a kiosk. The architecture of warehouses and kiosks must be scalable: The number of servers, storage capacity, and throughput of warehouses and kiosks must scale as the number and bandwidth of satellite links, content providers, and end users grow.
Content providers are connected via the terrestrial Internet to a router inside a warehouse. The router also connects to a local area network that interconnects various servers. The router also connects to the earth station for the satellite. Within the footprint of the satellite are many groundstations, each connected to a router within a kiosk. The kiosk is similar to the warehouse in that it connects to a local area network that interconnects servers, and optionally, to a terrestrial Internet connection. The kiosk also acts as the head end for Internet service providers (ISPs) that provide network connections to end users. More details are given in the case study in Section 3.
Intelligent controls reside in the warehouse and kiosk and are required to share limited satellite bandwidth among many users and to hide the quarter-second latency of a geosynchronous satellite. The controls are a distributed algorithm, in which part runs on warehouses and part runs on kiosks. All warehouses and kiosks must cooperate and must coordinate the use of satellite resources. Multicast groups are defined to allow communication between cooperating entities (e.g., between a warehouse and multiple kiosks).
To identify which controls make sense, it is useful to look at the characteristics of Internet traffic. Figure 2 divides the traffic into six categories. Three of them represent Web pages: pages that are popular for months or longer (e.g., a news service such as cnn.com); pages that are popular for a short time (e.g., hours, days, or weeks, such as those resulting from Olympic games); and pages that are accessed only a few times. One of the facts known about this traffic is that most of the requests and most of the bytes transferred in client workloads come from a small number of servers. For example, in a study of proxy or client uniform resource locator (URL) reference traces from Digital Equipment Corporation (DEC), America Online, Boston University, Virginia Tech, a gateway to South Korea, and one high school, 80% to 95% of the total accesses went to 25% of the servers.
The next category of traffic in Figure 2 is push channels. This consists of a collection of media that a content provider assembles and distributes, for example using the proposed World Wide Web Consortium (W3C) Information and Content Exchange (ICE) protocol. The remaining two categories are real-time traffic, such as streaming audio or video from a teleconference, and what we call timely but not real time. This last category includes information that is updated periodically and has a certain lifetime, such as financial quotes and Network News Transfer Protocol (NNTP).
The point of categorizing traffic is that different intelligent controls are needed for different categories of traffic. The following are mechanisms used in the case study of Section 3:
Caching of both categories of popular URLs and push channels should be done at both the warehouse and the kiosk. Caching at the kiosk side obviously avoids the satellite delay when an end-user requests a popular document. Caching at the warehouse is desirable to decouple the process of retrieving documents from the content providers (i.e., the path between content providers and the warehouse in Figure 1) and the process of scheduling multicast transmission of documents from a warehouse via satellite to kiosks. In addition, the warehouse reduces cache consistency traffic, because consistency traffic occurs only between the content providers and the warehouse. The kiosks do not need consistency checks, because they rely on the warehouse to send them updated pages when the warehouse detects inconsistency.
Feedback, in the form of logs of requests for documents, channel content, real-time streams, and timely documents, is requested from the kiosk end of satellite connections by the warehouse side for use in adaptive algorithms.
Unpopular pages may be cached at an individual kiosk only, and retrieved from the Internet using a terrestrial link if available. Only if feedback from the kiosk logs sent to the warehouse shows that a document is popular among multiple kiosks does the document get reclassified as "popular for short term" and hence cached at the warehouse.
To hide latency, pages and updates to the content of changed pages could be preemptively delivered across the satellite link based on the feedback of logs.
Push channels could be dynamically constructed by identifying which Web documents have become popular.
Multicast could be used to deliver push channels, real-time streams and timely documents based on subscriptions.
The bandwidth available on satellite links could be allocated based on traffic categories (a minimal allocation sketch follows this list).
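As one illustration of the last mechanism, the sketch below splits a satellite channel's capacity across traffic categories in proportion to operator-set weights. The category weights are hypothetical, not values from IDS; the 2-Mbps figure matches the channel size mentioned in the case study.

```python
# Minimal sketch (not the IDS scheduler): proportionally splitting a satellite
# channel's bandwidth across traffic categories.  The weights are hypothetical
# operator-set values, not figures from the paper.
CHANNEL_BPS = 2_000_000   # e.g., the 2-Mbps channel described in Section 3

category_weights = {      # hypothetical relative priorities
    "long-term popular pages": 3,
    "push channels": 2,
    "short-term popular pages": 2,
    "real-time streams": 2,
    "timely (non-real-time)": 1,
}

total_weight = sum(category_weights.values())
allocation = {name: CHANNEL_BPS * w / total_weight for name, w in category_weights.items()}

for name, bps in allocation.items():
    print(f"{name:28s} {bps / 1000:7.0f} kbps")
```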
Case study: Internet Delivery System
IDS uses
multicast transmission to share channel bandwidth with users in many countries
caching (e.g., Table 1) at both ends of the satellite link to hide or avoid latency, in the form of large (terabyte-size) content warehouses and kiosks
automated monitoring of user behavior to dynamically create multicast push channels of content
proactive content refreshing that updates inconsistent cached documents before users request those documents
The objective of the IDS is to provide fast and economical Internet connectivity worldwide. IDS also facilitates Internet access to parts of the globe that have poor terrestrial connectivity. IDS achieves these goals by two means:
1. creating satellite-based wormholes [Chen], from content providers to geographically distant service providers, thus providing a fast path from one edge of the network to the other
2. caching content such as HTTP, File Transfer Protocol (FTP), NNTP, and streaming media at the content-provider end as well as the service-provider end, thus conserving bandwidth
The idea for the IDS was conceived at INTELSAT, an international organization that owns a fleet of geostationary satellites and sells space segment bandwidth to its international signatories. Work on the prototype started in February 1998. In February 1999, the prototype system stands poised for international trials involving ten signatories of INTELSAT. A commercial version of IDS will be released in May 1999.
The building blocks of IDS are warehouses and kiosks. A warehouse is a large repository (terabytes of storage) of Web content. The warehouse is connected to the content-provider edge of the Internet by a high-bandwidth link. Given the global distribution of Web content today, an excellent choice for a warehouse could be a large data-center or large-scale bandwidth reseller situated in the U.S. The warehouse will use its high-bandwidth link to the content providers to crawl and gather Web content of interest in its Web cache. The warehouse uses an adaptive refreshing technique to assure the freshness of the content stored in its Web cache. The Web content stored in the warehouse cache is continuously scheduled for transmission via a satellite and multicast to a group of kiosks that subscribe to the warehouse.
The centerpiece of the kiosk architecture is also a Web cache. Kiosks represent the service-provider edge of the Internet and can therefore reside at national service providers or ISPs. The storage size of a kiosk cache can vary from a few gigabytes to terabytes. Web content multicast by the warehouse is received, filtered against the kiosk's subscriptions, and subsequently pushed into the kiosk cache. The kiosk Web cache also operates in the traditional pull mode. All user requests for Web content to the service provider are transparently intercepted and redirected to the kiosk Web cache. The cache serves the user request directly if it has the requested content; otherwise, it uses its link to the Internet to retrieve the content from the origin Web site. The cache stores a copy of the requested content while passing it back to the user who requested it.
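The following is a minimal sketch of the pull mode just described: the kiosk cache serves hits locally and fetches misses from the origin server over its terrestrial link, storing a copy on the way back. The in-memory dictionary stands in for the kiosk Web cache; none of the names are IDS interfaces.

```python
import urllib.request

# Sketch of the kiosk cache's pull mode: serve from cache on a hit, otherwise
# fetch from the origin server and keep a copy.  The dict stands in for the
# kiosk Web cache; names are illustrative, not IDS interfaces.
kiosk_cache = {}  # url -> body

def handle_request(url: str) -> bytes:
    if url in kiosk_cache:                      # cache hit: serve locally, no origin fetch
        return kiosk_cache[url]
    with urllib.request.urlopen(url) as reply:  # cache miss: go to the origin server
        body = reply.read()
    kiosk_cache[url] = body                     # store a copy on the way back to the user
    return body
```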
The layout for an IDS prototype warehouse and kiosk is shown in Figure 1. The prototype warehouse consists of two server-class Pentium II-based machines: an application server and a cache server. The cache server houses a Web cache and other related modules. The Web cache at the warehouse has 100 gigabytes of storage. The application server is host to a transmitter application, a relational database, and a Java-based management application. These servers reside on a dedicated subnet of the warehouse network. This subnet is connected to a multicast-enabled router that routes all multicast traffic to a serial interface for uplinking to the INTELSAT IDR system. The INTELSAT IDR system provides IP connectivity, over a 2-Mbps satellite channel, between the warehouse and kiosks.
The prototype kiosk also contains a Pentium II-based application server and a Pentium II-based cache server. The kiosk cache server houses a Web cache with 50 gigabytes of storage. The application server is host to a receiver application, a relational database, and a Java-based configuration and management application. These servers reside on a dedicated subnet of the kiosk network. This subnet is connected to a multicast-enabled router. An important part of the prototype kiosk is a layer-4 server switch [Williams], which is used to transparently redirect all HTTP (Transmission Control Protocol/port 80) user traffic to the kiosk cache server.
IDS treats Web content as composed of six traffic categories as categorized in Figure 2. These categories are summarized in Table 1 below. Type A traffic consists of HTTP Web content that is identified by a human operator as content that should remain popular over a long time (e.g., months). This may include popular news Web sites such as the Cable News Network (CNN) Web site. Type B traffic refers to HTTP Web content directly pushed into the warehouse by subscribing content providers. Type E traffic refers to unicast HTTP user request-reply traffic that passes through the kiosk and is not cached at the kiosk. The reply for a type E request is cached at the kiosk on its return path from the origin server. As requests for a particular URL accumulate at multiple kiosks, such a hot-spot URL is converted from type E to type C. Type D traffic refers to real-time streaming traffic. Type F traffic refers to semi-real-time reliable traffic such as financial quotes and NNTP. Traffic of types A, B, C, D, or F is multicast to all kiosks and pushed to subscribing kiosks.
The IDS prototype implements traffic types A, C, and E. Figure 3 below shows the flow of traffic types A, C, and E through the IDS system. Type A traffic is defined by the warehouse operator by entering popular URLs through the warehouse management interface. The warehouse operator also classifies URLs into channels as part of creating type A content. Once created, content belonging to type A is registered in the relational database and subsequently crawled from the Web and stored in the warehouse Web cache. The warehouse refreshes content of type A from the origin servers based on an adaptive refresh algorithm. Content of type A is also continuously multicast by the transmitter application to the kiosks. At the kiosk, the receiver application filters the incoming multicast traffic, thus accepting only the subset of traffic that belongs to channels subscribed at the kiosk. Filtered content is then pushed into the Web cache at the kiosk.
Traffic of type E originates as an HTTP request from kiosk end users. The request is transparently redirected by the layer-4 switch at the kiosk to the kiosk cache. If the requested content is not found in the kiosk cache, then that request is routed to the origin server. The reply from the origin server is cached at the kiosk Web cache on its way back to the end user who made the request. In Figure 3, the path for unicast type E traffic is shown as going through the satellite back channel. It must be noted, however, that type E traffic bypasses all warehouse components and is routed to the Internet.
On a periodic basis, the warehouse polls all subscribing kiosks for hit statistics regarding the type E content in their respective Web caches. Using this information and appropriate business rules specified by the management application at the warehouse, the warehouse converts a subset of type E content to type C. Once type C content has been created, the data flow for this traffic type follows the same path as described above for traffic type A.
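A minimal sketch of this promotion step follows: per-URL hit counts reported by the kiosks are aggregated at the warehouse, and URLs whose total demand crosses an operator-set threshold are reclassified from type E to type C. The thresholds and data shapes are hypothetical placeholders for the business rules mentioned above.

```python
from collections import Counter

# Sketch of type E -> type C promotion.  Each kiosk reports per-URL hit counts
# for type E content; the warehouse aggregates them and promotes URLs whose
# demand crosses a threshold.  Thresholds are illustrative, not IDS rules.
PROMOTION_THRESHOLD = 100   # hypothetical total hits across all kiosks
MIN_KIOSKS = 2              # hypothetical: must be popular at more than one kiosk

def promote_type_e(kiosk_reports: list[dict[str, int]]) -> set[str]:
    totals = Counter()
    seen_at = Counter()
    for report in kiosk_reports:                # one dict of url -> hits per kiosk
        for url, hits in report.items():
            totals[url] += hits
            seen_at[url] += 1
    return {url for url, hits in totals.items()
            if hits >= PROMOTION_THRESHOLD and seen_at[url] >= MIN_KIOSKS}
```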
IDS architecture
Components at the warehouse
The IDS warehouse is composed of four major components, namely the cache subsystem, transmission subsystem, management subsystem, and database subsystem. Figure 4 shows the major components of the warehouse along with their interconnections.
The cache subsystem consists of a cluster of standard Web caches that communicate among each other using standard protocols such as Internet Cache Protocol (ICP). For the IDS prototype, we have a single Squid cache. The cache subsystem also consists of refresh and crawl modules that communicate with the Web cache(s) using HTTP and are responsible for proactively refreshing and crawling newly created type A or C content from origin Web servers, respectively. The log module in the cache subsystem parses standard logs from the Web cache and communicates hit-metering data to the database subsystem.
The transmission subsystem contains scheduling and gathering modules. These modules perform the following functions (a minimal bundling sketch follows the list):
obtain from the database subsystem lists of URLs belonging to content of types A and C that must be transmitted
obtain objects corresponding to the URLs from the Web cache
append to each object a bit map denoting the kiosks that subscribe to the channel associated with its URL
construct object bundles sized for optimal transmission
transfer the bundles to the transmitter
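The sketch below illustrates the gathering and bundling steps listed above: each object is paired with a bit map of subscribing kiosks, and objects are packed into bundles of a target size. The bundle size, bit-map encoding, and data shapes are assumptions, not the IDS implementation.

```python
# Sketch of the gathering step: pair each object with a bit map of subscribing
# kiosks and pack objects into bundles of a target size for transmission.
# Bundle size, bit-map width, and data shapes are illustrative.
TARGET_BUNDLE_BYTES = 512 * 1024   # hypothetical "optimal" bundle size

def subscription_bitmap(channel: str, subscriptions: dict[int, set[str]]) -> int:
    """Bit i is set if kiosk i subscribes to the channel carrying this URL."""
    bits = 0
    for kiosk_id, channels in subscriptions.items():
        if channel in channels:
            bits |= 1 << kiosk_id
    return bits

def build_bundles(objects, channel_of, subscriptions):
    """objects: iterable of (url, body); yields lists of (url, bitmap, body)."""
    bundle, size = [], 0
    for url, body in objects:
        bundle.append((url, subscription_bitmap(channel_of[url], subscriptions), body))
        size += len(body)
        if size >= TARGET_BUNDLE_BYTES:
            yield bundle
            bundle, size = [], 0
    if bundle:
        yield bundle
```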
The transmitter module, also a part of the transmission subsystem, receives bundles and transmits them using the Multicast File Transfer Protocol from Starburst Communications.
The management subsystem is a Web-based graphical front end that communicates with the database subsystem and provides the warehouse operator with a tool to perform the following types of activities:
add popular content to the warehouse (traffic type A)
create channels and manually classify content into channels
configure system operational parameters such as thresholds for conversion of content from type E to type C
The database subsystem consists of the relational database, the Y module, and the mapper. The relational database contains persistent information about the content stored in the warehouse Web cache as well as URL hit statistics and channel and subscription information. The Y module performs three major tasks:
1. It periodically requests per-URL hit statistics for type E content stored in the Web caches of all kiosks.
2. Based on thresholds set at the warehouse, it converts a subset of E content to C.
3. It automatically channelizes the newly converted C content into the channels available at the warehouse.
Components at the kiosk
Like the warehouse, the IDS kiosk is also composed of four major components: (1) the cache subsystem, (2) the transmission subsystem, (3) the management subsystem, and (4) the database subsystem. Figure 5 shows the major components of the kiosk.
The cache subsystem at the kiosk consists of a cluster of standard Web caches, a layer-4 switch, and a log module. The cache cluster at the kiosk is identical to the one at the warehouse in all respects except one: the Web cache(s) at the kiosk are equipped to accept an HTTP push. The HTTP push method, which is described in detail in [Chen], enables the kiosk to directly push multicast Web content received from the warehouse into the kiosk cache(s). For the IDS prototype, we have a single Squid cache at the kiosk. The kiosk Web cache(s) are connected to the rest of kiosk network through a layer-4 switch. The layer-4 switch at the kiosk is configured to redirect all user HTTP-based traffic transparently to the Web cache(s). The log module accepts log data from the Web cache and inserts hit-metering data into the database subsystem.
The transmission subsystem at the kiosk contains a receiver module and several push clients. The receiver module performs the following functions:
receives Multicast File Transfer Protocol bundles
takes the bundles apart into separate HTTP objects
filters out objects that do not belong to channels subscribed at the kiosk by inspecting associated bit maps
passes the rest of the objects along with their URLs to the push clients
The push clients push all objects forwarded to them by the receiver into the Web cache(s) using the HTTP push method.
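A minimal sketch of the receiver and push-client steps above: unpack a bundle, discard objects whose subscription bit map does not include this kiosk, and push the remainder into the kiosk cache. The push_into_cache placeholder stands in for the HTTP push method of [Chen]; it is not an actual Squid interface.

```python
# Sketch of the kiosk receiver/push-client path.  The bit-map layout matches
# the bundling sketch above; push_into_cache is a placeholder for the modified
# cache's push interface, not a real Squid API.
KIOSK_ID = 3   # hypothetical identifier assigned to this kiosk

def push_into_cache(url: str, body: bytes) -> None:
    ...  # placeholder for the HTTP push into the kiosk Web cache

def handle_bundle(bundle):
    """bundle: list of (url, bitmap, body) as produced by the warehouse."""
    for url, bitmap, body in bundle:
        if bitmap & (1 << KIOSK_ID):       # subscribed channel -> keep the object
            push_into_cache(url, body)     # unsubscribed objects are silently discarded
```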
The management subsystem at the kiosk is a Web-based graphical front end that communicates with the database subsystem and provides the kiosk operator with a tool to perform the following types of activities:
subscribing to the channels made available by the warehouse
blocking specific URLs from being pushed into the kiosk Web cache
configuring system operational parameters
The database subsystem consists of the relational database and the Y module. The relational database contains persistent information about the content stored in the kiosk Web cache as well as URL hit statistics and channel and subscription information. The Y module at the kiosk transmits per-URL hit statistics for type E content stored in its Web cache when requested to do so by the warehouse.
Design issues and goals
In this section, we discuss some of the salient design goals that make IDS a unique system that fits its requirements.
Content refresh at the warehouse
Cache consistency techniques for Web caches are a well-debated topic. The literature describes two main techniques for maintaining fresh content in Web caches:
Time-to-live (TTL) fields, implemented using the "Expires" header field in HTTP, are used by content publishers to set a TTL for the objects they create. When the TTL for an object expires, the cache invalidates that object, and the next request for that object to the Web cache is directed to the origin server. The caveat in this strategy is that for a significant proportion of Web content, the TTLs are either incorrect or unspecified.
Client polling is a technique in which caches periodically query the origin server to determine whether an object has changed. The query frequency is a key factor in this technique. The Alex FTP cache, for example, used a client-polling scheme based on the assumption that "young files are modified more frequently than old files." A well-tuned client-polling algorithm can be more effective than a cache refresh policy based only on TTL.
In the IDS design, the warehouse maintains the freshness of objects residing within IDS (i.e., in the warehouse and kiosk Web caches). The IDS warehouse design includes an adaptive refresh client-polling technique that uses object TTLs as initial estimates for object refresh times. The client-polling technique is encapsulated in the following relationship:
C_{i+1} = C_i + f (C_i - M)
where C_i denotes the time when the ith query to check the freshness of an object was sent to the origin Web server, C_{i+1} denotes the estimated time for the (i + 1)th query, and M denotes the time when the object was last modified at the server. Finally, f denotes a constant factor. A desirable value of f can be determined by minimizing the number of queries i, such that a subsequent modification to the object after time M is discovered as quickly as possible. A value of 0.1 for f is suggested in the HTTP 1.1 request for comments.
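A short worked sketch of this relationship with f = 0.1: while an object remains unmodified, each successive check is scheduled 10% later than the last, so the polling interval backs off for stable objects. The starting times below are arbitrary, illustrative values.

```python
# Worked sketch of the adaptive refresh relation C_{i+1} = C_i + f(C_i - M),
# with f = 0.1 as suggested in the text.  Times are seconds since an arbitrary
# epoch; the object is assumed to have last changed at M = 0.
F = 0.1

def next_check(c_i: float, last_modified: float) -> float:
    return c_i + F * (c_i - last_modified)

c, m = 3600.0, 0.0           # first check one hour after the last modification
for i in range(5):
    print(f"query {i}: t = {c / 3600:.2f} h")
    c = next_check(c, m)     # interval grows by 10% while the object stays unchanged
```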
By having the warehouse refresh all the kiosks, IDS saves the client-polling bandwidth to origin servers. In addition, the refresh mechanism in IDS is sensitive to objects that change too frequently. Such frequently changing documents are tagged as uncacheable by the system.
Content prefetch
The IDS warehouse is designed to prefetch cacheable objects embedded in a cached Web page. The crawler module in IDS proactively parses cached objects for embedded cacheable objects, evaluates the embedded objects against evaluation parameters, and fetches them from their origin servers to be cached. The evaluation parameters, set through the management application, use the heuristic that objects associated with a popular object are likely to be popular also. Thus, caching of prefetched objects leads to better hit rates for the kiosk caches.
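A minimal sketch of this prefetch step follows, using Python's standard HTML parser to pull embedded object URLs (images, scripts, stylesheets) out of a cached page so they can be queued for crawling. The tag and attribute choices are a simple stand-in for the IDS evaluation parameters.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

# Sketch of prefetching: parse a cached HTML page for embedded objects and
# collect their absolute URLs for crawling.  The tag/attribute selection is a
# simple illustration of the heuristic, not the IDS evaluation parameters.
class EmbeddedObjectParser(HTMLParser):
    def __init__(self, base_url: str):
        super().__init__()
        self.base_url = base_url
        self.found = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("img", "script") and "src" in attrs:
            self.found.append(urljoin(self.base_url, attrs["src"]))
        elif tag == "link" and attrs.get("rel") == "stylesheet" and "href" in attrs:
            self.found.append(urljoin(self.base_url, attrs["href"]))

def embedded_objects(page_url: str, html: str) -> list[str]:
    parser = EmbeddedObjectParser(page_url)
    parser.feed(html)
    return parser.found
```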
Content rerun -- kiosk fault tolerance
Along with new and updated Web content, the IDS warehouse constantly multicasts all information cached in its Web cache to the kiosks. This design feature provides an automatic recovery for kiosks that were offline for a certain period of time. It automatically brings new kiosks up-to-date as well.
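A minimal sketch of this rerun behavior: the warehouse cycles through its cached URLs indefinitely and hands each one back to the multicast transmitter, so kiosks that were offline, or that are newly installed, eventually receive the full content set. The pacing and the transmit placeholder are assumptions, not IDS code.

```python
import itertools
import time

# Sketch of "content rerun": endlessly cycle over the warehouse cache and
# re-multicast each object so offline or new kiosks converge on the full
# content set.  transmit() is a placeholder for the multicast transfer step.
def transmit(url: str) -> None:
    ...  # placeholder for handing the object to the multicast transmitter

def rerun_carousel(cached_urls: list[str], pause_s: float = 0.1) -> None:
    for url in itertools.cycle(cached_urls):   # endless round-robin over the cache
        transmit(url)
        time.sleep(pause_s)                    # crude pacing stand-in for scheduling
```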
Push channels
The IDS warehouse is designed to classify all Web content into push channels based on keywords associated with cached objects. Two methods of channelization are present. Manual channelization is offered through the management application at the warehouse. Any object in the warehouse can be manually associated with a channel. Web content brought in by the warehouse crawler module is also automatically channelized based on a keywords discovery algorithm in the crawler. Kiosks subscribe to channels offered by the warehouse. Based on kiosk subscriptions, which are communicated periodically from all kiosks to the warehouse, the warehouse is able to append a subscription bitmap to all objects being multicast out to the kiosks. Kiosks inspect the subscription bitmap and filter out all unwanted traffic.
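A minimal sketch of automatic channelization follows: an object is assigned to the channel whose keyword set overlaps most with the keywords discovered in the object. The channel names and keyword lists are illustrative, not the crawler's actual discovery algorithm.

```python
# Sketch of keyword-based channelization.  Channel names and keyword sets are
# illustrative; the IDS crawler's discovery algorithm is not specified here.
CHANNELS = {
    "news":    {"election", "headline", "report"},
    "sports":  {"olympic", "score", "match"},
    "finance": {"stock", "quote", "market"},
}

def channelize(object_keywords: set[str]) -> str | None:
    best, best_overlap = None, 0
    for channel, keywords in CHANNELS.items():
        overlap = len(keywords & object_keywords)
        if overlap > best_overlap:
            best, best_overlap = channel, overlap
    return best    # None if no channel's keywords match the object
```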
Portable module interfaces
A specific goal in the design of IDS is to design modules with portable interfaces. This allows flexibility in choosing implementation platforms. All IDS modules within the warehouse and kiosks use Transmission Control Protocol/IP for interprocess communication. Thus, all warehouse (or kiosk) functionality may reside on a single machine or may be distributed among several machines. In addition, the relational database communicates with other modules through a single set of application programming interfaces.
Transparent redirection of HTTP traffic at kiosk
A hard requirement of the IDS kiosk design is that all HTTP traffic from users to the kiosk must be transparently redirected to the kiosk cache. Thus, the deployment of a kiosk at a national service provider or an ISP will be invisible to the customers of the kiosk. Although transparent redirection has been implemented in software by Netcache and other cache vendors, most service providers choose to deploy a hardware-based solution such as Web Cache Control Protocol running on CISCO cache engines and routers or to use a content-aware layer-4 switch. The IDS design includes a layer-4 switch to implement transparent redirection of HTTP traffic to the kiosk cache.
Content push into Web cache
The Web caches used in IDS are designed to operate in pull as well as push mode. While pull is the default mode of behavior, IDS Web caches are modified to accept object push. The kiosk Web cache accepts objects pushed from the warehouse. The warehouse Web cache can accept objects pushed from content providers.
Minimal modifications to Web cache architecture
A key goal in the IDS design was to keep modification to the Web cache to a minimum, which would allow the Web cache to be used as a pluggable module. The IDS prototype uses the Squid Web cache from the National Laboratory for Applied Network Research; however, the design allows for the substitution of Squid by any commercial Web cache that implements the push protocol.
Persistent storage of Web cache metadata
Web cache metadata including hit-metering statistics for all cached objects is stored in a relational database. This persistent storage provides IDS with the ability to query relevant metadata statistics to enforce business rules.
Conclusion
A new generation of Internet access built around geosynchronous satellites can provide immediate relief. These systems can improve service to bandwidth-starved regions of the globe where terrestrial networks are insufficient and supplement terrestrial networks elsewhere. This new generation of satellite system manages a set of satellite links using intelligent controls at the link endpoints. Mechanisms controlled include caching, dynamic construction of push channels, use of multicast, and scheduling of satellite bandwidth.