Enabling Large-scale Simulations:

Selective Abstraction Approach to the Study of Multicast Protocols

Polly Huang, Deborah Estrin, John Heidemann
USC/Information Sciences Institute
University of Southern California
4676 Admiralty Way, Suite 1001
Marina del Rey, CA 90291
Phone: (310)822-1511 ext. {222, 253, 708} Fax: (310)823-6714
{huang, estrin, johnh}

Due to the complexity and scale of the current Internet, large-scale simulations are an increasingly important tool for evaluating network protocol designs. Parallel and distributed simulation is one approach to the simulation scalability problem, but it can require expensive hardware and incur high overhead. In this study, we investigate a complementary solution to large-scale simulation -- simulation abstraction. Just as a custom simulator includes only the details necessary for the task at hand, we show how a general simulator can support configurable levels of detail for different simulations. We develop two abstraction techniques, centralized computation and abstract packet distribution, to abstract network- and transport-layer protocols. We apply these techniques to multicast simulation and derive centralized multicast and abstract multicast distribution (session multicast). We show that each abstraction technique yields roughly one order of magnitude of performance improvement (from tens to hundreds to thousands of nodes). Although abstract simulations are not identical to more detailed simulations, we show that in many cases the differences are small. In our reliable multicast simulations, these differences caused minimal changes in the conclusions drawn.

Keywords: Abstraction Techniques, Packet Network Simulation, Simulation Scalability, Multicast Simulation, Protocol Evaluation

1. Introduction
Modeling and simulation have traditionally been the two approaches to evaluating network protocol designs. However, modeling is often intractable with today's large networks and complex traffic patterns, so researchers have turned increasingly to simulation. In order to evaluate wide-area protocols with thousands of nodes, we must be able to perform large-scale simulations.

General-purpose network simulators (such as ns-2 [25]) make simulation easier by capturing characteristics of real network components and providing a modular programming environment, typically composed of links, nodes, and existing protocol suites. For instance, a link may contain transmission and propagation delay modules, and a node may contain routing tables, forwarding machinery, local agents, queuing objects, TTL objects, interface objects, and loss modules. These composable modules provide a flexible environment in which to simulate network behavior, but depending on the level of detail a particular simulation requires, these details may or may not be necessary. Unfortunately, this modular structure can result in significant resource consumption, especially as simulation scenarios grow.
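The modular, composable structure described above can be sketched in miniature as follows. This is a hedged illustration, not ns-2 code: all class and method names (`TTLModule`, `LossModule`, `Node.receive`) are our own assumptions, chosen only to show how per-packet modules chain together inside a node, and how each module adds per-packet processing cost.

```python
import random

# Illustrative sketch (not ns-2 code): a node assembled from composable
# per-packet modules. Every packet traverses every module, which is the
# per-packet overhead that grows with scenario size.

class TTLModule:
    """Decrements the packet's TTL; drops the packet when it expires."""
    def process(self, pkt):
        pkt["ttl"] -= 1
        return pkt if pkt["ttl"] > 0 else None

class LossModule:
    """Drops packets at random with probability p."""
    def __init__(self, p, seed=0):
        self.p = p
        self.rng = random.Random(seed)
    def process(self, pkt):
        return None if self.rng.random() < self.p else pkt

class Node:
    """A node is a pipeline of modules, mirroring the modular structure
    (TTL objects, loss modules, etc.) discussed in the text."""
    def __init__(self, modules):
        self.modules = modules
    def receive(self, pkt):
        for m in self.modules:
            pkt = m.process(pkt)
            if pkt is None:
                return None  # dropped somewhere in the pipeline
        return pkt

node = Node([TTLModule(), LossModule(p=0.0)])
print(node.receive({"ttl": 2, "data": "hello"}))  # forwarded, TTL now 1
print(node.receive({"ttl": 1, "data": "hello"}))  # None: TTL expired
```

Each additional module multiplies the work done per packet, which is why simulations that do not need a given level of detail pay for it anyway in a fully modular simulator.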

One approach to the scalability problem is parallel and distributed simulation, which divides the simulation into parts coordinated across multiple machines. Parallelism can improve simulation scale in proportion to the number of machines added, but this linear growth is not sufficient to provide the several orders of magnitude of scaling needed. Parallel simulation can also require expensive hardware that is not widely available.
A complementary solution is to slim down simulations by abstracting away details. The basic idea is to analyze a particular set of simulations, identify the bottleneck, and eliminate it by abstracting away unnecessary details (i.e., keeping the simulator slim). The risk of abstraction is that simulation results may be distorted; users must verify that abstraction does not change their conclusions. We address this problem by providing identical simulation interfaces for detailed and abstract simulations, allowing users to make side-by-side comparisons of their simulations at small scales.
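The "identical interface" idea can be sketched as two interchangeable models behind the same call. This is a hedged illustration under our own assumptions, not the paper's implementation: the class names, the `deliver()` interface, and the uniform per-hop delay model are all invented here purely to show how a detailed hop-by-hop model and an abstract model can be swapped and compared side by side.

```python
# Illustrative sketch: a detailed and an abstract multicast model expose
# the same deliver() interface, so a user can run both on a small topology
# and compare results before trusting the abstract model at large scale.

class DetailedMulticast:
    """Forwards hop by hop along an explicit tree (dict: node -> children),
    generating one event per hop."""
    def __init__(self, tree, hop_delay):
        self.tree = tree
        self.hop_delay = hop_delay
    def deliver(self, src):
        arrivals = {}
        def walk(node, t):
            for child in self.tree.get(node, []):
                arrivals[child] = t + self.hop_delay
                walk(child, t + self.hop_delay)
        walk(src, 0.0)
        return arrivals

class SessionMulticast:
    """Abstract model: computes each receiver's arrival time directly from
    its hop count, skipping per-hop packet events entirely."""
    def __init__(self, hop_counts, hop_delay):
        self.hop_counts = hop_counts
        self.hop_delay = hop_delay
    def deliver(self, src):
        return {n: h * self.hop_delay for n, h in self.hop_counts.items()}

tree = {"s": ["a", "b"], "a": ["c"]}
detailed = DetailedMulticast(tree, hop_delay=1.0)
abstract = SessionMulticast({"a": 1, "b": 1, "c": 2}, hop_delay=1.0)
print(detailed.deliver("s") == abstract.deliver("s"))  # True on this topology
```

Because both models answer the same question through the same interface, a user can validate the abstraction on a small topology like this one and then switch to the cheaper model for thousand-node runs.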