When the abstraction is validated, they can then use it
for very large-scale simulations.
Simulation scaling involves a process of
selective abstraction which is often application-specific, and raises two interesting questions: "what to abstract to make simulations scalable?" and "what is the effect on simulation results?". We examine these questions, using multicast simulations as examples.
This work shows that abstracting details can significantly speed up simulations and reduce their memory consumption
without crucially affecting the results. Applying our abstractions and techniques may allow the research community to perform large-scale simulations that were previously impossible, and to perform previously achievable large-scale simulations at much lower cost. As a result, this work can enable the Internet community to debug designs and to evaluate protocol performance in large-scale scenarios. After discussing related work, we present the techniques used to abstract details, particularly for multicast simulations, and a guideline for performing abstract simulations. We then discuss results comparing performance and accuracy for the abstract and detailed versions; finally, SRM (Scalable Reliable Multicast) simulations are studied to demonstrate the use of the guideline and the abstraction techniques.
2. Related work
There has been a great deal of work on simulation techniques. Here we briefly summarize prior work on data structures, parallel and distributed simulation, abstraction, and hybrid simulation.
Various scheduling algorithms and event
queue structures (e.g., calendar queue, heap, and hybrid event list) have been proposed to speed up sequential simulation. Unfortunately, they do not scale to the number of events (packets) triggered by a complex and large-scale network like the current Internet. Ahn and Danzig estimated that "five minutes of activity on a network the size of today's Internet would require gigabytes of real memory and months of computation on today's 100 MIP uniprocessors."
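To make the event-queue structures above concrete, the following is a minimal calendar-queue sketch (an illustrative simplification, not any simulator's actual implementation): events are hashed by timestamp into fixed-width time buckets so that enqueue stays O(1) on average when the bucket width matches the event spacing. For brevity, this sketch's dequeue picks the earliest head among all buckets rather than performing the true calendar scan.

```python
class CalendarQueue:
    """Minimal calendar-queue sketch: events are hashed by timestamp
    into fixed-width time buckets, each kept sorted."""

    def __init__(self, nbuckets=16, width=1.0):
        self.width = width                      # time span of one bucket
        self.buckets = [[] for _ in range(nbuckets)]
        self.size = 0

    def enqueue(self, time, event):
        b = int(time / self.width) % len(self.buckets)
        bucket = self.buckets[b]
        i = 0                                   # keep each bucket sorted
        while i < len(bucket) and bucket[i][0] <= time:
            i += 1
        bucket.insert(i, (time, event))
        self.size += 1

    def dequeue(self):
        if self.size == 0:
            raise IndexError("empty calendar queue")
        # Simplified: pick the earliest head among the buckets.  A full
        # calendar queue instead scans forward from the current bucket,
        # which is what makes its average dequeue O(1).
        best = min((b for b in self.buckets if b), key=lambda b: b[0][0])
        self.size -= 1
        return best.pop(0)
```

A heap-based event list makes the opposite trade-off: O(log n) per operation regardless of the timestamp distribution, which is why hybrid event lists combine the two.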
An alternative to sequential simulation is
parallel and distributed simulation, which exploits the cost benefits of microprocessors and high-bandwidth interconnections by partitioning the simulation problem and distributing execution in parallel. Distributed simulations require techniques such as conservative and optimistic [4-7] synchronization mechanisms to maintain correct event ordering. Consequently, simulation efficiency may be
degraded by the overhead associated with these techniques. In addition, the typical simulation algorithm does not easily partition for parallel execution. Ohi and Preiss [2,3] investigated several block selection policies and found limited speedup, and possibly degraded performance, when there were a large number of unique event types. Although parallel and distributed simulation is useful when large computers are available, alternative techniques such as abstraction are also needed to enable very large-scale simulations.
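The conservative synchronization mentioned above, and its overhead, can be sketched as follows; the two logical processes (LPs), the fixed lookahead, and the null-message exchange are illustrative assumptions, not the cited algorithms. Each LP may only process events whose timestamps are provably safe given the neighbor's last promise plus the link lookahead, which is exactly the bookkeeping that can erode the speedup.

```python
import heapq

LOOKAHEAD = 1.0  # assumed minimum delay on the link between the LPs

class LP:
    """One logical process with its own clock and future-event list."""
    def __init__(self, name):
        self.name = name
        self.events = []          # local future-event list (min-heap)
        self.clock = 0.0
        self.channel_time = 0.0   # timestamp promised by the other LP
        self.processed = []

    def schedule(self, t, what):
        heapq.heappush(self.events, (t, what))

    def safe_bound(self):
        # Events up to this time cannot be preceded by a future arrival.
        return self.channel_time + LOOKAHEAD

def run(lp_a, lp_b, until):
    # Round-robin: each LP processes only provably safe events, then
    # sends a null message advancing its promise to the other LP.
    while lp_a.clock < until or lp_b.clock < until:
        progressed = False
        for me, other in ((lp_a, lp_b), (lp_b, lp_a)):
            while me.events and me.events[0][0] <= me.safe_bound():
                t, what = heapq.heappop(me.events)
                me.clock = t
                me.processed.append((t, what))
                progressed = True
            # Null message: promise the neighbor no event before clock.
            other.channel_time = max(other.channel_time, me.clock)
        if not progressed:
            break  # a real protocol needs full null-message deadlock avoidance
```

Note that each real event costs an extra promise exchange; with many LPs or short lookahead, this overhead is what limits the achievable speedup.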
Ahn and Danzig proposed to abstract
packet streams (Flowsim) for packet network simulations and showed that Flowsim could be adjusted to the desired simulation granularity and help to study flow and congestion control algorithms more efficiently. However, Flowsim abstracts only one aspect of network simulation. We present two other abstraction techniques and hope to generalize them for a wider range of network protocols.
In hybrid simulation models, discrete-event simulation and analytic techniques are combined to produce efficient yet accurate system models. There are examples of using hybrid simulations on hypothetical computer systems: discrete-event simulation models the arrival and activation of jobs, while a central-server queuing network models the use of system processors. The accuracy and efficiency of the hybrid techniques are demonstrated by comparing the results and
computational costs of the hybrid model of the example with those of an equivalent simulation-only model. Our abstract simulation can be thought of as an application of hybrid simulation to networking.
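The hybrid idea can be illustrated with a small sketch (the rates and names are assumptions for illustration, not from the cited work): job arrivals are simulated event-by-event, while the server is replaced by the analytic M/M/1 mean time in system, W = 1/(mu - lambda), so queueing inside the server is never simulated.

```python
import random

random.seed(1)

ARRIVAL_RATE = 0.8   # lambda, jobs per second (assumed)
SERVICE_RATE = 1.0   # mu, jobs per second (assumed)

def analytic_response_time():
    # M/M/1 mean time in system: W = 1 / (mu - lambda).
    return 1.0 / (SERVICE_RATE - ARRIVAL_RATE)

def hybrid_run(n_jobs):
    """Discrete-event arrivals; analytic model in place of the server."""
    t = 0.0
    completions = []
    for _ in range(n_jobs):
        t += random.expovariate(ARRIVAL_RATE)   # simulated arrival event
        completions.append(t + analytic_response_time())
    return completions
```

A simulation-only model would instead enqueue every job at the server and simulate each service completion as its own event; the hybrid version trades that per-job detail for the closed-form mean.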
3. Abstraction techniques
Large-scale simulations are prevented by resource constraints, typically CPU consumption and memory usage. Through abstraction we reduce resource consumption, enabling large-scale simulations. Abstraction is possible because most protocol architectures are layered such that one protocol makes use of another. In the design or evaluation of a level-n protocol, we need the information provided by level n-1 and below, but not necessarily (depending on the research questions) all the details of the lower-level protocols. For instance, a multicast transport protocol may need multicast routing tables in order to forward multicast packets, but not the detailed exchange of messages required to generate those routing tables in an actual network. If we abstract away unnecessary details, the memory and time saved can be used to simulate larger-scale scenarios. In this section, we present two abstraction techniques and our general
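The multicast example above can be sketched as follows (a hypothetical illustration, with assumed names, not any simulator's actual code): rather than simulating the routing protocol's message exchange, the forwarding tree for a group is computed centrally from the known topology and handed directly to the transport layer.

```python
from collections import deque

def shortest_path_tree(adj, source):
    """BFS tree over an undirected graph given as {node: [neighbors]}."""
    parent = {source: None}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                q.append(v)
    return parent

def multicast_forwarding(adj, source, receivers):
    """Return {node: set of next-hop neighbors} covering all receivers,
    computed centrally instead of via simulated protocol messages."""
    parent = shortest_path_tree(adj, source)
    fwd = {}
    for r in receivers:
        node = r
        while parent[node] is not None:    # walk back toward the source
            fwd.setdefault(parent[node], set()).add(node)
            node = parent[node]
    return fwd
```

The events, timers, and per-router state of the routing protocol never enter the event queue; only the resulting forwarding entries, which is all the level-n protocol needs, are retained.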