SIMULATION AND MODELING

Modeling and simulation (M&S) is getting information about how something will behave without actually testing it in real life. For instance, if we wanted to design a racecar but weren't sure what type of spoiler would improve traction the most, we could use a computer simulation of the car to estimate the effect of different spoiler shapes on the coefficient of friction in a turn. We're getting useful insights about different decisions we could make for the car without actually building the car.
More generally, M&S is the use of models (including emulators, prototypes, simulators, and stimulators), either statically or over time, to develop data as a basis for making managerial or technical decisions. The terms "modeling" and "simulation" are often used interchangeably. The use of M&S within engineering is well recognized. Simulation technology belongs to the tool set of engineers of all application domains and has been included in the body of knowledge of engineering management. M&S has already helped to reduce costs, increase the quality of products and systems, and document and archive lessons learned.
M&S is a discipline on its own. Its many application domains often lead to the assumption that M&S is pure application. This is not the case and needs to be recognized by engineering management experts who want to use M&S. To ensure that the results of simulation are applicable to the real world, the engineering manager must understand the assumptions, conceptualizations, and implementation constraints of this emerging field.
Simulation
Simulation is the imitation of the operation of a real-world process or system over time. The act of simulating something first requires that a model be developed; this model represents the key characteristics or behaviors/functions of the selected physical or abstract system or process. The model represents the system itself, whereas the simulation represents the operation of the system over time.
Simulation is used in many contexts, such as simulation of technology for performance optimization, safety engineering, testing, training, education, and video games. Often, computer experiments are used to study simulation models. Simulation is also used with scientific modelling of natural systems or human systems to gain insight into their functioning. Simulation can be used to show the eventual real effects of alternative conditions and courses of action. Simulation is also used when the real system cannot be engaged, because it may not be accessible, or it may be dangerous or unacceptable to engage, or it is being designed but not yet built, or it may simply not exist.
Key issues in simulation include acquisition of valid source information about the relevant selection of key characteristics and behaviours, the use of simplifying approximations and assumptions within the simulation, and fidelity and validity of the simulation outcomes.
It is actually easier to first discuss which models are least appropriate for simulation. If a model is solvable, deterministic, and static, then it is a poor candidate for simulation. It can, of course, still be simulated, but an analytical solution may be more cost-effective and provide an exact answer. On the other hand, if a model is not analytically solvable, or is stochastic or dynamic, it is an excellent candidate for simulation.
Advantages of Simulation
1. Simulation is a relatively straightforward and flexible method of characterizing system behavior
2. Simulation modeling can be used to analyze large, complex systems that cannot be solved by conventional operations management models
3. Real-world system complexities (such as non-standard distributions for stochastic processes) can be used that are not permitted by most management models
4. Time compression is possible, showing effects of system policies over months or years
5. A model allows comparison of policy options while holding other variables constant
6. Computer simulations allow physical operations to continue without disruption and without risk
7. Simulation analysis can isolate the behavior of individual components within the system
Disadvantages of Simulation
1. Good simulation models can be expensive. Sound judgment and a planned development path are needed to find and focus on the most important system issues
2. Simulation is inherently a trial-and-error approach that relies on the skill and experience of the analyst building the model. Simulation may not produce optimal solutions for models that are solvable, deterministic, and static.
3. The simulation team must generate all conditions and constraints of the system, as the model relies on realistic input.

What is a good simulation application?
1. Systems where it is too expensive or risky to do live tests. Simulation provides an inexpensive, risk-free way to test changes ranging from a "simple" revision to an existing production line to emulation of a new control system or redesign of an entire supply chain.
2. Large or complex systems for which change is being considered. A "best guess" is usually a poor substitute for an objective analysis. Simulation can accurately predict their behavior under changed conditions and reduce the risk of making a poor decision.
3. Systems where predicting process variability is important. A spreadsheet analysis cannot capture the dynamic aspects of a system, aspects which can have a major impact on system performance. Simulation can help you understand how various components interact with each other and how they affect overall system performance.
4. Systems where you have incomplete data. Simulation cannot invent data where it does not exist, but simulation does well at determining sensitivity to unknowns. A high-level model can help you explore alternatives. A more detailed model can help you identify the most important missing data.
5. Systems where you need to communicate ideas. Development of a simulation helps participants better understand the system. Modern 3D animation and other tools promote communication and understanding across a wide audience.
history of simulation software

Simulation in entertainment encompasses many large and popular industries such as film, television, video games (including serious games) and rides in theme parks. Although modern simulation is thought to have its roots in training and the military, in the 20th century it also became a conduit for enterprises which were more hedonistic in nature. Advances in technology in the 1980s and 1990s caused simulation to become more widely used and it began to appear in movies such as Jurassic Park (1993) and in computer-based games such as Atari’s Battlezone (1980).
The first simulation game may have been created as early as 1947 by Thomas T. Goldsmith Jr. and Estle Ray Mann. This was a straightforward game that simulated a missile being fired at a target. The curve of the missile and its speed could be adjusted using several knobs. In 1958 a computer game called “Tennis for Two” was created by William Higinbotham; it simulated a tennis game between two players who could both play at the same time using hand controls, and it was displayed on an oscilloscope. This was one of the first electronic video games to use a graphical display.
Types of models
Active models
Active models that attempt to reproduce living anatomy or physiology are recent developments. The famous “Harvey” mannequin was developed at the University of Miami and is able to recreate many of the physical findings of the cardiology examination, including palpation, auscultation, and electrocardiography.
Interactive models
More recently, interactive models have been developed that respond to actions taken by a student or physician. Until recently, these simulations were two dimensional computer programs that acted more like a textbook than a patient. Computer simulations have the advantage of allowing a student to make judgments, and also to make errors. The process of iterative learning through assessment, evaluation, decision making, and error correction creates a much stronger learning environment than passive instruction.
Computer simulators
Simulators have been proposed as an ideal tool for assessment of students for clinical skills. For patients, "cybertherapy" can be used for sessions simulating traumatic experiences, from fear of heights to social anxiety.
Programmed patients and simulated clinical situations, including mock disaster drills, have been used extensively for education and evaluation. These “lifelike” simulations are expensive and lack reproducibility. A fully functional "3Di" simulator would be the most specific tool available for teaching and measurement of clinical skills. Gaming platforms have been applied to create these virtual medical environments as an interactive method for learning and applying information in a clinical context. Immersive disease state simulations allow a doctor or HCP to experience what a disease actually feels like. Using sensors and transducers, symptomatic effects can be delivered to a participant, allowing them to experience the patient's disease state.
Such a simulator meets the goals of an objective and standardized examination for clinical competence. This system is superior to examinations that use "standard patients" because it permits the quantitative measurement of competence, as well as reproducing the same objective findings.
Manual Simulation of Systems
In most simulation studies, we are concerned with the simulation of some system. Thus, in order to model a system, we must understand the concept of a system. A system is a collection of entities that act and interact toward the accomplishment of some logical end.
Systems generally tend to be dynamic – their status changes over time. To describe this status, we use the concept of the state of a system.
Simulation of Queueing Systems
A queueing system is described by the following elements (a small single-server sketch follows the list):
•Calling population
•Arrival rate
•Service mechanism
•System capacity
•Queueing discipline
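To make these elements concrete, the sketch below hand-simulates a single-server FIFO queue in Python. The assumptions are mine for illustration: exponentially distributed interarrival and service times, the rates, the number of customers, and the seed are all arbitrary placeholder values, not figures from the text.

import random

def simulate_single_server(n_customers=100, arrival_rate=1.0, service_rate=1.2, seed=42):
    """Hand-style simulation of a single-server FIFO queue (illustrative values)."""
    random.seed(seed)
    clock = 0.0                 # arrival time of the current customer
    server_free_at = 0.0        # time the server finishes its previous job
    total_wait = 0.0
    for _ in range(n_customers):
        clock += random.expovariate(arrival_rate)        # next arrival
        start_service = max(clock, server_free_at)       # wait if the server is busy
        wait = start_service - clock
        service_time = random.expovariate(service_rate)
        server_free_at = start_service + service_time
        total_wait += wait
    return total_wait / n_customers

print("average wait:", simulate_single_server())

Changing the arrival or service rate in this sketch shows how utilisation drives waiting time, which is exactly the kind of question such a queueing simulation is meant to answer.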
time-shared computer model
In computing, time-sharing is the sharing of a computing resource among many users by means of multiprogramming and multi-tasking. Its introduction in the 1960s, and emergence as the prominent model of computing in the 1970s, represented a major technological shift in the history of computing.
By allowing a large number of users to interact concurrently with a single computer, time-sharing dramatically lowered the cost of providing computing capability, made it possible for individuals and organizations to use a computer without owning one, and promoted the interactive use of computers and the development of new interactive applications.
job-shop model
Job shops are typically small manufacturing systems that handle job production, that is, custom/bespoke or semi-custom/bespoke manufacturing processes such as small to medium-size customer orders or batch jobs. Job shops typically move on to different jobs (possibly with different customers) when each job is completed. In job shops, machines are aggregated into shops by the nature of the skills and technological processes involved; each shop may therefore contain different machines, which gives this production system processing flexibility, since jobs are not necessarily constrained to a single machine. In computer science, the problem of job shop scheduling is considered strongly NP-hard. In a job shop, product flow is jumbled rather than following a single fixed route.

A typical example would be a machine shop, which may make parts for local industrial machinery, farm machinery and implements, boats and ships, or even batches of specialized components for the aircraft industry. Other types of common job shops are grinding, honing, jig-boring, gear manufacturing, and fabrication shops. The opposite would be continuous flow manufacturing, such as textile, steel, and food manufacturing, and manual labor.
Discrete Event Formalisms
DEVS, short for Discrete Event System Specification, is a modular and hierarchical formalism for modeling and analyzing general systems: discrete event systems, which might be described by state transition tables; continuous state systems, which might be described by differential equations; and hybrid continuous state and discrete event systems. DEVS is a timed event system.
DEVS defines system behavior as well as system structure. System behavior in the DEVS formalism is described using input and output events as well as states. For example, consider a ping-pong game between two players: the input event is ?receive, and the output event is !send. Each player, A and B, has two states: Send and Wait. The Send state lasts 0.1 seconds, after which the player sends the ball back (the output event !send), while the Wait state lasts until the player receives the ball (the input event ?receive).
DEVS is a formalism for modeling and analysis of discrete event systems (DESs). The DEVS formalism was invented by Bernard P. Zeigler, who is an emeritus professor at the University of Arizona. DEVS was introduced to the public in Zeigler's first book, Theory of Modeling and Simulation, in 1976, while Zeigler was an associate professor at the University of Michigan. DEVS can be seen as an extension of the Moore machine formalism, which is a finite state automaton where the outputs are determined by the current state alone (and do not depend directly on the input). The extension was done by
•associating a lifespan with each state [Zeigler76],
•providing a hierarchical concept with an operation, called coupling [Zeigler84].
Since the lifespan of each state is a real number (more precisely, non-negative real) or infinity, it is distinguished from discrete time systems, sequential machines, and Moore machines, in which time is determined by a tick time multiplied by non-negative integers. Moreover, the lifespan can be a random variable; for example the lifespan of a given state can be distributed exponentially or uniformly. The state transition and output functions of DEVS can also be stochastic.
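The ping-pong example above can be sketched very loosely in Python. This is only a toy illustration of the DEVS ideas of state lifespans, internal transitions, and external transitions; the class, method names, and the tiny hand-written coupling loop are my own assumptions, not part of the formalism or of any DEVS library.

INFINITY = float("inf")

class Player:
    """Toy atomic model in the spirit of DEVS: states with lifespans,
    an internal transition that emits an output, and an external
    transition driven by an input event."""
    def __init__(self, name, initial_state):
        self.name = name
        self.state = initial_state            # "Send" or "Wait"

    def time_advance(self):
        return 0.1 if self.state == "Send" else INFINITY  # Wait lasts until ?receive

    def internal_transition(self):            # fires when the Send lifespan expires
        self.state = "Wait"
        return "!send"                        # output event

    def external_transition(self, event):     # fires on the input event ?receive
        if event == "?receive":
            self.state = "Send"

# Couple two players by hand: A starts in Send, B in Wait.
a, b = Player("A", "Send"), Player("B", "Wait")
clock = 0.0
for _ in range(4):
    sender = a if a.state == "Send" else b
    receiver = b if sender is a else a
    clock += sender.time_advance()
    out = sender.internal_transition()             # sender emits !send ...
    receiver.external_transition("?receive")       # ... which the other player receives
    print(f"t={clock:.1f}s {sender.name} {out} -> {receiver.name} ?receive")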
discrete event simulation

In the field of simulation, a discrete-event simulation (DES) models the operation of a system as a discrete sequence of events in time. Each event occurs at a particular instant in time and marks a change of state in the system. Between consecutive events, no change in the system is assumed to occur; thus the simulation can jump directly in time from one event to the next. This contrasts with continuous simulation, in which the simulation continuously tracks the system dynamics over time. Rather than being event-based, such a simulation is called activity-based: time is broken up into small time slices, and the system state is updated according to the set of activities happening in each time slice. Because discrete-event simulations do not have to simulate every time slice, they can typically run much faster than the corresponding continuous simulation.
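The next-event mechanism described above can be sketched with a priority queue of pending events. The skeleton below is a generic illustration in plain Python (not tied to any simulation package); the arrival/departure handlers and their timings are invented purely to show the loop working.

import heapq

def run_des(initial_events, handlers, until=100.0):
    """Generic next-event loop: pop the earliest event, advance the clock to it,
    let its handler update state and possibly schedule further events."""
    future = list(initial_events)              # heap of (time, event_name) pairs
    heapq.heapify(future)
    state = {"in_system": 0}
    while future:
        time, event = heapq.heappop(future)
        if time > until:
            break
        for scheduled in handlers[event](time, state):
            heapq.heappush(future, scheduled)
    return state

# Illustrative handlers: each arrival schedules the next arrival and its own departure.
def on_arrival(t, state):
    state["in_system"] += 1
    return [(t + 1.0, "arrival"), (t + 0.8, "departure")]

def on_departure(t, state):
    state["in_system"] -= 1
    return []

print(run_des([(0.0, "arrival")],
              {"arrival": on_arrival, "departure": on_departure},
              until=10.0))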
Another alternative to event-based simulation is process-based simulation. In this approach, each activity in a system corresponds to a separate process, where a process is typically simulated by a thread in the simulation program. In this case, the discrete events, which are generated by threads, would cause other threads to sleep, wake, and update the system state.
A more recent method is the three-phase approach to discrete event simulation (Pidd, 1998). In this approach, the first phase is to jump to the next chronological event. The second phase is to execute all events that unconditionally occur at that time (these are called B-events). The third phase is to execute all events that conditionally occur at that time (these are called C-events). The three-phase approach is a refinement of the event-based approach in which simultaneous events are ordered so as to make the most efficient use of computer resources. The three-phase approach is used by a number of commercial simulation software packages, but from the user's point of view, the specifics of the underlying simulation method are generally hidden.
Statistical Models in Simulation
In this section, statistical models appropriate to some application areas are presented. The areas include:
Queueing systems
Inventory and supply-chain systems
Reliability and maintainability
Limited data
Discrete Distributions
Discrete random variables are used to describe random phenomena in which only integer values can occur.
In this section, we will learn about the following distributions (a short sampling sketch follows the list):
1. Bernoulli trials and Bernoulli distribution
2. Binomial distribution
3. Geometric and negative binomial distribution
4. Poisson distribution
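As a quick illustration of how these discrete distributions can be sampled from uniform random numbers, the sketch below builds each one from first principles in Python. The parameter values and the seed are arbitrary choices for the example.

import math
import random

random.seed(1)  # reproducible illustration

def bernoulli(p):
    """Single Bernoulli trial: 1 with probability p, else 0."""
    return 1 if random.random() < p else 0

def binomial(n, p):
    """Number of successes in n independent Bernoulli(p) trials."""
    return sum(bernoulli(p) for _ in range(n))

def geometric(p):
    """Number of trials up to and including the first success."""
    trials = 1
    while bernoulli(p) == 0:
        trials += 1
    return trials

def poisson(lam):
    """Knuth's method: count uniforms until their running product drops below exp(-lam)."""
    limit, product, count = math.exp(-lam), 1.0, 0
    while True:
        product *= random.random()
        if product <= limit:
            return count
        count += 1

print(binomial(10, 0.3), geometric(0.3), poisson(2.5))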
Continuous Distributions
Continuous random variables can be used to describe random phenomena in which the variable can take on any value in some interval.
In this section, the distributions studied are listed below (a brief sampling example follows the list):
1. Uniform
2. Exponential
3. Normal
4. Weibull
5. Lognormal
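Each of the continuous distributions listed above can be sampled directly with Python's standard random module, as sketched below. The parameter values are arbitrary placeholders chosen only to make the example run.

import random

random.seed(2)  # reproducible illustration; parameter values are arbitrary

samples = {
    "uniform":     random.uniform(0.0, 10.0),            # U(0, 10)
    "exponential": random.expovariate(0.5),              # rate (lambda) = 0.5
    "normal":      random.gauss(mu=100.0, sigma=15.0),   # mean 100, std dev 15
    "weibull":     random.weibullvariate(1.0, 1.5),      # scale 1.0, shape 1.5
    "lognormal":   random.lognormvariate(0.0, 0.25),     # mu, sigma of underlying normal
}
for name, value in samples.items():
    print(f"{name:12s} {value:.3f}")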
Empirical Distributions

An empirical distribution is a distribution whose parameters are the observed values in a sample of data; a small resampling sketch follows the list below.
1. May be used when it is impossible or unnecessary to establish that a random variable has any particular parametric distribution.
2. Advantage: no assumption beyond the observed values in the sample.
3. Disadvantage: sample might not cover the entire range of possible values.
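One simple way to sample from an empirical distribution is to draw uniformly from the observed values, optionally interpolating linearly between adjacent sorted observations. The data below are made-up illustrative numbers, and the interpolation scheme is just one common choice.

import random

observed = [2.1, 2.4, 3.0, 3.0, 3.7, 4.2, 5.5]  # made-up sample data
random.seed(3)

def empirical_variate(data):
    """Resample directly from the observed values (no distributional assumption)."""
    return random.choice(data)

def empirical_variate_interpolated(data):
    """Interpolate the empirical CDF between adjacent sorted observations."""
    xs = sorted(data)
    u = random.random() * (len(xs) - 1)   # position along the sorted sample
    i = int(u)
    frac = u - i
    return xs[i] + frac * (xs[min(i + 1, len(xs) - 1)] - xs[i])

print(empirical_variate(observed), empirical_variate_interpolated(observed))

Note that neither variant can ever produce a value outside the observed range, which is exactly the disadvantage listed above.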
Poisson Distribution
Definition: N(t) is a counting function that represents the number of events that have occurred in [0, t].
A counting process {N(t), t>=0} is a Poisson process with mean rate λ if:
1. Arrivals occur one at a time
2. {N(t), t>=0} has stationary increments
3. {N(t), t>=0} has independent increments
Properties
1. Equal mean and variance: E[N(t)] = V[N(t)] = λt
2. Stationary increments: the number of arrivals between times s and t (s < t) is also Poisson-distributed, with mean λ(t - s); a sketch of generating such an arrival process is given below.
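Because the interarrival times of a Poisson process with rate λ are independent exponential(λ) variables, one way to sketch such an arrival stream in Python is shown below. The rate, horizon, and seed are arbitrary illustrative values.

import random

def poisson_process(rate, horizon, seed=4):
    """Generate arrival times in [0, horizon] using exponential interarrival times."""
    random.seed(seed)
    arrivals, t = [], 0.0
    while True:
        t += random.expovariate(rate)   # next interarrival time
        if t > horizon:
            break
        arrivals.append(t)
    return arrivals

times = poisson_process(rate=2.0, horizon=5.0)
print(len(times), "arrivals:", [round(t, 2) for t in times])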
Queueing Models
Purpose
Simulation is often used in the analysis of queueing models. Queueing models provide the analyst with a powerful tool for designing and evaluating the performance of queueing systems.
Key elements of queueing systems
Customer: refers to anything that arrives at a facility and requires service, e.g., people, machines, trucks, emails.
Server: refers to any resource that provides the requested service, e.g., repairpersons, retrieval machines, runways at an airport.
A queueing system consists of a number of service centers and interconnected queues. Each service center consists of some number of servers, c, working in parallel; upon reaching the head of the line, a customer takes the first available server.
System Capacity: a limit on the number of customers that may be in the waiting line or system.
1. Limited capacity, e.g., an automatic car wash only has room for 10 cars to wait in line to enter the mechanism.
2. Unlimited capacity, e.g., concert ticket sales with no limit on the number of people allowed to wait to purchase tickets
Queue behavior: the actions of customers while in a queue waiting for service to begin, for example:
Balk: leave when they see that the line is too long,
Renege: leave after being in the line when it is moving too slowly,
Jockey: move from one line to a shorter line.
Queue discipline: the logical ordering of customers in a queue that determines which customer is chosen for service when a server becomes free, for example (a small sketch of pluggable disciplines follows the list):
First-in-First-out (FIFO)
Last-in-First-out (LIFO)
Service in random order (SIRO)
Shortest processing time First (SPT)
Service according to priority (PR).
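To show how a queue discipline can be treated as a pluggable rule, the sketch below selects the next customer from a waiting line under FIFO, LIFO, SIRO, and SPT. The customer records and their processing times are invented for illustration; a priority rule would work the same way with an extra field.

import random

# Each waiting customer: (arrival_order, processing_time); invented data.
queue = [(1, 5.0), (2, 2.0), (3, 8.0), (4, 1.5)]
random.seed(5)

disciplines = {
    "FIFO": lambda q: min(q, key=lambda c: c[0]),   # earliest arrival first
    "LIFO": lambda q: max(q, key=lambda c: c[0]),   # latest arrival first
    "SIRO": lambda q: random.choice(q),             # service in random order
    "SPT":  lambda q: min(q, key=lambda c: c[1]),   # shortest processing time first
}

for name, pick in disciplines.items():
    print(name, "serves customer", pick(queue))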
Networks of Queues
Many systems are naturally modeled as networks of single queues: customers departing from one queue may be routed to another.
Random Number Generation
A random number generator (RNG) is a computational or physical device designed to generate a sequence of numbers or symbols that lack any pattern, i.e. appear random.
The many applications of randomness have led to the development of several different methods for generating random data. Many of these have existed since ancient times, including dice, coin flipping, the shuffling of playing cards, the use of yarrow stalks (for divination) in the I Ching, and many other techniques. Because of the mechanical nature of these techniques, generating large numbers of sufficiently random numbers (important in statistics) required a lot of work and/or time. Thus, results would sometimes be collected and distributed as random number tables. Nowadays, after the advent of computational random number generators, a growing number of government-run lotteries and lottery games are using RNGs instead of more traditional drawing methods. RNGs are also used today to determine the odds of modern slot machines.
Several computational methods for random number generation exist. Many fall short of the goal of true randomness, though they may meet, with varying success, some of the statistical tests for randomness intended to measure how unpredictable their results are (that is, to what degree their patterns are discernible). However, carefully designed, cryptographically secure, computationally based methods of generating random numbers do exist, such as those based on the Yarrow algorithm, the Fortuna PRNG, and others.
Practical applications and uses
Random number generators have applications in gambling, statistical sampling, computer simulation, cryptography, completely randomized design, and other areas where producing an unpredictable result is desirable. Note that, in general, where unpredictability is paramount – such as in security applications – hardware generators are generally preferred (where feasible) over pseudo-random algorithms.
Random number generators are very useful in developing Monte Carlo-method simulations, as debugging is facilitated by the ability to run the same sequence of random numbers again by starting from the same random seed. They are also used in cryptography – so long as the seed is secret. Sender and receiver can generate the same set of numbers automatically to use as keys.
The generation of pseudo-random numbers is an important and common task in computer programming. While cryptography and certain numerical algorithms require a very high degree of apparent randomness, many other operations only need a modest amount of unpredictability. Some simple examples might be presenting a user with a "Random Quote of the Day", or determining which way a computer-controlled adversary might move in a computer game. Weaker forms of randomness are used in hash algorithms and in creating amortized searching and sorting algorithms.
Some applications which appear at first sight to be suitable for randomization are in fact not quite so simple. For instance, a system that "randomly" selects music tracks for a background music system must only appear random, and may even have ways to control the selection of music: a true random system would have no restriction on the same item appearing two or three times in succession.
"True" random numbers vs. pseudo-random numbers

There are two principal methods used to generate random numbers. The first method measures some physical phenomenon that is expected to be random and then compensates for possible biases in the measurement process. Example sources include measuring atmospheric noise, thermal noise, and other external electromagnetic and quantum phenomena. For example, cosmic background radiation or radioactive decay as measured over short timescales represent sources of natural entropy.
The speed at which entropy can be harvested from natural sources is dependent on the underlying physical phenomena being measured. Thus, sources of naturally occurring true entropy are said to be blocking i.e. rate-limited until enough entropy is harvested to meet demand. On some Unix-like systems, including Linux, the pseudo device file /dev/random will block until sufficient entropy is harvested from the environment. Due to this blocking behavior large bulk reads from /dev/random, such as filling a hard disk with random bits, can often be slow.
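As a small illustration, operating-system entropy can be consumed from Python via os.urandom, which on most systems draws from the kernel's entropy pool without the blocking behaviour described above for /dev/random; the conversion to a float in [0, 1) is my own illustrative choice.

import os

def os_random_float():
    """Turn 8 bytes of OS-provided entropy into a float in [0, 1)."""
    raw = os.urandom(8)            # kernel entropy; non-blocking on most systems
    return int.from_bytes(raw, "big") / 2**64

print(os_random_float())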
The second method uses computational algorithms that can produce long sequences of apparently random results, which are in fact completely determined by a shorter initial value, known as a seed or key. The latter type are often called pseudorandom number generators. These generators do not typically rely on sources of naturally occurring entropy, though they may be periodically seeded by natural sources; they are non-blocking, i.e., not rate-limited by an external event.
A "random number generator" based solely on deterministic computation cannot be regarded as a "true" random number generator in the purest sense of the word, since its output is inherently predictable if all seed values are known. In practice, however, such generators are sufficient for most tasks. Carefully designed and implemented pseudo-random number generators can even be certified for security-critical cryptographic purposes, as is the case with the Yarrow algorithm and Fortuna (PRNG).
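A minimal example of such a deterministic generator is a linear congruential generator (LCG). The sketch below is purely illustrative and not suitable for cryptography; the multiplier and increment are the well-known Numerical Recipes constants, and the whole sequence is reproducible from the seed.

class LCG:
    """Linear congruential generator: x_{n+1} = (a*x_n + c) mod m.
    Fully determined by the seed, so the sequence is reproducible."""
    def __init__(self, seed=12345, a=1664525, c=1013904223, m=2**32):
        self.state, self.a, self.c, self.m = seed, a, c, m

    def next_float(self):
        self.state = (self.a * self.state + self.c) % self.m
        return self.state / self.m      # uniform-looking value in [0, 1)

gen = LCG(seed=2024)
print([round(gen.next_float(), 4) for _ in range(5)])
# Constructing another LCG with the same seed reproduces exactly the same sequence.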
Generation methods
Physical methods
The earliest methods for generating random numbers — dice, coin flipping, roulette wheels — are still used today, mainly in games and gambling as they tend to be too slow for most applications in statistics and cryptography.

A physical random number generator can be based on an essentially random atomic or subatomic physical phenomenon whose unpredictability can be traced to the laws of quantum mechanics. Sources of entropy include radioactive decay, thermal noise, shot noise, avalanche noise in Zener diodes, clock drift, the timing of actual movements of a hard disk read/write head, and radio noise. However, physical phenomena and tools used to measure them generally feature asymmetries and systematic biases that make their outcomes not uniformly random. A randomness extractor, such as a cryptographic hash function, can be used to approach a uniform distribution of bits from a non-uniformly random source, though at a lower bit rate.
Various imaginative ways of collecting this entropic information have been devised. One technique is to run a hash function against a frame of a video stream from an unpredictable source. Lavarand used this technique with images of a number of lava lamps. HotBits measures radioactive decay with Geiger–Müller tubes, while Random.org uses variations in the amplitude of atmospheric noise recorded with a normal radio.
Another common entropy source is the behavior of human users of the system. While people are not considered good randomness generators upon request, they generate random behavior quite well in the context of playing mixed strategy games. Some security-related computer software requires the user to make a lengthy series of mouse movements or keyboard inputs to create sufficient entropy needed to generate random keys or to initialize pseudorandom number generators.
Generation from a probability distribution
There are a couple of methods to generate a random number based on a probability density function. These methods involve transforming a uniform random number in some way. Because of this, these methods work equally well in generating both pseudo-random and true random numbers. One method, called the inversion method, involves integrating up to an area greater than or equal to the random number (which should be generated between 0 and 1 for proper distributions). A second method, called the acceptance-rejection method, involves choosing an x and y value and testing whether the function of x is greater than the y value. If it is, the x value is accepted. Otherwise, the x value is rejected and the algorithm tries again.
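Both methods can be sketched briefly: inversion applies the inverse CDF to a uniform number (shown here for the exponential distribution), and acceptance-rejection samples points under an envelope and keeps those that fall below the target density (shown here for the triangular density f(x) = 2x on [0, 1]). The choice of these two target distributions is mine, made only for illustration.

import math
import random

random.seed(6)

def exponential_by_inversion(rate):
    """Inversion: solve F(x) = u for x, where F(x) = 1 - exp(-rate*x)."""
    u = random.random()
    return -math.log(1.0 - u) / rate

def triangular_by_rejection():
    """Acceptance-rejection for f(x) = 2x on [0, 1], using the constant envelope y <= 2."""
    while True:
        x = random.random()            # candidate x in [0, 1]
        y = random.random() * 2.0      # uniform height under the envelope
        if y <= 2.0 * x:               # accept if the point lies under f(x)
            return x

print(exponential_by_inversion(0.5), triangular_by_rejection())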
By humans
Random number generation may also be done by humans directly. However, most studies find that human subjects have some degree of nonrandomness when generating a random sequence of, e.g., digits or letters. They may alternate too much between choices compared to a good random generator.
Random Variate Generation
In the mathematical fields of probability and statistics, a random variate is a particular outcome of a random variable: the random variates which are other outcomes of the same random variable might have different values. Random variates are used when simulating processes driven by random influences (stochastic processes). In modern applications, such simulations would derive random variates corresponding to any given probability distribution from computer procedures designed to create random variates corresponding to a uniform distribution, where these procedures would actually provide values chosen from a uniform distribution of pseudorandom numbers.
Procedures to generate random variates corresponding to a given distribution are known as procedures for random variate generation or pseudo-random number sampling. In probability theory, a random variable is a measurable function from a probability space to a measurable space of values that the variable can take on. In that context, and in statistics, those values are known as random variates, or occasionally random deviates, and this represents a wider meaning than just that associated with pseudorandom numbers.
Verification and Validation of Simulation Model
Definitions
Verification is the process of determining that a model implementation and its associated data accurately represent the developer's conceptual description and specifications.

Validation is the process of determining the degree to which a simulation model and its associated data are an accurate representation of the real world from the perspective of the intended uses of the model.
Modeling and simulation (M&S) can be an important element in the acquisition of systems within government organizations. M&S is used during development to explore the design trade space and inform design decisions, and in conjunction with testing and analysis to gain confidence that the design implementation is performing as expected, or to assist troubleshooting if it is not. M&S allows decision makers and stakeholders to quantify certain aspects of performance during the system development phase, and to provide supplementary data during the testing phase of system acquisition. More importantly, M&S may play a key role in the qualification ("sell-off") of a system as a means to reduce the cost of a verification test program. Here, the development of a simulation model that has undergone a formal verification, validation, and accreditation (VV&A) process is not only desirable, but essential.
Consider the following definitions for the phases of the simulation model VV&A process:
Verification: "The process of determining that a model implementation and its associated data accurately represent the developer’s conceptual description and specifications."
Validation: "The process of determining the degree to which a [simulation]model and its associated data are an accurate representation of the real world from the perspective of the intended uses of the model."
Accreditation: "The official certification that a model, simulation, or federation of models and simulations and its associated data are acceptable for use for a specific purpose."
Simulation Conceptual Model: "The developer's description of what the model or simulation will represent, the assumptions limiting those representations, and other capabilities needed to satisfy the user's requirements."
Output Analysis
We can divide simulation models into two basic types for output analysis:
1. Terminating Systems
2. Non Terminating Systems
Terminating Systems
A retail or commercial establishment such as a bank or a store opens for business at 9:00 a.m. and closes at 5:00 p.m. At closing, no new customers are admitted, but existing customers are served before leaving. Initially, of course, the system contains no customers.
A shipyard has obtained an order for 5 bulk carriers, which will take approximately 2.5 years in total to complete. The company would like to simulate a range of alternative production plans to minimise production costs and time. The simulation would start, perhaps, from the date the last ship of the preceding order leaves the yard.
Non Terminating Systems
In an absolute sense, of course, all systems are terminating, since human activity has a finite duration. In terms of simulation modelling, we can regard a non-terminating system as one in which we wish to model a fixed time duration that forms part of a system which runs for a long time relative to that fixed duration.
As part of a North Sea oil operation, a pipe-laying barge is used to continuously lay pipe on the sea bed. The modelling task is to optimise the day's throughput of pipe by modelling all the associated activities (including ship delivery of pipe sections, welding of sections, coupling of sections, painting, X-ray inspection, etc.). Clearly, examining one day's activity is part of a longer process: machines will be either busy or idle at the start, and inventory will be at a level determined by the time of the last delivery and the work rate.
A computer network for routing telephone calls from a local area to the wide area network is to be modelled over a 24-hour period. Clearly this is non-terminating, but it also differs from the previous example in that it is cyclic, because we expect maximum traffic in the morning period and minimum in the middle of the night. It also has a longer periodicity, because we can anticipate reduced traffic at weekends.
The method of analysis of the two types of system is fundamentally different, because for a terminating system we know both the starting conditions and the termination condition. Also, if we make replications of the experiment, each run with differently seeded random numbers, we can be sure that the replications are statistically independent.
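For a terminating system, the usual analysis runs several independent replications with different seeds and forms a confidence interval from the replication means. The sketch below reuses the illustrative single-server queue idea from earlier; the queue parameters, the number of replications, and the run length are all assumed values.

import random
import statistics

def one_replication(seed, n_customers=50, arrival_rate=1.0, service_rate=1.2):
    """One terminating run of a single-server queue; returns the average wait."""
    rng = random.Random(seed)
    clock = server_free_at = total_wait = 0.0
    for _ in range(n_customers):
        clock += rng.expovariate(arrival_rate)
        start = max(clock, server_free_at)
        total_wait += start - clock
        server_free_at = start + rng.expovariate(service_rate)
    return total_wait / n_customers

# Independent replications: each run gets its own seed, so the runs are independent.
results = [one_replication(seed) for seed in range(10)]
mean = statistics.mean(results)
half_width = 2.262 * statistics.stdev(results) / len(results) ** 0.5  # t(0.975, 9 df)
print(f"average wait: {mean:.3f} +/- {half_width:.3f} (95% CI from 10 replications)")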
Case Studies
In the literature, the simulation of manufacturing systems has been predominantly concerned with relatively simple facilities such as job shops, batch production facilities, or flow lines. In general, this work has focused on detailed issues such as the evaluation of layout, or the selection of dispatching or lot-sizing rules. Other research has adopted an aggregate approach, with severe simplifying assumptions, in which important details may have been neglected.
In our work at Newcastle, a large-scale simulation model that allows entire manufacturing facilities to be represented has been developed for investigating manufacturing systems issues. It may be used as a research tool, or it may be used as a planning tool that allows plans to be generated and validated by simulation. The model represents a manufacturing facility under the control of a manufacturing planning and control system.
The Manufacturing System Simulation Model is based upon the discrete event paradigm (Kreutzer, 1986). It has the capability to represent the manufacture of a range of product families with either shallow or deep product structures using jobbing, batch, flow and assembly processes. The model was developed without reference to any particular site and can be configured at run time to represent a specific company using a series of user-friendly forms. The model provides an integrated framework for investigating manufacturing systems problems. Data on the facilities, current schedules, and product information such as product structures and process plans are required as input. A very wide range of issues may be investigated and evaluated in terms of specified performance criteria.
Simulation of computer systems

A computer simulation is a simulation, run on a single computer or a network of computers, to reproduce the behavior of a system. The simulation uses an abstract model (a computer model, or a computational model) to simulate the system. Computer simulations have become a useful part of the mathematical modeling of many natural systems in physics (computational physics), astrophysics, chemistry and biology; of human systems in economics, psychology, and social science; and of engineering systems. Simulation of a system is represented as the running of the system's model. It can be used to explore and gain new insights into new technology and to estimate the performance of systems too complex for analytical solutions.
Computer simulations vary from computer programs that run a few minutes, to network-based groups of computers running for hours, to ongoing simulations that run for days. The scale of events being simulated by computer simulations has far exceeded anything possible (or perhaps even imaginable) using traditional paper-and-pencil mathematical modeling. Over 10 years ago, a desert-battle simulation of one force invading another involved the modeling of 66,239 tanks, trucks and other vehicles on simulated terrain around Kuwait, using multiple supercomputers in the DoD High Performance Computer Modernization Program. Other examples include a 1-billion-atom model of material deformation; a 2.64-million-atom model of the ribosome, the complex that makes proteins in all organisms, in 2005; a complete simulation of the life cycle of Mycoplasma genitalium in 2012; and the Blue Brain project at EPFL (Switzerland), begun in May 2005 to create the first computer simulation of the entire human brain, right down to the molecular level.
Cobweb model
The cobweb model or cobweb theory is an economic model that explains why prices might be subject to periodic fluctuations in certain types of markets. It describes cyclical supply and demand in a market where the amount produced must be chosen before prices are observed. Producers' expectations about prices are assumed to be based on observations of previous prices. Nicholas Kaldor analyzed the model in 1934, coining the term cobweb theorem (see Kaldor, 1938 and Pashigian, 2008), citing previous analyses in German by Henry Schultz and U. Ricci.

The cobweb model is based on a time lag between supply and demand decisions. Agricultural markets are a context where the cobweb model might apply, since there is a lag between planting and harvesting (Kaldor, 1934, pp. 133-134, gives two agricultural examples: rubber and corn). Suppose, for example, that as a result of unexpectedly bad weather, farmers go to market with an unusually small crop of strawberries. This shortage, equivalent to a leftward shift in the market's supply curve, results in high prices. If farmers expect these high price conditions to continue, then in the following year they will raise their production of strawberries relative to other crops. Therefore, when they go to market, the supply will be high, resulting in low prices. If they then expect low prices to continue, they will decrease their production of strawberries for the next year, resulting in high prices again.
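With linear demand and supply curves, the cobweb dynamics can be iterated directly: producers choose quantity from last period's price, and the new price then clears demand. The sketch below uses made-up coefficients; whether the oscillations damp out or explode depends on the relative slopes of the two curves (here, the ratio d/b).

def cobweb_prices(a=20.0, b=2.0, c=2.0, d=1.5, p0=3.0, periods=8):
    """Demand Q = a - b*P; next-period supply Q = c + d*P_prev (expected price = last price).
    Market clearing gives P_t = (a - c - d*P_{t-1}) / b; oscillations damp when d/b < 1."""
    prices = [p0]
    for _ in range(periods):
        prices.append((a - c - d * prices[-1]) / b)
    return prices

print([round(p, 2) for p in cobweb_prices()])       # d/b = 0.75 < 1: oscillations damp out
print([round(p, 2) for p in cobweb_prices(d=2.5)])  # d/b = 1.25 > 1: oscillations grow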

Modeling and simulation (M&S) is getting information about how something will behave without actually testing it in real life. For instance, if we wanted to design a racecar, but werent sure what type of spoiler would improve traction the most, we would be able to use a computer simulation of the car to estimate the effect of different spoiler shapes on the coefficient of friction in a turn. Were getting useful insights about different decisions we could make for the car without actually building the car.
More generally, M&S is using models, including emulators, prototypes, simulators, and stimulators, either statically or over time, to develop data as a basis for making managerial or technical decisions. The terms "modeling" and "simulation" are often used interchangeably. The use of M&S within engineering is well recognized. Simulation technology belongs to the tool set of engineers of all application domains and has been included in the body of knowledge of engineering management. M&S has already helped to reduce costs, increase the quality of products and systems, and document and archive lessons learned.
M&S is a discipline on its own. Its many application domains often lead to the assumption that M&S is pure application. This is not the case and needs to be recognized by engineering management experts who want to use M&S. To ensure that the results of simulation are applicable to the real world, the engineering manager must understand the assumptions, conceptualizations, and implementation constraints of this emerging field.
Simulation
Simulation is the imitation of the operation of a real-world process or system over time. The act of simulating something first requires that a model be developed; this model represents the key characteristics or behaviors/functions of the selected physical or abstract system or process. The model represents the system itself, whereas the simulation represents the operation of the system over time.
Simulation is used in many contexts, such as simulation of technology for performance optimization, safety engineering, testing, training, education, and video games. Often, computer experiments are used to study simulation models. Simulation is also used with scientific modelling of natural systems or human systems to gain insight into their functioning. Simulation can be used to show the eventual real effects of alternative conditions and courses of action. Simulation is also used when the real system cannot be engaged, because it may not be accessible, or it may be dangerous or unacceptable to engage, or it is being designed but not yet built, or it may simply not exist.
Key issues in simulation include acquisition of valid source information about the relevant selection of key characteristics and behaviours, the use of simplifying approximations and assumptions within the simulation, and fidelity and validity of the simulation outcomes.
Its actually easier to first discuss which models are least appropriate for simulation. If the model is solvable and deterministic and static, then it is a poor candidate for simulation. Of course it can still be simulated, but an analytical solution may be more cost-effective and provide an exact answer. On the other hand, if a model is simulatable or stochastic or dynamic, it is an excellent candidate for simulation.
Advantages of Simulation
1. Simulation is a relatively straightforward and flexible method of characterizing system behavior
2. Simulation modeling can be used to analyze large, complex systems that cannot be solved by conventional operations management models
3. Real-world system complexities (such as non-standard distributions for stochastic processes) can be used that are not permitted by most management models
4. Time compression is possible, showing effects of system polices over months or years
5. A model allows comparison of policy options while holding other variables constant
6. Computer simulations allow physical operations to continue without disruption and without risk
7. Simulation analysis can isolate the behavior of individual components within the system
Disadvantages of Simulation
1. Good simulation models can be expensive. Sound judgment and a planned development path are needed to find and focus on the most important system issues
2. Simulation is inherently a trial-and-error approach that relies on the skill and experience of the simulator. Simulation may not produce optimal solutions for models which are solvable, deterministic, and static.
3. The simulation team must generate all conditions and constraints of the system, as the model relies on realistic input.

What is a good simulation application?
1. Systems where it is too expensive or risky to do live tests. Simulation provides an inexpensive, risk-free way to test changes ranging from a "simple" revision to an existing production line to emulation of a new control system or redesign of an entire supply chain.
2. Large or complex systems for which change is being considered. A "best guess" is usually a poor substitute for an objective analysis. Simulation can accurately predict their behavior under changed conditions and reduce the risk of making a poor decision.
3. Systems where predicting process variability is important. A spreadsheet analysis cannot capture the dynamic aspects of a system, aspects which can have a major impact on system performance. Simulation can help you understand how various components interact with each other and how they affect overall system performance.
4. Systems where you have incomplete data. Simulation cannot invent data where it does not exist, but simulation does well at determining sensitivity to unknowns. A high-level model can help you explore alternatives. A more detailed model can help you identify the most important missing data.
5. Systems where you need to communicate ideas. Development of a simulation helps participants better understand the system. Modern 3D animation and other tools promote communication and understanding across a wide audience.
history of simulation software

Simulation in entertainment encompasses many large and popular industries such as film, television, video games (including serious games) and rides in theme parks. Although modern simulation is thought to have its roots in training and the military, in the 20th century it also became a conduit for enterprises which were more hedonistic in nature. Advances in technology in the 1980s and 1990s caused simulation to become more widely used and it began to appear in movies such as Jurassic Park (1993) and in computer-based games such as Atari’s Battlezone (1980).
The first simulation game may have been created as early as 1947 by Thomas T. Goldsmith Jr. and Estle Ray Mann. This was a straightforward game that simulated a missile being fired at a target. The curve of the missile and its speed could be adjusted using several knobs. In 1958 a computer game called “Tennis for Two” was created by Willy Higginbotham which simulated a tennis game between two players who could both play at the same time using hand controls and was displayed on an oscilloscope. This was one of the first electronic video games to use a graphical display.
Type of models
Active models
Active models that attempt to reproduce living anatomy or physiology are recent developments. The famous “Harvey” mannequin was developed at the University of Miami and is able to recreate many of the physical findings of the cardiology examination, including palpation, auscultation, and electrocardiography.
Interactive models
More recently, interactive models have been developed that respond to actions taken by a student or physician. Until recently, these simulations were two dimensional computer programs that acted more like a textbook than a patient. Computer simulations have the advantage of allowing a student to make judgments, and also to make errors. The process of iterative learning through assessment, evaluation, decision making, and error correction creates a much stronger learning environment than passive instruction.
Computer simulators
Simulators have been proposed as an ideal tool for assessment of students for clinical skills. For patients, "cybertherapy" can be used for sessions simulating traumatic expericences, from fear of heights to social anxiety.
Programmed patients and simulated clinical situations, including mock disaster drills, have been used extensively for education and evaluation. These “lifelike” simulations are expensive, and lack reproducibility. A fully functional "3Di" simulator would be the most specific tool available for teaching and measurement of clinical skills. Gaming platforms have been applied to create these virtual medical environments to create an interactive method for learning and application of information in a clinical context. Immersive disease state simulations allow a doctor or HCP to experience what a disease actually feels like. Using sensors and transducers symptomatic effects can be delivered to a participant allowing them to experience the patients disease state.
Such a simulator meets the goals of an objective and standardized examination for clinical competence. This system is superior to examinations that use "standard patients" because it permits the quantitative measurement of competence, as well as reproducing the same objective findings.
Manual Simulation of Systems
In most simulation studies, we are concerned with the simulation of some system. Thus, in order to model a system, we must understand the concept of a system. A system is a collection of entities that act and interact toward the accomplishment of some logical end.
Systems generally tend to be dynamic – their status changes over time. To describe this status, we use the concept of the state of a system.
Simulation of Queueing Systems
A queueing system is described by
•Calling population
•Arrival rate
•Service mechanism
•System capacity
•Queueing discipline
time-shared computer model
In computing, time-sharing is the sharing of a computing resource among many users by means of multiprogramming and multi-tasking. Its introduction in the 1960s, and emergence as the prominent model of computing in the 1970s, represented a major technological shift in the history of computing.
By allowing a large number of users to interact concurrently with a single computer, time-sharing dramatically lowered the cost of providing computing capability, made it possible for individuals and organizations to use a computer without owning one, and promoted the interactive use of computers and the development of new interactive applications.
job-shop model
Job shops are typically small manufacturing systems that handle job production, that is, custom/bespoke or semi-custom/bespoke manufacturing processes such as small to medium-size customer orders or batch jobs. Job shops typically move on to different jobs (possibly with different customers) when each job is completed. In job shops machines are aggregated in shops by the nature of skills and technological processes involved, each shop therefore may contain different machines, which gives this production system processing flexibility, since jobs are not necessarily constrained to a single machine. In computer science the problem of job shop scheduling is considered strongly NP-hard. In a job shop product flow is twisted, also notice that in this drawing each shop contains a single machine.

A typical example would be a machine shop, which may make parts for local industrial machinery, farm machinery and implements, boats and ships, or even batches of specialized components for the aircraft industry. Other types of common job shops are grinding, honing, jig-boring, gear manufacturing, and fabrication shops. The opposite would be continuous flow manufactures such as textile, steel,food manufacturing and manual labor.
Discrete Event Formalisms
DEVS abbreviating Discrete Event System Specification is a modular and hierarchical formalism for modeling and analyzing general systems that can be discrete event systems which might be described by state transition tables, and continuous state systems which might be described by differential equations, and hybrid continuous state and discrete event systems. DEVS is a timed event system.
DEVS defines system behavior as well as system structure. System behavior in DEVS formalism is described using input and output events as well as states. For example, for the ping-pong player of Fig. 1, the input event is ?receive, and the output event is !send. Each player, A, B, has its states: Send and Wait. Send state takes 0.1 seconds to send back the ball that is the output event !send, while Wait lasts the state until the player receives the ball that is the input event ?receive.
DEVS is a formalism for modeling and analysis of discrete event systems (DESs). The DEVS formalism was invented by Bernard P. Zeigler, who is emeritus professor at the University of Arizona. DEVS was introduced to the public in Zeiglers first book, Theory of Modeling and Simulation, in 1976, while Zeigler was an associate professor at University of Michigan. DEVS can be seen as an extension of the Moore machine formalism, which is a finite state automaton where the outputs are determined by the current state alone (and do not depend directly on the input). The extension was done by
associating a lifespan with each state [Zeigler76],
providing a hierarchical concept with an operation, called coupling [Zeigler84].
Since the lifespan of each state is a real number (more precisely, non-negative real) or infinity, it is distinguished from discrete time systems, sequential machines, and Moore machines, in which time is determined by a tick time multiplied by non-negative integers. Moreover, the lifespan can be a random variable; for example the lifespan of a given state can be distributed exponentially or uniformly. The state transition and output functions of DEVS can also be stochastic.
discrete event simulation

In the field of simulation, a discrete-event simulation (DES), models the operation of a system as a discrete sequence of events in time. Each event occurs at a particular instant in time and marks a change of state in the system. Between consecutive events, no change in the system is assumed to occur; thus the simulation can directly jump in time from one event to the next. This contrasts with continuous simulation in which the simulation continuously tracks the system dynamics over time. Instead of being event-based, this is called an activity-based simulation; time is broken up into small time slices and the system state is updated according to the set of activities happening in the time slice. Because discrete-event simulations do not have to simulate every time slice, they can typically run much faster than the corresponding continuous simulation.
Another alternative to event-based simulation is process-based simulation. In this approach, each activity in a system corresponds to a separate process, where a process is typically simulated by a thread in the simulation program. In this case, the discrete events, which are generated by threads, would cause other threads to sleep, wake, and update the system state.
A more recent method is the three-phased approach to discrete event simulation (Pidd, 1998). In this approach, the first phase is to jump to the next chronological event. The second phase is to execute all events that unconditionally occur at that time (these are called B-events). The third phase is to execute all events that conditionally occur at that time (these are called C-events). The three phase approach is a refinement of the event-based approach in which simultaneous events are ordered so as to make the most efficient use of computer resources. The three-phase approach is used by a number of commercial simulation software packages, but from the users point of view, the specifics of the underlying simulation method are generally hidden.
Statistical Models in Simulation
In this section, statistical models appropriate to some application areas are presented. The areas include:
Queueing systems
Inventory and supply-chain systems
Reliability and maintainability
Limited data
Discrete Distributions
Discrete random variables are used to describe random phenomena in which only integer values can occur.
In this section, we will learn about:
1. Bernoulli trials and Bernoulli distribution
2. Binomial distribution
3. Geometric and negative binomial distribution
4. Poisson distribution
Continuous Distributions
Continuous random variables can be used to describe random phenomena in which the variable can take on any value in some interval.
In this section, the distributions studied are:
1. Uniform
2. Exponential
3. Normal
4. Weibull
5. Lognormal
Empirical Distributions

A distribution whose parameters are the observed values in a sample of data.
1. May be used when it is impossible or unnecessary to establish that a random variable has any particular parametric distribution.
2. Advantage: no assumption beyond the observed values in the sample.
3. Disadvantage: sample might not cover the entire range of possible values.
Poisson Distribution
Definition: N(t) is a counting function that represents the number of events occurred in [0,t].
A counting process {N(t), t>=0} is a Poisson process with mean rate ? if:
1. Arrivals occur one at a time
2. {N(t), t>=0} has stationary increments
3. {N(t), t>=0} has independent increments
Properties
1. Equal mean and variance: E[N(t)] = V[N(t)] =?t
2. Stationary increment: The number of arrivals in time s to t is also Poisson-distributed with mean ? (t-s)
Queueing Models
Purpose
Simulation is often used in the analysis of queueing models. Queueing models provide the analyst with a powerful tool for designing and evaluating the performance of queueing systems.
Key elements of queueing systems
Customer: refers to anything that arrives at a facility and requires service, e.g., people, machines, trucks, emails.
Server: refers to any resource that provides the requested service, e.g., repairpersons, retrieval machines, runways at airport.
A queueing system consists of a number of service centers and interconnected queues. Each service center consists of some number of servers, c, working in parallel; upon reaching the head of the line, a customer takes the first available server.
System Capacity: a limit on the number of customers that may be in the waiting line or system.
1. Limited capacity, e.g., an automatic car wash only has room for 10 cars to wait in line to enter the mechanism.
2. Unlimited capacity, e.g., concert ticket sales with no limit on the number of people allowed to wait to purchase tickets
Queue behavior: the actions of customers while in a queue waiting for service to begin, for example:
Balk: leave when they see that the line is too long,
Renege: leave after being in the line when it is moving too slowly,
Jockey: move from one line to a shorter line.
Queue discipline: the logical ordering of customers in a queue that determines which customer is chosen for service when a server becomes free, for example:
First-in-First-out (FIFO)
Last-in-First-out (LIFO)
Service in random order (SIRO)
Shortest processing time first (SPT)
Service according to priority (PR).
Networks of Queues
Many systems are naturally modeled as networks of single queues: customers departing from one queue may be routed to another.
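To tie these elements together, here is a minimal single-server FIFO queue simulated with the event-list method described earlier, in the spirit of an M/M/1 model (the arrival rate, service rate, and run length are arbitrary illustrative values):

import heapq
import numpy as np

rng = np.random.default_rng(seed=3)
arrival_rate, service_rate, horizon = 0.8, 1.0, 10_000.0

clock, busy, queue, waits = 0.0, False, [], []
fel = [(rng.exponential(1 / arrival_rate), "arrival")]          # future event list

while fel:
    clock, kind = heapq.heappop(fel)
    if clock > horizon:
        break
    if kind == "arrival":
        heapq.heappush(fel, (clock + rng.exponential(1 / arrival_rate), "arrival"))
        if busy:
            queue.append(clock)                                  # customer joins the FIFO line
        else:
            busy = True
            waits.append(0.0)                                    # served immediately, zero wait
            heapq.heappush(fel, (clock + rng.exponential(1 / service_rate), "departure"))
    else:                                                        # a departure
        if queue:
            arrived = queue.pop(0)                               # FIFO discipline
            waits.append(clock - arrived)
            heapq.heappush(fel, (clock + rng.exponential(1 / service_rate), "departure"))
        else:
            busy = False

print("average wait in queue:", np.mean(waits))
# For these rates the M/M/1 formula Wq = rho / (mu - lambda) gives 0.8 / 0.2 = 4.0,
# a useful sanity check on the simulated average.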
Random Number Generation
A random number generator (RNG) is a computational or physical device designed to generate a sequence of numbers or symbols that lack any pattern, i.e. appear random.
The many applications of randomness have led to the development of several different methods for generating random data. Many of these have existed since ancient times, including dice, coin flipping, the shuffling of playing cards, the use of yarrow stalks (for divination) in the I Ching, and many other techniques. Because of the mechanical nature of these techniques, generating large numbers of sufficiently random numbers (important in statistics) required a lot of work and/or time. Thus, results would sometimes be collected and distributed as random number tables. Since the advent of computational random number generators, a growing number of government-run lotteries and lottery games have started using RNGs instead of more traditional drawing methods. RNGs are also used today to determine the odds of modern slot machines.
Several computational methods for random number generation exist. Many fall short of the goal of true randomness, though they may meet, with varying success, some of the statistical tests for randomness intended to measure how unpredictable their results are (that is, to what degree their patterns are discernible). However, carefully designed, cryptographically secure, computationally based methods of generating random numbers do exist, such as those based on the Yarrow algorithm, Fortuna, and others.
Practical applications and uses
Random number generators have applications in gambling, statistical sampling, computer simulation, cryptography, completely randomized design, and other areas where producing an unpredictable result is desirable. Note that where unpredictability is paramount – such as in security applications – hardware generators are generally preferred (where feasible) over pseudo-random algorithms.
Random number generators are very useful in developing Monte Carlo-method simulations, as debugging is facilitated by the ability to run the same sequence of random numbers again by starting from the same random seed. They are also used in cryptography – so long as the seed is secret. Sender and receiver can generate the same set of numbers automatically to use as keys.
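A small illustration of the reproducibility point: fixing the seed makes the pseudo-random sequence repeatable, so a Monte Carlo run can be replayed exactly while debugging (the seed value here is arbitrary):

import random

run_a = random.Random(12345)                  # two generators started from the same seed ...
run_b = random.Random(12345)
print([run_a.random() for _ in range(3)])
print([run_b.random() for _ in range(3)])     # ... produce exactly the same sequence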
The generation of pseudo-random numbers is an important and common task in computer programming. While cryptography and certain numerical algorithms require a very high degree of apparent randomness, many other operations only need a modest amount of unpredictability. Some simple examples might be presenting a user with a "Random Quote of the Day", or determining which way a computer-controlled adversary might move in a computer game. Weaker forms of randomness are used in hash algorithms and in creating amortized searching and sorting algorithms.
Some applications which appear at first sight to be suitable for randomization are in fact not quite so simple. For instance, a system that "randomly" selects music tracks for a background music system must only appear random, and may even have ways to control the selection of music: a true random system would have no restriction on the same item appearing two or three times in succession.
"True" random numbers vs. pseudo-random numbers

There are two principal methods used to generate random numbers. The first method measures some physical phenomenon that is expected to be random and then compensates for possible biases in the measurement process. Example sources include measuring atmospheric noise, thermal noise, and other external electromagnetic and quantum phenomena. For example, cosmic background radiation or radioactive decay as measured over short timescales represent sources of natural entropy.
The speed at which entropy can be harvested from natural sources depends on the underlying physical phenomena being measured. Thus, sources of naturally occurring true entropy are said to be blocking, i.e., rate-limited until enough entropy is harvested to meet the demand. On some Unix-like systems, including Linux, the pseudo-device file /dev/random will block until sufficient entropy is harvested from the environment. Because of this blocking behavior, large bulk reads from /dev/random, such as filling a hard disk with random bits, can often be slow.
The second method uses computational algorithms that can produce long sequences of apparently random results, which are in fact completely determined by a shorter initial value, known as a seed or key. Generators of this type are often called pseudorandom number generators. They do not typically rely on sources of naturally occurring entropy (though they may be periodically seeded by natural sources), so they are non-blocking, i.e., not rate-limited by an external event.
A "random number generator" based solely on deterministic computation cannot be regarded as a "true" random number generator in the purest sense of the word, since their output is inherently predictable if all seed values are known. In practice however they are sufficient for most tasks. Carefully designed and implemented pseudo-random number generators can even be certified for security-critical cryptographic purposes, as is the case with the yarrow algorithm and fortuna (PRNG).
Generation methods
Physical methods
The earliest methods for generating random numbers — dice, coin flipping, roulette wheels — are still used today, mainly in games and gambling, as they tend to be too slow for most applications in statistics and cryptography.

A physical random number generator can be based on an essentially random atomic or subatomic physical phenomenon whose unpredictability can be traced to the laws of quantum mechanics. Sources of entropy include radioactive decay, thermal noise, shot noise, avalanche noise in Zener diodes, clock drift, the timing of actual movements of a hard disk read/write head, and radio noise. However, physical phenomena and tools used to measure them generally feature asymmetries and systematic biases that make their outcomes not uniformly random. A randomness extractor, such as a cryptographic hash function, can be used to approach a uniform distribution of bits from a non-uniformly random source, though at a lower bit rate.
Various imaginative ways of collecting this entropic information have been devised. One technique is to run a hash function against a frame of a video stream from an unpredictable source. Lavarand used this technique with images of a number of lava lamps. HotBits measures radioactive decay with Geiger–Muller tubes, while Random.org uses variations in the amplitude of atmospheric noise recorded with a normal radio.
Another common entropy source is the behavior of human users of the system. While people are not considered good randomness generators upon request, they generate random behavior quite well in the context of playing mixed strategy games. Some security-related computer software requires the user to make a lengthy series of mouse movements or keyboard inputs to create sufficient entropy needed to generate random keys or to initialize pseudorandom number generators.
Generation from a probability distribution
There are two main approaches to generating a random number from a given probability density function; both transform a uniform random number in some way, so they work equally well for pseudo-random and true random inputs. The first, called the inversion method, generates a uniform random number u between 0 and 1 and returns the value x at which the cumulative area under the density first reaches u (that is, it inverts the cumulative distribution function). The second, called the acceptance-rejection method, chooses candidate x and y values and tests whether the density at x is at least y; if it is, the x value is accepted, otherwise the x value is rejected and the algorithm tries again.
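A sketch of both methods, using the exponential distribution for inversion and the triangular density f(x) = 2x on [0, 1] for acceptance-rejection (the particular densities, rates, and function names are illustrative choices):

import math
import random

rng = random.Random(2024)

def exponential_by_inversion(rng, rate):
    """Inversion: solve F(x) = u for x, where F(x) = 1 - exp(-rate * x)."""
    u = rng.random()                       # uniform on [0, 1)
    return -math.log(1.0 - u) / rate       # x = F^{-1}(u)

def triangular_by_rejection(rng):
    """Acceptance-rejection for f(x) = 2x on [0, 1], which is bounded above by c = 2."""
    while True:
        x = rng.random()                   # candidate x from the uniform proposal
        y = rng.uniform(0.0, 2.0)          # uniform height under the bound c
        if y <= 2.0 * x:                   # accept when the point lies under f(x)
            return x

exp_mean = sum(exponential_by_inversion(rng, rate=1.5) for _ in range(10_000)) / 10_000
tri_mean = sum(triangular_by_rejection(rng) for _ in range(10_000)) / 10_000
print(exp_mean)   # close to 1/1.5, about 0.667
print(tri_mean)   # close to 2/3, the mean of f(x) = 2x on [0, 1]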
By humans
Random number generation may also be done by humans directly. However, most studies find that human subjects have some degree of nonrandomness when generating a random sequence of, e.g., digits or letters. They may alternate too much between choices compared to a good random generator.
Random Variate Generation
In the mathematical fields of probability and statistics, a random variate is a particular outcome of a random variable: other random variates generated from the same random variable may take different values. Random variates are used when simulating processes driven by random influences (stochastic processes). In modern applications, such simulations derive random variates corresponding to any given probability distribution from computer procedures designed to create random variates corresponding to a uniform distribution, where these procedures actually provide values chosen from a uniform distribution of pseudorandom numbers.
Procedures to generate random variates corresponding to a given distribution are known as procedures for random variate generation or pseudo-random number sampling. In probability theory, a random variable is a measurable function from a probability space to a measurable space of values that the variable can take on. In that context, and in statistics, those values are known as random variates, or occasionally random deviates, and this represents a wider meaning than just that associated with pseudorandom numbers.
Verification and Validation of Simulation Model
Definitions
Verification is the process of determining that a model implementation and its associated data accurately represent the developer's conceptual description and specifications.

Validation is the process of determining the degree to which a simulation model and its associated data are an accurate representation of the real world from the perspective of the intended uses of the model.
Modeling and simulation (M&S) can be an important element in the acquisition of systems within government organizations. M&S is used during development to explore the design trade space and inform design decisions, and in conjunction with testing and analysis to gain confidence that the design implementation is performing as expected, or to assist troubleshooting if it is not. M&S allows decision makers and stakeholders to quantify certain aspects of performance during the system development phase, and to provide supplementary data during the testing phase of system acquisition. More important, M&S may play a key role in the qualification ("sell-off”) of a system as a means to reduce the cost of a verification test program. Here, the development of a simulation model that has undergone a formal verification, validation, and accreditation (VV&A) process is not only desirable, but essential.
Consider the following definitions for the phases of the simulation model VV&A process:
Verification: "The process of determining that a model implementation and its associated data accurately represent the developer’s conceptual description and specifications."
Validation: "The process of determining the degree to which a [simulation]model and its associated data are an accurate representation of the real world from the perspective of the intended uses of the model."
Accreditation: "The official certification that a model, simulation, or federation of models and simulations and its associated data are acceptable for use for a specific purpose."
Simulation Conceptual Model: "The developer's description of what the model or simulation will represent, the assumptions limiting those representations, and other capabilities needed to satisfy the user's requirements."
Output Analysis
We can divide simulation models into two basic types for output analysis:
1. Terminating Systems
2. Non-Terminating Systems
Terminating Systems
A retail or commercial establishment such as a bank or a store opens for business at 9:00 a.m. and closes at 5:00 p.m. On closing, no new customers are admitted, but existing customers are served before leaving. Initially, of course, the system contains no customers.
A shipyard has obtained an order for 5 bulk carriers, which will take in total approximately 2.5 years to complete. The company would like to simulate a range of alternative production plans to minimise production costs and time. The simulation would start, perhaps, from the date of the last ship of the preceding order leaving the yard.
Non-Terminating Systems
In an absolute sense, of course, all systems are terminating, since human activity has a finite duration. In terms of simulation modelling, we can regard a non-terminating system as one in which we wish to model a fixed time duration that forms part of a system which runs for a long time relative to that fixed duration.
As part of a North Sea oil operation, a pipe-laying barge is used to continuously lay pipe on the sea bed. The modelling task is to optimise the day's throughput of pipe by modelling all the associated activities (including ship delivery of pipe sections, welding sections, coupling sections, painting, X-ray inspection, etc.). Clearly, examining one day's activity is part of a longer process: machines will be either busy or idle at the start, and inventory will be at a level determined by the time of the last delivery and the work rate.
A computer network for routing telephone calls from a local area to the wide area network is to be modelled over a 24-hour period. Clearly this is non-terminating, but it also differs from the previous example in that it is cyclic: we expect maximum traffic in the morning period and minimum traffic in the middle of the night. It also has a longer periodicity, because we can anticipate reduced traffic at weekends.
The method of analysis of the two types of system is fundamentally different, because for a terminating system we know both the starting conditions and the termination condition. Also, if we make replications of the experiment, each run with differently seeded random numbers, we can be sure that the replications are statistically independent.
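For a terminating system, the usual output analysis is therefore to make n independent replications, each with its own seed, record one summary statistic per replication, and build a confidence interval from the replication means. A minimal sketch, assuming a hypothetical function run_once(seed) that performs one terminating run and returns a single performance measure (for example, the average customer wait):

import statistics

def replicate(run_once, n_reps=30, t_value=2.045):
    """Independent replications and an approximate 95% confidence interval for the mean.

    run_once(seed) is assumed to simulate one terminating run and return one number;
    t_value is the Student-t quantile for n_reps - 1 = 29 degrees of freedom.
    """
    results = [run_once(seed) for seed in range(n_reps)]    # distinct seeds -> independent runs
    mean = statistics.mean(results)
    half_width = t_value * statistics.stdev(results) / (n_reps ** 0.5)
    return mean, (mean - half_width, mean + half_width)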
Case Studies
In the literature, the simulation of manufacturing systems has been predominantly concerned with relatively simple facilities such as job shops, batch production facilities, or flow lines. In general, this work has focused on detailed issues such as the evaluation of layout or the selection of dispatching or lot-sizing rules. Other research has adopted an aggregate approach, with severe simplifying assumptions, in which important details may have been neglected.
In our work at Newcastle, a large-scale simulation model that allows entire manufacturing facilities to be represented has been developed for investigating manufacturing systems issues. It may be used as a research tool, or as a planning tool that allows plans to be generated and validated by simulation. The model represents a manufacturing facility under the control of a manufacturing planning and control system.
The Manufacturing System Simulation Model is based upon the discrete-event paradigm (Kreutzer, 1986). It has the capability to represent the manufacture of a range of product families with either shallow or deep product structures using jobbing, batch, flow, and assembly processes. The model was developed without reference to any particular site and can be configured at run time to represent a specific company using a series of user-friendly forms. The model provides an integrated framework for investigating manufacturing systems problems. Data on the facilities, current schedules, and product information such as product structures and process plans are required as input. A very wide range of issues may be investigated and evaluated in terms of specified performance criteria.
Simulation of computer systems

A computer simulation is a simulation, run on a single computer or a network of computers, to reproduce the behavior of a system. The simulation uses an abstract model (a computer model, or a computational model) to simulate the system. Computer simulations have become a useful part of the mathematical modeling of many natural systems in physics (computational physics), astrophysics, chemistry, and biology, and of human systems in economics, psychology, social science, and engineering. Simulation of a system is represented as the running of the system's model. It can be used to explore and gain new insights into new technology and to estimate the performance of systems too complex for analytical solutions.
Computer simulations vary from computer programs that run for a few minutes, to network-based groups of computers running for hours, to ongoing simulations that run for days. The scale of events being simulated by computer simulations has far exceeded anything possible (or perhaps even imaginable) using traditional paper-and-pencil mathematical modeling. Over 10 years ago, a desert-battle simulation of one force invading another involved the modeling of 66,239 tanks, trucks and other vehicles on simulated terrain around Kuwait, using multiple supercomputers in the DoD High Performance Computer Modernization Program. Other examples include a 1-billion-atom model of material deformation; a 2.64-million-atom model of the complex maker of protein in all organisms, the ribosome, in 2005; a complete simulation of the life cycle of Mycoplasma genitalium in 2012; and the Blue Brain project at EPFL (Switzerland), begun in May 2005, to create the first computer simulation of the entire human brain, right down to the molecular level.
Cobweb model
The cobweb model or cobweb theory is an economic model that explains why prices might be subject to periodic fluctuations in certain types of markets. It describes cyclical supply and demand in a market where the amount produced must be chosen before prices are observed. Producers' expectations about prices are assumed to be based on observations of previous prices. Nicholas Kaldor analyzed the model in 1934, coining the term cobweb theorem (see Kaldor, 1938 and Pashigian, 2008), citing previous analyses in German by Henry Schultz and U. Ricci.
The cobweb model is based on a time lag between supply and demand decisions. Agricultural markets are a context where the cobweb model might apply, since there is a lag between planting and harvesting (Kaldor, 1934, pp. 133-134 gives two agricultural examples: rubber and corn). Suppose, for example, that as a result of unexpectedly bad weather, farmers go to market with an unusually small crop of strawberries. This shortage, equivalent to a leftward shift in the market's supply curve, results in high prices. If farmers expect these high price conditions to continue, then in the following year they will raise their production of strawberries relative to other crops. Therefore, when they go to market, the supply will be high, resulting in low prices. If they then expect low prices to continue, they will decrease their production of strawberries for the next year, resulting in high prices again.
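The lag can be written as a simple recursion. With linear demand Q_t = a - b*P_t, lagged supply Q_t = c + d*P_{t-1}, and market clearing, the price follows P_t = (a - c - d*P_{t-1}) / b; the spiral converges when d/b < 1 and diverges when d/b > 1. The sketch below iterates this recursion for arbitrary illustrative coefficients:

def cobweb_prices(a, b, c, d, p0, periods):
    """Iterate P_t = (a - c - d * P_{t-1}) / b for the linear cobweb model."""
    prices = [p0]
    for _ in range(periods):
        prices.append((a - c - d * prices[-1]) / b)
    return prices

# Demand Q = 100 - 2P, lagged supply Q = 10 + 1.5 * P_lag, starting price 20.
# Here d/b = 0.75 < 1, so the oscillations damp toward the equilibrium price (a - c)/(b + d), about 25.7.
print(cobweb_prices(a=100, b=2, c=10, d=1.5, p0=20, periods=8))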
