NOTE: Exercises below may link to supporting files in a GitHub repository. If you follow such a link and, at the GitHub website, right-click a file and choose “Save link as…”, the browser will appear to download the desired file but may in fact fail. The failure is usually discovered when trying to open the downloaded file, typically in MATLAB, and learning that it is not in fact a MATLAB script, function, or SimEvents simulation model.
The remedy is to back up to the project root at the GitHub website (e.g. Courseware or Software), choose “Download ZIP” for the entire project, and find the desired file within the project's ZIP. Our apologies for the inconvenience.
A single workstation has six parallel servers. Each server costs $100,000, and requires 25 hours of processing time per job with ce2 = 6. Jobs arrive at a rate of 0.2/hour with ca2 = 1. The observed cycle time is just over 75 hours, and management has decided that it must be reduced to at most 45 hours.
BIG PICTURE: The purpose of this exercise is to explore performance-improvement strategies for a single workstation, specifically capacity increases versus variability reduction. While the numbers in this example were chosen to make variability reduction look at least as attractive as increasing capacity, note that different sets of numbers may demonstrate something different; for example, increasing from 2 to 3 servers is a 50% capacity increase, which no amount of variability reduction may be able to match. The VUT approximation is expected to align quite well with simulation results.
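The capacity-versus-variability comparison above can be checked with a quick plug-and-chug of the VUT approximation. The sketch below is in Python rather than MATLAB, purely for illustration; the ce2 = 1.5 value in the third case is an assumed variability-reduction target chosen to just meet the 45-hour goal, not part of the problem statement.

```python
from math import sqrt

def vut_ct(ra, te, m, ca2, ce2):
    """Average cycle time CT = V*U*te + te (Kingman/Sakasegawa approximation)."""
    u = ra * te / m                                    # utilization
    V = (ca2 + ce2) / 2                                # variability term
    U = u ** (sqrt(2 * (m + 1)) - 1) / (m * (1 - u))   # utilization term
    return V * U * te + te

base = vut_ct(0.2, 25, 6, 1, 6)        # ~78 h, consistent with the observed 75+ h
add_server = vut_ct(0.2, 25, 7, 1, 6)  # ~41 h: a seventh server meets the 45 h target
less_var = vut_ct(0.2, 25, 6, 1, 1.5)  # ~44 h: cutting ce2 from 6 to 1.5 also meets it
print(base, add_server, less_var)
```

For these numbers, reducing ce2 from 6 to roughly 1.5 matches the cycle-time improvement of a $100,000 server, which is the trade-off the exercise is meant to surface.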
Open the simulation model SingleWksWithTimeBasedFailures.slx, which consists of a single GGkWorkstation_RandomTimeUntilPreempFailure block, a source, and a sink. Configure the model with exponentially-distributed inter-arrival times with mean 25, triangular-distributed processing times with (min, max, mode) of (5, 25, 15), exponentially-distributed time-until-failure with mean 120, and deterministic time-until-repair of 40 (for deterministic, use the normal distribution with stdev=eps or the triangular distribution with min=max=mode). Set the stop time to 100,000, use the Simulation Data Inspector (accessible next to the stop time box) to monitor average cycle time, and perform the following tasks.
BIG PICTURE: The fundamental question is: given a choice of decreasing MTTR or increasing MTTF by equal fractions, which should lead to better overall system performance? In the analytical approximation for the effective processing time's mean, the multiplicative factor is the inverse of availability, and comparing the equation in two cases (one with 0.85*MTTR and the other with 1.15*MTTF) shows that the reduced-MTTR case yields a slightly smaller multiplicative factor, and hence a slightly smaller effective processing time mean. In the analytical approximation for the effective processing time's SCV, the adjustment term is proportional to MTTR, suggesting that a reduction in MTTR should also lead to less processing time variability. These observations should be visible in the simulation results.
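The availability argument can be made concrete with this model's numbers (MTTF = 120, MTTR = 40, and t0 = 15, the mean of the triangular (5, 25, 15) distribution). A minimal sketch in Python, using te = t0/A with A = MTTF/(MTTF + MTTR) per Hopp & Spearman:

```python
def availability(mttf, mttr):
    return mttf / (mttf + mttr)

mttf, mttr = 120.0, 40.0
t0 = 15.0                                            # mean of triangular(5, 25, 15)

base = t0 / availability(mttf, mttr)                 # te = 20.0
less_repair = t0 / availability(mttf, 0.85 * mttr)   # 15% shorter repairs -> te ~ 19.25
more_uptime = t0 / availability(1.15 * mttf, mttr)   # 15% longer uptimes -> te ~ 19.35
print(base, less_repair, more_uptime)
```

Decreasing MTTR wins for any MTTF and MTTR values, since 0.85*MTTR shrinks the denominator of A more than 1.15*MTTF effectively does.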
Open the simulation model SingleWksWithCountBasedSetups.slx, which consists of a single GG1Workstation_RandomCountUntilSetup block, a source, and a sink. Configure the model with exponentially-distributed inter-arrival times with mean 1.5, triangular-distributed processing times with (min, max, mode) of (0.4, 1.4, 1.35), geometric-distributed count-until-setup with p=1/11 (corresponding to a mean of 10), and exponentially-distributed setup times with mean 2. Set the stop time to 50,000, use the Simulation Data Inspector (accessible next to the stop time box) to monitor average cycle time, and perform the following tasks.
BIG PICTURE: The fundamental question is: given the choice of changing various parameters by equal fractions, which should lead to better overall system performance? The most effective improvement should be reducing the processing time, because it directly increases capacity (re = 1/te). The next-most-effective improvements are either reducing the setup time or increasing the count-until-setup; these are the numerator and denominator in the effective processing time mean's additive term. Increasing the count-until-setup appears to reduce variability more, although that may be specific to this problem's parameters, because the effective processing time SCV's additive term decreases as either ts2 decreases or Ns increases. These observations should be visible in the simulation results.
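The ranking of improvements can be previewed through the effective-processing-time mean, te = t0 + ts/Ns, with this model's numbers (t0 = 1.05, the mean of the triangular (0.4, 1.4, 1.35) distribution; ts = 2; Ns = 10). The 15% change fraction below is an arbitrary illustration, and Python is used rather than MATLAB:

```python
def te(t0, ts, Ns):
    return t0 + ts / Ns                  # effective processing time mean with setups

t0, ts, Ns = 1.05, 2.0, 10.0             # natural mean; setup mean; mean count-until-setup

base = te(t0, ts, Ns)                    # 1.25
faster_proc = te(0.85 * t0, ts, Ns)      # cut processing time 15% -> biggest gain
shorter_setup = te(t0, 0.85 * ts, Ns)    # cut setup time 15%
higher_count = te(t0, ts, 1.15 * Ns)     # raise count-until-setup 15%
print(base, faster_proc, shorter_setup, higher_count)
```

All three changes reduce te, but only the processing-time cut reduces the natural portion of the workload; the other two shave the same ts/Ns additive term from opposite sides.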
A machine has a processing time that is triangularly-distributed with (min, max, mode) = (1, 7, 4) minutes. Suppose the arrival rate of jobs is 10/hour and that the MTTR is 2 hours.
BIG PICTURE: This should be a straightforward exercise. For both time-based failures and count-based setups, (1) plug-and-chug with the analytical approximations to determine a minimum time-until-failure or count-until-setup, (2) verify using simulation, and finally (3) change the repair times from deterministic to random variables and see how the workstation performs at the minimum time-until-failure or count-until-setup.
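For the time-based-failures half, the minimum MTTF follows directly from the stability condition u = ra*te < 1 with te = t0/A. A sketch with this problem's numbers (Python for illustration):

```python
t0 = (1 + 7 + 4) / 3        # mean processing time, minutes (triangular(1, 7, 4))
ta = 60 / 10                # mean interarrival time, minutes (10 jobs/hour)
mttr = 120.0                # MTTR, minutes (2 hours)

# Stability requires te = t0/A < ta, i.e. availability A > t0/ta.
A_min = t0 / ta                          # 2/3
mttf_min = mttr * A_min / (1 - A_min)    # solve A = MTTF/(MTTF + MTTR) for MTTF
print(mttf_min)                          # 240 minutes: MTTF must exceed 4 hours
```

In practice the MTTF must be comfortably above this bound, since utilization near 1 still produces enormous queueing.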
A single workstation has processing times with natural mean of 15 minutes and natural CV of 0.223. The machine is subject to preemptive failures with an MTTF (mean-time-to-failure) of 744 minutes. The following table lists five scenarios for the arrival and repair processes.
Scenario | Interarrival Time (distrib, mean) | Repair Time (mean, cr2) | Utilization | te | ce2 | cd2 | cd2 (sim) |
---|---|---|---|---|---|---|---|
A | (exponential, 25) | (248, 0) | |||||
B | (exponential, 25) | (0, 0) | |||||
C | (deterministic, 45) | (248, 0) | |||||
D | (deterministic, 45) | (0, 0) | |||||
E | (deterministic, 20) | (0, 0) |
departTimes = nDepartures.time;  % timestamps of departures logged during the simulation
interDepartTimes = diff(departTimes);  % successive differences are inter-departure times
meanInterDepartTime = mean(interDepartTimes)
stdevInterDepartTime = std(interDepartTimes)
(This yields mean & standard deviation, from which you can easily compute SCV.) Add your simulation results for cd2 to the table’s final column, and comment on the analytical versus simulation results.
BIG PICTURE: (1) is plug-and-chug using analytical approximations from Hopp & Spearman sections 8.4.2 and 8.5.1 (ed. 2). (2) requires explaining how the different values in a table row influence the “linking equation” for cd2. For (3), depending on at what point in a class this exercise is offered, and also on the MATLAB sophistication of students, an instructor may or may not want students typing commands into the MATLAB command window. An alternative is for the instructor to run the simulation, use the first two lines of code above to compute inter-departure times, copy those values from the MATLAB workspace into Excel, and have students compute cd2 in that environment.
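As a worked instance of one table row, scenario A can be pushed through the effective-processing-time equations for preemptive failures and the linking equation (both per Hopp & Spearman; Python used for illustration):

```python
t0, c0 = 15.0, 0.223              # natural mean and CV of processing time
mf, mr, cr2 = 744.0, 248.0, 0.0   # MTTF, repair-time mean, repair-time SCV (scenario A)
ta, ca2 = 25.0, 1.0               # exponential interarrivals with mean 25

A = mf / (mf + mr)                # availability = 0.75
te = t0 / A                       # effective processing time mean = 20
ce2 = c0**2 + (1 + cr2) * A * (1 - A) * mr / t0   # effective processing time SCV
u = te / ta                       # utilization = 0.8
cd2 = (1 - u**2) * ca2 + u**2 * ce2               # linking equation, single server
print(te, ce2, cd2)
```

Even with deterministic repairs (cr2 = 0), the failures inflate ce2 well above the natural c0^2, and the high utilization passes most of that variability downstream through cd2.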
1. Shared Vs Separate Queues: Jobs arrive to a workstation at a rate of 0.5/hour, and historical data shows that they approximate a Poisson process. Compute and compare average cycle time across the following scenarios.
Scenario | Configuration | Processing Time | Average CT (VUT Approximation) | Average CT (Simulation) |
---|---|---|---|---|
A | One shared queue for three servers | Triangular (1,4,7) | ||
B | Three separate queues for three servers | Triangular (1,4,7) | ||
C | One shared queue for three servers | Lognormal (Mu=0.837, Sigma=1.048) | ||
D | Three separate queues for three servers | Lognormal (Mu=0.837, Sigma=1.048) |
Note that three servers with three separate queues should be functionally equivalent to independent G/G/1 workstations, each with an equal fraction of the arrival process. For shared queues use the simulation model SingleWorkstation.slx, and for separate queues use the simulation model SingleWorkstation_ThreeInParallel.slx. In both cases set the stop time to 100,000, run several simulation replications, and use the Simulation Data Inspector (accessible next to the stop time box) to visualize and estimate average cycle time.
EXTRA CREDIT: Simulate scenarios B and D in two separate ways - once with the Output Switch's switching criterion set to Equiprobable, and once with it set to Round robin. Explain any differences in results between these two cases.
BIG PICTURE: This was intended to be a straightforward exercise: separate queues are expected to perform worse than shared queues, and analytical results are expected to align with simulation results. Something interesting emerged when using simulation, however. In a first pass, only the simulation model SingleWorkstation.slx was used, and for the separate-queues scenario only one of the three queues was simulated (the arrival rate was divided by three and the number of servers reduced to one). In a second pass, the simulation model SingleWorkstation_ThreeInParallel.slx was created, which introduced the issue of how to divide jobs between the three separate queues (a routing control decision). Without much thought, the Round robin and Equiprobable switching criteria were expected to be functionally equivalent, but it turns out that there is a big difference between the two. We found it fascinating that while there exist elegant theoretical results for thinning a Poisson process, if you thin it in a certain way (Round robin) rather than just flip a three-sided coin (Equiprobable), then you can get much better performance, because Round robin smooths each queue's arrival process and reduces its interarrival-time SCV. This may be close to the simplest interesting example of why control matters and how it can substantially affect performance.
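The shared-versus-separate comparison, including the routing effect, can be previewed analytically. Round-robin thinning of a Poisson process by three gives Erlang-3 interarrivals at each queue (ca2 = 1/3), while Equiprobable thinning leaves each stream Poisson (ca2 = 1). A sketch in Python for the triangular (1, 4, 7) case (mean 4, variance 1.5):

```python
from math import sqrt

def ctq(ra, te, m, ca2, ce2):
    """Queue time via the VUT approximation with the Sakasegawa utilization term."""
    u = ra * te / m
    V = (ca2 + ce2) / 2
    U = u ** (sqrt(2 * (m + 1)) - 1) / (m * (1 - u))
    return V * U * te

te, ce2 = 4.0, 1.5 / 16   # triangular(1,4,7): mean 4, variance 1.5, so SCV = 1.5/16

shared = te + ctq(0.5, te, 3, 1.0, ce2)              # one shared queue, three servers
equiprob = te + ctq(0.5 / 3, te, 1, 1.0, ce2)        # separate queues, Equiprobable split
roundrobin = te + ctq(0.5 / 3, te, 1, 1.0 / 3, ce2)  # separate queues, Round robin split
print(shared, equiprob, roundrobin)
```

For these numbers the shared queue wins (about 5.0 hours), Round robin comes surprisingly close (about 5.7 hours), and Equiprobable is much worse (about 8.4 hours), matching the simulation surprise described above.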
2. Finite Vs Infinite Queues: Use the simulation model SingleWorkstation.slx to compute average cycle time and average throughput in each of the following scenarios. Use the Simulation Data Inspector (accessible next to the stop time box) to visualize and estimate these parameters; in all cases set the stop time sufficiently high and run a sufficient number of replications before making an estimate.
Scenario | Buffer Capacity | Processing Time Distribution | Arrival Rate (exponential) | Average CT (Simulation) | Average TH (Simulation) |
---|---|---|---|---|---|
E | Infinite | Triangular (1,4,7) | 0.2 jobs/hour | ||
F | B=3 | Triangular (1,4,7) | 0.2 jobs/hour | ||
G | Infinite | Lognormal (Mu=0.837, Sigma=1.048) | 0.2 jobs/hour | ||
H | B=3 | Lognormal (Mu=0.837, Sigma=1.048) | 0.2 jobs/hour | ||
I | Infinite | Lognormal (Mu=0.837, Sigma=1.048) | 0.25 jobs/hour | ||
J | B=3 | Lognormal (Mu=0.837, Sigma=1.048) | 0.25 jobs/hour | ||
K | Infinite | Lognormal (Mu=0.837, Sigma=1.048) | 0.4 jobs/hour | ||
L | B=3 | Lognormal (Mu=0.837, Sigma=1.048) | 0.4 jobs/hour |
BIG PICTURE: Some general themes should be visible in the results. One is that finite buffers enable a stable system even when the arrival rate exceeds the processing rate, although that might be argued as disingenuous if an ever-growing backlog is simply relocated elsewhere. Another theme is that finite buffers are expected to reduce both CT and TH, but CT by a larger fraction than TH, and both by larger fractions for larger processing time variability.
Consider a single workstation with parallel batch processing (the entire batch is processed simultaneously). Suppose that jobs arrive in an approximately-Poisson process with a rate of 160 jobs/day (there are two 8-hour shifts per day), service times are exponentially-distributed with a rate of 1/6 batches/hour, the workstation has one server, and the maximum feasible batch size is 125.
A. Use analytical formulas in Hopp & Spearman (section 9.4 in ed. 2) to answer the following questions.
B. Open the MATLAB script DEMO_Sim_ParallelBatchProcessing_SingleWorkstationThCtWipUtil.m (which controls the simulation model GGkWorkstation_MakeAndMoveBatches_Parallel.slx through the wrapper function SimWrapper_GGkWorkstation_MakeAndMoveBatches_Parallel.m). Configure the model (in the script’s top section, not the simulation model itself) with the given parameters, and then use it to answer the following questions:
C. Are there any large differences between analytical approximation and simulation results? If so, suggest possible explanations.
BIG PICTURE: (A) should be a straightforward exercise, designed to introduce batching semantics and the analytical approximations for the performance of single workstations with batching. (B) should also be straightforward for parallel process batching. One way to assess stability is to monitor the utilization output of the simulation, increasing k until u falls from above 99%. Finding the optimal batch size can be challenging using simulation because batching introduces so much variability; the curve smooths out with long run lengths and many replications, so the strategies listed above to limit the search space are recommended. In (C), no large differences are expected between analytical approximation and simulation results.
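The minimum stable batch size in (B) can be bounded before simulating: with batch size k, batches arrive at rate ra/k and each takes te = 6 hours on average, so stability requires (ra/k)*te < 1. A sketch with this problem's numbers (Python for illustration):

```python
ra = 160 / 16     # jobs/hour (160 jobs/day over two 8-hour shifts)
te = 6.0          # hours per batch (service rate of 1/6 batches/hour)

# Smallest batch size k (up to the 125-job maximum) with utilization below 1:
k_min = next(k for k in range(1, 126) if (ra / k) * te < 1)
print(k_min)      # 61
```

This bounds the search for the optimal batch size to k between 61 and 125, which helps given how noisy the simulated cycle-time curve is.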
The maximum batch size of 125 (127?) is a technical limitation in SimEvents, which inhibits auto-connecting ports that are too densely packed into a single OutputSwitch; if correct, then a re-architecture of the first-generation batching logic is needed. Since the analytical optimum for the numbers chosen in this problem lies close to this maximum, consider tuning the numbers to push the optimum lower.
Consider a single workstation with serial batch processing (jobs in a batch are processed one at a time) and a setup before each batch's processing. Suppose that jobs arrive in an approximately-Poisson process with a rate of 2 jobs/hour, service times are exponentially-distributed with a rate of 4 jobs/hour, the workstation has one server, each setup takes 5 hours, and lot-splitting after service is possible.
A. Use analytical formulas in Hopp & Spearman (section 9.4 in ed. 2) to answer the following questions.
B. Open the MATLAB script DEMO_Sim_SerialBatchProcessing_SingleWorkstationThCtWipUtil.m (which controls the simulation model GG1Workstation_MakeAndMoveBatches_SerialWithSetups.slx through the wrapper function SimWrapper_GG1Workstation_MakeAndMoveBatches_SerialWithSetups.m). Configure the model (in the script’s top section, not the simulation model itself) with the given parameters, and then use it to answer the following questions.
C. Are there any large differences between analytical approximation and simulation results? If so, suggest possible explanations.
BIG PICTURE: (A) should be a straightforward exercise, designed to introduce batching semantics and the analytical approximations for the performance of single workstations with batching. (C) is not straightforward for serial process batching - we observe a huge discrepancy between analytical approximation and simulation results for average cycle time when using the minimum feasible batch size. One reason is that the analytical approximation assumes that the variability of inter-arrival and processing times is not influenced by the batch size, and this suggests a correction: use (ca2+ce2)/(2k) instead of (ca2+ce2)/2 in the VUT equation. This correction may overcompensate, however, because the analytical approximation also oversimplifies WIBT (wait-in-batch time) by considering only the processing time's mean and not its variability; one lesson from studying G/G/k single workstations is that an increase in only the variability (and not the mean) of processing time can increase average cycle time.
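For (A), the minimum feasible batch size follows from capacity: with serial batching each job takes t0 = 0.25 hours plus a share ts/k of the 5-hour setup, so stability requires ra*(t0 + ts/k) < 1. The sketch below (Python for illustration) also shows the proposed variability-term correction at that batch size:

```python
ra, t0, ts = 2.0, 0.25, 5.0   # arrival rate (jobs/hr); time per job (hr); setup per batch (hr)

# Smallest batch size k with utilization ra*(t0 + ts/k) below 1:
k_min = next(k for k in range(1, 1000) if ra * (t0 + ts / k) < 1)
print(k_min)                  # 21

# Proposed correction at batch size k: divide the variability term by 2k instead of 2.
ca2, ce2 = 1.0, 1.0           # Poisson arrivals and exponential service, so both SCVs are 1
V_standard = (ca2 + ce2) / 2
V_corrected = (ca2 + ce2) / (2 * k_min)
print(V_standard, V_corrected)
```

At k = 21 the correction shrinks the variability term by a factor of 21, which is why it risks overcompensating for the discrepancy described above.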