Courseware: Single Workstation Models

NOTE: Exercises below may link to supporting files in a GitHub repository. If you follow such a link and (at the GitHub website) right-click a file and choose “Save link as…”, the browser will appear to download the desired file but may actually save a web page instead. The failure is typically discovered when opening the downloaded file, usually in MATLAB, and finding that it is not in fact a MATLAB script, function, or SimEvents simulation model.

A remedy is to back up to the project root at the GitHub website (e.g. Courseware or Software), choose “Download ZIP” for the entire project, and find the desired file within the downloaded ZIP archive. Our apologies for the inconvenience.


Exercises


Improve Performance: Increase Capacity or Reduce Variability?

A single workstation has six parallel servers. Each server costs $100,000 and requires 25 hours of processing time per job with ce2 = 6. Jobs arrive at a rate of 0.2/hour with ca2 = 1. The observed cycle time is just over 75 hours, and management has decided that it must be reduced to at most 45 hours.

  1. For the given data, estimate what the average cycle time should be using the parallel-servers version of the VUT approximation (a computational sketch appears after this list).
  2. Open the simulation model SingleWorkstation.slx, which consists of a single GGkWorkstation block, a source, and a sink. Configure the model with the given data. Use lognormally-distributed processing times, and note that (mean, SCV) of (25, 6) corresponds to lognormal parameters (threshold, mu, sigma) of (0, 2.246, 1.395). Set the stop time to 500,000, run five simulation replications, and use the Simulation Data Inspector (accessible next to the stop time box) to visualize average cycle time. Copy & paste your figure here for context, and compare the simulation estimate to the analytical estimates obtained above.
  3. To reduce cycle time, suppose capacity is increased by adding a seventh server at a cost of $100,000. What would be the effect of this change on average cycle time? (Use both the VUT approximation and simulation.)
  4. Reset the number of servers to its original value. To reduce cycle time, suppose instead that variability reduction strategies are implemented for 10% of the cost of a new server, reducing ce2 to 1. What would be the effect of this change on average cycle time? (Use both the VUT approximation and simulation, and note that (mean, SCV) of (25, 1) corresponds to lognormal parameters (threshold, mu, sigma) of (0, 2.8723, 0.8325).) Comment on the efficacy of this change.
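
For parts 1, 3, and 4, the computation can be scripted. The following is a minimal MATLAB sketch, assuming the standard parallel-servers (G/G/k) VUT approximation from Hopp & Spearman and the usual moment-matching formulas for the lognormal distribution; the variable names are ours, not anything defined by the courseware models.

% G/G/k VUT approximation: CT = V * U * te + te
ra = 0.2;  te = 25;  k = 6;  ca2 = 1;  ce2 = 6;    % baseline data
u  = ra*te/k;                                      % utilization
V  = (ca2 + ce2)/2;                                % variability term
U  = u^(sqrt(2*(k+1)) - 1) / (k*(1 - u));          % utilization term
CT = V*U*te + te                                   % average cycle time, hours

% Converting (mean, SCV) to lognormal (mu, sigma) with threshold 0:
m = 25;  scv = 6;
sigma = sqrt(log(1 + scv))                         % about 1.395 when scv = 6
mu    = log(m) - sigma^2/2                         % about 2.246 when mean = 25

Rerunning the first block with k = 7 (part 3) or with ce2 = 1 (part 4) gives the corresponding analytical predictions.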

BIG PICTURE: The purpose of this exercise is to explore performance-improvement strategies for a single workstation, specifically capacity increases versus variability reduction. The numbers in this example were chosen to make variability reduction look at least as attractive as increasing capacity, but different numbers can demonstrate something different - for example, increasing from 2 to 3 servers is a 50% capacity increase, which no amount of variability reduction may be able to match. The VUT approximation is expected to align quite well with the simulation results.


Time-Based Failures and their Mitigation

Open the simulation model SingleWksWithTimeBasedFailures.slx, which consists of a single GGkWorkstation_RandomTimeUntilPreempFailure block, a source, and a sink. Configure the model with exponentially-distributed inter-arrival times with mean 25, triangular-distributed processing times with (min, max, mode) of (5, 25, 15), exponentially-distributed time-until-failure with mean 120, and deterministic time-until-repair of 40 (for deterministic, use the normal distribution with stdev=eps or the triangular distribution with min=max=mode). Set the stop time to 100,000, use the Simulation Data Inspector (accessible next to the stop time box) to monitor average cycle time, and perform the following tasks.

  1. Visualize baseline performance by running five simulation replications and visualizing the average cycle time traces. For context, copy & paste your figure here.
  2. Reduce MTTR (mean-time-until-repair) by 15%, run five simulation replications, and visualize the average cycle time traces. For context, copy & paste your figure here.
  3. Reset MTTR to its original value. Increase MTTF (mean-time-until-failure) by 15%, run five simulation replications, and visualize the average cycle time traces. For context, copy & paste your figure here.
  4. What do you observe? Are these observations consistent with what analytical approximations for effective processing time with time-based failures would suggest? (see Hopp & Spearman's section on “Variability from Preemptive Outages”, section 8.4.2 in ed. 2) What can you conclude?

BIG PICTURE: The fundamental question is, given a choice of decreasing MTTR or increasing MTTF by equal fractions, which should lead to better overall system performance? In the analytical approximation for the effective processing time's mean, the multiplicative factor is the inverse of availability, 1/A = 1 + MTTR/MTTF. Comparing the two cases (one with 0.85*MTTR and the other with 1.15*MTTF) shows that reducing MTTR yields the slightly smaller factor, because 0.85 < 1/1.15 ≈ 0.87, and hence the slightly smaller effective processing time mean. In the analytical approximation for the effective processing time's SCV, the adjustment term is proportional to MTTR, suggesting that a reduction in MTTR should also lead to less processing time variability. Both observations should be visible in the simulation results.
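
As a concrete check of this reasoning, here is a minimal MATLAB sketch of the preemptive-outage approximations from Hopp & Spearman section 8.4.2 (te = t0/A with A = MTTF/(MTTF+MTTR), and ce2 = c02 + (1+cr2)*A*(1-A)*mr/t0); the case layout and variable names are ours.

% Effective processing time under preemptive failures, three cases:
t0  = 15;                                         % mean of triangular(5,25,15)
c02 = ((5^2+25^2+15^2 - 5*25 - 5*15 - 25*15)/18) / t0^2;  % natural SCV
cr2 = 0;                                          % deterministic repair
cases = [120 40; 120 0.85*40; 1.15*120 40];       % [MTTF MTTR]: baseline, -15% MTTR, +15% MTTF
for i = 1:size(cases,1)
    MTTF = cases(i,1);  mr = cases(i,2);
    A   = MTTF/(MTTF + mr);                       % availability
    te  = t0/A;                                   % effective mean
    ce2 = c02 + (1 + cr2)*A*(1 - A)*mr/t0;        % effective SCV
    fprintf('A = %.4f, te = %.3f, ce2 = %.3f\n', A, te, ce2);
end

With these numbers, the MTTR reduction should yield both the smaller te and the smaller ce2, which is the comparison the simulation replications are meant to reveal.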


Count-Based Setups and their Mitigation

Open the simulation model SingleWksWithCountBasedSetups.slx, which consists of a single GG1Workstation_RandomCountUntilSetup block, a source, and a sink. Configure the model with exponentially-distributed inter-arrival times with mean 1.5, triangular-distributed processing times with (min, max, mode) of (0.4, 1.4, 1.35), geometric-distributed count-until-setup with p=1/11 (corresponding to a mean of 10), and exponentially-distributed setup times with mean 2. Set the stop time to 50,000, use the Simulation Data Inspector (accessible next to the stop time box) to monitor average cycle time, and perform the following tasks.

  1. Visualize baseline performance by running five simulation replications and visualizing the average cycle time traces. For context, copy & paste your figure here.
  2. Reduce setup time by 15%, run five simulation replications, and visualize the average cycle time traces. For context, copy & paste your figure here.
  3. Reset setup time to its original value. Reduce processing time by 15% (only adjust the mode), run five simulation replications, and visualize the average cycle time traces. For context, copy & paste your figure here.
  4. Reset processing time to its original value. Increase the mean number of jobs between setups by 15%, run five simulation replications, and visualize the average cycle time traces. (If consulting a reference for the geometric distribution, note that SimEvents' EventBasedRandomNumber generator block's geometric distribution is the geometric0 version, i.e. the one with support on the non-negative integers.) For context, copy & paste your figure here.
  5. What do you observe? Are these observations consistent with what analytical approximations for effective processing time with count-based setups would suggest? (see Hopp & Spearman's section on “Variability from Nonpreemptive Outages”, section 8.4.3 in ed. 2) What can you conclude?

BIG PICTURE: The fundamental question is, given the choice of changing various parameters by equal fractions, which should lead to better overall system performance? The most effective improvement should be reducing processing time, because it increases capacity (re = 1/te). The next-most-effective improvements are reducing the setup time or increasing the count-until-setup; these are the numerator and denominator of the additive term in the effective processing time's mean. Reducing setup time appears to reduce variability more than increasing count-until-setup, although that may be specific to this problem's parameters, because the additive term in the effective processing time's SCV shrinks as either ts2 decreases or Ns increases. These observations should be visible in the simulation results.
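
As a concrete check, here is a minimal MATLAB sketch of the nonpreemptive-outage approximations from Hopp & Spearman section 8.4.3 (te = t0 + ts/Ns, and var_e = var0 + vars/Ns + ((Ns-1)/Ns^2)*ts^2, with ce2 = var_e/te^2); the case layout and variable names are ours.

% Effective processing time under count-based setups, three cases:
t0   = (0.4 + 1.4 + 1.35)/3;                       % mean of triangular(0.4,1.4,1.35)
var0 = (0.4^2+1.4^2+1.35^2 - 0.4*1.4 - 0.4*1.35 - 1.4*1.35)/18;
cases = [2 10; 0.85*2 10; 2 1.15*10];              % [ts Ns]: baseline, -15% ts, +15% Ns
for i = 1:size(cases,1)
    ts = cases(i,1);  Ns = cases(i,2);
    vars = ts^2;                                   % exponential setup times
    te   = t0 + ts/Ns;                             % effective mean
    ce2  = (var0 + vars/Ns + ((Ns-1)/Ns^2)*ts^2) / te^2;   % effective SCV
    fprintf('te = %.4f, ce2 = %.4f\n', te, ce2);
end

The processing-time reduction of part 3 can be checked the same way by recomputing t0 and var0 with the reduced mode.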


Variability in Time-Based Failure and Count-Based Setup Parameters

A machine has a process time which is triangularly-distributed with (min, max, mode) = (1,7,4) minutes. Suppose the arrival rate of jobs is 10/hr and that the MTTR is 2 hours.

  1. What is the smallest MTTF that allows the machine to achieve its maximum throughput?
  2. Verify your answer using the simulation model SingleWksWithTimeBasedFailures.slx. Configure the model with the parameter values above, and let inter-arrival times, MTTF, and MTTR all be deterministic (use the normal distribution with stdev=eps or the triangular distribution with min=max=mode). Set the stop time large enough to reach steady-state, run several simulation replications, use the Simulation Data Inspector (accessible next to the stop time box) to visualize average throughput and average cycle time, and copy & paste your figures below.
  3. Using the minimum MTTF computed above, suppose that MTTR is an exponential random variable with a mean of 2 hours. Can the line still achieve its desired throughput? Why or why not? Verify your answer using simulation.
  4. Suppose now that failures are non-preemptive; that is, they are mitigated using preventative maintenance performed between jobs. Maintenance times are still 2 hours. What is the smallest mean number of jobs between maintenance which allows the machine to achieve its maximum throughput?
  5. Verify your answer using the simulation model SingleWksWithCountBasedSetups.slx. Configure the model with the parameter values above, and let inter-arrival times, jobs-until-maintenance, and maintenance times all be deterministic (use the normal distribution with stdev=eps or the triangular distribution with min=max=mode). Set the stop time large enough to reach steady-state, run several simulation replications, use the Simulation Data Inspector (accessible next to the stop time box) to visualize average throughput and average cycle time, and copy & paste your figures below.
  6. Using the smallest mean number of jobs between maintenance computed above, suppose that maintenance time is an exponential random variable with a mean of 2 hours. Can the line still achieve its desired throughput? Why or why not? Verify your answer using simulation.

BIG PICTURE: This should be a straightforward exercise. For both time-based failures and count-based setups, (1) plug-and-chug with the analytical approximations to determine a minimum time-until-failure or count-until-setup, (2) verify using simulation, and finally (3) change repair or maintenance times to random variables and see how the workstation performs at the minimum time-until-failure or count-until-setup.
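
A minimal MATLAB sketch of the plug-and-chug, assuming the standard effective-processing-time relations (variable names are ours):

% Maximum throughput requires re = 1/te >= ra, i.e. te <= 1/ra.
t0 = (1 + 7 + 4)/3;              % natural process time, minutes
ra = 10/60;                      % arrival rate, jobs/minute
teMax = 1/ra;                    % largest workable effective process time
% Time-based failures: te = t0/A with A = MTTF/(MTTF+MTTR), so
% A >= t0/teMax and MTTF >= (A/(1-A))*MTTR at the boundary.
MTTR = 120;                      % minutes
Amin = t0/teMax;
minMTTF = (Amin/(1 - Amin))*MTTR % minutes
% Count-based maintenance: te = t0 + ts/Ns, so Ns >= ts/(teMax - t0).
ts = 120;                        % maintenance time, minutes
minNs = ts/(teMax - t0)          % jobs between maintenance

Strictly, these thresholds make utilization exactly 1, so in simulation expect to need values somewhat above them before queues stabilize.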


Departure Process Variability, Analytical vs Simulation

A single workstation has processing times with natural mean of 15 minutes and natural CV of 0.223. The machine is subject to preemptive failures with an MTTF (mean-time-to-failure) of 744 minutes. The following table lists five scenarios for the arrival and repair processes.

Scenario | Interarrival Time (distrib, mean) | Repair Time (mean, cr2) | Utilization | te | ce2 | cd2 | cd2 (sim)
A        | (exponential, 25)                 | (248, 0)                |             |    |     |     |
B        | (exponential, 25)                 | (0, 0)                  |             |    |     |     |
C        | (deterministic, 45)               | (248, 0)                |             |    |     |     |
D        | (deterministic, 45)               | (0, 0)                  |             |    |     |     |
E        | (deterministic, 20)               | (0, 0)                  |             |    |     |     |
  1. Use analytical approximations to compute all missing table values except the final column.
  2. Comment on the differences in cd2.
  3. Open the simulation model SingleWksWithTimeBasedFailures.slx, which consists of a single GGkWorkstation_RandomTimeUntilPreempFailure block, a source, and a sink. Configure the model for each scenario in the table: processing times with natural mean of 15 minutes and natural CV of 0.223 can be implemented with a triangular distribution with (min, max, mode) of (6.8, 23.2, 15); the analytical approximations assume that MTTF is exponentially-distributed; and deterministic MTTR can be implemented using a normal distribution with stdev=eps or the triangular distribution with min=max=mode. Run one simulation replication for 100,000 time units, and in the MATLAB Command Window use code such as the following to compute inter-departure times.
departTimes = nDepartures.time;                     % departure timestamps logged by the model
interDepartTimes = diff(departTimes);               % successive inter-departure times
meanInterDepartTime = mean(interDepartTimes)
stdevInterDepartTime = std(interDepartTimes)
scvInterDepartTime = (stdevInterDepartTime/meanInterDepartTime)^2

(The last line computes the SCV of the inter-departure times, which estimates cd2.) Add your simulation results for cd2 to the table’s final column, and comment on the analytical versus simulation results.

BIG PICTURE: (1) is plug-and-chug using analytical approximations from Hopp & Spearman sections 8.4.2 and 8.5.1 (ed. 2). (2) requires explaining how the different values in a table row influence the “linking equation” for cd2. For (3), depending on the point in a class at which this exercise is offered and the MATLAB sophistication of the students, an instructor may or may not want students typing commands into the MATLAB Command Window. An alternative is for the instructor to run the simulation, use the first two lines of code above to compute inter-departure times, copy those values from the MATLAB workspace into Excel, and have students compute cd2 in that environment.
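
For part (1), a minimal MATLAB sketch of the chained approximations, assuming the Hopp & Spearman single-machine linking equation cd2 = u^2*ce2 + (1 - u^2)*ca2; scenario A is shown, and the other rows follow by editing the inputs (variable names are ours).

% Scenario A: exponential interarrivals (mean 25), deterministic repair (248, cr2 = 0)
t0 = 15;  c02 = 0.223^2;  ta = 25;  ca2 = 1;
MTTF = 744;  mr = 248;  cr2 = 0;
A   = MTTF/(MTTF + mr);                    % availability
te  = t0/A;                                % effective process time
ce2 = c02 + (1 + cr2)*A*(1 - A)*mr/t0;     % effective SCV
u   = te/ta;                               % utilization
cd2 = u^2*ce2 + (1 - u^2)*ca2              % departure SCV (linking equation)

For rows with mean repair time 0 the failure adjustment vanishes, so te = t0 and ce2 = c02; for the deterministic-interarrival rows, ca2 = 0.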


Single Workstation Queue: (Shared vs Separate) and (Finite vs Infinite Capacity)

1. Shared Vs Separate Queues: Jobs arrive to a workstation at a rate of 0.5/hour, and historical data shows that they approximate a Poisson process. Compute and compare average cycle time across the following scenarios.

Scenario | Configuration                            | Processing Time                   | Average CT (VUT Approximation) | Average CT (Simulation)
A        | One shared queue for three servers       | Triangular (1,4,7)                |                                |
B        | Three separate queues for three servers  | Triangular (1,4,7)                |                                |
C        | One shared queue for three servers       | Lognormal (Mu=0.837, Sigma=1.048) |                                |
D        | Three separate queues for three servers  | Lognormal (Mu=0.837, Sigma=1.048) |                                |

Note that three servers with three separate queues should be functionally equivalent to three independent G/G/1 workstations, each receiving an equal fraction of the arrival process. For shared queues use the simulation model SingleWorkstation.slx, and for separate queues use the simulation model SingleWorkstation_ThreeInParallel.slx. In both cases set the stop time to 100,000, run several simulation replications, and use the Simulation Data Inspector (accessible next to the stop time box) to visualize and estimate average cycle time.

EXTRA CREDIT: Simulate scenarios B and D in two separate ways - once with the Output Switch's switching criterion set to Equiprobable, and once with it set to Round robin. Explain any differences in results between these two cases.

BIG PICTURE: This was intended to be a straightforward exercise - separate queues are expected to perform worse than shared queues, and analytical results are expected to align with simulation results - but something interesting emerged in simulation. In a first pass only the simulation model SingleWorkstation.slx was used, and for the separate-queues scenario only one of the three queues was simulated - the arrival rate was divided by three and the number of servers reduced to one. In a second pass, however, the simulation model SingleWorkstation_ThreeInParallel.slx was created, which introduced the issue of how to divide jobs among the three separate queues (a routing control decision). Without much thought, the Round robin and Equiprobable switching criteria were expected to be functionally equivalent, but it turns out that there is a big difference between the two. We found it fascinating that while there exist elegant theoretical results for thinning a Poisson process, thinning it in a certain way (Round robin) rather than just rolling a three-sided die (Equiprobable) can yield much better performance. This may be close to the simplest interesting example of why control matters and how substantially it can affect performance.
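
The difference can be previewed without SimEvents. The following is a minimal MATLAB sketch (ours, not part of the courseware): Round robin sends every third arrival to a given queue, so that queue sees Erlang-3 interarrival times with an SCV of 1/3, while Equiprobable routing preserves the Poisson property per queue, keeping the interarrival SCV at 1.

% Thinning one Poisson arrival stream two ways and measuring interarrival SCV
rng(0);                                            % reproducibility
arriveTimes = cumsum(-(1/0.5)*log(rand(3e5,1)));   % Poisson process, rate 0.5/hour
scv = @(x) (std(x)/mean(x))^2;
rrQueue1 = arriveTimes(1:3:end);                   % Round robin: queue 1 gets every 3rd job
scvRoundRobin = scv(diff(rrQueue1))                % expect roughly 1/3
route = randi(3, numel(arriveTimes), 1);           % Equiprobable: i.i.d. random routing
eqQueue1 = arriveTimes(route == 1);
scvEquiprobable = scv(diff(eqQueue1))              % expect roughly 1

Feeding each queue the lower-variability round-robin stream shrinks the ca2 term in the VUT equation, which is one way to explain the performance gap.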

2. Finite Vs Infinite Queues: Use the simulation model SingleWorkstation.slx to compute average cycle time and average throughput in each of the following scenarios. Use the Simulation Data Inspector (accessible next to the stop time box) to visualize and estimate these parameters; in all cases set the stop time sufficiently high and run a sufficient number of replications before making an estimate.

Scenario | Buffer Capacity | Processing Time Distribution      | Arrival Rate (exponential) | Average CT (Simulation) | Average TH (Simulation)
E        | Infinite        | Triangular (1,4,7)                | 0.2 jobs/hour              |                         |
F        | B=3             | Triangular (1,4,7)                | 0.2 jobs/hour              |                         |
G        | Infinite        | Lognormal (Mu=0.837, Sigma=1.048) | 0.2 jobs/hour              |                         |
H        | B=3             | Lognormal (Mu=0.837, Sigma=1.048) | 0.2 jobs/hour              |                         |
I        | Infinite        | Lognormal (Mu=0.837, Sigma=1.048) | 0.25 jobs/hour             |                         |
J        | B=3             | Lognormal (Mu=0.837, Sigma=1.048) | 0.25 jobs/hour             |                         |
K        | Infinite        | Lognormal (Mu=0.837, Sigma=1.048) | 0.4 jobs/hour              |                         |
L        | B=3             | Lognormal (Mu=0.837, Sigma=1.048) | 0.4 jobs/hour              |                         |

BIG PICTURE: Some general themes should be visible in the results. One is that finite buffers keep the system stable even when the arrival rate exceeds the processing rate, although that might be considered disingenuous if an ever-growing backlog is simply relocated elsewhere. Another is that finite buffers are expected to reduce both CT and TH - CT by a larger fraction than TH, and both by larger fractions when processing time variability is larger.
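
For the infinite-buffer rows, a sanity check against the G/G/1 VUT approximation is possible. The sketch below (with our own variable names) handles scenario G; note that with te = 4 hours the service rate is 0.25 jobs/hour, so scenario K has utilization above 1 and no steady state with an infinite buffer.

% Scenario G: Poisson arrivals at 0.2/hour, lognormal(Mu=0.837, Sigma=1.048) service
ra  = 0.2;  ca2 = 1;
te  = exp(0.837 + 1.048^2/2);                  % lognormal mean, about 4 hours
ce2 = exp(1.048^2) - 1;                        % lognormal SCV, about 2
u   = ra*te;                                   % utilization
CT  = ((ca2 + ce2)/2)*(u/(1 - u))*te + te      % G/G/1 VUT cycle time, hours
% For scenario E, swap in the triangular(1,4,7) moments:
%   te = (1+4+7)/3;  ce2 = ((1^2+4^2+7^2 - 1*4 - 1*7 - 4*7)/18)/te^2;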


Parallel Process Batching

Consider a single workstation with parallel batch processing (processing time is for an entire batch). Suppose that jobs arrive in an approximately-Poisson process with a rate of 160 jobs/day (there are two 8-hour shifts per day), service times are exponentially-distributed with a rate of 1/6 batches/hour, the workstation has one server, and the maximum feasible batch size is 125.

A. Use analytical formulas in Hopp & Spearman (section 9.4 in ed. 2) to answer the following questions.

  1. What is the maximum capacity of the workstation? (recall that capacity is an upper limit on throughput)
  2. Using the maximum feasible batch size, what is the average cycle time?
  3. What is the minimum feasible batch size?
  4. Using the minimum feasible batch size, what is the average cycle time?

B. Open the MATLAB script DEMO_Sim_ParallelBatchProcessing_SingleWorkstationThCtWipUtil.m (which controls the simulation model GGkWorkstation_MakeAndMoveBatches_Parallel.slx through the wrapper function SimWrapper_GGkWorkstation_MakeAndMoveBatches_Parallel.m). Configure the model (in the script’s top section, not the simulation model itself) with the given parameters, and then use it to answer the following questions:

  1. Using the maximum feasible batch size, what is the average cycle time? How does this compare to the analytical approximation?
  2. What is the minimum feasible batch size? (How do you measure this with the given simulation model?)
  3. Using the minimum feasible batch size, what is the average cycle time? How does this compare to the analytical approximation?
  4. What is the optimal batch size for minimizing cycle time? (There are several possible ways to answer this. One is to implement the analytical approximation in a spreadsheet to enable trying it quickly for all possible values of k. Another is to use the MATLAB script DEMO_Sim_SweepBatchSizeParProc_SingleWorkstationThCtWipUtil.m, which is almost identical to the one above except that it sweeps over a range of batch sizes. With this, you may still want to use the analytical approximation to decrease the search space (the curve is convex), or use it in iterations - first with a large step size to see the general shape, and then zoomed in with longer run lengths and more replications to smooth out the curve.)

C. Are there any large differences between analytical approximation and simulation results? If so, suggest possible explanations.

BIG PICTURE: (A) should be a straightforward exercise, designed to introduce batching semantics and the analytical approximations for performance of single workstations with batching. (B) should also be straightforward for parallel process batching. One way to measure stability is to monitor the utilization output of the simulation, increasing k until utilization falls from above 99%. Finding the optimal batch size can be challenging using simulation because batching introduces so much variability - the curve smooths out with long run lengths and many replications, so the strategies listed above for limiting the search space are recommended. In (C), no large differences are expected between analytical approximation and simulation results.
The maximum batch size of 125 (possibly 127) reflects a technical limitation in SimEvents which inhibits auto-connecting ports that are too densely packed into a single OutputSwitch; if correct, then a re-architecture of the first-generation batching logic is needed. Since the analytical optimum for the numbers chosen in this problem lies close to this maximum, consider tuning the numbers to push the optimum lower.
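
For reference, here is a minimal MATLAB sketch of the part-A analytics, assuming the parallel-batching approximation of Hopp & Spearman section 9.4 (average wait-to-batch time plus an M/M/1-style queue of batches); the structure and variable names are ours, so treat it as a starting point rather than a definitive implementation.

% Parallel batching at a single server with batch size k
ra = 160/16;               % jobs/hour (160 jobs/day over two 8-hour shifts)
t  = 6;                    % batch process time, hours (rate of 1/6 batches/hour)
kmax = 125;
capacity = kmax/t          % upper limit on throughput, jobs/hour
kmin = floor(ra*t) + 1     % smallest k with u = ra*t/k < 1
k = kmax;                  % evaluate at kmax; repeat with k = kmin
u    = ra*t/k;             % utilization
WTBT = (k - 1)/(2*ra);     % average wait-to-batch time
CTq  = (u/(1 - u))*t;      % queue time for batches (exponential service)
CT   = WTBT + CTq + t      % average cycle time, hours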


Serial Process Batching with Setups Between Batches

Consider a single workstation with serial batch processing (processing time is for each element in a batch) and a setup before each batch's processing. Suppose that jobs arrive in an approximately-Poisson process with a rate of 2 jobs/hour, service times are exponentially-distributed with a rate of 4 jobs/hour, the workstation has one server, each setup takes 5 hours, and lot-splitting after service is possible.

A. Use analytical formulas in Hopp & Spearman (section 9.4 in ed. 2) to answer the following questions.

  1. What is the minimum feasible batch size?
  2. Using the minimum feasible batch size, what is the average cycle time? What fraction of that time is waiting due to batching (i.e. everything but setup and service time)?
  3. Using double the minimum feasible batch size, what is the average cycle time? What fraction of that time is waiting due to batching (i.e. everything but setup and service time)?
  4. Why do the average cycle times computed in the previous two questions differ for different batch sizes?
  5. Without recalculating, what do you expect will happen to your answers if lot-splitting after service is not possible? Why?
  6. Without recalculating, what do you expect will happen to your answers if the arrival process SCV is doubled? Why?

B. Open the MATLAB script DEMO_Sim_SerialBatchProcessing_SingleWorkstationThCtWipUtil.m (which controls the simulation model GG1Workstation_MakeAndMoveBatches_SerialWithSetups.slx through the wrapper function SimWrapper_GG1Workstation_MakeAndMoveBatches_SerialWithSetups.m). Configure the model (in the script’s top section, not the simulation model itself) with the given parameters, and then use it to answer the following questions.

  1. What is the minimum feasible batch size? (How do you measure this with the given simulation model?) How does this compare to the analytical approximation?
  2. Using the minimum feasible batch size, what is the average cycle time? How does this compare to the analytical approximation?
  3. Using double the minimum feasible batch size, what is the average cycle time? How does this compare to the analytical approximation?
  4. What is the optimal batch size for minimizing cycle time? (There are several possible ways to answer this. One is to implement the analytical approximation in a spreadsheet to enable trying it quickly for all possible values of k, and then verify around the optimum using simulation. Another is to use the MATLAB script DEMO_Sim_SweepBatchSizeSerialProc_SingleWorkstationThCtWipUtil.m, which is almost identical to the one above except that it sweeps over a range of batch sizes. With this, you may still want to use the analytical approximation to decrease the search space (the curve is convex), or use it in iterations - first with a large step size to see the general shape, and then zoomed in with longer run lengths and more replications to smooth out the curve.)

C. Are there any large differences between analytical approximation and simulation results? If so, suggest possible explanations.

BIG PICTURE: (A) should be a straightforward exercise, designed to introduce batching semantics and the analytical approximations for performance of single workstations with batching. (C) is not straightforward for serial process batching - we observe a huge discrepancy between analytical approximation and simulation results for average cycle time when using the minimum feasible batch size. One reason is that the analytical approximation assumes that the variability of inter-arrival and processing times is not influenced by the batch size, and this suggests a correction - use (ca2+ce2)/(2k) instead of (ca2+ce2)/2 in the VUT equation. This correction may overcompensate, however, because the analytical approximation also oversimplifies WIBT (wait-in-batch time) by considering only the processing time's mean and not its variability - one lesson from studying G/G/k single workstations is that an increase in only the variability (and not the mean) of processing time can increase average cycle time.
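
For part A, here is a minimal MATLAB sketch under the serial-batching relations of section 9.4 (one setup per batch, jobs processed one at a time, lot splitting after service), with the batch queue approximated as M/M/1; the structure and variable names are ours, so treat it as a starting point rather than a definitive implementation.

% Serial batching with setups, batch size k, lot splitting after service
ra = 2;  t0 = 1/4;  ts = 5;               % jobs/hour, hours/job, hours/setup
% Feasibility: u = ra*(t0 + ts/k) < 1  <=>  k > ra*ts/(1 - ra*t0)
kmin = floor(ra*ts/(1 - ra*t0)) + 1       % minimum feasible batch size
k  = kmin;                                % evaluate at kmin; repeat with 2*kmin
u  = ra*(t0 + ts/k);                      % utilization
tb = ts + k*t0;                           % total batch time at the server
WTBT = (k - 1)/(2*ra);                    % average wait to form a batch
CTq  = (u/(1 - u))*tb;                    % queue time for batches (M/M/1 style)
CT   = WTBT + CTq + ts + ((k + 1)/2)*t0   % cycle time with lot splitting, hours

The correction discussed above can be tried by multiplying the queue term by (ca2+ce2)/(2*k) in place of the implicit (ca2+ce2)/2 = 1 of the M/M/1 form.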
