exupero's blog

Time to completion in the penny game

In the previous post I argued that in a manufacturing line it's less important to maximize the utilization of non-bottlenecks than to minimize the amount of overall work in progress. I've mentioned one reason to minimize work in progress: the expense of storing and moving it. But there's another: big backlogs of work in progress also create delay. The more pennies on the line, the longer it takes a penny entering the line to be finished and shipped, because it spends more time waiting in station backlogs than it would if there were less work in progress.

Let's measure time to completion in our simulated penny game. To do that, we'll have to change how we represent stations, which means modifying our initial state and our update function.

So far we've represented each station with a single integer, the size of its backlog, but to track how long each penny has been on the line, we'll need a separate number for each penny:

(def initial-state
  {:stations (repeat 6 [0 0 0 0])})
{:stations
 ([0 0 0 0] [0 0 0 0] [0 0 0 0] [0 0 0 0] [0 0 0 0] [0 0 0 0])}

Each station starts with four pennies, just like before, and at the start each penny has spent 0 rolls on the line.

Our new update-state function will be more complicated. On each step, a station will increment how long the pennies in its backlog have been on the line, then take as many pennies as its capacity allows and pass them on to the next station:

(defn update-state [{:keys [stations]} [incoming & capacities]]
  (let [;; age every penny by one roll, then split each station's backlog
        ;; into what it can process this step and what's left waiting
        after-processing (map (fn [station capacity]
                                (->> station
                                     (map inc)
                                     (split-at capacity)))
                              stations capacities)
        processed (map first after-processing)
        ;; each station keeps its leftover backlog and appends the previous
        ;; station's output; the first station appends the new pennies, and
        ;; because map stops at the shorter collection, the last station's
        ;; output isn't appended anywhere — those pennies leave the line
        new-stations (map (fn [queue incoming]
                            (concat queue incoming))
                          (map second after-processing)
                          (cons (repeat incoming 0) processed))]
    {:stations new-stations
     :processed processed}))

Let's check that it does what we expect:

(update-state initial-state [6 1 2 1 3 5 4])
{:stations
 ((1 1 1 0 0 0 0 0 0) (1 1 1) (1 1 1 1 1) (1 1) (1 1 1) (1 1 1 1)),
 :processed ((1) (1 1) (1) (1 1 1) (1 1 1 1) (1 1 1 1))}

New pennies on the line are added to the end of the first station's backlog, and all the other pennies have been incremented by 1 and passed along according to the given capacities. Here's the line after two steps:

(-> initial-state
  (update-state [6 1 2 1 3 5 4])
  (update-state [1 6 4 1 2 1 4]))
{:stations ((1 1 1 0) (2 2 2 1 1 1) (2 2 2 2 2 2 2) (2) (2 2 2 2) (2)),
 :processed ((2 2 2 1 1 1) (2 2 2) (2) (2 2) (2) (2 2 2 2))}

To measure time to completion, we'll look at how long the first penny off the last station has spent on the line:

(defn time-to-completion [step]
  (-> step :processed last first))
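
For example, on the line after the two steps above, the first penny off the last station has spent two rolls in the system:

(-> initial-state
  (update-state [6 1 2 1 3 5 4])
  (update-state [1 6 4 1 2 1 4])
  time-to-completion)
2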

Here's the time to completion of each step of our original scenario, in which all the stations process between 1 and 6 pennies:

[Line chart: Time to completion (original scenario). X-axis: steps 10–100; y-axis: 0–20.]
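
The chart's per-step values depend on random rolls, but here's a sketch of how the series might be generated, assuming each station's capacity and the incoming count are rolls of a six-sided die (roll and steps are my names, not from the original post):

;; one roll of a six-sided die
(defn roll [] (inc (rand-int 6)))

;; run 100 steps, rolling for the incoming count and all six stations each step
(def steps
  (rest (reductions (fn [state _]
                      (update-state state (repeatedly 7 roll)))
                    initial-state
                    (range 100))))

(map time-to-completion steps)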

The straight, diagonal line segment for the first ten steps or so on the graph is an artifact of our initial-state: all stations start with pennies that haven't clocked any processing, so the first ones to roll off the last station haven't been there very long. To correct that, let's change our initial state to this:

(def initial-state
  {:stations (for [i (range 6)]
               (repeat 4 i))})
{:stations
 ((0 0 0 0) (1 1 1 1) (2 2 2 2) (3 3 3 3) (4 4 4 4) (5 5 5 5))}

Which produces this graph:

[Line chart: Time to completion (original scenario). X-axis: steps 10–100; y-axis: 0–20.]

That's still a bit artificial, but it is more realistic.

Now let's look at time to completion after we improve productivity of all but one station:

[Line chart: Time to completion (improved productivity). X-axis: steps 10–100; y-axis: 0–30.]

Initially time to completion is slightly better, but as the amount of work in progress increases, the line takes longer and longer to deliver pennies that have entered the system.
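
Exactly how productivity was improved is covered in the previous post; purely as a hypothetical stand-in, not the post's actual rule, we might model it as every station except one designated bottleneck taking the better of two rolls:

(def bottleneck-index 2) ; hypothetical: treat the third station as the bottleneck

;; every station except the bottleneck gets the better of two rolls
(defn improved-capacities []
  (map-indexed (fn [i capacity]
                 (if (= i bottleneck-index)
                   capacity
                   (max capacity (roll))))
               (repeatedly 6 roll)))

;; incoming pennies are still an unconstrained roll
(defn improved-step [state]
  (update-state state (cons (roll) (improved-capacities))))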

Let's see what happens when we set the input to the line based on how much the bottleneck can process:

[Line chart: Time to completion (input set to bottleneck's capacity). X-axis: steps 10–100; y-axis: 0–30.]
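
Using the hypothetical helpers sketched above, constraining the input just means feeding the line whatever the bottleneck rolled instead of rolling separately for incoming pennies:

;; incoming count matches the bottleneck's capacity this step
(defn constrained-step [state]
  (let [capacities (improved-capacities)]
    (update-state state (cons (nth capacities bottleneck-index) capacities))))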

Time to completion fluctuates but stays roughly flat, averaging a little more than 9 steps from start to finish. Without an input constraint, time to completion averages about 19 steps, more than twice as long, and it keeps growing the longer the line runs.
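
For reference, averages like these can be computed from a run of steps with a small helper (average-time-to-completion is my name, not the post's); steps where the last station processed nothing yield nil and are skipped:

;; average the non-nil completion times over a run of steps
(defn average-time-to-completion [steps]
  (let [times (keep time-to-completion steps)]
    (double (/ (reduce + times) (count times)))))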