exupero's blog

Constraining input in the penny game

In the previous post we improved the productivity of all but one station in our simulated manufacturing line, and we found that while outputs increased, so did work in progress. To reduce the amount of work in progress, we'll have to limit how many pennies are let into the line at the start.

Rather than giving the first station as many pennies as we could, let's give it only as many pennies as the bottleneck could produce:

(defn constrain-input-to-bottleneck [capacities]
  ;; the bottleneck is the fifth station (index 4)
  (cons (nth capacities 4) (rest capacities)))
(def constrained-steps
  (simulate initial-state update-state 100
    (fn [roll]
      (-> (repeatedly 7 roll)
          more-efficient
          constrain-input-to-bottleneck))))
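To see what this does to a single round of die rolls, here's a quick REPL check. The sample capacities vector is hypothetical, but it follows the same shape as above: seven stations, with the bottleneck at index 4.

```clojure
;; Sample capacities (hypothetical); the bottleneck (index 4) rolled a 2.
(constrain-input-to-bottleneck [4 6 5 3 2 6 5])
;; => (2 6 5 3 2 6 5)
```

The input station's own roll of 4 is discarded and replaced by whatever the bottleneck rolled.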

Compared to before, the amount of work in progress is much more stable:

[Chart: work in progress by step, steps 10 through 100, constrained scenario]

Let's see, though, how we've impacted total outputs:

Output
Original     290
Improved     336
Constrained  331

Output is reduced by less than 2% compared to the improved scenario.

The constraint we added assumes quite a bit of control over the input station: not only can it hold back and keep from flooding the line, it can at times let in more work than the input would have produced even with improved productivity. That's because our logic always uses the bottleneck's productivity, regardless of the input's own roll. When the bottleneck produces 5 or 6 pennies but the input station originally would have produced only 4, we still let in 5 or 6. Let's see what happens if we use the minimum of the two:

(defn constrain-input-to-bottleneck-2 [capacities]
  (cons (min (first capacities)
             (nth capacities 4))
        (rest capacities)))
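A quick REPL check with the same hypothetical capacities vector as before shows the difference: the input is now capped by its own roll as well as the bottleneck's.

```clojure
;; When the bottleneck rolls high, the input's own roll of 4 wins:
(constrain-input-to-bottleneck-2 [4 6 5 3 6 6 5])
;; => (4 6 5 3 6 6 5)

;; When the bottleneck rolls low, its roll of 2 wins:
(constrain-input-to-bottleneck-2 [4 6 5 3 2 6 5])
;; => (2 6 5 3 2 6 5)
```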
[Chart: work in progress by step, steps 10 through 100, constrained-2 scenario]
Output
Original       290
Improved       336
Constrained    331
Constrained 2  304

Work in progress is lower, but so is output: only about 5% better than the original scenario, before we improved productivity. That suggests we're starving our bottleneck:

[Chart: pennies queued at the bottleneck by step, steps 10 through 100, constrained-2 scenario]

Sure enough, during many steps the bottleneck has fewer than 6 pennies to process; a big die roll could clear its queue. In the previous simulation, it almost always had enough pennies to work at full capacity:

[Chart: pennies queued at the bottleneck by step, steps 10 through 100, previous simulation]

Taking this into consideration, we can bump up our input constraint to let in a little more work than the bottleneck has capacity for:

(defn constrain-input-to-bottleneck-3 [capacities]
  (cons (min (first capacities)
             (inc (nth capacities 4)))
        (rest capacities)))
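Again with the same hypothetical capacities, this version lets in one penny more than the bottleneck rolled, so the bottleneck's queue can absorb a bad roll, while the input's own roll still caps the total:

```clojure
;; Bottleneck rolled 2, so we let in (inc 2) = 3:
(constrain-input-to-bottleneck-3 [4 6 5 3 2 6 5])
;; => (3 6 5 3 2 6 5)

;; Bottleneck rolled 6; the input's own roll of 4 is still the cap:
(constrain-input-to-bottleneck-3 [4 6 5 3 6 6 5])
;; => (4 6 5 3 6 6 5)
```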
[Chart: work in progress by step, steps 10 through 100, constrained-3 scenario]
Output
Original       290
Improved       336
Constrained    331
Constrained 2  304
Constrained 3  336

This gets the output back up to the most productive levels, though at a cost of more work in progress. While the amount of work in progress doesn't grow nearly as fast as it did before we limited input, don't be fooled by the downslope on the right side of the graph: work in progress is still growing, as we can see by running the simulation further:

[Chart: work in progress by step, steps 30 through 300, constrained-3 scenario]

There are many ways we could limit the number of pennies we let into the line, but unless we want to game the statistics, we should make decisions based on the simulation's state. That's not currently available in our code, so in the next post we'll rework some functions to provide it.