- You can play an early session with no clips so the players can see how inventory builds up (you can also push done story-cards to the next edge's corner, rather than waiting for them to be pulled).
- The clips that hold the story-cards are a crucial part of the game. They make it a kanban game.
- You can limit the number of clips per edge to create a natural work-in-progress (WIP) limit.
- You can add a new rule: players can also spend a 1 to split a story-card in two, e.g. a 4 into a 3 and a 1 (assuming they have a spare clip).
- You can record the day a story-card comes off the backlog, and also the day it gets to done and thus measure the cycle time.
- You can simulate scrum-style discrete sprints.
- You can vary the number of dice at different edges.
Hi. I'm Jon Jagger, director of software at Kosli.
I built cyber-dojo, the place teams practice programming.
my kanban 1's board game
Isolating legacy C code from external dependencies
Code naturally resists being isolated if it isn't designed to be isolatable.
Isolating legacy code from external dependencies can be awkward.
In C and C++ the transitive nature of #includes is the most obvious and direct
reflection of the high-coupling such code exhibits.
However, there is a technique
you can use to isolate a source file by cutting all its #includes.
It relies on a little-known third way of writing a #include.
From the C standard:
6.10.2 Source file inclusion
... A preprocessing directive of the form:
#include pp-tokens
(that does not match one of the two previous forms) is permitted. The preprocessing tokens after include in the directive are processed just as in normal text. ... The directive resulting after all replacements shall match one of the two previous forms.
An example. Suppose you have a legacy C source file that you want to write some unit tests for. For example:
/* legacy.c */
#include "wibble.h"
#include <stdio.h>
...
int legacy(int a, int b)
{
    FILE * stream = fopen("some_file.txt", "w");
    char buffer[256];
    int result = sprintf(buffer,
                         "%d:%d:%d", a, b, a * b);
    fwrite(buffer, 1, sizeof buffer, stream);
    fclose(stream);
    return result;
}
Your first step is to
create a file called nothing.h as follows:
/* nothing! */
nothing.h is a file containing nothing and is an example of the
Null Object Pattern.
Then you refactor legacy.c to this:
/* legacy.c */
#if defined(UNIT_TEST)
# define LOCAL(header) "nothing.h"
# define SYSTEM(header) "nothing.h"
#else
# define LOCAL(header) #header
# define SYSTEM(header) <header>
#endif
#include LOCAL(wibble.h) /* <--- */
#include SYSTEM(stdio.h) /* <--- */
...
int legacy(int a, int b)
{
    FILE * stream = fopen("some_file.txt", "w");
    char buffer[256];
    int result = sprintf(buffer,
                         "%d:%d:%d", a, b, a * b);
    fwrite(buffer, 1, sizeof buffer, stream);
    fclose(stream);
    return result;
}
Now structure your unit-tests for legacy.c as follows. First, write null implementations of the external dependencies you want to fake (more Null Object Pattern):
/* legacy.test.c: Part 1 */
static FILE * fopen(const char * restrict filename,
                    const char * restrict mode)
{
    return 0;
}
static size_t fwrite(const void * restrict ptr,
                     size_t size,
                     size_t nelem,
                     FILE * restrict stream)
{
    return 0;
}
static int fclose(FILE * stream)
{
    return 0;
}
Then #include the source file.
Note carefully that you're #including legacy.c here
and not legacy.h and you're #defining UNIT_TEST
so that legacy.c will have no #includes of its own:
/* legacy.test.c: Part 2 */
#define UNIT_TEST
#include "legacy.c"

Then write your tests:
/* legacy.test.c: Part 3 */
#include <assert.h>
void first_unit_test_for_legacy(void)
{
    /* writes "2:9:18" which is 6 chars */
    assert(legacy(2, 9) == 6);
}
int main(void)
{
    first_unit_test_for_legacy();
    return 0;
}
When you compile legacy.test.c you will find your first problem -
it does not compile! You have cut away all the #includes
which cuts away not only the function declarations but also the type definitions,
such as FILE which is a type used in the code under test, as well as
in the real and the null fopen, fwrite, and
fclose functions.
What you need to do now is introduce a seam only for the functions:
/* stdio.seam.h */
#ifndef STDIO_SEAM_INCLUDED
#define STDIO_SEAM_INCLUDED
#include <stdio.h>
struct stdio_t
{
    FILE * (*fopen)(const char * restrict filename,
                    const char * restrict mode);
    size_t (*fwrite)(const void * restrict ptr,
                     size_t size,
                     size_t nelem,
                     FILE * restrict stream);
    int (*fclose)(FILE * stream);
};
extern const struct stdio_t stdio;
#endif
Now you refactor legacy.c
to use stdio.seam.h:
/* legacy.c */
#if defined(UNIT_TEST)
# define LOCAL(header) "nothing.h"
# define SYSTEM(header) "nothing.h"
#else
# define LOCAL(header) #header
# define SYSTEM(header) <header>
#endif
#include LOCAL(wibble.h)
#include LOCAL(stdio.seam.h) /* <--- */
...
int legacy(int a, int b)
{
    FILE * stream = stdio.fopen("some_file.txt", "w");
    char buffer[256];
    int result = sprintf(buffer,
                         "%d:%d:%d", a, b, a * b);
    stdio.fwrite(buffer, 1, sizeof buffer, stream);
    stdio.fclose(stream);
    return result;
}
Now you can structure your null functions as follows:
/* legacy.test.c: Part 1 */
#include "stdio.seam.h"
static FILE * null_fopen(const char * restrict filename,
                         const char * restrict mode)
{
    return 0;
}
static size_t null_fwrite(const void * restrict ptr,
                          size_t size,
                          size_t nelem,
                          FILE * restrict stream)
{
    return 0;
}
static int null_fclose(FILE * stream)
{
    return 0;
}
const struct stdio_t stdio =
{
    .fopen = null_fopen,
    .fwrite = null_fwrite,
    .fclose = null_fclose,
};
And voilà, you have a unit test.
Now that you have your knife in the seam, you can push it in a bit further.
For example, you can do a little spying:
/* legacy.test.c: Part 1 */
#include "stdio.seam.h"
#include <assert.h>
#include <string.h>
static FILE * null_fopen(const char * restrict filename,
                         const char * restrict mode)
{
    return 0;
}
static size_t spy_fwrite(const void * restrict ptr,
                         size_t size,
                         size_t nelem,
                         FILE * restrict stream)
{
    assert(strcmp("2:9:18", ptr) == 0);
    return 0;
}
static int null_fclose(FILE * stream)
{
    return 0;
}
const struct stdio_t stdio =
{
    .fopen = null_fopen,
    .fwrite = spy_fwrite,
    .fclose = null_fclose,
};
This approach is pretty brutal, but it might just allow you to create an initial seam which you
can then gradually prise open. If nothing else it allows you to create
characterisation tests to familiarize yourself with legacy code.
You'll also need to create a trivial implementation of
stdio.seam.h
that the real code uses:
/* stdio.seam.c */
#include "stdio.seam.h"
#include <stdio.h>
const struct stdio_t stdio =
{
.fopen = fopen,
.fwrite = fwrite,
.fclose = fclose,
};
The -include compiler option might also prove useful.
-include file
Process file as if #include "file" appeared as the first line of the primary source file.
Using this you can create the following file:
/* include.seam.h */
#ifndef INCLUDE_SEAM
#define INCLUDE_SEAM
#if defined(UNIT_TEST)
# define LOCAL(header) "nothing.h"
# define SYSTEM(header) "nothing.h"
#else
# define LOCAL(header) #header
# define SYSTEM(header) <header>
#endif
#endif

and then compile with the -include include.seam.h option.
This allows your
legacy.c file to look like this:
#include LOCAL(wibble.h)
#include LOCAL(stdio.seam.h)
...
int legacy(int a, int b)
{
    FILE * stream = stdio.fopen("some_file.txt", "w");
    char buffer[256];
    int result = sprintf(buffer, "%d:%d:%d", a, b, a * b);
    stdio.fwrite(buffer, 1, sizeof buffer, stream);
    stdio.fclose(stream);
    return result;
}
every teardrop is a waterfall
I was listening to Coldplay the other day and got to thinking about waterfalls.
The classic waterfall diagram is written something like this:Analysis
leading down to...
Design
leading down to...
Implementation
leading down to...
Testing.
The Testing phase at the end of the process is perhaps the biggest giveaway that something is very wrong. In waterfall, the testing phase at the end is what's known as a euphemism. Or, more technically, a lie. Testing at the end of waterfall is really Debugging. Debugging at the end of the process is one of the key dynamics that prevents waterfall from working. There are at least two reasons:
The first is that of all the activities performed in software development, debugging is the one that is the least estimable. And that's saying something! You don't know how long it's going to take to find the source of a bug let alone fix it. I recall listening to a speaker at a conference who polled the audience to see who'd spent the most time tracking down a bug (the word bug is another euphemism). It was just like an auction! Someone called out "3 days". Someone else shouted "2 weeks". Up and up it went. The poor "winner" had spent all day, every day, 9am-5pm for 3 months hunting one bug. And it wasn't even a very large audience. This 'debug it into existence' approach is one of the reasons waterfall projects take 90% of the time to get to 90% "done" (done is another euphemism) and then another 90% of the time to get to 100% done.
The second reason is Why do cars have brakes?. In waterfall, even if testing was testing rather than debugging, putting it at the end of the process means you'll have been driving around during analysis, design and implementation with no brakes! You won't be able to stop! And again, this tells you why waterfall projects take 90% of the time to get to 90% done and then another 90% of the time to get to 100% done. Assuming of course that they don't crash.
In Test First Development, the testing really is testing and it really is first. The tests become an executable specification. Specifying is the opposite of debugging. The first 8 letters of specification are S, P, E, C, I, F, I, C.
A test is specific in exactly the same way a debugging session is not.
Coupling, overcrowding, refactoring, and death
I read The Curious Incident of the Dog in the Night Time by Mark Haddon last week. I loved it.
At one point the main character, Christopher, talks about this equation:
Pg+1 = α Pg (1 - Pg)
This equation was described in the 1970s by Robert May, George Oster, and Jim Yorke. You can read about it here. The gist is that it models a population over time, the population at generation g+1 being determined by the population at generation g. If there is no overcrowding then each member of the population at generation g, denoted Pg, produces α offspring, all of whom survive. So the population at generation g+1, denoted Pg+1, equals α Pg. The additional term, (1 - Pg), represents feedback from overcrowding. Some interesting things happen depending on the value of α:
- α < 1: The population goes extinct.
- 1 < α < 3 : The population rises to a value and then stays there.
- 3 < α < 3.57 : The population alternates between boom and bust.
- 3.57 < α : The population appears to fluctuate randomly.
You can think about the process of writing software with this equation.
You can think of over-crowding as being analogous to over-coupling. We feel that a codebase is hard to work with, difficult to live in, if it resists our attempts to work with it. When it resists it is the coupling that is resisting.
You can also think of death as being analogous to refactoring. Just as death reduces overcrowding, so refactoring reduces coupling.
Refactoring is a hugely important dynamic in software development. Without refactoring a codebase can grow without check. Growing without check is bad. It leads to overcrowding. Overcrowding hinders growth.
Out of the crisis
is an excellent book by W. Edwards Deming (isbn 0-911379-01-0). As usual I'm going to quote from a few pages:
All industries, manufacturing and service, are subject to the same principles of management.
Quality comes not from inspection, but from improvement of the production process.
Today, 19 foremen out of 20 were never on the job that they supervise.
Fear amongst salaried workers may be attributed in large part to the annual rating of performance.
Absenteeism is largely a function of supervision. If people feel important to a job, they will come to work.
He that would run his company on visible figures alone will in time have neither company nor figures.
There has never been a definitive study of quality in the dental profession; nor is there likely to be one. Partly because they tend to work alone, dentists resist the idea of being evaluated, or even observed, by others.
Where there is fear, there will be wrong figures.
It is well known that rework piles up: no one wishes to tackle it.
This company had been sending a letter to every driver at every mistake. It made no difference whether this was the one mistake of the year for this driver, or the 15th: the letter was exactly the same. What does the driver who has received 15 warnings, all alike, think of the management?
Cause is effect and effect is cause and vice versa
Defects cause lateness.
The more defects code has the more time and effort it takes to get it to done. This seems a self-evident truth. But beware! The Causation Fallacy says it is not easy to know what is cause and what is effect. If a feature misses its deadline pressure often builds to ensure it doesn't miss the next deadline. And under pressure people don't think faster. Extra pressure usually increases the likelihood of defects. This suggests
Lateness causes defects.
So do defects cause lateness, or does lateness cause defects? Or do they rotate around each other like partners on a dance floor?
sprints, time-boxing, and capacity
A team is doing Scrum with 3 week sprints. Suppose at the end of a sprint they've got nothing to done. What should they do? There's a strong temptation to ask for more time. To make this sprint a 4 week sprint. Most of the work in progress is 90% done, they say. Another week and things will have got to done, they say. It seems reasonable.
Trying to run systems beyond their capacity is not a good idea. In this situation Scrum's fixed-duration time-box constraint has served its purpose admirably. The problem is not the choice of 3 weeks. Changing 3 weeks into 4 weeks is not addressing the problem. The problem is the team planned to pull in an amount of work and get it to done in 3 weeks. But they're not yet in control of their process - they don't know what their capacity is. They pulled in more than 3 weeks' worth of work. Probably a lot more. But we just don't know!
In The Toyota Way, Jeffrey Liker writes:
Taiichi Ohno considered the fundamental waste to be overproduction, since it causes most of the other wastes.
Advice from a genius with a lifetime's experience. Toyota manufactures cars. It makes cars. Its production line is an actual line. If manufacturers are prone to overproduction imagine how much more prone software developers are! The things we make are not even physical things. In software, things are mostly invisible. It's difficult to manage what you can't see. In Quality Software Management volume 2, First-Order Measurement, Jerry Weinberg writes:
Without visibility, control is not possible.
If you can't see, you can't steer.
Rather than asking for another week, the team should really be thinking about addressing their real problem. Their real problem is that they're pulling in too much work. They have to somehow learn to pull in less work. So they can start to be in control of their process rather than their process being in control of them.
culture
From Quality Software Management: Vol 2. First Order Measurement
Culture makes its presence known through patterns that persist over time.
One of the most sensitive measures of the cultural pattern of any organization is how quickly it finds and removes problems.
From The Toyota Way
Building a culture takes years of applying a consistent approach with consistent principles.
From XP and Culture Change
A process change will always involve a cultural change.
Because culture embodies perception and action together, changing culture is difficult and prone to backsliding.
From Quality Software Management: Vol 3. Congruent Action
Culture makes language, then language makes culture.
From Beating the System
Culture is what we do when we do not consciously decide what to do.
From Freedom from Command and Control
Consultants who see culture change as something distinct from the work and, as a corollary, something that can be the subject of an intervention, miss the point. When you change the way work is designed and managed, and make those who do the work the central part of the intervention, the culture changes dramatically as a consequence.
From Leverage points
One aspect of almost every culture is the belief in the utter superiority of that culture.
From John Seddon
Culture change is free [because] it's a product of the system.
From Notes on the Synthesis of Form
Culture is changing faster than it has ever changed before...what once took many generations of gradual development is now attempted by a single individual.
From Slack
Successful change can only come about in the context of a clear understanding of what may never change, what the organization stands for... the organization's culture... If nothing is declared unchangeable, then the organization will resist all change. When there is no defining vision, the only way the organization can define itself is its stasis.
From The Hidden Dimension
As Freud and his followers observed, our own culture tends to stress that which can be controlled and to deny that which cannot.
From The Silent Language
Culture hides much more than it reveals, and strangely enough what it hides, it hides most effectively from its own participants.
An often noted characteristic of culture change is that an idea or a practice will hold on very persistently, apparently resisting all efforts to move it, and then, suddenly, without notice, it will collapse.
hunger is the best source
I've previously blogged about being taught ITA spelling at primary school. About how it causes me spelling problems. I was reminded of this when speaking to Geir Amdal at the excellent Agile Coach Camp in Oslo.
Geir showed me this wonderful blog post with a lovely twist on the famous quote:
Knowledge is power.
Francis Bacon
It reminded me of something my Mum used to say to me when I was little:
Hunger is the best source.
For many many years I didn't understand what she was saying. I was hearing the word sauce as source. She was actually saying:
Hunger is the best sauce.
Food tastes better when you're hungry. Reflecting on my confusion I realize I'm actually quite proud of this mistake. This was a long time ago remember. I was a small boy at the time. Even then, it seems, software was calling me.
Smoking cigarettes, eating sweets, dropping litter, and drinking coffee
I was speaking to Olaf Lewitz at the awesome Oslo coach camp last week. We were discussing why drinking coffee doesn't create the same social dynamic as smoking cigarettes. I chatted with Geir Amdal too and quite by chance he mentioned he's given up smoking. And how approaching a work colleague and asking if they want to go outside for a smoke is not the same as asking if they want to go outside for a talk.
Then I remembered something Olve Maudal said to me recently. He said that kids being allowed to eat sweets on Sundays was not really about kids being allowed to eat sweets on Sundays at all. It was really about kids not being allowed to eat sweets on any day except Sunday. Similarly, apparently in the USA when you're driving along you sometimes see a big sign at the side of the road saying "Litter here" and then another sign a mile or so later saying "Stop littering". These signs are also not really about littering. They're about not littering in the places outside the designated littering zones.
There's a crucial difference between smoking and drinking coffee. Smokers tend to smoke in groups in designated areas because smoking is not allowed except in those areas. Coffee is different. Drinking coffee is, by default, allowed everywhere. When you want a coffee you walk to the coffee machine and make a cup of coffee. There's often no one else at the coffee machine so you take your cup of coffee back to your work desk. It is precisely this take-it-back-to-your-desk default which is why there is only rarely someone else at the coffee machine. It is a self-fulfilling dynamic.
If you want to encourage more social interaction between your team members here's what you might do:
- Buy machines that make really good coffee.
- Put them in a nice area with lots of space to congregate in.
- Ban drinking coffee at work-desks.
my kindle book-case
When I bought my kindle I forgot to get a case to protect it. I searched around on a few sites looking for a case but didn't find anything I particularly liked. Before I knew it, it was time to head off to the awesome Agile Coach Camp Norway 2012. I wanted to take my kindle but needed a case to protect it. It was too late too order a case via the internet. But I had an idea. I could use a book! A regular old-fashioned hardback book.
I simply cut out a kindle-sized panel from the middle of about 100 pages and then glued the holed pages all together:
Voilà, I have a case for my kindle. A book-case you might say.
I showed off my new book-case at the coach camp. It was a hit. At dinner one evening Marc Johnson mentioned he too has a kindle and loves it but misses the social aspect of a real book. The simple fact that most real books display the book's title on its front cover. People can see what you're reading. I sat next to a really interesting man on a plane once. He noticed I was reading Jerry Weinberg's Quality Software Management, vol 2, First-Order Measurement and asked me about it. We chatted away the whole flight.
My kindle book-case allows me to regain this missing social aspect. I can simply print the cover the publishers use for the real book and stick it to the front. So now I have something close to my ideal kindle case. It just needs a clear front cover sleeve so I can easily slide a cover in. And some kind of clasp. As a final bonus, I can pay homage to one of my favourite films:
Managing the design factory
Whenever we see an intense need for communications it is typically a sign that the system has been incorrectly partitioned.
A complex system can often be built faster when there are stable steps along the way. This is what Nobel laureate Herbert Simon called "stable intermediate forms" in his book The Sciences of the Artificial.
We cannot predict the behaviour of a system simply by understanding the behaviour of its components.
There are more possible interactions in a system of 150 components than there are atoms in the universe.
The act of partitioning the system is extremely important, because it creates interfaces… these interfaces are both the primary source of value within a system and the primary source of complexity.
The nonlinear behaviour of queueing systems will amplify variability within the system.
We get into an interesting death spiral when we overload our development process. Overloads cause queues; queues, being nonlinear, raise the variability of our process, and variability raises the size of queues.
The weak cross-functional communication of the functional form sacrifices our other economic objectives.
In life, we design most processes for repetitive activities because a process is a way of preserving learning that occurs when doing an activity. … We need to find some way to preserve what we have learned without discouraging people from doing new things.
We get large queues whenever we have large batch transfers in the process.
There is a strong interaction between the design of our organisation structure, our architecture, and our development process.
6000 degree-minutes
If you use the same recipe you get the same bread.
That's the White Bread Warning from Jerry Weinberg's truly excellent The Secrets of Consulting.
I was thinking about that the other day and I realized something important. I realized that when I read the word recipe I thought about the ingredients but not really about the non-ingredient related instructions in the recipe. About time. A recipe doesn't just tell you what to mix with what, and in what order, it tells you how long to apply heat. And how much heat. These two things matter just as much as the ingredients. If you change the ingredients you'll get different bread. But if you change the time or the amount of heat you'll also get different bread. Although it might not look much like bread.
Suppose the recipe says to heat the oven to 200 degrees and then cook for 30 minutes. That's 6000 degree-minutes. Now 1200 degrees for 5 minutes is also 6000 degree-minutes. But the bread will be predictably black. Similarly 1 degree for 6000 minutes is also 6000 degree-minutes. But the bread will still be ingredients. Or rather it won't. You see 6000 minutes is 100 hours. Which is 4 days as near as makes no difference. That matters because ingredients are organic. They have a shelf life. A sell-by/eat-by expiry date. They decay. And even if baking the ingredients for 4 days at 1 degree did produce something vaguely bread-like the extra time would create extra cost. In lots of ways. Extra time does that.
Quality Software Management
Vol 3. Congruent Action
is the title of an excellent book by Jerry Weinberg (isbn 0-932633-28-5). This is the second snippet review for this book
(here's the first).
As usual I'm going to quote from a few pages:
Management is the number one random process element.
If you cannot manage yourself you have no business managing others.
Congruent behaviours are not stereotyped behaviours - quite the contrary. Congruent behaviours are original, specific behaviours that fit the context, other, and self, as required by the Law of Requisite Variety.
Congruence is contagious.
It takes a long time and a lot of hard practice to raise your congruence batting average.
A basic law of perception is that we tend to minimise small differences (tendency toward assimilation) and to exaggerate appreciable differences (tendency toward contrast). Thus, our perceptions make the world more sharply differentiated than it is, and we're a lot more alike than we are different.
The simplest idea about curing addiction is to stop the use of X, under the belief that X causes addiction. X does not cause the addiction, the addiction dynamic causes the addiction.
To change the addiction you'll have to use something more powerful than logic.
One of the manager's primary jobs is to set the context in which interactions takes place and work is accomplished.
Management sets the context almost exclusively through the use of language.
Culture makes language, then language makes culture.
In all Steering (Pattern 3) organisations, the team is the fundamental unit of production.
I've learned that there's simply no sense trying to solve software engineering problems, or create software engineering organisations, when I'm not able to be congruent. So I work on that first, and that's what I hope you do, too.
Fit for any type of sea voyage
Et skip som må øses 3 ganger på 2 døgn er sjødyktig til all slags ferd.
which translates as:
A ship that has to be bailed 3 times in 2 days is fit for any type of sea voyage.
I just love that.
Butter sighted at Olve's house
This is a photo of a pack of butter belonging to my good friend Olve Maudal.
Olve has exactly 157 packs of butter in his house right now. All safely housed in his new super-sized fridge.
Many of his 157 packs have been flown in specially by relatives visiting from abroad.
Olve would only allow one pack out of the fridge for the photo. Even then he insisted it be taken out under the watchful eyes of the two security guards he's specially hired to guard the fridge - Lars by day and the other Lars by night. Well, you can't be too careful right now. Butter is selling for crazy money on the black (or should that be yellow) market.
Yes, it's just one small example of the butter shortage here in Norway at the moment. Apparently the cause is a new fat-rich fad-diet sweeping the population combined with the seasonal tradition of making butter-rich Xmas cookies.
Shortages like this are, as Stephen Fry might put it, quite interesting. At one point there was probably a very mild shortage. Word of the mild shortage started to spread (sorry) and anyone buying butter bought a few extra packets just to be safe. The shortage got a bit worse. Word of the worsening shortage spread further and faster. People bought even more. A self-fulfilling dynamic was thus set in place. Soon the shelves were stripped of all butter.
The shortage the customers are experiencing is, no doubt, fractally mirrored by the shops selling (or rather not selling) butter. Butter wholesalers just don't have enough butter to meet the orders from shops. Shops that get any butter get less than they ordered. Any butter the shops do get is bought in a flash (but only by relatively few people because of the bulk butter buying behaviour) and they're out of stock again. You can imagine the shop keepers pulling their hair out in exasperation. If only they could get more butter they could make a small fortune. But right when there's the most demand they have none on their shelves! They increase the size of their wholesale reorder hoping to cash in.
What will happen in a few weeks' time? One possible (perhaps even likely) outcome is that the wholesalers will finally get enough butter to meet their over-inflated orders. The shop keepers pile the butter onto their shelves and wait for the kroner to roll in... Some of the butter is sold. But not very much. After all, Xmas is now over. The fat-rich fad-diet has gone the way of all fads and the glossy magazines are now preaching a low-fat diet. And let's not forget that a fair percentage of the population has, like Olve, over 100 packets of butter in their new fridges. They're certainly not going to be buying butter any time soon.
The shop keepers then face the daunting prospect of vast butter-walls sitting unsold on their shelves, fast approaching its sell-by date. Lowering the price doesn't help. It all has to be thrown away. Again the same thing will be fractally mirrored at the smaller scale. Lots of people, such as Olve, will have more butter than they can possibly use in time. They too will have to throw out loads of butter as it goes past its use-by date.
The same lurching from one extreme to another can happen when the number of people trying to make phone calls starts to approach network capacity. People can't get through. So they try again. And when they do get through the line gets dropped. So when they do get through they stay on a bit longer. It happens on roads too.
It's dangerous to run systems at full capacity. They reach a tipping point and topple into a death spiral. Busy work and inventory pile up. That causes even more busy work and even more inventory. But almost no butter is being bought or sold. There is no flow.
Everyday heroes of the quality movement
is an excellent book by Perry Gluckman and Diana Reynolds Roome,
subtitled From Taylor to Deming : The Journey to Higher Productivity (isbn 0945320078).
As usual I'm going to quote from a few pages:
Look for the flaws in the system not in each other.
When we reduce complexity, we start to see the organism behaving as a whole rather than a series of parts.
The effects of preventative medicine are hard to measure.
Theories are only the beginning. Why do we find it so hard to exercise, or give up smoking, even when we know all the arguments.
Quality and productivity are results, not goals.
Automating complexity is never as effective as removing it.
I'm not trying to be destructive. I just want to open the doors to some breezes that feel a little chilling to start with.
If there are problems in the company, we don't borrow money. We solve the problems.
If you automate without first getting rid of complexity, you cast the complexity in concrete.
We do almost nothing to control our workers' productivity. They are already doing their best without being goaded. What we all try to control is the process itself.
You need to know your financial direction as far as it can be known, and make sure that you don't hit any big rocks. But something else is more important: to design the ship so that it can withstand the blows when they come.
C sequence points
Olve Maudal and I created a Deep C/C++ slide-deck recently. It's been downloaded over 500,000 times,
indicating no small appetite for learning some of the deep secrets of C and C++. So...
In this C fragment
z is initialized to the value of n after
n += 42 takes place.
if (n += 42)
{
int z = n;
...
}
But how do you know this? For sure? The answer is perhaps not as obvious as you might think. The C standard says:
5.1.2.3 Program execution
(paragraph 2)
Accessing a volatile object, modifying an object, modifying a file, or calling a function that does any of those operations are all side effects, which are changes in the state of the execution environment. At certain specified points in the execution sequence called sequence points, all side effects of previous evaluations shall be complete and no side effects of subsequent evaluations shall have taken place.
In C parlance,
n is an object, and n += 42 modifies n.
So n += 42 is a side effect.
The only things governing the sequencing of side effects are sequence points.
And there are a lot fewer sequence points in C and C++ code than you might imagine.
There is a sequence point between n += 42 and the initialization of
z. But where? And why?
The standard says:
6.8 Statements and blocks
(paragraph 4)
A full expression is an expression that is not part of another expression or of a declarator. ... The end of a full expression is a sequence point.
and:
6.8.4 Selection statements
Syntax
selection-statement:
if ( expression ) statement
If we lexically enlarge the expression
n += 42 to its left or right, we hit the parentheses that form part of the if statement. In other words, the text stops being an expression and starts to become part of a statement. That means n += 42 in the fragment is a full expression. That's why there's a sequence point at the end of n += 42. In pseudo code it looks like this:
n += 42;
sequence-point
if n == 0 goto __false__;
int z = n;
...
__false__:
The Toyota Way
is an excellent book by Jeffrey Liker (isbn 978-0-07-139231-0). As usual I'm going to quote from a few pages:
One day a Ford Taurus mysteriously disappeared. It had been in the factory so they could try fitting it with some prototype mirrors. When it vanished, they even filed a police report. Then it turned up months later. Guess where it was. In the back of the plant, surrounded by inventory.
Extra inventory hides problems... Ohno considered the fundamental waste to be overproduction, since it causes most of the other wastes… big buffers (inventory between processes) lead to other suboptimal behaviour, like reducing your motivation to continuously improve your operation.
…was that data was one step removed from the process, merely "indicators" of what was going on.
Building a culture takes years of applying a consistent approach with consistent principles.
It seems the typical U.S. company regularly alternates between the extremes of stunningly successful and borderline bankrupt.
Flow where you can, pull where you must.
When I interviewed [Fujio] Cho for this book, I asked him about differences in cultures between what he experienced starting up the Georgetown, Kentucky plant and managing Toyota plants in Japan. He did not hesitate to note that his number-one problem was getting group leaders and team members to stop the assembly line.
Every repair situation is unique.
The more inventory a company has,… the less likely they will have what they need [Taiichi Ohno]
I posit here that Toyota has evolved the most effective form of industrial organisation ever devised. At the heart of that organisation is a focus on its own survival. [John Shook]
You cannot measure an engineer's value-added productivity by looking at what he or she is doing. You have to follow the progress of an actual product the engineer is working on as it is being transformed into a final product (or service).
Everyone should tackle some great project at least once in their life. [Sakichi Toyoda]