Almost done!
‘I’m almost done!’
The voice coming from the children’s room sounds confident and a little annoyed at the same time. ‘I need three more minutes, then I’ll have reached the level and will come to dinner,’ calls out the son. Three minutes turn into ten, then twenty. In the end, half an hour has passed.
Many parents are familiar with this situation. And many people also know it from their everyday working lives:
- ‘We’re just about done with the feature.’
- ‘The presentation just needs a little fine-tuning.’
- ‘The offer is practically ready. All that’s missing is the signature of the management.’
‘Almost done’ sounds like certainty. Experience shows, however, that the statement is rarely reliable. It describes a feeling rather than a fact: I’m practically through, I’ll make it to dinner in a moment, and you can already take the screenshots for the documentation of the new feature.
That’s where the problem begins. The remaining work, however small it may seem, quickly turns into a huge mountain. In a computer game, a new monster appears just before the end. In the office, the management is in a meeting, after which there are more queries than expected, and the final step turns into a new loop.
This is how ‘almost done’ becomes a tricky phase in an activity, a plan or a project. Not because people want to systematically deceive (except maybe the son), but because what was previously not very visible often comes up at the end: coordination, approval, testing, edge cases and lots of details.
If we want to assess progress more reliably, a feeling is not enough. We need clarity about what ‘finished’ really means. And we need methods that make the remaining effort visible before the last ten percent becomes the longest phase.
When the last 10 per cent changes everything
Those who carry out projects and develop products professionally will not coordinate their collaboration based on a statement such as ‘almost done’. Companies plan budgets, prioritise work packages, manage projects and report to stakeholders. To do this, they need guidance. That is why many organisations work with defined degrees of progress.
- A specification is 50 per cent complete.
- A project is 70 per cent complete.
- A feature is 90 per cent finalised.
Percentage values create structure. They enable comparability. And they convey a sense of control.
The problem begins when these figures suggest a degree of accuracy that does not actually exist. 90 per cent sounds precise. In fact, it often hides a subjective assessment. A feeling becomes a key figure. And this is precisely where a systematic distortion arises.
This distortion is not just a theoretical problem. It almost always manifests itself where it hurts: just before the finish line. Because the closer something gets to completion, the more percentages become a kind of reassurance. And the more often the remaining 10 per cent is underestimated.
This pattern is known as the 90 per cent syndrome.
It describes the systematic tendency to underestimate the remaining effort. The essential steps seem to have been completed, the biggest hurdles overcome. The remaining 10 per cent seems small, manageable and quickly done.
In practice, however, the opposite is often true. Especially in the final phase, many things come together. The integration of a feature does not go smoothly. What works on the developer’s computer suddenly throws up error messages in the test environment. Unclear acceptance criteria need to be clarified. Acceptance involves detailed discussions.
At the same time, the effort required for activities that sound routine but are time-consuming increases: testing, documentation, handover, final coordination. Sometimes the technical documentation for a feature turns out to be much more extensive than for comparable features in the past, because new edge cases, new dependencies or new configuration paths have been added. And suddenly it becomes apparent that the last 10 per cent can be very time-consuming.
The consequences are well known in many organisations. Deadlines are jeopardised because downstream work packages cannot be started. Buffers are used up without this initially looking dramatic in the reporting. Sometimes teams start follow-up work because only minor details seem to be missing. If these details change later, additional effort is required for rework.
And last but not least, credibility suffers. Those who repeatedly make overly optimistic estimates lose trust, even if the estimate was not made out of negligence, but rather out of a distorted perception of the remaining effort.
The 90 per cent syndrome does not describe individual failure. It describes a structural pattern.
The so-called 90-90 rule provides a pointed exaggeration of this pattern. It is often attributed to computer scientist Tom Cargill and sums up the experience of many teams:
- The first 90 per cent of the work takes 90 per cent of the time.
- The remaining 10 per cent takes the other 90 per cent.
What may sound exaggerated at first describes a recurring phenomenon. Visible progress is made early on. The real effort lies in stabilising, integrating, testing, documenting and securing.
The last 10 per cent is rarely the small remainder. It is not the most difficult part, but it is the part where we most often misjudge.
Subjectivity and the attempt at standardisation
Why do we so often misjudge effort and progress?
One key reason lies in the way we determine progress in the first place. Progress is not a measurable value like temperature or weight. It is determined by assessment. And every assessment is subjective. What sounds like 80 per cent to one person may seem more like 60 per cent to another. Some people base their assessment on the effort already invested. Others on the perceived degree of difficulty. And some on what still lies ahead of them.
As long as progress is based on individual assessments, inaccuracies are inevitable. And that is precisely why organisations try to limit this subjectivity through standardised approaches. One example of this is the 0/100 method.
The idea behind the 0/100 method is as simple as it is consistent: a task is either not completed or completed. There is no intermediate percentage score. As long as a work package is not completed, it has a completion rate of 0 per cent. Only when it is actually finished is it rated at 100 per cent.
The key effect of this method is not accuracy, but the avoidance of over-optimism. It prevents interim results from creating an overly positive statement about the progress of the project. There is no 70 per cent and no 90 per cent that creates expectations that cannot be met later.
This is precisely why many consider the 0/100 method to be conservative and safe. It reduces the risk of reassuring yourself or others shortly before completion. At the same time, it often appears pessimistic in its assessment of progress because it does not make work that has already been done visible until the result is complete. The method thus makes a clear trade-off: less perceived progress, but less illusion.
Those who want to mitigate this trade-off quickly end up with a variant that at least makes the start of an activity visible: the 20/80 method. [1] As soon as a work package has been started, 20 per cent is included in the completion value. Work that has been started does not disappear completely in the shadow of 0 per cent. At the same time, most of the progress remains tied to something that really counts: an accepted result. Only when the result is accepted are the remaining 80 per cent added.
This makes the method applicable in practice. It seems less pessimistic than 0/100, without immediately slipping into the next percentage illusion. However, this relaxation comes at a price. If merely starting a task already increases the completion value, there is an incentive for cosmetic starts: if you want progress to look good, you declare many work packages as ‘started’. The reported degree of progress rises, but actual completion comes no closer.
The 50/50 method is even less conservative. [2] It places even greater emphasis on the start. As soon as a task has been started, it is given a rating of 50 per cent. Only when it is completed is it rated 100 per cent. This seems much less pessimistic than 0/100. And it is easier for many people to communicate: started means that something is happening. 50 per cent is a visible signal.
The catch here is also the flat rate. 50 per cent says nothing about how much work has actually been done or how much is still to be done. The method provides orientation, but can easily create a false sense of security if ‘started’ is confused with ‘half done’.
Added to this is an effect that is even more pronounced here than with the 20/80 method. When many work packages are started in parallel, the reported progress rises rapidly. The project suddenly looks well advanced overall, even though a large part of the results is still far from being realised.
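To make the difference between the three crediting rules tangible, here is a minimal sketch in Python. It is purely illustrative and not part of any standard; the function names and signatures are assumptions. The point is simply that each rule credits planned effort based on status alone, not on the hours already worked.

```python
# Minimal sketch (illustrative): the three crediting rules expressed as functions.
# A work package contributes planned effort according to its status only;
# the actual hours already worked play no role in the credited value.
# The status parameters are kept uniform across all three rules for comparability.

def credit_0_100(planned: float, started: bool, done: bool) -> float:
    """0/100: a package counts only once it is completely finished."""
    return planned if done else 0.0

def credit_20_80(planned: float, started: bool, done: bool) -> float:
    """20/80: 20 per cent as soon as work has started, the remaining 80 per cent on acceptance."""
    if done:
        return planned
    return 0.2 * planned if started else 0.0

def credit_50_50(planned: float, started: bool, done: bool) -> float:
    """50/50: half the planned effort on start, the rest on completion."""
    if done:
        return planned
    return 0.5 * planned if started else 0.0
```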
Progress in comparison
All three approaches pursue the same goal: they want to replace subjective assessments with simple rules. However, they do this with different weightings. Depending on how much consideration is given to work already started, the picture of project progress changes considerably. The differences are not only theoretical in nature. They have a direct impact on reporting, expectation management and control.
Example:
Imagine six work packages (A, B, C, D, E, F). Each has a planned effort in person-days (PD), and each has an actual effort documented by time tracking. The difference between the two is the remaining effort. How does progress compare between the methods?
| Work Package | Planned Effort | Actual Effort | Remaining Effort | 0/100 Method | 20/80 Method | 50/50 Method |
| --- | --- | --- | --- | --- | --- | --- |
| A | 10 PD | 10 PD | 0 PD | 10 PD | 10 PD | 10 PD |
| B | 15 PD | 15 PD | 0 PD | 15 PD | 15 PD | 15 PD |
| C | 20 PD | 10 PD | 10 PD | 0 PD | 4 PD | 10 PD |
| D | 15 PD | 10 PD | 5 PD | 0 PD | 3 PD | 7.5 PD |
| E | 25 PD | 10 PD | 15 PD | 0 PD | 5 PD | 12.5 PD |
| F | 15 PD | 11 PD | 4 PD | 0 PD | 3 PD | 7.5 PD |
| Sum | 100 PD | 66 PD | 34 PD | 25 PD | 40 PD | 62.5 PD |
If you compare the total number of person-days for each method with the total planned effort, this means:
- According to the 0/100 method, 25 per cent of the project has been completed. So it appears to be still in its early stages.
- According to the 20/80 method, 40 per cent of the project has been completed. So it appears to be approaching the halfway point.
- And according to the 50/50 method, 62.5 per cent of the project has been completed. It therefore appears to be well advanced, even though a significant portion of the work packages have not yet been completed.
It is striking how different these figures sound and what conclusions they suggest, isn’t it?
And if you now look at the cumulative planned and actual effort, you can see that 66 per cent of the planned effort has already been expended. But what does this say about the progress of the project? [3]
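For anyone who wants to recalculate the comparison, the following sketch reproduces the totals from the table. It assumes, as the table does, that every unfinished work package has at least been started; the variable names are purely illustrative.

```python
# Illustrative recalculation of the example table.
packages = {
    # name: (planned effort in PD, actual effort in PD, completed?)
    "A": (10, 10, True),
    "B": (15, 15, True),
    "C": (20, 10, False),
    "D": (15, 10, False),
    "E": (25, 10, False),
    "F": (15, 11, False),
}

total_planned = sum(planned for planned, _, _ in packages.values())  # 100 PD
total_actual = sum(actual for _, actual, _ in packages.values())     # 66 PD

# Credited value per rule: completed packages count in full; unfinished (but
# started) packages contribute only the rule's start share of the planned effort.
credit_0_100 = sum(p if done else 0.0 for p, _, done in packages.values())
credit_20_80 = sum(p if done else 0.2 * p for p, _, done in packages.values())
credit_50_50 = sum(p if done else 0.5 * p for p, _, done in packages.values())

print(f"0/100 method: {credit_0_100 / total_planned:.1%}")            # 25.0%
print(f"20/80 method: {credit_20_80 / total_planned:.1%}")            # 40.0%
print(f"50/50 method: {credit_50_50 / total_planned:.1%}")            # 62.5%
print(f"Effort already expended: {total_actual / total_planned:.1%}") # 66.0%
```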
One thing is certain: the higher the weighting of the proportion of work started, the greater the reported progress. Not because more has been completed, but because activity is evaluated as a result. And this is precisely where you decide which method suits your control logic. The crucial question is therefore not which figure is ‘correct’. The crucial question is what information you really need for control and decision-making in your context.
Managing progress means more than counting percentages
The information required for management and decision-making depends on the specific context. This is precisely why the methods differ not only technically, but also in their effect.
- The 0/100 method prioritises reliability. It only shows completed results. This creates a conservative, robust picture.
- The 20/80 method strikes a balance. It makes progress that has already been made visible, but keeps most of the value tied to the actual completion.
- The 50/50 method signals movement early on. It makes activity visible quickly, but increases the risk of overestimating progress.
The methods therefore differ not only in terms of numbers, but also in their control logic. Those who need a high degree of planning reliability will tend to choose conservative approaches. Those who need early visibility of activity may opt for a less strict logic. And those who report in a highly aggregated manner must be aware of the distortion that work in progress creates.
However, the question of methodology does not tell the whole story.
Methods for determining the degree of completion provide guidance. They determine how progress is reported. However, they do not manage a project. They do not replace prioritisation, coordination of dependencies, or clarification of what ‘finished’ actually means.
Whether a method works in practice or just produces good-looking reports is therefore decided elsewhere.
Lever 1: Cut work packages so that progress can be observed
Large, vaguely defined work packages are a breeding ground for false impressions of progress. The larger the unit, the more progress becomes a matter of interpretation.
Small, clearly defined work packages reduce this scope for interpretation. They force concrete commitments, make blockages visible earlier and prevent a single ‘huge package’ from getting stuck at a perceived interim stage for weeks on end.
This is exactly where percentage methods come into play. Whether 0/100, 20/80 or 50/50, they work much better when what is being evaluated is manageable and clearly defined.
Lever 2: Definition of Done, so that ‘done’ remains non-negotiable
One of the most common reasons for costly loops is not a lack of performance, but a lack of clarity. ‘Done’ sounds clear. But it rarely is.
- For one person, done means: code written.
- For another: integrated.
- For the next: tested.
- And for the one who accepts it: documented, checked, usable, approved.
A definition of done creates a common point of reference here. It makes it clear which criteria must be met for a work package to be considered complete. This makes ‘almost done’ less of a perspective and more of a verifiable state.
Incidentally, it also stabilises the methodology. Because whether it’s 0/100 or 20/80, without clarity about what counts as completion, every percentage becomes negotiable again.
Lever 3: Make dependencies visible before they explode at the end
The final phase is often expensive because many dependencies converge there. Interfaces, external input, approvals, infrastructure, test environments, security checks, stakeholder feedback.
If these dependencies are not visible, progress appears stable until it suddenly stalls. Then the pattern described at the beginning of this article emerges: the last 10 per cent is not difficult, but it is full of feedback loops.
A practical difference is therefore whether projects treat dependencies as a peripheral issue or as an integral part of control. Clarifying them early on protects you from surprises at the end.
Lever 4: Don’t ask about percentages, ask about remaining work
The question ‘How far along are you?’ invites percentages. And percentages invite optimism.
A more robust question is: ‘What specifically is still missing before it is really finished?’ This question forces substance. It makes the remaining effort visible before it becomes a surprise. And it changes communication within the team because it does not focus on a feeling, but on a list of open issues.
This is where the interplay between method and practice comes into play: percentages structure the presentation. The specific remaining work structures the control process. Only when both come together do progress figures become a reliable basis for decisions.
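A purely hypothetical sketch (the open items and estimates below are invented for illustration) shows the difference in information content: a percentage is a single, hard-to-verify number, while a list of remaining work can be inspected, challenged and summed.

```python
# 'How far along are you?' yields one number that is easy to state, hard to verify.
reported_progress = 0.9

# 'What specifically is still missing?' yields a list that can be checked and summed.
remaining_work = {
    "clarify open acceptance criteria": 0.5,  # estimates in person-days (PD)
    "fix integration test failures": 2.0,
    "document new configuration options": 1.0,
    "obtain sign-off from management": 1.5,
}

print(f"Reported: {reported_progress:.0%} done")
print(f"Open items: {len(remaining_work)}, "
      f"estimated remaining effort: {sum(remaining_work.values())} PD")
```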
From almost done to truly done
‘Almost done’ is an expression we have probably all used many times. In a private context, this may sometimes be understandable. In a professional context, however, it is inaccurate. Since individual estimates are inevitably subjective, companies try to standardise progress on work packages, plans or projects. They use methods such as 0/100, 20/80 or 50/50.
These approaches have different focuses. The 0/100 method prioritises reliability, but often appears conservative. The 20/80 method reflects progress made, but ties most of the value to an accepted result. The 50/50 method makes activity visible early on, but reinforces the effect that many packages that have been started can create an optimistic overall picture.
Of course, many organisations also use their own methods or adapted rules to aggregate exactly the information that is helpful for control and decision-making in their context.
The important thing to remember is that methods provide guidance, but they do not manage a project. Progress management only becomes viable with additional levers. Clearly defined work packages, a clear definition of done, visible dependencies and consistent questioning of specific remaining work reduce the gap between numbers and reality.
And if a new monster does appear shortly before the end, dinner for the son will just have to be cold.
Notes (links partly in German):
[1] Here you can find more details on the 20/80 method.
[2] Here you can find more details on the 50/50 method.
[3] Since effort is not equal to progress, some statements cannot be derived exactly from these values: we do not know whether the project is 66 per cent complete, whether 34 per cent of the work is still missing, or whether the project is on schedule. Only if progress were proportional to effort could one say that the project is 66 per cent complete.
Here you can find information about increasing or decreasing stages of completion.
And here you can find an interesting article about slicing work as an essential skill for agility. Well worth reading.
Michael Schenkel
Head of Marketing, t2informatik GmbH
Michael Schenkel has a heart for marketing - so it is fitting that he is responsible for marketing at t2informatik. He likes to blog, likes a change of perspective and tries to offer useful information - e.g. here in the blog - at a time when there is a lot of talk about people's decreasing attention span. If you feel like it, arrange to meet him for a coffee and a piece of cake; he will certainly look forward to it!