This series of articles draws on our disputes experience and identifies 7 common Procurement Pitfalls. When we advise on procurement challenges we tend to find the same types of problems, irrespective of the sector in which they arise. Often these issues stem from the content of the tender documents and go on to create difficulties for evaluators. The objective of these articles is to forewarn, so that early thought can be given to avoiding these issues.
We will be focussing on:
- Price/evaluation methodology
- Waiving requirements
- Imposing unreasonably high requirements
Our sixth Procurement Pitfall is on overly rigid scoring methodology.
Procurement Pitfalls 6 - Overly rigid scoring methodology
Evaluators must ensure that the scores they award within each criterion/sub-criterion can be justified in accordance with the published scoring methodology.
The scoring methodology will usually appear in the Invitation to Tender and the authority will not be able to adjust it after submission of bids. Problems can arise from scoring methodologies that are too rigid and that do not allow evaluators sufficient discretion in the scoring of bids.
In our first example (taken from a UK Court case), the authority’s scoring methodology for quality was as follows:
| Number of Points | Definition |
| --- | --- |
| 0 | Response does not meet requirements and/or is unacceptable. Insufficient information to demonstrate Tenderer's ability to deliver the services. |
| 2 | Response partially meets requirements but contains material weaknesses, issues or omissions and/or inconsistencies which raise serious concerns. |
| 4 | Response meets requirements to a minimum acceptable standard, however contains some weaknesses, issues or omissions which raise minor concerns. |
| 6 | Response generally of a good standard. No significant weaknesses, issues or omissions. |
| 8 | Response meets requirements to a high standard. Comprehensive, robust and well justified showing full understanding of requirements. |
| 10 | Response meets requirements to a very high standard with clear and credible added value and/or innovation. |
The Court interpreted the definition for a score of zero as meaning that any failure to meet the authority’s requirements must score zero – even if the parts of the response that did meet requirements were of high quality. The Court acknowledged that, to score zero, the failure must be significant or material; however, it commented that it was easy to see how difficulties could arise with such a rigid scoring scale.
Such difficulties did indeed arise in this case.
The Court effectively carried out a re-scoring exercise and reduced the scores the authority had awarded to the winning bid as follows:
- For two questions where the winning bidder failed to explain how it would meet very important parts of the specification the Court reduced the authority’s award of “10” to zero.
- For a question where bidders had been asked to explain their proposals for “X” and “Y” and the winning bidder had given its proposals for X only, the Court reduced the authority’s award of “6” to an award of zero.
- For a question where the winning bidder’s response was not compliant with the KPIs that were to form part of the contract the Court reduced the authority’s award of “8” to zero.
- For a question where the winning bidder did not demonstrate added value and/or innovation, the Court reduced the authority’s award of “10” to “6”.
What should the authority have done?
In the above case, there were a number of problems with how the authority carried out the evaluation and the difficulties were made worse by the fact that the evaluators did not have clear notes to justify the scores they had awarded.
However, authorities might consider using a scoring methodology that gives their evaluators more flexibility and discretion in how scores are awarded. One example is as follows:
| Number of Points | Definition |
| --- | --- |
| 0 | Very weak or no answer |
The above is a very flexible scoring scale and, if it is used, it is important to:
- Ensure the individual questions posed to bidders are clear about what is being assessed;
- Ensure the specification is clear about what the authority’s requirements are;
- Include immediately after the above table an explanation of the approach evaluators will take to evaluation. The precise wording will need to be considered on a case-by-case basis.
Another example is a specification broken down into numerous sections and sub-sections, with bidders asked to respond to each section and sub-section and a separate weighting (in many cases as low as 1%) allocated to each.
This type of approach may lead to:
- spoon-feeding bidders to respond to each and every point (with the risk that differentiating between bidders can be difficult);
- key points being lost in the detail (i.e. not seeing the wood for the trees) and less incentive on bidders to "shine" where the value of enhanced performance is limited;
- evaluation fatigue - i.e. the quality of evaluation and record keeping reducing where there is an excessive amount of scoring to be done; and
- a lack of flexibility for evaluators in how scores are awarded – particularly when combined with a rigid scoring scale.
What should an authority do?
We suggest a more flexible approach with a carefully defined set of criteria and sub-criteria that make it clear to bidders what the authority considers important for the performance of the contract and what it wishes to assess – together with appropriate weightings and sub-weightings. However, the authority should resist too granular an approach unless it is clear that this is required.
Our specialist procurement litigation team frequently bring and defend court challenges, both for suppliers and contracting authorities. In the past year alone, we have advised on a range of disputes covering health and social care, infrastructure and development, waste collection and disposal, pathology, defence and telecommunications.
Woods v Milton Keynes Council EWHC 2011