In the current economic climate, it is important to establish a price/quality evaluation model that delivers the right level of quality at the best possible price.  There is no single "best practice" model.  In this article we look at a variety of the different models that exist, and provide some insight into the potential pitfalls of each.

The starting point

The starting point is that contracting authorities have flexibility of choice in selecting their price/quality evaluation model. 

  • Authorities must base the award on the Most Economically Advantageous Tender (the "MEAT")
  • The MEAT may be identified on the basis of price or cost alone, using a cost-effectiveness approach
  • The MEAT may be identified by using a price-quality ratio (note the use of the word "ratio", which we will come back to)
  • The MEAT may be identified by fixing the cost so that operators compete on quality only.

This is set out in Regulation 67 of the Public Contracts Regulations 2015 ("the PCR"), which means that authorities can award on price alone (having first set a specification), fix the price and evaluate quality alone, or use a combination of the two.

Compliance with general principles

Whichever method is used, authorities must still comply with general principles of equal treatment, transparency, non-discrimination, relevance and proportionality.  As we will see, simply publishing a methodology may not be enough to comply with transparency if the underlying methodology creates unpredictable results, and simply evaluating all bidders with the same methodology may not be enough to comply with equal treatment if the methodology does not result in a rational outcome.

The guiding principle is that the methodology must identify the MEAT.  If it does not, there is a risk of challenge.  The best route to achieving this is to select a methodology which is relevant and proportionate to the particular procurement being undertaken.

Achieving relevance and proportionality

What will be relevant and proportionate to a procurement for IT services and supplies may well not be relevant and proportionate to a procurement for health care services.  It is important that the authority considers the range of different approaches and directs its mind to the most suitable.  Some examples of different approaches and considerations in respect of each of them are set out below:

Price Only

  • Purchase of standardised goods or commodities
  • Authority can clearly define its specification by reference to well-defined standards
  • The number of suppliers in the market will generate reasonable competition
  • Over-emphasis on price may lead to unsustainable bids and attempts to re-negotiate or default post-award, which create risk
  • With a race to the bottom on price, beware of abnormally low tenders: at what point is a price so low that it triggers an obligation to investigate under regulation 69 of the PCR?

Quality Only

  • Authority has a clear budget/can set a financial envelope
  • Authority needs to test what the market can deliver
  • Scope for dispute over marks awarded for quality?  The majority of procurement challenges include some complaint about marks awarded on quality.
  • But this can be mitigated by clear evaluation criteria, well-briefed evaluators, and a scoring matrix which is not overly prescriptive


Price and Quality

  • Neither price nor specification is fixed in advance.
  • Weightings can vary
  • Structure can vary – e.g. may be a two-stage process allowing for quality evaluation first and then price evaluation for those that pass on quality (approved in Irish Waste v Northern Ireland Water [2013] NIQB 41)
  • Wide variety of different formulae (see for example A Comparative Study of Formulas for Choosing the Economically Most Advantageous Tender – Stilger, Siderius and van Raaij)
  • Beware of a very low price weighting which effectively negates price evaluation – 25% was considered too low in Case T-461/08 Evropaïki Dynamiki, although this is not a hard and fast rule
  • Some commentators suggest that simply adding a price score to a quality score does not comply with the PCR, and that a true "ratio" must be used which gives a price per quality point (see below)

Price / Quality Ratios

Some commentators argue that as Regulation 67 refers to a price / quality "ratio", the sum of a price score added to the sum of a quality score does not achieve this.  This criticism does not reflect current practice:  many authorities use a "sum" approach and the author is not aware of any legal challenge to this.

How might a true "price / quality ratio" work?  To break it down to a simple example, it is established by taking the price submitted by a bidder, and dividing it by the quality points earned by a bidder.  Proponents of this method argue that this delivers a price per quality point, which enables a true comparison of what an authority is paying between bids for each point of quality.  They also argue that this is an absolute method of evaluation which is fairer to bidders as bidders are not being compared against each other, but only assessed by virtue of their own bid submission.

However, an obvious consideration is that the same ratio may be achieved by two very different bids: for example, a bid of £100,000 scoring 50 quality points and a bid of £200,000 scoring 100 quality points both deliver a price of £2,000 per quality point.
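The ratio calculation can be sketched in a few lines (the bid figures below are illustrative, not drawn from any real procurement):

```python
def price_per_quality_point(price: float, quality_points: float) -> float:
    """Price / quality ratio: what the authority pays for each point of quality."""
    return price / quality_points

# Two very different bids can nonetheless produce an identical ratio:
bid_1 = price_per_quality_point(100_000, 50)    # £2,000 per quality point
bid_2 = price_per_quality_point(200_000, 100)   # £2,000 per quality point
assert bid_1 == bid_2
```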

In order to manage this, the authority needs to introduce additional parameters, for example a financial envelope to indicate to the market the level of quality it is willing to pay for.  Indifference curves can also be built into the formula so that the authority rewards just the right amount of quality (the "Goldilocks zone"), but beyond a certain level values each successive unit of quality less and less.
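One possible shape for such a formula is sketched below; the target level of quality and the square-root taper are purely illustrative assumptions, standing in for whatever indifference curve the authority designs:

```python
import math

def quality_value(points: float, target: float = 60.0) -> float:
    """Value the authority places on quality: full credit up to `target`,
    then each successive point beyond it is worth less and less
    (an illustrative concave taper standing in for an indifference curve)."""
    if points <= target:
        return points
    return target + math.sqrt(points - target)

# Beyond the target, each extra block of four quality points adds less value:
# quality_value(64) - quality_value(60) = 2.0
# quality_value(76) - quality_value(72) ≈ 0.54
```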

When might a methodology be unlawful?

Whilst this section is not exhaustive, a number of decisions from courts in other member states indicate that courts may be willing to intervene where the methodology used produces an irrational result (i.e. one that does not identify the MEAT).


Case 5293-10, Gothenburg Administrative Court of Appeal – purchase of a GPS map and guidance system for road maintenance vehicles

In this case the authority intended to give price a high weighting: price-related criteria accounted for 80% of the overall available weighting:

  • Price – 60%
  • Operational Costs – 20%

Despite this, a bidder who was more expensive overall ended up winning, even though it did not do better on quality.  How did this happen?

The bidder undercut other bidders on the 60% price weighting and gained a sufficient lead there that it could load very high costs onto the operational costs element (at only a 20% weighting, this did not erode its lead).

The Court held that this was not equal treatment because it was not an award to the MEAT.
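The mechanics can be reproduced with illustrative numbers; the figures and the exact sub-formula from the Swedish case are not given here, so the relative lowest-bid scoring below is an assumption used only to show the effect:

```python
def score(price, op_cost, lowest_price, lowest_op_cost,
          price_weight=60, op_weight=20):
    """Relative scoring: the lowest bid on each element takes the full
    weighting; others score in proportion (hypothetical reconstruction)."""
    return (lowest_price / price) * price_weight + (lowest_op_cost / op_cost) * op_weight

# Bidder X undercuts on the 60%-weighted price but loads the 20%-weighted
# operational costs; Bidder Y is far cheaper overall (£200 vs £550).
x = score(price=50, op_cost=500, lowest_price=50, lowest_op_cost=100)   # 60 + 4 = 64
y = score(price=100, op_cost=100, lowest_price=50, lowest_op_cost=100)  # 30 + 20 = 50
assert x > y  # the dearer bidder wins: not an award to the MEAT
```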


Communes de Lognes

In this case the authority intended to score bidders between 0 and 40, depending on how they compared to the most expensive bid (which would score 0) and the least expensive bid (which would score 40).

However, only two bidders submitted a tender.  That meant that one of the bidders scored 40 on price, and the other scored 0 on price.  This would have been the case regardless of the price differential between them (i.e. even if they were separated by just £1).

The Court struck out this methodology because it did not take account of actual or real differences in the bid prices.
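The collapse is easy to reproduce: with only two tenders, linear interpolation between the dearest bid (0) and the cheapest bid (40) always awards the extremes, however small the price gap.

```python
def minmax_price_score(bid: float, bids: list[float], max_score: float = 40) -> float:
    """Score by linear interpolation: the most expensive bid scores 0,
    the least expensive scores max_score."""
    hi, lo = max(bids), min(bids)
    if hi == lo:
        return max_score
    return (hi - bid) / (hi - lo) * max_score

# Two tenders just £1 apart still score 40 and 0:
bids = [100_000, 100_001]
assert minmax_price_score(100_000, bids) == 40
assert minmax_price_score(100_001, bids) == 0
```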

Another methodology which has received some criticism is that of average pricing, whereby tenders close to the average price score more highly than those further away.
The obvious problem with this is that a genuinely low-priced tender can receive the same score as a much higher-priced tender.  The European Commission (in its Public Procurement Guidance for Practitioners on the avoidance of the most common errors in projects funded by the European Structural and Investment Funds (2015)) has said that:

"…[this] average pricing methodology represents unequal treatment of tenderers, particularly those with valid low tenders"

The Commission has also stated that for projects funded by ESIF, average pricing is not allowed.
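The article does not set out a specific average-pricing formula; the linear-distance rule below is one plausible form, used only to show why a valid low tender fares no better than an expensive one:

```python
def average_price_score(bid: float, bids: list[float], max_score: float = 40) -> float:
    """Illustrative average-pricing rule: the closer a bid sits to the mean
    price, the higher it scores (linear penalty on distance from the mean)."""
    mean = sum(bids) / len(bids)
    span = max(abs(b - mean) for b in bids) or 1
    return max_score * (1 - abs(bid - mean) / span)

bids = [200, 300, 400]
# The valid low tender (£200) scores no better than the dear one (£400):
assert average_price_score(200, bids) == average_price_score(400, bids) == 0
assert average_price_score(300, bids) == 40
```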

Relative Methodologies – do these comply with general principles?

Many different relative methodologies are currently in use, although not without some criticism from commentators.  Whilst we do not think that this necessarily means authorities should stop using relative methodologies, authorities should be aware of their particular features.

The main criticisms relate to:

  • Transparency, as bidders are ranked against each other and therefore cannot know in advance how they will score, since this depends on how other bidders perform.
  • Equal treatment, as some methodologies mean that bidders who are "middle of the range" will effectively be penalised by the methodology more than those bidders who are at either the top end (the most expensive) or the bottom end (the cheapest).
  • Relevance, as ranking is affected by a factor arguably irrelevant to the bid under consideration, namely the performance of other tenders.

An example of a relative methodology (which is commonly used) is inverse proportionality, whereby the lowest price submitted is divided by the price of the bid being evaluated and the result multiplied by the price weighting:

In this example (with thanks to the authors of the papers referenced at the end of this article) the bids would score as follows:


Bid        Bid Price   Lowest Price ÷ Bid Price   Price Weighting   Price Score
Bid A      £400        0.75                       60                45
Bid B      £350        0.857                      60                51.4
Bid C      £300*       1.0                        60                60

* Lowest Price
Note that Bid B (which was submitted at a price exactly in the middle of Bid A and Bid C) has not attained a score which is mathematically in the middle of Bid A and Bid C.  It has effectively been penalised because it is middle of the range.
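The figures follow directly from the formula:

```python
def inverse_proportional_score(bid_price: float, lowest_price: float,
                               weighting: float = 60) -> float:
    """Lowest price divided by the bid price, multiplied by the price weighting."""
    return lowest_price / bid_price * weighting

scores = {price: round(inverse_proportional_score(price, lowest_price=300), 1)
          for price in (400, 350, 300)}
assert scores == {400: 45.0, 350: 51.4, 300: 60.0}
# Bid B at £350 sits exactly midway between £400 and £300, yet its 51.4
# falls below the midpoint of 45 and 60 (52.5): the middle is penalised.
```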

Potential criticisms?

  • Is it equal treatment that bidders who are middle of the range score less well than those who are more expensive and those who are less expensive?
  • Is it transparent to use a methodology in which bidders cannot predict whether they will suffer as a result of being middle of the range?

This sensitivity to external factors is exacerbated if we consider the above example of Bids A, B and C before and after we add in a fourth bid.

BEFORE adding a fourth bid:

Before adding in a fourth bid, the combination of price scores together with quality scores gave the following result:


               Price Score              Quality Score   Overall ranking
Bid A - £400   45 (300 ÷ 400 x 60)      38              83
Bid B - £350   51.4 (300 ÷ 350 x 60)    32.2            83.6 (1st)
Bid C - £300   60 (300 ÷ 300 x 60)      –               –

As can be seen, Bid B wins.

AFTER adding in a fourth bid:

After adding in a fourth bid (Bid D at £250) the ranking changes.  Bid B is now no longer the winner.  Instead, Bid A wins:


               Price Score              Quality Score   Overall ranking
Bid A - £400   37.5 (250 ÷ 400 x 60)    38              75.5 (1st)
Bid B - £350   42.9 (250 ÷ 350 x 60)    32.2            75.1
Bid C - £300   50 (250 ÷ 300 x 60)      –               –
Bid D - £250   60 (250 ÷ 250 x 60)      –               –




Although the quality scores have remained the same (and indeed the content of the bids has not altered in any way) paradoxically the ranking of the bidders has now changed.  Bid A is now the winner, despite costing more than Bid B.  Critics say that because of this ranking paradox, this type of methodology is not capable of identifying the MEAT with sufficient certainty.

That is not to say that this methodology is necessarily unlawful (and the author is not aware of any decision saying that it is).  The methodology is commonly used, and indeed the effect of this paradox could have been mitigated by either:

  • Introducing a quality threshold (such as 20) in which case Bid D would not have got through, and would not have affected the ranking of the other bids in this way.
  • Lowering the price weighting.
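The paradox, and the quality-threshold mitigation, can be checked directly.  Bid A's and Bid B's quality scores follow from the tables above; the values assumed for Bids C and D (20 and 15) are illustrative:

```python
def winner(bids: dict[str, tuple[float, float]], weighting: float = 60) -> str:
    """Total = inverse-proportional price score + quality score; highest total wins."""
    lowest = min(price for price, _ in bids.values())
    totals = {name: (lowest / price) * weighting + quality
              for name, (price, quality) in bids.items()}
    return max(totals, key=totals.get)

bids = {"A": (400, 38), "B": (350, 32.2), "C": (300, 20)}
assert winner(bids) == "B"       # three bids: B wins

bids["D"] = (250, 15)            # D changes only the lowest price...
assert winner(bids) == "A"       # ...yet A now beats B: the ranking paradox

# Mitigation: a quality threshold of 20 excludes Bid D and restores the result.
qualifying = {name: bid for name, bid in bids.items() if bid[1] >= 20}
assert winner(qualifying) == "B"
```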


It is important, given the level of scrutiny over price evaluation, and given recent judgments from courts in other member states, that authorities ensure that the methodology they select is:

- Relevant and proportionate for what they are purchasing, e.g.:

  • is a financial envelope best, or
  • a fixed specification, or
  • a blend of price and quality?
  • If a blend, what weightings are appropriate?
  • Is a two-stage process better than evaluating price and quality at the same time?

- One that will work within that particular market (e.g. based on what the authority knows about how many bidders will respond, whether anyone will submit a price of £0, what assumptions bidders might make).

- Modelled before publication, to ensure that it does not throw up any irrational results.



Further reading

  • A Comparative Study of Formulas for Choosing the Economically Most Advantageous Tender – Stilger, Siderius and van Raaij
  • Price-Quality Ratios in Value-for-Money Awards – Kilver and Kodym
  • Aspects of Evaluation from a Decision Theory Perspective – Derek W. Bunn
  • Random Effects of Scoring Price in a Tender Evaluation – Michael Bowsher QC

If you would like to discuss this topic in more detail, please contact Emily Heard.
