This kind of testing is new to me; I had never heard that such tests were done. After all, it is very expensive to build a test engine solely to measure performance with so-called "end of life" clearances. Normally the clearances in the engine are measured continuously, so you get a relation between the change in clearances due to wear and rub-ins and the change in performance (both fuel burn and compressor stability).
Building in intentionally larger tip clearances makes sense only for the compressors, to test compressor stability. And if these tests showed an SFC shortfall of 4-5% (I have now heard 5%), then either the performance retention of the LEAP engine is very bad, or there is indeed a performance shortfall, because the effect of larger tip clearances alone should not cost more than 2% in SFC.
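The arithmetic behind that inference can be sketched as follows. The 4-5% shortfall and the roughly 2% ceiling for tip-clearance losses are the figures quoted above; the exact split is illustrative:

```python
# Illustrative breakdown of the measured SFC shortfall, using the
# figures quoted in the text: a 4-5% measured shortfall and an
# assumed upper bound of ~2% attributable to enlarged tip clearances.

measured_shortfall = (4.0, 5.0)  # % SFC shortfall, as reported
max_clearance_effect = 2.0       # % SFC, rough ceiling for tip-clearance losses

# Residual shortfall that enlarged clearances alone cannot explain
residual = tuple(s - max_clearance_effect for s in measured_shortfall)
print(residual)  # (2.0, 3.0) -> at least 2-3% must come from elsewhere
```

In other words, even granting the full 2% to the intentionally opened clearances, a 2-3% shortfall remains unaccounted for.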
Now, CFM has always made big noise about the good performance retention the LEAP engine family would have, in particular compared to the P&W GTF family. One could ask: how do they know that? In my view, neither CFM nor P&W nor anybody else can know today, and nobody will know until real-life data is available. By "real-life data" I mean data that will come out of the field once airlines have actually flown the aircraft and engines for some time.
So we don't know for sure how much of the (obviously measured) 4-5% is attributable to a new engine built with production tip clearances. But CFM's argumentation does not add up if all they can show is that they met their own expectations. Meeting your own pretest prediction within 0.5% does not mean a whole lot to the outside world until you know where that prediction sat relative to specification, i.e. relative to the target SFC at which the B737MAX would meet a 14% fuel burn reduction vs. the B737NG.
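Why "within 0.5% of prediction" says little on its own can be shown with a hypothetical example. The 4% prediction-to-specification gap below is invented purely for illustration; only the 0.5% figure comes from the text:

```python
# Hypothetical illustration: an engine can meet its pretest prediction
# within 0.5% and still miss specification badly, if the prediction
# itself sat above the specification SFC.

spec_sfc = 100.0          # normalized SFC that meets the 14% fuel burn goal
prediction_vs_spec = 4.0  # HYPOTHETICAL: prediction was 4% worse than spec

pretest_prediction = spec_sfc * (1 + prediction_vs_spec / 100)  # 104.0
measured = pretest_prediction * 1.005                           # within 0.5%

shortfall_vs_spec = (measured / spec_sfc - 1) * 100
print(round(shortfall_vs_spec, 2))  # 4.52 -> still ~4.5% worse than spec
```

The point: "we met our prediction" is only meaningful once the prediction itself is anchored to the contractual target.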