It's the percentage of performance lost when moving from 1440p to 4K. So if it's 0% you lose no performance, while if it's 33.3% you lose a third of your performance, and so on.
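To make the arithmetic concrete, here's a minimal sketch of how that scaling-loss figure falls out of two FPS readings; the numbers are made up purely to illustrate the formula, not taken from any benchmark:

```python
def scaling_loss(fps_1440p: float, fps_4k: float) -> float:
    """Percentage of performance lost moving from 1440p to 4K."""
    return (1 - fps_4k / fps_1440p) * 100

# Hypothetical example: a card doing 90 fps at 1440p and
# 60 fps at 4K loses exactly a third of its performance.
print(f"{scaling_loss(90, 60):.1f}%")  # -> 33.3%
```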
You originally said that performance "dies" at higher resolutions with RDNA 2 and that this "isn't in line with other architectures". My point is that RDNA 2 scales similarly to RDNA 1 and Turing, as per the benchmark data I presented. So it's not as if RDNA 2 went backwards in any way. Rather, Ampere improved Nvidia's performance scaling vs. Turing, and this put AMD in a worse competitive position. So I am not denying that RDNA 2's scaling vs. Ampere was worse than RDNA 1's scaling vs. Turing. I am saying this was not a failure of RDNA 2, unless you count every advancement your competitor makes that you don't match as a "failure".
And with RDNA 3, based on the benchmark data, we seem to have a situation where the 7900 XTX achieves scaling parity with Ada/Ampere, the 7900 XT sits between Ada/Ampere and RDNA 2, and the 7800 XT is in line with RDNA 2. (Check out the meta review below, where the 6800 XT holds its ground against the 7800 XT.)
The comparison is not between the 2080 Ti and the 5700 XT but between RDNA 2 and RDNA 1/Turing. Now, it's fair to object that RDNA 1 was only used in the 5700 XT, which is a midrange card not designed for 4K. But at 1080p you will be CPU-limited on RDNA 2, so that's not ideal either. If we go back to the Radeon VII, which has far more memory bandwidth, we still see it behind RDNA 2 in terms of scaling.
Yes, because Ampere scales better than RDNA 2 and Turing.