Thanks. It's always good to get a developer with years of experience to give their opinion on a situation like this.

I would avoid speculating on the latency of memory subsystems unless there are 1:1 benchmarks available. E.g. the PS2 was lambasted for its supposed RAM latency for years, even though its real-world access times were actually the 2nd fastest of the four consoles of its era.
You're in a thread where people are taking an amalgamation of CPU & GPU jobs spread across 8 cores and X GPU resources, getting a 25% difference at the end, and going "AHA - it's because of the CPU clock speed being 10% different" - it's just silly assertions against silly assertions on all sides.
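To illustrate the kind of back-of-the-envelope sanity check I mean (all numbers made up, purely illustrative): even if CPU work scaled perfectly with clock speed, a 10% clock difference tops out at roughly a 9% frame-time improvement, and only in a frame that is entirely CPU-bound - so pinning a 25% end-to-end gap on clock speed alone doesn't add up.

```python
# Purely illustrative sketch with made-up numbers: how much of a frame-time
# gap can a 10% CPU clock difference actually explain?

def frame_time(cpu_ms, gpu_ms):
    # Crude model: the frame is gated by whichever of the CPU or GPU work
    # takes longer (ignores overlap, sync points, memory behaviour, etc.).
    return max(cpu_ms, gpu_ms)

# Hypothetical baseline: 20 ms of CPU work, 30 ms of GPU work per frame.
base_cpu, base_gpu = 20.0, 30.0

# Platform B has a 10% faster CPU clock; assume CPU work scales perfectly.
fast_cpu = base_cpu / 1.10

slow = frame_time(base_cpu, base_gpu)
fast = frame_time(fast_cpu, base_gpu)

print(f"slower CPU: {slow:.1f} ms/frame, faster CPU: {fast:.1f} ms/frame")
print(f"difference: {100 * (slow - fast) / slow:.1f}%")
# With these numbers the GPU is the bottleneck on both, so the difference is 0%.
# Even in a fully CPU-bound frame, a 10% clock bump caps out around ~9%
# faster frame times - nowhere near 25% on its own.
```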
I mean, we're discussing an application that is barely holding together at the seams - there's little guarantee the different platforms even shipped on the same data/code version, and a difference in either could account for dramatic performance differences even if hardware performance were identical. And that's assuming the performance differences aren't purely down to bugs, rather than optimization, to begin with.
IMO it's meaningless to even speculate on this - anything near the complexity of ACU is not a usable CPU benchmark with the number of variables involved (see my comments above).
And even if we assume those variables are NOT an issue - there are platform-specific codebase unknowns. As an extreme example, a number of Ubisoft's early PS3 games ran on a DirectX emulation layer, badly obfuscating any relative hardware-performance metrics you might expect to get from cross-platform comparisons.