You still haven't provided specifics about what exactly can be cut out beyond [huge list of bells and whistles]. You cut out completely the lighting or what?
No, it is calculated in a completely different way, as I tried to explain before in the most abstract way I could.
Forward is: R = G*(L+1) + A
where G = geometry cost, L = number of lights in the scene, and A is a forward-rendering-based anti-aliasing solution.
R is the final render time, likely to be 16ms or 33ms depending on the game. L is at least 1, and A is probably the cheapest option available at the time.
So to hit a given value of R where L and A are fixed, you can extrapolate G linearly.
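That linear relationship can be sketched in a few lines of code (a hypothetical toy model, not anything a real engine does; the function names and example numbers are mine):

```python
# Toy model of the forward-rendering cost described above: R = G*(L+1) + A.
# Geometry is shaded once in a base pass plus once per light, so with L and A
# fixed you can solve for the geometry budget G linearly.

def forward_render_time(geometry_ms: float, num_lights: int, aa_ms: float) -> float:
    """Estimated frame time R for a forward renderer."""
    return geometry_ms * (num_lights + 1) + aa_ms

def max_geometry_budget(target_ms: float, num_lights: int, aa_ms: float) -> float:
    """Solve R = G*(L+1) + A for G, given a target frame time R."""
    return (target_ms - aa_ms) / (num_lights + 1)

# Example (made-up numbers): a 16 ms (60 fps) target, 3 lights, 1 ms of AA
# leaves (16 - 1) / (3 + 1) = 3.75 ms of geometry budget.
print(max_geometry_budget(16.0, 3, 1.0))  # 3.75
```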
Deferred rendering is vastly more complicated to arrive at R, because almost none of the individual variables involved are fixed, and almost none scale linearly.
G is fixed, but G in a modern pipeline is almost nothing.
A modern game's frame would break down something like:
(0.1ms geometry) + (0.4ms lighting prepass) + (0.5ms Z-buffer) + (2ms lighting quality 1st pass) + (1ms edge AA) + (2ms lighting quality 2nd pass) + (2ms bokeh DOF blur) + (2ms high-quality bloom) + (2ms lighting quality 3rd pass) + (1ms temporal AA) + (2ms SSAO) + (1ms colour grading)
of which only a few parts of the effects chain are strictly required, and where all of the render costs are variables: for each one you can literally have a slider between 'speed' and 'quality', or turn it off completely.
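As a rough sketch of what "everything is a slider" means (the split between required and optional passes here is my assumption, since which passes are mandatory varies by engine, and the costs are just the figures from the breakdown above):

```python
# Hypothetical deferred frame budget: a small set of required passes plus a
# pile of optional passes, each with a 0.0 (off) .. 1.0 (full cost) slider.
# Which passes count as "required" is an assumption for illustration.

REQUIRED_MS = {"geometry": 0.1, "lighting prepass": 0.4,
               "z-buffer": 0.5, "lighting pass 1": 2.0}
OPTIONAL_MS = {"edge AA": 1.0, "lighting pass 2": 2.0, "bokeh DOF": 2.0,
               "bloom": 2.0, "lighting pass 3": 2.0, "temporal AA": 1.0,
               "SSAO": 2.0, "colour grading": 1.0}

def frame_time(quality: dict) -> float:
    """Total frame time: required passes plus each optional pass scaled
    by its quality slider (passes missing from `quality` are off)."""
    total = sum(REQUIRED_MS.values())
    total += sum(cost * quality.get(name, 0.0)
                 for name, cost in OPTIONAL_MS.items())
    return total

everything_on = frame_time({name: 1.0 for name in OPTIONAL_MS})  # 16.0 ms
heavy_cuts = frame_time({"edge AA": 1.0, "colour grading": 0.5})  # 4.5 ms
```

The point being: unlike the forward formula, the total here is a sum of independently tunable terms, so a port can shed milliseconds pass by pass instead of just scaling geometry.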
e:
Even so, at <200 GFlops you are not going to get away with just post-processing or even minor shading quality reductions in most high-end games, if you want to render them at 720p.
No, which is why I don't think any developers supporting the Switch are going to take "X1 / PS4 console" settings and then just dial down the resolution until it runs docked.
It has a completely different performance metric. It would require a commensurate level of porting effort.
e: I mean, for starters, if your target is a 720p handheld, there's no point using any textures over 512x512.