Indeed. It's a bit odd how some folks equate a purely arbitrary block count to the actual IP revision.
Interesting news:
https://twitter.com/mochi_wsj
A WSJ analyst is claiming, after the recent Sony investors meeting, that Sony is planning to announce a PS4 Slim on top of Neo in September at Tokyo Game Show.
Osiris' original thread only talked about an upgraded CPU, not a GPU upgrade. That came a lot later and from other sources? I can't honestly see how an upgraded GPU was ever actually on the cards, outside of Sony doing what I assume Microsoft to be doing, i.e. going late 2017 and Vega.
The only GPU upgrade they'd need is an up clock though if I'm not mistaken? With a CPU and power upgrade, up clocking the GPU makes a lot of sense.
With the sound issues on the 480 though, I wonder, what are the chances we'd get a liquid cooled GPU solution? That would be interesting, and a first for consoles.
Yeah, that didn't make sense to me.
See, I never believed that Neo was a PS4 Slim. I figured if there was going to be a PS4 Slim, it would be in addition to Neo. Don't know why some people thought otherwise.
Which might back up the idea of a delay to Neo? If they held back waiting for more power, they'd need something to compete against Xbox One S.
They'll also need to up-clock the GDDR5 RAM for more bandwidth.
Since 8GB of GDDR5 on the 256-bit bus maxes out at 256 GB/s, the max up-clock on the Neo GPU is to 1080 MHz without running into bandwidth bottlenecks.
thuway said: Is this how Neo plans to get to 4K!?
No. What the patent describes is more expensive than directly changing frame-buffer resolution.
1080 MHz means a 5 TF PS4K. In 2016. That's insane.
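Back-of-the-envelope on those two claims, assuming the rumoured Neo figures (36 CUs, 911 MHz, 218 GB/s) are accurate:

```python
# Rough sketch, not official specs: rumoured Neo GPU at 36 CUs / 911 MHz
# with 218 GB/s, on a 256-bit GDDR5 bus that tops out at 256 GB/s.

CUS, SHADERS_PER_CU, FLOPS_PER_CLOCK = 36, 64, 2  # GCN: 64 shaders/CU, FMA = 2 FLOPs

def tflops(clock_mhz: float) -> float:
    return CUS * SHADERS_PER_CU * FLOPS_PER_CLOCK * clock_mhz * 1e6 / 1e12

# If bandwidth demand scales roughly with GPU clock, the ceiling is:
max_clock = 911 * 256 / 218                  # ~1070 MHz, i.e. the ~1080 MHz figure
print(f"max clock ~{max_clock:.0f} MHz")
print(f"{tflops(911):.2f} TF at 911 MHz")    # ~4.20 TF (rumoured Neo)
print(f"{tflops(1080):.2f} TF at 1080 MHz")  # ~4.98 TF, the '5 TF PS4K'
```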
It appears that Sony plans to take advantage of the XB1 Scorpio delay with Neo + VR in 2017.
Sony is aiming for ¥500 billion in operating income in FY '17, for only the second time in its history.
2017, with HEVC and HTML5 <video> in browsers, creates an opportunity for Sony Network services on everyone's platform.
Might ease the pain for holiday buyers that they ended up purchasing a smaller, quieter & cheaper PS4 if Neo comes early next year. Though I'm not even sure the audience interested in a smaller, quieter & cheaper PS4 three years after launch is going to care that a bigger & more expensive version is coming.
What is being said here? I don't get what's happening.
Me neither. Tech guys!! Insight please!
Bottleneck is also shared on the PS4 between CPU and GPU. Would like to see them use GDDR5X to hit ~270 GB/s to prevent throttling.
Me neither. Tech guys!! Insight please!
He isn't saying anything. Always going on about HEVC. I'm 99% convinced this is the same guy that would go on about shitty PowerVR or Dreamcast no end.
Is Sony doing something with PS Vue or PS Now in web browsers?
Me 3 years ago said: I know this sounds silly, but it seems like it's exactly what Sony is planning to do with the 8 ACEs.
It's a few things that I have read over the last year or so that are leading me to believe this is what they are doing. I'll try to go back & find all the quotes later, but for now I have a question.
If Sony were to configure the 64 command queues to make the pipelines emulate real fixed-function pipelines, could they work just as efficiently as real fixed-function hardware?
--------------------------------------------
By creating the fixed-function pipelines at the driver level once you figure out just what fixed functions you want the pipelines to be used for.
--------------------------------------------
I'm not saying that the hardware would be fixed-function. I'm saying create what would look like a fixed-function pipeline, so that the software would see it as if it was running on fixed-function hardware.
OK, this is what I'm saying: create fixed-function pipelines so it would be like this: 'Pipeline 1 will run the physics code & Pipeline 2 will run the lighting code', because the pipelines are designed to look as if they are actually hardware created for physics & lighting.
The fixed-function pipelines haven't been created yet, but in a few years the devs & Sony will choose how the data paths should be laid out to create the fixed-function pipelines.
--------------------------------------------
Anyway, I'll explain again.
Instead of creating 3 or 4 fixed-function hardware chips, you use one general-purpose chip but create 3 or 4 fixed-function pipelines.
A single Graphics Command Processor up front is still responsible for dispatching graphics queues to the Shader Engines. So too are the Asynchronous Compute Engines tasked with handling compute queues. Only now AMD says its command processing logic consists of four ACEs instead of eight, with two Hardware Scheduler units in place for prioritized queues, temporal/spatial resource management and offloading CPU kernel mode driver scheduling tasks. These aren’t separate or new blocks per se, but rather an optional mode the existing pipelines can run in. Dave Nalasco, senior technology manager for graphics at AMD, helps clarify their purpose:
"The HWS (Hardware Workgroup/Wavefront Schedulers) are essentially ACE pipelines that are configured without dispatch controllers. Their job is to offload the CPU by handling the scheduling of user/driver queues on the available hardware queue slots. They are microcode-programmable processors that can implement a variety of scheduling policies. We used them to implement the Quick Response Queue and CU Reservation features in Polaris, and we were able to port those changes to third-generation GCN products with driver updates."
Quick Response Queues allow developers to prioritize certain tasks running asynchronously without preempting other processes entirely. In case you missed Dave's blog post on this feature, you can check it out here. In short, though, flexibility is the point AMD wants to drive home. Its architecture allows multiple approaches to improving utilization and minimizing latency, both of which are immensely important in applications like VR.
My words from 3 years ago vs. what HWS in Polaris is.
Could Sony create fixed-function pipelines for the PS4 even after release?
Dave from AMD explaining what HWS is:
http://www.tomshardware.com/reviews/amd-radeon-rx-480-polaris-10,4616.html
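For anyone asking what's actually being compared here: the idea is reserving some of the GPU's compute queues for one kind of work so software can treat them like dedicated hardware. A toy sketch of that routing, with every name made up for illustration (real queue scheduling lives in the ACE/HWS microcode and drivers):

```python
# Toy model of "virtual fixed-function pipelines": reserve some of the 64
# compute queues (8 ACEs x 8 queues on PS4) for specific job types, so code
# can submit physics/lighting work as if dedicated hardware existed for it.
# Everything below is hypothetical and purely illustrative.
from collections import deque

queues = {name: deque() for name in ("graphics", "physics", "lighting", "general")}

# Fixed once, driver-side, when the "pipelines" are defined.
routing = {"physics": "physics", "lighting": "lighting"}

def submit(job_type: str, job: str) -> None:
    """Route a job to its reserved queue; unreserved types fall back to 'general'."""
    queues[routing.get(job_type, "general")].append(job)

submit("physics", "rigid-body step")
submit("lighting", "tile light culling")
submit("audio", "reverb convolution")  # no reserved queue -> 'general'
for name, q in queues.items():
    print(name, list(q))
```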
Someone has taken jeff rigby as their master.
I'm going with it having a 3.68 TFLOPs GPU from moving from 28nm to 14nm & fitting 36 CUs where there used to be 18 CUs.
I think AMD Polaris 10 is the PS4.5 GPU & all the AMD Polaris 11 GPUs are for the Windows 10 devices that will replace Xbox One.
Also, if Polaris 10 isn't basically the PS4 GPU at 14nm with 2X the CUs, why is AMD making a DirectX 11 GPU in 2016?
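The arithmetic behind that 3.68 TF figure, assuming the PS4's stock 800 MHz clock carries over:

```python
# 36 CUs x 64 shaders x 2 FLOPs/clock x 800 MHz
print(36 * 64 * 2 * 800e6 / 1e12)  # 3.6864 TFLOPs
```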
So realistically, what's the strongest Neo could be given up-clocking?
My thought is PS4 Neo is a PS4, but the extra 18 compute units will be used to create virtual co-processors/accelerators/GPUs. Devs will basically see it as a PS4 besides tagging some code to run on the virtual co-processors; these co-processors will be for the up-rendering & so on. Neo will be a beast that works smart, not hard.
I don't think you are going to see a shift in how devs use the GPU. It's simply more muscle to provide better framerate or resolution. You're forgetting PS4 support is a mandate. You're making it out to be way more flexible than it is.
You have to stop looking at it this way. You can up-clock the GPU, but what's the point if your memory isn't fast enough to take advantage of it, or if you don't have enough of it?
The current Neo is nearly perfectly balanced (it could use a little more memory bandwidth). However, if you were to try and force it to be >5 TF, you'd probably want to design an entirely new machine. At that GPU level, you'd be looking at something that would require around 280 GB/s or more of memory bandwidth, and at least 12 GB of memory. There's no point in up-clocking your GPU that hard if you can't actually use it.
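Rough numbers behind the ~280 GB/s claim, keeping the rumoured Neo ratio of bandwidth to compute (218 GB/s for ~4.2 TF):

```python
# Scale the rumoured Neo bandwidth/compute balance up to a bigger GPU.
gbps_per_tf = 218 / 4.2           # ~52 GB/s of bandwidth per TF
for tf in (5.0, 5.5):
    print(f"{tf} TF -> ~{tf * gbps_per_tf:.0f} GB/s")
# 5.0 TF -> ~260 GB/s; 5.5 TF -> ~285 GB/s, i.e. the ~280 GB/s ballpark
```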
So you're saying that AMD has purposely shipped the RX 480 with a bandwidth that restricts it, at 224 or 256 GB/s (depending on the amount of memory)?
Perhaps the 480 has the benefit of not having to share bandwidth with an on-chip CPU like the Neo's APU?
I was looking at the Polaris (RX 480) results and to my surprise it looks like it is not suitable for a closed console machine due to its bad power draw.
I know Sony will use TSMC 16nm FF+ and that can hold the hunger of Polaris 10... 911 MHz is set to maintain it below 100 W (I'm not sure if that is possible even at lower clocks, because the card hits 200 W at 1266 MHz in some situations).
How do you know that Sony are moving the Neo APU from GloFo to TSMC? I can't find any information anywhere.
Reading the 480 reviews, many are suggesting TSMC could be better, but is there any foundation to that? Does the GloFo 14nm draw that much more power?
Also, TSMC struggling to supply Nvidia is being suggested on many forums... so what you have said does not compute if Sony are trying to get Neo made in time.
Sony never used GF, and GF itself didn't have enough production capacity to supply AMD... now, about the differences:
So your belief is that Option B is "or we just ruin it"? That strikes me as unlikely.
Perhaps the problem isn't with Polaris but rather GF's fabrication? They're historically behind schedule, and wouldn't significant leakage at higher clocks be expected from an immature process? Seems like a lot of Polaris' issues might be solved or at least mitigated with better fabrication.
What can Sony do? I don't believe they will try to increase the clock any more... so 5 TF, 5.5 TF or better raw power is out of the question in my predictions... 4.2 TF is still a thing, with a really big machine with thermal issues (the external power brick makes sense now).
What do you guys think?
Polaris 10 is so far behind in thermals / power draw that, to be fair, it is a disappointment after AMD's claims about aiming for perf/watt... it is a great card for desktop due to its price, but for Neo I don't think it is a good option.
Let's see if the GF process gets better over time.
Apart from the use of HBM2 rather than GDDR5, what are the differences between Vega and Polaris?
I know Sony will use TSMC 16nm FF+ and that can hold the hunger of Polaris 10...
So Osiris saying that the 5.5 TFLOPs option could delay the launch to next year to make it more affordable (with a cheaper cooling solution, smaller case...) has its logic in waiting for the chip to be made by TSMC. No way Neo could launch with a GF chip at 5.5 TFLOPs.
Polaris 10 is simply over-volted. You can undervolt it and its power consumption goes down significantly.
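First-order reason an undervolt helps so much: CMOS dynamic power scales with frequency times voltage squared, so a modest voltage drop at the same clock cuts power disproportionately (numbers below are illustrative, not measured):

```python
# P_dynamic ~ C * f * V^2 (first-order CMOS switching power)
def relative_power(f_ratio: float, v_ratio: float) -> float:
    return f_ratio * v_ratio ** 2

print(relative_power(1.0, 0.90))  # 10% undervolt, same clock -> ~0.81x power
```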
Polaris 10 is simply over-volted. You can undervolt it and its power consumption goes down significantly.
Lottery chip, you mean, because you need luck to get a card that works with an undervolt lol
1. The 5.5 TFLOPs was never part of Osiris' original thread. I assume the speculation came from others a lot later ("Bob" or "Diana"?). 2. I doubt Neo is coming next year after Sony acknowledged it before E3. Why not the usual "we don't comment on rumours and speculation"? Apart from that, delaying until Mar '17 isn't going to make 5.5 TFLOPs a reality IMO.
I have a question for those in the know. If you look at die shots of both PS4 and Xbox One, you see "©2012" etched into the chip despite both consoles not releasing until Nov '13. Does this not signify that the chips were locked down in that year, outside of tweaks to clocks? i.e. Xbox One upped the clocks by 53 MHz and 150 MHz.
I don't know anything about Neo, but the reason they acknowledged it is so the media would not hound them about it at E3 and fans would not be disappointed when it wasn't revealed. Microsoft acknowledged Scorpio and it is well over a year away.
Nope... read the article... Xbone lacks what makes a GCN 1.1 card, while GCN 1.1 was based on what they did with PS4.
An easy comparison without entering into details...
GCN 1.0 = 2 ACEs = Xbone
GCN 1.1 = 8 ACEs = PS4
That was discussed at PS4/Xbone launch.
What is being said here? I don't get what's happening.
jeff rigby's posts are always nonsense
Would Sony's engineers modify the GPU or do they just use the exact same hardware that a consumer would?
Sony is commissioning a chip from AMD; their engineers don't have any detailed control over the design. It won't be standard consumer hardware, just because AMD doesn't make APUs with 8 Jaguar cores and mid-size GPUs for the PC market, but the IP blocks that make up the chip are pretty standard, yes.
I'm sure it is co-op work using AMD tech... AMD and Sony confirmed that a lot of times.
Whoa! Sony filed a patent this January for this "Uprendering" technique. - http://www.freepatentsonline.com/20160005344.pdf
Is this how Neo plans to get to 4K!? The patent describes a hardware based technique that uses multiple frames of data and reconstructs them into a higher resolution image. This will NOT be as good as native 4K - but this could be a very intelligent and elegant compromise.
The interesting thing about this technique? You need super high framerates to make it more accurate. 60 FPS = more data = more image information = accurate reconstruction. I'm sure it falls apart in some areas, but motion blur and high framerates should go a long way in easing those concerns.
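To make the idea concrete, here's a toy version of a multi-frame reconstruction: two half-width frames rendered one pixel apart get interleaved into one wider frame. The real patent describes a hardware technique and is certainly more sophisticated than this sketch:

```python
import numpy as np

def uprender(frame_even: np.ndarray, frame_odd: np.ndarray) -> np.ndarray:
    """Interleave two half-width frames (jittered by one column) into one
    double-width frame. Works only if nothing moved between the frames;
    motion is exactly where real reconstruction gets hard."""
    h, w = frame_even.shape
    out = np.empty((h, 2 * w), dtype=frame_even.dtype)
    out[:, 0::2] = frame_even  # samples at even columns
    out[:, 1::2] = frame_odd   # samples at odd columns
    return out

a = np.arange(6).reshape(2, 3)  # stand-in "frame N"
b = a + 100                     # stand-in "frame N+1"
print(uprender(a, b))
```

The closer together the source frames are in time, i.e. the higher the framerate, the less the scene moves between them, which is exactly the "60 FPS = more data" point above.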