New PS3 Model (CECH-4000) Registered on FCC (Jeff_Rigby alert)

patsu

Member
The Nasne is interesting because you can control it from a PS3, Vita, Vaio or Xperia. But it would be even better if it supported iOS and Android as well.

I don't know what Nasne uses to support these devices over the net (DLNA? Remote Play? something app-specific?). Vita DLNA was in Sony's slides but was never officially announced.
 
More reasonable speculation might be that the new model features a new connection to an optional 16GB chip on the motherboard. Or they don't want to keep the old production line just to build new debug units, say.
I guess that depends on why Sony would go with a new chassis designation from the 3000 to the 4000 model.

Sony went with the 2000 designation in 2009 for the Slim: a die-size reduction for Cell and RSX, a change in the HDMI chip, the addition of CEC support, a new outer case, and the release of firmware 3.0. So what triggered the major change in chassis number this time? In the three years since the Slim launched there have been model changes and minor changes in model number, plus the jump from 2000 to 3000 with the RSX shrink (but not Cell).

The major software change that came at the same time as the 2000 Slim was firmware 3.0, with 3D support planned and Sony starting to implement support for IPTV. To this point Sony has not implemented IPTV or HTML5 <video>, but it has been putting in place the libraries to fully support a WebKit2 browser and IPTV, in other words XTV. They transitioned to firmware 4.0 without a new chassis, and from past experience firmware 3.0 supported both the fat and the Slim except for CEC commands. So while my opinion is that major firmware version numbers loosely track major model-number changes because there is a long-range plan in place, one does not dictate the other. (I still think 4K resolution is coming during the 4.X versions, and it may be supportable on older models. Firmware 4.0 may signal OpenCL support and H.265 as well as full HTML5, WebGL, WebCL and AR; most of these changes would only apply to the XMB side, not the game side.)

The amount of change is debatable. My opinion is that it's a major change, and that's just that: a guess, based on the trends I've noticed and the published patents that could support such a major redesign.

So given it's not going to be just a dumb shrink of Cell (everyone agrees that isn't possible) or a switch between top-loading and slot-loading Blu-ray drives, what changes are coming?

Sony tried to hide the Slim FCC registration from us by registering the PS3 Slim 3000 model through a shell corporation with a different name. The press saw through that and leaked that a Slim was coming, and Jack Tretton mentioned Sony's displeasure that they couldn't keep a secret. Super_secret was also leaking information about firmware 3.0 and a new chassis.

This time they are not trying to hide that a new model is coming, but they are hiding pictures of the outside and inside as well as the owner's manual and theory of operation. Hiding that it's a top loader? Smaller? No, something else is coming with this new 4000 chassis.

"a new connection to an optional 16GB chip on the motherboard." Possible but I don't think they would totally redesign the PS3 chassis to just support a cheaper interface to a serial Flash SSD. They can do so with economy of scale and support a SSD drive that is just internally attached to the same SATA interface the hard disks attach. It also gives no advantages unless SATA is upgraded to a newer faster SATA interface. A newer faster SATA interface or USB 3.0 (maybe not fully) is a major major redesign and not possible without more than just replacing a SATA or USB support chip as the data thru-put is massively higher.

AMD's PCIe and southbridge IP would support this, but it would require faster memory access, as would Flex I/O. Cheaper DDR3 memory could be used with a 256-bit bus instead of 128-bit. Faster XDR (Rambus) memory was required because of motherboard trace lengths and the choice to go with 128-bit-wide memory; it uses a differential pair for each bus line to memory (costly). We now have TSVs and substrates between chip and motherboard that can easily and cheaply support SHORT traces to 256-bit-wide memory. That is a major game changer: the more expensive motherboard and XDR memory are no longer needed. Cheaper custom 256-bit-wide DDR3 can be used, and it's also fast enough to support USB 3.0 and faster SATA. The same TSV technology inside memory chips is making possible serially stacked flash with a logic layer on the bottom, and stacked custom high-density RAM.
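The bandwidth arithmetic behind "wider bus, cheaper memory" is simple. A small Python sketch (the 25.6 GB/s XDR baseline is the commonly published launch-PS3 figure; the DDR3-1600 widths are hypothetical illustrations, not a claim about any actual board):

# Peak bandwidth = bus width (in bytes) x effective transfer rate.
# The 25.6 GB/s XDR baseline is the commonly published launch-PS3 figure;
# the DDR3-1600 rows are hypothetical, just to show how width scales.
def peak_GBps(width_bits, transfers_per_sec):
    return width_bits / 8 * transfers_per_sec / 1e9

XDR_BASELINE_GBPS = 25.6  # published PS3 main-memory bandwidth

for width_bits in (128, 256):
    bw = peak_GBps(width_bits, 1.6e9)  # DDR3-1600: 1.6 GT/s effective
    print(f"DDR3-1600 on a {width_bits}-bit bus: {bw:.1f} GB/s "
          f"({bw / XDR_BASELINE_GBPS:.1f}x the XDR baseline)")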

We already know a PS3 Cell refresh to 32nm is going to require a redesign, as the XDR and Flex I/O interfaces on each side of the chip can't be scaled down. Multiple through-chip connections using TSVs will be needed rather than the current connections at each end of the die. A new memory interface to the L2 cache would be needed, and with TSVs a larger bus width for Cell is possible: use a large multi-layer substrate between the SOC (CPU, GPU and more on one SOI silicon die) and the motherboard, with custom 256-bit-wide memory attached to the substrate, and TSVs in the chip to connect to the substrate and carry the connections between the SOC and memory. Short trace lengths and 256-bit-wide memory would allow cheaper DDR3 to be used in a PS3 instead of XDR Rambus memory.

This is only one of a number of possible changes that TSVs make possible. My understanding is that the Cell design was the best design available at the time, given that TSVs and custom memory were not yet available. Connections internal to the silicon were wire bonds connected to pins, and the interface pins had to be accessible at the edge of the chip. This is no longer the case. That opened up new chip-design models, and now we have CPU, GPU and more on the same silicon. Because of this the PS3's Cell layout is dead, but not PPUs and SPUs.

IBM, as of Dec 2011, is making an all-in-one SOI wafer at 32nm (CPU-GPU-eDRAM) for WiiU, and what else? Both GloFo Fab 8 and IBM are making these chips.
 
I think the big firmware update comes with the new Slim PS3 release, as well as the big firmware update for the Vita.
Going by how they did it with the previous PS3 Slim, that would follow. It's now more than just the PS3: the network-connected Sony Blu-ray players use EXACTLY the same browser and XMB that the PS3 now uses. This time it's ecosystem changes and updates across almost all of Sony's networked platforms, Vita included of course.

Firmware 3.0 came with the Slim; what we thought was coming with 4.0 (major ecosystem and browser updates) is coming with, or nearly at the same time as, the Super Slim, in my opinion.
 

patsu

Member
Yo jeff, they are not using the same exact browser version. Close but not the same.

Nonetheless, I suspect they need to rev the firmware just because the new SKUs probably use different drivers for the parts.
 
What differences? I've not been checking in on browser developments as often recently. It will be based on the GTK WebKit2 1.9X APIs, right? Same exact UI on the PS3 and the 2011 networked Blu-ray players; it looks and works exactly the same, just with a different controller. Vita looks different because it's designed for a touch screen, but the WebKit core version is the same, using the GTK WebKit2 APIs.

Different drivers, yes, but not as many as one might think. GStreamer-OpenMAX on PS3 and Vita but plain GStreamer on the Blu-ray players; Cairo-EGL on all of them, backed by EGL (not CairoFB, the frame buffer backend, anymore); a *nix OS (eLinux or an embedded FreeBSD, with a new eLinux for everything in 2013); the HTTP library and other support libraries may differ, as I have not checked. I guess I should check the disclosures again to be absolutely sure of the above.
 
http://www.theregister.co.uk/2012/01/10/ibm_globalfoundries_fab_8_ramp/ said:
IBM's statement about Fab 8 said that the chips that are coming out of Fab 8 using 32 nanometer SOI tech were being produced last year at its own East Fishkill facility and sport eDRAM. IBM added that the chips would be "used by manufacturers in networking, gaming, and graphics," which probably means that GlobalFoundries is not etching Power7+ or System z mainframe engines.

The 18-core PowerPC A2 processors used in the BlueGene/Q massively parallel supercomputers is currently implemented in 45 nanometer processes like the Power7, and includes eDRAM on the chip for L2 cache much as the Power7 does for L3 cache. It could be that IBM's shrink to Power7+, which will be based on 32 nanometer processors, is being tested for dual-fabbing by Big Blue, or a PowerPC A2+ kicker is being tested. There could be another kicker to the Power and Cell derivatives that IBM has sold to Sony, Microsoft, and Nintendo for their game consoles, too. The chip, or chips, that GloFo is baking for IBM are a mystery. IBM did not return calls for comment on what chips are being made at Fab 8.
Repeat of previous quotes, but probably more accurate. Start of wafer production (10,000 wafers, approximately 5 million chips, at IBM) in Dec 2011; roughly 90 days until a wafer comes out of the oven. March 2012: packaging, testing and potential production of test machines. From that point, building inventory for sale this season, starting Oct-Nov 2012. "Chips being made by GloFo for IBM are a mystery," as are the chips made at IBM in Dec 2011. It's an unconfirmed rumor from SemiAccurate that they are for Microsoft's Xbox 720. The same chips now being produced by GloFo at 32nm as "test and Xbox 720 development" parts? No way. Again, it should be obvious that the initial 10,000-wafer volume at IBM, plus GloFo making the same chips for IBM, is not for testing or for developer platforms.

Two foundries making the same chips would also exceed the demand for just WiiU, right? If it's for PS3 then it's not going to be a dumb Cell shrink; so what design changes are coming? A new PPC core is a given, as the Xbox 360S got a new PPC and had to emulate the older one. The 2010 Sony 1PPU4SPU patent had text that read like past tense: "a new PPC designed from the ground up to support a fan-out of 4." All on one 32nm SOI die (CPU-GPU-eDRAM and more), or discrete CPU and GPU?

32nm this year and 28nm next year; 32nm for PS3, Xbox and WiiU this year, with 28nm for PS4 and Xbox 720 next year?
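To put the wafer numbers in perspective, a rough Python estimate (dies per wafer and the yields are my guesses, not disclosed figures; 500 dies/wafer is just what makes "10,000 wafers, approximately 5 million chips" work out):

# Rough units-from-wafers estimate. Dies per wafer and yield are my own
# guesses, not disclosed figures.
def usable_chips(wafers, dies_per_wafer, yield_rate):
    return int(wafers * dies_per_wafer * yield_rate)

DIES_PER_WAFER = 500  # guess, consistent with "10,000 wafers ~ 5 million chips"

for wafers in (5_000, 10_000):
    for y in (0.5, 0.9):
        chips = usable_chips(wafers, DIES_PER_WAFER, y)
        print(f"{wafers:6,} wafers at {y:.0%} yield -> ~{chips:,} usable chips")

Even at pessimistic yields, that is far more silicon than any developer-kit run needs.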

http://www.eetimes.com/electronics-news/4371752/GlobalFoundries-installs-gear-for-20nm-TSVs said:
SAN JOSE – GlobalFoundries is installing equipment to make through-silicon vias in its Fab 8 in New York. If all goes well, the company hopes to take production orders in the second half of 2013 for 3-D chip stacks using 20 and 28 nm process technology.

The systems should be in place and qualified by the end of July, with about half of them installed today, McCann said. The company aims to run its first 20 nm test wafers with TSVs in October and have data on packaged chips from its partners by the end of the year.

GlobalFoundries' schedule calls for having reliability data in hand early next year. The data will be used to update the company's process design kits so its customers can start their qualification tests in the first half of the year.

If all goes well, first commercial product runs of 20 and 28 nm wafers with TSVs can start in the second half of 2013 and ramp into full production in 2014, McCann said.
Which might explain Sony's statements that they will wait until the processes are in place: a 3-6 month delay, to March 2014, if Sony or Microsoft wants to take advantage of the new processes. 28nm and 2.5D with an interposer are available for 2013, but 3D with some critical parts at 20nm (like the AMD southbridge quoted at 22nm) will wait for 2014.
 
Because they have a new chassis? Like, a physical outer shell that is different from the previous physical outer shells?

Phat
CECHAxx · CECHBxx · CECHCxx · CECHExx · CECHGxx · CECHHxx · CECHJxx · CECHKxx · CECHLxx · CECHMxx · CECHPxx · CECHQxx
Slim
CECH-20xx · CECH-21xx · CECH-25xx · CECH-30xx · CECH-40xx

Motherboards
Phat
COK-00x: Cell/RSX 90nm/90nm, Watts/Max 200/380, released 8/2006; models CECHAxx, CECHBxx
SEM-00x: Cell/RSX 65nm/90nm, Watts/Max 160/280, released 11/2007; models CECHGxx
DIA-00x: Cell/RSX 65nm/90nm, Watts/Max 160/280; models CECHHxx
VER-00x: Cell/RSX 65nm/65nm, Watts/Max 140/280, released 8/2008; models CECHLxx, CECHMxx, CECHPxx, CECHQxx

Slim Starts 2009
DYN-00x: Cell/RSX 45nm/65nm, Watts/Max 100/250; models CECH-20xxA / CECH-20xxB
JTP-00x: Cell/RSX 45nm/40nm, Watts/Max 80/250; models CECH-25xxA / CECH-25xxB
JSD-00x: Cell/RSX 45nm/40nm, Watts/Max 80/250
KTE-00x: Cell/RSX 45nm/40nm, Watts/Max 80/210; models CECH-30xxA / CECH-30xxB

It's hard to tell what triggers the 2000 - 3000 - 4000 model change. Cell is overdue for a die-size reduction, but the Technoblog (Brasil) photo shows a power supply rated at 190 watts, which is too small a reduction for both Cell and RSX; it could be just an RSX reduction. Very disappointing if true, and it would mean no reason for IBM to be making a 32nm SOI chip for the PS3, since RSX is a bulk process. That would leave only a new WiiU and Xbox getting a major refresh, which I don't believe. Looking at the Technoblog pictures and past FCC pictures from Sony:

FCC pictures of the PS3 Slim are here (click on external photos): https://fjallfoss.fcc.gov/oetcf/eas...=N&application_id=386133&fcc_id='AK8CBEH18C1' Notice they are in focus with background colors, and the not-for-resale tag is printed where the model/label tag goes. None of this is in the Technoblog photos. Either Sony massively changed everything in their FCC submission, or the Technoblog pictures are fake.

Technoblog pictures: http://www.neogaf.com/forum/showpost.php?p=39836931&postcount=608
 
Really?

Xbox 360S doesn't use emulation.
The PPC, AMD GPU and southbridge in the Xbox 360S are not the original designs. The OS has to emulate the older hardware wherever there are differences in instruction sets and timing, particularly timing: it has to match the older hardware 100%, including timing, to be 100% compatible for GAMES. Most of the PPC ISA family calls are identical, so the emulation is minor. http://www.hotchips.org/wp-content/uploads/hc_archives/archive22/HC22.23.215-1-Jensen-XBox360.pdf

Just about everyone mentions how much effort it took to slow down the Xbox 360S to match older 360s. An identical design at the same clock speed at a smaller die size draws less power but would run exactly the same, with the same timings; only if there are internal design differences in opcodes, registers, etc. that are more efficient would the timing change. The 360S was not a dumb shrink; it was a major design change using new technologies that are cheaper and faster at the same time.
 

Rolf NB

Member
Phat
CECHAxx · CECHBxx · CECHCxx · CECHExx · CECHGxx · CECHHxx · CECHJxx · CECHKxx · CECHLxx · CECHMxx · CECHPxx · CECHQxx
Slim
CECH-20xx · CECH-21xx · CECH-25xx · CECH-30xx · CECH-40xx

<...>

It's hard to tell what triggers the 2000 - 3000 - 4000 model change.
Theory time: shell-number bumps refer to shell changes, but not all changes are immediately obvious. Say you recess the mounting holes for the USB ports, or change the placement of screw holes and other mounting facilities. Maybe you change aspects of the casing that aren't even visible from the outside but still lie squarely in the plastic-case-production domain. I.e. you don't change your shell number just because you mount a different fan in the machine, but you do bump it when you change the mounting spots and intake paths (which in turn is what allows mounting a different fan).

Cell is overdue for a die-size reduction, but the Technoblog (Brasil) photo shows a power supply rated at 190 watts, which is too small a reduction for both Cell and RSX; it could be just an RSX reduction.
People continue to misunderstand the line-power labels on PSUs.

K-model slims consume 70W at full tilt, measured at the wall. It is impossible for them to draw more and still work normally.

The label on the line connector indicates the maximum current draw of a device before its internal fuse triggers. This value can only be reached when the device is already malfunctioning. Like, you drive a bunch of nails through it with a hammer and short a few internal circuits. It'll draw your 190W then. For a second or maybe two. Then it will never draw any power again.

The audience for this information is not the consumption-conscious consumer. It's information for whoever is speccing out wiring and circuit breakers in your building. So they can make safety statements with confidence.
"This wire cannot melt itself even during a fire where all connected equipment starts frying up itself. I know that because the line load can never exceed 25A, which the wires can handle, and then comes the main breaker anyway".

"This device has such a high line load that it will trigger the main breaker if/when it malfunctions, bringing the whole west hall down with it. We better connect it to a differect circuit."
 

DenogginizerOS

BenjaminBirdie's Thomas Jefferson
For XMB, has it always been "Remove Disk" to eject a disk? I thought it said "Eject Disk" before. More indication of a lidded disk compartment?
 

CorrisD

badchoiceboobies
For XMB, has it always been "Remove Disk" to eject a disk? I thought it said "Eject Disk" before. More indication of a lidded disk compartment?

You're about a month late on that; it was one of the first clues that led us to believe it was some sort of top-loading system.
 

THE:MILKMAN

Member
Another point about the Cell and RSX chips being shrunk........

If they were both die-shrunk to 32nm and both saw a 50% power-consumption reduction, that in and of itself wouldn't give a massive difference IMO.

We know the 45nm Cell is at ~19W and I guess the RSX is at ~25W?

A 50% Power consumption reduction (complete guess!) would yield a ~20-22W total reduction. Would that be a fair assessment? Or am I way out on this?

If I'm anywhere near right with the above, I don't think it would be worth the effort/time/cost for Sony to do it. Especially with Sony's financial problems.
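Spelling my guess out as numbers, a tiny Python sketch (every input is the rough estimate above, nothing official):

# The rough estimate above, spelled out. Every input is a guess, not an
# official figure.
cell_W = 19.0   # ~45nm Cell estimate
rsx_W  = 25.0   # ~RSX estimate (guess)
assumed_reduction = 0.5  # assumed 50% drop from a shrink

total_now = cell_W + rsx_W
saved = total_now * assumed_reduction
print(f"Cell + RSX today: ~{total_now:.0f} W; saved after a shrink: ~{saved:.0f} W")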
 

coldfoot

Banned

Shrinking chips isn't solely intended for reducing power consumption. Smaller chips cost less since you can make more per wafer, assuming same size/cost wafers and same yields.
A $10 reduction in Cell+RSX costs would be $150M in savings on the BOM alone for the 15M PS3s Sony expects to sell in a year, before taking into account savings from the cooling system and power supply.
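The arithmetic, as a tiny Python sketch in case anyone wants to play with the inputs (both figures are the assumptions above, not confirmed numbers):

# BOM saving = per-unit cost reduction x expected annual volume.
# Both inputs are the assumptions stated above, not confirmed figures.
saving_per_unit_usd = 10
annual_units = 15_000_000

total_usd = saving_per_unit_usd * annual_units
print(f"${saving_per_unit_usd}/unit x {annual_units:,} units "
      f"= ${total_usd / 1e6:.0f}M per year in BOM alone")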
 

Rolf NB

Member
I don't think power consumption drops like that anymore with node shrinks. A bit less, yes, but half is way optimistic.
 

Rolf NB

Member
Weird shit going on on Amazon.de. 160GB PS3slim (K) was 199€ for the past few weeks, now it jumped to 240,70€. The 320GB version similarly went from 209€ to 239,57€. Weird prices, conspicuous jumps. Stock drying up? Discuss.
 

TheD

The Detective
Just because they integrated the FSB into the same die, it doesn't mean that they emulate PPC opcodes.


Yes, the 360S just has some transistors on the CPU/GPU die to add some delay to the FSB so that games won't break due to timings.
It was NOT a major change.

A new PS3 will NOT drop the Cell, RSX or XDR!
Any major changes like that will break all games!

Cell can be shrunk and Nvidia will never, ever give AMD a license to the low level RSX commands!
 

THE:MILKMAN

Member
Shrinking chips isn't solely intended for reducing power consumption. Smaller chips cost less since you can make more per wafer, assuming same size/cost wafers and same yields.
A $10 reduction in Cell+RSX costs would be $150M in savings on the BOM alone for the 15M PS3s Sony expects to sell in a year, before taking into account savings from the cooling system and power supply.

Oh I know! What we really need to know is how much would it cost Sony to redesign and shrink Cell+RSX vs how much it would save them on BOM?

I don't think power consumption drops like that anymore with node shrinks. A bit less, yes, but half is way optimistic.

Yes, so they would not be able to make additional savings on a smaller/cheaper cooling system for example?
 

Rolf NB

Member
Oh I know! What we really need to know is how much would it cost Sony to redesign and shrink Cell+RSX vs how much it would save them on BOM?
It's always worth it. If you ship ten million units of anything, every dollar saved on per-unit production is 10 million dollars you can invest in process optimization while still breaking even.

Yes, so they would not be able to make additional savings on a smaller/cheaper cooling system for example?
Maybe. 20%+ would be a solid, worthwhile step. I just don't know either way if the next shrink can immediately deliver that, s'all.
 
Weird shit going on on Amazon.de. 160GB PS3slim (K) was 199€ for the past few weeks, now it jumped to 240,70€. The 320GB version similarly went from 209€ to 239,57€. Weird prices, conspicuous jumps. Stock drying up? Discuss.
Welcome to Currency War, Part 2: Massive Euro Devaluation. Some of the economists here in the US are expecting both 100% inflation and a depression by the end of 2012 IN THE US.

I guess the above is a possible reason for price increases.
 

drkohler

Banned
Oh I know! What we really need to know is how much would it cost Sony to redesign and shrink Cell+RSX vs how much it would save them on BOM?



Yes, so they would not be able to make additional savings on a smaller/cheaper cooling system for example?
Nobody is going to answer that question for you. However, we can make a few things clearer concerning some of jeff_rigby's pipe-dreams:
a) Die shrink of the RSX chip. This is, simply put, a stupid idea. No engineer will even think about redesigning a 40nm chip into a 32nm chip. Such a redesign would cost several million dollars, with almost no gain in the number of chips per wafer.
b) Die shrink of the Cell chip. A shrink from 45nm to 32nm would theoretically make sense, as it gives you about 30% more chips per wafer. However, shrinking the Cell was already hell last time, and I doubt IBM would do it without getting major cash. Unfortunately, the real world works as in c).
c) As it happens, 45nm lines have just become the "bread and butter" lines - translation: the best price/yield rates. It was just announced that PC southbridges are now moving from 90/110nm lines to 45nm lines. The reason is that 45nm line capacity has become available as other products migrate to 28nm/32nm lines - and 45nm lines are "fully understood" by the engineers.

What would happen if Sony went from 45nm to 32nm (28nm lines are out of the question, as that capacity is fully booked at TSMC)?
Cell production has been running on 45nm lines for several years now, and the engineers have fully optimized the machines. Out of 320 Cell dies per wafer, probably over 90% of them work (also thanks to the high redundancy built into the design; the same goes for the RSX chips, as NVidia is known to build high redundancy into its GPU designs). So as a rough estimate, a Cell chip costs around $13 to manufacture (cost of $3500 to run a wafer through the line: $3500/320/0.9). Now move to a 32nm line. Apart from requiring better machines and wafers, you also enter the learning curve for the new process. In the first year you probably achieve 50% yields. So while you have about 30% more dies on a wafer, you end up with a Cell chip costing around $20 to manufacture (cost of $4000 to run a wafer: $4000/400/0.5). End result: Sony pays a lot more for a Cell chip (in the first year) than they are doing right now. Given the PS3 is probably past its "half-life" in sales numbers, the savings would probably be close to nothing in the end.
Conclusion: It does not make sense for Sony to invest time, money and engineers into shrinking the main PS3 components, particularly as they have a new console in the making that requires all the resources and time they have.
And lastly, savings on cooling designs etc. would be nonexistent. Die-shrunk chips would still be in the same chip carriers, so redesigning the cooling for the few watts saved would be a waste of money.
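The cost model used above, as a small Python sketch you can plug your own numbers into (all inputs are the rough estimates from the post, not foundry quotes):

# Per-die cost = wafer processing cost / (dies per wafer x yield).
# All inputs are the rough estimates from the post above, not foundry quotes.
def cost_per_good_die(wafer_cost_usd, dies_per_wafer, yield_rate):
    return wafer_cost_usd / (dies_per_wafer * yield_rate)

mature_45nm = cost_per_good_die(3500, 320, 0.90)  # mature 45nm line
year1_32nm  = cost_per_good_die(4000, 400, 0.50)  # first-year 32nm line

print(f"45nm, mature line : ~${mature_45nm:.0f} per good die")
print(f"32nm, first year  : ~${year1_32nm:.0f} per good die")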
 
From the size of the base, I'd say the motherboard should be something like this size*:
[image: rough mock-up of the estimated motherboard size]

*ignore the orientation.

Not a huge shrink; it's mainly moving the HDD somewhere else, possibly at the top end just underneath the new space-saving Blu-ray drive. Conveniently, that's the only end we don't have a photo of.
 
Just some polishing to your argument.

1) 45nm is pushing technology on older foundry equipment, while 32nm is the largest and easiest half node on NEW equipment meant to produce 32nm-28nm-20nm wafers. As such, yields should be good.
2) Reducing RSX from 40nm to 32nm as a separate chip would not be worth doing; good observation. But combined with Cell on the same silicon, just as with the 360S SOC at 45nm, it might be practical.

So much for guessing about practicality. What about the 32nm SOI chips being made by both IBM and GloFo, with 5,000 wafers started in Dec 2011 at IBM alone and a ramp-up to full production in the second half of 2012; just WiiU? Hints point to multiple chip designs at 32nm that contain CPU-GPU-eDRAM and maybe more. No one is saying what's being produced, and I don't understand the secrecy if it's just WiiU, as we all know that has to be in the pipeline at either 32nm or 28nm, more likely 32 as that will be easier, right?

Both the Xbox 360 and PS3 are due for a refresh, but Xbox is in better shape, as the Xbox 360S refresh went not only to one silicon die containing CPU and GPU but to one SOC (chip plus substrate) containing eDRAM and a custom southbridge. This can be pushed even further with 32nm for the CPU-GPU, and now the eDRAM can be on the same SOI die and the SOC substrate can carry more components that used to be on the motherboard, like memory. A wider memory bus and cheaper memory can be used.

IBM should be able to do for Cell and RSX what was done, or is to be done, with the Xbox 360, right? Past PS3 shrinks have been dumb shrinks, but the Xbox 360S was a total redesign. A PS3 Cell refresh is way overdue because it couldn't be dumb-shrunk, and they haven't redesigned Cell for some reason; that's what I find interesting. Are they waiting for something, some technology to make it practical, or some second use for the design that makes it practical?

By the way, IBM built a new PPC processor and GPU for the Xbox 360S silicon; they didn't use an off-the-shelf design. They emulated the original Xbox 360's CPU and GPU with HARDWARE and minor software emulation. This is now the state of the art, and it's also possible to hardware-emulate an RSX given published calls and timings.
 

drkohler

Banned
Just some polishing to your argument.

1) 45nm is pushing technology on older foundry equipment while 32nm is the largest and easiest half node size on NEW equipment meant to produce 32nm-28nm-20nm die size wafers. As such yields should be good. .
Yields are never good when you start manufacturing a new die layout. Don't assume that if company X can manufacture die A with good yields it can also manufacture die B with good yields; the learning curve will bite you with every new design. Fact is that AMD recently went out of its contracts with GF (at a great cost to AMD, despite selling its shares of GF) because (supposedly) GF's 32nm lines are still hampered by problems (so the rumours go). So in essence 32nm technology still has problems achieving good yields. Nobody will tell you what MS had to fork over in cash to get the CPU+GPU onto a single die, which implies revealing the design rules of one foundry to another, as those two dies were made by different foundries. I have not seen die shots of the slim combo but I highly doubt it is a complete redesign, as only one design rule had to be broken in the process. MS certainly paid _a lot_ more for the first run of the new die than they would have paid for separate chips.

As for what IBM is manufacturing right now, and for whom, that is anyone's guess. Given that Sony (according to the majority of rumours) went the AMD road, it is not related to PS3 or PS4. Again, pouring major money into a PS3 redesign would be a waste of time and money for Sony. As for the 5,000 wafers from Dec 2011, that is not a big number at all. Remember the learning curve: assuming dies over 300 mm^2 and initial yields around or below 10%, you are not looking at very many usable chips to burn through in testing equipment.

Also, please be more precise about what you mean when you mention SoCs:

"Both Xbox 360 and PS3 are due for a refresh" (why?) "but Xbox is in better shape as Xbox 360S was a refresh to not only one silicon chip containing CPU and GPU but to one SOC (Chip and substrate) containing eDRAM and custom southbridge."

The slim Xboxes contain a single CHIP CARRIER with a separate CPU/GPU die and a separate eDRAM die (which I think isn't even a die shrink of the original eDRAM die). On that note, I find it very unlikely that someone would ever incorporate such massive amounts of eDRAM (rumoured to be >32MB in the next consoles) into a single CPU/GPU die.
 
Why is a refresh overdue? Historically a refresh occurs every two years.

You missed or misunderstood a few things: please carefully read this PDF on the Xbox 360S SOC. It is a single chip containing both CPU and GPU, with the eDRAM on the SOC, not on a chip carrier. Definitions can be overlapped here, as it's not a full SOC, but it does contain Southbridge and transposer connected eDRAM as well as 8 domain power connection paths.

As to eDRAM: in IBM's Power7+ PPC on 32nm SOI, it looks from articles like it can have as much as 80 MB of eDRAM. The SOI process makes it easier to embed eDRAM on the silicon.

Last, read the IBM 360S PDF carefully, as I think it's saying that IBM built a GPU from the ground up to emulate the original AMD GPU, timing and all, on SOI (GPU and CPU on the same SOI wafer; the AMD GPUs were bulk process, not SOI, which changes the transistor design and would require too many low-level changes to reuse the old AMD GPU design). The same can be done for RSX, which is also bulk process, not SOI.

http://www.samsung.com/us/business/oem-solutions/memory-logic/foundry/foundry-32nm.html said:
Samsung's 32nm LP HKMG Process Node has been a proven volume production process since mid-2010. This was the industry's first 32nm LP HKMG logic process to be qualified, and was designed for a remarkably simple migration path to the 28nm node.

The 32nm process is a gate-first High-K Metal Gate process. Jointly developed with IBM, the gate-first approach enables it to deliver twice the logic density of 45nm processes while maintaining low power, making it ideal for mobile applications. HKMG, the High-K Metal Gate process, was first developed by Samsung, IBM, and other partners in 2007 as a way to improve performance and reduce transistor gate leakage at reduced geometry process nodes. Conventional Poly/SiON material is replaced with High-k material to continue gate dielectric scaling (Tox/Tinv).

In Samsung's Gate-First HKMG process, the transistor's gate stack is fabricated first, followed by source and drain. This more cost-effective approach facilitates superior area scaling and preservation of layout rules without complex design rules.
Samsung's 32nm Gate-First HKMG Process Features
• 2x gate density increase compared to 45nm
• Over 100x lower gate leakage
• Greater than 40% delay improvement at fixed leakage
• 10x leakage reduction at fixed speed


http://www.eetimes.com/electronics-blogs/semi-conscious/4304059/Globalfoundries-claims-250-000-HKMG-wafers-shipped said:
Wednesday (March 21), when Globalfoundries issued an announcement stating that has shipped a quarter million wafers based on its 32-nm high-k metal gate (HKMG) technology. Globalfoundries can't claim a lot of technical advantages over TSMC, but as the announcement states the 250,000 shipped HKMG wafers represents a significant lead over other foundries in HKMG manufacturing.

The statement also includes a quote from AMD CEO Rory Read stating that the HKMG milestone is a "testament" to the progress that the two companies have made together. "In just one quarter, we were able to see more than a doubling of yields on 32-nm, allowing us to exit 2011 having exceeded our 32-nm product shipment requirements," Read said. "Based on this successful ramp of 32-nm HKMG, we are committed to moving ahead on 28-nm with Globalfoundries."

In the same statement, Globalfoundries CEO Ajit Manocha referenced problems with early yield learning on 32-nm HKMG, but said several organization and operational changes (presumably including his own appointment as CEO) led to dramatic improvement in production velocity and yield on 32-nm HKMG. "And since our 28-nm technology uses the same HKMG implementation as 32-nm, AMD and other customers will benefit greatly from our high-volume ramp of leading-edge APUs at 32-nm."


http://images.kontera.com/IMAGE_DIR/12197/43293/BB_enterpriseHD_gen_UI_041912.flv said:
SOI technology is well known to the computer enthusiasts all over the world, as this is the technology AMD used to manufacture its CPUs since the 2003 K8 architecture. The initiator of SOI was in fact IBM, and they've had a close collaboration with AMD on SOI.

The difference between the SOI 28nm wafers and the bulk 28 nm manufacturing process is the fact that SOI offers less leakage current, less power consumption and, consequently, less heat dissipation. It may be a little more expensive, but when you want your CPUs to work at 4 GHz instead of 2 GHz, you'll probably look at anything but the bulk 28 nm manufacturing process.

A clear proof of what SOI can do for chip manufacturing is the fact that, because of the higher quality of the 28 nm manufacturing process at GlobalFoundries (GF) that uses SOI, AMD and Qualcomm gave up on TSMC and went to GF for manufacturing their new designs in 28 nm.

The current SOI process used by AMD is actually called PD SOI. That's short for Partially Depleted SOI. The difference is the fact that FD-SOI has an ultra-thin Buried Oxide over the base silicon, while PD-SOI actually is thicker, having a "Body" over the Buried Oxide.

It doesn't really matter how it looks, but Soitec's compatriot, STMicro, says that from their own study of the FD-SOI technology, the advantage FD-SOI has over their own 28 nm bulk process is 61 percent higher at 1V (volt) and gets even more interesting at lower VDD (Voltage Drain Drain), showing a 550 percent improvement at 0.6V.
 
Repeat of news with a new slant that I feel is more accurate:

New reports and statements from Microsoft are strong evidence that the Xbox Lite is already in production.

The next Xbox however, possibly a brand new, streamlined version of the Xbox 360, may very well be at said show. According to new rumors, Microsoft's next piece of video game playing hardware is already in production at a Texas manufacturer.

IGN reported on Thursday that, according to its source, Flextronics in Austin, Texas is already putting together the next generation of Xbox consoles.
Flextronics does have a long history with Microsoft. The company manufactured the original Xbox when that machine debuted back in 2001 and it still assembles Xbox 360 hardware today. The report claims that Flextronics actually formed a new testing team that operated away from the main body of the company. This team's only job was to perform hardware, software, and even marketing tests on the next Xbox. Now they've moved on to actually building the consoles.

With the Xbox 360 selling well and Microsoft representatives insisting in no small terms that there won't be an Xbox 720 at E3 2012, it's doubtful that these Flextronics-made boxes are the Durango machines that people will eventually play games on. If they are next-generation Xboxes, then they are likely development consoles made for studios.

Microsoft's statement on the rumor, however, indicated that these units may very well be the rumored Xbox Lite version of the Xbox 360. Its statement to IGN reads: "Xbox 360 has found new ways to extend its lifecycle like introducing the world to controller-free experiences with Kinect and re-inventing the console with a new dashboard and new entertainment content partnerships. We are always thinking about what is next for our platform and how to continue to defy the lifecycle convention. Beyond that we do not comment on rumors or speculation."

The chips being made at GloFo are identical to the chips made at IBM in December 2011 (5,000 SOI wafers). If they're for Xbox 720 development platforms, that's too many: 5,000 wafers at IBM and more on line at GloFo at 32nm. WiiU? Nope; there are now two unconfirmed sources, 1) chips being made for Microsoft and 2) consoles being assembled for Microsoft (the timing is right for this: 90 days to come out of the oven, 30 days to test, and then into production).

An Xbox Lite with a 16 GB flash drive built in, perhaps? HDMI pass-through, an Xbox 361?

Both the Xbox 360 and the PS3 are getting a refresh at the same time, for the first time; is there a connection?
 

drkohler

Banned
It is a single chip containing both CPU and GPU with eDRAM on the SOC not chip carrier....definitions can be overlapped here as it's not a full SOC but it does contain Southbridge and transposer connected eDRAM as well as 8 domain power connection paths.
Again I must ask you to stay precise and not muddle the issue with expressions like "transposer" and "8 domain power connection paths", which admittedly sound totally cool in writing but only confuse the casual readers here because they don't understand them.
Again, the words chip/SoC are used by you in a very loose way. In the old days, a chip was synonymous with chip carrier or die, because each of those black, macroscopic bugs served a single purpose. The core of the Xbox slim is made of a _single chip carrier_, not a single chip. The chip carrier contains two "chips": one CPU/GPU die and one eDRAM die (not counting the usual barrage of RLC components). The word SoC ("System-on-a-Chip") is used very imprecisely nowadays because the distinction between "chip", "chip carrier" and "die" is often lost in the argument.
It is cool that you like to quote so many grand GF press releases, but remember the single most damaging fact: AMD essentially dropped its connection with GF (at great cost) by canceling its contracts. This is the best indication we have that AMD is not satisfied with GF yields. Again, 5,000 wafers in a start-up run is not impressive and no conclusions can be drawn from that number. Even 250,000 processed wafers (as in that GF press blurb you cite) is not impressive and doesn't tell you whether there is a profitable foundry operation or not.
 

Rolf NB

Member
Back in the glory days, when people only used words they understood, we sometimes used the term "multi-chip module" (MCM), too.
 

Yes, the terms substrate, SOC and multi-chip module get used where they shouldn't, but I am QUOTING the PDF:

Microsoft launches CPU-GPU combined Xbox 360 SoC

GPU bulk to SOI major redesign

Software giant Microsoft has announced the release for the new SoC (System-on-chip) processor of the new Xbox 360 250GB Slim Kinect-ready model.

The new SoC will be enhanced to display better speed and improved heat and power management.

The processor has been manufactured with the IBM/GlobalFoundries 45nm process and is the first production desktop-class processor which will combine CPU, GPU, memory, and I/O logic onto a single silicon and can be a tough competitor for AMD's Fusion and Intel's Sandy Bridge processors.

Officials at Microsoft announced that the combination has been made to allow better power efficiency. The processors are expected to be cheap as manufacturing will be inexpensive due to fewer chips. The heat management has also been enhanced and the sizes of the motherboard and the power supply unit have been drastically reduced.

The SoC has taken five years to be developed and contains about 372 million transistors which are very less in comparison to modern GPUs and processors. The processor has a tri-core CPU, an ATI GPU, a dual-channel memory controller, I/O, and a new front side bus (FSB) instead of an easier internal connection.

The New Xbox 360 CPU-GPU SoC

Argue with Microsoft or IBM for using the term incorrectly. I'm more interested in what they did and how they did it.

"And since our 28-nm technology uses the same HKMG implementation as 32-nm, AMD and other customers will benefit greatly from our high-volume ramp of leading-edge APUs at 32-nm." Qualcomm and ST have recently transferred their 28nm business from TSMC to GloFlo, which seems to indicate along with the AMD quote above that problems have been solved.

Everything you post is accurate, and I have cited it in this and other threads, but it's old news. Also confusing was AMD stating that they are/were going to bulk process for their APUs and GPUs because 1) it's cheaper and 2) TSMC does not support SOI. "AMD essentially dropped its connection with GF (at great costs) by canceling its contracts." Again, I read that too, and recently a retraction: the money was for R&D and just happened to come at the same time AMD went to TSMC to make up for a shortfall in GloFo production. TSMC dropped 32nm and went straight to 28nm because they were having problems also.

A clear proof of what SOI can do for chip manufacturing is the fact that, because of the higher quality of the 28 nm manufacturing process at GlobalFoundries (GF) that uses SOI, AMD and Qualcomm gave up on TSMC and went to GF for manufacturing their new designs in 28 nm.
 

omonimo

Banned
Wow, if the 360 Super Slim is true, I strongly suspect this generation will last longer than some wish... by the way, a 360 Super Slim could definitely ruin Sony's plan to expand a bit more in the US market. I can't see a problem in the rest of the world, but in Microsoft territories the PS3 Super Slim could have a lot of problems against a new, super-economical 360.
 
this is probably happening to compete with the wii u.
My guess too, but it goes further than that. Both started on these refreshes before 9/2010.

Microsoft provably has an agenda (XTV), revealed by the PowerPoint leak, and I believe Sony always planned the same thing for this season. Look at what Xbox has had to upgrade to support XTV (video out upgraded to 1080p, the Skype purchase, a browser) and what Sony has had since day one (browser, video chat, 1080p). This (XTV) starts this year and expands with the next generation.

From another NeoGAF thread:

Meisadragon said:
Logically when you think about it, the only reason why they would launch in 2013 is to not give too much of a lead to the Wii U, but they will probably counter that with a 360 price cut and launch its successor in 2014. Same goes for Sony. I would love a 2013 release though.

And Pachter, well, he talks that way in his show on GT. It's normal :p
In response to this quote from Pachter:

http://www.computerandvideogames.com/354648/xbox-720-spring-2014-launch-makes-more-sense-pachter/ said:
According to Pachter, it would "make more sense" for Microsoft to target a Spring 2014 release and avoid the late 2013 buying season completely. "If I were a betting man (and I am), I would say a spring 2014 launch makes more sense, since hard core Xbots could get a console without having to compete with moms buying gifts at holiday, and it is likely that they won't manufacture more than a few million units for launch," he told X360 Magazine.

Microsoft has made a big deal of Xbox 360's entertainment apps in recent months, and Pachter more boldly went on to say that "it's pretty clear" MS will take this much further with the next console."It's pretty clear to me that Microsoft intends to allow the Xbox 720 to function as a cable TV box, allowing cable television service providers to broadcast over the Internet through the box, with SmartGlass as the remote controller, and with the Xbox 720 using Windows 8 to split the TV signal into multiple feeds, allowing consumers to divert different channel feeds to different displays within the home," he said.
Edit: I read about some technology I can't remember now, only available in June 2013, that I thought was a must-have, and moved my launch guess back to March 2014.
 

coldfoot

Banned
Wow, if the 360 Super Slim is true, I strongly suspect this generation will last longer than some wish... by the way, a 360 Super Slim could definitely ruin Sony's plan to expand a bit more in the US market. I can't see a problem in the rest of the world, but in Microsoft territories the PS3 Super Slim could have a lot of problems against a new, super-economical 360.
The 360 is never super-economical, thanks to Xbox Live.
 
http://ps3ultraslim.com/news/?p=171 said:
What is known thus far about the new 'super slim' PlayStation 3 model CECH-4000 series in terms of technical specifications? The following:
Available in 3 capacities:
- A: 16 GB (flash memory/solid state storage)
- B: 250 GB (hard disk drive)
- C: 500 GB (HDD)
Maximum power consumption: 190 W
Power Supply Unit: internal, i.e. no 'brick' (external PSU)
Optical Disc Drive: top loader type

One unconfirmed source states the following dimension:
The new model's dimensions are 290 x 60 x 230 mm, compared to 290 x 65 x 290 mm for PS3 Slim, which means it's become a little slimmer and a lot less deep. Not sure what source this statement is based on. The proportions could be inferred from the size comparisons but this assertion by the same source (NeoGAF contributor Road) cannot:

Weight: 2.1 kg compared to 3.2 kg for PS3 Slim

What we do know is that Sony Computer Entertainment have scheduled their GamesCom 2012 press conference to take place on Tuesday August 14th, at the Staatenhaus am Rheinpark, Köln (Cologne), Germany. While SCEE hasn't said yet what they're going to talk about, you can bet that the CECH-4000 series model is going to be one of the main topics.
A 30% drop in weight is significant, much more than the 10% drop in max power (210 W down to 190 W). It's a major refresh. Note, though, that the 3000 slim is 2.6 kg, not 3.2 kg.
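Working out the percentage drops in a quick Python sketch (the 3.2 kg figure is the one quoted above, the 2.6 kg figure is the actual 3000-series weight):

# Percentage drops implied by the figures above. The 3.2 kg figure in the
# quote looks too high; the current 3000-series slim is about 2.6 kg.
def pct_drop(old, new):
    return (old - new) / old * 100

print(f"Max power: 210 W -> 190 W        = {pct_drop(210, 190):.0f}% drop")
print(f"Weight (quoted 3.2 kg -> 2.1 kg) = {pct_drop(3.2, 2.1):.0f}% drop")
print(f"Weight (actual 2.6 kg -> 2.1 kg) = {pct_drop(2.6, 2.1):.0f}% drop")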
 