Hi everyone,
I sent a friend of mine an e-mail with some thoughts about PS3, Xbox 360 and even PS2 architectures. Just so it doesn't get lost, here it is. Enjoy!
Cheers,
Paul
This might be easier to digest. ^_^
Here's the PS3 final dev kit - the prototype dev kits were much bigger.
http://ps3.ign.com/articles/726/726255p1.html
That's a much more manageable size.
The PS3 has 256 MB of Rambus XDR main memory and 256 MB of GDDR3 video memory for the GPU. However, the GPU can also reach into main memory - apparently up to about 440 MB total - though I'm guessing there's a speed penalty for doing so.
By contrast, the Xbox 360 has a unified memory architecture: 512 MB of GDDR3 RAM shared between the CPU and GPU.
The PS3 has a hyperthreaded PPU (a PowerPC core) and seven SPUs, one of which is reserved. That gives you 2 PPU threads plus 6 usable SPUs - call it 2-8 threads at any given time, depending on what you're doing.
The Xbox 360 has 3 hyperthreaded PowerPC cores. That's 6 full hardware threads, and the clock speeds are actually the same - both CPUs run at 3.2 GHz. To my way of thinking, these architectures aren't that radically different in terms of performance. Sure, pure vector math benchmarks may favor the PS3, but real world?
meh.
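To make those thread counts concrete, here's a minimal C sketch - plain pthreads, not either console's real SDK, and the PLATFORM_* defines are just illustrative stand-ins - of sizing a worker pool to the hardware:

/* Hypothetical worker-pool sizing: plain C + pthreads, NOT actual
 * PS3/Xbox 360 SDK code. Thread counts follow the figures above:
 * 2 PPU hardware threads vs. 6 threads on the 360's three cores. */
#include <pthread.h>
#include <stdio.h>

#if defined(PLATFORM_PS3)
#define HW_THREADS 2   /* hyperthreaded PPU; SPUs are dispatched separately */
#elif defined(PLATFORM_XBOX360)
#define HW_THREADS 6   /* three hyperthreaded PowerPC cores */
#else
#define HW_THREADS 4   /* generic PC fallback */
#endif

static void *worker(void *arg)
{
    printf("worker %ld running\n", (long)arg);
    return NULL;
}

int main(void)
{
    pthread_t pool[HW_THREADS];
    for (long i = 0; i < HW_THREADS; i++)
        pthread_create(&pool[i], NULL, worker, (void *)i);
    for (long i = 0; i < HW_THREADS; i++)
        pthread_join(pool[i], NULL);
    return 0;
}

The point being: on the 360 you just fill the pool, while on the PS3 those two PPU threads are only the front door - the real horsepower sits in the SPUs.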
I don't really see any advantages one way or the other here. In fact, the Sony architecture might annoy standard PC developers, who haven't worked with the PS2 very much. ^_^
I mean - develop an application for the PC - heck, let's make it dual core. Now port it to Xbox 360. Optimize for 6 threads. Runs well. Now port to PS3. Initially you're probably running everything on the PPU - two threads - and performance sucks. (Some companies release it at this point. ^_^) Smart developers start pawning pure math functions off to individual SPUs. Those are incredibly fast, but there's only so much pure math you can do on them. Optimize enough and performance basically equals the Xbox 360's. Hair is pulled out constantly.
Alternatively - do PS3 dev first. Runs great. All the math functions are designed to run independently on SPUs - advanced physics included. Port to Xbox 360. Performance sucks. Now start consolidating SPU functions into independent threads and take advantage of the triple core. Performance basically equals the PS3's, with possibly some simplified math approximations - which you don't notice on an HDTV anyway. Either way, the trick is packaging the math as self-contained jobs, like the sketch below.
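Here's what I mean by a self-contained job, as a hedged C sketch - the struct and function names are mine, not from either SDK. Everything the job touches lives in its own flat buffers, which is exactly what lets it fit in an SPU's 256K local store (DMA'd in and out) or run as-is on a spare 360 hardware thread:

/* A self-contained "job": all inputs and outputs live in the job's own
 * buffers, no pointers into the rest of the game. That's what makes it
 * equally at home in an SPU's local store or on a 360 worker thread.
 * Names are illustrative, not from any SDK. */
#include <stddef.h>

typedef struct {
    float *positions;        /* in/out: packed xyz triples */
    const float *velocities; /* in: packed xyz triples */
    size_t count;            /* number of particles */
    float dt;                /* timestep */
} integrate_job_t;

/* Pure math, no side effects beyond the job's own buffers. */
void integrate_job_run(integrate_job_t *job)
{
    for (size_t i = 0; i < job->count * 3; i++)
        job->positions[i] += job->velocities[i] * job->dt;
}

On the PS3 you'd DMA the buffers into an SPU and run this there; on the 360 you'd hand the same function to a thread pool. The code doesn't change - the dispatch does.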
This is what I'm seeing in the real world based on the available games.
For example, I've been playing games developed on the PC/Xbox first. Best example of this is The Orange Box. Most of the time, it's fine. Sometimes, though, it slows down terribly. Valve didn't want to do the PS3 port at all - they handed it off to EA UK. Now, patches are on their way.
On the other hand, I have the Burnout Paradise demo. It was originally developed on the PS3 and ported to Xbox 360. It looks stunning on the PS3, but word on the street is that the Xbox 360 version looks great too.
I don't think you're gonna see any big differences between the two until developers abandon the DX9/DX10 way of doing things and start thinking outside the box. You'll start to see radical games like that from established PS2 houses like Insomniac and Naughty Dog, since they don't carry any DX9/DX10 baggage and already understand the vector unit concept.
There are several groovy things you can do with the PS3 architecture. Remember that, unlike the PS2, which had no GPU, the PS3 has the RSX - roughly an nVidia 7800 equivalent - as its GPU. You can feed the RSX's pixel and vertex shader pipelines from the SPUs directly. Now we're starting to talk: write physics and AI code on the SPUs, then send the results straight to the RSX's shaders, with no main CPU (PPU) overhead. Or use full-motion HD video on surfaces, courtesy of the SPUs - apparently one or two SPUs can decode an h.264 stream at HD resolution.
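What does "feed the RSX from an SPU" look like? A hedged sketch - every name here is a made-up placeholder, since the real thing goes through the PS3 SDK's DMA and command-buffer plumbing:

/* Hedged sketch of the SPU -> RSX idea: an SPU job computes finished
 * vertex data and writes it to a buffer the GPU will fetch from, so
 * the PPU never touches it. All names are hypothetical placeholders;
 * real code would use the PS3 SDK's DMA/command-buffer APIs. */
#include <stddef.h>

typedef struct {
    float pos[3];
    float normal[3];
} vertex_t;

/* Imagine this running on an SPU: simulate a cloth patch, emit vertices. */
void spu_emit_cloth_vertices(vertex_t *out, size_t count, float time)
{
    for (size_t i = 0; i < count; i++) {
        out[i].pos[0] = (float)i;      /* stand-in for real physics math */
        out[i].pos[1] = 0.1f * time;
        out[i].pos[2] = 0.0f;
        out[i].normal[0] = 0.0f;
        out[i].normal[1] = 1.0f;
        out[i].normal[2] = 0.0f;
    }
    /* Hypothetical next steps: DMA 'out' to the vertex buffer the RSX
     * reads from, then append a draw command to the GPU command buffer. */
}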
By the way - that's the reason everyone is having such trouble emulating the PS2. The GS chip in the PS2 is not a GPU - it's a frame buffer manager/compositor. The GS talks to its video memory over a 2560-bit-wide bus of low-latency embedded DRAM!!! This is crazy talk. The GS has 1024 write bits, 1024 read bits and 512 read/write bits, even though it's only 4 MB! No current graphics card uses this kind of craziness. The PS2 doesn't really use pixel or vertex shaders - it uses one of its vector units in a pure "read the frame buffer, do some crazy math on it, and write the frame buffer back" kind of way, thanks to that huge pipeline. The EE itself is pretty straightforward - a MIPS core with two vector units. A dual-core Intel can simulate it rather well - or a Cell, for that matter.
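That read-modify-write style is easy to show in miniature. Here's a toy full-screen darken pass over a 32-bit RGBA buffer, in plain C - pure illustration of the pattern, nothing like actual VU code:

/* The GS-era pattern in miniature: treat the frame buffer as raw memory,
 * read it, do some math, write it back. A toy full-screen darken pass
 * over 32-bit pixels - pure illustration, not PS2 vector unit code. */
#include <stdint.h>
#include <stddef.h>

void darken_pass(uint32_t *framebuf, size_t pixel_count)
{
    for (size_t i = 0; i < pixel_count; i++) {
        uint32_t c = framebuf[i];
        uint32_t r = ((c >> 16) & 0xFFu) >> 1;  /* halve each channel */
        uint32_t g = ((c >> 8)  & 0xFFu) >> 1;
        uint32_t b = ( c        & 0xFFu) >> 1;
        framebuf[i] = (c & 0xFF000000u) | (r << 16) | (g << 8) | b;
    }
}

On the PS2 that monster bus made passes like this nearly free; a modern card wants you to express the same thing as a pixel shader instead.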
Which is exactly what the 80 GB PS3 does - it emulates the EE with the Cell, but it still has a GS in silicon. That's the chip Sony removed from the 40 GB unit, and why that model can't run PS2 games at all. The original 20 and 60 GB PS3s have a full PS2 EE+GS in silicon. PS1 games, by contrast, run under full software emulation on every PS3 model.
I've often wondered if the PS3 was seriously gimped during the design phase. The initial PS3 design seemed more PS2-like - Sony was prototyping a twin-Cell system, and each Cell had 8 SPUs, not 7. Add a crazy hi-def GS-like chip and now we're talking. Sony's designers claimed the second Cell didn't add anything to the performance, but I wonder. Grafting a stock nVidia GPU onto this architecture seems like swapping a sports car's wheels for a Humvee's. They're nice wheels with some great features - but they really shouldn't be on there. Sony's teams may have caved to pressure from folks like Epic (Unreal Engine), who now lean heavily on pixel and vertex shaders running on the GPU, and who might have freaked out at the news that Sony was about to release another console without a GPU.
Hmm - that's a lot to think about. ^_^