xboxscene.org forums

Pages: 1 [2] 3 4 5

Author Topic: Another Anandtech Article  (Read 589 times)

Twasi

  • Archived User
  • Sr. Member
  • *
  • Posts: 427
Another Anandtech Article
« Reply #15 on: June 29, 2005, 09:14:00 PM »

Every system has its downfall, and I don't see how M$ could have gotten around this without making it expensive. I don't know if I'm taking the article the wrong way, but I just didn't like it because they make it sound like the two systems are horrible. They could have expressed the faults without making it one giant hyperbole.
Logged

KAGE360

  • Archived User
  • Hero Member
  • *
  • Posts: 2445
Another Anandtech Article
« Reply #16 on: June 30, 2005, 12:29:00 AM »

just hearing from m hael is enough to put faith in the next gen. As much as I would love to believe that the SPEs are useless, I'm sure they aren't. And I'm not sure the writer of the article understands that a PC has major bandwidth issues, whereas on a console the bandwidth is much greater. If the general conception is that the systems are basically equal, then great. I'd much rather it be that way than Sony of all companies being right and their console being 3 times the power of the 360 (which we all know it won't be). Either way I'm happy with what I've seen so far, and how can ANY dev team give an honest opinion of the hardware if they don't have the final dev kits from either system??!!

All this ranting is only from what I've read on forums and other sites; anandtech took the article down before I could read it. Damn shame too, I've read everything from them with confidence that they know what they are talking about.
Logged

Twasi

  • Archived User
  • Sr. Member
  • *
  • Posts: 427
Another Anandtech Article
« Reply #17 on: June 30, 2005, 12:58:00 AM »

For those of you who didn't get to read it, I saved the article (as I do with all of them). I don't know why they pulled it; EVERY site that had a copy got it pulled too, so someone didn't like that article. Here it is.

Learning from Generation X

The original Xbox console marked a very important step in the evolution of gaming consoles - it was the first console that was little more than a Windows PC.
-
The original Xbox was basically a PC

It featured a 733MHz Pentium III processor with a 128KB L2 cache, paired up with a modified version of NVIDIA's nForce chipset (modified to support Intel's Pentium III bus instead of the Athlon XP it was designed for). The nForce chipset featured an integrated GPU, codenamed the NV2A, offering performance very similar to that of a GeForce3. The system had a 5X PC DVD drive and an 8GB IDE hard drive, and all of the controllers interfaced to the console using USB cables with a proprietary connector.

For the most part, game developers were quite pleased with the original Xbox. It offered them a much more powerful CPU, GPU and overall platform than anything had before. But as time went on, there were definitely limitations that developers ran into with the first Xbox.

One of the biggest limitations ended up being the meager 64MB of memory that the system shipped with. Developers had asked for 128MB and the motherboard even had positions silk screened for an additional 64MB, but in an attempt to control costs the final console only shipped with 64MB of memory.

-
Developers wanted more memory, but the first Xbox only shipped with 64MB

The next problem is that the NV2A GPU ended up not having the fill rate and memory bandwidth necessary to drive high resolutions, which kept the Xbox from being used as a HD console.

Although Intel outfitted the original Xbox with a Pentium III/Celeron hybrid in order to improve performance yet maintain its low cost, at 733MHz that quickly became a performance bottleneck for more complex games after the console's introduction.

The combination of GPU and CPU limitations made 30 fps a frame rate target for many games, while simpler titles were able to run at 60 fps. Split screen play on Halo would even stutter below 30 fps depending on what was happening on screen, and that was just a first-generation title. More experience with the Xbox brought creative solutions to the limitations of the console, but clearly most game developers had a wish list of things they would have liked to have seen in the Xbox successor. Similar complaints were levied against the PlayStation 2, but in some cases they were more extreme (e.g. its 4MB frame buffer).

Given that consoles are generally evolutionary, taking lessons learned in previous generations and delivering what the game developers want in order to create the next-generation of titles, it isn't a surprise to see that a number of these problems are fixed in the Xbox 360 and PlayStation 3.

One of the most important changes with the new consoles is that system memory has been bumped from 64MB on the original Xbox to a whopping 512MB on both the Xbox 360 and the PlayStation 3. For the Xbox, that's a factor of 8 increase, and over 12x the total memory present on the PlayStation 2.

The other important improvement with the next-generation of consoles is that the GPUs have been improved tremendously. With 6 - 12 month product cycles, it's no surprise that in the past 4 years GPUs have become much more powerful. By far the biggest upgrade these new consoles will offer, from a graphics standpoint, is the ability to support HD resolutions.

There are obviously other, less-performance oriented improvements such as wireless controllers and more ubiquitous multi-channel sound support. And with Sony's PlayStation 3, disc capacity goes up thanks to their embracing the Blu-ray standard.
-
The Xbox 360: two parts evolution, one part mistake?

But then we come to the issue of the CPUs in these next-generation consoles, and the level of improvement they offer. Both the Xbox 360 and the PlayStation 3 offer multi-core CPUs to supposedly usher in a new era of improved game physics and reality. Unfortunately, as we have found out, the desire to bring multi-core CPUs to these consoles was made a reality at the expense of performance in a very big way.
Logged

Twasi

  • Archived User
  • Sr. Member
  • *
  • Posts: 427
Another Anandtech Article
« Reply #18 on: June 30, 2005, 01:00:00 AM »

Problems with the Architecture

At the heart of both the Xenon and Cell processors is IBM’s custom PowerPC based core. We’ve discussed this core in our previous articles, but it is best characterized as being quite simple. The core itself is a very narrow 2-issue in-order execution core, featuring a 64KB L1 cache (32K instruction/32K data) and either a 1MB or 512KB L2 cache (for Xenon or Cell, respectively). Supporting SMT, the core can execute two threads simultaneously similar to a Hyper Threading enabled Pentium 4. The Xenon CPU is made up of three of these cores, while Cell features just one.

Each individual core is extremely small, making the 3-core Xenon CPU in the Xbox 360 smaller than a single core 90nm Pentium 4. While we don’t have exact die sizes, we’ve heard that the number is around 1/2 the size of the 90nm Prescott die.
-
Cell's PPE is identical to a single core in Xenon. The die area of the Cell processor is 221 mm^2, note how little space is occupied by the PPE - it is a very simple core.

IBM’s pitch to MS was based on the peak theoretical floating point performance-per-dollar that the Xenon CPU would offer, and given MS’s focus on cost savings with the Xbox 360, they took the bait.

While MS and Sony have been childishly playing this flops-war, comparing the 1 TFLOPs processing power of the Xenon CPU to the 2 TFLOPs processing power of the Cell, the real-world performance war has already been lost.

Right now, from what we’ve heard, the real-world performance of the Xenon CPU is about twice that of the 733MHz processor in the first Xbox. Considering that this CPU is supposed to power the Xbox 360 for the next 4 - 5 years, it’s nothing short of disappointing. To put it in perspective, floating point multiplies are apparently 1/3 as fast on Xenon as on a Pentium 4.

The reason for the poor performance? The very narrow 2-issue in-order core also happens to be very deeply pipelined, apparently with a branch predictor that’s not the best in the business. In the end, you get what you pay for, and with such a small core, it’s no surprise that performance isn’t anywhere near the Athlon 64 or Pentium 4 class.

The Cell processor doesn’t get off the hook just because it only uses a single one of these horribly slow cores; the SPE array ends up being fairly useless in the majority of situations, making it little more than a waste of die space.

We mentioned before that collision detection is able to be accelerated on the SPEs of Cell, despite being fairly branch heavy. The lack of a branch predictor in the SPEs apparently isn’t that big of a deal, since most collision detection branches are basically random and can’t be predicted even with the best branch predictor. So not having a branch predictor doesn’t hurt, what does hurt however is the very small amount of local memory available to each SPE. In order to access main memory, the SPE places a DMA request on the bus (or the PPE can initiate the DMA request) and waits for it to be fulfilled. From those that have had experience with the PS3 development kits, this access takes far too long to be used in many real world scenarios. It is the small amount of local memory that each SPE has access to that limits the SPEs from being able to work on more than a handful of tasks. While physics acceleration is an important one, there are many more tasks that can’t be accelerated by the SPEs because of the memory limitation.
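The DMA round trip described above can be sketched with a toy model. All cycle counts below are made up for illustration; they are not actual Cell latencies.

```python
# Toy model of the SPE access pattern described above: each chunk of
# work must be DMA'd into local store before it can be processed.
# All cycle counts are illustrative assumptions, not real Cell numbers.

def spe_time(chunks, work_cycles, dma_cycles, double_buffered):
    """Total cycles to process `chunks` equally sized chunks of work."""
    if not double_buffered:
        # Naive: issue the DMA, wait for it to complete, compute, repeat.
        return chunks * (dma_cycles + work_cycles)
    # Double-buffered: prefetch chunk N+1 into local store while
    # computing chunk N, so the transfer is hidden whenever
    # work_cycles >= dma_cycles.
    return dma_cycles + chunks * max(work_cycles, dma_cycles)

naive = spe_time(100, work_cycles=200, dma_cycles=500, double_buffered=False)
smart = spe_time(100, work_cycles=200, dma_cycles=500, double_buffered=True)
print(naive, smart)  # 70000 50500
```

Even with the transfer overlapped, a long DMA latency dominates whenever the per-chunk work is small, which is consistent with the complaint that many tasks simply aren't worth shipping out to an SPE.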

The other point that has been made is that even if you can offload some of the physics calculations to the SPE array, the Cell’s PPE ends up being a pretty big bottleneck thanks to its overall lackluster performance. It’s akin to having an extremely fast GPU but without a fast CPU to pair it up with.

-------------------------------------------------

What About Multithreading?

We of course asked the obvious question: would game developers rather have 3 slow general purpose cores, or one of those cores paired with an array of specialized SPEs? The response was unanimous: everyone we have spoken to would rather take the general purpose core approach.

Citing everything from ease of programming to the limitations of the SPEs we mentioned previously, the Xbox 360 appears to be the more developer-friendly of the two platforms according to the cross-platform developers we've spoken to. Despite being more developer-friendly, the Xenon CPU is still not what developers wanted.

The most ironic bit of it all is that according to developers, if either manufacturer had decided to use an Athlon 64 or a Pentium D in their next-gen console, they would be significantly ahead of the competition in terms of CPU performance.

While the developers we've spoken to agree that heavily multithreaded game engines are the future, that future won't really take form for another 3 - 5 years. Even MS admitted to us that all developers are focusing on having, at most, one or two threads of execution for the game engine itself - not the four or six threads that the Xbox 360 was designed for.

Even when games become more aggressive with their multithreading, targeting 2 - 4 threads, most of the work will still be done in a single thread. It won't be until the next step in multithreaded architectures where that single thread gets broken down even further, and by that time we'll be talking about Xbox 720 and PlayStation 4. In the end, the more multithreaded nature of these new console CPUs doesn't help paint much of a brighter performance picture - multithreaded or not, game developers are not pleased with the performance of these CPUs.
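The "most of the work still lives in one thread" observation is just Amdahl's law. A minimal sketch, with an assumed (not measured) parallel fraction:

```python
# Amdahl's-law sketch of the "most work stays in one thread" point.
# The parallel fraction below is a hypothetical assumption.

def speedup(parallel_fraction, cores):
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# If only 30% of a frame's CPU work can be spread across 3 cores,
# the two extra cores buy roughly a 25% overall gain:
print(round(speedup(0.3, 3), 2))  # 1.25
```

In other words, until the dominant thread itself is broken up, piling on cores yields sharply diminishing returns.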

What about all those Flops?

The one statement that we heard over and over again was that MS was sold on the peak theoretical performance of the Xenon CPU. Ever since the announcement of the Xbox 360 and PS3 hardware, people have been set on comparing MS's figure of 1 trillion floating point operations per second to Sony's figure of 2 trillion floating point operations per second (TFLOPs). Any AnandTech reader should know for a fact that these numbers are meaningless, but just in case you need some reasoning for why, let's look at the facts.

First and foremost, a floating point operation can be anything; it can be adding two floating point numbers together, or it can be performing a dot product on two floating point numbers, it can even be just calculating the complement of a fp number. Anything that is executed on a FPU is fair game to be called a floating point operation.

Secondly, both floating point power numbers refer to the whole system, CPU and GPU. Obviously a GPU's floating point processing power doesn't mean anything if you're trying to run general purpose code on it and vice versa. As we've seen from the graphics market, characterizing GPU performance in terms of generic floating point operations per second is far from the full performance story.

Third, when a manufacturer is talking about peak floating point performance there are a few things that they aren't taking into account. Being able to process billions of operations per second depends on actually being able to have that many floating point operations to work on. That means that you have to have enough bandwidth to keep the FPUs fed, no mispredicted branches, no cache misses and the right structure of code to make sure that all of the FPUs can be fed at all times so they can execute at their peak rates. We already know that's not the case as game developers have already told us that the Xenon CPU isn't even in the same realm of performance as the Pentium 4 or Athlon 64. Not to mention that the requirements for hitting peak theoretical performance are always ridiculous; caches are only so big and thus there will come a time where a request to main memory is needed, and you can expect that request to be fulfilled in a few hundred clock cycles, where no floating point operations will be happening at all.
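The gap between peak and sustained throughput is easy to see with back-of-the-envelope arithmetic. Every figure below is a hypothetical assumption for illustration, not a vendor-confirmed spec:

```python
# Back-of-the-envelope peak-FLOPS arithmetic. All figures here are
# illustrative assumptions, not confirmed hardware specs.

clock_hz    = 3.2e9  # assumed core clock
simd_width  = 4      # floats per vector operation
ops_per_fma = 2      # a fused multiply-add counts as two "operations"
cores       = 3

peak = clock_hz * simd_width * ops_per_fma * cores
print(f"peak: {peak / 1e9:.1f} GFLOPS")  # peak: 76.8 GFLOPS

# Now charge a 500-cycle trip to main memory once per 2000 busy cycles,
# during which no floating point work happens at all:
busy, stall = 2000, 500
sustained = peak * busy / (busy + stall)
print(f"sustained: {sustained / 1e9:.2f} GFLOPS")  # sustained: 61.44 GFLOPS
```

And that is with only one stall per 2000 cycles and every vector slot filled every cycle; real code with branches, cache misses, and scalar work falls much further below the marketing number.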

So while there may be some extreme cases where the Xenon CPU can hit its peak performance, it sure isn't happening in any real world code.

The Cell processor is no different; given that its PPE is identical to one of the PowerPC cores in Xenon, it must derive its floating point performance superiority from its array of SPEs. So what's the issue with the 218 GFLOPs number (2 TFLOPs for the whole system)? Well, from what we've heard, game developers are finding that they can't use the SPEs for a lot of tasks. So in the end, it doesn't matter what the peak theoretical performance of Cell's SPE array is if those SPEs aren't being used all the time.
-
Don't stare directly at the flops, you may start believing that they matter.
Logged

Twasi

  • Archived User
  • Sr. Member
  • *
  • Posts: 427
Another Anandtech Article
« Reply #19 on: June 30, 2005, 01:01:00 AM »

QUOTE
10 - Posted on Jun 29, 2005 at 10:04 PM by KristopherKubicki     Reply
Ecmaster76: Eh, something was messed up with the content management system. PS3 article is pulled for now because Anand is worried about MS tracing his anonymous insider.

Kristopher
Logged

geo22

  • Archived User
  • Newbie
  • *
  • Posts: 13
Another Anandtech Article
« Reply #20 on: June 30, 2005, 03:39:00 AM »

I read in a forum post on anandtech that there was a problem with their system, that's why the link no longer works, but the post said something about it being pulled because Anand was afraid of his source getting traced... whatever that means. But it could be possible, since the article deals with a lot of insider info regarding the tech in these consoles. The author did say that he's been getting this stuff directly from the devs themselves. So basically the hardware on both systems (the CPUs) wasn't what they fully expected, but as is the case with every previous console, the developers always want more, and the console makers can only give them so much due to price constraints. Sure, the performance of both CPUs may not be on par with a Pentium 4 or an Athlon 64 (as the article states), but hey, this is a console CPU, not a PC's, and it's much more powerful than the current gen. So yeah, the devs may be bitching that the tech isn't all that, but what choice do they have? Once they get used to the hardware, I'm pretty sure they can pump out some amazing games that can truly be called "next gen"; how long before we get to play those games is what the article is also trying to get at.
Logged

crustyteacup

  • Archived User
  • Jr. Member
  • *
  • Posts: 89
Another Anandtech Article
« Reply #21 on: June 30, 2005, 05:37:00 AM »

regardless of whether the article is complete bulls*it or is accurate, how can anybody expect the CPUs in a console to rival those found in a desktop? They seem to forget that the latest PCs will set you back up to 4x what either of these consoles will. They obviously haven't thought about this at all; maybe a little common sense would help them to write better articles.

And OK, yeah, PCs are faster and they are ever evolving, but they're also crap for playing games, let's face it. You have to go through the lengthy install, which frequently these days runs over several CDs, as if the PC industry can't adopt its own standard of DVD. Can they not realise that if somebody has the minimum specs for some new game, then they will most probably have a DVD drive?

Oh, and what's this, I need the latest graphics card drivers again, and I need a 200gig hard drive to install 3 games, and I'd better go and buy a £300 graphics card to play the latest games at decent speed... or how about I just go and buy a £300 console, never have any of those troubles, and it will play every game that's on the platform.

PC gaming is dying on its arse; nobody wants a £1500 machine to play games when they can get a machine that plays games for 1/5 of that, so I wish they wouldn't talk about it like it's the holy grail of gaming.
Logged

incognegro

  • Archived User
  • Hero Member
  • *
  • Posts: 1764
Another Anandtech Article
« Reply #22 on: June 30, 2005, 06:52:00 AM »

The problem I had with the article is that, if the GPUs are the deciding factor, then why isn't there a real comparison between them? The article focuses mostly on trashing the CPUs. How does the 360 GPU compare to the RSX?
Logged

zero129

  • Archived User
  • Jr. Member
  • *
  • Posts: 73
Another Anandtech Article
« Reply #23 on: June 30, 2005, 07:21:00 AM »

rolleyes.gif .

Another thing that seems to have been left out is that consoles are always weaker than PCs, but that still doesn't mean they can't keep up. Take for instance the Xbox: comparing the 733MHz P3 in it to a [email protected], there is a very big difference between the two in so-called "Real World Performance", yet the Xbox's P3 can run Doom 3 almost as well as the [email protected]. So the point is moot; the next gen of consoles will be able to keep up with PCs right up until the end of their lives.
Logged

scienide

  • Archived User
  • Full Member
  • *
  • Posts: 149
Another Anandtech Article
« Reply #24 on: June 30, 2005, 07:27:00 AM »

Grabbed from another forum:

Anyways, let's see what David Milford, who works at IBM has to say about the article, shall we, haters?

"
Anand is out to lunch on a lot of this. His previous article was pretty sketchy, but this one is pretty much overboard with inaccuracies and random leaps to conclusions.

For one, he bases a lot of his discussion on the claim that the PPE is "identical" to a Xenon core, which is far from the truth. In fact, internally the Xenon team and the Cell team were not allowed to communicate with each other for legal reasons. The cores were developed by different teams; they just happened to have made the same design decisions for a couple of fundamental aspects (like SMT and in-order execution).

He gets so many fundamental things wrong that I'm shocked it was published; usually Anand is very knowledgeable.
Floating point multiplies are also not "1/3 as fast" on Xenon as on a Pentium 4; I've no idea where he got that from. It sounds like some developer told him that and he took it as verbatim truth.

What I think is going on here is Anand talked to some (curiously anonymous) developers who took their code designed for more PC-like processors (like the G5/PowerPC 970) and ran it on Xenon/Cell and was amazed that it didn't perform that well.

Xenon and Cell are in-order processors. The order of the instructions is very important, because the processor doesn't dynamically re-order them on Xenon/Cell, unlike the PC processors. A straight port will give you sub-par performance, easily.

Once they learn more about the chip, and how to program for it, the performance will be much, much higher.

This article is from a PC-centric website likely talking to PC-centric developers who don't have much experience with in-order cores. It's pretty worthless."
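The quoted point about instruction order on an in-order core can be demonstrated with a toy issue model (register names and latencies below are invented for illustration):

```python
# Toy model of why instruction order matters on an in-order core.
# Each instruction is (dest, sources, latency). The core issues one
# instruction per cycle, in program order, and stalls whenever a
# source register isn't ready yet. An out-of-order PC processor would
# reorder around those stalls by itself; an in-order core cannot.

def cycles(program):
    ready = {}  # register -> cycle at which its value becomes available
    t = 0
    for dest, srcs, lat in program:
        t = max([t] + [ready.get(s, 0) for s in srcs])  # stall on inputs
        ready[dest] = t + lat
        t += 1  # issuing the instruction takes one cycle
    return max(ready.values())

LOAD = 5  # assumed load-use latency, in cycles

# "Straight port" order: each add immediately consumes the load before it.
naive = [("a", [], LOAD), ("x", ["a"], 1),
         ("b", [], LOAD), ("y", ["b"], 1)]

# Same work, loads hoisted so their latencies overlap.
scheduled = [("a", [], LOAD), ("b", [], LOAD),
             ("x", ["a"], 1), ("y", ["b"], 1)]

print(cycles(naive), cycles(scheduled))  # 12 7
```

Same four instructions, nearly half the time, which is the commenter's argument for why naive ports to Xenon/Cell look slow until the code is rescheduled for the in-order pipeline.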
Logged

sunkist

  • Archived User
  • Jr. Member
  • *
  • Posts: 50
Another Anandtech Article
« Reply #25 on: June 30, 2005, 07:50:00 AM »

QUOTE(incognegro @ Jun 30 2005, 07:03 AM)
The problem I had about the article is that, if the gpu's are the deciding factor then why arent there a real comparison between them. The article focuses mostly on trashing the cpus. How does the 360 gpu compare to the rsx?
Logged

incognegro

  • Archived User
  • Hero Member
  • *
  • Posts: 1764
Another Anandtech Article
« Reply #26 on: June 30, 2005, 08:24:00 AM »

QUOTE
The GPUs are definitely what I am interested in. I wonder how different they are going to be from their PC equivalents. Especially since ATI keeps pushing back the release of their next chip, I wonder if it's so they can focus on the Xbox GPU.


Since they started talking about how good the RSX is, one would think it would be easy to compare them, even though the RSX isn't finished while the ATI chip is. It's just weird that they shy away from the comparison, since it would be the only positive thing they could add to the article.
Logged

krazyshane

  • Archived User
  • Jr. Member
  • *
  • Posts: 67
Another Anandtech Article
« Reply #27 on: June 30, 2005, 10:14:00 AM »

rolleyes.gif

what a bunch of crap.

Shane
Logged

thax

  • Archived User
  • Sr. Member
  • *
  • Posts: 420
Another Anandtech Article
« Reply #28 on: June 30, 2005, 12:19:00 PM »

QUOTE(miggidy @ Jun 30 2005, 06:10 PM)
I was pretty certain that Anand's source for this info was full of shit.
It was probably some disgruntled small house developer who didn't know what he was talking about beerchug.gif

I don't think it was the developers giving bad information; rather, it was the author of the article asking leading questions and inferring the answers to mean things the developers never intended.

anandtech: <shows that the cores are slow through a comparison to the Pentium architecture>

anandtech: "We of course asked the obvious question: would game developers rather have 3 slow general purpose cores..."

Developer Response: "We really don't want 3 slow cores, we would rather have fast cores". (likely)

anandtech: "Despite being more developer-friendly, the Xenon CPU is still not what developers wanted." This is logically valid, but unsound because the premise that the cores are slow is untrue.

Implied Anandtech Premise: Either the developers want the Xenon CPU OR they want the Pentium/Athlon CPU. (This is a logical fallacy called a False Dilemma, works great on small children.)

Anandtech Conclusion: "The most ironic bit of it all is that according to developers, if either manufacturer had decided to use an Athlon 64 or a Pentium D in their next-gen console, they would be significantly ahead of the competition in terms of CPU performance."

I bet the developers were pissed when they read the article and saw how their statements were interpreted.
Logged

twistedsymphony

  • Recovered User
  • Hero Member
  • *
  • Posts: 6955
Another Anandtech Article
« Reply #29 on: June 30, 2005, 12:58:00 PM »

QUOTE(thax @ Jun 30 2005, 02:30 PM)
I don't think it was the developers giving bad information rather it was the author of the article asking leading questions and infering the answers to mean things that the developers never intended.
Logged