Breaking the Wii U down: what each bit means


Want to know what the difference between a GPU and a GPGPU actually is? We’ve got just the information you need to gain a better understanding of what makes the Wii U tick, presented in a way that’s simple for everyone to understand.

In our recent article, Is the Wii U Powerful Enough?, we took a deep look into the rumours surrounding the Wii U’s supposedly “slow” processor and found that the idea of the Wii U being a true “next generation” console from a power standpoint was loosely based on it shipping with a tri-core IBM POWER7-based architecture – which it didn’t. We also explored the insight of a few respected developers and, based on the information we had acquired at the time, came to the conclusion that the Wii U was likely to be on par with, or a little below, the specifications of current generation consoles – though even today we still don’t have conclusive evidence to make a definitive claim either way.
Today the arguments are still ongoing. While the majority of those enjoying their time with their shiny new Wii U units (we’re having a blast with ours) most likely couldn’t care less, we’re back again to discuss a new rumour and, most importantly, to give you the knowledge and understanding of what’s inside the Wii U and how these parts actually work – once you understand how it works, you’ll realise just how powerful it could actually be.
Another Rumour

CVG ran a surprising article yesterday and the first two lines were the most important bits, which I’ll duplicate in their entirety for you to read:

“A well-known hardware hacker has published what is believed to be the processor and graphics card specs of Wii U.”

“It is said that the Wii U processor carries a clock speed of 1.24 GHz – less than half the speed of the PS3 and Xbox 360. However, its GPU core is believed to run at 550 MHz, which is the same speed as Sony’s home console and a tad faster than Microsoft’s.”

Let’s get this over with quickly: there’s been a lot of talk back and forth about whether the processing speed reported here was an idle speed – if so, the number means absolutely nothing and is just more fodder for the media’s attention. I’m not going to debate that issue here today, as there’s simply not enough substantial evidence to formulate an opinion on it. What I am going to do is show you how the Wii U actually processes information, which will in turn show you how far off base this rumour is – there are entirely too many things that aren’t factored into the equation here.
Let’s look back at CVG’s piece, though. In the latter lines they cite the Wii U’s supposed clock speed for the CPU, but then immediately drop in a comparison of how the Wii U’s GPU stands against its current competition. While I trust that the author at CVG understands that there’s a difference between a CPU and a GPU, the vast number of comments I’ve personally read around the internet and social media clearly show that the majority of ‘gamers’ don’t know the difference. That’s understandable, too, seeing how console gamers enjoy not having to worry about upgrading their consoles’ hardware (e.g. graphics card, etc.) to keep up with new game releases – but with the current arguments about the hardware inside the Wii U trending within the industry, I’ve noticed that these same people are the least equipped of them all. So today, let’s leave the arguments on the side-line and gain a better understanding of how the Wii U actually works – shall we?

The CPU
Wii U’s CPU is the smaller component on the left
The CPU, or “Central Processing Unit,” is the brain of the console. The CPU executes the arithmetic and logical instructions handed to it by software. What you really need to know here is that every CPU has an internal clock, and this clock regulates the cycles (also known as “ticks”) in which the CPU executes each new instruction. In short, the faster the clock speed, the more instructions the CPU can handle in one second’s time.
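To make that concrete, here’s a back-of-envelope sketch in Python of how clock speed caps instruction throughput. The numbers are purely illustrative – they are not actual Wii U measurements, and real CPUs complete different instructions in very different numbers of cycles:

```python
# Toy model: a clock ticking `clock_hz` times per second, where each
# instruction takes `cycles_per_instruction` ticks to complete.
def max_instructions_per_second(clock_hz, cycles_per_instruction):
    """Upper bound on instructions the CPU can finish in one second."""
    return clock_hz / cycles_per_instruction

# A hypothetical 1.24 GHz core needing 2 cycles per instruction:
print(max_instructions_per_second(1.24e9, 2))  # 620000000.0
```

As the sketch shows, raising the clock is only one way to do more work per second – needing fewer cycles per instruction (a more efficient architecture) gets you there too, which is a recurring theme in the rest of this article.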
Just for a little more background, you oftentimes hear people talk about the “complexity” of a CPU – what does that mean, though? It refers to how many transistors are embedded in the microprocessor. A transistor is the very start of where information flows: when switched on it represents a “1” and when switched off it represents a “0” – binary language. These transistors can switch billions of times per second, and how many are embedded in a CPU determines how much work it can do with each tick of its clock. When you take into consideration that there are hundreds of millions of these tiny transistors embedded in each CPU, you’ll understand why it’s considered “complex.” This is also where the generational gap comes from, seeing how transistor counts have historically doubled roughly every two years.
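As a quick illustration of that binary idea, here’s how a row of on/off transistor states maps to an ordinary number – a toy Python sketch, not how hardware is actually read out:

```python
# Four transistor states, most significant bit first: on, off, on, on.
states = [1, 0, 1, 1]

# String the on/off states together as binary digits and interpret them.
value = int("".join(str(bit) for bit in states), 2)
print(value)  # binary 1011 is the number 11
```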
According to Wikipedia (who’s been doing an awesome job staying up-to-date with this information), the Wii U features an “IBM PowerPC 750-based tri-core processor.”
What’s a “tri-core processor?” This is where three separate CPU cores are linked together on one chip to form a single multi-purpose CPU. Where a single-core processor has to do everything on its own, a tri-core processor can break a workload apart for three cores to work through at once – greatly improving the speed at which the job is completed. If it were humans doing the work: if one man can complete the job in eight hours alone, two men working together can complete it in four hours, and so on. Simple, isn’t it? The clock speed of each individual core stays the same, but the amount of work computed per second is greatly increased as a whole. On paper, the benefit of a tri-core unit isn’t a faster clock – it’s how much information can be pushed through all three cores at once (and in practice few workloads split perfectly cleanly, so the gain is rarely a full three-fold).
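Here’s a small Python sketch of that divide-and-conquer idea: one summing job split into three chunks, each handed to its own worker. Note this uses threads purely to illustrate the split – real CPU cores execute their chunks genuinely in parallel:

```python
# Sketch: one job split across three "cores" (workers).
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each core works through its own slice of the job.
    return sum(chunk)

def tri_core_sum(data, cores=3):
    # Break the data into `cores` roughly equal chunks.
    size = (len(data) + cores - 1) // cores
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # Each worker handles one chunk; the partial results are combined at the end.
    with ThreadPoolExecutor(max_workers=cores) as pool:
        return sum(pool.map(partial_sum, chunks))

print(tri_core_sum(list(range(1000))))  # 499500 – same answer, shared workload
```

The splitting and recombining steps are overhead the single-core version doesn’t pay, which is one reason three cores don’t simply mean three times the speed.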
The GPU

Wii U’s GPU
The GPU, or “Graphics Processing Unit,” is a very efficient unit that’s used for computing images, especially three-dimensional ones. See, graphics rendering is overly taxing on a CPU, and a separate GPU helps relieve the CPU of the constant strain of producing the visuals – which is why they’re commonly found in videogame consoles. But a typical GPU has strict limitations on what it can be utilised for.
On the other hand, a GPGPU, which is what is found within the Wii U, is a “General Purpose Graphics Processing Unit.” This means that some of the limitations restricting it to computing graphics are removed, and it can in turn help out with the normal functions of the CPU as well. In easier terms, if a Wii U title is pushing the CPU to its limits and creating issues within the game, a slight drop in visuals would free up the GPGPU to supply a boost to the CPU for stability. Developers could also scale down their games’ graphics and use the GPGPU’s additional power to bring better physics and features to their games that wouldn’t be possible on the CPU’s power alone. While the inner workings of a GPGPU are overly complex (going far beyond my basic knowledge of the subject), I do know this – the addition of a GPGPU does add a significant amount of flexibility to the Wii U’s architecture, and it’s one that will likely be a focal point developers take full advantage of in the future to push the Wii U to its maximum performance levels.
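As a rough mental model – and only that; Nintendo has never publicly documented how the Wii U actually schedules work – you can picture GPGPU offloading as a dispatcher that spills general-purpose tasks over to the GPU once the CPU is saturated:

```python
# Toy dispatcher: fill the CPU first, then spill overflow work to the GPGPU.
# Task names and capacities are invented for illustration.
def dispatch(tasks, cpu_capacity):
    cpu, gpu = [], []
    for task in tasks:
        if len(cpu) < cpu_capacity:
            cpu.append(task)   # the CPU still has headroom
        else:
            gpu.append(task)   # a GPGPU can absorb non-graphics work too
    return cpu, gpu

cpu, gpu = dispatch(["ai", "audio", "physics", "particles"], cpu_capacity=2)
print(cpu, gpu)  # ['ai', 'audio'] ['physics', 'particles']
```

A plain GPU would have nowhere to put the spilled tasks – that overflow lane is exactly what the “general purpose” part adds.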
Wii U features a custom AMD Radeon HD GPU with a “significant” amount of DRAM embedded into the die. As I mentioned in my earlier piece, this is a great piece of tech, and one that looks to double what’s found inside Sony’s and Microsoft’s current consoles.
RAM

Memory Chips (RAM) found within the Wii U
This is a big point of confusion that I’ve seen regarding the Wii U’s specifications – by itself, RAM does not create “next-generation” consoles. RAM, or “Random Access Memory,” is where data is stored and manipulated, in binary code, via the same type of transistors found inside the microprocessor. RAM comes in the form of a “memory chip.” On each memory chip there are millions of tiny transistors and capacitors, where the transistor switches the value to be stored (“1” or “0”) and the capacitor holds it for future access.
If a CPU has little memory available, it has to constantly evict data it has already computed in order to free up room for its next calculations. So if there’s not enough memory and a certain calculation has to be constantly reprocessed, instead of simply read back from memory, the overall effective speed of the CPU will be slower. To put it in the simplest terms: the more RAM that’s available, the better the chance of your CPU running at full capacity.
This brings us to DRAM, which is “Dynamic Random Access Memory.” This is volatile memory – in short, it loses what’s stored within it unless it’s constantly refreshed. It’s the primary type of RAM used in gaming consoles because of its simplicity: one transistor paired with one capacitor per bit. As I just mentioned, the downside to DRAM is that the capacitors leak their charge and need it constantly topped up, but this is easily remedied via specialised on-board refresh circuitry.
But how does fast embedded memory make things better (faster)? That’s actually quite simple: cache memory. Cache memory is a very small, extremely accessible pool of memory that’s much faster to reach than the larger banks of main RAM. Cache holds data that’s frequently used, so when the CPU needs something, it checks the cache first; only if it isn’t there does it go out to the main banks of RAM, which takes additional time. The cache doesn’t literally raise the CPU’s clock speed – it means far fewer cycles are wasted waiting on data, so more of each second is spent doing useful work.
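The lookup order described above – check the small fast cache first, fall back to slower main memory on a miss – can be sketched in a few lines of Python, with dictionaries standing in for the real memories:

```python
# Pretend main memory: every address holds some value (here, addr * 2).
RAM = {addr: addr * 2 for addr in range(1024)}
cache = {}  # the small, fast cache starts out empty

def read(addr):
    if addr in cache:          # cache hit: fast path
        return cache[addr]
    value = RAM[addr]          # cache miss: slow trip out to main RAM
    cache[addr] = value        # keep a copy so the next read is fast
    return value

print(read(7))     # 14 – the first read misses and fetches from RAM
print(7 in cache)  # True – a second read of address 7 now hits the cache
```

Real caches are far more sophisticated (limited size, eviction policies, multiple levels), but the hit-then-miss flow is the same.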
Wii U packs 2GB of RAM, with one GB allocated specifically for playing games. The other GB of RAM will likely be used for system operations, etc.

eDRAM

While eDRAM should probably sit in the section above under “RAM,” I’m separating it here for a reason – it’s a big point of confusion for many people trying to figure out the Wii U’s specifications.
Firstly, much like the GPU vs. GPGPU confusion, eDRAM is not an entirely separate entity either. eDRAM is simply “embedded Dynamic Random Access Memory” – DRAM that’s embedded directly onto the processor die itself. In the Wii U’s case, the eDRAM is found within the GPU. This is a cool trick that brings big advantages for the processor: wider buses and faster access. Buses are the ‘highways’ of a computer, where information flows from one component to the next; the wider the lanes (buses), the more information can travel down them at once. The faster the information flows, the less time the processor spends waiting on data – so more of its clock cycles go towards actual work.
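A quick back-of-envelope formula shows why bus width matters so much. The figures below are hypothetical – the Wii U’s actual bus widths and memory clocks were never officially published:

```python
# Peak bandwidth = (bus width in bytes) x (transfers per second).
def bandwidth_gb_per_s(bus_width_bits, transfers_per_sec):
    return bus_width_bits / 8 * transfers_per_sec / 1e9

# A hypothetical 1024-bit on-die eDRAM bus vs. a 64-bit external bus,
# both running at 550 MHz (one transfer per cycle, for simplicity):
print(bandwidth_gb_per_s(1024, 550e6))  # 70.4 GB/s
print(bandwidth_gb_per_s(64, 550e6))    # 4.4 GB/s
```

Same clock, sixteen times the width, sixteen times the data per second – which is exactly why embedding memory on-die, where very wide buses are practical, pays off.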
This is why the amount of eDRAM is another major factor in the Wii U’s maximum processing speed and capacity. We know there’s a “significant” amount of eDRAM found within the GPU, but we don’t know the full extent of what’s embedded there – the more, the better. This becomes even more important when you factor in that the Wii U utilises an MCM, which is next up to cover – make sure to refresh your capacitors to ensure that you can compute what I’ve just told you; it’s another vital factor in finding the Wii U’s true processing capacity.

MCM

Here you can see the MCM
Multi-Chip Module – this is another factor that isn’t talked about much around the Internet, but it’s another bit of trickery found within the Wii U, and one that, like many of the others, is a way to squeeze more performance out of the hardware. A multi-chip module is where multiple chips are placed together on a single package (the small square mounted on the circuit board), allowing them to work together in unison with great efficiency.
For the Wii U, the multi-chip module setup finds the CPU and the GPU placed together on the same package, allowing them to work together at the highest efficiency possible – another clever trick Nintendo has used to maximise performance, especially when you consider that the GPU and its unknown amount of eDRAM can contribute to the CPU’s workload, thanks to its general purpose abilities (GPGPU).
Here you can see the massive Heat Sink 
But there is a downside to packing all of this into a single package, and it comes in the form of heat. A tri-core processor and a GPU already create a substantial amount of heat on their own, but placing them together means all of that heat is generated within a very small area. This is why the Wii U includes a very large heat sink, as well as an oversized fan. If you’re unfamiliar with a heat sink, it’s an aluminium alloy component that draws heat away from the chips and dissipates it into the surrounding air.
For any of you reading this who have ever experienced the dreaded Red Ring of Death (RROD) on Microsoft’s earlier Xbox 360 consoles, you know exactly what heat can ultimately do to a system. At a lesser extreme, overheating can also slow a processor’s speed. If you’ve tried playing a high-end PC title on an average system and pushed its performance to the maximum for an extended period, you’ve likely experienced these negative effects first-hand. The longer you play and the hotter the system gets (if not efficiently cooled), the further the overall gameplay experience falls apart.
On a brighter note, Nintendo is known for developing consoles that are relatively free of issues, and I’ve no doubt that they ironed out all of the problems that utilising an MCM configuration within the Wii U could produce. Still, I would make certain that your Wii U console is placed in an area with adequate ventilation – don’t cram it into a small entertainment centre! This is also why it’s a top priority in the Quick Start Guide that comes with the Wii U, and you should most definitely abide by those instructions to the fullest extent.
But I’d be remiss not to raise the question that’s in my head right now – will we see Wii U owners ignoring these warnings, cramming their Wii U units into an unventilated space and finding themselves with a system failure from overheating? I’d think it’s entirely possible, but I’d like to think we won’t see it until developers start making full use of the MCM’s advantages – greatly increasing the heat produced throughout extended gameplay sessions.

The Breakdown

Now that we know the major parts that make the Wii U tick and how these parts actually work – what does all of this mean for the Wii U in general?

For starters, and most importantly, it means that the Wii U is expertly designed to make a somewhat unimpressive CPU (in terms of individual cores) deliver impressive performance. Utilising a tri-core architecture greatly increases the CPU’s overall throughput to begin with, but placing the GPGPU and its “substantial” amount of eDRAM (both of which can boost the CPU’s effective performance) on a multi-chip module increases their combined efficiency even further. This also means that the Wii U’s architecture holds numerous ways to maximise its performance when put into the right developers’ hands. It’s likely we’ll see jaw-dropping HD titles that run in native 1080p, taking full advantage of the GPU (and its eDRAM) while offering a gameplay experience that’s less taxing on the CPU – think thatgamecompany’s Journey on Sony’s PS3.

We could also see games like Grand Theft Auto 5 come to the Wii U without issue, if the developer scales down the graphics to 720p and takes full advantage of the numerous ways to boost the CPU’s performance to handle the burden of running a constantly moving open world environment. Of course, these are my personal speculations, but when broken down into this perspective, I’d like to think that we haven’t even begun to see the true potential of what the Wii U can produce – making the recent rumour I brought up earlier seem somewhat irrelevant, doesn’t it?

Was the speed found in the recent rumour that of a single core of the Wii U’s CPU alone? Was the GPGPU factored into the equation when checking the CPU’s clock speed? Were the advantages of the MCM utilised when that speed was clocked? Do you now see how vital all of this is to finding out how powerful the Wii U truly is? I hope so.
The last point I’d like to touch on briefly is that there’s a reason Nintendo has opted for all of this: cost. Developing a home console with a tablet-like controller is already expensive, but having to build a powerful HD console that can stream information to it from afar makes that cost even greater. Nintendo could have easily slapped a quad-core POWER7-based architecture into it and had themselves a true “next generation” console, but that would’ve come with a much higher cost to the consumer – and that’s not the Nintendo way of doing things.
Instead, their engineers have cleverly crafted a powerful console from somewhat outdated technology (one that I believe will be shown to be more powerful than current generation consoles in the coming months) that offers up a near overabundance of control options and lets developers run wild with their creative minds. When the President of Nintendo of America, Reggie Fils-Aime, stated that the Wii U is a “new generation” of home consoles, I think he was right on the money. The abundance of ways to play games on the Wii U sets it apart from other dedicated gaming consoles, and as you can now see, there are plenty of options to get additional horsepower out of the Wii U if and when it’s needed – now it’s up to the developers to make the best of the hardware options placed before them.
As I bring this piece to a close, I hope I leave you with a better understanding of how these components that are vital to powering the Wii U (and other computers) actually work, and that you see that sometimes (like I said in my last piece) you’ve just got to dig a little bit deeper to find the ‘truths’ behind the matters at hand. In the coming months we will eventually learn what the Wii U’s maximum performance level actually is, and it’s quite likely that by the time that happens, most everyone will be focused on other things within the industry. Here at Digitally Downloaded we share a common mentality, and that’s to “have fun playing games, no matter what you’re playing them on.” If there’s one thing I’m absolutely certain of, it’s that there will be a lot of fun to be had with a Wii U GamePad in hand.
Now that you have this information – does this change your mind about the recent rumours surrounding the Wii U’s power, or lack thereof? We’d love to know your thoughts in the comments section below. 

This is the bio under which all legacy DigitallyDownloaded.net articles are published (as in the 12,000-odd, before we moved to the new Website and platform). This is not a member of the DDNet Team. Please see the article's text for byline attribution.

  • Excellent article, Wii U is really a marvel, I am enjoying every second I play on it. Generation defining games will be released for it down the road, as is always the case with a new console release.

  • Bravo, Chris. You have presented an article in such a way that very complex (and apparently often misunderstood) subjects are laid out in a more digestible straightforward manner.
    I've seen several other articles from other sites that purported to have the same goal in mind but ended up shallow to the point of uselessness.

    All the talk of hardware power makes for a fun ride but what I find most fascinating is how, when we take a step back, those of us who are waist deep in games (the demanding users) are ultimately heading to a point that a majority of people would regard as "nitpicking" hahah. Performance is getting pretty dang near good enough and ubiquitous enough such that a whole lot of people (and not just "casuals"!) are not going to care about hardware specs. They will only care that what they are looking at is attractively crafted with care, as Nintendo's next Zelda or Mario will surely be.

    In a more general sense, the largest bit of unfortunate cynicism IMO is when I see enthusiast consumers dismiss a product on the basis of "laziness" or "cheapness" or even "terrible engineering". These comments feel so shortsighted and dismissive of surely dozens (if not hundreds) of very talented and well-paid professionals. In the case of Nintendo, I am sure that creating a new gaming console for a worldwide mass market must be a terrifically difficult and complex undertaking involving countless personnel and facilities around the globe and millions upon tens of millions of dollars invested each year.

    Having a gripe with a final product for a myriad of reasons is absolutely within the right of a consumer, and many complaints may be justified. But c'mon peoples, "lazy, cheap stupid engineers" is not one of them, most particularly not of the company that has the most experience in SUCCESSFULLY bringing such products to market!

  • Hi Andrew,

    Firstly, thanks for your kind words!

    Some people and the things they say disturb me. When you look at Wii U, a company that doesn't care about their cost to the consumer would have released a console such as this with a cost that could easily double what Nintendo has done. Just the GamePad alone is expensive to create, but having to build a game console and a tablet-like controller too, that's hard to do at a cost that's fitting for families.

    I said it in the article, but the fact that they've created a console that has the power that it does with processors that most companies would laugh at, is pretty impressive. Using a GPGPU or a MCM isn't ground breaking – it's been done before – but it was a smart decision to make and will be a factor in finding how fast the clock speeds really are for the console.

    In the end, it's all about the games and Wii U will deliver. I'm not an owner, as I've decided to focus more on the mobile side of the industry here at Digitally Downloaded at the moment (I purchased a 4th Gen iPad), but we've got several Wii U units here at DD and the team are loving them too!

    Cheers! 🙂

  • The Wii U looks great now but wait until Microsoft and Sony catch up. When the Dreamcast came out, it was the first game system of that generation and a little better than the other current game systems. But when the other companies entered the next generation the Dreamcast was killed by the superior PS2. Hopefully this won't happen to the Wii U when the PS4 comes out.

  • People are all assuming that the next Xbox and PS4 will be all-powerful consoles. The reality is that even if they have all that extra power, there will be so few games produced that take advantage of the technology that it will be irrelevant. The costs of developing for the hypothetical power that people are talking about will put those games beyond the reach of anyone but Call of Duty, Final Fantasy and Assassin's Creed.

    The rest of the games will be roughly as good as what we're seeing now. They'll be downloadable and the Wii U will be an affordable platform to develop on.

    It's a very different situation to what SEGA faced.

  • Well if Sony and MS made new hardware with specs similar to a mid-range PC, Developers could just port over the games at medium settings and it still look much better.

  • The visual fidelity might well be better in such a scenario, but the reality is that for everything else (AI scripting, for instance), the Wii U will be capable of most of what the next gen consoles will be capable of.

    The Wii's main problem when it came to hardware was not the visual quality. It was the other features that increased power could, at the time, offer.

    However, this current generation PS3 and Xbox 360 is about as capable in those areas as high-end PC games, because the budgets that the next step would take would bankrupt most developers. I don't see the next gen pushing that much further ahead in those areas.

    Consequently, visuals aside, I don't see multiplatform games being fundamentally different on the Wii U. The era where Nintendo consoles would get shafted with the COD series etc. is over, I think.

  • Very well written, and it goes to show that raw numbers don't make the full picture. For example, did you know that the SNES had a 5Mhz cpu while the Genesis had a 7Mhz cpu? And we all know how that turned out in the end. As for Sony and MS releasing machines with monster specs… no. I say this with confidence for various reasons.

    Sony is in a terrible financial situation and cannot afford to take a loss on their next console – or if they do take a loss, they can't afford one quite as large. The PS3 at launch was sold at a loss of about $250-$300. Now, unless Sony wants to give Nintendo a 2 year head start in order for tech costs to be lower, well, we all saw how that worked for them when they gave MS the lead.

    As for MS, some of the latest rumors say that the 720 CPU is going to be about 1.6Ghz, which is interesting to say the least because that's about the same as the 360 – and before you all say "no, it's 3.2", that's only with multithreading, and even then that doesn't make it better. Also, again, there's the cost issue: MS had to take massive losses same as Sony, and extending the warranty from 1 year to 3 in order to cover the RROD in the early consoles cost them around 3 BILLION. Also to consider is that MS has a gold mine with the Kinect and the casual audience, so if they want to keep that crowd they need to make the console affordable, especially if they bundle it with Kinect 2.0. Now, while MS has extra cash to soak up the damages that Sony doesn't, no company wants to take continuous losses from any division.

    People also need to consider the cost to develop games for these machines. At the moment, for example, God of War 3 cost around 50 million to make, and FF13 was around 100 million or so. If they indeed release beastly machines, how long and how much money do you think it would take to get them looking as good as people would expect?

    So in conclusion I think people will be surprised when it turns out that MS and Sony release machines that will be more in line with the power that the WiiU has for both profit and ease on developer costs.

  • Thanks for the comment! This is my first comment from my Wii U, so let's see how this goes 🙂

    I agree and disagree with you. The developer costs will mean that the next gen consoles will not be THAT much more powerful than the Wii U.

    However, if it was commercially viable, both Sony and Microsoft would have no issue with loss leading. The bulk of revenue in the console business comes from licensing, not hardware, and so Sony and Microsoft would both build a console that makes a loss in order to attract the most developer support possible.

    Because developers won't want expensive consoles to work on, Sony and Microsoft will likely only jump the Wii U by a small margin.

    That was an easy post to make! Good work, Wii U browser!

  • I haven't read all of this yet…

    BUT

    Why didn't Nintendo just release the specs? IMHO that is where this issue stems from and now Nintendo fans and Wii U early adopters are all (in a manner of speaking) playing Nintendo apologist to justify this entry into the console marketplace.

    Just a thought that has been really bugging me about this. Nintendo's lack of PR has caused this mess. I suppose every Wii U console teardown is another one sold right?

    🙂

  • Because of the focus on the Xbox 360 comparisons many persons around the net are ignoring the fact that Wii U's performance out of the gate in relation to 3rd party ports is not substandard with the exception of Batman and Epic Mickey 2 – it's more in line with a PS3 port of a game developed on 360 as lead platform. Batman and Epic Mickey 2, are sadly, "hot garbage". Clearly rushed and farmed out ports with no quality control in the case of Epic Mickey 2.

    An interesting and overlooked example of how Wii U's hardware performs is actually found in the Nintendo-published upgrade to Ninja Gaiden III. While this game does not have perfect performance, comparing it to the 360 version reveals a few intriguing things.

    First, NG3 on 360 suffered from massive tearing, to the point that the screen was sometimes in tatters. This seemed due to the game's requirement to run at 60fps while managing a large number of enemies with complex AI and large environments. The game also suffered from very flat lighting, giving it a grey, washed out look.

    The Wii U rebuild engages v-sync to solve the tearing issue. And in spite of v-sync being turned on, its performance in many areas of a level is better than the 360's: smoother, dropping from 60fps less frequently. Intense battles that caused the 360 version to suffer extreme screen tearing now instead drop frames or suffer micro-stutter. But the visual clarity of what's happening on the screen is improved due to the lack of tearing.

    In addition, the lighting in the game (as with several other Wii U titles) appears more advanced than the 360 version, getting rid of the flat tones and grey overcast of the original. On top of all this, the Wii U version must deal with a larger visual overload than the 360 build, as the traditional exploding and dismembered enemies that were removed from NG3 are restored, resulting in much more flying particles and alpha on the screen (sometimes literally spraying the camera with transparent liquid and debris). The Wii U handles this fine most of the time, with many of these encounters staying at full framerate and full game speed.

    I believe the overall disappointment some have expressed with Wii U is based on a lack of traditional, expected next-gen power curve – the idea that a "next gen" console should be able to trivially run ports of current gen games at superior resolutions and framerates. In part these expectations seem based on gamers' experience with PC ports. Put a 360 built game on a powerful PC, and raw, brute force can be used to run the game with superior performance to the console build.

    But in reality, it appears many do not realize that to achieve such an increase in performance requires a PC that is not merely a few times more powerful than an Xbox 360. Instead, a current day PC is fifteen or twenty times as powerful. Only such brute methods could deal with unoptimized code and engines with limited capabilities, not designed to take advantage of what modern technology can do. Given the way game engines used in the current consoles are designed, I wouldn't be surprised to see a quick and dirty port of a given game have performance problems on the next Xbox or PS3 – not for lack of power in the host console, but lack of efficiency.

  • And yet another article that believes performance lies in the clock speed. It doesn't. A lower clocked architecture can outperform a higher clocked one, especially in the case of the WiiU as it is OOO (out of order) execution.

    Do you have a GFLOPS rating for any of these chips? As long as those aren't known, people have no basis to comment on the performance of the WiiU.

  • I haven't completed reading this entire article yet (I will, though, don't worry), but I wanted to correct something which you mentioned about CPUs and their cores. Having a tri-core CPU is not the same as having 3 CPUs. 3 CPUs actually gain you more power but less efficiency (with today's pipelines) than a CPU with 3 cores. You may understand this, but the way you worded what a tri-core CPU is made it sound like you were saying it's about the same as 3 CPUs.

  • The PS2 was barely any more powerful than the Dreamcast. Sega had gained a bad reputation by then, and Sony had pre-order signage at all of the game stores even though the system was a year off. At this point, we know next to nothing about the next Xbox or PlayStation; the Wii U is here now to enjoy.

  • Reading on, I also think you should explain just WHY the Xbox 360's heat was so bad for that console at first. The type of solder, the way it was soldered, and the other types of connections on the main board all played a huge part in the RROD. I used to fix Xbox 360s all the time, and I know first-hand that even though the inside got pretty hot, it never crossed any threshold that hadn't been crossed before. The solder was cheap and would crack when the board warped a bit. Boards warp all the time, however, and should be given an allowance to do so in the way components are soldered, glued and screwed together. The Xbox 360 wasn't given that allowance. The PS3's famous YLOD was similar, but not due to solder – simply not enough airflow.

  • I can answer this. First off, Nintendo doesn't compete in terms of raw power. They have stated this multiple times. Even when they showed off the physics in Luigi's Mansion on the GameCube at E3 one year, they STILL didn't release the full specs of the console, and it was more powerful than the PlayStation 2.
    This weird little conspiracy theory Nintendo-haters come up with about the "lack of released specs" is quite odd, IMO. Nintendo haven't cared about releasing full specs since the N64, and stopped caring about being top dog in specs after the GameCube because, as you can see by the sales numbers, it didn't matter.

  • Going forward, pretty much any non-basic GPU is going to be a GPGPU. Traditionally, the functions that have become GPGPU functions were done on the CPU. The whole birth of GPU computing is thanks to higher-end gaming video cards. GPGPU computing is mostly a matter of how you program, but GPUs have now been given additional instruction sets for GPGPU code, since it has become a popular means to compute in a relatively small form factor.

    That being said, GPGPUs are generally for specialized, highly parallel tasks, and not really "programs" or "applications" in the sense most gamers know. As the author points out, physics is a very good fit for this, as are many other "next gen" graphical bells and whistles.

    In the sense this article uses the term, all the new consoles will contain GPGPUs. The PS3's Cell processor can be utilized in ways that are very similar to GPGPU processing.

    The 2GB of RAM is a decent chunk and will allow the Wii U to generate larger or prettier experiences than what you see on PS3/360. The speed of the memory being reported is a little troubling, although not a show-stopper. I can say with some confidence that the PS4/X720 will feature faster memory than the Wii U.

    Both the Wii and the Xbox 360 feature eDRAM. Saying eDRAM boosts clock speeds is not exactly accurate; eDRAM helps to get the most out of each cycle.

    I would speculate that the Multi-Chip Module has perhaps contributed to the slower clock speeds. Maybe the Wii U features a turbo mode? * SPECULATION ON MY PART *

    The whole reason this information is even news is that these speeds look alarmingly low, even with the absolute best instruction sets and an optimal amount of eDRAM.

    One thing I have not seen touched on is why the Wii U reserves 1GB of RAM for the system. I have to say my interest in the Wii U was diminished when I heard that news. Are they using that much memory for the controller? I am assuming (due to lack of information) the tablet is just a streaming device with only a minimal amount of resources, which would explain why 1GB is reserved for the system.

    I just don't see the Wii U becoming the core console of choice. The battery life and heavy GamePad will likely have people gravitating to the Wii U Pro Controller (the one that looks like an X360 controller).

    The problem with the Wii U going first is that it is likely to get mostly slightly-better ports from this generation. Unless the system really flies off the shelves, there isn't much motivation for developers (who are hurting) to risk doing awesome things on the Wii U. Not saying this as a soothsayer – saying it as a possibility.
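To picture what "GPGPU code" discussed above actually looks like, here is a minimal, purely illustrative sketch: a tiny physics "kernel" applied independently to every particle. On a real GPU each application would run on its own hardware thread; a plain `map` stands in for that here, and all the names and numbers are made up:

```python
# Data-parallel sketch: one small kernel, applied independently per element.
# On a GPU, thousands of these updates run simultaneously; map() stands in.

DT = 0.016       # hypothetical timestep in seconds (~60 fps)
GRAVITY = -9.81  # m/s^2

def integrate(particle):
    """Kernel: advance one particle's state by a single timestep."""
    x, y, vx, vy = particle
    vy += GRAVITY * DT                       # apply gravity to velocity
    return (x + vx * DT, y + vy * DT, vx, vy)

particles = [(0.0, 10.0, 1.0, 0.0), (5.0, 2.0, -1.0, 3.0)]

# No particle reads another's state, so every update is independent –
# exactly the shape of work a GPGPU chews through efficiently.
particles = list(map(integrate, particles))
```

Branch-heavy, interdependent logic – general game AI, say – doesn't decompose this way, which is why GPGPU excels at physics and post-processing rather than at "programs" in the everyday sense.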

  • There are reports that AI is a challenge on the Wii U, especially with a large number of characters on screen – something the PS3 / Xbox 360 didn't have too many problems with. As a hardcore gamer, I find that eyebrow-raising.

  • Rendering characters on-screen is a different issue from AI. Warriors Orochi is the only game where I've seen criticism of the port struggling to render characters on-screen.

    Those criticisms are fair – the game does struggle to display the same stuff on screen as the PS3/ Xbox 360 game. But then you realise the game is actually rendering double the number of characters – because it's streaming to the GamePad – and the game is something of a technical marvel, even if it doesn't look as good as the other console versions. Ironic!

    But that doesn't have much to do with AI. The AI for Warriors games is always very simple. The Wii was able to handle the Warriors' AI routines.

  • @facebook-632146952:disqus

    So now anyone skeptical of Nintendo is a conspiracy theorist and a Nintendo hater? Seriously, that comment is loaded with so much rhetoric I am wondering if you are renting a room from Reggie. I am no Nintendo hater – they were my console of choice when I was younger, but I grew up and Mario and Zelda didn't. I started preferring head-shots to tri-forces, giblets to goombas. You get my drift.

    The fact is, Nintendo came out of the gate with the Wii U trying to court the core game player. If you aren't going to come prepared with some specifications, and instead leave the community to speculate, you are doing a disservice to that core marketplace, IMHO.

    Sure, the Wii was a great success for Nintendo, but tell me, how do the third parties feel? Considering most of them ported lacklustre PS2 titles to the Wii, I would suggest they may have a different opinion of the Wii than you do, because of the lack of profitability for them. I am afraid that when the next PlayStation and Xbox are released, history may just repeat itself. The fact is, the reason you have a Wii U Pro Controller is because third parties INSISTED on it – meaning "no normal controller, don't expect games".

  • Just to clarify, Matt: the Wii U isn't double rendering just to stream the image. The rendering is done just as it is on other consoles, and then the output itself is streamed over to the GamePad. The only difference here is that the Wii U uses a bit of power to stream the image away from the console instead of through a cable to a television.

    If two completely separate images are produced and played simultaneously by two separate gamers – one on the GamePad, the other on the television – then yes, you are indeed double rendering.

    But the Wii U isn't the first console to do this – the PS3 has been capable of it for a while. If you've got the PlayStation 3D Display, you can play SimulView-compatible games with the PS3 double rendering two separate images onto the same screen (both players look at the same screen, but each one sees an entirely separate image). Even Killzone 3 and Gran Turismo 5, which are both highly taxing on the console, can be played in SimulView!

  • Well, in reference to PC vs consoles: PCs are designed to run Windows. Sure, there are lots of tricks to open up that hardware directly, but it is never quite as good as a console in terms of maximizing the performance from the hardware.

    The console, soup to nuts, is made for gaming, so all the headroom available goes to gaming. If the Xbox 360 had to run something like Windows, either the games would suffer or it would take 15 minutes to pull up your dashboard.

  • Hi DarthDiggler,

    Thanks for your insightful comment!

    eDRAM does do exactly what you've said – boost efficiency – but the more efficient it is, the faster it processes information. It doesn't significantly improve the speeds, but even a small increase is still an increase.

    I wanted to touch on the 1GB of reserved RAM in the piece too, but this article was only meant to give basic information to the many who don't understand any of this at all. I do think that a good portion of that is used in the streaming process, because (to the best of my knowledge) the controller doesn't have any processing power.
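One way to see the "more out of each cycle" point about eDRAM is the classic average-memory-access-time formula, AMAT = hit time + miss rate × miss penalty. The sketch below uses made-up round numbers – not real Wii U latencies – just to show how a fast on-die memory pool cuts cycles lost to waiting without the clock ever changing:

```python
# AMAT = hit_time + miss_rate * miss_penalty, measured in CPU cycles.
# All latencies below are invented round numbers for illustration only.

def amat(hit_time, miss_rate, miss_penalty):
    """Average cycles spent per memory access."""
    return hit_time + miss_rate * miss_penalty

# Every access goes out to main RAM (no fast on-die pool): ~100 cycles.
no_edram = amat(hit_time=0, miss_rate=1.0, miss_penalty=100)

# 90% of accesses served by a 10-cycle on-die pool instead: ~20 cycles.
with_edram = amat(hit_time=10, miss_rate=0.1, miss_penalty=100)

print(with_edram < no_edram)  # same clock, far fewer cycles wasted waiting
```

So the clock speed never moves, but the chip spends roughly a fifth of the cycles stalled on memory – which is what "getting more out of each cycle" means in practice.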

  • Hi Gregory,

    Thanks for stopping by!

    I'll answer both of your comments here in one. With the tri-core, I do indeed understand that it's as you stated. The reason I worded it the way I did in the article wasn't to mislead; it was only to give a very basic understanding of how things work for those who have no idea what a "core" or "processor" actually is. I'm actually just walking in from work and trying to get to all of the comments between here and N4G, but I will indeed give it a read-through and rearrange the wording if need be – thanks for the tip, it's much appreciated.

    With the solder on the Xbox 360, I pondered placing that in the article, but I felt I might get too off track with it, seeing how the entire piece is almost entirely focused on the Wii U. If someone didn't know what solder was, would I need to explain it and its use in the piece too? So I chose not to get into that here, but your point is well taken. I didn't put that part in there to be used as a scare tactic by any means – I only wanted to give a precaution to owners and future owners.

    It's obvious that your knowledge is beyond my basic knowledge of these issues – do you think that a completely unventilated Wii U could potentially fail? The amount of heat shielding and the size of the heat sink make me think that this thing could get extremely hot in the right (err… wrong!) conditions.

  • Hi Moribund Cadaver,

    There have been a few lacklustre ports and we will indeed see things get better as developers have more time to learn to build on its architecture.

    Ninja Gaiden III on Wii U is getting little talk because the game was nowhere near the expectations of the hardcore fans who have supported the series from its roots (myself included). I (sadly) have a 3/10 published review that's factored into the pot at Metacritic. I'm glad that Nintendo did what they did with the title and that it's finding a few more fans now, but I hope that Team Ninja never does this to its loyal fan base again.

    In response to another thing you stated, I think the real issue people are having with the Wii U is that there's no new AAA Nintendo title that absolutely stuns its owners. New Super Mario Bros. U is great, but it's been done before. It needs either a brand new IP, or a new Metroid or Zelda, to really show off what it can do.

  • It's mind boggling – isn't it?

    I think the reason they don't is because they would get so much negativity before release that it would turn off potential buyers. Nintendo doesn't need massive amounts of power to sell consoles any more and it's easier for them to avoid this altogether.

    Heck, with both the 3DS and now the Wii U, people fully believed they were going to be "next generation" powerful. The Wii U was supposedly 50% more powerful than the PS3, and Resident Evil Revelations on the 3DS supposedly had better graphics than the PS3 too. Why should they release the specs? Haha

  • "People are all assuming that the Xbox 360 and PS4 will be all-powerful
    consoles. The reality is that even if they have all that extra power,
    there will be so few games produced that take advantage of the
    technology that it will be irrelevant."

    It all depends on the development platform, and Sony has been listening to devs since the PS3, creating a development platform that is usable. I hear the Vita dev platform is great and adds a lot of productivity to the development cycle.

    Suggesting that Sony and MS are including technology in their systems that developers won't be able to utilize because of a learning curve isn't exactly true. For one, I am sure Naughty Dog got an early look at the system to dig into it and help with the development platform (kind of like how they helped with the PS3).

    The fact is, there will likely be launch games that use most of the bells and whistles, because publishers will want to visually separate them from the older generation. They will likely not be the best examples; as the consoles age, the software prowess will increase.

  • It is double rendering – it's possible to play "split screen" Warriors Orochi where one player has the entire TV screen to themselves, and the other player has the Gamepad screen, and can happily be hacking away at the other side of the battlefield.

    As far as I've seen, there are none of the usual concessions that Tecmo Koei makes when running split screen – the experience is exactly the same as if two people were playing single player. So that suggests to me the console is double-rendering.

    And yeah, I'm aware that the Warriors games are also double-rendered on the PS3. I'm not really interested in trying to figure out which one is or isn't more powerful – I'm just clarifying non-technical things for folks 😛

  • My favourite thing about the game – and the reason I decided to trade in my PS3 version of it to keep the Wii U version. My wife and I can play local multiplayer in complete comfort! 🙂

  • Great points guys.
    Digg, your last comment about consoles being made just for gaming apps got me thinking….because with future consoles this may no longer be the case. The next Xbox may very likely run some win8 compatible variant. Sony will have to keep pace with MS and mobile threats while furthering their own ambitions to carve out a chunk of living rooms worldwide. And the Wii U, of course, has a large amount of resources reserved for persistent system processes, of which web browsing and a social network may be just the tip of the iceberg.
    I have seen rumors that place anywhere from 4 to 8 gb in the new XBox…now how much of that will be reserved for the OS? :p

    At play may be the crux of traditional consoles going forward… is going more all-purpose the best way to compete and thrive this next generation, or is there a point where too much functionality compromises core gaming performance so badly that customers are turned off?

  • I did not know people cared so much about those ducks, hahah.
    A broader point here, though, is that trumpeting specs and tech demos sometimes serves to mislead… because they are marketing points that serve a marketing purpose.
    This is probably repeating something oft-said, but we can again look to the GameCube as an example. When the paper specs from the three consoles of that generation were lined up, people bemoaned the 'Cube's lower numbers. In the end, however, that unfortunate box was shown to be quite a beastly performer versus its chief competitor… but that competitor wiped the floor with it, because it turned out that awesome specs don't make superior games – awesome developers do.

    Also, thumbs up to "every teardown = one mo sale!"

  • We would just be stating the obvious… those systems MUST offer a certain jump in performance, because they differentiate themselves from Nintendo (and encroaching upstart platforms) on that axis.
    However, it seems you are making the point that their bump in hardware will place them above what the Wii U can handle comfortably?
    If I were MS or Sony, of course I would absolutely want to position my hardware as such… I just don't know if they could afford to do so.

  • I found this to be an excellent article. It's honestly the first non-biased, fact-driven piece I've read. Whether this system will match the Xbox or PS3 or keep pace with the new models doesn't matter to me; it's a totally different system. One thing I'd like to know is what reason Nintendo has for not releasing the full specs of their console. There is enough negativity out there already for a machine that looks awesome and plays even better. (Many side-by-sides look like the U is more crisp and detailed.) I guess I found myself astounded by the ridiculous comments I was seeing all over the place, especially having owned one and experienced nothing like what I was reading. So it's more than refreshing to see someone take the time to dispel a lot of the hot air out there.

  • Oh, that's indeed the perfect reason to do so! Um… how do I get my wife to play Warriors Orochi 3? Now that's a question to find an answer to! Haha

  • Excellent article Chris.
    I agree that games like GTA V are possible. It is a question of optimisation. I hate it when devs like Crytek talk about the Wii U like it is an outdated piece of hardware. Crytek should be the first to shut up, with their un-optimised games.
    Sometimes, to be honest, I wonder if developers deliberately don't optimise games properly, just to promote PC upgrades and the need for new consoles.
    Look at games like Uncharted 3. The PS3 is supposedly "outdated" and yet it looks better than most PC games. The usual excuses devs give about graphics updates in their games – that "new algorithms are found" or existing ones are improved – are complete lies. Algorithms, data structures, etc. are all already developed. We have data structures with O(1) efficiency for various operations, which is as good as it can get. Then why do we see these graphics updates?

    My theory:
    - They are lazy.
    - They deliberately hold resources back. Why invest in better graphics, and thus more costs, on a recently established console?
    - They keep something "new" for their audiences.
    - They want to be able to moan about new consoles.

    I believe the Wii U has the power needed packed inside it. It just needs devs to invest time learning parallel programming for the GPGPU and work things out.

    The only things that worry me about the console, in terms of hardware:
    - Lack of storage space.
    - The tri-core PowerPC CPU. Why not go with a four-core AMD CPU, as the next PlayStation/Xbox reportedly will? It would have made devs' lives easier.
