It's interesting that nVidia's Coremark slide uses a recent GCC 4.4.1 build on their Tegras but a much older GCC 3.4.4 build on the Core 2 Duo. I can't help but think nVidia is trying to manufacture a poor Core 2 Duo score in order to make their own CPU look more impressive.
------------------------------------------------------------------------------------------------------------------
Did anyone look at the fine print in the chart with the Coremark benchmark?
Not only do they use more aggressive compiler flags for their own products than for the T7200, but they also use a much more recent version of gcc. At the very least, they are comparing apples and oranges. Actually, I'm more inclined to call it cheating...
------------------------------------------------------------------------------------------------------------------
As for the performance metrics demonstrated... The 'gaming' result is most likely due to the improved graphics, which is unquestionably NVIDIA's strength. The "Coremark 1.0" results, meanwhile, are even more amusing. If that Kal-el score is indicative of final-frequency performance, then I'd expect it to still be running at 1GHz, because Coremark is an unrealistic benchmark that scales linearly with the number of cores. It's also basically just an integer benchmark (more information is available on their site). In other words, that benchmark implies zero per-core performance increase for Kal-el over Tegra 2.
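To make that concrete, here is a rough back-of-envelope sketch of what linear core scaling implies. The scores below are made-up placeholders, not NVIDIA's published numbers:

def per_core_score(total_score, cores):
    # Coremark scales roughly linearly with cores, so divide to compare per-core throughput.
    return total_score / cores

tegra2_score = 5000.0    # placeholder dual-core Cortex-A9 @ 1 GHz result
kalel_score = 10000.0    # placeholder quad-core result, roughly 2x the dual-core

print(per_core_score(tegra2_score, 2))   # 2500.0 per core
print(per_core_score(kalel_score, 4))    # 2500.0 per core, i.e. zero per-core improvement

If the quad-core score is merely twice the dual-core score at the same clock, the per-core throughput hasn't moved at all, which is exactly the point above.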
------------------------------------------------------------------------------------------------------------------
Very cool chip, lots of great technology. But it will not be successful in the market.
A 1080p High Profile decode onto a tablet's SXGA display can easily push memory traffic into the 1.2 GB/s range. If you drive it over HDMI to a TV and then run a small game, or even a nice 3D game, on the tablet's main screen, you can easily get into the 1.7 to 2 GB/s range.
Why is this important? A 533 MHz LPDDR2 channel has a maximum theoretical bandwidth of about 4.3 GB/s. Sounds like enough, right? Well, as you raise the DDR frequency, your _actual_ bandwidth efficiency drops due to latency issues. In addition, across real workloads, the usable bandwidth you can get from any DDR interface is between 40 and 60% of the theoretical max.
So that means the single channel will deliver somewhere between about 2.5 GB/s (at 60%) and 1.7 GB/s (at 40%). Trust me, ask anyone who designs SoCs and they will confirm the 40 to 60% bandwidth figure.
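For anyone who wants to check the arithmetic, here is a minimal sketch assuming a 32-bit channel at 533 MHz double data rate and the 40-60% efficiency range quoted above (the bus width is my assumption; the slide doesn't state it):

# Back-of-envelope bandwidth for a single 32-bit LPDDR2 channel at 533 MHz.
clock_hz = 533e6
transfers_per_clock = 2      # double data rate
bus_bytes = 4                # assumed 32-bit channel

theoretical_gbs = clock_hz * transfers_per_clock * bus_bytes / 1e9
print("theoretical: %.2f GB/s" % theoretical_gbs)    # ~4.26 GB/s

# Usable bandwidth across mixed workloads is commonly quoted at 40-60% of theoretical.
for efficiency in (0.40, 0.60):
    print("at %.0f%%: %.2f GB/s" % (efficiency * 100, theoretical_gbs * efficiency))
    # ~1.71 GB/s at 40%, ~2.56 GB/s at 60%, which sits right on top of the
    # 1.7-2 GB/s demand from a 1080p decode plus HDMI-out plus a 3D game.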
So the part will be restricted to use cases that current single-core, single-channel chips can already handle.
So this huge chip with 4 cores, 1440p-capable video, and probably 150 MT/s of 3D throughput has an Achilles' heel the size of Manhattan. Don't believe what Nvidia is saying (that dual-channel memory isn't required). They know it's required but for some reason couldn't get it into this chip.
Nothing new; they're doing the same thing everyone else does. There are lies, damned lies, and benchmarks.

Hmm, how many times has it happened that a company announces a successor too early and kills sales of the existing product? Let's say everything they're saying is true and that they will release processors with the stated specifications, at reasonable prices, on those timelines. Why would anyone buy the first model knowing that in six months they could get one that is two, three, or five times more powerful (whether they'd actually be able to use that power is another story)?