NVIDIA Pascal Graphics Cards Priced at $600 & $900 Leaked in Shipping Logs

When exactly will nVidia demo and then release Pascal-based cards?

While we have been hearing about the impending unveiling and demonstration of GPUs based on nVidia’s upcoming Pascal architecture for a while, there hasn’t been much substance behind the rumors until now.

In what appears to be a leaked shipping log for nVidia GPUs, several details caught our eye.

The leaked ‘shipping logs’ sourced from WCCFTech

There are two distinct GPUs listed, one priced at $600 and the other at a whopping $900, clearly hinting that they are high-performance cards. Now the serial number is where it gets interesting: nVidia’s code for its opening keynote (ignoring the prefixes) is also 699.

Normally, I would treat this as a coincidence. The 699 serial number, though, has become synonymous with Pascal boards, and the fact that it was first spotted in December points to the same conclusion.

If nVidia is indeed planning to demo Pascal-based boards at the GPU Technology Conference in April, Pascal is going to headline the whole event.

We would like to remind our readers at this point that, on paper, the Pascal architecture is a major upgrade over its predecessor, Maxwell. Pascal takes a brand new approach to GPU memory and moves to a 16nm process, down from Maxwell’s 28nm. There is also NVLink to spice up the GPGPU side of things, although the CPU-to-GPU links will only arrive once IBM ships its NVLink-capable POWER processors in 2017.

When it comes to memory, gone are the days of GDDR5: High Bandwidth Memory (HBM) is starting to cement its place as the industry standard. Add unified GPU and system memory to that and you’ve got a headline-grabber.

It’s not difficult to see why Pascal is generating so much press interest. The question is: will it be able to live up to the hype? Will it be able to surpass the success of Maxwell? We can only wait and see!

  • James

    If the top-end card carries 32GB of VRAM, like holy shit! If I SLI two of them that’s…64GB…OF VIDEO RAM. I don’t even have 16GB of system RAM. I’m slowly planning to empty out my bank account for a new machine to feed my addiction…

    • BlatantFool

      Don’t think you know a lot about SLI and VRAM… Whatever is in one card’s VRAM is mirrored on the second. In other words, two cards in SLI would still give you the same 32GB of usable VRAM in this case. Also, if two cards in SLI have different amounts of VRAM, they’ll only use as much as the smaller of the two.

      • Juhni Blazn

        You need to keep up with developments from a few years ago then.

        http://www.guru3d.com/news-story/both-mantle-and-dx12-can-combine-video-memory.html

      • James

        LOL I was being facetious intentionally.

        DX11 and OpenGL have been using AFR (Alternate Frame Rendering) since multi-GPU technologies started (2004). AFR duplicates the data from the main GPU to all the other GPU(s). Each GPU then ‘alternates’ between which frame it renders and the next frame(s) in the queue: AFR1 takes the odd frames and AFR2 the even frames.
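        To put the idea in code terms, here’s a toy sketch (hypothetical helper names, not from any real driver) of how frames get handed out round-robin under AFR:

        ```cpp
        #include <cstdio>

        // Toy AFR dispatcher: frame N goes to GPU (N % gpuCount).
        // With 2 GPUs this is the classic odd/even split (AFR1 / AFR2).
        static unsigned PickGpuForFrame(unsigned frameIndex, unsigned gpuCount) {
            return frameIndex % gpuCount;
        }

        int main() {
            const unsigned gpuCount = 2;
            for (unsigned frame = 0; frame < 8; ++frame) {
                // Every GPU holds a full, mirrored copy of the scene data in VRAM;
                // only the frame assignment alternates.
                std::printf("frame %u -> GPU %u\n", frame, PickGpuForFrame(frame, gpuCount));
            }
            return 0;
        }
        ```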

        My comment is based on the new technique called Multi-Adapter, where all of the GPUs installed (including the integrated GPU in the CPU or on the motherboard) add up to one large pool of memory, allowing the 3D pipeline to render passive elements on the slower GPU(s) and vice versa.
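        Roughly speaking, on the DX12 side that means the application enumerates every adapter itself and drives each one explicitly. A minimal sketch, assuming Windows 10 and the Windows SDK (error handling and the actual rendering are omitted):

        ```cpp
        #include <windows.h>
        #include <d3d12.h>
        #include <dxgi1_4.h>
        #include <wrl/client.h>
        #include <cstdio>
        #include <vector>

        using Microsoft::WRL::ComPtr;

        int main() {
            // Enumerate every adapter in the system: discrete cards and the
            // integrated GPU all show up here.
            ComPtr<IDXGIFactory4> factory;
            if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

            std::vector<ComPtr<ID3D12Device>> devices;
            ComPtr<IDXGIAdapter1> adapter;
            for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
                // With explicit multi-adapter each adapter gets its own device; the
                // application, not the driver, decides which device renders what.
                ComPtr<ID3D12Device> device;
                if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                                IID_PPV_ARGS(&device)))) {
                    devices.push_back(device);
                }
            }

            // Each device owns its own memory; data moves between GPUs through
            // resources placed in D3D12_HEAP_FLAG_SHARED_CROSS_ADAPTER heaps and
            // copied on a queue, rather than being mirrored automatically as in AFR.
            std::printf("usable D3D12 devices: %zu\n", devices.size());
            return 0;
        }
        ```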

        Multi-Adapter is different from SFR (Split-Frame Rendering), which divides the frame itself into sections, anywhere from 2 up to 4, allowing each section of that frame to be completed by a specific GPU. SFR also allows programmable tasks to be completed on only 1 GPU instead of the total number available. So you can have post-processing/shading/shadow maps on the 2nd GPU, while GPU 1 compresses texture maps, draws objects, etc.
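        For completeness, the SFR split itself is just a partition of the frame, something like this (purely illustrative numbers):

        ```cpp
        #include <cstdio>

        int main() {
            // Illustrative only: cut a 2160-line frame into horizontal bands,
            // one per GPU, the way split-frame rendering divides the work.
            const int frameHeight = 2160;
            const int gpuCount = 4;                        // SFR typically uses 2-4 sections
            const int bandHeight = frameHeight / gpuCount;

            for (int gpu = 0; gpu < gpuCount; ++gpu) {
                const int top = gpu * bandHeight;
                const int bottom = (gpu == gpuCount - 1) ? frameHeight : top + bandHeight;
                std::printf("GPU %d renders scanlines %d-%d\n", gpu, top, bottom - 1);
            }
            return 0;
        }
        ```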

        Don’t expect everybody who posts to be F.O.B.’s. I’ve been following the semiconductor industry for over 15 years so I know my shit. I just like being playful ;).

        • Scott Morel

          Nvidia already confirmed that GeForce GPUs will max out at 16GB. 32GB is for high-performance uses like deep learning. Still, if that $900 card is the Pascal Titan (Titan P?) I will totally buy it.

          • Luke William

            I have a feeling 16GB will be enough for most for at least 5-10 years.

          • xostrowx1991

            I’m doubting it. There was a previous leak of prices that suggested a $700 GTX 1080, a $900 GTX 1080 Ti, and an $1,100 GTX “Titan P”. And the recently revealed specs show that they are indeed releasing a 1080 Ti model. The 1080 will have 4,096 CUDA cores, the 1080 Ti 5,120 CUDA cores, and the “Titan P” will apparently top out at a whopping 6,144, which is surprising and sounds a bit too good to be true, but is actually in line with their “2x performance per watt” figures if the “175 watt” TDP leak for the 1080 holds true.

            This means that the GTX 1080 will likely be between 15 and 25% faster than the GTX 980 Ti, and the GTX 1080 Ti will be closer to 30-40% faster than the GTX 980 Ti, with the “Titan P” being over 50% faster than the Titan X (which is around 55-58% faster than the 980 Ti).
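            For what it’s worth, the ratios behind those estimates mostly fall out of the rumoured core counts alone. A quick back-of-the-envelope check (leaked figures only, ignoring clocks, bandwidth and architectural gains):

            ```cpp
            #include <cstdio>

            int main() {
                // Rumoured CUDA core counts quoted above; none of these are confirmed.
                const double gtx1080   = 4096.0;
                const double gtx1080Ti = 5120.0;
                const double titanP    = 6144.0;

                // Naive scaling by core count alone, just to show where the ratios come from.
                std::printf("1080 Ti vs 1080   : +%.0f%%\n", (gtx1080Ti / gtx1080 - 1.0) * 100.0);
                std::printf("Titan P vs 1080   : +%.0f%%\n", (titanP / gtx1080 - 1.0) * 100.0);
                std::printf("Titan P vs 1080 Ti: +%.0f%%\n", (titanP / gtx1080Ti - 1.0) * 100.0);
                return 0;
            }
            ```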

            These figures could go down a “bit” if they are even more conservative with power than we are currently seeing, but I doubt it would change much.

            Having a $600-700 GTX 1080 with 8GB of GDDR5X, high-bandwidth RAM that sits around halfway between GDDR5 and HBM2 (so just a bit slower than the HBM1 on the Fury X), with 4,096 CUDA cores, NVLink, Tier 3 DX12 support, and only a single 8-pin power connector drawing 175 watts? That’s well worth the cost in my opinion.

          • Scott Morel

            Yea I saw that but idk how accurate that chart is. Guess we will just wait and see.

      • James

        AFR (Alternate Frame Rendering) duplicates the data in the VRAM buffer to all the cards, then ‘alternates’ which frames are rendered by the GPUs. Explicit Multi-Adapter in DX12 allows memory pools to be combined by treating all GPUs as a single device.
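        The ‘single device’ flavour of this is what D3D12 calls linked adapters: one device exposing several nodes, with node masks saying where a resource lives and which nodes can see it. A hedged sketch (the types and flags are from the public D3D12 headers, the rest is illustrative and no real rendering is set up):

        ```cpp
        #include <windows.h>
        #include <d3d12.h>
        #include <wrl/client.h>
        #include <cstdio>

        using Microsoft::WRL::ComPtr;

        int main() {
            // Create a device on the default adapter; with linked adapters
            // (e.g. two cards behind an SLI bridge) it can expose several nodes.
            ComPtr<ID3D12Device> device;
            if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                         IID_PPV_ARGS(&device)))) return 1;

            const UINT nodeCount = device->GetNodeCount();

            // A resource can be created in one node's memory and made visible to the
            // others via node masks, which is how the pools add up instead of being
            // mirrored. A real app would pass this to CreateCommittedResource.
            D3D12_HEAP_PROPERTIES heapProps = {};
            heapProps.Type             = D3D12_HEAP_TYPE_DEFAULT;
            heapProps.CreationNodeMask = 1u << 0;                 // lives on node 0
            heapProps.VisibleNodeMask  = (1u << nodeCount) - 1u;  // visible to all nodes
            (void)heapProps;

            std::printf("GPU nodes on this device: %u\n", nodeCount);
            return 0;
        }
        ```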

        My comment assumes the Explicit Multi-Adapter changes to the DX12 API.

        You can’t expect everybody on these forums to be clueless…