DirectX 12 benchmarks for Nvidia’s fastest graphics card ever, the GeForce GTX 1080, have finally surfaced. Impressive entries for the GTX 1080 have made their way to the Ashes of The Singularity benchmarking database and we’re going to share them with you.

| Graphics Card         | GTX 980        | GTX Titan X  | GTX 1070             | GTX 1080             |
|-----------------------|----------------|--------------|----------------------|----------------------|
| Manufacturing Process | 28nm           | 28nm         | 16nm                 | 16nm                 |
| Transistors           | 5.2 Billion    | 8 Billion    | 7.2 Billion          | 7.2 Billion          |
| CUDA Cores            | 2048           | 3072         | TBA                  | 2560                 |
| Memory Bus            | 256-bit        | 384-bit      | 256-bit              | 256-bit              |
| Launch Date           | September 2014 | March 2015   | 10th June 2016       | 27th May 2016        |
| Launch Price          | $549           | $999         | $449 (Founder's Ed)  | $699 (Founder's Ed)  |

NVIDIA GeForce GTX 1080 DirectX 12 Performance Benchmarks Revealed

The “Founder’s” edition is just the name Nvidia decided to give to the reference design card, featuring the blower fan and the metallic shroud. Many gamers found this quite bizarre, as reference design cards are usually the least sought after due to their higher noise output, higher temperatures and lower clock speeds. Nvidia’s decision to market the reference design as a premium option very likely stems from the initial limited availability of the card, so the $100 premium will serve as an early-adopter tax for the time being, until Nvidia’s board partners launch their own custom versions of the GTX 1080 at $599 later on.

Reviews for the GTX 1080 will go live on May 17th, 10 days before the Founder’s edition GTX 1080 is made available for purchase. The GTX 1070 Founder’s edition will launch June 10th for $449, with the board partner cards launching later for $379. The embargo date for GTX 1070 reviews hasn’t been revealed yet, but it will likely precede the June 10th launch by at least a week, as was the case with the GTX 1080.

According to Nvidia, the GeForce GTX 1080 will be roughly 20% faster than the GTX 980 Ti, while the GTX 1070 will be slightly faster than reference GTX 980 Ti cards and on par with factory-overclocked variants.

GeForce GTX 1080 DirectX 12 Performance Revealed

The entries include two resolutions, 2560×1440 and 1920×1080, both run at the “Crazy” graphics preset. The GTX 1080 was 13% faster than the GTX 980 Ti and 11% faster than the R9 Fury X at 1920×1080. At 2560×1440 and the same preset, the GTX 1080 was 9% faster than the GTX 980 Ti and 11% faster than the R9 Fury X.
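For reference, here is how an “X% faster” figure is derived from two raw benchmark scores. This is a minimal Python sketch; the frame rates in it are hypothetical placeholders, not the actual Ashes of The Singularity results:

```python
# Minimal sketch of how "X% faster" figures are derived from raw
# benchmark scores. The frame rates below are hypothetical placeholders,
# not the actual Ashes of The Singularity results.
def percent_faster(score_a: float, score_b: float) -> float:
    """Return how much faster score_a is than score_b, in percent."""
    return (score_a / score_b - 1.0) * 100.0

gtx_1080_fps = 67.8    # hypothetical
gtx_980_ti_fps = 60.0  # hypothetical

print(f"GTX 1080 is {percent_faster(gtx_1080_fps, gtx_980_ti_fps):.0f}% faster")
# prints "GTX 1080 is 13% faster"
```

Note that the ratio is taken against the slower card’s score, which is why a “13% faster” claim and a “12% slower” claim can describe the same pair of numbers.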

DirectX 12 Ashes Of The Singularity – 1080p Crazy Preset

DirectX 12 Ashes Of The Singularity – 1440p Crazy Preset

These numbers aren’t as high as the ones touted by Nvidia during the press event over the weekend, but they’re respectable nonetheless. We also can’t draw any definitive conclusions by looking at the card’s performance in one game, so keep your eyes peeled for those reviews on the 17th.

Nvidia Marketing Benchmarks – Take With A Grain Of Salt

The Pascal Building Block Of Every GTX 1080 And GTX 1070 Graphics Chip

The basic building block of every Pascal GPU is called the streaming multiprocessor or SM for short. The streaming multiprocessor is a graphics and compute engine that schedules and executes instructions on many threads simultaneously.

NVIDIA Pascal GP100 SM

Each Pascal streaming multiprocessor houses 64 FP32 CUDA cores, half that of a Maxwell SM. Within each Pascal streaming multiprocessor there are two 32-core partitions, two dispatch units and a brand new, smarter scheduler, in addition to an instruction buffer that’s twice the size of Maxwell’s per CUDA core. This gives each Pascal CUDA core access to twice the registers compared to Maxwell.

The end result is more performance per clock per CUDA core, lower power consumption and a higher overall clock speed. The updated hardware scheduler extends Pascal’s abilities to execute code asynchronously, which will no doubt have a positive impact on the architecture’s performance when it comes to DirectX 12 Async Compute.
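Those per-SM figures can be sanity-checked with some back-of-the-envelope arithmetic. A minimal Python sketch: the 64-cores-per-SM count comes from the paragraph above, while the 1733MHz boost clock is Nvidia’s quoted figure for the GTX 1080 and the “2 FLOPs per core per clock” factor assumes one fused multiply-add per cycle:

```python
# Back-of-the-envelope check of the SM layout described above.
CORES_PER_PASCAL_SM = 64   # vs. 128 CUDA cores per Maxwell SM

def pascal_sm_count(total_cuda_cores: int) -> int:
    return total_cuda_cores // CORES_PER_PASCAL_SM

def peak_fp32_tflops(cuda_cores: int, clock_ghz: float) -> float:
    # One fused multiply-add = 2 FLOPs per core per clock.
    return cuda_cores * 2 * clock_ghz / 1000.0

print(pascal_sm_count(2560))                    # 40 SMs on a GTX 1080
print(round(peak_fp32_tflops(2560, 1.733), 1))  # ~8.9 TFLOPS
```

That ~9 TFLOPS result lines up with the compute figures being thrown around for the card, though peak compute rarely translates one-to-one into game performance.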

If you want to read more about the Pascal architecture, check out our in-depth breakdown of the architecture versus its predecessors Maxwell (GTX 900 series) and Kepler (GTX 600 and 700 series) here.

DirectX 12 Async Compute Still As Important As Ever – Can Nvidia Catch Up To AMD?

I dove deep into the Pascal architecture last month and explored the nitty-gritty details. One of the more important architectural changes Nvidia has introduced with Pascal is the addition of a hardware scheduler, similar to what AMD did with the GCN architecture in 2011, starting with the HD 7000 series.

This new hardware scheduler will play a crucial role in allowing Pascal GPUs to perform better at executing tasks asynchronously, even though it still evidently relies on pre-emption and context switching according to what Nvidia has revealed in its Pascal whitepaper.

So while this scheduler doesn’t actually allow tasks to be executed asynchronously, it will still improve the performance of Pascal GPUs when it comes to executing code that’s written asynchronously. It’s sort of a stopgap to tide Pascal over until proper async compute is implemented in Nvidia’s future architectures.
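To make the distinction concrete, here is a toy timing model contrasting ideal asynchronous overlap with preemption-based interleaving. This is an illustration only, not a description of how Nvidia’s or AMD’s drivers actually schedule work, and all numbers in it are made up:

```python
# Toy timing model: true async compute overlaps graphics and compute
# work, while preemption serializes them and pays a context-switch cost.
def overlapped_ms(graphics_ms: float, compute_ms: float) -> float:
    # Ideal async compute: the compute work fills otherwise-idle units,
    # so total time is bounded by the longer of the two workloads.
    return max(graphics_ms, compute_ms)

def preempted_ms(graphics_ms: float, compute_ms: float,
                 switch_cost_ms: float, switches: int) -> float:
    # Preemption serializes the workloads and adds context-switch overhead.
    return graphics_ms + compute_ms + switch_cost_ms * switches

print(overlapped_ms(10.0, 4.0))         # 10.0 ms per frame
print(preempted_ms(10.0, 4.0, 0.5, 4))  # 16.0 ms per frame
```

A smarter scheduler reduces the number and cost of those switches, which is why Pascal can improve on Maxwell here even without fully concurrent execution.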

Async Compute has always been a controversial issue for Nvidia, largely because the company refused to talk about it for many months and promised a driver that would enable it on Maxwell that never came. The addition of an updated hardware scheduler in Pascal signals a change of heart for Nvidia. It represents a walk-back on some of the trade-offs that the company decided to make with Maxwell to achieve its power efficiency goals.

Many developers argued that these trade-offs were reasonably sound for DirectX 11 and traditional generic APIs, but less so for the new era of VR and low-level APIs such as DirectX 12 and Vulkan, where executing code asynchronously has proven beneficial to both latency and performance. Pascal should be better at async compute thanks to the new hardware scheduler, though exactly how much better no one knows yet. However, if these DirectX 12 Ashes of The Singularity benchmarks – a game that makes plentiful use of DirectX 12 async compute – are any indication, then we’re likely only looking at minimal improvements with Pascal.

  • Shamz

    Gimped benchmark. It doesn’t even use parallel async compute which is what it should be using.

  • Red Bull

    Nvidia Async is fake. Their new GPUs will suffer the same fate as the Maxwell lineup when more DX12 games come in late this year and from 2017 onwards.

    • lozandier

      Wait till Big Pascal, and see whether async compute will even be commonly used by AAA games, before saying such opinionated things disguised as facts.

      This could be DX11 DCL all over again

  • Diglo1

    My best guess is that the GTX 1080 is better by about as much as it has more gflops, which represent the real performance of the card, and it’s about 28% faster than the GTX 980 Ti. Of course this doesn’t tell us much about the architecture itself and what advantages it has until we see more benchmarks.
    So if the Fury X performs only 13% worse and has 5% less compute power than the 1080, it’s just a marginal improvement for Nvidia (50%, not 100%). I grant that Pascal seems to be a great overclocker, but I doubt the majority will reach 2GHz…
    Given that Polaris should be 2.5x more efficient than the previous generation, AMD’s Fury X-class card should consume less power than the 1080, even when adding 13% more power consumption to close the performance gap seen in AOTS. This is true even when only expecting 2x efficiency, which is more likely.
    So we are gonna see very nice competition, while Polaris has full support for async and Pascal is taking baby steps.
    We didn’t see GTX 1070 benchmarks, which I thought would be the one to compete with Polaris 10, but it seems Polaris 10 will then be just below the GTX 1080.
    AOTS in my opinion is the one to benchmark to see what async can really do when fully enabled. People scream it’s not fair for Nvidia, when it hasn’t been that fair for AMD for the past few years. Async has always been the way to go, and CPUs already do this.
    Quantum Break is broken, but we can see that AMD’s async and console market is paying off, since the game is a lot easier to optimize for AMD than Nvidia.
    I know this sounds all bad for Nvidia, but it’s smart marketing from AMD, and it wasn’t that long ago that I heard the same from other people about GameWorks.
    The 1080 Ti will come along, but so will AMD’s Vega, and then we’ll see the real fight for the performance king.

  • Piiilabyte III

    Just like Intel, NVIDIA will always hold the flag. I wish this weren’t true.

  • wargamer1969

    Waiting on legit reviews at 4k not lame 1080p or 1440p.

    • The vast, VAST majority of gamers are still on 1440p or lower. That’s where their target market is right now. There’s plenty of bad things to say about Nvidia but they’re not dumb enough to ignore 90% of their customers to cater to 5% of them. There’s nothing “illegitimate” about 1080 and 1440 benches unless you’re bound and determined to be salty and can’t find another more valid reason to be.

  • Prince Chèn

    No real Asynchronous Compute, but a temporary fix for things coded for Asynchronous compute. I wonder what the 980 ti’s price will be after this releases, the Fury is good, but power hungry. Either way, not looking to buy a GPU anytime soon. Bought a GTX 970 recently.

    • Sergiu Petrica

      Prices on new 900 cards will probably not change that much because the cards will go out of stock soon.
      However prices on used 900 cards are already plummeting.

  • Incognito Jay

    Okay. How do we get rid of the left border and align the text left? I hate narrow centered text. I’m on a 4k monitor if that has anything to do with it.

  • koila maoh

    Dat Power efficiency, Need something low power for the dam summer! too hot.
    The benchmarks don’t seem that great, we need more games and more cards
    to reference.

  • mariovsgoku

    I’m interested in seeing 12K performance with four 1080s in SLI. There were a lot of tessellation issues when trying to run three 4K monitors with a Quad-Titan X setup, so I’m curious to see if Nvidia has fixed the issue. Admittedly, very few people are rich enough to have experienced this issue, but it should be interesting to see nonetheless.

    • 3R45U5

      not gonna happen. pascal only supports two way sli according to Linus.

      • mariovsgoku

        Well that’s disappointing. How are we supposed to get good 12K performance now?

    • Phyxsyus

      1080 only supports 2-way SLI…

  • Dragos Lucian

    These are results of the reference card. Wait for a GTX 1080 Kingpin edition and then we’re talking.

    • Thomas Olson

      You misspelled MSI Lightning Edition. 😉

  • Natural Gamer

    So this is the second time Nvidia lied to us about it async compute.. I’m moving to AMD, f**k you Nvidia!

    • Buddydudeguy

      You like many, are way too obsessed with async compute.

      • Protoss

        async compute is giving my R9 390 10 more fps than the GTX 980 in Hitman on DX12. compare the price tags of the cards and say again that async compute doesn’t require that much attention

        • renz

          and that’s the problem with async compute: it needs DX12. and let’s face it, not every developer is willing to go low level, for various reasons. even with Hitman, sure they can get more frame rates, but it’s also a lot more unstable than DX11. more time and effort is needed to make it work, not to mention the need to tweak async compute on a per-card basis, as mentioned by the Hitman developers themselves.

          • eucalyptux

            but they will in the future, especially in console-ported games; DX12 is still very young.
            This is also very important for VR

          • renz

            you don’t need DX12 to use VR. and DX12 is not like previous DX versions, where the new DX was meant to replace the older version altogether. DX12 is for those that want low-level access. for those not willing to deal with low-level complexity, DX11 is there for them to use.

        • Sergiu Petrica

          The 390x is neck and neck with the Fury X on Hitman. You have to be mentally challenged to trust that title as a good indication of performance.

          • Protoss

            This is funny, because Nvidia said the exact same thing about Ashes of the Singularity: “it’s not representative of DX12”. Really? So what about Quantum Break? Far Cry Primal? Hitman? All of these new games give 20 more fps on the R9 390 compared to the GTX 970!!! 2 cards sold at the same price-tag. Are all of these games broken because Nvidia cards are behind in them? Come on… Even in The Division, which is a Gameworks title, the Nvidia cards are behind, if not performing the same while being more expensive than the AMD ones. Let’s hope this will make Nvidia rework their strategy

          • Sergiu Petrica

            I feel for Nvidia when it comes to Ashes. It’s practically a best case scenario for parallel compute because it uses so much of it.
            In fact, it uses roughly as much parallel compute as GCN can take. By definition this will not run well on Maxwell. I read that these settings have to be fine-tuned by architecture since too much parallel compute can hurt performance on either side.

            Really? Quantum Break? It’s a mess of a title, that thing runs like crap on both AMD and Nvidia. An overclocked Titan X struggles with it.
            Hitman, again, when a 390x is neck and neck with the Fury X and the Ti you know something went terribly wrong.
            And I’m not sure why you mentioned Far Cry Primal, it does not support DX12 as far as I know, at least not yet. Even so I fail to see what’s wrong, the Ti takes the crown while at 1080p the 970 and 980 deliver good performance. 1440p does show the 980 and 970 succumb, I’ll give you that – but these two cards were marketed for 1080p in the first place.

            And it doesn’t matter if The Division is a Gameworks title or not. RoTR is an AMD title as well yet Nvidia performs well in it so it’s irrelevant.

            Lastly, I don’t know why you’re putting so much hope in today’s DX12 titles – they’ve barely scratched its surface. For example conservative rasterization allows for a very low performance penalty when it comes to global illumination, yet no title uses this technique as of now. And GCN does not support it. You can bet that such a title would make GCN bend down fast, just like parallel compute is bending Maxwell.

      • Cody McCormick

        Your blatant denial of Async’s advantages doesn’t make others “too obsessed” when they are concerned about a card’s performance with it. DX12 and Vulkan are both written with this in mind. Why not take advantage of it? It reduces latency and boosts overall performance. I guess you like parts of your GPU sitting idle…..

        • renz

          DX11 multithreaded rendering (DCL) can also improve things in games. but why did AMD decide not to support it, and why did many devs decide not to use it in their games? when the Hitman devs mention the need to tweak async compute for each card, and that too much tinkering can have a negative performance impact even on Radeon, i see the situation is not really that different from DX11 DCL. and developers tend to look at all of their user base, not just those with specific hardware. hence you don’t see gpu PhysX being used outside nvidia-sponsored titles.

      • kroms

        Welp, if you don’t understand it then you don’t understand how much of a difference it makes.

        • Buddydudeguy


          • Protoss

            DX12 is the future, every game dev will use it, and it is just a matter of time when they will be able to play nicely with every features that it will bring

          • renz

            Except it is not. DX12 is more of an optional path for those that want low-level access when developing their games. Devs have already mentioned that the CPU part of DX12 is not that hard to figure out, but on the GPU side even matching DX11 performance is already challenging. Ultimately only the GPU vendor’s engineers and driver team understand the architecture the most. Game devs can also learn it, but it takes time, and by the time they have a decent understanding of one architecture, the GPU maker is already coming up with a new one.

  • With freesync monitors getting super duper cheap these days and gsync still ridiculously expensive, I’m probably moving from my GTX 970 to an AMD card this year. AMD only needs a 10% bump in performance to match the 1080? I think their entire business line is pretty excited to see these numbers right now.

    • Jose Gonz

      if you go for amd, remember to also buy a new AC.

      • Mast3r Race

        Smaller node and more efficient design. Heat won’t be an issue anymore unless massively overclocked. You’ll need a new punch line, probably about time too.

        • 1011101001001

          The 1080 runs as hot as the 290x did.

          • Stylic

            sure my Strix idles at 26-30 degrees and 65 degrees max under load. I had the same temperatures while OCed to 2.1Ghz.

      • Incognito Jay

        Assassin’s Creed has been all the same lately.

        • Piiilabyte III

          He didn’t mean Assassin’s Creed, he meant buy an air conditioner. I sincerely hope you were joking.

          • 3R45U5

            it was an apt response,imo. i chuckled.

          • Piiilabyte III

            At first I thought the same thing, then I was like, “Why talk about Assassin’s Creed?” 🙂

      • Lord Xantosh

        clearly you never owned an Nvidia GTX 550ti, those things went thermonuclear faster than anything AMD has

        • 3R45U5

          yeah. 400 and 500 series were awful. Lost 4! Thermis in total over the course of a year. i hate these cards with a passion. the reason why i switched to amd. Thermi burned me for life.

      • Anonymous

        Actually, Polaris will be the most power efficient GPU architecture ever. With Pascal only delivering +50% power efficiency despite 28nm Planar to 16nm FinFet shrink. Meanwhile, Polaris is expected to at least double power efficiency and has already been proven in demos.

      • I mean, really, why bother bringing anything logical to the discussion when you can fanboy instead?

        • Jose Gonz


          • psssst – I was calling YOU the fanboy. Just in case you misunderstood. I understand, it’s not easy to catch nuance when you’re too busy fanboy trolling, so I figured I should probably explain it to you.

      • Caratacus

        I have two R9 290X’s, and I never need to have the heat turned on in my room in the winter. XD

    • Ramir Cool

      It doesn’t like the 1080 has full async computing support. For it to average 50fps (49.6 to be exact) at 1440p in ashes, My fury (non x) averages 48.1fps with same settings. Seems like the 1080 is scoring kind of low to me in Dx12

      • Steve Smith

        The question comes down to whether the game is GPU-limited or hitting a CPU limit at this point. If it’s a CPU limit then we can’t really draw anything from it anymore.

        • Ramir Cool

          well, the 1080 in the benchmark is using a 6-core with 12 threads, so I doubt the CPU is bottlenecking it, if that’s what you mean. And I don’t think the game is limited, as other cards and multi-GPU setups score rather high in Ashes.

      • (edit to add – I mis-read what you were saying, although your first sentence has a typo – “It doesn’t LOOK like the 1080 has full async computing support.” I think we’re on the same page now.)

        Except that it doesn’t have “full async computing support”. From the article above:

        This new hardware scheduler will play a crucial role in allowing Pascal GPUs to perform better at executing tasks asynchronously, even though it still evidently relies on pre-emption and context switching according to what Nvidia has revealed in its Pascal whitepaper.

        So while this scheduler doesn’t actually allow tasks to be executed asynchronously it will still improve the performance of Pascal GPUs when it comes to executing code that’s written asynchronously. It’s sort of a hack to hold Pascal off until proper async compute is implemented in Nvidia’s future architectures.

    • Incognito Jay

      Hopefully you won’t have to wait for Vega. Unfortunately, AMD is far more concerned with efficiency than performance, and hopefully that efficiency will translate into more overclocking. You could overclock to a previous gen’s power usage and see some really big gains, if that’s possible.

    • Steve Smith

      Um, using AOTS as a benchmark for the 1080/1070 to say AMD only needs a 10% bump in performance. You are such a moron to think that.

    • Shades Of Red

      If you are looking for a new monitor that makes sense.

  • Steven Osorio

    These 1080/1070 cards from Nvidia already seem to be just a minor spec bump over the last gen of graphics cards. Really disappointing. It will also be disappointing if their Async Compute performance gets only a minor bump, given that these cards SHOULD have been built to exploit ALL of the DX12/Vulkan APIs’ features. I’m not a fan of AMD, but I sure hope their performance numbers are better than this new gen of Nvidia cards, because it is only through competition with AMD that we’ll see anything better come out of Nvidia. Same for Intel as well.

    • Jeff Beresford

      Massive overclock capability though with the low power and heat stats. Impressive considering the performance of the card.

    • 9TFlops do not seem minor to me.

      • Someone else

        It does when AMD was at 8.6 TFlops on the Fury X

        • Sergiu Petrica

          1. Compute =/= performance in games.
          2. The 1080 is not the big Pascal.

          The 1080 is the successor to the 980 and it already seems to be 50% more powerful than it in AoTS. Add in the possibility of custom cards with decent factory overclock and you might be looking at more than that.
          Use your brain bud.

    • John Bison

      This is why I hate bullshit rhetoric like “twice as fast as the Titan X.” Perhaps they meant twice as fast as the Titan. They know the sweeping majority of consumers want to know about actual video game performance. Based on the earlier deception, we were all expecting 60 fps in 4k at high settings+. It looks like we’ll be getting <10% improvement over the 980 Ti instead. I really hope I’m wrong, but things aren’t looking very promising at the moment. They really had me convinced that even the 1070 would do better than the Titan X.

      • superlee

        Wording is everything; twice as powerful and twice as fast are two different things. Twice as fast means clock speed to me. Their demo showed the card running at 2100MHz (overclocked, of course), while the Titan X looks like it runs at a 1000MHz base clock. Twice as fast, though we all know this doesn’t mean twice the performance gain.

      • malcmilli

        Even if it’s only a 10% improvement over the Ti, which was a 30% improvement over the regular 980, that’s still about a 50% improvement over last year’s card for the same price. Still isn’t too shabby.

        • James Tompsett

          980 was a 2014 card bud.

          • Casecutter

            And the MSRP was $550! Now 1080’s can be as low as $599 or up to $699, meaning AIB mediocre customs are $630-650; most of what we’re seeing at this point are the uber/huge 3-slot, 3-fan cards, which are going to be more like $670-680. 10% gains for a roughly 15% price increase. The GTX 680 of 4 years ago was $500, and prices have slowly marched up!

    • Prince Chèn

      I know, basically they added new GDDR5X memory and more of it, and upped the clock speeds. It still has no support for what it didn’t have before, and isn’t a whole lot faster than the 980 Ti. No one’s even talking about the 1070; it doesn’t look like much if anything new. After this releases my GPU (GTX 970) will start having worse and worse support. The 980 Ti was also better than the Titan and less expensive; they should have named a card close to the 1080’s price range “10% faster than the 980 Ti”.

    • Prince Chèn

      Well, I mean, the GPUs are now 16nm and more power efficient; there’s more performance per watt. The performance increase of 10% is not extremely insane, but it isn’t tiny either. These are DirectX 12 benchmarks; I wonder how other games fare when benchmarked. The 1080 doesn’t have Asynchronous Compute support, but a temporary fix for things coded for Asynchronous Compute to slightly increase performance, so theoretically the performance wouldn’t be all that much better than a 980 Ti in DirectX 12, seeing as it still doesn’t have support for Asynchronous Compute to really increase performance by a big margin. The 1080 looks like the biggest change from the previous generation. The 1070 is claimed to be faster than the Titan X at basically the same price as a GTX 970, and at 150 watts. I haven’t seen a benchmark yet, however. This will cripple the prices of current GPUs very badly if this is true.

    • renz

      people are thinking too much about async compute. in reality though, will many devs actually use it unless they’re sponsored?