
    Can we say that Maxwell supports Asynchronous?

    Graphics Cards · 18 posts · 6 posters · 6.4k views
    • Handrox

      Hello everyone. Today I came across a Microsoft deck from SIGGRAPH 2015 with some slides on asynchronous execution in Direct3D; in their practical example they use a GTX 970 to demonstrate the 3D, Copy and Compute queues alongside the CPU submissions. Can we use that as a reference on this topic, or should we carry on as usual?

      Link to the slides -> http://nextgenapis.realtimerendering.com/presentations/3_Boyd_Direct3D12.pptx

      Link to other interesting PDFs -> http://nextgenapis.realtimerendering.com/

      Best regards.
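      (Editor's note on the queue model those slides show: D3D12 exposes separate direct/3D, Compute, and Copy queues, and work submitted to different queues may overlap on the GPU. A toy Python sketch of why that matters; the durations are invented for illustration, not taken from the slides.)

```python
# Toy model of D3D12-style queues: work on different queues may overlap,
# while a single queue executes its commands back to back.
# Durations (ms) are invented for illustration.
queues = {
    "3D":      [("shadow pass", 3.0), ("main pass", 6.0)],
    "Compute": [("light culling", 2.5)],
    "Copy":    [("texture upload", 4.0)],
}

def serial_ms(queues):
    """Everything on one queue: total is the sum of all commands."""
    return sum(t for cmds in queues.values() for _, t in cmds)

def overlapped_ms(queues):
    """Idealized concurrency: frame time is the longest single queue."""
    return max(sum(t for _, t in cmds) for cmds in queues.values())

print(serial_ms(queues))      # 15.5
print(overlapped_ms(queues))  # 9.0
```

      The gap between the two numbers is the whole selling point of multiple queues: copy and compute work hiding behind the graphics work.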

      • Fassou (MODERATOR)

        @Handrox:

        We can do that for a stone on that topic or do we continue the same?

        I don't understand that sentence of yours.

        Since I assume the topic is the performance problem nVIDIA has with Async: at first they went around promising left and right that they would make older models compatible, back to Kepler and blah, blah, because they could emulate it in software by tweaking the drivers, and besides, Maxwell supports DX12.1 and AMD doesn't yet.

        Buuuut, after the mountains of DX12 benchmarks where AMD beats nVIDIA by a head, now it seems their drivers are busy capping Async, because if they leave it on, DX12 performance drops quite a bit, and if you force it, it's even worse, because you get frames from the reject bin :facepalm:

        The Maxwell architecture can't handle Async in hardware, and there are already malicious rumors that the future Pascal doesn't do much better, so either they put an extra chip on the cards, or the nVIDIA guys are in for a rough time this generation.

        But it's all rumors and speculation, of course.

        Cheers!

        Intel i5 3570k / ASRock Z77 Extreme 4 / G.Skill F3-12800CL9D-8GBRL / Sapphire HD5850 / Samsung HD103UJ / TR TrueSpirit / NZXT Source 210 / OCZ ZS550W
        Intel i5 4570 / ASRock H87 Pro 4 / 2x G.Skill F3-14900CL8-4GBXM / Samsung 850 EVO 250Gb + ST1000DM003 + ST2000DM003 + HGST HDS723020BLA642 + Maxtor 6V250F0 / CM Seidon 240M / Zalman MS800 / CM MWE 550
        AMD Ryzen 7 1800X / B350 / 2x8GB Samsung DDR4-2400 CL17 / NVIDIA GTX 1070 8GB / SSD 120GB + ST4000DM004 + ST6000DM003 / EVGA Supernova 650 G2


        • Handrox @Fassou

          @Fassou:


          Don't get hung up on that phrase, it's the least of it. ;D

          Well, here we see M$ material demonstrating Async on NV hardware. That caught my attention, because the normal thing would have been to use AMD hardware there, which people say is the only hardware compatible with that feature. Seeing that M$ PDF, the sensationalism and bias of wccftech and extremetech look huge: they have been pushing information at odds with everything so far, when back in 2015 M$ used a GTX 970 to show what Asynchronous would be.

          From what I've understood, the topic is going badly for two reasons: first, the queues need to be prepared for each vendor's hardware (AMD and NV) because of different architectural characteristics (which costs a lot of work); and second, on Maxwell this doesn't bring good enough results even when well applied, because the chip already runs at full capacity without leaving SMX units idle.

          As for AMD beating NV: so far what we've seen (with all due respect) is sloppy DX12 work; using Hitman, Ashes, GOW or Tomb Raider as the basis for anything is very premature.

          Regards.

          • wwwendigo @Fassou

            @Fassou:


            Everything you say would be very nice, if it weren't for two things:

            1.- Async shaders do NOT belong to either of the sets of MANDATORY features that define 12_0 and 12_1 (plainly speaking, the feature levels of the API where the real graphical novelties of DX12 and DX12.1 actually apply; the other levels are "compatibility" levels that let hardware with older graphics features work through the DX12 API).

            2.- Async shaders are not just optional; they were also added as an optional "feature" only very recently, a year ago or so. In fact, I'd swear the 12_1 feature set had already been defined before this option, so clearly tailored to AMD, came to light.
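            (Editor's sketch of point 1 as data. The mandatory feature lists below are reconstructed from memory and abridged, so treat them as illustrative rather than an authoritative copy of the D3D12 spec; the point that survives any correction to them is that async compute appears in neither set.)

```python
# Abridged, from-memory sketch of the hardware features that define the
# D3D12 feature levels (illustrative only, not the authoritative spec lists).
FEATURE_LEVEL_12_0 = {
    "resource_binding_tier_2",
    "tiled_resources_tier_2",
    "typed_uav_loads",
}
FEATURE_LEVEL_12_1 = FEATURE_LEVEL_12_0 | {
    "conservative_rasterization_tier_1",
    "rasterizer_ordered_views",
}

# "Async shaders" are in neither mandatory set: the API exposes compute
# queues on all DX12 hardware, and whether they actually run concurrently
# with graphics is left to the driver and the hardware.
assert "async_compute" not in FEATURE_LEVEL_12_0
assert "async_compute" not in FEATURE_LEVEL_12_1
```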

            Anyway, it's interesting to see people devote rivers of ink to a "feature" that really does nothing new at all; it's just a way to optimize the execution of shader code and, to make matters worse, it helps some architectures more than others.

            And if we add the statements of developers who have used async shaders recently, some under the AMD umbrella, saying no less than that it was hell to optimize for this feature, and all to gain a 15% improvement in graphics performance (for AMD; basically nothing for nvidia), it looks like another patchwork feature shoehorned in as a favor between "partners". It's clear that this last-minute inclusion of something so close to how AMD's hardware works, especially in the XBOX One console, is no coincidence, although I see Microsoft thought hard before putting it in, because they could have added the option when they published the features required for DX12.0.
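            (Editor's note: the quoted 15% is consistent with a simple overlap model, where the saving is bounded by the shorter of the two workloads. The frame timings below are hypothetical, chosen only to reproduce that order of magnitude.)

```python
def async_speedup(graphics_ms, compute_ms):
    """Speedup from running compute concurrently with graphics instead of
    serially (idealized: perfect overlap, no contention for GPU resources)."""
    serial = graphics_ms + compute_ms
    overlapped = max(graphics_ms, compute_ms)
    return serial / overlapped

# Hypothetical frame: 10 ms of graphics plus 1.5 ms of compute.
# Perfect overlap hides the compute entirely -> ~15% faster.
gain_pct = (async_speedup(10.0, 1.5) - 1.0) * 100.0
print(round(gain_pct))  # 15
```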

            In short: a lot of ink for very little in performance terms, and absolutely nothing at the graphics level (zero new effects implemented "thanks to" these async shaders; as I said, they do nothing new).

            • Handrox

              Now we already have Async Shaders in DX11 -> http://humus.name/Articles/Persson_LowlevelShaderOptimization.pdf

              Shader Model 6 will work on the FL12_1 cards; it will use a new open-source, C-based language and an LLVM compiler with pre-compiled code, like Vulkan.

              • fjavi @Handrox

                @Handrox:


                Why do you want it?
                What I want are tests, DX12 games, and to see what happens. I don't care about the draw calls or the shaders, and I didn't care about Dx10.1 either.
                What's needed is to show that it's actually useful. The same thing has happened many times: Dx10.1 was supposed to be an advantage, according to what was said, and in the end it wasn't seen; Mantle; the draw calls.
                The best thing is time, which puts things in their place, and if AMD pulls ahead of Nvidia I'll be happy, if in the end I can buy something cheaper and more powerful.
                I hope those Polaris cards put the Pascal ones in a tight spot and Nvidia can't pass off the mid-range as high-end, and that Zen puts Intel in a tight spot too, so they have to get off their high horse, allow OC, and sell cheaper.
                In the end, that's what matters most: competition.

                Regards

                • whoololon (Veteranos HL) @fjavi

                  It looks like it's going to rain... ¬¬

                  ...the voices are telling me so...


                  • Kernel1.0 (Veteranos HL)

                    We'll take shelter and wait for the weather to clear up.

                    • Handrox @fjavi

                      @fjavi:


                      How's it going, fjavi old man, everything good, buddy?

                      I don't think "having competition" has anything to do with async. The Async thing will cool off in a few days: when the tech gossip press starts talking about Shader Model 6, Async will go back to being what it always was, nothing. If the Polaris chip doesn't have the DX12.1 instructions, it will be a real mess, because NV will gain a huge advantage in possibilities and performance.

                      Back to (holy) Async Shaders: it bothers me to read so much misinformation cultivated by Extremetech, wccftech and a certain Mohigan. The worst part is that their stories have caught on, and I see people with vast knowledge repeating what these salsa dancers throw out there, without any informational rigor, just the desire to confuse and deceive.

                      Cheers, xiquet! ;D

                      • fjavi @Handrox

                        @Handrox:


                        I think the same: whether it's asynchronous shaders or Shader Model 6, what I mean is that all of this has to be seen in games. You can talk a lot now, but time will tell who does better, or whether they end up doing the same.
                        I don't care whether it's Nvidia or AMD blowing the smoke; until they demonstrate it in new games, I don't care what they say now, because in the end it's always the same.
                        They tell us wonders we never see, and maybe that Dx12.1 ends up being another fiasco like Dx10.1. I only believe what I see, and to see it I hope they release the cards so we can see performance across different games, which is what matters in the end.
                        I'm here now with a PC I built around a 4GB 960, an Asus Strix; it doesn't seem very exciting, but only because they had to refund me some money and it worked out very well.
                        And it's small and cool-running, which is what I wanted; I hadn't had a card like that in a long time.

                        Regards

                        • Handrox @fjavi

                          @fjavi:


                          I can't disagree with anything you say; it's the truth, plain as day. But what would that press be without all the sensationalism? Hahaha

                          All this controversy is staged to boost sales. Look, with the Async thing AMD cleared out a lot of its leftover stock, and if you zoom in further, the controversy starts just when numbers come out giving NV more than 80% market share; since then the reviews have been clearly pro-AMD, and the sweet apple that was Maxwell has become a rotten one. It's a very well planned game in which only one side loses: the consumer.

                          Regards.

                          • Fassou (MODERATOR) @Handrox

                            It's a conspiracy, because the GTX970 thing and the problem with its last GB of memory, that doesn't matter :facepalm:

                            And of course, AMD is always just looking for the funny side of things.

                            What bad guys :troll:

                            Cheers!


                            • fjavi @Fassou

                              @Fassou:


                              I don't know about the problem, but the program is here, and supposedly the same thing happens to the Fury
                              http://www11.pic-upload.de/28.06.15/3v87mcvjenj.jpg
                              But that shouldn't matter. Although I look out for my own interests and I do want competition, of course many websites are already like the mainstream media: you should trust them only so far, and then a bit less.

                              Regards
                              @Handrox:


                              As someone who hasn't believed websites, forums or the media for a long time, I listen to the people who actually try things, not to everyone.
                              Anyway, they're interested in selling, even if they have to spin you a yarn and say that the new dual Fury consumes less than its single-GPU counterpart (according to a certain news site) and that its three 8-pin connectors are just for decoration, since if it doesn't consume, they're not needed.
                              It's up to people to open their eyes or keep believing in that wonderful world they're told about.
                              Some things are very obvious.
                              Although I'd be much more interested if they really released cards and processors in good shape and posed a real threat to Intel and Nvidia. No matter what they say, I don't see effective competition from 512-bit cards with very high TDPs and 8GB of memory going up against cards like the 970, which should be quite cheap for Nvidia to make; they could afford to drop the 970 to 200€, while the 390 and 390X must be more expensive to manufacture.
                              Better not to even talk about Intel, because they're abusing their position a lot; that's why I don't want AMD to become too weak to react, in both graphics and processors.
                              Regards

                              • Fassou (MODERATOR) @fjavi

                                @fjavi:


                                I don't know if the Fury also has some problem (I'd have to win the lottery to pay close attention to that segment), but from what I see you're moving from a pro-nVIDIA phase to a more... anti-AMD one? <:(

                                Relax, man :fumeta:

                                Cheers!


                                • fjavi @Fassou

                                  @Fassou:


                                  I'm actually always relaxed, but there are things I don't like. The 970's 3.5GB issue stuck because of that program, yet the same program does the same thing on the Fury and it's brushed off and nobody cares.

                                  I'm not moving from a pro-NVIDIA phase to an anti-AMD one, because deep down I don't care about either of them; I'm moving to a phase of freely saying what I think, with arguments, without being sent to the stake for it.

                                  Because I see that holding a contrary opinion isn't welcome, and I see that opinion forums are disappearing. As is often said of the media, it's about manufacturing opinion, and whoever strays from the herd is a radical black sheep.
                                  It's only about being able to give an opinion with respect and arguments.

                                  Best regards

                                  • Fassou (MODERATOR) @fjavi

                                    On the one hand, the memes about the GTX970 and its ghost 4GB are a response to Handrox's comment preceding them.

                                    About the Fury I really have no idea, but since I consider it a monstrosity like the Titans, whoever spends that money on a graphics card is presumably well informed; and if it's just to show off, then they fall into the category of people who buy the supercar and crash it on day one. I'd never do it; that's their business.

                                    As for expressing your opinion, I see no problem with it, and I think you already did when you said: "I just think the same now whether it's asynchronous shaders or shader model 6, what I mean is that all this should be seen in games, now you can talk a lot but it's time that will tell who performs better, or even if they perform the same."

                                    But since I assume the point of the thread is not to wait a few months until we have the cards at home and can run the tests ourselves, I don't see what your comment contributes.

                                    If you're going to steer the main topic (you or anyone else) toward things that not even the original author has raised, it's better to open another thread.

                                    Cheers!


                                    • wwwendigo @fjavi

                                      @fjavi:


                                      Another time with that program's nonsense? It's unbelievable that they insist on using something so obsolete to talk about the VRAM of graphics cards, when in reality this isn't being tested with that test.

                                      Let's see, that program, which is in gpgpu mode, does nothing more than request a certain amount of RAM from the system equivalent to the graphics card's VRAM, and it gets it, of course it does, and after that it does a bandwidth measurement based on reads, writes, whatever.

                                      What people FORGET is that being a computing program, not 3D, and as the APIs work for GPGPU, when a certain amount of memory X is requested from the system, if it doesn't have it or the driver doesn't let it locate ALL the VRAM that is requested, what it does is assign system RAM.

                                      That is, by default, this program doesn't allow the GTX 970 to access the slow sector of the VRAM (which the 970 driver will normally automatically use as it sees fit, almost certainly using victim cache algorithms to store data there that probably won't be used again or is very little used), and what the program receives from the system is a block of 512 MB of main RAM in exchange.

                                      When that block of plain RAM, which is not VRAM, is accessed, bandwidth obviously drops a lot; in fact it's even slower than accessing the supposedly "inaccessible" VRAM segment would be, because it isn't VRAM at all.

                                      Why does it happen with the Fury too? Probably because, at least in that case, the driver doesn't let the program freely claim that last segment either: either other applications are already using it (among them Aero, which eats around 200 MB of VRAM on every system, unless you run without it, which I DOUBT most users do), or, as with the GTX 970, the driver blocks the attempt to touch that part of the VRAM for whatever reasons AMD's driver has.
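The fallback mechanism described above (a compute API handing back system RAM when the driver won't expose all the VRAM, and the bandwidth test then unknowingly measuring that slower memory) can be sketched as a toy model. Everything here is hypothetical and illustrative: the function names, the 196 GB/s and 15.75 GB/s bandwidth figures, and the simple weighted average are assumptions made for the example, not measurements or any real driver's behavior.

```python
# Toy model of the allocation fallback described above.
# All names and numbers are hypothetical, for illustration only;
# no real GPU API or driver is being modeled precisely.

DEVICE_VRAM_FAST_MB = 3584   # fast 3.5 GB segment of a GTX 970
VRAM_BW_GBPS = 196.0         # assumed bandwidth of the fast segment
PCIE_BW_GBPS = 15.75         # system RAM reached over PCIe is far slower

def request_device_memory(mb):
    """Return a list of (size_mb, effective_bandwidth_gbps) chunks.

    If the request exceeds the VRAM the driver exposes to compute,
    the remainder is satisfied with system RAM, as the post describes."""
    chunks = [(min(mb, DEVICE_VRAM_FAST_MB), VRAM_BW_GBPS)]
    if mb > DEVICE_VRAM_FAST_MB:
        # The driver blocks the slow 512 MB segment for compute requests,
        # so the overflow lands in host RAM behind the PCIe bus.
        chunks.append((mb - DEVICE_VRAM_FAST_MB, PCIE_BW_GBPS))
    return chunks

def average_bandwidth(mb):
    """Bandwidth the naive test would report: a size-weighted average."""
    chunks = request_device_memory(mb)
    total = sum(size for size, _ in chunks)
    return sum(size * bw for size, bw in chunks) / total

# A 3.5 GB request stays entirely in fast VRAM...
print(average_bandwidth(3584))           # full fast-segment bandwidth
# ...while a 4 GB request averages in the slow host-RAM chunk.
print(round(average_bandwidth(4096), 1))
```

The point of the sketch is only that the big drop the test reports past 3.5 GB comes from measuring host RAM over the bus, not from the card's slow VRAM segment itself.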

                                      It seems incredible that you keep going with the same broken record, Fassou. Instead of labeling as anti-AMD the people who try to counter such poor reasoning built on this program, you should ask yourself whether you're the one who's anti-NVIDIA.

                                      Because honestly, the argument is puerile: either you don't know how enormously outdated it already is, along with the explanations given at the time about what was happening with that quick test a user threw together, or you're doing it on purpose.

                                      I'm going to assume it's the former, that you're very uninformed about this case, but coming back with the same broken record about the 3.5 GB of VRAM on the GTX 970 is getting old. I'm tired of seeing the last 512 MB of VRAM on this card used without performance issues, only to sit through yet another re-release of Jordie Dan's latest album, computer version.

                                      And about the asynchronous shaders, let's realize the following:

                                      1.- It isn't part of any current DX12 standard; in fact it was added at the last minute as an optional DX12 feature, although the concept itself predates the API.

                                      2.- To make matters worse, on top of the already messy and disappointing DX12 programming model (which promised a lot but glossed over the problems of close-to-the-metal programming), and which in practice requires writing two different paths (really three, but fine) so that each GPU vendor's hardware is used correctly, asynchronous shaders turn out to be especially complicated to program in DX12, and they can actually LOWER performance if you're not careful (as the Hitman developers stated). Not only do you have to program with both vendors in mind, possibly maintaining two forks of the DX12 code; if you use asynchronous shaders you complicate your life in every path/fork you write, you increase the programming hours enormously, and if you don't get it right you can lose performance.

                                      And all this for a roughly 10% gain on a single vendor holding about 20% of the discrete GPU market. Even less if we include integrated graphics, a market Intel basically dominates (because in CPUs things are even worse for AMD than in GPUs).
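The "two code paths" problem described in point 2 can be illustrated with a small sketch. The renderer structure, queue names, and workload labels below are invented for illustration; only the general idea comes from the post: on hardware that benefits from async compute you submit graphics and compute to separate queues, and elsewhere you serialize the same work on one queue, which means maintaining and tuning both paths.

```python
# Illustrative sketch of vendor-dependent submission paths in an
# explicit API like DX12. The structure and labels are hypothetical.

def build_frame_schedule(vendor, use_async_compute):
    """Return an ordered list of (queue, workload) submissions for one frame.

    On hardware that handles async compute well, compute work is
    submitted to a separate queue so it can overlap with graphics;
    otherwise everything is serialized on the graphics queue, which
    can actually be faster when async is implemented poorly."""
    if use_async_compute and vendor == "AMD":
        # Separate queues: compute overlaps with graphics work.
        return [("graphics_queue", "g-buffer + shading"),
                ("compute_queue", "SSAO + light culling")]
    # Single-queue fallback path: same work, submitted serially.
    return [("graphics_queue", "g-buffer + shading"),
            ("graphics_queue", "SSAO + light culling")]

for vendor in ("AMD", "NVIDIA"):
    print(vendor, build_frame_schedule(vendor, use_async_compute=True))
```

Even in this toy form, every feature touching the schedule now has to be written, debugged, and profiled twice, which is exactly the extra engineering cost the post complains about.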

                                      Do we need to explain what happens to developer-unfriendly features that offer little performance gain, no graphical novelties, and a limited impact on the market?

                                      Please, let's not go over this again like it's Groundhog Day. This has happened before; let's drop the broken records. Asynchronous shaders are already being talked about less (and in fact there seems to be a certain setback in DX12 adoption in games, since only sponsored titles are appearing, given the problems of the first ones... it will take time to see mass adoption).

                                      • Fassou MODERADOR @wwwendigo

                                        @wwwendigo:

                                        It seems incredible that you keep going with the same broken record, Fassou. Instead of labeling as anti-AMD the people who try to counter such poor reasoning built on this program, you should ask yourself whether you're the one who's anti-NVIDIA.

                                        Because honestly, the argument is childish: either you don't know how enormously outdated it already is, along with the explanations given at the time about what was happening with that quick test a user threw together, or you're doing it on purpose.

                                        I'm going to think it's the former, that you are very uninformed about this case, but coming with the same broken record of the 3.5 GB of VRAM with the GTX 970, it's getting tiring. I'm tired of seeing how the last 512 MB of VRAM is used on this card without any problems in performance, to see the latest re-release of the latest Jordie Dan album, computer version.

                                        Since I've been alluded to directly, I'll reply.

                                        It's one thing for that program, which I neither know nor use, not to be the right way to demonstrate memory performance problems on a graphics card, and quite another to deny that NVIDIA lied about the specifications of its GTX 970, to the point that it ended up getting sued over it.

                                        The first meme talks about problems, and if they lied about the product's characteristics, there are undoubtedly problems. The second is the final image from a promotional video by AMD itself, "encouraging" owners of GTX 970 cards, who had just received confirmation from several builders and distributors that they could return them, to do so and buy a Radeon R9 290X instead.

                                        Obviously you are free to think whatever you want, which includes thinking if I like nVIDIA, AMD, Intel, Hacendado, or whatever company you think of more.

                                        Salu2!


