[GTX970] The "problem" with its memory
-
Not long after these graphics cards (GTX 970) hit the shelves, some of our resident geeks, the Hardware Enthusiasts, were already whispering in dark corners about the weird things nVIDIA's little monster did under demanding settings.
They quickly linked the behavior to memory occupancy, and it was speculated that there was a problem with how this memory was reserved and allocated, perhaps due to drivers that could be improved.
As time passed, and seeing that driver-level makeup couldn't silence the torch bearers, nVIDIA slowly came clean to some experts, wrestling with the fear that sales of this popular model could suffer.
A very interesting article explaining the issue in detail on Hexus.net (ENG).
And another one from techPowerUp - GTX 970 Memory Drama (ENG).
For those who are already sweating just from reading four lines :troll: and doubt they have the fortitude to get through even the Hexus article :facepalm:, to summarize in broad strokes: in its management, the 4GB of memory is divided into 3.5GB of "normal"-access memory and 0.5GB of much slower memory, because part of the cache that should be associated with that last segment was capped. Some unofficial estimates say bandwidth drops from 150GB/s to 20GB/s once memory occupancy exceeds 3.3GB.
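Those broad strokes can be put into a back-of-the-envelope model. Keep in mind the 150GB/s and 20GB/s figures are the unofficial estimates quoted above, not official specs, and the even-spread access pattern is a deliberate simplification:

```python
# Back-of-the-envelope model of the GTX 970's split memory pool, using
# the unofficial figures quoted above (NOT official specs).
FAST_GB, SLOW_GB = 3.5, 0.5      # "normal" vs slow partition sizes
FAST_BW, SLOW_BW = 150.0, 20.0   # estimated GB/s for each partition

def avg_bandwidth(used_gb: float) -> float:
    """Average bandwidth if accesses are spread evenly over the
    occupied memory (a crude assumption, for illustration only)."""
    fast = min(used_gb, FAST_GB)
    slow = max(0.0, used_gb - FAST_GB)
    # time to touch each region once = size / bandwidth
    total_time = fast / FAST_BW + slow / SLOW_BW
    return used_gb / total_time

print(f"{avg_bandwidth(3.0):.0f} GB/s")  # allocation entirely in the fast pool
print(f"{avg_bandwidth(4.0):.0f} GB/s")  # allocation straddling both pools
```

Even under this generous averaging, touching the slow 0.5GB drags the whole pool well below the headline figure, which is why the drop shows up as soon as occupancy crosses the boundary.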
Regards!
PS: none of the above applies to the GTX 980.
-
The problem is that when they disabled SMMs they also disabled other things. Nvidia didn't give the real specs, a real screw-up, because if they had given them I think most units would have sold out anyway; the 980 is very expensive.
As for the 3.5GB problem, I don't really see it. Maybe it shows with two or three cards; with one, you'll run out of GPU power sooner. The card has 4GB that it doesn't use well, but a 970 will still have fewer VRAM problems than a 780 Ti.
Let's see if, amid the troubled waters, they lower the price or someone gets the urge to sell theirs and I get one very cheap; if people don't want them, we'll take advantage. In the end it's about performance, and since I don't plan to do SLI, it's perfect for my resolution.
The 960 seems much worse to me: even if they gave it 12GB it would be no match for a 970; there's a chasm between them.
regards
-
It reminds me of a GeForce2 I had that was the MX… and it turned out to be "Very Xunga".
By the way, my apologies to everyone I recommended the 970 to... I did it in good faith... I didn't know... I swear. :cry:
-
It reminds me of a GeForce2 I had that was the MX… and it turned out to be "Very Xunga".
By the way, my apologies to everyone I recommended the 970 to... I did it in good faith... I didn't know... I swear. :cry:
What's wrong with the 970? The only bad thing is that they didn't give its real specs; it's still the best option in that price range.
I'm rubbing my hands waiting to see if they drop in price so I can get a cheap one; I'm already browsing second-hand forums in case someone makes an offer I can't refuse. The performance is what it is. It's a trick, yes, but most people look at performance, consumption, temperature and OC, and I don't see any flaws there.
I resist paying those prices, but that's because it's a GM204, which for me is mid-range. I didn't want the 600s and I didn't want the 900s until the truly high-end parts came out, but if I see them very cheap then of course I'll take advantage; it would be a price appropriate to its range. Cutting corners happens with processors and everything else; in the end, what matters is what you pay for a given level of performance.
regards
-
The problem is that when they disabled SMMs they also disabled other things. Nvidia didn't give the real specs, a real screw-up, because if they had given them I think most units would have sold out anyway; the 980 is very expensive.
As for the 3.5GB problem, I don't really see it. Maybe it shows with two or three cards; with one, you'll run out of GPU power sooner. The card has 4GB that it doesn't use well, but a 970 will still have fewer VRAM problems than a 780 Ti.
Let's see if, amid the troubled waters, they lower the price or someone gets the urge to sell theirs and I get one very cheap; if people don't want them, we'll take advantage. In the end it's about performance, and since I don't plan to do SLI, it's perfect for my resolution.
The 960 seems much worse to me: even if they gave it 12GB it would be no match for a 970; there's a chasm between them.
regards
No, that's what some websites implied, but it isn't so.
Nvidia disabled some SMs, true, but that didn't in any way require disabling anything else. What happened is that Nvidia additionally disabled, completely independently of the capped SMs, an L2 cache/ROP block, in order to increase yield per wafer and thus obtain more valid chips for the GTX 970.
And that last detail is the one they didn't mention anywhere. But I insist: disabling SMMs in no way forces disabling anything else. It was done on top, just because; there would have been no problem if they had published it, but they hid it.
In fact, some of the GM204 variants for laptops have none of those L2/ROP partitions or memory controllers touched, despite having several SMMs capped (GTX 980M).
-
Come on: they sell you 4 but you use 3.5.
Where I come from this is called fine print, and even the websites fell for it.
-
I'm not disputing the silly deception; what I understand from your comment is that they tried to squeeze the goose a bit too hard, so they brought the problem on themselves.
Anyway, for anyone who got one of those small Zotacs at 309€, I think it's a great option even with the cuts, because a 980 costs almost twice as much. Anyone who wants to game and isn't thinking about high resolutions or SLI made a great purchase, and surely even if they had stated the real specifications, the cards would have sold anyway.
regards
I don't think so; the secret would have been discovered much faster. The problem created by disabling the ROP/L2 unit is that you have to create two partitions, with the result that one part of the L2 has to serve 2 memory controllers, so access through those MCs is very slow (hence the bandwidth loss). Although the L2 can execute 4 operations at once (2 reads, 2 writes), you can only access one of the partitions at a time, so if you want to read and write in both partitions at the same time (when filling past the 3.5GB) you lose a clock cycle, because with the L2 shared between two MCs you can only execute 1 read and 1 write.
If you already have to go out to RAM/SSD to grab textures, just switch off and go home...
A more elegant solution would have been to remove the extra controller and restrict the bus to 224 bits and 3.5GB of VRAM; performance would be even better since you wouldn't have the fps dips and stuttering, but you can see the mess they've made...
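The clock-cycle argument above can be sketched as a toy model. This is an illustration of the reasoning, not a cycle-accurate simulation, and the request counts are made up:

```python
# Toy model of the partition argument above (an illustration only, NOT
# a cycle-accurate simulation). A full L2 slice serving one memory
# controller can retire 2 reads + 2 writes per clock; on the GTX 970
# the surviving slice behind the disabled partition serves 2 memory
# controllers, so it can only retire 1 read + 1 write per clock.
def cycles_to_drain(reads: int, writes: int, shared_slice: bool) -> int:
    """Clocks needed to drain a queue of read/write requests."""
    per_clock = 1 if shared_slice else 2     # ops of each kind per clock
    clocks_reads = -(-reads // per_clock)    # ceiling division
    clocks_writes = -(-writes // per_clock)
    return max(clocks_reads, clocks_writes)  # reads and writes overlap

print(cycles_to_drain(8, 8, shared_slice=False))  # normal partition: 4
print(cycles_to_drain(8, 8, shared_slice=True))   # shared slice: 8
```

Under these assumptions the shared slice takes twice as many clocks to service the same traffic, which is the halved effective throughput the post is describing.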
-
I don't think so; it would have been discovered much faster. The problem created by disabling the ROP/L2 unit is that you have to create two partitions, with the result that one part of the L2 has to serve 2 memory controllers, so access through those MCs is very slow (hence the bandwidth loss). Although the L2 can execute 4 operations at once (2 reads, 2 writes), you can only access one partition at a time, so if you want to read and write in both partitions at the same time (when filling past the 3.5GB) you lose a clock cycle, because with the L2 shared between two MCs you can only execute 1 read and 1 write.
If you already have to go out to RAM/SSD to grab textures, just switch off and go home...
A more elegant solution would have been to remove the redundant controller and restrict the bus to 224 bits and 3.5GB of VRAM; performance would be even better by not having the fps dips and stuttering, but we've already seen the mess they've made...
There's nothing more elegant about that. The card already automatically uses those 224 bits and 3.5GB when it doesn't need more, and it will always be better to access that slow 512MB than the far slower and, above all, much higher-latency system RAM over PCI-e.
I assure you there is no stuttering or phantom dips while staying within those 3.5GB, and even when they are exceeded; and I defer to the tests: the difference is marginal, and it's even highly questionable whether there is any stuttering at all (some extra spikes have been found in frametimes in tests using more than 3.5GB in some games when comparing with the GTX 980, but far from being definitive proof of anything, since in most cases it either keeps performance almost in line with no stuttering at all, or the GTX 980 also shows some high spikes under the same conditions).
And the "redundant" controller can't be removed; the most that could be done is to leave it without a memory chip but still "functional" in itself. It has already been explained: you can't disable the 32-bit controllers one by one (for a reason they usually behave more like a single 64-bit controller, that is, they work as a 32x2 pair, except in this case).
-
That's partly true: as long as it doesn't exceed 3.5GB of VRAM, the card works correctly, in fact very well. But as I say, as soon as you exceed that amount of VRAM, the penalty for not being able to access both partitions at the same time is huge; frankly, it's just a mess...
The extra controller, even when paired (32x2), can be disabled... just like the ROP/L2 unit. If pushed, I would even disable the entire pair, leaving more L2 available.
As for the stuttering, well, it's undeniable; I've seen it myself. I have friends with SLI, Tri-SLI and Quad-SLI who are simply fed up with 40-fps dips in Far Cry 4 and Shadow of Mordor, huge stutters; it feels like you're dragging enormous lag. And I refer you to the evidence:
Frame Rating: Looking at GTX 970 Memory Performance | Battlefield 4 Results
Weitere Benchmarks zur Geforce GTX 970 und dem 3,5-GiByte-"Modus"
Does the GeForce GTX 970 have a memory allocation bug? (update 3)
The frametimes are very clear: go over 3.5GB of VRAM and it's unbearable, and in configurations with more than one card it's just a mess... Here they explain everything very clearly:
NVIDIA Discloses Full Memory Structure and Limitations of GTX 970 | PC Perspective
GTX 970: 3.5 Go et 224-bit au lieu de 4 Go et 256-bit? - HardWare.fr
AnandTech | GeForce GTX 970: Correcting The Specs & Exploring Memory Allocation
Nvidia: the GeForce GTX 970 works exactly as intended - The Tech Report - Page 1
At the very least, they should do what EVGA did and refund the money, except it shouldn't be the board partners or wholesalers/stores paying, but NVIDIA...
EVGA devolverá el dinero a los compradores de una GTX 970 | El Chapuzas Informático
This is a real problem: with shoddy ports and even a couple of well-made games, in a year and a bit, with the new engines around the corner, it will be time to take them to court when you can't even play at 1080p.
I don't know how you can deny something that's all over the net and is causing a massive refund that I think is unparalleled XD.
Best regards.
-
I don't think so; it would have been discovered much faster. The problem created by disabling the ROP/L2 unit is that you have to create two partitions, with the result that one part of the L2 has to serve 2 memory controllers, so access through those MCs is very slow (hence the bandwidth loss). Although the L2 can execute 4 operations at once (2 reads, 2 writes), you can only access one of the partitions at a time, so if you want to read and write in both partitions at the same time (when filling past the 3.5GB) you lose a clock cycle, because with the L2 shared between two MCs you can only execute 1 read and 1 write.
If you already have to go out to RAM/SSD to grab textures, just switch off and go home...
A more elegant solution would have been to remove the extra controller and restrict the bus to 224 bits and 3.5GB of VRAM; performance would be even better since you wouldn't have the fps dips and stuttering, but you can see the mess they've made...
But I don't see the stuttering or bottlenecks from lack of VRAM in this video. On the 660 Ti or 660 it's noticeable; in that video what I see is a lack of power, a 970 can't handle that resolution, and I think that even with 8GB of VRAM you wouldn't see much difference in performance, because what's missing is horsepower.
It's wrong that they didn't state the true specifications, but now that refurbished 970s are starting to show up very cheap, I'll grab one; with a single card at 1080p I won't have problems.
I just don't see these problems.
I think it's a bit absurd, because if the 970 can't keep up, then a 780 Ti with its 3GB would be worth nothing either; besides, I've seen many videos where the card shows more than 3.5GB of VRAM in use and I don't see it making a difference. In that video it stutters, but once the second controller kicks in it's no longer noticeable.
Regards
-
Well, the first thing is that it's hard to notice stuttering in a video; the lags have to be very big. Even so, I notice quite a bit of stutter with the GPU at about 85-90%, which can go unnoticed, but as I say, I've seen lags like in the 660 video that in person must be spectacular. The problem, as all parties have already acknowledged, is less pressing than on the 660s thanks to better drivers, architecture and partitioning, yes, but personally I would never think of buying a 970, much less looking at an immediate future where SLI and CrossFire will probably be able to share VRAM in full, giving the starting signal for uncompressed textures in shoddy ports... But it's just my opinion XD.
-
Well, the first thing is that it's hard to notice stuttering in a video; the lags have to be very big. Even so, I notice quite a bit of stutter with the GPU at about 85-90%, which can go unnoticed, but as I say, I've seen lags like in the 660 video that in person must be spectacular.
The problem, as all parties have already acknowledged, is less pressing than on the 660s thanks to better drivers, architecture and partitioning, yes, but personally I wouldn't think of buying a 970, much less looking at an immediate future where SLI and CrossFire will probably be able to share VRAM in full, giving the starting signal for uncompressed textures in shoddy ports...
But it's just my opinion XD.
But it's running at 25 or 30 fps, so we can't tell whether it would stutter at 60 fps; that happens with any card as soon as it drops close to 20 fps.
I didn't buy a 970 or a 980 because they're GM204 and I don't see that chip as high-end, so I don't want to pay for it as if it were high-end; but if I see a 970 at 200€ I don't think I could resist.
Because it's still the same card that, a few days ago, most people thought was the best.
Regards
-
They also said it performed like a 780 Ti XD… Shadow of Mordor in SLI still has the same noticeable problems, even more so at a higher fps rate, but for that, the best thing is to see it in person.
-
They also said it performed like a 780 Ti XD… Shadow of Mordor in SLI still has the same noticeable problems, even more so at a higher fps rate, but for that, the best thing is to see it in person.
That always happens; everything new is said to outperform the old. But I wouldn't trade my 780 for a 970 for performance, because I think it would be a waste. It will be better in temperature and consumption, but mine also runs cool and makes little noise, which is why these new ones don't interest me.
If I saw one for 200€ then yes, I'd put it in another computer and keep both, but I don't upgrade unless it gains me a significant margin in performance.
regards
-
They also said it performed like a 780 Ti XD… Shadow of Mordor in SLI still has the same noticeable problems, even more so at a higher fps rate, but for that, the best thing is to see it in person.
And it does perform, besides having OC headroom that a 780 Ti doesn't have without messing with voltages. Although of course, given your signature it's understandable why there's so much insistence on how serious a problem this is and so much interest (another case of defending "mine is the best; I hate it when a new option offers the same or better for less").
But so you don't fool yourself too much, let me inform you:
3.5GB (not even counting the help of the 0.5GB, like it or not) is MORE than the 3GB of a Ti, so don't go splitting hairs claiming that somehow in SLI you can see a problem that doesn't exist on a single GPU (a problem with an SLI profile means absolutely nothing, and let me inform you of something else: using SLI does NOT magnify either the consumption or the memory requirements of a graphics card, least of all when working in AFR, which is the typical mode).
PS: said from the experience of seeing a GTX 970 in action with that little game, everything maxed out. Since you value that experience so much, well, there you go…. :troll:
There are some pretty serious tests looking for memory effects on single-GPU and SLI setups, finding no positive results or, at worst, cases so dubious and impractical that it's quite funny how people keep spinning stories by recording videos, which by definition affect, even if only slightly, the behavior of the system (more so when recording to a disk that's in use).
http://www.hardware.fr/focus/106/gtx-970-3-5-go-224-bit-lieu-4-go-256-bit.html
http://www.guru3d.com/news-story/middle-earth-shadow-of-mordor-geforce-gtx-970-vram-stress-test.html
http://www.pcper.com/reviews/Graphics-Cards/Frame-Rating-GTX-970-Memory-Issued-Tested-SLI
In all the cases where they manage to "show something" of the GTX 970's VRAM configuration problem, the GTX 980s don't deliver adequate performance either (we're talking <30 fps rates in BF4, or an average around 50 fps in CoD using multi-GPU, with peaks in both cases of almost 100 ms per frame, visible or not; whether the "experience" with a GTX 980 is better or not, in neither case can we talk about good playability).
And this is when pushing things to the max, painting pixels at full tilt (4K, 1600p with supersampling). We already know the GK110s get winded on the way to 4K, never mind that the VRAM also makes them give up; if they put a 780 Ti SLI config into those charts, maybe we'd all laugh even harder, if that's possible. :ugly:
-
If you want to troll, this is not the place. First of all, it has been more than proven that a 970 with OC only equals a reference 780 Ti without OC, and if in some game it beats it by a few fps, well, it's running at almost 1400/1800 in those tests…. You seem to be one of those still living in the month after their release and all its hype. There you go; I've already tested them more than enough, both single and in SLI.
The VRAM thing is getting tiresome. Seriously, "3.5GB is more than 3GB"….. that's monumentally silly; as has always been the case, you run out of graphics power before VRAM, all the more so with shoddy ports doing ramdisk…..
In SLI configurations the problem gets worse, and not only in Mordor; in Far Cry 4 it's unbearable. It's not about AFR, it's about the delay, in ns, generated when accessing partition 1 of the VRAM and its connection to the PCIe bus. But hey, you do you…..
As for the reviews, the links are of no use; curiously, everyone started posting them as soon as I dropped them on the forums…. But regardless, it's not that they "manage to show something", as you put it; it's clearly shown everywhere…..
Don't blame the problem on a lack of power at high resolutions; the data from the websites is very, very clear. The guru3d case is textbook, with a drop to 0 fps and a pitiful frametime graph compared to the 980.
On hardwarecanucks you can see perfectly the percentage of loss which, added to the 14% difference between the two cards (980-970), gives us an average of 18% worse in pure performance….. not counting the problems derived from nvidia's fantastic strategy with the 970.
And on pcperspective, if you don't acknowledge it there, we can stop; there's little more to discuss with you then…..
As for 780 Ti SLI, as I say, it's quite a bit more powerful and much more consistent; I don't see what weak laugh you'd get out of me….. Crysis 3 at 4K with 2xMSAA sits at 2.8GB of VRAM, to give you an example; Metro LL, more of the same. Shoddy ports exhaust the VRAM without you bringing it up for Maxwell, for example; in fact there's no performance loss from it because, as I say, they use ramdisk.
It's clear that if, after seeing the links above, you still think this way, I won't convince you otherwise; there's none so blind as those who will not see, even more so if you haven't tested them in person. But denying the obvious… bad.
As for the 780 Ti and the 970, well, like everything, you have to go look at reviews:
EVGA GeForce GTX 970 FTW ACX 2.0 Review - Page 5
MSI GeForce GTX 970 Gaming 4 GB Review | techPowerUp
To give you two quick examples; in fact the 780 Ti is only 1% slower at maximum overclock compared to the 980…. If you don't believe it, congratulations, but reality is very different.
Best regards.
-
If you want to troll, this is not the place. First of all, it has been more than proven that a 970 with OC only matches a reference 780 Ti without OC
Best regards.
Look, I didn't even bother to read further after the gem you dropped in the first sentence: first for being disrespectful, and second for opening with a nonsense claim whose only intent is to flame.
And stop lying, please, because people here weren't born yesterday:


And it's like that at any resolution TPU tests! A minimal difference between reference cards (especially the 970, fitted with the TDP and blower cooler of a GTX 670 and doing some thermal throttling) of 2-3% between the two cards (and don't come at me with percentage subtractions, a rookie mistake that I won't correct for you but will warn you about).
Nothing to do with the tuned cards, or with anything that mounts a "reference"-style blower like the GTX 770 or GTX 680, which are better than the "basic" blowers like the GTX 670/760.
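As an aside, the "percentage subtraction" warning is easy to illustrate: relative differences are ratios, so they are not symmetric and don't add up as percentage points. The fps numbers below are made up purely for the arithmetic:

```python
# Made-up fps numbers, purely to show the arithmetic.
fps_980, fps_970 = 114.0, 100.0

# The 980 is 14% faster than the 970...
faster = (fps_980 / fps_970 - 1) * 100
# ...but the 970 is only ~12.3% slower than the 980, not 14%:
slower = (1 - fps_970 / fps_980) * 100

print(f"980 is {faster:.1f}% faster; 970 is {slower:.1f}% slower")
```

That is why subtracting one review's "% faster" figure from another's gives a number that doesn't correspond to any real performance gap: the ratios have to be multiplied, not the points added.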
About 20 games go into that average, so enough crying wolf. By the way, given that I see a clear intention to flame, to be disrespectful on a personal level, and to lie about the most basic facts, I'm obviously reporting you for your clear intention to mock people.
Considering that mine comes with a slight factory OC, better cooling and less thermal throttling, and that I can also get an extra 15% of REAL performance without problems from a light OC without touching voltages, there's only one thing left to add:
Take your trolling elsewhere.
-
Since in the heat of an argument one can forget where one is, let's remember it.
In this forum you can talk and argue, even passionately, but don't mount an escalation of comments based on belittling the other person's arguments, because you should know by now that those things, at least here, end very quickly.
If you don't want posts to disappear, and your green squares to shrink or change color, then calm down.
Regards!
