Test of the Real Nvidia GTX Titan: single, SLI, Tri-SLI, 4-Way SLI
-
In the tests we use, which are preset, 99% run at a maximum resolution of 1080p. The only thing 4 GB of VRAM does there is slow you down…
Salu2.
Thanks for the confirmation; I had forgotten about getting more VRAM for better results, and it's quite clear it doesn't benefit the tests we use.
By the way, something has come out about a VMOD for the TITANs; it's being discussed on XtremeSystems in case anyone wants to check it out.
Salu2!
-
I wonder: if the Titan is the most powerful card out there now, when the 780 comes out, will it be more powerful than the Titan? Normally the next generation is a little stronger, but the Titan is already 50% more powerful than the 680, so what the hell is going to happen? What kind of monster are they going to launch with the 7xx series?
-
No. It's ALWAYS silent when the fan is on Auto. The 80º limit means the fan barely rises 10-15% at full load compared to idle. Don't overthink it; it's one of the quietest graphics cards I've ever had. That said, always with the fan on auto.
Salu2.
But, what about when it's NOT on automatic …?
How much do the temperature and the noise level go up?
Keep in mind that, depending on the project, it "stresses" the card to a greater or lesser extent, so I can NEVER leave the fan on AUTO; I have to adjust it manually..............
Regards.
-
There will be discussions and opinions, but given how long I've been in computing and the number of graphics cards (both ATI and nVidia) that have passed through my hands, I can assure you that the data bus HAS A LOT TO DO with the performance of a graphics card.
And therefore the bandwidth is unquestionable.
Regards.
The width of the bus per se does NOT matter; I'll repeat it a thousand times if necessary. What matters is the total bandwidth obtained, to which the bus can contribute a great deal, especially when the same type of memory is used with different bus widths.
But by itself, with the same bandwidth, a 128-bit bus and a 512-bit bus will give you THE SAME performance. In fact, it is more likely to perform worse with a very wide bus, because depending on its memory controller configuration it may waste a lot of the bandwidth it has.
That is what happened in pre-GeForce3 graphics cards, where there was not a separate memory controller for each 32- or 64-bit channel; when accessing a 32-bit piece of data, an entire line of, say, 128 bits was loaded, and the remaining 96 bits could go unused.
One of the main reasons the Radeon 8500 was inferior in real performance to its rivals, despite how capable the chip was on paper, is precisely this issue: the nVidia cards were much more efficient with a similar bus because they used several separate memory channels.
So if one had to say whether a wide bus is better or worse in itself, by itself it would actually be worse. Obviously, if the solution is well implemented (as it is today) and memory of similar frequency is used, a wider data bus is "better". But it is not "better" for some ethereal reason about the "goodness/quality" of a wide bus; it is simply because of plain bandwidth, which increases with the bus. Nothing more, nothing less. Same bandwidth, same performance, whether the bus is 1 bit or three million bits (I am deliberately ignoring latency, since it is a separate issue and has little influence in graphics).
-
Yes, in principle what matters is the effective bandwidth, regardless of how it is achieved, whether with a wider bus or with higher VRAM speeds...
Best regards.
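For what it's worth, the arithmetic behind that agreement can be sketched in a few lines. This is only an illustration (the function name and figures are mine, not from any card discussed in the thread): peak bandwidth is bus width in bytes times effective memory clock, so a narrow fast bus can equal a wide slow one.

```python
# Sketch of the bandwidth arithmetic discussed above: bandwidth depends on
# bus width AND effective memory clock, not on bus width alone.
# (Illustrative figures only; real cards also differ in controller efficiency.)

def bandwidth_gbs(bus_bits: int, effective_clock_ghz: float) -> float:
    """Peak memory bandwidth in GB/s: (bus width in bytes) * transfers per second."""
    return (bus_bits / 8) * effective_clock_ghz

# A narrow bus with fast memory can match a wide bus with slow memory.
narrow = bandwidth_gbs(128, 2.0)   # 128-bit bus, 2 GHz effective
wide   = bandwidth_gbs(256, 1.0)   # 256-bit bus, 1 GHz effective
print(narrow, wide)  # 32.0 32.0
```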
-
I wonder: if the Titan is the most powerful card out there right now, when the 780s come out, will they be more powerful than the Titan? Usually the next generation is a bit stronger, but the Titan is already 50% more powerful than the 680, so what the hell is going to happen? What kind of monster are they going to launch with the 7xx series?
I don't think the 780 will outperform the Titan. It will depend a bit on when they release it: if it's in half a year, I don't think it will match the Titan; if it's at the end of the year, it's more possible. If the specifications some websites are publishing are right, with 1920 SPs and a 384-bit bus, I don't think it will match the Titan. I mean this one:
NVIDIA GeForce GTX 780 | techPowerUp GPU Database
Difficult but not impossible, although a GTX 780 with 3 GB and good performance wouldn't be bad either. Now, to answer your question about how the Titans are doing these days: incredibly, you can already order one, with a 12-day delivery time, but at a price of €2,500 (three average workers' salaries). When things stabilize they should come down to €1,400.
Regards.
Not impossible, but difficult. Consider that if it takes two or three months to come out and the 780s arrive in early June, they won't be interested in releasing that GK110 anymore; it's cheaper for them to use a GK114 (or whatever they call the 780), and more profitable too. In performance they will probably be close, even though the K20 should have more muscle with more SPs, but they could give the 780 a higher clock and drop GPGPU, and in the end they would be very close. If they delay the 770/780 until the end of the year, then it's much more likely they will release another, more cut-down GK110.
It's incredible that they consume less than the Toxics (or so it seems to me), and on top of that with more graphics options and, I would say, less noise... anyway...
Regards.
In that video they seem to consume less and make less noise when gaming; in 2D they seem to consume a bit more. The noise is also lower: even with the microphone placed closer it sounds quieter, while with the Toxics the mic is right on top and they sound louder. I don't know what they're saying since they're German, but just from watching it seems clear.
Regards
-
The width of the bus per se does NOT matter; I'll repeat it a thousand times if necessary. What matters is the total bandwidth obtained, to which the bus can contribute a great deal, especially when the same type of memory is used with different bus widths.
But by itself, with the same bandwidth, a 128-bit bus and a 512-bit bus will give you THE SAME performance. In fact, it is more likely to perform worse with a very wide bus, because depending on its memory controller configuration it may waste a lot of the bandwidth it has.
That is what happened in pre-GeForce3 graphics cards, where there was not a separate memory controller for each 32- or 64-bit channel; when accessing a 32-bit piece of data, an entire line of, say, 128 bits was loaded, and the remaining 96 bits could go unused.
One of the main reasons the Radeon 8500 was inferior in real performance to its rivals, despite how capable the chip was on paper, is precisely this issue: the nVidia cards were much more efficient with a similar bus because they used several separate memory channels.
So if one had to say whether a wide bus is better or worse in itself, by itself it would actually be worse. Obviously, if the solution is well implemented (as it is today) and memory of similar frequency is used, a wider data bus is "better". But it is not "better" for some ethereal reason about the "goodness/quality" of a wide bus; it is simply because of plain bandwidth, which increases with the bus. Nothing more, nothing less. Same bandwidth, same performance, whether the bus is 1 bit or three million bits (I am deliberately ignoring latency, since it is a separate issue and has little influence in graphics).
Hello:
First of all, I want to make clear that I'm not looking to contradict anyone; no confrontations or bad vibes, please.
Mainly because I don't even come up to the soles of your shoes in technical knowledge (many of you already know this).
I CAN only speak from my experience…
I have run IDENTICAL work units from the same project on 256-bit cards and on 384-bit cards, and the 384-bit cards finished earlier.
Was the time difference overwhelming or exaggerated? I would say NO, but it was noticeable.
Let's handle the time with an imaginary example:
Graphics with 256 Bits: 1 minute and 55 seconds.
Graphics with 384 Bits: 1 minute and 30 seconds.
For me YES there is a difference.................
Regards.
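The imaginary timings above can also be expressed as a speedup. A minimal sketch (the figures are the poster's made-up example, not measurements); note the observed ratio comes out smaller than the raw 384/256 bus-width ratio, which fits the point made elsewhere in the thread that bus width alone doesn't dictate performance:

```python
# The imaginary timings above, expressed as a speedup. The observed speedup
# (~1.28x) is smaller than the raw bus-width ratio (384/256 = 1.5x),
# consistent with bus width alone not dictating performance.

t_256 = 1 * 60 + 55   # 1 min 55 s on the 256-bit card
t_384 = 1 * 60 + 30   # 1 min 30 s on the 384-bit card

speedup = t_256 / t_384
bus_ratio = 384 / 256
print(round(speedup, 2), bus_ratio)  # 1.28 1.5
```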
-
If this is confirmed that I have read:
Found this in the Nvidia forums…directed to ManuelG. Dude has nailed it:
Straight up question. Need a direct answer.
When using manual fan of 68% or higher with the 314.09 or 314.14 drivers, on all GTX 600 products And titan products, causes massive downclocking / throttling that did not occur with prior drivers.
314.07 did not exhibit this behaviour. The new drivers are adding the fan speed to the total TDP of the card somehow, and this is complete nonsense - I have MSI lightning GTX 680s yet I cannot use 70% manual fan or higher because these drivers DOWNCLOCK MY CARDS. I use MSI afterburner and EVGA precision to monitor my GPU boost clockspeeds 24/7. At 90% or higher load, and 70% fan, my cards will max boost to 1050mhz.. My normal boost speed is well past 1300. If I revert my driver to 314.07, manual fan has NO EFFECT on boost speeds.
This is not an isolated incident. Many people at overclock.net are discussing this issue. THIS SHOULD NOT HAPPEN. PLEASE GIVE US A DIRECT ANSWER AS TO WHY THIS IS HAPPENING. MANUAL FAN 68% OR HIGHER SHOULD NOT CAUSE OUR CARDS TO DOWNCLOCK AND THROTTLE.
THIS IS A BIG ISSUE ESPECIALLY WITH TITAN CARDS.
I look forward to your DIRECT answer.
So we would be looking at a driver problem; it also affects the 600 series, but only with the drivers released since the Titan.
Regards
-
Geltops, a card with a 128-bit bus and VRAM at 2 GHz has a bandwidth of 32,000 MB/s, and a card with a 256-bit bus and VRAM at 1 GHz has a bandwidth of 32,000 MB/s; that is, the same…

Regards.
-
But, what about when it's NOT on automatic...
How much do the temperature and the noise level go up?
Keep in mind that, depending on the project, it "stresses" the card to a greater or lesser extent, so I can NEVER leave the fan on AUTO; I have to adjust it manually...
Best regards.
I repeat, geltops, forget about the noise issue. This card is totally different from the others: it has a default target of 80º, which can be adjusted manually, and whether the card is under light or heavy load doesn't matter, it adjusts the fans so as not to exceed that temperature. At that temperature the fans are even less audible than those of a GTX 680. Having had AMD, I think you're cured of any fear of jet engines; next to those, this one is a tomb. It only roars at its maximum of 85%, which you'll never reach with the 80º target. Besides, there are plenty of tests both on YouTube and on review websites where you can clearly hear the fans.
Cheers.
If this is confirmed what I've read:
Found this in the Nvidia forums... directed to ManuelG. Dude has nailed it:
Straight up question. Need a direct answer.
When using manual fan of 68% or higher with the 314.09 or 314.14 drivers, on all GTX 600 products And titan products, causes massive downclocking / throttling that did not occur with prior drivers.
314.07 did not exhibit this behaviour. The new drivers are adding the fan speed to the total TDP of the card somehow, and this is complete nonsense - I have MSI lightning GTX 680s yet I cannot use 70% manual fan or higher because these drivers DOWNCLOCK MY CARDS. I use MSI afterburner and EVGA precision to monitor my GPU boost clockspeeds 24/7. At 90% or higher load, and 70% fan, my cards will max boost to 1050mhz.. My normal boost speed is well past 1300. If I revert my driver to 314.07, manual fan has NO EFFECT on boost speeds.
This is not an isolated incident. Many people at overclock.net are discussing this issue. THIS SHOULD NOT HAPPEN. PLEASE GIVE US A DIRECT ANSWER AS TO WHY THIS IS HAPPENING. MANUAL FAN 68% OR HIGHER SHOULD NOT CAUSE OUR CARDS TO DOWNCLOCK AND THROTTLE.
THIS IS A BIG ISSUE ESPECIALLY WITH TITAN CARDS.
I look forward to your DIRECT answer.
We would be facing a driver problem, it also affects the 600 series, but only the drivers released since the Titan.
Best regards
It was obvious that the fan had a clear influence on the OC in synthetic benchmarks, not so much in games. And I'm glad it's a driver issue and not firmware, because that's easy to fix.
Cheers.
-
There you go, drivers were just what we needed…
-
I repeat, geltops, forget about the noise issue. This card is totally different from the others: it has a default target of 80º, which can be adjusted manually, and whether the card is under light or heavy load doesn't matter, it adjusts the fans so as not to exceed that temperature. At that temperature the fans are even less audible than those of a GTX 680. Having had AMD, I think you're cured of any fear of jet engines; next to those, this one is a tomb. It only roars at its maximum of 85%, which you'll never reach with the 80º target. Besides, there is plenty of evidence both on YouTube and on review websites where you can see and hear the fans clearly.
Regards.
It was clear that the fan had a marked influence on the OC in synthetic benchmarks, not so much in games; and, moreover, it did so through the TDP. I'm glad it's a driver issue and not firmware, because that has an easy fix.
Regards.
I have 2 ATI HD 7970s and that turbine sound is unmistakable, "MADE IN ATI" :lol::lol::lol:
Thanks.
-
It was clear that the fan had a marked influence on the OC in synthetic benchmarks, not so much in games; and, moreover, it did so through the TDP. I'm glad it's a driver issue and not firmware, because that has an easy fix.
Salu2.
The FAN can be controlled from KGB, I mean the min and max, in case that helps you.
I'll copy you a piece of the KGB configuration file:
# EXPERIMENTAL: This Setting makes the checksum calculate to
# the same value it originally was by manipulating an unused
# section of the bios. This may be needed for the new style
# UEFI vbios. Set this to 1 if you want to preserve the orig
# checksum. Set to 0 for the previous behavior of re-calculating
# the checksum. NOTE: if you're not having driver detection
# problems leave this at 0.
# Preserve_Original_Checksum = 0

# Fan settings
Fan_Min = 30
Fan_Max = 100

# Board power settings
Max_Power_Target = 150

# Max Boost Frequency. Uncomment this if you want to change the
# maximum frequency your card will boost to.
#Max_Boost_Freq = 1228

# WARNING:
# The following are valid voltages. I suggest you
# use these values rather than coming up with your
# own. 1212500 is the max and it is normally hard limited.
# If you go over the max and your board is hard limited
# you may actually get a much lower voltage than you
# expect.
#
# Voltage = 1212500
# Voltage = 1200000
Voltage = 1187500
# Voltage = 1175000
# Voltage = 1162500
# Voltage = 1150000

If, on top of the driver progress, you can also set the GPU to a 150% power target, ufff, and voltage control is already available up to 1.21 V… things are looking up for the Titans. ;D;D
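In case it helps anyone scripting around such a file, here is a minimal, hypothetical sketch of parsing that kind of "Key = Value" config text ('#' starting a comment, one setting per line). The parser is my own illustration and is not part of the KGB tool itself:

```python
# Minimal sketch of parsing KGB-style "Key = Value" config lines, assuming the
# format quoted above: '#' starts a comment, one setting per line.
# This parser is only an illustration, not part of the KGB tool.

def parse_kgb_config(text: str) -> dict:
    settings = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if "=" in line:
            key, _, value = line.partition("=")
            settings[key.strip()] = int(value.strip())
    return settings

sample = """
Fan_Min = 30
Fan_Max = 100
Max_Power_Target = 150
# Voltage = 1212500
Voltage = 1187500
"""
print(parse_kgb_config(sample))
```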
-
Geltops, a card with a 128-bit bus and VRAM at 2 GHz has a bandwidth of 32,000 MB/s, and a card with a 256-bit bus and VRAM at 1 GHz has a bandwidth of 32,000 MB/s; that is, the same…

Regards.
Possibly for computing he's more interested in the SPs, or in the card not being capped for GPGPU; that's why a 580 or a 7970 might get better results than a 680. It wouldn't be because of a wider bus, but because they have more GPGPU power.
Depending on the program he uses, a 580 might do better than a 680; in games that wouldn't happen.
Regards
-
Is it known if any water block will be released for the Titan?
-
@josele.:
is it known if any water block will be released for the Titan?
There are already water blocks for TITAN:
EK first with GeForce GTX Titan Full-Cover water block | EKWaterBlocks
Arctic has even released a hybrid with liquid cooling:
Professional Review » Inno3D iChill GTX Titan Accelero Hybrid
Salu2.
-
Hello:
First of all, I want to make clear that I'm not looking to contradict anyone; no confrontations or bad vibes, please.
Mainly because I don't even come up to the soles of your shoes in technical knowledge (many of you already know this).
I CAN ONLY speak from my experience…
I have run IDENTICAL work units from the same project on 256-bit cards and on 384-bit cards, and the 384-bit cards finished earlier.
Was the time difference overwhelming or exaggerated? I would say NO, but it was noticeable.
Let's handle the time with an imaginary example:
Graphics with 256 Bits: 1 minute and 55 seconds.
Graphics with 384 Bits: 1 minute and 30 seconds.
For me YES there is a difference.................
Regards.
Nope, no bad vibes at all, but some people claim that bus width alone makes performance better, and that's not the case.
You yourself are explaining it with your experience: same work units, but the 384-bit cards finish earlier, simply because they probably have more bandwidth. If they used slower memory than the 256-bit cards, you might have seen the opposite.
We have to understand that bus width HELPS to get more bandwidth (or more total memory when it's needed; it's easier to put more memory on cards with a wide bus), and that is how it sometimes translates into better performance. But by itself it doesn't define the goodness of an architecture; that's defined by other parameters.
-
Hello:
First of all, I want to make clear that I'm not looking to contradict anyone; no confrontations or bad vibes, please.
Mainly because I don't even come up to your shoelaces in technical knowledge (many of you already know this).
I can only speak from my experience…
I have run IDENTICAL work units from the same project on 256-bit cards and on 384-bit cards, and the 384-bit cards finished earlier.
Was the time difference overwhelming or exaggerated? I would say NO, but it was noticeable.
Let's handle the time with an imaginary example:
Graphics of 256 Bits: 1 minute and 55 seconds.
Graphics of 384 Bits: 1 minute and 30 seconds.
For me YES there is a difference.................
Regards.
Do you think a GTX 285 would finish the job before a Titan/Tesla K20X? The first has a 512-bit bus and the latter have a 384-bit bus…
By the way, I don't know which cards you tested, but as far as I know the same GPU has never existed with two data buses of different widths. The closest case is G80 vs G92 (8800GTX vs 9800GTX), and even those are different chips despite being quite similar; the G80's larger bus gave it greater bandwidth. In short, what is certain is that no GPU with computing capabilities has been sold with two different buses and the same bandwidth (it wouldn't make sense to fit a GPU with a larger, more expensive bus only to end up with the same bandwidth as a smaller one).
-
Do you think a GTX 285 would finish that job faster than a Titan/Tesla K20X? The first has a 512-bit bus and the latter have a 384-bit bus…
By the way, I don't know which cards you tested, but as far as I know the same GPU has never existed with two different data buses. The closest case is G80 vs G92 (8800GTX vs 9800GTX), and even those are different chips despite being quite similar; the G80's larger bus gave it more bandwidth. In short, what is certain is that no GPU with computing capabilities has been sold with two different buses and the same bandwidth (it wouldn't make sense to fit a GPU with a larger, more expensive bus only to end up with the same bandwidth as a smaller one).
Well, I had never considered that fact, and YOU are completely right…
It's impossible for the bandwidth to be the same with different data buses.
Regards.
-
Well, I had never considered that fact, and YOU are absolutely right…
It's impossible for the bandwidth to be the same with different data buses.
Best regards.
Not impossible, but very unlikely: either very different memory types are used on the "same" chip with different bus widths, or the memory bandwidth will clearly differ (the gap between a 512-, 384- and 256-bit bus is too big to compensate with frequency alone). That is, when a manufacturer releases a chip with a given bus, it's to squeeze it properly; it wouldn't make sense to be able to use a smaller bus for the same bandwidth, because that would go against their manufacturing costs. One of the main reasons for the bus reduction from GT200 to GF1x0 is that, by changing the memory type from GDDR3 to GDDR5, they could increase the bandwidth somewhat while using a narrower bus, cutting costs while increasing performance.
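To put rough numbers on that GT200 to GF1x0 example: using approximate effective memory clocks for a GTX 285 (GDDR3) and a GTX 580 (GDDR5), the narrower GDDR5 bus still comes out ahead on bandwidth. A sketch with approximate, illustrative figures:

```python
# Rough illustration of the GT200 -> GF1x0 point: a narrower bus with faster
# GDDR5 can beat a wider bus with GDDR3. Clocks below are approximate
# effective memory clocks (GT/s) for a GTX 285 and a GTX 580.

def bandwidth_gbs(bus_bits, effective_gts):
    return bus_bits / 8 * effective_gts

gtx285 = bandwidth_gbs(512, 2.48)  # 512-bit GDDR3 -> ~159 GB/s
gtx580 = bandwidth_gbs(384, 4.00)  # 384-bit GDDR5 -> ~192 GB/s
print(round(gtx285), round(gtx580))  # 159 192
```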