Comparison: Intel i7-3930K @ 4.8 GHz vs. AMD FX-8350 @ 4.8 GHz
-
I don't understand why AMD didn't move the PCIe controller onto the CPU with the FX. I think they gave too much weight to backward compatibility; that can be good for people who already own the platform, but since Intel changes sockets much more often, AMD ends up falling behind: Intel designs a platform around a single series of processors, while AMD designed theirs with older processors in mind too.
Intel also got a bit stuck with socket 775, and it was when they introduced tick-tock that they started to advance and our wallets started to tremble. Anyway, for normal people who don't run more than one graphics card, and not the most powerful ones, a computer can last a long time. Anyone who buys a 2500K will have a computer for years, just as the Q6600 has lasted a long time. Where it struggles more is with three or four cards, and a 2500K is perfectly valid for three cards.
AMD was better than Intel with the K7 and K8, but then they started to lose ground, partly because Intel is more advanced in its manufacturing process. It's a shame there isn't more competition; although AMD is weaker for gaming computers, in other areas not so much.
In SiSoft Sandra's memory test there is a big difference; it shows that Intel hired the engineer AMD used to have. With the K8 it was the other way around: AMD usually handled memory better.
Good comparison.
Regards
-
Thanks for the contribution ;D
-
And have you tried with the 3 graphics cards to see the difference? Although I suppose the AMD platform won't be able to handle it, and if it does, the bottleneck it causes will be even greater.
Best regards and thanks for the work.
-
Thanks fjavi, that's what I see in the AMD platform: it never quite "takes off". The backward compatibility is very good for its users, who keep their existing base and don't pay so much for a platform upgrade, but it's a burden in the long run. Also, the path they've followed toward a greater number of cores doesn't seem the most appropriate to me; with the current landscape of applications, it's rare for one to take good advantage of even a quad core. Fortunately, more and more applications are making use of our processors' threads.
In pure processor performance, like Sandra or image rendering, Intel is far ahead, for the reasons you explained so well ;).
Regards…
@Praimus: Thanks for the contribution ;D
Thanks, buddy ;), every now and then we like to get into these "fights" :).
A hug.
No worries ELP3. Yes, the board does allow 3-way at exactly x16/x8/x8, but the layout didn't let me fit the three GTX 670 OC cards. Right now I can't remember exactly why, but the intention was to run the tests in 3-way.
Although, as you say, with just 2 you already notice a big difference versus Intel; with three it would be even bigger. I didn't get to play with the AMD for long enough to get an idea of its real performance in games.
Regards.
P.S. Thanks to everyone…
-
Thanks for the information; even without trying them it was clear the difference would be considerable.
But the AMD costs less than half of the Intel, so it's not bad. Regards
-
Jotole, AMD isn't so bad in multithreaded applications; where it falls down more is when you put powerful graphics cards in SLI or CF. In servers and multithreaded applications they don't come off badly; it's mostly in games where they lose.
It seems Asus is showing a board with a PLX chip and 2 PCIe 3.0 slots for AMD; I don't know how it will go. It has two other PCIe 2.0 slots, but two run at PCIe 3.0, which are the ones the PLX handles. Here's a news item: CES 2013: ASUS brings PCIe 3.0 support to AM3+ processors
An IB-E ES has also been seen already, but most likely they'll release an X89 for those processors.
http://www.hardwaremx.com/news/ya-se-oferta-en-ebay-un-supuesto-procesador-ivy-bridge-eep-es/
Regards
-
I've just landed on X79 and they're already talking about releasing the X89; my god, this computing hobby is ruinous :chuckles:
Thanks for the tremendous work, Jotole. Of course, as you rightly say, AMD's most glaring pending task is multiGPU; the differences are overwhelming.
Best regards.
-
I doubt it's worth switching from X79 to X89; is it really worth it for someone with a 2500K to switch to a 3570K?
Maybe I'm wrong, but between Intel having no competition and the general rule that from tick to tock things don't usually improve much... I wouldn't expect anything striking.
Best regards.
-
+1
Regards
-
This post has been deleted! -
Since it already has PCIe 3.0, I don't see the point either.
-
You're welcome; that's what the tests are for, to share them ;).
You're right there: if we look at the performance/price ratio, it's true that you're paying for what you get; in that respect AMD always does its homework very well.
We've known for a long time that whoever wants the maximum has to dig deep into their pocket, and often the performance obtained isn't in line with what we pay.
Regards…
I don't see it that way in multithreaded applications; at least in the benchmarks I've run, Intel practically doubles its performance.
But of course, that's only known for certain by working with them daily. I've only run a few benchmarks; I don't work with those applications…
Regards...
I think like you; I don't believe the gain from the new processors is worth a platform change, especially seeing the power these 3930s have at 5 GHz ;).
Sorry for not replying sooner, but I'm wrapped up in another project that you'll soon be able to see in the mods thread.
Regards to everyone…
-
This post has been deleted! -
There's nothing certain here; according to the roadmap, IB-E will be released in Q3 2013. I suppose they'll release another chipset, or maybe not, but that would seem strange, since with IB they also launched the Z77, so it would be normal to see X89 motherboards at CeBIT.
Intel's Ivy Bridge-E Set for Q3 2013, Shows Leaked Slide
Regards
-
This post has been deleted! -
Great comparison Jotole ;). Of course, on the AM3+ platform PCIe 3.0 doesn't make sense, because even with PCIe 2.0 the platform doesn't offer enough bandwidth in multiGPU to make the cards perform well (or, at least, not enough to keep the GPU-CPU communication free of bottlenecks), especially with powerful cards. The bandwidth that SLI/CF with 2 PCIe 2.0 x16 GPUs could offer is choked by the HyperTransport bus, which is already quite outdated (the latest standard, HT 3.1, is from 2008). It's enough for 2 PCIe 1.0 cards but not for two PCIe 2.0 cards.
The HT bus can work with 16-bit or 32-bit links, with 32 bits achieving twice the transfer speed of 16 bits, but AMD, since the beginning with HT 1.0 on the Athlon 64 socket 754, has implemented the 16-bit version, which from the start limits the HT bus to half its possible speed.
With 16 bits on HT 3.0, which is what the AM3+ platform implements, a maximum unidirectional bandwidth of 10.4 GB/s can be achieved. PCIe 2.0 running at x16 offers a unidirectional bandwidth of 8 GB/s, but in the case of SLI/CF at x16-x16 that bandwidth doubles to 16 GB/s unidirectional (logically, using two x16 links multiplies the total bandwidth by 2); however, the HT bus remains limited to 10.4 GB/s, which is why multiGPU solutions on this platform literally drown in this mode of operation: the GPUs can transmit 16 GB/s of data, but the HT bus can't carry more than 10.4 GB/s. With PCIe 3.0 the platform would simply be unable to offer the bandwidth that bus needs; a single powerful PCIe 3.0 card would exhaust the HT bus on its own, since one card would offer 16 GB/s at x16 (double that of PCIe 2.0), which would be limited by the 10.4 GB/s of HT 3.0. The only solution would be to switch to a 32-bit HT bus, which would still be a bottleneck for SLI/CF with 2 PCIe 3.0 cards running at x16-x16.
As fjavi comments, having the PCIe controller in the chipset instead of in the CPU also affects performance; communication that is as direct and as short as possible will always be better. And it must also be said that we already know how the Bulldozers perform: if the game in question can't use their 8 cores, performance is poor compared to Intel's 4-core CPUs, let alone the 6-core SB-E...
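The arithmetic above is easy to sanity-check. A minimal Python sketch (my own illustration, not from the post, assuming the usual 2.6 GHz HT 3.0 link clock on AM3+ with double data rate, and PCIe 2.0's 0.5 GB/s per lane after 8b/10b encoding):

```python
# Back-of-the-envelope check of the unidirectional bandwidth figures above.

def ht_bandwidth_gbs(clock_ghz: float = 2.6, link_bits: int = 16) -> float:
    """HyperTransport bandwidth in GB/s: clock * 2 transfers/cycle (DDR) * link width in bytes."""
    return clock_ghz * 2 * link_bits / 8

def pcie2_bandwidth_gbs(lanes: int = 16) -> float:
    """PCIe 2.0 bandwidth in GB/s: 5 GT/s with 8b/10b encoding -> 0.5 GB/s per lane."""
    return 0.5 * lanes

ht = ht_bandwidth_gbs()               # 10.4 GB/s: the AM3+ ceiling
one_gpu = pcie2_bandwidth_gbs()       # 8.0 GB/s: a single x16 card fits under it
two_gpus = 2 * pcie2_bandwidth_gbs()  # 16.0 GB/s: an x16-x16 SLI/CF exceeds the HT link

print(f"HT 3.0 (16-bit):        {ht:.1f} GB/s")
print(f"One PCIe 2.0 x16 GPU:   {one_gpu:.1f} GB/s -> bottlenecked: {one_gpu > ht}")
print(f"Two PCIe 2.0 x16 GPUs:  {two_gpus:.1f} GB/s -> bottlenecked: {two_gpus > ht}")
```

The same functions show why a 32-bit link would only postpone the problem: doubling `link_bits` to 32 gives 20.8 GB/s, which two PCIe 3.0 x16 cards (roughly 16 GB/s each) would still exceed.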
But then I don't understand why Asus is releasing a board with PCIe 3.0 for AM3+, nor why AMD didn't change the buses with the FX platform; they should have focused on new processors and changed the buses, although perhaps that takes a lot of R&D money. A lot of time has passed since socket 754; the previous AMD CEO must have made a mistake somewhere, whether by not starting down the SoC route or by never releasing a socket exclusive to one series of processors. Intel is more advanced in process, and since it also changes sockets as often as shirts, it's normal that it pulls away, although it seems that in other areas AMD isn't doing so badly.
I don't know whether the K20X takes advantage of a better CPU or platform; I suppose they're like these Titans, that is, in theory, if the Titan supercomputer had Intel instead of Opteron it should be more powerful.
Regards
-
The Asus thing is pure marketing, and in that sense there won't be much difference, because with one GPU you don't saturate PCIe 2.0 yet. So in that case, even though the PCIe 3.0 is limited (I don't know how on earth Asus implemented it, but it will be in some hacky way, because no AMD chipset supports PCIe 3.0), I don't think there will be any loss of performance in monoGPU, for now.
-
It seems they've put a PLX chip on it, but that's just a patch; what's needed is for the architecture to be ready for PCIe 3.0, so that if it supports SLI/CF of 2 or more cards, it's because it can actually handle them well.
Noticias3D - ASUS shows the first AMD motherboard with PCI-Express 3.0
For now it looks like, with the Titan (still pending seeing it here), two or three cards will push even the most modern Intel platforms to their limits. I don't know what will happen with Haswell, but these Titans look like they'll demand a lot from a system; and although I suppose whoever runs two or three will have at least a 1440p monitor, even at 1080p they look like they'll need a hell of a setup.
Regards
-
Well, considering that Haswell is initially only going to have a maximum of 4 cores, I don't think they'll make much of a difference. Maybe when the 6- or 8-core Haswells come out on socket 2011…