CPU vs. GPU
-
Well, following up on the thread that was opened here: Duda sobre cuello de botella… - HardLimit.
"Sanity" starts with bothering to read replies in full when you rely on other people's data, and with not ignoring the performance graphs attached to the previous reply. I have spoken very clearly about what limits and what doesn't, but apparently none of that counts, and instead they come back at me with tests based on TechSpot benchmarks of the very same game.
Because if javi1221 buys a graphics card like either of these two options to play at 720p at first, and maybe at 1080p later, it's clearly to play with everything maxed out, and in that case what I'm describing can happen. I'm not saying this CPU is "no good"; I'm saying it can limit performance in more than one situation, and in real gameplay at that.
Bm4n, THIS is what actually happens with Tomb Raider in some stages:

This one with the i5 running at 2 GHz, maximum quality but at 720p: 26.6 fps.

This one with the i5 at 3 GHz and everything else the same: 36.9 fps.

This one with the i5 at 4 GHz: 44.3 fps.

And finally at 4.5 GHz: 48.5 fps.

This is the configuration, maxed out and at 720p. Because scaling is done via the GPU instead of the monitor, at non-native resolutions the screenshots still come out at 1080p; apologies for how hard the data is to read, but it is what it is.
But the data is there: even at 4.5 GHz the i5 still can't get close to 60 fps. And this isn't even the worst possible scene in the game, though it is representative. These are the two longest stages of the game, so it's not that "occasional", although if you get around 45 fps it can be passable. But it's clear that an i5 that isn't running fast enough won't even hold 40 fps, and in this game that's noticeable.
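Out of curiosity, the scaling in those four screenshots is easy to check numerically (a quick sketch; the only inputs are the clock and fps figures quoted above):

```python
# FPS vs. CPU clock from the screenshots above. Scaling efficiency near
# 1.0 means fps rises as fast as the clock does, the signature of a
# CPU-bound scene (a GPU-bound one would show efficiency near 0).
clocks_ghz = [2.0, 3.0, 4.0, 4.5]
fps = [26.6, 36.9, 44.3, 48.5]

for i in range(1, len(fps)):
    clock_gain = clocks_ghz[i] / clocks_ghz[i - 1] - 1.0
    fps_gain = fps[i] / fps[i - 1] - 1.0
    print(f"{clocks_ghz[i-1]} -> {clocks_ghz[i]} GHz: "
          f"+{clock_gain:.0%} clock, +{fps_gain:.0%} fps, "
          f"efficiency {fps_gain / clock_gain:.2f}")
```

Efficiency stays well above zero across the whole range, which matches the point that even at 4.5 GHz the CPU, not the card, is still the limit.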
I'm not saying this i5 isn't enough in most cases, but that it will limit both the 7870 and the 7950 in CPU-demanding games, such as the already-mentioned Crysis 3 or, as I pointed out, Tomb Raider: that's for sure.
I'm talking about something very specific: the suitability of that CPU, which is not entirely up to exploiting whichever card he buys, because, like it or not, many games lean heavily on the CPU, either because they are badly programmed (Tomb Raider uses a single thread to manage the graphics subsystem) or because they are genuinely demanding (Crysis 3 loads CPUs to the limit).
There will be moments, whether with the 7870 or the 7950 (which for playing at 720p for now makes no difference), when that CPU becomes a drag. Heavily CPU-loaded situations always show up in certain genres (right now I'm playing another game that demands a fast CPU, StarCraft II and its expansion: it handles complex maps with lots of units, and you'd better have a CPU with some muscle).
And yes, I would upgrade the graphics card first, but I'd keep an eye on upgrading the CPU in the next refresh if I wanted to get 100% out of the card; people tend to think the CPU matters little, but it carries its weight.
And the moral for javi1221:
Buy the card you can afford now, especially if you're moving to 1080p soon, but if you want to enjoy games at 100%, don't completely rule out upgrading the CPU at some point after the graphics upgrade. The performance that i5 offers is very close to what you see in my screenshots for the 3 GHz result (a few fps more), and as you can see, in this game the gap compared with faster configurations is considerable.
And it's not the only one. It's not the most common case, but you'll run into this situation more often than many people think. If you find the system struggling with the 7870/7950 you get, consider whether it's the CPU (check the GPU's workload).
Not so that you upgrade now, but so that you don't lose sight of the fact that this CPU, good as it is, can fall short in some cases.
Yes, I do read what you post, for heaven's sake; it may even turn out that I agree with you :risitas:
Well man, that's great, this way we can see what you're talking about. From 36 fps (3 GHz) to 48 fps (4.5 GHz) is about +33%, which is very considerable. But from what I see in the screenshots, neither the GPU nor the CPU is working flat out, right? Because normally in a bottleneck, correct me if I'm wrong, the CPU would be fully loaded and the graphics card wouldn't. Which card are you using, by the way?
As I was saying, I think Tomb Raider has always been a mess on PC; they'll probably have to patch it somehow. It doesn't look to me like a case of the system lacking performance. In fact I can state it outright, because I play it on the PS3 and it runs great, so it's obviously a very bad port.
-
Yes, but the GPU is working, no matter what resolution you set (well, without going above 1080p or 1200p), at something like 50% through much of these stages, with Vsync disabled and all the rest. Which is an indicator that the game is CPU-dependent.
Actually, looking at the CPU load, although no core shows 100%, you can see that one core (sometimes it looks like two, but that's an "artifact" of how load is measured) carries 80% or more. I attribute this to rebalancing of the graphics subsystem's threads, because it isn't even the same core carrying that load from one screenshot to the next.
But anyway, the other cores sit around bored while one of them appears to be working, and meanwhile the graphics card is also at half throttle. That points to the CPU-dependency problem I mentioned. Experimenting with the game's graphics options, I found that this performance problem is tied to one very specific setting: the LOD, or "level of detail", in the game's graphics options.
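That "one busy core, rest idle" pattern can be described in a couple of lines; a sketch (the thresholds and sample loads are made up for illustration):

```python
# Flag the load pattern typical of a game whose graphics subsystem runs
# in a single thread: exactly one heavily loaded core, the rest light.
def single_thread_bound(per_core_load, busy=80.0, idle=40.0):
    """True if one core is heavily loaded and all others are light."""
    hot = [p for p in per_core_load if p >= busy]
    cold = [p for p in per_core_load if p < idle]
    return len(hot) == 1 and len(cold) == len(per_core_load) - 1

# Hypothetical samples for a 4-core i5, in the spirit of the screenshots:
print(single_thread_bound([85.0, 20.0, 15.0, 10.0]))  # one hot core -> True
print(single_thread_bound([60.0, 55.0, 58.0, 62.0]))  # even spread  -> False
```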
What this option does is define how an object's geometry and textures change with distance (or whether it disappears entirely), which is what makes maps like this one, the Mountain Village, and Shantytown heavy on the CPU: they are very open, with views of terrain, buildings, and objects at great distances.
With the LOD at maximum, the CPU is forced to process a huge number of objects at a very high level of complexity, and this saturates it, all the more because it seems to be work done by a single thread of the game.
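The mechanism being described can be sketched as distance-based tier selection; a minimal illustration (the cutoff distances and tier names here are invented, not the game's actual values):

```python
# Minimal sketch of distance-based LOD selection: the higher the quality
# setting, the farther away objects keep their detailed meshes, so many
# more objects per frame need full processing by the render thread.
LOD_CUTOFFS = {            # hypothetical draw distances per detail tier
    "normal": (30, 80),    # (full-detail radius, cull radius) in meters
    "high":   (60, 160),
    "ultra":  (120, 320),
}

def lod_tier(distance, setting):
    full, cull = LOD_CUTOFFS[setting]
    if distance <= full:
        return "full mesh"
    if distance <= cull:
        return "reduced mesh"
    return "culled"

# The same faraway hut is free on "normal" but fully detailed on "ultra":
print(lod_tier(100, "normal"))  # culled
print(lod_tier(100, "ultra"))   # full mesh
```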
This doesn't happen on console, probably because the maximum LOD level the PS3 uses (besides almost certainly not being configurable) corresponds to "low" or "normal" on PC, where some very high LOD levels have been added as an "extra" to improve graphics quality (or rather, graphical fidelity at distance) at the CPU's expense.
It's more a matter of poor optimization (using a single core for this work, defining LOD levels nearly indigestible for current CPUs, at least mine) in the PC version of the game and its extras.
But anyway, something similar happens with Borderlands (there it's another core that carries the entire workload, with dynamic shadows), possibly a PC-exclusive graphics option that gets expensive in certain overly complex maps (for an average rig), and the same goes for other games that are better optimized but still demanding in this respect.
-
I think it has more to do with a game's poor development than with a general need for CPU. For example, with GTA IV something similar to Tomb Raider happened: it was quite hard to run decently at maximum settings both on an i7 and on my i5, with a 660 Ti and a CrossFire of 5850s respectively, apart from the urgent need for decent drivers and a couple of patches. Some recent games with quite high graphics demands show no fps drops despite using the CPU at 100%; I have been able to verify this. I still have to test Tomb Raider and Crysis 3, but for this reason, having verified it in other games, I lean toward game development as the cause of these problems. Let's be realistic: although newer games demand a lot from both CPU and GPU, the most recent i5s are nothing to sneeze at, even more so when overclocked, so calling them "weak chips" is nonsense. It's also worth taking other details into account, such as memory or, above all, the hard drive... because that has a direct influence: running the tests on a mechanical disk with high read/write speed is not the same as running them on one of the fastest SSDs on the market, to give an example.
In short, there are many factors, but it's the games that open this debate, and as such I believe they are the main culprits.
Regards
-
As I already said, there is some poor optimization along the way, but in Tomb Raider's case, for example, it's clear to me that multithreading is not being used efficiently for the graphics subsystem, which goes underused.
But this situation is very normal, since programming an application is not so trivial, nor is splitting each subsystem of the game "to infinity" into threads, and those in turn into other threads. What was often done was simply to put, say, the physics engine exclusively in one thread, the graphics engine itself in another, the AI in another, the sound subsystem in another, another for I/O, and so on.
The problem is that the subsystems don't all need similar performance. There are specific ones, like graphics, that weigh heavily on the CPU, and the optimal approach would be to split that subsystem so it works across several threads and cores instead of running on a single thread.
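The argument can be put as a toy model (all per-frame costs below are invented; only the structure matters):

```python
# Toy model of "one thread per subsystem": subsystems run in parallel,
# so the slowest one sets the frame time, no matter how many cores idle.
# Splitting only the heavy subsystem (graphics) across threads is what
# actually raises the frame rate.
def frame_time(subsystem_ms, graphics_threads=1):
    costs = dict(subsystem_ms)
    costs["graphics"] /= graphics_threads   # ideal split across N threads
    return max(costs.values())              # slowest subsystem sets the pace

subsystems = {"graphics": 24.0, "physics": 4.0, "ai": 3.0, "sound": 1.0}

print(frame_time(subsystems))                      # 24.0 ms -> ~42 fps
print(frame_time(subsystems, graphics_threads=4))  # 6.0 ms  -> ~166 fps
```

Note the split is "ideal" here; real engines never scale perfectly, which is part of why few games bother.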
But as I say, this kind of thing is not easy at all. On top of that, it's often a multiplatform game, as in this case. With Tomb Raider running on PS3, Xbox 360, and PC, it's clear that the common denominator in terms of cores and threads is the Xbox 360, where the graphics subsystem would "logically" go in a single thread (and, if possible, using one core exclusively, without relying on SMT), with the other subsystems on the remaining cores.
The PS3 and PC versions would then be little more than reworks of that base to adapt it, with changes that aren't always fortunate. If this game used the cores efficiently in a multithreaded fashion, it would have no problem running at 60 fps with everything maxed on a 3.5 GHz i5, but sure, of course.
But that's not the case, and in part I understand why it is so. Or rather, "I understand them" (the developers).
-
I used to understand the CPU being a limit in strategy-type games that handled a lot of AI information unrelated to graphics calculations. But today, if a game isn't satisfied with a current processor that can exceed 100 GFLOPS, it's completely absurd. With games like Borderlands or Tomb Raider it's the developers' fault for not optimizing a thing, in many cases because they're made with consoles in mind; and then there's the added problem of games like Crysis or Metro that ship exaggerated top-end configurations, perhaps with the future in mind or, frankly, as ADVERTISING.
What happens with Tomb Raider's CPU load is the same as when you run SuperPi: it's really a single worker thread that, instead of staying on one core, alternates between the four. If in the task manager you pin it to one core, you'd see a 100% load on that core while getting the same result. In Tomb Raider, setting the LOD to maximum is not acceptable, so you lower it and play normally; sometimes it's also our own fault for being obsessed with the highest settings.
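For reference, pinning a process to one core (the task-manager affinity trick mentioned above) looks like this programmatically on Linux, using only the standard library (these `os` calls are Linux-specific):

```python
# Pin the current process to a single core so a bouncing single-threaded
# workload shows up as 100% load on one core instead of ~25% on four.
import os

pid = os.getpid()
before = sorted(os.sched_getaffinity(pid))
print("allowed cores before:", before)

os.sched_setaffinity(pid, {before[0]})   # restrict to one core
print("allowed cores after:", sorted(os.sched_getaffinity(pid)))
```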
Anyway, in a normal case (odd cases like the ones we've discussed aside), the limitation the CPU imposes on the graphics card (the bottleneck) would look something like this: once the CPU reaches the power the game needs, the fps increase levels off.
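That leveling-off curve is essentially a min() of two caps; a toy model with invented numbers:

```python
# Toy bottleneck model: fps is capped by whichever is slower, CPU or GPU.
# Once CPU throughput exceeds what the GPU can render, extra CPU clock
# buys nothing, which is exactly where the curve flattens.
GPU_CAP_FPS = 60.0    # what the card can render in this scene (made up)
FPS_PER_GHZ = 12.0    # fps each CPU GHz buys in this game (made up)

def fps(clock_ghz):
    return min(FPS_PER_GHZ * clock_ghz, GPU_CAP_FPS)

for ghz in (2.0, 3.0, 4.0, 5.0, 6.0):
    print(f"{ghz} GHz -> {fps(ghz):.0f} fps")
```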

-
Don't worry, fortunately it's almost time to find out about the new version of Havok, at GDC 2013.
See you soon!
-
This post has been deleted! -
Do you think they will be as good as they say? I have my doubts ¬¬
-
Personally, I have always thought that Havok is the best physics system in video games, and a new version was expected years ago (2007), but then they were bought by Intel :facepalm:, and it wasn't until 2013 that we got to know the long-awaited new version.
For its part, a company that calls itself a software company, a fierce enemy of Intel, and which goes by the name of nVIDIA, has managed to push the use of PhysX in the games it collaborates on to unsuspected limits. As we have seen in the recent Tomb Raider, made in collaboration with AMD, this means that anyone not using the hardware the game has been "optimized" for will see higher CPU usage and worse graphics; or, to generalize broadly, a worse-optimized game.
Salu2!
-
Err… what does that have to do with the topic? This all started with Tomb Raider, and that game's problem is its graphics engine, nothing to do with physics.
Anyway, so far Havok hasn't done anything that wasn't already seen with CPU-based PhysX. Many people have no idea how widely that variant of PhysX is used, and to surprise a few I'll name a title:
Hitman: Absolution.
Yes, the Gaming Evolved game that was bundled with AMD graphics cards. Funny how life works.
But let's say people forget about physics: what loads the CPU most in a game is the graphics engine's share of CPU work, and then, at a similar distance, both AI and physics (each usually accounts for no more than 15% of the performance a game uses).
-
I think it has become clear that games don't take advantage of CPUs. If the surplus were properly used to process physics or other tasks it would be interesting; for example, taking advantage of new instructions like AVX would offer very large extra potential. Sometimes relatively simple things come along, like FXAA, that improve performance incredibly; more efficient physics that consumed fewer resources would be a big step forward.
-
This post has been deleted! -
The i5s are very good, but for normal resolutions. Cards like the 670, 680, 7950, or 7970 aren't even meant for 720p; they're designed for higher resolutions, which is why at such low resolutions any CPU collapses. It's like running a 4-way setup, where you need a very fast CPU. But for playing at normal resolutions, 1050p or 1080p, the i5s are very good.
The issue with GTA IV is that it doesn't use the processor at 100% yet still loads it quite a lot; if you manage to get GTA IV to use 70% of the CPU and you have a good card, it will be more stable.
That game runs with far more detail on PC than on console. I have it on Xbox and on PC and the graphical difference is huge; on console the characters look like stick figures. It's not just the low resolution, they seem to have very few polygons, and even then the console version also scrapes along at 20 or 25 fps.
Regards
It seems Intel is planning to present its new version of Havok soon, according to this and other news I've seen.
Intel presentara en el GDC 2013 su nueva versión de Havok Physics - Benchmarkhardware
Nvidia does have its own physics, but I think it was once they enabled those GPU physics effects that the others started focusing on getting physics systems of their own. Besides, Nvidia seems to have already reached an agreement with Sony for the PS4 to support PhysX as well; if they do the same with the next Xbox, maybe we'll finally see unified physics. I hope that happens.
It would be amazing if a single library, whether DirectX, OpenCL, or DirectCompute, could drive all the physics engines, whether on GPU or CPU, to make better use of the hardware.
Regards
-
Aha, so it only works with a normal CPU if you use 1080p: it's no good at higher resolutions, and not at LOWER ones either, hahaha. Find me a game that runs worse at 720p than at 1080p and I'll applaud you.
That i5 you speak of is four Sandy/Ivy Bridge cores, the most powerful on the market, and at 3 GHz you can only improve on it by adding HT, which you know full well almost no game uses, by adding a couple more cores that games don't make good use of either, or by going up to 4.5 GHz with an overclock, which is usually what works best and helps whatever the resolution.
About Tomb Raider, the big difference between PC and PS3, as we saw in a video in the games section (Tomb Raider (2013) - HardLimit), is the HAIR… and, in my opinion, the detail of distant objects; the rest looks great. That said, where you play it matters a lot: a 720p plasma is ideal, both for the resolution and because it smooths out jagged edges a lot. With the previous Tomb Raider, even though I had it on PC, I ended up playing it on the TV and it looked much better.
-
Well, I'm referring to resolutions above 1050p: the higher the resolution, the better the card is loaded and the better it runs, as long as the fps don't get very low. That i5 may be a Sandy Bridge, but at such a low resolution it will run into games that show plenty of fps yet suffer microstuttering, because the card's load can drop significantly.
Try an SLI of 580s, for example, and play FC2 at 720p to see whether it really runs better or instead shows serious microstutter. You may well see a lot of fps, but it's clear that the cards' load will drop below 50% at many moments, whereas at a higher resolution it may never drop below 80%. It's not all about more fps; it's about how the cards are working.
Put a GTX 295 on a Q6600 with an X48 at those resolutions and it's frankly unpleasant to play. On that platform the card runs poorly even at 1080p, and if you lower the resolution further you're honestly better off playing with a 275, although of course it will depend on the game.
That's what I'm getting at: it's not all about more fps. How the card behaves under load matters a lot too. If the card is at 90% one moment and drops to 45% the next, it feels very bad; it's not the same as when vsync is applied and the card may only need to work at 60%, but steadily.
He mentioned games I'd like to see running; maybe he'll give us his opinion on Crysis, Far Cry 3, Hitman, or Max Payne 3. Those are the ones I'm interested in, to see whether they run well or not, what minimum fps they get, and above all at what load the card runs.
-
Look, I know a bit about programming (it's my job), so I can tell you that, contrary to what many forum or gaming users think, it is in no way trivial to create a multithreaded program where each subsystem IN TURN runs not in a single thread but in several, and which on top of that distributes resources well and makes use of every core present. What happens in Tomb Raider is what happens in a thousand games: the graphics subsystem runs entirely in a single thread.
Few games have this subsystem parallelized across several threads; in theory it can be done, but it isn't a trivial matter.
The problem is that the situation you describe happens "normally", but not always. Just as there are scenes that are much more costly for the GPU and cause an fps drop, there are other situations that can do the same to the CPU, even when the scene itself is trivial or very ordinary for the graphics card.
Examples: all the ones I've given you, including Tomb Raider, Borderlands, etc. Add another to your list, StarCraft II, which uses multithreading; I'm still not sure which subsystem gets "overloaded" (graphics engine, AI, etc.), but you often see the card running at half throttle while, mid-match, the framerate swings between 30 and 45 fps at certain moments.
That last case is playable anyway, but it is noticeable. These kinds of problems happen constantly in plenty of games, and often they're not just an isolated moment: they're entire stages, or the whole game (Dead Rising 2).
These are NOT situations where graphics performance is the limit, because the cases mentioned are relatively ordinary games (except Tomb Raider). Performance here is limited by the CPU and by whichever specific subsystem of the game caps everything else (since in a multithreaded game each subsystem "synchronizes" to update the game "state" every so often, all subsystems are limited by the slowest one, the one that takes longest to finish its work).
In other words, those charts are very pretty, but they don't reflect reality in a multitude of games; and if an overclocked i5, with IPC and frequency to spare, can choke on a game, then in that game, whether well or badly programmed (and as I've said, extracting multithreaded performance is not as simple as many people think), the limit is the CPU.
Plainly and clearly. And NO, 100 GFLOPS has nothing to do with a CPU's gaming performance, because games aren't about grinding through huge matrices with AVX, which is a relatively easy task to parallelize.
Games are by no means such simple workloads, in every one of their subsystems, and their performance depends more on a CPU's advanced execution capabilities (speculative execution, prefetching, the behavior of caches and memory, etc.) than on raw number crunching. So, not "such strange things" at all.
PS: By the way, Bm4n, if you have something to say about people and their opinions, do it here. You know what I mean.
Regards.
-
Fjavi:
You're saying some "very strange" things, good lord... careful you don't get yourself tangled up. :troll:
-
And if the hard drive is practically the component on which a computer's software fluidity depends most… why doesn't it appear in this debate?
Greetings
-
Because a good program will avoid stalls: it will either do initial, one-time loads of the maps, or stream the map in the background BEFORE it needs the data. It isn't usually a problem, and when it is, a good SSD fixes it.
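That kind of background streaming is essentially a producer/consumer pattern; a minimal sketch with the disk read simulated by a sleep (the chunk names are made up):

```python
# Sketch of background streaming: a worker thread loads the map chunks
# the player is heading toward BEFORE the game needs them, so the render
# loop never blocks on the disk.
import queue
import threading
import time

loaded = {}

def streamer(requests):
    while True:
        chunk = requests.get()
        if chunk is None:          # sentinel: no more requests
            break
        time.sleep(0.01)           # stand-in for a slow disk read
        loaded[chunk] = f"data for {chunk}"

requests = queue.Queue()
worker = threading.Thread(target=streamer, args=(requests,))
worker.start()

# The game predicts the next map chunks and queues them ahead of time.
for chunk in ("village_east", "village_center"):
    requests.put(chunk)

requests.put(None)
worker.join()
print(sorted(loaded))   # both chunks ready before the player arrives
```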
The CPU issue, on the other hand, is more volatile and always matters in every game; you can't pair an Athlon X2 with a GTX Titan and expect games to run well, to use a hyperbole.
-
@__wwwendigo__ Then we're of the same opinion; I don't know what we are arguing about. The only thing I haven't said is that it's simple, otherwise everyone would do it, but the point is that games really do need to make better use of current resources. I think my opinions are very clear, but if you want me to clarify further, just ask.

@__fjavi__ The first time I saw microstutter mentioned was with multi-GPU setups, which apparently had a problem dividing the work; it was also seen that even when they reported, say, 60 fps, far fewer actually reached the screen. But it seems that in certain games, even with single-card configurations, there are repeated hitches in frame delivery. And then there are the fps drops of all time.
It's quite possible that with dual-GPU cards or multi-GPU setups you'll have problems if you don't use a good CPU, I don't deny that, but in general there will always be less load on the graphics at lower resolutions.
-
The mother of all hyperboles...
But now that you mention it, I have an Athlon X2 that used to sit next to a 5850 Xtreme. I didn't play GTA IV at maximum settings, but with almost everything on high and at 1280x1024, and honestly I didn't notice any instability in the fps or any sudden drops in load. At first it did happen: with old drivers and the game unpatched I had a very, very bad time... But then I patched it to fix Rockstar's blunder and updated the drivers (and those were still the 12.11, which is saying something) and stability arrived. The variation was minimal, between 3 and 6 frames at a time, except for the odd spike where it dropped 10 or more due to background programs or momentary graphics load with a lot of action; in general it was pretty fluid, even though I wasn't getting many fps on average, which with that hardware and so little RAM is normal...
Too bad I don't have the 5850 here anymore, because if I did I'd test it now to see what results come out, especially now that Catalyst 13.1 has just been released and I haven't been able to try it on this machine.
Regards