
    Why a graphics card?

Tarjetas Gráficas · 11 posts · 3 posters · 1.5k views
• MaxLG:

  I'm curious. As we all know, everything we see on the screen is the result of millions of calculations by our computer. My question is: what do you actually need a graphics card for? Take, for example, an i7 K overclocked to 4.6 GHz. Doesn't that thing have enough power to run, say, BF3 on its own?
  Or, even more interesting: a graphics card already has a processor, memory and a fan, so why not use more graphics cards instead of a CPU if, in the end, they do the same thing, perform operations?

• Bm4n:

  Graphics Processing Unit - Wikipedia, the free encyclopedia


• MaxLG (replying to @Bm4n):

  It says "a dedicated coprocessor for processing graphics or floating-point operations, to relieve the workload of the central processor in applications".
  So it helps, but is that help indispensable? Couldn't a powerful CPU do all the work?
  And my second question was about the graphics card having its own processor: couldn't we use it as if it were a CPU?
• whoololon (Veteranos HL, replying to @Bm4n):

  At first everything was a jumble integrated into the same circuit. As more powerful applications were developed, the conclusion was that the optimal solution was a dedicated component for each task.
  It is true, and in fact until a couple of decades ago it was like that: the processor did all the work. Practically any current application could run without an accelerator if it allowed it, that is, with software rendering, which hardly any do. Can you imagine playing Skyrim or BF3 stuttering along, at a pea-sized resolution, without shadows and in flat colors?

  What it comes down to, in the end, is "offloading" work from the processor, and a dedicated GPU is the best way to do that. ;D

  Edit: in fact this is exactly what happens with integrated graphics: a corresponding part of the CPU is dedicated to that.
  As I said, the point is that each component handles its own job more or less autonomously. In this particular case, simplifying a lot: the CPU manages the routines and commands related to the application's "base" logic, and the GPU handles the functions needed to represent it on screen.
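To make that split concrete, here is a minimal sketch in Python of a "game loop" divided the way described above. All the names here are illustrative; this is not a real graphics API, just the shape of the division of labor:

```python
# Illustrative sketch of the CPU/GPU division of labor.
# None of these names correspond to a real graphics API.

def cpu_update(state, dt):
    """CPU side: application logic (input, physics, AI...)."""
    state["x"] += state["vx"] * dt      # e.g. move an object
    return state

def gpu_draw(state):
    """GPU side: turning the scene into pixels (only simulated here).
    In a real engine this would be draw calls handed to the driver."""
    return "frame with object at x=%.1f" % state["x"]

# A tiny loop: the CPU updates the world, the GPU represents it on screen.
state = {"x": 0.0, "vx": 2.0}
frames = [gpu_draw(cpu_update(state, dt=0.5)) for _ in range(3)]
```

The point of the sketch is only that the two functions do different kinds of work, and each can be handled by the component built for it.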

• MaxLG (replying to @whoololon):

  Okay, so according to you the problem is that the software isn't designed to work with the CPU alone, right?

• whoololon (replying to @MaxLG):

  It's not so much my opinion as the industry's: there are hardly any applications left that offer software rendering as an alternative.
  We have gone from applications without acceleration (the 2D of the old days), through accelerated applications with the option to run in software (the first 3D accelerators, which worked alongside the VGA card and later merged with it), to what we have today: 3D applications whose hardware requirements demand a dedicated accelerator.
  An example of each stage in games, which is something we all more or less understand: Wolfenstein 3D (a 3D look built with 2D techniques), Quake (playable with software rendering helped by the 3D extensions of the P2 / K6-2, or with the pre-T&L accelerators of the time) and Skyrim (where software rendering is applied only to the mouse pointer).
  I honestly can't explain it better: making all of today's graphics work fall on the CPU, however many cores it has, would take us back to the days of the first Pentium with S3 Trio graphics.
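To put a rough number on why pure software rendering stopped being viable: a CPU-only renderer has to touch every pixel itself, one at a time. A toy sketch, with illustrative resolutions (no real shading, just counting per-pixel work):

```python
# Why software rendering scales badly: the CPU shades every pixel itself.

def software_fill(width, height):
    """Count the per-pixel operations a CPU-only renderer would perform
    for a single flat frame (a stand-in for real shading work)."""
    ops = 0
    for _y in range(height):
        for _x in range(width):
            ops += 1                    # shade one pixel on the CPU
    return ops

retro = software_fill(320, 200)         # 64,000 pixels, 90s-era resolution
modern = software_fill(1920, 1080)      # 2,073,600 pixels at 1080p
ratio = modern // retro                 # ~32x more pixels per frame,
                                        # before any lighting or shadows
```

And that is per frame; at 60 fps the per-pixel work multiplies again, which is exactly the kind of massively repetitive job that was moved off the CPU.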

• MaxLG (replying to @whoololon):

  Haha, ok, thanks.
  What if we did it the other way around and used a GPU as a CPU? Aren't they more efficient?

• whoololon (replying to @MaxLG):

  Imagine, by way of analogy, an office.
  There are several departments handling specific parts of a common project: one designs the product, another plans the merchandising, another deals with regulatory approvals, and so on.
  Now imagine they fire everyone and leave you in charge of it all: the product will still ship eventually, but by then it may already be obsolete.
  The same happens with our machines: the moment one part is overloaded with work, everything slows down. As I said before, the CPU used to handle everything, and although its power and processing capacity grew over time, so did the resources that applications demanded, so there has always been a certain "balance".
  Certainly, graphics has been one of the most demanding areas as it evolved, requiring more and more CPU cycles. The solution was to design an architecture that would free the CPU from all that work: a unit dedicated exclusively to everything related to putting the most elaborate 3D scenes on screen.
  The result is (and this is what I think confuses you) small "sub-computers", with their own board, processor and memory, whose only function is to make the most spectacular scenes that programming and technology allow appear on screen.
  Bear in mind, though, that the same was done with sound cards (I still have an ISA SB32 PnP that practically worked on its own), and in fact until recently buying one was practically obligatory. But since they don't need as much power or produce as much heat, their integration into motherboards has ended up becoming the norm.

  Here is a small guide that may finally clear up your doubts; it will bring on nostalgia for many. :sisi:

• MaxLG (replying to @whoololon):

  OK, now I get more of what you mean.
  As for the Wiki article, I've read the first paragraphs, but there's a lot more text xD

  Thanks!

• Bm4n (replying to @MaxLG):

  What you're asking is like asking whether you can use an F1 car as a heavy-duty truck, or vice versa, because after all both have an engine, wheels and plenty of horsepower.

  A GPU is a type of processor dedicated to solving many simple tasks per second in parallel, while a current x86 CPU is a processor dedicated to solving larger, more complex tasks. While it is true that both could in theory do the other's job (leaving aside whether the software, which in this example would be the driver, is prepared for it), logically neither would perform it in an optimal or even acceptable way, just as an F1 car can pull a small load but could never replace a truck.

  Just as an engine is built for a certain job, so is a processor; you can't talk about this as if it were a light bulb that can light up a rock or a dining room. A processor chip consists of many parts designed to process certain kinds of tasks. Nowadays it is even common for the same package to include a GPU that uses system RAM to act as a graphics card, but when we deliberately use a graphics card we use a separate one, with a processor and memory designed for that specific job and therefore with much higher performance. How much higher? An i7 4770 delivers about 100 GFLOPS while a GTX 780 delivers almost 4000 GFLOPS (billions of floating-point operations per second), that is, 40 times more; and that is only the CPU/GPU side, without counting the performance difference of a graphics card's memory (a truck and an F1 don't use the same tires either).

  So yes, a GPU can do the CPU's job, since it has so much computational power, just as a truck can do an F1's job: if the driver (or the software) knows how to handle it, it may get around the circuit, but it won't go very fast. GPUs achieve their computational power by using many parallel processing units, but each of them has very little power on its own, so a single task that can't be divided among all those units (which is the kind of work a CPU usually does) would run tremendously slowly.
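The distinction can be seen in a toy example (pure Python, purely illustrative): the first task below is "embarrassingly parallel", the kind a GPU's thousands of small cores devour; the second is a dependent chain where extra cores would not help at all:

```python
def data_parallel(values):
    """Every element is independent: a GPU could hand one element to each
    of its thousands of small cores and finish in one 'step'."""
    return [v * 2 for v in values]      # evaluation order doesn't matter

def serial_chain(n):
    """Each step needs the previous result, so more cores don't help;
    this is where one fast, complex CPU core shines."""
    acc = 1
    for _ in range(n):
        acc = (acc * 3 + 1) % 1000      # depends on the step before it
    return acc

doubled = data_parallel([1, 2, 3, 4])   # -> [2, 4, 6, 8]
chained = serial_chain(4)               # each iteration waits on the last
```

A GPU forced to run `serial_chain` would use one of its many weak units while the rest sat idle, which is exactly the truck-on-a-race-circuit situation described above.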

  So the question isn't really well posed; you should first try to understand what a CPU is. I encourage you to read the Wikipedia articles (in English if possible) on CPU, GPU, parallelism, microarchitecture, coprocessor, CUDA and OpenCL, and those they refer to.

  PS. Will we eventually see a full fusion of CPU and GPU, as happened with math coprocessors? (That would be a good question.) Most likely: we are already seeing it in the low-end ranges, and it looks set to spread to the mid-range, but I find it unlikely for high performance, because that is only used by a certain audience (games and rendering), each with its own profile, so it is logical that they remain separate components.

• MaxLG (replying to @Bm4n):

  It's impossible to explain it better xD

  Thank you very much!
