-
Hello. For the past few weeks I've been getting into the world of rendering in a somewhat tangential way: I don't do 3DS design or rendering myself, but some colleagues have started doing it professionally and they've asked me for help in the form of PCs for rendering.
During this time I've learned something about how 3D Studio works when rendering, and there's a concept I can't quite understand. You see, 3D Studio doesn't use the graphics card for rendering: only the processor. It doesn't matter if you have a blazing-fast Quadro (I've had the opportunity to try one) or an Intel integrated card (I've rendered on one of those too). What matters is having a very powerful processor, with many cores and high frequency. I've been able to verify that render time is exclusively proportional to the processor: a processor twice as fast (whether by having twice the cores or twice the frequency on the same platform) takes half the time.
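That scaling can be turned into a back-of-the-envelope model. This is my own simplification, not anything from 3DS itself; real renderers scale slightly sub-linearly because of memory bandwidth and synchronization overhead:

```python
def estimated_render_time(base_hours, base_cores, base_ghz, cores, ghz):
    # Idealized model: render time is inversely proportional to total
    # compute throughput, approximated here as cores * frequency.
    return base_hours * (base_cores * base_ghz) / (cores * ghz)

# A 3570K (4 cores @ 3.4 GHz) taking 40 hours; doubling the cores
# at the same frequency halves the time, matching the observation:
print(estimated_render_time(40, 4, 3.4, 8, 3.4))  # 20.0 hours
```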
The other day, while we were rendering with 3 PCs, I was talking to these people (who don't know much about computers) and I told them that in the world of video games, images with very detailed textures, lights, and shadows are rendered in real time at several tens of frames per second. They said that couldn't be, that I should look at how slow 3DS is (0.02 fps on a 3570K with 8 GB of RAM). I told them yes, but that's how it is in video games. They didn't believe me, so I downloaded and ran LightMarks for them, and they said it was a video, that it couldn't be real. I showed them Battlefield 3 and they were left stunned.
The thing is that I, like them, can't explain why 3DS is so slow. I understand it's slow because it doesn't use 3D acceleration (although several forums say upcoming versions will have 3D acceleration assistance).
I know that 3DS uses textures of enormous size (sometimes images of more than 500 MB) and that the detail is superior, but the times are still too high.
The question is simple: why is 3DS so slow? Why aren't the same techniques used as in video games?
-
But Quadros are supposed to be designed specifically for AutoCAD and 3D Studio. I know there's a driver add-on to enable them and that they use CUDA to significantly speed up calculations. Is that why it's using the CPU and not the graphics card? These are the cards used in MRI workstations because of their power.
But I'm also interested in having someone knowledgeable on the subject explain it to us more comprehensively.
-
It's a curious thing, this business of graphics cards for design and rendering. 3D Studio uses the graphics card ONLY for the scene preview. That is, while you're "drawing" everything and can move, rotate, and so on, that's when the graphics card is used. But in that preview there's no calculation of lights and shadows.
The difference between a gaming graphics card (no matter how powerful) and a Quadro in this field is that when you move through that preview, the Quadro keeps the textures displayed during the movement, while with a gaming card the textures disappear and only wireframe graphics are shown. When you release the mouse and the scene is static again, the textures reappear.
On the other hand, Quadro allows for a smoother animation preview than a normal graphics card.
But when it comes to rendering, that is, to output all the frames to image files with the calculation of lights and shadows, antialiasing and so on, the graphics card does absolutely nothing. Only the processor works.
I can give another example of how slow all this is: a scene WITHOUT textures and with antialiasing enabled, i.e. something very simple apart from the antialiasing. On the 3570K it took 40 hours to render 900 frames.
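For scale, those numbers work out like this:

```python
# 40 hours for 900 frames, from the example above:
frames = 900
hours = 40
seconds_per_frame = hours * 3600 / frames  # 160 seconds per frame
fps = 1 / seconds_per_frame                # 0.00625 frames per second
print(seconds_per_frame, fps)
```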
-
Well, I now know something I didn't know before. So, from what you're saying, the machine's RAM would only matter for previewing, and you'd need a CPU with many cores to finalize the image, with the computer very well cooled. If you can, I'd try a Xeon, which has much more cache, to see if that helps, on a dual-socket board; or an AMD, which has more cores but fewer instructions per cycle.
And if you don't have another option, use servers with hundreds of CPUs to render the animation.
But the strange thing is that the Keplers were supposedly designed precisely to perform those calculations of lights and shadows. It's curious because it's similar to what happened with Corel: no matter which graphics card you use, when you add many elements the PC goes to 100%, whether with an HD 3000 or an ATI 6850. I'm asking, not stating, because as I said, I'm lost on this.
-
For that type of program, what works is a very powerful processor, and the more cores the better; plenty of memory; and a mid-range graphics card is enough, like an HD 7770.
Regards
-
It would be interesting to test with a CPU with a lot of cache. Honestly, I don't know how much it matters, but for now I don't have one available. Apparently it's easy to set up a render farm with 3DS (parallelizing a single job across several machines). I've been thinking about it, and I believe the most cost-efficient option is several single-processor towers rather than a single dual-processor machine. The good thing is that you only have to focus on having a good CPU, since it's the only variable that matters, so you can get brute force for relatively little money.
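A minimal sketch of the frame-splitting idea behind such a farm. This is a generic illustration, not 3DS's actual network-rendering protocol, and `split_frames` is a hypothetical name:

```python
def split_frames(total_frames, machines):
    # Round-robin assignment: machine m renders frames m, m+N, m+2N, ...
    # Frames are independent, which is why animation rendering
    # parallelizes so well across cheap single-CPU towers.
    jobs = {m: [] for m in range(machines)}
    for frame in range(total_frames):
        jobs[frame % machines].append(frame)
    return jobs

jobs = split_frames(900, 3)
print(len(jobs[0]))  # 300 frames per machine
```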
That depends a lot on the use and the purpose. For someone who's going to do this professionally, I'd recommend at least one PC with a Quadro, because having a fluid preview is important. On the other hand, you don't need huge amounts of RAM: with 4 cores the program consumes 5-6 GB, so 8 GB is enough and 16 GB is more than plenty. The thing is, these people work with absurdly large images in Photoshop, and there even 16 GB is sometimes barely enough.
Once you have the PC with the Quadro, you can have other machines with cheap cards but good CPUs and send render jobs to them remotely.
-
Searching around on the topic; I'm not sure if it will be useful, and my English is rusty:
http://www.legitreviews.com/nvidia-kepler-versus-fermi-in-adobe-after-effects-cs6_2127
Legit Reviews: NVIDIA Kepler versus Fermi in Adobe After Effects CS6 - AnandTech Forums
NVIDIA CUDA: Kepler Vs. Fermi Architecture | The GPU Blog
It says the GTX 5xx cards are faster in After Effects than the GTX 6xx, i.e. that Fermi is faster than Kepler. This is because Adobe has enabled the drivers for their program. But they only talk about the gaming cards, not the Quadros.
And in another NVIDIA forum they mention that there are new drivers for iray, which may be what you need:
fermi & kepler mixed for iray
And for Quadro:
3ds Max Performance Driver | NVIDIA
I'm not sure if it will be useful.
-
I'm a bit rusty on this topic, but real-time rendering is very different from the offline rendering that 3DS and similar tools do. Games use relatively simple polygon meshes (even if there are thousands of them), while design applications multiply that complexity. Games use many pre-baked assets (like pre-rendered textures), whereas a 3DS render computes everything. Many effects such as lights and shadows in games are relatively simple approximations (very hard to achieve in real time, but simple in what they actually compute), while 3DS calculates absolutely everything. Finally, much of what 3DS computes during rendering, games achieve with post-process filters (blur, antialiasing, etc.).
Rendering applications don't exploit the graphics card because GPUs are specialized in real-time rendering, which is basically applying pre-made textures to polygons, simplified lighting calculations, and post-process filters. Programs like 3DS could probably use them more efficiently, but I don't think it would gain much. I mean, basically 3DS is pure mathematical calculation, and that's done better by the CPU than the GPU.
Real-time rendering has artifacts that go unnoticed during the game, like vegetation effects or texture distortion when applied to certain polygons, which don't occur when rendering in 3D Studio because everything is calculated at each step.
All this said without being an expert on the subject…
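To put rough numbers on why "calculating absolutely everything" is so costly, here is a toy operation-count model. This is my own back-of-the-envelope estimate, not how any real renderer accounts for its work:

```python
def offline_cost(width, height, samples_per_pixel, bounces, lights):
    # An offline ray tracer fires many samples per pixel, and each
    # sample may bounce several times and test every light source.
    return width * height * samples_per_pixel * bounces * lights

def realtime_cost(width, height):
    # A rasterizer, very roughly, touches each pixel a handful of
    # times using precomputed textures and simplified lighting.
    return width * height * 10

full_hd = (1920, 1080)
print(offline_cost(*full_hd, 64, 4, 8) / realtime_cost(*full_hd))  # ~205x
```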
-
Excellent, that cleared up a lot of doubts for me.
-
In my humble opinion …
3DS is slow because its render engine aims for realism. Try V-Ray, and you'll see it's even slower.

Games use a render engine that isn't as realistic, with tricks and shortcuts that a real render engine doesn't use.
The issue of textures… is another thing. There are textures that are photos (bitmaps), and textures that are procedural (generated mathematically)… try doing a real-time render of those.
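A toy example of what a "mathematical" (procedural) texture means, assuming nothing about any particular renderer:

```python
import math

def marble(u, v):
    # A procedural texture is a function evaluated at every shading
    # point, not a photo looked up from memory. Real procedural
    # shaders (noise, turbulence) are far heavier than this toy sine.
    return 0.5 + 0.5 * math.sin(20 * u + 5 * math.sin(12 * v))

print(marble(0.0, 0.0))  # 0.5 at the origin
```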

With GPUs things change somewhat, because they're more powerful than CPUs at massively parallel arithmetic… but you have to look at which GPU and which CPU.

An i7 is far superior to an NVIDIA 610…
Then there's the issue of the render's output resolution. I've seen many people start rendering directly in Full HD for tests (even final ones), and then have to rescale to lower resolutions… or the other way around…
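On the resolution point: render time grows roughly with pixel count, which is why test renders at lower resolution are so much cheaper. A simple ratio, assuming linear scaling in pixels:

```python
def relative_cost(width, height, ref_width=1280, ref_height=720):
    # Cost relative to a 720p test render, assuming render time
    # scales linearly with the number of pixels.
    return (width * height) / (ref_width * ref_height)

print(relative_cost(1920, 1080))  # 2.25x a 720p test render
```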
Anyway … it's a complex issue …