Posted: Wed, 18th May 2016 16:00 Post subject: A different high-fps rendering approach |
Hey guys...
The future of gaming will aim at 120/144 Hz graphics, so I thought about a driver/hardware solution that would help render games at 120 Hz with only a MINIMAL impact on current graphics cards and already existing games.
Basically, the game engine sends the data to be rendered to the graphics card, which renders a full frame. The next frame, though, only uses the next frame's geometry information to move the on-screen content of the last frame, just like video codecs do it. This should save all sorts of shader and texturing calculations for that single frame; only a fraction of the render power would be needed to determine the delta vectors of the on-screen geometry changes, resulting in much better performance.
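To make the idea concrete, here's a rough sketch of the reprojection step in Python/numpy (everything here is hypothetical, just to illustrate; assume the engine can hand the card a per-pixel screen-space motion vector derived from the geometry deltas):

Code:
import numpy as np

def reproject_frame(prev_frame, motion_vectors):
    """Warp the previous rendered frame along per-pixel motion
    vectors instead of re-shading the whole scene.

    prev_frame:     (H, W, 3) colour buffer of frame N
    motion_vectors: (H, W, 2) screen-space (dx, dy) per pixel,
                    assumed to be supplied by the engine from its
                    known geometry changes (hypothetical input)
    """
    h, w = prev_frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]

    # Backward mapping: every pixel of the new frame looks up where
    # its content was in the old frame and reuses the shaded colour,
    # so no shader or texture work is spent on this frame.
    src_x = np.clip(xs - motion_vectors[..., 0], 0, w - 1).astype(int)
    src_y = np.clip(ys - motion_vectors[..., 1], 0, h - 1).astype(int)
    return prev_frame[src_y, src_x]

The obvious catch is disocclusion: pixels that become visible in the new frame have no source pixel in the old one, so a pure warp leaves holes there.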
Any thoughts on this?
Posted: Wed, 18th May 2016 16:24 Post subject:
Apparently some smart guys have already thought about this stuff, but as far as I can see they only aimed at creating "middle" frames to give the "feel" of double the FPS:
http://and.intercon.ru/releases/talks/rtfrucvg/
I would suggest rendering a proper first frame, then taking its rendered content and moving it according to the delta vector changes for the second frame...
Posted: Wed, 18th May 2016 17:01 Post subject:
So pretty much interpolation, but for games/graphics? Yes please! I'd be all over that <3
Posted: Wed, 18th May 2016 17:11 Post subject:
Not exactly interpolation, as I understand it. There is no estimation, and you don't fill in data between the frames. You take the known changes in the next frame's data compared to the previous one and, sort of, only render the "difference".
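In toy code, the contrast might look like this (a hypothetical sketch; np.roll stands in for a real per-pixel warp):

Code:
import numpy as np

def interpolate(frame_a, frame_b, t):
    # TV-style interpolation: both surrounding frames must already
    # exist, and the in-between image is estimated by blending them.
    return (1.0 - t) * frame_a + t * frame_b

def extrapolate(frame_a, dx, dy):
    # The idea in this thread: only the last rendered frame exists;
    # the next one is built by shifting it along deltas the engine
    # already knows. (np.roll is a crude stand-in for a real warp.)
    return np.roll(frame_a, shift=(dy, dx), axis=(0, 1))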
Beyond that, I don't know enough about rendering at that level to comment. Isn't it similar to what they do (or intend to do) to optimize VR?
PumpAction
Posted: Wed, 18th May 2016 17:20 Post subject:
This approach would basically work like the FPS upscalers in TVs, with the difference that the engine delivers the changed geometry data for free, so there is no need to re-analyze the whole image, only the delta geometry changes. The result would be newly rendered, crisp geometry, but all texture and shader content would just be moved according to the geometry changes.
If they intend to do this for VR: great! But this would work not only for VR but for all existing 3D games, I would suppose.
LeoNatan
☢ NFOHump Despot ☢
Posted: Wed, 18th May 2016 19:47 Post subject:
The way video encoding works is by looking at temporal differences in advance (at encode time). With video that's not a problem: given finite prerendered 2D frame (x), you have finite prerendered 2D frame (x+1) and you know (t), so calculating the delta is very easy, whether per pixel, per macroblock, per slice, etc. To optimize, you detect a scene/macroblock/slice change and start a new "key frame".
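As a toy model of that delta search (a heavily simplified sketch, nowhere near a production encoder), exhaustive SAD block matching looks like this:

Code:
import numpy as np

def find_motion_vector(prev, curr, bx, by, block=16, search=8):
    """Find where the block at (bx, by) of the current frame came
    from in the previous frame, by minimising the sum of absolute
    differences (SAD) over a small search window."""
    target = curr[by:by + block, bx:bx + block].astype(int)
    best_sad, best_mv = np.inf, (0, 0)
    h, w = prev.shape[:2]
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + block > h or x + block > w:
                continue  # candidate block lies outside the frame
            sad = np.abs(prev[y:y + block, x:x + block].astype(int)
                         - target).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dx, dy)
    # A large residual SAD means the match is poor -- time for a
    # new key frame.
    return best_mv, best_sad

The point of the comparison: the encoder has to search for the delta after the fact, while a game engine already knows it exactly.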
But how do you achieve a similar effect with live geometry, under modern rendering constraints? Let's get the simple stuff out of the way: culling, texture optimization, terrain tessellation, etc. Let's assume modern video cards have enough VRAM to hold an entire level's geometry and textures.
The problem with modern real-time rendering is not geometry or textures. Most of the work these days is spent on post-processing, which is usually calculated in screen space, so it cannot be prerendered or easily differentiated: lighting, coloring, shading, normal mapping, displacement mapping, reflection, refraction, depth of field/focus, exposure (bloom, HDR), AO, AA, GI, tessellation, and others. Basically, most of what we consider graphics these days cannot easily be fed to the GPU ahead of time for differentiation. Some of these you could skip if you fed in high-quality geometry, but that is not enough at all.
I think the breakthrough in optimizing modern rendering is advancing real-time ray tracing. Unfortunately, not enough work goes into this. This is what will take rendering engines to the next level of visual quality and performance.
LeoNatan
☢ NFOHump Despot ☢
Posted: Wed, 18th May 2016 19:49 Post subject:
PumpAction wrote: | This approach would basically work like the FPS upscalers in TVs, with the difference that the engine delivers the changed geometry data for free, so there is no need to re-analyze the whole image, only the delta geometry changes. The result would be newly rendered, crisp geometry, but all texture and shader content would just be moved according to the geometry changes. |
Ehh, what? Upscalers interpolate data. That is very different from encoding or decoding. Interpolation is garbage: it creates something from nothing. That is no good, and not a desired outcome of real-time rendering.