No way. 3-4 years, I think. We already have it in various ways, though quite limited. I expect to see the first (glitchy) demos next year.
edit: ChatGPT seems to agree, but is even more optimistic: 2-4 years.
"Technically Feasible Today (with limitations):
For slow-paced or pre-rendered games, this could be prototyped now:
Use Stable Diffusion + ControlNet or Real-ESRGAN + prompt-based transfer on a game video stream.
Run it on beefy GPUs (e.g., RTX 4090 or multiple GPUs).
Low-res and low-framerate only (e.g., 1–5 fps) is doable now."
So yes, give it another year and another generation of graphics cards and I think it will be doable, though very far from perfect. In 4 years, absolutely possible, and it will be low-latency etc. A rough sketch of the kind of loop ChatGPT describes is below.
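To make the quoted idea concrete, here is a minimal sketch of the simplest variant (plain img2img, no ControlNet): grab frames from the game window and restyle each one. Everything specific here is an assumption for illustration: the capture region, the model id, the prompt, and the settings. On a single consumer GPU this lands in the low-fps ballpark the quote mentions.

[code]
# Illustrative frame-by-frame restyling loop: capture the game window,
# push each frame through img2img, write the styled frame back out.
import torch
from diffusers import AutoPipelineForImage2Image  # Hugging Face diffusers
from mss import mss                               # cross-platform screen capture
from PIL import Image

# Model id is an assumption; any local Stable Diffusion checkpoint works.
pipe = AutoPipelineForImage2Image.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

region = {"left": 0, "top": 0, "width": 1280, "height": 720}  # game window (assumed)
with mss() as screen:
    while True:
        shot = screen.grab(region)
        frame = Image.frombytes("RGB", shot.size, shot.rgb).resize((512, 512))
        styled = pipe(
            prompt="watercolor illustration, soft pastel palette",
            image=frame,
            strength=0.45,           # low strength preserves the game's structure
            num_inference_steps=20,  # img2img runs ~steps * strength actual steps
        ).images[0]
        styled.save("latest_frame.png")  # a real demo would display, not save
[/code]

A full diffusion pass per frame is exactly why this tops out at a few fps today; distilled few-step models (see further down the thread) are what could push it toward real time.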
Why do you rely on ChatGPT in this? It only uses existing info, meaning a lot of guesses. Factors such as regulations and the push to monetize all things AI will likely be roadblocks ahead.
Using "existing info" isn't good? How is using existing info guessing? It's not guessing; it's absolutely right that we can use it right now, though with a very low frame rate, not-great latency, and glitches.
How would making money off such tech be a limiting factor? Quite the opposite: if you can make money off it (and yes, Nvidia will make money off it; it's a great way to promote a new card, for example), it is much more likely to become a thing than something which is impossible to monetize.
Regulations around training data could become a bigger issue, but I think companies have prepared for this and are now using their own data, or data which is bought and paid for, instead of scraping e.g. the whole of YouTube or using Hollywood movies. I have noticed a kind of decline in AI audio and movie generation, very likely due to them using their own data, but it'll get better with time.
What will become bigger is subscription services for using such things; we are moving away from running things locally, which kind of sucks. Not hard to imagine something like "Nvidia AI Realtime Filter - make any game look the way you want, subscribe for $29.99/month!"
Using existing info isn't good because when we hit plateaus or roadblocks, the AI can't be relied on, especially with year-old data. If things have gone well so far, it will assume they will always go well.
Honestly, stop using ChatGPT to predict the future.
Its training data is not a year old, plus it's been able to search the web for a long time; it's up to date.
For predictions it's best to rely on your own knowledge, like I did here, but I also used ChatGPT to see what it said, and we were pretty aligned. Current info is all that exists; predicting is always predicting, of course. Taking into account where we are today with this tech, 4 years seems absolutely doable, and it would be very odd if we couldn't reach it by then. Tech is not declining; it's improving steadily.
Roadblocks are mostly things like VRAM, model size, and performance, and these things just naturally improve over time.
Vurt's on the nose about this. Our only limiting factor in how 'good' AI is right now is that our hardware can barely manage to train what we have, and that's with farms of $30k GPUs.
AI is currently 'dumb' not because of its own limits, but because of our hardware limits, and the effect compounds: we can only train models so big on current hardware (fp32 is about the max; fp64 is a dream), and we can only distill those models down to smaller ones within the VRAM we have to work with. A back-of-envelope calculation below.
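To put rough numbers on that VRAM point, here is a sketch (the 4x multiplier for Adam-style training, i.e. weights, gradients, and two optimizer moments, is a standard approximation; activation memory is ignored, and the 7B size is just an example):

[code]
# Back-of-envelope VRAM for a 7B-parameter model at different precisions.
def gib(n_bytes: float) -> float:
    return n_bytes / 1024**3

def train_gib(params: float, bytes_per_param: int) -> float:
    # weights + gradients + 2 Adam moments = 4 copies (activations excluded)
    return gib(params * bytes_per_param * 4)

def infer_gib(params: float, bytes_per_param: int) -> float:
    return gib(params * bytes_per_param)  # weights only

params = 7e9
for label, bpp in [("fp64", 8), ("fp32", 4), ("fp16", 2)]:
    print(f"{label}: train ~{train_gib(params, bpp):.0f} GiB, "
          f"infer ~{infer_gib(params, bpp):.0f} GiB")
# fp64: train ~209 GiB, infer ~52 GiB  <- why fp64 "is a dream"
# fp32: train ~104 GiB, infer ~26 GiB
# fp16: train ~52 GiB,  infer ~13 GiB
[/code]

Even at fp16, training a 7B model doesn't fit on a single consumer card, which is the compounding problem: the hardware ceiling caps both the big model and every distilled model you can make from it.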
We are (sort of) at the same point as when real-time ray tracing and volumetric shadows were not doable, not because the concept was impossible, but because our hardware was far behind what was needed. What took 5 hours to render in V-Ray back then can now be done 60 times a second at 3-4x the resolution (5 hours for 640x480 vs. 1/60th of a second for 768x1080).
I can run img2img locally with a Lightning SDXL or Illustrious model at about 2-3 seconds per image now; 2 years ago it was 15-20 seconds an image. A minimal sketch of that setup is below.
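For anyone wanting to reproduce that, a sketch of the setup (the checkpoint id is a placeholder for whatever Lightning-distilled SDXL model you have locally; the step and guidance values follow the usual pattern for distilled models, which is what gets the time down from 15-20 s to a couple of seconds):

[code]
# Minimal local SDXL img2img with a Lightning-style (few-step) checkpoint.
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline
from PIL import Image

# Placeholder id: point this at your local Lightning/Illustrious checkpoint.
pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

frame = Image.open("game_frame.png").convert("RGB").resize((1024, 1024))
out = pipe(
    prompt="oil painting, thick brush strokes",
    image=frame,
    strength=0.5,           # effective steps = num_inference_steps * strength
    num_inference_steps=8,  # ~4 real steps; distilled models need very few
    guidance_scale=1.0,     # distilled checkpoints typically run with low/no CFG
).images[0]
out.save("stylized_frame.png")
[/code]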