Grand Theft Auto V
Amadeus




Posts: 2350
Location: Yes
PostPosted: Sat, 26th Jul 2025 14:29    Post subject:
10-15 years

gonna be lit
vurt




Posts: 13829
Location: Sweden
PostPosted: Sat, 26th Jul 2025 14:44    Post subject:
No way. 3-4 years, I think. We already have it in various ways, though quite limited. I expect to see the first (glitchy) demos next year.

Edit: ChatGPT seems to agree, but is even more optimistic: 2-4 years.

"Technically Feasible Today (with limitations):
For slow-paced or pre-rendered games, this could be prototyped now:

Use Stable Diffusion + ControlNet or Real-ESRGAN + prompt-based transfer on a game video stream.

Run it on beefy GPUs (e.g., RTX 4090 or multiple GPUs).

Low-res and low-framerate only (e.g., 1–5 fps) is doable now."

So yes, give it another year and another generation of graphics cards and I think it will be doable, though very far from perfect. In 4 years, absolutely possible, and it will be low-latency etc.
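
Roughly, the whole "filter" at this stage is just an img2img pass over each captured frame. A minimal sketch of that loop (Python; the model id, capture method and settings here are assumptions for illustration, not anything Nvidia or anyone else actually ships):

Code:
# Rough per-frame "AI filter" loop -- a sketch, not a shipping feature.
# Assumes mss for screen capture, diffusers img2img, and a CUDA card with plenty of VRAM.
import torch
import mss
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # any SD 1.5-class checkpoint would do
    torch_dtype=torch.float16,
).to("cuda")
pipe.set_progress_bar_config(disable=True)

PROMPT = "photorealistic, film grain, natural lighting"  # the "look" you want

with mss.mss() as sct:
    monitor = sct.monitors[1]                  # primary screen, i.e. the game
    while True:
        shot = sct.grab(monitor)               # raw BGRA screen capture
        frame = Image.frombytes("RGB", shot.size, shot.bgra, "raw", "BGRX")
        frame = frame.resize((768, 432))       # low res keeps it near-interactive
        styled = pipe(
            prompt=PROMPT,
            image=frame,
            strength=0.35,            # low strength = keep the game's structure
            num_inference_steps=8,    # few steps = trade quality for frame rate
            guidance_scale=5.0,
        ).images[0]
        styled.save("latest_frame.png")        # stand-in for a real display path

On a single consumer card something like this lands in the "1-5 fps" range the quote mentions; the hard parts a real product would still need to solve are frame-to-frame consistency and latency, not the basic plumbing.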
FireMaster




Posts: 13491
Location: I do not belong
PostPosted: Sat, 26th Jul 2025 16:49    Post subject:
Just gotta sell half your organs and secure a nuclear power plant of your own, given the direction Nvidia's going.
Stormwolf




Posts: 23703
Location: Norway
PostPosted: Sat, 26th Jul 2025 16:52    Post subject:
vurt wrote:
No way. 3-4 years, I think. We already have it in various ways, though quite limited. I expect to see the first (glitchy) demos next year.

edit: chat-gpt seems to agree, but is even more optimistic, 2-4 years.

"Technically Feasible Today (with limitations):
For slow-paced or pre-rendered games, this could be prototyped now:

Use Stable Diffusion + ControlNet or Real-ESRGAN + prompt-based transfer on a game video stream.

Run it on beefy GPUs (e.g., RTX 4090 or multiple GPUs).

Low-res and low-framerate only (e.g., 1–5 fps) is doable now."

So yes, give it another year and another generation of graphics cards and I think it will be doable, though very far from perfect. In 4 years, absolutely possible, and it will be low-latency etc.


Why do you rely on ChatGPT for this? It only uses existing info, which means a lot of guesses. Factors such as regulations and the push to monetize all things AI will likely be roadblocks ahead.
vurt




Posts: 13829
Location: Sweden
PostPosted: Sat, 26th Jul 2025 17:26    Post subject:
Using "existing info" isn't good? How is using existing info guessing? It's not guessing, it's absolutely right that we can use it right now, though with very low frame rate and not great latency + it will be glitchy.

How would making money off such tech be a limiting factor? Quite the opposite: if you can make money off it (and yes, Nvidia will make money off it; it's a great way to promote a new card, for example), it is much more likely to become a thing than something that is impossible to monetize.

Regulations around training data could become a bigger issue, but I think companies have prepared for this and are now using their own data, or data that is bought and paid for, instead of scraping e.g. all of YouTube or using Hollywood movies. I have noticed a sort of decline in AI audio and movie making, very likely because they're now using their own data, but it'll get better with time.

What will get bigger is subscription services for using such things; we are moving away from running things locally, which kind of sucks. Not hard to imagine something like "Nvidia AI Realtime Filter - make any game look the way you want, subscribe for $29.99 / month!"
Stormwolf




Posts: 23703
Location: Norway
PostPosted: Sat, 26th Jul 2025 18:11    Post subject:
Using existing info isn't good because when we hit plateaus or roadblocks, the AI can't be relied on, especially with year-old data. If things go swell, it'll think they'll always go well.

Honestly, stop using ChatGPT to predict the future.
vurt




Posts: 13829
Location: Sweden
PostPosted: Sat, 26th Jul 2025 19:11    Post subject:
Its training data is not a year old, plus it's been able to use the web for a very long time; it's up-to-date.

For predictions it's best to rely on your own knowledge, like I did here, but I also used ChatGPT to see what it said, and we were pretty aligned. Current info is all that exists; predicting is always predicting, of course. Taking into account where we are today with this tech, 4 years seems absolutely doable, and it would be very odd if we can't get there by then. Tech is not declining, it's improving steadily.

Roadblocks are mostly things like VRAM, model size and performance, and those naturally improve over time.
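
For a sense of why VRAM and model size are the wall, the back-of-envelope is just parameter count times bytes per parameter. A quick sketch (parameter counts are illustrative, weights only, ignoring activations and optimizer state):

Code:
# Weight memory ~= parameter count x bytes per parameter.
GB = 1024 ** 3

def weights_gb(params, bytes_per_param):
    return params * bytes_per_param / GB

for params in (3e9, 7e9, 70e9):
    print(f"{params / 1e9:>3.0f}B params: "
          f"fp32 {weights_gb(params, 4):6.1f} GB | "
          f"fp16 {weights_gb(params, 2):6.1f} GB | "
          f"int8 {weights_gb(params, 1):6.1f} GB  (weights only)")

A 7B-parameter model is already ~26 GB of weights at fp32, which is why lower precision and distillation matter and why consumer VRAM is the thing that has to keep growing.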
SumZero




Posts: 2400

PostPosted: Sat, 26th Jul 2025 19:39    Post subject:
vurt wrote:
Roadblocks are mostly things like VRAM, model size and performance, and those naturally improve over time.

Vurt's on the nose about this. The only limiting factor in how 'good' AI is right now is that our hardware can barely manage to train what we already have, and that's with farms of $30k GPUs.
AI is currently 'dumb' not because of its own limits but because of our hardware limits, and it compounds: we can only train models so big on current hardware (fp32 is about the max; fp64 is a dream), and we can only distill those models down into smaller ones within the VRAM we have to work with.

We are (sort of) at the same point as when real-time raytracing and volumetric shadows weren't doable, not because the concept was impossible, but because the hardware was far behind what was needed. What took 5 hours to render in V-Ray back then can now be done 60 times a second at 3-4x the resolution (5 hours for 480x640 vs 1/60th of a second for 768x1080).

I can do img2img locally with a Lightning SDXL or Illustrious model at about 2-3 seconds per image now. Two years ago it was 15-20 seconds an image.
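
If anyone wants to check that sort of number on their own card, a minimal timing sketch looks like this (the model id, step count and CFG here are placeholders; swap in whatever Lightning or Illustrious checkpoint you actually run locally):

Code:
# Rough "seconds per image" check for SDXL img2img -- a sketch, not a benchmark.
import time
import torch
from PIL import Image
from diffusers import StableDiffusionXLImg2ImgPipeline

pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

src = Image.open("screenshot.png").convert("RGB").resize((1024, 1024))

t0 = time.perf_counter()
out = pipe(
    prompt="cinematic still, volumetric light, detailed textures",
    image=src,
    strength=0.4,
    num_inference_steps=6,   # Lightning-style low step count
    guidance_scale=1.5,      # distilled/Lightning checkpoints want very low CFG
).images[0]
print(f"{time.perf_counter() - t0:.1f} s per image")
out.save("styled.png")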


Stormwolf - "Who cares about some racial stuff, certainly not the victims."

- Democracy Dies in Dumbness.
- Watching people my age grow from cynical youth who distrusts and dismisses the older generation, into cynical old people who distrusts and dismisses younger generations.
headshot
VIP Member



Posts: 35910
Location: UK
PostPosted: Thu, 28th Aug 2025 18:41    Post subject:


May the NFOrce be with you always.
4treyu




Posts: 23131

PostPosted: Sun, 7th Sep 2025 18:53    Post subject: