It might have a TN panel but it's far from a 'shitty' TN panel.
I don't know much about the Korean overclocked IPS panels, but I'd take 120-144Hz + 1ms response time any day over an IPS panel.
I'm a massive FPS gamer and avid gamer in general, though, so from what I've played on IPS panels, in particular the Mac ones, I much prefer my 120Hz BenQ panel.
I should also say, this would have been an insta-buy for me, but the cost here in Oz is $999.
On another note, what type of hardware do you need for 1440p vs 1080p?
I had a dual-screen setup (24 inch), but after finishing study and no longer working much from home, I found the two screens became annoying when gaming. I figured I'd upgrade to a larger single screen instead.
So it might sound stupid, but is the conversion really that simple: 75% more pixels = 75% more power required?
I figure it's displaying more pixels, but with the GPU doing other calculations as well, is it really a direct correlation? Obviously it uses more power to display more, but take the physics of an explosion as an example: although I'm seeing more of the explosion, is the power needed to calculate it directly proportional to how much of it I see? Or is the full explosion being calculated anyway, so that seeing more of the image doesn't take any more power on the physics side, which would make it not a direct correlation?
I hope that makes sense; it's a bit off topic, but it's a serious question.
On high-end GPUs it's not a 75% performance loss. It's hard to tell the exact performance loss compared to 1080p, but it's at least around a 30-40% hit. Also, the higher the resolution, the more VRAM it requires.
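For reference, here's a quick back-of-the-envelope check of the raw pixel counts (just the arithmetic, a small Python sketch; the exact ratio is a touch higher than the 75% quoted above):

# Raw pixel counts for the two resolutions being compared.
pixels_1080p = 1920 * 1080   # 2,073,600
pixels_1440p = 2560 * 1440   # 3,686,400

# 1440p pushes roughly 78% more pixels than 1080p.
extra = pixels_1440p / pixels_1080p - 1
print(f"1440p has {extra:.0%} more pixels than 1080p")  # -> 78%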
Thanks for the feedback. I only ask because this screen is a serious contender for my next purchase. The only problem is I'm running 2 x 670, but they only have 2GB of VRAM. I can't justify the move to a 700 series card, so I may have to wait for the 800 series to get more VRAM.
As Breezer lays out, no, it's not a 1:1 thing.
VRAM is, though. The same frame takes up about 78% more space: a 1920x1080x32 frame takes just under 8 MiB, while a 2560x1440x32 frame takes just over 14 MiB - and that's just the rendered frame.
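As a rough illustration of that framebuffer math (a minimal Python sketch, assuming a plain uncompressed 32-bit colour buffer and ignoring depth/stencil, MSAA and whatever else the engine keeps around):

def framebuffer_mib(width, height, bytes_per_pixel=4):
    """Size of one uncompressed colour buffer in MiB."""
    return width * height * bytes_per_pixel / (1024 ** 2)

print(f"1920x1080x32: {framebuffer_mib(1920, 1080):.1f} MiB")  # ~7.9 MiB
print(f"2560x1440x32: {framebuffer_mib(2560, 1440):.1f} MiB")  # ~14.1 MiB

Any render target or G-buffer the engine allocates at screen resolution scales the same way, which is why the real-world VRAM difference is a lot more than one frame's worth.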
As for the other stuff: not all rendering operations are directly affected by rendering resolution. There's a lot of shit that you never even realise happens. Culling, for example (not rendering shit that isn't visible), won't take any more time at a higher resolution, but rendering a shadow will. Thus, how big a hit it is depends heavily on the engine too; a pretty engine like CryEngine easily drops by 50%, Frostbite and even Ubioptimised games sit around 40%, UE3 and such around 30%.
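To put a number on that intuition, here's a toy cost model (purely an illustration, not taken from any of the engines above): split the frame time into a part that scales with pixel count and a part that doesn't, and see what the framerate hit looks like.

def frametime_scale(pixel_ratio, resolution_bound_fraction):
    # pixel_ratio: new pixel count / old pixel count (1440p vs 1080p ~= 1.78)
    # resolution_bound_fraction: share of the old frame time spent on work
    # that scales with pixels (shading, shadows, post-processing); the rest
    # (culling, physics, game logic) is assumed constant.
    fixed = 1 - resolution_bound_fraction
    return fixed + resolution_bound_fraction * pixel_ratio

ratio = (2560 * 1440) / (1920 * 1080)  # ~1.78
for frac in (0.5, 0.7, 0.9):
    loss = 1 - 1 / frametime_scale(ratio, frac)
    print(f"{frac:.0%} resolution-bound -> ~{loss:.0%} fps loss")

With the ~1.78x pixel ratio, the loss comes out around 28-41% depending on how pixel-bound the engine is, which is in the same ballpark as the figures above.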
Do not even start the IPS vs TN conversation; we all know how it ends.
Sure we do. TN is shit.
Better to wait until later this year or early next year, when we'll have monitors with Adaptive Sync on the market. Dell and Eizo are both working on it. For those I'll gladly pay more than normal.
But for this TN piece of shit with a wonderful vendor lock-in? Fuck off.
@Werelds
Can you comment on this?
Q: How is G-SYNC different than Adaptive V-SYNC?
A: Adaptive V-Sync is a solution to V-Sync’s issue of stuttering, but input lag issues still persist. NVIDIA G-SYNC solves both the stutter and input lag issue.
Adaptive Sync, and AMD's implementation of it in FreeSync, is exactly what Nvidia's G-Sync is, but without a $100 premium for a crappy display controller that locks you into a specific GPU vendor. Adaptive Sync will work on both Nvidia and AMD (that is, assuming Nvidia don't cockblock it; but they'll most likely rebrand G-Sync to support Adaptive Sync). Monitor manufacturers can freely implement it in their controllers.
Adaptive V-Sync is a software thing in Nvidia's driver and something else entirely.
Oh OK, my bad, you meant FreeSync. But to be honest, this slice taken from the FAQ:
"Upon connecting a FreeSync-enabled monitor to a compatible AMD Radeon™ graphics card..."
has me a little bit concerned.
1:04:51
Nvidia will get screwed
Linus gets it wrong. Project FreeSync is NOT a different name for Adaptive Sync, nor will Project FreeSync end up being called AS. Project FreeSync builds *on top* of AS.
AS is the "push refresh" technology, part of the DisplayPort _spec_ (which is nothing but words). This spec is royalty free, so anyone is free to implement it on either side of the cable (monitor or GPU) and there are no vendor requirements on either side.
Project FreeSync is AMD's implementation of that technology. Nvidia can, and most likely will, implement it too. And they'll probably do so quietly under the G-Sync name, so that they can go "HAH WE DID IT FIRST ANYWAY".
Just like AMD and Nvidia have their own implementations of tessellation, anisotropic filtering and so on.
matta666 wrote:
I hope it's better implemented than the NVidia version in their control panel. There is still a lot of tearing with adaptive vsync at the moment.
Read above.
It is something different from Adaptive V-Sync. Forget about Adaptive V-Sync; it's crap.
After using strobe backlight blur reduction, I can never go back to 60Hz crap, no matter how great the color, resolution or contrast. I don't get the fuss about G-Sync either: no tearing, but it's still going to be blurry unless it's strobed, and you can't enable both.