I have a feeling AI isn't going to change much in the gaming-opponent-difficulty space if it can't even learn the super simple rules of chess, and keeps making up moves never written down in any chess book.
Last edited by PickupArtist on Sat, 29th Apr 2023 12:20; edited 1 time in total
"Enlightenment is man's emergence from his self-imposed nonage. Nonage is the inability to use one's own understanding without another's guidance. This nonage is self-imposed if its cause lies not in lack of understanding but in indecision and lack of courage to use one's own mind without another's guidance. Dare to know! (Sapere aude.) "Have the courage to use your own understanding," is therefore the motto of the enlightenment."
@couleur
Agree, it's simply a model trained to produce responses by comparing the words you ask against its data. It doesn't 'understand' what it's saying in any capacity. Then it's thrown onto a servo-driven frame with a human face attached, which triggers weighted servo positions off the keywords it uses; it also doesn't 'understand' why it makes the expressions.
AI is a LONG way from being a thing to fear or be concerned about. We may be headed that way, but at the current time it's simply a thing we invented that emulates things we only associate with humans doing, to make us feel it is doing them in some capacity like we do... when what it is really doing is just emulating a visual and audio version of it.
AI cannot learn yet. It can be trained, yes. But learn? No. It does no pondering, internal speculation, or reflection. Leave ChatGPT (or insert other AI thing here) alone for a million years with no user input and it will not have one independent event it caused itself. It's purely reactive to the input we give it, doing a thing we trained it to do that comes out as an event we relate to as 'only humans do that'.
So we confuse the fact that we made it emulate that thing with "something other than us is doing it, so it's nearly like us".
For example, we can train it on how to brew coffee, and it can tell you how in 1000 different ways (because we trained it to put letters together in an order that sounds like our patterns of talking, and fed it coffee recipes). But put it in a body in a random house and tell it to. It won't be able to tell where the kitchen is, what a kitchen even is, where the coffee pot is, how to get water for it, or what water is (it knows the letters that make up the word 'water' are needed; that's in the training model), or how to start whatever machine you have to make coffee (or know what a 'coffee machine' is), or even what coffee 'is'.
It's learned to ad-lib parrot recipes at you, but doesn't know what a recipe 'is'. Or that coffee is a liquid, or what it's used for... (or even that it's a 'thing'; it just knows the characters for the word 'coffee').
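To make the 'parrot' point concrete, here's a toy sketch (the recipe text and all names in it are made up for illustration): a tiny bigram model that records only which word follows which in a coffee recipe. It can spit out recipe-shaped text all day without any concept of coffee, water, or kitchens; it's just word statistics.

```python
import random

# Toy bigram "parrot": it records which word follows which, and nothing else.
# It has no concept of what coffee or water *is* -- only word statistics.
recipe = ("boil fresh water . grind the coffee beans . pour hot water "
          "over the ground coffee . wait four minutes . serve the coffee").split()

follows = {}
for a, b in zip(recipe, recipe[1:]):
    follows.setdefault(a, []).append(b)

random.seed(0)                       # make the babble repeatable
word = "boil"
out = [word]
for _ in range(12):
    word = random.choice(follows.get(word, ["."]))  # pick a next word it has seen before
    out.append(word)

print(" ".join(out))                 # recipe-shaped babble, zero understanding
```

Scale that idea up by a few billion parameters and a few trillion words and you get something far more fluent, but the objective is still just "predict the next token".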
Sure, you could train one that does all that, but it would need to be a specific house with specific items to look for. And then it's no more than a robot arm at a factory keying off camera input for what to target; it still doesn't 'know', or have the capacity to 'know', it's making a pot of coffee.
And it cannot 'deduce' these things on its own. It has no cognitive reasoning. They are only good at 'faking' being human at the specific thing we trained them on. As of now, it can never expand on its own into data outside the scope of what we told it.
The sensory input that tricks/fools us into thinking it's more than it is, is the point of making it emulate more than it is. Program it to do what humans do, and people attribute human traits to it.
Even an ant has far more capacity for self-directed, productive decision-making in response to untrained stimuli than the most advanced AI we have now.
-We don't control what happens to us in life, but we control how we respond to what happens in life.
-Hard times create strong men, strong men create good times, good times create weak men, and weak men create hard times. -G. Michael Hopf
Disclaimer: Posts made by me are of my own creation. A delusional mind relayed in text form.
'In an interview with The New York Times, his most immediate concern with AI was its use in flooding the internet with fake photos, videos and text, to the extent that many won’t “be able to know what is true anymore.”'
How is that any different from before AI was around? It's not like news and media weren't constantly trying to be the first to report on something and getting bad info. The only thing that changed is the tool used to make the wrong/fake info. It went from people purposely doing it with pen and paper/Word and Photoshop, to people using AI to write what they tell it to, skipping the need for Photoshop.
Newspapers since the 1900s have been fraught with errors, wrong info, and sensationalized articles built on info from iffy sources, trying to be first with the scoop rather than making sure the scoop is right... since they might not be first if they 'waste' time checking.
I guess my point is: AI isn't autonomously making this info. It's the same people who did it before, still doing it.
Last edited by DXWarlock on Tue, 2nd May 2023 18:51; edited 2 times in total
These effects he is warning about have all been known in the field for a long time, and are often logical and even desirable outcomes of AI. Funny how guys like E. Yudkowsky and G. Hinton suddenly start to panic when they should have been aware of them all along. It's either dishonest or foolish.
AI was always supposed to eventually be able to confabulate every conceivable kind of imagery: static images, moving images, virtual environments. It is only logical that this can also be abused for fakes, and that people will have more difficulty knowing what is true. The other side of the coin.
Same goes for automation. What did they think, that it would create more jobs? Yes, some new jobs will pop up, but many more will disappear. Wait until those AIs are combined with tools that can interact with the environment.
Anyone engaging with AI research should be aware of these potential outcomes. If he thought this was going to take another 30-50 years, he was naive.
Yes, but I feel they misjudge how easy it is to trick/fool/mislead the average person. AI isn't needed to do it; it's been done on huge scales with far more primitive and easily spottable false info.
My mom constantly believed shit she read in the Enquirer and the Daily Globe (think that was its name... the one that had the 'bat boy' article; I remember that one) in the '80s. LONG before AI was around.
I feel like they are gauging how effective it would be at fooling people based on a self-centric view of what it would take to fool them. For most people it takes FAR less than that, and it's already 100% effective. Can't go over 100%; they're already there.
Just hearing a thing makes some people believe and repeat it without a single effort to check it. For example, two friends of mine both sent me a link, since I live in Florida, reporting our Gov sent the National Guard to block Disney entrances. No photos, no links, no sources. Just an article written by someone. A 5-second Google search showed it wasn't true.
You don't need AI to trick people, you just need to know how people work. The AI just means you need to type fewer words.
Yes, but I feel they misgauge how easy it is to trick/fool/mislead the average person.
True...it is shockingly easy in fact!
But this has been known in the fields of social psychology and behavioral economics for a long time. The work of Daniel Kahneman for example is incredibly valuable.
It is still ignorance then, that these researchers apparently had tunnel vision and couldn't look beyond their own field of work.
@DXWarlock The point is AI can produce misinformation both at an unparalleled rate and far more effectively than humans could ever dream of.
It could author persuasive articles and target every demographic in minutes to sway public opinion. Think 10 million permutations of an article targeting Twitter, Facebook, hurrdurr users who upload their entire personality to a webpage. Achievable in hours.
Even just the big data Google/Facebook hold is enough for a shitty programmer to write apps that could reach pre-crime, or break-the-stock-market, levels. I don't think people grasp how powerful having hundreds of millions of people's locations, interests, search history, and payments really is.
Then you put ALL that data into the hands of a system that can make and generate content in seconds.
As if fake news generated from Eastern Europe wasn't already a thing?
This is basically the same argument made about the printing press: "Oh no, everyone will be able to print their pamphlets in a few hours, spreading their alternative facts."
Meanwhile, we've been and still are suffering from religions, for thousands of years.
There must have been a door there in the wall, when I came in.
Truly gone fishing.
@AmpegV4
Yes, but that is a flaw in people, no? Not AI's future being honed for malevolent uses.
People can't be assed to verify anything. How many times on here in the last year have I mentioned something that is easily shown false/wrong by the literal first Google result on the subject?
"I found a faster way to trick people"
Well... it wasn't that hard to start with. If humans as a whole made even the most basic whiff of an effort to check things they read, 95% of it wouldn't be effective.
I do get what you're saying. But it's not AI being really good at it that worries me. It's that we are really good at being tricked.
We essentially made a supercomputer to do a task pen and paper accomplished before. You don't need well-written, perfectly authored pieces aimed at a demographic to do it. That wasn't a stumbling block anyone was struggling to get over. Just any old article that confirms what they think. Even poorly written tweets are enough. You don't need AI to reach that level of saturation.
We are scared of a massive battering ram we may have invented, for getting through a paper door that breaks just from pushing on it.
Well, I'm sure this will lead to a significantly higher quality of (work) life and finally ring the bell for the end of silly bullshit jobs for us all.
Right ?
Tbh, I'm not overly enthusiastic about AI whatsoever, looking at the actors that will be at its forefront.
I hope to be proven wrong, but as you guys are writing, it's still humans at the helm... hence a complete lack of trust that it'll actually be used for an actual greater good.
Shocktrooper wrote:
DXWarlock wrote:
Yes, but I feel they misgauge how easy it is to trick/fool/mislead the average person.
True...it is shockingly easy in fact!
But this has been known in the fields of social psychology and behavioral economics for a long time. The work of Daniel Kahneman for example is incredibly valuable.
It is still ignorance then, that these researchers apparently had tunnel vision and couldn't look beyond their own field of work.
Thanks for the insightful replies, as per usual!
It is quite shocking indeed, but not very surprising to me (nor to you; seems like you work in academia too): researchers are highly prone to completely disregarding neighboring fields and the implications of their results there, out of sheer ignorance, tunnel vision/lack of vision (time is ultimately finite), and sometimes a nice touch of ego on top of that.
Seems like that's the case here: utterly brilliant in their field of expertise, but very out of touch when it comes to practicality and, basically speaking, real-life ramifications. Nuclear research comes to mind, with researchers realizing the full scope of their findings much too late, despite the whole thing being pretty obvious.
R5 5600X - 3070FE - 16GB DDR4 3600 - Asus B550 TUF Gaming Plus - BeQuiet Straight Power 11 750W - Pure Base 500DX
Last edited by TheZor on Wed, 3rd May 2023 15:33; edited 1 time in total
I really think it comes down to which is more beneficial to people: Weaponizing it, or commercializing it. Or a mix of both.
Any significant advancement since we've been around and writing down what we do has been met with: How can we use this?
Followed by: How can we weaponize this thing we just made?
Can you think of any major advancement mankind has made that wasn't used shortly after for: "This thing is awesome, how can we use or incorporate it into the fucking-over of the people we don't like... somehow? Think, boys, we need to figure out how."
I personally think it will be like the printing press, or radio, or TV, or the internet when they came around. Huge speculation about how it's going to ruin mankind and overload everyone with instant misinformation. Then it finds a middle ground, since totally weaponizing it ruins the chances of commercializing it to the masses for profit.
So a balance of "use it as a tool, but not TOO much, since we still need to exploit its ubiquity as a profit maker too" will happen.
The only place pure weaponization of an item happens, with no concern for profit potential, is when it's in military-only hands with no public access. This started in the hands of the public. It's too useful as a money maker to let public opinion of it turn TOO negative once it can be used profitably. Sure, some of that profit will come from misinformation and other such purposes, but the bulk of it will come from other means. And when mankind has two choices, make profit or make enemies, they will make money first and foremost with an invention, and if they make enemies while doing it, so be it.
Anything you consider "will humans do this?" = Yes.
The thing with this issue is that there's potential for a runaway effect, like with weapons-of-mass-destruction testing, which fortunately just panned out. In my view, humans are dumb as dogshit; curiosity wins in any situation.
Therefore I'm supportive of any form of regulation (which doesn't really work, btw) for groups working with technology that has potential for serious harm. I really think AI is one of those situations. I'm not saying don't do it, just don't let the Facebook dickheads do anything they want, unbridled.
Destiny asks a simple question, and this so-called expert has literally nothing to answer. AI is a big joke for day-to-day life. Maybe medicine and warfare.
AI will be able to create all those things, on the popular side of stuff, in no time. It's not like most popular movies aren't quite formulaic; heck, a lot of stuff follows archetypes and structures we've known at least since Aristotle wrote them down 2,400 years ago. Same for popular music.
It's actually made (on purpose) to be really bad at copying things, because obviously it's not too interesting if we just get copies of already existing paintings or music. It's made to take inspiration from things. You can use it for copying too if you train it a certain way, but that's not really what makes it interesting or good.
For mastering audio, I just noticed it can be really great; mastering can be super expensive, and you need a ton of knowledge and time to do it properly.
I'd much rather have AI than Bethesda writing. Let it write 50 scripts; even if 49 are total shit and one is good or great, it's still quite an amazing achievement given how fast and cheap (free) it is compared to 50 writers.
It couldn't do crap just 1.5 years ago; it was super experimental. Now Hollywood is on strike because of it. Many people are losing their jobs to AI, so yes, it can do a lot already, and this is only machine learning's infancy.
Machine learning is used in so many fields already, calling it a joke just proves you know nothing.
Soon the rich will have no need for that 80% of the population; they'll live in walled, AI- and robot-served and protected societies while the rest of us ooga-booga it through.
Got a problem with it? A bullet will be auto-guided to your eyeball
AI can't write good stories and scripts... people do. AI can't direct, AI can't write songs (decent ones, lol), etcetera.
It's not bad. I use it a lot for my tabletop game: give it the past lore and current situation and ask it for plot ideas, NPC/villain dialog or perspectives, or ways of talking/saying things I wouldn't think of.
Honestly, it has written or given me ideas for about 1/4 of my game's plot, story, and situations, by making connections between things it knows that would make a good plot hook/twist; connections I would never have made myself.
So I'd say it does pretty well. Even real movie/plot/story writing isn't 'one and done'; they all get tweaks, changes, and partial rewrites from the director, editor, or proofreader (depending on the medium). So it's roughly on par with the rough final draft an average scriptwriter would produce. Not as good as the masters, but plenty close to what the mean average of people with a book for sale on Amazon can do.
It wrote and voiced these below, 100%.
All I had to do was give it a short description of the setting, and it keeps the story going, each one a continuation of the last, with the info it has gathered over time:
It has literally written about 40 minutes of dialog for our pregame narration (15-20 sessions' worth, not 40 minutes in one go) of what happened the week before. It keeps up with the plot, remembers past things and incorporates them into the context of what it's saying, and continues the 'story' as one cohesive thing.
It gives me far more engaging narration than when I try to do it myself.
It has even nailed the 'demeanor' of the overly boastful, chauvinistic, egotistical, try-hard nature of Rip Studwell (an NPC I have), whose memoirs these are.
All I did was tell it what he's like (a character overview) at the start, and it uses that for all of them and writes exactly how I imagine Rip would tell it: over-exaggerating, fluffing up details, and stroking his own ego and his 'man crush' on Xavier.
Last edited by DXWarlock on Sun, 24th Sep 2023 20:30; edited 14 times in total