Invasor
Moderator
Posts: 7638
Location: On the road
Posted: Mon, 25th Apr 2016 17:38 Post subject: Google's AI (DeepMind) might play Starcraft
Quote: Computers That Crush Humans at Games Might Have Met Their Match: ‘StarCraft’
Artificial intelligence has conquered complex games, but to win this one, machines need to figure out how to lie
SEOUL—Humanity has fallen to artificial intelligence in checkers, chess, and, last month, Go, the complex ancient Chinese board game.
But some of the world’s biggest nerds are confident that machines will meet their Waterloo on the pixelated battlefields of the computer strategy game StarCraft.
A key reason: Unlike machines, humans are good at lying.
StarCraft, created in 1998, is one of the world’s most popular computer game franchises. It pits three races against one another: the humanlike Terrans, the slimy insectoid Zerg and a mystical race with psionic powers called the Protoss.
Players pick a race and then use its units—spacecraft with cloaking abilities or little creatures that can burrow in the soil, for example—to exterminate their opponents and capture their headquarters. Unlike chess or Go, players don’t take turns making their moves. Everything happens at once.
The intricacies of the game and the endless permutations of possible strategies have made StarCraft, in the minds of many artificial intelligence experts, the obvious next target for man-machine contests.
Demis Hassabis, creator of the artificial-intelligence program that defeated Go grandmaster Lee Se-dol in the recent closely watched match in Seoul, has long eyed StarCraft as a possible challenge for his AI company DeepMind, which Alphabet Inc.’s Google acquired two years ago.
In 2011, Mr. Hassabis called StarCraft “the next step up” from abstract games like Go, and last month named it as a potential next target for his AI researchers, which include a former pro StarCraft gamer.
Michael Morhaime, co-founder and president of StarCraft creator Blizzard Entertainment, says he has reached out to Google after the man-machine Go match, but Google says it is still weighing a number of possible platforms for its AI tests.
“We would love to be a milestone on that advance of artificial intelligence, from chess to Go and then us,” Mr. Morhaime said.
Facebook Inc. and Microsoft Corp. also have employees working on StarCraft AI projects, but both companies say they are currently small-scale ventures. Microsoft says it is also working on using AI to crack the videogame Minecraft and a variation of Texas Hold ’em Poker. Facebook is working on a Go AI program.
In addition to its complexity, the most appealing aspect of StarCraft for AI developers is the element of uncertainty: Unlike games like chess and Go where both players can see the entire board at once, StarCraft players can’t. Players must send out units to explore the map and locate their opponent.
The lack of visibility means that computers can’t simply calculate all the possible moves their opponent might make, and elevates bluffing as a key strategy employed by the world’s top StarCraft pros.
An advanced human player might, for example, feign weakness on one side of the playing field while mustering a pack of mutalisks—fire-breathing dragon-like creatures—on the other side of the board.
“Giving false information or false cues is a very advanced strategy, so it would be amazing to see a computer try to do that,” says Mr. Morhaime, the Blizzard co-founder.
Eugene Kim, a 22-year-old professional StarCraft gamer in South Korea considered the world’s top human player, says bluffing is a critical skill to succeed at the top levels of the game.
Mr. Kim is skeptical that AI is anywhere near challenging mankind. “In order for a computer to win, it needs to learn how to lie,” he says.
Cho Man-soo, secretary-general of the Korea e-Sports Association, describes StarCraft as “all about bluffing.”
“It’s going to be hard for AI to bluff or to trick a human player,” he says.
Some believe machines will eventually prevail, once they are programmed to figure out their version of lying.
What humans call bluffing, the computer simply considers another possible strategy among many, says University of Alberta computer scientist David Churchill.
“When the AI finds that the only way to win is to show strength, it will do that,” Mr. Churchill says. “If you want to call that bluffing, then the AI is capable of bluffing, but there’s no machismo behind it.”
Mr. Churchill has been running an annual StarCraft AI challenge for the past five years. The competition pits AI programs developed by different teams of Ph.D.s against one another to sharpen their skills, before taking on top-ranked human players.
So far, it hasn’t even been close. As it turns out, the AI isn’t quite as good as humans at executing time-tested maneuvers like the mutalisk rush (dispatching a flock of the flying dragon-like creatures at enemy headquarters) or dark templar drops (using a flying shuttle to deposit a cloaked warp-blade-armed warrior near enemy worker drones).
Other AI developers are still far from the point where their programs might be able to trick a human opponent. For now, some programmers are still trying to work out more basic kinks, like one in which units appear to dance back and forth on the map as the algorithm struggles to stick to one coherent strategy.
Few know StarCraft as deeply as South Koreans, where the game has been dubbed the national sport because of its popularity. StarCraft and other competitive computer games have been recognized as sports by the country’s national Olympic committee.
Young pro gamers are feted like rock stars, with devoted fanbases and endorsement deals. Cable-TV channels broadcast games between top-ranked StarCraft masters, while tournaments can fill arenas with screaming fans and live commentators.
Using a mouse and keyboard, the world’s top players can issue 500 or more commands a minute. In last year’s global StarCraft tournament, held in Anaheim, Calif., 15 of the 16 finalists were from South Korea.
But the humans’ reign at StarCraft is threatened by the machines, Mr. Churchill says.
“In the past we have seen the human world champions of checkers, chess, and Go say that they will not be defeated by computers, and each time they were wrong,” he says. “It would be foolish to assume that StarCraft, even though it is a much more complex game, is any different.”
http://www.wsj.com/articles/computers-that-crush-humans-at-games-might-have-met-their-match-starcraft-1461344309
Posted: Mon, 25th Apr 2016 17:47 Post subject:
There are also reports about AIs playing Doom 1 in a deathmatch tournament.
It's pretty interesting to read about these things and what's going on under the hood of the AIs.
Enthoo Evolv ATX TG // Asus Prime x370 // Ryzen 1700 // Gainward GTX 1080 // 16GB DDR4-3200
Nui
VIP Member
Posts: 5720
Location: in a place with fluffy towels
Posted: Mon, 25th Apr 2016 18:36 Post subject:
The same guys haven't even beaten humans in every Atari game they tested. Why the heck would StarCraft be the next thing? What kind of a jump is that?
You can view this one easily: https://www.cs.toronto.edu/~vmnih/docs/dqn.pdf
This one is only obtainable somehow (I managed it once; google-fu revealed a view-only link through some access program or other): http://www.nature.com/nature/journal/v518/n7540/full/nature14236.html
Anyway, I'd like to see an AI succeed at several levels (or rather the complete game) of Super Mario World.
Last edited by Nui on Mon, 25th Apr 2016 18:39; edited 1 time in total
Nui
VIP Member
Posts: 5720
Location: in a place with fluffy towels
Posted: Mon, 25th Apr 2016 18:51 Post subject:
I know that one. I actually ran it. Unless something changed dramatically, it is not what I mean or want, so the following is based on its old state; I don't know if it was updated since.
It used an evolutionary algorithm (EA) to generate a neural network (represented by those lines and blocks on screen) which controls Mario. So first of all, it doesn't learn to play; it's a clever way to guide somewhat random design choices to build networks (not a bad thing, I just like "learning" better). Its input (the map you see in the upper left) was so simplified that it can't tell that power-ups exist at all, and some platforms don't exist in it either (so the input is slightly engineered!). The score the EA aimed to optimize was basically Mario's x-position: the further to the right, the better. This goal helps optimization early, because you can tell immediately what's good, but it is essentially wrong, in the sense that it is not the goal of the game. You want to reach the finish line, perhaps with the highest score or the most time left, unless you have a boss battle...
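Roughly, what the EA optimizes looks like this (just my own sketch of the idea, not that project's actual code; Emulator, build_network and the other names are made up for illustration):

Code:
# Rough sketch of the "x-position fitness" idea, NOT the real project's code.
# Emulator, build_network and the genome format are made-up placeholders.

def evaluate_genome(genome, max_frames=4000):
    """Run one evolved genome and score it by how far right Mario got."""
    env = Emulator("SuperMarioWorld")        # hypothetical emulator wrapper
    net = build_network(genome)              # genome -> neural network controller
    best_x, stalled = 0, 0

    for _ in range(max_frames):
        tiles = env.simplified_tiles()       # coarse tile grid; no power-ups in it
        buttons = net.forward(tiles)         # network decides which buttons to press
        env.step(buttons)

        x = env.mario_x()
        if x > best_x:
            best_x, stalled = x, 0
        else:
            stalled += 1
        if env.mario_dead() or stalled > 600:  # stop when dead or stuck
            break

    # Fitness is basically "furthest x reached": easy to score early on,
    # but not the same thing as actually reaching the goal of the level.
    return best_x

The EA then just keeps the genomes with the highest score and mutates/crosses them for the next generation.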
Anyway, when I tested it, it couldn't finish the 2nd level or something, because the first thing you needed to do there was climb vertically. Instead it optimized the length of its leap of faith to its death in a pit. When I got it past that, it couldn't see important (moving) platforms, which made it extremely unlikely to get over them successfully.
The DeepMind approach for the Atari games does not modify the input (for Go they did, by the way), and the network does "learn" to play the game.
And I want these to succeed in the sense of beating human players at the whole game.
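For comparison, the update rule those Atari agents are trained with (from the papers linked above) boils down to roughly this. It's my paraphrase of the published Q-learning target, not DeepMind's code; q_net and target_net are just assumed to be callables mapping a batch of preprocessed frames to one Q-value per action:

Code:
# My paraphrase of the Q-learning update described in the Atari DQN papers,
# not DeepMind's code. q_net / target_net are assumed to be callables that
# map a batch of preprocessed frames to one Q-value per joystick action.

import numpy as np

GAMMA = 0.99  # discount factor (0.99 in the papers, if I remember right)

def dqn_loss(q_net, target_net, states, actions, rewards, next_states, done):
    """Squared error against the bootstrapped targets
    y = r + GAMMA * max_a' Q_target(s', a'), with no bootstrap past terminals."""
    q_pred = q_net(states)[np.arange(len(actions)), actions]   # Q(s, a) for taken actions
    q_next = target_net(next_states).max(axis=1)               # max_a' Q_target(s', a')
    y = rewards + GAMMA * q_next * (1.0 - done)
    return np.mean((y - q_pred) ** 2)

The input is basically the raw screen (just downscaled, grayscaled and a few frames stacked), so nothing like the hand-engineered tile map of the Mario bot.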