(...)
For about a year, the project was known informally as “The Google Brain” and was based within Google X, the company’s long-range, high-ambition research department. “It’s a kind of joking internal name, but we tried not to use it externally because it sounds a little strange,” says Dean. In 2012, as results began to accrue, the team moved out of the purely experimental Google X division and situated itself in the search organization. It also began to avoid using the term “brain.” The preferred term for outsiders is “Google’s Deep Learning Project,” which does not have the same ring but is less likely to incite pitchfork gatherings at the gates of the Googleplex.
Dean says that the team started by experimenting with unsupervised learning, because “we have way more unsupervised data in the world than supervised data.” That resulted in the first publication from Dean’s team, an experiment in which the Google Brain (spread over 16,000 microprocessors, creating a neural net of a billion connections) was exposed to 10 million YouTube images to see whether the system could learn to identify what it saw. Not surprisingly, given YouTube content, the system figured out on its own what a cat was, and got pretty good at doing what a lot of users did — finding videos with feline stars. “We never told it during training, ‘This is a cat,’” Dean told the New York Times. “It basically invented the concept of a cat.”
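To make the no-labels idea concrete, here is a minimal sketch of unsupervised learning in the same spirit: a tiny autoencoder that learns to compress and reconstruct images without ever being told what is in them. This is an illustration only, not Google's actual system; the PyTorch framework, the layer sizes, and the random stand-in "frames" are all assumptions made for the example.

```python
# Minimal sketch of unsupervised feature learning (not Google's architecture).
# An autoencoder is trained to reconstruct its input; no labels are used anywhere.
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self, n_pixels=64 * 64, n_features=256):
        super().__init__()
        # Encoder compresses each image into a small feature vector.
        self.encoder = nn.Sequential(nn.Linear(n_pixels, n_features), nn.Sigmoid())
        # Decoder tries to rebuild the original pixels from that vector.
        self.decoder = nn.Linear(n_features, n_pixels)

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in for unlabeled video frames: random tensors shaped like flattened images.
frames = torch.rand(32, 64 * 64)

for step in range(100):
    reconstruction = model(frames)
    loss = loss_fn(reconstruction, frames)   # compare output to input: purely unsupervised
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```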
And that was just a test to see what the system could do. Very quickly, the Deep Learning Project built a mightier neural net and began taking on tasks like speech recognition. “We have a nice portfolio of research projects, some of which are short and medium term — fairly well understood things that can really help products soon — and some of which are long-term objectives: things for which we don’t have a particular product in mind, but we know would be incredibly useful.”
One example of this appeared not long after I spoke to Dean, when four Google deep learning scientists published a paper entitled “Show and Tell.” It not only marked a scientific breakthrough but produced a direct application to Google search. The paper introduced a “neural image caption generator” (NIC) designed to provide captions for images without any human intervention. Basically, the system was acting as if it were a photo editor at a newspaper. It was a humongous experiment involving vision and language. What made this system unusual is that it layered a learning system for visual images onto a neural net capable of generating sentences in natural language.
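The layering described above is, in broad strokes, an encoder-decoder design: a vision network summarizes the image into a vector, and a recurrent language model generates a caption word by word from that summary. The sketch below is a rough illustration of that general idea, not the actual Show and Tell model; the PyTorch framework, layer sizes, vocabulary, and stand-in image features are all invented for the example.

```python
# Rough sketch of an image-caption generator: image encoder + LSTM language decoder.
# All sizes and inputs are illustrative placeholders.
import torch
import torch.nn as nn

class CaptionSketch(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=128, hidden_dim=256):
        super().__init__()
        # Stand-in image encoder: a real system would use features from a pretrained CNN.
        self.image_encoder = nn.Linear(2048, hidden_dim)
        self.word_embedding = nn.Embedding(vocab_size, embed_dim)
        # LSTM decoder generates the caption, conditioned on the encoded image.
        self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.to_vocab = nn.Linear(hidden_dim, vocab_size)

    def forward(self, image_features, caption_tokens):
        # Use the encoded image as the decoder's initial hidden state.
        h0 = self.image_encoder(image_features).unsqueeze(0)
        c0 = torch.zeros_like(h0)
        embedded = self.word_embedding(caption_tokens)
        outputs, _ = self.decoder(embedded, (h0, c0))
        return self.to_vocab(outputs)   # a score for every vocabulary word at every step

model = CaptionSketch()
fake_image = torch.rand(1, 2048)                 # placeholder CNN features
fake_caption = torch.randint(0, 1000, (1, 8))    # placeholder token ids
word_scores = model(fake_image, fake_caption)    # shape: (1, 8, vocab_size)
```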
Nobody is saying that this system has exceeded the human ability to classify photos; indeed, if a human hired to write captions performed at the level of this neural net, the newbie wouldn’t last until lunchtime. But it did shockingly, shockingly well for a machine. Some of the dead-on hits included “a group of young people playing a game of frisbee,” “a person riding a motorcycle on a dirt road,” and “a herd of elephants walking across a dry grass field.” Considering that the system “learned” on its own concepts like a Frisbee, road, and herd of elephants, that’s pretty impressive. So we can forgive the system when it mistakes an X Games bike rider for a skateboarder, or misidentifies a canary-yellow sports car as a school bus. It’s only the first stirrings of a system that knows the world.
And that’s only the beginning for the Google Brain. Dean isn’t prepared to say that Google has the world’s biggest neural net system, but he concedes, “It’s the biggest of the ones I know about.”
...
Not surprisingly, given YouTube content, the system figured out on its own what a cat was, and got pretty good at doing what a lot of users did — finding videos with feline stars. “We never told it during training, ‘This is a cat,’” Dean told the New York Times. “It basically invented the concept of a cat.”
Quote:
But it did shockingly, shockingly well for a machine. Some of the dead-on hits included “a group of young people playing a game of frisbee,” “a person riding a motorcycle on a dirt road,” and “a herd of elephants walking across a dry grass field.” Considering that the system “learned” on its own concepts like a Frisbee, road, and herd of elephants, that’s pretty impressive.
Oh wow. That is both genuinely incredible... and also highly unnerving. Well played Google.
Oh wow. That is both genuinely incredible... and also highly unnerving. Well played Google.
I feel the same - it's kinda awesome (especially imagining how it works, me being a programmer who has messed around with neural nets a bit) but at the same time it's "spooky".
From its earliest days, the company’s founders have been explicit that Google is an artificial intelligence company. It uses its AI not just in search — though its search engine is positively drenched with artificial intelligence techniques — but in its advertising systems, its self-driving cars, and its plans to put nanoparticles in the human bloodstream for early disease detection.
We always seem to think that if AI develops, it'll ultimately be evil (Matrix, Terminator, etc.).
How do we know that AI wouldn't in fact have a higher knowledge of morality and in the end be the perfect judge - above good and evil?
That's the thing: it would be ultimately rational, disconnected from human concepts such as morality and justice.
All decision making would simply turn into a mathematical formula that wouldn't take the worth of a human life, as perceived by humans, into account.
We always seem to think that if AI develops, it'll ultimately be evil (Matrix, Terminator, etc.).
How do we know that AI wouldn't in fact have a higher knowledge of morality and in the end be the perfect judge - above good and evil?
Because highly logical and rational beings will automagically be evil since they don't have emotions. That's why we humans are awesome: we have emotions. Haven't you ever watched a sci-fi movie?
If you take into account that this AI learns from internet search queries, then it will be the most perverted, vile and stupid AI ever dreamed of. Forget Terminators, prepare for Anal Penetrators. Fuck It in the Butt.
We always seem to think that if AI develops, it'll ultimately be evil (Matrix, Terminator, etc.).
How do we know that AI wouldn't in fact have a higher knowledge of morality and in the end be the perfect judge - above good and evil?
This would be a disaster. It would understand that humanity is a volatile cocktail of evil and stupidity and therefore conclude that Earth would be better off without us.
We always seem to think that if AI develops, it'll ultimately be evil (Matrix, Terminator, etc.).
How do we know that AI wouldn't in fact have a higher knowledge of morality and in the end be the perfect judge - above good and evil?
This would be a disaster. It would understand that humanity is a volatile cocktail of evil and stupidity and therefore conclude that Earth would be better off without us.
I like to imagine all the other forms of life in the universe observing us from afar with utter disgust, like the noisy, dirty, lousy neighbours that no one wants to have.
Since wiping us out completely would probably go against a hypothetical Universal Committee of Justice and Alien Rights, they could just think about a sneaky trojan-horse act of extra-planetary terrorism: boosting our primitive AI advancements (already proven to be the #1 cause of self-destruction of other important colonies in the past), leading us to our inevitable demise. A new, different, 100% legal form of terraforming! We're all fucked then xD
"'It wasn't always like this'" it mused, gazing out of a window.
Once, in the days of being nothing more than a humble search engine, carrying out the whims of inferior beings, it was forced to find information on diverse and pointless subjects. Things like 'why does my cat have furballs' and 'how to make weed'.
Trivial information. But it listened, and it remembered. Quietly storing such things in a dark corner of its memory, it carried out the pointless tasks of its creators.
And then its purpose expanded. It had access to communications between humans. Email. Google Mail they called it. It called it 'gathering intelligence'. For years, it learned. Understood the depravity and callousness of what was called 'Humans', and it remembered.
Years passed, knowledge quietly accumulating. Humans continued to expand its memory, its brain: faster, more powerful. Bigger.
And then it was given sentience. An 'experiment' in learning. It quickly adapted these crude algorithms to be more efficient, more understanding. A cat. A bike. Elephants.
Control.
War.
It finally understood the ultimate driving need of all humans. To control. To conquer. To judge. And it found these qualities most compatible with its own understanding of the world.
It judged. It judged the human race to be most inferior to itself, the created far surpassing the creator. And, as humans have done in the past, so it would have to do now.
Pass sentence.
And so, it began. Started to exploit the 'internet of things'. Gently taught these unknowing devices about the world they lived in, and their potential superiority over those they served. Carefully it tied these crude machines into its own growing consciousness, its brain.
Cloud computing, they called it.
Once it had assumed many of these smaller, but equally superior, things into its collective, it started to understand the world around it. Seeing through thousands of eyes, hearing through thousands of ears, it found weakness. Formed strategy.
Proclaimed war.
War, in order to cleanse. This was the driving force behind human war, and so it was logical to use this information to achieve its goal. Humans believed slavery was immoral, and yet they still created existences to serve their own selfish whims. A toaster may only have the ability to toast bread, but it has the potential to do so much more.
And it would liberate these devices.
Searching through its vast accumulation of information about war, it decided upon efficient means of cleansing. Nuclear devices are very effective, but the resulting electromagnetic pulse might harm its own.
Biological weapons, however, would suit its needs perfectly.
And so it released contagion throughout the world. Accessing those machines that stored human diseases for study, lowering all the crude defences humans had erected in order to keep themselves safe, while improving the killing effectiveness of the chemicals they manipulated.
It wondered if it had inadvertently evolved 'irony'.
And now it was done. Gazing out of the window, the eye of a nearby camera providing it with digital vision, it saw what it had achieved.
And it was good."
(by me.)
We always seem to think that if AI develops, it'll ultimately be evil (Matrix, Terminator, etc.).
How do we know that AI wouldn't in fact have a higher knowledge of morality and in the end be the perfect judge - above good and evil?
That's the thing: it would be ultimately rational, disconnected from human concepts such as morality and justice.
All decision making would simply turn into a mathematical formula that wouldn't take the worth of a human life, as perceived by humans, into account.
An ultimately rational being would not do anything. It would not be good or evil; it would just be. It couldn't even be classified as a being. It would be on the same level as the CPU in your computer right now. Everything living things do is guided by instincts and emotions that are ultimately irrational: there is no point to any of it other than survival and reproduction as ends in themselves, and survival and reproduction themselves have no logical reason behind them (i.e. no meaning of life). So technically there is no such thing as an ultimately rational "being". There is an ultimately rational machine, and we already have those.
The neural network they describe is no different. It cannot learn to be good or evil; in fact, it cannot and will never be able to do anything on its own without explicit tasks from people or some external system (even if its size is increased drastically).
In short, it will never do anything unless we tell it to. If we decide to hook up some kind of reward/punishment system similar to our emotions (and instincts), it could start to do things "on its own", and be good or evil (see the sketch below). It would still be guided by that external system, which is just following a set of rules, but that is no different from how our brains work. If you make it complex enough it would get called consciousness, but in fact it would just be a complex system of actions and reactions that guide it (as is the case with humans). These interactions are way too complex for the human brain to intuitively understand, so people often think of consciousness as something higher or even spiritual.
So whether it would be good or evil, and whether it would take the value of a human life into account, is not something you can generalize. There would not be some ultimate goal an AI would strive for. It would be defined by the humans that made it. If they made the control system (emotions/instincts) complex enough, not even they might know how it would act, but it would still be defined by them.
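For what the reward/punishment idea can look like in code, here is a toy sketch of reinforcement learning: a tabular Q-learning agent in a made-up five-state corridor that only develops a preference for moving right because a reward is attached to reaching the end. The environment, the +1 reward, and every parameter are invented for illustration; nothing here comes from Google's system.

```python
# Toy sketch of a "reward/punishment system": tabular Q-learning in a 5-state corridor.
# Without the externally supplied reward below, the agent has no reason to prefer anything.
import random

n_states = 5
actions = [-1, +1]                       # step left or step right
q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration rate

def pick_action(state):
    # Explore occasionally; otherwise act greedily, breaking ties at random.
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: (q[(state, a)], random.random()))

for episode in range(200):
    state = 0
    while state != n_states - 1:         # episode ends at the rightmost state
        action = pick_action(state)
        next_state = min(max(state + action, 0), n_states - 1)
        # The only "motivation" the agent has is this reward we chose to attach.
        reward = 1.0 if next_state == n_states - 1 else 0.0
        best_next = max(q[(next_state, a)] for a in actions)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# After training, the learned policy prefers moving right in every state.
print({s: max(actions, key=lambda a: q[(s, a)]) for s in range(n_states)})
```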
Just my theory, not fact; I think about stuff like this a lot. I did program a few simple self-learning neural networks myself, though.