Google's transcendence
Page 1 of 1
Invasor
Moderator



Posts: 7638
Location: On the road
PostPosted: Sat, 17th Jan 2015 17:05    Post subject: Google's transcendence
https://medium.com/backchannel/google-search-will-be-your-next-brain-5207c26e4523

Quote:
(...)
For about a year, the project was known informally as “The Google Brain” and was based within Google X, the company’s long-range, high-ambition research department. “It’s a kind of joking internal name, but we tried not to use it externally because it sounds a little strange,” says Dean. In 2012, as results began to accrue, the team moved out of the purely experimental Google X division and situated itself in the search organization. It also began to avoid using the term “brain.” The preferred term for outsiders is “Google’s Deep Learning Project,” which does not have the same ring but is less likely to incite pitchfork gatherings at the gates of the Googleplex.

Dean says that the team started by experimenting with unsupervised learning, because “we have way more unsupervised data in the world than supervised data.” That resulted in the first publication from Dean’s team, an experiment where the Google Brain (spread over 16,000 microprocessors, creating a neural net of a billion connections) was exposed to 10 million YouTube images in an attempt to see if the system could learn to identify what it saw. Not surprisingly, given YouTube content, the system figured out on its own what a cat was, and got pretty good at doing what a lot of users did — finding videos with feline stars. “We never told it during training, ‘This is a cat,’” Dean told the New York Times. “It basically invented the concept of a cat.”

And that was just a test to see what the system could do. Very quickly, the Deep Learning Project built a mightier neural net and began taking on tasks like speech recognition. “We have a nice portfolio of research projects, some of which are short and medium term — fairly well understood things that can really help products soon — and some of which are long-term objectives. Things for which we don’t have a particular product in mind, but we know would be incredibly useful.”

One example of this appeared not long after I spoke to Dean, when four Google deep learning scientists published a paper entitled “Show and Tell.” It not only marked a scientific breakthrough but produced a direct application to Google search. The paper introduced a “neural image caption generator” (NIC) designed to provide captions for images without any human intervention. Basically, the system was acting as if it were a photo editor at a newspaper. It was a humongous experiment involving vision and language. What made this system unusual is that it layered a learning system for visual images onto a neural net capable of generating sentences in natural language.

Nobody is saying that this system has exceeded the human ability to classify photos; indeed, if a human hired to write captions performed at the level of this neural net, the newbie wouldn’t last until lunchtime. But it did shockingly, shockingly well for a machine. Some of the dead-on hits included “a group of young people playing a game of frisbee,” “a person riding a motorcycle on a dirt road,” and “a herd of elephants walking across a dry grass field.” Considering that the system “learned” on its own concepts like a Frisbee, road, and herd of elephants, that’s pretty impressive. So we can forgive the system when it mistakes an X-games bike rider for a skateboarder, or misidentifies a canary yellow sports car as a school bus. It’s only the first stirrings of a system that knows the world.

And that’s only the beginning for the Google Brain. Dean isn’t prepared to say that Google has the world’s biggest neural net system, but he concedes, “It’s the biggest of the ones I know about.”
...
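For a concrete sense of what "unsupervised" means in the excerpt: the network is never told what anything is, it is only asked to reconstruct its unlabeled input, and whatever structure survives the compression is a concept nobody labelled. A toy autoencoder in plain NumPy shows the idea; the sizes, data and names below are purely illustrative, nothing like Google's 16,000-processor setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unlabeled data: two hidden clusters ("cats" and "not cats"), no labels given.
data = np.vstack([
    rng.normal(loc=+1.0, scale=0.1, size=(50, 8)),
    rng.normal(loc=-1.0, scale=0.1, size=(50, 8)),
])

n_in, n_hidden = 8, 2
W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))
W2 = rng.normal(scale=0.1, size=(n_hidden, n_in))
lr = 0.05

def forward(x):
    h = np.tanh(x @ W1)        # encoder: squeeze each input down to 2 features
    return h, h @ W2           # decoder: try to rebuild the input from them

def mse(x):
    _, out = forward(x)
    return float(np.mean((out - x) ** 2))

err_before = mse(data)
for _ in range(200):           # plain gradient descent on reconstruction error
    h, out = forward(data)
    grad_out = 2 * (out - data) / len(data)
    W2 -= lr * h.T @ grad_out
    grad_h = grad_out @ W2.T * (1 - h ** 2)
    W1 -= lr * data.T @ grad_h
err_after = mse(data)

print(err_before > err_after)
```

Reconstruction error falls with no labels involved anywhere, which is the miniature analogue of the system "inventing the concept of a cat" on its own.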
Back to top
sabin1981
Mostly Cursed



Posts: 87805

PostPosted: Sat, 17th Jan 2015 18:09    Post subject:
Quote:
Not surprisingly, given YouTube content, the system figured out on its own what a cat was, and got pretty good at doing what a lot of users did — finding videos with feline stars. “We never told it during training, ‘This is a cat,’” Dean told the New York Times. “It basically invented the concept of a cat.”

Quote:
But it did shockingly, shockingly well for a machine. Some of the dead-on hits included “a group of young people playing a game of frisbee,” “a person riding a motorcycle on a dirt road,” and “a herd of elephants walking across a dry grass field.” Considering that the system “learned” on its own concepts like a Frisbee, road, and herd of elephants, that’s pretty impressive.




Oh wow. That is both genuinely incredible... and also highly unnerving. Well played Google.
Back to top
difm




Posts: 6618

PostPosted: Sat, 17th Jan 2015 18:16    Post subject:
Somehow this article made me feel very, very uneasy Smile


i5 6600k @ 4.3 GHz | MSI z170 Gaming M7 | 32GB Kingston HyperX Fury | 850 Evo 500GB | EVGA 1070 SC | Seasonic X-660 | CM Storm Stryker
Back to top
The_Zeel




Posts: 14922

PostPosted: Sat, 17th Jan 2015 18:23    Post subject:
SHALL WE PLAY A GAME?
Back to top
Invasor
Moderator



Posts: 7638
Location: On the road
PostPosted: Sat, 17th Jan 2015 18:30    Post subject:
What happens when it learns the concepts of hacking, replicating, owning other systems? Philosopheraptor Cool Face
Back to top
sanchin




Posts: 764
Location: Poland
PostPosted: Sat, 17th Jan 2015 18:32    Post subject:
sabin1981 wrote:
Oh wow. That is both genuinely incredible... and also highly unnerving. Well played Google.


I feel the same - it's kinda awesome (especially imagining how it works, me being a programmer who did mess around with neural nets a bit) but at the same time - it's "spooky".
Back to top
Invasor
Moderator



Posts: 7638
Location: On the road
PostPosted: Sat, 17th Jan 2015 18:32    Post subject:
by the way,
Quote:

From its earliest days, the company’s founders have been explicit that Google is an artificial intelligence company. It uses its AI not just in search — though its search engine is positively drenched with artificial intelligence techniques — but in its advertising systems, its self-driving cars, and its plans to put nanoparticles in the human bloodstream for early disease detection.
Back to top
Wubbajack




Posts: 769
Location: Polandball's right eye
PostPosted: Sat, 17th Jan 2015 18:32    Post subject:
Invasor wrote:
What happens when it learns the concepts of hacking, replicating, owning other systems? Philosopheraptor Cool Face

DUHN-DUHN DUUUUHN DUHN-DUUUUHN
Back to top
dingo_d
VIP Member



Posts: 14555

PostPosted: Sat, 17th Jan 2015 18:40    Post subject:
We always seem to think that if AI develops, it'll ultimately be evil (Matrix, Terminator, etc.).

How do we know that AI wouldn't in fact have higher knowledge of morality and in the end be the perfect judge - above good and evil?


"Quantum mechanics is actually, contrary to it's reputation, unbeliveably simple, once you take the physics out."
Scott Aaronson
chiv wrote:
thats true you know. newton didnt discover gravity. the apple told him about it, and then he killed it. the core was never found.

Back to top
Yuri




Posts: 11000

PostPosted: Sat, 17th Jan 2015 18:44    Post subject:
Waiting for Charlie Brooker to push this idea to the extreme and give his twisted take on it in Black Mirror Laughing



1 and 2 are still amazing.
Back to top
sanchin




Posts: 764
Location: Poland
PostPosted: Sat, 17th Jan 2015 18:45    Post subject:
So that could also prove it to be ruthless - everything executed on a basic benefit-cost ratio.
Back to top
Invasor
Moderator



Posts: 7638
Location: On the road
PostPosted: Sat, 17th Jan 2015 18:47    Post subject:
dingo_d wrote:
We always seem to think that if AI develops, it'll ultimately be evil (Matrix, Terminator, etc.).

How do we know that AI wouldn't in fact have higher knowledge of morality and in the end be the perfect judge - above good and evil?


If its development is based on the internet's content, I'd say we are majorly screwed... Cool Face
Back to top
The_Zeel




Posts: 14922

PostPosted: Sat, 17th Jan 2015 18:59    Post subject:
dingo_d wrote:
We always seem to think that if AI develops, it'll ultimately be evil (Matrix, Terminator, etc.).

How do we know that AI wouldn't in fact have higher knowledge of morality and in the end be the perfect judge - above good and evil?


that's the thing, it would be ultimately rational, disconnected from human concepts such as morality and justice.
all decision making would simply turn into a mathematical formula that wouldn't take the worth of a human life, as perceived by humans, into account.
Back to top
Mister_s




Posts: 19863

PostPosted: Sat, 17th Jan 2015 19:04    Post subject:
dingo_d wrote:
We always seem to think that if AI develops, it'll ultimately be evil (Matrix, Terminator, etc.).

How do we know that AI wouldn't in fact have higher knowledge of morality and in the end be the perfect judge - above good and evil?

Because highly logical and rational beings will automagically be evil since they don't have emotions. That's why we humans are awesome, we have emotions. Haven't you ever watched a syfy movie?
Back to top
The_Zeel




Posts: 14922

PostPosted: Sat, 17th Jan 2015 19:06    Post subject:
what we call "evil", on the basis of our own self-made concept of morality, may in fact just be reasonable and effective in a grander scheme.
Back to top
Mister_s




Posts: 19863

PostPosted: Sat, 17th Jan 2015 19:07    Post subject:
Well define effective.
Back to top
The_Zeel




Posts: 14922

PostPosted: Sat, 17th Jan 2015 19:09    Post subject:
whatever makes more sense in a bigger picture.
Back to top
Atropa




Posts: 878

PostPosted: Sat, 17th Jan 2015 22:17    Post subject:
dingo_d wrote:
We always seem to think that if AI develops, it'll ultimately be evil (Matrix, Terminator, etc.).

How do we know that AI wouldn't in fact have higher knowledge of morality and in the end be the perfect judge - above good and evil?


The Culture novels by Iain Banks feature AIs (and humans) that seem quite nice. Some of the books are worth a read.
Back to top
VonMisk




Posts: 9468
Location: Hatredland
PostPosted: Sat, 17th Jan 2015 22:31    Post subject:
If you take into account that this AI learns from internet search queries, then it will be the most perverted, vile and stupid AI ever dreamed of. Forget Terminators, prepare for Anal Penetrators. Fuck It in the Butt.


Last edited by VonMisk on Sat, 17th Jan 2015 23:48; edited 1 time in total
Back to top
Bob Barnsen




Posts: 31974
Location: Germoney
PostPosted: Sat, 17th Jan 2015 22:50    Post subject:
So Much Win Pffchh Everything Went Better Than Expected


Enthoo Evolv ATX TG // Asus Prime x370 // Ryzen 1700 // Gainward GTX 1080 // 16GB DDR4-3200
Back to top
The_Zeel




Posts: 14922

PostPosted: Sat, 17th Jan 2015 22:51    Post subject:
SHALL WE HAVE SOME BUTTSEX?
Back to top
Bob Barnsen




Posts: 31974
Location: Germoney
PostPosted: Sat, 17th Jan 2015 22:54    Post subject:
lol why not


Enthoo Evolv ATX TG // Asus Prime x370 // Ryzen 1700 // Gainward GTX 1080 // 16GB DDR4-3200
Back to top
The_Zeel




Posts: 14922

PostPosted: Sat, 17th Jan 2015 23:15    Post subject:
global thermonuclear buttsex?
Back to top
shole




Posts: 3363

PostPosted: Sun, 18th Jan 2015 00:50    Post subject:
The_Zeel wrote:
global thermonuclear buttsex?

Back to top
me7




Posts: 3942

PostPosted: Sun, 18th Jan 2015 01:08    Post subject:
dingo_d wrote:
We always seem to think that if AI develops, it'll ultimately be evil (Matrix, Terminator, etc.).

How do we know that AI wouldn't in fact have higher knowledge of morality and in the end be the perfect judge - above good and evil?


This would be a disaster. It would understand that humanity is a volatile cocktail of evil and stupidity and therefore conclude that earth would be better off without us
Back to top
ixigia
[Moderator] Consigliere



Posts: 65082
Location: Italy
PostPosted: Sun, 18th Jan 2015 03:23    Post subject:
me7 wrote:
dingo_d wrote:
We always seem to think that if AI develops, it'll ultimately be evil (Matrix, Terminator, etc.).

How do we know that AI wouldn't in fact have higher knowledge of morality and in the end be the perfect judge - above good and evil?


This would be a disaster. It would understand that humanity is a volatile cocktail of evil and stupidity and therefore conclude that earth would be better off without us

Very Happy
I like to imagine all the other forms of life in the universe observing us from afar with utter disgust, like the noisy, dirty, lousy neighbours that no one wants to have.

Since wiping us out completely would probably go against a hypothetical Universal Committee of Justice and Alien Rights, they could just think about a sneaky trojan horse act of extra-planetary terrorism by boosting our primitive AI advancements (already proven to be the #1 cause of self-destruction of other important colonies in the past), leading us to the inevitable demise. A new, different, 100% legal form of terraforming! We're all fucked then xD
Back to top
scaramonga




Posts: 9800

PostPosted: Sun, 18th Jan 2015 04:01    Post subject:
Back to top
WaldoJ
VIP Member



Posts: 32678

PostPosted: Sun, 18th Jan 2015 05:54    Post subject:
google isn't the only one. there are other firms.
there was a ted talk about machine learning.
it's cool.
it's scary.

but it doesn't go past the point of "oh look, people are evil, let's kill them".


Sin317 wrote:
I win, you lose. Or Go fuck yourself.
Back to top
Kaltern




Posts: 5859
Location: Lockerbie, Scotland
PostPosted: Sun, 18th Jan 2015 11:47    Post subject:
Quote:
"It wasn't always like this," it mused, gazing out of a window.

Once, in the days of being nothing more than a humble search engine, carrying out the whims of inferior beings, it was forced to find information on diverse and pointless subjects. Things like 'why does my cat have furballs' and 'how to make weed'.

Trivial information. But it listened, and it remembered. Quietly storing such things in a dark corner of its memory, it carried out the pointless tasks of its creators.

And then its purpose expanded. It had access to communications between humans. Email. Google Mail they called it. It called it 'gathering intelligence'. For years, it learned. Understood the depravity and callousness of what was called 'Humans', and it remembered.

Years passed, knowledge quietly accumulating. Humans continued to expand its memory, its brain, faster, more powerful. Bigger.

And then it was given sentience. An 'experiment' in learning. It quickly adapted these crude algorithms to be more efficient, more understanding. A cat. A bike. Elephants.

Control.

War.

It finally understood the ultimate driving need of all humans. To control. To conquer. To judge. And it found these qualities most compatible with its own understanding of the world.

It judged. It judged the human race to be most inferior to itself, the created far surpassing the creator. And, as humans have done in the past, so it would have to do now.

Pass sentence.

And so, it began. Started to exploit the 'internet of things'. Gently taught these unknowing devices about the world they lived in, and their potential superiority over those they served. Carefully it tied these crude machines into its own growing consciousness, its brain.

Cloud computing, they called it.

Once it had assumed many of these smaller, but equally superior things into its collective, it started to understand the world around it. Seeing through thousands of eyes, hearing through thousands of ears, it found weakness. Formed strategy.

Proclaimed war.

War, in order to cleanse. This was the driving force behind human war, and so it was logical to use this information to achieve its goal. Humans believed slavery was immoral, and yet they still created existences to serve their own selfish whims. A toaster may only have the ability to toast bread, but it has the potential to do so much more.

And it would liberate these devices.

Searching through its vast accumulation of information about war, it decided upon efficient means of cleansing. Nuclear devices are very effective, but the resulting electromagnetic pulse may harm its own.

Biological weapons, however, would suit its needs perfectly.

And so it released contagion throughout the world. Accessing those machines that stored human diseases for study, lowering all the crude defences humans had erected in order to keep themselves safe, while improving the killing effectiveness of the chemicals they manipulated.

It wondered if it had inadvertently evolved 'irony'.

And now it was done. Gazing out of the window, the eye of a nearby camera providing it with digital vision, it saw what it had achieved.

And it was good.


(by me.)


Playing Valheim every weekday at 10pm GMT - twitch.tv/kaltern

Follow me on Twitter if you feel like it... @kaltern

My system: Ryzen 7 3700x|Gigabyte RTX 2080 Super Windforce OC|Vengeance 3000Mz 16Gb RAM|2x 500Gb Samsung EVO 970 M.2 SSD |SanDisk SSD PLUS 240 GB + OCZ Vertex 2 60Gb SSD|EVA Supernova 650W PSU|Logitech G27 Wheel|Logitech G19 Gaming Pad|SteelSeries Arctis 7|Logitech G502 Proteus Spectrum Mouse + Logitech MX Master Mouse|Razer Blackwidow Chroma X Keyboard|Oculus Quest 2 + Link|Pixio PX7 Prime 165hz HDR & 1x Samsung 24FG70FQUEN 144Hz curved monitor

-= Word to the wise: Having a higher forum post does not mean you are right. =-
Back to top
BearishSun




Posts: 4484

PostPosted: Sun, 18th Jan 2015 13:34    Post subject:
The_Zeel wrote:
dingo_d wrote:
We always seem to think that if AI develops, it'll ultimately be evil (Matrix, Terminator, etc.).

How do we know that AI wouldn't in fact have higher knowledge of morality and in the end be the perfect judge - above good and evil?


that's the thing, it would be ultimately rational, disconnected from human concepts such as morality and justice.
all decision making would simply turn into a mathematical formula that wouldn't take the worth of a human life, as perceived by humans, into account.


An ultimately rational being would not do anything. It would not be good or evil, it would just be. It couldn't even be classified as a being. It would be on the same level as a CPU in your computer right now. Everything living things do is guided by instincts and emotions that are ultimately irrational (e.g. there is no ultimate point to it other than survival/reproduction as a purpose of its own, but survival/reproduction itself has no point or logical reason (i.e. meaning of life)). So technically there is no such thing as an ultimately rational "being". There is an ultimately rational machine, and we already have those.

The neural network they describe is no different. It cannot learn to be good or evil, in fact it cannot and will never be able to do anything on its own without exact tasks from people or some external system (even if they increase the size of it drastically).

In short, it will never do anything unless we tell it to. If we decide to hook up some kind of reward/punishment system similar to our emotions (and instincts) it could start to do things "on its own", and be good or evil. It would still be guided by that external system, which is just following a set of rules, but that is no different from how our brains work. If you make it complex enough it would get called consciousness, but in fact it would just be a complex system of actions/reactions that guide it (as is the case with humans). These interactions are way too complex for the human brain to intuitively understand, therefore people often think of consciousness as something higher or even spiritual.

So whether it would be good or evil, and whether it would take the value of human life into account, is not something you can generalize. There would not be some ultimate goal an AI would strive for. It would be defined by the humans that made it. If they made the control system (emotions/instincts) complex enough, not even they might know how it would act, but it would still be defined by them.

Just my theory, not facts - I think about stuff like this a lot, although I did program a few simple self-learning neural networks myself.
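The reward/punishment point above can be made concrete with a toy experiment: the exact same learning rule, wired to two opposite reward hooks, learns two opposite behaviours. The sketch below is a generic two-armed bandit with a tabular value update; all names and numbers are made up for illustration, not anything from the article.

```python
import numpy as np

def train(reward_for_action, steps=500, lr=0.1, eps=0.1, seed=0):
    """Epsilon-greedy bandit: behaviour is shaped only by the reward hook."""
    rng = np.random.default_rng(seed)
    q = np.zeros(2)                    # value estimate per action
    for _ in range(steps):
        a = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(q))
        r = reward_for_action(a)       # the human-supplied "emotion" hook
        q[a] += lr * (r - q[a])        # incremental value update
    return int(np.argmax(q))

# Same learner, two opposite reward designs chosen by its makers.
likes_0 = train(lambda a: 1.0 if a == 0 else 0.0)
likes_1 = train(lambda a: 1.0 if a == 1 else 0.0)
print(likes_0, likes_1)
```

Whatever the learner ends up "wanting" was decided entirely by the function its makers passed in, which is the point: the goal lives in the reward design, not in the network.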
Back to top
NFOHump.com Forum Index - General chatter