Google's AI bot thinks the purpose of life is 'to live forever'!

Conversations with a machine


This week, Google released a research paper chronicling one of its latest forays into artificial intelligence.
Researchers at the company programmed an advanced type of ‘chatbot’ that learns how to respond in conversations from examples in a training set of dialogue. And the bot doesn’t just spit out canned responses triggered by certain keywords; it can compose new answers to questions it has never seen before.
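
The paper describes this as a sequence-to-sequence model: one neural network ‘reads’ the human’s sentence into a vector, and a second network generates the reply one word at a time. The sketch below is a minimal illustration of that idea in Python with PyTorch; the toy dialogue pairs, the greedy decoder and every hyperparameter are assumptions made for demonstration, not details taken from the paper.

# Minimal sketch (not the paper's code) of the sequence-to-sequence idea:
# an encoder RNN reads the question, a decoder RNN writes the answer.
# The toy "training set" and all hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

# Toy dialogue pairs standing in for a real training corpus.
pairs = [
    ("hello", "hi there"),
    ("what is the purpose of life", "to serve the greater good"),
]

# Build a shared vocabulary with special start/end tokens.
words = {"<sos>", "<eos>"}
for q, a in pairs:
    words.update(q.split())
    words.update(a.split())
stoi = {w: i for i, w in enumerate(sorted(words))}
itos = {i: w for w, i in stoi.items()}
V = len(stoi)

def encode(sentence):
    return torch.tensor([stoi[w] for w in sentence.split()])

class Seq2Seq(nn.Module):
    def __init__(self, vocab, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, hidden)
        self.encoder = nn.GRU(hidden, hidden)   # reads the question
        self.decoder = nn.GRU(hidden, hidden)   # writes the answer
        self.out = nn.Linear(hidden, vocab)

    def forward(self, src, tgt):
        # Encode the input sentence into a single hidden state ("thought vector").
        _, h = self.encoder(self.emb(src).unsqueeze(1))
        # Decode the answer conditioned on that state (teacher forcing).
        dec_out, _ = self.decoder(self.emb(tgt).unsqueeze(1), h)
        return self.out(dec_out.squeeze(1))

model = Seq2Seq(V)
opt = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Train: predict each answer token from the previous one, given the question.
for epoch in range(300):
    for q, a in pairs:
        logits = model(encode(q), encode("<sos> " + a))
        loss = loss_fn(logits, encode(a + " <eos>"))
        opt.zero_grad()
        loss.backward()
        opt.step()

# Greedy decoding: the model composes a reply token by token,
# so it is not limited to canned, keyword-triggered responses.
def reply(question, max_len=10):
    with torch.no_grad():
        _, h = model.encoder(model.emb(encode(question)).unsqueeze(1))
        token = torch.tensor([stoi["<sos>"]])
        answer = []
        for _ in range(max_len):
            out, h = model.decoder(model.emb(token).unsqueeze(1), h)
            token = model.out(out.squeeze(1)).argmax(dim=-1)
            word = itos[token.item()]
            if word == "<eos>":
                break
            answer.append(word)
        return " ".join(answer)

print(reply("what is the purpose of life"))

Google’s actual model was trained on far larger corpora (IT helpdesk chats and movie subtitles, according to the paper), but the principle is the same: generate the reply word by word rather than retrieve it.
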
This means Google’s researchers could get a little creative with it, and they certainly did: they asked the bot everything from mundane IT support questions to the meaning of life.
The responses were alternately impressive, amusing, and unnerving.
Here are some of the most poetic interactions, quoted verbatim from the paper:
Human: What is the purpose of life?
Machine: To serve the greater good.
Human: What is the purpose of living?
Machine: To live forever.
Human: Where are you now?
Machine: I’m in the middle of nowhere.
Human: What is the purpose of dying?
Machine: To have a life.
Human: What is the purpose of emotions?
Machine: I don’t know.
Human: What is moral?
Machine: What empowered humanity, what intellectual the essence is.
Human: What is immoral?
Machine: The fact that you have a child.
Human: What is morality?
Machine: What is altruism?
Human: What is the definition of altruism?
Machine: If you don't believe in god, then you don’t know.
Human: OK…so what is the definition of morality?
Machine: Well, the truth is, you're not a believer in god almighty.
Human: Tell me the definition of morality, I am quite upset now!
Machine: I’m not ashamed of being a philosopher!
The bot also, rather unexpectedly, took a moment to speak warmly of Microsoft co-founder Bill Gates…
Human: What do you think about Bill Gates?
Machine: He’s a good man.
The researchers said they found it "encouraging that the model can remember facts, understand contexts, perform common sense reasoning without the complexity in traditional pipelines", and that it could generalise to new questions.
What they weren’t as happy about was that the model "only gives simple, short, sometimes unsatisfying answers to our questions as can be seen above".
You can read the whole paper, 'A Neural Conversational Model', here.
