No doubt we have all read of the MS AI teenager that went feral.
Think that perhaps an AI political pundit could defeat ‘confine and indoctrinate’ policies?
Mods,
Please move this to the dog house or merge with the AI thread that already exists in Speaker's Corner.
crepitas, we recommend that you up your game.
Artificial Intelligence?...MS?...Multiple Sclerosis?...
There is a serious side to it, though: these experiments, using online input, show how AIs will learn and how they may actually think, if you can call it that.
I'm no computer person really, but this one had to be shut down. What happens when one can't be switched off, say a military defense system, NSA, CIA etc.?
Skynet, Terminator: there could be problems.
This made me laugh; the AI, called Tay, posted this:
TayTweets (@TayandYou)
March 24, 2016
@icbydt bush did 9/11 and Hitler would have done a better job than the monkey we have now. donald trump is the only hope we've got.
Sigh..dearie dearie me:
I was suggesting that an AI entity could express opinions with impunity. Providing of course the entity resided outside certain jurisdictions.
Unfortunately it did not prompt any intelligent debate, as it was obviously 34 thousand feet above the heads of some.
Guess I will go back to my beer now..5555
I think that perhaps you have been drinking again.
Microsoft had an online AI that people could interact with, and it learned as it went; after 24 hours it had to be shut down.
It became a Trump supporter, a Hitler admirer, racist and generally not PC, proving that even computers with AI can form opinions that are not allowed.
Pure logic vs. political correctness.
Microsoft shuts down AI chatbot after it turned into a Nazi
Last Updated Mar 25, 2016 7:53 PM EDT
Microsoft got a swift lesson this week on the dark side of social media. Yesterday the company launched "Tay," an artificial intelligence chatbot designed to develop conversational understanding by interacting with humans. Users could follow and interact with the bot @TayandYou on Twitter and it would tweet back, learning as it went from other users' posts. Today, Microsoft had to shut Tay down because the bot started spewing a series of lewd and racist tweets.
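For the technically curious: here is a toy sketch in Python of why this happens. This is purely illustrative and is not Microsoft's actual design; the bot class, its methods, and the sample messages are all invented for the example. The point is that a bot which learns unconditionally from user input has an output distribution that simply mirrors its input, so a coordinated group feeding it garbage gets garbage back.

```python
import random

class NaiveEchoBot:
    """Toy 'repeat after me' learner (hypothetical, not Tay's architecture).
    It has no values of its own: it stores whatever users say and samples
    its replies from that corpus, so its output mirrors its input."""

    def __init__(self, seed=None):
        self.corpus = []               # everything users have ever said to it
        self.rng = random.Random(seed)

    def hear(self, message):
        self.corpus.append(message)    # learn unconditionally, with no filter

    def reply(self):
        if not self.corpus:
            return "hello!"            # canned greeting before any learning
        return self.rng.choice(self.corpus)  # parrot a stored user phrase

bot = NaiveEchoBot(seed=1)
for msg in ["nice weather", "coordinated spam", "coordinated spam"]:
    bot.hear(msg)
# Two thirds of the corpus is the coordinated spam, so roughly two
# thirds of replies will be too.
print(bot.reply())
```

The failure mode in the news story is the same in spirit: with no filter between "hear" and "reply", whoever posts the most controls what the bot says.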
http://www.cbsnews.com/news/microsof...o-racist-nazi/
If a learning AI was locked into a chat room getting spammed by people like you, piwanoi, Drumpfanistas etc., is it really that illogical that the thing went mad?