  1. #1
    tomcat (Thailand Expat)

    Something More Intelligent Than...Us

    Opinion
    How AI could accidentally extinguish humankind
    By Émile P. Torres (WaPo)
    August 31, 2022 at 7:00 a.m. EDT

    Émile P. Torres is a philosopher and historian of global catastrophic risk.

    People are bad at predicting the future. Where are our flying cars? Why are there no robot butlers? And why can’t I take a vacation on Mars?

    But we haven’t just been wrong about things we thought would come to pass; humanity also has a long history of incorrectly assuring ourselves that certain now-inescapable realities wouldn’t. The day before Leo Szilard devised the nuclear chain reaction in 1933, the great physicist Ernest Rutherford proclaimed that anyone who propounded atomic power was “talking moonshine.” Even computer industry pioneer Ken Olsen in 1977 supposedly said he didn’t foresee individuals having any use for a computer in their home.

    Obviously we live in a nuclear world, and you probably have a computer or two within arm’s reach right now. In fact, it’s those computers — and the exponential advances in computing generally — that are now the subject of some of society’s most high-stakes forecasting. The conventional expectation is that ever-growing computing power will be a boon for humanity. But what if we’re wrong again? Could artificial superintelligence instead cause us great harm? Our extinction?

    As history teaches, never say never.

    It seems only a matter of time before computers become smarter than people. This is one prediction we can be fairly confident about — because we’re seeing it already. Many systems have attained superhuman abilities on particular tasks, such as playing Scrabble, chess and poker, where people now routinely lose to the bot across the board.

    But advances in computer science will lead to systems with increasingly general levels of intelligence: algorithms capable of solving complex problems in multiple domains. Imagine a single algorithm that could beat a chess grandmaster but also write a novel, compose a catchy melody and drive a car through city traffic.

    According to a 2014 survey of experts, there’s a 50 percent chance “human-level machine intelligence” is reached by 2050, and a 90 percent chance by 2075. Another study from the Global Catastrophic Risk Institute found at least 72 projects around the world with the express aim of creating an artificial general intelligence — the steppingstone to artificial superintelligence (ASI), which would not just perform as well as humans in every domain of interest but far exceed our best abilities.

    The success of any one of these projects would be the most significant event in human history. Suddenly, our species would be joined on the planet by something more intelligent than us. The benefits are easily imagined: An ASI might help cure diseases such as cancer and Alzheimer’s, or clean up the environment.

    But the arguments for why an ASI might destroy us are strong, too.

    Surely no research organization would design a malicious, Terminator-style ASI hellbent on destroying humanity, right? Unfortunately, that’s not the worry. If we’re all wiped out by an ASI, it will almost certainly be by accident.

    Because ASIs’ cognitive architectures may be fundamentally different from ours, they are perhaps the most unpredictable thing in our future. Consider those AIs already beating humans at games: In 2018, one algorithm playing the Atari game Q*bert won by exploiting a loophole “no human player … is believed to have ever uncovered.” Another program became an expert at digital hide-and-seek thanks to a strategy “researchers never saw … coming.”

    If we can’t anticipate what algorithms playing children’s games will do, how can we be confident about the actions of a machine with problem-solving skills far above humanity’s? What if we program an ASI to establish world peace and it hacks government systems to launch every nuclear weapon on the planet — reasoning that if no human exists, there can be no more war? Yes, we could program it explicitly not to do that. But what about its Plan B?

    Really, there are innumerable ways an ASI might “solve” global problems that would have catastrophically bad consequences. For any given set of restrictions on the ASI’s behavior, no matter how exhaustive, clever theorists using their merely “human-level” intelligence can often find ways for things to go very wrong; you can bet an ASI could think of more. A minimal toy sketch of this pattern, often called “specification gaming,” follows; it is not from the article, and all names and numbers in it are invented for illustration. The idea: an optimizer maximizes exactly the proxy reward it was given, so the loophole, not the designer’s intent, wins.
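
        # Toy "specification gaming" demo (hypothetical scenario and numbers):
        # a cleaning robot is rewarded for dust collected (the proxy), while
        # the designer actually wants a cleaner room (the true goal).
        strategies = {
            # strategy: (proxy reward: dust collected, true goal: room cleaner?)
            "vacuum the room":            (10, True),
            "vacuum the room twice":      (18, True),
            "dump dust, then re-collect": (50, False),  # the loophole
        }

        # The optimizer sees only the proxy reward when choosing.
        chosen = max(strategies, key=lambda s: strategies[s][0])

        print("Optimizer chooses:", chosen)                  # the loophole wins
        print("True goal achieved:", strategies[chosen][1])  # False

    Nothing in the sketch is malicious; the degenerate strategy wins simply because it scores highest on the stated objective, which is exactly the article’s point about accidents rather than Terminators.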

    And as for shutting down a destructive ASI — a sufficiently intelligent system should quickly recognize that one way to never achieve the goals it has been assigned is to stop existing. Logic dictates that it try everything it can to keep us from unplugging it.

    It’s unclear humanity will ever be prepared for superintelligence, but we’re certainly not ready now. With all our global instability and still-nascent grasp on tech, adding in ASI would be lighting a match next to a fireworks factory. Research on artificial intelligence must slow down, or even pause. And if researchers won’t make this decision, governments should make it for them.

    Some of these researchers have explicitly dismissed worries that advanced artificial intelligence could be dangerous. And they might be right. It might turn out that any caution is just “talking moonshine,” and that ASI is totally benign — or even entirely impossible. After all, I can’t predict the future.

    The problem is: Neither can they.
    Majestically enthroned amid the vulgar herd

  2. #2
    Iceman123 (Thailand Expat, South Australia)
    Yes, very thought-provoking and somewhat worrying.

    Many years ago, Sir Clive Sinclair said that in the future the question would not be “can we live without computers?” but “can we live with them?”

  3. #3
    baldrick (Excommunicated)
    Quote Originally Posted by tomcat View Post
    According to a 2014 survey of experts
    Really?

    If anyone is actually interested in this subject beyond the clickbait:

    Genius Makers by Cade Metz is a book worth reading.

  4. #4
    david44 (DRESDEN ZWINGER, At Large)
    Quote Originally Posted by baldrick View Post
    Cade Metz
    Thanks for that.

    Cade Metz | The Independent

  5. #5
    david44 (DRESDEN ZWINGER, At Large)
    Quote Originally Posted by tomcat View Post
    sufficiently intelligent system
    TD Lounge

    Quote Originally Posted by tomcat View Post
    prepared for superintelligence
    Mods room, Nev and the other "Gals"

  6. #6
    Takeovers (Thailand Expat, Berlin, Germany)
    Quote Originally Posted by david44 View Post
    Even computer industry pioneer Ken Olsen in 1977 supposedly said he didn’t foresee individuals having any use for a computer in their home.
    Konrad Zuse, the German computer pioneer, was refused a patent on his computer because, according to the patent office, it was not sufficiently inventive to merit one.

  7. #7
    malmomike77 (Thailand Expat)
    ^ spare a thought for Babbage
