Artificial Intelligence: Lethal, Biased, or Tool for Good? / by FPM Team

  • Last week, Tesla’s Elon Musk, Alphabet/Google’s Mustafa Suleyman, and 114 other artificial intelligence experts at the International Joint Conference on Artificial Intelligence in Melbourne, Australia, signed a letter that said, "As companies building the technologies in Artificial Intelligence and Robotics that may be repurposed to develop autonomous weapons, we feel especially responsible in raising this alarm …
  • "Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways … 
  • "We do not have long to act. Once this Pandora’s box is opened, it will be hard to close." 
  • Separately, a report by researchers at the University of Virginia and University of Washington said, “As intelligent systems start playing important roles in our daily life, ethics in artificial intelligence research has attracted significant interest. It is known that big-data technologies sometimes inadvertently worsen discrimination due to implicit biases in data …
  • "Such issues have been demonstrated in various learning systems, including online advertisement systems, word embedding models, online news, web search, and credit scoring ... we show that given a gender biased [body of work], structured prediction models such as conditional random fields amplify the bias."
  • In a May 2017 TED talk, former World Chess Champion Garry Kasparov said, “Human plus machine isn't the future, it's the present."
  • "We don't get to choose when and where technological progress stops. We cannot slow down. In fact, we have to speed up. Our technology excels at removing difficulties and uncertainties from our lives, and so we must seek out ever more difficult, ever more uncertain challenges. Machines have calculations. We have understanding. Machines have instructions. We have purpose. Machines have objectivity. We have passion. We should not worry about what our machines can do today. Instead, we should worry about what they still cannot do today, because we will need the help of the new, intelligent machines to turn our grandest dreams into reality."
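The bias-amplification finding in the UVA/UW report can be made concrete with a small sketch. This is not the researchers' code, and the counts below are hypothetical; it only illustrates the idea that a model's predictions can be *more* gender-skewed than the training data it learned from.

```python
# Illustrative sketch of "bias amplification" (hypothetical counts, not the
# paper's data): compare how gender-skewed an activity is in the training set
# versus in a model's predictions.

def bias_score(man_count, woman_count):
    """Fraction of an activity's instances attributed to 'man'."""
    return man_count / (man_count + woman_count)

# Hypothetical training-set counts: "cooking" co-occurs with women 2x as often.
train_bias = bias_score(man_count=100, woman_count=200)

# Hypothetical model predictions on test images: the skew has grown further.
predicted_bias = bias_score(man_count=40, woman_count=160)

# Amplification: how much further from parity (0.5) the predictions drift
# compared with the training data. A positive value means the model made
# an already-biased association even stronger.
amplification = abs(predicted_bias - 0.5) - abs(train_bias - 0.5)

print(f"training bias: {train_bias:.2f}")       # 0.33
print(f"predicted bias: {predicted_bias:.2f}")  # 0.20
print(f"amplification: {amplification:+.2f}")   # +0.13
```

With these made-up numbers, women account for two-thirds of "cooking" instances in the training data, but four-fifths of the model's predictions; the positive amplification score captures that widening gap.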

OUR TAKE

  • Regarding lethal AI - As AI thought leaders seek to influence policymakers, it is likely that a substantive "call to action" will take place only after Pandora's box is opened.
  • Regarding biased AI - As biases in systems design are increasingly identified, AI development efforts will have to avoid design flaws that could bias outcomes against diverse groups and individuals.
  • Regarding Kasparov's comments - He makes many good points about AI trends and the potential benefits of human-machine collaboration as the AI market evolves.