OpenAI co-founders Sam Altman and Ilya Sutskever joined a panel discussion on the subject of artificial intelligence at Tel Aviv University on Monday.
Dror Globerman, a broadcaster for Keshet, asked the two about some of the concerns surrounding the topic: "If you truly believe that AI poses a danger to humankind, why are you developing it?"
"That's a super fair question," Altman answered. "We do have to balance this technology that, frankly, I think people really need, and that I think will really give people better lives. When people in the future look back on the world without AI, they'll think it was barbaric. I really think we have a moral duty to figure out how to do that.

"I also think it's unstoppable. Stopping it won't work, so we have to figure out how to manage the risk. We formed as a company in large part because of this risk, and the need to manage it."
Globerman also posed a question regarding regulation, asking the two: "If regulations are imposed that limit you, will you comply, or will you try to evade them like Mark Zuckerberg did?"
Altman replied: "We have a unique structure, and we believe in incentives: if you design the incentives right, you usually get the behavior you want. In the end, we're all going to do fine; we're not going to make more or less money if the numbers go a little more up and to the right. We don't have the same incentives as Facebook. There were very well-meaning people at Facebook, they just didn't have the incentives."
"When we were setting up our company originally, we thought about our profit structure - how do we balance the need for money with what we care about as our mission? One of the things we talked about was, what is a structure that would let us warmly embrace regulation?"