Tuesday, November 1, 2016

“Democracy is the worst form of government, except for all the others.” Can AI benefit humanity as a whole?

While tech companies have been overpromising for the past decade or so about how Artificial Intelligence (AI) applications would affect our daily lives, we seem to have reached a point where those science-fiction dream smart machines will soon be a reality. Algorithms and machine learning techniques are becoming more advanced, larger and larger data sets are available, and processing power is becoming more cost-efficient. The result: autonomous cars, drones, augmented and virtual reality, robots, and much more.

No doubt about it: AI will have a significant impact on our future. But who will be involved in shaping that impact? When it comes to AI, there are countless ethical, social, moral, and economic questions to be answered, but who will answer them? What ethical code will a smart machine follow when it makes autonomous decisions? How do we deal with a potential mass dislocation of labor once machine learning becomes sufficiently advanced? Will AI be used in warfare?

According to an article by John Markoff in The New York Times, some of the world’s largest and most prominent tech companies have started discussing the impact of AI on society and our world. These discussions are meant to ensure that AI research and development benefits humanity rather than harming it. But IBM, Alphabet, Amazon, Microsoft, and Facebook are companies with their own agendas and a need to generate financial returns. So while their effort is admirable, the industry should not be the only one setting a regulatory framework for AI development. It is nothing new that policymakers and regulators lag behind technology. Governments will have to catch up soon and set their own regulations (thank you, Mr. President). There is another approach, one that goes beyond industry and politics, to shaping our future with intelligent machines:

OpenAI is a non-profit organization that promotes the decentralization and democratization of AI research. The organization is funded by entrepreneurs Sam Altman and Elon Musk, who consider AI the technology that will most affect humanity’s future. At the same time, they have been outspoken about AI’s potential to pose an existential risk to society. By decentralizing, distributing, and democratizing AI research, Altman and Musk hope to prevent humanity’s destruction by an uncontrollable AI, or by a superintelligence in the hands of a small group of people who use it for personal profit and gain. You think these scenarios sound a lot like the Terminator movies and are unlikely to happen? Maybe, but democratizing the research could also democratize the answering of all those complex questions that come with it.
