
Using AI can be a risky business


For years, companies have operated on the premise that in order to improve their artificial intelligence software and gain competitive advantage, they must collect huge amounts of user data – the cornerstone of machine learning.

But increasingly, collecting massive amounts of user information can be a major risk. Laws like Europe's General Data Protection Regulation, or GDPR, and California's new privacy rules now impose heavy fines on companies that mishandle this data, for example by failing to protect their computer systems from hackers.

Some companies are even publicly distancing themselves from what was once standard practice, such as using machine learning to predict customer behavior. Alex Spinelli, chief technologist at enterprise software maker LivePerson, recently told Fortune that he has canceled some AI projects at his current company and at former employers because those initiatives conflicted with his own ethical beliefs about data privacy.

For Aza Raskin, co-founder and program advisor of the nonprofit Center for Humane Technology, technology, and by extension AI, is living through a moment akin to climate change.

Raskin, whose father, Jef Raskin, helped Apple develop its first Macintosh computers, said that for years researchers studied disparate environmental phenomena such as the depletion of the ozone layer and rising sea levels. It took years before these separate environmental problems merged into what we now call climate change, a catch-all term that helps people understand the current global crisis.

Likewise, researchers have studied some of the unintended consequences of AI linked to the proliferation of disinformation and surveillance. The pervasiveness of these problems, such as Facebook allowing disinformation to spread on its service or the Chinese government's use of AI to track Uyghurs, could lead to a societal reckoning over AI-based technology.

“Even five years ago, if you stood up and said, ‘Hey, social media is pushing us toward greater polarization and civil war,’ people would roll their eyes and call you a Luddite,” Raskin said. But with the recent riots at the U.S. Capitol, led by people who believed conspiracy theories shared on social media, it’s getting harder and harder to ignore the problems with AI and related technologies, he said.

Raskin, who is also a member of the World Economic Forum’s Global AI Council, hopes governments will create regulations that explain how businesses can use AI ethically.

“We need government protections so that we don’t have unfettered capitalism aimed at the human soul,” he said.

He believes companies that take data privacy seriously will have a “strategic advantage” over others as more AI problems emerge that could lead to financial penalties or damaged reputations.

Companies should expand their existing risk assessments – which help them measure the legal, political and strategic risks associated with certain business practices – to cover technology and AI, Raskin said.

The recent Capitol riots highlight how technology can lead to societal problems, which in the long run can hurt a company’s ability to succeed. (After all, it can be difficult to run a successful business during a civil war.)

“If you don’t have a healthy society, you can’t have a successful business,” Raskin said.

Jonathan Vanian
@JonathanVanian
jonathan.vanian@fortune.com




