An American political lobbying group funded by some of the world’s biggest technology companies, including Google, IBM, Amazon and Microsoft, is telling Washington to pull back from regulating Artificial Intelligence (AI).
The body, the Information Technology Industry Council, this week released a set of ‘AI Policy Principles’.
The document sets out areas where industry, governments and others can collaborate, as well as specific opportunities for public-private partnership, and acknowledges the need for the tech sector to promote what it styles as “the responsible” development and use of AI.
It also says national governments should support, incentivise and fund AI research efforts – but should refrain from demanding access to source code or drawing up legislation to steer the future development of AI in socially responsible directions.
“We encourage governments to use caution before adopting new laws, regulations, or taxes that may inadvertently or unnecessarily impede the responsible development and use of AI,” it warns.
“This extends to the foundational nature of protecting source code, proprietary algorithms, and other intellectual property [as] failure to do so could present a significant cyber risk.”
As the group claims AI will add a massive $7 trillion to the global economy by 2025, that matters. But the call may strike a false note at a time when many observers feel the possible subversion of recent elections by unregulated tech shows the danger of letting the industry have its own way.
The importance of policymakers developing robust ethical frameworks for AI was also highlighted at the recent Think AI for Public Sector conference in London, organised by Think Digital Partners.
Tech news site Gizmodo notes that the body’s call does seem to back up AI critic Nick Bostrom’s 2016 warning that, “Great resources are devoted to making [progress in AI] happen, with major (and growing) investments from both industry and academia in many countries [but investments] in long-term AI safety…remains orders of magnitude less than investment in increasing AI capabilities.”
It also quotes Patrick Lin, director of the Ethics + Emerging Sciences Group at California Polytechnic State University: “We can hope that corporate self-interest will align with public interests, but that is a giant leap of faith, and many companies in ITIC don’t exactly have a great track record at winning public trust.
“It’s important to remember that they’re not in the business of protecting the public or promoting democracy; their business is business.
“When profit motives and humanitarian motives collide, take a wild guess which one usually wins.”