by Anand Rao – @AnandSRao
Artificial intelligence (AI) presents limitless opportunity, but not without potential pitfalls and risks. This paradox has become increasingly evident for government leaders. They want to give domestic companies an edge over the competition, but are also expected to protect their citizens and use AI for social good. They want to support innovation, while still maintaining some level of control over how new technologies impact society at large. With a huge payoff on the line — by our own estimates, AI has the potential to increase worldwide GDP by 14 percent by 2030, an infusion of US$15.7 trillion into the global economy — it should come as no surprise that governments are eager to claim their share.
To date, more than 20 countries, including Canada, China, France, Germany, India, Japan, Kenya, Mexico, New Zealand, Russia, South Korea, the United Arab Emirates, and the U.K., have released AI strategy documents. Global bodies such as the World Economic Forum and industry associations such as the Partnership on AI are convening committees and, in some cases, acting in an advisory capacity. The Institute of Electrical and Electronics Engineers has also released standards for ethical AI design.
Overall, these new policies outline how governments plan to foster AI development to encourage domestic companies to develop solutions that will boost GDP and offer a host of societal benefits. At the same time, they tackle questions about security, privacy, transparency, and ethics. Given the potential for AI to have disruptive social and environmental effects, the development of sophisticated national and international governance structures will become increasingly critical. Perhaps no other emerging technology has inspired such scrutiny and discussion.
Such activity introduces an imperative for business leaders to look for ways to help shape and refine the national AI strategies that will impact the regions in which they operate. Our own research reveals that companies recognize this need. PwC’s 22nd Global CEO Survey found that 85 percent of CEOs agree that AI will significantly change the way they do business in the next five years. The survey also found that CEOs believe that AI is good for society — and more than two-thirds of CEOs agree that governments should play a critical and integral role in AI development.
Defining a Policy
National AI policies have significant ground to cover. Besides working to increase domestic competitiveness and help businesses succeed with AI, these strategies aim to address certain key concerns that accompany the technology. Companies that develop AI applications often face the same concerns, but governments can offer a model for businesses to follow while helping to address some of the risks of AI. For example, they can help ensure accuracy and manage bias in AI systems and figure out how to deal with the consequences of human job loss due to increased automation.
Although data security is always a major concern, AI algorithms add a new level of complexity. The more granular the data that is fed to an AI algorithm, the better the algorithm is at personalizing a given experience for the user. And consumers typically appreciate it when companies can provide personalized experiences tailored to their needs. However, in the process, users’ privacy or the confidentiality of their data might be compromised, forcing conscious trade-offs in security policy.
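To make that trade-off concrete, consider a minimal Python sketch. The user records and the crude uniqueness measure below are entirely hypothetical (not any regulator’s standard): the finer the attributes collected, the more uniquely each user stands out, and that uniqueness is exactly what both personalization and re-identification exploit.

```python
from collections import Counter

# Hypothetical user records at two levels of granularity.
users = [
    {"zip": "10001", "birth_year": 1985, "age_band": "35-44", "region": "NY"},
    {"zip": "10001", "birth_year": 1990, "age_band": "25-34", "region": "NY"},
    {"zip": "10002", "birth_year": 1985, "age_band": "35-44", "region": "NY"},
    {"zip": "10002", "birth_year": 1990, "age_band": "25-34", "region": "NY"},
]

def uniqueness(records, keys):
    # Share of users whose attribute combination is unique -- a rough
    # proxy for re-identification risk at this level of granularity.
    combos = Counter(tuple(r[k] for k in keys) for r in records)
    unique = sum(1 for r in records if combos[tuple(r[k] for k in keys)] == 1)
    return unique / len(records)

print(uniqueness(users, ["zip", "birth_year"]))   # 1.0 -- every user unique
print(uniqueness(users, ["region", "age_band"]))  # 0.0 -- no user unique
```

Coarsening the attributes (the second call) removes the privacy exposure, but a system working only with region and age band also has far less to personalize on.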
Another major concern with respect to AI algorithms is the potential for these algorithms to institutionalize bias. Machine learning algorithms use historical data to detect patterns and make inferences; thus, training on historical data, even if it is factually accurate, can lead to biased outcomes. For example, an Internet image search for the terms nurse and doctor surfaces clear gender stereotypes, and a machine learning algorithm trained on such images will tend to conclude that nurses are generally female and doctors are generally male. Such bias must be mitigated when it can lead to discrimination against a particular group of people.
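The mechanism is easy to demonstrate. Below is a minimal Python sketch with invented counts (the 90/10 and 85/15 splits are hypothetical, chosen only to mirror the stereotype in the example above): a naive model that simply learns the majority pattern in its historical data turns that pattern into a rule.

```python
from collections import Counter

# Hypothetical historical records: (occupation, gender) pairs that
# over-represent one gender per role.
history = (
    [("nurse", "female")] * 90 + [("nurse", "male")] * 10
    + [("doctor", "male")] * 85 + [("doctor", "female")] * 15
)

def fit_majority(records):
    # A naive "model": predict the majority gender seen for each occupation.
    counts = {}
    for occupation, gender in records:
        counts.setdefault(occupation, Counter())[gender] += 1
    return {occ: c.most_common(1)[0][0] for occ, c in counts.items()}

model = fit_majority(history)
print(model)  # {'nurse': 'female', 'doctor': 'male'}
```

The model is statistically faithful to its training data, which is precisely the problem: applying the majority pattern to individuals is what institutionalizes the bias.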
Some governments, including those of the U.K., Canada, the U.S., and Australia, are focused on another key issue: attracting AI talent. They are establishing programs at research organizations and national universities (or even in primary education) to ensure that people with AI expertise will be available to join the domestic workforce. Elsewhere, countries are exploring how to re-skill people whose jobs may be disappearing or changing so that they have the AI experience companies need (for example, Finland’s move to train its population on AI).
Meanwhile, countries that are leaders in certain industries are exploring how AI can enhance those industries. For example, the Japanese government developed a plan to ensure that it can compete with the likes of China and the U.S. in the area of robotics, laying out guidelines for regulation and establishing goals for robotics development and adoption in key industries.
Managing Trade-Offs
Some countries have started exploring a series of trade-offs that AI presents in an attempt to address them in their policy documents, acknowledging that all of society (businesses, individual consumers, and academics alike) plays a role in how these issues are managed. The trade-offs boil down to three main categories: innovation versus regulation, the individual versus the state, and transparency versus system vulnerability. In all cases, countries and companies will have to determine how best to balance one side against the other. The two sides of each trade-off are not mutually exclusive, and how best to strike the right balance will depend on a variety of factors.
Innovation vs. regulation. Regulations that are too numerous or too rigid could stifle companies’ ability to introduce new AI applications by clamping down so much on the use of consumer data, for example, that they are unable to properly train their algorithms. The complexity is heightened for multinationals operating in different territories with different regulations. The more data available to train an AI system, the smarter the system can become, so territories with less stringent data-use regulations may gain a leg up when it comes to using AI to create custom products or services.
Companies can help government officials better understand how much and what type of regulated data they need to properly train AI systems, and can help devise ways to comply with existing consumer protection requirements. Some regulators, such as the U.K.’s Financial Conduct Authority, are experimenting with new approaches such as creating a regulatory sandbox. Other countries, such as Canada, are creating AI superclusters to attract private funding and retain talent, and to transfer IP from academic labs to commercial enterprises to speed up innovation and the commercialization of AI.
The individual vs. the state. There is a balancing act between individual data privacy, which remains paramount, and the state’s need to access data to enable a common good or prevent a malicious act. Still, protecting consumer privacy is a top priority for some governments, which will impact how companies in those nations can use consumer data in their AI systems.
In the age of social media and smart devices, the volume of available consumer data is massive — and countries will take different approaches to regulating its use. Some of this will be based on cultural attitudes; in certain parts of the world, people are more open to sharing data. Elsewhere, there is a greater expectation of protection. PwC’s CEO Survey found that respondents in Germany, the U.S., and the U.K. are open to government regulations on data collection, while those in China, India, and Japan favor fewer such limitations.
Transparency vs. system vulnerability. Government AI strategies may also attempt to balance the need for people to trust AI systems, which requires understanding how they work, against the desire to protect those systems from attack. The easier it is to explain how the AI “thinks,” the logic goes, the easier it becomes for those with malicious intent to infiltrate or game the system.
This will be a major issue in industries such as finance and healthcare, which house massive amounts of sensitive personal data and require a high level of trust between consumers and service providers.
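That logic can be made concrete with a minimal Python sketch. The linear credit-scoring rule below is entirely hypothetical (the weights, threshold, and applicant values are invented and deliberately simplified): once the rule is disclosed in full, an adversary can compute the cheapest input change that flips its decision.

```python
# A hypothetical, fully transparent scoring rule: weights, threshold,
# and applicant values are invented for illustration only.
weights = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
THRESHOLD = 1.0

def score(applicant):
    return sum(weights[k] * applicant[k] for k in weights)

applicant = {"income": 1.2, "debt": 0.9, "years_employed": 1.0}
print(round(score(applicant), 2))  # 0.18 -> below threshold, rejected

# Because the weights are public, an attacker targets the feature with
# the largest absolute weight: the score moves fastest there, so the
# required change is smallest.
gap = THRESHOLD - score(applicant)
target = max(weights, key=lambda k: abs(weights[k]))  # 'debt'
applicant[target] += gap / weights[target]            # minimal nudge
print(round(score(applicant), 2))  # 1.0 -> decision flipped
```

Real attacks on opaque systems (model extraction, adversarial examples) are more elaborate, but they follow the same principle: every bit of disclosed structure shrinks the attacker’s search.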
A Call to Action
As more countries release national AI strategies, businesses should follow these developments closely and get involved in helping their governments shape policies that will impact the ways that AI and related technologies transform the business landscape of the future. Companies should consider joining policy working groups and jointly advancing AI skills and education, as well as pursuing other efforts that help clarify how to balance their business interests with the greater good. Together, government and business can create a more just and prosperous world, with more transparent and efficient public administration, more effective and accessible healthcare, more livable cities, and a more sustainable planet.
Companies around the world are already helping shape national AI strategies in meaningful ways. In Canada, which was one of the first countries to release a national AI strategy, companies are investing heavily in the technology so they’ll be able to reap the benefits of policy updates sooner. In the E.U., businesses are partnering with government to up-skill, re-skill, and reassign workers whose jobs have changed as a result of AI initiatives. European executives are also influencing policy as members of the E.U.’s High-Level Expert Group on Artificial Intelligence, whose charter is to come up with recommendations for policy development that address ethical, legal, social, and economic issues related to AI. In the U.S. and Germany, companies with an interest in the autonomous vehicle market have lobbied effectively for laws that allow them to advance their commercial interests and ensure public safety.
Given the massive opportunities and potential risks associated with AI, companies, global bodies, not-for-profit groups, citizens, and policymakers must come together to devise strategies that strike the trade-offs that make sense for their country. Not having a coherent, comprehensive national strategy could put future generations at a competitive disadvantage.