Technology often hits the market at the speed of innovation and we play catch up with its impacts.

We craft policy after the fact, write standards for safety and market growth and, perhaps, ponder how the latest technology will shape our behavior and impact society at large.

Fortunately, this somewhat ad hoc, imperfect approach has worked well enough in a rough-and-tumble world. But sometimes an entirely new, potentially far-reaching, high-impact area of innovation comes along that demands vigorous consideration upfront, before unintended consequences become manifest.

Such is the case, I believe, with artificial intelligence, or AI. Although at its most basic, AI could be as innocuous as a thermostat that “learns” your preferred settings over time, we currently cannot conceive of its limits or fully grasp its implications. But we can certainly see many of the roles that AI already plays and extrapolate from there. I suggest this is an area of innovation that rightly demands that thoughtful consideration of its impacts accompany its technological development.  

At the very least, society has a right, even an obligation, to understand and possibly have a say in how AI is developed and to what ends. A conversation between technologists and policy makers can foster that understanding, and the former can assist the latter in grasping the technology implications of policy decisions in this area of innovation. In fact, the range of stakeholders, concerns and potential impacts is really much more complex than a simple, two-way conversation. The development of AI calls for a thorough, methodical approach to considering the 360-degree sphere of related issues.

IEEE’s intended role

As president of IEEE, I take quite seriously our stated mission to “foster technological innovation and excellence for the benefit of humanity.” To me that means that in an area of innovation such as AI, with its seemingly unlimited potential to influence our daily lives and even alter global society, we have an obligation to help frame issues, bring technological expertise to bear on understanding the technology’s implications and convene stakeholders to make such discussions as broad and inclusive as possible.

Even now, AI is transforming how we work, play and think in revolutionary ways. But AI is complex enough and its implications daunting enough that an extensive knowledge of specific science or technology disciplines is needed for effective decision-making. At the same time, the cross-domain, multi-disciplinary nature of the topic requires an ability to think in holistic, even philosophical terms about the range of actual and potential AI uses, impacts and policies.

The serious tone here is not meant to imply that AI represents a danger to individuals or society, as science fiction or Hollywood might have it. A big part of the future success of AI will be its acceptance by people. Many of us already use voice recognition—a form of AI—on our smartphones or when dealing with automated call centers. How will we feel when the simplest forms of AI are pervasive across the gamut of domains such as health care, energy, agriculture, communications and transportation? How will we employ AI in the service of manufacturing, defense, deep-sea and/or space exploration?

And who can serve as a trusted source of information and insight to bridge the esoteric world of AI technology and the public, which wants to get its arms around that technology and its implications?

IEEE’s expertise can be the bridge between the AI experts who understand the technologies, the policy makers who devise the regulatory environment, and the public, whose members have varying levels of interaction with, knowledge of and acceptance of AI.

Accepting AI

Already, we are seeing headlines both positive and negative about self-driving vehicles. And who hasn’t had a visceral reaction to the notion that self-driving vehicles will be pervasive in a decade? Some welcome the prospect of improved road safety, while others cringe at the thought of not controlling an automobile that’s carrying them at 70 miles per hour.

Now, imagine that it’s the year 2025—only a decade into the future. AI pervades nearly every industry vertical and plays a major if not dominant role in your everyday interactions with your world. Like any disruptive technology, AI presents complex policy challenges, from jobs and the economy to safety and regulatory questions. At this point, we’ve only begun to frame pertinent questions, including:

  • Who determines when and how AI can be used?
  • Who is monitoring AI development?
  • Who ensures compliance with safety standards?
  • Who takes responsibility when an AI malfunctions?
  • What security and privacy safeguards are in place to protect individuals and enterprises?
  • How will AI change the work environment? Will certain jobs be eliminated? Could AI affect income inequality?

IEEE at work

IEEE is already working to identify needs and build consensus for standards, certifications and codes of conduct for engineers regarding the implementation of AI technologies. IEEE has a distinguished track record in standards development, international collaboration, training and educational opportunities, and knowledge transfer. And we have specific initiatives aimed at convening and informing stakeholders in complex areas such as AI.

The IEEE Ad Hoc Committee on Global Public Policy was formed in 2012 with the goal of enabling IEEE to better advise governments and society on the social implications of technology.

In April 2016, IEEE launched a new IEEE Standards Association Industry Connections program titled “The Global Initiative for Ethical Considerations in the Design of Autonomous Systems,” which includes AI. The new Initiative is global, open and inclusive, welcoming all individuals or representatives of organizations dedicated to ethical considerations in the design of autonomous systems. The Initiative has published Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems, which encourages technologists to prioritize ethical considerations in the creation of autonomous and intelligent technologies.

The IEEE Society on Social Implications of Technology examines how technology impacts the world, and how the application of technology can improve the world.

In sum, I believe AI offers immense benefits to humanity and that it deserves thoughtful consideration to maximize those benefits and minimize potentially negative consequences. IEEE is uniquely qualified and positioned to play a leading role in AI’s carefully considered rollout.

Karen Bartleson, 2017 IEEE President and CEO

Karen Bartleson has over 35 years of experience in the semiconductor industry, specifically in electronic design automation. She is currently the IEEE President and CEO, and has served in several leadership positions within the IEEE and IEEE Standards Association (IEEE-SA), including President of IEEE-SA in 2013 and 2014, and on the IEEE Board of Directors in 2013 and 2014.

Karen has published numerous articles about standards and universities and has authored the book “The Ten Commandments for Effective Standards: Practical Insights for Creating Technical Standards” (Synopsys Press, 2010). In 2003, she received the Marie R. Pistilli Women in Electronic Design Automation Achievement Award. She earned a B.S. in Engineering Science with a concentration in Electronic Engineering from California Polytechnic State University in 1980.