Working Knowledge
Business Research for Business Leaders
    SUMMING UP: Do We Need an Artificial Intelligence Czar?
02 Jan 2019 | What Do You Think?

    by James Heskett
    Having government oversee artificial intelligence development is either a sure way to kill a promising technology or the only way to keep our robot overlords at bay. James Heskett's readers debate.

    How Should We Organize AI Oversight?

There is little question about the growing importance of artificial intelligence (AI) and the need for some kind of oversight. But the debate seems to center on whether, and to what extent, government (and its inevitable “czar”) should be involved in AI oversight and regulation. At least that’s the impression created by a small sample of comments on this month’s column.

    Sandeep made the case for an AI czar. “Any public servant is accountable to you and me and we can fire him and find the next guy for that chair," he wrote. "But an incompetent entrepreneur that wants to steal from the better firm and distort the market is accountable only to his bank account and ethics. So we need someone capable enough to make sure markets remain free and fair… an AI czar would work to bring that capability in the government. They would fail at times but would invest in preventing (the) law of (the) jungle from taking over.”

ArlenMD suggested that there are organizations on whose efforts an oversight initiative could be built. He cited the American Medical Informatics Association as one example of a vehicle “to leverage an existing industry strategy on artificial intelligence research and development …” As he put it, “DARPA is a government agency responsible for developing and deploying weapons for war fighters of the future. We need CARPA—the Cyberhealth Advanced Research Projects Agency … and a ‘czar’ to run it.”

    simeonguyhiggins was less than enthusiastic about these ideas. He remarked, “Let me see if I get this question right. We want to create a government position to oversee AI across all segments of the US economy. Great idea.” After citing a number of ways he feels that government has overspent and underperformed, he asked, with sarcasm, “How could the federal government possibly fail to make the development of AI better?”

Ramesh, similarly skeptical, nevertheless suggested what sounded like some kind of non-governmental effort led by well-trained individuals. He said, “We don’t need a Czar. We need a set of reasonable people.” He went on to describe what we need in a LinkedIn article he wrote. In his words, “we need policies to guide us on how AI is used. Policies that are shaped by well-informed debate, following the highest traditions of democracy.” The final say is best left to humans who are “actively trained and encouraged and supported for being human.”

    A clean start or one built on one or more existing initiatives? Government regulation or industry self-regulation? A government agency led by a czar or some kind of industry association? How should we organize AI oversight? What do you think?

    Original Column

The global community faces staggering challenges this century. We spent much of the 20th century learning how to get along with one another. According to data collected and analyzed by Harvard cognitive psychologist Steven Pinker, we’ve made much progress toward that goal, judging from per capita declines in both natural and unnatural deaths worldwide.

While the problem of nuclear weapons proliferation remains with us, the challenges of the 21st century relate to such things as space exploration and the ever-present issue of climate change and the natural disasters associated with it. At the same time, we have made giant strides in methods of addressing nearly any problem one can imagine. Many are associated with the development of artificial intelligence (AI).

    In a nutshell, cloud-enabled data collection and storage can accommodate the so-called big data that provides inputs to machine learning and the creation of artificial intelligence. Each of these related fields of activity and opportunity encompasses a wide swath of the economy. Together, they describe a sector that promises to dwarf, in terms of economic opportunity and impact, traditional activities including mining, manufacturing, and even agriculture.

    We already see problems such as threats to personal privacy, national security, and competition associated with this AI “package.” And in recent hearings in which tech executives appeared before US congressional committees, we saw, in terms of the uninformed, naïve questions posed by legislators to the executives, how unprepared the legislative branch is to deal with issues associated with AI.

The question is whether the US has the will and capability to coordinate and support major cross-industrial efforts to foster and, if necessary, regulate AI. Doing so requires not only technological expertise but also meeting an even more complex challenge: creating standards and universal formats for organizing and coordinating data, and for collecting it from various sources in a form from which machines can learn and develop new insights.

Today, this is being done in a highly fragmented way in the US by competing commercial organizations, many of whose employees appear to distrust the government and its application of their work. One could argue that a field of activity of this importance warrants a cabinet position alongside Transportation, Energy, Agriculture, and Commerce. Compare this to what appear to be faster-moving achievements in AI, particularly in the structuring of data, in totalitarian economic environments such as China’s, where high-level leadership for AI exists now.

    What would a government-sponsored artificial intelligence program look like? If space exploration is an example, it would involve sponsored research, education, the establishment of standards, and a plan and timetable for expected results. It would have to rely heavily on private industry with the assistance and support of a government agency.

    It would require high-profile leadership. Inevitably, this person would be labeled a “czar.” Former US Secretary of Defense Ashton Carter has argued that such a person might be needed.

    Does the United States need an artificial intelligence czar? What do you think?

    References:

David Ignatius, “China’s application of AI should be a Sputnik moment for the U.S. But will it be?” The Washington Post, November 6, 2018.

    Steven Pinker, Enlightenment Now: The Case for Reason, Science, Humanism, and Progress (New York: Penguin Random House LLC, 2018).


James L. Heskett
    UPS Foundation Professor of Business Logistics, Emeritus