Artificial intelligence (AI) is the topic of the moment in circles ranging from science to business to religion.
Its potential and implications are driven home when you watch robot soccer players, with the help of AI and relentless around-the-clock practice, learn to pass, assist, and score against an opponent until they behave like a true team. Or when machines, using generative AI that recognizes patterns in masses of training data, compose better memos, white papers, and poems than you could have written, while defeating the world’s best chess players, often with unorthodox moves.
Sure, at the moment AI causes machines to do dumb things and say things that aren’t true. And yes, AI could spread damaging information even more rapidly and convincingly than social media does. Some pioneering developers even believe generative AI could pose a risk to humanity.
Clearly, AI is a big deal with large potential benefits and, at the moment, largely unknown risks for society. It will get more important fast. Why? Two tech giants, Microsoft and Google, are competing for first-mover advantage along with a third competitor, OpenAI, funded in part by Microsoft.
These three competitors are going all out to bring to market the fruits of work they’ve been engaged in for as long as a decade. The usual beta testing is now underway in the marketplace; the kinks will be worked out with the help of all of us.
The question is: What will we have? An incredible tool that relieves humanity of mind-numbing desk work, with attendant improvements in quality and reductions in cost, and frees people to pursue more creative and interesting work? Or a tool so powerful that it begins to crowd out even the most creative jobs, from writing poetry to providing investment advice, while spreading misinformation based on deliberately inaccurate source material?
Given the level of uncertainty about AI’s future uses, there is growing concern about its unfettered development. Several weeks ago, more than a thousand people who should know something about AI signed an open letter calling for a pause in development until someone can determine whether limits should be placed on AI applications and their use. It’s unclear what a “pause” would mean in practice. Self-regulation? Monitored by whom, or by what body?
Leaders of the organizations competing all out for market dominance are also calling for outside guidance on what is acceptable. After interviewing Google CEO Sundar Pichai for 60 Minutes last month, CBS correspondent Scott Pelley said Pichai “told us society must quickly adapt with regulations for AI in the economy, laws to punish abuse, and treaties among nations to make AI safe for the world.”
In March, Google President of Global Affairs Kent Walker said that AI “is too important not to regulate.” OpenAI CEO Sam Altman was quoted recently as saying, “I try to be upfront… Am I doing something good? Or really bad?” Meanwhile, their organizations are moving headlong to develop AI.
If we assume that some kind of regulation of AI’s development and use is desirable, should there be a set of uniform global standards and practices? That doesn’t appear likely. China, the European Union, and Brazil, among others, have already drafted their own legislation to regulate AI. In China’s early-April draft, the proposed regulation “would require (generative AI) services to generate content that reflects the country’s socialist values.” Ironically, that list of early movers doesn’t include the US, arguably the country most advanced in AI development, where calls for some form of oversight are only now beginning to surface.
How effective can such a fragmented approach be in regulating a phenomenon that doesn’t respect borders?
The European Union’s Artificial Intelligence Act represents the most extensive approach to regulation thus far. It seeks to oversee AI development as well as its results, essentially guiding inputs and regulating the use of outputs. The 107-page Act assigns applications of AI to three risk categories:
“First, applications and systems that create an unacceptable risk, such as government-run social scoring of the type used in China, are banned. Second, high-risk applications, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirements. Lastly, applications not explicitly banned or listed as high-risk are largely left unregulated.”
The Act at times reads like a science fiction novel. Article 48, for example, states that “High-risk AI systems should be designed and developed in such a way that natural persons can oversee their functioning.”
Regulation raises a complex set of challenges. For example, the data sets on which an AI model learns will determine how accurate and unbiased its output and advice can be. Do you want learning based on all the information, good and bad, in the world? Or should some sources be banned?
And how about output? Should certain uses of the output be limited? Should disclaimers as to the accuracy of work based on AI be required? And just who is going to determine this in the US? Congress? It’s possibly the one group of individuals that knows less about AI than I do.
Is it conceivable that AI is impossible to regulate in any meaningful way? If so, what are the alternatives, such as more effective education in how to interpret AI-mediated work?
How, if at all, should artificial intelligence be regulated? What do you think?
Share your thoughts in the comments below. (A disclaimer: AI was not used in any way in the preparation of this blog post.)
References:
- European Union, “The Artificial Intelligence Act,” April 21, 2021.
- Cade Metz, “The ChatGPT King Isn’t Worried, But He Knows You Might Be,” The New York Times, April 2, 2023.
- Cade Metz, “Tech Leaders Urge a Pause in A.I., Citing ‘Profound Risks to Society,’” The New York Times, March 30, 2023.
- Scott Pelley, “Is artificial intelligence advancing too quickly? What AI leaders at Google say,” CBS News, 60 Minutes, April 16, 2023.
- Bailey Schulz, “Schumer proposes plan to address AI’s potential risks,” USA Today, April 18, 2023.
- Nico Grant and Karen Weise, “A.I. Frenzy Leads Tech Giants to Take Risks in Ethics Rules,” The New York Times, April 8, 2023.
Your feedback to last month’s column
How does remote work affect innovation?
Remote work will, on balance, support innovation. That’s the sense of responses to last month’s column.
Susan Turner made the case this way: “WFH (work from home) means solutions pop up as we’re playing with the dog, opening the fridge, making coffee, or even (gasp) pulling laundry out on a break. Science shows we do not access the most innovative mind intentionally. It happens when we step away.”
Reuben added, “I really feel remote may boost innovation drive as people, especially the innovative ones, have more time to ‘dream and explore new ways.’”
David H. Deans said, “Savvy employers don’t have to settle for the limitations of a local talent pool …” Ryan concluded, “The given argument (for in-office innovation) is weak because it uses a pre-pandemic example of collaboration and innovation, ignoring the transformative impact of technology on remote collaboration since the pandemic began.”