How to rein in the AI threat? Let the lawyers loose

Fifty-five percent of Americans are worried by the threat of AI to the future of humanity, according to a recent Monmouth University poll. More than 1,000 AI experts and funders, including Elon Musk and Steve Wozniak, signed a letter calling for a six-month pause in training new AI models. Going further, Time published an article calling for a permanent global ban.

However, the problem with these proposals is that they require coordinating numerous stakeholders across a wide variety of companies and governments. Let me share a more modest proposal that’s much more in line with our existing methods of reining in potentially threatening developments: legal liability.

For example, an AI chatbot that perpetuates hate speech or misinformation could lead to significant social harm. A more advanced AI tasked with improving a company's stock price might, if not bound by ethical concerns, sabotage its competitors. By imposing legal liability on developers and companies, we create a potent incentive for them to invest in refining the technology to avoid such outcomes.

What about Section 230 of the Communications Decency Act, which has long shielded internet platforms from liability for content created by users? Section 230 does not appear to cover AI-generated content. The law defines an "information content provider" as "any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service."

The definition of "development" of content "in part" remains somewhat ambiguous. Still, judicial rulings have held that a platform cannot rely on Section 230 for protection if it supplies "pre-populated answers," making it "much more than a passive transmitter of information provided by others."

Thus, courts would very likely find that AI-generated content is not covered by Section 230. It would be helpful for those who want a slowdown of AI development to bring test cases that enable courts to clarify this matter. By establishing that AI-generated content is not exempt from liability, we create a strong incentive for developers to exercise caution and ensure their creations meet ethical and legal standards.

The introduction of clear legal liability for AI developers will compel companies to prioritize ethical considerations, ensuring that their AI products operate within the bounds of social norms and legal regulations. The threat of legal liability will effectively slow down AI development, providing ample time for reflection and the establishment of robust governance frameworks.

Legal liability, moreover, is much more doable than a six-month pause, not to speak of a permanent ban. It’s aligned with how we do things in America: rather than having the government regulate business, we permit innovation but punish the negative consequences of harmful business activity.

By slowing down AI development, we can take a deliberate approach to the integration of ethical principles in the design and deployment of AI systems. This will reduce the risk of bias, discrimination, and other ethical pitfalls that could have severe societal implications.

In the meantime, governments and private entities should collaborate to establish AI governance bodies that develop guidelines, regulations, and best practices for AI developers. These bodies can help monitor AI development and ensure compliance with established standards. Doing so would help manage legal liability and facilitate innovation within ethical bounds.

The increasing prominence of AI technologies like ChatGPT highlights the urgent need to address the ethical and legal implications of AI development. By harnessing legal liability as a tool to slow down AI development, we can create an environment that fosters responsible innovation, prioritizes ethical considerations, and minimizes the risks associated with these emerging technologies. It is essential that developers, companies, regulators, and the public come together to chart a responsible course for AI development that safeguards humanity's best interests and promotes a sustainable, equitable future.
