AG Advisory Addresses “Unfair And Deceptive” AI Activity



State government is in the early days of grappling with artificial intelligence and its potential to dramatically change society, and Attorney General Andrea Campbell on Tuesday issued an advisory to AI developers, suppliers and users highlighting their respective obligations under Massachusetts consumer protection laws.

“AI has tremendous potential benefits to society. It presents exciting opportunities to boost efficiencies and cost-savings in the marketplace, foster innovation and imagination, and spur economic growth,” Campbell’s office wrote in the advisory, adding that its official stance is to encourage innovation and the use of AI that complies with state law.

The advisory added, “However, AI systems have already been shown to pose serious risks to consumers, including bias, lack of transparency or explainability, implications for data privacy, and more. Despite these risks, businesses and consumers are rapidly adopting and using AI systems which now impact virtually all aspects of life.”

The four-page advisory is meant to “address the risks of AI by clarifying the application of existing laws and regulations to AI,” Campbell’s office said. It spells out some things that her office would consider “unfair and deceptive” under state law: falsely advertising the quality, value, or usability of AI systems; making untested and unverified claims that an AI system performs with accuracy equal to a human, is more capable than a human at performing a function, is superior to non-AI products, or is free from bias; and misrepresenting audio or video content of a person for the purpose of deceiving another.

Campbell’s advisory says that AI is “being deployed in ways that can deceive consumers and the public as in the case of chatbots used to perpetrate scams or to surreptitiously collect sensitive personal data from consumers, deepfakes, and voice cloning used for the purpose of deceiving or misleading a listener about the speaker’s true identity.”

AI-created images or videos that depict situations, actions or speech that never actually occurred, commonly known as deepfakes, have proliferated as AI technology has grown more sophisticated in recent years, and policymakers have taken notice. The revenge porn bill that the Senate passed last month addressed the use of AI to create nonconsensual pornographic images, and the House on Tuesday agreed with the Senate to have the Committee on Election Laws review a petition (SD 2932) of Sen. Barry Finegold for legislation relative to deceptive and fraudulent deepfakes in election communications.
