The Federal Communications Commission (FCC) has adopted a new regulation making automated calls that use voices generated by artificial intelligence (AI) illegal in the United States. The decision comes in the wake of robocall campaigns that falsely imitated the voice of President Joe Biden, and it marks a significant step forward in the fight against phone scams.
The evolution of communications regulations
The new FCC ruling extends the scope of the Telephone Consumer Protection Act by bringing AI-generated voices within the definition of "artificial" communications. The measure aims to give law enforcement authorities more effective tools to combat abuse and fraud associated with automated calling, recognizing the potential of these advanced technologies for nuisance and deception.
The threat of AI robocalls and the legislative response
Faced with a rise in scams and misinformation spread through calls using highly realistic synthetic voices, the FCC has taken decisive action to curb the practice. By classifying such calls as illegal, the commission establishes a stronger legal framework to protect citizens against the growing risks posed by voice cloning and sophisticated audio manipulation techniques.
Towards better consumer protection
By requiring prior written consent from consumers for all automated calls, including those employing synthetic voices, the FCC is strengthening privacy protection and consumers' peace of mind. This proactive approach underscores the authorities' commitment to preserving the integrity of personal communications and to combating the misuse of artificial intelligence in telecommunications.