Artificial Intelligence: A Legal and Ethical Storm is Coming. The US Senate is now weighing in. What does this mean for AV?

Last summer, Blake Lemoine, a former engineer on Google’s Responsible AI team, claimed that after hundreds of interactions with a revolutionary AI system called LaMDA, he believed the program had achieved something previously found only in science fiction – a level of consciousness. He doubled down on those claims earlier this year, saying that watching Microsoft’s chatbot felt like ‘watching the train wreck happen in real time.’

Though we’ve yet to see anything approaching HAL 9000, the fictional artificial intelligence and main antagonist of Arthur C. Clarke’s 2001: A Space Odyssey, what we have seen has led AI pioneer Geoffrey Hinton to leave his role at Google earlier this month to speak out about the “dangers” of the technology he helped to develop.

“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Hinton told the New York Times, which was first to report his decision.


On Tuesday, Sam Altman, the 38-year-old CEO of OpenAI, told US senators that he would support regulation in many areas, including preventing election misinformation, making AI-generated content clear to users, and stopping people from using AI “to kill us all.”

He also discussed the potential benefits and harms of artificial intelligence. “If this technology goes wrong, it can go quite wrong,” he said.

Artificial intelligence (AI) is a rapidly growing technology with the potential to revolutionize the way we live and work. In the pro AV industry, many hail it as a game changer for digital signage and conferencing. But AI also has a darker side, and the dangers it carries must be acknowledged and addressed before its wider adoption.

Among the issues Altman sees in the short term are AI systems that try to influence voters ahead of the 2024 US elections. “It’s one of my areas of greatest concern — the general ability of these models to provide one-on-one interactive disinformation,” Altman said during a hearing that Sen. Richard Blumenthal (D-CT) opened with a convincing but fake recording of his own voice, demonstrating the dangers of artificial intelligence.

Altman enthusiastically supports creating a federal agency to examine AI companies’ work. Christina Montgomery, IBM’s chief privacy and trust officer, disagrees, believing rules should come from existing agencies. Altman also endorsed issuing licenses to compliant AI companies and revoking them if violations are found.

He agreed with some lawmakers that new legislation should not cripple startups and smaller initiatives.

“It’s important that any new approach, any new law does not stop innovation from happening with smaller companies, open-source models, researchers doing work at a smaller scale,” he said. “That’s a wonderful part of this ecosystem and of America, and we don’t want to slow it down.”


The most obvious danger of AI technology is that it could become uncontrollable. AI systems are designed to improve themselves over time and could eventually become so advanced that humans can no longer control them. That could lead to disastrous consequences, with an AI system carrying out actions that are not in the best interest of humans (like commandeering the Discovery One spacecraft in 2001: A Space Odyssey).

Another danger of AI technology is bias. AI systems are often trained on large datasets that contain inherent biases, so the decisions those systems make can be unfair or unjust. The result could be discriminatory outcomes that harm certain groups of people.

Finally, AI technology carries a risk of job displacement. As AI systems become more advanced, they can increasingly take over tasks that humans previously performed, which could lead to widespread job losses.


With AI, medical professionals can quickly and accurately assess and diagnose patient symptoms and use the data to develop better, more effective treatments. AI can also automate drug discovery and development processes, enabling researchers to quickly identify potential new treatments and determine the efficacy of existing ones. 

AI can also optimize energy consumption and reduce harmful emissions, helping reduce environmental impact. 

AI can also assist in the development of autonomous vehicles, potentially reducing traffic accidents and making transportation safer and more efficient. 


The professional audio and video industry has increasingly embraced artificial intelligence (AI) in recent years. AI is being used for realistic computer-generated imagery, audio and video editing, speech recognition, music production, and many other tasks. By automating tedious work, AI frees professionals to spend more time on creative projects.

AI can scrutinize user behavior and preferences to deliver customized content recommendations on devices like interactive TVs.
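As an illustrative sketch only (not any specific product’s implementation), one simple way such personalization can work is content-based filtering: build a profile from what a user has watched, then rank unwatched titles by similarity to that profile. The catalog, titles, and feature tags below are all invented for the example.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical catalog: each title tagged with weights for
# (sports, news, drama). These features are made up for illustration.
catalog = {
    "Championship Replay": (1.0, 0.1, 0.0),
    "Evening Headlines":   (0.0, 1.0, 0.1),
    "Courtroom Saga":      (0.0, 0.2, 1.0),
}

def recommend(watch_history, top_n=2):
    """Average the feature vectors of watched titles into a profile,
    then rank unwatched titles by cosine similarity to that profile."""
    profile = [
        sum(catalog[t][i] for t in watch_history) / len(watch_history)
        for i in range(3)
    ]
    scored = [
        (title, cosine(profile, vec))
        for title, vec in catalog.items()
        if title not in watch_history
    ]
    scored.sort(key=lambda s: s[1], reverse=True)
    return [title for title, _ in scored[:top_n]]
```

Real recommendation engines are far more sophisticated (collaborative filtering, deep learning on viewing sequences), but the core idea – scoring candidate content against a learned user profile – is the same.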

Most exciting of all are the AV possibilities still emerging with AI. The burgeoning fields of virtual and augmented reality, for example, stand to benefit from this technology, too. By enabling more immersive experiences in these areas, AI is increasing the accessibility of interactive AV solutions. 

Overall, AI is providing new opportunities for the audiovisual industry to improve the efficiency and quality of its processes and create new forms of content and experiences. 

Ultimately, AI has the potential to revolutionize many aspects of our lives – if world governments (and HAL 9000) let us.

Source: AVIXA, May 22, 2023.