AS the capabilities of artificial intelligence (AI) continue to grow at an unprecedented rate, a pressing question arises: Are we losing control over this powerful technology? The rapid advancements in AI, from sophisticated machine learning algorithms to highly autonomous robots, are transforming various facets of our lives and industries.

However, this relentless progression also brings forth significant concerns about our ability to manage and regulate AI effectively.

In this article, we will explore the current state of AI development, examining recent breakthroughs, sector-specific applications, ethical considerations, and the potential future impact of AI. We aim to address whether the pace of AI innovation is outstripping our regulatory frameworks and what steps can be taken to ensure that this transformative technology remains aligned with human values and societal needs. Join us as we investigate the dynamic landscape of AI and the critical discussions surrounding its governance and control.

The Seoul AI summit

The AI Summit Seoul is a leading global technology and business conference that brings key players in the industry together to share knowledge on the intersection of AI technology and business models.

The conference aims to advance global discussions and collaboration on the development and governance of artificial intelligence (AI) technology. The last AI Summit Seoul was held on May 21-22. Attendees experienced new technologies and had the opportunity to connect with others in the industry.


The next AI Summit Seoul will be held on December 10-11.

AI safety concerns escalate

Experts are increasingly warning about the potential risks and unpreparedness for advanced AI breakthroughs. Key figures in AI safety have left their positions, raising alarms about ensuring AI systems remain safe and aligned with human values as the technology rapidly progresses.

The Guardian reports: “The world is ill-prepared for breakthroughs in artificial intelligence, according to a group of senior experts including two “godfathers” of AI, who warn that governments have made insufficient progress in regulating the technology. A shift by tech companies to autonomous systems could “massively amplify” AI’s impact and governments need safety regimes that trigger regulatory action if products reach certain levels of ability, said the group.”

US$6 million fine for faking voices and faces

The Guardian reports: “There have been several high-profile legal cases involving the misuse of AI technology. A consultant was fined US$6 million for creating an AI-generated impersonation of President Biden in a robocall.

“The family of Michael Schumacher won a lawsuit against a publisher for printing an AI-generated interview. A US man faces up to 70 years in prison for using AI to generate child sexual abuse images.

“Steve Kramer, a political consultant, who admitted that he deep-faked Joe Biden’s voice in a robocall that was sent out to thousands of US voters in January 2024, has been indicted and fined US$6 million.

“The robocall, which went out ahead of the first Democratic presidential primary in the US in New Hampshire, used artificial intelligence to fake Biden’s voice telling voters to stay home and “save” their votes for the November general election.

“Ten criminal charges were filed against Kramer out of Rockingham County on May 22, including allegations of bribing, intimidation and impersonation of candidates, TV station WMUR in New Hampshire reported. Similar charges were filed in Merrimack and Belknap counties in New Hampshire, where others reported receiving the robocall.

“Separately, the Federal Communications Commission announced on May 23 that it would fine Kramer US$6 million for the robocalls and also issued a US$2 million fine against Lingo Telecom, which is accused of transmitting the robocall. “It sounded like Joe Biden, and I was, like, ‘That’s weird,’ and then as I listened more, I’m like, ‘It doesn’t really sound like Joe Biden,’” Krista Zurek, who received one of the robocalls, told WMUR.

“The robocall is the first reported deepfake to be used in US presidential politics. Various kinds of attempts at fakes in public life and politics have always been common — but deepfakes use artificial intelligence and various technological tools to copy voices or faces, for example, in ways that are often extremely convincing.

“The incident in New Hampshire prompted the Federal Communications Commission to ban the use of robocalls — which utilise recorded voices on automated calls that dial multiple recipients simultaneously — using voices generated by artificial intelligence,” The Guardian reported.

In another development, OpenAI suspended the use of a ChatGPT voice resembling Scarlett Johansson’s after the actor objected to the unauthorised replication of her likeness.

The controversy echoes past ethical issues in Silicon Valley around consent and privacy.

AI productivity gains

A report found that productivity has surged in the economic sectors most exposed to AI, highlighting the technology’s transformative impact.

In a related development, ChatGPT developer OpenAI has signed a major content deal with News Corp to bring news content from the Wall Street Journal, the New York Post, the Times and the Sunday Times to the artificial intelligence platform, the companies said on Wednesday. Neither party disclosed a dollar figure for the deal, which will give OpenAI access to current and archived content from all of News Corp’s publications.

Meanwhile, Nvidia reported record growth amid the continuing AI boom.

AI safety standards

On the eve of the Seoul AI summit, 16 international firms signed up to new AI safety standards announced by UK PM Rishi Sunak, though critics argue the standards lack enforcement mechanisms.

A new agreement between 10 countries plus the European Union, reached on May 21 at the Seoul AI summit, has committed nations to work together to launch an international network to accelerate the advancement of the science of AI safety.

The “Seoul Statement of Intent toward International Cooperation on AI Safety Science” will bring together the publicly backed institutions, similar to the UK’s AI Safety Institute, that have been created since the UK launched the world’s first at the inaugural AI Safety Summit — including those in the US, Japan and Singapore.

In summary, the latest AI news highlights escalating concerns around safety and misuse, major legal battles, lucrative content deals driving AI progress, productivity gains from AI adoption, controversies over unauthorised replication of real individuals, and initial steps towards establishing AI safety standards.

Conclusion

The AI revolution is rapidly reshaping our world, offering immense opportunities alongside complex risks. As AI grows more advanced and autonomous, safety and ethical concerns escalate, exemplified by incidents like deep-faked political calls and AI-generated explicit content, underscoring the urgent need for robust governance.

The breakneck pace of AI breakthroughs and divergent expert views highlight the challenges in balancing innovation with responsible development.

As we navigate this uncharted territory, all stakeholders — tech firms, policymakers, researchers, and society — must prioritise ethical considerations and proactively address the risks posed by these powerful technologies.

Ultimately, the AI revolution holds both immense promise and peril. By adopting a cautious, evidence-based approach grounded in international cooperation, we can harness AI's transformative potential while mitigating dangers and ensuring alignment with human values.

  • Bangure has extensive experience in print and electronic media production and management. He is also a filmmaker. — naison.bangure@hub-edutech.com