South Korea’s science and information technology minister said on Wednesday the world must cooperate to ensure the successful development of AI, as a global summit on the rapidly evolving technology hosted by his country wrapped up.
The AI summit in Seoul, co-hosted with Britain, addressed concerns such as job security, copyright and inequality on Wednesday, a day after 16 tech companies signed a voluntary agreement to develop AI safely.
A separate pledge was signed on Wednesday by 14 companies including Alphabet’s Google, Microsoft, OpenAI and six Korean companies to use methods such as watermarking to help identify AI-generated content, as well as ensure job creation and help for socially vulnerable groups.
“Cooperation is not an option, it is a necessity,” Lee Jong-Ho, South Korea’s Minister of Science and ICT (information and communication technologies), said in an interview with Reuters.
Han Duck-soo, South Korean Prime Minister, gives a speech during the opening ceremony of the AI Global Forum in Seoul, South Korea, on May 22, 2024. (REUTERS/Kim Soo-hyeon)
“The Seoul summit has further shaped AI safety talks and added discussions about innovation and inclusivity,” Lee said, adding he expects discussions at the next summit to include more collaboration on AI safety institutes.
The first global AI safety summit was held in Britain in November 2023, and the next in-person gathering is due to take place in France, likely in 2025.
Ministers and officials from multiple countries discussed on Wednesday cooperation between state-backed AI safety institutes to help regulate the technology.
AI experts welcomed the steps taken so far to begin regulating the technology, though some said voluntary commitments needed to be backed by enforceable rules.
“We need to move past voluntary… the people affected should be setting the rules via governments,” said Francine Bennett, Director at the AI-focused Ada Lovelace Institute.
AI services should have to prove they meet obligatory safety standards before hitting the market, so that companies come to equate safety with profit and stave off any public backlash from unexpected harm, said Max Tegmark, President of the Future of Life Institute, an organisation vocal about the risks posed by AI systems.
South Korean science minister Lee said that laws tended to lag behind the speed of advancement in technologies like AI.
“But for safe use by the public, there needs to be flexible laws and regulations in place.”