
By Daniel Shin

Existential risk associated with AI refers to the potential for advanced artificial intelligence systems to cause severe harm or even lead to human extinction. This risk arises from scenarios in which AI systems, especially those with artificial general intelligence (AGI) or superintelligence, act in ways that are misaligned with human values or beyond human control. It's a complex and widely debated topic, with some experts advocating stringent regulation and others emphasizing the potential benefits of AI advancement.

What does that really mean to us personally and practically? AI poses several potential threats to human life, ranging from immediate risks to long-term existential dangers. 2024 is the biggest election year in history: voting has taken or will take place in countries including the U.S., India, Russia, Iran, Pakistan and Mexico, and we're only halfway through a year in which a total of 76 countries are scheduled to hold elections. Misinformation and manipulation therefore deserve scrutiny, because AI can generate and spread misinformation at an unprecedented scale during an election year, undermining trust in information sources and destabilizing societies.

AI can be used to manipulate individuals' behavior and opinions through targeted advertising and social media algorithms. We've already seen several major elections in which social media platforms played a pivotal role; social media has become the main battleground of politics. AI also has the potential to trigger or escalate conflicts, particularly through its applications in military and strategic contexts.

Some could argue that it is still too early to talk about losing human control over weaponized AI, with systems that trigger autonomous attacks deployed in warfare without human oversight. However, in scenarios where AI systems are given significant control over military operations, there is a risk that these systems will act in ways that are not fully understood or anticipated by humans at the critical moment. Misinformation and cyber warfare are probably the more imminent risks on the global security agenda.

AI can generate and spread misinformation, which could destabilize societies and create tensions between nations. Additionally, AI-driven cyberattacks could target critical infrastructure, provoking conflicts. AI can also process information and make decisions much faster than humans.

In a military context, this could compress decision-making timelines, increasing the risk of hasty or miscalculated actions. Pressure is also mounting in the AI arms race: the development of advanced AI technologies coupled with drones and robots could provoke an arms race among nations, similar to the nuclear arms race.

Such competition would increase the likelihood of conflict, as the great powers race to develop autonomous weapons systems that operate without human intervention. These systems could make rapid decisions on the battlefield, potentially leading to unintended escalations and casualties.

These risks are significant. Yet it's important to note that AI also has the potential to enhance security and prevent conflicts through improved surveillance, early warning systems and better decision-making tools that weigh the potential consequences of military actions. The key is to develop and deploy AI technologies responsibly, with robust ethical guidelines.

However, that is easier said than done. AI cannot be fully independent of politics. While AI can be designed to operate with a degree of neutrality, complete independence from politics is challenging due to several intertwined factors.

AI systems are created, programmed and maintained by humans, who inherently hold political beliefs and biases. These can unintentionally influence the design and functioning of AI systems, especially as AI becomes an integral part of military forces. Governments and political bodies also regulate AI development and deployment.

Policies and laws governing AI are shaped by political agendas and priorities, no matter who is in power. AI technologies can be used for the betterment of society, but they can also be exploited for mass surveillance, enabling oppressive regimes to enforce control and limit freedoms. No one wants uncontrollable superintelligence.

However, technology moves faster than the law. The speed of innovation is far greater than most of us imagine, and the global dissemination of AI technology is faster still. If AI surpasses human intelligence, it might become uncontrollable.

Just as humans dominate other species due to our superior intelligence, a superintelligent AI could dominate humans. Yet even before that point, power, greed and obsession could sow the seeds of an AI-powered new hegemony at the cost of human existence.

Daniel Shin is a venture capitalist and senior luxury fashion executive, overseeing corporate development at MCM, a German luxury brand. He also teaches at Korea University.
