
Militarizing AI Amidst Conditioning Social Fear

Image Credit: Stefan-Bogdan Holeac

On September 24, 2019, Boston Dynamics officially released "Spot," its infamous robot dog, for sale in select markets. On September 25, 2019, the Georgia Institute of Technology and Northwestern University revealed "smarticles," which, in non-technical terms, are robots made out of robots. The innovations themselves, however, are not the point in question. Rather, let's begin the conversation with the funding that is setting them in motion.

Boston Dynamics has received funding from the Defense Advanced Research Projects Agency (DARPA) and the United States Marine Corps. The Georgia Institute of Technology and Northwestern University received funding from the United States Army. In addition, in 2018 it was reported that the Pentagon is investing nearly $1 billion USD in combat-use robots and artificial intelligence.

Concurrently, however, society is bombarded with anti-AI sentiment from credible sources and technological experts:

"The upheavals [of artificial intelligence] can escalate quickly and become scarier and even cataclysmic." - The New York Times

"I mean with artificial intelligence we're summoning the demon." - Elon Musk

"The development of full artificial intelligence could spell the end of the human race…" - Stephen Hawking

Here is where we need to address the "artificial" elephant in the room: although we continue to innovate in AI, and seemingly encourage its development (in this case toward military use), we are simultaneously being conditioned to fear it. The mixed message is this: society is consistently fed exaggerated claims that AI's underlying intention is to undermine humanity, while at the same time AI development for combat is funded and advanced. Though our fears may be warranted to a certain extent, they are left unregulated and unclarified, burying potential positive uses of AI.

This paradox invites continued AI controversy that ultimately detracts from the potential benefits of non-weaponized AI in combat (e.g., bomb disposal, the LS3). Arguments in favor of such applications make valid points about real benefits, provided there is significant regulation, restriction, and compliance. The guidelines outlining which types of artificial intelligence are acceptable, and what functions they may serve in combat, need clarification.

One main issue here is the zeroed-in focus on "fully autonomous weaponry" and the fixation on that label. First, there is no doubt that fully autonomous AI, whether considered a weapon or non-weaponized machinery, poses a significant threat; the concern was serious enough that over 100 countries expressed the need for limitation. UN Secretary-General António Guterres tweeted: "Autonomous machines with the power and discretion to select targets and take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law." Strong opposition toward "full autonomy" was also expressed by active-duty military personnel, 73 percent of whom polled in opposition. Even so, there continues to be disagreement among Convention on Certain Conventional Weapons Member States over the type of regulation needed. Yet neither the UN statement nor the Human Rights Watch articles mention semi-autonomous weapons or non-weaponized machinery. Leaving it to speculation whether the prohibition encompasses both is an invitation for loopholes. The potential for this already exists with the Modular Advanced Armed Robotic System (MAARS), which can carry lethal weaponry. That being the case, such ambiguity would further open the door to asymmetric warfare among nations. Not to mention it could be terrifying.

Second, and of equal importance, is the fixation on weaponry rather than on fully autonomous machinery. Fully autonomous machinery, and the social attitude toward it, has been hiding underneath the weaponry argument and also needs to be discussed. For the most part, the non-weaponized machinery mentioned here, such as the LS3, is only semi-autonomous. Still, the fixation on the term "weaponry" may present a significant problem.

Therefore, clarification is essential to distinguish, for example, a country building a platoon of lethal autonomous weapons systems (LAWS) from one manufacturing a semi-autonomous LS3. The ultimate goal of non-weaponized AI is to save and aid the lives of soldiers while strictly prohibiting destructive and lethal uses.

Simultaneously, this conditioned "fear" amplifies our negative association with artificial intelligence. There are two sides of the coin here. On one side, this social fear builds reluctance to trust AI's intentions in general; so how can we see these machines as an enemy yet expect them to help us in combat?

By the same token, that same fear exacerbates the presumption that AI is inherently destructive. Carrying that underlying attitude into the engineering process may end up confirming the stereotype. Building LAWS is the perfect example. We can already see this in companies like Boston Dynamics, which has experimented with Atlas, and in South Korea's METHOD 1, both of which could easily develop into LAWS if left unregulated.

In short, in situations like these we unfortunately cannot look on in fear nor through rose-colored glasses. The lack of a solidified agreement on clarification and the prevailing social attitude go hand in hand. Though the fear is warranted, it is also discouraging trust in innovation. Artificial intelligence can do a world of good and a world of bad; both need to be seen together, especially when militarization is involved.
