Monday, 1 December 2008

Pentagon hires British robotics expert to advise them on building robots that won't violate the Geneva Conventions



Is it not just the saddest indictment of the human condition to learn that the Pentagon is funnelling billions of dollars into a research programme for the creation of autonomous systems (robots, to you and me), which will conform to the laws of war and be incapable of violating the Geneva Conventions?

Read the full report here.

I mean, are we so far gone - so ethically corrupt - that our only hope of fighting future conflicts in an honourable and, dare I say, "humane" way is to ultimately replace human soldiers with programmed machines? According to Ronald Arkin, a computer scientist at the Georgia Institute of Technology working on software for the US Army, it's a distinct possibility. Arkin has written a report which concludes that robots, while not "perfectly ethical in the battlefield", can "perform more ethically than human soldiers." Arkin adds that robots "do not need to protect themselves" and "they can be designed without emotions that cloud their judgment or result in anger and frustration with ongoing battlefield events".

The research is a product of ongoing concerns among Pentagon chiefs following various studies of combat stress in Iraq showing high proportions of frontline troops supporting torture and retribution against enemy combatants. One such study, conducted by Army medical experts from August 28 to October 3, 2006, and published last year, revealed that only 40 percent of American marines and 55 percent of soldiers in Iraq said they would report a fellow service member for killing or injuring an innocent Iraqi. The same study also found that well over one-third of soldiers and marines believed torture should be allowed to gain information that could save the lives of American troops, or knowledge about insurgents.

However, while the Pentagon's research programme is seemingly born of a desire to eliminate incidents of violent retribution on the battlefield (such as the massacre at Haditha, and acts of torture and prisoner mistreatment), it still doesn't address the real problem: the chain of command.

It's all very well considering replacing the frontline boots on the ground with emotionless robots, but what of those giving the orders? Are those higher up the food chain not at all culpable for cultivating (or expressly engineering) an aggressive and seemingly amoral climate in which countless battlefield atrocities have been committed over the last seven years? Only last year, a Marine corporal testifying in a court-martial said Marines in his unit began routinely beating Iraqis after officers ordered them to "crank up the violence level." [Cpl. Saul H. Lopezromo was testifying at the murder trial of Cpl. Trent D. Thomas, who played a key role in the kidnapping and shooting death of an innocent Iraqi civilian.]

It has also been widely reported that the United States' use of torture against enemy combatants being held in detention facilities, such as Abu Ghraib and Guantanamo, was sanctioned at the very highest levels of government. Senior US intelligence officers have subsequently been lobbying the outgoing president Bush to issue pre-emptive pardons to the men and women who could ultimately face charges from the incoming Obama administration for following his orders in the war on terror.

So with this in mind, it might be an idea for the Pentagon to consider replacing the executive branch of the United States government with ethically programmed robots. Let's start at the very top of the tree and see how pre-programmed ethical decision-making from the upper echelons of the command structure affects the frontline ranks of the US war machine! At this point, anything's worth a try.
