US Risks Superpower Status without Military AI
Google is wrong, the US needs military AI
Andrew Thornebrooke
JULY 28, 2018
Project Maven, the Pentagon-backed AI imaging initiative that seeks to “result in improved safety for citizens and nations through faster identification of evils such as violent extremist activities and human right [sic] abuses,” marks just the first step along a very long road for the American defense and intelligence industries.
An evolving security landscape
Maven uses drone footage to build its machine learning models, and drew media attention earlier this year when Google backed out of the project over the concerns of dissident employees. The employees, some 3,000 of whom signed a petition against any involvement with the US military, demanded the company not contribute to any research that would be used for wartime applications.
General James Holmes of the US Air Force told reporters last month, however, that the US would need to pursue such research vigorously if it wanted to seriously deter war. It is a stance the Pentagon has thrown its weight behind, releasing a memo earlier this week announcing the establishment of the Joint Artificial Intelligence Center.
Former Deputy Defense Secretary Bob Work addressed the issue at the Defense One Tech Summit: “They say, ‘What if the work is ultimately used to take lives?’ But what if it saves American lives? 500 American lives? Or 500 lives of our allies?” Work went on to reiterate that Google operates an AI research center in China, research that is well within reach of the Chinese military.
In brief, Google’s decision to bow to employee pressure and withdraw from a US defense program, while simultaneously pursuing similar research on the soil of one of America’s largest rivals, could cost American lives down the line.
China’s long-term goal of supplanting US technology businesses through trade appropriation is well known, with China already having invested over $1.3 billion in American AI firms. Nor has China had any qualms about applying AI to defense, recently demonstrating surveillance drone swarms designed to carry out advanced behaviors such as collective decision-making and adaptive formations.
The US’ push for increased use of robotics in the military is well known, and the US Marine Corps Warfighting Lab last month demonstrated a single soldier operating half a dozen drones simultaneously. But the future of such technology is currently uncertain, with the Pentagon recently banning off-the-shelf drones from military operations over cybersecurity worries. The field implementation of fully secure drone swarms by the US military may be years out.
With the international political climate so tense, and with AI advancing rapidly in the defense sectors of other nations such as China, one is left to wonder at the seeming duplicity of Google employees’ demands over Project Maven.
What makes military AI different?
The arbitrariness of the Google employees’ petition was the subject of ridicule in a Defense One article last week by Johns Hopkins strategic studies student Sam Bernstein, who wrote:
Like self-driving cars, autonomous weapons should be judged based on their net value, which requires a degree of knowledge about military operations that most AI researchers have not yet taken the time to learn. That does the public a potentially tragic disservice.
Bernstein’s article not only highlights the total lack of strategic, and even ethical, training among most workers in the field of AI, but also brings attention to how the public media sphere has warped our understanding of what AI on the battlefield looks like. Why do we consider a self-driving car that kills pedestrians any less harmful than a surveillance drone programmed to find terrorists?
The answer may lie in the moral attribution we popularly give violence, and in our preconceptions of what it means to be “artificially intelligent.”
Autonomous weapons platforms may make decisions, but they make decisions from an incredibly limited pool of possible choices. This goes against what we are often led to believe: that AI (and particularly military AI) is somehow going to develop true sentience and wreak havoc upon the world for its sins.
This strange, postmodern form of apocalypticism can be seen in most popular portrayals of military AI, with even the New York Times referring to military research into AI drone imaging as the “Terminator Conundrum.” But such headlines are alarmist at best, and absurdist at worst.
The particularly negative moral attribution given to machines of war stems, perhaps, more from a misunderstanding of the causal basis of a machine’s learning than from any decision those machines actually make. This misunderstanding has many roots, but the largest is likely the human tendency to project intentions onto AI decision-making processes.
The transference of intention
Earlier this year, the so-called “psychopathic AI” (actually named “Norman”) demonstrated that an AI’s behavior fundamentally depends upon its initial training, not unlike a child’s. The Norman algorithm was made to trawl violent imagery before attempting to analyze Rorschach inkblots. Unsurprisingly, its interpretations of the inkblots were quite gruesome. The study, carried out by MIT’s Media Lab, demonstrates one shortcoming of current AI technology: the inability of an AI system to make choices that meaningfully go beyond its programming.
Put simply, AI decision-making is similar to human decision-making in that it is fundamentally based upon prior experience. An AI can only interpret information and make decisions based on data its programming has encountered before, in the same way that humans recognize patterns based upon their lived experience.
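To make the point concrete, here is a minimal, purely illustrative sketch of the idea behind the Norman experiment. It is not the MIT Media Lab’s actual code; the captions, labels, and “inkblot” prompt are invented, and a toy text classifier stands in for Norman’s image-captioning model. Two models with identical architecture, trained on different data, can only describe the same ambiguous input in terms of what each was shown:

```python
# Illustrative sketch only: not MIT Media Lab's Norman code. The captions,
# labels, and "inkblot" prompt below are invented for the example.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Two training corpora describing imagery in very different terms.
neutral_captions = [
    "a bird with open wings",
    "two people holding hands",
    "a vase of flowers on a table",
    "a butterfly resting on a leaf",
]
violent_captions = [
    "a man struck down in the street",
    "a figure dragged into the shadows",
    "smoke rising over a ruined building",
    "a wound left untreated",
]

def train(captions, label):
    # Each model sees only one kind of caption, so the only "choice" it can
    # ever make is the single label attached to its training data.
    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(captions, [label] * len(captions))
    return model

calm_model = train(neutral_captions, "benign")
grim_model = train(violent_captions, "violent")

# The same ambiguous input, a stand-in for a Rorschach inkblot.
inkblot = ["a dark symmetrical shape with two lobes"]

print(calm_model.predict(inkblot))  # ['benign']  -- the only answer it knows
print(grim_model.predict(inkblot))  # ['violent'] -- likewise
```

The grim model is not “psychopathic”; it simply has no other vocabulary or labels to draw on, which is the same point the Norman study makes at a larger scale.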
This is why the Terminator thesis of military AI and the Google employees’ threats of resignation over defense technology are both flawed. Both rely upon our subconscious transference of intention from human to machine. We have transferred the human will to kill onto the machine doing the killing. We think of the cold-blooded mercenary, and transfer our understanding of their amorality to the military machine. We think of doom and apocalypse and an uncaring military-industrial complex, and we transfer the intention to do harm onto anything it creates.
Will military AI one day replace soldiers in the field of killing? Absolutely. Will this event fundamentally change the landscape of warfare? Absolutely. Does the fact that artificially intelligent algorithms will be the ones pulling the trigger change anything about the need for national security or the desire to protect the nation’s interests from foreign interference? Not even a little.
The US economy may be in a trade war, but its military is in an arms race, and developing combat-oriented AI is just the first step towards winning.
###