First, this post is total Adam Nash bait.
Today's Fresh Air had on P. W. Singer, author of Wired for War: The Robotics Revolution and Conflict in the 21st Century. It's a pretty interesting discussion on a number of fronts, including technical, ethical, policy, and even interface design issues. (Apparently, for example, the military based designs for some of their controllers on Xbox controllers, reasoning both that the video game manufacturers had done a bunch of ergonomics research for them, and that they could save on training costs since the typical military recruit would come "pre-trained" on such controllers.) Naturally, the conversation turned to Asimov's Three Laws of Robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Singer and Terry Gross talked about how quickly those laws break down when applied to warfare, since we already have semi-autonomous machines capable of taking lethal action in the field. I'm not sure Asimov ever tried, or meant, to apply those laws to war; probably time for some rereading on my part. They did mention one slightly more sinister model: ED-209, of RoboCop fame.
That scene seems slightly less funny now that a South African military robot killed 9 people and wounded 14 because of a software glitch. That sort of thing starts to make me think more of Skynet or, if you prefer less dorky time travel with your sinister future robots, Fred Saberhagen's Berserker series, in which self-replicating machines attempt to exterminate all life in the universe.
I'm definitely going to pick up Singer's book, but also revisit Manuel De Landa's War in the Age of Intelligent Machines, written in 1991 and drawing on early examples of robotics technology in the Gulf War (as well as much earlier examples, going back to the Renaissance).