Which leads to the appeal of a mechanized infantry that never gets tired, never gets sick, kills without remorse, follows orders invariantly, doesn't eat or drink, has an upgrade path, and, most importantly, can't be killed, only destroyed or broken.
Sure, the first generation won't replace all soldiers, and they may never replace them all. We will need logistics and support people, of course. It's not a perfect solution. But the second it becomes cheaper to put a robot in the place of a human on the battlefield...
I believe that, first of all, it is not a question of money (being cheaper).
You do not have images of flag covered coffins returning home.
You have no insubordination.
You can engage in economic promotion without being labeled as doing so under international treaties (at least here in the EU).
You do not need all these training facilities.
Just to name a few.
So they will do it even if it is not cheaper (imho). And logistics can be automated as well: delivery by drones, refueling handled by other drones, and so on.
I believe we will see drone carriers, analogous to aircraft carriers, that are at least semi-autonomous within our lifetime.
Flag-covered coffins are a powerful image that helps galvanize the people and maintain public support for a campaign.
It's hard to imagine that people will care much if all they see is a pile of scrap metal being shipped home.
It'll be extremely difficult for a government to use robotic infantry whilst having public support for such an action.
Rather than appealing to a sense of "brotherhood/camaraderie against a common foe" to support a campaign, propagandists will likely have to exploit people's sense of fear to garner support. This is a shift we have already started to see with the "war on terror".
I'm all for this, not just for our soldiers, but also for the remote population. Part of the problem with actual people in these situations is that they fear for their lives, so they make decisions driven by that fear and by keeping themselves alive. Eventually, robotic soldiers can hopefully become better than human soldiers at identifying a threat, even if it's just because they are programmed to take more time doing so. There will be a lot of incentive to make them as cautious as possible when distinguishing civilians from soldiers in instances where the only threat is to themselves.
Right - a robot might kill without remorse, but it also never kills because of fear, or hate, or to take revenge for its fallen comrades.
But then why does it kill at all? Presumably because it's been sent to further a human agenda.
If you imagine a noble military purpose - liberating an oppressed population from an aggressive occupying enemy, for example - then your robot soldiers are awesome: they will target only combatants, minimize collateral civilian casualties, never loot or rape, and selflessly interpose themselves between the innocent and those who would harm them. The perfect heroic soldier, better than any human army could be. They will be greeted as liberators.
But to the extent that the underlying human agenda involves pacifying a civilian population, instilling fear, or outright causing terror, there's no reason to think that a robot soldier would not be capable of being far worse than human soldiers. It can't be reasoned with. It doesn't have a conscience. It doesn't matter that it doesn't 'fear for its life' when kids throw stones at it if it's been programmed to respond to that threat with deadly force precisely to discourage other kids from throwing stones.
Ah, but at that point we have something to work with. Documented evidence of an automated military responding to civilians with undue force can be assessed by the rest of the world. If guidelines are developed and nations sign on to them, evidence of overly aggressive robot soldiers can be treated similarly to chemical weapons, or more likely, landmines. As such, sanctions can be imposed, etc.
Once we take humans out of the in-the-moment decision process, a lot of thorny issues about what's acceptable in specific situations can become less ambiguous. Agreeing on rules is much easier than agreeing on what's acceptable behavior for a person in every situation, because so much of that depends on state of mind.
Sure, the first generation won't replace all soldiers, and they may never replace them all. We will need logistics and support people, of course. It's not a perfect solution. But the second it becomes cheaper to put a robot in the place of a human on the battlefield...
They will.