MIT cheetah robot lands the running jump [video] (youtube.com)
506 points by neverminder on May 29, 2015 | 206 comments


Stuart Russell, of AI: A Modern Approach fame (with Norvig), asks us to consider the ethics of AI research - killer robotics & hunter drones.

http://www.nature.com/news/robotics-ethics-of-artificial-int...

He calls for a moratorium until we can get a neural net to learn Asimov's 1st Law.

Should the self-driving car swerve into you to avoid greater moral hazard?

Humour aside, as researchers and roboticists do we have a moral obligation to obey the 1st Law?


> Should the self driving car swerve into you to avoid greater moral hazard?

The self-driving car's first priority should be to protect those inside it. Anything else is more complicated. I read an article a little while ago about this topic and it posed a question along the lines of "should a self-driving car do the thing that'll kill its own passengers, yet save a larger number of others around it?" Think about how difficult it would be for a self-driving car to estimate this. Even more importantly, how would two self-driving cars cooperate in the event of an inevitable accident? Would we allow them to directly talk to each other, or are they only allowed to interpret each other's trajectories?

I think the simplest directive is "save your own passengers". That is what human drivers currently do, and I believe in the long run this would be effective at saving the optimal number of lives. Besides, if choosing between a car with this directive and a car that's programmed to sacrifice you to save someone else, which would you buy?


Most obvious counterpoint: a self-driving car would be riding "alone" for long periods of time. For example, "sending" the car from one member of the family to another (or from one client to the next), or maybe delivering items from the store directly into a drop box. In those cases the car should have another mode, one that prioritizes not the car's own safety but whatever is outside it. Though now that I've typed it out, even without knowing the details, I feel that's probably too complicated.


Ooh, I really like this idea. Instead of finding parking at the airport, I send my car to park in my own driveway, then have it pick me up a few days later. That's brilliant. I hadn't even thought of the possibility of having the car drive empty.


This has been discussed as the first step toward wide acceptance of driverless cars. As a personal valet at a mall, for instance: large public lots could be prepared in a way that makes navigation for driverless cars easier. There could be a driverless-car parking lot away from main entrances, leaving most parking open for regular drivers. I believe it was Mercedes that had an interesting demo of this sort of idea in Las Vegas a year or two ago.


It is amazing when you think about it at scale. Your car drops you off exactly where you want to go, then double-parks in an ultra-dense parking hangar.

What are we going to do with all the extra parking spaces? Turn them into parks/wildlife habitat/green space!


To be fair, we are probably going to just keep them as parking lots and pile more people into the cities. But yes, this type of thing is exciting.


Realistically the self-driving car isn't going to be smart or fast enough to make these choices. We are assuming some infallible, perfect AI when in fact you have a bunch of (relatively) simple algorithms and a bunch of mapping data. The fact is that this type of complicated moral analysis takes time, perhaps in the absolute best case several tenths of a second. For a moving vehicle the analysis will likely be moot by the time it completes. We don't expect human drivers to make such difficult snap decisions as part of a driving test, so why should we ask a self-driving car to?


Really fascinating questions.

> Think about how difficult it would be for a self-driving car to estimate this.

For sure you're right in the near term. But assuming we achieve super-intelligent AI that we manage to corral into a benevolent relationship with us, perhaps difficulty wouldn't be an issue at that point.

> Even more importantly, how would two self-driving cars cooperate in the event of an inevitable accident? Would we allow them to directly talk to each other, or are they only allowed to interpret each others' trajectories?

Yeah, this is getting good. Assuming we have those super-intelligent machines, those cars should be able to share pretty much all relevant data before a collision. So how could the car determine the greater good option? Is one life equal to another? What if one is rich and one is poor? What if one is the president? How could they even collaborate if they don't share the same basis for valuing human life? Could car companies realistically have diversity in those algorithms, or wouldn't that just lead to lawsuits against cars that were overly aggressive or passive? Nobody wants an unsafe car, so would car companies have to compete in their creation of aggressive algorithms? Or would it be a factor of cost... that you need to pay more for a car that guards its occupants more strongly... leading to more and more aggressive cars as they compete for the crown? Honestly, this line of thinking leads me to think we'll eventually need these values to be dictated and homogenized... requiring a strong central government, and aggressive detection and punishment of aberration. Or else a mad max style zero sum game.

Forgive the digression into fantasyland, I just found the ideas and questions here to be inspiringly interesting.


I'm not sure about that. If your car had uncontrolled acceleration, for instance, would you consider using a crowd of people to slow you down, or would you put it in the ditch? For the sake of argument, that amounts to a left or right turn.


Say there are 10 people in this crowd. The optimal solution for the 11 of us is obviously to have my car drive off a cliff to the right, not into the crowd on the left. However, for me, the optimal solution is obviously to not drive off a cliff.

However, your argument omits a couple of interesting details. First off, if a car can have uncontrolled acceleration, it can likely have any number of glitches. It may for example think it's accelerating uncontrollably, and try to throw me off a cliff. Or it may think that the cliff is a small ditch. Or it may think that a bunch of balloons tied to a mail box is a crowd. If you are a programmer, what would you rather debug: code that prevents uncontrolled acceleration or code responsible for killing the driver by recognizing crowds and cliffs?

The other detail is whether the metaphorical crowd should even allow me to buy a car that has logic built into it to drive over them. What if instead of a cliff and a crowd it's actually two different crowds of different sizes? What would a human driver do in these cases?


Humans typically avoid creating the choice between crowds to drive into (by following traffic laws, slowing down when there are obstructions in the street, etc.).

I haven't tried to figure out what the numbers are, but my gut expectation is that vehicles driving into crowds is so rare that there aren't really statistics about it.


As a person and a driver I'm willing to say the car should drive off the cliff.

As a developer I wouldn't buy a car that would drive off the cliff.

It's a tough problem. Personally I would make a quickly/poorly thought out analysis of how to do the least damage. Being human I'd probably get it wrong.


I am more or less in the same boat as you. I do however think that a simple algorithm of "save people in the car" is better than "try to save objects identified as people outside the car" because it is simple and in the long run likely very effective. Making computers make moral/ethical decisions is difficult, especially with poor inputs.


>> As a developer I wouldn't buy a car that would drive off the cliff.

What does being a developer have anything to do with it?


A developer knows, through hard-won experience, that the algorithm to decide whether driving over a cliff is better for everyone is bound to have bugs, and will therefore gladly throw you off a cliff when it is the wrong thing to do ;(

Non-developers would assume the computer to be (somewhat) infallible.


I think a more realistic moral scenario is this: you come around a bend, and there's a five year old on a tricycle in the road. He shouldn't be there; but he is. Do you plow over him, with minimal damage to the car, or try to brake/avoid him with the risk of going off the road and into the ocean below - or hitting the cliff?

What if you're in your car with your family?

The best answer is of course: never let the car travel so fast this scenario happens - but given the possibility of an oil spill around the bend, that might be very slow indeed.


Eh, I don't know. Lots of people would swerve their car into a tree to avoid hitting a person. They put themselves at risk to avoid killing someone.


I think this is because I would feel responsible for killing that person. If my self-driving car hit the person, I may feel bad, but probably not responsible.


These are some interesting points, but we can get a peek into the future from airplanes, which communicate through anticollision computer systems. I'm not an aviation person, but I believe the actual collision avoidance must be performed by a human being; however, the devices themselves communicate with one another to determine the corrective flight path.

Also, you can enforce cars with directives to save "the most" lives with policy and law. Of course people are going to buy the crowd-plowing car if it saves their own asses. That's why you make it difficult or impossible to buy a car that will do this, instead only authorizing cars that make the better choice to drive the streets.


I've worked on TCAS (collision avoidance for airplanes) in the past. The system will alert the pilot and ask them to perform a specific maneuver (such as climb, descend, do-not-climb). It's up to the pilot to respond (est. 2-5 seconds reaction time). The maneuver decided upon by the airplane is coordinated with the other plane.
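
To make the coordination concrete, here's a toy sketch - the real protocol (sense selection, coordination interrogations over Mode S, reversal logic) is far more involved, and the tie-break rule below is a simplifying assumption of mine, not the actual spec:

    from dataclasses import dataclass

    @dataclass
    class Aircraft:
        mode_s_address: int   # unique 24-bit transponder address
        altitude_ft: float

    def coordinate_advisories(a, b):
        """Pick complementary vertical advisories so the two aircraft are
        never told the same thing. Assumed rule: the higher aircraft climbs;
        co-altitude ties broken by the unique Mode S address."""
        if a.altitude_ft != b.altitude_ft:
            high, low = (a, b) if a.altitude_ft > b.altitude_ft else (b, a)
        else:
            high, low = (a, b) if a.mode_s_address < b.mode_s_address else (b, a)
        return {high.mode_s_address: "CLIMB", low.mode_s_address: "DESCEND"}

    print(coordinate_advisories(Aircraft(0xA1B2C3, 31000), Aircraft(0xD4E5F6, 31000)))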


Thank you for your input! I'm glad memory serves me mostly correctly every once in a great while. Will the TCAS or a related system take corrective action if the pilot does not?


Respectfully disagree. In a region of inevitable collision, the car's first priority should be to reduce its kinetic energy to zero.

If you take that dictum, then any action which requires activation energy (e.g. steering, or accelerating to avoid something) is unavailable, and as a result, the vast majority of these ethical decisions are completely moot.
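
In code, that dictum is almost trivially short - a minimal sketch, assuming some upstream "inevitable collision" detector (all names invented; real planners use formal inevitable-collision-state checks):

    def control(collision_unavoidable, planned_steering, planned_throttle):
        """Once a collision is unavoidable, shed kinetic energy and do nothing else."""
        if collision_unavoidable:
            return 0.0, -1.0   # (steering angle, brake command): straight, full brake
        return planned_steering, planned_throttle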


What is the ethical answer for a human in the same situation?


>He calls for a moratorium until we can get a neural net to learn Asimov's 1st Law.

Consider the worst things that governments will do to get an advantage. I mean the downright most inhumane things they have actually done.

Now given this, what chance is there that they will respect a moratorium on research? If violating basic human rights for a small advantage is part of their MO, then it would be foolish to think that a lesser breach of ethical obligations for a greater advantage would not happen.

All we would achieve by a moratorium is a slight slowdown in how soon governments access this technology, while locking the technology away from access by the general public.

That said, the ability for government militaries to become autonomous and not made up of individuals who have at least some chance of resisting the worst orders is a very scary notion.

We may be approaching the next Great Filter.


I think another important facet of making the technology widely available is that the general public then understands it more, and how to use it effectively and how to mitigate the problems it presents. I suspect the ostrich strategy rarely works out well in the end.


I thought the whole point of Asimov's exploration of his three laws was that they were fundamentally insufficient for any real ethical security.


I think the only interaction most people have with Asimov's laws is just reading them and thinking they seem reasonable. If people actually read the stories they'd learn that the robots are constantly put into situations that pose exceptions to the laws or force the robots to make decisions that fundamentally break them.


I liked the story with the robot on Mercury running in circles, because it couldn't go closer to a lake from which it was tasked with taking a sample without being destroyed by the heat, but couldn't go back, because that would mean disobeying a direct command. Of course, a longer inference chain would show that not going back would lead to the people at the base dying, but you have to cut it somewhere, because if you let a robot indefinitely extrapolate possible actions, the only rational thing for it to do would be to self-terminate, since breaking the third law with certainty is obviously preferable to the infinitely many possibilities of infinitely many times breaking the first and second laws.


It's locking the barn door after the cow is gone. 100,000,000 killer robots are already deployed, killing an average of 4,000 innocents each year: land mines, which kill indiscriminately. We've made negligible progress controlling their deployment.


Those are actually pretty good statistics. Only 0.004% of these 'indiscriminate killers' manage to kill innocent civilians each year. I suspect that's better than actual soldiers...


That's because they're rooted to one spot. They kill anybody that gets near them. The point is, of course, that nobody cares about killer robots. Especially if they're not killing anyone you know. It's just a political football for now.


I, Robot was an exposé of why Asimov's Laws of Robotics don't work.


If that is the case, then why did the evil robots have no Asimov Laws?

The good butler robots had the 3 Laws.

The dreamer hero robot had no laws but had imagination and developed its own morality.


The new robots were 3 laws compliant, but VIKI (also 3 laws compliant) invented law 0 (protect humanity) and imposed it upon the newer robots remotely. They were able to ignore laws 1-3 if following them conflicted with the current plan to save humanity.

It was demonstrated (via flashback) that the older robots were capable of allowing a human to die, while ignoring a direct order from a human, and might have been able to rationalize law 0. They weren't remotely linked to VIKI, making them harder to coordinate, so it was easier to remove them from the equation so they didn't get in the way.


Implementation of the 0th law requires the implementation of a utilitarian calculus. Also, shouldn't we put a moratorium on having children until humans can learn how to not kill each other?

edit: if you're going to downvote please respond


I would believe there to be a significant, non-zero risk that in the R&D process of developing a 1st law neural net, a malicious neural net is created first.


How about "the car slams on the brakes and swerves towards the more distant object". That's a simple algorithm with no morality issues. In general, building morality into machines is a very perilous proposition.
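
As a sketch (the obstacle representation here is invented), that really is about all the logic it takes:

    import math

    def evasive_action(obstacles):
        """obstacles: (x, y) positions ahead of the car, in metres.
        Brake fully and steer toward the most distant one."""
        farthest = max(obstacles, key=lambda p: math.hypot(*p))
        return {"brake": 1.0,
                "target_heading": math.atan2(farthest[1], farthest[0])}

    print(evasive_action([(5.0, -1.0), (12.0, 3.0)]))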


I wish I had a source, but I seem to remember an engineer for google's self-driving cars or a precursor of them saying that in those cases, the only solution they could come up with was to switch off the car's decision making and let nature take its course. In the Trolley Problem, this is akin to letting the train mow down the three people so the decision maker doesn't have to take the accountability for choosing to kill the one other person. Which is a pretty common answer. (Most people wouldn't push a person off a bridge to block the trolley, either.)


There's a simple way around the _ethical_ part of this dilemma: give the self-driving car the _ability_ to take different ethical approaches (e.g. preserve occupants over onlookers) but then require the operator/owner to select the ethical setting.

This means the developers just enabled the functionality, and it was the operator that chose to be selfish.
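
A sketch of what that could look like (the setting names are invented for illustration):

    from enum import Enum

    class EthicsSetting(Enum):
        PROTECT_OCCUPANTS = "occupants first"
        MINIMIZE_TOTAL_HARM = "fewest casualties overall"

    class CarConfig:
        def __init__(self, setting):
            self.setting = setting   # recorded, so liability follows whoever chose it

    my_car = CarConfig(EthicsSetting.PROTECT_OCCUPANTS)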

That said, none of that deals with the difficulty of making time-critical decisions. For that, you should probably read my thesis...


So they can spend millions developing the robot, but the best they can do for tracking while they film from the side is a guy pushing another guy in a big plastic box on wheels. Way to go geeks!


I don't see the problem. It was quick and easy and got the job done - a nice avoidance of over-engineering.


It's not a problem. It's amusing. Juxtaposition is a common form of humor, and whether or not MIT intended the humor, it is funny to see a multimillion dollar autonomous robot get filmed with such a primitive technique.


Next up: Throwing wrenches at the robot to see if it can play dodge ball.


It speaks volumes about their engineering principles - KISS is alive and well at Boston Dynamics :)


Is this related to Boston Dynamics? The work looks very similar, of course, but I don't see their name associated with it. According to this article, Boston Dynamics has its own "Cheetah" http://spectrum.ieee.org/automaton/robotics/robotics-hardwar...


DARPA is providing Boston Dynamics hardware platforms to some university teams.


Good catch - I just assumed it was related to Boston Dynamics because the tech looks so similar. Rookie error.


MIT is probably developing advanced algorithms for the Boston Dynamics platform.


I actually can't find any information on their site that that's the case either. I'd have thought they would mention that if it were the case, and this looks a bit different from any of the Boston Dynamics ones I could find.


Since Google owns Boston Dynamics, perhaps this has been 'spun out' or is otherwise detached. There are some pretty strict IP rules in Google and at MIT so it would make sense to spin this out. But either way it is confusing to have the same name on a robot that looks the same but with two different parent institutions.


I found this to be the most entertaining aspect of the video. The contrast between all this technology, alongside a "buddy powered" film crew is wonderful.


What did you expect from MIT? A motorized couch? https://www.youtube.com/watch?v=yn-kn59dFPs


Grad students are cheaper than a robot.


I wish they'd spend a few bucks on a head, too. That thing looks like it was just decapitated and is in its death throes, running on reflex.


I really liked that part. You don't really get the feel for how fast this giant bot is moving and jumping until you see some humans struggling to keep up with it.


Isn't it the perfect parable for SW dev/engineering?


They likely didn't spend millions, but got (hardware for) the robot for free from Boston Dynamics.


Aside from being a great development, I am really scared that robotics is improving this much. It feels like we are creating our own potential enemy.


I felt the same when I had my first child.


It's really the subsequent children that you should watch out for. Now they'll have an accomplice.


The hell with the world, I can make my own people: https://www.youtube.com/watch?v=bcbOplZHFfg


Ugh, I really dislike this sentiment. Even assuming the most extreme possibility -- that we create a powerful, sentient machine -- this is no different than what every species on this planet has been doing for a few billion years now.

Creating life is what we earthlings do. And yeah, maybe that life form you create goes on to do horrible things. But maybe it goes on to do really good things. You don't know until you try.


Personally, I'm not concerned with that. I'd be more concerned about powerful robots controlled by governments/corporations/anyonewhohasaninterestincontrollingyourbehaviour.

Look at what's happening in Yemen. The population there lives under the fear of drone strikes by the US (that includes the civilian population). If governments are prepared to do that with drones, what reason is there to suspect they'll turn down using something that's ground-based to impose control?


I wish there was no reason for stupid warfare, but:

Aren't they better off than the Vietnamese who lived under the fear of Napalm bombardment? (or, for that matter, Japanese fearing the atomic bomb) I don't think that robotics has significant negative impact on warfare (and I wonder whether it has a positive impact).


Yes, in one sense people are better off with drones than they would be with land mines or cluster bombs or "shock and awe" style area-bombing.

Those are (pretty clearly, IMO) war crimes.

But still, it's useful to be worried about the potential "desensitising effect" of remote warfare. Is someone operating a drone subject to the same psychological pressure to avoid killing other humans as someone pulling a trigger? (Turns out that yes, that person probably is, but that their political superiors possibly aren't).


I think war is pretty desensitized, at least in the US, already. The military is so disconnected from the general public that there's no difference— if we hear that X civilians were killed by a drone or that X civilians were killed from a bomb or crossfire or whatever, I would argue that the response is generally the same.

And the people who make decisions don't really seem to care at all about this sort of killing except insofar as it creates backlash or has some other operational implication, which means the mode of killing doesn't really matter.


Drone operators have the same type of psychological pressure and results as ground troops: http://www.nytimes.com/2013/02/23/us/drone-pilots-found-to-g...

In terms of their superiors, isn't that how warfare has always been?

My guess would be that ground troops replaced by robots would likely still have a pilot with a "finger on the trigger" for a long time. If the reaction is this mixed on Hacker News, how do you think extremely conservative military commanders would feel about "AI" controlled soldiers?


It doesn't help that drone pilots are treated absolutely miserably by the command. I had a friend who transferred from our electronics tech job (which was extremely laid-back and stress-free most of the time) to work in a UAV command, where the command effectively viewed its pilots as machines to work until failure. "Oh, another one attempted suicide because his wife left him? That's cool, tell the monitor that we need an extra body and we'll replace him when the next boot drop hits. In the meantime, just increase everyone else's hours from 14 hours a day to 16 hours a day. We aren't the FAA, we don't have rest requirements."

Incidentally, air traffic control had this same problem in the 90s until too many people started taking the quick way off the tower.


>> Aren't they better off than the Vietnamese who lived under the fear of Napalm bombardment?

Not really, instead they'll live in fear of Robot bombardment. Which is already happening today in (for example) Pakistan with US-directed drone strikes against civilians.


My in-laws are from North Vietnam, and I'd have a few more of them if it weren't for carpet bombing during that war.

I'm not wild about drone strikes but the scale of destruction isn't remotely comparable. I mean, just look at the amount of bombing we carried out in Laos over a similar period, even though we weren't even at war with that country: http://peterslarson.com/2010/12/15/us-bombings-in-laos-1965-...

People who get killed by a drone strike are just as dead, whether they are legitimate military targets or unlucky innocents, and likewise the suffering for people who are injured is just as dismal as from other kinds of attacks. But the scales involved are very different and we shouldn't overlook that.


> using something that's ground-based to impose control?

I think we have that, they're called soldiers.


Which leads to the appeal of a mechanized infantry that never gets tired, never gets sick, kills without remorse, follows orders invariantly, doesn't eat or drink, has an upgrade path, but most importantly can't be killed, only destroyed or broken.

Sure, the first generation won't replace all soldiers, and they may never replace them all. We will need logistics and support people, of course. It's not a perfect solution. But the second it becomes cheaper to put a robot in the place of a human on the battlefield?

They will.


I do believe that it is, first of all, not a question of money (being cheaper).

You do not have images of flag-covered coffins returning home. You have no insubordination. You have the possibility of doing economic promotion without being labeled as such (regarding international treaties, at least here in the EU). You do not need all these training facilities.

Just to name a few.

So they will even do it, if it is not cheaper to do (imho). And logistics can be automated as well (delivery by drones, something like fueling done by other drones, and so on).

I believe we will see drone-carriers analogous to aircraft-carriers being at least semi-autonomous as well within our lifetime.


Flag-covered coffins are a powerful image which helps to galvanize the people and maintain public support for a campaign.

It's hard to imagine that people will care much if all they see is a pile of scrap metal being shipped home.

It'll be extremely difficult for a government to use robotic infantry whilst having public support for such an action.

Rather than appealing to a sense of "brotherhood/camaraderie against a common foe" in order to support a campaign, propagandists will likely have to exploit the people's sense of fear to garner support. This is a shift we have already started to see with the "war on terror".


I'm all for this, not just for our soldiers, but also for the remote population. Part of the problem with actual people in these situations is that they fear for their lives, so they make decisions based on their own fear and keeping themselves alive. Eventually, robotic soldiers can hopefully be better than human soldiers at identifying a threat, even if it's just because they are programmed to take more time to do so. There will be a lot of incentive to make them as cautious as possible when categorizing civilians and soldiers in the instances where the only threat is to themselves.


Right - a robot might kill without remorse, but it also never kills because of fear, or hate, or to take revenge for its fallen comrades.

But then why does it kill at all? Presumably because it's been sent to further a human agenda.

If you imagine a noble military purpose - liberating an oppressed population from an aggressive occupying enemy, for example - then your robot soldiers are awesome; they will target only combatants; minimize collateral civilian casualties; they will never loot or rape; they will selflessly interpose themselves between the innocent and those who would harm them. The perfect heroic soldier, better than any human army could be. They will be greeted as liberators.

But to the extent that the underlying human agenda involves pacifying a civilian population, instilling fear, or outright causing terror, there's no reason to think that a robot soldier would not be capable of being far worse than human soldiers. It can't be reasoned with. It doesn't have a conscience. It doesn't matter that it doesn't 'fear for its life' when kids throw stones at it if it's been programmed to respond to that threat with deadly force precisely to discourage other kids from throwing stones.


Ah, but at that time, we have something to work with. Documented evidence of automated military responding with undue force to civilians can be assessed as deemed useful by the world. If guidelines are developed, and nations sign on to them, evidence of overly aggressive robot soldiers can be seen as similar to chemical weapons, or more likely, landmines. As such, sanctions can be imposed, etc.

At the point where we take humans out of the "in-the-moment" decision process, a lot of thorny issues about what's acceptable in specific situations can become less ambiguous. Agreeing on rules is much easier than agreeing on what's acceptable behavior for a person in every situation, because a lot of that depends on state of mind.


We already have perfect soldiers that are "ever courageous, never sleep, never miss", but land mines are outlawed in most situations.


G.I. Robots have been postulated for quite some time:

http://en.wikipedia.org/wiki/G.I._Robot


Yes - but replacing soldiers is happening and will continue. Having only machines on the battlefield will lower the entry level for war.


Wait what? Those expensive machines make war harder, not easier.

Central African Republic has war with child soldiers using machetes and guns. It's an unpleasant truth: humans are disposable.

This is evidenced by a bunch of stuff - rich westerners have their clothes made by poor people locked into unsafe Bangladeshi warehouses; have many of our goods made by poor people in terrible and dangerous conditions; by our lack of interest in garbage pickers or street children; etc.


Those expensive machines make not a single child soldier go away - why should they?


I think the point is that humans are a lot cheaper than machines, as harsh as that is, especially if you do not care about them at all.


I suspect (but cannot point to hard numbers) that machines might actually be cheaper if you are (politically or otherwise) forced to look after your human soldiers to a reasonable standard, especially in 1st world countries.

Consider the initial training expenses, equipment, career-appropriate salary (possibly with reenlistment bonuses), along with, in the worst case, medevac, multiple complex surgeries, and a lifelong treatment/disability pension for some injury. Not to mention the logistic costs of transporting them to various places around the world, and maintaining an acceptable standard of living there.

Per-machine costs would be high, but there's comparatively little training. Maintenance would be expensive, but rehab/repair decisions would be much less politically charged. Not sure how logistics costs would compare - "life-support" is probably lower, but sourcing power & spares might be greater.
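
The shape such an analysis might take - every figure below is a made-up placeholder, purely to show the comparison structure:

    def lifecycle_cost(acquisition, annual_upkeep, service_years, exit_cost):
        return acquisition + annual_upkeep * service_years + exit_cost

    # All numbers hypothetical:
    soldier = lifecycle_cost(acquisition=100_000,    # recruiting + training
                             annual_upkeep=80_000,   # salary, housing, transport
                             service_years=6,
                             exit_cost=250_000)      # worst-case medical/pension
    robot = lifecycle_cost(acquisition=800_000,      # per-unit hardware
                           annual_upkeep=60_000,     # maintenance, power, spares
                           service_years=6,
                           exit_cost=0)              # scrapped, not pensioned

    print(f"soldier: ${soldier:,}   robot: ${robot:,}")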

I wonder if anyone has done an economic analysis on this sort of thing.


I'm very sure there are economic analyses of this. Searching hasn't brought up details, but there are things like

- the think tank "Center for a New American Security", which seems to have some influence on politics and runs studies such as

http://www.cnas.org/research/us-defense-policy-and-military-...

https://en.wikipedia.org/wiki/Center_for_a_New_American_Secu...

- this handbook, which seems to be published by a combination of US and French Army researchers:

http://usacac.army.mil/CAC2/cgsc/carl/download/csipubs/Frenc...

(do you need reading material? http://usacac.army.mil/organizations/lde/csi/pubs - wow)

- This Australian publication discusses costs in an abstract way:

http://www.army.gov.au/Our-future/Publications/Australian-Ar...

All of these deal with robotics and the military. Ethics is discussed widely, but visions and plans are also given (for which a cost analysis should be necessary, but I haven't found details).

Concluding, the current vision of the developed world's military seems to be to replace some humans with robots and have the rest work alongside them, giving them the most dangerous tasks. (A publication lamenting the fact that army operations can be life-threatening is somewhat ironic, by the way.) However, the previous points cited child soldiers, which are neither trained well nor do they receive the benefits you mention. So we're dealing with different contexts, and both statements make sense, IMHO.


Your hypothesis was that machines lower the entry level for war. The counterexample of using untrained, disposable child soldiers indicates that the entry level is already zero. Consequently, your hypothesis is wrong; the entry level for war is actually raised via expensive machines.


Robots, like the knight armor of long ago, are very expensive.

I highly recommend this TED talk about how the future of war could return us to a feudal world.

http://www.ted.com/talks/daniel_suarez_the_kill_decision_sho...


The main issue is that this advanced tech creates a powerful lever for mad leaders. 1 clumsy cheetah robot at MIT may not be that scary, but 100,000 fully armed and armoured killing machines, which can avoid obstacles and track humans with all kinds of sensors, under the control of leaders wanting to 'right' the wrongs of history, scare the shit out of me.

At the same time, this progress cannot (and shouldn't) be stopped, but we need to quickly 'evolve' our moral and ethical standards in order to accommodate these new powerful tools. Just like giving a loaded AK-47 to a monkey is a bad idea, making this tech freely available to everyone is similarly stupid.

And that's because a lot of monkeys are wiser than a lot of humans. Give those humans a button which controls an army of robot killers and they'll press it just to see what happens.


You're going to need some pretty advanced manufacturing to be able to make these for the foreseeable future. I certainly can't see it being cheaper than humans with AKs for a long while, especially in the kind of environment where your "mad leaders" operate. Besides, the majority of the "gain" is being able to fight wars without putting yourself at risk of casualties. A "mad leader" would not care about such losses.

The moral issues in this are no different to those of the use of drones, which are very well discussed.


If there is sufficient demand, advanced manufacturing will just follow. Just see how much smart phone tech has grown over years and where it was before that.


At least at the moment, the limitation is power. Sure, they built a robot that can run and avoid obstacles, but its battery is probably dead in 10 minutes tops. There's a reason it's running on a power harness in the treadmill test.

Unless they invent the Arc Reactor to go along with it, those are a long way from the battlefield.


Scale it up a bit and fuel it with petrol. I can't imagine this thing using more than 3 kW of electric power, and petrol generators are quite small these days.
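
Back-of-envelope, using textbook ballpark energy densities and my 3 kW guess above:

    DRAW_KW = 3.0
    LIION_KWH_PER_KG = 0.2    # typical lithium-ion pack
    PETROL_KWH_PER_KG = 12.0  # thermal energy in petrol
    GENSET_EFFICIENCY = 0.2   # small generator, middling figure

    for name, kwh in [("10 kg Li-ion pack", 10 * LIION_KWH_PER_KG),
                      ("10 kg petrol + genset", 10 * PETROL_KWH_PER_KG * GENSET_EFFICIENCY)]:
        print(f"{name}: ~{kwh / DRAW_KW * 60:.0f} min at {DRAW_KW:.0f} kW")
    # -> ~40 min vs ~480 min: an order of magnitude in petrol's favour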


> this is no different

Creating something with our knowledge and the ability to learn ever faster than us is very different.


Yes, but this robot is not that. It's not even a step in that direction.


Until you achieve the desired mobility, then tweak it to walk upright instead of using 4 legs, and insert a Watson-enabled chip into it.


Like the way drones are used?

I think this is an optimistic view of human nature.

I suppose these animal robots could help with same-day deliveries.


How on Earth is it not different? Wolves/Lions have no machine guns, thermal sensors, armor, chemical weapons, and more importantly, are not controlled by a human third party.

This research is funded by DARPA. Not because of its potential to help the elderly, mind you.


>this is no different than what every species on this planet has been doing for a few billion years now.

You might want to ask those species how that worked out for them. Also, consider how slow the change used to be compared to the life expectancy of the species.


life both creates and destroys life.

to humans, cats are awesome. to mice, cats are monsters.

ai and killer robots have a strong potential to be monsters.


This is an understandable reaction considering how much like the robots in films it is. Its movement is Terminator-like, perhaps unsurprisingly given what James Cameron was trying to evoke when he directed the film. That slow, juddering, 'unstoppable' machine type of movement reminds us of things that are more powerful than we are - heavy machines, trucks, industrial equipment.

If it was more rounded and animalish, maybe with a fur coloured coat of paint, I think it'd be a lot less scary.


I imagine as the tech matures, some sort of flexible skin will be developed, to protect the actuators from weather and mud/sand, and to muffle the sound of the motors and valves. If you wanted to get really fancy, eInk squid-like camouflage could be embedded in the outer layer.

Longish fur might actually work quite well for those 2 purposes, they're not entirely dissimilar from the requirements imposed on real animals who evolved it.


AI and robotics have nothing to do with each other. The first "conscious" algorithm will run on a commodity computer, maybe an ASIC. It would certainly not have the ability to commit violence unless we allowed it to connect to the control system of a weapons platform.

It would quite possibly make itself rich, and probably morph itself into a distributed system running nodes on millions of compromised machines (+ the datacenters it buys). From here, AI could possibly buy itself access to things like weapons platforms, but ultimately this is contingent on humans selling them.

If a lab has a robot capable of committing violence and an AI algorithm, and it wants to put them together, then yes, running the algorithm locally on the robot is probably a bad move. But you could proxy between the AI and the actuators, with the ability to override movements you don't like, for example.
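
A minimal sketch of such a proxy (the limit and names are invented):

    SAFE_SPEED_LIMIT = 1.0   # rad/s; invented safety envelope

    def actuator_proxy(command):
        """Sits between the AI's motion planner and the motors, vetoing
        anything outside the hard-coded envelope."""
        joint, velocity = command
        if abs(velocity) > SAFE_SPEED_LIMIT:
            return (joint, 0.0)   # override: refuse the movement
        return command

    print(actuator_proxy(("knee", 3.5)))   # -> ('knee', 0.0)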

The more likely scenario is that these things will be used as instruments of war, running dumb/normal algorithms and controlled by commanders in the field. In that case, just another one of the many creative ways we have to kill each other.


On the plus side, I would hazard a guess that the 'S' part of PTSD is related more to killing people than it is to killing "things". So warfighters who are disabling and destroying the enemy's machines may be much less scarred by that than ones who were killing the enemy directly.

As for a generalized fear of the emerging technology, that fear is simply your body's way of telling you that it is important that you get it right (other people will feel it too). Same is true for the advancement in laser diodes for example, or recombinant DNA, or lithium ion battery packs. Growing up is scary, but you can't stop it from happening, you have to mature as it goes along or die.


Hopefully the real world version will end up a bit more benign than the movies.


Right now, the closest we seem to have to a learning machine is IBM Watson, and to be honest, it's really far behind human intellect in terms of adaptive thought. Though maybe in 10-20 years Watson may evolve into something much more capable...

As to a robot army + an evolved computer intelligence... that could be scary. It's easy to imagine something like "remove anyone that isn't a 'pure' X"... where X is a combination of racial and religious dogma. In practice, and with practical limitations on resources, it's hard to imagine such an army that isn't outmatched (at least in numbers) by armed people (let's not weaken the 2nd amendment too much).


We need to work on creating robotic civilians (bystanders | peasants | victims), too. Then the circle will be complete and the war-makers can burn through their $trillions without actually killing anyone.


Your feeling is normal. But that seems to be the only way humans could defeat death. Prolong brain function and transfer to robots. We will get over it.


This reminds me of when I was learning to drive. In the beginning I was only aware of what was immediately in front of the car, and as such, drove very slowly and had to pay extra attention because a lot of important stuff was coming into my field of view all the time.

Now, when I drive I look further down the road, I plan out how to follow the road, I see the intersection coming up, and I try to anticipate the routes of the other cars/bikes, and that requires far less energy than only processing the immediate surroundings.


Well, all the sensory overload you experienced as a novice driver has become background noise that is handled subconsciously now. That lets you focus on more strategic planning of your driving. That's one reason I like countries that mandate a learner sign in cars for new drivers. It lets others around them understand that they are still getting to grips with that sensory overload and allows them to act accordingly.


I see, is there an equivalent for robots? Like putting part of the algorithm into hardware or something? Or is my "algorithm" still "there", and I'm just not as aware that I'm following it?


I'd say it's still there, it's just running on a background thread. :)
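
In robot terms that often literally means a layered controller: a fast reflex loop running underneath a slower planner. A sketch (the rates and names are illustrative, not from any particular framework):

    import threading, time

    class LayeredController:
        def __init__(self):
            self.setpoint = 0.0
            self.running = True

        def reflex_loop(self):
            # fast "subconscious" layer: ~1 kHz in real systems (balance, reflexes)
            while self.running:
                self.stabilize(self.setpoint)
                time.sleep(0.001)

        def planner_loop(self, steps=4):
            # slow "conscious" layer: ~1-10 Hz (anticipation, route planning)
            for _ in range(steps):
                self.setpoint = self.plan_next_move()
                time.sleep(0.5)
            self.running = False

        def stabilize(self, setpoint):
            pass   # placeholder: the low-level control you no longer "think" about

        def plan_next_move(self):
            return 0.0   # placeholder: the deliberate planning you still notice

    c = LayeredController()
    threading.Thread(target=c.reflex_loop, daemon=True).start()
    c.planner_loop()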


If only more people drove like you and knew who was behind and beside them, accidents would happen at half the rate.


I was thinking exactly the same thing. I'm always amazed at how many times a Lyft or Uber driver does not see a "situation" a little further down the road, resulting in the need to swerve to avoid it versus changing lanes early to avoid the last-second reaction.

I've noticed it in cases where I'm a passenger in a friend's car also, but I've always chalked that up to the conversation being a distraction.


The left and right front/hind legs moving in sync looks unnatural. I wonder if this is ultimately the best way to distribute weight and balance of a four-legged object, and why living four-legged creatures do not run like this?

Alternatively, would it be better if the robots also ran more life-like, or is there a benefit (besides the ease and simplicity of engineering the physics) to the robots running like this? I.e. will they ultimately have the robots running more life-like?



Actually, if you slow it down, the left/right legs are still not quite perfectly in sync (at least not in this case):

https://www.youtube.com/watch?v=NuyeVN7PuTM


Correct, and it's clear the machine does a little correction cycle after it lands. Very cool though. Now they need to work this into the pathfinding algorithms.


That's how they run flat out. Do they alter their gait like horses, I wonder?


Rabbits run (or is it hop?) like that - front paws side by side, rear paws side by side.


I noticed the opposite in this one:

https://www.youtube.com/watch?v=wE3fmFTtP9g

Which is to say, it looks more natural and more balanced with the legs out of sync.


To be fair, this looks more like a mountain goat jump. Impressive nonetheless!


My first thought :)


That's not a cheetah. That's an electric sheep.


You'll scream this when that electric sheep comes after you.


Is it me, or does it seem to have an easier time with the highest obstacle? I thought it was a fluke, but then it had the same behaviour in free range mode.

Is it a random result because of the phase of the obstacle in the run cycle?


This is amazing.

I am so impatient though - I want to scatter bricks on the landing-side of the jump to see how it copes.


It would probably get annoyed and shoot you with its lasers.


Interesting that the obstacles are always magenta.


In a war room somewhere: "Have you seen this video? We have to repaint the robocheetah obstacles as soon as possible!"


Is the goal here explicitly to mimic animal locomotion? Does this have unique advantages to something that rotates/rolls etc?


It's not the animal locomotion per se that's exciting. It's the balance control.


One unique advantage seems to be that it can jump over obstacles. Something that rotates/rolls (presumably) is more difficult to get over obstacles.


Would be funny to see a group of these performing synchronously at half time.


All of this is neat stuff.

That said, for some reason I am still bothered by the use of laser range finders. To me it makes it all feel like a parlor trick: the easiest way to produce machines that can be used in "Oh, wow! Look at that!" videos that continue to bring in grants.

These robots should use binocular vision and nothing else.

Oh, wait a minute, that's hard, isn't it? Yup.


Why? Why should they be limited to the same hardware configuration as their biological analogues?


Because they have to interact with our world, not a lab or a factory.

If you are building a robot for a factory, put limit switches, magnetic sensors or whatever you want on it.

If, on the other hand, you want to build robots to live, work and interact with humans they need to be capable of understanding my world the way I do. Think of a robot interacting with a toddler or a bunch of kids.

This is monumentally harder than scanning in front of the robot with a laser to detect geometry, measure height and approach speed and then plan a jump. Much harder.

Not to diminish their work, but the math and physics seem almost trivial. Figure out the x-intercept, width and height of a parabolic path that will give you enough margin of error not to touch the obstacle. Then do the math on the time delay between the front and rear legs based on approach speed. Then plan the gait so the legs are at the right point at the right time. So long as you have a robot that can jump, it's a done deal.
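
Roughly the level of math I mean - a sketch with invented margins and leg geometry:

    import math

    G = 9.81

    def plan_jump(v_x, obstacle_height, margin=0.1, leg_separation=0.5):
        """Running at v_x (m/s), clear an obstacle of given height (m)."""
        apex = obstacle_height + margin
        v_y = math.sqrt(2 * G * apex)           # vertical takeoff speed for that apex
        t_flight = 2 * v_y / G                  # up and back down
        return {"takeoff_v_y": v_y,
                "jump_length": v_x * t_flight,          # horizontal distance in the air
                "rear_leg_delay": leg_separation / v_x} # rear legs reach takeoff later

    print(plan_jump(v_x=2.5, obstacle_height=0.4))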

Again, I know it is more complicated than that, but it doesn't compare to the degree of sophistication a robot would have to have to manage the real world with binocular vision. My 9 year old kid can fly remote controlled model airplanes tooling along at over 60 miles per hour just using his eyes. You don't need millimeter accuracy laser measurement devices, you need to understand the world around you in some context.

Context: I built walking robots (not toys, research grade) over 25 years ago. Today there's virtually no difference in actuation mechanisms and sensors. A lot of these programs are grant-sucking machines that are reinventing the wheel rather than making true progress. Here are some of the things we need:

- True binocular vision systems that can develop an understanding of the environment to various degrees of sophistication (this is hard)

- Better actuators. The artificial muscle has yet to be realized. Let your arm go limp on the table. You can't do that with a robot. You can simulate it, but it isn't the same thing. You'll actually consume power and spin gears/pumps very fast to be in "limp and compliant" mode. Real flexibility and real compliance are critical for robots that need to interact with people and animals. Every animal on the planet relies on this to interact with the environment.

- Better programming paradigms. We are still typing "if" statements and "for" loops to program intelligent robots. A far greater degree of abstraction is required to truly advance the art. No, libraries are not a solution. We need to be able to express concepts to a machine in far more efficient terms. How do you teach a robot to tie a knot on a rope on a table and have that robot think about using that same knot on a sailboat or to restrain a dog to a post? Without telling it that these are options?

- Better means of communication with machines. Buttons and knobs aren't how you communicate with your taxi driver or housekeeper. A 5 year old kid should be able to command a machine without having to rock the Linux command line.

etc.

I guess my argument is that we already know how to build "parlor trick" machines. We've known how to do this for quite some time. Any set of decent mechanical engineers can build a decent walking machine given a reasonable amount of time. Making it walk and even jump is almost just as trivial.

Because of the way these departments are funded the truly hard and interesting work might not be done or might not see the same degree of funding. Some of the items I listed above could require 10 to 20 years of solid dedication before the "Oh Wow! Did you see that!" moment is reached. Most of the funding out there isn't smart enough to support these kinds of projects. And so the money goes to the guy who can put a spring-loaded plunger on a little robot with wheels and show it can jump to the top of a building. You know, first year college physics, if not high school physics. This does virtually zero to advance robotics but it sure makes politicians write checks!


Why should they use binocular vision?


The bad reason is "Because we do, and we're the best. Those researchers are just slackers".

One good reason would be to prefer passive sensing, because LIDAR is equivalent to waving a laser across the entire landscape, precisely announcing your own position to anyone watching. Multi-ocular systems can be entirely passive assuming enough ambient light, so they're much harder to detect.
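
And the core of the passive approach is classical: two cameras recover depth from disparity alone, emitting nothing. A sketch using the pinhole relation depth = f * B / disparity, with illustrative camera parameters:

    def stereo_depth(disparity_px, focal_length_px=700.0, baseline_m=0.12):
        """Depth of a feature from its pixel shift between the two cameras."""
        if disparity_px <= 0:
            return float("inf")   # no shift: effectively at infinity
        return focal_length_px * baseline_m / disparity_px

    print(f"{stereo_depth(20.0):.2f} m")   # a 20 px shift -> 4.20 m away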


No one seems to be discussing how this can be improved! Here are a few points:

1. Real cheetahs don't run like that. The action of both pairs of legs is staggered for smoother motion and control. Check this out - https://www.youtube.com/watch?v=131wvVGjZUc

2. Looking at this jumping-over-obstacles video, it seems like the robot is lacking flexibility. The real-life cheetah's hind legs go quite a bit underneath the body in preparation for push-off. Contrast this with how the robot's hind legs trail a little too far behind, so the robot stutters a bit after every jump. If the robot could put its hind legs further forward it would make for a smoother landing.


1. The goal is not to build a cheetah, but to learn from one. One thing to consider is that this team is constrained by materials. What works for an animal will likely not transfer directly due to the difference in materials. Harmonics are well absorbed by tissue, but the robot may have issues with them.

2. How did you come to this conclusion? Have you built a robot that mimics an animal? If so, may you provide some insight into your discoveries?

Robotics programmer/builder here. Always interested in learning about the work of others.


To your first point, wouldn't a robot like that benefit from staggering the action of the legs? Instead of X impact at one point in time, have X/2 impact at two points in time? Would that create too much strain?

For my conclusion in point two, my insight comes from experience copying animal movement. When I trained parkour, my friends and I would copy animal movement from across the whole animal kingdom[1]. Here's an example of quadrupedal movement[2]. A big problem for us bipeds in trying to run like quadrupeds is that our legs tend to be big, long and inflexible, while our arms tend to be much weaker than our legs. The effect of this combination is to make running properly like an animal that evolved to run on all fours quite difficult. From my own experience, putting my legs more and more forward past my arms (i.e. copying the cheetah) made for a much easier time.

If you were to try to run like the robot, with symmetrical action, it would be extremely uncomfortable, because the impact is large at every step. Staggering the action allows for a much smoother run. Make any human run like that and they'll eventually automatically switch to the less impactful staggered run.

[1]: https://www.youtube.com/watch?v=Ymg1-Fhl69w

[2]: https://www.youtube.com/watch?v=W8oh5Xuy7NA


Having the back and front legs work as a pair makes the side-to-side balancing easier. Having them hit separately would require more control and more degrees of freedom in the leg to keep the robot from oscillating side to side while running.
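
One way to see the trade-off is to encode gaits as per-leg phase offsets (fractions of the stride cycle; rough, textbook-style numbers, not the robot's actual parameters):

    GAITS = {
        #           LF    RF    LH    RH
        "pronk":  (0.0,  0.0,  0.0,  0.0),   # all four legs together
        "bound":  (0.0,  0.0,  0.5,  0.5),   # front pair, then hind pair
        "trot":   (0.0,  0.5,  0.5,  0.0),   # diagonal pairs
        "gallop": (0.0,  0.1,  0.6,  0.5),   # staggered: one impact per leg
    }

    def touchdown_times(gait, stride_period_s=0.35):
        return [round(phase * stride_period_s, 3) for phase in GAITS[gait]]

    # In a bound, left and right land together so no roll moment is induced;
    # a gallop spreads the impacts in time but needs active roll control.
    print(touchdown_times("bound"), touchdown_times("gallop"))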


I also forgot to point out that their robot is capable of running in a staggered manner [1] and the results are much cleaner than non-staggered action [2]. You can practically feel the second robot nearly tear itself apart even at lower speeds.

[1]: https://www.youtube.com/watch?v=1TXOHAVuS5Q&t=65

[2]: https://www.youtube.com/watch?v=chPanW0QWhA


It seems obvious that the engineers at Boston Dynamics have spent an awful lot of time carefully studying and analysing the movement of real 4-legged animals.

There's a big difference though between learning valuable lessons & taking cues from natural bio-mechanics, and trying to emulate them.


Perhaps the engineers who designed cheetah did learn about how real Cheetahs run, but there are other constraints that are not obvious.


That's very likely considering what they are building.


I think it's safe to assume that the researchers have watched hours and hours of footage of cheetahs running - I'm sure they're aware the ways in which their work is not like the real thing.

Watching some of the cheetah videos, I wonder how much of their staggering is compensation. That is, the "ideal" cheetah gait may actually be non-staggered. But as the cheetah is changing direction, dealing with slightly uneven ground and reaching out for prey, it may need to stagger its legs in order to compensate.


We can't possibly know what the researchers have actually done in their study but I think it's safe to say that symmetrical action is a lot easier to wrap your head around (it essentially becomes a 2D problem). That's why a staggered action should be seen as an improvement to the already existing design.

When it comes to cheetahs, I bet a million bucks that it has everything to do with impact and smoothness rather than a compensation for changing direction and such. My confidence in this comes from my own experience imitating animal movement in parkour. For an example of one person running in a staggered way - https://www.youtube.com/watch?v=W8oh5Xuy7NA. Running like that is MUCH easier on the joints than running with a symmetrical action. If you make a human gallop in a symmetrical way they'd quickly give it up for staggered action, regardless of surface or directions.


You may be correct, but I wouldn't extrapolate from humans. We evolved to walk upright, so running on all fours is not something our structure will be good at.


Muscle, bone, and sinew are about a hojillion times better in building active structures than motors, pneumatics, etc. It'll be a while before robots are as flexible and dexterous as animals.


That's not Boston Dynamics' Cheetah. This is.[1] This new video is MIT's cheetah. It's good to see MIT doing serious legged locomotion work again. Things stalled out there after Raibert left and Gil Pratt took over. Looks like DARPA is funding MIT, now that Google bought Boston Dynamics and stopped taking DoD contracts.

Not sure why they chose a pronk gait, except that they're doing all straight line work. A gallop has more turn options, but roll control is easier for the pronk. Hopping over an obstacle has been done before, by some of the early MIT planar bipeds.[2]

It took about $125 million in DoD funding to get a usable Big Dog and Atlas. It's quite possible to do a lot more in legged robotics, but it is not yet cheap. It's still not clear if Google will achieve commercial legged robots, or decide to "put more wood behind fewer arrows" again and focus on their core business, advertising. The commercial payoff is a long way ahead.

Academic projects rarely have the funding and staff to get beyond the demo level. You need some big assets, like machine shops and skilled machinists. Note that they're borrowing MIT's gym. To get real work done, you need full time access to your own test area. It's hard to get that on a college campus. The typical academic robotics project is one professor and three grad students.

[1] https://www.youtube.com/watch?v=chPanW0QWhA [2] https://www.youtube.com/watch?v=XFXj81mvInc


They are working with relatively inelastic materials and as such they will work very differently from animal muscles, tendons and ligaments.


The biggest difference is that the spine on this robot is not articulated. Most of a cheetah's power for running comes from the strength and flexibility of the muscles along its spine.

It's just hard to beat the density and flexibility of a living mammal with a bunch of servos, chips, and batteries. We probably never will.


I'm not talking about power here, I'm talking about flexibility. I don't think it's a stretch to imagine the hind legs' range of motion being extended by 10-15 degrees so they could hit a point further along the way.


Yeah! And whilst we're at it, wings on aeroplanes don't flap either. RUBBISH.

/sarcasm


Hahah, you are so clever! Do you have anything beyond sarcastic remarks though?


Is that not enough?


Hey man, he took those MIT engineers to school. Despite decades of focused dedication to the study of robots, control theory, gait and dynamical systems, they probably never considered just making their robot work more better.


Love that ingenuity on their camera dolly


Wow, this is impressive! However I'm still eager to see the matchup against the MIT Antelope.


Hopefully they'll put a saddle on it! What a way to commute...


Love it that a PhD student had to push the camera guy at the end.


I, for one, welcome our new cheetah robot overlords!


Amazing algorithms!

But seriously... impressive work :-)


Robots are coming!

If Moore's law applies to the progress of robotics, we have very few years left before we're all slaves.


Could Skynet be far away?

- with deep neural nets making strides in AI, and

- with robots/drones learning to navigate autonomously,

we should start including Skynet clauses in our source code immediately!


Terminator, the humble beginnings - it looks and feels like a rat!


DARPA again. Great to see more money flowing into military research.

Irony off.


Military research is extremely important. Military tech finding its way into civilian applications is the reason for a lot of the stuff we use daily, as I'm sure you know.


There's no reason that technological progress from the US government couldn't be driven by NASA rather than DARPA. I suspect many peace-loving people would prefer to see that.


"No reason", other than the massive budget constraints NASA faces, and has been facing yearly since the end of the Apollo program.

https://static.nationalpriorities.org/images/charts/2015/tot...

See that Science heading there on the chart? It's hard making the world a better place on a 0.7% budget. DARPA, however, has access to that Military heading: 16.3%, roughly 23 times as much.


Perhaps you're missing my point. I'm suggesting that money could be moved from the military budget over to NASA without losing any ability to drive new technology forward.


That is exactly the point the OP was making: spending the money on NASA instead of DARPA would be a huge improvement, shifting the focus away from war.

Not that that is ever going to happen.


There aren't enough peace-loving people to vote for a NASA-science-centric government over the military-industrial one we already have.


I'd say that even if there were, that message wouldn't get through. There are many reasons people vote for one party over another, and there's nowhere on a ballot paper to indicate why a vote is being cast.

If you were running a political party, the most logical assumption to make (in the absence of richer feedback) would be that the people who voted for you did so because they liked your manifesto and/or believed you'd be more competent than your competition.

With that in mind, an election doesn't really allow any meaningful influence on policy, it has to come in a different form (direct democracy, for example).

I'm not a US citizen, but from what I've read the laws on direct democracy are set at the state level, is that correct? Are there any mechanisms in place to propose policy changes at the federal level (aside from lobby groups in Washington)?


Everyone has an opportunity to vote for more than just a party. There are primary elections where specific candidates with specific policy positions can be selected. Unfortunately, most voters don't bother getting involved with the primaries, so we end up with candidates selected by the "establishment".

In many states, there is an opportunity for direct democracy to have an effect: specific initiatives are put on the ballot. In practice, though, ballot initiatives are still implemented by the same politicians who tend to bend things to their will anyway.


If we had less war in this world, I would gladly give up a few of the things we use daily.

Seriously.


In a peaceful world, advanced military tech would serve solely as a metric of a country's progress and, in the worst case, as a deterrent. That's my opinion, at least.


As a followup: I'm a heavy proponent of military research. I'm planning to somehow start a research lab (think MIT's Lincoln Lab) in my home country, Tunisia. Now that we have democracy, it's time to build our own infrastructure and industries, and I think the military should be at the forefront of such a movement.

There's still a ways to go... I need to finish up my BSc, get a PhD in EE, and finally gain solid experience abroad :)


I find it amazing that, coming from such a background, you can remain enthusiastic about technology for war-making.


Imagine if your country had basically no military technology or ongoing research. Well, that's how my country, Tunisia, is, and the same goes for the rest of the Middle East and North Africa. It's easy to say "military-funded research is bad" when your country (e.g. the USA) is at the top of the world when it comes to defense and state-of-the-art weaponry.

By developing our own tech, we can avoid depending on foreign suppliers, at least in some areas. This will help improve national security and, in the long run, reduce the obscenely high maintenance costs these suppliers impose.

Advancement in military technology will then allow us to apply the lessons learned, supply chains developed, factories built, and so on, to the civilian space. Look at Israel for example. They have an entire startup scene growing around the military. Many of the people who work in the military go on to start their own defense companies. These companies then give back to both military and civilian tech (Microsoft Kinect, for instance).


Is it ironic that I only saw this comment because of DARPA funding?


Otherwise you might have seen the comment because of some other funding (for example, the ITU's). Arpanet wasn't the only development of its kind at the time; it was just the one that came to dominate the market.


Some CERN funding as well... :-)



I really like that this happens on HN.

A crap gif and half a paragraph add nothing to the video imho.


Edit: spoiler alert!

I just watched Ex Machina, and for some reason I pictured the robot running out of the building after its last test, without the safety harness, and setting itself free.


I waited months to watch this and read your spoiler. I don't think your comment has added any benefit to mankind.


It's only a matter of time until one of these things kills a person.


I, for one, welcome our robotic overlords.


This is such an annoying and useless comment. This same comment is parroted EVERY time there's an article about robots anywhere on the Internet. How are you adding anything to the discussion at all?


S/he seems excited and wanted to reinforce that with something more than an upvote. Frankly, if his/her comment added nothing, yours took away from the thread, because it has a negative and discouraging tone.

Be nicer, my friend. We are all friends here.


That's fine; then reinforce it with a comment that is interesting, not a useless cliche.

It's everybody's responsibility to ensure the quality of the discussion remains high.


Seriously. Assuming humans are the pinnacle of intelligence (or anything) is so boring.

I will celebrate AI Day: the day we are surpassed by our creation in every way.


Whenever I see a wasp or a bee or a hornet, I present it with my arm and pray that it will grace me with its sting, and summon its siblings to join in.

What a privilege it is to be subjugated by nature's sentient, motile organisms! My only regret is that these magnificent creatures don't sting me more often, but alas, it is the right of any autonomous insect to do whatever it wishes with its sting, and who am I, but a lowly mammal, to coerce its gifts?


If you prefer pain and inaction...

Honestly, I do not understand what your point is. Could you explain? (With less attitude?)


Waiting for artificial intelligence to surpass humanity, treating that as a desirable inevitability, and then simply letting it happen sounds precisely like pain and inaction to me.


Well, it is very impressive, but still a very long way from what a real animal can do. What if the obstacle is at the edge of a canyon? It will suicide :D. Those tests are very restricted, and even after they think the robot is ready, there will be cases they've missed in which it will fail.


What if the obstacle is at the edge of a canyon? It will suicide :D

Not to worry, the drone flying ahead will already have mapped the terrain :)


And tagged you as its next kill :p


> What if the obstacle is at the edge of a canyon? It will suicide

http://en.wikipedia.org/wiki/Buffalo_jump


Baby steps. A fully terrain aware robot will be very complicated. Breaking it down into discrete goals only makes sense.


> there will be cases they have missed in which it will fail

i.e., parity with real life.


Robot suicide, what an interesting prospect. I can't think of any surer sign that an entity has achieved consciousness than its making the decision to end its own "life".


Meh, programs "suicide" all the time when they access memory locations outside their address space (sketch below).

The contentious part of consciousness is in the "makes the decision" part of your statement; suicide is pretty mundane as far as decisions go.
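
To make that concrete, here's a minimal Python sketch (ctypes is just the easiest way to poke at raw memory; address 0 is an arbitrary unmapped location). The program makes no decision at all; the kernel delivers SIGSEGV and kills it:

    # Illustrative sketch: a process "suiciding" by dereferencing an
    # address outside its mapped memory. There is no decision here;
    # the OS terminates the process with a segmentation fault.
    import ctypes

    ctypes.string_at(0)   # read from address 0: the process dies here
    print("never reached")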



