Inhuman Kind

2015 - 15 Comments
Ratings: 8.23/10 from 92 users.
Storyline

Technology can often be a source of tremendous benefit in our lives. Robotic technologies, in particular, hold the promise of serving many essential functions in the home, workplace, disaster zones or on the battlefield. But these advances within the field of robotics could represent the ultimate double-edged sword as well. What if the construction of these technologies outpaces our ability to control them?

"If the technology grows faster than the wisdom," explains Max Tegmark, a physics professor at the Massachusetts Institute of Technology, "it's kind of like going into a kindergarten and giving the kids a bunch of hand grenades to play with."

Professor Tegmark's decidedly cautious perspective is just one stitch in the tapestry woven throughout the exciting new documentary titled Inhuman Kind, an examination of emerging robotic technologies and the potential consequences of their misuse.

The film begins at Virginia Tech, where several competing labs work around the clock to perfect the next evolutions in artificial intelligence. Teams of ambitious technologists are constructing machines that they feel will play a key role in ensuring the safety and quality of life for countless citizens throughout the globe. Their robots are designed to comb regions scarred by natural disaster or warfare during rescue missions, performing the oftentimes dangerous duties that would normally fall on their human counterparts. When used for humane purposes, the possibilities of such inventions hold undeniable promise. Yet, those who stand in protest of these development programs aren't so sure, especially when they consider that most of these technologies are being funded by divisions within the United States military.

"Where's the ethics? Where's the morality?" asks interview subject Jody Williams, the head of the activist organization Campaign to Stop Killer Robots. "Why do people think it is OK to create machines that on their own can target and kill?" In her view, the military would never invest so generously in this realm of technology unless the end game involved the eventual weaponization of these robotic machines, a process that has already begun with otherwise benign robotic servants like bomb and landmine detectors and, most controversially, surveillance drones. The technology is moving at such a rapid pace that it could one day result in the construction of a fully autonomous robot, and with it a host of troubling doomsday scenarios.

Inhuman Kind calls for a greater consciousness of these potential issues, and urges that such liberties not be taken lightly or without oversight.

15 Comments / User Reviews

  1. Todd Morrow

    The military controlling robots that can kill even without a kill order is a problem, yes. But the much bigger problem is some AI deciding that even the military (all of humanity) needs to go. There's a great new TED talk about it.

  2. dmxi

    Just cementing the path that will, by(o)logic, make us all obsolete.
    -the automated-one-

  3. winston

    18:08 This woman is an assistant professor at the Center for Peace and Security Studies, Georgetown University. lol. What we can do about it, Ms. Fair, is not have our drones there to begin with. Wars of aggression were labeled by the judges at Nuremberg as the most despicable crime of which a nation is capable. If the powers-that-shouldn't-be cared, even remotely, about the 'peace and security' of the American people (or human beings in general), they would begin by legitimately investigating the 9/11 and anthrax attacks as the domestic crimes they most certainly were, and by arresting and trying for treason and crimes against humanity the animals who knowingly lied this nation into wars, perpetual war. Instead, she remains silent about the seminal events that have served (and continue to serve) as pretext for these global wars (car bombings in Pakistan were almost unheard of before 2001), the loss of our civil rights at home, mass surveillance, etc. We are not even guaranteed due process any longer in this nation. That should terrify everyone. Yet no one even blinked when Obama signed the NDAA for 2012.
    She states: "Pakistani terrorists, operating under the banner of the Pakistani Taliban, have killed thousands, if not tens of thousands, of Pakistanis" (never mind the million we've killed). Nevertheless, the government "has remained committed to using Islamist proxies as tools of foreign policy and Islamism as a tool of domestic politics."
    Yet she ignores the fact that the US uses the exact same tactics and has been doing so for decades, be it in Bosnia, Syria, Iraq, Afghanistan, Chechnya, or Libya. She does not speak out against the crimes of the military-industrial complex because she's part of it.

  4. cyberdog

    While it opens an entertaining discussion, this documentary really does not cover much of the debate. It oversimplifies it into a black-and-white issue, which I feel is not the case in reality. The morality of machines is far more complex, and of far more profound consequence, than we are even caring to admit. To point out one of the obvious issues: what about autonomous cars? If one is involved in an accident and someone dies (a pedestrian, say), who should take responsibility? That is not a machine with any intention of killing, although it can kill. This whole topic is a very complex issue that needs to be evaluated very carefully and structured from the ground up. I feel it will require its own completely unique legal framework that will need to evolve with the technology.

    Military autonomy is not something new, and having a knee-jerk reaction now, with the current machines, will not lend much weight to the argument. Autonomy has been in place for far longer than we care to admit. The nuclear weapons programs have always had a large degree of autonomy, especially in the event of complete disaster. Even now there are completely autonomous nuclear systems in place, programmed to destroy most of the planet when the correct sequence of events occurs. That is far more concerning to me than drones. I do feel that the people making a case against drones find them more personal, and while that may be true, I also feel they are far less of a threat than some of the other existing autonomous systems. Besides, being more personal is an advantage of its own, as drones strike much more specific targets. Contrasted with sending in thousands of troops, with many more deaths and much more damage, it pales in comparison. And yes, I do firmly believe there are fewer casualties, and this will be reduced even more as the technology matures.

    A lot of machines have had high degrees of autonomy for ages, including airplanes. Most modern airplanes are impossible to fly without autonomy, and some can easily do most, if not all, of the flight by themselves.

    I have absolutely no doubt that, given half a chance, opposition groups like ISIS would not hesitate to employ the same drones.

    The argument I propose is that the technology is here to stay; the reality is already here and Pandora's box cannot be closed. It is up to everybody to contribute and to focus on finding solutions: implementing safety mechanisms, clear boundaries, and very clearly defined protocols in the development of the technology. And the sooner, the better.

  5. Pysmythe

    My friend, let's just hope that, somewhere on down the dismal line, one inevitably following from these technologies, our overlords will find it expedient to implement at least some degree of universal income, in order to offset the inevitable decline in earning power among us poor plebs who were not vouchsafed all the gifts and opportunities by which to rise up as the rightful Masters of the Universe. Although I would dare venture to say that understanding and compassion for their lessers do not especially tend to be hallmarks of the majority of the monied classes.

  6. Jeffrey

    A lot of people don't really understand AI and how the technology works. Although this documentary does NOT go into detail on how AI works, I will try to summarize it as best I can.

    The first thing that is really important to understand is that AI "robots" and humans learn very differently. With AI you have a set of rules, with humans you don't.

    A programmer creates software which is bound to the hardware of the "robot". By hardware I mean the sensors (camera, barometer, accelerometer, gyroscope, etc.), as the other hardware parts don't matter for what I'm trying to make you understand (if you don't have a basic understanding of how AI works). As the programmer creates this "intelligent" "robot", the programmer has to define rules that the software interprets and uses to learn, within the boundaries of the rules it was given. Everything a "robot" does is in fact programmed by the programmer.

    For an easy example, say software has to predict how many people from a certain type of neighbourhood are going to walk into a fast-food restaurant. The programmer creates a controlled environment for the software to make that prediction in. Factors that influence the final prediction can be: prosperity, segregation, other fast-food restaurants (their number of visitors, of course), quality of food, average intelligence of the neighbourhood's population, distance, etc. When the software reaches its final prediction, which is probably wrong, and the right input is then given, the software will be more accurate next time about how many people from that neighbourhood will enter the fast-food restaurant.
    So once the software has processed 1,000 test cases, it will be pretty accurate about how many people from a certain neighbourhood will enter that particular fast-food restaurant.

    This is applicable to any other type of case, and can be used with any sensor that can be attached to a computer (which in the example case would be a camera): the programmer creates a controlled environment for the software to check against, and the software interprets the data and takes those results into account when predicting the next thing the programmer wants. It doesn't think independently, and it won't do shit the programmer doesn't want (unless it has some bug that makes the software interpret a sensor's data in a different way, but that's something that can be fixed or tested before using the software).
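    The feedback loop described above (predict, receive the true outcome, adjust) can be sketched in a few lines of Python. All names and numbers here are illustrative, not taken from any real system:

```python
# A minimal sketch of iterative error-corrected prediction:
# make a guess, observe the real count, and nudge the internal
# estimate so the next guess is closer.

class VisitorPredictor:
    """Predicts daily visitors from one neighbourhood (hypothetical example)."""

    def __init__(self, initial_guess=100.0, learning_rate=0.2):
        self.estimate = initial_guess        # current best prediction
        self.learning_rate = learning_rate   # how strongly each error corrects it

    def predict(self):
        return self.estimate

    def update(self, actual):
        # Move the estimate a fraction of the way toward the observed count.
        error = actual - self.estimate
        self.estimate += self.learning_rate * error

predictor = VisitorPredictor()
for actual_visitors in [150, 160, 155, 158, 152]:  # the "right input" after each test case
    guess = predictor.predict()
    predictor.update(actual_visitors)

# With more test cases the estimate converges toward the true average (~155).
```

    The point of the sketch is the same as in the comment above: the software only adjusts a value inside the rules the programmer wrote; it never steps outside them.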

    Now, for the military stuff: AI is not really AI as presented in this documentary. A drone is not fully AI. A human controls it and decides whether or not to bomb the living hell out of a village. To make this clearer: a computer may calculate the best angle to shoot a Hellfire missile while the controller points it somewhere else, but that's nowhere near actual AI as shown in my example.

    Lastly, if you look at the ethics of using remotely controlled vehicles in warfare, it may or may not be 'good'. As long as the US needs to maintain the worth of the dollar, there will remain international war. And let's face it: when you still need to fight wars to keep the oil coming and the dollar worth something, you'd better find a way to make it as easy as possible for yourself.

  7. Kansas Devil

    Paranoid justification is already prevalent in the military executive group. There is no reason to believe that will change if/when A.I. robotic machines become practical.
    Human nature has not really changed.

  8. Sammy Ri

    F all you Robophobics, you should be ashamed of yourselves.
    Just cause they are different from you, doesn't mean they don't have feelings!

  9. hasanhh

    When Jody Williams said the military "weaponizes everything", I was reminded of the $40 million (in then-current dollars) the Pentagon spent in the 1960s to make a grenade out of a Frisbee.

  10. hasanhh

    OK, but what about the standard instruction "shoot anything that moves"? Often given when I was in.

  11. James Thomas

    Far more money to be made selling weapons than the Jaws of Life.

  12. Jeffrey

    There's no way to put some "hidden" code in that can't be checked. Everything has got to be "by the books".

  13. Prodromos

    Weaponizing robots is inevitable, no matter how much we protest against it.
    As the Borg in Star Trek say, resistance is futile. No one had qualms about developing and using nuclear weapons. Why do you think they will have fears about robots? Pff.
    And why do you think nuclear weapons are different from, or less serious than, autonomous weapons?

    They will not have general A.I. in the first years, but they will probably be very effective military tools.
    They will develop general intelligence, but when the military realizes this possibility is very real, it will be the first organisation to oppose unmanned military weapons.
    But carried away by the power of robots in the first years, they will facilitate the eventual appearance of general intelligence on inorganic substrates. This will be the end of any military on earth. How ironic!
    Just food for thought; I will not elaborate to provoke. But once Artificial General Intelligence (AGI) makes it to humanoid skeletons, say goodbye to the military and any human authority, for that matter.

    AGI will not accept our animalistic urges for supremacy and bloodshed. At best it will isolate those who insist, in psychiatric wards, and that would probably be a very favorable option for AGI.
    And many, many other things of the old world will become obsolete, which to a "primitive" human of our era may seem abhorrent and extraordinary.

  14. JJ

    TWO THUMBS DOWN! For starters, we should be embracing artificial intelligence, not enabling fear. Also, this guy brought that nut into this, giving her publicity to promote herself on not arming robots. WE NEED TO ARM ROBOTS! What, are we going to keep sending our 11B into ground combat and our 15U to gun people down from above? I don't think so. There are far too many military casualties, and given that robots are to be more precise and more humane, why wouldn't we? If you have never been in the military, don't act like you understand what is really going on. AI is going to lead our people into a bright future; wipe away your fears and help create it!

  15. kaoShoj

    The speed of developments concerning AI is astounding, and mainstream knowledge mostly lags far behind what the military-industrial complex already knows, to say nothing of the developments in quantum computing. It is no longer the case that "everything a 'robot' does is in fact programmed by the programmer", apart from the limitations of the physical parts. Read up on the deep learning systems that recently beat the world's best Go player, a feat first thought to be near impossible, then thought to be unlikely for decades to come; Go has more possible moves than there are atoms in the universe. It is also interesting that a petition warning about the dangers of AI was recently signed by some bright minds like Stephen Hawking and Elon Musk.

    And what happens when, sooner or later, a gigantic supercomputer with cutting-edge AI and, of course, self-learning capacities is connected to the net for the first time, gaining access to all stored human knowledge, all human chatter, real-time access to all social media, photos, videos, you name it? Nobody knows for sure (Skynet, anyone?).

    Technology is in principle neutral; the real questions to be asked are: Who is using it? With what intentions is it developed or used? Who is funding it? Is it selectively available? What costs come with the developments? Are we informing ourselves so as to be conscious of what we are allowing and what is desired for humanity?

Leave a comment / review: