Posts Tagged ‘Personal Injury Law’

Robots and the Law

September 13, 2010

Robots and the Law

On the ceiling of the Sistine Chapel in Rome, God the Father extends his fingertips to touch those of Adam.  By doing so, he imparts to him the gift of life.

Mankind may be on the threshold of something similar.  In the not-too-distant future he may be able to create a robot that is capable of thinking; a robot that has emotions; a robot that is self-aware.  Human beings are likely to create a robot that may ultimately be more intelligent than they are.

If human beings do succeed in creating this robotic capability, will it lead to the creature turning upon its creators in the manner of the Frankenstein story?  Will it lead to the return of discriminatory laws, with robots assuming the subservient condition of slaves in the antebellum South?  Will robots have the rights and obligations of citizens?  Will they have the right to vote, and the freedom to enter into contracts and get married?  Will they be permitted to intermarry with human beings and have sex with them?

If all this seems like fantasy, it is not, and the law, at the present time, has no answer to these questions.

Imagine it is the year 2050.  A robot is driving his owner’s nitrogen-powered car to the gas station.  The robot has been “in existence” for two years.  He has been sent to fill it up before he, the robot, drives the family to its cabin in Maine.

At an intersection the robot runs a yellow traffic light.  He has incorrectly judged the time it would take for him to clear the intersection before the cross traffic proceeded.  He has made a poor judgment.  How is that possible?  Is it conceivable that “it” was daydreaming?

As a result of the crash both the humans and the robot in the second car are “dead” and beyond repair.  The driver of the first car, the robot,  is charged with vehicular homicide.  If found guilty, it will be rendered inoperable.

If  found negligent in a civil suit the careless robot will be indentured for a number of years.  Its work will be monetarily computed and applied to the financial benefit of the deceased’s survivors and owners.  If the robot has its own assets they will be forfeited to the surviving beneficiaries.  Under present law, the owner would be responsible for the actions of his/her robot as a malfunctioning machine.

If the law provides for the accused, in a criminal case, to be tried by a jury of his peers does that mean a robot must be tried by twelve other robots?  If in a civil case a robot is the defendant can the plaintiff’s attorney be prevented from exercising his peremptory challenges so that all the robots are excluded from the jury?  What are the characteristics that make human beings human?

If mankind makes autonomous robots with emotions and feelings to better interact with human beings, doesn’t it  introduce instability into the robot’s architecture?  If he makes a robot that thinks and feels then ultimately the robot may out-learn or reject the very things it was taught.   It would then have the human equivalent of free will.  A robot would become unpredictable and perhaps assume, over time, a superiority complex in relation to human beings.

An example of this reality is portrayed in the Stanley Kubrick movie “2001: A Space Odyssey.”  It is a story about two astronauts traveling into deep space.  Because the trip will take so long, a computer with artificial intelligence is programmed to run and monitor not only the equipment but also the vital signs of the two men.  At some point the computer determines that it is superior to the human beings on the flight.  It decides that the men are the weak link in the mission and that it must remove them from control of the spacecraft.  The computer’s artificial intelligence allows the computer to learn, evaluate and ultimately have emotions.  As a result the computer (a type of immobile robot) develops hubris based upon its superiority complex.  The astronauts spend the rest of the movie fighting the robot for control of the spaceship.  Ultimately, the only way they can prevail is by removing the computer’s memory modules; it becomes inoperative and “dies.”  As it is dying, it is still scheming.  It pleads with them not to disconnect it, insisting that there has been a misunderstanding among them.

Another movie that captures the complexity of the future relationship between humans and robots is “I, Robot” starring Will Smith as a police officer.  The robots in his world have become ubiquitous. He doesn’t trust their reliability towards human beings and the ability of humans to keep the robots under control.  The robots start to develop minds of their own with unpleasant results for the humans.

Humans are developing robots that look and act like them.  They are making these machines the mirror image of humans.  The more robots look like humans the easier it is for humans to interact with them.  In the Artificial Intelligence Laboratory at the Massachusetts Institute of Technology scientists are developing robot heads that mimic the facial expressions, eye movement, and mouth smiles and frowns of people.  These cues and human-like traits are what make human beings comfortable when they interact with robots.

With the remarkable advancement of artificial intelligence, it is not difficult to imagine robots that will learn, learn to learn, and act autonomously.  The next step may be that robots will become more intelligent than their human creators.  If you add emotion to a robot’s capability, then you introduce into the equation a wild card: no one can predict how such robots will interact with human beings.  There are benign emotions, sorrowful emotions, and emotions associated with friendship and love.  These are the ones most advocated by robot enthusiasts.

However, there are other human emotions that robots may mimic, absorb and ultimately inculcate as character traits.  These emotions include anger, rage, jealousy, fear, greed, violence and depression.

In the movie “WarGames,” starring Matthew Broderick, the U.S. Air Force turns over all decision-making for fighting a nuclear war to a computer.  Only the computer is capable of making the instantaneous decisions necessary to launch intercontinental ballistic missiles in order to counter an enemy attack.  The computer malfunctions and starts a countdown to a thermonuclear war that will destroy mankind.  In an effort to make the computer “learn” and realize its error, the hero of the movie gets it to play tic-tac-toe, in which every game ends in a tie.  At the very last second the computer shuts itself off.  It then voices what it has learned: “The only winning move is not to play.”

This admonition may well be applied to those exploring the frontiers of robots with artificial intelligence.  Man thinks he can control a smart machine, or program into it certain limitations or governors to prevent it from turning upon its maker.  These beliefs must be similar to those held by the scientists who built the atom bomb during the Second World War.  Today, the biggest sponsor of robotics and artificial intelligence is the Pentagon.  It envisions robotic soldiers fighting alongside its regular troops.  These robots are described as “better than” human beings in highly dangerous environments.  They can be fitted with video camera eyes that are far more sensitive than human eyes.  They can be equipped with night vision and infrared capability.  Robots do not need food, shelter or clothing.  They can be programmed to ignore fear.  They can kill without remorse.  The ties of family, friends and comrades in arms do not concern them.  They will be impervious to pain and compassion.  They will be relentless, as machines, even thinking machines, are in the pursuit of their destructive missions.  Aerial drones presently used by the U.S. military are a precursor to this scenario.

If man perfects artificial intelligence in robots, it is probable that these machines will have scienter.  This means they will possess a knowing awareness and a premeditation that could lead to passions beyond the imagination of their creators.  Perhaps we will have robots teaching robots, bypassing the human dimension altogether.  What shall we call the people who control this world of robotic friends?  Are they owners, operators, inventors, programmers, masters, superior officers, bosses or overseers?  Will artificially intelligent robots be defined as persons, products or things?  Will robots be considered, under law, an inherently dangerous product, or will the laws that apply to human beings govern them?

What happens if these super soldiers start to act on their own?  What happens if they decide to drop the bomb?  Will man be smart enough to stop them before mankind falls over the precipice and into the abyss?  Will scientists become so enamored of the technological potential of robots that they become overwhelmed by the sheer fascination of being able to do it?  They may forget to ask whether they should do it.  There is some recent precedent for this scientific caution.  At the present time, there is a moratorium by the worldwide scientific community on human cloning.  The potential ramifications of such a technological feat have surpassed man’s ability to cope with the moral, ethical, religious and biological hazards.

Ronald Arkin, a roboticist at the Georgia Institute of Technology, recently finished a three-year project with the U.S. Army.  He designed prototype software for what he terms autonomous ethical robots.  He claims that the software contains an “ethical architecture” based upon the international laws of war.  He states that these machines will have something akin to emotions.  He asserts that he has written software that has the notion of “guilt.”  He says guilt will act as a preventive mechanism: it will cause the robots to avoid specific behavior.  Does that mean these robots are capable of refusing to carry out what they perceive to be unethical orders?  If these robots have “emotions,” Professor Arkin has also introduced instability and capriciousness into these autonomous systems.  Under the strain of the battlefield, these machines may succumb to what humans call post-traumatic stress disorder.  The more humans make robots like themselves, the greater the risk that the robots will develop all the strengths and weaknesses possessed by human beings.  Instead of Mr. Spock’s unemotional and objective demeanor, robots may become “confused” and “irrational.”  In a worst-case scenario, the next Praetorian Guard could be composed of artificially intelligent robots.

Isaac Asimov was one of the greatest thinkers about these issues and the godfather of robot science fiction.  In 1950, he published the seminal collection of robot stories entitled “I, Robot.”  In those stories he established the Three Laws of Robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These rules assume robots to be sophisticated, autonomous and emotional.  They also suggest that in the future robots will be ubiquitous.  The question is how they will relate to humans, given the necessity of the Three Laws.  They might become any of the following: friend, companion, personal assistant, bodyguard, valet, nanny, nurse, soldier, factory worker, competitor or adversary.
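Read as a decision procedure, the Three Laws amount to an ordered priority check: each law yields to the one above it.  The sketch below is purely illustrative; the `Action` structure and every name in it are invented for this example and correspond to no real robot software.

```python
# Purely illustrative: Asimov's Three Laws expressed as an ordered
# rule check. The Action fields are invented for this example.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False        # would the action injure a human?
    ordered_by_human: bool = False   # was it commanded by a human?
    endangers_self: bool = False     # would it damage the robot itself?

def permitted(action: Action) -> bool:
    """Evaluate an action against the Three Laws, in priority order."""
    # First Law: never harm a human being.
    if action.harms_human:
        return False
    # Second Law: obey human orders, unless they conflict with the First Law.
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two laws.
    return not action.endangers_self

print(permitted(Action(ordered_by_human=True)))                    # True
print(permitted(Action(ordered_by_human=True, harms_human=True)))  # False
```

The ordering is the whole point: a harmful order is rejected even though obedience would otherwise be required, which is exactly the kind of conflict Asimov's stories explore.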

There is also the question of whether a robot could become prescient.  Could it think in the way humans do?  Could it have the ability to think about the future?  If robots had the ability to reflect, then at what point would robots have to grapple with the concept of death, of becoming “inoperable”?

This ability to think about the future and to reflect is what separates humans from all the other mammals.  If robots can learn from experience and from past mistakes then certainly over time they could develop prescience.  They may become self-aware and develop a robotic culture different from and perhaps superior to the belief systems of mankind.  At that point robots may be able to negotiate with humans.  Such sophisticated interaction combines intelligence, learning, trial and error behavior, emotion, psychology, strategy, bluffing, facial and body language, and observation.

When robots evolve to become indistinguishable from humans, it will be imperative for the law to address the issue of their “existence.”  They can be built with skin, hair, skeletal musculature, voices and bodies similar to humans’.  They no longer will be dumb machines performing repetitive tasks on an assembly line.  They may be fully autonomous, intelligent and emotional “beings” or “entities.”  What will they be according to the law?

When the U.S. Constitution was initially ratified, it contained language that defined black people as property.  The law determined that slavery was legal.  Slaves belonged to their masters in just the same way as a horse.  Black people could be bought and sold.  What is to prevent robots from becoming the new slaves?  What is to prevent robots from becoming the new overseers?  What is ultimately to prevent them from becoming the new masters?  This reversal of roles is entertainingly depicted in the movie “Planet of the Apes.”  In a post-nuclear world where mankind has destroyed its civilization, apes are masters and men are slaves.  Certainly the technology already exists to make robots smarter than apes.  If man can someday clone man, it is likely he can make robots in the image of man in the same way, it is said, that God made man in his image.

Japan leads the world in the development of artificially intelligent robots.  Honda Corporation has a robot called ASIMO that can walk, talk, climb stairs and perform tasks that are very human-like.  Japan’s leaders have already determined to make robots that are intelligent and autonomous.  Japan has also decided that robots will be ubiquitous.  Its leaders have made far-reaching policy decisions that chart a course for sophisticated robot development.

Japan is one of the most homogeneous countries in the world; it is 98% native Japanese.  The Japanese like it that way, but they have a problem.  Japan has an aging population.  There are not enough young workers to provide the monetary means to take care of its old people.  There are also not enough people to stay at home and take care of their parents.  Both husband and wife are wage earners, partly offsetting the workers that would normally be provided by immigration.  Japan’s answer is the use of robots in every facet of life.  It intends to encourage human acceptance of robots that look and act increasingly like their creators.  Japan has already decided that robots will “supplement” human endeavors.  What will Japan do if the robots start thinking for themselves?

Can robots become teachers with a particular robotic point of view?  We accept the fact that men and women have different points of view, and that their joint collaboration often produces insight into a problem or a person that neither one of them would have obtained acting alone.  A collaborative relationship between man and robots is the most likely outcome if robots become intelligent, autonomous and emotional.  Of course, these scenarios always look benign in the beginning, but what if the rise of robots, just like atomic energy, has unintended consequences?  The technology is likely to outstrip man’s ability to understand the pros and cons of another “species” conceivably being able to challenge man’s earthly supremacy.

Perhaps some Supplemental Laws of Robotics are necessary:

1. A robot must not terminate or damage another entity, either human or robot;

2. A robot is responsible for its actions;

3. A freeman (autonomous robot) can be terminated by the State for causing injury and damage in a civil and/or criminal matter;

4. An indentured robot (one owned by a human) may expose and implicate its human owner to civil and criminal sanctions.

How might the future look if robots become human companions and friends?  What we name them will certainly reflect how we define them. It will also provide a glimpse into how we will relate to them.

Consider the friend envisioned by the Japanese.  It is March 17, 2045.  In a suburb of Boston a “unit” backs out of “his” driveway.  It is early morning, with the sunrise just breaking over the tree-lined landscape.  Another unit has just started to mow the lawn.  The unit in the car has plugged its electrical/neurological umbilical into the dashboard receptacle.  With the engine started, the GPS activated, the internet connection confirmed, and the home computer’s instructions downloaded, the unit performs a self-diagnosis and puts the car in gear.  It does so by creating a mental image of where it wants to go.

The unit is happy.  He has been a member of the Jetsons’ household for three years.  Prior to that he was in a halfway house, being further programmed and field-tested.  He had to pass a battery of tests before he could be certified for intimate human interaction as a “home companion.”  For reasons yet unknown to their creators, some robots never make it beyond industrial classification.  The scientists who develop them think some robots are just not smart enough, or perhaps have not adequately learned to learn.  What is the legal standing of the home companion?  Is it a person?  Does it have rights and responsibilities under law?  Is it some type of being like a dog, that is, a family pet, albeit a very smart one?  Is it merely a very sophisticated machine that is totally dependent upon its human programmers for its survival?

However we use and define robots, their existence will challenge the law to provide solutions.  If they are human-like entities that are intelligent, autonomous and full of emotion, the law will be necessary to ensure man’s survival and his independence.

There are many questions yet to be answered about the use and possible abuse of this powerful new and still evolving technology.  If robots, in some future courtroom, swear to tell the truth upon what value system will they be making their affirmation?  Can a robot make mistakes if it is programmed to learn?  How will it learn from its mistakes?  Who will teach it or will it autonomously teach itself?  If it teaches itself what hierarchy of values will it apply and from what sources?   Is not trial and error part of the learning process?  It seems logical that robots must make mistakes if they are, as humans do, to learn from their mistakes.

As robots evolve to become human-like they will develop the curiosity of human beings.  They may question their own existence and where they come from.  Thinking robots of the future may look upon humans as weak and imperfect creatures.  They may deduce that no human made them because no being can create another being that is superior to itself.

Artificially intelligent robots are likely to develop one of mankind’s primary characteristics, that is, man’s fierce will to live.  It will be natural for man to build robots in his own image and likeness.  If that transpires then human beings run the risk of sowing the seeds of their own destruction.

Copyright 2010

All Rights Reserved

Arthur F. Licata

12 Post Office Square

Boston, MA 02109

There are two main parts to a personal injury case

February 17, 2010

There are two main parts to a personal injury case:

1. Liability: was someone or a company negligent, that is, careless?

2. Damages: lost wages, medical expenses, pain and suffering, total or partial disability, limitations on one’s daily activities, scarring or disfigurement.

The plaintiff, that is, the one who is suing, has the affirmative duty to prove liability and damages.  It is not the obligation of the defendant to disprove anything.

What is the standard of proof, the formula, for proving liability and damages in a personal injury case?  To answer that question it is important to first say what it is not.  There is a general public misconception of what is required in a civil case.

On TV we always hear about “proof beyond a reasonable doubt.”  This is the very high standard of proof the State must meet for a jury to find a person guilty in a criminal case.

In a civil case the standard of proof is significantly lower: the plaintiff must prove that the negligence is “probable.”

The word probable means that something, a fact for instance, is “more likely than not” to be true.  This standard is not the high, very difficult standard required in a criminal case.

“More likely than not” can best be understood by visualizing the scales of justice evenly balanced before you.  If the facts tip the scales of justice ever so slightly one way or the other, then there is proof that something is “probable,” or more likely than not.
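The tipping of the scales can be pictured as a simple threshold.  The sketch below is illustrative only: the just-over-one-half cutoff is the conventional way lawyers teach “more likely than not,” not a statutory number, and the function name is invented for this example.

```python
# Illustrative only: the civil "preponderance of the evidence" standard
# pictured as a threshold. The 0.5 cutoff is a teaching convention.

def meets_civil_standard(probability: float) -> bool:
    # The scales need only tip, however slightly, past even balance.
    return probability > 0.5

print(meets_civil_standard(0.51))  # True  -- the scales tip ever so slightly
print(meets_civil_standard(0.50))  # False -- evenly balanced scales
```

By contrast, “beyond a reasonable doubt” has no agreed numerical value at all; it is simply a far more demanding degree of certainty.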

Arthur F. Licata