29.11.16

Health 4.0: Robots, Artificial Intelligence, and Ethics

Futurist Ray Kurzweil has predicted that computers will be as smart as humans by 2030. By 2045, he claims, 'artificial intelligence' systems may be a billion times more powerful than our unaided human intelligence. Quantum computing and artificial intelligence (AI) will be key components of the autonomous, self-learning robotic systems and 'smart' machines of the future. Are you prepared for what this means?

The promise of AI has always hovered just beyond the horizon, not quite realized yet still capturing our imagination in contemporary movies and literature. AI was initially deployed for a few highly selective defense and space exploration applications, but it has steadily advanced and is now being put to use across many other industries, such as transportation, manufacturing, and healthcare.

Health 4.0 Future Scenario: By 2040, a space-based global AI network of satellites will monitor and help provide healthcare to people on Earth and in colonies on the Moon, Mars, and elsewhere across our solar system. The system will be linked to massive global health data warehouses storing data from a wide range of health IT systems, e.g. Electronic Health Record (EHR) systems, Personal Health Records (PHR), Health Information Exchange (HIE) networks, wearable fitness trackers, implantable medical devices, clinical imaging systems, genomic databases and bio-repositories, service robots, and more.

The space-based global AI system will monitor and analyze the health data gathered on all humans in real time, detecting potential individual and public health issues. When it detects a problem, it will diagnose it, alert the patient and their healthcare providers, and recommend a treatment plan to resolve the issue. The system will also interface with pharmacies, laboratories, health insurers, public health agencies, and other institutions as needed, and will monitor each patient's progress and adherence to the recommended treatment plan. Finally, it will seek to anticipate potential healthcare issues, provide preventive and predictive health information tailored to each person, and even make key healthcare decisions on your behalf.
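
To make this scenario concrete, below is a minimal Python sketch of the monitor-detect-alert loop at the heart of such a system. Everything in it (the VitalsReading record, the ALERT_THRESHOLDS table, the notify function) is hypothetical and invented for illustration; a real Health 4.0 system would rely on learned models operating over vastly richer data, not fixed thresholds.

    # Minimal, illustrative sketch of the monitor -> detect -> alert loop
    # described above. All names (VitalsReading, ALERT_THRESHOLDS, notify)
    # are hypothetical; a real system would use learned models, not fixed
    # per-metric thresholds.
    from dataclasses import dataclass

    @dataclass
    class VitalsReading:
        patient_id: str
        metric: str        # e.g. "heart_rate", "glucose"
        value: float

    # Hypothetical per-metric normal ranges: (low, high).
    ALERT_THRESHOLDS = {
        "heart_rate": (40.0, 130.0),
        "glucose": (70.0, 180.0),
    }

    def notify(patient_id: str, message: str) -> None:
        # Stand-in for alerting the patient and their care team.
        print(f"ALERT for {patient_id}: {message}")

    def monitor(readings):
        # Scan incoming readings and alert on out-of-range values.
        for r in readings:
            low, high = ALERT_THRESHOLDS.get(r.metric, (float("-inf"), float("inf")))
            if not low <= r.value <= high:
                notify(r.patient_id, f"{r.metric} of {r.value} is outside [{low}, {high}]")

    monitor([VitalsReading("patient-042", "heart_rate", 148.0)])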

RoboEthics

How people communicate with each other is very different from how people interact with machines. A growing trend in computer systems design and development involves looking more closely at how humans interact, communicate, and make decisions. The goal is to teach computers, machines, robots, and the Internet of Things (IoT) to better comprehend, communicate, and safely interact with humans. This field of study is often referred to as Robot Ethics or AI Machine Ethics.

It’s interesting to note that Robot Ethics, also referred to as RoboEthics, covers both (1) the moral behavior humans need to design and build into robots and AI systems that interact with humans, and (2) the moral obligations of society towards its robots and 'smart' machines. These 'robot rights' may include the right to life and liberty, freedom of thought, expression, and equality before the law, similar to human and animal rights.

'Deep learning' is one of the current terms used to describe the process of teaching artificially intelligent (AI) systems and self-learning autonomous robots to understand and solve problems by themselves, rather than having engineers code each and every decision or solution these non-human systems will make.
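
As a toy illustration of learning rather than coding, the hypothetical Python sketch below trains a single sigmoid unit by gradient descent to decide whether a number is above 0.5. The programmer never writes that rule; the parameters w and b are adjusted from labelled examples. Deep learning stacks many such units into layered networks, but the underlying idea is the same.

    # Toy illustration: this program is never told the rule "x > 0.5";
    # it learns weights from labelled examples via gradient descent.
    import math, random

    # Training data: y = 1.0 when x > 0.5, else 0.0.
    data = [(x / 10.0, 1.0 if x > 5 else 0.0) for x in range(11)]

    w, b = random.uniform(-1.0, 1.0), 0.0   # parameters learned, not coded
    lr = 0.5                                # learning rate

    def predict(x):
        return 1.0 / (1.0 + math.exp(-(w * x + b)))   # sigmoid unit

    for epoch in range(5000):
        for x, y in data:
            p = predict(x)
            grad = (p - y) * p * (1.0 - p)   # d(squared error)/d(w*x + b)
            w -= lr * grad * x
            b -= lr * grad

    print(round(predict(0.9)), round(predict(0.1)))   # typically: 1 0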

RoboEthics will be a key issue that needs to be addressed in Health 4.0 Systems.

Recent Articles on Robots, AI and Ethics

The following is a selection of recent articles on the topic of Robots, AI and Ethics that you may want to quickly scan as you delve deeper into the topic:


Isaac Asimov’s “Three Laws of Robotics”
  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
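
Asimov's laws amount to a strict priority ordering, and it is tempting to picture them as a simple veto function like the hypothetical Python sketch below. The hard part is everything the sketch assumes away: deciding whether an action actually 'harms' a human requires exactly the kind of world understanding that makes RoboEthics so difficult.

    # A toy encoding of the Three Laws as priority-ordered checks on a
    # proposed action. Every field here is hypothetical; a real system
    # would need a rich world model just to evaluate "harms_human" at all.
    from dataclasses import dataclass

    @dataclass
    class ProposedAction:
        harms_human: bool           # acting would injure a human
        inaction_harms_human: bool  # NOT acting would let a human come to harm
        ordered_by_human: bool      # a human ordered this action
        endangers_robot: bool       # acting risks the robot's own existence

    def permitted(a: ProposedAction) -> bool:
        # First Law: never harm a human, or through inaction allow harm.
        if a.harms_human:
            return False
        if a.inaction_harms_human:
            return True   # acting is mandatory, overriding the laws below
        # Second Law: obey human orders (First Law already satisfied above).
        if a.ordered_by_human:
            return True
        # Third Law: self-preservation, subordinate to the first two laws.
        return not a.endangers_robot

    # A robot may sacrifice itself to save a human:
    print(permitted(ProposedAction(False, True, False, True)))   # True
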
  • Robotics: Ethics of Artificial Intelligence - Lethal Autonomous Weapons Systems (LAWS) may be deployed within a few years, and the stakes are high. LAWS will have the ability to select and engage human targets without human intervention. Think about this!

  • Scholars Delve Deeper into the Ethics of Artificial Intelligence - The U.S. Constitution says that every person should benefit from equal protection under the law. However, our Founding Fathers never contemplated that a 'person' might include an artificially intelligent robot.

  • Researchers establish a Standard for Robotic Ethics - As AI continues its rapid advances, it’s become clearer and clearer that we are dealing with some of the most dangerous technology we’ve ever developed.

  • How Tech Giants Are Devising Real Ethics for Artificial Intelligence - Five of the world’s largest tech companies are trying to create a standard of ethics around the creation of artificial intelligence. The importance of the industry effort is underscored in a recent report issued by a Stanford University group.

  • Can Artificial Intelligence Be Ethical? - It is one thing to unleash AI in the context of a game with specific rules and a clear goal; it is something very different to release self-learning AI into the real world, where the unpredictability of the environment may reveal decision-making software errors that have potentially disastrous consequences. Witness the Microsoft chat-bot called ‘Tay’.

  • Will Robots Need Their Own Ethics? - If we view robots as potential agents or persons, with a degree of autonomy that approaches or may even exceed human autonomy, then ‘robot ethics’ depends upon the notion that robots might in some sense be moral agents in their own right.

  • The Ethics of Artificial Intelligence: A Future Dilemma with Humanoid Robotics - Inverse Reinforcement Learning (IRL) allows sensor-based AI systems to observe humans and identify behaviors, which can then be converted into software code. This code can then be used to develop 'humanoid' robotic systems that attempt to act like a human under most conditions. In other words, we're turning human behavioral patterns into a programmable algorithm - the Algorithm of Life. (A simplified sketch of this observe-and-imitate idea appears after this list.)

  • BSI's First Robot Code of Ethics Bans AI from Harming Humans - The British Standards Institute (BSI), the UK's leading business standards organization, has published a guide that outlines how robots and robotic systems should take ethics into account.

  • Artificial Intelligence Will Radically Redesign Healthcare - AI in healthcare and medicine could better organize patient routes and treatment plans, and provide physicians with virtually all the information they need to make better decisions.
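
Returning to the IRL article above, the hypothetical Python sketch below is a deliberately oversimplified version of the observe-then-imitate idea - closer to behavioral cloning than full Inverse Reinforcement Learning: record which action a human most often takes in each observed state, then replay that mapping as the robot's policy.

    # Oversimplified sketch of learning behavior from human demonstrations
    # (behavioral cloning, a simpler cousin of IRL). The learned "policy"
    # is just the most frequent human action observed in each state; all
    # names and example states are hypothetical.
    from collections import Counter, defaultdict

    def learn_policy(demonstrations):
        # demonstrations: iterable of (state, action) pairs observed from humans
        counts = defaultdict(Counter)
        for state, action in demonstrations:
            counts[state][action] += 1
        return {s: c.most_common(1)[0][0] for s, c in counts.items()}

    demos = [("door_closed", "open_door"), ("door_closed", "open_door"),
             ("door_closed", "knock"), ("door_open", "walk_through")]
    policy = learn_policy(demos)
    print(policy["door_closed"])   # -> open_door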

Conclusions and Recommendations

The following are a number of preliminary observations, conclusions, and recommendations for those working on the issue of embedding ethical rules into tomorrow’s self-learning autonomous robots and artificial intelligence (AI) systems:

  • 'Open' Solutions - Because artificial intelligence (AI) will have such a profound effect on humanity, AI developers have an ethical obligation to be open and transparent in their efforts. For example, check out the existing OpenCog, Open RoboEthics, and OpenAI initiatives aimed at developing 'open source' AI systems for humanity.
  • Software Development - Software development teams attempting to build 'ethical' AI and robotic systems in healthcare must be composed not only of Subject Matter Experts (SMEs), systems analysts, and programmers, but should also include ethicists and auditors specifically trained to carefully monitor the ongoing, changing behavior of autonomous self-learning robotic systems.
  • Domestic Legal Issues - With the lightning-fast development of robotics engineering and AI systems, the legal world is already well behind the curve. Congress needs to ensure that a national oversight group becomes more proactive in developing laws to protect citizens before industry releases potentially harmful systems on an unsuspecting public.
  • International Law - Next-generation self-learning robotic weapon systems will have the ability to make their own logical decisions on who to kill. Unfortunately, many governments around the world are already funding Lethal Autonomous Weapon Systems (LAWS) without taking appropriate precautions to protect humans from AI systems that may go ‘rogue’.

What other conclusions and recommendations do you think need to be highlighted?





