Editorial
ROBOTS AND SOCIETY

Medical robotics—Regulatory, ethical, and legal considerations for increasing levels of autonomy


Science Robotics  15 Mar 2017:
Vol. 2, Issue 4, eaam8638
DOI: 10.1126/scirobotics.aam8638

Abstract

The regulatory, ethical, and legal barriers imposed on medical robots necessitate careful consideration of different levels of autonomy, as well as the context for use.

From minimally invasive surgery, targeted therapy, and hospital optimization to emergency response, prosthetics, and home assistance, medical robotics represents one of the fastest-growing sectors of the medical devices industry. The regulatory, ethical, and legal barriers imposed on medical robots necessitate careful consideration of different levels of autonomy, as well as the context for use. Levels of automation have been defined for on-road vehicles (1), yet no such definitions exist for medical robots. To stimulate discussion, we propose six levels of autonomy for medical robotics as one possible framework (Fig. 1):

Fig. 1. Different levels of autonomy as mapped to robotic surgery. It is possible that technology may advance faster than regulatory, ethical, and legal frameworks. Risk management during implementation is critical to avoid backlash that would impede progress. (CREDIT: R. D. MERRIFIELD, HAMLYN CENTRE, IMPERIAL COLLEGE LONDON)

Level 0: No autonomy. This level includes tele-operated robots or prosthetic devices that respond to and follow the user’s command. A surgical robot with motion scaling also fits this category because the output represents the surgeon’s desired motion.

Level 1: Robot assistance. The robot provides some mechanical guidance or assistance during a task while the human has continuous control of the system. Examples include surgical robots with virtual fixtures (or active constraints) (2) and lower-limb devices with balance control.

Level 2: Task autonomy. The robot is autonomous for specific tasks initiated by a human. The difference from Level 1 is that the operator has discrete, rather than continuous, control of the system. An example is surgical suturing (3)—the surgeon indicates where a running suture should be placed, and the robot performs the task autonomously while the surgeon monitors and intervenes as needed.

Level 3: Conditional autonomy. A system generates task strategies but relies on the human to select from among different strategies or to approve an autonomously selected strategy. This type of surgical robot can perform a task without close oversight. An active lower-limb prosthetic device can sense the wearer’s desire to move and adjust automatically without any direct attention from the wearer.

Level 4: High autonomy. The robot can make medical decisions but remains under the supervision of a qualified doctor. A surgical analogy would be a robotic resident who performs the surgery under the supervision of an attending surgeon.

Level 5: Full autonomy (no human needed). This is a “robotic surgeon” that can perform an entire surgery. This can be construed broadly as a system capable of all procedures performed by, say, a general surgeon. A robotic surgeon is currently in the realm of science fiction.
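To make the structure of this framework easier to reference, the sketch below encodes the six levels, and the human role each implies, as a small Python data structure. It is a minimal illustration only, using Python as a convenient notation; the enum names, the HUMAN_ROLE mapping, and the requires_human_oversight helper are hypothetical shorthand for the definitions above, not part of any regulatory or industry standard.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    NO_AUTONOMY = 0           # tele-operation; robot follows the user's commands
    ROBOT_ASSISTANCE = 1      # mechanical guidance; human keeps continuous control
    TASK_AUTONOMY = 2         # autonomous tasks; human has discrete control
    CONDITIONAL_AUTONOMY = 3  # robot proposes strategies; human selects or approves
    HIGH_AUTONOMY = 4         # robot makes medical decisions under a doctor's supervision
    FULL_AUTONOMY = 5         # no human needed (today, science fiction)

# Hypothetical summary of the human's role at each level, paraphrasing the text above.
HUMAN_ROLE = {
    AutonomyLevel.NO_AUTONOMY: "continuous direct control",
    AutonomyLevel.ROBOT_ASSISTANCE: "continuous control with guidance or constraints",
    AutonomyLevel.TASK_AUTONOMY: "discrete task initiation, monitoring, and intervention",
    AutonomyLevel.CONDITIONAL_AUTONOMY: "selection or approval of proposed strategies",
    AutonomyLevel.HIGH_AUTONOMY: "supervision by a qualified doctor",
    AutonomyLevel.FULL_AUTONOMY: "none",
}

def requires_human_oversight(level: AutonomyLevel) -> bool:
    """True if a human must remain involved while the system operates."""
    return level < AutonomyLevel.FULL_AUTONOMY
```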

For higher levels of autonomy, the ability of the surgical robotic system to respond to a variety of sensory data will need to be more sophisticated. A key requirement for full autonomy will be technology that replicates the sensorimotor skills of an expert surgeon. With decreasing human oversight and increasing robotic perception, decision-making, and action (the traditional “sense-think-act paradigm”), the risk of malfunction that can cause patient harm will increase. Cybersecurity and privacy are also major issues to consider.
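As a rough illustration of how the sense-think-act paradigm interacts with human oversight, the sketch below outlines a single control cycle in which the "act" step is gated by a clinician at Levels 0 to 4 and ungated only at Level 5. It is a hedged sketch only: sense, plan, request_clinician_approval, and execute are hypothetical placeholders supplied by the caller, not the interface of any real surgical or assistive system.

```python
# Illustrative sense-think-act cycle with a human-oversight gate.
# sense, plan, request_clinician_approval, and execute are hypothetical
# callables supplied by the caller, not the API of any actual device.

def sense_think_act_cycle(level, sense, plan, request_clinician_approval, execute):
    """Run one cycle; gate the 'act' step according to the autonomy level."""
    observation = sense()                 # sense: imaging, force, kinematic data
    proposed_action = plan(observation)   # think: generate a candidate action

    if level <= 4:
        # Levels 0-4: a human remains in the loop, from continuous command
        # (Level 0) through discrete task approval (Level 2) to supervision
        # with veto power (Level 4). Modeled here as a single approval check.
        if not request_clinician_approval(proposed_action):
            return None                   # clinician withheld approval; do nothing
    # Level 5: fully autonomous; no human gate (currently science fiction).

    return execute(proposed_action)       # act: move the instrument or device
```

The simplification, of course, is that at the lower levels the human does not merely approve actions; at Level 0 the human generates them. The point of the sketch is only that, as the level rises, the human's veto shifts from every motion to whole tasks to supervision, and the risk of unreviewed errors shifts with it.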

As the level of autonomy in these devices increases, the regulatory challenges will also change. In the United States, the Food and Drug Administration (FDA) reviews and clears robotic-assisted devices via the 510(k) (premarket notification) process. Some future medical robots may instead be classified as high-risk (Class III) devices requiring the most stringent regulatory pathway, premarket approval (PMA). The difference can be significant. For example, based on a 2014 survey (4), it costs an average of roughly $31 million to bring a medical device to market under the 510(k) program, compared with an average of $94 million for a PMA device. A 510(k) device took an average of 10 months from first filing (submission) to clearance, whereas a PMA device took an average of 54 months from first communication to market (4). These regulatory issues may pose significant barriers to innovation, competition, and development, especially for technology start-ups.

At the higher levels of autonomy (specifically Level 5 and possibly Level 4), the robot is not only a medical device but is also practicing medicine. The FDA regulates medical devices but not the practice of medicine, which is left to the medical societies. Handling the overlap is therefore challenging and requires orchestrated effort from all stakeholders. One possibility is for the FDA to certify the safety of a surgical robot design but require licensing and/or certification of the robotic surgeon by the medical establishment, as is currently done for human surgeons.

The implications will vary for different levels of autonomy in different usage contexts (e.g., Level 3 for a home-assistive robot and for a surgical robot can involve very different technological challenges and regulatory implications). This is the key issue for medical robotics: unlike autonomous cars, the spectrum of tasks, environments, technologies, and risks is practically limitless. A framework that establishes general categories for these areas is an important first step. As the autonomous capabilities of medical robots grow (5), the role of medical specialists will shift increasingly toward diagnosis and decision-making. This shift may mean that dexterity and basic surgical skills decline as the technologies are introduced, with implications for training and accreditation. At the same time, pattern recognition and self-learning algorithms will improve, allowing medical robots to take on larger roles at higher levels of autonomy. If robot performance proves superior to that of humans, should we put our trust in fully autonomous medical robots?

Technology may advance faster than regulatory, ethical, and legal frameworks. Risk management during implementation is critical to avoid backlash that would impede progress. We are already at Level 3 for some devices and procedures; the challenge will be to broaden the applications to more complex procedures and environments. For surgical robots, one key aspect of Levels 1 to 4 is that the treating physician remains in control to a significant extent. The robotic devices are essentially doing what the physician commands, with varying levels of detail left to the automated system. Beyond the evolving technology, tolerance of risk from autonomous robots is also expected to change. As autonomous machines such as self-driving cars become commonplace, we anticipate that acceptance of risk from autonomous robots in medical applications will also increase.

REFERENCES

Acknowledgments: This Editorial originated from some of the discussions held at the Halcyon Dialogue, jointly organized by the S&R Foundation and the AAAS on 1 February 2016 at the Halcyon House, Washington, DC. Contributions from all other participants of the Dialogue are gratefully acknowledged.