Science

Robots Learning Autonomous Surgery by Watching Surgical Videos

Published December 30, 2024

The rise of artificial intelligence (AI) is beginning to reshape many sectors, including healthcare. Recent advancements suggest that surgical robots can be trained to perform operations independently by observing videos of human surgeons. The research is a collaboration between experts at Johns Hopkins University and Stanford University.

The researchers developed a model trained on video footage of human-operated robotic arms carrying out various surgical tasks. By teaching the model to mimic these actions, they aim to reduce the need for programmers to manually specify every movement for each procedure. As the Washington Post reported:

"The robots learned to manipulate needles, tie knots, and suture wounds independently. Interestingly, they demonstrated the ability to correct mistakes without any prompts, such as picking up a dropped needle. Quoting the researchers, they are now testing these skills in comprehensive surgeries on animal cadavers."

Robotic surgery is not a new concept; robots have been present in operating rooms for quite some time. A viral moment in 2018 showcased the precision of robotic arms, giving rise to the popular "surgery on a grape" meme. In 2020, roughly 876,000 surgeries were reportedly performed with robotic assistance. These instruments can reach tricky areas of the human body where a surgeon's hands cannot, and they are immune to tremors. Slim and precise, the tools can also minimize nerve damage. Traditionally, these devices are manually controlled by surgeons, who remain in charge throughout the procedure.

However, the shift toward more autonomous robots raises a notable concern. Critics argue that AI systems, much like ChatGPT, do not truly possess intelligence: they replicate observed actions without grasping the concepts behind them. That raises questions about whether they can respond to unpredictable surgical complications that were never represented in their training.

Before these autonomous robots can be used routinely in surgical settings, they must be approved by the Food and Drug Administration (FDA). That contrasts with the AI-based tools healthcare professionals currently use to summarize patient interactions and offer recommendations, which do not require such oversight. Doctors are expected to evaluate AI suggestions critically, but there are concerns that overworked clinicians may overlook errors the AI makes.

The situation resembles recent reports of the military inadequately vetting AI-generated targets, with soldiers acting on AI assessments without thorough verification. As another Washington Post article detailed:

"Soldiers poorly trained in utilizing technological aids have erroneously targeted human beings, often based solely on minimal validation such as the target's gender." This raises alarms about the potential pitfalls if humans overlook important information.

In healthcare, where the stakes are especially high, the consequences of AI errors can be severe. Mistakes during surgery or misdiagnoses can lead to devastating outcomes. The director of robotic surgery at the University of Miami underscored how critical the issue is:

"The stakes are so high, because this is a life-and-death issue." Each patient’s anatomy and the nature of diseases can vary widely, creating unique challenges in surgical procedures.

He elaborated: "As a robotic surgeon, I study CT scans and MRIs before performing surgeries with robotic arms. For robots to operate autonomously, they will have to learn to read these imaging results and master laparoscopic techniques that involve making very small incisions."

Expecting AI to be perfect is unrealistic, and the potential repercussions of a surgical mishap caused by an autonomous robot are dire. Questions also arise about accountability: who is to blame when things go wrong, and how can they be held responsible? Human doctors make mistakes too, but their extensive training and the fact that they can be held accountable offer patients some assurance.

If the rationale for developing this technology is to assist overworked medical professionals, it may be more prudent to address the systemic issues driving workforce shortages in healthcare. A severe physician shortage is looming in the U.S.: the Association of American Medical Colleges projects a deficit of 10,000 to 20,000 surgeons by 2036.

robots, surgery, AI