AI Robot Learns Suturing – Researchers Use Surgery Videos


Suturing is one of the toughest parts of surgery, and medics have to practice it over and over. Now AI robots can learn it too: a collaboration between UC Berkeley and Intel has shown how artificial intelligence can pick up the skill.

Artificial Intelligence

Artificial intelligence is technology that lets machines perform tasks that normally require human thinking, and it is widely used these days. Although machines still cannot truly think like humans, they can carry out many tasks using their “intelligence”.


The UC Berkeley team, led by Dr. Ajay Tanwani, has developed a deep-learning system dubbed Motion2Vec. It can learn tasks simply by being shown them.

The team designed the system so that the robot watches videos of real surgeries performed by doctors. It then breaks down the medic's movements and observes the suturing procedure: inserting the needle, extracting it, and handing it off. After watching the videos, the robot performs the same steps with great accuracy.
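The breakdown of a demonstration into sub-motions can be pictured with a toy sketch. This is purely illustrative, not the Motion2Vec pipeline: assume some model has already assigned a label (insertion, extraction, hand-off) to each video frame, and we only collapse consecutive identical labels into segments.

```python
# Illustrative sketch: collapsing hypothetical per-frame labels into
# suturing sub-motion segments. The labels are invented stand-ins.

def segment_motions(frame_labels):
    """Group runs of identical frame labels into (label, start, end) segments."""
    segments = []
    start = 0
    for i in range(1, len(frame_labels) + 1):
        # close the current segment at a label change or at the end
        if i == len(frame_labels) or frame_labels[i] != frame_labels[start]:
            segments.append((frame_labels[start], start, i - 1))
            start = i
    return segments

labels = ["insertion"] * 3 + ["extraction"] * 2 + ["hand-off"] * 2
print(segment_motions(labels))
# → [('insertion', 0, 2), ('extraction', 3, 4), ('hand-off', 5, 6)]
```

The real system must, of course, also produce those per-frame labels from raw pixels, which is where the learning happens.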


Dr. Ken Goldberg, who runs the UC Berkeley lab and advised Tanwani's team on this study, notes that around 500 hours of new video are uploaded to YouTube every minute. He calls it an incredible repository of data; the videos are more than enough to teach a robot suturing.


He added that almost anyone can watch these videos and take something away from them. Robots, however, cannot: to them, a video is just a mass of pixels.

The goal, then, is to make sense of those pixels by having the robot watch and analyze the video and break it down into meaningful sequences.

To do this, the team trained its AI with a Siamese network. These networks are used to learn distance functions from unsupervised or weakly supervised data, Tanwani explained.
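The core Siamese idea can be sketched in a few lines. This is a minimal illustration and not Motion2Vec's actual architecture: two inputs pass through the same encoder weights, and a contrastive loss pulls similar pairs together in embedding space while pushing dissimilar pairs apart. The encoder, inputs, and margin below are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 2))  # one shared encoder: 4-dim input -> 2-dim embedding

def embed(x):
    # both branches of the Siamese pair use the SAME weights W
    return np.tanh(x @ W)

def contrastive_loss(x1, x2, similar, margin=1.0):
    d = np.linalg.norm(embed(x1) - embed(x2))
    # similar pairs are penalized for being far apart;
    # dissimilar pairs are penalized only if closer than the margin
    return d ** 2 if similar else max(0.0, margin - d) ** 2

a = np.array([1.0, 0.0, 0.0, 0.0])
b = np.array([0.9, 0.1, 0.0, 0.0])   # nearly the same motion frame as a
c = np.array([0.0, 0.0, 1.0, 1.0])   # a different motion

print(contrastive_loss(a, b, similar=True), contrastive_loss(a, c, similar=False))
```

Training would adjust `W` to minimize this loss over many labeled pairs; here the weights are random, so only the loss structure is shown.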

He added that the main idea is to take a huge amount of high-dimensional data, in the form of the videos, and compress it into a low-dimensional manifold.

Siamese networks rank the similarity between their inputs and are commonly used for image recognition.

In this case, the network compares how the robot's arms move with how the human doctors perform the same motions. The goal is to bring the robot's performance as close to human level as possible.
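A crude way to picture "how close is the robot's motion to the human's" is an average pointwise distance between the two paths. The function and the 2-D paths below are hypothetical stand-ins, not real surgical trajectories or the paper's metric.

```python
import numpy as np

# Hypothetical similarity check: score how closely a robot arm's path
# tracks a human demonstration by averaging pointwise distance.

def mean_tracking_error(human, robot):
    human, robot = np.asarray(human), np.asarray(robot)
    # Euclidean distance between corresponding points, averaged over the path
    return float(np.mean(np.linalg.norm(human - robot, axis=1)))

human_path = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.5)]
robot_path = [(0.0, 0.1), (1.0, 0.1), (2.0, 0.4)]
print(mean_tracking_error(human_path, robot_path))
```

A lower score means the robot's path stays closer to the demonstration; a learned distance function, as in the Siamese setup, plays an analogous role in a learned embedding space.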

The team used 78 videos from the JIGSAWS dataset to teach their AI to execute the task. The system segmented the motions with 85.5% accuracy and achieved an average targeting error of 0.94 centimeters.
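The two reported figures measure different things, which a back-of-envelope sketch makes concrete. The arrays below are invented stand-ins, not the JIGSAWS evaluation: segmentation accuracy is the fraction of frames labeled with the correct sub-motion, and targeting error is the mean distance between predicted and true needle target positions.

```python
import numpy as np

# Invented per-frame labels: 0 = insertion, 1 = extraction, 2 = hand-off
true_labels = np.array([0, 0, 1, 1, 2, 2, 2, 1])
pred_labels = np.array([0, 0, 1, 2, 2, 2, 2, 1])
accuracy = np.mean(true_labels == pred_labels)  # fraction of frames correct

# Invented target positions (e.g. in cm)
true_pts = np.array([[0.0, 0.0], [1.0, 1.0]])
pred_pts = np.array([[0.5, 0.0], [1.0, 1.5]])
target_err = np.mean(np.linalg.norm(true_pts - pred_pts, axis=1))  # mean distance

print(accuracy, target_err)  # 0.875 0.5
```

On this toy data the segmenter gets 7 of 8 frames right (87.5%) and misses each target by 0.5 cm on average; the paper's 85.5% and 0.94 cm are the analogous numbers on real data.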


It will take years before the AI can take part in an actual operation. However, Tanwani thinks such robots will work much like driver-assist features on semi-autonomous cars, and that they will be ready for wide use in suturing relatively soon.

They would not replace human surgeons; rather, they would take over low-level, repetitive tasks.

Nor is suturing the limit: given the right data, they could perform many other tasks. Experts expect AI robots to help with debridement, for example, although it will take years before they become functional.

Goldberg explained that they are not at that point yet, but are stepping toward it, with a surgeon supervising the system: directing where a series of sutures is needed and conveying how many.

This way, Goldberg continued, the robot could potentially carry out the process while the surgeons take a break, leaving them rested and focused for the more complex parts of the surgery ahead.

Tanwani believes the technology will let surgeons spend their time on more complex tasks while the robots handle the routine ones.