Spotlight on Dr. Chloe Nobuhara
By Mohammed Al Kadhim
November 12, 2025
Quantifying and Improving Surgical Performance with Artificial Intelligence
Dr. Chloe Nobuhara is a general surgery resident at Stanford Surgery, currently completing two years of research in trauma and emergency general surgery while earning a Master's in Epidemiology. She is especially interested in delirium and cognitive outcomes after surgery and in applying machine learning to analyze large datasets and advance the surgical sciences. She completed her undergraduate studies at Northeastern University in Behavioral Neuroscience and went on to earn her MD at Duke University.
Dr. Jopling presented at Grand Rounds at Christian Medical College, Ludhiana. (From left to right: Dr. Deepak Jain, Dr. Parvez David Haque, Prachi Singh, Pragati Rai, Cecil Rose Saviour, Dr. Dhruva Ghosh, Dr. Jeffrey Jopling, Dr. Chloe Nobuhara.)
Tell us about SOAR. What is it, and what are you working on?
SOAR stands for Surgical Objective Assessment and Review. It is an AI-based web application we are developing that uses computer vision to segment surgical operations into steps for efficient video review and to quantify performance. The collaboration is led by our own faculty, Dr. Tom Weiser, who leads the SAVE program for Wellcome Leap, and co-led by Dr. Jeff Jopling, an Assistant Professor of Surgery and trauma surgeon at Johns Hopkins who is also an alum of Stanford Surgery. The project includes a large team at Stanford of about 20 people, including computer science collaborators led by Serena Yeung-Levy as the principal investigator. The webapp automatically analyzes recorded videos of laparoscopic surgeries, specifically laparoscopic cholecystectomies, to segment the operation into steps, detect errors, and provide feedback on surgical performance. The feedback is particularly novel and includes domains such as tissue handling, exposure quality, and progress made throughout the case. A key focus is on refining how the AI translates raw performance metrics, such as hand velocity, into meaningful, actionable feedback that surgeons can use to improve their technique.
One novel aspect of the webapp at this time is what we are calling the binary approach, which breaks the user’s surgical skills into five categories — progress, tissue handling, exposure quality, dissection quality, and psychomotor skills — and is designed to give feedback every five seconds throughout the procedure, indicating good (green) or needs improvement (red). This is something no other AI application has done yet, and it really helps accurately identify the in-the-moment skills that need to be improved.
A novel, binary approach to AI analysis of metacompetencies while operating.
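To make the binary approach concrete, here is a minimal sketch of the idea: each five-second window gets a green or red flag per skill domain. The domain names come from the interview; the 0–1 scores, the threshold, and the function names are invented for illustration only — in the real webapp these values would come from the computer-vision models.

```python
# Hypothetical sketch of per-window binary feedback (not the actual SOAR code).
DOMAINS = ["progress", "tissue handling", "exposure quality",
           "dissection quality", "psychomotor skills"]
WINDOW_SECONDS = 5
THRESHOLD = 0.5  # invented cutoff on an assumed 0-1 model score

def binary_feedback(window_scores):
    """Map per-window, per-domain scores to green/red flags.

    window_scores: list of dicts, one per 5-second window,
    mapping each domain name to a 0-1 score.
    """
    timeline = []
    for i, scores in enumerate(window_scores):
        flags = {d: ("green" if scores[d] >= THRESHOLD else "red")
                 for d in DOMAINS}
        timeline.append({"start_s": i * WINDOW_SECONDS, "flags": flags})
    return timeline

# Two windows of made-up scores: all good, then poor tissue handling.
example = [
    {d: 0.8 for d in DOMAINS},
    {**{d: 0.9 for d in DOMAINS}, "tissue handling": 0.3},
]
timeline = binary_feedback(example)
```

The output is a timeline a reviewer could scan at a glance, jumping straight to the red intervals rather than re-watching the whole case.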
The platform has been piloted with residents at Stanford, where it was used to review and critique surgical videos, gathering feedback from users.
You recently returned from India. What was the purpose of your trip?
The trip was to pilot this new AI-powered web application outside the United States for the first time. Dr. Jopling and I traveled to India for the pilot study, and we were there from September 29 to October 3. The pilot took place at Christian Medical College in Ludhiana, India. Ludhiana is one of the largest and most beautiful cities in the state of Punjab. It was important for us to learn more about the unique challenges and needs of the surgeons there, and it was critical for us to gather real-time, boots-on-the-ground feedback on the webapp's usability and effectiveness in a new context. This will allow our team to further develop the webapp and ensure that it meets the needs of users around the world.
Dr. Dhruva Ghosh is the main collaborator in India, and the existing relationship between Drs. Ghosh, Weiser, and Jopling really helped to facilitate the pilot project. During our visit, only one laparoscopic cholecystectomy was performed with surgical residents operating, and in my opinion that low case volume makes the webapp even more crucial. Each opportunity to operate is a precious learning experience for surgeons, and an application that uses AI to home in more efficiently on what should be improved has the ability to really change the future of surgical education globally.
The laparoscopic camera they used was a donated 4K Olympus camera, and the video quality was so incredibly high definition, better than what we typically see in the United States, that it actually crashed the site for a bit. That was a surprise for us! Other surgeons at Christian Medical College had video collections of their own. We uploaded them to the webapp, and it worked just as well as it had with the videos we used in the US. Creating a video repository with AI analysis could become a living document of shared knowledge for the program. We intend to create accounts for both attending surgeons and residents to upload their videos, then browse and review relevant videos before doing similar cases. At the same time, attendings would have a chance to comment on residents’ videos and identify the areas where improvement is needed. Another benefit is that attending surgeons would have a joint platform to learn from each other, especially since each has a subspecialty they have mastered.
We interviewed a dozen attending surgeons and residents in India, and there was so much enthusiasm.
Surgical residents in Ludhiana Dr. Mayank Mittal (left) and Dr. Stephen Tom Jude (right) experiment with the web application and give critical feedback.
Does the presence of an AI monitoring webapp add any psychological pressure on junior surgeons? And what about patient privacy?
I was worried about this too, but interestingly, the residents we interviewed actually viewed it as a confidential, supportive tool for learning and communication. In other healthcare systems (especially LMICs), residents may not spend much time with their attendings, and there is more emphasis on hierarchy, so they found the webapp a reliable and safe middle ground where they can learn from the AI feedback and save the more important questions for their attendings. Regarding patient privacy, this is a matter we take very seriously. Fortunately, in India, they have already started a different study as part of the SAVE program, through which they have been recording deidentified laparoscopic videos with patient consent. Going forward, however, we are implementing a state-of-the-art deidentification machine learning algorithm in the webapp to show only the surgical site (like the gallbladder) and blur the video whenever the camera is outside of the body, for an additional layer of safety and privacy.
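The blur-when-outside-the-body step can be sketched in a few lines. Everything here is a stand-in: the real webapp uses a trained deidentification model to decide whether the camera is outside the body, whereas this sketch substitutes an invented brightness heuristic and a crude block-averaging blur purely to illustrate the pipeline shape.

```python
import numpy as np

def looks_outside_body(frame, brightness_cutoff=0.7):
    # Stand-in classifier: assumes out-of-body OR frames are bright/washed out.
    # The actual system uses a machine learning model, not this heuristic.
    return frame.mean() > brightness_cutoff

def box_blur(frame, k=8):
    # Crude blur by averaging k-by-k blocks (illustrative only).
    h, w = frame.shape
    h2, w2 = h - h % k, w - w % k
    blocks = frame[:h2, :w2].reshape(h2 // k, k, w2 // k, k).mean(axis=(1, 3))
    out = frame.copy()
    out[:h2, :w2] = np.repeat(np.repeat(blocks, k, axis=0), k, axis=1)
    return out

def deidentify(frames):
    # Pass surgical-site frames through untouched; blur everything else.
    return [box_blur(f) if looks_outside_body(f) else f for f in frames]
```

The design point is that deidentification happens before any frame reaches a viewer, so the privacy layer does not depend on reviewers handling footage correctly.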
Dr. Jeffrey Jopling and Dr. Chloe Nobuhara discuss operative details with the attending surgeon during a laparoscopic cholecystectomy.
Surgical residents and fellows perform a laparoscopic cholecystectomy with guidance from their attending.
Why was laparoscopic cholecystectomy chosen as the initial procedure for training the AI model? What steps are needed to expand from laparoscopic to open surgeries?
Laparoscopic cholecystectomy is the second most common laparoscopic procedure after appendectomy. It was chosen because it is common but not always straightforward, and it carries the feared and potentially fatal risk of a common bile duct injury. For that reason, it was considered an ideal training opportunity where clinical need meets AI.
The next steps for the project include expanding and training the AI model towards more laparoscopic, robotic, and even open procedures. We are also constantly iterating on the translation of quantitative metrics into actionable, meaningful feedback for surgeons.
We have also developed an algorithm to classify procedures by severity, for example using the Parkland grading scale, a five-tiered scale that classifies the severity of gallbladder inflammation based on visual assessment of intraoperative findings. We have tested the algorithm externally and will integrate it into the webapp. In the next stages, this enhancement will help residents find and watch videos, and learn from the feedback, on cases that are similar to the ones they are about to do. In India, residents will be sent out to remote sites, and having such a tool will be very helpful to them when they deal with complex cases by themselves.
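A tiny sketch shows how a severity grade could drive the "find similar cases" feature described above. The record fields, grade values, and tolerance parameter are all hypothetical; in the webapp the Parkland grade (1–5) would come from the trained classification model rather than being hand-entered.

```python
# Illustrative only: filter a video library by Parkland grade (1-5).
def similar_cases(library, target_grade, tolerance=1):
    """Return videos whose Parkland grade is within `tolerance` of the target."""
    return [v for v in library
            if abs(v["parkland_grade"] - target_grade) <= tolerance]

# Made-up library entries for demonstration.
library = [
    {"id": "case-001", "parkland_grade": 1},
    {"id": "case-002", "parkland_grade": 3},
    {"id": "case-003", "parkland_grade": 5},
]

# A resident expecting a severely inflamed gallbladder (grade 4)
# retrieves nearby-grade cases to review beforehand.
matches = similar_cases(library, target_grade=4)
```

Even this simple filter captures the intended workflow: grade the upcoming case, then surface prior videos (and their AI feedback) at a comparable level of difficulty.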
Now that we have developed algorithms for laparoscopic cholecystectomy, we can theoretically train models on a smaller number of videos for subsequent laparoscopic operations (e.g., appendectomy, hysterectomy, inguinal hernia repair). Expanding this model to open surgeries is more challenging, however. For instance, our cholecystectomy model was trained on about 700 laparoscopic videos compiled from several different hospitals. Obtaining a comparable dataset for open surgery would be difficult, but definitely possible if we find a good use case for that model. For example, I’ve worn a GoPro in the OR before; it’s quite uncomfortable, but the video quality is absolutely fantastic.
We watched the graduation ceremony for resident Dr. Binay Pramanik. General surgery residency is three years at CMC.
How does your work fit into the broader goals of the Wellcome Leap SAVE program under Dr. Weiser? And how do you anticipate it will impact surgical education and training?
We are very optimistic, and we believe the AI web application could play a key role in transforming surgical education. It aligns perfectly with the SAVE program's goal of democratizing safe surgery and supporting standardized assessment and global collaboration: if we at Stanford have access to this resource, I would like residents, surgeons, and most importantly patients around the world to benefit as well.
For residents, the demands of training are steadily increasing: a growing patient census, greater emphasis on non-technical skills, all under work-hour restrictions. Every minute of our time is precious, making the efficient and thoughtful use of technology crucial. New ideas like using a webapp to promote video review have the potential to revolutionize surgical education by making our training more efficient and effective. Within the next 10 years, I think every resident will be relying on video review.