
The end of a residency or fellowship typically marks the start of a career in medicine.

For freshly minted surgeons, it also typically marks the end of formal assessment.

“Once you get your certificate, you’ll likely never again be reassessed in the operating room,” said Dr. Thomas Lendvay, professor of urology at the UW School of Medicine and co-director of the Seattle Children’s Hospital Robotic Surgery Center. “When you are being credentialed, you ask your peers — who are often your friends — to review you on a scale of 1 to 5. And you usually get all 5s. That’s it! There’s no way a patient can get information about a surgeon’s abilities other than asking about the number of cases they’ve done or their education. Neither of which are associated with quality.”

And therein lies the problem, Lendvay said. But it’s one he aims to tackle through C-SATS, or Crowd-Sourced Assessment of Technical Skills. This cloud-based performance management system uses anonymous crowds of nonmedically trained people to assess surgeons’ techniques and skills. Lendvay was recently named UW Medicine’s 2018 Inventor of the Year for its development.

C-SATS was born out of Lendvay’s experience as a urology resident. During his training, he noticed that the surgeons around him had varying levels of skill in the operating room, which led to a range of patient outcomes, both good and bad. Yet there was no evaluation process in place to suggest improvements for these highly trained physicians.

Patients, families and surgeons are often left heartbroken when complications occur during surgeries. And those complications cost the American healthcare system billions of dollars each year.

Yet surgeons often have little to no idea what they could improve to avoid future complications. When skills assessments are required, experts may come in to evaluate surgical technique, but that process is costly, can take weeks and is often marked by disagreement among the experts themselves.

Lendvay started working on this problem in 2011 with graduate students Tim Kowalewski and Lee White from Blake Hannaford’s UW Electrical Engineering Lab; Bryan Comstock from the UW Department of Biostatistics; and Derek Streat, a startup expert. They wanted surgeons to have regular, scientifically grounded feedback on how they could improve their skills.

“We hypothesized that surgical skill assessment is basically pattern recognition,” Lendvay said. “After all, medicine is science. So we wondered: Could crowds of nonmedical people assess surgical skills?”

The answer was a resounding “yes,” the team found. In their studies, surgical performances were recorded via video and uploaded to the secure C-SATS site. Then the videos were combined with a survey, and the whole package was sent to experts and other pre-qualified reviewers for assessment.

Most of the reviewers were laypeople from Amazon’s Mechanical Turk, also called MTurk, which is a crowdsourcing marketplace that allows businesses and individuals to coordinate human intelligence to perform complicated tasks. Reviewers assessed the surgical performance in the videos across five domains: efficiency, depth perception, dexterity, control of instrumentation and tissue handling. (These five domains correlate with patient outcomes.) Finally, the reviewers handed over quantitative scores and qualitative feedback to C-SATS, which relayed the information to the surgeons.
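
To make that workflow concrete, here is a minimal, hypothetical Python sketch of how per-domain crowd ratings could be rolled up into the kind of quantitative score the reviewers handed over. The 1-to-5 scale, the function and variable names, and the equal-weight averaging are illustrative assumptions, not the actual C-SATS scoring method.

    # Hypothetical sketch only: the 1-5 scale and equal-weight averaging are
    # assumptions for illustration, not the published C-SATS scoring method.
    from statistics import mean

    # The five domains the reviewers rated, as named in the article.
    DOMAINS = ["efficiency", "depth perception", "dexterity",
               "control of instrumentation", "tissue handling"]

    def aggregate(reviews):
        """Average each domain across reviewers, then compute an overall score."""
        scores = {d: round(mean(r[d] for r in reviews), 2) for d in DOMAINS}
        scores["overall"] = round(mean(scores.values()), 2)
        return scores

    # Example: three crowd reviewers scoring one surgical video.
    reviews = [
        {"efficiency": 4, "depth perception": 3, "dexterity": 4,
         "control of instrumentation": 5, "tissue handling": 4},
        {"efficiency": 3, "depth perception": 4, "dexterity": 4,
         "control of instrumentation": 4, "tissue handling": 3},
        {"efficiency": 4, "depth perception": 4, "dexterity": 5,
         "control of instrumentation": 4, "tissue handling": 4},
    ]
    print(aggregate(reviews))

In the real system, such quantitative scores traveled back to surgeons alongside the reviewers’ qualitative comments.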

The results stunned many medical experts, who had originally bristled at the idea of average folks assessing surgical skills: According to studies published in the Journal of Surgical Research, JAMA Surgery, the Journal of Urology and the Journal of Endourology, laypeople agreed with one another remarkably well when it came to skills assessments. Plus, 90 percent of their evaluations included contextual comments to explain ratings, versus 20 percent of evaluations by surgical experts.

Dr. Lendvay (with his family) at the Inventor of the Year event on October 16, 2018.

This method of evaluation was also rapid. “We could get 1,500 assessments in just 10 hours,” said Lendvay. “On the flip side, it would take months to get three to six expert reviews.”

Today, 1 in 100 surgeons has been assessed by this evaluation tool, which was recently acquired by Johnson & Johnson. Lendvay’s data shows that if a surgeon gets feedback more than 10 times using C-SATS, their patient outcomes improve significantly.

For now, Lendvay hopes that his technology will be used to help physicians learn about how they can improve their own skills. He has used the technology himself with good results, he says: “Through C-SATS, I have learned ways of avoiding future complications and my performance has improved.”

Eventually, Lendvay believes most hospitals will use C-SATS to gather information about physician skills and outcomes. Someday, he says, patients may be able to see the scores as well:

“Transparency is where healthcare is headed. Plus, the point of this is to improve surgeons’ skills. A rising tide lifts all boats.”

 

Guest Author: Jenni Gritters
