CVS Annotation

Register for the SAGES CVS Challenge Annotation Team here.

How to join the Annotation Team:

Thank you for your interest in participating in the SAGES CVS Challenge as part of the annotation team! To ensure the clinical robustness, consistency, and reproducibility of the data on which the AI algorithms will be trained and tested, annotations must meet high quality standards.

To become part of the SAGES CVS Challenge Annotation Team, complete the following steps:

1. Schedule an Onboarding Session with one of our team members by emailing info@cvschallenge.org. Please provide three available time slots compatible with the EST time zone. You will learn more about the annotation of surgical video data in general and the CVS in particular.

2. Sign up for MOSaiC, the tailor-made annotation platform for the CVS Challenge, and review the Annotation School Protocol & Video. Play around with the software and familiarize yourself with annotation for the CVS Challenge during the Trial Annotation.

3. Complete the CVS Annotation School on MOSaiC: you will be asked to annotate select laparoscopic cholecystectomy videos, and your annotation performance will be evaluated against clinical experts in the field in a train-to-proficiency fashion. Once your annotations converge with the experts', you will gain access to the complete dataset for annotation.
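What counts as "converging" with the expert annotations is determined by the MOSaiC platform. As a rough, hypothetical illustration of the underlying idea, the sketch below scores a trainee's frame-level labels against an expert reference with Cohen's kappa; the metric, the 0.8 threshold, and the binary label scheme are assumptions chosen for this example, not the challenge's actual proficiency criteria.

```python
# Illustrative sketch only: MOSaiC's real evaluation method is not public.
# One common way to quantify agreement between a trainee's frame-level
# labels and an expert reference is Cohen's kappa.

from sklearn.metrics import cohen_kappa_score

def converged_with_expert(trainee_labels, expert_labels, threshold=0.8):
    """Return True if frame-level agreement (Cohen's kappa) meets a
    hypothetical proficiency threshold (0.8 is an assumption, not the
    challenge's actual cut-off)."""
    kappa = cohen_kappa_score(trainee_labels, expert_labels)
    return kappa >= threshold

# Example: binary per-frame labels for whether a CVS criterion is achieved
trainee = [1, 0, 0, 1, 1, 0, 1]
expert  = [1, 0, 1, 1, 1, 0, 1]
print(converged_with_expert(trainee, expert))  # False here (kappa ~ 0.70)
```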

Contributors who deliver a high quality and quantity of annotations will be recognized in the final manuscript.

Please feel free to ask any questions by emailing info@cvschallenge.org.

Thank you,

SAGES CVS Challenge Organizing Team

Introduction to Annotation of Surgical Videos for AI and ML

Presented by Dr. Jennifer Eckhoff at the SAGES 2022 AI Session in Denver, USA.


SAGES consensus recommendations on an annotation framework for surgical video

Ozanan R Meireles, Guy Rosman, Maria S Altieri, Lawrence Carin, Gregory Hager, Amin Madani, Nicolas Padoy, Carla M Pugh, Patricia Sylla, Thomas M Ward, Daniel A Hashimoto, SAGES Video Annotation for AI Working Groups. PMID: 34231065; DOI: 10.1007/s00464-021-08578-9

Abstract

Background: The growing interest in analysis of surgical video through machine learning has led to increased research efforts; however, common methods of annotating video data are lacking. There is a need to establish recommendations on the annotation of surgical video data to enable assessment of algorithms and multi-institutional collaboration.

Methods: Four working groups were formed from a pool of participants that included clinicians, engineers, and data scientists. The working groups were focused on four themes: (1) temporal models, (2) actions and tasks, (3) tissue characteristics and general anatomy, and (4) software and data structure. A modified Delphi process was utilized to create a consensus survey based on suggested recommendations from each of the working groups.

Results: After three Delphi rounds, consensus was reached on recommendations for annotation within each of these domains. A hierarchy for annotation of temporal events in surgery was established.

Conclusions: While additional work remains to achieve accepted standards for video annotation in surgery, the consensus recommendations on a general framework for annotation presented here lay the foundation for standardization. This type of framework is critical to enabling diverse datasets, performance benchmarks, and collaboration.

Keywords: Annotation; Artificial intelligence; Computer vision; Consensus; Minimally invasive surgery; Surgical video.

© 2021. The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
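The "hierarchy for annotation of temporal events" mentioned in the Results above refers to nested levels of temporal granularity within a procedure. The sketch below is a hypothetical illustration of such a nested structure; the level names (phase, step, action) and fields are assumptions chosen for the example, not the paper's consensus schema.

```python
# Hypothetical sketch of a hierarchical temporal annotation record.
# Level names and fields are illustrative only; see the paper for the
# consensus framework itself.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Interval:
    start_s: float  # start time in seconds from video start
    end_s: float    # end time in seconds

@dataclass
class ActionAnnotation:
    label: str      # e.g. "retract gallbladder"
    span: Interval

@dataclass
class StepAnnotation:
    label: str
    span: Interval
    actions: List[ActionAnnotation] = field(default_factory=list)

@dataclass
class PhaseAnnotation:
    label: str      # e.g. "dissection"
    span: Interval
    steps: List[StepAnnotation] = field(default_factory=list)

# A phase contains steps, which contain finer-grained actions,
# each carrying its own time interval within the video.
phase = PhaseAnnotation(
    label="dissection",
    span=Interval(120.0, 900.0),
    steps=[StepAnnotation(
        label="expose hepatocystic triangle",
        span=Interval(120.0, 480.0),
        actions=[ActionAnnotation("retract gallbladder", Interval(120.0, 200.0))],
    )],
)
```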