We are aware of the importance of scientific contributions to the field of ophthalmology and to healthcare in general.
To support the transition from reactive to precision, patient-specific medicine, and to foster the development of solutions that support patients' health and well-being, we are releasing our public peer-reviewed contributions in medical image analysis, computer vision and machine learning.
Age-related macular degeneration (AMD) is a progressive retinal disease that causes vision loss. A more detailed characterization of its atrophic form became possible thanks to the introduction of Optical Coherence Tomography (OCT). However, manual atrophy quantification in 3D retinal scans is a tedious task and prevents clinicians from taking full advantage of the accurate retinal depiction.
In this study, we developed a fully automated algorithm that segments Retinal Pigment Epithelial and Outer Retinal Atrophy (RORA) in dry AMD on macular OCT. 62 SD-OCT scans from eyes with atrophic AMD (57 patients) were collected and split into training and test sets. The training set was used to develop a Convolutional Neural Network (CNN).
The performance of the algorithm was established by cross-validation and comparison to the test set, with ground truth annotated by two graders. Additionally, the effect of using retinal layer segmentation during training was investigated. The algorithm achieved mean Dice scores of 0.881 and 0.844, sensitivity of 0.850 and 0.915, and precision of 0.928 and 0.799 in comparison with Expert 1 and Expert 2, respectively. Using retinal layer segmentation improved the model performance. The proposed model identified RORA with performance matching human experts, and it has the potential to rapidly identify atrophy with high consistency.
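The reported Dice score, sensitivity and precision are standard overlap metrics derived from true/false positives and false negatives between the predicted and ground-truth atrophy masks. As a reminder of how they relate, here is a minimal plain-Python sketch (the function name and toy masks are illustrative, not from the paper):

```python
def overlap_metrics(pred, truth):
    """Compute Dice, sensitivity and precision for two binary masks.

    `pred` and `truth` are flat sequences of 0/1 pixel labels
    (e.g. a flattened segmentation mask).
    """
    tp = sum(p and t for p, t in zip(pred, truth))          # true positives
    fp = sum(p and not t for p, t in zip(pred, truth))      # false positives
    fn = sum(t and not p for p, t in zip(pred, truth))      # false negatives
    dice = 2 * tp / (2 * tp + fp + fn)
    sensitivity = tp / (tp + fn)
    precision = tp / (tp + fp)
    return dice, sensitivity, precision

# Toy example: 4 atrophic pixels in the ground truth, 3 detected, 1 false alarm
pred  = [1, 1, 1, 1, 0, 0]
truth = [1, 1, 1, 0, 1, 0]
print(overlap_metrics(pred, truth))  # (0.75, 0.75, 0.75)
```

A Dice score of 0.881 against Expert 1 therefore means the predicted and expert masks overlap in roughly 88% of the combined atrophic area.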
Source: Derradji, Y., Mosinska, A., Apostolopoulos, S. et al. Fully-automated atrophy segmentation in dry age-related macular degeneration in optical coherence tomography. Sci Rep 11, 21893 (2021) - https://doi.org/10.1038/s41598-021-01227-0
Authors: Yasmine Derradji, Agata Mosinska, Stefanos Apostolopoulos, Carlos Ciller, Sandro De Zanet & Irmela Mantel
Date: Nov. 2021
Link: Scientific Reports
Method design. (a) A schematic illustration of our training method with a layer segmentation prior. The input to the neural network is an OCT B-scan. The CNN output is a probability map for an atrophic region with a vertical span corresponding to the RPE layer and choroid. The loss is computed in 2D between the prediction and the RORA ground truth masked with RPE and choroid. (b) A schematic illustration of the training approach without the layer segmentation prior. As the vertical span of the ground-truth bounding box is undefined, the loss is computed in 1D between the maximum-probability projections of the prediction and the ground truth. (c) Inference workflow: each B-scan in a test volume is fed to the CNN, which outputs a RORA prediction. To obtain an en face view, the predictions are max-projected and thresholded at 0.5.
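The en face step in (c) collapses each per-pixel probability map along depth and binarizes the result. A minimal sketch of that max-projection and thresholding (data layout and function name are ours, for illustration only):

```python
def en_face_map(volume_probs, threshold=0.5):
    """Collapse per-B-scan probability maps into a binary en face view.

    `volume_probs` is a list of B-scans; each B-scan is a list of A-scan
    columns, and each column is a list of per-pixel RORA probabilities.
    Max-projecting along depth leaves one value per A-scan; thresholding
    yields the en face RORA mask (one row per B-scan).
    """
    return [[int(max(column) >= threshold) for column in bscan]
            for bscan in volume_probs]

# Toy volume: two B-scans, three A-scan columns of two pixels each
probs = [
    [[0.1, 0.7], [0.2, 0.3], [0.9, 0.4]],
    [[0.0, 0.5], [0.1, 0.2], [0.6, 0.6]],
]
print(en_face_map(probs))  # [[1, 0, 1], [1, 0, 1]]
```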
Source: Derradji, Y. et al. - Sci Rep 11, 21893 (2021)
The purpose of this peer-reviewed publication is to assess the potential of machine learning to predict low and high treatment demand in real life in patients with neovascular age-related macular degeneration (nAMD), retinal vein occlusion (RVO), and diabetic macular edema (DME) treated according to a treat-and-extend regimen (TER).
The study considered 377 eyes (340 patients) with nAMD and 333 eyes (285 patients) with RVO or DME treated with anti-vascular endothelial growth factor (anti-VEGF) agents according to a predefined TER from 2014 through 2018. Eyes were grouped by disease into low, moderate, and high treatment demands, defined by the average treatment interval (low, ≥10 weeks; high, ≤5 weeks; moderate, remaining eyes). Two models were trained to predict the probability of the long-term treatment demand of a new patient. Both models use morphological features automatically extracted from the OCT volumes at baseline and after 2 consecutive visits, as well as patient demographic information.
Results and Conclusions: Based on the first 3 visits, it was possible to predict low and high treatment demand in nAMD eyes and in RVO and DME eyes with similar accuracy. The distribution of low, high, and moderate demanders was 127, 42, and 208, respectively, for nAMD, and 61, 50, and 222, respectively, for RVO and DME. The nAMD-trained models yielded mean AUCs of 0.79 and 0.79 over the 10 cross-validation folds for low and high demand, respectively. Models for RVO and DME showed similar results, with mean AUCs of 0.76 and 0.78 for low and high demand, respectively. Even more importantly, this study revealed that it is possible to predict low demand reasonably well at the first visit, before the first injection.
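The grouping rule that defines the prediction targets can be written down directly from the interval thresholds given above (the function name and toy interval lists are ours, for illustration only):

```python
def demand_group(injection_intervals_weeks):
    """Label an eye's long-term treatment demand from its observed
    injection intervals, using the study's thresholds on the average
    interval: low >= 10 weeks, high <= 5 weeks, moderate otherwise."""
    avg = sum(injection_intervals_weeks) / len(injection_intervals_weeks)
    if avg >= 10:
        return "low"
    if avg <= 5:
        return "high"
    return "moderate"

print(demand_group([12, 10, 14]))  # average 12 weeks -> "low"
print(demand_group([4, 5, 4]))     # average ~4.3 weeks -> "high"
print(demand_group([6, 8, 8]))     # average ~7.3 weeks -> "moderate"
```

The models then try to predict this label from the first visits alone, long before the long-term average interval is observable.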
Source: Gallardo, M. et al., Ophthalmology Retina, 5(7), 2021 - https://doi.org/10.1016/j.oret.2021.05.002
Authors: Mathias Gallardo, Marion R. Munk, Thomas Kurmann, Sandro De Zanet, Agata Mosinska, Isıl Kutlutürk Karagoz, Martin S. Zinkernagel, Sebastian Wolf and Raphael Sznitman
Date: May 2021
Schematic procedure of the treat-and-extend protocol used in the University Hospital of Bern, Bern, Switzerland. IVT = intravitreal injection; nAMD = neovascular age-related macular degeneration; Ret. Vasc. = retinal vascular.
Illustration of the treat-and-extend procedure used in daily clinical practice for treating chronic diseases such as AMD, DME and RVO-related ME. At each visit, (1) an OCT scan of the retina is acquired for diagnosis and monitoring, (2) the visual acuity of the patient is tested, (3) the patient receives an anti-VEGF injection, and (4) the clinician decides whether to extend the time interval between two consecutive visits based on the outcomes observed at the current visit. Right: illustration of our algorithm for predicting treatment demand using data from an early stage of the treatment procedure.
Source: ©Mathias Gallardo, Artificial Intelligence in Medical Imaging Lab, ARTORG Center
In this peer-reviewed publication, we develop a reliable algorithm for the automated identification, localization, and volume measurement of exudative manifestations in neovascular age-related macular degeneration (nAMD), including intraretinal fluid (IRF), subretinal fluid (SRF), and pigment epithelium detachment (PED), using a deep-learning approach.
The deep-learning algorithm can simultaneously achieve a high level of performance for the identification and volume measurement of IRF, SRF, and PED in nAMD, providing accurate and repeatable predictions. Including layer segmentation during training and adding a squeeze-and-excitation block to the network architecture were shown to boost performance.
Source: Mantel et al., TVST 2021, 10(4), 17 - https://doi.org/10.1167/tvst.10.4.17
Authors: Irmela Mantel; Agata Mosinska; Ciara Bergin; Maria Sole Polito; Jacopo Guidotti; Stefanos Apostolopoulos; Carlos Ciller; Sandro De Zanet
Date: April 2021
In this report, we compare OCT drusen volume determined by two different OCT devices (Heidelberg Spectralis OCT and Zeiss PlexElite SS-OCT) using manufacturers’ software and a customized, third party segmentation software. We further compare the automatically assessed drusen volume obtained by these machines with that after manual correction of the automated segmentation.
Comparability of drusen volumes among different OCT devices and algorithms is of importance, as drusen changes are a hallmark of AMD progression. These changes are tracked in the daily clinical workflow and used as surrogate outcome measurements in multicenter trials.
Source: Beck et al., J. Clin. Med. 2020, 9 (8), 2657 - https://doi.org/10.3390/jcm9082657
Authors: Marco Beck, Devika S. Joshi, Lieselotte Berger, Gerd Klose, Sandro De Zanet, Agata Mosinska, Stefanos Apostolopoulos, Andreas Ebneter, Martin S. Zinkernagel, Sebastian Wolf and Marion R. Munk
Date: August 2020
Link: J. Clin. Med. 2020
In this work we evaluated a postprocessing, customized automatic retinal OCT B-scan enhancement software for noise reduction, contrast enhancement and improved depth quality applicable to Heidelberg Engineering Spectralis OCT devices. A trained deep neural network was used to process images from an OCT dataset with ground truth biomarker gradings.
Performance was assessed by two expert graders, who rated image quality per B-scan with a clear preference for enhanced over original images. Objective measures such as SNR and noise estimation showed a significant improvement in quality. Presence grading of seven biomarkers (IRF, SRF, ERM, drusen, RPD, GA and iRORA) resulted in similar intergrader agreement, with improved agreement for IRF and RPD and remaining disagreement for high-variance biomarkers such as GA and iRORA.
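One common way to quantify B-scan SNR is the mean intensity of a signal region over the standard deviation of a background (noise-only) region, expressed in dB; the paper's exact estimator may differ. A minimal sketch with illustrative values:

```python
import math
import statistics

def snr_db(signal_pixels, background_pixels):
    """Estimate a B-scan's signal-to-noise ratio in dB as the mean
    intensity of a signal region divided by the standard deviation
    of a background (noise-only) region."""
    noise_sd = statistics.pstdev(background_pixels)
    return 20 * math.log10(statistics.mean(signal_pixels) / noise_sd)

# Toy example: bright retinal tissue vs. dark vitreous background
print(round(snr_db([100, 100, 100, 100], [8, 12, 10, 10]), 1))  # 37.0 (dB)
```

Denoising lowers the background standard deviation, which directly raises this ratio; that is what the objective quality improvement above reflects.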
Source: Apostolopoulos et al., Nature Sci. Reports, 2020
Authors: Stefanos Apostolopoulos, Jazmín Salas, José L. P. Ordóñez, Shern Shiou Tan, Carlos Ciller, Andreas Ebneter, Martin Zinkernagel, Raphael Sznitman, Sebastian Wolf, Sandro De Zanet & Marion R. Munk
Date: May 2020
Link: Nature Sci-rep Article
Retinal biological markers play a critical role in the management of chronic eye conditions and in the development of new therapeutics. While many imaging technologies used today can visualise these markers, OCT is often the tool of choice due to its ability to image retinal structures in 3D.
With widespread use in clinical routine, the quantity of scans acquired worldwide is surpassing the capacity of retinal specialists to inspect them in meaningful ways. Instead, automated analysis of scans using machine learning algorithms provides a cost-effective and reliable alternative to assist ophthalmologists in clinical routine and research.
In this paper, the authors present a machine learning method capable of consistently identifying a wide range of common retinal biomarkers from OCT scans. This approach avoids the need for costly segmentation annotations and allows scans to be characterised by biomarker distributions. These can then be used to classify scans based on their underlying pathology in a device-independent way.
Source: Kurmann et al., Nature Sci. Reports, 2019
Authors: Thomas Kurmann, Siqing Yu, Pablo Márquez-Neila, Andreas Ebneter, Martin Zinkernagel, Marion R. Munk, Sebastian Wolf & Raphael Sznitman
Date: September 2019
Link: Nature Sci-Rep article
Data validation is the process of ensuring that the input to a data processing pipeline is correct and useful. It is a critical part of software systems running in production. Image processing systems are no different, whereby problems with data acquisition, file corruption or data transmission, may lead to a wide range of unexpected issues in the acquired images.
Until now, most image processing systems of this type have involved a human in the loop who could detect these errors before further processing. But with the advent of powerful deep learning methods, tools for medical image processing are becoming increasingly autonomous and can go from data acquisition to final medical diagnosis without any human interaction. However, deep networks are known for their inability to detect corruption or errors in their input data.
This paper presents a deep validation method that learns to measure how correct a given image looks. We experimentally assessed the validity of our method and compared it with different baselines, reaching an improvement of more than 10 percentage points on all considered datasets.
Authors: Marquez-Neila, P. & Sznitman, R.
Date: Oct. 2019, MICCAI, Shenzhen, China
Link: Coming soon!
Image registration, the process of aligning two or more images into the same global spatial reference, is a crucial task in fields like computer vision, pattern recognition and medical image analysis.
This article presents a novel CNN-based feature point detector - GLAMpoints - learned in a semi-supervised manner and trained using reinforcement learning strategies. As a result, we avoid the limitations of point matching and transformation estimation being non-differentiable.
Our detector extracts repeatable, stable interest points with a dense coverage, specifically designed to maximise the correct matching in a specific domain, in contrast to conventional techniques that optimise indirect metrics.
To illustrate the performance of our approach, we apply our method to challenging 2D retinal slit-lamp images, for which classical detectors yield unsatisfactory results due to low image quality and an insufficient number of low-level features. We show that GLAMpoints significantly outperforms classical detectors as well as state-of-the-art CNN-based methods in matching ability and registration quality.
Authors: Truong, P., Mosinska, A., Ciller, C., Apostolopoulos, S., De Zanet, S.I.
Date: Oct. 2019, ICCV 2019 - Seoul, Korea
Optical Coherence Tomography (OCT) is the primary imaging modality for detecting pathological biomarkers associated with retinal diseases such as Age-Related Macular Degeneration. In practice, clinical diagnosis and treatment strategies are closely linked to biomarkers visible in OCT volumes, and the ability to identify these plays an important role in the development of ophthalmic pharmaceutical products.
In this article we present a method that automatically predicts the presence of biomarkers in OCT cross-sections by incorporating information from the entire volume. We do so by adding a bidirectional LSTM to fuse the outputs of a Convolutional Neural Network that predicts individual biomarkers. As a consequence, we avoid the need for pixel-wise annotations to train our method while still providing fine-grained biomarker information. We furthermore show that our approach imposes coherence between biomarker predictions across volume slices and that our predictions are superior to those of several existing approaches for the same task.
Authors: Kurmann, T., Márquez-Neila, P., Yu, S., Munk, M., Wolf, S. & Sznitman, R.
Date: Oct. 2019, MICCAI, Shenzhen, China
Multi-label classification (MLC) problems are becoming increasingly popular in the context of medical imaging. This has in part been driven by the fact that acquiring annotations for MLC is far less burdensome than for semantic segmentation and yet provides more expressiveness than multi-class classification.
However, to train MLCs, most methods have resorted to similar objective functions as with traditional multi-class classification settings. We show in this work that such approaches are not optimal and instead propose a novel deep MLC classification method in affine subspace. At its core, the method attempts to pull features of class-labels towards different affine subspaces while maximising the distance between them.
In this paper we evaluate the method on two MLC medical imaging datasets and show a large performance increase compared to previous multi-label frameworks.
Authors: Kurmann, T., Márquez-Neila, P., Wolf, S. & Sznitman, R.
Date: Oct. 2019, MICCAI, Shenzhen, China
The automatic segmentation of fluid deposits in OCT imaging enables clinically relevant quantification and monitoring of eye disorders over time. Eyes with late-stage diseases are particularly challenging to segment, as their shape is often highly warped and presents a high variability between different devices, specifications and scanning times.
In this context, the RetinAI team proposed a novel fully-Convolutional Neural Network (CNN) architecture which combines dilated residual blocks in an asymmetric U-shape configuration, and can simultaneously segment and classify cysts in pathological eyes.
This article presents a validation of our approach on the RETOUCH Challenge dataset from the Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2017 Conference.
Date: Sept. 2017 - MICCAI
The automatic segmentation of retinal layer structures provides clinically-relevant quantification and monitoring of eye disorders over time in OCT. Eyes with late-stage diseases are particularly challenging to segment, as their shape is highly warped due to the presence of pathological biomarkers.
RetinAI has proposed an algorithm which combines dilated residual blocks in an asymmetric U-shape network configuration and can segment multiple layers of highly pathological eyes in one shot. Our so-called BRU-Net architecture enables accurate segmentation of retinal layers by modeling the optimization as a supervised regression problem. Using lower computational resources, our strategy achieves superior segmentation performance compared to both state-of-the-art deep learning architectures and other OCT segmentation methods.
Date: Feb. 2017 - MICCAI
Visual inspection of Optical Coherence Tomography (OCT) volumes remains the main method for AMD identification, yet doing so is time-consuming as each cross-section within the volume must be inspected individually by the clinician. In much the same way, acquiring ground-truth information for each cross-section is expensive and time-consuming.
In this paper, we present a new strategy for automatic pathology identification in OCT C-scans. We introduce a novel Convolutional Neural Network (CNN) architecture, named RetiNet, that directly estimates the state of a C-scan solely from the image data, without any additional information.
Date: Oct. 2016
List of scientific publications
Derradji, Y., Mosinska, A., Apostolopoulos, S. et al. Fully-automated atrophy segmentation in dry age-related macular degeneration in optical coherence tomography. Sci Rep 11, 21893 (2021)
Mantel, I., Mosinska, A., Bergin, C., Polito, M.S., Guidotti, J., Apostolopoulos, S., Ciller, C., De Zanet, S. Automated Quantification of Pathological Fluids in Neovascular Age-Related Macular Degeneration, and Its Repeatability Using Deep Learning, TVST, April 2021, 10(4), 17
Beck, M., Joshi, D.S., Berger, L., Klose, G., De Zanet, S., Mosinska, A., Apostolopoulos, S., Ebneter, A., Zinkernagel, MS., Wolf, S. and Munk, M. R., Comparison of Drusen Volume Assessed by Two Different OCT Devices, J. Clin. Med. 2020, 9 (8), 2657
Apostolopoulos, S., Salas, J., Ordóñez, J.L.P., Tan, S.S., Ciller, C., Ebneter, A., Zinkernagel, M., Sznitman, R., Wolf, S., De Zanet, S. & Munk, M.R., Automatically Enhanced OCT Scans of the Retina: A Proof of Concept Study, Nature Scientific Reports, May 2020
Kurmann, T., Yu, S., Márquez-Neila, P., Ebneter, A., Zinkernagel, M., Munk, M.R., Wolf, S. & Sznitman, R., Expert-level Automated Biomarker Identification in Optical Coherence Tomography Scans, Nature Scientific Reports, September 2019
Truong, P., Mosinska, A., Ciller, C., Apostolopoulos, S., De Zanet, S.I. GLAMpoints: Greedily Learned Accurate Match points, ICCV, Seoul - Korea, November 2019
Marquez-Neila, P. & Sznitman, R., Image data validation for medical systems, MICCAI 2019 Shenzhen, China, October 2019
Kurmann, T., Márquez-Neila, P., Yu, S., Munk, M., Wolf, S. & Sznitman, R., Fused Detection of Retinal Biomarkers in OCT Volumes, MICCAI 2019, Shenzhen, China, October 2019
Kurmann, T., Márquez-Neila, P., Wolf, S. & Sznitman, R., Deep Multi Label Classification in Affine Subspaces, MICCAI 2019, Shenzhen, China, October 2019
Bogunović, H., Venhuizen, F., Klimscha, S., Apostolopoulos, S. et al. RETOUCH - The Retinal OCT Fluid Detection and Segmentation Benchmark and Challenge, IEEE Transactions on Medical Imaging, February 2019
Giannakaki-Zimmermann, H., Huf, W., Schaal, K.B., Schürch, K., Dysli, C., Dysli, M., Zenger, A., Ceklic, L., Ciller, C., Apostolopoulos, S., De Zanet, S., Sznitman, R., Ebneter, A., Zinkernagel, MS., Wolf, S., Munk, M., Comparison of choroidal thickness measurements using spectral domain optical coherence tomography in six different settings and with customised automated segmentation, Translational Vision Science & Technology, May 2019
Ciller, C., De Zanet, S. et al. Multi-channel MRI segmentation of eye structures and tumors using patient specific eye features, PlosOne, 2017
Ciller, C., De Zanet, S., Apostolopoulos, S. et al. Automatic Segmentation of Retinoblastoma in Fundus Image Photography using Convolutional Neural Networks, ARVO 2017, Baltimore
Apostolopoulos, S., De Zanet, S., Ciller, C. et al. Pathological OCT Retinal Layer Segmentation Using Branch Residual U-style Networks, MICCAI Quebec & Arxiv, 2017
Apostolopoulos, S. et al. Efficient OCT volume reconstruction from slit lamp microscopes, IEEE TBME, 2017
Apostolopoulos, S., Ciller, C., De Zanet, S. et al. RetiNet: Automatic AMD identification in OCT volumetric data, Arxiv, 2016
De Zanet, S. et al. Retinal slit lamp video mosaicking, International Journal of Computer Assisted Radiology and Surgery, 2016
De Zanet, S. , Ciller, C. et al. Landmark Detection for Fusion of Fundus and MRI Toward a Patient Specific Multi-modal Eye Model, IEEE TBME, 2015
Ciller, C., De Zanet, S. et al. Automatic Segmentation of the eye in 3D MRI: A novel statistical shape model for treatment planning of retinoblastoma, Int. J. Radiation Oncology Biology Physics (Red Journal), 2015