Research & Studies

Our research

We are aware of the importance of scientific contributions to the field of ophthalmology and to healthcare in general.

In order to support the transition from reactive to precision and patient-specific medicine, and to foster the development of solutions that promote patients' health and well-being, we are releasing our public peer-reviewed contributions in medical image analysis, computer vision and machine learning.

For a comprehensive overview of our work, click the button below to be redirected to our publication list.

RetinAI's publications list
British Journal of Ophthalmology
Artificial intelligence-based fluid quantification and associated visual outcomes in a real-world, multicentre neovascular age-related macular degeneration national database

Aim: To explore associations between artificial intelligence (AI)-based fluid compartment quantifications and 12-month visual outcomes in OCT images from a real-world, multicentre, national cohort of naïve, treated neovascular age-related macular degeneration (nAMD) eyes.

Methods: Demographics, visual acuity (VA), drug and number of injections data were collected using a validated web-based tool. Fluid compartment quantifications including intraretinal fluid (IRF), subretinal fluid (SRF) and pigment epithelial detachment (PED) in the fovea (1 mm), parafovea (3 mm) and perifovea (6 mm) were measured in nanoliters (nL) using a validated AI tool.
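
As a rough, hedged illustration of how such region-wise volumes can be derived (this is not the validated AI tool; the voxel spacings, ring radii and the fluid_volumes_nl helper below are assumptions), a per-voxel fluid segmentation can be aggregated into nanoliters for the foveal (1 mm), parafoveal (1-3 mm) and perifoveal (3-6 mm) regions:

import numpy as np

def fluid_volumes_nl(fluid_mask, spacing_mm, fovea):
    """fluid_mask: (n_bscans, depth, n_ascans) binary segmentation of one fluid type.
    spacing_mm: (bscan_spacing, axial_spacing, ascan_spacing) voxel size in mm.
    fovea: (bscan_index, ascan_index) of the foveal centre."""
    dy, dz, dx = spacing_mm
    voxel_nl = dy * dz * dx * 1000.0                     # 1 mm^3 = 1 microliter = 1000 nL
    counts = fluid_mask.sum(axis=1)                      # fluid voxels per A-scan (en face map)
    rows, cols = np.indices(counts.shape)
    radius = np.hypot((rows - fovea[0]) * dy, (cols - fovea[1]) * dx)  # en face distance in mm
    rings = {"fovea_1mm": (0.0, 0.5), "parafovea_3mm": (0.5, 1.5), "perifovea_6mm": (1.5, 3.0)}
    return {name: float(counts[(radius >= r0) & (radius < r1)].sum()) * voxel_nl
            for name, (r0, r1) in rings.items()}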

Results: 452 naïve nAMD eyes presented a mean VA gain of +5.5 letters with a median of 7 injections over 12 months. Baseline foveal IRF was associated with poorer baseline (44.7 vs 63.4 letters) and final VA (52.1 vs 69.1), SRF with better final VA (67.1 vs 59.0) and greater VA gains (+7.1 vs +1.9), and PED with poorer baseline (48.8 vs 57.3) and final VA (55.1 vs 64.1). Predicted VA gains were greater for foveal SRF (+6.2 vs +0.6), parafoveal SRF (+6.9 vs +1.3), perifoveal SRF (+6.2 vs −0.1) and parafoveal IRF (+7.4 vs +3.6, all p<0.05). Fluid dynamics analysis revealed the greatest relative volume reduction for foveal SRF (−16.4 nL, −86.8%), followed by IRF (−17.2 nL, −84.7%) and PED (−19.1 nL, −28.6%). Subgroup analysis showed greater reductions in eyes with a higher number of injections.

Conclusion: This real-world study describes an AI-based analysis of fluid dynamics and defines baseline OCT-based patient profiles associated with 12-month visual outcomes in a large nationwide cohort of treated naïve nAMD eyes.

Authors: Ruben Martin-Pinardel, Jordi Izquierdo-Serra, Sandro De Zanet, Alba Parrado-Carrillo, Gonzaga Garay-Aramburu, Martin Puzo, Carolina Arruabarrena, Laura Sararols, Maximino Abraldes, Laura Broc, Jose Juan Escobar-Barranco, Marta Figueroa, Miguel Angel Zapata, José M Ruiz-Moreno, Aina Moll-Udina, Carolina Bernal-Morales, Socorro Alforja, Marc Figueras-Roca, Laia Gómez-Baldó, Carlos Ciller, Stefanos Apostolopoulos, Agata Mosinska, Ricardo P Casaroli Marano, Javier Zarranz-Ventura

Date: January 2023

Link: Article PDF

Case examples of fluid analysis-based patient profiles as predictors of 12-month visual acuity outcomes. Left column: raw baseline OCT B-scan centred on the fovea (month 0, M0). Mid-left column: same baseline OCT image with artificial intelligence (AI)-based fluid compartment analysis (month 0, M0); intraretinal fluid (IRF) in red, subretinal fluid (SRF) in yellow, pigment epithelial detachment (PED) in purple. Mid-right column: OCT image with AI fluid compartment analysis after completion of the loading dose of 3 monthly injections (month 3, M3). Right column: OCT image with AI fluid compartment analysis at the final 12-month visit (month 12, 12M). Top row: high SRF in the fovea (1 mm) at baseline, with a baseline visual acuity (VA) of 65 letters and a VA gain of +9 letters at 12 months. Middle-top row: high SRF in the perifovea (6 mm) at baseline, with a baseline VA of 72 letters and a VA gain of +10 letters at 12 months. Middle-bottom row: high IRF in the parafovea (3 mm), with a baseline VA of 50 letters and a VA gain of +20 letters at 12 months. Bottom row: high PED in the fovea (1 mm), with a baseline VA of 60 letters and no VA gain at 12 months.

Ophthalmologica
Evaluation of an Artificial Intelligence-based Detector of Sub- and Intra-Retinal Fluid on a large set of OCT volumes in AMD and DME

Introduction: In this retrospective cohort study, we wanted to evaluate the performance and analyze the insights of an artificial intelligence (AI) algorithm in detecting retinal fluid in spectral-domain OCT volume scans from a large cohort of patients with neovascular age-related macular degeneration (AMD) and diabetic macular edema (DME).

Methods: A total of 3'981 OCT volumes from 374 patients with AMD and 11'501 OCT volumes from 811 patients with DME were included, all acquired with a Heidelberg Spectralis OCT device (Heidelberg Engineering Inc., Heidelberg, Germany) between 2013 and 2021. Each OCT volume was annotated for the presence or absence of intraretinal fluid (IRF) and subretinal fluid (SRF) by masked reading center graders (ground truth). The performance of a previously published AI algorithm was evaluated on the same OCT volumes, detecting IRF and SRF separately as well as with a combined fluid detector (IRF and/or SRF). An analysis of the sources of disagreement between annotation and prediction and their relationship to central retinal thickness was performed. We computed the mean areas under the ROC curves (AUC) and under the precision-recall curves (AP), accuracy, sensitivity, specificity and precision.
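
As a minimal sketch of how such per-volume metrics can be computed (assuming detector probabilities and binary reading-center labels; this is not the study's evaluation code), scikit-learn provides the curve-based measures directly and the remaining ones follow from the confusion matrix:

import numpy as np
from sklearn.metrics import (accuracy_score, average_precision_score,
                             confusion_matrix, roc_auc_score)

def detection_metrics(y_true, y_prob, threshold=0.5):
    """y_true: binary labels per OCT volume; y_prob: detector probabilities per volume."""
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "AUC": roc_auc_score(y_true, y_prob),           # area under the ROC curve
        "AP": average_precision_score(y_true, y_prob),  # area under the precision-recall curve
        "accuracy": accuracy_score(y_true, y_pred),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "precision": tp / (tp + fp),
    }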

Results: The AUC for IRF was 0.92 and 0.98, and for SRF 0.98 and 0.99, in the AMD and DME cohorts, respectively. The AP for IRF was 0.89 and 1.00, and for SRF 0.97 and 0.93, in the AMD and DME cohorts, respectively. The accuracy, specificity and sensitivity for IRF were 0.87, 0.88, 0.84, and 0.93, 0.95, 0.93, and for SRF 0.93, 0.93, 0.93, and 0.95, 0.95, 0.95, in the AMD and DME cohorts, respectively. For detecting any fluid, the AUC was 0.95 and 0.98, and the accuracy, specificity and sensitivity were 0.89, 0.93, 0.90 and 0.95, 0.88, 0.93, in the AMD and DME cohorts, respectively. False positives occurred in the presence of retinal shadow artifacts and strong retinal deformation. False negatives were due to small hyporeflective areas in combination with poor image quality. The combined detector correctly predicted more OCT volumes than the single detectors for IRF and SRF: 89.0% versus 81.6% in the AMD cohort and 93.1% versus 88.6% in the DME cohort.

Discussion/Conclusion: The AI-based fluid detector achieves high performance for retinal fluid detection in a very large dataset dedicated to AMD and DME. Combining single detectors provides better fluid detection accuracy than considering the single detectors separately. The observed independence of the single detectors ensures that the detectors learned features particular to IRF and SRF.

Source: Habra, O., Gallardo, M., Meyer zu Westram, T. et al. Ophthalmologica

Authors: Oussama Habra, Matthias Gallardo, Till Meyer zu Westram, Sandro De Zanet, Damien Jaggi, Martin Zinkernagel, Sebastian Wolf, Raphael Sznitman

Date: Oct. 2022

Link: Article PDF

Receiver operating characteristic (ROC) and precision-recall (PR) curves on detection performance of IRF and SRF. First row: AMD. Second row: DME. A and C, ROC curves. B and D, PR curves. The area under the curve (AUC) and area under the PR-curve (AP) with the confidence intervals are specified in parentheses.

Predictions (x-axis) for IRF and SRF, with samples divided into four categories based on the graders' annotations (ground truth, y-axis): true negative (TN), false positive (FP), false negative (FN) and true positive (TP).

Investigative Ophthalmology & Visual Science
Coherence analysis between an artificial intelligence algorithm and human experts in diabetic retinopathy screening

Purpose: The aim of this study was to apply an artificial intelligence (AI) algorithm, through deep learning, to the optimized development of an automated diabetic retinopathy (DR) detection algorithm using retinographies, and to study the consistency between retina ophthalmologists and the AI system in DR screening under routine clinical practice conditions.

Methods: A routine clinical practice retinography dataset was used to train an algorithm formed by two component networks, which were independently optimized and whose outputs were combined to give a single classification for DR. For evaluation, an international standardized diabetic retinopathy retinography dataset (Messidor-2) was used, which was evaluated by the AI algorithm and by two retinal experts with more than 10 years of experience, from different autonomous regions, health systems and diabetic retinopathy screening programs, in a blind and independent manner. No prior unification of DR diagnostic criteria was performed among the observers, in order to simulate routine clinical practice conditions; the grades used were absent DR, mild DR, moderate DR, severe DR and proliferative DR. The comparative analysis was performed by grouping the DR grades into two groups: non-derivable (absent DR and mild DR) and derivable (moderate, severe and proliferative DR).
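
As a small sketch of this grouping step (the grade strings and the to_binary helper are hypothetical, not the study's code), the five grades collapse into the two screening groups as follows:

DERIVABLE = {"moderate DR", "severe DR", "proliferative DR"}
NON_DERIVABLE = {"absent DR", "mild DR"}

def to_binary(grade: str) -> int:
    """Return 1 for derivable (referable) grades and 0 for non-derivable grades."""
    return 1 if grade in DERIVABLE else 0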

Results: A training dataset of 109,628 images was used for the training phase. The AI algorithm and the retinal experts each analyzed the 1,744 images in the evaluation dataset. The consistency results comparing observers 1 and 2 independently against the AI algorithm were, respectively: a sensitivity of 0.99 and 1.00; a specificity of 0.74 and 0.71; and an area under the ROC curve of 0.87 and 0.86.

Conclusions: In its current state, our deep learning-based algorithm for retinography-based diabetic retinopathy screening behaves in line with expert retinal ophthalmologists under routine clinical practice conditions. This opens the possibility of applying the algorithm in clinical practice with the aim of improving health outcomes compared with the current standard of ophthalmologic management of diabetic patients.

This abstract was presented at the 2022 ARVO Annual Meeting, held in Denver, CO, May 1-4, 2022, and virtually.

Authors: Rodrigo Abreu; Jose Natan Rodriguez-Martin; Juan Donate-Lopez; Joseph Blair; Sandro De Zanet; Jose Julio Rodrigo; Carlos Bermúdez-Pérez

Date: June 2022

Link: Abstract here

Investigative Ophthalmology & Visual Science
Evaluating an OCT-based Algorithm of Central Subfield Thickness Estimation on AMD and DME patients

Purpose: To evaluate the accuracy of an algorithm to estimate Central Subfield Thickness (CST) from OCT volumes for patients with AMD or DME.

Methods: We collected OCT volumes from two groups of patients, exudative AMD (541 patients) and DME (1'568 patients), referred to as the AMD and DME groups. All patients received anti-VEGF treatment and were treated and monitored between 01/2013 and 06/2021. We found 3'974 OCT volumes for the AMD group and 11'501 OCT volumes for the DME group. The algorithm under evaluation relies on the CE-marked Discovery® layer segmentation algorithm (RetinAI Medical AG, Switzerland) and computes the average retinal thickness in the central 1 mm region of the ETDRS grid. For each OCT volume, the true CST was measured independently by two graders, with a third grader in case of disagreement. The annotations were performed by the Bern Photographic Reading Center (Inselspital, Universitätsspital Bern, Universitätsklinik für Augenheilkunde, Bern, Switzerland). After removing ungradable OCT volumes and considering the 99th percentile of the thicknesses, the AMD group comprises 3'894 OCT volumes (537 patients) and the DME group 11'269 OCT volumes (1'526 patients).
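
As a minimal sketch of the CST definition used here (the inputs and the central_subfield_thickness helper are illustrative assumptions, not the Discovery® implementation), the CST is the mean retinal thickness inside the central 1 mm ETDRS disc of an en face thickness map:

import numpy as np

def central_subfield_thickness(thickness_um, spacing_mm, fovea_rc):
    """thickness_um: (rows, cols) en face retinal thickness map in micrometres.
    spacing_mm: (row_spacing, col_spacing) en face pixel size in mm.
    fovea_rc: (row, col) of the foveal centre."""
    dy, dx = spacing_mm
    rows, cols = np.indices(thickness_um.shape)
    radius_mm = np.hypot((rows - fovea_rc[0]) * dy, (cols - fovea_rc[1]) * dx)
    return float(thickness_um[radius_mm <= 0.5].mean())   # central 1 mm disc = 0.5 mm radius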

Results: We report a strong correlation between the annotated CSTs and those predicted by the algorithm (R2 = 0.96) for both groups. The algorithm tends to over-estimate the retinal thickness in general, but with a small mean absolute error (prediction - annotation) of 3.19 μm (95% CI, 2.6-3.8 μm) and 10.68 μm (95% CI, 10.4-11 μm) for the AMD and DME groups, respectively. Relative to the median annotated CSTs (302 μm and 310 μm), these errors represent 1.06% and 3.44% of the total retinal thickness for the AMD and DME cohorts, respectively. We observed that 5.4% (210/3'894) of samples lie outside the 95% limits of agreement for AMD and 5.1% (574/11'269) for DME.

Conclusions: We report very good performance for CST estimation on OCT volumes with AMD and DME. This unveils the potential of such algorithms to support clinical decision making and to enable new strategies for facilitating annotation in clinical trials. To understand the sources of the difference in CST errors between the two groups, we plan to analyze the ETDRS alignment and to consider the presence of fluid and other biomarkers.

This abstract was presented at the 2022 ARVO Annual Meeting, held in Denver, CO, May 1-4, 2022, and virtually.


Authors: Mathias Gallardo; Oussama Habra; Till Meyer zu Westram; Sandro De Zanet; Sebastian Wolf; Raphael Sznitman; Martin Sebastian Zinkernagel

Date: June 2022

Link: Abstract here

Graefe's Archive for Clinical and Experimental Ophthalmology
Automated foveal location detection on spectral-domain optical coherence tomography in geographic atrophy patients

This research presents an algorithm for accurate, fully automatic detection of the foveal location in atrophic age-related macular degeneration (AMD), based on spectral-domain optical coherence tomography (SD-OCT) scans.

For this research, image processing was conducted on a cohort of patients affected by geographic atrophy (GA). SD-OCT images (cube volumes) from 55 eyes (51 patients) were extracted and processed with a layer segmentation algorithm to segment the Ganglion Cell Layer (GCL) and the Inner Plexiform Layer (IPL). Their en face thickness projection was convolved with a 2D Gaussian filter to find the global maximum, which corresponded to the detected fovea. Detection accuracy was evaluated by computing the distance between the manual annotation and the predicted location.
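
A minimal sketch of this detection principle (the smoothing width and input layout are assumptions, not the published implementation): the GCL+IPL en face thickness projection is smoothed with a 2D Gaussian and the location of the global maximum is taken as the fovea.

import numpy as np
from scipy.ndimage import gaussian_filter

def detect_fovea(gcl_ipl_thickness, sigma_px=15):
    """gcl_ipl_thickness: (rows, cols) en face GCL+IPL thickness projection."""
    smoothed = gaussian_filter(gcl_ipl_thickness.astype(float), sigma=sigma_px)
    return np.unravel_index(np.argmax(smoothed), smoothed.shape)   # (row, col) of the fovea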

The mean total location error was 0.101 ± 0.145 mm; the mean error in the horizontal and vertical en face axes was 0.064 ± 0.140 mm and 0.063 ± 0.060 mm, respectively. The mean error for foveal and extrafoveal retinal pigment epithelium and outer retinal atrophy (RORA) was 0.096 ± 0.070 mm and 0.107 ± 0.212 mm, respectively. Our method obtained a significantly smaller error than the fovea localization algorithm built into the OCT device (0.313 ± 0.283 mm, p < 0.001) or a method based on the thinnest central retinal thickness (0.843 ± 1.221 mm, p < 0.001). Significant outliers are flagged by the reliability score of the method.

The conclusion is that despite retinal anatomical alterations related to GA, the presented algorithm was able to detect the foveal location on SD-OCT cubes with high reliability. Such an algorithm could be useful for studying structural-functional correlations in atrophic AMD and could have further applications in different retinal pathologies.

Source: Montesel, A., Gigon, A., Mosinska, A. et al. Automated foveal location detection on spectral-domain optical coherence tomography in geographic atrophy patients. Graefe's Archive for Clinical and Experimental Ophthalmology (2022)

Authors: Andrea Montesel, Anthony Gigon, Agata Mosinska, Stefanos Apostolopoulos, Carlos Ciller, Sandro De Zanet & Irmela Mantel

Date: Jan. 2022

Link: Article

Manual foveal annotation on the SD-OCT scan (left side, above) and the corresponding layer segmentation (left side, below). Detected foveal location (right side, above) and the corresponding layer segmentation (right side, below). Significant degeneration of all retinal layers, including the GCL-IPL layers, resulted in a distorted en face projection that no longer matched the Gaussian profile, which led to inaccurate detection. [GCL, ganglion cell layer; IPL, inner plexiform layer; SD-OCT, spectral-domain optical coherence tomography]

Translational Vision Science & Technology
Personalized Atrophy Risk Mapping in Age-Related Macular Degeneration

The goal of this study was to develop and validate an automatic algorithm capable of predicting future atrophy progression in a time-continuous fashion, based on volumetric OCT scans only. This was then to be translated into an eye-specific risk map indicating which retinal regions are particularly prone to developing retinal pigment epithelial and outer retinal atrophy (RORA).

In this study, longitudinal OCT data from 129 eyes (119 patients) with RORA were collected and separated into training and testing groups. RORA was automatically segmented in all scans and additionally manually annotated in the test scans. OCT-based features such as layer thicknesses, mean reflectivity and a drusen height map served as input to the deep neural network. Based on the baseline OCT scan or the previous-visit OCT, en face RORA predictions were calculated for future patient visits. Performance was quantified over time by means of Dice scores and square root area errors.
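
A minimal sketch of the two evaluation measures mentioned above, assuming binary en face masks and interpreting the square root area error as the absolute difference between the square roots of the predicted and annotated areas (both helpers are illustrative, not the study's code):

import numpy as np

def dice_score(pred, gt):
    """Dice overlap between two binary en face masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * intersection / (pred.sum() + gt.sum() + 1e-8)

def sqrt_area_error(pred, gt, pixel_area_mm2):
    """Absolute difference of the square roots of predicted and annotated areas, in mm."""
    area_pred = pred.sum() * pixel_area_mm2
    area_gt = gt.sum() * pixel_area_mm2
    return abs(np.sqrt(area_pred) - np.sqrt(area_gt))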

The average Dice score for segmentations at baseline was 0.85. When predicting progression from baseline OCTs, the Dice scores ranged from 0.73 to 0.80 for total RORA area and from 0.46 to 0.72 for RORA growth region. The square root area error ranged from 0.13 mm to 0.33 mm. By providing continuous time output, the model enabled creation of a patient-specific atrophy risk map.

Source: Gigon, A., Mosinska, A., Montesel, A. et al. Personalized Atrophy Risk Mapping in Age-Related Macular Degeneration. Translational Vision Science & Technology, November 2021, Vol. 10, 18. doi: https://doi.org/10.1167/tvst.10.13.18

Authors: Anthony Gigon, Agata Mosinska, Andrea Montesel, Yasmine Derradji, Stefanos Apostolopoulos, Carlos Ciller, Sandro De Zanet, Irmela Mantel

Date: Nov. 2021

Link: TVST

Figure: Qualitative examples of RORA progression profiles for a range of patients with moderate to large RORA at baseline. Top row: infrared fundus image corresponding to the baseline visit, the baseline visit prediction and the predictions for the follow-up visits. In the follow-up predictions, white regions correspond to RORA already present at the ground-truth (GT) baseline, and the other colors to RORA growth regions since baseline: green to true-positive predicted growth, red to false-positive and blue to false-negative. The predictions were obtained using only the baseline visit acquisition as input, and the caption corresponds to the time elapsed since the baseline visit. Bottom row: transfoveal B-scan of the baseline OCT (its position in the IR image is denoted with a green line) and the manual ground-truth annotation for the follow-up visits.

Scientific Reports, Nov 2021
Fully-automated atrophy segmentation in dry age-related macular degeneration in optical coherence tomography

Age-related macular degeneration (AMD) is a progressive retinal disease, causing vision loss. A more detailed characterization of its atrophic form became possible thanks to the introduction of Optical Coherence Tomography (OCT). However, manual atrophy quantification in 3D retinal scans is a tedious task and prevents taking full advantage of the accurate retina depiction.

In this study we developed a fully automated algorithm segmenting Retinal Pigment Epithelial and Outer Retinal Atrophy (RORA) in dry AMD on macular OCT. 62 SD-OCT scans from eyes with atrophic AMD (57 patients) were collected and split into train and test sets. The training set was used to develop a Convolutional Neural Network (CNN).

The performance of the algorithm was established by cross-validation and by comparison to the test set, with ground truth annotated by two graders. Additionally, the effect of using retinal layer segmentation during training was investigated. The algorithm achieved mean Dice scores of 0.881 and 0.844, sensitivity of 0.850 and 0.915 and precision of 0.928 and 0.799 in comparison with Expert 1 and Expert 2, respectively. Using retinal layer segmentation improved the model performance. The proposed model identified RORA with performance matching human experts and has the potential to rapidly identify atrophy with high consistency.

Source: Derradji, Y., Mosinska, A., Apostolopoulos, S. et al. Fully-automated atrophy segmentation in dry age-related macular degeneration in optical coherence tomography. Sci Rep 11, 21893 (2021) - https://doi.org/10.1038/s41598-021-01227-0

Authors: Yasmine Derradji, Agata Mosinska, Stefanos Apostolopoulos, Carlos Ciller, Sandro De Zanet & Irmela Mantel

Date: Nov. 2021

Link: Scientific Reports

Method design. (a) A schematic illustration of our training method with the layer segmentation prior. The input to the neural network is an OCT B-scan. The CNN output is a probability map for an atrophic region with a vertical span corresponding to the RPE layer and choroid. The loss is computed in 2D between the prediction and the RORA ground truth masked with the RPE and choroid. (b) A schematic illustration of the training approach without the layer segmentation prior. As the vertical span of the ground-truth bounding box is undefined, the loss is computed in 1D between the maximum probability projections of the prediction and the ground truth. (c) Inference workflow: each B-scan in a test volume is fed to the CNN, which outputs a RORA prediction. To obtain an en face view, the predictions are max-projected and thresholded at 0.5.

Source: Derradji, Y. et al. - Sci Rep 11, 21893 (2021)
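
A minimal sketch of the inference step shown in panel (c) above (the array shapes and the en_face_rora helper are assumptions): per-B-scan probability maps are max-projected along depth and thresholded at 0.5 to obtain the binary en face RORA map.

import numpy as np

def en_face_rora(prob_volume, threshold=0.5):
    """prob_volume: (n_bscans, depth, width) per-pixel RORA probabilities from the CNN."""
    en_face_prob = prob_volume.max(axis=1)   # max-projection along the depth axis
    return en_face_prob >= threshold         # (n_bscans, width) binary en face RORA map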

British Journal of Ophthalmology
Does real-time artificial intelligence-based visual pathology enhancement of three-dimensional optical coherence tomography scans optimise treatment decision in patients with nAMD? Rationale and design of the RAZORBILL study

Background/rationale: Artificial intelligence (AI)-based clinical decision support tools, being developed across multiple fields in medicine, need to be evaluated for their impact on the treatment and outcomes of patients as well as on the optimisation of the clinical workflow. The RAZORBILL study will investigate the impact of advanced AI segmentation algorithms on disease activity assessment in patients with neovascular age-related macular degeneration (nAMD) by enriching three-dimensional (3D) retinal optical coherence tomography (OCT) scans with automated fluid and layer quantification measurements.

Methods: RAZORBILL is an observational, multicentre, multinational, open-label study comprising two phases: (a) clinical data collection (phase I): an observational study design, which enforces neither a strict visit schedule nor a mandated treatment regimen, was chosen as appropriate to collect data in a real-world clinical setting and enable the evaluation in phase II; and (b) OCT enrichment analysis (phase II): de-identified 3D OCT scans will be evaluated for disease activity. Within this evaluation, investigators will review the scans once enriched with segmentation results (i.e., highlighted and quantified pathological fluid volumes) and once in their original (i.e., non-enriched) state. This review will be performed using an integrated crossover design, in which investigators serve as their own controls, allowing the analysis to account for differences in expertise and individual disease activity definitions.

Conclusions: In order to apply novel AI tools in routine clinical care, their benefit as well as their operational feasibility need to be carefully investigated. RAZORBILL will inform on the value of AI-based clinical decision support tools, clarifying whether they can be implemented in the clinical treatment of patients with nAMD and whether they allow optimisation of individualised treatment in routine clinical care.

Source: Holz FG, Abreu-Gonzalez R, Bandello F, et al. Does real-time artificial intelligence-based visual pathology enhancement of three-dimensional optical coherence tomography scans optimise treatment decision in patients with nAMD? Rationale and design of the RAZORBILL study. British Journal of Ophthalmology. Published Online First: 06 August 2021. doi: 10.1136/bjophthalmol-2021-319211

Authors: Frank G Holz, Rodrigo Abreu-Gonzalez, Francesco Bandello, Renaud Duval, Louise O'Toole, Daniel Pauleikhoff, Giovanni Staurenghi, Armin Wolf, Daniel Lorand, Andreas Clemens, Benjamin Gmeiner

Date: August 2021

Link: PDF

Flow diagram of the RAZORBILL study. Collection of clinical data (including 3D OCT scans) will be performed during routine clinical care (phase I). The data will be stored in the Discovery platform. Analysis of the 3D OCT scans will be conducted after they are partially enriched via segmentation algorithms (phase II). 3D, three-dimensional; OCT, optical coherence tomography.

Ophthalmology Retina, 2021
Machine Learning Can Predict Anti–VEGF Treatment Demand in a Treat-and-Extend Regimen for Patients with Neovascular AMD, DME, and RVO Associated Macular Edema

The purpose of this peer-reviewed publication is to assess the potential of machine learning to predict low and high treatment demand in real life in patients with neovascular age-related macular degeneration (nAMD), retinal vein occlusion (RVO), and diabetic macular edema (DME) treated according to a treat-and-extend regimen (TER).

The study considered 340 patients with nAMD and 333 eyes (285 patients) with RVO or DME treated with anti-vascular endothelial growth factor (anti-VEGF) agents according to a predefined TER from 2014 through 2018. Eyes were grouped by disease into low, moderate and high treatment demand, defined by the average treatment interval (low, ≥10 weeks; high, ≤5 weeks; moderate, remaining eyes). Two models were trained to predict the probability of the long-term treatment demand of a new patient. Both models use morphological features automatically extracted from the OCT volumes at baseline and after 2 consecutive visits, as well as patient demographic information.
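
A small sketch of this grouping rule (the injection dates and the demand_group helper are hypothetical): eyes are labelled by their average treatment interval, with low demand at ≥10 weeks and high demand at ≤5 weeks.

from datetime import date

def demand_group(injection_dates: list[date]) -> str:
    """Label an eye as 'low', 'moderate' or 'high' demand from its injection dates."""
    intervals = [(b - a).days / 7.0 for a, b in zip(injection_dates, injection_dates[1:])]
    mean_weeks = sum(intervals) / len(intervals)
    if mean_weeks >= 10:
        return "low"
    if mean_weeks <= 5:
        return "high"
    return "moderate"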

Results and Conclusions: Based on the first 3 visits, it was possible to predict low and high treatment demand in nAMD eyes and in RVO and DME eyes with similar accuracy. The distribution of low, high and moderate demanders was 127, 42 and 208, respectively, for nAMD and 61, 50 and 222, respectively, for RVO and DME. The nAMD-trained models yielded mean AUCs of 0.79 and 0.79 over the 10-fold cross-validation for low and high demand, respectively. Models for RVO and DME showed similar results, with mean AUCs of 0.76 and 0.78 for low and high demand, respectively. Even more importantly, this study revealed that it is possible to predict low demand reasonably well at the first visit, before the first injection.

Source: Gallardo, M. et al. Ophthalmology Retina, 5(7), 2021. https://doi.org/10.1016/j.oret.2021.05.002

Authors: Mathias Gallardo, Marion R. Munk, Thomas Kurmann, Sandro De Zanet, Agata Mosinska, Isıl Kutlutürk Karagoz, Martin S. Zinkernagel, Sebastian Wolf and Raphael Sznitman

Date: May 2021

Link: PDF

Schematic procedure of the treat-and-extend protocol used in the University Hospital of Bern, Bern, Switzerland. IVT = intravitreal injection; nAMD = neovascular age-related macular degeneration; Ret. Vasc. = retinal vascular.

Illustration of the Treat-and-Extend procedure used in daily clinical practice for treating chronic diseases such as AMD, DME and RVO-related ME. At each visit, (1) an OCT scan of the retina is acquired for diagnosis and monitoring, (2) the visual acuity of the patient is tested, (3) the patient receives an anti-VEGF injection and (4) the clinician decides whether to extend the time interval between two consecutive visits based on the outcomes observed at the current visit. Right: Illustration of our algorithm for predicting treatment demand using data from an early stage of the treatment procedure.

Source: ©Mathias Gallardo, Artificial Intelligence in Medical Imaging Lab, ARTORG Center

TVST, 2021
Automated Quantification of Pathological Fluids in Neovascular Age-Related Macular Degeneration, and Its Repeatability Using Deep Learning

In this peer-reviewed publication, we develop a reliable algorithm for the automated identification, localization and volume measurement of exudative manifestations in neovascular age-related macular degeneration (nAMD), including intraretinal fluid (IRF), subretinal fluid (SRF) and pigment epithelium detachment (PED), using a deep-learning approach.

The deep-learning algorithm simultaneously achieves a high level of performance for the identification and volume measurement of IRF, SRF and PED in nAMD, providing accurate and repeatable predictions. Including layer segmentation during training and a squeeze-and-excitation block in the network architecture was shown to boost performance.

Source: Mantel et al., TVST 2021, 10 (17) - https://doi.org/10.1167/tvst.10.4.17

Authors: Irmela Mantel; Agata Mosinska; Ciara Bergin; Maria Sole Polito; Jacopo Guidotti; Stefanos Apostolopoulos; Carlos Ciller; Sandro De Zanet

Date: April 2021

Link: Translational Vision Science & Technology

J. Clin. Med, 2020
Comparison of Drusen Volume Assessed by Two Different OCT Devices

In this report, we compare OCT drusen volume determined by two different OCT devices (Heidelberg Spectralis OCT and Zeiss PlexElite SS-OCT) using the manufacturers' software and a customized, third-party segmentation software. We further compare the automatically assessed drusen volume obtained by these machines with that obtained after manual correction of the automated segmentation.

Comparability of drusen volume among different OCT devices and algorithms is of importance, as drusen changes are a hallmark of AMD progression. These changes are tracked in the daily clinical workflow and as surrogate outcome measurements in multicenter trials.

Source: Beck et al., J. Clin. Med. 2020, 9(8), 2657 - https://doi.org/10.3390/jcm9082657

Authors: Marco Beck, Devika S. Joshi, Lieselotte Berger, Gerd Klose, Sandro De Zanet, Agata Mosinska, Stefanos Apostolopoulos, Andreas Ebneter, Martin S. Zinkernagel, Sebastian Wolf and Marion R. Munk

Date: August 2020

Link: J. Clin. Med. 2020

Nature Sci. Rep., 2020
Automatically Enhanced OCT Scans of the Retina: A proof of concept study

In this work we evaluated a customized, automatic post-processing software for retinal OCT B-scan enhancement (noise reduction, contrast enhancement and improved depth quality), applicable to Heidelberg Engineering Spectralis OCT devices. A trained deep neural network was used to process images from an OCT dataset with ground-truth biomarker gradings.

Performance was assessed by two expert graders, who evaluated image quality per B-scan and showed a clear preference for enhanced over original images. Objective measures such as SNR and noise estimation showed a significant improvement in quality. Presence grading of seven biomarkers (IRF, SRF, ERM, drusen, RPD, GA and iRORA) resulted in similar intergrader agreement.

Intergrader agreement was also compared, with improvements for IRF and RPD and disagreement for high-variance biomarkers such as GA and iRORA.

Source: Apostolopoulos et al., Nature Scientific Reports, 2020

Authors: Stefanos Apostolopoulos, Jazmín Salas, José L. P. Ordóñez, Shern Shiou Tan, Carlos Ciller, Andreas Ebneter, Martin Zinkernagel, Raphael Sznitman, Sebastian Wolf, Sandro De Zanet & Marion R. Munk

Date: May 2020

Link: Nature Sci-rep Article

Nature Sci. Rep., 2019
Expert-level Automated Biomarker Identification in Optical Coherence Tomography Scans

Retinal biological markers play a critical role in the management of chronic eye conditions and in the development of new therapeutics. While many imaging technologies used today can visualise these, OCT is often the tool of choice due to its ability to image retinal structures in 3D.

With widespread use in clinical routine, the quantity of scans acquired worldwide is surpassing the capacity of retinal specialists to inspect them in meaningful ways. Instead, automated analysis of scans using machine learning algorithms provides a cost-effective and reliable alternative to assist ophthalmologists in clinical routine and research.

In this paper, the authors present a machine learning method capable of consistently identifying a wide range of common retinal biomarkers from OCT scans. This approach avoids the need for costly segmentation annotations and allows scans to be characterised by biomarker distributions. These can then be used to classify scans based on their underlying pathology in a device-independent way.

Source: Kurmann et al., Nature Scientific Reports, 2019

Authors: Thomas Kurmann, Siqing Yu, Pablo Márquez-Neila, Andreas Ebneter, Martin Zinkernagel, Marion R. Munk, Sebastian Wolf & Raphael Sznitman

Date: September 2019

Link: Nature Sci-Rep article

MICCAI, 2019
Image data validation for medical systems

Data validation is the process of ensuring that the input to a data processing pipeline is correct and useful. It is a critical part of software systems running in production. Image processing systems are no different: problems with data acquisition, file corruption or data transmission may lead to a wide range of unexpected issues in the acquired images.

Until now, most image processing systems of this type involved a human in the loop who could detect these errors before further processing. But with the advent of powerful deep learning methods, tools for medical image processing are becoming increasingly autonomous and can go from data acquisition to final medical diagnosis without any human interaction. However, deep networks are known for their inability to detect corruption or errors in the input data.

This paper presents a deep validation method that learns to measure how correct a given image looks. We experimentally assessed the validity of our method and compared it with different baselines, reaching an improvement of more than 10 percentage points on all considered datasets.
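
The paper's validation model is not reproduced here; purely as a hedged illustration of the general idea of scoring how correct an input image looks, one common alternative is to threshold the reconstruction error of an autoencoder trained only on valid images (all names below are hypothetical):

import torch

def validity_score(image: torch.Tensor, autoencoder: torch.nn.Module) -> float:
    """Lower reconstruction error suggests the image resembles the valid training data."""
    with torch.no_grad():
        recon = autoencoder(image.unsqueeze(0))
    return float(torch.mean((recon - image.unsqueeze(0)) ** 2))

def is_valid(image, autoencoder, threshold):
    """Flag an image as valid if its reconstruction error stays below a calibrated threshold."""
    return validity_score(image, autoencoder) <= threshold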

Authors: Marquez-Neila, P. & Sznitman, R.

Date: Oct. 2019, MICCAI, Shenzhen, China

Link: MICCAI 2019

ICCV, 2019
GLAMpoints: Greedily Learned Accurate Match points

Image registration, the process of aligning two or more images into the same global spatial reference, is a crucial task in fields like computer vision, pattern recognition and medical image analysis.

This article presents a novel CNN-based feature point detector, GLAMpoints, learned in a semi-supervised manner and trained using reinforcement learning strategies. As a result, we avoid the limitation that point matching and transformation estimation are non-differentiable.

Our detector extracts repeatable, stable interest points with a dense coverage, specifically designed to maximise the correct matching in a specific domain, in contrast to conventional techniques that optimise indirect metrics.

To illustrate the performance of our approach, we apply our method to challenging 2D retinal slit-lamp images, for which classical detectors yield unsatisfactory results due to low image quality and an insufficient number of low-level features. We show that GLAMpoints significantly outperforms classical detectors as well as state-of-the-art CNN-based methods in matching ability and registration quality.
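
GLAMpoints itself is a learned detector and is not reproduced here; as a hedged illustration of the registration pipeline such a detector plugs into (detect keypoints, match descriptors, estimate a homography with RANSAC), the sketch below uses OpenCV's ORB as a stand-in detector:

import cv2
import numpy as np

def register(fixed_gray, moving_gray):
    """Estimate a homography mapping moving_gray into the frame of fixed_gray."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp_fixed, des_fixed = orb.detectAndCompute(fixed_gray, None)
    kp_moving, des_moving = orb.detectAndCompute(moving_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_fixed, des_moving), key=lambda m: m.distance)
    src = np.float32([kp_moving[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_fixed[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    homography, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return homography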

Authors: Truong, P., Mosinska, A., Ciller, C., Apostolopoulos, S., De Zanet, S.I.

Date: Oct. 2019, ICCV 2019  - Seoul, Korea

Link: ICCV 2019, PDF - Supplementary Material

MICCAI, 2019
Fused Detection of Retinal Biomarkers in OCT Volumes

Optical Coherence Tomography (OCT) is the primary imaging modality for detecting pathological biomarkers associated with retinal diseases such as Age-Related Macular Degeneration. In practice, clinical diagnosis and treatment strategies are closely linked to biomarkers visible in OCT volumes, and the ability to identify these plays an important role in the development of ophthalmic pharmaceutical products.

In this article we present a method that automatically predicts the presence of biomarkers in OCT cross-sections by incorporating information from the entire volume. We do so by adding a bidirectional LSTM to fuse the outputs of a Convolutional Neural Network that predicts individual biomarkers. As a consequence, we avoid the need for pixel-wise annotations to train our method while still providing fine-grained biomarker information. We furthermore show that our approach imposes coherence between biomarker predictions across volume slices and that our predictions are superior to several existing approaches for the same task.
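
A minimal PyTorch sketch of this idea (layer sizes, feature dimensions and the default biomarker count are illustrative assumptions, not the published architecture): a per-slice CNN encoder, a bidirectional LSTM that fuses features across the volume, and a per-slice multi-label head.

import torch
import torch.nn as nn

class VolumeBiomarkerNet(nn.Module):
    def __init__(self, n_biomarkers=10, feat_dim=256, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(                      # toy per-slice CNN encoder
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim),
        )
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_biomarkers)

    def forward(self, volume):                             # volume: (batch, n_slices, 1, H, W)
        b, s = volume.shape[:2]
        feats = self.encoder(volume.flatten(0, 1)).view(b, s, -1)
        fused, _ = self.lstm(feats)                        # fuse information across B-scans
        return torch.sigmoid(self.head(fused))             # (batch, n_slices, n_biomarkers)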

Authors: Kurmann, T., Márquez-Neila, P., Yu, S., Munk, M., Wolf, S. & Sznitman, R.

Date: Oct. 2019, MICCAI, Shenzhen, China

Link: PDF

MICCAI, 2019
Deep Multi Label Classification in Affine Subspaces

Multi-label classification (MLC) problems are becoming increasingly popular in the context of medical imaging. This has in part been driven by the fact that acquiring annotations for MLC is far less burdensome than for semantic segmentation and yet provides more expressiveness than multi-class classification.

However, to train MLCs, most methods have resorted to objective functions similar to those used in traditional multi-class classification settings. We show in this work that such approaches are not optimal and instead propose a novel deep MLC classification method in affine subspaces. At its core, the method attempts to pull the features of class labels towards different affine subspaces while maximising the distance between them.

In this paper we evaluate the method on two MLC medical imaging datasets and show a large performance increase compared to previous multi-label frameworks.

Authors: Kurmann, T., Márquez-Neila, P., Wolf, S. & Sznitman, R.

Date: Oct. 2019, MICCAI, Shenzhen, China

Link: PDF

MICCAI, 2017
Simultaneous Classification and Segmentation of Cysts in Retinal OCT

The automatic segmentation of fluid deposits in OCT imaging enables clinically relevant quantification and monitoring of eye disorders over time. Eyes with late-stage diseases are particularly challenging to segment, as their shape is often highly warped and presents a high variability between different devices, specifications and scanning times.

In this context, the RetinAI team proposed a novel fully-Convolutional Neural Network (CNN) architecture which combines dilated residual blocks in an asymmetric U-shape configuration, and can simultaneously segment and classify cysts in pathological eyes.

This article presents a validation of our approach on the RETOUCH challenge dataset from the Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2017 conference.
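
A minimal PyTorch sketch of a dilated residual block of the kind combined in such an asymmetric U-shape configuration (the channel count and dilation are illustrative assumptions, not the published block):

import torch
import torch.nn as nn

class DilatedResidualBlock(nn.Module):
    def __init__(self, channels=64, dilation=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.body(x))   # residual connection around two dilated convolutions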

Date: Sept. 2017 - MICCAI

Link: PDF

MICCAI, 2017
Pathological OCT Retinal Layer Segmentation using Branch Residual U-shape Networks - BRUNET

The automatic segmentation of retinal layer structures provides clinically-relevant quantification and monitoring of eye disorders over time in OCT. Eyes with late-stage diseases are particularly challenging to segment, as their shape is highly warped due to the presence of pathological biomarkers.

RetinAI has proposed an algorithm which combines dilated residual blocks in an asymmetric U-shape network configuration and can segment multiple layers of highly pathological eyes in one shot. Our so-called BRUnet architecture enables accurate segmentation of retinal layers by modeling the optimization as a supervised regression problem. Using lower computational resources, our strategy achieves superior segmentation performance compared to both state-of-the-art deep learning architectures and other OCT segmentation methods.

Date: Feb. 2017 - MICCAI

Links: Arxiv - PDF - MICCAI

Arxiv, 2016
RetiNet: Automatic AMD identification in OCT volumetric data

Visual inspection of Optical Coherence Tomography (OCT) volumes remains the main method for AMD identification, but doing so is time-consuming, as each cross-section within the volume must be inspected individually by the clinician. In much the same way, acquiring ground-truth information for each cross-section is expensive and time-consuming.

In this paper, we present a new strategy for automatic pathology identification in OCT C-scans: a novel Convolutional Neural Network (CNN) architecture, named RetiNet, that directly estimates the state of a C-scan solely from the image data, without any additional information.

Date: Oct. 2016

Links: Arxiv - PDF


List of scientific publications

2024

Mishchuk, A., Blair, J., Munk, M., Mantel, I., et al. Correlations between GA lesions in FAF and morphological outer retinal changes in OCT. Investigative Ophthalmology & Visual Science June 2024, Vol.65, 2277.

Blair, J., Wu, Z., Lasagni Vitar, RM., et al. On the Edge: Quantitative Analysis of Retinal Layer Thickness Surrounding Geographic Atrophy Lesions Using OCT. Investigative Ophthalmology & Visual Science June 2024, Vol.65, 5474

Cao, JA., Wong, CW., De Zanet, S., et al. Geographic atrophy progression in routine clinical practice: before and after pegcetacoplan treatment. Investigative Ophthalmology & Visual Science, June 2024, Vol.65, 4396.

Bernal-Morales, C., Martin-Pinardel, R., Izquierdo-Serra, J., et al. Impact of Fluid Volume Fluctuations assessed by Artificial Intelligence on Visual Outcomes during Anti-VEGF Therapy in nAMD Eyes in the real world: FRB SPAIN-IMAGE project. Investigative Ophthalmology & Visual Science, June 2024, Vol.65, 4361.

2023

Martin-Pinardel, R., Izquierdo-Serra, J., De Zanet, S. et al. Artificial Intelligence-based fluid quantification and associated visual outcomes in a real-world, multicentre neovascular age-related macular degeneration national database, British Journal of Ophthalmology, January 2023

Mishchuk, A., Blair, J., Ronit Munk, M. et al. Relationship between hyperreflective foci, subretinal hyperreflective material/fibrosis and retinal morphologies in patients with AMD, Investigative Ophthalmology & Visual Science, June 2023, Vol.64, 1114.

Ferro Desideri, L., Gallardo M., Ott, M. et al. Biomarker assessment for CNV development prediction in multifocal choroiditis (MFC) and punctate inner choroidopathy (PIC): A large, longitudinal, multicenter study on patients with MFC and PIC using an artificial intelligence-based OCT fluid and biomarker detector, Investigative Ophthalmology & Visual Science, June 2023, Vol.64, 314.

Donate-Lopez, J., Gonzalez-Bueno, G-S., Rodriguez-Martin, J. et al. Multicenter study to validate an artificial intelligence algorithm for the screening of diabetic retinopathy: the CARDS study, Investigative Ophthalmology & Visual Science, June 2023, Vol.64, 239.

Sparkle, R-P., L Gale, S., Kaliukhovich, D., et al. Comparison of a Deep Learning based OCT image segmentation by a traditional reading center for patients with wet AMD, Investigative Ophthalmology & Visual Science, June 2023, Vol.64, 316.

Blair, J., Mishchuk, A., Stadelmann, A., et al. Quantification of layer thicknesses in a-scans shows differences in presence of geographic atrophy, Investigative Ophthalmology & Visual Science, June 2023, Vol.64, 1113.


2022

Montesel, A., Gigon, A., Mosinska, A. et al. Automated foveal location detection on spectral-domain optical coherence tomography in geographic atrophy patients, Graefe's Archive for Clinical and Experimental Ophthalmology, 1-10, Jan 2022

Gallardo, M., Habra, O., Meyer zu Westram, T. et al. Evaluating an OCT-based Algorithm of Central Subfield Thickness Estimation on AMD and DME patients, Investigative Ophthalmology & Visual Science, June 2022, Vol.63, 2993 – F0263.

Abreu, R., Rodriguez-Martin, J., Donate-Lopez, J. et al. Coherence analysis between an artificial intelligence algorithm and human experts in diabetic retinopathy screening, Investigative Ophthalmology & Visual Science, June 2022, Vol.63, 2110 – F0099.

Donate-Lopez, J., Abreu-Gonzalez, R., Rodriguez-Martin, N, et al. Refinement of a screening algorithm based on artificial intelligence for the detection of Diabetic Retinopathy in real clinical practice, Euretina Abstracts

Habra, O., Gallardo, M., Meyer zu Westram, T., et al. Evaluation of an Artificial Intelligence-based Detector of Sub- and Intra-Retinal Fluid on a large set of OCT volumes in AMD and DME, EURETINA 2022

Martin-Pinardel, R., Izquierdo-Serra, J., De Zanet S., et al. Quantification of fluid compartments and clinical outcomes in the Spanish Neovascular Age-related Macular Degeneration National Database. FRB SPAIN-IMAGE project. Report 2, British Journal of Ophthalmology, 2022

2021

Gigon, A., Mosinska, A., Montesel, A. et al. Personalized Atrophy Risk Mapping in Age-Related Macular Degeneration. Translational Vision Science & Technology, November 2021, Vol.10, 18

Derradji, Y., Mosinska, A., Apostolopoulos, S. et al. Fully-automated atrophy segmentation in dry age-related macular degeneration in optical coherence tomography. Sci Rep (11), 21893 (2021)

Gallardo, M. et al. Machine Learning Can Predict Anti–VEGF Treatment Demand in a Treat-and-Extend Regimen for Patients with Neovascular AMD, DME, and RVO Associated Macular Edema, Ophthalmology Retina, 5(7), 2021

Mantel, I., Mosinska, A., Bergin, C., Polito, MS., Guidotti, J., Apostolopoulos, S., Ciller, C., De Zanet, S. Automated Quantification of Pathological Fluids in Neovascular Age-Related Macular Degeneration, and Its Repeatability Using Deep Learning, TVST, April 2021 10 (17)

Holz, F.G., Abreu-Gonzalez, R., Bandello, F., et al. Does real-time artificial intelligence-based visual pathology enhancement of three-dimensional optical coherence tomography scans optimise treatment decision in patients with nAMD? Rationale and design of the RAZORBILL study, British Journal of Ophthalmology, August 2021

Stadelmann, M., Mosinska, A., Gallardo, M., et al. Automated fovea and optic disc detection in the presence of occlusions in Fundus SLO Images, Investigative Ophthalmology & Visual Science June 2021, Vol.62, 109

Mosinska, A., Stadelmann, M., Montesel, A., et al. Longitudinal analysis of Retinal Pigment Epithelium Atrophy progression using automated segmentation algorithm, Investigative Ophthalmology & Visual Science June 2021, Vol.62, 2453

Mosinska, A., Gigon, A., Apostolopoulos, S., et al. Automatic Atrophy Progression Prediction in Age-Related Macular Degeneration, EURETINA, 2021

2020

Beck, M., Joshi, D.S., Berger, L., Klose, G., De Zanet, S., Mosinska, A., Apostolopoulos, S., Ebneter, A., Zinkernagel, MS., Wolf, S. and Munk, M. R., Comparison of Drusen Volume Assessed by Two Different OCT Devices, J. Clin. Med. 2020, 9 (8), 2657

Apostolopoulos, S., Salas, J., Ordóñez, J.L.P., Tan, S.S., Ciller, C., Ebneter, A., Zinkernagel, M., Sznitman, R., Wolf, S., De Zanet, S. & Munk, M.R., Automatically Enhanced OCT Scans of the Retina: A proof of concept study, Nature Scientific Reports, May 2020

De Zanet, S., Mosinska, A., Bergin, C., et al. Automated detection and quantification of pathological fluid in neovascular age-related macular degeneration using a deep learning approach, Investigative Ophthalmology & Visual Science June 2020, Vol.61, 1655

De Vente, C., van Grindven, M., Ciller, C., et al. Estimating Uncertainty of Deep Neural Networks for Age-related Macular Degeneration Grading using Optical Coherence Tomography, Investigative Ophthalmology & Visual Science June 2020, Vol.61, 1630

Gallardo, M., Munk, M., Kurmann, T., et al. Machine learning to predict anti-VEGF treatment response in a Treat-and-Extend regimen (TER), Investigative Ophthalmology & Visual Science June 2020, Vol.61, 1629

Derradji, Y., Mosinska, A., De Zanet, S., et al. Evaluation of Retinal Pigment Epithelium Atrophy in Age-Related Macular Degeneration: an Artificial Intelligence approach based on Optical Coherence Tomography, Investigative Ophthalmology & Visual Science June 2020, Vol.61, 2015

De Zanet, S., Tassopoulou, V., Mosinska, AG. et al. Geographic atrophy in age-related macular degeneration: Predicting progression using a deep learning approach, EURETINA, 2020

2019

Kurmann, T., Yu, S., Márquez-Neila, P., Ebneter, A., Zinkernagel, M., Munk, M.R., Wolf, S. & Sznitman, R., Expert-level Automated Biomarker Identification in Optical Coherence Tomography Scans, Nature Scientific Reports, September 2019

Truong, P., Mosinska, A., Ciller, C., Apostolopoulos, S., De Zanet, S.I. GLAMpoints: Greedily Learned Accurate Match points, ICCV, Seoul - Korea, November 2019

Marquez-Neila, P. & Sznitman, R., Image data validation for medical systems, MICCAI 2019, Shenzhen, China, October 2019

Kurmann, T., Márquez-Neila, P., Yu, S., Munk, M., Wolf, S. & Sznitman, R., Fused Detection of Retinal Biomarkers in OCT Volumes, MICCAI 2019, Shenzhen, China, October 2019

Kurmann, T., Márquez-Neila, P., Wolf, S. & Sznitman, R., Deep Multi Label Classification in Affine Subspaces, MICCAI 2019, Shenzhen, China, October 2019

Bogunović, H., Venhuizen, F., Klimscha, S., Apostolopoulos, S. et al. RETOUCH: The Retinal OCT Fluid Detection and Segmentation Benchmark and Challenge, IEEE Transactions on Medical Imaging, February 2019

Giannakaki-Zimmermann, H., Huf, W., Schaal, K.B., Schürch, K., Dysli, C., Dysli, M., Zenger, A., Ceklic, L., Ciller, C., Apostolopoulos, S., De Zanet, S., Sznitman, R., Ebneter, A., Zinkernagel, MS., Wolf, S., Munk, M., Comparison of choroidal thickness measurements using spectral domain optical coherence tomography in six different settings and with customised automated segmentation, Translational Vision Science & Technology, May 2019

De Zanet, S., Ciller, C., Apostolopoulos, S., et al. OCT Layer Segmentation, Computational Retinal Image Analysis, Chapter 7, p121-133, 2019

2017

Ciller, C., De Zanet, S. et al. Multi-channel MRI segmentation of eye structures and tumors using patient-specific eye features, PLOS ONE, 2017

Ciller, C., De Zanet, S., Apostolopoulos, S. et al. Automatic Segmentation of Retinoblastoma in Fundus Image Photography using Convolutional Neural Networks, ARVO 2017, Baltimore

Apostolopoulos, S., De Zanet, S., Ciller, C. et al. Pathological OCT Retinal Layer Segmentation Using Branch Residual U-shape Networks, MICCAI Quebec & Arxiv, 2017

Apostolopoulos, S. et al. Efficient OCT volume reconstruction from slit lamp microscopes, IEEE TBME, 2017

Apostolopoulos, S., Ciller, C., Sznitman, R., et al. Simultaneous Classification and Segmentation of Cysts in Retinal OCT, MICCAI, September 2017

2016

Apostolopoulos, S., Ciller, C., De Zanet, S. et al. RetiNet: Automatic AMD identification in OCT volumetric data, Arxiv, 2016

De Zanet, S. et al. Retinal slit lamp video mosaicking, International Journal of Computer Assisted Radiology and Surgery, 2016

2015

De Zanet, S., Ciller, C. et al. Landmark Detection for Fusion of Fundus and MRI Toward a Patient Specific Multi-modal Eye Model, IEEE TBME, 2015

Ciller, C., De Zanet, S. et al. Automatic Segmentation of the eye in 3D MRI: A novel statistical shape model for treatment planning of retinoblastoma, Int. J. Radiation Oncology Biology Physics (Red Journal), 2015