We are aware of the importance of scientific contributions to the field of ophthalmology and to healthcare in general.
To support the transition from reactive to precision, patient-specific medicine, and to foster the development of solutions that support patients' health and well-being, we are releasing our public peer-reviewed contributions in medical image analysis, computer vision and machine learning.
This research presents an algorithm for accurate, fully automatic detection of the fovea location in atrophic age-related macular degeneration (AMD), based on spectral-domain optical coherence tomography (SD-OCT) scans.
For this research, image processing was conducted on a cohort of patients affected by geographic atrophy (GA). SD-OCT images (cube volumes) from 55 eyes (51 patients) were extracted and processed with a layer segmentation algorithm to segment the Ganglion Cell Layer (GCL) and the Inner Plexiform Layer (IPL). Their en face thickness projection was convolved with a 2D Gaussian filter to find the global maximum, which corresponded to the detected fovea. Detection accuracy was evaluated by computing the distance between the manual annotation and the predicted location.
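The core localization step described above can be sketched in a few lines. This is a minimal illustration, not the published implementation: the function name and the smoothing sigma are ours, and we assume the thickness projection is negated so that the thin foveal pit yields the global maximum after Gaussian smoothing.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def detect_fovea(thickness_map, sigma_px=15):
    """Locate the fovea on an en face GCL+IPL thickness projection.

    The foveal pit appears as a depression in the GCL+IPL thickness map,
    so the negated map is smoothed with a 2D Gaussian filter and the
    global maximum of the response is taken as the detected fovea.
    Sigma (in pixels) is illustrative, not taken from the paper.
    """
    response = gaussian_filter(-thickness_map.astype(float), sigma=sigma_px)
    row, col = np.unravel_index(np.argmax(response), response.shape)
    return row, col
```

The returned (row, col) index can then be converted to millimetres using the scan's pixel spacing and compared against the manual annotation.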
The mean total location error was 0.101±0.145mm; the mean error in the horizontal and vertical en face axes was 0.064±0.140mm and 0.063±0.060mm, respectively. The mean error for foveal and extrafoveal retinal pigment epithelium and outer retinal atrophy (RORA) was 0.096±0.070mm and 0.107±0.212mm, respectively. Our method obtained a significantly smaller error than the fovea localization algorithm built into the OCT device (0.313±0.283mm, p < .001) or a method based on the thinnest central retinal thickness (0.843±1.221mm, p < .001). Significant outliers are flagged by the method's reliability score.
The conclusion is that despite retinal anatomical alterations related to GA, the presented algorithm was able to detect the foveal location on SD-OCT cubes with high reliability. Such an algorithm could be useful for studying structural-functional correlations in atrophic AMD and could have further applications in different retinal pathologies.
Source: Montesel, A., Gigon, A., Mosinska, A. et al. Automated foveal location detection on spectral-domain optical coherence tomography in geographic atrophy patients. Graefe's Archive for Clinical and Experimental Ophthalmology (2022)
Authors: Andrea Montesel, Anthony Gigon, Agata Mosinska, Stefanos Apostolopoulos, Carlos Ciller, Sandro De Zanet & Irmela Mantel
Date: Jan. 2022
Manual foveal annotation on the SD-OCT scan (left side, above) and the corresponding layer segmentation (left side, below). Detected foveal location on the SD-OCT scan (right side, above) and the corresponding layer segmentation (right side, below). Significant degeneration of all retinal layers, including the GCL-IPL layers, resulted in a distorted en face projection which no longer matched the Gaussian profile, leading to inaccurate detection. [GCL, ganglion cell layer; IPL, inner plexiform layer; SD-OCT, spectral-domain optical coherence tomography]
The goal of this study was to develop and validate an automatic algorithm capable of predicting future atrophy progression in a time-continuous fashion, based on volumetric OCT scans only. This was then to be translated into an eye-specific risk map, which would indicate which retinal regions are particularly prone to developing RORA.
In this study, longitudinal OCT data from 129 eyes (119 patients) with RORA were collected and separated into training and testing groups. RORA was automatically segmented in all scans and additionally manually annotated in the test scans. OCT-based features such as layer thicknesses, mean reflectivity, and a drusen height map served as input to a deep neural network. Based on the baseline OCT scan or the previous visit's OCT, en face RORA predictions were calculated for future patient visits. The performance was quantified over time by means of Dice scores and square-root area errors.
The average Dice score for segmentations at baseline was 0.85. When predicting progression from baseline OCTs, the Dice scores ranged from 0.73 to 0.80 for total RORA area and from 0.46 to 0.72 for the RORA growth region. The square-root area error ranged from 0.13 mm to 0.33 mm. By providing continuous time output, the model enabled the creation of a patient-specific atrophy risk map.
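The two evaluation metrics above are standard overlap and area measures for binary en face maps. A minimal numpy sketch (the function names are ours):

```python
import numpy as np

def dice_score(pred, gt):
    # Overlap between binary prediction and ground truth: 2|A∩B| / (|A| + |B|)
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def sqrt_area_error(pred, gt, pixel_area_mm2):
    # Absolute difference of square-root lesion areas, reported in mm
    a_pred = np.sqrt(pred.sum() * pixel_area_mm2)
    a_gt = np.sqrt(gt.sum() * pixel_area_mm2)
    return abs(a_pred - a_gt)
```

The square-root area transform is commonly used for atrophy because lesion radius tends to grow linearly in time, making the metric comparable across lesion sizes.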
Source: Gigon, A., Mosinska, A., Montesel, A. et al. Personalized Atrophy Risk Mapping in Age-Related Macular Degeneration. Translational Vision Science & Technology, November 2021, Vol. 10, 18. doi: https://doi.org/10.1167/tvst.10.13.18
Authors: Anthony Gigon, Agata Mosinska, Andrea Montesel, Yasmine Derradji, Stefanos Apostolopoulos, Carlos Ciller, Sandro De Zanet, Irmela Mantel
Date: Nov. 2021
Figure: Qualitative examples of RORA progression profiles for a range of patients with moderate to large RORA at baseline. Top row: infrared fundus image corresponding to the baseline visit, baseline visit prediction, and predictions for the follow-up visits. In the follow-up predictions, white regions correspond to RORA already present at the ground-truth baseline, and other colors to RORA growth regions since the baseline: green to true-positive predicted growth, red to false-positive, and blue to false-negative. The predictions were obtained using only the baseline visit acquisition as input, and the caption corresponds to the time elapsed since the baseline visit. Bottom row: transfoveal B-scan of the baseline OCT (its position in the IR image is denoted with a green line) and manual ground-truth annotation for the follow-up visits.
Age-related macular degeneration (AMD) is a progressive retinal disease causing vision loss. A more detailed characterization of its atrophic form became possible thanks to the introduction of Optical Coherence Tomography (OCT). However, manual atrophy quantification in 3D retinal scans is a tedious task and prevents taking full advantage of OCT's accurate depiction of the retina.
In this study, we developed a fully automated algorithm for segmenting Retinal Pigment Epithelial and Outer Retinal Atrophy (RORA) in dry AMD on macular OCT. 62 SD-OCT scans from eyes with atrophic AMD (57 patients) were collected and split into training and test sets. The training set was used to develop a Convolutional Neural Network (CNN).
The performance of the algorithm was established by cross-validation and comparison to the test set, with ground truth annotated by two graders. Additionally, the effect of using retinal layer segmentation during training was investigated. The algorithm achieved mean Dice scores of 0.881 and 0.844, sensitivity of 0.850 and 0.915, and precision of 0.928 and 0.799 in comparison with Expert 1 and Expert 2, respectively. Using retinal layer segmentation improved the model performance. The proposed model identified RORA with performance matching human experts. It has the potential to rapidly identify atrophy with high consistency.
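The sensitivity and precision figures above follow the usual per-pixel definitions from confusion counts between a binary prediction and an expert annotation. A small sketch (the helper name is ours, assuming binary en face masks):

```python
import numpy as np

def sensitivity_precision(pred, gt):
    """Per-pixel sensitivity and precision of a binary segmentation.

    Sensitivity = TP / (TP + FN): fraction of true atrophy pixels found.
    Precision   = TP / (TP + FP): fraction of predicted pixels that are correct.
    """
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    return tp / (tp + fn), tp / (tp + fp)
```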
Source: Derradji, Y., Mosinska, A., Apostolopoulos, S. et al. Fully-automated atrophy segmentation in dry age-related macular degeneration in optical coherence tomography. Sci Rep 11, 21893 (2021) - https://doi.org/10.1038/s41598-021-01227-0
Authors: Yasmine Derradji, Agata Mosinska, Stefanos Apostolopoulos, Carlos Ciller, Sandro De Zanet & Irmela Mantel
Date: Nov. 2021
Link: Scientific Reports
Method design. (a) A schematic illustration of our training method with a layer segmentation prior. The input to the neural network is an OCT B-scan. The CNN output is a probability map for an atrophic region with a vertical span corresponding to the RPE layer and choroid. The loss is computed in 2D between the prediction and the RORA ground truth masked with the RPE and choroid. (b) A schematic illustration of the training approach without the layer segmentation prior. As the vertical span of the ground-truth bounding box is undefined, the loss is computed in 1D between the maximum-probability projections of the prediction and the ground truth. (c) Inference workflow: each B-scan in a test volume is fed to the CNN, which outputs a RORA prediction. To obtain an en face view, the predictions are max-projected and thresholded at 0.5.
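The inference step in (c) reduces to a max-projection along depth followed by thresholding at 0.5. A minimal numpy sketch (function name and array layout are assumptions on our part):

```python
import numpy as np

def enface_from_bscan_probs(prob_volume, threshold=0.5):
    """Collapse per-B-scan probability maps into a binary en face map.

    prob_volume: array of shape (n_bscans, depth, width) holding the CNN's
    per-pixel RORA probabilities. The depth axis is max-projected and the
    result thresholded, mirroring the workflow described in the caption.
    """
    enface_prob = prob_volume.max(axis=1)   # max-projection along depth
    return enface_prob >= threshold
```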
Source: Derradji, Y. et al. - Sci Rep 11, 21893 (2021)
Background/rationale Artificial intelligence (AI)-based clinical decision support tools, being developed across multiple fields in medicine, need to be evaluated for their impact on the treatment and outcomes of patients as well as optimisation of the clinical workflow. The RAZORBILL study will investigate the impact of advanced AI segmentation algorithms on the disease activity assessment in patients with neovascular age-related macular degeneration (nAMD) by enriching three-dimensional (3D) retinal optical coherence tomography (OCT) scans with automated fluid and layer quantification measurements.
Methods RAZORBILL is an observational, multicentre, multinational, open-label study comprising two phases. (a) Clinical data collection (phase I): an observational study design, enforcing neither a strict visit schedule nor a mandated treatment regimen, was chosen as appropriate to collect data in a real-world clinical setting and enable the evaluation in phase II. (b) OCT enrichment analysis (phase II): de-identified 3D OCT scans will be evaluated for disease activity. Within this evaluation, investigators will review the scans once enriched with segmentation results (i.e., highlighted and quantified pathological fluid volumes) and once in their original (i.e., non-enriched) state. The review will follow an integrated crossover design, where investigators serve as their own controls, allowing the analysis to account for differences in expertise and individual disease activity definitions.
Conclusions To apply novel AI tools to routine clinical care, their benefit as well as their operational feasibility need to be carefully investigated. RAZORBILL will inform on the value of AI-based clinical decision support tools, clarifying whether they can be implemented in the clinical treatment of patients with nAMD and whether they allow for optimisation of individualised treatment in routine clinical care.
Source: Holz FG, Abreu-Gonzalez R, Bandello F, et al. Does real-time artificial intelligence-based visual pathology enhancement of three-dimensional optical coherence tomography scans optimise treatment decision in patients with nAMD? Rationale and design of the RAZORBILL study. British Journal of Ophthalmology, Published Online First: 06 August 2021. doi: 10.1136/bjophthalmol-2021-319211
Authors: Frank G Holz, Rodrigo Abreu-Gonzalez, Francesco Bandello, Renaud Duval, Louise O'Toole, Daniel Pauleikhoff, Giovanni Staurenghi, Armin Wolf, Daniel Lorand, Andreas Clemens, Benjamin Gmeiner
Date: August 2021
Flow diagram of the RAZORBILL study. Collection of clinical data (including 3D OCT scans) will be performed during routine clinical care (phase I). The data will be stored in the Discovery platform. Analysis of 3D OCT scans will be conducted after 3D OCT scans are partially enriched via segmentation algorithms (phase II). 3D, three-dimensional; OCT, optical coherence tomography.
The purpose of this peer-reviewed publication is to assess the potential of machine learning to predict low and high treatment demand in real life in patients with neovascular age-related macular degeneration (nAMD), retinal vein occlusion (RVO), and diabetic macular edema (DME) treated according to a treat-and-extend regimen (TER).
The study considered eyes of 340 patients with nAMD and 333 eyes (285 patients) with RVO or DME treated with anti-vascular endothelial growth factor (anti-VEGF) agents according to a predefined TER from 2014 through 2018. Eyes were grouped by disease into low, moderate, and high treatment demands, defined by the average treatment interval (low, ≥10 weeks; high, ≤5 weeks; moderate, remaining eyes). Two models were trained to predict the probability of the long-term treatment demand of a new patient. Both models use morphological features automatically extracted from the OCT volumes at baseline and after two consecutive visits, as well as patient demographic information.
Results and Conclusions: Based on the first 3 visits, it was possible to predict low and high treatment demand in nAMD eyes and in RVO and DME eyes with similar accuracy. The distribution of low, high, and moderate demanders was 127, 42, and 208, respectively, for nAMD and 61, 50, and 222, respectively, for RVO and DME. The nAMD-trained models yielded mean AUCs of 0.79 and 0.79 over the 10-fold cross-validation for low and high demand, respectively. Models for RVO and DME showed similar results, with mean AUCs of 0.76 and 0.78 for low and high demand, respectively. Importantly, this study revealed that low demand can be predicted reasonably well at the first visit, before the first injection.
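The demand grouping defined above (average treatment interval ≥10 weeks for low demand, ≤5 weeks for high, the rest moderate) maps directly to a tiny rule; the function name is ours:

```python
def demand_group(avg_interval_weeks):
    """Classify an eye's treatment demand from its average injection interval,
    using the thresholds defined in the study."""
    if avg_interval_weeks >= 10:
        return "low"
    if avg_interval_weeks <= 5:
        return "high"
    return "moderate"
```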
Source: Gallardo M. et al. Ophthalmology Retina, 5(7), 2021 https://doi.org/10.1016/j.oret.2021.05.002
Authors: Mathias Gallardo, Marion R. Munk, Thomas Kurmann, Sandro De Zanet, Agata Mosinska, Isıl Kutlutürk Karagoz, Martin S. Zinkernagel, Sebastian Wolf and Raphael Sznitman
Date: May 2021
Schematic procedure of the treat-and-extend protocol used in the University Hospital of Bern, Bern, Switzerland. IVT = intravitreal injection; nAMD = neovascular age-related macular degeneration; Ret. Vasc. = retinal vascular.
Illustration of the Treat-and-Extend procedure used in daily clinical practice for treating chronic diseases such as AMD, DME and RVO-related ME. At each visit, (1) an OCT scan of the retina is acquired for diagnosis and monitoring, (2) the patient's visual acuity is tested, (3) the patient receives an anti-VEGF injection, and (4) the clinician decides whether to extend the time interval between two consecutive visits based on the outcomes observed at the current visit. Right: illustration of our algorithm for predicting treatment demand using data from an early stage of the treatment procedure.
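The extend-or-shorten decision in step (4) can be illustrated with a toy update rule. This is a deliberately simplified sketch: the step size and interval bounds below are hypothetical and not taken from the study or the Bern protocol.

```python
def next_interval(current_weeks, disease_active, step=2, min_weeks=4, max_weeks=16):
    """Hypothetical treat-and-extend update: shorten the visit interval when
    disease activity is observed, extend it otherwise, within fixed bounds.
    Step size and bounds are illustrative only."""
    if disease_active:
        return max(min_weeks, current_weeks - step)
    return min(max_weeks, current_weeks + step)
```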
Source: ©Mathias Gallardo, Artificial Intelligence in Medical Imaging Lab, ARTORG Center
In this peer-reviewed publication, we develop a reliable algorithm for the automated identification, localization, and volume measurement of exudative manifestations in neovascular age-related macular degeneration (nAMD), including intraretinal fluid (IRF), subretinal fluid (SRF), and pigment epithelium detachment (PED), using a deep-learning approach.
The deep-learning algorithm simultaneously achieves a high level of performance for the identification and volume measurement of IRF, SRF, and PED in nAMD, providing accurate and repeatable predictions. Including layer segmentation during training and a squeeze-and-excitation block in the network architecture were shown to boost performance.
Source: Mantel et al., TVST 2021, 10 (17) - https://doi.org/10.1167/tvst.10.4.17
Authors: Irmela Mantel; Agata Mosinska; Ciara Bergin; Maria Sole Polito; Jacopo Guidotti; Stefanos Apostolopoulos; Carlos Ciller; Sandro De Zanet
Date: April 2021
In this report, we compare OCT drusen volume determined by two different OCT devices (Heidelberg Spectralis OCT and Zeiss PlexElite SS-OCT) using manufacturers’ software and a customized, third party segmentation software. We further compare the automatically assessed drusen volume obtained by these machines with that after manual correction of the automated segmentation.
Comparability of drusen volume among different OCT devices and algorithms is of importance, as drusen changes are a hallmark of AMD progression. These changes are tracked in the daily clinical workflow and as surrogate outcome measurements in multicenter trials.
Source: Beck et al., J. Clin. Med. 2020, 9 (8), 2657 - https://doi.org/10.3390/jcm9082657
Authors: Marco Beck, Devika S. Joshi, Lieselotte Berger, Gerd Klose, Sandro De Zanet, Agata Mosinska, Stefanos Apostolopoulos, Andreas Ebneter, Martin S. Zinkernagel, Sebastian Wolf and Marion R. Munk
Date: August 2020
Link: J. Clin. Med. 2020
In this work, we evaluated customized post-processing software for automatic retinal OCT B-scan enhancement, providing noise reduction, contrast enhancement and improved depth quality, applicable to Heidelberg Engineering Spectralis OCT devices. A trained deep neural network was used to process images from an OCT dataset with ground-truth biomarker gradings.
Performance was assessed by two expert graders, who showed a clear preference for the enhanced over the original B-scans. Objective measures such as SNR and noise estimation showed a significant improvement in quality. Presence grading of seven biomarkers (IRF, SRF, ERM, drusen, RPD, GA and iRORA) resulted in similar intergrader agreement, with improvement for IRF and RPD and remaining disagreement for high-variance biomarkers such as GA and iRORA.
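SNR figures like those mentioned above depend on the estimator chosen; the text does not specify which one was used. One common convention, mean signal over background standard deviation expressed in decibels, can be sketched as follows (function name and masking scheme are illustrative):

```python
import numpy as np

def snr_db(image, signal_mask):
    """SNR in decibels: mean intensity inside the signal region over the
    standard deviation of the background (everything outside the mask)."""
    signal = image[signal_mask].astype(float).mean()
    noise = image[~signal_mask].astype(float).std()
    return 20.0 * np.log10(signal / noise)
```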
Source: Apostolopoulos et al., Nature Sci. Reports, 2020
Authors: Stefanos Apostolopoulos, Jazmín Salas, José L. P. Ordóñez, Shern Shiou Tan, Carlos Ciller, Andreas Ebneter, Martin Zinkernagel, Raphael Sznitman, Sebastian Wolf, Sandro De Zanet & Marion R. Munk
Date: May 2020
Link: Nature Sci-rep Article
Retinal biological markers play a critical role in the management of chronic eye conditions and in the development of new therapeutics. While many imaging technologies used today can visualise these, OCT is often the tool of choice due to its ability to image retinal structures in 3D.
With widespread use in clinical routine, the quantity of scans acquired worldwide is surpassing the capacity of retinal specialists to inspect them in meaningful ways. Instead, automated analysis of scans using machine learning algorithms provides a cost-effective and reliable alternative to assist ophthalmologists in clinical routine and research.
In this paper, the authors present a machine learning method capable of consistently identifying a wide range of common retinal biomarkers from OCT scans. This approach avoids the need for costly segmentation annotations and allows scans to be characterised by biomarker distributions. These can then be used to classify scans based on their underlying pathology in a device-independent way.
Source: Kurmann et al., Nature Sci. Reports, 2019
Authors: Thomas Kurmann, Siqing Yu, Pablo Márquez-Neila, Andreas Ebneter, Martin Zinkernagel, Marion R. Munk, Sebastian Wolf & Raphael Sznitman
Date: September 2019, Nature Sci-Rep.
Link: Nature Sci-Rep article
Image registration, the process of aligning two or more images into the same global spatial reference, is a crucial task in fields like computer vision, pattern recognition and medical image analysis.
This article presents a novel CNN-based feature point detector, GLAMpoints, learned in a semi-supervised manner and trained using reinforcement learning strategies. As a result, we avoid the limitations of point matching and transformation estimation being non-differentiable.
Our detector extracts repeatable, stable interest points with a dense coverage, specifically designed to maximise the correct matching in a specific domain, in contrast to conventional techniques that optimise indirect metrics.
To illustrate the performance of our approach, we apply our method to challenging 2D retinal slit-lamp images, for which classical detectors yield unsatisfactory results due to low image quality and a scarcity of low-level features. We show that GLAMpoints significantly outperforms classical detectors as well as state-of-the-art CNN-based methods in matching ability and registration quality.
Authors: Truong, P., Mosinska, A., Ciller, C., Apostolopoulos, S., De Zanet, S.I.
Date: Oct. 2019, ICCV 2019 - Seoul, Korea
Data validation is the process of ensuring that the input to a data processing pipeline is correct and useful. It is a critical part of software systems running in production. Image processing systems are no different, whereby problems with data acquisition, file corruption or data transmission, may lead to a wide range of unexpected issues in the acquired images.
Until now, most image processing systems of this type have involved a human in the loop who could detect these errors before further processing. But with the advent of powerful deep learning methods, tools for medical image processing are becoming increasingly autonomous and can go from data acquisition to final medical diagnosis without any human interaction. However, deep networks are known for their inability to detect corruption or errors in their input data.
This paper presents a deep validation method that learns to measure how correct a given image looks. We experimentally assessed the validity of our method and compared it with different baselines, reaching an improvement of more than 10 percentage points on all considered datasets.
Authors: Marquez-Neila, P. & Sznitman, R.
Date: Oct. 2019, MICCAI, Shenzhen, China
Link: MICCAI 2019
Optical Coherence Tomography (OCT) is the primary imaging modality for detecting pathological biomarkers associated with retinal diseases such as Age-Related Macular Degeneration. In practice, clinical diagnosis and treatment strategies are closely linked to biomarkers visible in OCT volumes, and the ability to identify these plays an important role in the development of ophthalmic pharmaceutical products.
In this article, we present a method that automatically predicts the presence of biomarkers in OCT cross-sections by incorporating information from the entire volume. We do so by adding a bidirectional LSTM to fuse the outputs of a Convolutional Neural Network that predicts individual biomarkers. As a consequence, we avoid the need for pixel-wise annotations to train our method, yet still provide fine-grained biomarker information. We furthermore show that our approach imposes coherence between biomarker predictions across volume slices and that our predictions are superior to several existing approaches for the same task.
Authors: Kurmann, T. , Márquez-Neila, P., Yu, S., Munk, M., Wolf, S. & Sznitman, R.
Date: Oct. 2019, MICCAI, Shenzhen, China
The automatic segmentation of fluid deposits in OCT imaging enables clinically relevant quantification and monitoring of eye disorders over time. Eyes with late-stage diseases are particularly challenging to segment, as their shape is often highly warped and presents a high variability between different devices, specifications and scanning times.
In this context, the RetinAI team proposed a novel fully-Convolutional Neural Network (CNN) architecture which combines dilated residual blocks in an asymmetric U-shape configuration, and can simultaneously segment and classify cysts in pathological eyes.
This article presents a validation of our approach on the RETOUCH challenge dataset from the Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2017 conference.
Date: Sept. 2017 - MICCAI
Multi-label classification (MLC) problems are becoming increasingly popular in the context of medical imaging. This has in part been driven by the fact that acquiring annotations for MLC is far less burdensome than for semantic segmentation and yet provides more expressiveness than multi-class classification.
However, to train MLCs, most methods have resorted to similar objective functions as with traditional multi-class classification settings. We show in this work that such approaches are not optimal and instead propose a novel deep MLC classification method in affine subspace. At its core, the method attempts to pull features of class-labels towards different affine subspaces while maximising the distance between them.
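The geometric primitive behind this idea is the distance from a feature vector to an affine subspace. The sketch below illustrates only that primitive, not the authors' actual training loss; the function name and the orthonormal-basis assumption are ours.

```python
import numpy as np

def dist_to_affine_subspace(x, origin, basis):
    """Euclidean distance from point x to the affine subspace
    {origin + basis @ t}, where basis is a (d, k) matrix with
    orthonormal columns spanning the subspace directions."""
    v = np.asarray(x, float) - np.asarray(origin, float)
    proj = basis @ (basis.T @ v)       # orthogonal projection onto the span
    return float(np.linalg.norm(v - proj))
```

In the affine-subspace view of MLC, each label is associated with such a subspace, and training pulls features of positive samples toward it while pushing the subspaces apart.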
In this paper we evaluate the method using two MLC medical imaging datasets and show a large performance increase compared to previous multi label frameworks.
Authors: Kurmann, T. , Márquez-Neila, P., Wolf, S. & Sznitman, R.
Date: Oct. 2019, MICCAI, Shenzhen, China
The automatic segmentation of retinal layer structures provides clinically-relevant quantification and monitoring of eye disorders over time in OCT. Eyes with late-stage diseases are particularly challenging to segment, as their shape is highly warped due to the presence of pathological biomarkers.
RetinAI has proposed an algorithm which combines dilated residual blocks in an asymmetric U-shape network configuration and can segment multiple layers of highly pathological eyes in one shot. Our so-called BRUnet architecture enables accurate segmentation of retinal layers by modeling the optimization as a supervised regression problem. Using fewer computational resources, our strategy achieves superior segmentation performance compared to both state-of-the-art deep learning architectures and other OCT segmentation methods.
Date: Feb. 2017 - MICCAI
Visual inspection of Optical Coherence Tomography (OCT) volumes remains the main method for AMD identification, but it is time-consuming, as each cross-section within the volume must be inspected individually by the clinician. In much the same way, acquiring ground-truth information for each cross-section is expensive and time-consuming.
In this paper, we present a new strategy for automatic pathology identification in OCT C-scans: a novel Convolutional Neural Network (CNN) architecture, named RetiNet, that directly estimates the state of a C-scan solely from the image data, without any additional information.
Date: Oct. 2016
List of scientific publications
Montesel, A., Gigon, A., Mosinska, A. et al. Automated foveal location detection on spectral-domain optical coherence tomography in geographic atrophy patients, Graefe's Archive for Clinical and Experimental Ophthalmology, 1-10, Jan 2022
Gigon, A., Mosinska, A., Montesel, A. et al. Personalized Atrophy Risk Mapping in Age-Related Macular Degeneration. Translational Vision Science & Technology, November 2021, Vol. 10, 18
Derradji, Y., Mosinska, A., Apostolopoulos, S. et al. Fully-automated atrophy segmentation in dry age-related macular degeneration in optical coherence tomography. Sci Rep (11), 21893 (2021)
Gallardo M. et al. Machine Learning Can Predict Anti-VEGF Treatment Demand in a Treat-and-Extend Regimen for Patients with Neovascular AMD, DME, and RVO-Associated Macular Edema, Ophthalmology Retina, 5(7), 2021
Mantel, I., Mosinska, A., Bergin, C., Polito, MS., Guidotti, J., Apostolopoulos, S., Ciller, C., De Zanet, S. Automated Quantification of Pathological Fluids in Neovascular Age-Related Macular Degeneration, and Its Repeatability Using Deep Learning, TVST, April 2021 10 (17)
Beck, M., Joshi, D.S., Berger, L., Klose, G., De Zanet, S., Mosinska, A., Apostolopoulos, S., Ebneter, A., Zinkernagel, MS., Wolf, S. and Munk, M. R., Comparison of Drusen Volume Assessed by Two Different OCT Devices, J. Clin. Med. 2020, 9 (8), 2657
Apostolopoulos, S., Salas, J., Ordóñez, J.L.P., Tan, S.S., Ciller, C., Ebneter, A., Zinkernagel, M., Sznitman, R., Wolf, S., De Zanet, S. & Munk, M.R., Automatically Enhanced OCT Scans of the Retina: A proof of concept study, Nature Scientific Reports, May 2020
Kurmann, T. , Yu, S. , Márquez-Neila, P., Ebneter, A., Zinkernagel, M., Munk, MR., Wolf, S. & Sznitman R., Expert-level Automated Biomarker Identification in Optical Coherence Tomography Scans, Nature Scientific Reports, September 2019
Truong, P., Mosinska, A., Ciller, C., Apostolopoulos, S., De Zanet, S.I. GLAMpoints: Greedily Learned Accurate Match points, ICCV, Seoul - Korea, November 2019
Marquez-Neila, P. & Sznitman, R., Image data validation for medical systems, MICCAI 2019 Shenzhen, China, October 2019
Kurmann, T. , Márquez-Neila, P., Yu, S., Munk, M., Wolf, S. & Sznitman, R., Fused Detection of Retinal Biomarkers in OCT Volumes, MICCAI 2019 Shenzhen, China, October 2019
Kurmann, T. , Márquez-Neila, P., Wolf, S. & Sznitman, R., Deep Multi Label Classification in Affine Subspaces, MICCAI 2019 Shenzhen, China, October 2019
Bogunović, H. , Venhuizen, F., Klimscha, S. , Apostolopoulos, S. et al. RETOUCH-The Retinal OCT Fluid Detection and Segmentation Benchmark and Challenge, IEEE Transactions on Medical Imaging, February 2019
Giannakaki-Zimmermann, H., Huf, W., Schaal, K.B., Schürch, K., Dysli, C., Dysli, M., Zenger, A., Ceklic, L., Ciller, C., Apostolopoulos, S., De Zanet, S., Sznitman, R., Ebneter, A., Zinkernagel, MS., Wolf, S., Munk, M., Comparison of choroidal thickness measurements using spectral domain optical coherence tomography in six different settings and with customised automated segmentation, Translational Vision Science & Technology, May 2019
Ciller, C., De Zanet, S. et al. Multi-channel MRI segmentation of eye structures and tumors using patient specific eye features, PlosOne, 2017
Ciller, C., De Zanet, S., Apostolopoulos, S. et al. Automatic Segmentation of Retinoblastoma in Fundus Image Photography using Convolutional Neural Networks, ARVO 2017, Baltimore
Apostolopoulos, S., De Zanet, S., Ciller, C. et al. Pathological OCT Retinal Layer Segmentation Using Branch Residual U-style Networks, MICCAI Quebec & Arxiv, 2017
Apostolopoulos, S. et al. Efficient OCT volume reconstruction from slit lamp microscopes, IEEE TBME, 2017
Apostolopoulos, S., Ciller, C., De Zanet, S. et al. RetiNet: Automatic AMD identification in OCT volumetric data, Arxiv, 2016
De Zanet, S. et al. Retinal slit lamp video mosaicking, International Journal of Computer Assisted Radiology and Surgery, 2016
De Zanet, S. , Ciller, C. et al. Landmark Detection for Fusion of Fundus and MRI Toward a Patient Specific Multi-modal Eye Model, IEEE TBME, 2015
Ciller, C., De Zanet, S. et al. Automatic Segmentation of the eye in 3D MRI: A novel statistical shape model for treatment planning of retinoblastoma, Int. J. Radiation Oncology Biology Physics (Red Journal), 2015