A4 Conference proceedings

Evaluation of Unconditioned Deep Generative Synthesis of Retinal Images


Publication Details
Authors: Kaplan Sinan, Lensu Lasse, Laaksonen Lauri, Uusitalo Hannu
Publisher: Springer Verlag (Germany)
Publication year: 2020
Language: English
Related Journal or Series Information: Lecture Notes in Computer Science
Title of parent publication: Advanced Concepts for Intelligent Vision Systems
Journal name in source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Journal acronym: LNCS
Volume number: 12002
Start page: 262
End page: 273
Number of pages: 12
ISBN: 978-3-030-40604-2
eISBN: 978-3-030-40605-9
ISSN: 0302-9743
eISSN: 1611-3349
JUFO-Level of this publication: 1
Open Access: Open Access publication
Location of the parallel saved publication: http://urn.fi/URN:NBN:fi-fe2020120198834

Abstract

Retinal images have become increasingly important in the clinical diagnostics of several eye and systemic diseases. To assist medical doctors in this work, automatic and semi-automatic diagnosis methods can be used to increase the efficiency of diagnostic and follow-up processes, as well as to enable wider disease screening programs. However, training advanced machine learning methods for improved retinal image analysis typically requires large and representative retinal image data sets. Even when large retinal image data sets are available, the occurrence of different medical conditions in them is unbalanced. Hence, there is a need to enrich the existing data sets through data augmentation and by introducing noise, which is essential for building robust and reliable machine learning models. One way to overcome these shortcomings relies on generative models for synthesizing images. To study the limits of retinal image synthesis, this paper focuses on deep generative models, including a generative adversarial network and a variational autoencoder, for synthesizing images from noise without conditioning on any information regarding the retina. The models are trained with the Kaggle EyePACS retinal image set, and to quantify the image quality in a no-reference manner, the generated images are compared with the retinal images of the DiaRetDB1 database using common similarity metrics.
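The abstract refers to quantifying image quality in a no-reference manner with common similarity metrics, without naming them. As a purely illustrative sketch (not the authors' actual evaluation pipeline), one such distributional comparison is the Fréchet distance between feature statistics of a reference image set and a generated image set; the feature extractor, function names, and array shapes below are hypothetical assumptions.

# Hypothetical sketch: comparing generated retinal images to a reference set
# (e.g., DiaRetDB1) via the Frechet distance between feature statistics.
# The metric and feature-extraction choices here are illustrative assumptions,
# not the evaluation protocol reported in the paper.
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_real: np.ndarray, feats_gen: np.ndarray) -> float:
    """Frechet distance between Gaussian fits of two feature sets (N x D)."""
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_g = np.cov(feats_gen, rowvar=False)
    cov_mean = sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(cov_mean):  # numerical noise can introduce tiny imaginary parts
        cov_mean = cov_mean.real
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * cov_mean))

# Example with random placeholder features; in practice the features would come
# from a pretrained CNN applied to the reference and synthesized retinal images.
rng = np.random.default_rng(0)
feats_reference = rng.normal(size=(500, 64))
feats_generated = rng.normal(loc=0.1, size=(500, 64))
print(frechet_distance(feats_reference, feats_generated))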


Keywords: Deep generative model, Generative adversarial network, Retinal image, Variational autoencoder
