Wednesday, December 13, 2017

Artificial Intelligence Eyed as Diabetic Retinopathy Screen


A computing system using artificial intelligence is highly accurate in identifying people with diabetes who have diabetic retinopathy and related eye diseases and need to be referred for further care, a new study finds.

Results for the development and validation of a “deep learning system” using retinal images of a large, multiethnic study population with diabetes were published in the December 12 issue of the Journal of the American Medical Association by Daniel Shu Wei Ting, MD, PhD, of the Singapore National Eye Center, with an international group of colleagues.

The deep learning system (DLS) is a new artificial intelligence (AI) technology that processes large amounts of data and extracts meaningful patterns from them.

Such systems have achieved promising results compared with previous “pattern-recognition” type image analysis, principal investigator Tien Y Wong, MD, PhD, professor and medical director, Singapore National Eye Center, and chair of ophthalmology at the National University of Singapore, told Medscape Medical News.

“The entire DLS approach does not involve any subjective judgment, and the feature extraction process is entirely automatic. Numerous unconventional features…are assessed. Thus, DLS can help clinicians detect subtle changes, patterns, and abnormalities that may be overlooked or disregarded by humans,” Dr Wong said.

Indeed, the ultimate goal is to incorporate the DLS into retinal cameras that could be used in a variety of locations, including primary care offices, pharmacies, and even retail settings, to screen people with diabetes and identify those who need to see an ophthalmologist.

The approach would be expected to save a considerable amount of money and healthcare resources, according to study coauthor Rohit Varma, MD, MPH, professor of ophthalmology and preventive medicine at the Keck School of Medicine, University of Southern California, Los Angeles.  

“I think over time it will become cheaper and more available. It reduces [labor] costs and increases efficiency. We’d like to see it everywhere that people come,” Dr Varma commented.

Advantages Over Google Study: Much Larger, Diverse Population

The academic investigators aren’t yet working with a manufacturer, but some companies are also pursuing DLS technology for retinopathy screening. Among those is Google, with results on their technology published in JAMA in November 2016.

This new study overcomes some of the limitations of the Google study, which were noted at the time in an editorial by Dr Wong and another of the current coauthors. Namely, the current study population is much more diverse than Google’s, comprising a multiethnic mix of Asian, black, Hispanic, and white individuals. It also uses a much larger data set (494,661 vs 9963 retinal images).

“This is the largest and most diverse study looking at this. We wanted all different ethnic groups, particularly because pigment in retina varies by race and ethnicity. You have to have a wide range of ethnicities to make sure the DLS is truly detecting normal vs abnormal,” Dr Varma explained.

In addition, the new study also investigates the ability of the DLS to detect two other common eye conditions — possible glaucoma and age-related macular degeneration (AMD) — since that would be required of an eye screening tool in clinical practice, and to assess retinal images of varying quality from different camera types in real-world settings.

High Sensitivity, Specificity Compared With Human Graders

The DLS for referable diabetic retinopathy was developed and trained using retinal images of patients with diabetes who participated in the Singapore National Diabetic Retinopathy Screening Program (SIDRP) between 2010 and 2013; by 2015, the program had screened half of Singapore’s diabetes population.

For each patient, two digital retinal photographs (optic disc and fovea) were taken of each eye. Training the DLS entailed exposing the neural networks to a total of 76,370 retinal images (with and without each of the three conditions: diabetic retinopathy, glaucoma, and AMD); the networks then adapted to differentiate normal from abnormal images and to distinguish between conditions. Once training was complete, the DLS could be used to classify unseen images.
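The article does not describe the network architecture itself, so as a rough illustration only, the sketch below shows what a convolutional classifier for referable diabetic retinopathy might look like in Python with PyTorch. The backbone, image size, decision threshold, and synthetic data are all assumptions for demonstration, not the authors' published system.

```python
# Minimal, hypothetical sketch of a CNN classifier for "referable" vs
# "non-referable" retinal images. This is NOT the published DLS; the
# architecture, image size, and synthetic data are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

# Binary classifier built on a generic ImageNet-style backbone (assumption).
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)  # 0 = non-referable, 1 = referable

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Synthetic stand-in for a batch of graded fundus photographs (the real
# training set comprised 76,370 labeled retinal images).
images = torch.randn(8, 3, 224, 224)   # batch of 8 RGB images, 224x224
labels = torch.randint(0, 2, (8,))     # human-grader labels

# One training step: forward pass, loss, backpropagation, weight update.
model.train()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()

# After training, an unseen image is classified by thresholding the
# predicted probability of the "referable" class.
model.eval()
with torch.no_grad():
    probs = torch.softmax(model(images), dim=1)[:, 1]
    print(probs > 0.5)
```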

It was then externally validated using 10 additional multiethnic groups of participants with diabetes from different settings (community, population-based, and clinic-based) in Singapore, China, the United Kingdom, United States, and Mexico. A range of retinal cameras were used, and various levels of professionals — from ophthalmologists to trained nonmedical personnel — served as graders.

Referable diabetic retinopathy was defined as a diabetic retinopathy severity level of moderate nonproliferative diabetic retinopathy or worse, diabetic macular edema, and/or ungradable image. Vision-threatening diabetic retinopathy was defined as severe nonproliferative diabetic retinopathy and proliferative diabetic retinopathy.
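To make these definitions concrete, the small helper below applies them as a referral rule for a single graded eye. The severity scale and thresholds follow the definitions above, but the function itself is a hypothetical illustration in Python, not code from the study.

```python
# Hypothetical helper applying the study's referral definitions to one graded
# eye. The severity ordering follows the definitions above; the code itself
# is illustrative, not the authors' implementation.
from enum import IntEnum

class DRSeverity(IntEnum):
    NONE = 0
    MILD_NPDR = 1
    MODERATE_NPDR = 2
    SEVERE_NPDR = 3
    PDR = 4  # proliferative diabetic retinopathy

def classify_referral(severity: DRSeverity, macular_edema: bool, gradable: bool) -> dict:
    """Map a graded eye to the study's 'referable' and 'vision-threatening' categories."""
    referable = (
        not gradable                               # ungradable image -> refer
        or severity >= DRSeverity.MODERATE_NPDR    # moderate NPDR or worse
        or macular_edema                           # diabetic macular edema
    )
    vision_threatening = severity >= DRSeverity.SEVERE_NPDR
    return {"referable": referable, "vision_threatening": vision_threatening}

# Example: moderate NPDR without macular edema is referable but not
# vision-threatening under these definitions.
print(classify_referral(DRSeverity.MODERATE_NPDR, macular_edema=False, gradable=True))
```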

In the primary validation data set (71,896 images), the prevalence of referable diabetic retinopathy was 3.0%; vision-threatening diabetic retinopathy, 0.6%; possible glaucoma, 0.1%; and AMD, 2.5%.

Sensitivity of the DLS in detecting referable diabetic retinopathy was comparable to that of trained graders (90.5% vs 91.1%; P = .68), although the graders had higher specificity (91.6% vs 99.3%; P < .001).

For vision-threatening diabetic retinopathy, the DLS had higher sensitivity compared with trained graders (100% vs 88.5%; P < .001), but lower specificity (91.1% vs 99.6%; P < .001).

The DLS had 96.4% sensitivity and 87.2% specificity for possible glaucoma; and 93.2% sensitivity and 88.7% specificity for age-related macular degeneration compared with professional graders.
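For readers who want the metrics spelled out: sensitivity is the proportion of truly diseased (referable) images the system flags, and specificity is the proportion of disease-free images it correctly passes. The short Python sketch below computes both from binary predictions against a reference standard; the labels in the example are made up purely for illustration.

```python
# Sensitivity and specificity computed from binary predictions against a
# reference standard (here, hypothetical human-grader labels for illustration).
def sensitivity_specificity(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    sensitivity = tp / (tp + fn)   # true-positive rate: diseased cases caught
    specificity = tn / (tn + fp)   # true-negative rate: normals correctly passed
    return sensitivity, specificity

# Toy example with made-up labels (1 = referable disease present).
y_true = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 0, 0, 0, 1, 0]
sens, spec = sensitivity_specificity(y_true, y_pred)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```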

In subsidiary analyses, the DLS performed even better for both referable and vision-threatening diabetic retinopathy in a subset of 35,055 retinal images of excellent quality and performed comparably among age, sex, glycemic control, and ethnic subgroups, as well as with different cameras.

Will Technology Aid Clinicians and Save Costs?

Dr Wong told Medscape Medical News that “clinicians need to embrace technology that will improve care, reduce cost, and allow efficient use of clinician time to treat appropriate patients. They should expect that DLS in time will likely outperform clinicians in ‘simple’ type of classification of disease — that is, absent vs present disease. Clinicians will therefore likely need to see those with disease and treatable disease, rather than everyone.”

But before that happens, Dr Varma said, the researchers are aiming to improve the sensitivity and specificity of the DLS even further. “We want the false-negative rate to be as close to zero as possible. You want to err on the side of more people getting an exam rather than missing people.”

In order to do that, the group plans to expand the data set even further to reduce the error rates and also to test the DLS within the seven standard fields of retinopathy classification.

In the future, he said, they hope to also develop an algorithm for retinopathy progression to incorporate into a DLS-based tool that ophthalmologists could use in the management of patients with established retinopathy.

Such a tool might be able to identify combinations of features in the retina that ophthalmologists can’t currently see.

“It would be an additional enhancement for retina specialists to reduce the error rate and detect progression earlier.…That’s where we’re going next,” Dr Varma said.

Referring to the screening tool, he also commented, “This is an example of how technology is working toward reducing the overall costs of healthcare….Technology is expensive, but this is an example of where technology will significantly reduce the cost.

“And more important, it will reduce the burden of disease. We hope that fewer people will go blind or lose vision and will get care earlier. I think it’s a win-win for everybody.”

The project received funding from the National Medical Research Council, Ministry of Health, the Singapore National Health Innovation Center, the SingHealth Foundation, and the Tanoto Foundation, as well as from unrestricted donations to the retina division, Johns Hopkins University School of Medicine. Dr Ting and Dr Wong are coinventors on a patent for the deep learning system used in this study. Dr Varma receives funding from the US National Institutes of Health. Disclosures for the coauthors are listed in the paper.

JAMA. 2017;318:2199-2210.



