Wednesday, February 28, 2018

Google's AI Uses Retinal Images to Reveal Cardiovascular Risk


Deep machine learning can extract and quantify several risk factors for cardiovascular disease (CVD) from photographs of the retinal fundus, according to findings published online February 19 in Nature Biomedical Engineering.

Traditional risk factors for CVD include age, sex, smoking status, blood pressure, body mass index, and blood glucose and cholesterol levels. However, a major limitation in considering these risk factors is that many people do not know all of their values, particularly serum cholesterol, for which body mass index is sometimes used as a substitute.

Another way to assess CVD risk may be from retinal images, which are easily obtained in an outpatient setting. Retinal anatomy may reveal cardiovascular status through the presence of cholesterol emboli, hypertensive retinopathy, and details of blood vessel caliber, bifurcation and further branching patterns, and tortuosity.

Ryan Poplin, from Google Research, Mountain View, California, and colleagues applied deep learning to assess retinal images. Deep learning is a type of machine learning that can transcend the “round up the usual suspects” approach of seeking only what experts prespecify, and instead forms algorithms that recognize predictive features from dense data. The approach has been applied to images to diagnose melanoma and diabetic retinopathy.

The deep-learning models were trained to recognize elevated CVD risk on data from 284,335 patients (48,101 from the UK Biobank and 236,234 from EyePACS) and were validated using two independent datasets of 12,026 patients (from the UK Biobank) and 999 patients (from EyePACS). The UK Biobank represents the general population; the EyePACS group is mostly Hispanic individuals being screened for diabetic retinopathy. The mean ages were similar, at 56.9 ± 8.2 years for UK Biobank participants and 54.9 ± 10.9 years for EyePACS participants.
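As a rough illustration of this kind of setup (not the authors' published code), the sketch below regresses a single continuous risk factor, such as age, from fundus photographs. The Keras framework, the InceptionV3 backbone, and the dataset objects are all assumptions made for the example; the study's actual architecture and training details may differ.

# Minimal sketch: regressing a risk factor from fundus images (illustrative only).
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

# ImageNet-pretrained backbone; pooling="avg" yields one feature vector per image.
backbone = InceptionV3(include_top=False, weights="imagenet",
                       input_shape=(299, 299, 3), pooling="avg")

# Single-output regression head, e.g. predicted age in years.
model = models.Sequential([
    backbone,
    layers.Dense(1),
])
model.compile(optimizer="adam",
              loss="mse",
              metrics=[tf.keras.metrics.MeanAbsoluteError()])

# train_ds / val_ds are hypothetical tf.data.Dataset objects yielding
# (fundus_image, age) pairs, standing in for the training and validation sets.
# model.fit(train_ds, validation_data=val_ds, epochs=10)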

The strategy predicted CVD risk factors that were not previously known to be quantifiable from images of the retina, including age, with a mean absolute error of less than 3.5 years in both validation datasets. The algorithm also accurately predicted systolic and diastolic blood pressure, body mass index, and HbA1c.
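For readers unfamiliar with the metric, mean absolute error is simply the average of the absolute differences between predicted and actual values. The toy calculation below uses made-up ages, not data from the study.

import numpy as np

# Illustrative values only.
actual_age    = np.array([56, 61, 48, 70, 53])
predicted_age = np.array([59, 58, 50, 66, 55])

mae = np.mean(np.abs(predicted_age - actual_age))
print(f"Mean absolute error: {mae:.2f} years")  # 2.80 years for these toy values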

Moreover, the presence of diabetic retinopathy did not alter the identification of CVD risk factors, Poplin and colleagues note.

The researchers also trained a model to predict the onset of major adverse cardiovascular events within 5 years. The endpoint was only available for the relatively healthy UK Biobank group. The researchers identified 631 events among the 48,101 individuals, with 150 of those in a validation subset.

“Despite the limited number of events, our model achieved an area under the receiver operating characteristic curve (AUC) of 0.70 (95% [confidence interval (CI)]: 0.648 to 0.740) from retinal fundus images alone, comparable to an AUC of 0.72 (95% CI: 0.67 to 0.76) for the composite European SCORE risk calculator,” the authors write.
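The area under the receiver operating characteristic curve summarizes how well predicted risk scores rank patients who went on to have an event above those who did not (0.5 is chance, 1.0 is perfect ranking). The sketch below computes it with scikit-learn on invented labels and scores, not the study's data.

import numpy as np
from sklearn.metrics import roc_auc_score

# Illustrative only: 1 = major adverse cardiovascular event within 5 years.
y_true  = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 0])
y_score = np.array([0.10, 0.35, 0.62, 0.20, 0.25, 0.15, 0.40, 0.80, 0.05, 0.30])

print("AUC:", roc_auc_score(y_true, y_score))  # about 0.86 for these toy values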

The trained deep-learning models zeroed in on particular anatomical structures that made sense in terms of the predictions, and these were validated by ophthalmologists who, blinded to the prediction task, reviewed 100 randomly chosen retinal images. The models trained to detect standard risk factors (age, smoking, and systolic blood pressure) highlighted vasculature, whereas those trained to predict HbA1c indicated the perivascular areas. Models trained to predict sex highlighted predominantly the optic disc, blood vessels, and macula.
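One common way to produce this kind of highlighting is a gradient-based saliency map, which measures how strongly each input pixel influences the model's output. The sketch below is a generic TensorFlow illustration of that idea and is not necessarily the attention technique used in the paper.

import tensorflow as tf

def saliency_map(model, image):
    """Gradient of the model's scalar prediction with respect to input pixels.

    Generic visualization sketch; `image` is assumed to be a (299, 299, 3)
    float tensor matching the model's input size.
    """
    image = tf.expand_dims(tf.convert_to_tensor(image), axis=0)
    with tf.GradientTape() as tape:
        tape.watch(image)
        prediction = model(image)             # e.g., predicted age
    grads = tape.gradient(prediction, image)  # pixel-level influence
    return tf.reduce_max(tf.abs(grads), axis=-1)[0]  # (299, 299) heat map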

The researchers, referring to the detected structures and anatomical regions, conclude “we demonstrate not only that these signals are present in the retina, but that they are also quantifiable to a degree of precision not reported before.” They suggest that existing diabetic retinopathy screening programs could be used to also assess cardiovascular disease risk factors.

Limitations of the study include only using images with a 45° field of view, a data set smaller than average for deep learning investigations, and inconsistent availability of lipid data and diabetes diagnoses.

The authors are employees of Google and Verily Life Sciences.

Nat Biomed Eng. Published online February 19, 2018. Abstract




