Abstract
PURPOSE: The purpose of this study was to evaluate the effectiveness of different machine-learning models in predicting retinal sensitivity in geographic atrophy (GA) secondary to age-related macular degeneration (AMD) and to compare longitudinal progression of sensitivity loss derived from observed versus inferred data.
METHODS: Thirty patients with GA (37 eyes) were recruited for the OMEGA study. Participants underwent fundus-controlled perimetry (microperimetry) and spectral-domain optical coherence tomography (SD-OCT) imaging at baseline and at follow-up visits at weeks 12, 24, and 48. Retinal layers were segmented using a custom-written deep-learning algorithm. We compared three machine-learning models (random forest, LASSO regression, and multivariate adaptive regression splines [MARS]) for predicting retinal sensitivity across three scenarios: (1) unknown patients, (2) known patients at later visits, and (3) interpolation within visits. Predictive accuracy was evaluated using the mean absolute error (MAE), and the models' ability to reduce test variability over time was analyzed using linear mixed models.
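The scenario-1 evaluation (prediction for unknown patients) can be sketched as follows. The feature matrix, sensitivity values, and patient grouping below are synthetic stand-ins, not the OMEGA data, and the leave-patients-out cross-validation scheme is an assumption about how "unknown patients" would be held out; in the study, predictors would be OCT-derived layer thicknesses at each microperimetry stimulus location.

```python
# Minimal sketch of unknown-patient prediction, assuming synthetic data
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(0)
n_points, n_features = 500, 6
X = rng.normal(size=(n_points, n_features))    # stand-in for layer thicknesses
y = rng.uniform(0, 20, size=n_points)          # stand-in for sensitivity (dB)
patients = rng.integers(0, 30, size=n_points)  # patient ID per test point

# Leave-patients-out folds so test patients are "unknown" to the model
maes = []
for train_idx, test_idx in GroupKFold(n_splits=5).split(X, y, groups=patients):
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    maes.append(mean_absolute_error(y[test_idx], model.predict(X[test_idx])))
print(f"cross-validated MAE: {np.mean(maes):.2f} dB")
```

Grouping the folds by patient rather than by test point is what distinguishes scenario 1 from scenarios 2 and 3, where the model may additionally use the same patient's baseline measurements.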
RESULTS: The random forest model demonstrated the highest accuracy across all scenarios, with an MAE of 3.67 decibels (dB) for unknown patients, 2.96 dB for known patients at follow-up, and 3.10 dB for within-visit interpolation. In longitudinal mixed-model analysis, inferred sensitivity data showed significantly lower variability than observed data, with residual variances of 2.72 dB² versus 8.67 dB², respectively.
CONCLUSIONS: Machine-learning models, particularly the random forest model, effectively predict retinal sensitivity in patients with GA, with patient-specific baseline data improving accuracy at subsequent visits. Inferred sensitivity mapping offers a reliable functional surrogate endpoint for clinical trials, providing high spatial resolution without extensive psychophysical testing.