Abstract

The escalating prevalence of cognitive disorders, notably Alzheimer's disease (AD), necessitates accessible and cost-effective diagnostic tools for early detection. Changes in both eye movements and speech patterns have been associated with cognitive decline, and combining eye-tracking and speech analysis technologies may offer advantages in detecting it. While traditional lab-grade eye-tracking systems are effective, their widespread adoption is hindered by cost and accessibility. Recent advancements have explored the feasibility of low-cost webcam-based systems, yet challenges persist in accurately classifying eye movements due to noise and lower precision. Our study evaluates a proposed system for cognitive assessment that combines fixation and saccade measurements from webcam-based eye-tracking data with synchronized speech data obtained during cognitive tasks. We extend a previously proposed algorithm to seamlessly combine synchronized eye-tracking and speech data streams for comprehensive analysis. The results demonstrate promising accuracy for the proposed methods in identifying fixations, saccades, and speech onsets during oral object identification, with only minor deviations from manual annotations. Specifically, the comparison between predicted and actual onset times for fixations and saccades reveals minimal discrepancies, suggesting the algorithm meets the required performance. Moreover, assessing the onset of oral identification relative to fixations provides valuable insight into cognitive processing and into the time subjects take to name the object they have just fixated on. Our study contributes to advancing AD research and offers potential for developing innovative diagnostic tools.
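As an illustration of the onset-time comparison described above, the following sketch computes naming latencies from synchronized fixation and speech onsets. The timestamps and the matching rule (pair each speech onset with the most recent preceding fixation) are hypothetical assumptions for illustration, not the paper's actual algorithm:

```python
from bisect import bisect_right

def naming_latencies(fixation_onsets, speech_onsets):
    """For each speech (naming) onset, find the most recent preceding
    fixation onset and return the latency between them, in seconds.
    Speech onsets with no preceding fixation are skipped."""
    fixation_onsets = sorted(fixation_onsets)
    latencies = []
    for s in sorted(speech_onsets):
        # Index of the last fixation that started at or before this speech onset
        i = bisect_right(fixation_onsets, s) - 1
        if i >= 0:
            latencies.append(round(s - fixation_onsets[i], 3))
    return latencies

# Hypothetical onset times (seconds) from synchronized data streams
fixations = [0.50, 2.10, 4.30]
speech = [1.20, 2.90, 5.00]
print(naming_latencies(fixations, speech))  # [0.7, 0.8, 0.7]
```

In practice the two streams would first need a shared clock; here both lists are assumed to already use the same time base.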