Verifying and validating the clinical usefulness of wearable technology
A collaborative team of researchers from the Digital Medicine Society (DiMe) and biomedical engineers at Duke University has developed a framework to help data scientists and other researchers use better digital health tools for clinical purposes.
As smartwatches and other wearable technologies become more popular, researchers are exploring how the biometric data these tools collect can be used to generate useful health insights about their users. Although some of these devices are marketed as clinically validated, there are currently no standards to ensure that the data from digital medicine tools are evaluated and fit for clinical purposes.
In a new paper, Jessilyn Dunn, an assistant professor of biomedical engineering at Duke University, worked with an interdisciplinary, international team of 16 researchers from the DiMe community to propose a three-step framework for evaluating and documenting the clinical usefulness of digital health tools, addressing shortcomings in how these tools are currently assessed.
The project is one of several collaborations between Dunn and the expert DiMe community that include co-participation in a World Economic Forum initiative focused on managing epidemics with consumer wearables.
The paper appeared on April 14 in the journal npj Digital Medicine.
“In the last decade alone we’ve seen digital biomarker research increase by more than 325 percent, but we haven’t caught up with this growth in terms of standards and evaluation of digital medicine tools,” said Brinnae Bent, a Ph.D. student in the Dunn lab and one of the authors of the paper. “Our main goal was to develop a common framework for evaluating these new technologies, but we also wanted to create a unifying language for the field so there’s structure as it grows.”
During the evaluation of more traditional medical devices, engineers will verify the software and sensor technology of a product and validate that the end product can accurately measure what it claims to measure, like heart rate or activity levels.
But there is no set definition of what “clinically validated” means for wearable devices. For example, companies may advertise their heart rate measurements as accurate and clinically validated, even though they may have only been tested across a limited range of environments, body types and activity types.
These testing discrepancies can lead to varying levels of data accuracy, which Dunn and her lab demonstrated in their February 2020 npj Digital Medicine paper.
In their new paper, lead author Jennifer Goldsack, the executive director at DiMe, and Dunn introduce the term Biometric Monitoring Technologies, or BioMeTs, as a standardized description for all technologies that combine sensors and other hardware with software to collect biometric data.
The first step of their “V3” framework is verification, where hardware manufacturers will test sample-level sensor outputs of the devices. In the next step, analytical validation, engineers, data scientists and physiologists will evaluate the algorithms that process data from the sensor to produce physiological metrics.
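In practice, analytical validation often comes down to comparing the physiological metrics a device’s algorithms report against a trusted reference measurement under a pre-specified acceptance criterion. The Python sketch below is a minimal illustration of that idea, assuming paired, time-aligned heart-rate readings from a wearable and an ECG reference; the data, the error metric (mean absolute percentage error) and the 10 percent threshold are illustrative assumptions, not prescriptions from the V3 framework.

```python
import numpy as np

def mean_absolute_percentage_error(reference, device):
    """Mean absolute percentage error (%) between reference and device readings."""
    reference = np.asarray(reference, dtype=float)
    device = np.asarray(device, dtype=float)
    return float(np.mean(np.abs(device - reference) / reference) * 100)

# Hypothetical paired heart-rate readings (beats per minute):
# an ECG-derived reference and the wearable algorithm's output,
# aligned to the same timestamps.
ecg_reference_bpm = [62, 64, 90, 121, 143, 150, 98, 71]
wearable_bpm      = [63, 66, 85, 118, 131, 139, 95, 70]

mape = mean_absolute_percentage_error(ecg_reference_bpm, wearable_bpm)
print(f"MAPE vs. ECG reference: {mape:.1f}%")

# An acceptance criterion would be fixed before testing begins;
# the 10% threshold here is purely illustrative.
print("Within illustrative 10% threshold:", mape < 10.0)
```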
The third and final step, clinical validation, is the largest departure from more traditional approaches to medical technology development. During this critical step, a technology vendor or clinical expert will demonstrate that the BioMeT can acceptably identify, measure or predict the clinical, biological, physical or functional experience that it was intended to capture in the target population of users.
“It is essential to make sure that the BioMeTs are indeed measuring what manufacturers claim they are measuring, and do so in a manner that is open and trustworthy,” said Will Wang, a Ph.D. student in the Dunn lab and one of the co-authors of the paper. “If such information is not clinically validated, users and researchers alike will be led to unjustified conclusions.”
Now, the DiMe team, including Dunn, is working to advance the framework with the Food and Drug Administration to make it the standard methodology for evaluating all BioMeTs.