
Using deep learning to process raw photoacoustic channel data and guide cardiac interventions

Cardiovascular diseases rank among the top causes of death worldwide, and cardiac interventions are correspondingly common. For example, cardiac catheter ablation procedures, which are used to treat arrhythmias, number in the tens of thousands per year in the US alone. In these procedures, surgeons insert a thin, flexible tube called a catheter into the femoral vein in the leg and navigate it up to the heart, where the problematic tissue is destroyed using heat (radiofrequency ablation) or extreme cold (cryoablation).

Even though cardiac catheter-based procedures are considered minimally invasive, the position of the catheter tip must be carefully monitored and controlled to prevent damage to the heart. In most cases, surgeons rely on fluoroscopy to localize and guide the catheter tip. However, this approach exposes both the patient and the medical staff to ionizing radiation, which can lead to problems such as increased risk of cancer or birth defects.

An alternative method to guide cardiac catheters involves photoacoustic imaging. In this approach, short laser pulses are delivered through an optical fiber attached to the catheter; the illuminated tissue absorbs the light and emits ultrasound waves, which an external ultrasound transducer then records.

Photoacoustic images generated this way can be used to guide robotic arms that manipulate the cardiac catheter, enhancing precision and minimizing risk. However, the algorithms used to automatically detect photoacoustic sources near the catheter tip in these images are susceptible to errors such as reflection artifacts.

A research team led by Muyinatu A. Lediju Bell, the John C. Malone Associate Professor in the Whiting School of Engineering at Johns Hopkins University, U.S., has been working toward a solution to this issue. As reported in a new study published in the Journal of Biomedical Optics, they have developed a new approach for cardiac catheter localization through photoacoustic imaging by leveraging machine learning.

The researchers proposed using a deep convolutional neural network (CNN) to pinpoint the position of cardiac catheter tips in photoacoustic images. However, deep neural networks need to be trained on very large datasets to perform correctly, which would require hours of manual image acquisition and annotation. To circumvent this problem, the team turned to simulated data.
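
To make the idea concrete, the block below is a minimal, illustrative sketch (in PyTorch) of a CNN that takes a raw photoacoustic channel-data frame and outputs an estimated catheter-tip position. The input dimensions, layer sizes, and coordinate-regression formulation are assumptions for illustration and do not reproduce the architecture used in the study.

import torch
import torch.nn as nn

class TipLocalizerCNN(nn.Module):
    """Toy CNN mapping a channel-data frame to a (lateral, depth) estimate."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # collapse to a 64-dimensional descriptor
        )
        self.head = nn.Linear(64, 2)   # predicted (lateral, depth) coordinates

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Example: one frame of 256 time samples x 128 transducer elements
frame = torch.randn(1, 1, 256, 128)
tip_xy = TipLocalizerCNN()(frame)      # estimated tip position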

“We trained the network with simulated channel data frames which we formatted to accommodate the field of view of the photoacoustic transducer, including multiple noise levels, signal amplitudes, and sound speeds, to ensure robustness against channel noise, target amplitude, and sound speed differences,” said Bell. To make the CNN more robust, the training dataset also included simulated images with artifacts.
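
As a rough illustration of how such training frames might be produced, the sketch below generates synthetic channel data for a point source using a simple time-of-flight model, randomizing the noise level, signal amplitude, and sound speed for each frame. The array geometry, sampling rate, and parameter ranges are assumed values; studies of this kind typically rely on dedicated acoustic simulators such as k-Wave.

import numpy as np

def simulate_frame(src_x, src_z, c, amplitude, noise_std,
                   n_elements=128, pitch=0.3e-3, fs=40e6, n_samples=4096,
                   rng=None):
    """Point source at (src_x, src_z) [m]: each element receives a short
    pulse delayed by its distance to the source divided by the sound speed c."""
    rng = rng if rng is not None else np.random.default_rng()
    elem_x = (np.arange(n_elements) - n_elements / 2) * pitch  # element positions [m]
    frame = rng.normal(0.0, noise_std, size=(n_samples, n_elements))
    t = np.arange(n_samples) / fs                              # sample times [s]
    for e in range(n_elements):
        r = np.hypot(src_x - elem_x[e], src_z)                 # source-to-element distance
        frame[:, e] += amplitude * np.exp(-((t - r / c) * fs / 4.0) ** 2)
    return frame

# Randomize source position, sound speed, amplitude, and noise per training frame
rng = np.random.default_rng(0)
frame = simulate_frame(src_x=rng.uniform(-5e-3, 5e-3),
                       src_z=rng.uniform(20e-3, 100e-3),   # 20-100 mm depths
                       c=rng.uniform(1440.0, 1640.0),      # sound speed [m/s]
                       amplitude=rng.uniform(0.5, 2.0),
                       noise_std=rng.uniform(0.01, 0.2),
                       rng=rng)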

The researchers introduced an additional processing step called “histogram matching” to further enhance the performance of the model. In this step, acquired experimental images are automatically adjusted so that their intensity distributions resemble those of the simulated images used to train the CNN.
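
As a minimal sketch of this kind of preprocessing, the snippet below applies histogram matching with scikit-image's match_histograms; the placeholder arrays and the choice of library are illustrative, and the exact procedure used in the study may differ.

import numpy as np
from skimage.exposure import match_histograms

# Placeholder arrays standing in for an acquired experimental frame and a
# representative simulated frame from the training set
acquired = np.random.rand(256, 128)
reference = np.random.rand(256, 128)

# Remap the intensity distribution of the acquired frame so its histogram
# matches that of the simulated reference before it is passed to the CNN
matched = match_histograms(acquired, reference)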

Through ex vivo and in vivo experiments on excised swine hearts and live swine, respectively, the team demonstrated the strong performance of their deep learning-based approach. The positional errors for the catheter tip were remarkably small; most were even smaller than the resolution of the ultrasound transducer used to receive the photoacoustic signals. The network achieved Euclidean errors of 1.02 ± 0.84 mm for target depths of 20–100 mm, with precision, recall, and F1 scores as high as 100%.
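
For reference, the sketch below shows how quantities of this kind are typically computed from predicted and ground-truth tip positions. The toy coordinates and the hit/miss tolerance are assumptions made for illustration only, not values from the study.

import numpy as np

# Toy predicted and ground-truth tip positions as (lateral, depth) in mm
pred = np.array([[10.2, 40.5], [29.8, 61.1], [15.0, 80.3]])
true = np.array([[10.0, 41.0], [30.1, 60.5], [14.6, 79.9]])

# Euclidean localization error per frame, summarized as mean ± standard deviation
errors = np.linalg.norm(pred - true, axis=1)
print(f"{errors.mean():.2f} ± {errors.std():.2f} mm")

# Detection metrics from counts of true positives, false positives, and false negatives
def precision_recall_f1(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall, 2 * precision * recall / (precision + recall)

# Example: count a detection as a true positive if it lies within a 10 mm tolerance
tp = int(np.sum(errors <= 10.0))
print(precision_recall_f1(tp, fp=len(errors) - tp, fn=0))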

“Our results demonstrate the potential of the proposed method to identify photoacoustic sources in future interventional cardiology and cardiac electrophysiology applications, with the broader potential to replace fluoroscopy during these procedures,” said Bell.

Overall, this study could pave the way for safer catheter-based cardiac interventions, helping doctors and patients alike in the fight against heart disease.

More information:
Mardava R. Gubbi et al, Deep learning in vivo catheter tip locations for photoacoustic-guided cardiac interventions, Journal of Biomedical Optics (2023). DOI: 10.1117/1.JBO.29.S1.S11505

