
This algorithm analyzes medical images 1,000 times faster than usual

Massachusetts Institute of Technology researchers have developed an algorithm that makes it easier and 1,000 times quicker to analyze medical images and 3D scans.

Medical image registration is a technique that overlays two medical images, such as MRI scans, so that anatomical differences can be compared and analyzed in detail. Doctors use it to superimpose scans taken at different times and track small changes in growths like tumors. The process is slow and can take two hours or more as the system tries to align millions of pixels into a combined scan.
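
To make the "slow optimization" point concrete, here is a minimal, purely illustrative Python sketch (not the clinical software): classical registration repeatedly re-positions one image and scores how well it overlaps the other. Even this toy 2D case, where only a translation has to be found, tests many candidate positions; deformable 3D registration searches a vastly larger parameter space, which is why it can take hours.

```python
import numpy as np

# Toy illustration: align a shifted 2D "structure" by exhaustively testing
# integer translations and keeping the one with the best overlap score.
fixed = np.zeros((64, 64))
fixed[20:40, 25:45] = 1.0                             # a bright structure
moving = np.roll(fixed, shift=(5, -3), axis=(0, 1))   # same anatomy, displaced

best_shift, best_score = None, np.inf
for dy in range(-8, 9):
    for dx in range(-8, 9):
        candidate = np.roll(moving, shift=(dy, dx), axis=(0, 1))
        score = np.sum((candidate - fixed) ** 2)      # sum of squared differences
        if score < best_score:
            best_shift, best_score = (dy, dx), score

print(best_shift)   # (-5, 3): the shift that undoes the displacement
```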

MIT researchers developed a machine-learning algorithm that registers brain scans and other 3D images more than 1,000 times quicker than the traditional medical image registration method. The algorithm learns while registering thousands of pairs of images. As it does so, it acquires information about how images should align and estimates the optimal alignment parameters. Once the machine has learned those parameters, it can map all of the pixels of one image onto another in a single pass. As a result, registration time drops to one to two minutes on a normal computer, and to less than a second on a GPU, with accuracy comparable to state-of-the-art systems, according to the researchers.
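
The speedup comes from amortizing the optimization across many scans. The sketch below is a hypothetical illustration of that pattern, not the authors' published code; the network, data, and loss are toy stand-ins. One small network is fit on many image pairs, and a new pair is then registered with a single forward pass rather than a fresh per-pair optimization loop.

```python
import torch
import torch.nn as nn

# Tiny stand-in network: a pair of volumes in, a 3-channel displacement field out.
net = nn.Sequential(
    nn.Conv3d(2, 8, 3, padding=1), nn.ReLU(),
    nn.Conv3d(8, 3, 3, padding=1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Stand-in for thousands of real MRI volume pairs.
pairs = [(torch.rand(1, 1, 16, 16, 16), torch.rand(1, 1, 16, 16, 16))
         for _ in range(32)]

for fixed, moving in pairs:                        # training: weights are shared
    flow = net(torch.cat([fixed, moving], dim=1))
    # Placeholder loss; a real setup warps `moving` with `flow` first
    # (see the spatial-transformer sketch further down).
    loss = ((fixed - moving) ** 2).mean() + 0.01 * flow.pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():                              # inference: one forward pass
    new_fixed, new_moving = torch.rand(1, 1, 16, 16, 16), torch.rand(1, 1, 16, 16, 16)
    flow_new = net(torch.cat([new_fixed, new_moving], dim=1))
```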

“The tasks of aligning a brain MRI shouldn’t be that different when you’re aligning one pair of brain MRIs or another,” Guha Balakrishnan, a co-author on the study, said in a press release. “There is information you should be able to carry over in how you do the alignment. If you’re able to learn something from previous image registration, you can do a new task much faster and with the same accuracy.”

MRI scans are hundreds of 2D images stacked to form massive 3D images called volumes, which contain a million or more 3D pixels called voxels. Aligning all of the voxels in the first volume with the ones in the second can be time-consuming. And scans from different machines can have different spatial orientations, which makes matching the voxels even more complex for the computer.
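
For a sense of scale, the snippet below (illustrative only, with blank arrays standing in for real slices) shows how a volume is just a stack of 2D slices and how quickly the voxel count reaches the millions.

```python
import numpy as np

# 200 slices of 256 x 256 pixels already gives roughly 13 million voxels,
# each of which the traditional method must bring into correspondence.
slices = [np.zeros((256, 256), dtype=np.float32) for _ in range(200)]
volume = np.stack(slices, axis=0)     # shape (200, 256, 256)
print(volume.shape, volume.size)      # (200, 256, 256) 13107200 voxels
```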

“You have two different images of two different brains, put them on top of each other, and you start wiggling one until one fits the other. Mathematically, this optimization procedure takes a long time,” Adrian Dalca, senior author on the study, said.

Medical image registration can become a slow process when analyzing scans from large populations. Neuroscientists who need to analyze brain structures across hundreds of patients could spend hundreds of hours trying to analyze the images. The problem the traditional method presents is that the computer doesn’t learn. After each registration, the computer dismisses all of the data pertaining to voxel location.

“Essentially, they start from scratch given a new pair of images,” Balakrishnan said. “After 100 registrations, you should have learned something from the alignment. That’s what we leverage.”

The MIT researchers created an algorithm called VoxelMorph that is powered by a convolutional neural network, a machine-learning approach often used for image processing. The convolutional neural network has many nodes that process images and other information across several layers of computation.
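
The sketch below is a minimal stand-in for that kind of network, not the published VoxelMorph architecture: the two volumes are concatenated on the channel axis and passed through stacked 3D convolutions, and the output holds a three-component displacement for every voxel.

```python
import torch
import torch.nn as nn

class RegistrationCNN(nn.Module):
    """Toy convolutional network mapping an image pair to a displacement field."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv3d(2, 16, kernel_size=3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(16, 16, kernel_size=3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(16, 3, kernel_size=3, padding=1),   # dx, dy, dz per voxel
        )

    def forward(self, fixed, moving):
        x = torch.cat([fixed, moving], dim=1)   # (N, 2, D, H, W)
        return self.layers(x)                   # (N, 3, D, H, W) flow field

net = RegistrationCNN()
fixed = torch.rand(1, 1, 32, 32, 32)    # toy stand-ins for MRI volumes
moving = torch.rand(1, 1, 32, 32, 32)
flow = net(fixed, moving)
print(flow.shape)                        # torch.Size([1, 3, 32, 32, 32])
```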

The researchers trained the algorithm on 7,000 publicly available MRI scans and tested it on an additional 250 scans. During training, brain scans were fed to the algorithm in pairs. Using the convolutional neural network and a modified computation layer called a spatial transformer, the algorithm captured the similarities between voxels in one MRI scan and voxels in the other. It was able to register all 250 test brain scans within two minutes using a traditional central processing unit, and it registered them accurately in under one second using a graphics processing unit.
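
The spatial transformer is the piece that applies the predicted displacement field to one of the scans. The sketch below shows the mechanics with PyTorch's grid_sample and a random flow field; in the real system the flow comes from the trained network, and the implementation details may differ from the paper.

```python
import torch
import torch.nn.functional as F

N, D, H, W = 1, 16, 16, 16
moving = torch.rand(N, 1, D, H, W)                    # toy moving volume
flow = torch.randn(N, 3, D, H, W) * 0.5               # per-voxel (dx, dy, dz), voxel units

# Identity sampling grid in grid_sample's normalized [-1, 1] coordinates.
theta = torch.eye(3, 4).unsqueeze(0)                  # identity affine, (1, 3, 4)
grid = F.affine_grid(theta, size=(N, 1, D, H, W), align_corners=True)  # (N, D, H, W, 3)

# Convert voxel-unit displacements to normalized units and reorder to match
# the grid layout, whose last dimension is (x, y, z).
scale = torch.tensor([2.0 / (W - 1), 2.0 / (H - 1), 2.0 / (D - 1)])
flow_norm = flow.permute(0, 2, 3, 4, 1) * scale       # (N, D, H, W, 3)

# Resample the moving volume onto the fixed volume's voxel positions.
warped = F.grid_sample(moving, grid + flow_norm, align_corners=True)
print(warped.shape)                                   # torch.Size([1, 1, 16, 16, 16])
```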

The algorithm does not require additional information beyond the image data itself, meaning it can be unsupervised. Some other registration algorithms that use convolutional neural networks depend on ground-truth alignments, which means a traditional algorithm has to be run first to compute accurate registrations.
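
A loss in that unsupervised spirit needs only the images themselves; the sketch below is illustrative, and the paper's exact terms may differ. It rewards the warped moving volume for matching the fixed one and penalizes rough, non-smooth displacement fields.

```python
import torch

def registration_loss(fixed, warped, flow, smooth_weight=0.01):
    # Image-similarity term: mean squared error between the volumes.
    similarity = ((fixed - warped) ** 2).mean()
    # Smoothness term: finite differences of the flow along each spatial axis.
    dz = (flow[:, :, 1:, :, :] - flow[:, :, :-1, :, :]).pow(2).mean()
    dy = (flow[:, :, :, 1:, :] - flow[:, :, :, :-1, :]).pow(2).mean()
    dx = (flow[:, :, :, :, 1:] - flow[:, :, :, :, :-1]).pow(2).mean()
    return similarity + smooth_weight * (dx + dy + dz)

# Toy usage with random tensors standing in for real volumes and a real flow.
fixed = torch.rand(1, 1, 16, 16, 16)
warped = torch.rand(1, 1, 16, 16, 16)
flow = torch.randn(1, 3, 16, 16, 16)
print(registration_loss(fixed, warped, flow))
```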

Balakrishnan and the researchers suggest that the algorithm could be used in a number of applications. They are currently testing it on lung scans, in addition to brain scans. The researchers also suggest that the algorithm could create new opportunities for image registration during operations.

Currently, different scans are used before or during surgeries, but the images aren’t registered until after an operation. Registration is especially important when a surgeon is resecting a brain tumor: the brain is scanned before and after the procedure to check whether the whole tumor has been removed, and if any tissue was missed, the surgeon has to go back and perform another procedure.

Using the algorithm, the researchers say, surgeons could register scans in almost real time during an operation and get a clearer picture of their progress.

“Today they can’t really overlap the images during surgery because it will take two hours, and the surgery is ongoing,” Dalca said. “However, if it only takes a second, you can imagine that it could be feasible.”
