Brain scans you can touch

How can abstract radiological imaging data be made more comprehensible for laypersons? Starting with this question, a team at Kiel University developed an intelligent MRI simulator.

© Kerstin Nees

The MRI image on the computer screen corresponds to the viewing angle of the 3D-printed head. When the head is rotated, the screen shows the corresponding image.

Magnetic resonance imaging (MRI) of the head is generally performed to pinpoint pathological changes or injuries to the brain. Doctors can then use MRI images to diagnose, characterise and precisely localise brain tumours or aneurysms, for example. However, drawing correct conclusions from two-dimensional images is no trivial undertaking. “Radiologists typically require several years of training before they can reliably examine and interpret sectional images,” explained Dr Jan-Bernd Hövener, Head of the Section Biomedical Imaging at the Department of Radiology and Neuroradiology at the UKSH, Kiel campus, and Professor in the Faculty of Medicine at Kiel University. Things often get difficult rather quickly when doctors use these images to explain to patients what has been found in their head and where exactly it is located. After all, most of us perceive radiological images as rather abstract, and their interpretation can lead to confusion.

However, a team from his working group at the Molecular Imaging North Competence Center (MOIN CC) was convinced that this situation can be simplified, and therefore set about developing a solution at the UKSH Healthcare Hackathon, a virtual event held in June. The objective of the five-person team, christened MOINCC-plus, was to facilitate an intuitive understanding of abstract radiological data. To this end, they used a 3D printer to produce a head model and equipped it with a microcontroller. Using modern nanomechanical sensors and Bluetooth, the head model communicates its spatial orientation to a computer. “I can pick up the head in my hand, move it as I wish and then examine the accompanying MRI or CT images on the PC based on the viewing angle,” explained PhD student Frowin Ellermann from the RTG 2154 “Materials for Brain”. Together with computer scientist Leonardo Töpsch, the engineer presented an initial prototype of the MRI simulator during the hackathon and impressed the jury. The team, which also includes Eva Peschke, Eren Yilmaz and Johannes Köpnick, took second place among the teams from Kiel. This was enough to secure their participation in the Hackathon Final in Berlin, where they will compete against the winning teams from the Charité Berlin and University Medical Center Mainz in January 2021.
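The article does not specify how the sensor data are processed, but orientation sensors of this kind typically report the model's attitude as a quaternion, which the software then turns into a rotation matrix to determine the current viewing direction. A minimal sketch of that conversion step, with the function name `quat_to_rotation` and the quaternion convention (w, x, y, z) being assumptions of this illustration:

```python
import numpy as np

def quat_to_rotation(q):
    """Convert a unit quaternion (w, x, y, z) into a 3x3 rotation matrix."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

# Orientation reported by the head model (here: identity, i.e. not rotated)
q = np.array([1.0, 0.0, 0.0, 0.0])
R = quat_to_rotation(q)
# Viewing direction: the model's local z-axis expressed in world coordinates
view_dir = R @ np.array([0.0, 0.0, 1.0])
```

Rotating the printed head changes the reported quaternion, and the recomputed `view_dir` tells the display software from which angle the image data should be cut.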

They are planning to present a further development of their prototype at this event. At present it is possible to alter the perspective from which the model is viewed, but not the depth of the sectional view. The idea for the further development is to use a laser beam on the anatomical model to indicate which slice is currently being viewed on the PC. They are using an infrared camera system to implement this. The camera uses reflectors attached to the anatomical model to detect the position and rotation of the model. “With significantly greater precision than before. Based on the position, the relation to the laser can then be detected and processed in the programme,” explained Töpsch. Like the head model, any other 3D-printed anatomical model could then be equipped with reflectors and connected to the computer in the MRI simulator.
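The exact geometry of the laser setup is not described, but one plausible reading is that the tracking system knows the model's pose and the laser's fixed beam, so the slice depth follows from where the beam passes the model. A toy sketch under that assumption (the function `slice_depth` and its closest-point construction are illustrative, not the team's actual method):

```python
import numpy as np

def slice_depth(model_center, slice_axis, laser_origin, laser_dir):
    """Signed depth of the laser-marked slice along the model's slicing axis.

    model_center : tracked position of the anatomical model
    slice_axis   : unit vector along which slices are stacked (model frame)
    laser_origin, laser_dir : a point on the laser beam and its direction
    """
    d = laser_dir / np.linalg.norm(laser_dir)
    # Point on the laser beam closest to the model centre
    p = laser_origin + np.dot(model_center - laser_origin, d) * d
    # Project that point onto the slicing axis to get the depth
    return float(np.dot(p - model_center, slice_axis))

# Example: model at the origin, slices stacked along z,
# laser beam running parallel to x at height z = 2
depth = slice_depth(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                    np.array([0.0, 5.0, 2.0]), np.array([1.0, 0.0, 0.0]))
```

The computed depth would then select which sectional image the PC displays, complementing the viewing angle from the orientation tracking.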

The software running in the background is already in place. Ellermann explained its principle as follows: “You have a cake that you cut into thin slices. This is what MRI does. Our software then takes all of these thin slices and reassembles the cake. The cake is then cut again in the direction from which you are currently viewing it, and this slice is displayed on the screen.”
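The cake analogy maps directly onto a standard image-processing operation: stack the acquired slices into a 3D volume, then resample an oblique plane through it along the current viewing direction. A minimal numpy/scipy sketch of this reslicing idea (the function `oblique_slice` is an illustration of the technique, not the team's code):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def oblique_slice(volume, R, size, depth=0.0):
    """Sample one oblique slice through the centre of a 3D volume.

    volume : 3D array, the stack of MRI slices (the reassembled 'cake')
    R      : 3x3 rotation matrix for the current viewing orientation
    size   : edge length of the output slice in pixels
    depth  : offset of the cutting plane along the viewing axis
    """
    c = (np.array(volume.shape) - 1) / 2.0           # volume centre
    u = np.arange(size) - (size - 1) / 2.0
    gx, gy = np.meshgrid(u, u, indexing="ij")        # in-plane sample grid
    plane = np.stack([gx, gy, np.full_like(gx, depth)])
    # Rotate the plane into the viewing orientation, shift to the centre
    coords = np.tensordot(R, plane, axes=1) + c[:, None, None]
    # Trilinear interpolation of the volume at the rotated grid points
    return map_coordinates(volume, coords, order=1, mode="constant")

# Toy volume: one bright slice in an otherwise empty 32^3 cube
vol = np.zeros((32, 32, 32))
vol[16, :, :] = 1.0
axial = oblique_slice(vol, np.eye(3), 16)            # unrotated cut
```

With the identity rotation the bright plane appears as a single bright line in the resampled slice; rotating `R` recuts the volume from any other angle, which is exactly the interaction the simulator provides.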

Gleaning useful data is also a challenge here. “We are keen to compare the various images delivered by the two processes of CT and MRI. If images from the same perspective and the same person are placed next to one another, it might then be easier to understand why one method is better than the other for certain diseases or certain body parts,” commented Ellermann. For data protection reasons, the data also requires heavy post-processing to ensure that there is no risk of personal identification. “Even if they are not accompanied by names, the images can still be very individual. This obviously needs to be anonymised, so that the origin of the images can no longer be traced,” explained Eva Peschke, the member of the team who specialises in procuring and processing interesting and versatile data.
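One widely used step in this kind of anonymisation of head scans is "defacing": masking out the facial region of the volume so that a surface rendering can no longer be matched to a person. A toy numpy sketch of the idea (the function `deface`, the axis convention and the fraction removed are all assumptions of this illustration, not the team's pipeline):

```python
import numpy as np

def deface(volume, face_fraction=0.3, axis=1):
    """Zero out the anterior portion of a head volume.

    volume        : 3D array of image intensities
    face_fraction : fraction of the chosen axis to blank, assumed to
                    cover the face (convention-dependent!)
    axis          : axis pointing from the back to the front of the head
    """
    out = volume.copy()
    n = out.shape[axis]
    cut = int(n * face_fraction)
    sl = [slice(None)] * out.ndim
    sl[axis] = slice(n - cut, n)                 # anterior-most voxels
    out[tuple(sl)] = 0
    return out

vol = np.ones((4, 10, 4))
defaced = deface(vol)                            # blanks the last 3 of 10 rows
```

In practice this would be combined with stripping identifying metadata from the image files before the data enter the simulator.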

The team of developers is focused on three key applications for the new technology: informing patients, for example before scheduled operations; medical teaching, where it would make it easier for students to compare images with real anatomy; and public outreach.


Author: Kerstin Nees