Week 9

Medical Visualization & Scientific Visualization



How X-Ray, CT, MRI, fMRI work


X-Rays - https://en.wikipedia.org/wiki/X-ray

At a very high level of description, you have an X-ray source, an image receptor (in the old days a photographic plate), and something in between that you want to look at. X-rays are partially blocked by dense tissue like bone, and less blocked by soft tissue, resulting in an image on the image receptor where bones show up white and softer tissue is darker. These machines are very portable (dentist offices, emergency rooms) and very fast. The big disadvantage is that a 3D volume like your chest or your tooth is compressed into a 2D image, which can make working out the 3D location of a feature difficult.
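As a rough illustration of why bone and soft tissue look different on the receptor, here is a small Python sketch of the Beer-Lambert attenuation law; the attenuation coefficients and thickness are made-up illustrative numbers, not clinical values.

    import numpy as np

    # Beer-Lambert law: transmitted intensity I = I0 * exp(-mu * x),
    # where mu is the linear attenuation coefficient and x is the thickness.
    mu_soft_tissue = 0.2          # per cm (illustrative only)
    mu_bone = 0.5                 # per cm (illustrative only)

    def transmitted(mu, thickness_cm, I0=1.0):
        """Fraction of the incoming X-ray intensity that reaches the receptor."""
        return I0 * np.exp(-mu * thickness_cm)

    # On film, regions that receive more X-rays come out darker, so soft tissue
    # (which transmits more) appears dark and bone (which blocks more) shows up white.
    print(transmitted(mu_soft_tissue, 5))   # more photons get through
    print(transmitted(mu_bone, 5))          # fewer photons get through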

More modern techniques create a sequence of slices that can be combined to form a volume of data.

CT Scan - https://en.wikipedia.org/wiki/CT_scan

At a very high level of description, a computed tomography scanner uses X-rays to take a series of 2D images from many angles, which are combined into a 3D volume after a bunch of math (tomographic reconstruction) is applied. CT scans are good for looking at bones. They can also make use of contrast agents, typically iodine.
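To give a feel for what that "bunch of math" is, here is a minimal sketch of the standard idea (filtered back projection) using scikit-image's radon/iradon transforms on a test phantom; this is an illustration of the principle, not what a real scanner's software does.

    import numpy as np
    from skimage.data import shepp_logan_phantom
    from skimage.transform import radon, iradon

    # A test image stands in for the unknown 2D cross-section of the body.
    slice_2d = shepp_logan_phantom()           # 400 x 400 phantom

    # The scanner measures attenuation along many angles -> a sinogram.
    angles = np.linspace(0., 180., 180, endpoint=False)
    sinogram = radon(slice_2d, theta=angles)

    # Filtered back projection reconstructs the slice from the projections.
    reconstruction = iradon(sinogram, theta=angles)
    print(reconstruction.shape)                # same size as the original slice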


MRI - https://en.wikipedia.org/wiki/Magnetic_resonance_imaging

At a very high level of description, Magnetic Resonance Imaging uses a powerful magnetic field to align the hydrogen atoms in the water of the body with the direction of the field. A radio frequency electromagnetic field is introduced which causes the atoms to alter their alignment. When that field is turned off they return to their original alignment, producing a detectable signal. Atoms in different types of tissue return to their original alignment at different rates, depending in part on the density of water in the tissue, so a 3D picture of the different tissues can be built up through a lot of math. If there isn't enough natural contrast between tissues then a contrast agent (typically gadolinium-based) can be introduced to artificially boost the contrast. MRI is good for looking at soft tissue and finding tumors.
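As a rough sketch of where that tissue contrast comes from, the simplified spin-echo signal equation weights proton density by T1 recovery and T2 decay; the tissue values below are ballpark illustrative numbers, not reference values.

    import numpy as np

    def spin_echo_signal(pd, t1, t2, tr, te):
        """Simplified spin-echo signal: proton density times T1 recovery times T2 decay."""
        return pd * (1 - np.exp(-tr / t1)) * np.exp(-te / t2)

    # Rough illustrative relaxation times in milliseconds.
    tissues = {"white matter": (0.7, 800, 80),
               "cerebrospinal fluid": (1.0, 4000, 2000)}

    for name, (pd, t1, t2) in tissues.items():
        s = spin_echo_signal(pd, t1, t2, tr=500, te=20)   # a T1-weighted-ish setting
        print(f"{name}: relative signal {s:.3f}")         # the two tissues differ clearly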

CT typically has better spatial resolution, but MRI has better soft-tissue contrast. MRI takes longer (roughly 30 minutes for MRI vs 10 minutes for CT). Note that aside from the person needing to remain still during the scan, organs like the heart and the lungs do not remain still for that long. CT uses ionizing radiation, which is bad. MRI uses strong magnetic fields, which may be bad (for example around metal implants). Both are constantly improving.

fMRI - https://en.wikipedia.org/wiki/Functional_magnetic_resonance_imaging

Functional Magnetic Resonance Imaging works like MRI, but the idea is to collect data repeatedly in near real time (a new volume every few seconds), though at lower resolution, most commonly from the brain, where changes in blood oxygenation give an indication of activity.

Here are some sample datasets: http://graphics.stanford.edu/data/voldata/ and http://digimorph.org/


From the Visible Human web page (https://www.nlm.nih.gov/research/visible/visible_human.html) "The Visible Human Male data set consists of MRI, CT and anatomical images. Axial MRI images of the head and neck and longitudinal sections of the rest of the body were obtained at 4 mm intervals. The MRI images are 256 pixel by 256 pixel resolution. Each pixel having 12 bits of grey tone resolution. The CT data consists of axial CT scans of the entire body taken at 1 mm intervals at a resolution of 512 pixels by 512 pixels with each pixel made up of 12 bits of grey tone. The approximately 7.5 megabyte axial anatomical images are 2048 pixels by 1216 pixels, with each pixel defined by 24 bits of color. The anatomical cross-sections are at 1 mm intervals to coincide with the CT axial images. There are 1871 cross-sections for both CT and anatomy. The complete male data set, 15 gigabytes in size, was made publicly available in November, 1994.

The Visible Human Female data set, released in November, 1995, has the same characteristics as the Visible Human Male with one exception, the axial anatomical images were obtained at 0.33 mm intervals. This resulted in 5,189 anatomical images, and a data set of about 40 gigabytes. Spacing in the "Z" direction was reduced to 0.33mm in order to match the 0.33mm pixel sizing in the "X-Y" plane, thus enabling developers interested in three-dimensional reconstructions to work with cubic voxels."
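A quick back-of-the-envelope check of the quoted sizes:

    GB = 10**9

    # One 24-bit color anatomical slice: 2048 x 1216 pixels x 3 bytes each.
    anat_slice = 2048 * 1216 * 3                   # ~7.5 MB, matching the quote
    # One CT slice: 512 x 512 pixels, 12 bits stored in 2 bytes each.
    ct_slice = 512 * 512 * 2

    male_total  = 1871 * (anat_slice + ct_slice)   # ignoring the much smaller MRI data
    female_anat = 5189 * anat_slice

    print(male_total / GB)       # ~15 GB, in line with the quoted size
    print(female_anat / GB)      # ~39 GB, in line with the quoted ~40 GB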

(And let us remember that in 1995 personal computers were running at 200 MHz, with 16 MB of RAM and 1 GB hard drives. Now the Visible Woman dataset easily fits on a USB stick.)

Here are some small JPEG images of a few of the slices of the Visible Woman. We will show a movie in class of the full-size images. Once these images are collected and aligned, slices through the body along the other principal planes are easy to generate, as are slices along arbitrary planes.
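Roughly speaking, once the slices are stacked into an aligned volume, pulling out slices along the other principal planes is just array indexing; the array below is random stand-in data, and the plane names depend on patient orientation.

    import numpy as np

    # Stand-in volume: 577 slices of 256 x 256 values, indexed [slice, row, column].
    volume = np.random.randint(0, 4096, size=(577, 256, 256), dtype=np.uint16)

    axial    = volume[300, :, :]   # the plane the scanner originally acquired
    coronal  = volume[:, 128, :]   # built "for free" from the aligned stack
    sagittal = volume[:, :, 128]   # likewise

    print(axial.shape, coronal.shape, sagittal.shape)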

We are going to use ParaView to look at the data

ParaView is available from www.paraview.org and there is a nice intro at:
http://www.bu.edu/tech/support/research/training-consulting/online-tutorials/paraview/

Here is ParaView viewing the 75MB version of the Visible Woman dataset. ParaView is a tool we use in the graduate-level scientific visualization course, which deals much more with 3D datasets. It is built on top of a library called VTK (the Visualization Toolkit). This allows us to take the volumetric data and convert it into polygonal surfaces.

This version of the dataset is made up of 577 slices, 1 slice every 3mm, where each slice is 256 x 256 16-bit values. Note that this is not terribly high resolution. Once the data is loaded in we can set up two contours - one for the skin and one for the bones.

Since this is a 3D block of raw data values, we need to tell ParaView how to read it in (dimensions, data type, byte order, and spacing):
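As a sketch of what that amounts to, the raw file is just a stream of numbers and we have to supply the layout ourselves; the filename and byte order below are assumptions, so check the dataset's notes.

    import numpy as np

    # 577 slices of 256 x 256 16-bit values, indexed [slice, row, column].
    dims = (577, 256, 256)
    volume = np.fromfile("visible_woman_small.raw", dtype="<u2").reshape(dims)

    print(volume.shape, volume.min(), volume.max())

    # ParaView's raw (binary) file reader asks for the same information:
    # data extents of 256 x 256 x 577, unsigned short scalars, the byte order,
    # and the voxel spacing (3 mm between slices for this version).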

Once it's read in, I can create two contours - one transparent at 800 for the skin, and one opaque at 1200 for bone - to create surfaces at those two boundaries. The result looks something like this.
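For those who prefer scripting, here is a rough pvpython (paraview.simple) sketch of the same two-contour setup, assuming the volume is already loaded as the active source; property names can differ slightly between ParaView versions.

    from paraview.simple import *

    reader = GetActiveSource()       # the raw volume loaded through the GUI

    skin = Contour(Input=reader)     # contours the first scalar array by default
    skin.Isosurfaces = [800]         # value separating air from tissue
    skin_display = Show(skin)
    skin_display.Opacity = 0.3       # transparent so the bones show through

    bone = Contour(Input=reader)
    bone.Isosurfaces = [1200]        # value separating tissue from bone
    Show(bone)

    Render()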



Another way is through volume rendering (change the representation to Volume) with the transfer function shown on the right (right click, Edit Color), where we assign colors and opacity to different ranges of values in the block of data.
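A corresponding pvpython sketch of the volume-rendering route; the scalar array name 'ImageFile' is a guess at what the raw reader calls its data, so substitute whatever name your reader reports.

    from paraview.simple import *

    reader = GetActiveSource()
    display = Show(reader)
    display.Representation = 'Volume'       # same as picking "Volume" in the GUI

    # Transfer functions map data values to color and opacity.
    opacity = GetOpacityTransferFunction('ImageFile')
    # Each point is (value, opacity, midpoint, sharpness): hide low values, show bone.
    opacity.Points = [0,    0.0,  0.5, 0.0,
                      800,  0.05, 0.5, 0.0,
                      1200, 0.8,  0.5, 0.0]
    Render()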



More recently, similar scans have been made, including the Visible Korean Human and the Chinese Visible Human.




ParaView was designed to be a general-purpose tool, so it is not particularly focused on medical data - which makes it harder to use than the specialized tools we will talk about later - but it has the basic functionality that students recreate in the Computer Graphics II course.


Another way to look at this kind of data is through volume rendering, where we avoid generating polygons.

A common way to do that is ray casting.

Ray Casting

Four different ray functions - upper left: maximum, upper right: average, bottom left: distance=30, bottom right: composite
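To make the four ray functions concrete, here is a tiny NumPy sketch applying each of them to the samples a single ray picks up as it crosses the volume; the sample values and the opacity mapping are invented for illustration.

    import numpy as np

    # Samples encountered by one ray as it passes front to back through the volume.
    samples = np.array([5, 12, 28, 40, 90, 60, 25, 8], dtype=float)

    maximum = samples.max()                       # maximum intensity projection
    average = samples.mean()                      # average (roughly X-ray-like)
    distance_to_30 = np.argmax(samples >= 30)     # first sample along the ray >= 30,
                                                  # a distance in sample steps

    # Composite: map each sample to a color and opacity, then blend front to back.
    opacity = np.clip(samples / 100.0, 0, 1)      # a stand-in transfer function
    color = samples                               # grayscale "color" for simplicity
    composite, transmittance = 0.0, 1.0
    for c, a in zip(color, opacity):
        composite += transmittance * a * c
        transmittance *= (1 - a)

    print(maximum, average, distance_to_30, composite)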

A nice program to explore this kind of data on the Mac is OsiriX - https://www.osirix-viewer.com/
and a newer company involved in this work that makes a plugin for OsiriX is Fovia - https://www.fovia.com/

OsiriX




Volume Classification

Regions of Interest






ParaView is currently at version 5.10 and you can download it from:
https://www.paraview.org/download/

There is a detailed introduction and tutorial at:
https://www.paraview.org/Wiki/The_ParaView_Tutorial
and I'd like people to go through Chapter 2 (Basic Usage), up through Section 2.8.

As usual you should start up a new Jupyter notebook, take screenshots from ParaView as you progress through the tutorial, and add those screenshots showing your progress through each part. When you are done, print to PDF and upload it to Gradescope.



Scientific Visualization


Different Views of the Earth


In past terms we have had various talks by people in this area that we have worked with:


Paul Morin
Director of the Polar Geospatial Information Center
https://www.pgc.umn.edu/about/


Dr Mark SubbaRao

Astronomer
formerly at the Adler Planetarium
now leading NASA Goddard's Scientific Visualization Studio


Dr Peter Doran
Professor
Department of Geology and Geophysics
Louisiana State University





Three main uses for visualizing scientific data

Some fields, like astronomy, rely on observation and simulation with very little chance for direct experimentation; others, like high-energy physics, have access to very big experimental setups.

When doing analysis, you want to be able to compare data collected at different times and places, compare real data to simulated data, and integrate data from multiple real and simulated sources to solve more complex problems.

Here are a couple of other specific use cases:


Endurance

https://www.evl.uic.edu/endurance/

    have elevation data for the landscape around the lake
    have older bathymetry data collected at various points in the lake
    have older chemistry data collected at various points in the lake


Lake Bonney Satellite Photo

Lake Bonney Satellite Photo Close-Up

Lake Bonney Mapping Mission 2008

ENDURANCE AUV

    Given that sparse data, you can form hypotheses and plan for more thorough data collection.

    Goals:
       generate accurate bathymetry for the lake
       investigate the chemical composition of the entire volume of the lake

    Mission in December 2008
    Mission in November 2009

    collecting current multispectral satellite photography at beginning and end of deployment
    collecting GPS readings at various locations on site to correctly map the satellite photography to the terrain
    collecting information on lake level each day
   
    collecting data on new bathymetry directly on a 100m by 100m 2D grid (see the gridding sketch after this list)
    collecting data on new bathymetry via constant sonar
    collecting data on chemical content of the lake on a 100m by 100m by every-few-centimeters 3D grid
    collecting photographs of the ice from below on a 100m by 100m 2D grid
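    As a sketch of how sparse soundings become a bathymetric map, here is a small example interpolating made-up depth measurements onto a regular grid with SciPy; the points, depths, and spacing are illustrative, not actual ENDURANCE data.

        import numpy as np
        from scipy.interpolate import griddata

        # Made-up sonar soundings: (x, y) positions in meters and measured depths.
        rng = np.random.default_rng(0)
        points = rng.uniform(0, 1000, size=(200, 2))          # sample locations
        depths = 20 + 15 * np.sin(points[:, 0] / 300) * np.cos(points[:, 1] / 300)

        # Interpolate onto a regular 100 m grid covering the surveyed area.
        gx, gy = np.meshgrid(np.arange(0, 1001, 100), np.arange(0, 1001, 100))
        bathymetry = griddata(points, depths, (gx, gy), method="linear")

        print(bathymetry.shape)        # one interpolated depth per grid cell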
 

    each day there is one mission where the vehicle starts from the same site and goes off to several of the measurement sites and collects data.

    however


    First, ideally you do all of this in the field, so that you know if you missed collecting any data and can run another mission to collect it.

    Next, we can get to the more interesting visualization and analysis issues.
















CoreWall

Cores are brought up from lakes, ice, the ocean floor, and the Antarctic
These cores may be several meters to several kilometers long

Used to determine the climate thousands to hundreds of millions of years ago

Cores brought up and sliced in half

Data from each core segment is recorded on a paper 'barrel sheet' summary stored in binders
Cores in Minnesota


Why doesn't this work?

Core Repository in Minnesota

Viewing Core Data on the Floor


CoreWall solution
CoreWall Screens




ANDRILL 2006 - two CoreWall setups in Antarctica
ANDRILL 2007 - seven CoreWall setups including one at the drill site in Antarctica
Now installed on the JOIDES Resolution drill ship

Moving towards 3D MRI / CT scans of cores


Coming Next Time

Project 2 Presentations


last revision 1/06/2022