3D printing the Visible Human skull
The Visible Human dataset has been well studied in medical imaging research. We chose the CT data of the Visible Female to create a skull model for 3D printing. It is an interesting little experiment to see how well our MakerBot 3D printer handles complicated anatomical geometries, and also to demonstrate how easy it is to create a 3D-printable model from real medical imaging data using the tools we have been developing.
To extract the bone structure, we used 3D Slicer to segment the bone region (via simple thresholding) and generated a surface mesh (via the Model Maker module).
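Conceptually, the thresholding step just keeps voxels whose CT intensity falls in a bone-like range. A minimal sketch with NumPy (the 300 HU cutoff and the toy volume are illustrative assumptions, not the values used in Slicer; in practice the threshold is tuned interactively to the actual data):

```python
import numpy as np

# Toy CT volume in Hounsfield units: air (~-1000), soft tissue (~40), bone (>300).
ct = np.array([[[-1000, 40, 400],
                [35, 500, 45],
                [-1000, 30, 700]]])

BONE_THRESHOLD_HU = 300  # illustrative value; tune per dataset

# Binary bone mask, analogous to simple intensity thresholding in Slicer.
bone_mask = ct >= BONE_THRESHOLD_HU
print(bone_mask.sum())  # number of voxels classified as bone
```

The resulting binary mask is what the surface-meshing step (Model Maker) turns into polygons.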
As you can see from the above picture, there are a couple of tubes whose intensity range is very close to that of the skull bone in the original data, so they were segmented out as bone as well. These two tubes were placed alongside the body during the freezing process and run the full extent of the original dataset. Better segmentation methods could obviously eliminate the tubes, and would probably yield a more accurate bone segmentation too. Here we simply removed the tubes by checking the connectivity of the extracted surfaces: we loaded the model into ParaView and removed them with the “Connectivity” filter, then applied “Decimate” and “Smooth” filters as final refinement steps.
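The “Connectivity” cleanup amounts to keeping only the largest connected component, which drops the thin tubes. ParaView's filter operates on the surface mesh; this minimal pure-Python sketch illustrates the same idea on a small 2D binary mask (the mask values are made up for illustration):

```python
import numpy as np
from collections import deque

def largest_component(mask):
    """Return a mask containing only the largest 4-connected component."""
    labels = np.zeros(mask.shape, dtype=int)
    sizes = {}
    current = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue
        current += 1
        labels[start] = current
        queue = deque([start])
        size = 0
        while queue:
            r, c = queue.popleft()
            size += 1
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < mask.shape[0] and 0 <= nc < mask.shape[1]
                        and mask[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = current
                    queue.append((nr, nc))
        sizes[current] = size
    if not sizes:
        return np.zeros(mask.shape, dtype=bool)
    keep = max(sizes, key=sizes.get)
    return labels == keep

# One large "skull" blob plus two thin "tubes" (single-pixel columns).
mask = np.array([[1, 1, 0, 1, 0, 1],
                 [1, 1, 0, 1, 0, 1],
                 [1, 1, 0, 0, 0, 0]], dtype=bool)
cleaned = largest_component(mask)
print(cleaned.sum())  # the 6-pixel blob survives; both tubes are removed
```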
With the help of Slicer and ParaView, we quickly generated our skull surface model. You can find the 3D STL model in this GitHub repository; the low-resolution mesh model is shown here:
Before printing, we further clipped the model into two parts (the top cap and the remaining bone structure) using the “Clip Closed Surface” filter in ParaView. We wanted to make sure the structures would remain stable during printing, and an open cap also makes it easier to see inside the skull.
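The clipping step cuts the mesh with a plane (and, in ParaView's “Clip Closed Surface” filter, caps the opening so each half stays watertight). The per-vertex part of that is a signed-distance test against the plane; a minimal sketch, where the sample vertices and the z = 60 plane are illustrative assumptions:

```python
import numpy as np

# Illustrative vertex positions (mm) for a tiny mesh.
vertices = np.array([[0.0, 0.0, 10.0],
                     [0.0, 0.0, 55.0],
                     [5.0, 5.0, 90.0],
                     [2.0, 1.0, 30.0]])

# Clipping plane: everything below z = 60 is the "remaining bone" part;
# the rest becomes the "top cap". Values are assumptions for illustration.
plane_origin = np.array([0.0, 0.0, 60.0])
plane_normal = np.array([0.0, 0.0, 1.0])

signed_dist = (vertices - plane_origin) @ plane_normal
below = signed_dist < 0  # vertices on the "keep" side of the plane
print(below.sum())  # 3 of the 4 sample vertices fall below the plane
```

A real closed-surface clip additionally triangulates the cut cross-section to seal the opening, which the filter handles for you.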
After nearly 10 hours of MakerBot printing, here is the printed Visible Female skull, with printer-generated support structures:
After manually removing all the support columns, the final product looks quite nice in terms of the level of detail and the smoothness of the printed surface. This print is 40% of the physical size of the original data. A raft (base) and supports are recommended at this scale and shape complexity.
Hi, very interesting post! Is it possible to automate this task?
@Zevran, yes! As a first step, the operations can be composed into a custom Slicer Python module [1]. If further automation is needed, the ITK and VTK processing can be extracted into a standalone Python script or C++ executable.
[1] https://www.slicer.org/slicerWiki/index.php/Documentation/Nightly/Developers/Python_scripting
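For the fully scripted route, the steps above compose naturally as a chain of functions. A minimal pure-Python sketch of the orchestration (the step functions here are hypothetical stand-ins for the real ITK/VTK calls, and `keep_largest_component` is only a placeholder):

```python
import numpy as np

# Hypothetical stand-ins for the real ITK/VTK processing steps.
def threshold_bone(volume, hu=300):
    """Binary bone mask by intensity threshold (illustrative HU cutoff)."""
    return volume >= hu

def keep_largest_component(mask):
    # Placeholder: a real version would label connected components
    # (e.g. with scipy.ndimage.label) and keep only the biggest one.
    return mask

def pipeline(volume):
    """Threshold -> connectivity cleanup, ready for meshing/decimation."""
    return keep_largest_component(threshold_bone(volume))

volume = np.array([[-1000, 40, 400, 500]])
print(pipeline(volume).sum())  # 2 bone voxels pass the threshold
```

Wrapping such a pipeline in a Slicer Python module mainly adds the GUI plumbing around it.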