3D Modeling and Extended Reality Simulations of the Cross-sectional Anatomy of the Cerebrum, Cerebellum, and Brainstem
ABSTRACT
BACKGROUND: Understanding the anatomy of the human cerebrum, cerebellum, and brainstem and their 3-dimensional (3D) relationships is critical for neurosurgery. Although 3D photogrammetric models of cadaver brains and 2-dimensional images of postmortem brain slices are available, neurosurgeons lack free access to 3D models of cross-sectional anatomy of the cerebrum, cerebellum, and brainstem that can be simulated in both augmented reality (AR) and virtual reality (VR).
OBJECTIVE: To create 3D models and AR/VR simulations from 2-dimensional images of cross-sectionally dissected cadaveric specimens of the cerebrum, cerebellum, and brainstem.
METHODS: The Klingler method was used to prepare 3 cadaveric specimens for dissection in the axial, sagittal, and coronal planes. A series of 3D models and AR/VR simulations were then created using 360° photogrammetry.
RESULTS: High-resolution 3D models of cross-sectional anatomy of the cerebrum, cerebellum, and brainstem were obtained and used in creating AR/VR simulations. Eleven axial, 9 sagittal, and 7 coronal 3D models were created. The sections were planned to show important deep anatomic structures. These models can be freely rotated, projected onto any surface, viewed from all angles, and examined at various magnifications.
CONCLUSION: To our knowledge, this detailed study is the first to combine up-to-date technologies (photogrammetry, AR, and VR) for high-resolution 3D visualization of the cross-sectional anatomy of the entire human cerebrum, cerebellum, and brainstem. The resulting 3D images are freely available for use by medical professionals and students for better comprehension of the 3D relationship of the deep and superficial brain anatomy.
INTRODUCTION
New uses for computer-based multimedia technology in medicine, such as in teaching and visualization of complex anatomic structures, continue to evolve.1 The explosion of innovative image and media approaches over the past few decades has greatly augmented traditional ways of learning anatomy. For example, our team of researchers and neurosurgeons recently introduced a novel way to enhance anatomic photographs by merging them with Digital Imaging and Communications in Medicine (DICOM) data, thereby creating a spatially accurate and realistic volumetric model.2 Previous publications have described 3-dimensional (3D) modeling and augmented reality (AR) and virtual reality (VR) simulations for neurosurgical education.3,4 Our primary goal in this study was to create detailed 3D models of the cross-sectional anatomy of the cerebrum, cerebellum, and brainstem from cadaveric dissections for use in AR/VR simulations. These high-fidelity and accessible models could conveniently enhance the training of neurosurgeons and students worldwide.
METHODS
Institutional review board approval was not required because the study did not involve human participants, and no patient data were used. Appropriate permission was obtained for the publication of cadaver images.
Specimen Preparation
The cerebrum, cerebellum, and brainstem of 3 formalin-fixed adult cadavers were used for this study. Specimens were prepared according to the Klingler method: they were frozen at −15°C for at least 2 weeks and then thawed in water for 1 hour before use.5-7
Cadaver Dissection
Brains were serially dissected and sectioned in the axial, sagittal, and coronal planes. Serial sections were 1 cm thick, although some transections were modified to keep important structures intact. The intercommissural line, which passes through the anterior and posterior commissures, was used to define the appropriate section planes.
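For readers who wish to relate this sectioning convention to digital planning, the sketch below illustrates the geometry only; it is not part of the dissection workflow described here, and the commissure coordinates, axes, and spacing are hypothetical placeholders.

```python
import numpy as np

# Hypothetical commissure coordinates (mm) in a specimen-fixed frame;
# real values would be measured on the individual specimen.
ac = np.array([0.0, 12.0, 0.0])    # anterior commissure
pc = np.array([0.0, -14.0, 0.0])   # posterior commissure

# Intercommissural (AC-PC) line: unit vector from AC toward PC.
acpc_axis = (pc - ac) / np.linalg.norm(pc - ac)
midpoint = (ac + pc) / 2.0

# Axial sections are cut parallel to the AC-PC line at 1-cm intervals;
# the superior-inferior direction serves as the shared plane normal.
superior_axis = np.array([0.0, 0.0, 1.0])
spacing_mm = 10.0

for i in range(11):  # 11 axial models were produced in this study
    point_on_plane = midpoint + i * spacing_mm * superior_axis
    print(f"Axial section {i + 1}: point {point_on_plane}, normal {superior_axis}")
```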
3D Modeling by 360° Photogrammetry
After sectioning, the cross-sections were photographed and used to produce 3D photogrammetric models. Sections were photographed with a professional digital single-lens reflex (DSLR) camera (Canon EOS 450D/Digital Rebel XSi; Canon, Inc) and an iPad Pro tablet computer (Apple, Inc). Each dissection stage was captured with a photogrammetry application (Qlone 3D Scanner; EyeCue Vision Technologies, Ltd) to produce volumetric 3D models, as described elsewhere.8 The application reconstructs a 3D surface by combining multiple overlapping 2-dimensional (2D) photographs taken around the specimen.8 The resulting model incorporates both geometric and textural data from the specimen and can be exported in many file formats: OBJ, STL, FBX (Filmbox), USD (Universal Scene Description), GLB, X3D (Extensible 3D), and PLY (Polygon File Format). Our resulting models are now available on the Neurosurgical Atlas website9 and can be viewed with any 3D AR viewing program on Android (Google, Inc), Windows (Microsoft Corp), and Apple (Apple, Inc) smart devices. With AR compatibility, the 3D models can be moved and rotated in all directions, projected onto any surface, and examined from both anterior and posterior perspectives.
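As a hedged illustration of how such exported files could be handled downstream, the following sketch uses the open-source trimesh library to load one photogrammetric model and convert it among several of the formats listed above. This is not the authors' pipeline, and the file name is a hypothetical placeholder.

```python
import trimesh

# Load a photogrammetric export (geometry plus texture); the file name is
# a placeholder for one of the cross-sectional models.
mesh = trimesh.load("axial_section_01.obj", force="mesh")
print(f"vertices: {len(mesh.vertices)}, faces: {len(mesh.faces)}")

# GLB (binary glTF) is widely supported by mobile AR viewers; STL keeps
# geometry only (no texture), eg, for 3D printing; PLY stores per-vertex data.
mesh.export("axial_section_01.glb")
mesh.export("axial_section_01.stl")
mesh.export("axial_section_01.ply")
```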
RESULTS
Axial Anatomy
Eleven 3D models were created from superior to inferior in the axial plane (Figure 1A-1L, Video). The structures on the superior and inferior surfaces of the axial brain were examined (Figure 1A and 1B). Cross-sectional images were obtained from the superior to the inferior direction (Figure 1C-1L). The superior frontal gyrus, middle frontal gyrus, inferior frontal gyrus, cingulate gyrus, precentral gyrus, and postcentral gyrus were examined (Figure 1C). A section was then made through the upper border of the lateral ventricle, and the angular gyrus and supramarginal gyrus were visualized (Figure 1D). When the axial cross-sectioning was continued, the corpus callosum, choroid plexus, and caudate nucleus were exposed (Figure 1E). In the next cross-section, the fornix and insula were visualized (Figure 1F). Then, the thalamus, putamen, internal capsule, and superior colliculus were shown (Figure 1G). Before the cerebral hemispheres were fully sectioned away, the claustrum, globus pallidus, red nucleus, third ventricle, cerebral aqueduct, inferior colliculus, and hippocampus were shown (Figure 1H). Cross-sections were then made in the brainstem-cerebellum complex (Figure 1I). The middle cerebellar peduncle and fourth ventricle were exposed (Figure 1J). Next, a cross-section passing through the dentate nucleus was made (Figure 1K). In the last model, the cerebellar tonsils and medulla oblongata were visualized (Figure 1L).
Sagittal Anatomy
Nine 3D models were created from left to right in the sagittal plane (Figure 2A-2I, Video). The sagittal cross-sectioning began with a detailed inspection of the essential sulci and gyri on each hemispheric surface (Figure 2A). The central sulcus, superior frontal gyrus, middle frontal gyrus, inferior frontal gyrus, superior temporal gyrus, middle temporal gyrus, inferior temporal gyrus, lateral sulcus, occipital lobe, and cerebellum were examined (Figure 2B). Next, the frontal operculum, temporal operculum, and insula were exposed (Figure 2C). When the trigone of the lateral ventricle was observed, the claustrum and dentate gyrus were visualized in the same section (Figure 2D). Sagittal images of the putamen, globus pallidus, internal capsule, and thalamus were obtained (Figure 2E). Next, the cerebrum, cerebellum, and brainstem were split in half at the midline (Figure 2F). After the midsagittal cut, serial cross-sections were continued laterally to show structures such as the hippocampus from the medial side (Figure 2G-2I).
Coronal Anatomy
Seven 3D models were created from anterior to posterior in the coronal plane (Figure 3A-3G, Video). Before coronal cross-sectioning of the brain, superficial structures were visualized (Figure 3A). The frontal limbic area and rostral gyrus were exposed (Figure 3B). When the coronal incisions were continued, the frontal horn of the lateral ventricle, corpus callosum, cingulate gyrus, cerebellar peduncle, and interpeduncular fossa were examined (Figure 3C). Next, coronal images of the septum pellucidum, caudate nucleus, thalamus, and optic chiasma were obtained (Figure 3D). Then, the insula, claustrum, putamen, globus pallidus, internal capsule, mammillary body, and interthalamic adhesion were visualized (Figure 3E). The coronal section of the hippocampus, third ventricle, and tectum was exposed (Figure 3F), and then the splenium of the corpus callosum and surrounding structures were visualized (Figure 3G).
DISCUSSION
To our knowledge, this study is the first to physically section and digitalize an entire cadaveric brain to create an interactive 3D model in the 3 principal imaging planes: axial, sagittal, and coronal. Previous studies10-22 with 3D renderings of the brain lack texture, realistic features, maneuverability, augmentation, and fine cross-sections through multiple planes. Similarly, available cross-sectional radiological images and anatomic photographs of the human brain provide only cross-sectional anatomic information and lack clear visualization of the relationships among superficial and deep structures in real 3D space.
Challenges to Learning Anatomy
Dissection of the human body remains the gold standard of traditional anatomy education because practitioners learn anatomic intricacies and variations and can appreciate structures that cannot be viewed during an operation.12 Dissection also permits haptic feedback and provides students with 3D views of human anatomy that are not possible with an atlas of images. Despite its many benefits, cadaveric dissection involves logistical, economic, and moral complexities, as well as exposure to chemicals. Furthermore, the quality of dissection and the experiential results are influenced by various factors, including the quality of the material, the number of cadavers available, the previous anatomic knowledge of trainees, dissection time, the availability of instructors, and self-instruction time.23
More recently, anatomic dissection has been curtailed in medical schools with the increasing use of techniques such as prosections and plastinated sections. As a result, students might have only a small window of opportunity for hands-on learning of anatomy during cadaveric dissection. To become proficient, students must often rely on supplemental materials to achieve sufficient anatomic knowledge, which is usually obtained through self-directed study of 2D content that does not facilitate comprehension of the spatial relationships among structures. After medical school, surgeons often have little exposure to cadaveric material during their surgical residency. We therefore believe that learning human anatomy is best accomplished in an environment that enables anatomic structures to be studied from multiple perspectives.10,11,24-29 The discipline of neurosurgery requires 3D visualization and understanding of sophisticated anatomy that involves complex and overlapping structures. The successful neurosurgeon comprehends the repercussions of each surgical intervention for both the intended target and the surrounding tissue.5-7
3D Technology in Neurosurgery Education and Training
Our 3D models display cross-sectional anatomy in a way that enables users to magnify, rotate, and view anatomic structures from the perspective of their choice. Historically, the understanding and teaching of neuroanatomy in 3 dimensions has been the goal of many neuroanatomists.2,5,15,26,30-34 For example, beginning in the 1940s, Josef Klingler used wax and plaster casts to create models of his brain dissections.5 In addition, Klingler used wood and metal to create other brain models that contained up to 20 interdigitating parts representing the primary areas of the brain. Albert Rhoton famously used 3D projection of film slides to display details of intricate anatomic dissections of neurovascular structures.35 His teaching legacy in neurosurgery and neuroanatomy rests on promoting a 3D understanding of the brain, which instilled in neurosurgery residents and students an enthusiasm for greater realism in learning surgical anatomy. These methods became aids in the teaching of complex anatomy, especially that of the deep parts of the brain. In the decades since then, other researchers have developed additional modeling systems with the help of technological advances such as high-resolution neuroimaging, photogrammetry, and 3D-rendering software. Studying human anatomy from cadaveric specimens allows the intuitive depiction of structures and their features (eg, location, size, and spatial interactions); this, in turn, facilitates an appreciation for the 3D architecture of the whole body.15 We believe that the study of cadaveric specimens viewed in 3D cross-sections will facilitate the intuitive interpretation of standard neuroimaging (ie, computed tomography [CT] and magnetic resonance imaging [MRI]) in clinical scenarios. AR/VR resources can supplement the learning environment and offer the convenience of mobility and remote access.8,36,37 Learning through AR/VR materials is limited neither to 2 dimensions nor to a designated environment such as a hospital or laboratory.
Techniques for Producing 3D Visualization and Models
We used photogrammetry to create AR/VR resources for an enhanced learning experience in which users can analyze an entire cadaveric brain through individualized cross-sections in the axial, sagittal, and coronal planes while remaining within their desired environment.38,39 Previous studies have incorporated 3D technology for neurosurgery practice, training, and education.10-22 However, most of these studies used artificial textures overlaid on reconstructed data from CT or MRI.10,11,13,17,18 Others used realistic textures but lacked complete 360° maneuverability.12,14,15,19-22 By contrast, our models combine realistic features with 360° movement. The idea of blending several models to create a common 3D virtual brain model is intriguing but requires a multidisciplinary effort because, unlike neuroimaging software, current image-processing and photogrammetry software cannot combine surface-rendered photogrammetric models. The integration of neuroimaging and photogrammetry can enhance the accuracy and quality of 3D anatomic models to promote neuroanatomy and neurosurgery training.2 Blending texture information from photogrammetric models with the surface topography of volumetric MRI-based 3D virtual models is a promising avenue for creating more realistic and spatially accurate digital models, although achieving this goal is a technically challenging engineering problem. In addition, machine learning algorithms can be applied to surgical photographs and videos to accurately predict the surface texture of a human brain MRI.40 Other researchers have produced exceptionally realistic 3D models for neurosurgery education and training. Roh et al16 used a photogrammetry scanner to digitalize a cadaveric specimen throughout each step of a whole-brain dissection and integrated the models with AR/VR simulations for neurosurgery residents; perfusion of the specimen aided visualization of the vasculature and dural partitions. In another study, Serrato-Avila et al14 performed white matter dissection of 26 cadaveric brainstems with the cerebellum attached to create 7 realistic 3D models. Unfortunately, only one of their 7 models can be viewed in its entirety from all angles (ie, in 360°). Although both of these studies provided illustrative models, neither provided realistic 3D cross-sections that can be correlated with standard neuroimaging data.
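To make the texture-blending idea discussed above concrete, the following is a minimal sketch, assuming a photogrammetric model and an MRI-derived surface are already co-registered in the same coordinate space, of one simple way per-vertex color could be transferred by nearest-neighbor lookup. It illustrates the concept only; it is not the authors' method, and the file names are hypothetical.

```python
import trimesh
from scipy.spatial import cKDTree

# Hypothetical inputs: a photogrammetric mesh with per-vertex color and an
# MRI-derived surface mesh without texture, assumed already co-registered.
photo = trimesh.load("photogrammetry_model.ply", force="mesh")
mri_surface = trimesh.load("mri_pial_surface.ply", force="mesh")

# For each MRI-surface vertex, find the nearest photogrammetric vertex and
# copy its color; registration error directly degrades this simple mapping.
tree = cKDTree(photo.vertices)
_, nearest = tree.query(mri_surface.vertices)
mri_surface.visual.vertex_colors = photo.visual.vertex_colors[nearest]

mri_surface.export("mri_surface_with_photographic_texture.glb")
```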
By contrast, the models that we produced are unique. As noted above, existing 3D models do not correlate well with standard cross-sectional neuroimaging viewed in 3 orthogonal planes. This lack of correlation makes it difficult for trainees to translate a patient's pathology from diagnostic imaging to real-world dimensions. By using 3D cross-sections, however, trainees can more easily appreciate the spatial relationship between the precentral gyrus (superficial) and the insula, putamen, and internal capsule (deep), which enables them to better understand how pathology involving the latter structures might affect motor function by compromising the corticospinal tract.
Limitations
Despite the clear advantages and potential utility of this technology and our 3D models, our study does have limitations. First, the process of 3D model creation is susceptible to variations in background lighting and smooth reflective surfaces that can reduce the resolution of the models and distort depth perception. Second, at maximum magnification, the 3D models lose some of their detail and resolution. This limitation might be addressed by using a photogrammetry application with higher-resolution inputs. Third, the modeling process requires expensive materials, including cadaveric specimens and smart devices. Finally, qualitative and quantitative research is still needed to determine the benefits to neurosurgery education and training.
CONCLUSION
Our 3D AR/VR models can serve as an important educational resource that realistically demonstrates the nonpathological anatomy displayed in neuroimaging and can potentially aid the neurosurgeon in understanding the complex anatomy of the brain. This research is available through the Neurosurgical Atlas as a high-fidelity, digital 3D library of cadaveric models. We believe that this technology will do much to improve neuroanatomy training by bridging the gap between what is digitalized and what is real while supplementing standard neuroimaging and other resources used to obtain anatomic knowledge.
Contributors: M. E. Gurses, S. Hanalioglu, G. Mignucci-Jiménez, E. Gökalp, N. I. Gonzalez-Romo, A. Gungor, A. A. Cohen-Gadol, U. Türe, M. T. Lawton, and M. C. Preul
Content from Gurses ME, Hanalioglu S, Mignucci-Jiménez G, Gökalp E, Gonzalez-Romo NI, Gungor A, Cohen-Gadol AA, Türe U, Lawton MT, Preul MC. Three-Dimensional Modeling and Extended Reality Simulations of the Cross-Sectional Anatomy of the Cerebrum, Cerebellum, and Brainstem. Oper Neurosurg (Hagerstown). 2023 Apr 21. doi: 10.1227/ons.0000000000000703. PMID: 37083688.
References
1. Qualter J, Sculli F, Oliker A, et al. The biodigital human: a web-based 3D platform for medical visualization and education. Stud Health Technol Inform. 2012;173:359-361.
2. Hanalioglu S, Romo NG, Mignucci-Jiménez G, et al. Development and validation of a novel methodological pipeline to integrate neuroimaging and photogrammetry for immersive 3D cadaveric neurosurgical simulation. Front Surg. 2022;9:878378.
3. Gurses ME, Gungor A, Gökalp E, et al. Three-dimensional modeling and augmented and virtual reality simulations of the white matter anatomy of the cerebrum. Oper Neurosurg. 2022;23(5):355-366.
4. Gurses ME, Gungor A, Rahmanov S, et al. Three-dimensional modeling and augmented reality and virtual reality simulation of fiber dissection of the cerebellum and brainstem. Oper Neurosurg. 2022;23(5):345-354.
5. Agrawal A, Kapfhammer JP, Kress A, et al. Josef Klingler’s models of white matter tracts: influences on neuroanatomy, neurosurgery, and neuroimaging. Neurosurgery. 2011;69(2):238-254; discussion 252-254.
6. Klingler J. Erleichterung der makroskopischen Präparation des Gehirns durch den Gefrierprozess [Facilitation of the macroscopic dissection of the brain by the freezing process]. Schweiz Arch Neurol Psychiatr. 1935;36:247-256.
7. Klingler J, Ludwig E. Atlas Cerebri Humani [Atlas of the Human Brain]. Karger; 1956.
8. Gurses ME, Gungor A, Hanalioglu S, et al. Qlone: a simple method to create 360-degree photogrammetry-based 3-dimensional model of cadaveric specimens. Oper Neurosurg. 2021;21(6):E488-E493.
9. The Neurosurgical Atlas. 2022. Accessed 1 May 2022. https://www.neurosurgicalatlas.com
10. Louis RG, Steinberg GK, Duma C, et al. Early experience with virtual and synchronized augmented reality platform for preoperative planning and intraoperative navigation: a case series. Oper Neurosurg. 2021;21(4):189-196.
11. Kockro RA, Stadie A, Schwandt E, et al. A collaborative virtual reality environment for neurosurgical planning and training. Oper Neurosurg. 2007;61(5):379-391; discussion 391.
12. Parraga RG, Possatti LL, Alves RV, Ribas GC, Türe U, de Oliveira E. Microsurgical anatomy and internal architecture of the brainstem in 3D images: surgical considerations. J Neurosurg. 2016;124(5):1377-1395.
13. Shao X, Yuan Q, Qian D, et al. Virtual reality technology for teaching neurosurgery of skull base tumor. BMC Med Educ. 2020;20(1):3.
14. Serrato-Avila JL, Paz Archila JA, Silva da Costa MD, et al. Three-dimensional quantitative analysis of the brainstem safe entry zones based on internal structures. World Neurosurg. 2022;158:e64-e74.
15. Fernandez-Miranda JC, Rhoton AL, Jr., Alvarez-Linera J, Kakizawa Y, Choi C, de Oliveira EP. Three-dimensional microsurgical and tractographic anatomy of the white matter of the human brain. Neurosurgery. 2008;62(6):989-1026; discussion 1026.
16. Roh TH, Oh JW, Jang CK, et al. Virtual dissection of the real brain: integration of photographic 3D models into virtual reality and its effect on neurosurgical resident education. Neurosurg Focus. 2021;51(2):E16.
17. Dho YS, Park SJ, Choi H, et al. Development of an inside-out augmented reality technique for neurosurgical navigation. Neurosurg Focus. 2021;51(2):E21.
18. Perin A, Gambatesa E, Galbiati TF, et al. The “STARS-CASCADE” study: virtual reality simulation as a new training approach in vascular neurosurgery. World Neurosurg. 2021;154:e130-e146.
19. Spiriev T, Mitev A, Stoykov V, Dimitrov N, Maslarski I, Nakov V. Three-dimensional immersive photorealistic layered dissection of superficial and deep back muscles: anatomical study. Cureus. 2022;14(7):e26727.
20. Vigo V, Pastor-Escartin F, Doniz-Gonzalez A, et al. The Smith-Robinson approach to the subaxial cervical spine: a stepwise microsurgical technique using volumetric models from anatomic dissections. Oper Neurosurg. 2021;20(1):83-90.
21. Kournoutas I, Vigo V, Chae R, et al. Acquisition of volumetric models of skull base anatomy using endoscopic endonasal approaches: 3D scanning of deep corridors via photogrammetry. World Neurosurg. 2019;129:372-377.
22. Rodriguez Rubio R, Xie W, Vigo V, et al. Immersive surgical anatomy of the retrosigmoid approach. Cureus. 2021;13(6):e16068.
23. Johnson EO, Charchanti AV, Troupis TG. Modernization of an anatomy class: from conceptualization to implementation. A case for integrated multimodal-multidisciplinary teaching. Anat Sci Educ. 2012;5(6):354-366.
24. Türe U, Yasargil MG, Friedman AH, Al-Mefty O. Fiber dissection technique: lateral aspect of the brain. Neurosurgery. 2000;47(2):417-427; discussion 426-427.
25. Türe U, Yaşargil DCH, Al-Mefty O, Yasargil MG. Topographic anatomy of the insular region. J Neurosurg. 1999;90(4):720-733.
26. Yagmurlu K, Vlasak AL, Rhoton AL, Jr. Three-dimensional topographic fiber tract anatomy of the cerebrum. Oper Neurosurg. 2015;11(2):274-305; discussion 305.
27. Ribas EC, Yagmurlu K, de Oliveira E, Ribas GC, Rhoton A. Microsurgical anatomy of the central core of the brain. J Neurosurg. 2018;129(3):752-769.
28. Kikinis R, Gleason PL, Moriarty TM, et al. Computer-assisted interactive three-dimensional planning for neurosurgical procedures. Neurosurgery. 1996;38(4):640-649; discussion 649.
29. Silen C, Wirell S, Kvist J, Nylander E, Smedby O. Advanced 3D visualization in student-centred medical education. Med Teach. 2008;30(5):e115-e124.
30. Kakizawa Y, Hongo K, Rhoton AL, Jr. Construction of a three-dimensional interactive model of the skull base and cranial nerves. Neurosurgery. 2007;60(5):901-910; discussion 901-910.
31. Yagmurlu K, Rhoton AL, Jr., Tanriover N, Bennett JA. Three-dimensional microsurgical anatomy and the safe entry zones of the brainstem. Oper Neurosurg. 2014;10(4):602-620; discussion 619-620.
32. Martins C, Ribas EC, Rhoton AL, Jr., Ribas GC. Three-dimensional digital projection in neurosurgical education: technical note. J Neurosurg. 2015;123(4):1077-1080.
33. Gungor A, Baydin S, Middlebrooks EH, Tanriover N, Isler C, Rhoton AL, Jr. The white matter tracts of the cerebrum in ventricular surgery and hydrocephalus. J Neurosurg. 2017;126(3):945-971.
34. Cavalcanti DD, Feindel W, Goodrich JT, Dagi TF, Prestigiacomo CJ, Preul MC. Anatomy, technology, art, and culture: toward a realistic perspective of the brain. Neurosurg Focus. 2009;27(3):E2.
35. Farhadi DS, Jubran JH, Zhao X, et al. The neuroanatomic studies of Albert L. Rhoton Jr. in historical context: an analysis of origin, evolution, and application. World Neurosurg. 2021;151:258-276.
36. Sahin B, Hanalioglu S. The continuing impact of coronavirus disease 2019 on neurosurgical training at the 1-year mark: results of a nationwide survey of neurosurgery residents in Turkey. World Neurosurg. 2021;151:e857-e870.
37. Lazaro T, Srinivasan VM, Rahman M, et al. Virtual education in neurosurgery during the COVID-19 pandemic. Neurosurg Focus. 2020;49(6):E17.
38. Ogata H, Matsuka Y, Bishouty MME, Yano Y. LORAMS: linking physical objects and videos for capturing and sharing learning experiences towards ubiquitous learning. Int J Mob Learn Organ. 2009;3(4):337-350.
39. Kinshuk, Graf S. Ubiquitous learning. In: Seel NM, ed. Encyclopedia of the Sciences of Learning. Springer; 2012:3361-3363.
40. Gonzalez-Romo NI, Hanalioglu S, Mignucci-Jiménez G, Abramov I, Xu Y, Preul MC. Anatomical depth estimation and three-dimensional reconstruction of microsurgical anatomy using monoscopic high-definition photogrammetry and machine learning. Oper Neurosurg. 2022;24(4):432-444.