
Presentation Transcript

Grid Enabled Image Guided Neurosurgery Using High Performance Computing: 

Grid Enabled Image Guided Neurosurgery Using High Performance Computing
A. Majumdar(1), A. Birnbaum(1), D. Choi(1), T. Devadithya(2), A. Trivedi(3), S. K. Warfield(4), N. Archip(4), K. Baldridge(1,5), Petr Krysl(3), June Andrews(6)
(1) San Diego Supercomputer Center and (3) Structural Engineering Dept, University of California San Diego; (2) Computer Science Dept, Indiana University; (4) Computational Radiology Lab, Brigham and Women's Hospital, Harvard Medical School; (5) Universität Zürich; (6) Electrical Engineering, UC Berkeley
Grants: NSF ITR 0427183, 0426558, REU; NIH P41 RR13218, P01 CA67165, LM0078651; I3 grant (IBM)

Neurosurgery Challenge: 

Neurosurgery Challenge
Challenges:
- Remove as much tumor tissue as possible
- Minimize the removal of healthy tissue
- Avoid disrupting critical anatomical structures
- Know when to stop the resection process
These challenges are compounded by intra-operative brain deformation caused by the surgical process itself, so it is important to quantify and correct for these deformations while surgery is in progress.
Real-time constraint: provide updated images roughly once per hour, each within a few minutes, during surgery lasting 6 to 8 hours.

Intraoperative MRI Scanner at BWH (0.5 T): 

Intraoperative MRI Scanner at BWH (0.5 T)

Brain Deformation: 

Brain Deformation
[Images: the brain before surgery and after surgery.]

Overall Process: 

Overall Process
[Diagrams: the workflow before image-guided neurosurgery and during image-guided neurosurgery.]

Timing During Surgery: 

Timing During Surgery
[Timeline: elapsed time in minutes (0 to 40) before and during surgery for the pipeline stages preop segmentation, intraop MRI, segmentation, registration, surface displacement, biomechanical simulation, and visualization, tracked against surgical progress.]

Current Prototype DDDAS Inside Hospital: 

Current Prototype DDDAS Inside Hospital

Two Research Aspects: 

Two Research Aspects
- Grid architecture: grid scheduling, on-demand remote access to multi-teraflop machines, data transfer and sharing
- Development of a detailed, scalable, non-linear hyperelastic biomechanical model

Intra-op MRI with pre-op fMRI: 

Intra-op MRI with pre-op fMRI

Scheduling Experiment #1 on 2 TeraGrid Clusters: 

Scheduling Experiment #1 on 2 TeraGrid Clusters
TeraGrid is an NSF-funded grid infrastructure spanning multiple research and academic sites.
- Queue delays at the SDSC and NCSA TeraGrid clusters were measured over 3 days for jobs requesting 5 minutes of wall-clock time on 2 to 64 CPUs
- A single job was submitted at a time; if a job did not start within 10 minutes, it was terminated and the next one was processed
- Question: what is the likelihood of a job running?
- 313 jobs were submitted to the NCSA TG cluster and 332 to the SDSC TG cluster, i.e. 50 to 56 jobs of each size on each cluster
A sketch of this probing loop appears below.
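The following is a minimal sketch of the probing loop described above, assuming hypothetical submit/poll/cancel wrappers around each cluster's batch system (the slides do not name the actual tooling):

    import time

    PROBE_TIMEOUT = 10 * 60   # kill a job that has not started within 10 minutes
    WALL_CLOCK    = 5 * 60    # each probe job requests 5 minutes of wall-clock time
    CPU_COUNTS    = [2, 4, 8, 16, 32, 64]

    def probe(cluster, cpus, submit, poll, cancel):
        """Submit one probe job; report whether it started within the timeout."""
        job_id = submit(cluster, cpus=cpus, walltime=WALL_CLOCK)
        deadline = time.time() + PROBE_TIMEOUT
        while time.time() < deadline:
            if poll(cluster, job_id) == "RUNNING":
                return True            # started in time: count as a success
            time.sleep(10)             # re-check the queue every 10 seconds
        cancel(cluster, job_id)        # never started: terminate, move on
        return False

Because only one probe job was in flight at a time, the measured queue delays reflect each cluster's ambient load rather than contention between the probes themselves.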

% of submitted tasks that run as a function of CPUs requested: 

TeraGrid Experiment Results
[Charts: percentage of submitted tasks that ran as a function of CPUs requested, and average queue delay for tasks that began running within 10 minutes.]

Scheduling Experiment #2 on 5 TeraGrid Clusters: 

Scheduling Experiment #2 on 5 TeraGrid Clusters
- The real-time constraint of this application requires that data transfer and simulation together take about 10 minutes; otherwise the results are of no use to the surgeons
- Assume simulation and data transfer (both directions) together take 10 minutes and data transfer takes 4 minutes, leaving 6 minutes for the biomechanical simulation on remote HPC machines
- Assume the biomechanical model is scalable, i.e. better results are achieved on a higher number of processors
- Objective: get the simulation done in 6 minutes, on the maximum number of processors available within that window
- Allow 4 minutes of waiting in the queue; this leaves 2 minutes for the actual simulation
The budget is restated as a sketch below.
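Restating the time budget above as a sketch (all numbers come from the slide):

    TOTAL_BUDGET   = 10 * 60                       # acceptable end-to-end time (sec)
    TRANSFER_TIME  = 4 * 60                        # data transfer, both directions
    SIM_BUDGET     = TOTAL_BUDGET - TRANSFER_TIME  # 6 min left for simulation
    QUEUE_WAIT_MAX = 4 * 60                        # tolerated wait in the queue
    COMPUTE_BUDGET = SIM_BUDGET - QUEUE_WAIT_MAX   # 2 min of actual computation
    assert COMPUTE_BUDGET == 2 * 60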

Experiment Characteristics: 

Experiment Characteristics
Flooding scheduler approach (experiment 1):
- Simultaneously submit 8-, 16-, 32-, 64-, and 128-processor jobs to multiple clusters: SDSC DataStar, SDSC TG, NCSA TG, ANL TG, PSC TG
- When a job starts at any center, kill all the lower CPU-count jobs at all the other centers
- Results: out of 1464 job submissions over ~7 days, only 6 failed, a success rate of 99.59%; 128-CPU jobs ran more than 50% of the time, and jobs of at least 64 CPUs ran more than 80% of the time
- The next slide gives the time-varying behavior of this experiment in 6-hour intervals
- 4 other experiments were performed by removing some of the successful clusters and by taking the scheduler cycle time on DataStar into account; as the number of clusters was reduced, the success rate went down
A sketch of the flooding approach appears below.
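A minimal sketch of the flooding approach, again assuming hypothetical submit/cancel wrappers and a wait_for_first_start helper that blocks until some submitted job begins running (none of these are named in the slides):

    CLUSTERS = ["DataStar", "SDSC-TG", "NCSA-TG", "ANL-TG", "PSC-TG"]
    SIZES    = [8, 16, 32, 64, 128]   # CPU counts flooded to every cluster

    def flood(submit, wait_for_first_start, cancel):
        """Flood every cluster with one job per CPU count; keep the first starter."""
        jobs = {(c, n): submit(c, cpus=n) for c in CLUSTERS for n in SIZES}
        cluster, cpus = wait_for_first_start(jobs)   # blocks until a job runs
        for (c, n), job_id in jobs.items():
            # Kill redundant jobs at or below the started size elsewhere; leave
            # larger requests pending in case a bigger allocation still starts.
            if n <= cpus and (c, n) != (cluster, cpus):
                cancel(c, job_id)
        return cluster, cpus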

Data Transfer: 

Data Transfer
- We are investigating grid-based data transfer mechanisms such as globus-url-copy and SRB
- All hospitals have firewalls for security and patient-data privacy, with a single port of entry to internal machines
- [Table: transfer time in seconds for a 20 MB file]
An illustrative transfer is sketched below.
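As an illustration of the globus-url-copy route, a sketch that times a single GridFTP transfer; the endpoints and paths are made up for the example, and -p 4 simply asks the client for four parallel TCP streams:

    import subprocess, time

    # Illustrative endpoints only; the real hospital gateway is not named in the slides.
    SRC = "file:///data/intraop/volume.nrrd"
    DST = "gsiftp://gridftp.sdsc.edu/scratch/volume.nrrd"

    start = time.time()
    # globus-url-copy is the GridFTP command-line client; -vb prints transfer statistics.
    subprocess.run(["globus-url-copy", "-vb", "-p", "4", SRC, DST], check=True)
    print(f"transfer took {time.time() - start:.1f} s")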

Mesh Model with Brain Segmentation: 

Mesh Model with Brain Segmentation

Current and New Biomechanical Models: 

Current and New Biomechanical Models
- Current linear elastic material model: RTBM
- Advanced biomechanical model: FAMULS (AMR)
- The advanced model is based on a conforming adaptive refinement method; inspired by the theory of wavelets, this refinement produces globally compatible meshes by construction
- Replicate the linear elastic result produced by RTBM using FAMULS



Deformation Simulation After Cut: 

Deformation Simulation After Cut
[Panels: FAMULS with no AMR, FAMULS with 3-level AMR, and RTBM.]

Advanced Biomechanical Model: 

Advanced Biomechanical Model
- The current solver is based on small-strain isotropic elasticity
- New biomechanical model: inhomogeneous, scalable, non-linear kinematics with a hyperelastic material model and AMR
- Increase resolution close to the level of MRI voxels, i.e. millions of finite elements
- The new high-resolution model must still meet the real-time constraint of neurosurgery, which requires fast access to remote multi-teraflop systems
The constitutive laws involved are sketched below.
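For reference, the small-strain isotropic linear elastic law the current solver uses, followed by a compressible neo-Hookean strain energy as one common example of a hyperelastic law (the slides do not specify which hyperelastic model the new solver adopts; neo-Hookean is shown only as an illustration):

    \sigma = \lambda \, \mathrm{tr}(\varepsilon) \, I + 2\mu\,\varepsilon,
    \qquad
    \varepsilon = \tfrac{1}{2}\left(\nabla u + \nabla u^{\top}\right)

    W(F) = \frac{\mu}{2}\left(\mathrm{tr}(F^{\top}F) - 3\right) - \mu \ln J + \frac{\lambda}{2}(\ln J)^{2},
    \qquad J = \det F

Here \lambda and \mu are the Lamé parameters, u is the displacement field, and F is the deformation gradient. The linear law holds only for small strains, which is why large intra-operative deformations motivate the hyperelastic formulation.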

Parallel Registration Performance: 

Parallel Registration Performance

Parallel Rendering Performance: 

Parallel Rendering Performance

Parallel RTBM Performance: 

Parallel RTBM Performance
[Chart: elapsed time in seconds (roughly 10 to 60 s) vs. number of CPUs (1 to 32) for RTBM on a mesh with 43,584 nodes and 214,035 tetrahedral elements, comparing IBM Power3, IA-64 TeraGrid, and IBM Power4 systems.]

End-to-End (BWH → SDSC → BWH) Timing: 

End-to-End (BWH → SDSC → BWH) Timing
- RTBM: not during surgery
- Rendering: during surgery


End-to-end Timing of RTBM: 

End-to-end Timing of RTBM
Timing of transferring ~20 MB files from BWH to SDSC, running the simulation on 16 nodes (32 processors), and transferring the files back to BWH: 9 + (60 + 7) + 50 = 126 sec.
This demonstrates the capability to provide biomechanical brain deformation simulation results (using the linear elastic model) to the surgery room at BWH within ~2 minutes using TG machines at SDSC.

End-to-end Timing of Rendering: 

End-to-end Timing of Rendering
Intra-op MRI data were sent from BWH to SDSC during a surgery, parallel rendering was performed at SDSC, and the rendered visualization was sent back to BWH (but not shown to the surgeons).
Total time (for two sets of data) = 2 × 53 + 2 × 7.4 + 0.2 + 13.7 = 134.7 sec, all DURING SURGERY.

Current and Future DDDAS Research: 

Current and Future DDDAS Research
- Continuing research and development in grid architecture, on-demand computing, and data transfer
- Continuing development of the advanced biomechanical model and parallel algorithm
- Future DDDAS: near-continuous, rather than once-an-hour, 3-D MRI-based updates
- The scanner at BWH can provide one 2-D slice every 3 seconds, or three orthogonal 2-D slices every 6 seconds
- A near-continuous DDDAS architecture requires major research, development, and implementation work in the biomechanical application domain
- It also requires research on the closed-loop system of dynamic, image-driven, continuous biomechanical simulation with surgical navigation and steering based on 3-D volumetric FEM results
A minimal sketch of such a closed loop follows.
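A minimal sketch of the proposed near-continuous closed loop; acquire_slice, update_model, and steer_display are hypothetical stand-ins for the scanner interface, the continuously updated biomechanical simulation, and the navigation display, none of which are specified in the slides:

    import time

    SLICE_PERIOD = 3.0   # one 2-D slice every 3 s (three orthogonal slices every 6 s)

    def dddas_loop(acquire_slice, update_model, steer_display):
        """Drive the simulation and the navigation display from each new slice."""
        while True:
            t0 = time.time()
            s = acquire_slice()               # latest 2-D intra-op MRI slice
            deformation = update_model(s)     # image-driven simulation update
            steer_display(deformation)        # volumetric FEM result to navigation
            time.sleep(max(0.0, SLICE_PERIOD - (time.time() - t0)))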
