Use of a 3D CNN to functionally validate the registration of TCGA GBM/LGG expert segmentation labels onto the original multi-institutional MRI brain dataset


Andrew Ho, Matt Williams

Abstract

Background

Pre-operative MRI brain scans from The Cancer Genome Atlas glioblastoma multiforme (GBM) and low-grade glioma (LGG) collections form the basis of the BraTS dataset. The publicly available BraTS expert segmentation labels and the associated annual challenge are widely used for training and assessing brain tumour ML segmentation algorithms.

Prior to release, the scans underwent multiple pre-processing steps, including co-registration to the SRI24 atlas, skull-stripping, resampling, smoothing, and intensity histogram matching. This makes the annotated scans difficult to use for radiomics analysis, because pre-processing may have distorted features present in the original scans.
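As an illustration of how such steps can alter intensities, the sketch below applies histogram matching with SimpleITK. This is not the BraTS pipeline itself; the file names and parameter values are hypothetical.

```python
import SimpleITK as sitk

# Hypothetical inputs: a patient scan and an arbitrary reference scan.
scan = sitk.ReadImage("patient_t1.nii.gz")
reference = sitk.ReadImage("reference_t1.nii.gz")

# Remap the scan's intensity histogram onto the reference's histogram.
matcher = sitk.HistogramMatchingImageFilter()
matcher.SetNumberOfHistogramLevels(256)
matcher.SetNumberOfMatchPoints(7)
matcher.ThresholdAtMeanIntensityOn()  # exclude background voxels below the mean
matched = matcher.Execute(scan, reference)

sitk.WriteImage(matched, "patient_t1_matched.nii.gz")
```

Because the voxel intensities themselves change, first-order and texture radiomic features computed after this step need not match those computed on the original scan.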

Method

Previously calculated individual scan/sequence-level affine transformation matrices from SRI24 space (i.e., the BraTS scans) to the original patient space were inverted and used to register the original scans to SRI24 space, allowing the BraTS labels to be applied directly. The registration was validated functionally, using the segmentation performance of the DeepMedic 3D convolutional neural network (CNN) as a proxy.
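A minimal sketch of this step, assuming SimpleITK; the file names are hypothetical, and whether the stored matrix must be inverted depends on the convention in which it was saved (ITK's Resample expects a transform mapping output-space points to input-space points).

```python
import SimpleITK as sitk

original = sitk.ReadImage("patient_space_t1.nii.gz")     # original scan
atlas = sitk.ReadImage("sri24_atlas_t1.nii.gz")          # SRI24 reference grid
affine = sitk.ReadTransform("scan_affine.tfm")           # per-scan/sequence matrix

# Invert if the stored transform points the other way for ITK's convention.
affine = affine.GetInverse()

# Resample the original intensities onto the SRI24 grid; only the geometry
# changes, so the original intensity distribution is preserved.
registered = sitk.Resample(original, atlas, affine,
                           sitk.sitkLinear, 0.0, original.GetPixelID())

# The BraTS expert labels are already defined in SRI24 space, so they now
# overlay the registered original scan voxel-for-voxel.
labels = sitk.ReadImage("brats_expert_labels.nii.gz")
sitk.WriteImage(registered, "original_in_sri24.nii.gz")
```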

Two models were trained in parallel on 102 GBM and 59 LGG scans. Model 1 was trained on the BraTS pre-processed dataset; Model 2 was trained on the original imaging, skull-stripped using the same mask as the BraTS dataset. Ten-fold cross-validation, stratified by tumour grade, generated a whole-tumour Dice score for each scan under both models.
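A minimal sketch of the cross-validation loop, assuming scikit-learn; `train_deepmedic`, `load_truth`, and the ID lists are hypothetical wrappers around DeepMedic and the dataset, not the authors' actual code.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

def dice(pred, truth):
    # Whole-tumour Dice: 2|A n B| / (|A| + |B|) on binarised masks.
    pred, truth = pred.astype(bool), truth.astype(bool)
    return 2.0 * np.logical_and(pred, truth).sum() / (pred.sum() + truth.sum())

scan_ids = np.array(gbm_ids + lgg_ids)            # 102 GBM + 59 LGG scans
grades = np.array(["GBM"] * 102 + ["LGG"] * 59)   # stratification variable

scores = {}
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for train_idx, test_idx in skf.split(scan_ids, grades):
    model = train_deepmedic(scan_ids[train_idx])  # hypothetical wrapper
    for sid in scan_ids[test_idx]:
        scores[sid] = dice(model.predict(sid), load_truth(sid))
```

Stratifying by grade keeps the GBM:LGG ratio similar across folds, so each held-out Dice score comes from a model trained on a comparable case mix.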

Results

The mean Dice score was 0.808 for Model 1 and 0.881 for Model 2. A two-tailed Wilcoxon signed-rank test showed this difference to be significant (p < 0.05), validating the affine transformations and indicating that the original imaging provided a better training dataset for deep learning than the BraTS pre-processed dataset.
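A minimal sketch of the paired comparison, assuming SciPy; the per-scan score arrays are hypothetical and must be paired by scan in the same order.

```python
import numpy as np
from scipy.stats import wilcoxon

scores_m1 = np.load("dice_model1.npy")  # Model 1: BraTS pre-processed training
scores_m2 = np.load("dice_model2.npy")  # Model 2: original-imaging training

print(f"Mean Dice, Model 1: {scores_m1.mean():.3f}")  # 0.808 reported
print(f"Mean Dice, Model 2: {scores_m2.mean():.3f}")  # 0.881 reported

# Paired, two-tailed test on the per-scan differences; appropriate here
# because each scan is segmented by both models.
stat, p = wilcoxon(scores_m1, scores_m2, alternative="two-sided")
print(f"W = {stat:.1f}, p = {p:.4g}")
```

The signed-rank test is used rather than a paired t-test because Dice scores are bounded and typically non-normally distributed.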

Conclusion

Functional validation, via 3D CNN segmentation performance, of the affine transformation matrices between the original imaging and the BraTS segmentations permits the use of the published expert segmentation labels on the original imaging. This allows deep learning models to be developed on multi-institutional, real-world scans, and features to be extracted for radiomics analysis without the artefacts introduced by pre-processing. Training the CNN on the original imaging yielded better performance, implying that useful features may have been distorted during pre-processing.

Impact statement

Validated registration of expert segmentation labels onto the original imaging allows radiomics analysis of real-world scans without the distortion introduced by BraTS pre-processing.