CNN-Based Prostate Zonal Segmentation on T2-Weighted MR Images: A Cross-Dataset Study
Rundo L.
2020-01-01
Abstract
Prostate cancer is the most common cancer among US men. However, prostate imaging remains challenging despite advances in multi-parametric magnetic resonance imaging (MRI), which provides both morphologic and functional information pertaining to pathological regions. Along with whole prostate gland segmentation, distinguishing between the central gland (CG) and the peripheral zone (PZ) can guide differential diagnosis, since the frequency and severity of tumors differ in these regions; however, their boundary is often weak and fuzzy. This work presents a preliminary study on deep learning to automatically delineate the CG and PZ, aiming to evaluate the generalization ability of convolutional neural networks (CNNs) on two multi-centric prostate MRI datasets. In particular, we compared three CNN-based architectures: SegNet, U-Net, and pix2pix. In this context, the segmentation performance achieved with and without pre-training was compared in 4-fold cross-validation. Overall, U-Net outperforms the other methods, especially when training and testing are performed on multiple datasets.
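The abstract describes a 4-fold cross-validation protocol in which zonal segmentation quality is compared across architectures. The snippet below is a minimal illustrative sketch of such a protocol, not the authors' actual pipeline: it assumes Python with NumPy and scikit-learn, and the patient identifiers, masks, and the commented-out model call are hypothetical placeholders.

```python
import numpy as np
from sklearn.model_selection import KFold

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks (common segmentation metric)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Hypothetical patient identifiers standing in for the two multi-centric MRI datasets.
patient_ids = np.array([f"case_{i:03d}" for i in range(40)])

# 4-fold cross-validation split at the patient level, as described in the abstract.
kfold = KFold(n_splits=4, shuffle=True, random_state=42)
for fold, (train_idx, test_idx) in enumerate(kfold.split(patient_ids)):
    train_cases, test_cases = patient_ids[train_idx], patient_ids[test_idx]

    # In a real pipeline, a CNN (e.g. SegNet, U-Net, or pix2pix) would be trained on
    # train_cases here and evaluated on test_cases:
    # model = build_model(); model.fit(...); predictions = model.predict(...)

    # Placeholder evaluation on dummy masks, only to show how per-fold Dice scores
    # for the CG and PZ labels would be collected and compared.
    dummy_pred = np.zeros((128, 128), dtype=bool)
    dummy_gt = np.zeros((128, 128), dtype=bool)
    dummy_pred[32:96, 32:96] = True
    dummy_gt[40:100, 40:100] = True
    print(f"Fold {fold}: {len(train_cases)} train / {len(test_cases)} test cases, "
          f"example Dice = {dice_coefficient(dummy_pred, dummy_gt):.3f}")
```

Per-fold Dice scores (per zonal label, CG and PZ) would then be averaged across the four folds to compare architectures and the with/without pre-training settings.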