Practicable Paradigms for Semi-Automated Expert-User Post-Processing of Deep-Learning Segmentations in 3D Radiology

  • Julian Jany  
  • Gerald Adam Zwettler 
  • Department of Software Engineering, School of Informatics, Communications and Media, University of Applied Sciences Upper Austria, Softwarepark 11, Hagenberg, 4232, Austria
  • Research Group Advanced Information Systems and Technology, Research and Development Department, University of Applied Sciences Upper Austria, Softwarepark 11, Hagenberg, 4232, Austria
Cite as
Jany J., Zwettler G.A. (2021). Practicable Paradigms for Semi-Automated Expert-User Post-Processing of Deep-Learning Segmentations in 3D Radiology. Proceedings of the 10th International Workshop on Innovative Simulation for Healthcare (IWISH 2021), pp. 41-50. DOI: https://doi.org/10.46354/i3m.2021.iwish.007

Abstract

With recent improvements in deep-learning architectures and the availability of GPU hardware, state-of-the-art deep learning (DL) has become established as a powerful image processing technology in the clinical routine, providing segmentation results of high accuracy. As a drawback, its black-box nature does not naturally allow for inspection and post-processing by medical experts. We present a graph segmentation (GS) approach that derives its fitness function from arbitrary DL results in a generic way. To allow for efficient and effective post-processing by medical experts, various interaction paradigms are presented and evaluated in this paper. The trade-off of GS compared to the initial DL results is marginal (ΔJI = 0.196%), yet potential DL segmentation errors can be corrected in a reliable way. The intuitive approach shows a high level of both inter- and intra-user reproducibility. Change propagation of corrected slices keeps the demand for user interaction to a minimum when correcting potential weaknesses in the DL segmentations. Thereby, the formerly error-prone slice mini-batches are corrected in an automated way, with the JI being significantly increased.
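The segmentation quality above is reported via the Jaccard index (JI), the ratio of intersection to union of two binary label sets. As a minimal illustrative sketch (the function name and the toy masks are our own, not taken from the paper), JI between a DL mask and a GS re-segmentation could be computed voxel-wise as:

```python
def jaccard_index(mask_a, mask_b):
    """Jaccard index (JI) between two binary segmentation masks,
    given as flat sequences of 0/1 voxel labels of equal length."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    union = sum(1 for a, b in zip(mask_a, mask_b) if a or b)
    # Two empty masks are conventionally treated as identical.
    return inter / union if union else 1.0

# Hypothetical toy example: a DL mask vs. a GS re-segmentation of it.
dl_mask = [0, 1, 1, 1, 0, 1, 1, 0]
gs_mask = [0, 1, 1, 0, 0, 1, 1, 1]
print(jaccard_index(dl_mask, gs_mask))  # 4 / 6 ≈ 0.667
```

A ΔJI of 0.196% between DL and GS then corresponds to the difference of two such scores, each computed against the ground-truth mask.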
