
Systems and methods for training machine learning algorithms for inverse problems without fully sampled reference data

  • Publication Date:
    October 01, 2024
  • Additional Information
    • Patent Number:
      12,106,401
    • Appl. No:
      17/075,411
    • Application Filed:
      October 20, 2020
    • Abstract:
      Self-supervised training of machine learning ("ML") algorithms for reconstruction in inverse problems is described. These techniques do not require fully sampled training data. As an example, a physics-based ML reconstruction can be trained without requiring fully sampled training data. In this way, such ML-based reconstruction algorithms can be trained on existing databases of undersampled images or in a scan-specific manner.
    • Inventors:
      REGENTS OF THE UNIVERSITY OF MINNESOTA (Minneapolis, MN, US)
    • Assignees:
      REGENTS OF THE UNIVERSITY OF MINNESOTA (Minneapolis, MN, US)
    • Claim:
      1. A computer-implemented method for training a machine learning algorithm to reconstruct an image, the method comprising: accessing sub-sampled data with a computer system, wherein the sub-sampled data are k-space data acquired with a magnetic resonance imaging (MRI) system; dividing the sub-sampled data into a first k-space data subset and a second k-space data subset using the computer system; training a machine learning algorithm by inputting the first k-space data subset to the machine learning algorithm during training to enforce data consistency using a forward model while evaluating a loss function during training of the machine learning algorithm using the second k-space data subset; and storing the trained machine learning algorithm in the computer system for later use.
    • Claim:
      2. The method of claim 1, wherein the loss function is defined between an output image and the second k-space data subset.
    • Claim:
      3. The method of claim 2, wherein the second data subset comprises a vector of k-space data points.
    • Claim:
      4. The method of claim 1, wherein the first k-space data subset is selected such that it comprises a number of elements that is a fraction of a total number of elements in the sub-sampled data.
    • Claim:
      5. The method of claim 1, wherein the machine learning algorithm comprises a neural network.
    • Claim:
      6. The method of claim 5, wherein the neural network is a convolutional neural network.
    • Claim:
      7. The method of claim 6, wherein the convolutional neural network is a residual neural network.
    • Claim:
      8. The method of claim 5, wherein the neural network is implemented with an unrolled neural network architecture comprising a plurality of steps, each step including a regularization unit and a data consistency unit.
    • Claim:
      9. The method of claim 1, wherein the sub-sampled data are partitioned into a number of partitions, such that the first k-space data subset comprises a plurality of first k-space data subsets, wherein the plurality of first k-space data subsets comprises a number of first k-space data subsets that is equal to the number of partitions, and the second k-space data subset comprises a plurality of second k-space data subsets, wherein the plurality of second k-space data subsets comprises a number of second k-space data subsets that is equal to the number of partitions.
    • Claim:
      10. The method of claim 1, further comprising reconstructing an image by accessing image data with the computer system, retrieving the trained machine learning algorithm with the computer system, and inputting the image data to the trained machine learning algorithm, generating output as a reconstructed image.
    • Claim:
      11. The method of claim 10, further comprising fine-tuning the trained machine learning algorithm using the image data accessed with the computer system.
    • Claim:
      12. The method of claim 1, further comprising reconstructing an image by retrieving the trained machine learning algorithm with the computer system, and inputting the sub-sampled data to the trained machine learning algorithm, generating output as a reconstructed image.
    • Claim:
      13. The method of claim 1, wherein the sub-sampled data comprise a database of sub-sampled data and accessing the sub-sampled data includes accessing a set of sub-sampled data from the database.
    • Claim:
      14. The method of claim 1, wherein training the machine learning algorithm on the first subset of data comprises using a forward operator when training for data consistency.
    • Claim:
      15. The method of claim 1, wherein the sub-sampled data comprise scan-specific data obtained from the subject.
    • Claim:
      16. The method of claim 15, further comprising reconstructing an image by retrieving the trained machine learning algorithm with the computer system, and inputting the sub-sampled data to the trained machine learning algorithm, generating output as a reconstructed image.
    • Claim:
      17. A method for reconstructing an image from undersampled k-space data, the method comprising: accessing a pre-trained neural network with a computer system, wherein the pre-trained neural network has been trained on sub-sampled k-space data that were divided into a first k-space data subset and a second k-space data subset, wherein the pre-trained neural network was trained on the first k-space data subset while evaluating a loss function using the second k-space data subset during training; accessing undersampled k-space data with the computer system, wherein the undersampled k-space data were obtained from a subject using a magnetic resonance imaging (MRI) system; inputting the undersampled k-space data to the pre-trained neural network, generating output as a reconstructed image that depicts the subject; and displaying the image to a user using the computer system.
    • Claim:
      18. The method of claim 17, wherein the pre-trained neural network is fine-tuned before inputting the undersampled k-space data to the pre-trained neural network, wherein the pre-trained neural network is fine-tuned by: dividing the undersampled k-space data into a first data subset and a second data subset; applying the first data subset to the pre-trained neural network, generating output as an output image; transforming the output image into k-space, generating output as network output k-space data; and minimizing a loss function between the second data subset and the network output k-space data in order to generate fine-tuned network parameters that best estimate the second data subset based on the loss function.
    • Claim:
      19. A computer-implemented method for training a machine learning algorithm to reconstruct an image, the method comprising: accessing sub-sampled data with a computer system, wherein the sub-sampled data are k-space data acquired with a magnetic resonance imaging (MRI) system; dividing the sub-sampled data into at least a first k-space data subset and a second k-space data subset using the computer system; training a machine learning algorithm by inputting the first k-space data subset to the machine learning algorithm during training to enforce data consistency using a forward model while evaluating a loss function during training of the machine learning algorithm using at least the second k-space data subset; and storing the trained machine learning algorithm in the computer system for later use.
    • Claim:
      20. The method of claim 19, wherein the sub-sampled data are partitioned into a number of partitions comprising at least the first k-space data subset and the second k-space data subset.
    • Claim:
      21. The method of claim 20, wherein the sub-sampled data are partitioned into the number of partitions, such that the first k-space data subset comprises a plurality of first k-space data subsets, wherein the plurality of first k-space data subsets comprises a number of first k-space data subsets that is equal to the number of partitions, and the second k-space data subset comprises a plurality of second k-space data subsets, wherein the plurality of second k-space data subsets comprises a number of second k-space data subsets that is equal to the number of partitions.
    • Claim:
      22. The method of claim 19, wherein the loss function is defined between an output image and at least the second k-space data subset.
    • Claim:
      23. The method of claim 22, wherein the second k-space data subset comprises a vector of k-space data points.
    • Claim:
      24. The method of claim 19, wherein the machine learning algorithm comprises a neural network.
    • Claim:
      25. The method of claim 24, wherein the neural network is implemented with an unrolled neural network architecture comprising a plurality of steps, each step including a regularization unit and a data consistency unit.
    • Claim:
      26. The method of claim 19, wherein the sub-sampled data comprise a database of sub-sampled data and accessing the sub-sampled data includes accessing a set of sub-sampled data from the database.
    • Claim:
      27. The method of claim 19, wherein the sub-sampled data comprise scan-specific data obtained from the subject.
    • Claim:
      28. The method of claim 27, further comprising reconstructing an image by retrieving the trained machine learning algorithm with the computer system, and inputting the sub-sampled data to the trained machine learning algorithm, generating output as a reconstructed image.
    • Patent References Cited:
      9,297,873 March 2016 Block
      10,712,416 July 2020 Sandino
      2017/0053402 February 2017 Migukin
      2019/0257905 August 2019 Cheng
      2019/0385047 December 2019 Lei
      2020/0090036 March 2020 Nakata
      2020/0311541 October 2020 Cmielowski
      2020/0311878 October 2020 Matsuura
      2022/0130017 April 2022 Zhang
    • Other References:
      Cheng, J., Wang, H., Ying, L., Liang, D. (2019). Model Learning: Primal Dual Networks for Fast MR Imaging. In: Medical Image Computing and Computer Assisted Intervention, MICCAI 2019. Lecture Notes in Computer Science, vol. 11766. Springer, Cham (Year: 2019). cited by examiner
      Aggarwal, Hemant K., Merry P. Mani, and Mathews Jacob. “MoDL: Model-based deep learning architecture for inverse problems.” IEEE transactions on medical imaging 38.2 (2018): 394-405. (Year: 2018). cited by examiner
      Hyun, Chang Min, et al. “Deep learning for undersampled MRI reconstruction.” Physics in Medicine & Biology 63.13 (2018): 135007. (Year: 2018). cited by examiner
      Knoll, Florian, et al. “Deep learning methods for parallel magnetic resonance image reconstruction.” arXiv preprint arXiv:1904.01112 (2019). (Year: 2019). cited by examiner
      Liang, Dong, et al. “Deep MRI reconstruction: unrolled optimization algorithms meet neural networks.” arXiv preprint arXiv:1907.11711 (2019). (Year: 2019). cited by examiner
      Qin, Chen, et al. “Convolutional recurrent neural networks for dynamic MR image reconstruction.” IEEE transactions on medical imaging 38.1 (2018): 280-290. (Year: 2018). cited by examiner
      Rueckert, Daniel, and Julia A. Schnabel. “Model-based and data-driven strategies in medical image computing.” Proceedings of the IEEE 108.1 (2019): 110-124. (Year: 2019). cited by examiner
      Wu, Dufan, Kyungsang Kim, and Quanzheng Li. “Computationally efficient deep neural network for computed tomography image reconstruction.” Medical physics 46.11 (2019): 4763-4776. (Year: 2019). cited by examiner
      Zhussip, Magauiya, Shakarim Soltanayev, and Se Young Chun. “Training deep learning based image denoisers from undersampled measurements without ground truth and without image prior.” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019. (Year: 2019). cited by examiner
      Hammernik, Kerstin, et al. “Learning a variational network for reconstruction of accelerated MRI data.” Magnetic resonance in medicine 79.6 (2018): 3055-3071. cited by applicant
      Senouf, O., et al. “Self-supervised learning of inverse problem solvers in medical imaging.” arXiv preprint arXiv:1905.09325 (2019). cited by applicant
      Akcakaya M, Basha TA, Goddu B, Goepfert LA, Kissinger KV, Tarokh V, Manning WJ, Nezafat R. Low-dimensional-structure self-learning and thresholding: Regularization beyond compressed sensing for MRI Reconstruction. Magn Reson Med 2011;66(3):756-767. cited by applicant
      Akcakaya M, Moeller S, Weingartner S, Ugurbil K. Scan-specific robust artificial-neural networks for k-space interpolation (RAKI) reconstruction: Database-free deep learning for fast imaging. Magn Reson Med 2019;81(1):439-453. cited by applicant
      Akcakaya M, Nam S, Hu P, Moghari MH, Ngo LH, Tarokh V, Manning WJ, Nezafat R. Compressed sensing with wavelet domain dependencies for coronary MRI: a retrospective study. IEEE Trans Med Imaging 2011;30(5):1090-1099. cited by applicant
      Dar SUH, Özbey M, Çatli AB, Çukur T. A Transfer-Learning Approach for Accelerated MRI Using Deep Neural Networks. Magn Reson Med 2020. cited by applicant
      Han, Y. et al. “k-space deep learning for accelerated MRI,” arXiv preprint arXiv:1805.03779 (2019). cited by applicant
      Han, Y. et al. “Deep learning with domain adaptation for accelerated projection-reconstruction MR,” arXiv preprint arXiv:1703.01135 (2018). cited by applicant
      Hosseini SAH, et al. “Accelerated coronary MRI using 3D SPIRiT-RAKI with sparsity regularization,” in Proc. IEEE ISBI, 2019, pp. 1692-1695. cited by applicant
      Hosseini SAH, et al. “Accelerated coronary MRI with sRAKI: a database-free self-consistent neural network k-space reconstruction for arbitrary undersampling,” PLoS ONE, 2020, 15(2):e0229418. cited by applicant
      Hosseini SAH, et al. “sRAKI-RNN: accelerated MRI with scan-specific recurrent neural networks using densely connected blocks,” in SPIE Wavelets and Sparsity XVIII, 2019, p. 111381B. cited by applicant
      Hosseini SAH, et al. Dense Recurrent Neural Networks for Accelerated MRI: History-Cognizant Unrolling of Optimization Algorithms. arXiv preprint arXiv:1912.07197 (2020). cited by applicant
      Kim, T. et al. “LORAKI: Autocalibrated Recurrent Neural Networks for Autoregressive MRI Reconstruction in k-Space,” arXiv preprint arXiv:1904.09390 (2019). cited by applicant
      Knoll, F., et al. “Advancing machine learning for MR image reconstruction with an open competition: Overview of the 2019 fastMRI challenge,” arXiv preprint arXiv:2001.02518 (2020). cited by applicant
      Knoll, F., et al. “Deep-learning methods for parallel magnetic resonance imaging reconstruction: a survey of the current approaches, trends, and issues,” IEEE Signal Processing Magazine, vol. 37, No. 1, pp. 128-140, 2020. cited by applicant
      Kwon, K. et al. “A parallel MR imaging method using multilayer perceptron,” Medical Physics, vol. 44, No. 12, pp. 6209-6224, 2017. cited by applicant
      Lee, D. et al. “Deep residual learning for accelerated MRI using magnitude and phase networks,” arXiv preprint arXiv:1804.00432 (2018). cited by applicant
      Lei K, Mardani M, Pauly JM, Vasawanala SS. Wasserstein GANs for MR Imaging: from Paired to Unpaired Training. arXiv preprint arXiv:1910.07048 (2020). cited by applicant
      Liang D, Cheng J, Ke Z, Ying L. Deep Magnetic Resonance Image Reconstruction: Inverse Problems Meet Neural Networks. IEEE Signal Processing Magazine, 2020, 37(1):141-151. cited by applicant
      Schlemper, J. et al. “A deep cascade of convolutional neural networks for dynamic MR image reconstruction,” IEEE Trans Med Imaging, vol. 37, No. 2, pp. 491-503, 2018. cited by applicant
      Sim B, Oh G, Lim S, Ye JC. Optimal Transport, CycleGAN, and Penalized LS for Unsupervised Learning in Inverse Problems. arXiv preprint arXiv:1909.12116 (2019). cited by applicant
      Wang, S. et al. “Accelerating magnetic resonance imaging via deep learning,” in Proc IEEE ISBI, 2016, pp. 514-517. cited by applicant
      Yaman B, Hosseini SAH, Akcakaya M. Noise2Inpaint: Learning Referenceless Denoising by Inpainting Unrolling. arXiv preprint arXiv:2006.09450 (2020). cited by applicant
      Yaman B, Hosseini SAH, Moeller S, Ellermann J, Ugurbil K, Akcakaya M. Self-Supervised Learning of Physics-Guided Reconstruction Neural Networks Without Fully Sampled Reference Data. Magn Reson Med, 2020, 84(6):3172-3191. cited by applicant
      Yaman B, Hosseini SAH, Moeller S, Ellermann J, Ugurbil K, Akcakaya M. Self-Supervised Physics-Based Deep Learning MRI Reconstruction Without Fully-Sampled Data. arXiv preprint arXiv:1910.09116 (2019). cited by applicant
    • Primary Examiner:
      Yang, Qian
    • Attorney, Agent or Firm:
      QUARLES & BRADY LLP
    • Identifier:
      edspgr.12106401
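
The training procedure of claim 1 (divide acquired k-space into a data-consistency subset and a held-out loss subset, then compare the network output, mapped through the forward model, against the held-out samples) can be sketched in a few lines. This is a minimal NumPy illustration, not the patented implementation: the function names and the `rho` split fraction are illustrative, a plain 2-D FFT stands in for the MRI forward model (a multi-coil model would also include coil sensitivities and the sampling operator), and the "network output" is replaced by a known test image to exercise the loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def split_kspace(kspace, mask, rho=0.4, rng=rng):
    """Divide the acquired (sub-sampled) k-space locations into two
    disjoint subsets: Theta (returned first) would enforce data
    consistency inside the network, while Lambda (returned second) is
    held out to evaluate the training loss. `rho` is the fraction of
    sampled points assigned to Lambda."""
    sampled = np.flatnonzero(mask)
    lam = rng.choice(sampled, size=int(rho * sampled.size), replace=False)
    lam_mask = np.zeros_like(mask)
    lam_mask.flat[lam] = 1
    theta_mask = mask * (1 - lam_mask)
    return kspace * theta_mask, kspace * lam_mask, theta_mask, lam_mask

def self_supervised_loss(output_image, kspace_lambda, lam_mask):
    """Map the network's output image back to k-space with the forward
    model (here a unitary 2-D FFT) and compare it against the held-out
    Lambda samples, i.e. the loss of claims 1 and 18."""
    pred_k = np.fft.fft2(output_image, norm="ortho") * lam_mask
    return np.linalg.norm(pred_k - kspace_lambda) / np.linalg.norm(kspace_lambda)

# Toy "acquisition": a random image, retrospectively ~2x undersampled.
image = rng.standard_normal((32, 32))
full_k = np.fft.fft2(image, norm="ortho")
mask = (rng.random((32, 32)) < 0.5).astype(float)
k_theta, k_lambda, theta_mask, lam_mask = split_kspace(full_k * mask, mask)

# The two subsets are disjoint and together cover all acquired samples.
assert np.all(theta_mask * lam_mask == 0)
assert np.all(theta_mask + lam_mask == mask)
```

For the true underlying image, `self_supervised_loss(image, k_lambda, lam_mask)` is zero, since its k-space agrees with the held-out samples exactly; during training, minimizing this quantity over network parameters drives the reconstruction toward consistency with data the network never saw at its input, which is what removes the need for fully sampled references.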