
Detection of plant diseases with multi-stage, multi-scale deep learning

  • Publication Date:
    March 28, 2023
  • Additional Information
    • Patent Number:
      11,615,276
    • Appl. No:
      17/567,635
    • Application Filed:
      January 03, 2022
    • Abstract:
      In some embodiments, a computer-implemented method is disclosed. The method comprises receiving a plant image from a user device and applying a first digital model to first regions within the image for classifying each of the first regions into a class of a first set of classes corresponding to a first plurality of plant diseases, a healthy condition, or a combination of a second plurality of plant diseases. The method also includes applying a second digital model to one or more second regions within the image for classifying each of the one or more second regions into a class of a second set of classes corresponding to the second plurality of plant diseases. The method then includes transmitting, to the user device, classification data related to the classes of the first set and the second set into which the regions are classified.
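The two-stage flow in the abstract can be sketched in Python as follows. The model callables, the region representation, the region builder, and the class name `combined_large_symptom` are all hypothetical stand-ins; the patent does not prescribe an implementation:

```python
from typing import Callable, List

# First-stage classes (per the abstract): small-symptom diseases, "healthy",
# and one combined class covering the large-symptom diseases that cannot be
# told apart at the first scale.
FIRST_CLASSES = ["common_rust", "eyespot", "healthy", "combined_large_symptom"]
# Second-stage classes: the large-symptom diseases, resolved at a coarser scale.
SECOND_CLASSES = ["goss_wilt", "northern_leaf_blight", "gray_leaf_spot_late"]

def classify_image(first_regions: List, second_region_builder: Callable,
                   first_model: Callable, second_model: Callable) -> dict:
    """Run the two-stage classification and return data for the user device."""
    # Stage 1: classify every first region.
    first_labels = [first_model(r) for r in first_regions]
    # Only regions flagged with the combined class go to stage 2, merged into
    # larger second regions by the (hypothetical) builder.
    flagged = [r for r, c in zip(first_regions, first_labels)
               if c == "combined_large_symptom"]
    second_labels = []
    if flagged:
        for second_region in second_region_builder(flagged):
            second_labels.append(second_model(second_region))
    return {"first": first_labels, "second": second_labels}
```

The key design point is that the second model only ever sees regions the first model could not resolve, which is what makes the pipeline multi-stage rather than a single flat classifier.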
    • Inventors:
      CLIMATE LLC (San Francisco, CA, US)
    • Assignees:
      CLIMATE LLC (Saint Louis, MO, US)
    • Claim:
      1. A computer-implemented method of recognizing plant diseases having multi-sized symptoms from a plant image, comprising: receiving a new image from a user device; applying, by a processor, a first digital model to a plurality of first regions within the new image for classifying each of the plurality of first regions within the new image into a class of a first set of classes corresponding to a first plurality of plant diseases, a healthy condition, or a combination of a second plurality of plant diseases; applying, by the processor, a second digital model to one or more second regions within the new image for classifying each of the one or more second regions within the new image into a class of a second set of classes corresponding to the second plurality of plant diseases, the one or more second regions each corresponding to a combination of multiple first regions of the plurality of first regions, the multiple first regions each further being classified into the class corresponding to the combination of the second plurality of plant diseases; and transmitting classification data related to the class of the first set of classes and the class of the second set of classes for each of the plurality of first regions and each of the one or more second regions to the user device.
    • Claim:
      2. The computer-implemented method of claim 1, wherein the first digital model or the second digital model is a convolutional neural network (CNN) or a decision tree; and wherein the first plurality of plant diseases includes Common Rust, Eyespot, Southern Rust, or Gray Leaf Spot at an early stage, and the second plurality of plant diseases includes Goss's Wilt, Northern Leaf Blight, or Gray Leaf Spot at a late stage.
    • Claim:
      3. The computer-implemented method of claim 1, further comprising: obtaining, by the processor, a first training set from at least a first photo showing a first symptom of one of the first plurality of plant diseases, a second photo showing no symptom, and a third photo showing a partial second symptom of one of the second plurality of plant diseases, the first training set including a label of the class of the first set of classes for each of a first set of areas in the first photo, the second photo, or the third photo, the first, second, and third photos corresponding to similarly-sized fields of view; building, by the processor, the first digital model from the first training set; obtaining a second training set from at least a fourth photo showing the second symptom, the second training set including a label of the class of the second set of classes for each of a second set of areas in the fourth photo; and building the second digital model from the second training set.
    • Claim:
      4. The computer-implemented method of claim 3, wherein obtaining the first training set comprises: identifying a size of a sliding window; determining a first scaling factor; determining a first image size based on the size of the sliding window and the first scaling factor; and resizing the first photo, the second photo, or the third photo according to the first image size to obtain a first resized photo, a second resized photo, or a third resized photo.
    • Claim:
      5. The computer-implemented method of claim 4, wherein obtaining the second training set comprises: determining a second scaling factor smaller than the first scaling factor; determining a second image size based on the size of the sliding window and the second scaling factor; and resizing the fourth photo according to the second image size to obtain a fourth resized photo.
    • Claim:
      6. The computer-implemented method of claim 5, further comprising determining a first stride and a second stride smaller than the first stride; wherein obtaining the first training set further comprises extracting a first set of areas from the first resized photo, the second resized photo, or the third resized photo using the sliding window with the first stride; and wherein obtaining the second training set further comprises extracting a second set of areas from the fourth resized photo using the sliding window with the second stride.
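Claims 4 through 6 describe deriving a target image size from the sliding-window size and a scaling factor, resizing the photo, and extracting areas at a given stride. A minimal sketch, assuming the image size is the window size divided by the scaling factor (the claims do not fix the exact relation) and using a nearest-neighbor index-sampling resize as a stand-in for a real resizer; photos are plain 2-D lists of pixel values:

```python
def extract_areas(photo, window, scale, stride):
    """Resize a photo per a window size and scaling factor, then extract
    square areas with a sliding window at the given stride.

    A smaller scaling factor (the second stage) yields a coarser effective
    scale; a smaller stride yields overlapping areas, as in claim 7.
    """
    size = int(round(window / scale))
    h, w = len(photo), len(photo[0])
    # Nearest-neighbor resize by index sampling (stand-in for a real resizer).
    resized = [[photo[y * h // size][x * w // size] for x in range(size)]
               for y in range(size)]
    areas = []
    for y in range(0, size - window + 1, stride):
        for x in range(0, size - window + 1, stride):
            areas.append([row[x:x + window] for row in resized[y:y + window]])
    return areas
```

With a stride equal to the window size the extracted areas tile the photo; with a smaller stride they overlap, which is how the second stage can gather more context per disease symptom.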
    • Claim:
      7. The computer-implemented method of claim 6, wherein ones of the second set of areas overlap with other ones of the second set of areas.
    • Claim:
      8. The computer-implemented method of claim 6, wherein applying the first digital model comprises: resizing the new image according to the first image size to obtain a first updated image; and extracting the plurality of first regions from the first updated image using the sliding window with the first stride.
    • Claim:
      9. The computer-implemented method of claim 8, wherein applying the second digital model comprises: masking each of the plurality of first regions in the new image that is classified into a class corresponding to one of the first plurality of plant diseases or a healthy condition to obtain a masked image; resizing the masked image according to the second image size to obtain a second updated image; and extracting the one or more second regions from the second updated image using the sliding window with the second stride.
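The masking step of claim 9 can be sketched as follows. The combined-class label name and the list-of-lists image representation are hypothetical, and a real pipeline would also resize the masked image to the second image size before extraction (elided here):

```python
def mask_and_extract(image, first_boxes, first_labels, window, stride):
    """Zero out first regions already resolved by the first model, then slide
    a window over the masked image to form second regions.

    Only pixels still labeled with the combined class survive the mask, so
    the second model is applied only where the first model deferred.
    """
    masked = [row[:] for row in image]
    for (y, x, h, w), label in zip(first_boxes, first_labels):
        if label != "combined_large_symptom":
            for yy in range(y, y + h):
                for xx in range(x, x + w):
                    masked[yy][xx] = 0
    second_regions = []
    H, W = len(masked), len(masked[0])
    for y in range(0, H - window + 1, stride):
        for x in range(0, W - window + 1, stride):
            region = [row[x:x + window] for row in masked[y:y + window]]
            if any(v for row in region for v in row):  # skip fully masked
                second_regions.append(region)
    return second_regions
```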
    • Claim:
      10. The computer-implemented method of claim 3, wherein the first training set is further obtained from a specific photo showing a third symptom of one of the first plurality of plant diseases and a fourth symptom of one of the second plurality of plant diseases, the fourth symptom overlapping with the third symptom.
    • Claim:
      11. The computer-implemented method of claim 1, wherein applying the second digital model comprises resizing a portion of a combination of multiple first regions of the plurality of first regions to obtain the one second region.
    • Claim:
      12. The computer-implemented method of claim 1, further comprising: computing a total size of the plurality of first regions and the one or more second regions classified into each of the first set of classes and the second set of classes; and determining a dominant class of the first set of classes and the second set of classes such that the total size of the plurality of first regions and the one or more second regions classified into the dominant class is largest, the classification data including information regarding the dominant class.
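The dominant-class computation of claim 12 reduces to summing region sizes per class and taking the class with the largest total. A minimal sketch, with hypothetical (class, size) pairs mixing first- and second-stage results:

```python
from collections import defaultdict

def dominant_class(regions):
    """Return the class whose regions cover the largest total area.

    `regions` is a list of (class_name, size) pairs; sizes are hypothetical
    pixel counts for each classified region.
    """
    totals = defaultdict(int)
    for cls, size in regions:
        totals[cls] += size
    return max(totals, key=totals.get)
```

Summing areas before comparing means many small regions of one disease can outweigh a single large healthy region, which matches the claim's emphasis on total size rather than region count.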
    • Claim:
      13. One or more non-transitory computer-readable media storing one or more sequences of instructions which when executed cause one or more processors to execute a method of recognizing plant diseases having multi-sized symptoms from a plant image, the method comprising: obtaining a first training set from at least a first photo showing a first symptom of one of a first plurality of plant diseases, a second photo showing no symptom, and a third photo showing a partial second symptom of one of a second plurality of plant diseases, the first training set including a label of a class of a first set of classes for each of a first set of areas in the first photo, the second photo, or the third photo, the first, second, and third photos corresponding to similarly-sized fields of view; building a first digital model from the first training set for classifying an image into a class of the first set of classes corresponding to the first plurality of plant diseases, a healthy condition, or a combination of the second plurality of plant diseases; obtaining a second training set from at least a fourth photo showing the second symptom, the second training set including a label of a class of a second set of classes for each of a second set of areas in the fourth photo; building a second digital model from the second training set for classifying an image into a class of the second set of classes corresponding to the second plurality of plant diseases; receiving a new image from a user device; applying the first digital model to a plurality of first regions within the new image to obtain a plurality of classifications; applying the second digital model to one or more second regions, each corresponding to a combination of multiple first regions of the plurality of first regions within the new image, to obtain one or more classifications, the multiple first regions being classified into the class corresponding to the combination of the second plurality of plant diseases; and transmitting classification data related to the plurality of classifications and the one or more classifications to the user device.
    • Claim:
      14. The one or more non-transitory computer-readable media of claim 13, wherein obtaining the first training set comprises: identifying a size of a sliding window; determining a first scaling factor; determining a first image size based on the size and the first scaling factor; and resizing the first photo, the second photo, or the third photo according to the first image size to obtain a first resized photo, a second resized photo, or a third resized photo.
    • Claim:
      15. The one or more non-transitory computer-readable media of claim 14, wherein obtaining the second training set comprises: determining a second scaling factor smaller than the first scaling factor; determining a second image size based on the size of the sliding window and the second scaling factor; and resizing the fourth photo according to the second image size to obtain a fourth resized photo.
    • Claim:
      16. The one or more non-transitory computer-readable media of claim 15, wherein the instructions, when executed, further cause the one or more processors to perform: determining a first stride and a second stride smaller than the first stride; wherein obtaining the first training set further comprises extracting a first set of areas from the first resized photo, the second resized photo, or the third resized photo using the sliding window with the first stride; and wherein obtaining the second training set further comprises extracting a second set of areas from the fourth resized photo using the sliding window with the second stride.
    • Claim:
      17. The one or more non-transitory computer-readable media of claim 16, wherein obtaining the first training set further comprises assigning the label of the class of the first set of classes to each of the first set of areas; wherein obtaining the second training set further comprises assigning the label of the class of the second set of classes to each of the second set of areas; and wherein ones of the second set of areas overlap with other ones of the second set of areas.
    • Claim:
      18. The one or more non-transitory computer-readable media of claim 16, wherein applying the first digital model comprises: resizing the new image according to the first image size to obtain a first updated image; and extracting the plurality of first regions from the first updated image using the sliding window with the first stride.
    • Claim:
      19. The one or more non-transitory computer-readable media of claim 18, wherein applying the second digital model comprises: masking each of the plurality of first regions in the new image that is classified into a class corresponding to one of the first plurality of plant diseases or a healthy condition to obtain a masked image; resizing the masked image according to the second image size to obtain a second updated image; and extracting the one or more second regions from the second updated image using the sliding window with the second stride.
    • Claim:
      20. The one or more non-transitory computer-readable media of claim 13, wherein applying the second digital model comprises resizing a portion of a combination of multiple first regions of the first plurality of regions to obtain the one second region.
    • Patent References Cited:
      10255670 April 2019 Wu
      10423850 September 2019 Chen
      10438302 October 2019 Bedoya
      10713542 July 2020 Gui
      11216702 January 2022 Gui et al.
      20130136312 May 2013 Tseng
      20180197048 July 2018 Micks
      20180330166 November 2018 Redden
      20190066234 February 2019 Bedoya
      20190108413 April 2019 Chen
      20190114481 April 2019 DeChant
      20200120267 April 2020 Stelmar Netto
      20200124581 April 2020 Gui
      20200134392 April 2020 Gui
      20200342273 October 2020 Gui et al.
      20210035689 February 2021 Liu
      3078119 October 2018
    • Other References:
      Ren et al., “Faster R-CNN: Towards Real-time Object Detection With Region Proposal Networks”, CoRR, abs/1506.01497, dated Jan. 6, 2016, 14 pages. cited by applicant
      Dai et al., “R-FCN: Object Detection via Region-based Fully Convolutional Networks”, http://arxiv.org/abs/1605.06409, CoRR, abs/1605.06409, dated Jul. 21, 2016, 11 pages. cited by applicant
      DeChant et al., “Automated Identification of Northern Leaf Blight-infected Maize Plants From Field Imagery Using Deep Learning”, Phytopathology, dated 2017, 7 pages. cited by applicant
      Garcia et al., “Digital Image Processing Techniques for Detecting, Quantifying and Classifying Plant Diseases”, dated Dec. 7, 2013, 12 pages. cited by applicant
      Gui et al., “Project Rakel: Multi-label Classification for Identifying Multiple Corn Foliar Diseases”, The Climate Corporation (TCC), dated Nov. 28, 2017, 15 pages. cited by applicant
      He et al., “Deep Residual Learning for Image Recognition”, in 2016 IEEE Conference on Computer Vision and Pattern Recognition, dated 2016, pp. 770-778. cited by applicant
      Konstantinos, F., “Deep Learning Models for Plant Disease Detection and Diagnosis”, dated Feb. 7, 2018, 9 pages. cited by applicant
      Krizhevsky et al., “ImageNet Classification with Deep Convolutional Neural Networks”, cited in Advances in Neural Information Processing Systems 25 (NIPS 2012), Curran Associates, dated 2012, 9 pages. cited by applicant
      Redmon et al., “You Only Look Once: Unified, Real-time Object Detection”. CoRR, http://arxiv.org/abs/1506.02640, dated Jun. 8, 2015, 10 pages. cited by applicant
      Yao et al., “Learning to Diagnose from Scratch by Exploiting Dependencies Among Labels”, CoRR, https://arxiv.org/abs/1710.10501, dated Feb. 1, 2018, 12 pages. cited by applicant
      Sermanet et al., “Overfeat: Integrated Recognition, Localization and Detection Using Convolutional Networks”, CoRR, http://arxiv.org/abs/1312.6229, dated Feb. 24, 2014. cited by applicant
      Simonyan et al., “Very Deep Convolutional Networks for Large-scale Image Recognition”, https://arxiv.org/abs/1409.1556, dated Sep. 4, 2014, 14 pages. cited by applicant
      Stone et al., “Teaching Compositionality to CNNs”, CoRR, https://arxiv.org/abs/1706.04313, dated Jun. 14, 2017, 10 pages. cited by applicant
      Szegedy et al., “Going Deeper with Convolutions”, CoRR, https://arxiv.org/abs/1409.4842, dated Sep. 17, 2014, 12 pages. cited by applicant
      Wang et al., “CNN-RNN: A Unified Framework for Multi-label Image Classification”, CoRR, https://arxiv.org/abs/1604.04573, dated 2016, 10 pages. cited by applicant
      Wei et al., “CNN: Single-label to Multi-label”, CoRR, https://arxiv.org/abs/1406.5726, dated Jul. 9, 2014. cited by applicant
      Liu et al., “SSD: Single Shot Multibox Detector”, CoRR, https://arxiv.org/abs/1512.02325v1, dated Dec. 8, 2015, 17 pages. cited by applicant
      Russakovsky et al., “ImageNet Large Scale Visual Recognition Challenge”, International Journal of Computer Vision (IJCV), dated Jan. 30, 2015, 43 pages. cited by applicant
      Fuentes, Alvaro F. et al.: “High-Performance Deep Neural Network-Based Tomato Plant Diseases and Pests Diagnosis System With Refinement Filter Bank”, Frontiers in Plant Science, vol. 9, Aug. 29, 2018, pp. 1-15 (15 pages). cited by applicant
      U.S. Appl. No. 16/928,857, filed Jul. 14, 2020, Gui et al. cited by applicant
      U.S. Appl. No. 16/662,017, filed Oct. 23, 2019, Gui et al. cited by applicant
    • Primary Examiner:
      Motsinger, Sean T
    • Attorney, Agent or Firm:
      Harness, Dickey & Pierce, P.L.C.
    • Identifier:
      edspgr.11615276