
Distributed real-time computing framework using in-storage processing

  • Publication Date:
    April 25, 2017
  • Additional Information
    • Patent Number:
      9,632,831
    • Appl. No:
      14/663249
    • Application Filed:
      March 19, 2015
    • Abstract:
      According to one general aspect, a scheduler computing device may include a computing task memory configured to store at least one computing task. The computing task may be executed by a data node of a distributed computing system, wherein the distributed computing system includes at least one data node, each data node having a central processor and an intelligent storage medium, wherein the intelligent storage medium comprises a controller processor and a memory. The scheduler computing device may include a processor configured to assign the computing task to be executed by either the central processor of a data node or the intelligent storage medium of the data node, based, at least in part, upon an amount of data associated with the computing task.
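The scheduling rule summarized in the abstract (and stated precisely in claim 1) can be sketched in a few lines. This is a hypothetical illustration, not the patent's implementation; the names `ComputingTask`, `assign`, and the byte-count fields are assumptions introduced here.

```python
# Illustrative sketch of the claim 1 scheduling rule: a task whose output
# exceeds its input runs on the data node's central processor; otherwise it
# runs on the controller processor inside the intelligent storage medium.
# All identifiers are hypothetical, not from the patent.

from dataclasses import dataclass

CPU = "central_processor"
ISM = "intelligent_storage_medium"

@dataclass
class ComputingTask:
    name: str
    input_bytes: int
    output_bytes: int

def assign(task: ComputingTask) -> str:
    """Pick an execution target by comparing output size to input size."""
    if task.output_bytes > task.input_bytes:
        return CPU   # data grows: move computation up to the host CPU
    return ISM       # data shrinks or stays equal: process in storage
```

For example, a filter that reads 1 MB and emits 10 KB would be assigned to the storage medium, keeping the large input from crossing the storage interface.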
    • Inventors:
      Lee, Jaehwan (Fremont, CA, US); Ki, Yang Seok (Palo Alto, CA, US)
    • Assignees:
      SAMSUNG ELECTRONICS CO., LTD. (KR)
    • Claim:
      1. A scheduler computing device comprising: a computing task memory configured to store at least one computing task, wherein the computing task is to be executed by a data node of a distributed computing system, wherein the distributed computing system comprises at least one data node, each data node having a central processor and an intelligent storage medium, wherein the intelligent storage medium comprises a controller processor and a non-volatile memory; and a processor configured to reduce the transmittal of data between elements of the data node by: deciding whether to assign the computing task to be executed by either the central processor of the data node or the controller processor of the intelligent storage medium based, at least in part, upon an amount of output data associated with the computing task compared to an amount of input data associated with the computing task, wherein: when the amount of output data is greater than the amount of input data, assign the computing task to the central processor of the data node; and when the amount of output data is less than or equal to the amount of input data, assign the computing task to the controller processor of the intelligent storage medium.
    • Claim:
      2. The scheduler computing device of claim 1, wherein the processor is configured to: assign a computing task to either the central processor of the data node or the intelligent storage medium of the data node, based, at least in part, upon an amount of input data associated with the computing task relative to an amount of output data associated with the computing task.
    • Claim:
      3. The scheduler computing device of claim 1, wherein the processor is configured to: divide a larger computing task into one or more smaller computing tasks, wherein each of the computing tasks includes a chain of one or more operations, and wherein each smaller computing task is performed by either the central processor of the data node or the intelligent storage medium of the data node; classify each smaller computing task into one of at least two categories, wherein a first category is assigned to the central processor of the data node, and a second category is assigned to the intelligent storage medium of the data node; and based upon the category associated with a smaller computing task, assign each respective smaller computing task to either the central processor of the data node or the intelligent storage medium of the data node.
    • Claim:
      4. The scheduler computing device of claim 1, wherein the processor is configured to: classify the computing task into one of at least three categories, wherein a first category is to be assigned to the central processor of the data node, a second category is to be assigned to the intelligent storage medium of the data node, and a third category can be assigned to either the central processor or the intelligent storage medium of the data node; and if a current computing task is associated with the third category, assign the current computing task to either the central processor of the data node or the intelligent storage medium of the data node, based upon a category associated with either a prior computing task or a next computing task.
    • Claim:
      5. The scheduler computing device of claim 1, wherein each data node further comprises a main memory; and wherein the scheduler computing device is configured to dictate that an output data of the computing task be stored in either the main memory of the data node or the memory of the intelligent storage medium.
    • Claim:
      6. The scheduler computing device of claim 1, wherein the distributed computing system further includes a second plurality of data nodes, each data node in the second plurality of data nodes comprising a central processor and a simple storage medium, wherein the simple storage medium includes a memory; and wherein the processor of the scheduler computing device is configured to: assign a computing task to a data node of the plurality of data nodes that includes intelligent storage mediums, or a data node of the plurality of data nodes that includes simple storage mediums, based, at least in part, upon which data node stores a piece of data associated with the computing task, and if the computing task is assigned to a data node of the plurality of data nodes that includes simple storage mediums, assigning an entire computing task to the central processor of the data node.
    • Claim:
      7. A method comprising: receiving a computing task, wherein the computing task includes a plurality of operations; allocating the computing task to a data node, wherein the data node includes a central processor and an intelligent storage medium, and wherein the intelligent storage medium includes a controller processor and a non-volatile memory; dividing the computing task into at least a first chain of operations and a second chain of operations, wherein dividing the computing task includes determining for each chain of operations an amount of output data associated with a respective chain of operations and an amount of input data associated with the respective chain of operations; if, for a respective chain of operations, the amount of output data is less than the amount of input data, assigning the respective chain of operations to the intelligent storage medium of the data node; and if, for a respective chain of operations, the amount of output data is greater than the amount of input data, assigning the respective chain of operations to the central processor of the data node.
    • Claim:
      8. The method of claim 7, wherein dividing includes: categorizing each operation into at least a first category or a second category; wherein an operation associated with the first category generates an amount of output data that is less than an amount of input data; and wherein an operation associated with the second category generates an amount of output data that is greater than an amount of input data.
    • Claim:
      9. The method of claim 8, wherein dividing includes: determining an operation at which the computing task transitions from operations of one category to operations of another category; and dividing the computing task into different chains of operations at that operation.
    • Claim:
      10. The method of claim 7, wherein dividing includes: classifying each operation into one of at least three categories, wherein a first category is associated with the central processor of the data node, a second category is associated with the intelligent storage medium of the data node, and a third category is associated with both the central processor and the intelligent storage medium of the data node; and if a current operation is associated with the third category, assigning the current operation to either the central processor of a data node or the intelligent storage medium of the data node, based upon a category associated with either a prior operation or a next operation.
    • Claim:
      11. The method of claim 7, further comprising: assigning an output location for an output data generated by the first chain of operations.
    • Claim:
      12. The method of claim 11, wherein assigning an output location includes: if a next operation is assigned to the central processor of the data node, dictating that an output data of the first chain of operations be stored in a memory of the data node; and if the next operation is assigned to the intelligent storage medium, dictating that an output data of the first chain of operations be stored in the intelligent storage medium.
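The chain-splitting step of claims 7 through 9 can be illustrated with a short sketch: each operation is categorized as data-reducing or data-expanding by comparing its output size to its input size, and the task is cut into chains wherever the category changes. The function and tuple layout below are assumptions introduced for illustration, not the patent's method.

```python
# Hypothetical sketch of claims 7-9: operations are "reducing" when output
# is smaller than input, otherwise "expanding"; the task is divided into
# chains at each category transition. All names are illustrative.

def category(op):
    # op is a (name, input_bytes, output_bytes) tuple -- an assumed layout
    _, in_bytes, out_bytes = op
    return "reducing" if out_bytes < in_bytes else "expanding"

def split_into_chains(ops):
    """Split a list of operations into chains at category transitions."""
    chains = []
    current = [ops[0]]
    for op in ops[1:]:
        if category(op) != category(current[-1]):
            chains.append(current)   # category changed: close this chain
            current = [op]
        else:
            current.append(op)
    chains.append(current)
    return chains
```

A scan followed by a filter (both reducing) would form one chain suited to in-storage execution, while a subsequent expanding join would start a new chain for the central processor.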
    • Claim:
      13. A data node comprising: a central processor configured to execute at least one of a first set of operations upon data stored by an intelligent storage medium; the intelligent storage medium comprising: a memory configured to store data in a semi-permanent manner, and a controller processor configured to execute at least one of a second set of operations upon data stored by the intelligent storage medium; and a network interface configured to receive a plurality of operations from a scheduling computing device; and wherein the data node is configured to: divide the computing task into at least the first set of operations and the second set of operations based, at least in part, upon an amount of output data associated with a respective set of operations compared to an amount of input data associated with the respective set of operations, if, for a respective set of operations, the amount of output data is less than the amount of input data, assign the respective set of operations to the central processor for execution, and if, for a respective set of operations, the amount of output data is greater than the amount of input data, assign the respective set of operations to the intelligent storage medium for execution.
    • Claim:
      14. The data node of claim 13, wherein the data node is configured to assign an operation to either the first set of operations or the second set of operations, based, at least in part, upon an amount of input data associated with the operation relative to an amount of output data associated with the operation.
    • Claim:
      15. The data node of claim 13, wherein all of the plurality of operations are included in the second set of operations, and the first set of operations is empty; and wherein the intelligent storage medium is configured to execute all of the plurality of operations.
    • Claim:
      16. The data node of claim 13, wherein the data node comprises a main memory configured to store data in a temporary manner; and wherein an output data associated with the operation is stored in either the main memory of the data node or the memory of the intelligent storage medium.
    • Claim:
      17. The data node of claim 16, wherein the data node is configured to: if a next operation is assigned to the central processor, store the output data in the main memory of the data node; and if a next operation is assigned to the intelligent storage medium, store the output data in the memory of the intelligent storage medium.
    • Claim:
      18. The data node of claim 16, wherein the data node is configured to associate an output assignment with the second set of operations; and wherein the data node is configured to, in response to the output assignment, store the output data in the main memory of the data node or the memory of the intelligent storage medium.
    • Claim:
      19. The data node of claim 13, wherein the central processor is capable of executing the second set of operations.
    • Patent References Cited:
      7657706 February 2010 Iyer
      8819335 August 2014 Salessi
      2013/0191555 July 2013 Liu
    • Primary Examiner:
      Zhao, Bing
    • Attorney, Agent or Firm:
      Renaissance IP Law Group LLP
    • Identifier:
      edspgr.09632831
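The output-placement rule of claims 12 and 17 also lends itself to a brief sketch: each chain's output is staged wherever the next chain will execute, so intermediate data is not shuttled between host DRAM and the storage device. The function names, the `"cpu"`/`"ism"` labels, and the default placement for the final chain are assumptions made for illustration.

```python
# Hypothetical sketch of the output-placement rule in claims 12 and 17:
# stage a chain's output in the memory attached to the processor that
# will run the next chain. Identifiers here are illustrative only.

MAIN_MEMORY = "main_memory"   # host DRAM of the data node
ISM_MEMORY = "ism_memory"     # memory inside the intelligent storage medium

def output_location(next_target: str) -> str:
    """Place output next to the processor assigned to the next operation."""
    return MAIN_MEMORY if next_target == "cpu" else ISM_MEMORY

def plan_outputs(targets):
    """For each chain, stage output where the next chain runs.

    The last chain's output is assumed (for this sketch) to land in the
    storage medium, since there is no subsequent consumer on the host.
    """
    placements = []
    for i in range(len(targets)):
        nxt = targets[i + 1] if i + 1 < len(targets) else "ism"
        placements.append(output_location(nxt))
    return placements
```

For a two-chain task whose second chain runs on the CPU, the first chain's output would be placed in main memory, ready for the host processor to consume without an extra device read.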