
MULTIPLIER TUNING POSTPROCESSING FOR MACHINE LEARNING BIAS MITIGATION

  • Publication Date:
    December 5, 2024
  • Additional Information
    • Document Number:
      20240403674
    • Appl. No:
      18/529300
    • Application Filed:
      December 05, 2023
    • Abstract:
      In an embodiment, a computer infers, from an input (e.g., one that represents a person) that contains a value of a sensitive feature that has a plurality of multipliers, a probability of a majority class (i.e. an outcome). Based on the value of the sensitive feature in the input, a multiplier that is specific to both the sensitive feature and that value is selected from the multipliers of the sensitive feature. The input is classified based on the multiplicative product of the probability of the majority class and that multiplier. In an embodiment, a black-box bi-objective optimizer generates multipliers on a Pareto frontier from which a user may interactively select the combination of multipliers that provides the best tradeoff between fairness and accuracy. Illustrative sketches of this postprocessing appear after the record below.
    • Claim:
      1. A method comprising: inferring, from an input that contains a value of a feature that has a plurality of multipliers, a probability of a class; selecting based on the value of the feature in the input, from the plurality of multipliers of the feature, a multiplier that is specific to both of the feature and the value of the feature; and classifying the input based on a multiplicative product of the probability of the class and the multiplier that is specific to both of the feature and the value of the feature; wherein the method is performed by one or more computers.
    • Claim:
      2. The method of claim 1 wherein: said probability of the class is a first probability of the class; said multiplier that is specific to both of the feature and the value of the feature is a first value-specific multiplier; the method further comprises: inferring, from a second input that contains a second value of the feature, a second probability of the class; selecting based on the second value of the feature in the second input, from the plurality of multipliers of the feature, a second multiplier that is specific to both of the feature and the second value of the feature; classifying the second input based on the second multiplier that is specific to both of the feature and the second value of the feature and not based on the first value-specific multiplier.
    • Claim:
      3. The method of claim 2 further comprising: inferring, from a third input that contains the second value of the feature, a third probability of the class; inferring, from a fourth input that contains the second value of the feature, a fourth probability of the class; detecting, not based on the first probability of the class, a minimum probability for the second value of the feature and a maximum probability for the second value of the feature based on: the second probability of the class, the third probability of the class, and the fourth probability of the class.
    • Claim:
      4. The method of claim 3 further comprising generating the second multiplier that is specific to both of the feature and the second value of the feature based on the minimum probability for the second value of the feature and the maximum probability for the second value of the feature.
    • Claim:
      5. The method of claim 4 wherein said generating the second multiplier that is specific to both of the feature and the second value of the feature is based on a first ratio of the minimum probability for the second value of the feature over the maximum probability for the second value of the feature and a second ratio of the maximum probability for the second value of the feature over the minimum probability for the second value of the feature.
    • Claim:
      6. The method of claim 5 wherein the second multiplier that is specific to both of the feature and the second value of the feature is generated in a range from the first ratio to the second ratio.
    • Claim:
      7. The method of claim 1 further comprising generating, by a bi-objective optimizer, the plurality of multipliers of the feature.
    • Claim:
      8. The method of claim 7 further comprising receiving, by the bi-objective optimizer, two validation scores that are based on the plurality of multipliers of the feature.
    • Claim:
      9. The method of claim 8 wherein the two validation scores that are based on the plurality of multipliers of the feature are a fitness score and a fairness score.
    • Claim:
      10. The method of claim 1 wherein: said inferring is performed by a classifier that was trained; the method further comprises without retraining the classifier: adjusting the multiplier that is specific to both of the feature and the value of the feature; reclassifying the input.
    • Claim:
      11. The method of claim 10 wherein said adjusting and said reclassifying do not use the classifier.
    • Claim:
      12. The method of claim 1 further comprising: generating multiple pluralities of multipliers of the feature; detecting a subset of the multiple pluralities of multipliers of the feature that are on a bi-objective Pareto frontier.
    • Claim:
      13. The method of claim 1 wherein: the method further comprises: inferring, from the input that contains the value of the feature, a second probability of a second class and a third probability of a third class; rescaling, based on said multiplicative product of the probability of the class and the multiplier that is specific to both of the feature and the value of the feature, the second probability of the second class and the third probability of the third class; said classifying the input is not based on said rescaling the second probability of the second class and the third probability of the third class.
    • Claim:
      14. The method of claim 1 wherein: the multiplier that is specific to both of the feature and the value of the feature is less than one; said probability of the class is a probability of a first class; the method further comprises from the input that contains the value of the feature, inferring a second probability of a second class that is less than the probability of the first class and a third probability of a third class; said classifying the input comprises classifying the input as the second class.
    • Claim:
      15. A method comprising: generating, by a bi-objective optimizer, a first multiplier for a first class of a plurality of mutually exclusive classes and a second multiplier for a second class of the plurality of mutually exclusive classes; generating, from an input, an inference that contains a probability of the first class and a probability of the second class; and classifying the input based on: a first multiplicative product of the probability of the first class and the first multiplier, and a second multiplicative product of the probability of the second class and the second multiplier; wherein the method is performed by one or more computers.
    • Claim:
      16. The method of claim 15 wherein: said classifying the input uses a plurality of multipliers that contains the first multiplier and the second multiplier; the inference contains a plurality of probabilities that contains the probability of the first class and the probability of the second class; a count of the plurality of probabilities equals a count of the plurality of mutually exclusive classes; a count of the plurality of multipliers is less than the count of the plurality of probabilities.
    • Claim:
      17. One or more non-transitory computer-readable media storing instructions that, when executed by one or more processors, cause: inferring, from an input that contains a value of a feature that has a plurality of multipliers, a probability of a class; selecting based on the value of the feature in the input, from the plurality of multipliers of the feature, a multiplier that is specific to both of the feature and the value of the feature; and classifying the input based on a multiplicative product of the probability of the class and the multiplier that is specific to both of the feature and the value of the feature.
    • Claim:
      18. The one or more non-transitory computer-readable media of claim 17 wherein: said probability of the class is a first probability of the class; said multiplier that is specific to both of the feature and the value of the feature is a first value-specific multiplier; the instructions further cause: inferring, from a second input that contains a second value of the feature, a second probability of the class; selecting based on the second value of the feature in the second input, from the plurality of multipliers of the feature, a second multiplier that is specific to both of the feature and the second value of the feature; classifying the second input based on the second multiplier that is specific to both of the feature and the second value of the feature and not based on the first value-specific multiplier.
    • Claim:
      19. The one or more non-transitory computer-readable media of claim 17 wherein the instructions further cause generating, by a bi-objective optimizer, the plurality of multipliers of the feature.
    • Claim:
      20. The one or more non-transitory computer-readable media of claim 17 wherein: said inferring is performed by a classifier that was trained; the instructions further cause without retraining the classifier: adjusting the multiplier that is specific to both of the feature and the value of the feature; reclassifying the input.
    • Claim:
      21. The one or more non-transitory computer-readable media of claim 17 wherein the instructions further cause: generating multiple pluralities of multipliers of the feature; detecting a subset of the multiple pluralities of multipliers of the feature that are on a bi-objective Pareto frontier.
    • Claim:
      22. The one or more non-transitory computer-readable media of claim 17 wherein: the multiplier that is specific to both of the feature and the value of the feature is less than one; said probability of the class is a probability of a first class; the instructions further cause from the input that contains the value of the feature, inferring a second probability of a second class that is less than the probability of the first class and a third probability of a third class; said classifying the input comprises classifying the input as the second class.
    • Claim:
      23. One or more non-transitory computer-readable media storing instructions that, when executed by one or more processors, cause: generating, by a bi-objective optimizer, a first multiplier for a first class of a plurality of mutually exclusive classes and a second multiplier for a second class of the plurality of mutually exclusive classes; generating, from an input, an inference that contains a probability of the first class and a probability of the second class; and classifying the input based on: a first multiplicative product of the probability of the first class and the first multiplier, and a second multiplicative product of the probability of the second class and the second multiplier.
    • Current International Class:
      06; 06
    • Identifier:
      edspap.20240403674
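
The abstract and claims 1, 13, 14, 17, and 22 describe postprocessing a trained classifier's output with a multiplier selected by the input's sensitive-feature value. Below is a minimal Python sketch of that flow under stated assumptions; the function name `postprocess_classify`, the group labels, and the example numbers are illustrative, not taken from the patent.

```python
import numpy as np

def postprocess_classify(probs, feature_value, multipliers, majority_class=0):
    """Rescale the majority-class probability by the multiplier selected for
    this input's sensitive-feature value, then classify.

    probs          : 1-D array of class probabilities from the trained classifier
    feature_value  : value of the sensitive feature in this input
    multipliers    : dict mapping each feature value to its multiplier (hypothetical layout)
    majority_class : index of the class whose probability is rescaled
    """
    m = multipliers[feature_value]                       # multiplier specific to (feature, value)
    scaled = np.asarray(probs, dtype=float).copy()
    scaled[majority_class] = scaled[majority_class] * m  # multiplicative product of claim 1
    # Optionally rescale the remaining classes so the probabilities sum to 1 again;
    # the predicted label depends only on the argmax, so this rescaling does not change it.
    scaled /= scaled.sum()
    return int(np.argmax(scaled))

# A multiplier below 1 can move the decision away from the majority class.
probs = np.array([0.55, 0.30, 0.15])             # classifier output for one input
multipliers = {"group_a": 1.0, "group_b": 0.5}   # value-specific multipliers (illustrative)
print(postprocess_classify(probs, "group_b", multipliers))   # -> 1 (second class)
```

Because the selected multiplier is less than one, the majority class loses the argmax and the input is classified as the second class, as in claims 14 and 22; per claim 13, rescaling the remaining probabilities does not affect the final label.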
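
Claims 3 through 6 bound each value-specific multiplier by two ratios of the minimum and maximum probabilities observed for inputs sharing one feature value. A hedged sketch of that range computation; the helper `multiplier_range` and the example probabilities are assumptions:

```python
import numpy as np

def multiplier_range(majority_probs):
    """majority_probs: majority-class probabilities inferred for validation inputs
    that all share one value of the sensitive feature (hypothetical helper)."""
    p_min = float(np.min(majority_probs))   # minimum probability for this feature value
    p_max = float(np.max(majority_probs))   # maximum probability for this feature value
    lo = p_min / p_max                      # first ratio: minimum over maximum (<= 1)
    hi = p_max / p_min                      # second ratio: maximum over minimum (>= 1)
    return lo, hi                           # the value-specific multiplier is generated in [lo, hi]

lo, hi = multiplier_range([0.42, 0.71, 0.66])
print(round(lo, 3), round(hi, 3))           # 0.592 1.69
```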
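
Claims 7-9, 12, and 15 describe a bi-objective optimizer that generates multiplier sets, receives two validation scores (a fitness score and a fairness score), and keeps the sets lying on a bi-objective Pareto frontier for interactive selection. The sketch below uses random search as a stand-in optimizer and placeholder scoring; the patent does not fix either, and all names here are assumptions.

```python
import random

def pareto_front(candidates):
    """candidates: list of (multipliers, fitness, fairness), both scores higher-is-better.
    Returns the candidates that no other candidate dominates on both scores."""
    front = []
    for c in candidates:
        dominated = any(
            o[1] >= c[1] and o[2] >= c[2] and (o[1] > c[1] or o[2] > c[2])
            for o in candidates
        )
        if not dominated:
            front.append(c)
    return front

def validation_scores(multipliers):
    # Placeholder scoring: a real implementation would postprocess a held-out set
    # with the candidate multipliers and measure accuracy (fitness) and group fairness.
    fitness = 1.0 - abs(multipliers["group_b"] - 0.8)    # hypothetical objective
    fairness = 1.0 - abs(multipliers["group_b"] - 0.5)   # hypothetical objective
    return fitness, fairness

candidates = []
for _ in range(50):                                      # random search as a stand-in optimizer
    m = {"group_a": 1.0, "group_b": random.uniform(0.4, 1.2)}
    fit, fair = validation_scores(m)
    candidates.append((m, fit, fair))

for m, fit, fair in pareto_front(candidates):            # multiplier sets on the Pareto frontier
    print(round(m["group_b"], 3), round(fit, 3), round(fair, 3))
```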
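
Claims 15, 16, and 23 describe a per-class variant in which each class of a set of mutually exclusive classes has its own multiplier and the multiplier count may be smaller than the class count. A minimal sketch under those assumptions, with a missing multiplier defaulting to 1.0:

```python
import numpy as np

def classify_with_class_multipliers(probs, class_multipliers):
    """probs: probabilities over mutually exclusive classes (index = class id).
    class_multipliers: dict from class index to multiplier; a missing class defaults
    to 1.0, so there may be fewer multipliers than classes (claim 16)."""
    products = np.array([p * class_multipliers.get(i, 1.0) for i, p in enumerate(probs)])
    return int(np.argmax(products))

probs = [0.50, 0.35, 0.15]                                        # three classes
print(classify_with_class_multipliers(probs, {0: 0.6, 1: 1.1}))   # two multipliers -> class 1
```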