Human Decision on Targeted and Non-Targeted Adversarial Samples
- Additional Information:
- Publisher Information:
eScholarship, University of California 2018-01-01
- Abstract:
In a world that relies increasingly on large amounts of data and on powerful Machine Learning (ML) models, the veracity of decisions made by these systems is essential. Adversarial samples are inputs that have been perturbed to mislead the interpretation of the ML model and are a dangerous vulnerability. Our research takes a first step into what can be an important innovation in cognitive science: we analyzed humans' judgments and decisions when confronted with targeted adversarial samples (inputs constructed to make an ML model purposely misclassify an input as something else) and non-targeted adversarial samples (noisy perturbed inputs that try to trick the ML model). Our findings suggest that although ML models that produce non-targeted adversarial samples can be more efficient than those producing targeted samples, non-targeted samples result in more incorrect human classifications than targeted samples do. In other words, non-targeted samples interfered more with human perception and categorization decisions than targeted samples.
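The targeted/non-targeted distinction in the abstract can be sketched with a standard gradient-sign perturbation on a toy linear softmax classifier. This is a minimal illustration, not the method of the paper: the model, weights, and epsilon below are assumptions chosen for clarity. A non-targeted perturbation steps in the direction that increases the loss on the true label; a targeted one steps in the direction that decreases the loss toward a chosen target label.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def grad_ce_wrt_input(W, x, label):
    """Gradient of cross-entropy loss w.r.t. the input x,
    for a linear softmax model with logits = W @ x."""
    p = softmax(W @ x)
    p[label] -= 1.0          # p - onehot(label)
    return W.T @ p

def fgsm(W, x, label, eps, targeted=False):
    """Gradient-sign perturbation (FGSM-style).
    non-targeted: ascend the loss on the true label;
    targeted:     descend the loss toward the target label."""
    g = grad_ce_wrt_input(W, x, label)
    step = -eps if targeted else eps
    return x + step * np.sign(g)

# Toy setup: 3 classes, 4 input features (hypothetical values).
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))
x = rng.normal(size=4)
true_label = int(np.argmax(W @ x))

x_nontargeted = fgsm(W, x, true_label, eps=0.5)
x_targeted = fgsm(W, x, (true_label + 1) % 3, eps=0.5, targeted=True)
```

Both perturbations move each input coordinate by at most `eps`; the difference studied in the paper is how such perturbations, crafted against an ML model, affect human classification.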
- Subject:
- Availability:
Open access content.
public
- Note:
application/pdf
Proceedings of the Annual Meeting of the Cognitive Science Society vol 40, iss 0
- Other Numbers:
CDLER oai:escholarship.org:ark:/13030/qt0gn9p0ts
qt0gn9p0ts
https://escholarship.org/uc/item/0gn9p0ts
https://escholarship.org/content/qt0gn9p0ts/qt0gn9p0ts.pdf
https://escholarship.org/
1449585275
- Contributing Source:
UC MASS DIGITIZATION
From OAIster®, provided by the OCLC Cooperative.
- Identifier:
edsoai.on1449585275