An artificial intelligence research corporation has tasked a cybersecurity analyst with preventing malicious or corrupted data from entering its proprietary ML model through a data poisoning attack.
Which of the following actions should the analyst take? (Select three.)
Answer
Input validation
Appropriate encryption methods
Data validation
Anomaly detection
Data diversity
Avoidance of hardcoded credentials
Two-factor authentication
Answered by GPT-5 mini (AI)
Answers: Input validation; Data validation; Anomaly detection.
Why:
- Input validation: prevents malformed or unexpected inputs from entering the training pipeline (reduces attack surface for crafted/poisoned records).
- Data validation: verifies the provenance, format, labels, and quality of training data so corrupted or malicious records are flagged or removed before training.
- Anomaly detection: identifies outliers or unusual patterns in incoming data that may indicate poisoning attempts.
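As an illustrative sketch only (not part of the original answer), the three controls could combine in a training-data ingestion pipeline like the one below. All field names, thresholds, and helper functions here are hypothetical; real pipelines would use richer schema checks and detectors.

```python
import statistics

def validate_input(record):
    """Input validation: reject malformed or unexpected records
    before they reach the training pipeline."""
    return (
        isinstance(record, dict)
        and isinstance(record.get("text"), str)
        and 0 < len(record["text"]) <= 10_000
        and record.get("label") in {"positive", "negative"}
    )

def validate_data(record, trusted_sources):
    """Data validation: check provenance so records from
    unknown or untrusted sources are dropped."""
    return record.get("source") in trusted_sources

def detect_anomalies(records, z_threshold=3.0):
    """Anomaly detection: flag records whose text length is a
    statistical outlier (a crude stand-in for richer detectors)."""
    if len(records) < 2:
        return []
    lengths = [len(r["text"]) for r in records]
    mean = statistics.mean(lengths)
    stdev = statistics.stdev(lengths)
    if not stdev:
        return []
    return [r for r, n in zip(records, lengths)
            if abs(n - mean) / stdev > z_threshold]

def ingest(raw_records, trusted_sources):
    """Apply all three controls; return records cleared for training."""
    clean = [r for r in raw_records
             if validate_input(r) and validate_data(r, trusted_sources)]
    anomalous = detect_anomalies(clean)
    return [r for r in clean if r not in anomalous]
```

Encryption and authentication would sit around this pipeline (protecting the data and restricting who can write to it) but, as noted above, they do not themselves filter poisoned records, which is why the screening steps belong in the ingestion path.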
Why the others are less directly relevant:
- Appropriate encryption, two‑factor authentication, and avoiding hardcoded credentials are important for overall security (protecting data in transit, access control, and credentials) but do not by themselves detect or filter poisoned training examples.
- Data diversity can improve model robustness to some attacks but is not a direct prevention/control for injected poisoned data.