Security, Assessment and Monitoring
Checklist / Questions to ask your Development team
How will you measure the correctness of the model's predictions?
How will you measure the security of the AI application?
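Measuring correctness usually means comparing predictions against labelled ground truth and tracking standard metrics over time. A minimal sketch with synthetic labels (the data here is made up for illustration):

```python
import numpy as np

# Compare predictions against ground-truth labels and report
# standard classification metrics. Labels below are synthetic.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 0])

tp = np.sum((y_pred == 1) & (y_true == 1))  # true positives
fp = np.sum((y_pred == 1) & (y_true == 0))  # false positives
fn = np.sum((y_pred == 0) & (y_true == 1))  # false negatives

accuracy = np.mean(y_pred == y_true)
precision = tp / (tp + fp)
recall = tp / (tp + fn)

print(f"accuracy={accuracy:.2f} precision={precision:.2f} recall={recall:.2f}")
```

In production, the same metrics would be computed continuously on monitored traffic, not just once at evaluation time, so drift in correctness is caught early.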
External Resources - Tools to use
LF AI: the Adversarial Robustness Toolbox (ART) provides tools that enable developers and researchers to evaluate, defend, and verify machine learning models and applications against adversarial threats.
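The core evaluation ART automates can be illustrated with a self-contained FGSM-style robustness check in plain NumPy: train a toy classifier, perturb each input a small step in the direction that increases its loss, and compare clean versus adversarial accuracy. Everything here (the data, the logistic-regression model, the epsilon value) is a synthetic sketch, not ART's API:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary dataset: two well-separated Gaussian clusters.
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train logistic regression by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * np.mean(p - y)

def accuracy(X_in):
    return np.mean((sigmoid(X_in @ w + b) > 0.5) == y)

# FGSM: for logistic regression, d(loss)/dx = (p - y) * w,
# so step each input eps in the sign of that gradient.
eps = 2.0
p = sigmoid(X @ w + b)
grad_x = (p - y)[:, None] * w[None, :]
X_adv = X + eps * np.sign(grad_x)

clean_acc = accuracy(X)
adv_acc = accuracy(X_adv)
print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")
```

A sharp drop from clean to adversarial accuracy is the signal this evaluation surfaces; ART provides the same kind of attack (and many stronger ones) for real models in frameworks such as TensorFlow, PyTorch, and scikit-learn.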
Case Studies
Further Reading