Security, Assessment and Monitoring

  1. Checklist / Questions to ask your Development team

    • How will you measure correctness of the predictions?

    • How will you measure the security of the AI application?
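
  The first question above can be made concrete with a small sketch: scoring model predictions against a held-out test set. The labels and threshold here are hypothetical stand-ins, not taken from any specific system.

  ```python
  # Minimal sketch of measuring prediction correctness on a held-out set.
  # The label lists below are hypothetical stand-ins for real model output.

  def accuracy(y_true, y_pred):
      """Fraction of predictions that match the ground-truth labels."""
      if len(y_true) != len(y_pred):
          raise ValueError("y_true and y_pred must be the same length")
      correct = sum(t == p for t, p in zip(y_true, y_pred))
      return correct / len(y_true)

  # Hypothetical held-out labels vs. model predictions.
  y_true = [1, 0, 1, 1, 0, 1, 0, 0]
  y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

  print(f"accuracy = {accuracy(y_true, y_pred):.2f}")
  ```

  Accuracy alone is rarely sufficient; depending on the application, the team may also need precision, recall, or calibration metrics, especially under class imbalance.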

  2. External Resources - Tools to use

  • LF AI: The Adversarial Robustness Toolbox (ART) provides tools that enable developers and researchers to evaluate, defend, and verify machine learning models and applications against adversarial threats.
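
  This is not the ART API itself, but a hand-rolled sketch of the kind of evasion test ART automates: an FGSM-style perturbation applied to a toy linear classifier, comparing accuracy on clean versus perturbed inputs. The model weights, data points, and epsilon are all made up for illustration.

  ```python
  # Hand-rolled sketch of an FGSM-style evasion test, illustrating the kind
  # of robustness evaluation that ART automates at scale. The linear model,
  # weights, and data points here are all hypothetical.

  def predict(w, x):
      """Linear classifier: label 1 if the score w.x is positive, else 0."""
      return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

  def sign(v):
      return 1.0 if v > 0 else -1.0 if v < 0 else 0.0

  def fgsm_perturb(w, x, y, eps):
      """Shift each feature by eps in the direction that increases the loss.

      For a linear score w.x, the loss-increasing direction is -sign(w)
      when the true label is 1, and +sign(w) when it is 0.
      """
      direction = -1.0 if y == 1 else 1.0
      return [xi + direction * eps * sign(wi) for wi, xi in zip(w, x)]

  # Hypothetical model and labelled test points.
  w = [0.8, -0.5, 0.3]
  data = [([1.0, 0.2, 0.5], 1), ([0.1, 0.9, -0.2], 0), ([0.7, 0.1, 0.9], 1)]

  clean_acc = sum(predict(w, x) == y for x, y in data) / len(data)
  adv_acc = sum(predict(w, fgsm_perturb(w, x, y, eps=0.6)) == y
                for x, y in data) / len(data)

  print(f"clean accuracy       = {clean_acc:.2f}")
  print(f"adversarial accuracy = {adv_acc:.2f}")
  ```

  A large gap between clean and adversarial accuracy is the signal this kind of test is designed to surface; ART provides ready-made attacks, defenses, and estimator wrappers for real models so the team does not have to write this by hand.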

  3. Case Studies

  4. Further Readings