QA / QC, Moderation, Review
Formalized protocols for: safety-oriented content policies, transparency around content review, and arbitration mechanisms for users to contest system decisions.
User-generated content
Algorithm-generated content
For regulators / policymakers
Are regulators able to query content portals for sample sets to ensure compliance?
Policy questions for user-generated content
How does your system track and report QA/QC metrics?
Do you have documentation of error rates (false positives/negatives, etc.)?
Does your system allow batching of flagged content:
For third-party auditors?
For regulators upon request?
Does the user-generated content undergo automated safety review? If not, what internal signals trigger a review (e.g., sentiment analysis)?
What technical standards were used to benchmark safety-flagging performance?
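The error-rate and batching questions above can be made concrete with a minimal sketch. The audit-sample format, the `(flagged, actually_violating)` pair representation, and the batch size are hypothetical assumptions, not a reference implementation of any particular moderation system.

```python
from itertools import islice

def error_rates(decisions):
    """Compute false positive/negative rates from a human-reviewed audit
    sample. `decisions` is a list of (flagged, actually_violating) boolean
    pairs -- a hypothetical format for illustration only."""
    fp = sum(1 for flagged, bad in decisions if flagged and not bad)
    fn = sum(1 for flagged, bad in decisions if not flagged and bad)
    negatives = sum(1 for _, bad in decisions if not bad)
    positives = sum(1 for _, bad in decisions if bad)
    return {
        "false_positive_rate": fp / negatives if negatives else 0.0,
        "false_negative_rate": fn / positives if positives else 0.0,
    }

def batch_flagged(items, batch_size=100):
    """Yield fixed-size batches of flagged items, e.g. for export to a
    third-party auditor or a regulator on request."""
    it = iter(items)
    while batch := list(islice(it, batch_size)):
        yield batch
```

Documentation of this kind (the sampling method, the rates, and the export batch format) is the sort of artifact a regulator could request when auditing compliance.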
Is there an intake mechanism to compel disclosure of which privacy regulations are being adhered to (e.g., HIPAA, GDPR)?
How is privacy enforced for the following classes of data?
PII
Health data
Third-party data transfers
Tracking data on sensitive/high-risk browsing
Biometric data
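One way to operationalize per-class privacy enforcement is to tag records by data class and route them to class-specific handling policies. The class names, regex patterns, and policy values below are illustrative assumptions only; a production system would use vetted classifiers and the actual regulatory definitions, not toy patterns.

```python
import re

# Illustrative data-class detectors (hypothetical, not exhaustive).
DATA_CLASS_PATTERNS = {
    "pii": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),               # e.g. US SSN format
    "health": re.compile(r"\b(diagnosis|prescription)\b", re.I),
    "biometric": re.compile(r"\b(fingerprint|faceprint)\b", re.I),
}

# Hypothetical per-class handling policy: retention window and whether
# third-party transfer is permitted.
CLASS_POLICY = {
    "pii": {"retention_days": 30, "third_party_transfer": False},
    "health": {"retention_days": 0, "third_party_transfer": False},
    "biometric": {"retention_days": 0, "third_party_transfer": False},
}

def classify_record(text):
    """Return the set of data classes detected in a free-text record."""
    return {name for name, pat in DATA_CLASS_PATTERNS.items() if pat.search(text)}

def enforce(text):
    """Return the strictest policy applicable to a record: the minimum
    retention window, and transfer allowed only if every matched class
    allows it."""
    classes = classify_record(text)
    if not classes:
        return {"retention_days": None, "third_party_transfer": True}
    return {
        "retention_days": min(CLASS_POLICY[c]["retention_days"] for c in classes),
        "third_party_transfer": all(
            CLASS_POLICY[c]["third_party_transfer"] for c in classes
        ),
    }
```

The "strictest policy wins" merge is one defensible design choice when a record spans multiple sensitive classes (e.g., health data that also contains PII); it keeps enforcement auditable because each class maps to a single declared policy.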