I recognize that people sometimes use products in harmful ways and that some people are more vulnerable to abuse and harassment than I am. I pledge to design and build better moderation and safety tools that will protect people at risk, regardless of whether they are our paying customers. I will aim to do so before something harmful happens on our platform.

I will think about:

  • Can children use the product, and is there anything we can do to make it safer for minors?
  • How could our product be used to target minorities, people in abusive relationships, or other vulnerable groups? Could it be used to endanger national security?
  • How can we improve moderation and reporting tools and processes? Is there anything we can do to prevent abuse and harassment before it happens?

Suggested actions:

  • Make it easy to report abuse and harassment, and have a transparent process in place for handling such complaints. Reporting tools should also be available to people who are not using our product but might still be affected by it (a minimal sketch of such a reporting flow follows this list).
  • Make our Terms of Service easy to read and understand, and consider adding a Code of Conduct. Be clear about expected usage and about what actions can lead to bans, and enforce our rules consistently and transparently.
  • Treat security as a priority when handling personal and location data, and don’t collect such data unless we actually need it. Educate everyone with internal access about social engineering methods.
  • Make it easy for people to adjust their privacy settings, and safeguard accounts with strong passwords and easy-to-use two-factor authentication.
  • If children and minors can use our product, implement features that provide additional safety, such as content warnings and parental controls.
  • If our product allows public posts and messaging, make it easy for people to block and mute accounts and to report hate speech of various kinds, and think about mechanisms for stopping or slowing the spread of misinformation (a block-and-mute sketch also follows this list).
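
To make the reporting point concrete, here is a minimal TypeScript sketch of a report intake that does not require an account. The types, field names, and in-memory store are illustrative assumptions, not a prescribed design; a real system would persist reports and route them into a review queue.

```ts
import { randomUUID } from "node:crypto";

type ReportStatus = "received" | "under_review" | "resolved";

interface AbuseReport {
  id: string;
  // Reporters may be logged-in users or outside parties, so the reporter
  // field is optional contact info rather than a required user ID.
  reporterContact?: string;
  targetContentId: string;
  category: "harassment" | "hate_speech" | "misinformation" | "other";
  details: string;
  status: ReportStatus;
  receivedAt: Date;
}

// Hypothetical in-memory store; stands in for a database or review queue.
const reports: AbuseReport[] = [];

// Accepts a report without requiring an account and acknowledges it,
// so the reporter can follow the complaint through a transparent process.
function submitReport(
  input: Omit<AbuseReport, "id" | "status" | "receivedAt">
): AbuseReport {
  const report: AbuseReport = {
    ...input,
    id: randomUUID(),
    status: "received",
    receivedAt: new Date(),
  };
  reports.push(report);
  return report;
}

// Example: an outside party, not a user of the product, reports a post.
const ack = submitReport({
  reporterContact: "bystander@example.com",
  targetContentId: "post-123",
  category: "harassment",
  details: "This post repeatedly targets another person with slurs.",
});
console.log(`Report ${ack.id} is ${ack.status}.`);
```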
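
And for the block-and-mute point, a small sketch of viewer-side feed filtering, again with hypothetical types. A real system would enforce blocks server-side on every surface (search, replies, mentions), not only in the feed.

```ts
interface Post {
  id: string;
  authorId: string;
  text: string;
}

interface SafetySettings {
  blocked: Set<string>; // authors hidden in both directions
  muted: Set<string>;   // authors hidden only from this viewer
}

// Blocked and muted authors are filtered out before the feed is rendered,
// so a person being harassed never has to see the content to be shielded from it.
function visiblePosts(feed: Post[], settings: SafetySettings): Post[] {
  return feed.filter(
    (post) =>
      !settings.blocked.has(post.authorId) &&
      !settings.muted.has(post.authorId)
  );
}

const settings: SafetySettings = {
  blocked: new Set(["harasser-42"]),
  muted: new Set(["noisy-99"]),
};

const feed: Post[] = [
  { id: "1", authorId: "friend-1", text: "Hello!" },
  { id: "2", authorId: "harasser-42", text: "abusive content" },
  { id: "3", authorId: "noisy-99", text: "spam" },
];

console.log(visiblePosts(feed, settings).map((p) => p.id)); // ["1"]
```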