Content moderation is the process of reviewing user-generated content that has been uploaded and reported on social media pages. More than 6,000 agents across globally dispersed teams are responsible for reviewing content for policy violations.
Agents are trained on content policies, which define the kinds of abusive content that can't be posted on the social media platform. However, both the service providers and agents were reporting several key issues…
Solution: I led the development of a new content management system for global content moderation agents: the central resource for all policy, enforcement, and performance support materials needed to deliver a high-quality, consistent policy enforcement service at scale and better serve online communities.
Partnering with: 2 Engineers, 1 Program Manager, 3 Content Designers, 3 Policy Specialists
A content management system in which all policy guidance, local cultural context on abusive content, learning material, and quality support functionality have been centralized.
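A minimal sketch of how that centralized content model could be typed, assuming a single record shape keyed by policy area and market; names like `AgentResource` and the example values are illustrative, not the production schema.

```typescript
// Hypothetical content model: every resource an agent needs lives in one store,
// keyed by policy area and market so local cultural context sits beside the policy.
type ResourceKind =
  | "policy-guidance"
  | "cultural-context"
  | "learning-module"
  | "quality-support";

interface AgentResource {
  id: string;
  kind: ResourceKind;
  policyArea: string; // e.g. "hate-speech", "harassment" (illustrative)
  market?: string;    // e.g. "DE", "BR" — set for market-specific cultural context
  title: string;
  body: string;
  lastUpdated: Date;
}

// One lookup surface instead of scattered docs, wikis, and email threads.
function findResources(
  store: AgentResource[],
  policyArea: string,
  market?: string
): AgentResource[] {
  return store.filter(
    (r) =>
      r.policyArea === policyArea &&
      (r.market === undefined || r.market === market)
  );
}
```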
A new notification system pushes mandatory policy updates to impacted agents; each update must be acknowledged before the agent can progress to live content moderation queues.
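The acknowledgment gate could work along these lines; a hedged sketch in which `pendingUpdates` and `canEnterLiveQueue` are hypothetical names, not the actual system's API.

```typescript
interface PolicyUpdate {
  id: string;
  policyArea: string;
  mandatory: boolean;
}

interface Agent {
  id: string;
  policyAreas: string[];             // queues this agent moderates
  acknowledgedUpdateIds: Set<string>;
}

// Updates the agent is impacted by but has not yet acknowledged.
function pendingUpdates(agent: Agent, updates: PolicyUpdate[]): PolicyUpdate[] {
  return updates.filter(
    (u) =>
      u.mandatory &&
      agent.policyAreas.includes(u.policyArea) &&
      !agent.acknowledgedUpdateIds.has(u.id)
  );
}

// Gate: an agent enters live moderation queues only with zero pending mandatory updates.
function canEnterLiveQueue(agent: Agent, updates: PolicyUpdate[]): boolean {
  return pendingUpdates(agent, updates).length === 0;
}
```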
The agent training and performance support experience has also been digitized into micro e-learning modules with integrated knowledge assessments and certification.
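Certification after a micro-module might be gated on an assessment score; the 80% pass mark below is an assumed threshold for illustration, not the real program's figure.

```typescript
interface Assessment {
  moduleId: string;
  questionsTotal: number;
  questionsCorrect: number;
}

const PASS_THRESHOLD = 0.8; // assumed pass mark, not the program's actual figure

// An agent is certified on a module once its integrated assessment is passed.
function isCertified(a: Assessment): boolean {
  return a.questionsCorrect / a.questionsTotal >= PASS_THRESHOLD;
}
```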
Key Functionality of the delivered solution...