Wednesday, October 28, 2020 - 4:00pm to 4:30pm
Event Calendar Category
LIDS & Stats Tea
Zoom meeting id
931 1136 1554
Social media platforms (SMPs) moderate content using a process known as algorithmic filtering (AF). While AF has the potential to greatly improve the user experience, it has also drawn intense scrutiny for its roles in, for example, spreading fake news, amplifying hate speech, and facilitating digital redlining. At the same time, regulating AF can harm the social media ecosystem by interfering with personalization, lowering profits, and restricting free speech. We ask whether it is possible to design a regulation that neither removes content nor hurts performance. In this work, we propose a framework for regulating AF. We show that it has desirable statistical guarantees, including holding the platform more accountable to users. We then prove that there are conditions under which the regulation imposes little to no long-term performance cost on the SMP. Interestingly, the key mechanism that aligns the platform's performance interests with regulatory interests is content diversity. An additional finding is that the proposed regulation does not remove content.
Sarah is a PhD student working with Prof. Devavrat Shah. She studies machine learning theory with a focus on social good. Her ongoing projects examine the influence of social media on users, incentive alignment in human-machine systems, and feedback loops in machine learning. She has previously worked on autonomous vehicles, communication networks, reinforcement learning, and robotics.