
Protest Activity Detection and Perceived Violence Estimation from Social Media Images

Introduction

We develop a novel visual model that recognizes protesters, describes their activities through visual attributes, and estimates the level of perceived violence in an image. Existing studies of social media and protest use natural language processing to track how individuals use hashtags and links, often focusing on the diffusion of those items. These approaches, however, may not fully characterize actual real-world protests (e.g., as violent or peaceful) or estimate the demographics of participants (e.g., age, gender, and race) and their emotions. Our system characterizes protests along these dimensions. We collected geotagged tweets and their images from 2013 to 2017 and analyzed multiple major protest events in that period. A multi-task convolutional neural network automatically classifies whether protesters are present in an image and predicts its visual attributes, level of perceived violence, and exhibited emotions.
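To illustrate the multi-task setup, the following is a minimal sketch of how a shared CNN feature vector can feed several task-specific output heads (protest presence, visual attributes, perceived violence). All names, dimensions, and the randomly initialized linear heads are illustrative assumptions, not the authors' released model.

```python
import numpy as np

rng = np.random.default_rng(0)

FEAT_DIM = 512       # assumed size of the shared CNN feature vector
N_ATTRIBUTES = 10    # assumed number of visual attributes

# Hypothetical task-specific linear heads on top of the shared features.
W_protest = rng.normal(0, 0.01, (FEAT_DIM, 1))
W_attr = rng.normal(0, 0.01, (FEAT_DIM, N_ATTRIBUTES))
W_violence = rng.normal(0, 0.01, (FEAT_DIM, 1))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict(features):
    """Map one shared feature vector to the three task outputs."""
    protest_prob = sigmoid(features @ W_protest).item()  # binary: protest image or not
    attr_probs = sigmoid(features @ W_attr).ravel()      # multi-label visual attributes
    violence = sigmoid(features @ W_violence).item()     # perceived violence score in [0, 1]
    return protest_prob, attr_probs, violence

features = rng.normal(size=FEAT_DIM)  # stand-in for a real CNN feature vector
p, attrs, v = predict(features)
```

Sharing one backbone across tasks is the key design choice here: the heads are cheap, and correlated tasks (e.g., attribute cues and violence) can regularize one another during joint training.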



Code & Model


UCLA Protest Image Dataset
  • Our novel dataset contains 40,764 images (11,659 protest images plus hard negatives), annotated with visual attributes and sentiments.
  • Contact Jungseock Joo (jjoo at comm.ucla.edu)


Paper
  • Protest Activity Detection and Perceived Violence Estimation from Social Media Images
    Donghyeon Won, Zachary C. Steinert-Threlkeld, Jungseock Joo
    ACM Multimedia, 2017