Sensing social media to predict the result of Euro elections 2014


Given that we had many of the components in place and a "certain level" of trust in the power of social media*, we decided to go for a wild experiment: predict the results of the (now past) Euro elections using social signals, and publish the prediction before the polls closed. So, 48 days before the elections, we started monitoring the online discussions on Twitter around the most popular parties (using their official names, Twitter account names and some frequently used abbreviations). All in all, we collected around 370K tweets during the pre-election period. At the same time, we collected poll results from multiple sources to use as a kind of "ground truth" for building our predictive models.
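As a rough illustration of this monitoring step, the sketch below matches incoming tweet texts against a small set of party aliases; the alias lists, field names and example handles are placeholders for this post, not the actual keyword lists we tracked.

```python
# Minimal sketch: assign tweets to parties by matching party aliases
# (official names, Twitter handles, common abbreviations).
# The alias lists below are illustrative, not the actual tracked keywords.
from collections import defaultdict

PARTY_ALIASES = {
    "SYRIZA": ["συριζα", "@syriza_gr"],
    "ND": ["νεα δημοκρατια", "@neademokratia"],
    # ... one entry per monitored party
}

def parties_mentioned(text):
    """Return the set of parties whose aliases appear in the tweet text."""
    text = text.lower()
    return {party for party, aliases in PARTY_ALIASES.items()
            if any(alias in text for alias in aliases)}

def count_mentions(tweets):
    """Count tweets and unique users mentioning each party."""
    counts = defaultdict(lambda: {"tweets": 0, "users": set()})
    for tweet in tweets:  # each tweet is assumed to be {"text": ..., "user": ...}
        for party in parties_mentioned(tweet["text"]):
            counts[party]["tweets"] += 1
            counts[party]["users"].add(tweet["user"])
    return counts
```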

We then computed a set of features from the collected tweets. These features captured the percentage of messages mentioning a given party, the number of unique user accounts discussing it, and the sentiment (positive/negative) expressed towards it, and were computed on a daily, per-party basis. Due to the lack of a Greek sentiment lexicon (we are working on it!) and the not-so-effective performance of supervised learning approaches for sentiment detection in Greek, we turned to a heuristics-based method for automatically detecting the sentiment polarity (positive/negative) of a tweet: we took three different English lexicons and translated them into Greek using Google Translate. To detect a tweet's sentiment, we used a naïve counting method and assigned the majority class label (positive/negative) to it. Finally, we applied a 7-day moving average filter to all features to smooth their values over time.
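A simplified sketch of this step, assuming the translated lexicons are available as plain word sets and the daily features sit in a pandas DataFrame (the lexicon contents and column layout are placeholders for the actual resources we used):

```python
import pandas as pd

# Illustrative placeholders for the three translated lexicons,
# each reduced here to plain sets of positive/negative Greek words.
LEXICONS = [
    {"pos": {"καλος", "ωραιος"}, "neg": {"κακος", "απαισιος"}},
    # ... two more translated lexicons
]

def tweet_sentiment(text):
    """Naive counting heuristic: majority label over all lexicons."""
    tokens = text.lower().split()
    pos = sum(1 for lex in LEXICONS for t in tokens if t in lex["pos"])
    neg = sum(1 for lex in LEXICONS for t in tokens if t in lex["neg"])
    if pos == neg:
        return "neutral"
    return "positive" if pos > neg else "negative"

def smooth(daily_features: pd.DataFrame) -> pd.DataFrame:
    """Apply a 7-day moving average to the daily per-party features."""
    return daily_features.rolling(window=7, min_periods=1).mean()
```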

In terms of the public polls, we aggregated the results published by different polling companies between April 6th and May 24th to serve as target values for our predictive models. Each poll is typically conducted over a short period (2-3 days) on a sample of the country's population (normally around 1,000 to 1,400 people). For every poll, we initially set aside the percentage of undecided participants. We then treated the percentage reported for every party on a polling date as the percentage it would achieve if the elections were held on that day. When two or more polls were conducted on the same date, we used the sample size of each poll as a weight and assigned the weighted average percentage to every party. Finally, after calculating the percentages of every party for every date on which a poll was conducted, we redistributed the "undecided" voters to all parties in proportion to their percentages.
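A sketch of this aggregation, under the assumption that each poll is represented by its sample size and per-party percentages (the field names are ours for the example, not from an actual polling feed):

```python
def aggregate_polls(polls_for_day):
    """Sample-size-weighted average of all polls published on the same date.

    Each poll is assumed to look like:
      {"sample": 1000, "parties": {"SYRIZA": 27.0, "ND": 25.0, ...}}
    """
    total_weight = sum(p["sample"] for p in polls_for_day)
    parties = polls_for_day[0]["parties"].keys()
    averaged = {
        party: sum(p["parties"][party] * p["sample"] for p in polls_for_day) / total_weight
        for party in parties
    }
    return redistribute_undecided(averaged)

def redistribute_undecided(percentages):
    """Rescale party percentages so they sum to 100, i.e. spread the
    'undecided' share over all parties in proportion to their size."""
    total = sum(percentages.values())
    return {party: 100.0 * value / total for party, value in percentages.items()}
```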

Using the extracted features above and the target values from the public polls, we trained four different regression models (Linear Regression, ε-SVM, Sequential Minimal Optimization, Gaussian Process) and produced our Euro election predictions about three hours before the polls closed. More precisely, as input features we used both the Twitter features and the poll results at day t, and as the target we used the poll results at day t+1. On days when no poll was published, we used interpolation to derive a proxy value for that day.
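For illustration, here is roughly what such a training setup could look like with scikit-learn; the file, column and feature names are hypothetical, and ε-SVR and Gaussian Process Regression stand in for the ε-SVM and Gaussian Process models (SMO is the solver used by SVM implementations such as Weka's SMOreg rather than a separate estimator here):

```python
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.gaussian_process import GaussianProcessRegressor

# Hypothetical daily table for one party, indexed by date, with columns:
#   mention_share, unique_users, pos_ratio, neg_ratio, poll
daily = pd.read_csv("party_daily_features.csv", index_col="date", parse_dates=True)

# Days without a published poll get a proxy value by interpolation.
daily["poll"] = daily["poll"].interpolate()

FEATURES = ["mention_share", "unique_users", "pos_ratio", "neg_ratio", "poll"]

# Input: Twitter features + poll value at day t; target: poll value at day t+1.
X = daily[FEATURES][:-1]
y = daily["poll"].shift(-1).dropna()

models = {
    "linear": LinearRegression(),
    "eps_svr": SVR(kernel="rbf", epsilon=0.1),   # ε-SVR
    "gp": GaussianProcessRegressor(),
}
for name, model in models.items():
    model.fit(X, y)

# Prediction for the next day from the last available day of features.
last_day = daily[FEATURES].iloc[[-1]]
predictions = {name: float(model.predict(last_day)[0]) for name, model in models.items()}
```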

Note that this is one of the few times that such predictions have been published before the release of exit poll estimates or official results. Obviously, we were very curious to see what would happen...

The results turned out to be pretty accurate! The table below presents our predictions side by side with a number of different polls (note that for polls and exit polls we re-adjusted the results by assigning all "undecided" voters equally to all parties), the exit polls, and the actual results at the time of writing.

| Party | Results | Exit Polls | SocialSensor | Poll Watch | Meta Polls | ΚΑΠΑ Research | Pulse | Alco | Rass | Public Issue | GPO | Marc |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ΣΥΡΙΖΑ (Συνασπισμός Ριζοσπαστικής Αριστεράς) | 26.58 | 28.65 | 27.27 | 29.60 | 29.00 | 29.72 | 27.62 | 30.02 | 28.52 | 30.00 | 26.13 | 28.18 |
| ΝΔ (Νέα Δημοκρατία) | 22.74 | 25.58 | 24.89 | 26.00 | 25.50 | 24.62 | 24.86 | 25.81 | 25.37 | 27.50 | 25.00 | 24.05 |
| Χρυσή Αυγή | 9.38 | 9.21 | 8.98 | 8.00 | 9.40 | 9.44 | 9.39 | 9.55 | 10.26 | 8.00 | 9.57 | 9.97 |
| Ελιά | 8.03 | 8.18 | 7.73 | 6.50 | 7.30 | 6.72 | 7.73 | 6.58 | 6.09 | 8.50 | 7.88 | 6.41 |
| Το Ποτάμι | 6.61 | 6.14 | 8.38 | 8.00 | 7.70 | 8.79 | 7.73 | 7.20 | 9.13 | 6.50 | 7.88 | 6.87 |
| ΚΚΕ | 6.07 | 6.14 | 6.35 | 6.00 | 6.10 | 6.94 | 6.63 | 5.96 | 6.88 | 6.50 | 6.31 | 6.30 |
| ΑΝΕΛ (Ανεξάρτητοι Έλληνες) | 3.45 | 4.09 | 4.23 | 5.10 | 4.00 | 4.34 | 4.42 | 4.09 | 4.28 | 3.00 | 3.94 | 4.58 |
| ΔΗΜΑΡ | 1.21 | 2.05 | 2.71 | 3.20 | 2.40 | 1.52 | 2.76 | 2.23 | 1.69 | 2.50 | 2.82 | 2.75 |
| Other | 15.93 | 9.97 | 9.47 | 7.60 | 8.60 | 7.92 | 8.84 | 8.56 | 7.78 | 7.50 | 10.47 | 10.88 |

 

To get a better idea of how close each poll or estimate is to the actual results, the following table presents the squared error per party and, in the first row, the Mean Squared Error (MSE) between each estimate and the actual results:

 

| Party | Exit Polls | SocialSensor | Poll Watch | Meta Polls | ΚΑΠΑ Research | Pulse | Alco | Rass | Public Issue | GPO | Marc |
|---|---|---|---|---|---|---|---|---|---|---|---|
| MSE (all parties) | 5.47 | 5.90 | 11.34 | 7.85 | 9.51 | 6.77 | 8.84 | 9.96 | 12.20 | 4.42 | 4.05 |
| ΣΥΡΙΖΑ (Συνασπισμός Ριζοσπαστικής Αριστεράς) | 4.18 | 0.45 | 9.00 | 5.76 | 9.72 | 1.05 | 11.73 | 3.70 | 11.56 | 0.22 | 2.49 |
| ΝΔ (Νέα Δημοκρατία) | 8.21 | 4.62 | 10.82 | 7.78 | 3.65 | 4.63 | 9.59 | 7.06 | 22.94 | 5.24 | 1.81 |
| Χρυσή Αυγή | 0.03 | 0.17 | 1.93 | 0.00 | 0.00 | 0.00 | 0.03 | 0.76 | 1.93 | 0.03 | 0.33 |
| Ελιά | 0.03 | 0.08 | 2.31 | 0.52 | 1.68 | 0.08 | 2.09 | 3.73 | 0.23 | 0.02 | 2.58 |
| Το Ποτάμι | 0.22 | 3.13 | 1.93 | 1.19 | 4.73 | 1.27 | 0.34 | 6.36 | 0.01 | 1.62 | 0.07 |
| ΚΚΕ | 0.01 | 0.08 | 0.01 | 0.00 | 0.76 | 0.31 | 0.01 | 0.65 | 0.18 | 0.06 | 0.05 |
| ΑΝΕΛ (Ανεξάρτητοι Έλληνες) | 0.40 | 0.59 | 2.69 | 0.29 | 0.77 | 0.92 | 0.40 | 0.68 | 0.21 | 0.23 | 1.26 |
| ΔΗΜΑΡ | 0.70 | 2.25 | 3.96 | 1.42 | 0.10 | 2.41 | 1.05 | 0.23 | 1.66 | 2.58 | 2.37 |
| Other | 35.47 | 41.73 | 69.39 | 53.73 | 64.20 | 50.27 | 54.31 | 66.44 | 71.06 | 29.78 | 25.48 |

 

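For reference, the error figures above follow the standard definitions; a minimal sketch (the party labels and percentages in the usage example are made up, not the table values):

```python
def squared_errors(actual, estimate):
    """Per-party squared error between an estimate and the actual results."""
    return {party: (estimate[party] - actual[party]) ** 2 for party in actual}

def mse(actual, estimate):
    """Mean Squared Error over all parties."""
    errors = squared_errors(actual, estimate)
    return sum(errors.values()) / len(errors)

# Illustrative usage with made-up numbers:
actual = {"A": 26.6, "B": 22.7, "C": 9.4}
estimate = {"A": 27.3, "B": 24.9, "C": 9.0}
print(squared_errors(actual, estimate))   # per-party squared errors
print(mse(actual, estimate))              # their mean
```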
As can be seen, the SocialSensor approach produces better estimates than almost all of the competing polls (with the exception of GPO and Marc, which outperformed even the exit polls) and approaches the accuracy of the exit poll prediction. The SocialSensor estimates are especially accurate for the first four parties compared to the other polls. Also note that the SocialSensor predictions are much cheaper to produce than the others, since they are mostly based on automatic data analysis components. Obviously, such results should be taken with a grain of salt, since election prediction is an inherently challenging problem with a large number of factors affecting the outcome. It will be interesting to see whether this approach (or an extension of it) can be successfully (and repeatedly) applied in the future.

More details on our approach will be provided in the upcoming deliverable D2.3 "Social stream mining framework" and will also be submitted for publication at an upcoming venue. In the meantime, please get in touch with Adam Tsakalidis (@adtsakal) and Symeon Papadopoulos (@sympapadopoulos).

 

* Obviously, there has been much prior research work in this exciting new area. Tumasjan et al. found strong correlations between the volume and sentiment of tweets and the electoral strength of political parties. In contrast, Metaxas et al. provide a critical review, pointing out the extremely challenging aspects of the problem. Choy et al. combine sentiment analysis with reweighting techniques to predict the outcome of the 2011 presidential election in Singapore. Tjong Kim Sang and Bos used entity counts and sentiment analysis to predict the results of the 2011 Dutch Senate election. More recently, Lampos et al. argue that a substantial number of messages referring to election entities (parties, candidates) should be filtered out rather than used for predictive modelling, and propose a highly effective method to improve prediction performance.