Big data as a tool for detecting (and punishing?) bullies

A group of researchers has developed a machine learning model that can detect tweets relating to bullying and even identify bullies, victims, and witnesses.

How the model works and what it found

To train their model, the researchers fed it two sets of tweets: one they had determined to be about bullying activity and another that was not. Once the model had learned the language identifiers of tweets relating to bullying, it was time to turn it loose on real-world tweets. Not only did the system identify a great number of such tweets, but it also discovered time patterns (bullying tweets occur most frequently during the school week) and was able to pick out who played what role in the bullying.
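The article does not describe the researchers' actual model, but the two-set training setup it mentions can be sketched with a minimal bag-of-words Naive Bayes classifier. Everything below — the tokenizer, the training sentences, and the labels — is hypothetical illustration, not the researchers' data or method.

```python
# Illustrative sketch only: a tiny bag-of-words Naive Bayes classifier
# trained on two hand-labeled sets of tweets (bullying vs. other),
# mirroring the two-set training procedure described above.
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def train(labeled_tweets):
    """labeled_tweets: list of (text, label) pairs.
    Returns per-label word counts and per-label tweet totals."""
    counts = {}          # label -> Counter of word frequencies
    totals = Counter()   # label -> number of training tweets
    for text, label in labeled_tweets:
        counts.setdefault(label, Counter()).update(tokenize(text))
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Return the label with the highest log-probability,
    using add-one (Laplace) smoothing over the shared vocabulary."""
    vocab = {w for c in counts.values() for w in c}
    best_label, best_score = None, float("-inf")
    for label, word_counts in counts.items():
        score = math.log(totals[label] / sum(totals.values()))  # class prior
        denom = sum(word_counts.values()) + len(vocab)
        for word in tokenize(text):
            score += math.log((word_counts[word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Hypothetical stand-ins for the two labeled tweet sets
training = [
    ("everyone picked on him at school again", "bullying"),
    ("stop teasing her you bully", "bullying"),
    ("they kicked and punched the new kid", "bullying"),
    ("great game last night go team", "other"),
    ("loving this sunny weather today", "other"),
    ("new phone arrived and it works great", "other"),
]
counts, totals = train(training)
print(classify("the bully picked on her at school", counts, totals))  # -> bullying
```

With training data on this scale the classifier only memorizes vocabulary overlap; the point of the sketch is the pipeline shape — label two sets, learn per-class word statistics, then score unseen tweets.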

Author:
Publication Date: August 03, 2012
Crawl Date: August 05, 2012
Source: GigaOM via Google News
Topic: Applications, Ethics, MachineLearning
Contributor: NewsFinder
Type: Text
Language: English
Format: HTML

Whitelist words:

  1. ethic question: 1 occurrence
  2. machin learn: 1 occurrence

Categorization (need >= 0.5 to match):

  1. AIOverview: 0.00878838
  2. Agents: 0.254907
  3. Applications: 0.841839
  4. CognitiveScience: 0.125451
  5. Education: 0.0585897
  6. Ethics: 0.521274
  7. Games: 0.0113352
  8. History: 0.0135249
  9. Interfaces: 0.0236939
  10. MachineLearning: 0.571719
  11. NaturalLanguage: 0.00240236
  12. Philosophy: 0.00546275
  13. Reasoning: 0.00107939
  14. Representation: 0.00183033
  15. Robots: 0.242453
  16. ScienceFiction: 0.036377
  17. Speech: 0.00312588
  18. Systems: 0.00828046
  19. Vision: 0.0509929
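The ">= 0.5 to match" rule above can be made concrete with a short sketch. This is an assumed reading of the scores listed (that a topic is assigned exactly when its score meets the threshold), not a description of the indexer's internals.

```python
# Apply the >= 0.5 threshold to the categorization scores listed above.
scores = {
    "AIOverview": 0.00878838, "Agents": 0.254907, "Applications": 0.841839,
    "CognitiveScience": 0.125451, "Education": 0.0585897, "Ethics": 0.521274,
    "Games": 0.0113352, "History": 0.0135249, "Interfaces": 0.0236939,
    "MachineLearning": 0.571719, "NaturalLanguage": 0.00240236,
    "Philosophy": 0.00546275, "Reasoning": 0.00107939,
    "Representation": 0.00183033, "Robots": 0.242453,
    "ScienceFiction": 0.036377, "Speech": 0.00312588,
    "Systems": 0.00828046, "Vision": 0.0509929,
}
matched = sorted(label for label, score in scores.items() if score >= 0.5)
print(matched)  # -> ['Applications', 'Ethics', 'MachineLearning']
```

The three matching labels are exactly the topics recorded in this entry's metadata.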

Duplicates (threshold=0.17):

  1. (sim=0.461) 10009: Learning machines scour Twitter in service of bullying research


Page last modified on August 06, 2012, at 12:00 AM