Inferring Private Information Using Social Network Data

Presented at: 18th International World Wide Web Conference (WWW2009)

by Jack Lindamood, Raymond Heatherly, Murat Kantarcioglu, Bhavani Thuraisingham

Webpage: http://www2009.eprints.org/153/1/p1145.pdf

On-line social networks, such as Facebook, are increasingly utilized by many users. These networks allow people to publish details about themselves and to connect to their friends. Some of the information revealed inside these networks is private, and it is possible that corporations could apply learning algorithms to the released data to predict undisclosed private information. In this paper, we explore how to launch inference attacks that use released social networking data to predict undisclosed private information about individuals, such as a trait the user is not willing to disclose (e.g., political or religious affiliation), and we explore the effect of possible data sanitization alternatives on preventing such private information leakage under different scenarios. To our knowledge, this is the first comprehensive paper that discusses the problem of inferring private traits using real-life social network data and possible sanitization approaches to prevent such inference.

First, we present a modification of Naïve Bayes classification that is suitable for classifying large amounts of social network data. Our modified Naïve Bayes algorithm predicts privacy-sensitive trait information using both node traits and link structure. We compare the accuracy of our learning method based on link structure against the accuracy of our learning method based on node traits. Please see the extended version of this paper [3] for further details of our modified Naïve Bayes classifier. In order to protect privacy, we sanitize both trait details (e.g., deleting some information from a user's on-line profile) and link details (e.g., deleting links between friends) and explore the effect these changes have on combating possible inference attacks.
Our initial results indicate that sanitizing trait information or link information alone may not be enough to prevent inference attacks; comprehensive sanitization techniques that address both aspects are needed in practice. Similar to our paper, the authors of [2] consider ways to infer private information via friendship links by creating a Bayesian network from the links inside a social network. A similar privacy problem for online social networks is discussed in [4]. Compared to [2] and [4], we provide techniques that help in choosing the most effective traits or links to remove for protecting privacy.
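As a toy illustration of choosing which traits to remove (not the paper's actual selection criterion), one could score each trait by how unevenly it is distributed across the sensitive classes and delete the most revealing ones from a profile; all names and the likelihood-ratio heuristic below are assumptions for illustration:

```python
from collections import defaultdict

def trait_skew(people):
    """Score each trait by a Laplace-smoothed likelihood ratio: how much
    more probable the trait is under its most-associated class than under
    its least-associated class. Higher scores mean more revealing traits."""
    class_counts = defaultdict(int)
    trait_counts = defaultdict(lambda: defaultdict(int))
    for label, traits in people:
        class_counts[label] += 1
        for t in traits:
            trait_counts[label][t] += 1
    vocab = {t for d in trait_counts.values() for t in d}
    scores = {}
    for t in vocab:
        probs = [(trait_counts[label][t] + 1) / (c + len(vocab))
                 for label, c in class_counts.items()]
        scores[t] = max(probs) / min(probs)
    return scores

def sanitize_profile(traits, scores, k):
    """Delete the k highest-scoring (most revealing) traits from a profile."""
    revealing = sorted(traits, key=lambda t: scores.get(t, 0), reverse=True)
    return traits - set(revealing[:k])
```

An analogous procedure could rank friendship links instead of traits; the paper's point is that removing either kind of detail alone leaves enough signal for inference, so both must be sanitized together.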

Keywords: Poster Session


Resource URI on the dog food server: http://data.semanticweb.org/conference/www/2009/paper/153

