Karr, A. F., & Reiter, J. P. (2014). Using Statistics to Protect Privacy. In J. Lane, V. Stodden, S. Bender, & H. Nissenbaum (Eds.), Privacy, Big Data, and the Public Good: Frameworks for Engagement (pp. 276–295). Cambridge University Press. https://doi.org/10.1017/CBO9781107590205.017


Those who generate data – for example, official statistics agencies, survey organizations, and principal investigators, henceforth all called agencies – have a long history of providing access to their data to researchers, policy analysts, decision makers, and the general public. At the same time, these agencies are obligated ethically and often legally to protect the confidentiality of data subjects’ identities and sensitive attributes. Simply stripping names, exact addresses, and other direct identifiers typically does not suffice to protect confidentiality. When the released data include variables that are readily available in external files, such as demographic characteristics or employment histories, ill-intentioned users – henceforth called intruders – may be able to link records in the released data to records in external files, thereby compromising the agency’s promise of confidentiality to those who provided the data.

In response to this threat, agencies have developed an impressive variety of strategies for reducing the risks of unintended disclosures, ranging from restricting data access to altering data before release. Strategies that fall into the latter category are known as statistical disclosure limitation (SDL) techniques. Most SDL techniques have been developed for data derived from probability surveys or censuses. Even in complete form, these data would not typically be thought of as big data, with respect to scale (numbers of cases and attributes), complexity of attribute types, or structure: most datasets are released, if not actually structured, as flat files.
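The linkage attack described above can be sketched in a few lines of code. The sketch below is purely illustrative: the datasets, field names (`zip`, `birth_year`, `sex`), and the choice of quasi-identifiers are hypothetical, not taken from the chapter. It shows how an intruder could join a "de-identified" release to an external file (such as a public voter roll) on shared quasi-identifiers, re-identifying any record whose combination of those values is unique.

```python
# Minimal sketch of a record-linkage ("intruder") attack.
# All datasets, field names, and values here are hypothetical.

released = [  # release with direct identifiers stripped, quasi-identifiers retained
    {"zip": "27701", "birth_year": 1975, "sex": "F", "income": 81000},
    {"zip": "27701", "birth_year": 1982, "sex": "M", "income": 52000},
    {"zip": "27708", "birth_year": 1975, "sex": "F", "income": 97000},
]

external = [  # external file with names attached, e.g. a public voter roll
    {"name": "Alice", "zip": "27701", "birth_year": 1975, "sex": "F"},
    {"name": "Bob",   "zip": "27701", "birth_year": 1982, "sex": "M"},
]

QUASI_IDS = ("zip", "birth_year", "sex")

def link(released, external, keys=QUASI_IDS):
    """Join the two files on quasi-identifiers; a unique match re-identifies
    the record and attaches its sensitive attribute (here, income)."""
    index = {}
    for rec in external:
        index.setdefault(tuple(rec[k] for k in keys), []).append(rec)
    matches = []
    for rec in released:
        candidates = index.get(tuple(rec[k] for k in keys), [])
        if len(candidates) == 1:  # unique match => re-identification
            matches.append({**candidates[0], "income": rec["income"]})
    return matches

for m in link(released, external):
    print(m["name"], "earns", m["income"])
```

In this toy example, the first two released records match exactly one external record each, so the intruder learns Alice's and Bob's incomes; the third record has no match and stays protected. SDL techniques aim to break such unique matches, for example by coarsening or perturbing the quasi-identifiers before release.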