8 research outputs found

    Privacy preservation in mobile social networks

    Get PDF
    In this day and age, with the prevalence of smartphones, networking has evolved in intricate and complex ways. In our technology-driven society, the term "social networking" was coined and came to mean using media platforms such as Myspace, Facebook, and Twitter to connect and interact with friends, family, or even complete strangers. New websites go online every day, and many carry hidden threats that the average person does not think about. One feature created for its broad utility is location-based services: many websites inform their users that the site will use their locations to enhance its functionality. However, far too many websites still do not inform their users that they may be tracked, or to what degree. In a related scenario, the evolution of these social networks has allowed countless people to share photos with others online. While this seems harmless at face value, people sometimes share photos of friends or other non-consenting individuals who do not want that picture visible to everyone, yet its visibility remains under the photo owner's control alone. Users lack privacy controls to precisely define how websites may use their location information and how others may share images of them online. This dissertation introduces two models that help mitigate these privacy concerns for social network users. MoveWithMe is an Android and iOS application that creates decoys which move along with the user in a consistent and semantically secure way. REMIND is the second model; it performs rich probability calculations to determine which friends in a social network may pose a risk of privacy breaches when sharing images. Both models have undergone extensive testing to demonstrate their effectiveness and efficiency. Includes bibliographical references.
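
    The dissertation abstract does not give implementation details for MoveWithMe; the sketch below only illustrates the general decoy idea it describes: each decoy keeps its own plausible position and is advanced in step with the real user's movement, so the location stream sent to a service does not trivially reveal the true track. All names, parameters, and logic here are hypothetical illustrations, not the dissertation's actual algorithm.

```python
import random

class Decoy:
    """A hypothetical decoy that maintains its own plausible position."""

    def __init__(self, lat, lon):
        self.lat, self.lon = lat, lon

    def step(self, d_lat, d_lon):
        # Move roughly as far as the real user moved, with small jitter so the
        # decoy's trajectory stays independent of the user's but still realistic.
        jitter = 0.2
        self.lat += d_lat * (1 + random.uniform(-jitter, jitter))
        self.lon += d_lon * (1 + random.uniform(-jitter, jitter))

def report_locations(prev_pos, user_pos, decoys):
    """Advance every decoy by the user's displacement and return the mixed set
    of coordinates that would be reported to a location-based service."""
    d_lat = user_pos[0] - prev_pos[0]
    d_lon = user_pos[1] - prev_pos[1]
    for d in decoys:
        d.step(d_lat, d_lon)
    reports = [user_pos] + [(d.lat, d.lon) for d in decoys]
    random.shuffle(reports)  # do not reveal which report is the real one
    return reports

# Example: one real user and two decoys starting in different neighborhoods.
decoys = [Decoy(38.95, -92.33), Decoy(38.92, -92.30)]
print(report_locations((38.940, -92.320), (38.941, -92.321), decoys))
```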

    ์‚ฌ์ง„์˜ ๋ฉ”ํƒ€๋ฐ์ดํ„ฐ๋ฅผ ์ด์šฉํ•œ ํ”„๋ผ์ด๋ฒ„์‹œ ์ˆ˜์ค€ ์ธก์ •๊ณผ ์˜จ๋ผ์ธ ๊ณต์œ  ํ–‰๋™ ์—ฐ๊ตฌ

    Get PDF
    Master's thesis (M.A.) -- Seoul National University, College of Social Sciences, Department of Communication, August 2017. Advisor: 이준환. As smartphones have spread, many more people take photos with them, and privacy can be violated when someone happens to see a photo stored on the phone or when a slip while operating the phone shares the wrong photo online. Photos shared online are especially revealing: they show who the owner is with, where they are, and what they are doing, so when a photo leaks, the sense of privacy violation is felt far more strongly than usual. This study therefore started from the idea that if a mobile application automatically measured the privacy level of the photos in the smartphone gallery and asked the user to confirm once more before a high-privacy photo is shared online, users could manage their photos far more conveniently and safely. Accordingly, the study identified which attributes of a photo directly raise its privacy level and discourage users from disclosing it online. Because a research application was built and the experiment used photos stored on the participants' own smartphones, the results are more accurate and reliable. The privacy level rose as the number of people in a photo increased, as faces appeared larger, and when the photo contained the face of a family member, a romantic partner, or the user; the time at which the photo was taken also affected the privacy level. Willingness to share online followed a similar pattern: more people in the photo, larger faces, and the presence of a family member's, a partner's, or the user's own face all reduced willingness to share, and photos taken between 3 a.m. and 9 a.m. or between 9 a.m. and 3 p.m. were also shared less. One interesting finding was that a few photos were judged shareable despite a high privacy level; most of these were intended to show something off. In general, shareability tracks the privacy level, but for certain photos a strategic choice of self-presentation overrides it. To detect such positive factors, the study points to follow-up analysis using machine learning techniques that can assess how well the subject came out or the overall attractiveness of a photo. Based on these results, it should be possible to build an application that judges a photo's privacy level and protects high-privacy photos from being leaked easily. Contents: 1. Problem statement; 2. Review of prior research (the concept of privacy; strategic choices between self-disclosure and privacy protection; the relationship between privacy concern and protective behavior; algorithmic photo-privacy analysis; measuring photo privacy levels with metadata, covering people in photos and the place and time taken; online photo-sharing behavior); 3. Research questions and model; 4. Preliminary survey (participants; measurement and results; significance and limitations); 5. Experiment using a smartphone app (design; participants; key concepts and measures); 6. Results (measuring privacy levels; judging online shareability; characteristics of photos with high privacy levels that can still be shared online); 7. Discussion; 8. Conclusion; References; English abstract.

    Privacy Intelligence: A Survey on Image Sharing on Online Social Networks

    Full text link
    Image sharing on online social networks (OSNs) has become an indispensable part of daily social activities, but it has also led to an increased risk of privacy invasion. The recent image leaks from popular OSN services and the abuse of personal photos using advanced algorithms (e.g. DeepFake) have prompted the public to rethink individual privacy needs when sharing images on OSNs. However, OSN image sharing itself is relatively complicated, and systems currently in place to manage privacy in practice are labor-intensive yet fail to provide personalized, accurate and flexible privacy protection. As a result, a more intelligent environment for privacy-friendly OSN image sharing is in demand. To fill the gap, we contribute a systematic survey of 'privacy intelligence' solutions that target modern privacy issues related to OSN image sharing. Specifically, we present a high-level analysis framework based on the entire lifecycle of OSN image sharing to address the various privacy issues and solutions facing this interdisciplinary field. The framework is divided into three main stages: local management, online management and social experience. At each stage, we identify typical sharing-related user behaviors and the privacy issues generated by those behaviors, and review representative intelligent solutions. The resulting analysis describes an intelligent privacy-enhancing chain for closed-loop privacy management. We also discuss the challenges and future directions at each stage, as well as in publicly available datasets. Comment: 32 pages, 9 figures. Under review
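
    As a rough illustration only, the survey's three-stage lifecycle could be represented as a simple mapping from stage to typical behaviors and privacy issues; the example entries below are paraphrased from the abstract and are not the survey's full taxonomy.

```python
# A toy representation of the three-stage OSN image-sharing lifecycle described
# in the survey. The example behaviors/issues are illustrative paraphrases.
LIFECYCLE = {
    "local management": {
        "behaviors": ["selecting photos to upload", "editing before sharing"],
        "issues": ["sensitive content captured unintentionally"],
    },
    "online management": {
        "behaviors": ["setting audience controls", "tagging other people"],
        "issues": ["co-owned photos shared without consent"],
    },
    "social experience": {
        "behaviors": ["resharing", "commenting"],
        "issues": ["spread beyond the intended audience", "abuse such as DeepFake"],
    },
}

for stage, info in LIFECYCLE.items():
    print(stage, "->", ", ".join(info["issues"]))
```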

    Understanding and controlling leakage in machine learning

    Get PDF
    Machine learning models are being increasingly adopted in a variety of real-world scenarios. However, the privacy and confidentiality implications introduced in these scenarios are not well understood. Towards better understanding such implications, we focus on scenarios involving interactions between numerous parties prior to, during, and after training relevant models. Central to these interactions is sharing information for a purpose, e.g., contributing data samples towards a dataset or returning predictions via an API. This thesis takes a step toward understanding and controlling leakage of private information during such interactions. In the first part of the thesis we investigate leakage of private information in visual data, specifically photos representative of content shared on social networks. There is a long line of work tackling leakage of personally identifiable information in social photos, especially using face- and body-level visual cues. However, we argue this presents only a narrow perspective, as images reveal a wide spectrum of multimodal private information (e.g., disabilities, name-tags). Consequently, we work towards a Visual Privacy Advisor that aims to holistically identify and mitigate privacy risks when sharing social photos. In the second part, we address leakage during training of ML models. We observe that learning algorithms are increasingly used to train models on rich decentralized datasets, e.g., personal data on numerous mobile devices. In such cases, information in the form of high-dimensional model parameter updates is anonymously aggregated from participating individuals. However, we find that the updates encode enough identifiable information to be linked back to participating individuals. We additionally propose methods to mitigate this leakage while maintaining high utility of the updates. In the third part, we discuss leakage of confidential information at inference time of black-box models. In particular, we find models lend themselves to model functionality stealing attacks: an adversary can interact with the black-box model to create a replica 'knock-off' model that exhibits similar test-set performance. As such attacks pose a severe threat to the intellectual property of the model owner, we also work towards effective defenses. Our defense strategy of introducing bounded and controlled perturbations to predictions can significantly amplify the error rates of model-stealing attackers. In summary, this thesis advances understanding of privacy leakage when information is shared in raw visual form, during training of models, and at inference time when models are deployed as black boxes. In each case, we further propose techniques to mitigate leakage of information to enable widespread adoption of these techniques in real-world scenarios.
Max Planck Institute for Informatics
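
    The thesis abstract describes the defense only at a high level. The sketch below illustrates the general idea of bounded, controlled perturbation of returned prediction probabilities, which degrades the training signal available to an adversary distilling a knock-off model; it is an assumed approximation, not the thesis's actual method.

```python
import numpy as np

def perturb_prediction(probs, epsilon=0.1, rng=None):
    """Return a perturbed copy of a probability vector whose total change is
    bounded by roughly `epsilon`, keeping the top-1 class unchanged so benign
    users still receive the correct label.

    Illustrative sketch only; the actual defense chooses perturbations in a
    more targeted, utility-aware way.
    """
    rng = rng or np.random.default_rng()
    noise = rng.uniform(-1.0, 1.0, size=probs.shape)
    noise -= noise.mean()                          # zero-sum noise keeps the total near 1
    noise *= epsilon / (np.abs(noise).sum() + 1e-12)

    perturbed = np.clip(probs + noise, 0.0, None)
    perturbed /= perturbed.sum()

    # If the perturbation would flip the predicted class, fall back to the original.
    if perturbed.argmax() != probs.argmax():
        return probs.copy()
    return perturbed

# Example: perturb a 3-class prediction returned by a black-box API.
p = perturb_prediction(np.array([0.7, 0.2, 0.1]))
print(p, p.sum())
```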

    Personalized privacy-aware image classification

    No full text
    6th ACM International Conference on Multimedia Retrieval (ICMR 2016), 6-9 June 2016. Information sharing in online social networks is a daily practice for billions of users. The sharing process facilitates the maintenance of users' social ties but also entails privacy disclosure in relation to other users and third parties. Depending on the intentions of the latter, this disclosure can become a risk. It is thus important to propose tools that empower users in their relations with social networks and the third parties connected to them. As part of USEMP, a coordinated research effort aimed at user empowerment, we introduce a system that performs privacy-aware classification of images. We show that generic privacy models perform poorly on real-life datasets in which images are contributed by individuals, because such models ignore the subjective nature of privacy. Motivated by this, we develop personalized privacy classification models that, using small amounts of user feedback, provide significantly better performance than generic models. The proposed semi-personalized models yield improvements over the best generic model ranging from 4% when 5 user-specific examples are provided to 18% with 35 examples. Furthermore, by using a semantic representation space for these models we are able to provide intuitive explanations of their decisions and to gain novel insights into individuals' privacy concerns stemming from image sharing. We hope that the results reported here will motivate other researchers and practitioners to propose new methods of exploiting user feedback and of explaining privacy classifications to users.
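
    The abstract describes semi-personalized models that adapt a generic privacy classifier with a handful of user-specific examples (5 to 35). The sketch below illustrates that general pattern, pre-training on pooled data and then updating with a user's own labeled photos; the feature representation and model choice are placeholder assumptions, not the USEMP system's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

def train_generic(features, labels):
    """Train a generic private/public image classifier on pooled data from many users."""
    clf = SGDClassifier(random_state=0)
    clf.fit(features, labels)
    return clf

def personalize(clf, user_features, user_labels, passes=10):
    """Nudge the generic model toward one user's notion of privacy using a
    handful of that user's own labeled photos (e.g. 5-35 examples)."""
    classes = np.array([0, 1])  # 0 = public, 1 = private
    for _ in range(passes):
        clf.partial_fit(user_features, user_labels, classes=classes)
    return clf

# Example with random features standing in for image descriptors (hypothetical).
rng = np.random.default_rng(0)
X_pool, y_pool = rng.normal(size=(500, 16)), rng.integers(0, 2, 500)
X_user, y_user = rng.normal(size=(10, 16)), rng.integers(0, 2, 10)
model = personalize(train_generic(X_pool, y_pool), X_user, y_user)
```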