CHORUS Deliverable 2.2: Second report - identification of multi-disciplinary key issues for gap analysis toward EU multimedia search engines roadmap
After addressing the state of the art during the first year of CHORUS and establishing the existing landscape of
multimedia search engines, in our second year we identified and analysed gaps in the European research effort.
In this period we focused on three directions, namely technological issues, user-centred issues and use-cases, and socio-
economic and legal aspects. These were assessed through two central studies: first, a concerted functional breakdown
of a generic multimedia search engine, and second, a set of representative use-case descriptions with a related discussion of
the technological challenges they imply. Both studies were carried out in cooperation and consultation with the
community at large through EC concertation meetings (multimedia search engines cluster), several meetings with our
Think-Tank, presentations at international conferences, and surveys addressed to EU project coordinators as well as
coordinators of national initiatives. Based on the feedback obtained, we identified two types of gaps, namely core
technological gaps that involve research challenges, and “enablers”, which are not necessarily technical research
challenges but have an impact on innovation progress. New socio-economic trends are presented, as well as emerging legal
challenges.
Joint optimisation of privacy and cost of in-app mobile user profiling and targeted ads
Online mobile advertising ecosystems provide advertising and analytics
services that collect, aggregate, process, and trade large amounts of consumers'
personal data and carry out interest-based ad targeting, which raises
serious privacy risks and a growing discomfort among users of
internet services. In this paper, we address users' privacy concerns by
developing a cost-effective dynamic optimisation framework that
preserves user privacy against profiling, ad-based inference, temporal
app-usage behavioural patterns, and interest-based ad targeting. A major challenge
in solving this dynamic model is the lack of knowledge of time-varying updates
during the profiling process. We formulate a mixed-integer optimisation problem and
derive an equivalent problem to show that the proposed algorithm does not require
knowledge of time-varying updates in user behaviour. We then develop an
online control algorithm that solves the equivalent problem using Lyapunov
optimisation, overcomes the difficulty of solving a nonlinear program by
decomposing it into cases, and achieves a trade-off between user privacy,
cost, and targeted ads. We carry out extensive experiments and demonstrate the
proposed framework's applicability by implementing its critical components
in a proof-of-concept `System App'. We compare the proposed framework with other
privacy-protecting approaches and show that it achieves better privacy and
functionality across various performance parameters.
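The drift-plus-penalty pattern behind Lyapunov optimisation can be sketched in a few lines. The action set, cost function, and leakage function below are toy assumptions for illustration only, not the paper's actual formulation:

```python
# Hypothetical drift-plus-penalty step, the core move in Lyapunov optimisation:
# each slot, pick the action minimising V*cost + queue*leakage, then update a
# virtual queue that tracks how far cumulative leakage exceeds its budget.

def drift_plus_penalty_step(queue, actions, cost, privacy_leak, budget, V):
    """Choose the action minimising V*cost(a) + queue*privacy_leak(a)."""
    best = min(actions, key=lambda a: V * cost(a) + queue * privacy_leak(a))
    # Virtual queue grows when this slot's leakage exceeds the per-slot budget.
    queue = max(queue + privacy_leak(best) - budget, 0.0)
    return best, queue

# Toy usage: actions are obfuscation levels 0..3; a higher level costs more
# resources but leaks less private information.
cost = lambda a: a              # resource cost grows with obfuscation level
leak = lambda a: 1.0 / (a + 1)  # leakage shrinks with obfuscation level
q = 0.0
for _ in range(5):
    a, q = drift_plus_penalty_step(q, [0, 1, 2, 3], cost, leak, budget=0.5, V=1.0)
```

As the virtual queue grows, the leakage term dominates the objective and the controller is pushed toward stronger obfuscation, which is how the scheme trades cost against privacy over time.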
Privacy Preserving Large Language Models: ChatGPT Case Study Based Vision and Framework
Generative Artificial Intelligence (AI) tools based on Large Language
Models (LLMs) use billions of parameters to extensively analyse large datasets
and can extract critical private information such as context, specific details,
and identifying information. This has raised serious threats to user privacy
and a reluctance to use such tools. This article proposes a conceptual model
called PrivChatGPT, a privacy-preserving model for LLMs that consists of two
main components: preserving user privacy during data
curation/pre-processing, together with preserving private context, and a
private training process for large-scale data. To demonstrate its
applicability, we show how a private mechanism could be integrated into an
existing model for training LLMs to protect user privacy; specifically, we
employ differential privacy and private training using Reinforcement Learning
(RL). We measure the privacy loss and evaluate the degree of uncertainty, or
randomness, once differential privacy is applied. The model further recursively
evaluates the level of privacy guarantees and the uncertainty of
public databases and resources during each update, when new information is added
for training purposes. To critically evaluate the use of differential privacy
for private LLMs, we hypothetically compare it with other mechanisms, e.g., Blockchain,
private information retrieval (PIR), and randomisation, along various performance measures
such as model performance and accuracy, computational complexity, and privacy
vs. utility. We conclude that differential privacy, randomisation, and
obfuscation can impact the utility and performance of trained models, whereas
the use of Tor, Blockchain, and PIR may introduce additional computational
complexity and high training latency. We believe that the proposed model could
serve as a benchmark for proposing privacy-preserving LLMs for generative AI
tools.
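As one concrete example of how differential privacy enters model training, DP-SGD-style training clips each per-example gradient to bound sensitivity and then adds calibrated Gaussian noise to the aggregate. The sketch below is a generic illustration of that mechanism under assumed names, not PrivChatGPT's actual implementation:

```python
# Minimal sketch of the Gaussian mechanism used in DP-SGD-style training:
# clip each per-example gradient to an L2 bound, sum, then add per-coordinate
# Gaussian noise scaled to the clipping bound. Names and parameters are
# illustrative assumptions.
import math
import random

def clip(grad, clip_norm):
    """Scale grad down so its L2 norm is at most clip_norm."""
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    return [g * scale for g in grad]

def noisy_grad_sum(grads, clip_norm, noise_multiplier, rng):
    """Sum the clipped gradients and add N(0, (noise_multiplier*clip_norm)^2)
    noise per coordinate; clipping bounds each example's contribution, which
    is what lets the noise scale give a differential privacy guarantee."""
    dim = len(grads[0])
    total = [sum(clip(g, clip_norm)[i] for g in grads) for i in range(dim)]
    sigma = noise_multiplier * clip_norm
    return [t + rng.gauss(0.0, sigma) for t in total]

rng = random.Random(0)
out = noisy_grad_sum([[3.0, 4.0], [0.6, 0.8]], clip_norm=1.0,
                     noise_multiplier=1.0, rng=rng)
```

A larger `noise_multiplier` gives a stronger privacy guarantee (smaller privacy loss per step) at the cost of noisier updates, which is the privacy-vs-utility trade-off the abstract describes.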
CHORUS Deliverable 4.5: Report of the 3rd CHORUS Conference
The third and last CHORUS conference on Multimedia Search Engines took place from the 26th to the 27th of May 2009 in Brussels, Belgium. About 100 participants from 15 European countries, the US, Japan and Australia learned about the latest developments in the domain. An exhibition of 13 stands presented 16 research projects currently under way around the world.
Shortest Path Computation with No Information Leakage
Shortest path computation is one of the most common queries in location-based
services (LBSs). Although particularly useful, such queries raise serious
privacy concerns. Exposing to a (potentially untrusted) LBS the client's
position and her destination may reveal personal information, such as social
habits, health condition, shopping preferences, lifestyle choices, etc. The
only existing method for privacy-preserving shortest path computation follows
the obfuscation paradigm; it prevents the LBS from inferring the source and
destination of the query with a probability higher than a threshold. This
implies, however, that the LBS still deduces some information (albeit not
exact) about the client's location and her destination. In this paper we aim at
strong privacy, where the adversary learns nothing about the shortest path
query. We achieve this via established private information retrieval
techniques, which we treat as black-box building blocks. Experiments on real,
large-scale road networks assess the practicality of our schemes.
Comment: VLDB201
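Since the scheme treats private information retrieval as a black-box building block, one standard instantiation is the classic two-server XOR protocol, assuming two non-colluding servers holding integer records. The sketch below illustrates that generic building block, not the paper's specific construction:

```python
# Two-server XOR PIR sketch: the client sends each server a random-looking
# subset of indices; the two subsets differ only at the desired index i, so
# XORing the replies cancels every record except db[i], while each server
# alone sees a uniformly random query and learns nothing about i.
import secrets

def server_reply(db, subset):
    """Each server XORs together the records named in the query subset."""
    out = 0
    for idx in subset:
        out ^= db[idx]
    return out

def pir_fetch(db, i):
    n = len(db)
    s1 = {j for j in range(n) if secrets.randbits(1)}   # uniformly random subset
    s2 = s1 ^ {i}                                       # differs from s1 only at i
    return server_reply(db, s1) ^ server_reply(db, s2)  # equals db[i]

db = [7, 13, 42, 99]
assert pir_fetch(db, 2) == 42
```

The price of this strong guarantee is that each server touches the whole database per query, which is why the practicality experiments on large road networks matter.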
Privacy in the Smart City - Applications, Technologies, Challenges and Solutions
Many modern cities strive to integrate information technology into every aspect of city life to create so-called smart cities. Smart cities rely on a large number of application areas and technologies to realize complex interactions between citizens, third parties, and city departments. This overwhelming complexity is one reason why holistic privacy protection only rarely enters the picture. A lack of privacy can result in discrimination and social sorting, creating a fundamentally unequal society. To prevent this, we believe that a better understanding of smart cities and their privacy implications is needed. We therefore systematize the application areas, enabling technologies, privacy types, attackers, and data sources for the attacks, giving structure to the fuzzy term “smart city”. Based on our taxonomies, we describe existing privacy-enhancing technologies, review the state of the art in real cities around the world, and discuss promising future research directions. Our survey can serve as a reference guide, contributing to the development of privacy-friendly smart cities.