Relations among Security Metrics for Template Protection Algorithms
Many biometric template protection algorithms have been proposed, mainly in
two approaches: biometric feature transformation and biometric cryptosystems.
Security evaluations of the proposed algorithms are often conducted in
inconsistent manners. Thus, there is strong demand for common evaluation
metrics that allow easier comparison among the many algorithms. Simoens et al.
and Nagar et al. proposed good metrics covering nearly all aspects of the
requirements expected of biometric template protection algorithms. One
drawback of the two papers is that they are biased toward experimental
evaluation of the security of biometric template protection algorithms.
Therefore, it remained difficult, mainly for algorithms in the biometric
cryptosystem approach, to prove their security according to the proposed
metrics. This paper gives formal definitions for the security metrics proposed
by Simoens et al. and Nagar et al. so that they can be used for the evaluation
of both approaches. Further, this paper discusses the relations among several
notions of security metrics
Homomorphic Encryption for Speaker Recognition: Protection of Biometric Templates and Vendor Model Parameters
Data privacy is crucial when dealing with biometric data. Accounting for the
latest European data privacy regulation and payment service directive,
biometric template protection is essential for any commercial application.
By ensuring unlinkability across biometric service operators, irreversibility
of leaked encrypted templates, and renewability of, e.g., voice models
following the i-vector paradigm, biometric voice-based systems can be prepared
for the latest EU data privacy legislation. Employing Paillier cryptosystems,
Euclidean and cosine comparators are known to meet data privacy demands
without loss of discrimination or calibration performance. Bridging gaps from template
protection to speaker recognition, two architectures are proposed for the
two-covariance comparator, serving as a generative model in this study. The
first architecture preserves privacy of biometric data capture subjects. In the
second architecture, model parameters of the comparator are encrypted as well,
such that biometric service providers can supply the same comparison modules
employing different key pairs to multiple biometric service operators. An
experimental proof-of-concept and complexity analysis are carried out on data
from the 2013-2014 NIST i-vector machine learning challenge
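The additive homomorphism that such Paillier-based comparators rely on can be illustrated with a toy sketch. This is a simplified stand-in, not the paper's two-covariance architecture: the demo-sized key, the Fermat primality test, and the `enc_inner_product` helper are all illustrative assumptions.

```python
import math
import random

def keygen(bits=64):
    # Toy key generation: demo-sized primes (real systems use >= 2048-bit moduli).
    def prime(b):
        while True:
            p = random.getrandbits(b) | (1 << (b - 1)) | 1
            if all(pow(a, p - 1, p) == 1 for a in (2, 3, 5, 7, 11)):
                return p
    p = prime(bits)
    q = prime(bits)
    while q == p:
        q = prime(bits)
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow((pow(n + 1, lam, n * n) - 1) // n, -1, n)
    return (n,), (n, lam, mu)          # public key, secret key

def enc(pk, m):
    (n,) = pk
    r = random.randrange(1, n)          # randomiser, coprime to n w.h.p.
    return pow(n + 1, m % n, n * n) * pow(r, n, n * n) % (n * n)

def dec(sk, c):
    n, lam, mu = sk
    return (pow(c, lam, n * n) - 1) // n * mu % n

def enc_inner_product(pk, enc_x, w):
    # Server-side comparator step: computes Enc(sum_i w_i * x_i)
    # from ciphertexts of x and the server's plaintext weights w,
    # using Enc(a) * Enc(b) = Enc(a + b) and Enc(a)^k = Enc(k * a).
    (n,) = pk
    acc = enc(pk, 0)
    for cx, wi in zip(enc_x, w):
        acc = acc * pow(cx, wi, n * n) % (n * n)
    return acc
```

The server never sees the plaintext feature vector, which is the property that lets a biometric service operator run the comparison without learning the template.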
The Meeting of Acquaintances: A Cost-efficient Authentication Scheme for Light-weight Objects with Transient Trust Level and Plurality Approach
Wireless sensor networks consist of a large number of distributed sensor
nodes, so potential risks are becoming more and more unpredictable. New
entrants pose potential risks when they move into the secure zone. To build a
door wall that keeps the system safe and secure, many recent research works
have applied an initial authentication process. However, the majority of
previous articles focused only on a Central Authority (CA), which leads to
increased computation cost and energy consumption in specific Internet of
Things (IoT) cases. Hence, in this article, we lessen the importance of such
third parties by proposing an enhanced authentication mechanism that includes
key management and evaluation based on past interactions, to assist objects
joining a secured area without any nearby CA. We refer to a mobility dataset
from CRAWDAD collected at the University Politehnica of Bucharest and rebuild
it into a new random dataset larger than the old one. The new dataset serves
as input to a simulated authentication algorithm for observing the
communication cost and resource usage of devices. Our proposal makes
authentication flexible while remaining strict with unknown devices entering
the secured zone. The threshold on the maximum number of friends can be
modified based on the optimization of the symmetric-key algorithm to diminish
communication costs (our experimental results show costs below 2000 bits,
smaller than in previous schemes) and to raise flexibility in
resource-constrained environments.
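The idea of authenticating a newcomer through past acquaintances instead of a CA can be sketched roughly as follows. This is a hypothetical illustration only: the `Device` class, the MAC-based vouching ticket, and the `MAX_FRIENDS` threshold are our assumptions, not the paper's exact protocol.

```python
import hmac
import hashlib
import secrets

MAX_FRIENDS = 10  # cap on the friend list, tunable like the paper's threshold

def mac(key, msg):
    return hmac.new(key, msg, hashlib.sha256).digest()

class Device:
    def __init__(self, name):
        self.name = name
        self.friends = {}  # peer name -> shared symmetric key

    def meet(self, peer):
        # Direct pairing: establish a shared key on first contact,
        # subject to the friend-list threshold on both sides.
        if len(self.friends) < MAX_FRIENDS and len(peer.friends) < MAX_FRIENDS:
            k = secrets.token_bytes(32)
            self.friends[peer.name] = k
            peer.friends[self.name] = k
            return True
        return False

    def vouch_for(self, newcomer_name, verifier_name):
        # A common friend issues a ticket binding the newcomer to the verifier,
        # using the key it already shares with the verifier.
        if verifier_name in self.friends:
            return mac(self.friends[verifier_name], newcomer_name.encode())
        return None

    def verify(self, newcomer_name, voucher_name, ticket):
        # Accept the newcomer if a trusted friend vouched for it; no CA needed.
        k = self.friends.get(voucher_name)
        return k is not None and hmac.compare_digest(
            ticket, mac(k, newcomer_name.encode()))
```

Only symmetric-key operations appear on the devices, which matches the resource-constrained setting the abstract targets.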
Secure Cloud-Edge Deployments, with Trust
Assessing the security level of IoT applications to be deployed to
heterogeneous Cloud-Edge infrastructures operated by different providers is a
non-trivial task. In this article, we present a methodology that permits
expressing security requirements for IoT applications, as well as
infrastructure security capabilities, in a simple and declarative manner, and
automatically obtaining an explainable assessment of the security level of the
possible application deployments. The methodology also considers the impact of
trust relations among different stakeholders using or managing Cloud-Edge
infrastructures. A lifelike example is used to showcase the prototyped
implementation of the methodology
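The requirement/capability matching at the core of such an assessment can be sketched with plain sets and a numeric trust degree. This is a minimal stand-in under our own assumptions (capability names, a single `min_trust` cut-off); the actual methodology's declarative language and trust model are richer.

```python
def eligible_deployments(app_reqs, infra_caps, trust, min_trust=0.8):
    # app_reqs:   {component: set of required security capabilities}
    # infra_caps: {node: set of capabilities the node's infrastructure offers}
    # trust:      {node: degree of trust in the node's operator, in [0, 1]}
    placements = {}
    for comp, reqs in app_reqs.items():
        placements[comp] = [
            node for node, caps in infra_caps.items()
            if reqs <= caps                      # every requirement is covered
            and trust.get(node, 0) >= min_trust  # and the operator is trusted
        ]
    return placements

app = {"db": {"encrypted_storage", "access_control"}, "ui": {"tls"}}
infra = {"cloud1": {"encrypted_storage", "access_control", "tls"},
         "edge1": {"tls"}}
trust = {"cloud1": 0.9, "edge1": 0.5}
```

Because each placement decision is just a set-inclusion check plus a trust comparison, the result is explainable: one can name exactly which requirement or trust relation disqualified a node.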
Ensuring patients' privacy in cryptography-based electronic health records using bio-cryptography
Several recent works have proposed and implemented cryptography as a means to
preserve the privacy and security of patients' health data. Nevertheless, the
weakest point of electronic health record (EHR) systems that rely on these
cryptographic schemes is key management. Thus, this paper presents the
development of a privacy and security system for cryptography-based EHR that
takes advantage of the uniqueness of fingerprint and iris characteristic
features to secure cryptographic keys in a bio-cryptography framework. The
results of the system evaluation showed significant improvements in the time
efficiency of this approach to cryptography-based EHR. Both the fuzzy vault
and the fuzzy commitment demonstrated a false acceptance rate (FAR) of 0%,
which reduces the likelihood of impostors gaining access to the keys
protecting patients' protected health information. This result also justifies
the feasibility of implementing fuzzy key binding schemes in real
applications, especially the fuzzy vault, which demonstrated better
performance during key reconstruction
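The fuzzy commitment scheme mentioned above can be illustrated with a minimal sketch. It uses a simple repetition code for error correction, so the parameters are illustrative; real deployments bind the key with stronger codes (e.g. BCH) sized to the biometric's error rate.

```python
import hashlib
import secrets

R = 5  # repetition factor: corrects up to 2 flipped bits per key bit

def encode(key_bits):
    # Repetition code: each key bit becomes R identical code bits.
    return [b for b in key_bits for _ in range(R)]

def decode(code_bits):
    # Majority vote over each block of R bits.
    return [int(sum(code_bits[i:i + R]) * 2 > R)
            for i in range(0, len(code_bits), R)]

def commit(key_bits, template_bits):
    # Bind the key to the biometric: store hash(key) and codeword XOR template.
    # template_bits must have length len(key_bits) * R.
    delta = [k ^ t for k, t in zip(encode(key_bits), template_bits)]
    return hashlib.sha256(bytes(key_bits)).hexdigest(), delta

def release(commitment, template_bits):
    # A fresh, noisy reading of the same biometric unlocks the key;
    # the stored hash rejects any wrongly decoded candidate.
    h, delta = commitment
    key_bits = decode([d ^ t for d, t in zip(delta, template_bits)])
    if hashlib.sha256(bytes(key_bits)).hexdigest() == h:
        return key_bits
    return None
```

Only the hash and the XOR difference are stored, so the template itself never appears in the EHR database, which is the irreversibility property the paper relies on.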
Deliverable JRA1.1: Evaluation of current network control and management planes for multi-domain network infrastructure
This deliverable includes a compilation and evaluation of available control and management architectures and protocols applicable to a multilayer infrastructure in a multi-domain Virtual Network environment. The scope of this deliverable is mainly focused on the virtualisation of the resources within a network and at processing nodes. The virtualisation of the FEDERICA infrastructure allows the provisioning of its available resources to users by means of FEDERICA slices. A slice is seen by the user as a real physical network under his/her domain; however, it maps to a logical partition (a virtual instance) of the physical FEDERICA resources. A slice is built to exhibit, to the highest degree, all the principles applicable to a physical network (isolation, reproducibility, manageability, ...).
Currently, there are no standard definitions available for network virtualisation or its associated architectures. Therefore, this deliverable proposes the Virtual Network layer architecture and evaluates a set of management and control planes that can be used for the partitioning and virtualisation of the FEDERICA network resources. This evaluation has been performed taking into account an initial set of FEDERICA requirements; a possible extension of the selected tools will be evaluated in future deliverables. The studies described in this deliverable define the virtual architecture of the FEDERICA infrastructure. During this activity, the need has been recognised to establish a new set of basic definitions (a taxonomy) for the building blocks that compose the so-called slice, i.e. the virtual network instantiation (which is virtual with regard to the abstracted view of the building blocks of the FEDERICA infrastructure) and its architectural plane representation. These definitions will be established as a common nomenclature for the FEDERICA project.
Other important aspects when defining a new architecture are the user requirements. It is crucial that the resulting architecture fits the demands that users may have. Since this deliverable has been produced at the same time as the contact process with users, carried out by the project activities related to the Use Case definitions, JRA1 has proposed a set of basic Use Cases to be considered as a starting point for its internal studies. When researchers want to experiment with their developments, they need not only network resources on their slices, but also a slice of the processing resources. These processing slice resources are understood as virtual machine instances that users can use as software routers or end nodes, onto which to download the software protocols or applications they have produced and want to assess in a realistic environment. Hence, this deliverable also studies the APIs of several virtual machine management software products in order to identify which best suits FEDERICA's needs.
Towards cross-lingual alerting for bursty epidemic events
Background: Online news reports are increasingly becoming a source for event
based early warning systems that detect natural disasters. Harnessing the
massive volume of information available from multilingual newswire presents as
many challenges as opportunities due to the patterns of reporting complex
spatiotemporal events. Results: In this article we study the problem of
utilising correlated event reports across languages. We track the evolution of
16 disease outbreaks using 5 temporal aberration detection algorithms on
text-mined events classified according to disease and outbreak country. Using
ProMED reports as a silver standard, comparative analysis of news data for 13
languages over a 129-day trial period showed improved sensitivity, F1 and
timeliness across most models using cross-lingual events. We report a detailed
case-study analysis for the 2010 cholera outbreak in Angola, which highlights the challenges
faced in correlating news events with the silver standard. Conclusions: The
results show that automated health surveillance using multilingual text mining
has the potential to turn low value news into high value alerts if informed
choices are used to govern the selection of models and data sources. An
implementation of the C2 alerting algorithm using multilingual news is
available at the BioCaster portal http://born.nii.ac.jp/?page=globalroundup
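The C2 algorithm referenced here belongs to the EARS family of temporal aberration detectors; a minimal sketch follows. The parameter choices (7-day baseline, 2-day guard band, 3-sigma threshold) are the commonly cited EARS defaults, not necessarily the portal's exact configuration.

```python
from statistics import mean, stdev

def c2_alerts(counts, baseline=7, gap=2, threshold=3.0):
    # EARS C2: compare the current day's event count with a baseline window
    # separated from it by a guard band, and raise an alert when the
    # standardized statistic exceeds the threshold.
    alerts = []
    for t in range(baseline + gap, len(counts)):
        window = counts[t - gap - baseline : t - gap]
        mu, sigma = mean(window), stdev(window)
        stat = (counts[t] - mu) / max(sigma, 1e-9)  # guard against zero stdev
        if stat > threshold:
            alerts.append(t)
    return alerts
```

The guard band keeps the early rise of an outbreak out of its own baseline, which is what lets the detector remain sensitive to bursty events.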
Design and Deployment of an Access Control Module for Data Lakes
Nowadays big data is considered an extremely valued asset for companies, which are discovering new avenues to use it for their business profit. However, an organization’s ability to effectively extract valuable information from data is based on its knowledge management infrastructure. Thus, most organizations are transitioning from data warehouse (DW) storages to data lake (DL) infrastructures, from which further insights are derived.
The present work is carried out as part of a cybersecurity project in a financial institution that manages vast volumes and a wide variety of data kept in a data lake. Although the DL is presented as the answer to the current big data scenario, this infrastructure presents certain flaws in authentication and access control. Preceding work on DL access control points out that the main goal is to avoid fraudulent behaviors derived from users' access, such as secondary use, that could result in business data being exposed to third parties.
To overcome this risk, traditional mechanisms attempt to identify these behaviors based on rules; however, they cannot reveal all the different kinds of fraud because they only look for known patterns of misuse.
The present work proposes a novel access control system for data lakes, assisted by Oracle's database audit trail and based on anomaly detection mechanisms, that automatically looks for events that do not conform to the normal or expected behavior. Thus, the overall aim of this project is to develop and deploy an automated system for identifying abnormal accesses to the DL, which can be separated into four subgoals: explore the different technologies that could be applied in the domain of anomaly detection, design the solution, deploy it, and evaluate the results.
For this purpose, feature engineering is performed, and four different unsupervised ML models are built and evaluated. According to the quality of the results, the best model is finally productionized with Docker.
To conclude, although anomaly detection has been a long-standing yet active research area for several decades, there are still some unique problem complexities and challenges that leave the way open for the proposed solution to be further improved.
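The anomaly-detection idea can be illustrated with a frequency-based toy scorer over audit-trail events. This is a deliberately simple stand-in for the four unsupervised ML models the project evaluates; the event features (user, table, hour) are illustrative assumptions, not the thesis's engineered feature set.

```python
from collections import Counter

def fit_profile(history):
    # history: (user, table, hour) tuples mined from the audit trail,
    # assumed to reflect normal access behavior.
    return Counter(history)

def anomaly_score(profile, event, total):
    # Rarity-based score in [0, 1]: combinations never seen before score 1.0.
    return 1.0 - profile.get(event, 0) / total

def flag_anomalies(history, new_events, threshold=0.99):
    # Flag accesses whose (user, table, hour) pattern is (nearly) unseen,
    # i.e. events that do not conform to the expected behavior.
    profile = fit_profile(history)
    total = len(history)
    return [e for e in new_events
            if anomaly_score(profile, e, total) >= threshold]
```

A model like this needs no labeled fraud examples, which is the advantage of the anomaly-detection approach over rule-based misuse detection described above.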