
    Improved Generalization for Secure Data Publishing

    In data publishing, privacy and utility are essential for data owners and data users respectively, yet the two goals are difficult to reconcile. This tension obliges privacy researchers to find new, reliable techniques that trade off privacy against utility. Data providers such as public and private organizations (e.g. hospitals and banks) publish microdata about individuals for various research purposes. Publishing microdata may compromise the privacy of individuals, so data must be published after removing personal identifiers such as names and social security numbers; however, removing personal identifiers alone is not enough to protect privacy. The k-anonymity model is used to publish microdata while preserving individual privacy through generalization. Many state-of-the-art generalization-based techniques exist that deal with pre-defined attacks such as the background knowledge attack, similarity attack and probability attack. However, existing generalization-based techniques compromise data utility while ensuring privacy, and it remains an open question how to find an efficient technique that strikes a trade-off between privacy and utility. In this paper, we discuss existing generalization hierarchies and their limitations in detail. We also propose three new generalization techniques: conventional generalization hierarchies, divisor-based generalization hierarchies and cardinality-based generalization hierarchies. Extensive experiments on a real-world dataset confirm that our technique outperforms existing techniques in terms of utility.
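    To make the generalization idea concrete, the following is a minimal sketch (not the authors' proposed technique) of a value-generalization hierarchy for a single quasi-identifier, raised level by level until every equivalence class holds at least k records; the hierarchy levels and the toy dataset are illustrative assumptions.

        from collections import Counter

        # Illustrative generalization hierarchy for the "age" quasi-identifier:
        # level 0 = exact value, level 1 = 10-year band, level 2 = fully suppressed.
        def generalize_age(age, level):
            if level == 0:
                return str(age)
            if level == 1:
                lo = (age // 10) * 10
                return f"{lo}-{lo + 9}"
            return "*"

        def k_anonymize(ages, k):
            """Raise the generalization level until every equivalence class
            (group of identical generalized values) contains at least k records."""
            for level in range(3):
                generalized = [generalize_age(a, level) for a in ages]
                if min(Counter(generalized).values()) >= k:
                    return level, generalized
            return 2, ["*"] * len(ages)

        ages = [23, 25, 27, 31, 34, 36, 52, 55, 58]
        level, published = k_anonymize(ages, k=3)
        print(level, published)  # level 1: ['20-29', '20-29', '20-29', '30-39', ...]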

    Design Your Career - Design Your Life

    This research investigates the current plague of unemployment and underemployment that nearly half of qualified individuals in the field of Visual Communications face after graduation. Students who major in this field dedicate a tremendous amount of time, money, and energy toward developing a broad skillset that resolves critical matters of communication through visual solutions. Research has demonstrated that, despite ongoing change in the economy, industry, and marketplace, there are contributing factors that must be addressed to overcome un/underemployment regardless of circumstances. These include an underdeveloped network of professional contacts, a deficiency in recognizing or responding to changing conditions, and a limited ability to customize one’s career around a unique specialization. The purpose of this study is to give students who major in Visual Communications the information and tools needed to bring the adaptability and problem-solving in their skillset to their search for work. To explore this issue, information was gathered through secondary research, including data from federal databases, case studies, and a literature review. Return on investment for one’s education is measured against three primary themes: job satisfaction, income, and quality of life, which may provide opportunities for professionals in Visual Communications to overcome un/underemployment through career customization.

    Constructing practical Fuzzy Extractors using QIM

    Fuzzy extractors are a powerful tool to extract randomness from noisy data. A fuzzy extractor can extract randomness only if the source data is discrete, while in practice source data is often continuous. Using quantizers to transform continuous data into discrete data is a commonly used solution. However, as far as we know, no study has been made of the effect of the quantization strategy on the performance of fuzzy extractors. We construct the encoding and decoding functions of a fuzzy extractor using quantization index modulation (QIM) and express the properties of this fuzzy extractor in terms of the parameters of the underlying QIM. We present and analyze an optimal (in the sense of embedding rate) two-dimensional construction. Our 6-hexagonal tiling construction offers (log2 6)/2 - 1 ≈ 0.3 extra bits per dimension of the space compared to the known square-quantization-based fuzzy extractor.
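    As a quick worked check of that figure (assuming, as the comparison implies, that the square construction embeds one bit per dimension while the hexagonal tiling embeds log2 6 bits per two-dimensional quantizer cell):

        \frac{\log_2 6}{2} - 1 \;=\; \frac{2.585\ldots}{2} - 1 \;\approx\; 0.29 \;\approx\; 0.3 \text{ extra bits per dimension}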

    Emerging privacy challenges and approaches in CAV systems

    The growth of Internet-connected devices, Internet-enabled services and Internet of Things systems continues at a rapid pace, and their application to transport systems is heralded as game-changing. Numerous developing CAV (Connected and Autonomous Vehicle) functions, such as traffic planning, optimisation, management, and safety-critical and cooperative autonomous driving applications, rely on data from various sources. The efficacy of these functions is highly dependent on the dimensionality, amount and accuracy of the data being shared. It holds, in general, that the greater the amount of data available, the greater the efficacy of the function. However, much of this data is privacy-sensitive, including personal, commercial and research data. Location data and its correlation with identity and temporal data can help infer other personal information, such as home/work locations, age, job, behavioural features, habits and social relationships. This work categorises the emerging privacy challenges and solutions for CAV systems and identifies the knowledge gap for future research, which will minimise and mitigate privacy concerns without hampering the efficacy of the functions.

    Privacy-Preserving Reengineering of Model-View-Controller Application Architectures Using Linked Data

    When a legacy system’s software architecture cannot be redesigned, implementing additional privacy requirements is often complex, unreliable and costly to maintain. This paper presents a privacy-by-design approach to reengineer web applications as linked-data-enabled and to implement access control and privacy-preservation properties. The method is based on knowledge of the application architecture, which for the Web of data is commonly designed on the basis of a model-view-controller pattern. Whereas the wrapping techniques commonly used to expose web-application data as linked data duplicate the security source code, the new approach allows for the controlled disclosure of an application’s data while preserving non-functional properties such as privacy preservation. The solution has been implemented and compared with existing linked data frameworks in terms of reliability, maintainability and complexity.
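    Purely as an illustration of the controlled-disclosure idea at the controller layer (not the paper's implementation), the sketch below filters privacy-sensitive model fields against an access policy before the view serializes the remainder as linked data; the policy, field names and JSON-LD rendering are hypothetical.

        # Hypothetical controller-side disclosure filter for an MVC web application.
        # Only fields permitted by the access policy ever reach the linked-data view.

        DISCLOSURE_POLICY = {
            "public":     {"name", "department"},
            "researcher": {"name", "department", "email"},
        }

        def controller_get_person(model_record, requester_role):
            allowed = DISCLOSURE_POLICY.get(requester_role, set())
            return {k: v for k, v in model_record.items() if k in allowed}

        def view_as_jsonld(filtered_record):
            # Minimal JSON-LD rendering of whatever the controller disclosed.
            return {"@context": "https://schema.org", "@type": "Person", **filtered_record}

        record = {"name": "A. Smith", "department": "Oncology",
                  "email": "a.smith@example.org", "diagnosis": "..."}
        print(view_as_jsonld(controller_get_person(record, "public")))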

    Fuzzy Extractors: How to Generate Strong Keys from Biometrics and Other Noisy Data

    We provide formal definitions and efficient secure techniques for turning noisy information into keys usable for any cryptographic application and, in particular, for reliably and securely authenticating biometric data. Our techniques apply not just to biometric information, but to any keying material that, unlike traditional cryptographic keys, is (1) not reproducible precisely and (2) not distributed uniformly. We propose two primitives: a "fuzzy extractor" reliably extracts nearly uniform randomness R from its input; the extraction is error-tolerant in the sense that R will be the same even if the input changes, as long as it remains reasonably close to the original. Thus, R can be used as a key in a cryptographic application. A "secure sketch" produces public information about its input w that does not reveal w, and yet allows exact recovery of w given another value that is close to w. Thus, it can be used to reliably reproduce error-prone biometric inputs without incurring the security risk inherent in storing them. We define the primitives to be both formally secure and versatile, generalizing much prior work. In addition, we provide nearly optimal constructions of both primitives for various measures of "closeness" of input data, such as Hamming distance, edit distance, and set difference. Comment: 47 pp., 3 figures. Preliminary version in Eurocrypt 2004, Springer LNCS 3027, pp. 523-540. Differences from version 3: minor edits for grammar, clarity, and typos.
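    A minimal sketch of the code-offset idea behind a Hamming-distance secure sketch, shown here with a 5-bit repetition code purely for illustration (the paper's constructions use stronger error-correcting codes and add a randomness extractor on top):

        import secrets

        # Toy code-offset secure sketch over 5-bit strings with a repetition code.
        # SS(w) = w XOR c for a random codeword c; given a noisy w' close to w,
        # w' XOR SS(w) is close to c, so decoding recovers c and then w exactly.

        def repetition_encode(bit):       # 1 information bit -> 5 code bits
            return [bit] * 5

        def repetition_decode(bits):      # majority vote corrects up to 2 errors
            return 1 if sum(bits) >= 3 else 0

        def sketch(w):
            c = repetition_encode(secrets.randbits(1))
            return [wi ^ ci for wi, ci in zip(w, c)]           # public helper data

        def recover(w_noisy, s):
            shifted = [wi ^ si for wi, si in zip(w_noisy, s)]  # noisy codeword
            c = repetition_encode(repetition_decode(shifted))
            return [si ^ ci for si, ci in zip(s, c)]           # original w

        w = [1, 0, 1, 1, 0]         # enrollment reading (illustrative)
        s = sketch(w)
        w_noisy = [1, 0, 0, 1, 0]   # one bit flipped at verification time
        assert recover(w_noisy, s) == w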

    Regression Driven F-Transform and Application to Smoothing of Financial Time Series

    In this paper we propose to extend the definition of the fuzzy transform in order to consider an interpolation of models that are richer than the standard fuzzy transform. We focus on polynomial models, linear ones in particular, although the approach can easily be applied to other classes of models. As an example of application, we consider the smoothing of time series in finance. A comparison with moving averages is performed using the NIFTY 50 stock market index. Experimental results show that the regression-driven fuzzy transform (RDFT) provides a smoothing approximation of the time series similar to a moving average, but with a smaller delay. This is an important feature for finance and other applications where time plays a key role. Comment: IFSA-SCIS 2017, 5 pages, 6 figures, 1 table.
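    For orientation, here is a minimal sketch of the standard (degree-zero) F-transform smoothing that the regression-driven variant generalizes; the uniform triangular basis functions and the noisy toy series are illustrative assumptions, not the paper's experimental setup.

        import numpy as np

        def triangular_basis(x, centers):
            """Uniform triangular fuzzy partition A_k centred on `centers`."""
            h = centers[1] - centers[0]
            return np.clip(1.0 - np.abs(x[:, None] - centers[None, :]) / h, 0.0, 1.0)

        def f_transform_smooth(x, y, n_components=10):
            centers = np.linspace(x.min(), x.max(), n_components)
            A = triangular_basis(x, centers)                  # (len(x), n_components)
            # Direct F-transform: weighted average of y under each basis function.
            F = (A * y[:, None]).sum(axis=0) / A.sum(axis=0)
            # Inverse F-transform: recombine the components into a smooth curve.
            return (A * F[None, :]).sum(axis=1) / A.sum(axis=1)

        x = np.linspace(0.0, 1.0, 200)
        y = np.sin(6 * x) + 0.2 * np.random.randn(200)        # noisy toy series
        y_smooth = f_transform_smooth(x, y, n_components=12)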