
    Big Data Privacy Context: Literature Effects On Secure Informational Assets

    This article aims to identify research opportunities in the current big data privacy domain by evaluating the effects of the literature on secure informational assets; no prior study has analyzed this relation, and its results can foster science, technology, and business. To achieve these objectives, a Systematic Literature Review (SLR) on big data privacy is performed over the main peer-reviewed scientific journals in the Scopus database, complemented by bibliometric and text-mining analyses. The study supports big data privacy researchers with the most and least researched themes, research novelty, the most cited works and authors, the evolution of themes over time, and more. In addition, TOPSIS and VIKOR rankings were developed to evaluate literature effects against informational asset indicators, with Secure Internet Servers (SIS) chosen as the decision criterion. Results show that the big data privacy literature is strongly focused on computational aspects, while individuals, societies, organizations, and governments face a technological change whose investigation has only just begun, with growing concerns over law and regulation. The TOPSIS and VIKOR rankings differed in several positions, and the only country consistent between the literature and SIS adoption is the United States; countries in the lowest ranking positions represent future research opportunities.
    Comment: 21 pages, 9 figures
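
    As an illustration of the ranking step described above, the sketch below implements plain TOPSIS in Python: normalize the decision matrix, apply criterion weights, and score each alternative by its closeness to the ideal solution. The indicator values and weights are hypothetical placeholders, not data from the study, and the study's VIKOR variant is not reproduced.

        import numpy as np

        def topsis(matrix, weights, benefit):
            """Rank alternatives with TOPSIS; scores in [0, 1], higher is better."""
            # Vector-normalize each criterion column, then apply the weights.
            v = matrix / np.linalg.norm(matrix, axis=0) * weights
            # The ideal (anti-ideal) solution takes the best (worst) value per criterion.
            best = np.where(benefit, v.max(axis=0), v.min(axis=0))
            worst = np.where(benefit, v.min(axis=0), v.max(axis=0))
            d_best = np.linalg.norm(v - best, axis=1)
            d_worst = np.linalg.norm(v - worst, axis=1)
            return d_worst / (d_best + d_worst)  # relative closeness to the ideal

        # Hypothetical data: three countries scored on literature output and
        # secure-internet-server (SIS) adoption, both treated as benefit criteria.
        scores = topsis(np.array([[120.0, 45.0], [80.0, 60.0], [30.0, 10.0]]),
                        weights=np.array([0.5, 0.5]),
                        benefit=np.array([True, True]))
        print(np.argsort(-scores))  # country indices, best-ranked first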

    ShenZhen transportation system (SZTS): a novel big data benchmark suite

    Data analytics is at the core of the supply chain for both products and services in modern economies and societies. Big data workloads, however, place unprecedented demands on computing technologies, calling for a deep understanding and characterization of these emerging workloads. In this paper, we propose the ShenZhen Transportation System (SZTS), a novel big data Hadoop benchmark suite comprising real-life transportation analysis applications with real-life input data sets from Shenzhen, China. SZTS uniquely focuses on a specific, real-life application domain, whereas existing Hadoop benchmark suites such as HiBench and CloudRank-D consist of generic algorithms with synthetic inputs. We perform a cross-layer workload characterization at the microarchitecture level, the operating system (OS) level, and the job level, revealing unique characteristics of SZTS compared to existing Hadoop benchmarks as well as the general-purpose multi-core PARSEC benchmarks. We also study the sensitivity of workload behavior to input data size, and we propose a methodology for identifying representative input data sets.
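
    The paper's exact methodology for selecting representative inputs is not spelled out in the abstract; one common approach, sketched below under that assumption, is to standardize per-input workload metrics, project them with PCA, cluster the projections, and keep the input closest to each cluster centroid. The metric columns and all values here are hypothetical stand-ins, not SZTS measurements.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        # Hypothetical data: one row per candidate input data set, one column per
        # measured characteristic (e.g., IPC, cache miss rate, I/O wait).
        rng = np.random.default_rng(0)
        metrics = rng.random((20, 6))

        # Standardize the metrics, project onto a few principal components,
        # and cluster inputs that behave alike.
        X = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(metrics))
        km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

        # The input closest to each cluster centroid represents that cluster.
        reps = {int(np.argmin(np.linalg.norm(X - c, axis=1))) for c in km.cluster_centers_}
        print("representative input indices:", sorted(reps))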

    Collaborative Representation based Classification for Face Recognition

    By coding a query sample as a sparse linear combination of all training samples and then classifying it by evaluating which class leads to the minimal coding residual, sparse representation based classification (SRC) achieves interesting results for robust face recognition. It is widely believed that the l1-norm sparsity constraint on the coding coefficients plays the key role in the success of SRC, while its use of all training samples to collaboratively represent the query sample is largely overlooked. In this paper we discuss how SRC works and show that this collaborative representation mechanism is much more crucial to its success in face classification. SRC is a special case of collaborative representation based classification (CRC), which has various instantiations obtained by applying different norms to the coding residual and the coding coefficients. More specifically, the l1- or l2-norm characterization of the coding residual governs the robustness of CRC to outlier facial pixels, while the l1- or l2-norm characterization of the coding coefficients governs the degree of discrimination of the facial features. Extensive experiments were conducted to verify the face recognition accuracy and efficiency of CRC under different instantiations.
    Comment: A substantial revision of a previous conference paper (L. Zhang, M. Yang, et al., "Sparse Representation or Collaborative Representation: Which Helps Face Recognition?", ICCV 2011).
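
    The l2-regularized instantiation of CRC has a closed-form coding step, which makes the mechanism easy to see in code. The sketch below follows that instantiation (regularized least squares plus class-wise regularized residuals); the regularization value lam and the toy data are illustrative assumptions, not settings from the paper.

        import numpy as np

        def crc_rls(X, labels, y, lam=1e-3):
            """Classify query y given training matrix X (samples as unit-norm columns)."""
            # With an l2 penalty, the coding step has the closed-form solution
            # alpha = (X^T X + lam*I)^-1 X^T y, so no iterative sparse solver is needed.
            alpha = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
            best_class, best_score = None, np.inf
            for c in np.unique(labels):
                idx = labels == c
                # Class-wise residual, regularized by that class's coefficient energy.
                score = np.linalg.norm(y - X[:, idx] @ alpha[idx]) / np.linalg.norm(alpha[idx])
                if score < best_score:
                    best_class, best_score = c, score
            return best_class

        # Toy usage: 50-dimensional features, 3 classes with 10 samples each.
        rng = np.random.default_rng(0)
        X = rng.standard_normal((50, 30))
        X /= np.linalg.norm(X, axis=0)      # unit-normalize the columns
        labels = np.repeat(np.arange(3), 10)
        print(crc_rls(X, labels, X[:, 4]))  # a class-0 sample as the query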

    Analyse the risks of ad hoc programming in web development and develop a metrics of appropriate tools

    Today the World Wide Web has become one of the most powerful tools for business promotion and social networking. As the use of websites and web applications to promote businesses has increased drastically over the past few years, managing them and protecting them from security threats has become a complicated task for organizations. At the same time, most web projects are at risk and less secure due to a lack of quality programming. Although plenty of frameworks are freely available to improve programming quality, most programmers rely on ad hoc programming rather than on frameworks that could save them time and repeated work. This research identifies different frameworks for PHP and .NET programming and evaluates their benefits and drawbacks in web application development. It aims to help web development companies minimize the risks involved in developing large web projects and to develop metrics for choosing appropriate frameworks for specific projects. The study examined how web applications were developed in different software companies and the advantages of using frameworks while developing them. The findings show that it was not only developer experience that motivated the use of frameworks: the main reasons web developers avoid frameworks are that they find them difficult to learn and implement, while the motivating factors for using frameworks were self-efficacy, a habit of learning new things, and awareness of the benefits of frameworks. The research recommends that companies use appropriate frameworks to protect their projects against security threats such as SQL injection and RSS injection.
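
    The SQL injection risk cited above is easy to demonstrate. The sketch below uses Python's standard sqlite3 module as a stand-in for any framework's database layer, contrasting ad hoc string interpolation with the parameterized queries that frameworks typically encourage; the table contents and payload are illustrative.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
        conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

        user_input = "' OR '1'='1"  # a classic injection payload

        # Ad hoc style: interpolating user input into the SQL string lets the
        # payload rewrite the query, so every row comes back.
        unsafe = f"SELECT * FROM users WHERE name = '{user_input}'"
        print(conn.execute(unsafe).fetchall())

        # Parameterized style, which frameworks typically enforce: the driver
        # treats the input as pure data, so nothing matches.
        safe = "SELECT * FROM users WHERE name = ?"
        print(conn.execute(safe, (user_input,)).fetchall())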

    A Compendium of Core Lexicon Checklists

    Core Lexicon (CoreLex) is a relatively new approach to assessing lexical use in discourse. CoreLex examines the specific lexical items used to tell a story, that is, how typical a speaker's lexical items are compared with a normative sample. This method has great potential for clinical use because CoreLex measures are fast, easy to administer, and correlate with both microlinguistic and macrolinguistic discourse measures. The purpose of this article is to provide clinicians with a centralized resource for currently available CoreLex checklists, including information on their development, norms, and guidelines for use.
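
    As a rough illustration of how a CoreLex checklist is scored, the sketch below counts how many items from a checklist appear in a transcript. The checklist fragment is hypothetical, and published CoreLex scoring works on lemmas (so an inflected form like "went" would count as "go"), which this simple token match omits.

        # Hypothetical checklist fragment; real norms are larger and lemma-based.
        CORE_LEXICON = {"cinderella", "go", "ball", "prince", "shoe", "fit", "marry"}

        def corelex_score(transcript: str) -> int:
            """Count how many checklist items the speaker produced at least once."""
            tokens = {w.strip(".,!?\"'").lower() for w in transcript.split()}
            return len(CORE_LEXICON & tokens)

        print(corelex_score("Cinderella went to the ball and lost a shoe."))  # 3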