7,418 research outputs found

    Storage Solutions for Big Data Systems: A Qualitative Study and Comparison

    Full text link
    Big data systems development is full of challenges in view of the variety of application areas and domains that this technology promises to serve. Fundamental design decisions in big data systems design typically include choosing appropriate storage and computing infrastructures. In this age of heterogeneous systems that integrate different technologies into an optimized solution to a specific real-world problem, big data systems are no exception. As far as storage is concerned, the primary facet is the storage infrastructure, and NoSQL appears to be the technology that best fulfills its requirements. However, every big data application has different data characteristics, and thus its data fits a different data model. This paper presents a feature and use case analysis and comparison of the four main data models, namely document-oriented, key-value, graph, and wide-column. Moreover, a feature analysis of 80 NoSQL solutions is provided, elaborating on the criteria and points that a developer must consider while making a choice. Big data storage typically needs to communicate with the execution engine and other processing and visualization technologies to create a comprehensive solution, which brings the second facet of big data storage, big data file formats, into the picture. The second half of the paper compares the advantages, shortcomings, and possible use cases of the big data file formats available for Hadoop, which is the foundation for most big data computing technologies. Decentralized storage and blockchain are seen as the next generation of big data storage, and their challenges and future prospects are also discussed.
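
    The four-way data model comparison the abstract names is easiest to picture with a concrete record. The minimal sketch below shows one hypothetical "user profile" expressed in the document-oriented, key-value, and wide-column styles; plain Python dictionaries stand in for the stores, the field names and values are invented, and no particular NoSQL product or API is implied.

        # Illustrative only: one hypothetical "user profile" record under three of the
        # data models compared in the paper. Plain Python dicts stand in for the stores.

        # Document-oriented: one nested, self-describing document per entity.
        document = {
            "user_id": "u42",
            "name": "Ada",
            "logins": [{"ts": "2024-01-01", "ip": "10.0.0.1"}],
        }

        # Key-value: opaque values addressed by a composite key; the store knows
        # nothing about the structure of the value.
        key_value = {
            "user:u42:name": "Ada",
            "user:u42:last_login": "2024-01-01",
        }

        # Wide-column: rows keyed by an identifier, with columns grouped into families.
        wide_column = {
            "u42": {
                "profile": {"name": "Ada"},
                "activity": {"2024-01-01": "10.0.0.1"},
            },
        }

        # A graph model would instead make the user a vertex and each login an edge,
        # so relationships become first-class and traversable.
        print(document["name"], key_value["user:u42:name"], wide_column["u42"]["profile"]["name"])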

    Grand Challenges of Traceability: The Next Ten Years

    Full text link
    In 2007, the software and systems traceability community met at the first Natural Bridge symposium on the Grand Challenges of Traceability to establish and address research goals for achieving effective, trustworthy, and ubiquitous traceability. Ten years later, in 2017, the community came together to evaluate a decade of progress towards achieving these goals. These proceedings document some of that progress. They include a series of short position papers, representing current work in the community organized across four process axes of traceability practice. The sessions covered topics from Trace Strategizing, Trace Link Creation and Evolution, Trace Link Usage, real-world applications of Traceability, and Traceability Datasets and Benchmarks. Two breakout groups focused on the importance of creating and sharing traceability datasets within the research community, and discussed challenges related to the adoption of tracing techniques in industrial practice. Members of the research community are engaged in many active, ongoing, and impactful research projects. Our hope is that ten years from now we will be able to look back at a productive decade of research and claim that we have achieved the overarching Grand Challenge of Traceability, which calls for traceability to be always present, built into the engineering process, and to have "effectively disappeared without a trace". We hope that others will see the potential that traceability has for empowering software and systems engineers to develop higher-quality products at increasing levels of complexity and scale, and that they will join the active community of Software and Systems traceability researchers as we move forward into the next decade of research.

    From access and integration to mining of secure genomic data sets across the grid

    Get PDF
    The UK Department of Trade and Industry (DTI) funded BRIDGES project (Biomedical Research Informatics Delivered by Grid Enabled Services) has developed a Grid infrastructure to support cardiovascular research. This includes the provision of a compute Grid and a data Grid infrastructure with security at its heart. In this paper we focus on the BRIDGES data Grid. A primary aim of the BRIDGES data Grid is to help control the complexity of accessing and integrating a myriad of genomic data sets through simple Grid-based tools. We outline these tools and how they are delivered to end-user scientists. We also describe how these tools are to be extended in the BBSRC-funded Grid Enabled Microarray Expression Profile Search (GEMEPS) project to support a richer vocabulary of search capabilities for mining microarray data sets. As with BRIDGES, fine-grained Grid security underpins GEMEPS.

    A Technology Proposal for a Management Information System for the Director’s Office, NAL.

    Get PDF
    This technology proposal attempts to give a viable solution for a Management Information System (MIS) for the Director's Office. In today's IT scenario, an Organization's success greatly depends on its ability to get accurate and timely data on its varied operations and to manage this data effectively to guide its activities and meet its goals. To cater to the information needs of an Organization or an Office like the Director's Office, information systems are developed and deployed to gather and process data in ways that deliver a variety of information to the end user. MIS can therefore be defined as an integrated user-machine system for providing information to support operations, management, and decision-making functions in an Organization. The system, in a nutshell, utilizes computer hardware and software, manual procedures, models for analysis, planning, control, and decision-making, and a database. Using state-of-the-art front-end and back-end web-based tools, this technology proposal attempts to provide a single-point Information Management, Information Storage, Information Querying, and Information Retrieval interface to the Director and his office for handling all information traffic flowing in and out of the Director's Office.

    Big Data Mining and Semantic Technologies: Challenges and Opportunities

    Get PDF
    Big data, a term coined for the explosion in the quantity and diversity of high-frequency digital data with the potential for valuable insights, has drawn great attention in research and development. Converting big data into actionable insights requires an in-depth understanding of big data, its characteristics, its challenges, and current technological trends. The rise of big data is changing existing data storage, management, processing, and analytical mechanisms and is leading to new architectures and ecosystems for handling big data applications. This paper covers the findings of our research study on big data characteristics, the various types of analysis associated with it, and basic big data types. First, we present the study of big data from a data mining and analysis perspective and discuss the challenges; next, we present the results of our research study on the meaningful use of big data in the context of semantic technologies. Moreover, we discuss various case studies related to social media analysis and recent development trends to identify potential research directions for big data with semantic technologies. DOI: 10.17762/ijritcc2321-8169.150711

    The Future is Big Graphs! A Community View on Graph Processing Systems

    Get PDF
    Graphs are by nature unifying abstractions that can leverage interconnectedness to represent, explore, predict, and explain real- and digital-world phenomena. Although real users and consumers of graph instances and graph workloads understand these abstractions, future problems will require new abstractions and systems. What needs to happen in the next decade for big graph processing to continue to succeed? Comment: 12 pages, 3 figures; a collaboration between the large-scale systems and data management communities; work started at the Dagstuhl Seminar 19491 on Big Graph Processing Systems; to be published in the Communications of the ACM.
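
    As a minimal illustration of the "interconnectedness" the authors describe, the sketch below builds a tiny graph as an adjacency list and explores which vertices are reachable from a starting vertex. The vertices and the breadth-first traversal are invented for illustration and stand in, at toy scale, for what a distributed big-graph system would do; nothing here comes from the paper itself.

        from collections import deque

        # Toy social graph as an adjacency list; a big-graph system would shard this
        # across many machines, but the abstraction is the same.
        edges = {
            "alice": ["bob", "carol"],
            "bob": ["dave"],
            "carol": ["dave"],
            "dave": [],
        }

        def reachable(graph, start):
            """Breadth-first exploration: every vertex connected to `start`."""
            seen, queue = {start}, deque([start])
            while queue:
                node = queue.popleft()
                for neighbour in graph.get(node, []):
                    if neighbour not in seen:
                        seen.add(neighbour)
                        queue.append(neighbour)
            return seen

        print(sorted(reachable(edges, "alice")))  # ['alice', 'bob', 'carol', 'dave']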

    IS Programs Responding to Industry Demands for Data Scientists: A Comparison Between 2011-2016

    Get PDF
    The term data scientist has only been in common use since 2008, but in 2016 it is considered one of the top careers in the United States. The purpose of this paper is to explore the growth of data science content areas such as analytics, business intelligence, and big data in AACSB Information Systems (IS) programs between 2011 and 2016. A secondary purpose is to analyze the effect of IS programs’ adherence to IS 2010 Model Curriculum Guidelines for undergraduate MIS programs, as well as the impact of IS programs offering an advanced database course in 2011 on data science course offerings in 2016. A majority (60%) of AACSB IS programs added data science-related courses between 2011 and 2016. Results indicate dramatic increases in courses offered in big data analytics (583%), visualization (300%), business data analysis (260%), and business intelligence (236%). ANOVA results also find a significant effect of departments offering advanced database courses in 2011 on new analytics course offerings in 2016. A Chi-Square analysis did not find an effect of IS 2010 Model Curriculum adherence on analytics course offerings in 2016. Implications of our findings for an MIS department’s ability to respond to changing needs of the marketplace and its students are discussed.
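
    For readers unfamiliar with the kind of test reported above, the sketch below runs a chi-square test of independence on a hypothetical 2x2 contingency table (curriculum adherence versus whether analytics courses were added). The counts are invented purely for illustration and do not reproduce the study's data.

        from scipy.stats import chi2_contingency

        # Hypothetical 2x2 contingency table: rows = program adheres to the IS 2010
        # Model Curriculum (yes / no), columns = added analytics courses by 2016
        # (yes / no). The counts are invented and do not come from the study.
        table = [
            [30, 20],
            [25, 25],
        ]

        chi2, p_value, dof, expected = chi2_contingency(table)
        print(f"chi2={chi2:.2f}, p={p_value:.3f}, dof={dof}")
        # A p-value above the usual 0.05 threshold would mirror the paper's finding of
        # no significant effect of curriculum adherence on analytics offerings.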