
    Evaluation of Storage Systems for Big Data Analytics

    Abstract: Recent trends in big data storage systems show a shift from disk-centric models to memory-centric models. The primary challenges faced by these systems are speed, scalability, and fault tolerance. It is therefore interesting to investigate the performance of these two models with respect to big data applications. This thesis studies the performance of Ceph (a disk-centric system) and Alluxio (a memory-centric system) and evaluates whether a hybrid model provides any performance benefits for big data applications. To this end, an application, TechTalk, is created that uses Ceph to store data and Alluxio to perform data analytics. The functionalities of the application include offline lecture storage, live recording of classes, content analysis, and reference generation. The knowledge base of videos is constructed by analyzing the offline data using machine learning techniques. This training dataset provides the knowledge to construct the index of an online stream. The indexed metadata enables students to search, view, and access the relevant content. The performance of the application is benchmarked in different use cases to demonstrate the benefits of the hybrid model.
    Dissertation/Thesis. Masters Thesis, Computer Science, 201
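The hybrid model described above can be illustrated with a minimal sketch. This is not the thesis implementation and does not use the real Ceph or Alluxio client APIs; the tiers are stand-ins (plain Python dicts) chosen only to show the read/write path of a memory-centric cache layered over disk-centric storage:

```python
# Illustrative sketch (hypothetical, not the thesis code): a hybrid store
# that consults a memory-centric tier first and falls back to a
# disk-centric tier, promoting data into memory on a miss.

class HybridStore:
    def __init__(self, disk_tier):
        self.memory_tier = {}        # stands in for an Alluxio-style in-memory layer
        self.disk_tier = disk_tier   # stands in for Ceph-style object storage

    def read(self, key):
        # Fast path: serve from memory if the object is already cached.
        if key in self.memory_tier:
            return self.memory_tier[key]
        # Slow path: fetch from disk, then promote into the memory tier.
        value = self.disk_tier[key]
        self.memory_tier[key] = value
        return value

    def write(self, key, value):
        # Write-through: persist to disk first, then cache in memory.
        self.disk_tier[key] = value
        self.memory_tier[key] = value

store = HybridStore(disk_tier={"lecture-001": b"video-bytes"})
first = store.read("lecture-001")   # cache miss: fetched from disk, now cached
second = store.read("lecture-001")  # cache hit: served from memory
```

Analytics workloads that re-read the same lecture data repeatedly pay the disk cost once, which is the intuition behind pairing a memory-centric analytics layer with durable disk storage.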

    The promise of digital healthcare technologies

    Digital health technologies have been in use for many years across a wide spectrum of healthcare scenarios. This narrative review outlines the current use, future strategies, and significance of digital health technologies in modern healthcare applications. It covers the current state of the scientific field (delineating major strengths, limitations, and applications) and envisions the future impact of relevant emerging key technologies. Furthermore, we attempt to provide recommendations for innovative approaches that would accelerate and benefit the research, translation, and utilization of digital health technologies.

    The Metaverse: Survey, Trends, Novel Pipeline Ecosystem & Future Directions

    The Metaverse offers a second world beyond reality, where boundaries are non-existent and possibilities are endless, through engagement and immersive experiences using virtual reality (VR) technology. Many disciplines can benefit from the advancement of the Metaverse when accurately developed, including the fields of technology, gaming, education, art, and culture. Nevertheless, developing the Metaverse environment to its full potential is an ambiguous task that needs proper guidance and directions. Existing surveys on the Metaverse focus only on a specific aspect and discipline of the Metaverse and lack a holistic view of the entire process. To this end, a more holistic, multi-disciplinary, in-depth, academic, and industry-oriented review is required to provide a thorough study of the Metaverse development pipeline. To address these issues, we present in this survey a novel multi-layered pipeline ecosystem composed of (1) the Metaverse computing, networking, communications, and hardware infrastructure, (2) environment digitization, and (3) user interactions. For every layer, we discuss the components that detail the steps of its development. Also, for each of these components, we examine the impact of a set of enabling technologies and empowering domains (e.g., Artificial Intelligence, Security & Privacy, Blockchain, Business, Ethics, and Social) on its advancement. In addition, we explain the importance of these technologies in supporting decentralization, interoperability, user experiences, interactions, and monetization. Our presented study highlights the existing challenges for each component, followed by research directions and potential solutions. To the best of our knowledge, this survey is the most comprehensive, and it allows users, scholars, and entrepreneurs to gain an in-depth understanding of the Metaverse ecosystem and find their opportunities and potential for contribution.

    MaCuDE IS Task Force Phase II Report: Views of Industry Leaders on Big Data Analytics and AI

    This paper represents the Phase II report of the Management Curriculum for the Digital Era (MaCuDE) disciplinary task force on information systems (IS). Aligned with the current work of the AIS (Association for Information Systems) and ACM (Association for Computing Machinery), we focus on the current and future industry-driven educational needs and requirements posed by big data analytics (BDA), artificial intelligence (AI), machine learning (ML), and related innovations. In this report, we probe and report on the views of industry leaders regarding BDA/AI education needs. We conducted 18 rich semi-structured interviews with a representative sample of industry leaders around key changes and issues related to workforce demands in digital transformation and associated educational needs. We performed a grounded-theory-based analysis of key themes in reported education needs. We note the shifting meaning of AI and BDA phenomena and identify three main organizational-level needs for the digital era (capability improvement and transformation; decision-making strategies and tactics; and changes in operations or products) and connect them to three individual professional competencies (fundamental environmental competencies; data, information, and content; and system design competencies) necessary to deliver them. Based on the analysis, we outline several novel competency-based IS curriculum recommendations for master's- and undergraduate-level IS education.

    Review Focus On Computational Healthcare Tools For Sustainability

    The medical industry is experiencing an increase in the amount of data generated in terms of complexity, diversity, and timeliness; the industry increasingly relies on the collection and analysis of data. Therefore, to make better decisions, we need to collect data and conduct effective analysis. The cloud is a good choice for on-demand services for storing, processing, and analyzing data. Medical data released and shared through the cloud are very popular in practice, and information and knowledge bases can be enriched and shared through the cloud. The revolution presented by the cloud and big data can have a huge impact on the healthcare industry, and a new healthcare system is evolving. This is why we need to design a more appropriate healthcare system to meet the challenges presented by this revolution. The diversity of data sources requires a uniform standard for heterogeneous data management. On the one hand, due to the diversification of medical equipment, the data formats and the amount of data generated by various devices may be quite different, which requires that the system support data access by various medical devices to ensure high scalability and satisfy actual medical needs. On the other hand, the system needs to convert the received data into a unified standard to improve the efficiency of data storage, query, retrieval, processing, and analysis. This paper presents a review of existing computational healthcare tools for sustainability.
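The conversion of heterogeneous device data into a unified standard, as described above, can be sketched as a small normalization step. All vendor and field names here are hypothetical, invented only to show the shape of the mapping; real systems would target a standard such as HL7 FHIR:

```python
# Illustrative sketch (vendor and field names hypothetical): mapping
# readings from heterogeneous medical devices onto one unified record
# schema so storage, query, and analysis can treat them uniformly.
from datetime import datetime

def normalize(record):
    """Map a vendor-specific reading to a unified schema:
    {patient_id, metric, value, timestamp}."""
    if record.get("vendor") == "A":
        # Hypothetical vendor A reports heart rate as 'hr' with a Unix timestamp.
        return {"patient_id": record["pid"], "metric": "heart_rate",
                "value": record["hr"], "timestamp": record["ts"]}
    if record.get("vendor") == "B":
        # Hypothetical vendor B nests the measurement and uses ISO-8601 times.
        ts = datetime.fromisoformat(record["time"]).timestamp()
        return {"patient_id": record["patient"],
                "metric": record["measurement"]["name"],
                "value": record["measurement"]["value"],
                "timestamp": ts}
    raise ValueError("unknown device format")

unified = normalize({"vendor": "A", "pid": "p1", "hr": 72, "ts": 1700000000})
```

Once every reading lands in the same schema, downstream storage and analytics no longer need per-device logic, which is the scalability point the abstract makes.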

    Taking Computation to Data: Integrating Privacy-preserving AI techniques and Blockchain Allowing Secure Analysis of Sensitive Data on Premise

    PhD thesis in Information Technology.
    With the advancement of artificial intelligence (AI), digital pathology has seen significant progress in recent years. However, the use of medical AI raises concerns about patient data privacy. The CLARIFY project is a research project funded under the European Union's Marie Sklodowska-Curie Actions (MSCA) program. The primary objective of CLARIFY is to create a reliable, automated digital diagnostic platform that utilizes cloud-based data algorithms and artificial intelligence to enable interpretation and diagnosis of whole-slide images (WSI) from any location, maximizing the advantages of AI-based digital pathology. My research as an early-stage researcher on the CLARIFY project centers on securing information systems using machine learning and access control techniques. To achieve this goal, I extensively researched privacy-protection technologies such as federated learning, differential privacy, dataset distillation, and blockchain. These technologies have different priorities in terms of privacy, computational efficiency, and usability. Therefore, we designed a computing system that supports different levels of privacy and security, based on the concept of taking computation to data. Our approach is based on two design principles. First, when external users need to access internal data, a robust access control mechanism must be established to limit unauthorized access. Second, raw data should be processed to ensure privacy and security. Specifically, we use smart contract-based access control and decentralized identity technology at the system security boundary to ensure the flexibility and immutability of verification. If the user's raw data still cannot be directly accessed, we propose using dataset distillation technology to filter out private information, or using a locally trained model as a data agent. Our research focuses on improving the usability of these methods, and this thesis serves as a demonstration of current privacy-preserving and secure computing technologies.
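One of the privacy technologies the thesis surveys, differential privacy, can be sketched in a few lines. This is a generic textbook mechanism (the Laplace mechanism for a numeric query), not the thesis's system; the parameters below are illustrative:

```python
# Minimal sketch of differential privacy via the Laplace mechanism:
# add noise calibrated to the query's sensitivity and the privacy
# budget epsilon, so individual records cannot be inferred from the
# released aggregate. Values shown are illustrative only.
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Return true_value plus Laplace(0, sensitivity/epsilon) noise."""
    scale = sensitivity / epsilon
    # Sample Laplace noise by inverse-transform from a uniform draw.
    u = random.random() - 0.5
    noise = -scale * (1.0 if u >= 0 else -1.0) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise

# A counting query changes by at most 1 when one record is added or
# removed, so its sensitivity is 1.
noisy_count = laplace_mechanism(128, sensitivity=1.0, epsilon=0.5)
```

Smaller epsilon means stronger privacy and more noise; the trade-off between privacy, accuracy, and usability is exactly the tension the thesis describes among its candidate techniques.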