11,586 research outputs found

    A Hybrid Approach for Data Analytics for Internet of Things

    Full text link
    The vision of the Internet of Things (IoT) is to connect currently unconnected physical objects to the internet. The number of internet-connected devices will soon far exceed the number of human beings in the world, all producing data. These data are typically collected and delivered to the cloud for processing, especially with a view to extracting meaningful information and then taking action. Ideally, however, the data should be analysed locally to increase privacy, give quick responses to people, and reduce the use of network and storage resources. To tackle these problems, distributed data analytics can collect and analyse the data on edge or fog devices. In this paper, we explore a hybrid approach in which in-network and cloud-level processing work together to build effective IoT data analytics, overcoming their respective weaknesses and exploiting their specific strengths. Specifically, we collect raw data locally and apply data fusion techniques on resource-constrained devices to extract features, reducing the data before sending the extracted features to the cloud for processing. We evaluate accuracy and data consumption over the network, and show that it is feasible to increase privacy and maintain accuracy while reducing data communication demands. Comment: Accepted for publication in the Proceedings of the 7th ACM International Conference on the Internet of Things (IoT 2017).
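
    Below is a minimal sketch of the in-network reduction step this abstract describes: collapsing a window of raw sensor readings into a few summary statistics on the device, so that only the features travel to the cloud. The feature set, sensor name and JSON payload format are illustrative assumptions, not the paper's actual pipeline.

    ```python
    import json
    import statistics

    def extract_features(window):
        """Reduce a window of raw sensor readings to a few summary
        statistics, so only features (not raw data) leave the device."""
        return {
            "mean": statistics.mean(window),
            "stdev": statistics.stdev(window),
            "min": min(window),
            "max": max(window),
        }

    def to_cloud_payload(sensor_id, window):
        """Serialise extracted features for transmission to the cloud.
        This JSON schema is an assumption, not the paper's format."""
        return json.dumps({"sensor": sensor_id,
                           "features": extract_features(window)})

    # Example: 8 raw accelerometer readings collapse into a 4-value payload.
    print(to_cloud_payload("acc-01", [0.9, 1.1, 1.0, 1.2, 0.8, 1.0, 1.1, 0.9]))
    ```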

    Educational Warehouse: Modular, Private and Secure Cloudable Architecture System for Educational Data Storage, Analysis and Access

    Get PDF
    [Abstract] Data in the educational context are becoming increasingly important in decision-making and teaching-learning processes. As in the industrial context, educational institutions are adopting data-processing technologies at all levels. To achieve representative results, the processes of extraction, transformation and loading of educational data should be ubiquitous, because without useful data, whether internal or external, it is difficult to perform a proper analysis and to obtain unbiased educational results. The sources and types of data are heterogeneous, and the analytical processes can be so diverse that managing and accessing the generated data becomes a practical problem. At the same time, ensuring the privacy, identity, confidentiality and security of students and their data is a “sine qua non” condition for complying with the legal issues involved while achieving the required ethical premises. This work proposes a modular and scalable data system architecture that resolves the complexity of data management and access. On the one hand, it allows educational institutions to collect any data generated in both the teaching-learning and management processes; on the other, it enables external access to these data under appropriate privacy and security conditions. Generalitat de Catalunya; 2017 SGR 93.
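
    The following is a minimal sketch of the extract-transform-load flow such an architecture implies, with a pseudonymisation step standing in for the privacy measures the abstract emphasises. The salted-hash technique, field names and in-memory warehouse are assumptions for illustration, not the system's actual design.

    ```python
    import hashlib

    def extract(records):
        """Extract stage: pull raw events from a teaching or management source."""
        yield from records

    def transform(records, salt="demo-salt"):
        """Transform stage: pseudonymise student identifiers before storage
        (a salted hash here, as an illustrative privacy measure)."""
        for rec in records:
            rec = dict(rec)
            rec["student_id"] = hashlib.sha256(
                (salt + rec["student_id"]).encode()
            ).hexdigest()[:12]
            yield rec

    def load(records, warehouse):
        """Load stage: append the cleaned records to the warehouse store."""
        warehouse.extend(records)

    warehouse = []
    raw = [{"student_id": "s123", "course": "CS1", "grade": 8.5}]
    load(transform(extract(raw)), warehouse)
    print(warehouse)  # the stored record carries only a pseudonymous id
    ```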

    Hierarchical video surveillance architecture: a chassis for video big data analytics and exploration

    Get PDF
    There is increasing reliance on video surveillance systems for the systematic derivation, analysis and interpretation of the data needed for predicting, planning, evaluating and implementing public safety. This is evident from the massive number of surveillance cameras deployed across public locations. For example, in July 2013 the British Security Industry Association (BSIA) reported that over 4 million CCTV cameras had been installed in Britain alone. The BSIA also reveals that only 1.5% of these are state owned. In this paper, we propose a framework that allows access to data from privately owned cameras, with the aim of increasing the efficiency and accuracy of public safety planning, security activities, and decision support systems that are based on video-integrated surveillance systems. The accuracy of results obtained from government-owned public safety infrastructure would improve greatly if privately owned surveillance systems ‘expose’ relevant video-generated metadata events, such as triggered alerts, and also permit queries of a metadata repository. A police officer, for example, with an appropriate level of system permission can then query unified video systems across a large geographical area, such as a city or a country, to predict the location of an entity of interest, such as a pedestrian or a vehicle. This becomes possible with our proposed novel hierarchical architecture, the Fused Video Surveillance Architecture (FVSA). At the high level, FVSA comprises a hardware framework supported by a multi-layer abstraction software interface. It presents video surveillance systems as an adapted computational grid of intelligent services, which is integration-enabled to communicate with other compatible systems in the Internet of Things (IoT).
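
    A minimal sketch of the kind of federated metadata query this architecture enables is shown below: privately owned cameras expose only derived events, and an authorised officer queries them across an area. The event schema and query predicate are hypothetical, not FVSA's actual interface.

    ```python
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class MetadataEvent:
        """A metadata event 'exposed' by a camera owner: no raw video,
        only the derived alert and where/when it occurred."""
        camera_id: str
        owner: str          # "private" or "state"
        event_type: str     # e.g. "vehicle", "pedestrian"
        timestamp: datetime
        location: str

    def query(repository, event_type, area, since):
        """Return matching events across state- and privately-owned
        cameras, as an authorised officer's query against the repository."""
        return [e for e in repository
                if e.event_type == event_type
                and e.location.startswith(area)
                and e.timestamp >= since]

    repo = [
        MetadataEvent("cam-7", "private", "vehicle",
                      datetime(2017, 7, 1, 14, 5), "London/Camden"),
    ]
    print(query(repo, "vehicle", "London", datetime(2017, 7, 1)))
    ```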

    Internet Predictions

    Get PDF
    More than a dozen leading experts give their opinions on where the Internet is headed and where it will be in the next decade in terms of technology, policy, and applications. They cover topics ranging from the Internet of Things to climate change to the digital storage of the future. A summary of the articles is available in the Web extras section.

    Towards a standardized real-time data repository based on laboratory test results

    Get PDF
    Healthcare facilities handle huge quantities of real-time and analytical data to discover meaningful information in patient clinical lab results. Advanced analytics and machine learning algorithms help doctors identify and treat patients more accurately, but accurate models must be trained, tested, and validated with enough data, and fresh real-time data allows healthcare practitioners to analyse patient needs quickly and accurately. Healthcare organizations can thereby improve patient care and outcomes through knowledge discovery. The goal of this work is to develop a real-time data repository based on patients' clinical exams. This repository feeds real-time monitoring panels and machine or deep learning algorithms that forecast patient progression from clinical lab results. The system integrates HL7 messages from diverse sources, preprocesses them, and adds them to an API-accessible data warehouse. In conclusion, the proposed method creates a data warehouse built on international standards. Used with machine learning models, this data warehouse can increase the accuracy and efficacy of healthcare decision-making, improving patient care and outcomes through more personalised treatment options. FCT - Fundação para a Ciência e a Tecnologia (UIDB/00319/2020).
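
    A minimal sketch of the ingestion step follows: parsing a pipe-delimited HL7 v2 OBX (observation) segment into a flat record ready for the warehouse. The segment layout is standard HL7 v2; the output row format is an assumption for illustration.

    ```python
    def parse_obx(segment):
        """Parse a pipe-delimited HL7 v2 OBX (observation) segment into a
        flat dict; the row schema is illustrative, not the paper's."""
        f = segment.split("|")
        code, _, name = f[3].partition("^")   # OBX-3: observation identifier
        return {
            "test_code": code,
            "test_name": name,
            "value": f[5],                    # OBX-5: observation value
            "units": f[6],                    # OBX-6: units
            "reference_range": f[7],          # OBX-7: reference range
        }

    # Example lab result: a glucose reading of 105 mg/dL.
    obx = "OBX|1|NM|GLU^Glucose||105|mg/dL|70-110|N|||F"
    print(parse_obx(obx))
    ```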

    Enhancing Privacy in Big Data through an Asymmetric Secure Storage Protocol with Data Sharing

    Get PDF
    Cloud computing has become integral to handling large-scale data in the era of big data. Storing such vast amounts of data locally is cost-prohibitive, necessitating the use of cloud storage services. However, reliance on a single cloud storage provider (CSP) raises concerns such as service interruptions and security vulnerabilities, including insider threats. To address these issues, this research proposes a novel approach in which big data files are distributed across multiple CSPs using an asymmetric security framework. Metadata encryption and decentralized file access management are facilitated through a dew computing intermediary, enhancing security and ensuring privacy. Unlike previous approaches, this protocol employs group key encryption and secret sharing schemes within SSGK for efficient data protection and access control. Extensive security and performance evaluations demonstrate significant reductions in security risks and privacy breaches while optimizing storage efficiency.
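
    As a simplified stand-in for the secret sharing the protocol employs (the paper uses group key encryption and secret sharing within SSGK, not this scheme), the sketch below splits a file into n XOR shares, one per CSP, so that no single provider alone learns anything about the content.

    ```python
    import secrets
    from functools import reduce

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def split(data: bytes, n: int) -> list[bytes]:
        """n-of-n XOR secret sharing: any n-1 shares are statistically
        independent of the data, so no single CSP can read its share."""
        shares = [secrets.token_bytes(len(data)) for _ in range(n - 1)]
        shares.append(reduce(xor, shares, data))  # final share closes the XOR
        return shares

    def combine(shares: list[bytes]) -> bytes:
        """Recover the file by XOR-ing all shares back together."""
        return reduce(xor, shares)

    shares = split(b"patient-record.csv contents", 3)   # one share per CSP
    assert combine(shares) == b"patient-record.csv contents"
    ```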

    Digital curation and the cloud

    Get PDF
    Digital curation involves a wide range of activities, many of which could benefit from cloud deployment to a greater or lesser extent. These range from infrequent, resource-intensive tasks, which benefit from the ability to rapidly provision resources, to day-to-day collaborative activities, which can be facilitated by networked cloud services. The associated benefits are offset by risks such as loss of data or service levels, legal and governance incompatibilities, and transfer bottlenecks. Both risks and benefits vary considerably with the service and deployment models adopted and the context in which activities are performed. Some risks, such as legal liabilities, are mitigated by the use of alternative deployment models, e.g. private clouds, but this is typically at the expense of benefits such as resource elasticity and economies of scale. The Infrastructure as a Service model may provide a basis on which more specialised software services can be offered. Considerable work remains in helping institutions understand the cloud and its associated costs, risks and benefits, and how these compare to their current working methods, so that the most beneficial uses of cloud technologies can be identified. Specific proposals, echoing recent work coordinated by the EPSRC and JISC, are the development of advisory, costing and brokering services to facilitate appropriate cloud deployments; the exploration of opportunities for certifying or accrediting cloud preservation providers; and the targeted publicity of outputs from pilot studies to the full range of stakeholders within the curation lifecycle, including data creators and owners, repositories, institutional IT support professionals and senior managers.