
    Department of Computer Science Activity 1998-2004

    This report summarizes much of the research and teaching activity of the Department of Computer Science at Dartmouth College between late 1998 and late 2004. The material for this report was collected as part of the final report for NSF Institutional Infrastructure award EIA-9802068, which funded equipment and technical staff during that six-year period. This equipment and staff supported essentially all of the department's research activity during that period.

    Blockchain and Distributed Autonomous Community Ecosystems: Opportunities to Democratize Finance and Delivery of Transport, Housing, Urban Greening and Community Infrastructure

    This report investigates and develops specifications for using blockchain and distributed organizations to enable decentralized delivery and finance of urban infrastructure. The project explores use cases, including: providing urban greening, street or transit infrastructure; services for street beautification, cleaning, and weed or graffiti abatement; potential ways of allocating resources for accessory dwelling units (ADUs); permitting and land allocation; and homeless housing. It establishes a general process flow for this blockchain architecture, which involves: 1) the creation of blocks (transactions); 2) sending these blocks to nodes (users) on the network for an action (mining) and then validation that the action has taken place; and 3) adding the block to the blockchain. These processes involve the potential for creating new economic value for cities and neighborhoods through proof-of-work, which can be issued through a token (possibly a graphic non-fungible token), certificate, or possible financial reward. We find that encouraging trading of assets at the local level can enable the creation of value that could be translated into sustainable “mining actions” that could eventually provide the economic backstop and basis for new local investment mechanisms or currencies (e.g., a local cryptocurrency). These processes also provide an innovative local, distributed funding mechanism for transportation, housing, and other civic infrastructure.
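
    As a concrete illustration of the three-step process flow above, the following Python sketch creates a block of transactions, has a node perform and validate a simple proof-of-work “mining action”, and then appends the block to the chain. The class names, fields, and the example street-cleaning task are illustrative assumptions, not taken from the report.

        import hashlib
        import json
        import time
        from dataclasses import dataclass, field

        @dataclass
        class Block:
            index: int
            transactions: list            # e.g. records of greening or cleanup work
            prev_hash: str
            timestamp: float = field(default_factory=time.time)
            nonce: int = 0

            def digest(self) -> str:
                payload = json.dumps(self.__dict__, sort_keys=True, default=str)
                return hashlib.sha256(payload.encode()).hexdigest()

        def mine(block, difficulty=3):
            """Step 2: a node performs the mining action (a simple proof-of-work)."""
            while not block.digest().startswith("0" * difficulty):
                block.nonce += 1
            return block

        def validate(block, prev_hash, difficulty=3):
            """Step 2 (continued): other nodes check that the action really took place."""
            return block.prev_hash == prev_hash and block.digest().startswith("0" * difficulty)

        chain = [mine(Block(0, ["genesis"], "0" * 64))]

        # Step 1: create a block for a hypothetical street-cleaning task and its reward token.
        candidate = Block(1, [{"task": "graffiti abatement", "reward": "1 local token"}],
                          prev_hash=chain[-1].digest())
        candidate = mine(candidate)                   # step 2: mining action
        if validate(candidate, chain[-1].digest()):   # step 2: peer validation
            chain.append(candidate)                   # step 3: add the block to the chain
        print(len(chain), chain[-1].digest()[:12])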

    Proceedings of the 2nd Computer Science Student Workshop: Microsoft Istanbul, Turkey, April 9, 2011


    The Rise of Decentralized Autonomous Organizations: Coordination and Growth within Cryptocurrencies

    The rise of cryptocurrencies such as Bitcoin is driving a paradigm shift in organization design. Their underlying blockchain technology enables a novel form of organizing, which I call the “decentralized autonomous organization” (DAO). This study explores how tasks are coordinated within DAOs that provide decentralized and open payment systems that do not rely on centralized intermediaries (e.g., banks). Guided by a Bitcoin pilot case study followed by a three-stage research design that uses both qualitative and quantitative data, this inductive study examines twenty DAOs in the cryptocurrency industry to address the following question: How are DAOs coordinated to enable growth? Results from the pilot study suggest that task coordination within DAOs is enabled by distributed consensus mechanisms at various levels. Further, findings from interview data reveal that DAOs coordinate tasks through “machine consensus” and “social consensus” mechanisms that operate at varying degrees of decentralization. Subsequent fuzzy-set qualitative comparative analyses (fsQCA), explaining when DAOs grow or decline, show that social consensus mechanisms can partially substitute for machine consensus mechanisms in less decentralized DAOs. Taken together, the results unpack how DAO growth relies on the interplay between machine consensus, social consensus, and decentralization mechanisms. To conclude, I formulate three propositions to outline a theory of DAO coordination and discuss how this novel form of organizing calls for a revision of our conventional understanding of task coordination and organizational growth.

    Security Engineering of Patient-Centered Health Care Information Systems in Peer-to-Peer Environments: Systematic Review

    Background: Patient-centered health care information systems (PHSs) enable patients to take control and become knowledgeable about their own health, preferably in a secure environment. Current and emerging PHSs use either a centralized database, peer-to-peer (P2P) technology, or distributed ledger technology for PHS deployment. The evolving COVID-19 decentralized Bluetooth-based tracing systems are examples of disease-centric P2P PHSs. Although using P2P technology for the provision of PHSs can be flexible, scalable, resilient to a single point of failure, and inexpensive for patients, the use of health information on P2P networks poses major security issues as users must manage information security largely by themselves. Objective: This study aims to identify the inherent security issues for PHS deployment in P2P networks and how they can be overcome. In addition, this study reviews different P2P architectures and proposes a suitable architecture for P2P PHS deployment. Methods: A systematic literature review was conducted following PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) reporting guidelines. Thematic analysis was used for data analysis. We searched the following databases: IEEE Digital Library, PubMed, Science Direct, ACM Digital Library, Scopus, and Semantic Scholar. The search was conducted on articles published between 2008 and 2020. The Common Vulnerability Scoring System was used as a guide for rating security issues. Results: Our findings are consolidated into 8 key security issues associated with PHS implementation and deployment on P2P networks and 7 factors promoting them. Moreover, we propose a suitable architecture for P2P PHSs and guidelines for the provision of PHSs while maintaining information security. Conclusions: Despite the clear advantages of P2P PHSs, the absence of centralized controls and inconsistent views of the network on some P2P systems have profound adverse impacts on security. The security issues identified in this study need to be addressed to increase patients' intention to use PHSs on P2P networks by making them safe to use.
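
    The review uses the Common Vulnerability Scoring System as a guide for rating the identified issues. The Python sketch below shows how a hypothetical P2P PHS vulnerability (say, unauthenticated network access to patient records) could be rated with the CVSS v3.1 base-score formula for the scope-unchanged case; the example vector and its values are assumptions for illustration, not findings from the paper.

        import math

        # CVSS v3.1 base metric weights (scope unchanged).
        AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # Attack Vector
        AC = {"L": 0.77, "H": 0.44}                          # Attack Complexity
        PR = {"N": 0.85, "L": 0.62, "H": 0.27}               # Privileges Required
        UI = {"N": 0.85, "R": 0.62}                          # User Interaction
        CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # Confidentiality/Integrity/Availability

        def roundup(x):
            """CVSS 'Roundup': smallest value with one decimal place >= x."""
            return math.ceil(x * 10) / 10

        def base_score(av, ac, pr, ui, c, i, a):
            exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
            iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
            impact = 6.42 * iss                              # scope-unchanged form
            if impact <= 0:
                return 0.0
            return roundup(min(impact + exploitability, 10))

        # Hypothetical issue: network-reachable, low complexity, no privileges or user
        # interaction required, high confidentiality impact, low integrity, no availability.
        print(base_score("N", "L", "N", "N", c="H", i="L", a="N"))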

    A cell outage management framework for dense heterogeneous networks

    In this paper, we present a novel cell outage management (COM) framework for heterogeneous networks with split control and data planes-a candidate architecture for meeting future capacity, quality-of-service, and energy efficiency demands. In such an architecture, the control and data functionalities are not necessarily handled by the same node. The control base stations (BSs) manage the transmission of control information and user equipment (UE) mobility, whereas the data BSs handle UE data. An implication of this split architecture is that an outage to a BS in one plane has to be compensated by other BSs in the same plane. Our COM framework addresses this challenge by incorporating two distinct cell outage detection (COD) algorithms to cope with the idiosyncrasies of both data and control planes. The COD algorithm for control cells leverages the relatively larger number of UEs in the control cell to gather large-scale minimization-of-drive-test (MDT) report data and detects an outage by applying machine learning and anomaly detection techniques. To improve outage detection accuracy, we also investigate and compare the performance of two anomaly-detecting algorithms, i.e., k-nearest-neighbor- and local-outlier-factor-based anomaly detectors, within the control COD. On the other hand, for data cell COD, we propose a heuristic Grey-prediction-based approach, which can work with the small number of UEs in the data cell, by exploiting the fact that the control BS manages UE-data BS connectivity and by receiving periodic updates of the reference signal received power (RSRP) statistic between the UEs and data BSs in its coverage. The detection accuracy of the heuristic data COD algorithm is further improved by exploiting the Fourier series of the residual error that is inherent to a Grey prediction model. Our COM framework integrates these two COD algorithms with a cell outage compensation (COC) algorithm that can be applied to both planes. Our COC solution utilizes an actor-critic-based reinforcement learning algorithm, which optimizes the capacity and coverage of the identified outage zone in a plane, by adjusting the antenna gain and transmission power of the surrounding BSs in that plane. The simulation results show that the proposed framework can detect both data and control cell outages and compensate for the detected outage in a reliable manner.
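
    For the control-plane detection stage, the following Python sketch (not the authors' implementation) illustrates the two anomaly detectors being compared, a k-nearest-neighbor distance detector and a local-outlier-factor detector, applied to simulated MDT-style measurement reports; the feature layout, thresholds, and scikit-learn usage are assumptions for illustration.

        import numpy as np
        from sklearn.neighbors import LocalOutlierFactor, NearestNeighbors

        rng = np.random.default_rng(0)
        # Synthetic reports: [serving-cell RSRP (dBm), strongest-neighbor RSRP (dBm)].
        normal = rng.normal(loc=[-85.0, -95.0], scale=3.0, size=(500, 2))
        # Reports taken after a (simulated) outage: serving-cell power collapses.
        outage = rng.normal(loc=[-120.0, -96.0], scale=3.0, size=(20, 2))
        reports = np.vstack([normal, outage])

        # Detector 1: k-NN distance. A report whose distance to its k-th neighbor is
        # far above the typical distance among normal reports is flagged as anomalous.
        k = 10
        nn = NearestNeighbors(n_neighbors=k).fit(normal)
        kth_dist, _ = nn.kneighbors(reports)
        threshold = np.percentile(kth_dist[:len(normal), -1], 99)
        knn_flag = kth_dist[:, -1] > threshold

        # Detector 2: Local Outlier Factor, which compares each report's local density
        # with that of its neighbors (novelty=True allows scoring unseen reports).
        lof = LocalOutlierFactor(n_neighbors=20, novelty=True).fit(normal)
        lof_flag = lof.predict(reports) == -1

        print("k-NN flags on outage reports:", int(knn_flag[-20:].sum()), "/ 20")
        print("LOF flags on outage reports:", int(lof_flag[-20:].sum()), "/ 20")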

    ECG classification and prognostic approach towards personalized healthcare

    A very important aspect of personalized healthcare is to continuously monitor an individual's health using wearable biomedical devices and, essentially, to analyse and, if possible, predict potential health hazards that may prove fatal if not treated in time. The prediction aspect embedded in the system helps avoid delays in providing timely medical treatment, even before an individual reaches a critical condition. Despite the availability of modern wearable health monitoring devices, the real-time analysis and prediction component seems to be missing from these devices. The research work illustrated in this paper, at the outset, focussed on constantly monitoring an individual's ECG readings using a wearable 3-lead ECG kit and, more importantly, on performing real-time analyses to detect arrhythmia and thereby identify and predict heart risk. Current research shows extensive use of heart rate variability (HRV) analysis and machine learning for arrhythmia classification, which, however, depends on the morphology of the ECG waveforms and the sensitivity of the ECG equipment. Since a wearable 3-lead ECG kit was used, the accuracy of classification had to be addressed at the machine learning phase, so a unique feature extraction method was developed to increase the accuracy of classification. As a case study, the widely used MIT-BIH Arrhythmia Database (PhysioNet) was used to develop learning, classification, and prediction models. Neural network fitting models on the extracted features showed a mean-squared error as low as 0.0085 and a regression value as high as 0.99. Current experiments show 99.4% accuracy using k-NN classification models, a cross-entropy error of 7.6, and a misclassification error of 1.2 on test data using scaled conjugate gradient pattern-matching algorithms. Software components were developed for wearable devices that took ECG readings from a 3-lead ECG data acquisition kit in real time, then de-noised, filtered, and relayed the sample readings to the telehealth analytical server. The analytical server performed the classification and prediction tasks based on the trained classification models and could raise appropriate alarms if ECG abnormalities of the V (premature ventricular contraction, PVC), A (atrial premature beat, APB), L (left bundle branch block beat), or R (right bundle branch block beat) annotation types in MITDB were detected. The instruments were networked using IoT (Internet of Things) devices, and abnormal ECG events related to arrhythmia from the analytical server could be logged through an FHIR web service implementation, according to a SNOMED coding system, and accessed in the Electronic Health Record by the concerned medic to take appropriate and timely decisions. The system focused on ‘preventive care rather than remedial cure’, which has become a major focus of health care and cure institutions across the globe.
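
    As a rough illustration of the classification and alerting stage, the Python sketch below trains a k-NN model on per-beat feature vectors and flags beats predicted as V, A, L, or R; the feature matrix, label sampling, and k value are placeholder assumptions (the paper's custom feature extraction from the MIT-BIH records is not reproduced here).

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.metrics import accuracy_score

        rng = np.random.default_rng(42)
        n_beats, n_features = 2000, 12          # assumed sizes, for illustration only
        X = rng.normal(size=(n_beats, n_features))          # placeholder per-beat features
        y = rng.choice(["N", "V", "A", "L", "R"], size=n_beats)   # MITDB-style labels

        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.2, stratify=y, random_state=0)

        # k-NN classifier; k would be tuned on a validation split in practice.
        clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
        pred = clf.predict(X_test)

        # With placeholder random features accuracy is near chance; the reported
        # 99.4% depends on the paper's real extracted features.
        print("test accuracy:", accuracy_score(y_test, pred))

        # An abnormal prediction (V, A, L, or R) would trigger an alarm on the
        # telehealth analytical server described above.
        abnormal = np.isin(pred, ["V", "A", "L", "R"])
        print("beats that would raise an alarm:", int(abnormal.sum()))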

    Research Naval Postgraduate School, v.12, no.3, October 2002

    NPS Research is published by the Research and Sponsored Programs, Office of the Vice President and Dean of Research, in accordance with NAVSOP-35. Views and opinions expressed are not necessarily those of the Department of the Navy. Approved for public release; distribution is unlimited.