    Towards Artificial General Intelligence (AGI) in the Internet of Things (IoT): Opportunities and Challenges

    Artificial General Intelligence (AGI), possessing the capacity to comprehend, learn, and execute tasks with human cognitive abilities, engenders significant anticipation and intrigue across scientific, commercial, and societal arenas. This fascination extends particularly to the Internet of Things (IoT), a landscape characterized by the interconnection of countless devices, sensors, and systems that collectively gather and share data to enable intelligent decision-making and automation. This research explores the opportunities and challenges of achieving AGI in the context of the IoT. Specifically, it starts by outlining the fundamental principles of the IoT and the critical role of Artificial Intelligence (AI) in IoT systems. It then delves into the fundamentals of AGI, culminating in the formulation of a conceptual framework for AGI's seamless integration within the IoT. The application spectrum for AGI-infused IoT is broad, encompassing domains ranging from smart grids, residential environments, manufacturing, and transportation to environmental monitoring, agriculture, healthcare, and education. However, adapting AGI to resource-constrained IoT settings necessitates dedicated research efforts. Furthermore, the paper addresses the constraints imposed by limited computing resources, the intricacies of large-scale IoT communication, and the critical concerns pertaining to security and privacy.

    Health State Estimation

    Life's most valuable asset is health. Continuously understanding the state of our health and modeling how it evolves is essential if we wish to improve it. Given that people live with more data about their lives today than at any other time in history, the challenge rests in interweaving this data with the growing body of knowledge to compute and model the health state of an individual continually. This dissertation presents an approach to building a personal model and dynamically estimating the health state of an individual by fusing multi-modal data and domain knowledge. The system is stitched together from four essential abstraction elements: 1. the events in our life, 2. the layers of our biological systems (from molecular to organism), 3. the functional utilities that arise from these biological underpinnings, and 4. how we interact with these utilities in the reality of daily life. Connecting these four elements via graph network blocks forms the backbone by which we instantiate a digital twin of an individual. Edges and nodes in this graph structure are then regularly updated with learning techniques as data is continuously digested. Experiments demonstrate the use of dense and heterogeneous real-world data from a variety of personal and environmental sensors to monitor individual cardiovascular health state. State estimation and individual modeling are the fundamental basis for departing from disease-oriented approaches toward a total health continuum paradigm. Precision in predicting health requires understanding the state trajectory. By encasing this estimation within a navigational approach, a systematic guidance framework can plan actions to transition a current state towards a desired one. This work concludes by presenting this framework of combining the health state and personal graph model to perpetually plan and assist us in living life towards our goals. Comment: Ph.D. Dissertation @ University of California, Irvine

    Identifying success factors in crowdsourced geographic information use in government

    Crowdsourcing geographic information in government focuses on projects that engage people who are not government officials or employees in collecting, editing, and sharing information with governmental bodies. This type of project emerged in the past decade due to technological and societal changes, such as the increased use of smartphones, combined with citizens' growing levels of education and technical ability to use them. Such projects have also flourished because of the need for quickly updated data when financial resources are low. They range from recording the experience of feeling an earthquake to recording the locations of businesses during the summer. This study analyses 50 cases from across the world in which crowdsourced geographic information was used by governmental bodies. About 60% of the cases were examined both in 2014 and in 2017, to allow for comparison and the identification of success and failure. The analysis looked at different aspects and their relationship to success: the drivers to start a project; scope and aims; stakeholders and relationships; inputs into the project; technical and organisational aspects; and problems encountered. The main key factors of the case studies were analysed using Qualitative Comparative Analysis (QCA), an analytical method that combines quantitative and qualitative tools in sociological research. From the analysis, we can conclude that there is no "magic bullet" or perfect methodology for a successful crowdsourcing-in-government project. Unless the organisation has reached maturity in the area of crowdsourcing, identifying a champion and starting with a project that does not address authoritative datasets directly is a good way to ensure early success and begin the process of organisational learning in running such projects. The importance of governmental support and trust is undisputed.
If the choice is to use new technologies, this should be accompanied by an investment of appropriate resources within the organisation to ensure that the investment bears fruit. Alternatively, adopting an existing technology that has been successful elsewhere and investing in training and capacity building is another path to success. We also identified the importance of partnering with intermediary Non-Governmental Organisations (NGOs) that have experience and knowledge of working with crowdsourcing. These organisations have the knowledge and skills to implement projects at the boundary between government and the crowd, and can therefore help ensure better implementation. Changes and improvements to public services, or a focus on environmental monitoring, can be a good basis for a project; capturing base mapping is a good starting point, too. The recommendations of the report address organisational issues, resources, and legal aspects.

    What have we learned from the pandemic?


    Data and resource management in wireless networks via data compression, GPS-free dissemination, and learning

    “This research proposes several innovative approaches to collecting data efficiently from large-scale wireless sensor networks (WSNs). First, a Z-compression algorithm is proposed that exploits the temporal locality of multi-dimensional sensing data and adapts the Z-order encoding algorithm to map multi-dimensional data to a one-dimensional data stream. The extended version of Z-compression adapts itself to low-power WSNs running in low-power listening (LPL) mode, and its compression performance is comprehensively analysed on both real-world and synthetic datasets. Second, the research proposes an efficient geospatial data-collection scheme for IoT networks that reduces redundant rebroadcasts by up to 95% by collecting only the data of interest. Since most low-cost wireless sensors are not equipped with a GPS module, virtual coordinates are used to estimate locations. The proposed work utilises the anchor-based virtual coordinate system and DV-Hop (distance vector of hops to anchors) to estimate the relative locations of nodes with respect to the anchors. It also uses circle and hyperbola constraints to encode the position of interest (POI) and any user-defined trajectory into a data-request message, which allows only the sensors in the POI and along the routing trajectory to collect and route data. This provides location anonymity by avoiding the use and transmission of GPS location information. The scheme has also been extended to heterogeneous WSNs, and the encoding algorithm has been refined by replacing the circle constraints with ellipse constraints. Last, the research proposes a framework that predicts the trajectory of a moving object using a sequence-to-sequence (Seq2Seq) learning model and wakes up only the sensors that fall within the predicted trajectory, using a specially designed control packet. It reduces the computation time of encoding a geospatial trajectory by more than 90% and preserves location anonymity for the local edge servers”--Abstract, page iv
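    The Z-order (Morton) encoding that Z-compression builds on interleaves the bits of each coordinate so that readings close together in multi-dimensional space tend to remain close in the resulting one-dimensional stream. A minimal sketch of the standard interleaving for three-dimensional readings follows; the function name and bit width are illustrative and not taken from the dissertation:

    ```python
    def z_order_encode(x: int, y: int, z: int, bits: int = 10) -> int:
        """Interleave the bits of three non-negative integer coordinates
        into a single Morton (Z-order) code: bit i of x lands at output
        position 3i, bit i of y at 3i+1, and bit i of z at 3i+2."""
        code = 0
        for i in range(bits):
            code |= ((x >> i) & 1) << (3 * i)
            code |= ((y >> i) & 1) << (3 * i + 1)
            code |= ((z >> i) & 1) << (3 * i + 2)
        return code

    # Nearby points yield numerically nearby codes, e.g.:
    # z_order_encode(0, 0, 0) == 0 and z_order_encode(1, 1, 1) == 7
    ```

    Sorting sensor samples by such a code groups spatially adjacent readings together, which is the locality a delta- or run-length-style compressor can then exploit.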