
    City Data Fusion: Sensor Data Fusion in the Internet of Things

    The Internet of Things (IoT) has gained substantial attention recently and plays a significant role in smart city application deployments. A number of such smart city applications depend on sensor fusion capabilities in the cloud drawing on diverse data sources. We introduce the concept of IoT and present in detail the ten parameters that govern our sensor data fusion evaluation framework. We then evaluate the current state of the art in sensor data fusion against this framework. Our main goal is to examine and survey different sensor data fusion research efforts on the basis of our evaluation framework. The major open research issues related to sensor data fusion are also presented. Comment: Accepted to be published in International Journal of Distributed Systems and Technologies (IJDST), 201

    Human-agent collectives

    We live in a world where a host of computer systems, distributed throughout our physical and information environments, are increasingly implicated in our everyday actions. Computer technologies impact all aspects of our lives, and our relationship with the digital has fundamentally changed as computers have moved out of the workplace and away from the desktop. Networked computers, tablets, phones and personal devices are now commonplace, as are an increasingly diverse set of digital devices built into the world around us. Data and information are generated at unprecedented speeds and volumes from an increasingly diverse range of sources. They are then combined in unforeseen ways, limited only by human imagination. People’s activities and collaborations are becoming ever more dependent upon and intertwined with this ubiquitous information substrate. As these trends continue apace, it is becoming apparent that many endeavours involve the symbiotic interleaving of humans and computers. Moreover, the emergence of these close-knit partnerships is inducing profound change. Rather than issuing instructions to passive machines that wait until they are asked before doing anything, we will work in tandem with highly interconnected computational components that act autonomously and intelligently (aka agents). As a consequence, greater attention needs to be given to the balance of control between people and machines. In many situations, humans will be in charge and agents will predominantly act in a supporting role. In other cases, however, the agents will be in control and humans will play the supporting role. We term this emerging class of systems human-agent collectives (HACs) to reflect the close partnership and the flexible social interactions between the humans and the computers. As well as exhibiting increased autonomy, such systems will be inherently open and social. This means the participants will need to continually and flexibly establish and manage a range of social relationships. Thus, depending on the task at hand, different constellations of people, resources, and information will need to come together, operate in a coordinated fashion, and then disband. The openness and presence of many distinct stakeholders means participation will be motivated by a broad range of incentives rather than diktat. This article outlines the key research challenges involved in developing a comprehensive understanding of HACs. To illuminate this agenda, a nascent application in the domain of disaster response is presented.

    Designing Human-Centered Collective Intelligence

    Human-Centered Collective Intelligence (HCCI) is an emergent research area that seeks to bring together major research areas such as machine learning, statistical modeling, information retrieval, market research, and software engineering to address challenges pertaining to deriving intelligent insights and solutions through the collaboration of several intelligent sensors, devices, and data sources. An archetypal contextual CI scenario might be concerned with deriving affect-driven intelligence through multimodal emotion detection sources in a bid to determine the likability of one movie trailer over another. At the same time, the key tenets of designing robust and evolvable software and infrastructure architecture models that address cross-cutting quality concerns are of keen interest in today’s “Cloud” age. Some of the key quality concerns of interest in CI scenarios span the gamut of security and privacy, scalability, performance, fault tolerance, and reliability. I present recent advances in CI system design with a focus on highlighting optimal solutions for the aforementioned cross-cutting concerns. I also describe a number of design challenges and a framework that I have determined to be critical to designing CI systems. With inspiration from machine learning, computational advertising, ubiquitous computing, and sociable robotics, this work incorporates theories and concepts from various viewpoints to empower the collective intelligence engine, ZOEI, to discover affective state and emotional intent across multiple mediums. The discerned affective state is used in recommender systems, among others, to support content personalization. I dive into the design of optimal architectures that allow humans and intelligent systems to work collectively to solve complex problems. I present an evaluation of various studies that leverage the ZOEI framework to design collective intelligence.
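
    As a loose illustration of the affect-driven likability comparison described above, the sketch below averages per-modality emotion scores for two trailers and picks the one with the higher fused score. The modality names, weights, and scores are hypothetical assumptions for illustration only, not ZOEI's actual pipeline.

```python
# Minimal sketch (Python): fuse per-modality affect scores and compare two trailers.
# Modalities, weights, and scores are illustrative assumptions, not ZOEI's design.

def fused_affect(scores: dict, weights: dict) -> float:
    """Weighted average of valence-like scores from each modality (e.g. face, voice, text)."""
    total_weight = sum(weights[m] for m in scores)
    return sum(weights[m] * scores[m] for m in scores) / total_weight

weights = {"facial": 0.5, "vocal": 0.3, "text": 0.2}        # assumed modality weights

trailer_a = {"facial": 0.72, "vocal": 0.60, "text": 0.55}   # made-up valence scores in [0, 1]
trailer_b = {"facial": 0.48, "vocal": 0.66, "text": 0.70}

winner = max(("A", trailer_a), ("B", trailer_b),
             key=lambda t: fused_affect(t[1], weights))[0]
print(f"Predicted more likable trailer: {winner}")
```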

    Internet of Things Strategic Research Roadmap

    The Internet of Things (IoT) is an integrated part of the Future Internet, including existing and evolving Internet and network developments, and can be conceptually defined as a dynamic global network infrastructure with self-configuring capabilities based on standard and interoperable communication protocols, where physical and virtual “things” have identities, physical attributes, and virtual personalities, use intelligent interfaces, and are seamlessly integrated into the information network.

    Model-based myoelectric control of robots for assistance and rehabilitation

    The first anthropomorphic robots and exoskeletons were developed with the idea of combining man and machine into an intimate symbiotic unit that can perform as one joint system. A human-robot interface consists of processes of two different natures: (1) the physical interaction (pHRI) between the device and its user, and (2) the exchange of cognitive information (cHRI) between the human and the robot. To achieve symbiosis between the two actors, both need to be optimized. The evolution of mechanical design and the introduction of new materials have pushed pHRI to new frontiers in ergonomics and assistance performance. However, cHRI still lags behind in this respect because it is more complicated: it requires communication from the cognitive processes occurring in the human agent to the robot, e.g. intention detection, but also from the robot to the human agent, e.g. feedback modalities such as haptic cues. A possible innovation is the inclusion of the electromyographic signal, the command signal from our brain to the musculoskeletal system for movement, in the robot control loop. The aim of this thesis was to develop a real-time control framework for an assistive device that can generate the same force produced by the muscles. To do this, I incorporated into the robot control loop a detailed musculoskeletal model that estimates the net torque at the joint level, taking the electromyography signals and kinematic data as inputs. This module is called the myoprocessor. Here I present two applications of this control approach: the first was implemented on a soft wearable arm exosuit in order to evaluate the adaptation of the controller to different motions and loads. The second was the generation of a myoprocessor-driven force field on a planar robotic manipulandum in order to study the modularity changes of the musculoskeletal system. Both applications showed that the device controlled by the myoprocessor works symbiotically with the user, reducing muscular activity while preserving motor performance. The ability to seamlessly combine musculoskeletal force estimators with assistive devices opens new avenues for assisting human movement in both healthy and impaired individuals.
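
    To make the role of the myoprocessor in the control loop more concrete, the sketch below shows one possible structure: read EMG and kinematics each cycle, estimate the human joint torque, and command the device to supply a fraction of it. All names (read_emg, read_kinematics, AssistiveDevice-style methods, estimate_joint_torque) and the gain-based assistance law are assumptions for illustration, not the thesis implementation.

```python
# Minimal sketch (Python) of a myoprocessor-driven assistive control loop.
# Interfaces and constants are hypothetical placeholders, not the thesis code.

def estimate_joint_torque(emg, joint_angle, joint_velocity):
    """Toy stand-in for the musculoskeletal model ("myoprocessor").
    This simplified version uses only the EMG activations and ignores
    the kinematic inputs; flexors contribute positive torque, extensors negative."""
    flexor, extensor = emg["biceps"], emg["triceps"]
    moment_arm = 0.03   # m, illustrative constant
    max_force = 800.0   # N, illustrative constant
    return moment_arm * max_force * (flexor - extensor)

def control_loop(sensors, device, assistance_ratio=0.5, dt=0.001):
    """Each cycle: read EMG and kinematics, estimate the human joint torque,
    and command the device to supply a fraction of it."""
    while device.is_running():
        emg = sensors.read_emg()                      # normalized muscle activations
        angle, velocity = sensors.read_kinematics()   # joint state
        tau_human = estimate_joint_torque(emg, angle, velocity)
        device.apply_torque(assistance_ratio * tau_human)
        device.wait(dt)
```

    Scaling the estimated torque by an assistance ratio is one simple way a device could offload part of the muscular effort, which is consistent with the reduced muscle activity and preserved motor performance reported above.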

    State of AI-based monitoring in smart manufacturing and introduction to focused section

    Over the past few decades, intelligentization, supported by artificial intelligence (AI) technologies, has become an important trend for industrial manufacturing, accelerating the development of smart manufacturing. In modern industries, standard AI has been endowed with additional attributes, yielding the so-called industrial artificial intelligence (IAI) that has become the technical core of smart manufacturing. AI-powered manufacturing brings remarkable improvements in many aspects of closed-loop production chains, from manufacturing processes to end-product logistics. In particular, IAI incorporating domain knowledge has benefited the area of production monitoring considerably. Advanced AI methods such as deep neural networks, adversarial training, and transfer learning have been widely used to support both diagnostics and predictive maintenance of the entire production process. It is generally believed that IAI comprises the critical technologies needed to drive the future evolution of industrial manufacturing. This article offers a comprehensive overview of AI-powered manufacturing and its applications in monitoring. More specifically, it summarizes the key technologies of IAI and discusses their typical application scenarios with respect to three major aspects of production monitoring: fault diagnosis, remaining useful life prediction, and quality inspection. In addition, the existing problems and future research directions of IAI are also discussed. This article further introduces the papers in this focused section on AI-based monitoring in smart manufacturing by weaving them into the overview, highlighting how they contribute to and extend the body of literature in this area.
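
    To ground one of the three monitoring tasks mentioned above, the sketch below fits a simple degradation trend to a health indicator and extrapolates a remaining-useful-life (RUL) estimate. It is a toy linear model under assumed thresholds and made-up values, not any of the IAI methods surveyed in the article.

```python
# Minimal sketch (Python): remaining useful life (RUL) from a linear degradation trend.
# The health indicator, failure threshold, and linear model are illustrative assumptions.
import numpy as np

def estimate_rul(timestamps, health_indicator, failure_threshold):
    """Fit a straight line to the health indicator and extrapolate the time at which
    it crosses the failure threshold; RUL is that time minus the latest timestamp."""
    slope, intercept = np.polyfit(timestamps, health_indicator, 1)
    if slope >= 0:
        return float("inf")  # no degradation trend detected
    time_at_failure = (failure_threshold - intercept) / slope
    return max(0.0, time_at_failure - timestamps[-1])

# Usage with made-up health values decaying over operating hours.
hours = np.array([0, 100, 200, 300, 400], dtype=float)
health = np.array([1.00, 0.93, 0.88, 0.80, 0.74])
print(f"Estimated RUL: {estimate_rul(hours, health, failure_threshold=0.5):.0f} hours")
```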

    Multiagent Industrial Symbiosis Systems


    Developing Accessible Collection and Presentation Methods for Observational Data

    The processes of collecting, cleaning, and presenting data are critical in ensuring the proper analysis of data at a later date. An opportunity exists to enhance the data collection and presentation process for those who are not data scientists, such as healthcare professionals and businesspeople interested in using data to help them make decisions. In this work, creating an observational data collection and presentation tool is investigated, with a focus on developing a tool prioritizing user-friendliness and context preservation of the data collected. This aim is achieved via the integration of three approaches to data collection and presentation.

    In the first approach, the collection of observational data is structured and carried out via a trichotomous, tailored, sub-branching scoring (TTSS) system. The system allows for deep levels of data collection while enabling data to be summarized quickly by a user via collapsing details. The system is evaluated against the stated requirements of usability and extensibility, proving the latter by providing examples of various evaluations created using the TTSS framework.

    Next, this approach is integrated with automated data collection via mobile device sensors to facilitate the efficient completion of the assessment. Results are presented from a system used to combine the capture of complex data from the built environment and compare the results of the data collection, including how the system uses quantitative measures specifically. This approach is evaluated against other solutions for obtaining data about the accessibility of a built environment, and several assessments taken in the field are compared to illustrate the system’s flexibility. The extension of the system for automated data capture is also discussed.

    Finally, the use of accessibility information for data context preservation is integrated. This approach is evaluated via an investigation of how accessible media entries improve the quality of search for an archival website. Human-generated accessibility information is compared to computer-generated accessibility information, as well as to simple reliance on titles/metadata. This is followed by a discussion of how improved accessibility can benefit the understanding of gathered observational data’s context.
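
    As a rough illustration of how a trichotomous, sub-branching score sheet might be represented, the sketch below uses a tree of scoring items whose sub-branches are only shown when expanded, so a user can skim a collapsed summary. The class and field names and the three score labels are assumptions for illustration, not the TTSS implementation.

```python
# Minimal sketch (Python) of a trichotomous, sub-branching scoring item.
# Names and the scoring scale are illustrative assumptions, not the TTSS system itself.
from dataclasses import dataclass, field
from typing import Optional, List

SCORES = ("yes", "partial", "no")  # trichotomous outcomes (assumed labels)

@dataclass
class ScoreItem:
    question: str
    score: Optional[str] = None                                 # one of SCORES once answered
    children: List["ScoreItem"] = field(default_factory=list)   # deeper sub-branch of detail
    collapsed: bool = True                                      # collapsed children keep the summary short

    def summary(self, depth: int = 0) -> str:
        """Render the item; skip children while collapsed so details stay hidden."""
        line = f"{'  ' * depth}{self.question}: {self.score or 'unscored'}"
        if self.collapsed:
            return line
        return "\n".join([line] + [c.summary(depth + 1) for c in self.children])

# Usage: a built-environment question that only branches into detail when expanded.
ramp = ScoreItem("Entrance has a ramp", "partial",
                 children=[ScoreItem("Slope within guideline", "no"),
                           ScoreItem("Handrails present", "yes")])
print(ramp.summary())    # collapsed, one-line summary
ramp.collapsed = False
print(ramp.summary())    # expanded, shows the sub-branch detail
```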