
    Enabling Self-aware Smart Buildings by Augmented Reality

    Conventional HVAC control systems are usually incognizant of the physical structures and materials of buildings. These systems merely follow pre-set HVAC control logic based on abstract building thermal response models, which are rough approximations to true physical models, ignoring dynamic spatial variations in built environments. To enable more accurate and responsive HVAC control, this paper introduces the notion of "self-aware" smart buildings, such that buildings are able to explicitly construct physical models of themselves (e.g., incorporating building structures and materials, and thermal flow dynamics). The question is how to enable self-aware buildings that automatically acquire dynamic knowledge of themselves. This paper presents a novel approach using "augmented reality". The extensive user-environment interactions in augmented reality not only can provide intuitive user interfaces for building systems, but also can capture the physical structures and possibly materials of buildings accurately to enable real-time building simulation and control. This paper presents a building system prototype incorporating augmented reality, and discusses its applications. Comment: This paper appears in ACM International Conference on Future Energy Systems (e-Energy), 201
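
    As a minimal sketch of how AR-captured geometry might feed such a self-constructed physical model, the toy lumped RC thermal model below derives its envelope conductance from scanned surfaces. The Surface fields, the material table, and all numeric values are illustrative assumptions, not the paper's actual model.

```python
from dataclasses import dataclass

# Hypothetical material table: thermal conductivity in W/(m*K) per label.
CONDUCTIVITY = {"concrete": 1.4, "glass": 0.96, "drywall": 0.17}

@dataclass
class Surface:
    area_m2: float      # captured from the AR-scanned mesh
    thickness_m: float  # assumed or user-supplied
    material: str       # inferred material label

def envelope_conductance(surfaces):
    """Total conductive heat-loss coefficient UA (W/K) of the scanned envelope."""
    return sum(CONDUCTIVITY[s.material] / s.thickness_m * s.area_m2 for s in surfaces)

def step_indoor_temp(t_in, t_out, hvac_watts, surfaces, heat_capacity_j_per_k, dt_s):
    """One explicit-Euler step of a lumped model: C dT/dt = UA*(T_out - T_in) + Q_hvac."""
    ua = envelope_conductance(surfaces)
    return t_in + (ua * (t_out - t_in) + hvac_watts) * dt_s / heat_capacity_j_per_k

# Example: a room whose envelope came out of an AR scan.
room = [Surface(12.0, 0.20, "concrete"), Surface(4.0, 0.01, "glass")]
t = 22.0
for _ in range(60):  # simulate one hour in one-minute steps
    t = step_indoor_temp(t, 5.0, 800.0, room, 5e6, 60.0)
print(f"indoor temperature after 1 h: {t:.1f} C")
```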

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: Instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes, and output a stream of events that encode the time, location and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz) resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
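
    To make the event representation concrete, here is a minimal sketch of the (time, location, sign) event tuples the survey describes, together with the simplest processing step: accumulating an event window into a frame-like image. The array layout and window choice are illustrative, not any specific camera's output format.

```python
import numpy as np

# Each event encodes (timestamp_us, x, y, polarity): the time, pixel location,
# and sign of a brightness change, as described in the survey.
events = np.array([
    (1_000, 12, 7, +1),
    (1_042, 12, 8, -1),
    (1_090, 13, 7, +1),
], dtype=[("t", "u8"), ("x", "u2"), ("y", "u2"), ("p", "i1")])

def accumulate(events, width, height, t0, t1):
    """Integrate the events in [t0, t1) into a signed 2-D frame.

    This is the simplest event-processing primitive: it trades away the
    microsecond temporal resolution to recover a frame-like representation.
    """
    frame = np.zeros((height, width), dtype=np.int32)
    window = events[(events["t"] >= t0) & (events["t"] < t1)]
    np.add.at(frame, (window["y"], window["x"]), window["p"])
    return frame

frame = accumulate(events, width=32, height=32, t0=0, t1=2_000)
print("net brightness-change count at (y=7, x=12):", frame[7, 12])
```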

    JALAD: Joint Accuracy- and Latency-Aware Deep Structure Decoupling for Edge-Cloud Execution

    Recent years have witnessed a rapid growth of deep-network based services and applications. A practical and critical problem thus has emerged: how to effectively deploy the deep neural network models such that they can be executed efficiently. Conventional cloud-based approaches usually run the deep models in data center servers, causing large latency because a significant amount of data has to be transferred from the edge of network to the data center. In this paper, we propose JALAD, a joint accuracy- and latency-aware execution framework, which decouples a deep neural network so that a part of it will run at edge devices and the other part inside the conventional cloud, while only a minimum amount of data has to be transferred between them. Though the idea seems straightforward, we are facing challenges including i) how to find the best partition of a deep structure; ii) how to deploy the component at an edge device that only has limited computation power; and iii) how to minimize the overall execution latency. Our answers to these questions are a set of strategies in JALAD, including 1) A normalization based in-layer data compression strategy by jointly considering compression rate and model accuracy; 2) A latency-aware deep decoupling strategy to minimize the overall execution latency; and 3) An edge-cloud structure adaptation strategy that dynamically changes the decoupling for different network conditions. Experiments demonstrate that our solution can significantly reduce the execution latency: it speeds up the overall inference execution with a guaranteed model accuracy loss. Comment: conference, copyright transferred to IEE
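
    The split-point search at the heart of such decoupling can be sketched in a few lines. The toy model below (all per-layer numbers are invented placeholders, not measurements) only illustrates the latency trade-off being optimized; the actual JALAD strategies also fold in the compression-accuracy trade-off.

```python
# For a candidate cut after layer i, total latency = edge compute for layers
# [0, i) + transfer of the compressed intermediate + cloud compute for the rest.

INPUT_KB = 600.0  # raw input that must be uploaded if everything runs in the cloud

layers = [
    # (edge_ms, cloud_ms, compressed_intermediate_kb) -- made-up placeholders
    (4.0, 0.5, 900.0),  # conv1
    (6.0, 0.8, 400.0),  # conv2
    (9.0, 1.2, 120.0),  # conv3
    (3.0, 0.4, 40.0),   # fc
]

def total_latency(split, uplink_kb_per_ms):
    """Latency when layers [0, split) run on the edge and the rest in the cloud."""
    edge = sum(l[0] for l in layers[:split])
    cloud = sum(l[1] for l in layers[split:])
    payload_kb = layers[split - 1][2] if split > 0 else INPUT_KB
    transfer = payload_kb / uplink_kb_per_ms if split < len(layers) else 0.0
    return edge + transfer + cloud

# Re-evaluating the split as bandwidth changes mirrors the paper's
# edge-cloud structure adaptation; here we simply sweep all cut points.
for bandwidth in (10.0, 100.0):  # uplink in kB/ms
    best = min(range(len(layers) + 1), key=lambda s: total_latency(s, bandwidth))
    print(f"{bandwidth:6.1f} kB/ms -> cut after layer {best}, "
          f"{total_latency(best, bandwidth):.1f} ms total")
```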

    CGAMES'2009


    Vu: Integrating AR Technology and Interaction into an Event Planning App

    Planning a social event can be expensive and time-consuming. To minimize the risk of event problems, the organizer can consult professional event planners. However, a consultant can also be costly. Therefore, purchasing decor, food, and other items without knowing if they look right or fit the venue is a guessing game, and the game could be an expensive one. If the original plan cannot be completed efficiently, then modifying or improving it is likely to cost extra time and funds. However, testing the revised plan may also increase the likelihood of risk in the future. By integrating Augmented Reality (AR) into an event planning App, the App will allow users to arrange virtual items onto the environment captured by the device. Thus, users can envision their plan and make changes before actually making purchases, calling in construction teams, and doing the decorations. The goal of this thesis is to integrate AR into an App design that allows users to design, view, and make budgets for their event plan in advance, optimizing their design beforehand.
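
    A hedged sketch of the budget-aware placement logic such an App could use follows; the class names, fields, and prices are hypothetical, and the actual AR anchoring and rendering would be handled by the device's AR framework, not by this code.

```python
from dataclasses import dataclass, field

@dataclass
class DecorItem:
    name: str
    price: float
    position: tuple  # (x, y, z) anchor in the scanned venue, metres

@dataclass
class EventPlan:
    budget: float
    items: list = field(default_factory=list)

    @property
    def spent(self) -> float:
        return sum(i.price for i in self.items)

    def place(self, item: DecorItem) -> bool:
        """Add a virtual item only if it keeps the plan within budget."""
        if self.spent + item.price > self.budget:
            return False
        self.items.append(item)
        return True

plan = EventPlan(budget=500.0)
plan.place(DecorItem("balloon arch", 120.0, (0.0, 0.0, -2.0)))
plan.place(DecorItem("round table", 350.0, (1.5, 0.0, -3.0)))
ok = plan.place(DecorItem("photo booth", 400.0, (-2.0, 0.0, -1.0)))
print(f"spent {plan.spent:.2f} of {plan.budget:.2f}; photo booth added: {ok}")
```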

    Consumer behavior in augmented shopping reality: A review, synthesis, and research agenda

    The application of augmented reality (AR) is receiving great interest in e-commerce, m-commerce, and brick-and-mortar retailing. A growing body of literature has explored several different facets of how consumers react to the upcoming augmented shopping reality. This systematic literature review summarizes the findings of 56 empirical papers that analyzed consumers’ experience with AR, acceptance of AR, and behavioral reactions to AR in various online and offline environments. The review synthesizes current knowledge and critically discusses the empirical studies conceptually and methodologically. Finally, the review outlines the theoretical basis as well as the independent, mediating, moderating, and dependent variables analyzed in previous AR research. Based on this synthesis, the paper develops an integrative framework model, which helps derive directives for future research on augmented shopping reality.

    Effective Gesture Based Framework for Capturing User Input

    Computing today isn't confined to laptops and desktops; mobile gadgets such as mobile phones make use of it as well. However, one input device that hasn't changed in the last 50 years is the QWERTY keyboard. Thanks to sensor technology and artificial intelligence, users of virtual keyboards can type on any surface as if it were a keyboard. In this research, we use image processing to create a virtual computer keyboard application, built on a novel framework that detects hand gestures with high accuracy while also being sustainable and financially viable. A camera captures keyboard images and finger movements, which together act as a virtual keyboard. In addition, a visible virtual mouse that accepts finger coordinates as input is also described in this study. This system has the direct benefits of reducing peripheral cost, reducing the electronic waste generated by external devices, and providing accessibility to people who cannot use a traditional keyboard and mouse.
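
    As an illustration of the final mapping step, the sketch below turns fingertip tap coordinates into QWERTY keystrokes. The grid geometry is an invented placeholder, and the fingertip detection itself (which the paper performs with image processing) is faked here with hard-coded taps.

```python
# Map a detected fingertip "tap" position in the camera frame onto a key in a
# projected QWERTY grid. A real system would obtain (x, y) from a hand tracker.

ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
KEY_W, KEY_H = 60, 80          # key size in pixels within the camera frame
ORIGIN_X, ORIGIN_Y = 100, 300  # top-left corner of the keyboard region

def key_at(x_px: int, y_px: int):
    """Return the key under a fingertip tap at pixel (x_px, y_px), or None."""
    col = (x_px - ORIGIN_X) // KEY_W
    row = (y_px - ORIGIN_Y) // KEY_H
    if 0 <= row < len(ROWS) and 0 <= col < len(ROWS[row]):
        return ROWS[row][col]
    return None  # tap landed outside the virtual keyboard

# Simulated taps, as a fingertip tracker might emit when touches are detected.
taps = [(130, 320), (310, 410), (700, 330)]
print("typed:", "".join(k for k in (key_at(x, y) for x, y in taps) if k))
```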

    Enhancing the museum experience with a sustainable solution based on contextual information obtained from an on-line analysis of users’ behaviour

    Human-computer interaction has evolved in recent years in order to enhance users’ experiences and provide more intuitive and usable systems. A major leap forward in this scenario is obtained by embedding, in the physical environment, sensors capable of detecting and processing users’ context (position, pose, gaze, ...). Fed by the information flows collected this way, user interface paradigms may shift from stereotyped gestures on physical devices to more direct and intuitive ones that reduce the semantic gap between the action and the corresponding system reaction, or even anticipate the user’s needs, thus limiting the overall learning effort and increasing user satisfaction. In order to make this process effective, the context of the user (i.e., where s/he is, what s/he is doing, who s/he is, what her/his preferences are, and also her/his actual perception and needs) must be properly understood. While collecting data on some aspects can be easy, interpreting them all in a meaningful way in order to improve the overall user experience is much harder. This is more evident when we consider informal learning environments like museums, i.e., places that are designed to elicit visitor response towards the artifacts on display and the cultural themes proposed. In such a situation, in fact, the system should adapt to the attention paid by the user, choosing the appropriate content for the user’s purposes and presenting an intuitive interface to navigate it. My research goal is focused on collecting, in a simple, unobtrusive, and sustainable way, contextual information about the visitors with the purpose of creating more engaging and personalized experiences.
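
    A minimal sketch of the kind of attention-driven adaptation loop described: content is offered for the artifact the visitor has attended to longest. The tracker interface and the content catalogue are assumptions made for illustration, not the system's actual design.

```python
from collections import defaultdict

dwell_s = defaultdict(float)  # seconds of attention accumulated per artifact

def observe(artifact_id: str, seconds: float):
    """Called by the (assumed) gaze/position tracker for each observation."""
    dwell_s[artifact_id] += seconds

CONTENT = {
    "amphora": "Audio guide: trade routes of the 5th century BC",
    "mosaic": "Video: restoration of the floor mosaic",
}

def next_content():
    """Offer content for the most-attended artifact that has content available."""
    ranked = sorted(dwell_s, key=dwell_s.get, reverse=True)
    return next((CONTENT[a] for a in ranked if a in CONTENT), None)

observe("amphora", 12.0)
observe("statue", 3.5)
observe("mosaic", 25.0)
print(next_content())  # -> the mosaic video, since it drew the most attention
```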

    Runtime reconfiguration of physical and virtual pervasive systems

    Today, almost everyone comes in contact with smart environments during their everyday life. Environments such as smart homes, smart offices, or pervasive classrooms contain a plethora of heterogeneous connected devices and provide diverse services to users. The main goal of such smart environments is to support users during their daily chores and simplify the interaction with the technology. Pervasive middleware can provide seamless communication between all available devices by integrating them directly into the environment. Only a few years ago, a user entering a meeting room had to set up, for example, the projector and connect a computer manually, or teachers had to distribute files via mail. With the rise of smart environments these tasks can be automated by the system, e.g., upon entering a room, the smartphone automatically connects to a display and the presentation starts. Besides all the advantages of smart environments, they also bring up two major problems. First, while the built-in automatic adaptation of many smart environments is often able to adjust the system in a helpful way, there are situations where the user has something different in mind. In such cases, it can be challenging for inexperienced users to configure the system to their needs. Second, while users are getting increasingly mobile, they still want to use the systems they are accustomed to. As an example, an employee on a business trip wants to join a meeting taking place in a smart meeting room. Thus, smart environments need to be accessible remotely and should provide all users with the same functionalities and user experience. For these reasons, this thesis presents the PerFlow system, consisting of three parts. First, the PerFlow Middleware allows the reconfiguration of a pervasive system during runtime. Second, with the PerFlow Tool, inexperienced end users are able to create new configurations without previous knowledge of programming distributed systems. To this end, a specialized visual scripting language is designed, which allows the creation of rules for the communication between different devices. Third, to offer remote participants the same user experience, the PerFlow Virtual Extension allows the implementation of pervasive applications for virtual environments. After introducing the design of the PerFlow system, the implementation details and an evaluation of the developed prototype are outlined. The evaluation discusses the usability of the system in a real-world scenario and the performance implications of the middleware, evaluated in our own pervasive learning environment, the PerLE testbed. Further, a two-stage user study is introduced to analyze the ease of use and the usefulness of the visual scripting tool.
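
    Rules like those the visual scripting tool produces can be pictured as event-condition-action triples. The sketch below is a guess at that shape for illustration only; the rule structure, device names, and actions are invented assumptions, not the PerFlow API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    event: str                          # e.g. "user.entered"
    condition: Callable[[dict], bool]   # evaluated against the event payload
    action: Callable[[dict], None]      # the reconfiguration to perform

rules = [
    Rule(
        event="user.entered",
        condition=lambda e: e["room"] == "meeting-room-1" and e["has_slides"],
        action=lambda e: print(f"connect {e['device']} to display, start slides"),
    ),
    Rule(
        event="user.entered",
        condition=lambda e: e["room"] == "classroom",
        action=lambda e: print(f"push course files to {e['device']}"),
    ),
]

def dispatch(event: str, payload: dict):
    """Fire every rule whose event name and condition match (runtime reconfiguration)."""
    for rule in rules:
        if rule.event == event and rule.condition(payload):
            rule.action(payload)

# A remote participant joining virtually could be routed through the same rules.
dispatch("user.entered", {"room": "meeting-room-1", "device": "alice-phone", "has_slides": True})
```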