
    Enabling Micro-level Demand-Side Grid Flexibility in Resource Constrained Environments

    The increased penetration of uncertain and variable renewable energy presents various resource and operational electric grid challenges. Micro-level (household and small commercial) demand-side grid flexibility could be a cost-effective strategy to integrate high penetrations of wind and solar energy, but literature and field deployments exploring the necessary information and communication technologies (ICTs) are scant. This paper presents an exploratory framework for enabling information-driven grid flexibility through the Internet of Things (IoT), and a proof-of-concept wireless sensor gateway (FlexBox) to collect the necessary parameters for adequately monitoring and actuating the micro-level demand-side. In the summer of 2015, thirty sensor gateways were deployed in the city of Managua (Nicaragua) to develop a baseline for a near-future small-scale demand response pilot implementation. FlexBox field data has begun shedding light on relationships between ambient temperature and load energy consumption, load and building envelope energy efficiency challenges, communication network latency challenges, and opportunities to engage existing demand-side user behavioral patterns. Information-driven grid flexibility strategies present a great opportunity to develop new technologies, system architectures, and implementation approaches that can easily scale across regions, incomes, and levels of development.
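
    The abstract does not detail the gateway software, so the following is only a minimal sketch of the kind of telemetry loop such a device might run; the endpoint, transport (HTTP), field names, and sampling interval are assumptions, not the FlexBox design:

```python
# Hypothetical micro-level demand-side telemetry loop (not the FlexBox firmware):
# sample household load power and ambient temperature, then push readings to an
# aggregation endpoint so a utility-side controller can build a demand-response baseline.
import random
import time

import requests  # assumed HTTP transport; the paper does not specify one

GATEWAY_ID = "flexbox-042"                             # hypothetical identifier
ENDPOINT = "https://aggregator.example.org/telemetry"  # hypothetical endpoint


def read_load_watts() -> float:
    """Placeholder for a current-transformer power measurement."""
    return random.uniform(50.0, 400.0)


def read_ambient_celsius() -> float:
    """Placeholder for an ambient temperature sensor."""
    return random.uniform(24.0, 36.0)


def main() -> None:
    while True:
        reading = {
            "gateway": GATEWAY_ID,
            "timestamp": time.time(),
            "load_w": read_load_watts(),
            "ambient_c": read_ambient_celsius(),
        }
        try:
            requests.post(ENDPOINT, json=reading, timeout=5)
        except requests.RequestException:
            pass  # on a constrained network, drop this sample and retry next cycle
        time.sleep(60)  # one-minute sampling interval (illustrative)


if __name__ == "__main__":
    main()
```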

    DeepCoder: Semi-parametric Variational Autoencoders for Automatic Facial Action Coding

    The human face exhibits an inherent hierarchy in its representations (i.e., holistic facial expressions can be encoded via a set of facial action units (AUs) and their intensity). Variational (deep) auto-encoders (VAE) have shown great results in unsupervised extraction of hierarchical latent representations from large amounts of image data, while being robust to noise and other undesired artifacts. Potentially, this makes VAEs a suitable approach for learning facial features for AU intensity estimation. Yet, most existing VAE-based methods apply classifiers learned separately from the encoded features. By contrast, non-parametric (probabilistic) approaches, such as Gaussian Processes (GPs), typically outperform their parametric counterparts, but cannot deal easily with large amounts of data. To this end, we propose a novel semi-parametric VAE modeling framework, named DeepCoder, which combines the modeling power of parametric (convolutional) and nonparametric (ordinal GPs) VAEs, for joint learning of (1) latent representations at multiple levels in a task hierarchy, and (2) classification of multiple ordinal outputs. We show on benchmark datasets for AU intensity estimation that the proposed DeepCoder outperforms the state-of-the-art approaches, and related VAEs and deep learning models. Comment: ICCV 2017 - accepted.
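
    As a rough point of reference only, a minimal convolutional VAE with a plain linear classification head on the latent code (a stand-in for DeepCoder's ordinal GP component, assuming 64x64 single-channel face crops) might be sketched in PyTorch as follows:

```python
# Minimal convolutional VAE with a classification head on the latent code.
# Illustrative only: DeepCoder instead couples the latent space with ordinal
# Gaussian Processes, which is omitted here.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvVAE(nn.Module):
    def __init__(self, latent_dim: int = 32, num_intensities: int = 6):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(64 * 16 * 16, latent_dim)
        self.fc_logvar = nn.Linear(64 * 16 * 16, latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 64 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )
        # AU intensity head: a plain linear classifier stands in for the ordinal GPs.
        self.head = nn.Linear(latent_dim, num_intensities)

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec(z), self.head(z), mu, logvar


def vae_loss(recon, x, logits, labels, mu, logvar):
    # Reconstruction + KL divergence + intensity classification loss.
    rec = F.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    cls = F.cross_entropy(logits, labels)
    return rec + kld + cls
```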

    Working Notes from the 1992 AAAI Workshop on Automating Software Design. Theme: Domain Specific Software Design

    The goal of this workshop is to identify different architectural approaches to building domain-specific software design systems and to explore issues unique to domain-specific (vs. general-purpose) software design. Some general issues that cut across the particular software design domain include: (1) knowledge representation, acquisition, and maintenance; (2) specialized software design techniques; and (3) user interaction and user interfaces.

    Translating Video Recordings of Mobile App Usages into Replayable Scenarios

    Screen recordings of mobile applications are easy to obtain and capture a wealth of information pertinent to software developers (e.g., bugs or feature requests), making them a popular mechanism for crowdsourced app feedback. Thus, these videos are becoming a common artifact that developers must manage. In light of unique mobile development constraints, including swift release cycles and rapidly evolving platforms, automated techniques for analyzing all types of rich software artifacts provide benefit to mobile developers. Unfortunately, automatically analyzing screen recordings presents serious challenges, due to their graphical nature, compared to other types of (textual) artifacts. To address these challenges, this paper introduces V2S, a lightweight, automated approach for translating video recordings of Android app usages into replayable scenarios. V2S is based primarily on computer vision techniques and adapts recent solutions for object detection and image classification to detect and classify user actions captured in a video, and convert these into a replayable test scenario. We performed an extensive evaluation of V2S involving 175 videos depicting 3,534 GUI-based actions collected from users exercising features and reproducing bugs from over 80 popular Android apps. Our results illustrate that V2S can accurately replay scenarios from screen recordings, and is capable of reproducing approximately 89% of our collected videos with minimal overhead. A case study with three industrial partners illustrates the potential usefulness of V2S from the viewpoint of developers. Comment: In proceedings of the 42nd International Conference on Software Engineering (ICSE'20), 13 pages.
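
    V2S itself centers on object detection and classification of touch indicators in video frames; the sketch below illustrates only the final, simpler step one might use to turn an already-detected event list into replayable device input, assuming `adb shell input` as the replay mechanism (the event format and pacing logic are illustrative, not the authors' implementation):

```python
# Illustrative post-processing in the spirit of V2S: convert detected touch
# events (frame index + screen coordinates) into adb input commands.
import subprocess
import time
from dataclasses import dataclass


@dataclass
class TouchEvent:
    frame: int          # frame index at which the touch indicator was detected
    x: int              # screen coordinates of the detected touch
    y: int
    kind: str = "tap"   # "tap" or "swipe"
    end_x: int = 0      # swipe end point (unused for taps)
    end_y: int = 0


def replay(events: list[TouchEvent], fps: float = 30.0) -> None:
    """Replay detected GUI actions on a connected device/emulator via adb."""
    prev_frame = 0
    for ev in sorted(events, key=lambda e: e.frame):
        # Preserve the pacing of the original recording.
        time.sleep((ev.frame - prev_frame) / fps)
        prev_frame = ev.frame
        if ev.kind == "tap":
            cmd = ["adb", "shell", "input", "tap", str(ev.x), str(ev.y)]
        else:
            cmd = ["adb", "shell", "input", "swipe",
                   str(ev.x), str(ev.y), str(ev.end_x), str(ev.end_y), "300"]
        subprocess.run(cmd, check=True)


if __name__ == "__main__":
    replay([TouchEvent(frame=12, x=540, y=960),
            TouchEvent(frame=90, x=540, y=1500, kind="swipe", end_x=540, end_y=600)])
```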

    Automating Real-Time Fault Detection for the University of Tennessee Space Institute, Aviation Systems’ Flight Testing and Airborne Science Applications

    The UTSI Aviation Systems program has conducted many successful airborne science campaigns in collaboration with premier research organizations including NASA and NOAA. Each airborne science mission requires dedicated FTEs to monitor the various instruments onboard the aircraft. A typical mission requires aircraft to be instrumented with a wide range of sensors (with approximately 145 data parameters). Monitoring the instruments requires highly skilled personnel who have a thorough understanding of the system. As the UTSI Aviation Systems program's capability to conduct multiple missions on multiple airborne platforms grows, the requirement of a skilled FTE for each mission could effectively impede mission readiness. In addition, customers have expressed interest in increased involvement in the airborne science missions and must therefore be accommodated within the limited confines of the aircraft. As a result of these requirements, a real-time expert system has been developed (using LabVIEW) to monitor mission-critical instrumentation. The program will provide the user with a tool to monitor the performance of the airborne sensors without requiring extensive knowledge of the system and rigorous training. The overall effect would be increased flexibility while simultaneously enhancing quality of operation, in that a mission would not be flown with a defective sensor onboard. The following work describes the algorithms, system architecture, and coding techniques used to develop the “go no-go” program. As the program is under constant refinement, the descriptions presented reflect the current state of the software.
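
    The actual system is implemented in LabVIEW; as an illustration of the kind of limit-checking logic a "go no-go" monitor performs, a hedged Python sketch follows (parameter names and limits are invented, not the UTSI configuration):

```python
# Hypothetical rule-based "go / no-go" check over monitored data parameters:
# a mission is "go" only if every parameter is present and within limits.
from dataclasses import dataclass


@dataclass
class Limit:
    low: float
    high: float


# Example limit table: one entry per monitored data parameter (illustrative values).
LIMITS = {
    "static_pressure_kpa": Limit(60.0, 110.0),
    "egt_degc": Limit(0.0, 900.0),
    "gps_satellites": Limit(4.0, 32.0),
}


def check(sample: dict[str, float]) -> tuple[bool, list[str]]:
    """Return (go, faults); go is False if any parameter is missing or out of limits."""
    faults = []
    for name, limit in LIMITS.items():
        value = sample.get(name)
        if value is None:
            faults.append(f"{name}: no data")
        elif not (limit.low <= value <= limit.high):
            faults.append(f"{name}: {value} outside [{limit.low}, {limit.high}]")
    return (len(faults) == 0, faults)


if __name__ == "__main__":
    go, faults = check({"static_pressure_kpa": 101.3, "egt_degc": 950.0})
    print("GO" if go else "NO-GO", faults)
```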

    The 1982 ASEE-NASA Faculty Fellowship program (Aeronautics and Research)

    The NASA/ASEE Summer Faculty Fellowship Program (Aeronautics and Research) conducted at the NASA Goddard Space Flight Center during the summer of 1982 is described. Abstracts of the Final Reports submitted by the Fellows detailing the results of their research are also presented.