
    Thermography methodologies for detecting energy related building defects

    Thermography is becoming more widely used amongst construction professionals for energy-related defect detection in buildings. Until quite recently, most research and practical use of building thermography centred on a walk-around or walk-through methodology to detect sources of unacceptable energy use. However, thermographers are now developing new building thermography methodologies that seek to address some of the known limitations, such as camera spatial resolution, transient climatic conditions and differences in material properties; such limitations are often misunderstood and sometimes ignored. This study presents a review of the existing literature, covering both well-established and emerging building thermography methodologies. By critically appraising techniques and observing how methodologies are applied to specific energy-related defects, a much clearer picture has been formed that will help thermographic researchers and practitioners to choose the best methodology for a building thermography investigation and to devise new approaches. Whilst this paper shows that many of the different passive building thermography methodologies address particular building issues such as defects and energy use, it also demonstrates a lack of correlation between the different methodology types: one methodology is often chosen over another for a particular reason, rather than several methodologies being combined to better understand building performance. This paper therefore identifies the potential for using several passive building thermography methodologies together in a phased approach to building surveying: for example, a cheaper and faster survey could quickly identify certain defects before more time-consuming and expensive surveys home in on them with greater detail and spatial resolution, if deemed necessary. © 2014 Elsevier Ltd

    Small unmanned airborne systems to support oil and gas pipeline monitoring and mapping

    Acknowledgments: We thank Johan Havelaar, Aeryon Labs Inc., AeroVironment Inc. and Aeronautics Inc. for kindly permitting the use of materials in Fig. 1.

    Adapting astronomical source detection software to help detect animals in thermal images obtained by unmanned aerial systems

    In this paper we describe an unmanned aerial system equipped with a thermal-infrared camera, together with a software pipeline that we have developed to monitor animal populations for conservation purposes. Taking a multi-disciplinary approach to this problem, we use freely available astronomical source detection software, and the associated expertise of astronomers, to efficiently and reliably detect humans and animals in aerial thermal-infrared footage. Combining this astronomical detection software with existing machine learning algorithms into a single, automated, end-to-end pipeline, we test the software using aerial video footage taken in a controlled, field-like environment. We demonstrate that the pipeline works reliably and describe how it can be used to estimate the completeness of different observational datasets to objects of a given type as a function of height, observing conditions, etc., a crucial step in converting video footage into scientifically useful information such as the spatial distribution and density of different animal species. Finally, having demonstrated the potential utility of the system, we describe the steps we are taking to adapt it for work in the field, in particular the systematic monitoring of endangered species at national parks around the world.
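The core astronomy trick the abstract relies on can be illustrated with a minimal sketch: treat warm animals as point sources on a cool background, threshold at a few sigma above the noise, and group the surviving pixels into labelled detections. The function name, thresholds, and synthetic frame below are illustrative assumptions, not taken from the paper's pipeline.

```python
import numpy as np
from scipy import ndimage

def detect_sources(frame, nsigma=5.0, min_pixels=4):
    """Detect bright point-like sources in a thermal frame.

    Mirrors the astronomical workflow: estimate the background,
    threshold at nsigma above the noise, then label connected
    pixel groups as candidate sources.
    """
    background = np.median(frame)
    noise = np.std(frame)
    mask = frame > background + nsigma * noise
    labels, n = ndimage.label(mask)
    # Keep only detections larger than min_pixels to suppress hot noise.
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return [ndimage.center_of_mass(frame, labels, i + 1)
            for i in range(n) if sizes[i] >= min_pixels]

# Synthetic 100x100 frame with two warm "animals" on a cool background.
rng = np.random.default_rng(0)
frame = rng.normal(20.0, 0.5, (100, 100))
frame[30:34, 40:44] += 10.0
frame[70:73, 10:13] += 10.0
print(len(detect_sources(frame)))  # 2
```

A production pipeline would instead use a dedicated source extractor with local background estimation and deblending, but the detection principle is the same.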

    Multi-modal video analysis for early fire detection

    This dissertation investigates several aspects of an intelligent video-based fire detection system. The first part focuses on the multimodal processing of visual, infrared and time-of-flight video images, which improves purely visual detection. To keep the processing cost as low as possible, with a view to real-time detection, a set of 'low-cost' fire characteristics that uniquely describe fire and flames was selected for each sensor type. By merging the different types of information, the number of missed detections and false alarms can be reduced, resulting in a significant improvement of video-based fire detection. To combine the multimodal detection results, however, the multimodal images must be registered (aligned). The second part of this dissertation therefore focuses on this fusion of multimodal data and presents a new silhouette-based registration method. The third and final part proposes methods for performing video-based fire analysis and, at a later stage, fire modelling. Each of the proposed techniques for multimodal detection and multi-view localization has been extensively tested in practice, including successful tests for the early detection of car fires in underground parking garages.
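The false-alarm reduction the dissertation attributes to merging modalities can be sketched with a deliberately simple decision-level rule: raise an alarm only when a minimum number of the three sensor channels agree. This majority vote is an illustrative stand-in for the thesis's feature-level fusion, not its actual method.

```python
def fuse_detections(visual, infrared, tof, min_votes=2):
    """Combine per-sensor fire decisions (booleans) by majority vote.

    An alarm fires only when at least min_votes of the three
    modalities agree, suppressing single-sensor false alarms.
    """
    return (int(visual) + int(infrared) + int(tof)) >= min_votes

# A flame seen in both the visual and infrared channels raises an alarm...
print(fuse_detections(True, True, False))   # True
# ...but a headlight that only fools the visual channel does not.
print(fuse_detections(True, False, False))  # False
```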

    Thermal infrared work at ITC:a personal, historic perspective of transitions


    Hardware for recognition of human activities: a review of smart home and AAL related technologies

    Activity recognition (AR), viewed from the applied perspective of ambient assisted living (AAL) and smart homes (SH), has become a subject of great interest. Promising a better quality of life, AR applied in contexts such as health, security, and energy consumption can lead to solutions capable of reaching even the people most in need. This study was motivated by the observation that the development, deployment, and transfer of AR solutions to society and industry rest not only on software but also on the hardware devices used. The current paper identifies contributions of hardware to activity recognition through a scientific literature review in the Web of Science (WoS) database. This work found four dominant groups of technologies used for AR in SH and AAL—smartphones, wearables, video, and electronic components—and two emerging technologies: Wi-Fi and assistive robots. Many of these technologies overlap across research works. Through bibliometric network analysis, the present review identified gaps and new potential combinations of technologies for advances in this emerging worldwide field. The review also relates the use of these six technologies to health conditions, health care, emotion recognition, occupancy, mobility, posture recognition, localization, fall detection, and generic activity recognition applications. The above can serve as a road map that allows readers to execute approachable projects, deploy applications in different socioeconomic contexts, and establish networks with the community involved in this topic. This analysis shows that the activity recognition research field accepts that specific goals cannot be achieved with a single hardware technology but can be with joint solutions; this paper shows how such technologies work together in this regard.

    On the Integration of Medium Wave Infrared Cameras for Vision-Based Navigation

    The ubiquitous nature of GPS has fostered the widespread integration of navigation into a variety of applications, both civilian and military. One alternative that ensures continued flight operations in GPS-denied environments is vision-aided navigation, an approach that combines visual cues from a camera with an inertial measurement unit (IMU) to estimate the navigation states of a moving body. The majority of vision-based navigation research has been conducted in the electro-optical (EO) spectrum, which is limited in certain environments. The aim of this work is to explore how such approaches extend to infrared imaging sensors. In particular, it examines the ability of medium-wave infrared (MWIR) imagery, which can operate at night and see through smoke, to expand the breadth of operations that can be supported by vision-aided navigation. The experiments presented here are based on the Minor Area Motion Imagery (MAMI) dataset, which recorded GPS data, inertial measurements, EO imagery, and MWIR imagery captured during flights over Wright-Patterson Air Force Base. The approach applied here combines inertial measurements with EO position estimates from the structure-from-motion (SfM) algorithm. Although precision timing was not available for the MWIR imagery, the EO-based results demonstrate that trajectory estimates from SfM offer a significant increase in navigation accuracy when combined with inertial data, compared with using an IMU alone. Results also demonstrated that MWIR-based position solutions provide a similar trajectory reconstruction to EO-based solutions for the same scenes. While the MWIR imagery and the IMU could not be combined directly, comparison with the combined solution using EO data leads to the conclusion that MWIR imagery, with its unique phenomenologies, is capable of expanding the operating envelope of vision-aided navigation.
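Why combining camera fixes with inertial data beats an IMU alone can be shown with a one-dimensional complementary-filter sketch: inertial dead reckoning accumulates bias drift without bound, while each (noisy but unbiased) visual position fix pulls the fused state back toward the truth. The gain, the 1-D setup, and the noise levels below are illustrative assumptions, not the paper's actual estimator.

```python
import numpy as np

def fuse(imu_deltas, cam_fixes, x0=0.0, gain=0.3):
    """Propagate with IMU increments, then nudge the state
    toward each camera (SfM-style) position fix."""
    x = x0
    track = []
    for d, z in zip(imu_deltas, cam_fixes):
        x = x + d                        # dead reckoning (drifts)
        x = (1.0 - gain) * x + gain * z  # correction from the visual fix
        track.append(x)
    return np.array(track)

# Synthetic straight-line flight: true motion of 1 unit per step.
rng = np.random.default_rng(1)
n = 200
truth = np.arange(1, n + 1, dtype=float)
imu_deltas = 1.0 + 0.05 + rng.normal(0, 0.02, n)  # biased increments -> drift
cam_fixes = truth + rng.normal(0, 0.5, n)         # noisy but unbiased

imu_only = np.cumsum(imu_deltas)
fused = fuse(imu_deltas, cam_fixes)
# The IMU-only error grows to ~10 units; the fused error stays bounded.
print(abs(imu_only[-1] - truth[-1]), abs(fused[-1] - truth[-1]))
```

A real system would use an extended Kalman filter over full 6-DOF states, but the drift-correction behaviour is the same in principle.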

    DRONE TECHNOLOGY: IS IT WORTH THE INVESTMENT IN AGRICULTURE?

    From the earliest of times, the human race has sought to better understand this world and its surroundings. In the last century, aeronautical engineering and aerial imagery have evolved to allow a deeper understanding of how this world lives and breathes. Now more than ever, these two technological advancements are changing the way we view this world and how we are to sustain it for a brighter, healthier future. Over time, the advances of these two technologies were combined, and spectral sensing and drone technology were born. In their earliest years, drones and spectral imaging were only available to government agencies. In the mid-1990s, President Clinton declassified this technology and allowed the public to utilize and invest in its development. Today, the world has incorporated these technologies into a number of applications, one of which is agriculture. In the last decade, significant research interest has been devoted to drone technology and its possible applications, and many benefits of incorporating drone and spectral technology have been discovered in the agricultural sector. A big part of incorporating a new piece of equipment or technology into any operation is its economic feasibility: understanding what drone and spectral technology can do and provide is crucial to making a sound decision when considering investing in it. This document discusses the earliest developments of drone technology, its current status, and its predicted future. It also provides basic information about drone designs, drone regulations, types of spectral sensors and their capabilities, and some of the research being done in agriculture to advance these technologies. Additionally, a case study looking at a wild oat infestation in spring wheat will be addressed; it involves two crop consultants and their decision to invest in drone technology. Advisor: Gary L. Hei

    "Reading Between the Heat": Co-Teaching Body Thermal Signatures for Non-intrusive Stress Detection

    Stress impacts our physical and mental health as well as our social life. A passive and contactless indoor stress monitoring system could unlock numerous important applications such as workplace productivity assessment, smart homes, and personalized mental health monitoring. While the thermal signatures of a user's body captured by a thermal camera can provide important information about the "fight-or-flight" response of the sympathetic and parasympathetic nervous systems, relying solely on thermal imaging to train a stress prediction model often leads to overfitting and consequently suboptimal performance. This paper addresses this challenge by introducing ThermaStrain, a novel co-teaching framework that achieves high stress-prediction performance by transferring knowledge from the wearable modality to the contactless thermal modality. During training, ThermaStrain incorporates a wearable electrodermal activity (EDA) sensor, teaching the thermal branch to generate stress-indicative representations from thermal videos that emulate the representations derived from the EDA sensor. During testing, only thermal sensing is used: stress-indicative patterns are extracted from the thermal data and the emulated EDA representations to improve stress assessment. The study collected a comprehensive dataset of thermal video and EDA data under various stress conditions and distances. ThermaStrain achieves an F1 score of 0.8293 in binary stress classification, outperforming the thermal-only baseline by over 9%. Extensive evaluations highlight ThermaStrain's effectiveness in recognizing stress-indicative attributes, its adaptability across distances and stress scenarios, its real-time executability on edge platforms, its applicability to multi-individual sensing, its ability to function under limited visibility and in unfamiliar conditions, and the advantages of its co-teaching approach.
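The co-teaching idea, where a wearable modality supervises a contactless one during training only, can be sketched as a two-term training objective: the usual classification loss on the stress labels, plus a term pushing the thermal branch's embedding toward the wearable EDA branch's embedding. The function names, shapes, and weighting below are illustrative assumptions, not ThermaStrain's actual architecture or loss.

```python
import numpy as np

def co_teaching_loss(thermal_embed, eda_embed, logits, labels, alpha=0.5):
    """Classification loss plus an embedding-emulation term.

    thermal_embed, eda_embed : (batch, dim) representations from the
        thermal branch and the wearable EDA branch.
    logits : (batch, classes) stress predictions; labels : (batch,) ints.
    At test time only the thermal branch is run, so the emulation term
    exists purely to shape the thermal representation during training.
    """
    # Cross-entropy over the stress logits.
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = exp / exp.sum(axis=1, keepdims=True)
    task = -np.log(probs[np.arange(len(labels)), labels]).mean()
    # Emulation term: mean squared error between the two embeddings.
    emulate = np.mean((thermal_embed - eda_embed) ** 2)
    return task + alpha * emulate

# Toy batch: uniform logits, so the task loss alone is ln(2).
logits = np.zeros((4, 2))
labels = np.array([0, 1, 0, 1])
emb_t = np.ones((4, 8))
aligned = co_teaching_loss(emb_t, np.ones((4, 8)), logits, labels)
misaligned = co_teaching_loss(emb_t, np.zeros((4, 8)), logits, labels)
print(misaligned > aligned)  # True: divergent embeddings are penalized
```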
