
    PresenceSense: Zero-training Algorithm for Individual Presence Detection based on Power Monitoring

    Non-intrusive presence detection of individuals in commercial buildings is much easier to implement than intrusive methods such as passive infrared sensors, acoustic sensors, and cameras. Individual power consumption, while providing useful feedback and motivation for energy saving, can also serve as a valuable source for presence detection. We conduct pilot experiments in an office setting to collect individual presence data with ultrasonic sensors, acceleration sensors, and WiFi access points, in addition to the individual power monitoring data. PresenceSense (PS), a semi-supervised learning algorithm based on power measurements that trains itself with only unlabeled data, is proposed, analyzed, and evaluated in this study. Without any labeling effort, which is usually tedious and time consuming, PresenceSense outperforms popular models whose parameters are optimized over a large training set. The results are interpreted, and potential applications of PresenceSense to other data sources are discussed. The significance of this study lies in space security, occupancy behavior modeling, and energy saving of plug loads. Comment: BuildSys 201
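
    The listing gives no code, but the core idea lends itself to a short illustration: a hedged sketch (not the authors' PresenceSense algorithm) that fits a two-component Gaussian mixture to per-window features of an unlabeled plug-load trace and treats the higher-power component as "present". The window length, the mean/std features, and the higher-power heuristic are assumptions made here for illustration only.

```python
# Illustrative sketch only: label-free presence inference from plug-load power.
# This is NOT the PresenceSense algorithm from the paper; it just shows how an
# unsupervised model can separate "present" from "absent" intervals.
import numpy as np
from sklearn.mixture import GaussianMixture

def infer_presence(power_watts, window=60):
    """power_watts: 1-D array of per-second power readings for one desk."""
    n = len(power_watts) // window * window
    windows = power_watts[:n].reshape(-1, window)
    # Simple per-window features: mean level and short-term variability.
    feats = np.column_stack([windows.mean(axis=1), windows.std(axis=1)])
    gmm = GaussianMixture(n_components=2, random_state=0).fit(feats)
    labels = gmm.predict(feats)
    # Assume the component with the higher mean power corresponds to presence.
    present_comp = np.argmax(gmm.means_[:, 0])
    return labels == present_comp

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    absent = rng.normal(5, 0.5, 3600)        # monitor asleep, ~5 W
    present = rng.normal(60, 8, 3600)        # active workstation, ~60 W
    trace = np.concatenate([absent, present])
    presence = infer_presence(trace)
    print(presence[:5], presence[-5:])       # mostly False, then mostly True
```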

    Classification accuracy increase using multisensor data fusion

    The practical use of very high resolution visible and near-infrared (VNIR) data is still growing (IKONOS, Quickbird, GeoEye-1, etc.), but for classification purposes the number of bands is limited in comparison to full spectral imaging. These limitations may lead to confusion between materials such as different roofs, pavements, roads, etc., and therefore to wrong interpretation and use of classification products. Hyperspectral data are another solution, but their low spatial resolution (compared to multispectral data) restricts their use in many applications. A further improvement can be achieved by fusing multisensor data, since this may increase the quality of scene classification. Integration of Synthetic Aperture Radar (SAR) and optical data is widely performed for automatic classification, interpretation, and change detection. In this paper we present an approach for very high resolution SAR and multispectral data fusion for automatic classification in urban areas. Single-polarization TerraSAR-X (SpotLight mode) and multispectral data are integrated using the INFOFUSE framework, consisting of feature extraction (information fission), unsupervised clustering (data representation on a finite domain and dimensionality reduction), and data aggregation (Bayesian or neural network). This framework allows multisource data to be combined in a consistent way following consensus theory. The classification is not affected by dimensionality limitations, and the computational complexity depends primarily on the dimensionality reduction step. Fusion of single-polarization TerraSAR-X, WorldView-2 (VNIR or full set), and Digital Surface Model (DSM) data allows different types of urban objects to be classified into predefined classes of interest with increased accuracy. A comparison with classification results from WorldView-2 multispectral data (8 spectral bands) is provided, and the numerical evaluation against other established methods illustrates the advantage in classification accuracy for many classes such as buildings, low vegetation, sports facilities, forest, roads, railroads, etc.
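
    The INFOFUSE framework is only summarized above; as a hedged sketch of the general pattern it describes (per-source feature extraction, per-source unsupervised clustering onto a finite label domain, then a trainable aggregation step), the fragment below clusters each source with k-means and trains a small neural network on the concatenated one-hot cluster codes. The feature matrices, cluster counts, and class labels are illustrative assumptions, not the authors' configuration.

```python
# Sketch of a three-stage fusion pipeline in the spirit of INFOFUSE:
# (1) per-source features, (2) per-source clustering (finite domain),
# (3) supervised aggregation of the cluster codes. Not the authors' code.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import OneHotEncoder
from sklearn.neural_network import MLPClassifier

def cluster_codes(features, k):
    """Map per-pixel features of one source onto k discrete cluster labels."""
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)

# Assumed inputs: per-pixel feature matrices for each source (rows = pixels).
rng = np.random.default_rng(0)
n = 2000
sar = rng.normal(size=(n, 3))      # e.g. TerraSAR-X texture features
msi = rng.normal(size=(n, 8))      # e.g. WorldView-2 spectral bands
dsm = rng.normal(size=(n, 1))      # e.g. normalized surface heights
y = rng.integers(0, 5, size=n)     # training labels for 5 urban classes

codes = np.column_stack([cluster_codes(s, k)
                         for s, k in ((sar, 10), (msi, 20), (dsm, 5))])
onehot = OneHotEncoder().fit_transform(codes).toarray()

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(onehot, y)
print("training accuracy:", clf.score(onehot, y))
```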

    ๋‹จ์ผ ์Œํ–ฅ ์„ผ์„œ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๋ฐ์ดํ„ฐ ๊ธฐ๋ฐ˜ ๋‹ค์ธต ์ฒ ๊ทผ ์ฝ˜ํฌ๋ฆฌํŠธ ๊ฑด๋ฌผ ๋‚ด ์†Œ์Œ์˜ ์ข…๋ฅ˜์™€ ์œ„์น˜ ์ถ”์ •

    ํ•™์œ„๋…ผ๋ฌธ (๋ฐ•์‚ฌ) -- ์„œ์šธ๋Œ€ํ•™๊ต ๋Œ€ํ•™์› : ๊ณต๊ณผ๋Œ€ํ•™ ์กฐ์„ ํ•ด์–‘๊ณตํ•™๊ณผ, 2021. 2. ์„ฑ์šฐ์ œ.The construction of multi-story residential buildings triggers indoor noise. Indoor noise in residential areas has been investigated to ascertain the effect of noise on occupants and to improve their quality of life. In buildings, indoor acoustic noise transmitted from various sources travels through these structures and exerts an unpleasant effect on occupants. Inter-floor noise is identified as a severe type of indoor noise in residential areas. The identification of noise is considered a fundamental step that is essential for studying the challenges of noise pollution. By harnessing a sound level meter, long-term measurement, and site surveying, previous studies have been conducted on the identification of noise in residential areas to estimate the level, type, and position of generated noise. However, it is challenging to identify the source type and position of noise travelling through multi-story residential buildings owing to the difficulty of the human ear in intercepting these sounds. Recent studies on the identification of indoor noise are limited to noise sources and receivers on a single level of the floor, and they require multiple sensor channels to determine the time difference of arrival. Residential buildings, which are usually reinforced concrete structures, are considered to be concrete, steel, and fluid-mixed media with high structural complexity and occupants that have insufficient knowledge of the details of their properties. In this study, we propose a data-driven identification of noise in reinforced concrete buildings via the learning-based localization method using a single sensor. Actual experiments were conducted in a campus building, as well as two apartment buildings. Performance was analyzed according to several source types and positions that apply the deep convolutional neural network (CNN)-based supervised learning. The validations against the datasets obtained in three buildings verified the generalizability of the proposed method. In addition, noise identification data transferred within different floor sections in a single building and between similar buildings were presented in this study. Although indoor noise identification is emphasized in this work, the proposed method can be beneficial for other noise identification methods that employ a single sensor.๊ณต๋™์ฃผํƒ์˜ ์ฆ๊ฐ€๋กœ ๊ฑด๋ฌผ ๋‚ด ์ด์›ƒ ๊ฐ„์˜ ์†Œ์Œ ๋ฌธ์ œ๊ฐ€ ์‚ฌํšŒ์ ์œผ๋กœ ๋Œ€๋‘๋˜๊ณ  ์žˆ๋‹ค. ๊ฑฐ์ฃผ์ž์—๊ฒŒ ๋…ธ์ถœ๋œ ์†Œ์Œ์€ ๊ฑฐ์ฃผ์ž์˜ ๊ฑด๊ฐ• ๋ฌธ์ œ์— ์ง๊ฒฐ๋  ์ˆ˜๋„ ์žˆ์œผ๋ฏ€๋กœ ๊ฑด๋ฌผ ๋‚ด ์†Œ์Œ์— ๊ด€ํ•œ ์—ฌ๋Ÿฌ ์—ฐ๊ตฌ๊ฐ€ ์ง„ํ–‰๋˜์–ด ์™”๋‹ค. ๋‹ค์ธต ๊ฑด๋ฌผ ๋‚ด์—์„œ ๋ฐœ์ƒํ•œ ์†Œ์Œ์€ ๊ฑด๋ฌผ์˜ ๊ตฌ์กฐ๋ฅผ ๋”ฐ๋ผ ๋‹ค๋ฅธ ์ธต์œผ๋กœ ์ „๋‹ฌ๋˜๋ฉฐ ์ด๋Ÿฌํ•œ ์ธต๊ฐ„์†Œ์Œ์€ ์ฃผ๋ณ€ ์ด์›ƒ์—๊ฒŒ ๊ณ ํ†ต์œผ๋กœ ๋‹ค๊ฐ€์˜ฌ ์ˆ˜ ์žˆ๋‹ค. ์†Œ์Œ์›์˜ ๊ทœ๋ช…์€ ์†Œ์Œ์„ ๋‹ค๋ฃฐ ๋•Œ ์„ ํ–‰๋˜์–ด์•ผ ํ•˜๋Š” ๋ฐ” ๊ฑด๋ฌผ ๋‚ด ์†Œ์Œ์˜ ์ค€์œ„, ์ข…๋ฅ˜, ์œ„์น˜ ํŒŒ์•…์— ๊ด€๋ จ๋œ ์—ฐ๊ตฌ๋“ค์ด ์ง„ํ–‰๋˜์–ด ์™”๋‹ค. ์†Œ์Œ์˜ ์ค€์œ„๋Š” ์†Œ์Œ์ธก์ •๊ธฐ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ธก์ •์ด ๊ฐ€๋Šฅํ•˜๋‚˜ ๊ฑด๋ฌผ์˜ ๊ตฌ์กฐ๋ฅผ ๋”ฐ๋ผ ์ „๋‹ฌ๋œ ์†Œ์Œ์˜ ์ข…๋ฅ˜์™€ ์œ„์น˜๋ฅผ ํŒ๋ณ„ํ•˜๋Š” ๊ฒƒ์€ ์ถ”์ •์ด ํ•„์š”ํ•œ ๋ฌธ์ œ์ด๋ฉฐ ์‚ฌ๋žŒ์˜ ์ฒญ๋ ฅ์— ์˜์กดํ•˜์—ฌ์„œ ํ’€๊ธฐ๋„ ์–ด๋ ต๋‹ค. 
์ตœ๊ทผ ์—ฐ๊ตฌ๋œ ๊ด€๋ จ ์—ฐ๊ตฌ๋ฅผ ์‚ดํŽด๋ณด๋ฉด ๊ฑด๋ฌผ ๋‚ด ์†Œ์Œ์˜ ์ข…๋ฅ˜๋ฅผ ๋ถ„๋ฅ˜ํ•˜๋Š” ์—ฐ๊ตฌ๋Š” ๊ฑฐ์˜ ๋‹ค๋ค„์ง€์ง€ ์•Š์•˜๊ณ , ์†Œ์Œ์› ์œ„์น˜ ์ถ”์ • ์—ฐ๊ตฌ์˜ ๊ฒฝ์šฐ ๋™์ผ ์ธต์— ์†Œ์Œ์›๊ณผ ์—ฌ๋Ÿฌ ์ฑ„๋„์˜ ์ˆ˜์‹ ๊ธฐ๊ฐ€ ์œ„์น˜ํ•œ ๊ฒฝ์šฐ๋ฅผ ๋‹ค์ค‘์ธก๋Ÿ‰ (multilateration) ์„ ํ†ตํ•˜์—ฌ ์ œํ•œ์ ์œผ๋กœ ๋‹ค๋ค˜๋‹ค. ์ผ๋ฐ˜์ ์œผ๋กœ ํ˜„๋Œ€ ๊ฑฐ์ฃผ์šฉ ๊ฑด์ถ•๋ฌผ์˜ ๋Œ€๋ถ€๋ถ„์€ ์ฒ ๊ทผ ์ฝ˜ํฌ๋ฆฌํŠธ ๊ตฌ์กฐ์ด๋ฉฐ ์ธต๊ฐ„์˜ ์†Œ์Œ ์ „๋‹ฌ ํ™˜๊ฒฝ์€ ์ฝ˜ํฌ๋ฆฌํŠธ, ์ฒ ๊ทผ, ์œ ์ฒด๊ฐ€ ํ˜ผ์žฌํ•˜๋Š” ๋ณต์žกํ•œ ํ™˜๊ฒฝ์ด๋‹ค. ์ผ๋ฐ˜์ธ ๊ฑฐ์ฃผ์ž๊ฐ€ ์ด๋Ÿฌํ•œ ํ™˜๊ฒฝ์—์„œ์˜ ์†Œ์Œ ์ „๋‹ฌ ํ™˜๊ฒฝ์„ ํŒŒ์•…ํ•˜๊ณ  ์†Œ์Œ์˜ ์ „๋‹ฌ ๋ชจ๋ธ์„ ์„ธ์›Œ ์†Œ์Œ์„ ๊ทœ๋ช…ํ•˜๋Š” ๊ฒƒ์€ ์–ด๋ ต๋‹ค. ๋ณธ ๋…ผ๋ฌธ์€ ๋ชจ๋ฐ”์ผ ์žฅ์น˜ (mobile device) ์˜ ๋‹จ์ผ ์Œํ–ฅ ์„ผ์„œ๋กœ ์ธก์ •ํ•œ ์†Œ์Œ๊ณผ ํ•ฉ์„ฑ๊ณฑ ์‹ ๊ฒฝ๋ง์„ ํ™œ์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ ๊ธฐ๋ฐ˜ (data-driven) ์˜ ๊ฑด๋ฌผ ๋‚ด ์†Œ์Œ ๊ทœ๋ช… ๋ฐฉ๋ฒ•์„ ์ œ์•ˆํ•˜๊ณ  ํ•œ ๊ฐœ์˜ ์บ ํผ์Šค ๊ฑด๋ฌผ๊ณผ ๋‘ ์•„ํŒŒํŠธ ๊ฑด๋ฌผ์—์„œ ์ง„ํ–‰ํ•œ ์‹คํ—˜์„ ํ†ตํ•˜์—ฌ ์ด ๊ธฐ๋ฒ•์˜ ์œ ์šฉ์„ฑ๊ณผ ๋ณดํŽธ์„ฑ์„ ๋ณด์˜€๋‹ค. ๋˜ํ•œ ํ•œ ์ธต๊ฐ„์—์„œ ํ•™์Šตํ•œ ์†Œ์Œ ๊ทœ๋ช… ์ง€์‹์„ ๋™์ผ ๊ฑด๋ฌผ์˜ ๋‹ค๋ฅธ ์ธต๊ฐ„์—์„œ์˜ ์†Œ์Œ ๊ทœ๋ช…์—, ํ•œ ๊ฑด๋ฌผ์—์„œ ํ•™์Šตํ•œ ์†Œ์Œ ๊ทœ๋ช… ์ง€์‹์„ ๋‹ค๋ฅธ ๊ฑด๋ฌผ ๋‚ด ์†Œ์Œ ๊ทœ๋ช…์— ํ™œ์šฉ ํ•  ์ˆ˜ ์žˆ์Œ์„ ๋ณด์˜€๋‹ค. ์ œ์•ˆํ•˜๋Š” ๊ธฐ๋ฒ•์€ ์†Œ์Œ ์ „๋‹ฌ ํ™˜๊ฒฝ ํŒŒ์•… ๋ฐ ๋ชจ๋ธ์„ ์–ป๊ธฐ ์–ด๋ ค์šด ๋ถ„์•ผ์—์„œ์˜ ์ ์šฉ์—๋„ ์œ ์šฉํ•  ๊ฒƒ์œผ๋กœ ๊ธฐ๋Œ€ํ•œ๋‹ค.Abstract I Contents iii List of Figures vi List of Tables ix 1 Introduction 2 1.1 Backgrounds 2 1.2 Approach 5 1.2.1 Data-driven noise identification 5 1.2.2 Source type classification and localization 7 1.2.3 Knowledge transfer 10 1.3 Contributions 15 1.4 Outline of the Dissertation 16 2 Source type classification and localization of acoustic noises in a reinforced concrete structure 28 2.1 Introduction 29 2.1.1 Motivation 29 2.1.2 Related literature 29 2.1.3 Approach 30 2.1.4 Contributions of this chapter 31 2.2 Campus building inter-floor noise dataset 32 2.2.1 Selecting source type and source position 32 2.2.2 Generating and collecting inter-floor noise 33 2.3 Supervised learning of inter-floor noises 36 2.3.1 Convolutional neural networks for acoustic scene classification 36 2.3.2 Network architecture 36 2.3.3 Evaluation 40 2.3.4 Source type classification results 41 2.3.5 Localizationresults....................... 
41 2.4 Source type classification and localization of inter-floor noises generated on unlearned positions 47 2.4.1 Source type classification of inter-floor noises from unlearned positions 48 2.4.2 Localization of inter-floor noises from unlearned positions 50 2.5 Summary 52 2.6 Acknowledgments 53 3 Knowledge transfer between reinforced concrete structures 61 3.1 Introduction 62 3.1.1 Motivation 62 3.1.2 Related Literature 62 3.1.3 Approach 63 3.1.4 Contributions of this chapter 63 3.2 Apartment building inter-floor noise dataset 64 3.3 Inter-floor noise classification 70 3.3.1 Onset detection 70 3.3.2 Convolutional neural network-based classifier 71 3.3.3 Network training 75 3.3.4 Source type classification and localization tasks 75 3.4 Performance Evaluation 79 3.4.1 Source type classification results in a single apartment building 79 3.4.2 Localization results in a single apartment building 80 3.4.3 Results of knowledge transfer between the apartment buildings 81 3.5 Summary 87 3.6 Acknowledgments 94 4 Conclusions 96 4.1 Findings and limitations 97 4.2 Applications 97 4.2.1 Marine structures 98 4.2.2 Mobile application 98 4.3 Future study 100 4.3.1 Learning with building structure representation 100 4.3.2 Learning with data measured at multiple receiver locations 100 4.3.3 Task oriented algorithm 101 A Precision, recall, and F1 score of the classification results 102 B Data analysis 105 C Using a one-dimensional convolutional neural network and feature visualization 112 Abstract (In Korean) 124Docto
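
    The dissertation's networks and datasets are not reproduced in this listing; the sketch below only illustrates the recipe the abstract describes: a log-mel spectrogram computed from a single-channel recording, fed to a small convolutional network with two output heads, one for the noise source type and one for the source position. The layer sizes, label counts, and the use of torchaudio are assumptions for illustration.

```python
# Minimal sketch (not the dissertation's model): single-sensor inter-floor
# noise identification as joint source-type and position classification.
import torch
import torch.nn as nn
import torchaudio

N_TYPES, N_POSITIONS = 5, 9          # assumed label counts for illustration

class InterFloorNoiseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.type_head = nn.Linear(32, N_TYPES)      # e.g. footstep, hammering
        self.pos_head = nn.Linear(32, N_POSITIONS)   # e.g. floor/room index

    def forward(self, spec):
        h = self.features(spec)
        return self.type_head(h), self.pos_head(h)

# Assumed preprocessing: one-channel recording -> log-mel spectrogram.
melspec = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_mels=64)
waveform = torch.randn(1, 16000)                      # 1 s of fake audio
spec = torch.log(melspec(waveform) + 1e-6).unsqueeze(0)  # (batch, 1, mel, time)

model = InterFloorNoiseNet()
type_logits, pos_logits = model(spec)
print(type_logits.shape, pos_logits.shape)            # (1, 5) and (1, 9)
```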

    Fusion of aerial images and sensor data from a ground vehicle for improved semantic mapping

    This work investigates the use of semantic information to link ground-level occupancy maps and aerial images. A ground-level semantic map, which shows open ground and indicates the probability of cells being occupied by walls of buildings, is obtained by a mobile robot equipped with an omnidirectional camera, GPS, and a laser range finder. This semantic information is used for local and global segmentation of an aerial image. The result is a map in which the semantic information has been extended beyond the range of the robot's sensors and which predicts where the mobile robot can find buildings and potentially drivable ground.
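
    As a hedged illustration of the idea sketched above (ground-level semantic labels used as seeds so that building/ground predictions extend beyond the robot's sensor range), the fragment below classifies every aerial pixel by color similarity to seed pixels the robot has already labeled. The map-to-image registration, the color-based similarity, and all array names are assumptions rather than the authors' segmentation method.

```python
# Illustrative sketch: extend ground-level semantic labels over an aerial
# image by nearest-seed color classification. Not the paper's algorithm.
import numpy as np

def extend_labels(aerial_rgb, seed_pixels, seed_labels):
    """aerial_rgb: (H, W, 3) float image already registered to the ground map.
    seed_pixels: (N, 2) row/col indices observed by the robot.
    seed_labels: (N,) semantic labels (e.g. 0 = ground, 1 = building)."""
    h, w, _ = aerial_rgb.shape
    seed_colors = aerial_rgb[seed_pixels[:, 0], seed_pixels[:, 1]]   # (N, 3)
    flat = aerial_rgb.reshape(-1, 3)
    # Distance from every pixel to every seed color; pick the closest seed.
    d = np.linalg.norm(flat[:, None, :] - seed_colors[None, :, :], axis=2)
    return seed_labels[np.argmin(d, axis=1)].reshape(h, w)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((64, 64, 3))
    seeds = np.array([[5, 5], [60, 60]])
    labels = np.array([0, 1])            # 0 = open ground, 1 = building
    full_map = extend_labels(img, seeds, labels)
    print(full_map.shape, np.bincount(full_map.ravel()))
```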

    Object-Based Greenhouse Classification from GeoEye-1 and WorldView-2 Stereo Imagery

    Remote sensing technologies have been commonly used to perform greenhouse detection and mapping. In this research, stereo pairs acquired by the very high-resolution optical satellites GeoEye-1 (GE1) and WorldView-2 (WV2) have been utilized to carry out land cover classification of an agricultural area through an object-based image analysis approach, paying special attention to greenhouse extraction. The main novelty of this work lies in the joint use of single-source, stereo-photogrammetrically derived heights and multispectral information from both panchromatic and pan-sharpened orthoimages. The main features tested in this research can be grouped into different categories, such as basic spectral information, elevation data (normalized digital surface model, nDSM), band indexes and ratios, texture, and shape geometry. Furthermore, spectral information was based on both single orthoimages and multiangle orthoimages. The overall accuracies attained by applying nearest neighbor and support vector machine classifiers to the four multispectral bands of GE1 were very similar to those computed from WV2, for either four or eight multispectral bands. Height data, in the form of the nDSM, were the most important feature for greenhouse classification. The best overall accuracy values were close to 90%, and they were not improved by using multiangle orthoimages.
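
    As a rough sketch of the object-based workflow described above (per-object spectral statistics plus an nDSM height feature, classified with nearest neighbor or a support vector machine), the fragment below builds a small per-segment feature table and trains both classifiers with scikit-learn. The band order, the NDVI-style ratio, and the synthetic segmentation are assumptions for illustration only.

```python
# Sketch of object-based classification: one feature row per image object
# (segment), combining spectral means, an NDVI-style ratio, and mean nDSM.
# Not the authors' feature set or data.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def object_features(bands, ndsm, segment_ids, segments):
    """bands: (B, H, W) orthoimage, ndsm: (H, W), segments: (H, W) object ids."""
    rows = []
    for sid in segment_ids:
        mask = segments == sid
        means = bands[:, mask].mean(axis=1)
        red, nir = means[2], means[3]                 # assumed band order
        ndvi = (nir - red) / (nir + red + 1e-6)
        rows.append(np.concatenate([means, [ndvi, ndsm[mask].mean()]]))
    return np.array(rows)

rng = np.random.default_rng(0)
bands = rng.random((4, 100, 100))                     # 4-band pan-sharpened image
ndsm = rng.random((100, 100)) * 6                     # heights in metres
segments = rng.integers(0, 50, size=(100, 100))       # fake segmentation
labels = rng.integers(0, 2, size=50)                  # 1 = greenhouse, 0 = other

X = object_features(bands, ndsm, np.arange(50), segments)
for clf in (KNeighborsClassifier(n_neighbors=3), SVC(kernel="rbf")):
    print(type(clf).__name__, clf.fit(X, labels).score(X, labels))
```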

    Enabling stream processing for people-centric IoT based on the fog computing paradigm

    The world of machine-to-machine (M2M) communication is gradually moving from vertical, single-purpose solutions to multi-purpose and collaborative applications interacting across industry verticals, organizations, and people: a world of the Internet of Things (IoT). The dominant approach for delivering IoT applications relies on cloud-based IoT platforms that collect all the data generated by the sensing elements and centrally process the information to create real business value. In this paper, we present a system that follows the Fog Computing paradigm, in which the sensor resources, as well as the intermediate layers between embedded devices and cloud computing datacenters, participate by providing computational, storage, and control resources. We discuss the design aspects of our system and present a pilot deployment for evaluating its performance in a real-world environment. Our findings indicate that Fog Computing can address the ever-increasing amount of data that is inherent in an IoT world through effective communication among all elements of the architecture.
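
    As a hedged sketch of the fog computing idea above (processing sensor streams near their source and forwarding only aggregates and alerts to the cloud instead of raw readings), the example below shows an edge node that windows a stream, computes a summary, and emits an upstream message only when a threshold is crossed. The window size, threshold, and message format are illustrative assumptions, not the paper's system.

```python
# Sketch of edge-side stream processing in a fog architecture: the edge node
# reduces the raw stream to windowed summaries and alert messages, so the
# cloud only receives aggregated data. Not the paper's implementation.
from statistics import mean
from typing import Iterable, Iterator

def edge_node(readings: Iterable[float], window: int = 10,
              alert_threshold: float = 30.0) -> Iterator[dict]:
    """Consume raw sensor readings, yield compact messages for the cloud."""
    buffer = []
    for value in readings:
        buffer.append(value)
        if len(buffer) == window:
            summary = {"kind": "summary", "mean": mean(buffer),
                       "max": max(buffer), "n": window}
            yield summary
            if summary["max"] > alert_threshold:
                yield {"kind": "alert", "value": summary["max"]}
            buffer.clear()

if __name__ == "__main__":
    # Fake temperature stream with one spike that should trigger an alert.
    stream = [21.0] * 25 + [35.0] + [21.5] * 14
    for message in edge_node(stream):
        print(message)   # the cloud sees a few messages instead of 40 readings
```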
    • โ€ฆ
    corecore