
    A Robust Structured Tracker Using Local Deep Features

    Deep features extracted from convolutional neural networks have recently been utilized in visual tracking to obtain a generic and semantic representation of target candidates. In this paper, we propose a robust structured tracker using local deep features (STLDF). This tracker exploits the deep features of local patches inside target candidates and sparsely represents them by a set of templates within the particle filter framework. The proposed STLDF utilizes a new optimization model, which employs a group-sparsity regularization term to exploit the local and spatial information of the target candidates and to capture the spatial layout structure among them. To solve the optimization model, we propose an efficient and fast numerical algorithm that consists of two subproblems with closed-form solutions. Evaluations in terms of success and precision on benchmarks of challenging image sequences (e.g., OTB50 and OTB100) demonstrate the superior performance of STLDF against several state-of-the-art trackers.
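
    The abstract does not give the exact formulation, so the following is only a minimal sketch of what a group-sparsity-regularized local-patch representation of this kind typically looks like; the symbols (Y for the stacked deep features of a candidate's local patches, D for the template dictionary, X for the coefficient matrix, and the weight lambda) are illustrative assumptions rather than the paper's notation:

        \min_{X} \; \tfrac{1}{2}\,\lVert Y - D X \rVert_F^2 \;+\; \lambda \,\lVert X \rVert_{2,1},
        \qquad \text{where} \quad \lVert X \rVert_{2,1} = \sum_{i} \Big( \sum_{j} X_{ij}^2 \Big)^{1/2}.

    The mixed l2,1 norm couples the local patches so that they select templates jointly, which is how a term of this form can encode the spatial layout structure mentioned above; such problems are commonly split into two subproblems (for example via alternating minimization or ADMM), each admitting a closed-form update.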

    Requirement analysis for building practical accident warning systems based on vehicular ad-hoc networks

    An Accident Warning System (AWS) is a safety application that provides collision avoidance notifications for next-generation vehicles, whilst Vehicular Ad-hoc Networks (VANETs) provide the communication functionality to exchange these notifications. Despite much previous research, there is little agreement on the requirements for accident warning systems. In order to build a practical warning system, it is important to ascertain the system requirements, the information to be exchanged, and the protocols needed for communication between vehicles. This paper presents a practical model of an accident warning system by stipulating the requirements in a realistic manner and thoroughly reviewing previous proposals with a view to identifying gaps in this area.
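
    The abstract does not fix a message format; as a purely illustrative sketch of the kind of information an AWS beacon might exchange over a VANET, the field names below are hypothetical and not taken from any standard or from the paper:

        from dataclasses import dataclass

        @dataclass
        class WarningMessage:
            # All field names are hypothetical; they only illustrate typical AWS content.
            vehicle_id: str        # pseudonymous sender identifier
            timestamp_ms: int      # generation time, milliseconds since epoch
            latitude: float        # GNSS position of the sender
            longitude: float
            speed_mps: float       # current speed in metres per second
            heading_deg: float     # travel direction, degrees from north
            hazard_type: str       # e.g. "hard_braking", "collision", "obstacle"
            ttl_hops: int = 3      # how many times the warning may be re-broadcast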

    Impact of a Non-Traditional Research Approach

    Construction Management research has not been successful in changing the practices of the construction industry. The system of grants and peer-reviewed papers that academics rely on for promotion does not align with academic researchers becoming experts who can bring change to industry practices. Poor construction industry performance has been documented for the past 25 years in the international construction management field; however, after 25 years and billions of dollars of research investment, the solution remains elusive. Very few researchers form a hypothesis, run cycles of research tests in the industry, and change industry practices as a result. The most impactful research identified in this thesis has led to the conclusions that pre-planning is critical, that hiring contractors who have expertise results in better performance, and that risk is mitigated when supply chain partners work together and expertise is utilized at the beginning of projects. The problems with construction non-performance have persisted, and legal contract issues have become more important. Traditional research approaches have not identified the severity or the source of construction non-performance, and the problem seems to be as complex as ever. Construction industry practices and the academic research community remain in silos. This research proposes that the problem may lie in the traditional construction management research structure and methodology. The research has identified a unique non-traditional research program that has documented over 1700 industry tests, which have resulted in a decrease in client management by up to 79%, contractors adding value by up to 38%, increased customer satisfaction by up to 140%, reduced change order rates as low as -0.6%, and decreased cost of services by up to 31%. The purpose of this thesis is to document the performance of the non-traditional research program around the above results. Documenting such an effort will shed more light on what is required for a sustainable, industry-impacting, and academic-expert-based research program.

    Energy Disaggregation Using Elastic Matching Algorithms

    In this article, an energy disaggregation architecture using elastic matching algorithms is presented. The architecture uses a database of reference energy consumption signatures and compares them with incoming energy consumption frames using template matching. In contrast to machine learning-based approaches, which require a significant amount of data to train a model, elastic matching-based approaches have no model training process and perform recognition using template matching. Five different elastic matching algorithms were evaluated across different datasets, and the experimental results showed that the minimum variance matching algorithm outperforms all other evaluated matching algorithms, improving energy disaggregation accuracy by 2.7% over the baseline dynamic time warping algorithm.
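
    As an illustration of the baseline elastic matching step described above (a generic sketch, not the authors' implementation), a minimal dynamic time warping comparison between an incoming consumption frame and stored reference signatures could look like this; the names reference_db and classify are assumptions for the example:

        import numpy as np

        def dtw_distance(frame, signature):
            """Classic dynamic time warping cost between two 1-D consumption sequences."""
            n, m = len(frame), len(signature)
            cost = np.full((n + 1, m + 1), np.inf)
            cost[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    d = abs(frame[i - 1] - signature[j - 1])
                    cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                         cost[i, j - 1],      # deletion
                                         cost[i - 1, j - 1])  # match
            return cost[n, m]

        def classify(frame, reference_db):
            """Label an incoming frame with the appliance whose signature matches best."""
            return min(reference_db, key=lambda label: dtw_distance(frame, reference_db[label]))

    Here reference_db would map appliance labels to recorded signatures; the best-performing minimum variance matching reported in the article replaces this plain cost accumulation with a variance-based elastic measure.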

    Radiative Contour Mapping Using UAS Swarm

    This work concerns the simulation and design of small- and medium-scale unmanned aerial systems (UAS) and their implementation for radiation measurement and contour mapping with onboard radiation sensors. Compact high-resolution CZT sensors were integrated into the UAS platforms as plug-and-play components using the Robot Operating System. Onboard data analysis provides time- and position-stamped intensities of gamma-ray peaks for each sensor, which are used as the input data for the swarm flight control algorithm. In this work, a UAS swarm is implemented for radiation measurement and contour mapping. A swarm of UAS has advantages over a single-agent approach in detecting radiation sources and effectively mapping the area. The proposed method can locate sources of radiation as well as map the contaminated area, enhancing situation awareness capabilities for first responders. The approach uses simultaneous radiation measurements by multiple UAS flying in a circular formation to find the steepest gradient of radiation, which determines a bulk heading angle for the swarm during contour mapping and can provide a relatively precise boundary of safety for potential human exploration.
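
    The abstract describes the heading computation only at a high level; the following is a minimal sketch, under the assumption that a local planar fit of the intensity field is adequate, of how a steepest-gradient direction could be estimated from simultaneous measurements taken on a circular formation (function and variable names are illustrative, not the authors' code):

        import numpy as np

        def gradient_direction(positions_xy, intensities):
            """Estimate the direction of steepest increase of the radiation field.

            positions_xy : (N, 2) array of agent positions in a local planar frame
            intensities  : (N,) array of simultaneous gamma-ray intensities
            Returns the angle (radians from the x-axis) of the local gradient.
            """
            # Fit I(x, y) ~ a*x + b*y + c by least squares; (a, b) is the local gradient.
            A = np.column_stack([positions_xy, np.ones(len(intensities))])
            (a, b, _c), *_ = np.linalg.lstsq(A, intensities, rcond=None)
            return np.arctan2(b, a)

        # Example: four UAS on a 10 m circle, with counts rising toward +x.
        pos = np.array([[10, 0], [0, 10], [-10, 0], [0, -10]], dtype=float)
        counts = np.array([120.0, 80.0, 40.0, 80.0])
        print(np.degrees(gradient_direction(pos, counts)))  # ~0 deg, toward the source

    A contour-following controller would then typically derive the swarm's bulk heading from this direction, for instance steering perpendicular to it to stay on an iso-intensity line.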

    Sensor Technologies for Intelligent Transportation Systems

    Modern society faces serious problems with transportation systems, including but not limited to traffic congestion, safety, and pollution. Information and communication technologies have gained increasing attention and importance in modern transportation systems. Automotive manufacturers are developing in-vehicle sensors and their applications in different areas, including safety, traffic management, and infotainment. Government institutions are implementing roadside infrastructure such as cameras and sensors to collect data about environmental and traffic conditions. By seamlessly integrating vehicles and sensing devices, their sensing and communication capabilities can be leveraged to achieve smart and intelligent transportation systems. We discuss how sensor technology can be integrated with the transportation infrastructure to achieve a sustainable Intelligent Transportation System (ITS) and how safety, traffic control, and infotainment applications can benefit from multiple sensors deployed in different elements of an ITS. Finally, we discuss some of the challenges that need to be addressed to enable a fully operational and cooperative ITS environment.

    Deep Data Locality on Apache Hadoop

    The amount of data being collected in areas such as social media, networks, scientific instruments, mobile devices, and sensors is growing continuously, and the technology to process it is advancing rapidly. One of the fundamental technologies for processing big data is Apache Hadoop, which has been adopted by many commercial products, such as InfoSphere by IBM or Spark by Cloudera. MapReduce on Hadoop has been widely used in many data science applications. As Hadoop is a dominant big data processing platform, the performance of MapReduce on a Hadoop system has a significant impact on big data processing capability across multiple industries. Most research on improving the speed of big data analysis has targeted Hadoop modules such as Hadoop Common, the Hadoop Distributed File System (HDFS), Hadoop Yet Another Resource Negotiator (YARN), and Hadoop MapReduce. In this research, we focus on data locality on HDFS to improve the performance of MapReduce. To reduce the amount of data transfer, MapReduce exploits data locality. However, even though the majority of the processing cost occurs in the later stages, data locality has been exploited only in the early stages, which we call Shallow Data Locality (SDL); as a result, the benefit of data locality has not been fully realized. We explore a new concept called Deep Data Locality (DDL), in which the data are pre-arranged to maximize locality in the later stages. Specifically, we introduce two implementation methods of DDL: block-based DDL and key-based DDL. In block-based DDL, the data blocks are pre-arranged to reduce block copying time in two ways. First, Rack-Local Map (RLM) blocks are eliminated: under the conventional default block placement policy (DBPP), data blocks are randomly placed on any available slave nodes, requiring copies of RLM blocks, whereas block-based DDL places blocks so as to avoid RLMs and thereby reduces block copy time. Second, block-based DDL concentrates the blocks in a smaller number of nodes and reduces the data transfer time among them. We analyzed the block distribution with customer review data from TripAdvisor and measured performance with the TeraSort benchmark. Our test results show that the execution times of Map and Shuffle improved by up to 25% and 31%, respectively. In key-based DDL, the input data are divided into several blocks and stored in HDFS before going into the Map stage. In comparison with conventional blocks, which contain random keys, our blocks each have a unique key. This requires a pre-sorting of the key-value pairs, which can be done during the ETL process. This eliminates some data movement in the map, shuffle, and reduce stages and thereby improves performance. In our experiments, MapReduce with key-based DDL performed 21.9% faster than default MapReduce and 13.3% faster than MapReduce with block-based DDL. Additionally, key-based DDL can be combined with other methods to further improve performance: when key-based DDL and block-based DDL are combined, Hadoop performance improved by 34.4%. In this research, we also developed MapReduce workflow models with a novel computational model and a numerical simulator that integrates these models; the simulator faithfully predicts Hadoop performance under various conditions.
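
    As a rough illustration of the key-based DDL idea, namely pre-sorting key-value records during ETL so that each HDFS block carries a single key rather than a random mix, the snippet below is a simplified sketch and not the authors' implementation; the function name and the record layout are assumptions for the example:

        from collections import defaultdict

        def presort_into_blocks(records):
            """Sketch of the ETL-time pre-sorting behind key-based DDL: key-value
            records are grouped so that each output block carries a single key,
            rather than the random mix of keys found in conventional HDFS blocks."""
            blocks = defaultdict(list)
            for key, value in records:
                blocks[key].append((key, value))
            # Each entry of 'blocks' would then be written to HDFS as its own block,
            # so the map, shuffle, and reduce stages move less data across nodes.
            return dict(blocks)

        # Illustrative usage with review records keyed by hotel id.
        reviews = [("hotel_17", 4.5), ("hotel_03", 3.0), ("hotel_17", 5.0)]
        for key, recs in presort_into_blocks(reviews).items():
            print(key, recs)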

    Developing a Parasocial Relationship with Hotel Brands on Facebook: Will Millennials Differ from GenXers?

    Facebook, particularly its brand pages, has become one of the most powerful tools for relationship building and customer engagement for hospitality companies. As social media marketing practices evolve in the hospitality industry, the industry has started to realize the importance of customer participation behaviors based on relationship quality rather than the quantity of interactions, as well as the rising significance of the Millennial generation. To respond to this trend, this study pursues an empirical investigation of the antecedents of the consumer-hotel brand relationship on Facebook and the potential differences between Millennials and non-Millennials, particularly GenXers. It also examines the potentially varying relational consequences for consumers' online participation behaviors and brand loyalty between these two groups. More specifically, this study positions Facebook as an innovative communication medium and applies the “parasocial relationship” framework from the mediated communication literature as an overarching theoretical guide. Five social media-related factors are included to explain the psychological mechanisms of a consumer's parasocial relationship with brands: utilitarian benefits, hedonic benefits, perceived self-disclosure, perceived interactivity, and perceived information overload. This study also investigates the effects of parasocial relationship on Facebook users' online participation behaviors with brands and their offline brand loyalty. The hypothesized model is tested with multi-group structural equation modelling (SEM). Practical and theoretical implications are also discussed.

    Deep learning of appearance affinity for multi-object tracking and re-identification: a comparative view

    Recognizing the identity of a query individual in a surveillance sequence is the core of Multi-Object Tracking (MOT) and Re-Identification (Re-Id) algorithms. Both tasks can be addressed by measuring the appearance affinity between observations of people with a deep neural model. Nevertheless, the differences in their specifications and, consequently, in the characteristics and constraints of the available training data for each of these tasks give rise to the necessity of employing different learning approaches for each of them. This article offers a comparative view of the Double-Margin-Contrastive and the Triplet loss functions and analyzes the benefits and drawbacks of applying each of them to learn an appearance affinity model for tracking and re-identification. A batch of experiments has been conducted, and the results support the hypothesis drawn from the presented study: the Triplet loss function is more effective than the Contrastive one when a Re-Id model is learnt, and, conversely, in the MOT domain, the Contrastive loss can better discriminate between pairs of images rendering the same person or not. This research was funded by the Spanish Government through the CICYT projects (TRA2016-78886-C3-1-R and RTI2018-096036-B-C21), Universidad Carlos III de Madrid through PEAVAUTO-CM-UC3M, the Comunidad de Madrid through SEGVAUTO-4.0-CM (P2018/EMT-4362), and the Ministerio de Educación, Cultura y Deporte para la Formación de Profesorado Universitario (FPU14/02143).
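
    To make the two objectives being compared concrete, here is a minimal PyTorch-style sketch of a double-margin contrastive loss and a triplet loss over embedding distances; the margin values and function names are illustrative assumptions, not the paper's exact settings:

        import torch
        import torch.nn.functional as F

        def double_margin_contrastive(emb_a, emb_b, same, pos_margin=0.5, neg_margin=1.5):
            """Pull matching pairs inside pos_margin, push non-matching pairs beyond neg_margin."""
            d = F.pairwise_distance(emb_a, emb_b)
            pos = same * torch.clamp(d - pos_margin, min=0.0) ** 2
            neg = (1 - same) * torch.clamp(neg_margin - d, min=0.0) ** 2
            return (pos + neg).mean()

        def triplet_loss(anchor, positive, negative, margin=0.3):
            """Require the anchor-positive distance to beat the anchor-negative one by a margin."""
            d_pos = F.pairwise_distance(anchor, positive)
            d_neg = F.pairwise_distance(anchor, negative)
            return torch.clamp(d_pos - d_neg + margin, min=0.0).mean()

    In this sketch, same is a 0/1 tensor marking whether each pair shows the same person; the contrastive form supervises pairs directly, while the triplet form only constrains relative distances, which matches the trade-off the article analyzes between the two tasks.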