15 research outputs found

    Compressive Sensing for Target Detection and Tracking within Wireless Visual Sensor Networks-based Surveillance applications

    Get PDF
    Wireless Visual Sensor Networks (WVSNs) have gained significant importance in the last few years and have emerged in several distinctive applications. The main aim of this research is to investigate the use of adaptive Compressive Sensing (CS) for efficient target detection and tracking in WVSN-based surveillance applications. CS is expected to overcome WVSN resource constraints such as limited memory, communication bandwidth and battery capacity. In addition, adaptive CS dynamically chooses variable compression rates according to different data sets to represent captured images efficiently, hence saving energy and memory space. In this work, a literature review on compressive sensing, target detection and tracking for WVSNs is carried out to investigate existing techniques. Only single-view target tracking is considered, keeping the minimum number of visual sensor nodes in a wake-up state to optimize node usage and save battery life, which is limited in WVSNs. To reduce the size of captured images, an adaptive block CS technique is proposed and implemented to compress the high-volume image data before transmission through the wireless channel. The proposed technique divides the image into blocks and adaptively chooses the compression rate for the blocks containing the target according to the sparsity of the images. At the receiver side, the compressed image is reconstructed, and target detection and tracking are performed to investigate the effect of CS on the tracking performance. A least mean square (LMS) adaptive filter is used to predict the target's next location; an iterative quantized clipped LMS technique is proposed and compared with other variants of LMS, and results have shown that it achieves lower error rates than the other variants of LMS. The tracking is performed in both indoor and outdoor environments for single/multiple targets.
Results have shown that with adaptive block compressive sensing only up to 31% of the data measurements need to be transmitted for less sparse images and 15% for more sparse images, while preserving 33 dB image quality and the required detection and tracking performance. Adaptive CS resulted in 82% energy saving compared to transmitting the required image with no CS.
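The block-wise measurement scheme described above can be sketched as follows. The 15% and 31% measurement rates come from the abstract; the block size, activity threshold, sparsity proxy, and Gaussian sensing matrix are illustrative assumptions, not the author's implementation, and a real receiver would reconstruct each block with an l1 solver.

```python
import numpy as np

rng = np.random.default_rng(0)

def block_cs_measure(image, block=8, m_low=0.15, m_high=0.31):
    """Split an image into blocks and take fewer random measurements
    for sparser (low-activity) blocks. Assumes image dimensions are
    divisible by the block size. Illustrative sketch only."""
    h, w = image.shape
    measurements = []
    for i in range(0, h, block):
        for j in range(0, w, block):
            x = image[i:i+block, j:j+block].reshape(-1)
            # crude sparsity proxy: fraction of pixels far from the block mean
            activity = np.mean(np.abs(x - x.mean()) > 10)
            rate = m_high if activity > 0.5 else m_low
            m = max(1, int(rate * x.size))
            Phi = rng.standard_normal((m, x.size))  # random sensing matrix
            measurements.append(Phi @ x)
    return measurements

img = rng.integers(0, 256, size=(32, 32)).astype(float)
ys = block_cs_measure(img)
print(len(ys))  # 16 blocks for a 32x32 image with 8x8 blocks
```

Each block thus transmits between 9 and 19 of its 64 pixel values' worth of measurements, matching the 15%–31% range reported above.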

    Image Sensor Nonuniformity Correction by a Scene-Based Maximum Likelihood Approach

    Get PDF
    Image sensors come with a spatial inhomogeneity, known as fixed pattern noise or image sensor nonuniformity, which degrades image quality. These nonuniformities are regarded as systematic errors of the image sensor; however, they change with sensor temperature and over time, which makes laboratory calibrations unsatisfactory. Scene-based nonuniformity correction methods are therefore necessary to correct for these sensor errors. In this thesis, a new maximum likelihood estimation method is developed that estimates a sensor's nonuniformities from a given set of input images. The method follows a rigorous mathematical derivation that exploits the available sensor statistics and uses only well-motivated assumptions. While previous methods need to optimize a free parameter, the new method's parameters are defined by the statistics of the input data. Furthermore, the new method reaches better performance than the previous methods. Specialized developments that include a row- or column-wise and a combined estimation of the nonuniformity parameters are introduced as well and are of relevance for typical industrial applications. Finally, it is shown that the previous methods can be regarded as simplifications of the newly developed method. This deliberation gives a new view onto the problem of scene-based nonuniformity estimation and allows selecting the best method for a given application.
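As context for the scene-based setting, the sketch below implements the classic constant-statistics baseline, which estimates a per-pixel gain and offset by assuming every pixel sees the same scene statistics over time. This is a well-known simpler method, not the maximum-likelihood estimator developed in the thesis; the synthetic data and noise levels are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def constant_statistics_nuc(frames):
    """Estimate per-pixel gain and offset from a frame stack under the
    constant-statistics assumption. Baseline method, not the thesis's
    maximum-likelihood estimator."""
    mean = frames.mean(axis=0)           # per-pixel temporal mean
    std = frames.std(axis=0)             # per-pixel temporal std
    gain = std / std.mean()              # relative gain estimate
    offset = mean - gain * mean.mean()   # offset after gain removal
    return gain, offset

def correct(frame, gain, offset):
    return (frame - offset) / gain

# synthetic data: true scene frames distorted by a fixed gain/offset pattern
scene = rng.uniform(50, 200, size=(500, 16, 16))
true_gain = 1 + 0.1 * rng.standard_normal((16, 16))
true_offset = 5 * rng.standard_normal((16, 16))
observed = true_gain * scene + true_offset

gain, offset = constant_statistics_nuc(observed)
corrected = correct(observed[0], gain, offset)
```

On this synthetic stack the corrected frame is substantially closer to the true scene than the raw observation, illustrating the kind of residual-nonuniformity reduction that scene-based methods are evaluated on.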

    Mobile and Wireless Communications

    Get PDF
    Mobile and wireless communications have been one of the major revolutions of the late twentieth century. We are witnessing very fast growth in these technologies, and mobile and wireless communications have become so ubiquitous in our society as to be indispensable for our daily lives. The relentless demand for higher data rates with better quality of service to support state-of-the-art applications has revolutionized the wireless communication field and led to the emergence of new technologies such as Bluetooth, WiFi, WiMAX, ultra-wideband and OFDMA. Moreover, the market trend confirms that this revolution will not stop in the foreseeable future. Mobile and wireless communications applications cover diverse areas including entertainment, industrial, biomedical and medical uses, and safety and security, which are definitely improving our daily life. Wireless communication networking is a multidisciplinary field addressing different aspects ranging from theoretical analysis and system architecture design to hardware and software implementations. While new applications require higher data rates, better quality of service and prolonged mobile battery life, new developments, advanced research studies, and system and circuit designs are necessary to keep pace with market requirements. This book covers the most advanced research and development topics in mobile and wireless communication networks. It is divided into two parts with a total of thirty-four stand-alone chapters covering various areas of wireless communications, including: physical layer and network layer, access methods and scheduling, techniques and technologies, antenna and amplifier design, integrated circuit design, and applications and systems.
These chapters present novel, cutting-edge results and developments related to wireless communication, offering readers the opportunity to enrich their knowledge of specific topics as well as to explore the whole field of rapidly emerging mobile and wireless networks. We hope that this book will be useful for students, researchers and practitioners in their research studies.

    Interaction and Mechanics: Understanding Course-work Engagement in Large Science Lectures

    Full text link
    Post-secondary institutions have developed several interventions to address what Chambliss (2014) calls the arithmetic of classroom engagement. Large lecture courses limit the potential for student/instructor interaction; instead, they have historically relied on an industrialized model of information delivery. Very little is known about how students develop their strategies for completing their course-work in this context. The aim of this study was to outline a conceptual framework describing how undergraduates become engaged in their course-work in large science lecture courses. Course-work engagement refers to the set of practices that are part of students' efforts to successfully complete a course. It is goal-oriented behavior, shaped by the beliefs that the individual holds about themselves and the course. In the framework, I propose that students' initial belief states catalyze their behavioral engagement in the course, which is then conditioned through feedback from working with peers, from performance assessments, and through interactions with the instructor. This study was conducted in a large (n=551) undergraduate introductory physics course. The course was composed of three lecture sections, each taught by a different instructor. Based on a review of the literature, I posed the following research questions: 1. What are the relationships among students' peer interactions, their digital instructional technology use, and their performance on assessments in a physics lecture course? 2. How does the instructional system shape students' engagement in peer interactions and their use of digital instructional technologies in a course? In this study, I employed three methods of data collection. First, I observed instruction in all three sections throughout the semester to characterize similarities and differences among the three lecture sections.
Second, I administered two surveys to collect information about students' goals for the course, their expectations for success, their beliefs about the social and academic community in the course, and the names of peers in the course with whom the student collaborated in out-of-class study groups. Surveys were administered before the first and final exams in the course. Third, I used learning analytics data from a practice problem website to characterize students' usage of the tool for study preparation before and after the first exam. Through stochastic actor-based modeling, I identified three salient factors in students' likelihood of participating in out-of-class study groups. First, being underrepresented in the course may have shaped students' opportunities to participate in out-of-class study groups. Women and international students both attempted to participate at higher rates than men and domestic students, respectively. However, women and international students were unlikely to have their relationships reciprocated over the semester. Second, when study tools are incorporated into out-of-class study groups, social influence appears to play a significant role in the formation of course-work engagement. For example, students who were non-users of the practice problem website tended to adopt the usage behavior of their higher-intensity peers. Third, changes in students' beliefs about the course were significantly related to changes in their course grade. In terms of performance, students who experienced changes to their course beliefs, or who attempted to form new out-of-class study groups in the lead-up to the third exam, were likely to experience academic difficulty. This study highlights the important role of time and the dynamic role of social interaction in the development of course-work engagement in large science lecture courses.
PhD, Higher Education, University of Michigan, Horace H. Rackham School of Graduate Studies
https://deepblue.lib.umich.edu/bitstream/2027.42/138776/1/mbrowng_1.pd

    Proceedings of the Third International Mobile Satellite Conference (IMSC 1993)

    Get PDF
    Satellite-based mobile communications systems provide voice and data communications to users over a vast geographic area. The users may communicate via mobile or hand-held terminals, which may also provide access to terrestrial cellular communications services. While the first and second International Mobile Satellite Conferences (IMSC) mostly concentrated on technical advances, this third IMSC also focuses on the increasing worldwide commercial activities in Mobile Satellite Services. Because of the large service areas provided by such systems, it is important to consider political and regulatory issues in addition to technical and user-requirement issues. Topics covered include: the direct broadcast of audio programming from satellites; spacecraft technology; regulatory and policy considerations; advanced system concepts and analysis; propagation; and user requirements and applications.

    Integrated structural analysis using isogeometric finite element methods

    Get PDF
    The gradual digitization of the architecture, engineering, and construction industry over the past fifty years has led to an extremely heterogeneous software environment, which today is embodied by the multitude of different digital tools and proprietary data formats used by the many specialists contributing to the design process in a construction project. While these projects become increasingly complex, the demands for financial efficiency and completion within a tight schedule grow at the same time. The digital collaboration of project partners has been identified as one key issue in successfully dealing with these challenges. Yet currently, the numerous software applications and their respective individual views on the design process severely impede that collaboration. An approach to establishing a unified basis for digital collaboration, regardless of the existing software heterogeneity, is a comprehensive digital building model contributed to by all project partners. This type of data management, known as building information modeling (BIM), has many benefits, yet its adoption is associated with many difficulties and thus proceeds only slowly. One aspect in the field of conflicting requirements on such a digital model is the cooperation of architects and structural engineers. Traditionally, these two disciplines use different abstractions of reality for their models, which in consequence lead to incompatible digital representations. The advent of isogeometric analysis (IGA) promised to ease the discrepancy between design and analysis model representations, yet that initial focus quickly shifted towards using these methods as a more powerful basis for numerical simulations. Furthermore, the isogeometric representation alone is not capable of solving the model abstraction problem.
It is thus the intention of this work to contribute to an improved digital collaboration of architects and engineers by exploring an integrated analysis approach on the basis of a unified digital model with solid geometry expressed by splines. In the course of this work, an analysis framework is developed that utilizes such models to automatically conduct numerical simulations commonly required in construction projects. In essence, this makes it possible to retrieve structural analysis results from BIM models in a fast and simple manner, thereby facilitating rapid design iterations and profound design feedback. The BIM implementation Industry Foundation Classes (IFC) is reviewed with regard to its capability of representing the unified model. The current IFC schema strongly supports the use of redundant model data, a major pitfall in digital collaboration. Additionally, it does not allow the geometry to be described by volumetric splines. As the pursued approach builds upon a unique model for both architectural and structural design, and furthermore requires solid geometry, necessary schema modifications are suggested. Structural entities are modeled by volumetric NURBS patches, each of which constitutes an individual subdomain that, with regard to the analysis, is incompatible with the remaining full model. The resulting consequences for numerical simulation are elaborated in this work. The individual subdomains have to be weakly coupled, for which the mortar method is used. Different approaches to discretizing the interface traction fields are implemented, and their respective impact on the analysis results is evaluated. All necessary coupling conditions are automatically derived from the related geometry model. The weak coupling procedure leads to a linear system of equations in saddle point form which, owing to the volumetric modeling, is large in size and whose coefficient matrix, due to the use of higher-degree basis functions, has a high bandwidth.
The peculiarities of the system require adapted solution methods that generally incur higher numerical costs than the standard procedures for symmetric positive-definite systems. Different methods to solve the specific system are investigated, and an efficient parallel algorithm is finally proposed. When the structural analysis model is derived from the unified model in the BIM data, it generally does not initially meet the discretization requirements necessary to obtain sufficiently accurate analysis results. The consequently necessary patch refinements must be controlled automatically to allow for an entirely automatic analysis procedure. For that purpose, an empirical refinement scheme based on the geometrical and possibly mechanical properties of the specific entities is proposed. The level of refinement may be selectively adjusted by the structural engineer in charge. Furthermore, a Zienkiewicz-Zhu type error estimator is adapted for use with isogeometric analysis results. It is shown that this estimator can also be used to steer an adaptive refinement procedure.
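The saddle-point structure that arises from the mortar coupling can be illustrated on a toy system. The Schur-complement elimination below is a standard dense sketch of how such block systems are solved, not the parallel algorithm developed in the thesis; the block sizes and random data are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy saddle-point system of the mortar-coupled form
#   [ K  B^T ] [u]   [f]
#   [ B  0   ] [p] = [g]
# with K symmetric positive definite (stiffness block) and B the
# full-rank constraint (coupling) block. Sizes are illustrative.
n, m = 20, 4
A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)          # SPD stiffness block
B = rng.standard_normal((m, n))      # coupling block
f = rng.standard_normal(n)
g = rng.standard_normal(m)

# Schur-complement solve: eliminate u, solve for the multipliers p first.
Kinv_f = np.linalg.solve(K, f)
Kinv_Bt = np.linalg.solve(K, B.T)
S = B @ Kinv_Bt                      # Schur complement S = B K^{-1} B^T
p = np.linalg.solve(S, B @ Kinv_f - g)
u = Kinv_f - Kinv_Bt @ p

# Both block equations are satisfied: prints True True
print(np.allclose(K @ u + B.T @ p, f), np.allclose(B @ u, g))
```

The indefiniteness of the full block matrix is what rules out plain Cholesky-based solvers and motivates the adapted, more expensive methods discussed above.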

    Move Acceptance in Local Search Metaheuristics for Cross-domain Heuristic Search

    Get PDF
    Many real-world combinatorial optimisation problems (COPs) are computationally hard, and search methods are frequently preferred as solution techniques. Traditionally, an expert with domain knowledge designs and tailors the search method for solving a particular COP. This process is usually expensive, requiring a lot of effort and time, and often results in problem-specific algorithms that cannot be applied to another COP. The domain expert then either needs to design a new search method or reconfigure an existing one to solve that COP. This has prompted interest in developing more general, problem-domain-independent high-level search methods that can be re-used and are capable of solving not just a single problem but multiple COPs. The cross-domain search problem is a relatively new concept and represents a high-level issue that involves designing a single solution method for solving a multitude of COPs, preferably with little or no expert intervention. Cross-domain search methods are algorithms designed to tackle the cross-domain search problem. Such methods are of interest to researchers and practitioners worldwide as they offer a single, off-the-shelf, go-to approach to problem solving. Furthermore, if a cross-domain search method has good performance, then it can be expected to solve `any' given COP well and in a reasonable time frame. A practitioner tasked with solving a new or unknown COP faces a decision-making dilemma: which algorithm to use, which parameters to use for that algorithm, and whether any other algorithm could outperform it. A well-designed cross-domain search method that performs well and does not require re-tuning can resolve this dilemma, allowing practitioners to find good-enough solutions to such problems.
Researchers, on the other hand, strive to find high-quality solutions to these problems; however, such a cross-domain search method provides them with a good benchmark against which they can compare their solution methods, and which they should ultimately aim to outperform. In this work, move acceptance methods, which are a component of traditional search methods such as metaheuristics and hyper-heuristics, are explored under a cross-domain search framework. A survey of existing move acceptance methods as part of local search metaheuristics is conducted based on the hyper-heuristic literature on solution methods to the cross-domain search problem. Furthermore, a taxonomy is provided for classifying them based on their design characteristics. The cross-domain performance of existing move acceptance methods, covering the taxonomy, is compared across a total of 45 problem instances spanning 9 problem domains, and the effects of parameter tuning versus the choice of move acceptance method are explored. A novel move acceptance method (HAMSTA) is proposed to overcome the shortcomings of existing methods and improve the cross-domain performance of a local search metaheuristic. HAMSTA is capable of outperforming the cross-domain performance of existing methods that are re-tuned for each domain, despite itself using only a single cross-domain parameter configuration derived from tuning experiments considering 2 instances each from 4 domains; hence, HAMSTA requires no expert intervention to be re-configured to perform well across multiple COPs, with 37 problem instances unseen by HAMSTA, 25 of which are from unseen domains. HAMSTA is therefore shown to have the potential to resolve the aforementioned decision-making dilemma.
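To make the role of a move acceptance method concrete, the sketch below plugs two textbook criteria, accept-only-improving and a Metropolis (simulated-annealing style) rule, into a generic local search. This illustrates the interchangeable component under study, not the proposed HAMSTA method; the toy objective, neighborhood, and parameter values are assumptions.

```python
import math
import random

def accept_improving(delta, _t):
    """Accept only non-worsening moves."""
    return delta <= 0

def accept_metropolis(delta, t):
    """Accept worsening moves with probability exp(-delta/t)."""
    return delta <= 0 or random.random() < math.exp(-delta / t)

def local_search(objective, neighbor, x0, accept, iters=2000, t=1.0):
    """Minimise `objective` with a pluggable move acceptance criterion;
    the acceptance rule is the only component that changes when
    comparing methods."""
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    for _ in range(iters):
        y = neighbor(x)
        fy = objective(y)
        if accept(fy - fx, t):          # delegate the accept/reject decision
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
    return best, fbest

# toy 1-D problem: minimise a multimodal function over the integers
random.seed(0)
f = lambda x: (x - 7) ** 2 + 5 * math.sin(x)
step = lambda x: x + random.choice([-1, 1])
best, fbest = local_search(f, step, x0=50, accept=accept_metropolis)
```

Swapping `accept_metropolis` for `accept_improving` changes only the acceptance criterion, which is exactly the axis along which the methods surveyed above are compared.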

    Alzheimer's Dementia Recognition Through Spontaneous Speech

    Get PDF

    Bioinspired metaheuristic algorithms for global optimization

    Get PDF
    This paper presents a concise comparison study of newly developed bioinspired algorithms for global optimization problems. Three different metaheuristic techniques, namely Accelerated Particle Swarm Optimization (APSO), the Firefly Algorithm (FA), and the Grey Wolf Optimizer (GWO), are investigated and implemented in the Matlab environment. These methods are compared on four unimodal and multimodal nonlinear functions in order to find global optimum values. Computational results indicate that GWO outperforms the other intelligent techniques, and that all the aforementioned algorithms can be successfully used for the optimization of continuous functions.
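A minimal sketch of one of the compared methods, the Grey Wolf Optimizer, on the sphere test function is shown below (in Python rather than the paper's Matlab). It follows the standard GWO update driven by the three best wolves; the population size, iteration count, and bounds are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(3)

def gwo(f, dim, n_wolves=20, iters=200, lb=-5.0, ub=5.0):
    """Minimal Grey Wolf Optimizer for continuous minimisation: wolves
    are pulled toward the three current best solutions (alpha, beta,
    delta) with an exploration factor `a` shrinking linearly to zero."""
    X = rng.uniform(lb, ub, size=(n_wolves, dim))
    for t in range(iters):
        fitness = np.array([f(x) for x in X])
        leaders = X[np.argsort(fitness)[:3]]   # alpha, beta, delta
        a = 2 * (1 - t / iters)                # decreases linearly 2 -> 0
        new_X = np.zeros_like(X)
        for leader in leaders:
            r1, r2 = rng.random(X.shape), rng.random(X.shape)
            A = 2 * a * r1 - a
            C = 2 * r2
            D = np.abs(C * leader - X)         # encircling distance
            new_X += leader - A * D            # candidate pulled by this leader
        X = np.clip(new_X / 3, lb, ub)         # average of the three pulls
    fitness = np.array([f(x) for x in X])
    return X[fitness.argmin()], fitness.min()

sphere = lambda x: float(np.sum(x ** 2))       # unimodal test function
best, fbest = gwo(sphere, dim=5)
```

On this unimodal function the pack contracts onto the optimum as `a` shrinks, which is the behavior underlying GWO's strong results in the comparison above.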