
    Multisensor navigation systems: a remedy for GNSS vulnerabilities?

    Space-based positioning, navigation, and timing (PNT) technologies, such as global navigation satellite systems (GNSS), provide position, velocity, and timing information to an unlimited number of users around the world. In recent years, PNT information has become increasingly critical to the security, safety, and prosperity of the world's population, and is now widely recognized as an essential element of the global information infrastructure. Due to its vulnerabilities and line-of-sight requirements, GNSS alone is unable to provide PNT with the required levels of integrity, accuracy, continuity, and reliability. A multisensor navigation approach offers effective augmentation in GNSS-challenged environments and holds the promise of delivering robust and resilient PNT. Traditionally, sensors such as inertial measurement units (IMUs), barometers, magnetometers, odometers, and digital compasses have been used. However, recent trends have largely focused on image-based, terrain-based, and collaborative navigation to recover the user location. This paper reviews the technological advances that have taken place in PNT over the last two decades and discusses various hybridizations of multisensor systems, building upon the fundamental GNSS/IMU integration. The most important conclusion of this study is that, in order to meet the challenging goals of delivering continuous, accurate, and robust PNT to the ever-growing numbers of users, the hybridization of a suite of different PNT solutions is required.
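The fundamental GNSS/IMU integration that this review builds upon can be illustrated with a toy one-dimensional Kalman filter in which IMU acceleration drives the prediction step and GNSS position fixes, when available, correct it. This is a minimal sketch for illustration only, not an implementation from the paper; the function name and noise parameters are assumptions.

```python
import numpy as np

def fuse_gnss_imu(gnss_pos, imu_accel, dt, sigma_gnss=3.0, sigma_accel=0.5):
    """Toy 1D Kalman filter fusing GNSS position fixes with IMU acceleration,
    as a sketch of loosely coupled GNSS/IMU integration."""
    # State: [position, velocity]; the IMU acceleration drives the prediction.
    x = np.zeros(2)
    P = np.eye(2) * 10.0
    F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition
    B = np.array([0.5 * dt**2, dt])            # acceleration input
    Q = np.eye(2) * (sigma_accel * dt) ** 2    # process noise (assumed)
    H = np.array([[1.0, 0.0]])                 # GNSS observes position only
    R = np.array([[sigma_gnss**2]])            # GNSS measurement noise
    track = []
    for z, a in zip(gnss_pos, imu_accel):
        # Predict with the IMU, then correct with the GNSS fix (if any).
        x = F @ x + B * a
        P = F @ P @ F.T + Q
        if z is not None:                      # None models GNSS denial
            y = z - H @ x                      # innovation
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
            x = x + (K @ y).ravel()
            P = (np.eye(2) - K @ H) @ P
        track.append(x[0])
    return track
```

Passing `None` in place of a GNSS fix models a GNSS-denied stretch: the filter then coasts on the IMU alone, which is exactly where the drift that motivates multisensor hybridization accumulates.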

    Pushing the limits of inertial motion sensing


    Discovering user mobility and activity in smart lighting environments

    "Smart lighting" environments seek to improve energy efficiency, human productivity, and health by combining sensors, controls, and Internet-enabled lights with emerging “Internet-of-Things” technology. Interesting and potentially impactful applications involve adaptive lighting that responds to individual occupants' location, mobility, and activity. In this dissertation, we focus on the recognition of user mobility and activity using several sensing modalities and analytical techniques. The dissertation encompasses prior work using body-worn inertial sensors in one study, followed by smart-lighting-inspired infrastructure sensors deployed with the lights. The first approach employs wearable inertial sensors and body area networks that monitor human activities with a user's smart devices. Real-time algorithms are developed to (1) estimate angles of excess forward lean to prevent the risk of falls, (2) identify functional activities, including postures, locomotion, and transitions, and (3) capture gait parameters. Two human activity datasets are collected from 10 healthy young adults and 297 elderly subjects, respectively, for laboratory validation and real-world evaluation. Results show that these algorithms can identify all functional activities accurately, with a sensitivity of 98.96% on the 10-subject dataset, and can detect walking activities and gait parameters consistently, with high test-retest reliability (p-value < 0.001), on the 297-subject dataset. The second approach leverages pervasive "smart lighting" infrastructure to track human location and predict activities. A use-case-oriented design methodology is adopted to guide the design of sensor operation parameters for localization performance metrics from a system perspective. Integrating a network of low-resolution time-of-flight sensors in ceiling fixtures, a recursive 3D location estimation formulation is established that links a physical indoor space to an analytical simulation framework.
Based on indoor location information, a label-free, clustering-based method is developed to learn user behaviors and activity patterns. Location datasets are collected while users perform unconstrained and uninstructed activities in the smart lighting testbed under different layout configurations. Results show that the activity recognition performance, measured in terms of correct classification rate (CCR), ranges from approximately 90% to 100% across a wide range of spatio-temporal resolutions on these location datasets, and is insensitive to the reconfiguration of the environment layout and the presence of multiple users.
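The label-free, clustering-based learning of activity patterns from location data described above can be sketched with a minimal k-means routine over 2D occupant positions. This is an illustrative stand-in under assumed names and parameters, not the dissertation's actual method or code.

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Minimal k-means over 2D location samples, standing in for the
    label-free clustering of occupant locations (illustrative only)."""
    rng = np.random.default_rng(seed)
    # Initialize centers from k distinct location samples.
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Assign each location sample to its nearest cluster center.
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned samples.
        for j in range(k):
            if (labels == j).any():
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers
```

Clusters of dwell locations can then be interpreted as activity zones (for instance, desk versus doorway) without any labeled training data.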

    Quantifying the ergonomic risk and biomechanical exposure in automotive assembly lines

    Master's thesis (Tese de Mestrado Integrado) in Biomedical Engineering and Biophysics (Medical Biophysics and Systems Physiology), 2021, Universidade de Lisboa, Faculdade de Ciências. Work-related musculoskeletal disorders (WRMSDs) represent 15% of the total number of life-years lost due to work-related injuries and illness. Among WRMSDs’ risk factors, work-related postures are the focus of this research.
Biomechanical exposure to hazardous postures negatively impacts workers’ health, enterprises’ economy, and society. To capture the recurrent practice of hazardous postures in the workplace, self-reported, observational, and directly measured ergonomic assessment methods have been established. However, only self-reported and observational approaches are applied on a frequent basis, even though direct measurement is a more compelling choice. The advent of the Internet of Things presents the opportunity of using wearables for ubiquitous data collection, increasing the amount of available data for a more personal and unbiased ergonomic evaluation. Accordingly, workplace ergonomics research has favored wearables for monitoring human motion. This dissertation develops an automatic approach to ergonomic evaluation in industrial contexts. Its main contributions are the development of (1) a motion capture routine using inertial sensors; (2) a computational framework to monitor human upper-body motion, in terms of joint angles, through inverse kinematics; and (3) computational implementations of established posture risk-factor specifications to quantify biomechanical exposure and the consequent ergonomic risk in occupational settings. Subsequently, these implementations were applied to provide insights into a case study from the Volkswagen Autoeuropa automotive assembly lines. The research was divided into two scenarios: validation and evaluation. Validation consisted of comparing data provided by a ground-truth inertial motion capture system with data computed by the developed methods. To this end, inertial sensor data collected in the laboratory (N = 8 participants) and on the automotive assembly lines (N = 9 participants) were used.
The evaluation consisted of quantifying the biomechanical exposure and consequent ergonomic risk for the case study, using angular estimates computed by the developed framework from the data collected with our system on the automotive assembly lines. The results revealed that the proposed framework has the potential to be applied to the monitoring of industrial tasks. The ergonomic evaluation is more comprehensive through direct measurement, uncovering differences in biomechanical exposure and consequent ergonomic risk among operators.
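The posture risk-factor implementations described above reduce, at their core, to quantifying how long and how far joint angles stray into hazardous ranges. Below is a minimal sketch assuming a stream of trunk-flexion angles from the inverse-kinematics framework; the 60-degree threshold and the output fields are illustrative assumptions, not the thesis's actual specifications.

```python
def exposure_summary(angles_deg, hazard_threshold=60.0, fs=100.0):
    """Summarize biomechanical exposure from a stream of joint-angle
    estimates sampled at fs Hz. The hazard threshold is an assumed,
    illustrative value, not one taken from the thesis."""
    n = len(angles_deg)
    # Samples where the posture exceeds the hazardous-angle threshold.
    hazardous = [a for a in angles_deg if a > hazard_threshold]
    return {
        "samples": n,
        "pct_time_hazardous": 100.0 * len(hazardous) / n,
        "seconds_hazardous": len(hazardous) / fs,
        "peak_angle_deg": max(angles_deg),
    }
```

Aggregating such summaries per operator and per task is what makes direct measurement reveal the between-operator differences mentioned above.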

    Sensing and Signal Processing in Smart Healthcare

    In the last decade, we have witnessed the rapid development of electronic technologies that are transforming our daily lives. Such technologies are often integrated with various sensors that facilitate the collection of human motion and physiological data and are equipped with wireless communication modules such as Bluetooth, radio frequency identification, and near-field communication. In smart healthcare applications, designing ergonomic and intuitive human–computer interfaces is crucial, because a system that is not easy to use creates a huge obstacle to adoption and may significantly reduce the efficacy of the solution. Signal and data processing is another important consideration, because it must ensure high accuracy with a high level of confidence in order for the applications to be useful to clinicians in making diagnosis and treatment decisions. This Special Issue is a collection of 10 articles selected from a total of 26 contributions. These contributions span the areas of signal processing and smart healthcare systems, mostly contributed by authors from Europe, including Italy, Spain, France, Portugal, Romania, Sweden, and the Netherlands. Authors from China, Korea, Taiwan, Indonesia, and Ecuador are also included.

    Event Detection in Eye-Tracking Data for Use in Applications with Dynamic Stimuli

    This doctoral thesis has signal processing of eye-tracking data as its main theme. An eye-tracker is a tool used to estimate the point where one is looking. Automatic algorithms for classifying different types of eye movements, so-called events, form the basis for relating eye-tracking data to cognitive processes during, e.g., reading a text or watching a movie. The problems with the algorithms available today are that few can handle detection of events during dynamic stimuli and that there is no standardized procedure for evaluating them. This thesis comprises an introduction and four papers describing methods for detecting the most common types of eye movements in eye-tracking data and strategies for evaluating such methods. The most common types of eye movements are fixations, saccades, and smooth pursuit movements. In addition to these, post-saccadic oscillations (PSO) are considered as an event. The eye-tracking data in this thesis are recorded using both high- and low-speed eye-trackers. The first paper presents a method for detection of saccades and PSO. The saccades are detected using the acceleration signal and three specialized criteria based on directional information. In order to detect PSO, the interval after each saccade is modeled, and the parameters of the model are used to determine whether PSO are present or not. The algorithm was evaluated by comparing its detection results to manual annotations and to the detection results of the most recent PSO detection algorithm. The results show that the algorithm is in good agreement with the annotations and performs better than the compared algorithm. In the second paper, a method for separating fixations and smooth pursuit movements is proposed.
In the intervals between the detected saccades/PSO, the algorithm uses different spatial scales of the position signal in order to separate the two types of eye movements. The algorithm is evaluated by computing five different performance measures, showing both general and detailed aspects of the discrimination performance. The performance of the algorithm is compared to that of a velocity- and dispersion-based algorithm (I-VDT), to that of an algorithm based on principal component analysis (I-PCA), and to manual annotations by two experts. The results show that the proposed algorithm performs considerably better than the compared algorithms. In the third paper, a method based on eye-tracking signals from both eyes is proposed for improved separation of fixations and smooth pursuit movements. The method utilizes directional clustering of the eye-tracking signals in combination with binary filters, taking both temporal and spatial aspects of the eye-tracking signal into account. The performance of the method is evaluated using a novel evaluation strategy based on automatically detected moving objects in the video stimuli. The results show that the use of binocular information for separating fixations and smooth pursuit movements is advantageous in static stimuli, without impairing the algorithm's ability to detect smooth pursuit movements in video and moving-dot stimuli. The first three papers in this thesis are based on eye-tracking signals recorded using a stationary eye-tracker, while the fourth paper uses eye-tracking signals recorded using a mobile eye-tracker. In mobile eye-tracking, the user is allowed to move the head and the body, which affects the recorded data. In the fourth paper, a method for compensating for head movements using an inertial measurement unit (IMU), combined with an event detector for lower-sampling-rate data, is proposed.
The event detection is performed by combining information from the eye-tracking signals with information about objects extracted from the scene video of the mobile eye-tracker. The results show that introducing head-movement compensation and information about detected objects in the scene video into the event detector improves classification. In summary, this thesis proposes an entire methodological framework for robust event detection that performs better than previous methods when analyzing eye-tracking signals recorded during dynamic stimuli, and also provides a methodology for the performance evaluation of event detection algorithms.
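As a point of reference for the event detectors discussed above, the classical velocity-threshold (I-VT) approach can be sketched in a few lines. This baseline is deliberately simpler than the thesis's algorithms, which use acceleration and directional criteria and handle smooth pursuit; the function name and the 30 deg/s threshold are assumptions.

```python
def detect_saccades(x, y, fs, vel_threshold=30.0):
    """Velocity-threshold (I-VT style) saccade detection on gaze
    coordinates x, y in degrees, sampled at fs Hz. Returns a list of
    (onset, offset) sample-index pairs. Illustrative baseline only."""
    saccades, start = [], None
    for i in range(1, len(x)):
        # Point-to-point angular velocity in degrees per second.
        v = ((x[i] - x[i-1])**2 + (y[i] - y[i-1])**2) ** 0.5 * fs
        if v > vel_threshold:
            if start is None:
                start = i - 1              # saccade onset
        elif start is not None:
            saccades.append((start, i))    # saccade offset
            start = None
    if start is not None:
        saccades.append((start, len(x) - 1))
    return saccades
```

A fixed velocity threshold is precisely what breaks down for smooth pursuit during dynamic stimuli, which motivates the spatial-scale and binocular methods of the second and third papers.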

    Airborne Navigation by Fusing Inertial and Camera Data

    Unmanned aircraft systems (UASs) are often used as measuring systems. Therefore, precise knowledge of their position and orientation is required. This thesis provides research into the conception and realization of a system which combines GPS-assisted inertial navigation systems with advances in the area of camera-based navigation. It is shown how these complementary approaches can be used in a joint framework. In contrast to widely used concepts utilizing only one of the two approaches, a more robust overall system is realized. The presented algorithms are based on the mathematical concepts of rigid body motions. After derivation of the underlying equations, the methods are evaluated in numerical studies and simulations. Based on the results, real-world systems are used to collect data, which are evaluated and discussed. Two approaches for the system calibration, which describes the offsets between the coordinate systems of the sensors, are proposed. The first approach integrates the parameters of the system calibration into the classical bundle adjustment. The optimization is presented very descriptively in a graph-based formulation. A high-precision INS and data from a measurement flight are required. In contrast to classical methods, a flexible flight course can be used and no cost-intensive ground control points are required. The second approach enables the calibration of inertial navigation systems with low positional accuracy. Line observations are used to optimize the rotational part of the offsets. Knowledge of the offsets between the coordinate systems of the sensors allows measurements to be transformed bidirectionally. This is the basis for a fusion concept combining measurements from the inertial navigation system with a visual navigation approach. As a result, more robust estimates of the system's own position and orientation are achieved. Moreover, the map created from the camera images is georeferenced.
It is shown how this map can be used to navigate an unmanned aerial system back to its starting position in the case of disturbed or failed GPS reception. The high precision of the map allows navigation through previously unexplored areas by taking into consideration the maximal drift of the camera-only navigation. The evaluated concept provides insight into the possibility of robust navigation of unmanned aerial systems with complementary sensors. The constantly increasing computing power allows the evaluation of large amounts of data and the development of new concepts for fusing the information. Future navigation systems will use the data of all available sensors to achieve the best navigation solution at any time.
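The rigid-body-motion machinery underlying the system calibration can be sketched with homogeneous transforms: once the offset between the INS and camera frames is known, a point measured in the camera frame can be mapped into the world frame by chaining transforms. Frame names and conventions here are illustrative assumptions, not the thesis's notation.

```python
import numpy as np

def make_se3(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation R and a
    translation t, representing a rigid body motion in SE(3)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def camera_to_world(T_world_ins, T_ins_cam, p_cam):
    """Map a point from the camera frame to the world frame via the INS,
    illustrating how a calibrated INS-to-camera offset (boresight and
    lever arm) lets measurements be transformed between sensor frames."""
    p = np.append(p_cam, 1.0)                 # homogeneous coordinates
    return (T_world_ins @ T_ins_cam @ p)[:3]  # chain the rigid motions
```

The inverse chain maps world points back into the camera frame, which is what "transforming measurements bidirectionally" amounts to once the calibration is known.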

    Design and verification of Guidance, Navigation and Control systems for space applications

    In the last decades, systems have greatly increased in complexity, in terms of the number of functions that can be performed and the quantity of relationships between functions and hardware, as well as the interactions of the elements and disciplines concurring in the definition of the system. This growing complexity underscores the importance of defining methods and tools that improve the design, verification, and validation process: effectiveness and cost reduction without loss of confidence in the final product are the objectives to be pursued. Within the systems engineering context, the modern model- and simulation-based approach is a promising strategy to meet these goals, because it reduces wasted resources with respect to traditional methods, saving money and tedious work. Model-Based Systems Engineering (MBSE) starts from the idea that it is possible at any moment to verify, through simulation sessions and according to the phase of the life cycle, the feasibility, capabilities, and performance of the system. Simulation is used throughout the engineering process and can be classified from fully numerical (i.e., all the equipment and conditions are reproduced as virtual models) to fully integrated hardware simulation (where the system is represented by real hardware and software modules in their operational environment). Within this range of simulations, a few important stages can be defined: algorithm in the loop (AIL), software in the loop (SIL), controller in the loop (CIL), hardware in the loop (HIL), and hybrid configurations among those. The research activity in which this thesis is embedded aims at defining and validating an iterative methodology (based on the model and simulation approach) that supports engineering teams and improves the effectiveness of the design and verification of a space system, with particular interest in the Guidance, Navigation and Control (GNC) subsystem.
The choice of focusing on GNC derives from the common interest and background of the groups involved in this research program (ASSET at Politecnico di Torino and AvioSpace, an EADS company). Moreover, the GNC system is sufficiently complex (demanding both specialist knowledge and systems engineering skills) and vital for any spacecraft; last but not least, verifying its behavior on the ground is difficult, because strong limitations arise in reproducing the dynamics and environment. Considering that verification should be performed along the entire product life cycle, a tool and facility (a simulator) independent of the complexity level of the test and the stage of the project is needed. This thesis deals with the design of that simulator, called StarSim, which is the real heart of the proposed methodology. It has been entirely designed and developed, from requirements definition to software implementation and hardware construction, up to the assembly, integration, and verification of the first simulator release. In addition, the development of this technology followed modern standards on software development and project management. StarSim is a unique and self-contained platform: this feature mitigates the risk of incompatibilities, misunderstandings, and loss of information that may arise when using different software, simulation tools, and facilities across the various phases. Modularity, flexibility, speed, connectivity, real-time operation, fidelity to the real world, ease of data management, and effectiveness and congruence of the outputs with respect to the inputs are the sought-after features in the StarSim design. For every iteration of the methodology, StarSim guarantees the possibility to verify the behavior of the system under test thanks to the permanent availability of virtual models, which substitute all those elements not yet available as well as the non-reproducible dynamics and environmental conditions.
StarSim provides a well-stocked and user-friendly database of models and interfaces that cover different levels of detail and fidelity, and supports updating of the database by allowing the user to create custom models (following a few simple rules). Progressively, pieces of the on-board software and hardware can be introduced without stopping the process of design and verification, avoiding delays and loss of resources. StarSim has been used for the first time with the CubeSats belonging to the e-st@r program, an educational project carried out by students and researchers of the “CubeSat Team Polito”. StarSim has mainly been used for the development of the payload, an Active Attitude Determination and Control System (A-ADCS), but its capabilities have also been extended to evaluate the functionality, operations, and performance of the entire satellite. AIL, SIL, CIL, and HIL simulations have been performed throughout all phases of the project, successfully verifying a great number of functional and operational requirements. In particular, attitude determination algorithms, control laws, and modes of operation have been selected and verified; the software has been developed step by step, and the bug-free executable files have been loaded onto the micro-controller. All the interfaces and protocols, as well as data and command handling, have been verified. Actuators, logic, and electrical circuits have been designed, built, and tested, and sensor calibration has been performed. Problems such as real-time execution and synchronization have been solved, and a complete hardware-in-the-loop test campaign, both for the standalone A-ADCS and for the entire satellite, has been performed, verifying the satisfaction of a great number of CubeSat functional and operational requirements. The case study represents the first validation of the methodology with the first release of StarSim.
It has been proven that the methodology is effective in improving the design and verification activities, which is a key point in increasing the confidence level in the success of a space mission.
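The progression from algorithm-in-the-loop to hardware-in-the-loop simulation rests on components being swappable behind a common interface. Below is a minimal sketch of that idea, assuming hypothetical class and method names; this is not StarSim's actual API.

```python
class VirtualGyro:
    """Virtual model standing in for hardware that is not yet available."""
    def read(self, t):
        return 0.01 * t        # synthetic angular rate, rad/s

class SimulationLoop:
    """Minimal simulation loop: the sensor and controller are plug-in
    objects with a common interface, so a virtual model can later be
    swapped for real hardware (AIL -> SIL -> HIL) without changing
    the loop itself. Illustrative sketch only."""
    def __init__(self, sensor, controller, dt=0.1):
        self.sensor, self.controller, self.dt = sensor, controller, dt

    def run(self, steps):
        log = []
        for k in range(steps):
            t = k * self.dt
            rate = self.sensor.read(t)     # sensor (virtual or real)
            cmd = self.controller(rate)    # control law under test
            log.append((t, rate, cmd))
        return log

# A proportional control law under test, closed around the virtual sensor.
log = SimulationLoop(VirtualGyro(), controller=lambda r: -2.0 * r).run(5)
```

Replacing `VirtualGyro` with a driver that reads a real sensor over a serial link would turn the same loop into a hardware-in-the-loop test, which is the swap-in-place idea the methodology relies on.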

    Wearable Movement Sensors for Rehabilitation: From Technology to Clinical Practice

    This Special Issue shows a range of potential opportunities for the application of wearable movement sensors in motor rehabilitation. However, the papers certainly do not cover the whole field of physical behavior monitoring in motor rehabilitation. Most studies in this Special Issue focus on the technical validation of wearable sensors and the development of algorithms. Clinical validation studies, studies applying wearable sensors for the monitoring of physical behavior in daily-life conditions, and papers about the implementation of wearable sensors in motor rehabilitation are under-represented. Studies investigating the usability and feasibility of wearable movement sensors in clinical populations were lacking. We encourage researchers to investigate the usability, acceptance, feasibility, reliability, and clinical validity of wearable sensors in clinical populations, to facilitate the application of wearable movement sensors in motor rehabilitation.

    Sensory Communication

    Contains table of contents for Section 2, an introduction, and reports on fifteen research projects. Sponsored by:
    National Institutes of Health Grant RO1 DC00117
    National Institutes of Health Grant RO1 DC02032
    National Institutes of Health Contract P01-DC00361
    National Institutes of Health Contract N01-DC22402
    National Institutes of Health/National Institute on Deafness and Other Communication Disorders Grant 2 R01 DC00126
    National Institutes of Health Grant 2 R01 DC00270
    National Institutes of Health Contract N01 DC-5-2107
    National Institutes of Health Grant 2 R01 DC00100
    U.S. Navy - Office of Naval Research/Naval Air Warfare Center Contract N61339-94-C-0087
    U.S. Navy - Office of Naval Research/Naval Air Warfare Center Contract N61339-95-K-0014
    U.S. Navy - Office of Naval Research/Naval Air Warfare Center Grant N00014-93-1-1399
    U.S. Navy - Office of Naval Research/Naval Air Warfare Center Grant N00014-94-1-1079
    U.S. Navy - Office of Naval Research Subcontract 40167
    U.S. Navy - Office of Naval Research Grant N00014-92-J-1814
    National Institutes of Health Grant R01-NS33778
    U.S. Navy - Office of Naval Research Grant N00014-88-K-0604
    National Aeronautics and Space Administration Grant NCC 2-771
    U.S. Air Force - Office of Scientific Research Grant F49620-94-1-0236
    U.S. Air Force - Office of Scientific Research Agreement with Brandeis University