12 research outputs found

    Development of a Model and Localization Algorithm for Received Signal Strength-Based Geolocation

    Location-Based Services (LBS), also called geolocation, have become increasingly popular in recent decades. Their uses range from assisting emergency personnel to military reconnaissance and social-media applications. In geolocation, a group of sensors estimates the location of transmitters using position and Radio Frequency (RF) information. A review of the literature revealed that a majority of the Received Signal Strength (RSS) techniques in use made erroneous assumptions about the signal distribution or ignored the effects of multiple transmitters, noise, and multiple antennas. Further, the corresponding algorithms are often mathematically complex and computationally expensive. To address these issues, this dissertation focused on RSS models that account for the effects of external factors, and on algorithms that are more efficient and accurate.
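    The kind of RSS-based localization the abstract describes can be sketched with a standard log-distance path-loss model and a least-squares grid search. All parameter values and the estimator below are illustrative assumptions, not the dissertation's actual models:

```python
import numpy as np

def rss_dbm(tx, sensors, p0=-40.0, n=2.5):
    """Log-distance path-loss model: predicted RSS (dBm) at each sensor.
    p0 is the RSS at 1 m and n the path-loss exponent (illustrative values)."""
    d = np.linalg.norm(sensors - tx, axis=1)
    return p0 - 10.0 * n * np.log10(np.maximum(d, 1.0))

def locate(rss_meas, sensors, candidates):
    """Least-squares grid search: return the candidate position whose
    predicted RSS best matches the measured RSS at all sensors."""
    errs = [np.sum((rss_dbm(c, sensors) - rss_meas) ** 2) for c in candidates]
    return candidates[int(np.argmin(errs))]

# Four sensors at the corners of a 100 m x 100 m area, one transmitter.
sensors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
tx_true = np.array([62.0, 38.0])
rng = np.random.default_rng(0)
rss_meas = rss_dbm(tx_true, sensors) + rng.normal(0.0, 0.5, size=4)  # noisy RSS

xs = np.linspace(0.0, 100.0, 101)                  # 1 m grid resolution
candidates = np.array([(x, y) for x in xs for y in xs])
est = locate(rss_meas, sensors, candidates)
```

    With only 0.5 dB of measurement noise the estimate lands within a few metres of the true transmitter; heavier noise, multiple transmitters, or an unknown path-loss exponent (the issues the dissertation targets) degrade this simple estimator quickly.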

    Design of a walking robot

    Carnegie Mellon University's Autonomous Planetary Exploration Program (APEX) is currently building the Daedalus robot, a system capable of performing extended autonomous planetary exploration missions. Extended autonomy is an important capability because the continued exploration of the Moon, Mars, and other solid bodies within the solar system will probably be carried out by autonomous robotic systems. There are a number of reasons for this, the most important of which are the high cost of placing a man in space, the high risk associated with human exploration, and the communication delays that make teleoperation infeasible. The Daedalus robot represents an evolutionary approach to robot mechanism design and software system architecture. Daedalus incorporates key features from a number of predecessor systems. Using previously proven technologies, the APEX project endeavors to encompass all of the capabilities necessary for robust planetary exploration. The Ambler, a six-legged walking machine, was developed by CMU to demonstrate the technologies required for planetary exploration. In its five years of life, the Ambler project brought major breakthroughs in various areas of robotic technology. Significant progress was made in: mechanism and control, by introducing a novel gait pattern (circulating gait) and the use of orthogonal legs; perception, by developing sophisticated algorithms for map building; and planning, by developing and implementing the Task Control Architecture to coordinate tasks and control complex system functions. The APEX project is the successor of the Ambler project.

    Generating depth maps from stereo image pairs


    SIMULATING, RECONSTRUCTING, AND ROUTING METROPOLITAN-SCALE TRAFFIC

    Few phenomena are more ubiquitous than traffic, and few are more significant economically, socially, or environmentally. The vast, world-spanning road network enables the daily commutes of billions of people and makes us mobile in a way our ancestors would have envied. And yet, few systems perform so poorly so often. Gridlock and traffic jams waste 2.9 billion gallons of fuel and cost over 121 billion dollars every year in the U.S. alone. One promising approach to improving the reliability and efficiency of traffic systems is to fully incorporate computational techniques into the system, transforming the traffic systems of today into cyber-physical systems. However, creating a truly cyber-physical traffic system will require overcoming many substantial challenges. The state of traffic at any given time is unknown for the majority of the road network. The dynamics of traffic are complex, noisy, and dependent on drivers' decisions. The domain of the system, the real-world road network, has no suitable representation for high-detail simulation. And there is no known solution for improving the efficiency and reliability of the system. In this dissertation, I propose techniques that combine simulation and data to solve these challenges and enable large-scale traffic state estimation, simulation, and route planning. First, to create and represent road networks, I propose an efficient method for enhancing noisy GIS road maps to create geometrically and topologically consistent 3D models for high-detail, real-time traffic simulation, interactive visualization, traffic state estimation, and vehicle routing. The resulting representation provides important road features for traffic simulations, including ramps, highways, overpasses, merge zones, and intersections with arbitrary states.
Second, to estimate and communicate traffic conditions, I propose a fast technique to reconstruct traffic flows from in-road sensor measurements or user-specified control points for interactive 3D visualization and communication. My algorithm estimates the full state of the traffic flow from sparse sensor measurements using a statistical inference method and a continuum traffic model. This estimated state then drives an agent-based traffic simulator to produce a 3D animation of traffic that statistically matches the sensed traffic conditions. Third, to improve real-world traffic system efficiency, I propose a novel approach that takes advantage of mobile devices, such as cellular phones or embedded systems in cars, to form an interactive, participatory network of vehicles that plan their travel routes based on the current, sensed traffic conditions and the future, projected traffic conditions, which are estimated from the routes planned by all the participants. The premise of this approach is that a route, or plan, for a vehicle is also a prediction of where the car will travel. If routes are planned for a sizable percentage of the vehicles using the road network, an estimate for the overall traffic pattern is attainable. If fewer cars are being coordinated, their impact on the traffic conditions can be combined with sensor-based estimations. Taking planned routes into account as predictions allows the entire traffic route planning system to better distribute vehicles and to minimize traffic congestion. For each of these challenges, my work is motivated by the idea of fully integrating traffic simulation, as a model for the complex dynamics of real-world traffic, with emerging data sources, including real-time sensor and public survey data. Doctor of Philosophy
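    The route-as-prediction idea can be sketched on a toy road graph: each planned route adds to the projected load on its edges, and later planners route around the predicted congestion. The graph, capacities, and BPR-style delay function below are illustrative assumptions, not the dissertation's models:

```python
import heapq
from collections import defaultdict

def travel_time(base, load, capacity, alpha=0.15, beta=4):
    """BPR-style congestion delay: travel time grows with projected
    load relative to capacity (illustrative parameters)."""
    return base * (1.0 + alpha * (load / capacity) ** beta)

def plan_route(graph, loads, capacity, src, dst):
    """Dijkstra over congestion-aware edge weights."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, base in graph[u].items():
            nd = d + travel_time(base, loads[(u, v)], capacity)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:          # walk predecessors back to the source
        node = prev[node]
        path.append(node)
    return path[::-1]

# Two routes from A to D: the short one via B, a longer one via C.
graph = {"A": {"B": 1.0, "C": 1.5}, "B": {"D": 1.0}, "C": {"D": 1.0}, "D": {}}
loads = defaultdict(float)
routes = []
for _ in range(5):
    r = plan_route(graph, loads, capacity=3.0, src="A", dst="D")
    routes.append(r)
    for u, v in zip(r, r[1:]):   # the planned route becomes a prediction
        loads[(u, v)] += 1.0
```

    Early vehicles take the short path A-B-D; as their planned routes pile projected load onto those edges, the congestion-aware weights eventually make A-C-D cheaper and the system spreads traffic across both routes.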

    Advanced GPS signal processing techniques for LBS services

    In the past, for GPS (Global Positioning System) to work accurately, an unobstructed LOS (Line-Of-Sight) signal between satellite and receiver was necessary, and weak signals were not usable because of their large associated noise and other errors. The expansion of GPS to LBS (Location-Based Services) and other navigation applications worldwide, such as the E-911 and E-112 mandates in the United States and Europe respectively, changed this paradigm. Consequently, a dramatic increase in the need for ever more capable positioning techniques is expected, especially in urban and indoor environments. These rising localization requirements pose a particularly difficult challenge for GPS receiver design. The objective of this thesis is to evaluate and enhance existing GPS signal acquisition techniques for positioning in harsh environments, in the context of AGPS (Assisted GPS). The AGPS system assumes that the GPS receiver is connected to or embedded in a mobile phone, which allows AD (Assistance Data) to be transferred to the GPS receiver via the GSM (Global System for Mobile communications) cellular network. Among other things, the AD provides the GPS receiver with the list of visible satellites and estimates of their Dopplers and code delays, thus reducing the search window for these parameters. This work explores different GPS signal acquisition techniques to reduce the acquisition time, or TTFF (Time To First Fix), without affecting the receiver sensitivity. This is done after a prior study of the GPS radio channel.
The study starts with a review of the GPS system and the structure of the transmitted and received GPS signals. The acquisition process is then described in detail: classical acquisition is presented first, followed by the impact of the propagation environment on this stage of the signal processing. For this purpose, harsh environments (urban and indoor) are modelled and analysed. This analysis highlights the problems encountered by radio-frequency signals propagating through such environments. The urban channel is studied using an existing statistical model developed by Alexander Steingass and Andreas Lehner at the DLR (German Aerospace Center) [Steingass et al., 2005]. An indoor channel model was developed by the ESA (European Space Agency) within a project entitled “Navigation signal measurement campaign for critical environments” and presented in [Pérez-Fontán et al., 2004], but that model considers a time-invariant statistical channel. We therefore developed an indoor model that instead considers a time-variant channel, taking into account temporal variations of certain channel parameters, such as the delay and phase of the transfer function; the initial values of these parameters are nevertheless based on the statistical distributions provided by the ESA model. The channels are analysed in terms of multipath, cross-correlations, and signal masking. Multipath replicas are particularly disturbing in urban environments, while cross-correlations and masking effects are more disturbing in indoor environments.
These phenomena can induce errors in the final position calculated by the receiver. One remedy is to increase the signal observation duration in order to enhance the signal-to-noise ratio, but this generally implies a longer acquisition time, compromising the commercial quality of the receiver given the TTFF constraints. Indeed, time requirements are as important as sensitivity requirements for GPS users, yet the two generally conflict. Consequently, an ideal solution reduces the acquisition time without greatly affecting the receiver sensitivity. Accordingly, advanced acquisition signal processing methods are described next. Most of these methods aim at reducing the total acquisition time rather than enhancing the receiver sensitivity directly: this allows longer signal blocks to be processed, thus improving sensitivity, without increasing the overall processing duration. Each of these methods is first characterized by evaluating its advantages and drawbacks. A performance evaluation of these algorithms, using signals generated with a Spirent STR4500 simulator, concludes the study.
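    The classical parallel code-phase acquisition such receivers rely on can be sketched as a circular correlation computed via FFT. A random ±1 sequence stands in for a real C/A code here, and a full search would repeat this for each Doppler bin; both simplifications are assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1023                                   # C/A-code length in chips
code = rng.choice([-1.0, 1.0], size=N)     # stand-in PRN sequence (not a real C/A code)
true_delay = 417
# Received baseband signal: the code circularly shifted by the unknown
# code phase, plus additive white Gaussian noise.
signal = np.roll(code, true_delay) + rng.normal(0.0, 1.0, size=N)

# Parallel code-phase search: one FFT/IFFT pair evaluates the circular
# correlation at all N code phases at once, instead of N serial correlations.
corr = np.fft.ifft(np.fft.fft(signal) * np.conj(np.fft.fft(code)))
est_delay = int(np.argmax(np.abs(corr)))
```

    The correlation peak at the true code phase stands roughly N/sqrt(N) above the noise floor for a 1 ms block, which is why longer coherent or non-coherent integration (at the cost of acquisition time) is the lever for sensitivity discussed in the abstract.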

    Advances in knowledge discovery and data mining Part II

    19th Pacific-Asia Conference, PAKDD 2015, Ho Chi Minh City, Vietnam, May 19-22, 2015, Proceedings, Part II

    GSI Scientific Report 2008 [GSI Report 2009-1]


    Measurement of service innovation project success: A practical tool and theoretical implications


    GSI Scientific Report 2015 / GSI Report 2016-1


    GSI Scientific Report 2009 [GSI Report 2010-1]

    The displacement design response spectrum is an essential component of the currently developing displacement-based seismic design and assessment procedures. This paper proposes a new and simple method for constructing displacement design response spectra on soft-soil sites. The method takes into account modifications of the seismic waves by the soil layers, giving due consideration to factors such as the level of bedrock shaking, material non-linearity, the seismic impedance contrast at the interface between soil and bedrock, and the plasticity of the soil layers. The model is particularly suited to applications in regions with a paucity of recorded strong ground motion data, from which empirical models cannot be reliably developed.
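    For context, a displacement spectrum ordinate relates to a pseudo-acceleration ordinate through the standard single-degree-of-freedom conversion SD = SA * (T / 2*pi)**2. This is textbook background only, not the paper's soil-modification method:

```python
import math

def sd_from_sa(sa_g, period_s):
    """Pseudo-spectral displacement (m) from spectral acceleration (in g)
    at natural period T: SD = SA / omega**2, with omega = 2*pi/T and SA
    converted from g to m/s^2."""
    omega = 2.0 * math.pi / period_s
    return (sa_g * 9.81) / omega ** 2

# e.g. SA = 0.5 g at T = 1 s gives a spectral displacement of about 12 cm.
sd = sd_from_sa(0.5, 1.0)
```

    Soft-soil effects of the kind the paper models typically lengthen the site period and amplify these ordinates, which is why a bedrock spectrum cannot simply be reused on soft-soil sites.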