Self-localizing Smart Cameras and Their Applications
As the prices of cameras and computing elements continue to fall, it
has become increasingly attractive to consider the deployment of
smart camera networks. These networks would be composed of small,
networked computers equipped with inexpensive image sensors. Such
networks could be employed in a wide range of applications including
surveillance, robotics and 3D scene reconstruction.
One critical problem that must be addressed before such systems can
be deployed effectively is the issue of localization. That is, in
order to take full advantage of the images gathered from multiple
vantage points it is helpful to know how the cameras in the scene
are positioned and oriented with respect to each other. To address
the localization problem we have proposed a novel approach to
localizing networks of embedded cameras and sensors. In this scheme
the cameras and the nodes are equipped with controllable light
sources (either visible or infrared) which are used for
signaling. Each camera node can then automatically determine the
bearing to all the nodes that are visible from its vantage point. By
fusing these measurements with the measurements obtained from
onboard accelerometers, the camera nodes are able to determine the
relative positions and orientations of other nodes in the network.
This localization technology can serve as a basic capability on
which higher level applications can be built. The method could be
used to automatically survey the locations of sensors of interest,
to implement distributed surveillance systems or to analyze the
structure of a scene based on the images obtained from multiple
registered vantage points. It also provides a mechanism for
integrating the imagery obtained from the cameras with the
measurements obtained from distributed sensors.
We have successfully used our custom-made self-localizing smart
camera networks to implement a novel decentralized target tracking
algorithm, create an ad hoc range finder, and localize the components
of a self-assembling modular robot.
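The core geometric step described above, turning bearing measurements from already-localized camera nodes into a position for a signaling node, can be sketched as a ray intersection. This is an illustrative reconstruction, not the authors' implementation; the function name and setup are assumptions:

```python
import numpy as np

def triangulate_from_bearings(p1, d1, p2, d2):
    """Locate a signaling node from two camera positions p1, p2 and the
    unit bearing vectors d1, d2 each camera measures toward the node.
    Returns the midpoint of the common perpendicular of the two rays."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    r = p2 - p1
    b = d1 @ d2
    # Ray parameters t, s minimizing |(p1 + t*d1) - (p2 + s*d2)|
    t = (r @ d1 - b * (r @ d2)) / (1.0 - b * b)
    s = b * t - r @ d2
    return 0.5 * ((p1 + t * d1) + (p2 + s * d2))

# Synthetic check: two localized cameras observe a blinking node
p1, p2 = np.zeros(3), np.array([4.0, 0.0, 0.0])
node = np.array([2.0, 3.0, 1.0])
est = triangulate_from_bearings(p1, node - p1, p2, node - p2)
```

With noisy bearings the two rays no longer intersect, which is why the midpoint of their common perpendicular (rather than an exact intersection) is the natural estimate.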
Out of Nowhere: The 'emergence' of spacetime in string theory
This is a chapter of the planned monograph "Out of Nowhere: The Emergence of
Spacetime in Quantum Theories of Gravity", co-authored by Nick Huggett and
Christian W\"uthrich and under contract with Oxford University Press. (More
information at www.beyondspacetime.net.) This chapter analyses the nature and
derivation of spacetime topology and geometry according to string theory.
Comment: 40 pages, 2 figures
Change blindness: eradication of gestalt strategies
Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This suggests two things: (i) that Gestalt grouping is not used as a strategy in these tasks, and (ii) that further weight is lent to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
Human factors in instructional augmented reality for intravehicular spaceflight activities and how gravity influences the setup of interfaces operated by direct object selection
In human spaceflight, advanced user interfaces are becoming an attractive means of facilitating human-machine interaction, enhancing and safeguarding the sequences of intravehicular space operations. Efforts to ease such operations have shown strong interest in novel human-computer interaction techniques such as Augmented Reality (AR). The work presented in this thesis is directed towards a user-driven design for AR-assisted space operations, iteratively solving issues arising from the problem space, which also includes consideration of the effect of altered gravity on handling such interfaces.
Seamless Positioning and Navigation in Urban Environment
The abstract is in the attachment.
Computational intelligence approaches to robotics, automation, and control [Volume guest editors]
No abstract available
Sulautettu ohjelmistototeutus reaaliaikaiseen paikannusjärjestelmään (Embedded software implementation for a real-time positioning system)
Asset tracking often necessitates wireless radio-frequency identification (RFID). In practice, situations often arise where plain inventory operations are not sufficient, and methods to estimate movement trajectories are needed for reliable observation, classification, and report generation.
In this thesis, an embedded software application for an industrial, resource-constrained, off-the-shelf RFID reader device in the UHF frequency range is designed and implemented. The software is used to configure the reader and its air-interface operations, accumulate read reports, and generate events to be reported over network connections. Integrating location estimation methods into the application makes deploying middleware RFID solutions more streamlined and robust while reducing network bandwidth requirements.
The result of this thesis is a functional embedded software application running on top of an embedded Linux distribution on an ARM processor. The reader software is used commercially in industrial and logistics applications. Non-linear state estimation features are applied, and their performance is evaluated in empirical experiments.
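The abstract does not specify which non-linear estimator is used. As a hedged sketch of the general idea, a scalar extended Kalman filter can estimate tag range from RSSI through a log-distance path-loss model; all parameter values and names here are illustrative assumptions, not the thesis's design:

```python
import math

def h(d, p0=-40.0, n=2.0):
    """Log-distance path-loss model: expected RSSI (dBm) at range d (m).
    p0 is the RSSI at 1 m, n the path-loss exponent (both assumed)."""
    return p0 - 10.0 * n * math.log10(d)

def ekf_range(measurements, x0=2.0, P0=4.0, Q=0.01, R=1.0, n=2.0):
    """Scalar EKF: state is tag range, measurement is RSSI.
    The measurement model is non-linear, hence the local linearization H."""
    x, P = x0, P0
    for z in measurements:
        P += Q                                 # predict: random-walk range
        H = -10.0 * n / (x * math.log(10.0))   # dh/dx at current estimate
        K = P * H / (H * P * H + R)            # Kalman gain
        x += K * (z - h(x))                    # innovation update
        P *= (1.0 - K * H)
        x = max(x, 1e-3)                       # keep range positive
    return x

# Noise-free sanity check: a tag held at 5 m from the reader antenna
est = ekf_range([h(5.0)] * 100)
```

A particle filter would handle the multimodal likelihoods typical of indoor RF propagation better, at higher computational cost, which matters on a resource-constrained reader.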
Vision-Aided Navigation for GPS-Denied Environments Using Landmark Feature Identification
In recent years, unmanned autonomous vehicles have been used in diverse applications because of their multifaceted capabilities. In most cases, the navigation systems for these vehicles are dependent on Global Positioning System (GPS) technology. Many applications of interest, however, entail operations in environments in which GPS is intermittent or completely denied. These applications include operations in complex urban or indoor environments as well as missions in adversarial environments where GPS might be denied using jamming technology.
This thesis investigates the development of vision-aided navigation algorithms that utilize processed images from a monocular camera as an alternative to GPS. The vision-aided navigation approach explored in this thesis entails defining a set of inertial landmarks, the locations of which are known within the environment, and employing image processing algorithms to detect these landmarks in image frames collected from an onboard monocular camera. These vision-based landmark measurements effectively serve as surrogate GPS measurements that can be incorporated into a navigation filter. Several image processing algorithms were considered for landmark detection, and this thesis focuses in particular on two approaches: the continuous adaptive mean shift (CAMSHIFT) algorithm and the adaptable compressive (ADCOM) tracking algorithm. These algorithms are discussed in detail and applied to the detection and tracking of landmarks in monocular camera images. Navigation filters are then designed that employ sensor fusion of accelerometer and rate gyro data from an inertial measurement unit (IMU) with vision-based measurements of the centroids of one or more landmarks in the scene. These filters are tested in simulated navigation scenarios subject to varying levels of sensor and measurement noise and varying numbers of landmarks. Finally, conclusions and recommendations are provided regarding the implementation of this vision-aided navigation approach for autonomous vehicle navigation systems.
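One way to make the "surrogate GPS" idea concrete is a least-squares position fix from bearings to identified landmarks. This is a sketch under the simplifying assumption of known camera attitude (so bearings can be rotated into the world frame), not the thesis's actual filter:

```python
import numpy as np

def position_from_landmarks(landmarks, bearings):
    """Estimate camera position p from known 3-D landmark locations and the
    world-frame unit bearings measured toward them.  Each ray constrains p
    through the projector equation (I - d d^T)(L - p) = 0."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for L, d in zip(landmarks, bearings):
        d = d / np.linalg.norm(d)
        M = np.eye(3) - np.outer(d, d)   # projector onto plane normal to d
        A += M
        b += M @ L
    return np.linalg.solve(A, b)         # least-squares intersection of rays

# Synthetic check: camera at (1, 2, 0.5) sees three known landmarks
cam = np.array([1.0, 2.0, 0.5])
lms = [np.array([5.0, 0.0, 1.0]), np.array([0.0, 6.0, 2.0]),
       np.array([3.0, 3.0, 4.0])]
brs = [L - cam for L in lms]             # noise-free bearings toward each
est = position_from_landmarks(lms, brs)
```

In a navigation filter these same bearing constraints would enter as measurement updates rather than a one-shot solve, with the IMU providing the prediction step between camera frames.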
Fusion of non-visual and visual sensors for human tracking
Human tracking is an extensively researched yet still challenging area in the Computer Vision field, with a wide range of applications such as surveillance and healthcare. People may not be successfully tracked with visual information alone in challenging cases such as long-term occlusion. We therefore propose to combine information from other sensors with surveillance cameras to persistently localize and track humans, an approach made increasingly practical by the pervasiveness of mobile devices such as cellphones, smart watches, and smart glasses embedded with sensors including accelerometers, gyroscopes, magnetometers, GPS, and WiFi modules. In this thesis, we first investigate the application of the Inertial Measurement Unit (IMU) in mobile devices to human activity recognition and human tracking. We then develop novel persistent human tracking and indoor localization algorithms through the fusion of non-visual and visual sensors, which not only overcomes the occlusion challenge in visual tracking but also alleviates the calibration and drift problems in IMU tracking --Abstract, page iii
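The complementarity claimed above (vision corrects IMU drift, the IMU bridges occlusions) can be illustrated with a minimal 1-D Kalman fusion sketch; the model, parameters, and function name are illustrative assumptions, not the thesis's algorithm:

```python
import numpy as np

def fuse_imu_vision(accels, cam_meas, dt=0.1, R=0.25, q=0.05):
    """1-D position/velocity fusion: the accelerometer propagates the state
    every step; a camera position fix (None while the person is occluded)
    corrects the accumulated drift whenever it is available."""
    x = np.zeros(2)                       # state: [position, velocity]
    P = np.eye(2)
    F = np.array([[1.0, dt], [0.0, 1.0]])
    Q = q * np.array([[dt**4 / 4, dt**3 / 2], [dt**3 / 2, dt**2]])
    H = np.array([[1.0, 0.0]])            # camera observes position only
    for a, z in zip(accels, cam_meas):
        # Predict with the accelerometer reading as a control input
        x = F @ x + np.array([0.5 * a * dt**2, a * dt])
        P = F @ P @ F.T + Q
        if z is not None:                 # visual fix available this frame
            S = H @ P @ H.T + R
            K = P @ H.T / S
            x = x + (K * (z - x[0])).ravel()
            P = (np.eye(2) - K @ H) @ P
    return x[0]

# A person accelerates at 0.2 m/s^2; the camera sees them 1 frame in 5
steps = 50
true_pos = [0.5 * 0.2 * (k * 0.1) ** 2 for k in range(1, steps + 1)]
cam = [p if k % 5 == 0 else None for k, p in enumerate(true_pos)]
pos = fuse_imu_vision([0.2] * steps, cam)
```

During the `None` stretches the filter coasts on inertial prediction alone, which is exactly where covariance grows and the next visual fix does the most work.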
UAV or Drones for Remote Sensing Applications in GPS/GNSS Enabled and GPS/GNSS Denied Environments
The design of novel UAV systems and the use of UAV platforms integrated with robotic sensing and imaging techniques, as well as the development of processing workflows and the capacity for ultra-high temporal and spatial resolution data, have enabled a rapid uptake of UAVs and drones across several industries and application domains.
This book provides a forum for high-quality peer-reviewed papers that broaden awareness and understanding of single- and multiple-UAV developments for remote sensing applications, and associated developments in sensor technology, data processing and communications, and UAV system design and sensing capabilities in GPS-enabled and, more broadly, Global Navigation Satellite System (GNSS)-enabled and GPS/GNSS-denied environments.
Contributions include: UAV-based photogrammetry, laser scanning, multispectral imaging, hyperspectral imaging, and thermal imaging; UAV sensor applications; spatial ecology; pest detection; reef, forestry, and volcanology studies; precision agriculture; wildlife species tracking; search and rescue; target tracking; atmosphere monitoring; chemical, biological, and natural disaster phenomena; fire and flood prevention; volcanic monitoring; pollution monitoring; microclimates; land use; wildlife and target detection and recognition from UAV imagery using deep learning and machine learning techniques; and UAV-based change detection.