
    Subsurface robotic exploration for geomorphology, astrobiology and mining during MINAR6 campaign, Boulby Mine, UK : part I (Rover development)

    Acknowledgement. The authors would like to thank the Kempe Foundation for its generous funding support to develop KORE; the workshop at Teknikens Hus, Luleå, for its invaluable and unconditional support in fabricating the rover components; and the organizers of the MINAR campaign, comprising the UK Centre for Astrobiology, the Dark Matter Research Facility and Israel Chemicals Limited (ICL), UK.

    Computer-Assisted Interactive Documentary and Performance Arts in Illimitable Space

    The major component of the research described in this thesis is 3D computer graphics, specifically realistic physics-based softbody simulation and haptic responsive environments. Minor components include advanced human-computer interaction environments, non-linear documentary storytelling, and theatre performance. The journey of this research has been unusual because it requires a researcher with solid knowledge and background in multiple disciplines, who also has to be creative and sensitive in order to combine these areas into a new research direction. [...] It focuses on advanced computer graphics and emerges from experimental cinematic works and theatrical artistic practices. Several installations and development artifacts were completed to demonstrate and evaluate the described concepts convincingly. [...] To summarize, the resulting work involves not only artistic creativity, but also solving technological hurdles in motion tracking, pattern recognition, force feedback control, etc., with the available documentary footage on film, video, or images, and text via a variety of devices [...] and programming and installing all the needed interfaces so that it all works in real time. Thus, the contribution to knowledge lies in solving these interfacing problems and the real-time aspects of the interaction, which have uses in the film industry, the fashion industry, new age interactive theatre, computer games, and web-based technologies and services for entertainment and education. It also includes building on this experience to integrate Kinect- and haptic-based interaction, artistic scenery rendering, and other forms of control. This research connects seemingly disjoint fields of research, such as computer graphics, documentary film, interactive media, and theatre performance. Comment: PhD thesis copy; 272 pages, 83 figures, 6 algorithms.
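
    Since physics-based softbody simulation is the thesis's major technical component, a minimal sketch of the kind of mass-spring integration such simulations typically build on may be useful. The chain topology, constants, and semi-implicit Euler step below are illustrative assumptions, not the thesis's actual implementation:

        # Minimal 2D mass-spring softbody step (semi-implicit Euler).
        # All constants and the chain topology are illustrative assumptions.
        import numpy as np

        N = 10                      # mass points along a chain
        k, c, m = 50.0, 0.5, 0.1    # spring stiffness, damping, point mass
        rest = 0.1                  # rest length between neighbours
        g = np.array([0.0, -9.81])  # gravity

        pos = np.column_stack([np.linspace(0, rest * (N - 1), N), np.zeros(N)])
        vel = np.zeros_like(pos)

        def step(pos, vel, dt=1e-3):
            force = np.tile(m * g, (N, 1)) - c * vel   # gravity + viscous damping
            d = pos[1:] - pos[:-1]                     # neighbour offsets
            length = np.linalg.norm(d, axis=1, keepdims=True)
            f = k * (length - rest) * d / length       # Hooke's law along each spring
            force[:-1] += f                            # pull node i toward node i+1
            force[1:] -= f                             # and node i+1 back toward node i
            vel = vel + dt * force / m
            vel[0] = 0.0                               # pin the first node in place
            return pos + dt * vel, vel

        for _ in range(1000):
            pos, vel = step(pos, vel)

    In a real-time haptic installation, a loop of this kind would run at the device's servo rate, with the user's probe contributing an extra external force term.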

    Semantic Mapping of Road Scenes

    The problem of understanding road scenes has been at the forefront of the computer vision community for the last couple of years. Scene understanding enables autonomous systems to navigate and interpret the surroundings in which they operate. It involves reconstructing the scene and estimating the objects present in it, such as ‘vehicles’, ‘road’, ‘pavements’ and ‘buildings’. This thesis focusses on these aspects and proposes solutions to address them. First, we propose a solution to generate a dense semantic map from multiple street-level images. This map can be imagined as the bird’s eye view of the region, with associated semantic labels, for tens of kilometres of street-level data. We generate the overhead semantic view from street-level images. This is in contrast to existing approaches that use satellite/overhead imagery for classification of urban regions, and allows us to produce a detailed semantic map for a large-scale urban area. Then we describe a method to perform large-scale dense 3D reconstruction of road scenes with associated semantic labels. Our method fuses the depth maps generated from stereo pairs across time into a global 3D volume in an online fashion, in order to accommodate arbitrarily long image sequences. The object class labels estimated from the street-level stereo image sequence are used to annotate the reconstructed volume. Then we exploit the scene structure in object class labelling by performing inference over a meshed representation of the scene. Labelling over the mesh solves two issues. Firstly, images often contain redundant information, with multiple images describing the same scene; solving these images separately is slow, whereas our method is approximately an order of magnitude faster in the inference stage than standard inference in the image domain. Secondly, multiple images of the same scene often result in inconsistent labelling; by solving a single mesh, we remove this inconsistency across images. Our mesh-based labelling also takes into account the object layout in the scene, which is often ambiguous in the image domain, thereby increasing the accuracy of object labelling. Finally, we perform labelling and structure computation through a hierarchical robust P^N Markov Random Field defined on voxels and super-voxels given by an octree. This allows us to infer the 3D structure and the object-class labels in a principled manner, through bounded approximate minimisation of a well-defined and well-studied energy functional. In this thesis, we also introduce two object-labelled datasets created from real-world data. The 15-kilometre Yotta Labelled dataset consists of 8,000 images per camera view of the roadways of the United Kingdom, with a subset annotated with object class labels; the second dataset comprises ground-truth object labels for the publicly available KITTI dataset. Both datasets are publicly available, and we hope they will be helpful to the vision research community.
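
    As a concrete illustration of the first contribution, the sketch below shows one way labelled street-level pixels, once lifted to 3D using stereo depth and camera poses, could be splatted into an overhead grid by majority vote. The grid resolution, class count, and voting scheme are assumptions for illustration, not the thesis's actual pipeline:

        # Hypothetical splatting of labelled street-level points into an
        # overhead (bird's-eye) semantic grid by per-cell majority vote.
        import numpy as np

        CELL = 0.25          # metres per grid cell (assumed resolution)
        N_CLASSES = 10       # e.g. road, pavement, vehicle, building, ...

        def splat_to_overhead(points_world, labels, x_range, y_range):
            """points_world: (N, 3) points from stereo depth + camera pose;
            labels: (N,) integer class ids from the image-domain classifier."""
            W = int((x_range[1] - x_range[0]) / CELL)
            H = int((y_range[1] - y_range[0]) / CELL)
            votes = np.zeros((H, W, N_CLASSES), dtype=np.int32)
            ix = ((points_world[:, 0] - x_range[0]) / CELL).astype(int)
            iy = ((points_world[:, 1] - y_range[0]) / CELL).astype(int)
            ok = (ix >= 0) & (ix < W) & (iy >= 0) & (iy < H)
            np.add.at(votes, (iy[ok], ix[ok], labels[ok]), 1)
            semantic_map = votes.argmax(axis=2)          # majority label per cell
            semantic_map[votes.sum(axis=2) == 0] = -1    # -1 marks unobserved cells
            return semantic_map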

    Proceedings, MSVSCC 2013

    Proceedings of the 7th Annual Modeling, Simulation & Visualization Student Capstone Conference, held on April 11, 2013 at VMASC in Suffolk, Virginia.

    Bio-Inspired Robotics

    Modern robotic technologies have enabled robots to operate in a variety of unstructured and dynamically changing environments, in addition to traditional structured environments. Robots have thus become an important element in our everyday lives. One key approach to developing such intelligent and autonomous robots is to draw inspiration from biological systems. Biological structures, mechanisms, and underlying principles have the potential to provide new ideas to support the improvement of conventional robotic designs and control. Such biological principles usually originate from animal or even plant models, for robots that can sense, think, walk, swim, crawl, jump or even fly. It is therefore believed that these bio-inspired methods are becoming increasingly important in the face of complex applications. Bio-inspired robotics is leading to the study of innovative structures and computing with sensory–motor coordination and learning, to achieve intelligence, flexibility, stability, and adaptation for emergent robotic applications such as manipulation, learning, and control. This Special Issue invites original papers of innovative ideas and concepts, new discoveries and improvements, and novel applications and business models relevant to the selected topics of "Bio-Inspired Robotics". Bio-inspired robotics is a broad topic and an ongoing, expanding field. This Special Issue collates 30 papers that address some of the important challenges and opportunities in this broad and expanding field.
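
    As one concrete instance of the sensory–motor coordination mentioned above, many bio-inspired locomotion controllers are built around coupled-oscillator central pattern generators (CPGs). The sketch below is a generic Kuramoto-style CPG with assumed frequencies, couplings, and phase offsets; it is not drawn from any particular paper in the Special Issue:

        # Generic coupled-phase-oscillator CPG for rhythmic gait generation.
        # Frequencies, couplings, and phase offsets are illustrative assumptions.
        import numpy as np

        N = 4                                 # one oscillator per leg
        freq = 1.5                            # common stepping frequency, Hz
        w = 2.0                               # coupling strength
        phase_bias = np.array([0.0, np.pi, np.pi, 0.0])  # trot-like pattern

        theta = np.random.uniform(0, 2 * np.pi, N)

        def cpg_step(theta, dt=0.01):
            dtheta = 2 * np.pi * freq * np.ones(N)
            for i in range(N):
                for j in range(N):
                    # Kuramoto-style coupling pulls oscillators toward the
                    # desired pairwise phase differences.
                    dtheta[i] += w * np.sin(theta[j] - theta[i]
                                            - (phase_bias[j] - phase_bias[i]))
            return (theta + dt * dtheta) % (2 * np.pi)

        # Joint setpoints could then be, e.g., amplitude * sin(theta) per leg.
        for _ in range(500):
            theta = cpg_step(theta)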

    Advances in Human Robot Interaction for Cloud Robotics applications

    This thesis analyses different and innovative techniques for human-robot interaction, with a focus on interaction with flying robots. The first part is a preliminary description of state-of-the-art interaction techniques. The first project, Fly4SmartCity, analyses the interaction between humans (the citizen and the operator) and drones mediated by a cloud robotics platform. This is followed by an application of the sliding autonomy paradigm and an analysis of the different degrees of autonomy supported by a cloud robotics platform. The last part is dedicated to the most innovative technique for human-drone interaction, the User’s Flying Organizer project (UFO project). This project develops a flying robot able to project information into the environment, exploiting concepts of Spatial Augmented Reality.
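
    Spatial Augmented Reality of the kind the UFO project targets requires knowing which projector pixel illuminates a given 3D point in the environment. A minimal pinhole-projection sketch, with placeholder intrinsics and pose rather than the project's actual calibration, looks like this:

        # Hypothetical pinhole mapping from a 3D world point to the projector
        # pixel that would illuminate it. Intrinsics K and pose (R, t) are
        # assumed known from calibration; the values here are placeholders.
        import numpy as np

        K = np.array([[800.0,   0.0, 640.0],    # focal lengths, principal point
                      [  0.0, 800.0, 360.0],
                      [  0.0,   0.0,   1.0]])
        R = np.eye(3)                            # world-to-projector rotation
        t = np.array([0.0, 0.0, 2.0])            # world-to-projector translation

        def world_to_pixel(p_world):
            p_proj = R @ p_world + t             # transform into projector frame
            if p_proj[2] <= 0:
                return None                      # point is behind the projector
            uvw = K @ p_proj
            return uvw[:2] / uvw[2]              # homogeneous divide -> pixel

        print(world_to_pixel(np.array([0.5, 0.2, 3.0])))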

    Development and evaluation of a haptic framework supporting telerehabilitation robotics and group interaction

    Telerehabilitation robotics has grown remarkably in the past few years. It can provide intensive training to people with special needs remotely, while allowing therapists to observe the whole process. Telerehabilitation robotics is a promising way to support routine care: it can help transform face-to-face, one-on-one treatment sessions, which demand intensive human resources and are restricted to specialised care centres, into technology-based treatments that involve less human effort and are easy to access remotely from anywhere. However, limitations such as network latency, jitter, and internet delay can negatively affect user experience and the quality of a treatment session. Moreover, because all treatments are performed over the internet, the lack of social interaction can reduce patients’ motivation. Together, these limitations make it very difficult to deliver an efficient recovery plan. This thesis developed and evaluated a new framework designed to facilitate telerehabilitation robotics. The framework integrates multiple cutting-edge technologies to generate playful activities that involve group interaction with binaural audio, visual, and haptic feedback, together with robot interaction, in a variety of environments. The research questions asked were: 1) Can activity mediated by technology motivate and influence the behaviour of users, so that they engage in the activity and sustain a good level of motivation? 2) Will working as a group enhance users’ motivation and interaction? 3) Can we transfer a real-life activity involving group interaction to the virtual domain and deliver it reliably via the internet? There were three goals in this work: first, to compare people’s behaviours and motivations while doing a task in a group and on their own; second, to determine whether group interaction in virtual and real environments differs in terms of performance, engagement and strategy for completing the task; and third, to test the effectiveness of the framework against benchmarks drawn from the socially assistive robotics literature. Three studies were conducted to achieve the first goal, two with healthy participants and one with seven autistic children. The first study observed how people react in a challenging group task, while the other two compared group and individual interactions. The results showed that group interactions were more enjoyable than individual interactions and most likely had more positive effects on user behaviour. This suggests that the group interaction approach has the potential to motivate individuals to make more movements and be more active, and could be applied in the future to more serious therapy. A further study measured group performance in virtual and real environments and identified which aspects influence users’ strategies for dealing with the task. The results of this study helped to form a better understanding of how to predict a user’s behaviour in a collaborative task. A simulation was run to compare the results generated by the predictor with the real data; it showed that, with an appropriate training method, the predictor can perform very well. This thesis has demonstrated the feasibility of group interaction via the internet using robotic technology, which could benefit people who require social interaction (e.g. stroke patients and autistic children) in their treatments, without regular visits to clinical centres.
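
    To give a flavour of the coupling such a framework must render over an imperfect network, the sketch below computes a virtual spring-damper force between a local user's haptic cursor and the last-received position of a remote partner, fading the force out as packets become stale. The gains, clamp, and fade policy are illustrative assumptions, not the framework's actual protocol:

        # Minimal virtual-coupling force for networked group haptics.
        # Gains and the latency-handling policy are illustrative assumptions.
        import numpy as np

        K_SPRING = 200.0   # N/m, coupling stiffness
        B_DAMP = 5.0       # N*s/m, damping to keep the coupling stable
        F_MAX = 3.0        # N, clamp so stale packets cannot command large forces

        def coupling_force(local_pos, local_vel, remote_pos, packet_age_s):
            """Force sent to the local haptic device each servo tick (~1 kHz).
            remote_pos is the most recent partner position received; when
            packets are stale the force fades out rather than act on old data."""
            f = K_SPRING * (remote_pos - local_pos) - B_DAMP * local_vel
            fade = max(0.0, 1.0 - packet_age_s / 0.5)   # to zero over 500 ms
            f = f * fade
            norm = np.linalg.norm(f)
            if norm > F_MAX:
                f = f * (F_MAX / norm)                   # saturate for safety
            return f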

    Probabilistic modeling for single-photon lidar

    Lidar is an increasingly prevalent technology for depth sensing, with applications including scientific measurement and autonomous navigation systems. While conventional systems require hundreds or thousands of photon detections per pixel to form accurate depth and reflectivity images, recent results for single-photon lidar (SPL) systems using single-photon avalanche diode (SPAD) detectors have shown accurate images formed from as few as one photon detection per pixel, even when half of those detections are due to uninformative ambient light. The keys to such photon-efficient image formation are two-fold: (i) a precise model of the probability distribution of photon detection times, and (ii) prior beliefs about the structure of natural scenes. Reducing the number of photons needed for accurate image formation enables faster, farther, and safer acquisition. Still, such photon-efficient systems are often limited to laboratory conditions more favorable than the real-world settings in which they would be deployed. This thesis focuses on expanding the photon detection time models to address challenging imaging scenarios and the effects of non-ideal acquisition equipment. The processing derived from these enhanced models, sometimes modified jointly with the acquisition hardware, surpasses the performance of state-of-the-art photon counting systems. We first address the problem of high levels of ambient light, which cause traditional depth and reflectivity estimators to fail. We achieve robustness to strong ambient light through a rigorously derived window-based censoring method that separates signal detections from background light detections. Spatial correlations both within and between the depth and reflectivity images are encoded in superpixel constructions, which fill in holes caused by the censoring. Accurate depth and reflectivity images can then be formed with an average of 2 signal photons and 50 background photons per pixel, outperforming methods previously demonstrated at a signal-to-background ratio of 1. We next approach the problem of coarse temporal resolution in photon detection time measurements, which limits the precision of depth estimates. To achieve sub-bin depth precision, we propose a subtractively-dithered lidar implementation, which uses changing synchronization delays to shift the time-quantization bin edges. We examine the generic noise model resulting from dithering Gaussian-distributed signals and introduce a generalized Gaussian approximation to the noise distribution, along with simple order-statistics-based depth estimators that take advantage of this model. Additional analysis of the generalized Gaussian approximation yields rules of thumb for determining when and how to apply dither to quantized measurements. We implement a dithered SPL system and propose a modification for non-Gaussian pulse shapes that outperforms the Gaussian assumption in practical experiments. The resulting dithered-lidar architecture could be used to design SPAD array detectors that form precise depth estimates despite relaxed temporal quantization constraints. Finally, SPAD dead time effects have been considered a major limitation for fast data acquisition in SPL, since a commonly adopted approach to dead time mitigation is to operate in the low-flux regime where dead time effects can be ignored. We show that the empirical distribution of detection times converges to the stationary distribution of a Markov chain, and we demonstrate improvements in depth estimation and histogram correction using our Markov chain model. An example simulation shows that correctly compensating for dead times in a high-flux measurement can yield a 20-fold speedup in data acquisition. The resulting accuracy at high photon flux could enable real-time applications such as autonomous navigation.
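
    The intuition behind the window-based censoring can be sketched compactly: signal photons returned from a surface cluster within roughly one pulse width, while ambient detections are spread uniformly over the repetition period, so the densest window flags the likely signal. The window length and toy data below are illustrative; the thesis's rigorously derived criterion is more principled than this sketch:

        # Sketch of window-based censoring: keep detections inside the densest
        # window of (assumed) pulse-width length; censor the rest as background.
        # Ignores wrap-around at the repetition-period boundary for simplicity.
        import numpy as np

        def censor(detection_times, rep_period, window):
            """detection_times: photon timestamps within [0, rep_period) for
            one pixel, pooled over illuminations; window ~ a few pulse widths."""
            t = np.sort(detection_times)
            # Detections falling in a window starting at each detection time.
            counts = np.searchsorted(t, t + window) - np.arange(t.size)
            start = t[np.argmax(counts)]
            keep = (t >= start) & (t < start + window)
            return t[keep], t[~keep]      # (likely signal, likely background)

        rng = np.random.default_rng(0)
        bg = rng.uniform(0, 100e-9, 50)          # 50 uniform ambient detections
        sig = rng.normal(40e-9, 0.5e-9, 2)       # 2 clustered signal detections
        signal, background = censor(np.concatenate([bg, sig]), 100e-9, 2e-9)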

    Collision Avoidance on Unmanned Aerial Vehicles using Deep Neural Networks

    Unmanned Aerial Vehicles (UAVs), although hardly a new technology, have recently gained a prominent role in many industries, being widely used not only among enthusiastic consumers but also in highly demanding professional settings, and will have a massive societal impact over the coming years. However, the operation of UAVs carries serious safety risks, such as collisions with dynamic obstacles (birds, other UAVs, or randomly thrown objects). These collision scenarios are complex to analyze in real time, and are sometimes computationally impossible to solve with existing State of the Art (SoA) algorithms, making the use of UAVs an operational hazard and therefore significantly reducing their commercial applicability in urban environments. In this work, a conceptual framework for both stand-alone and swarm (networked) UAVs is introduced, focusing on the architectural requirements of the collision avoidance subsystem needed to achieve acceptable levels of safety and reliability. First, the SoA principles for collision avoidance against stationary objects are reviewed. Afterward, a novel image processing approach that uses deep learning and optical flow is presented. This approach is capable of detecting potential collisions with dynamic objects and generating escape trajectories. Finally, novel combinations of models and algorithms were tested, providing a new approach to UAV collision avoidance using Deep Neural Networks. The feasibility of the proposed approach was demonstrated through experimental tests using a UAV built from scratch with the developed framework.
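
    As a rough illustration of the optical-flow stage, the sketch below flags image regions whose dense flow deviates strongly from the frame's dominant (ego-motion) flow, a common heuristic for spotting independently moving objects. The Farneback parameters and median-deviation test are assumptions, and far simpler than the dissertation's deep-learning pipeline:

        # Flag candidate dynamic obstacles as regions whose optical flow
        # deviates from the dominant ego-motion flow. Thresholds illustrative.
        import cv2
        import numpy as np

        def dynamic_obstacle_mask(prev_gray, curr_gray, dev_thresh=4.0):
            # Farneback args: pyr_scale, levels, winsize, iterations,
            # poly_n, poly_sigma, flags.
            flow = cv2.calcOpticalFlowFarneback(
                prev_gray, curr_gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
            # Median flow approximates the background/ego-motion component.
            median = np.median(flow.reshape(-1, 2), axis=0)
            deviation = np.linalg.norm(flow - median, axis=2)
            mask = (deviation > dev_thresh).astype(np.uint8) * 255
            # Morphological opening suppresses isolated noisy pixels.
            kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
            return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)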

    Arrayed LiDAR signal analysis for automotive applications

    Light detection and ranging (LiDAR) is one of the enabling technologies for advanced driver assistance and autonomy. Advances in solid-state photon detector arrays offer the potential of high-performance LiDAR systems but require novel signal processing approaches to fully exploit the dramatic increase in data volume an arrayed detector can provide. This thesis presents two approaches applicable to arrayed solid-state LiDAR. First, a novel block-independent sparse depth reconstruction framework is developed, which utilises a random and very sparse illumination scheme to reduce illumination density while improving sampling times, which remain constant for any array size. Compressive sensing (CS) principles are used to reconstruct depth information from small measurement subsets. The smaller problem size of blocks reduces reconstruction complexity, improves compressive depth reconstruction performance and enables fast concurrent processing. A feasibility study of a system proposal for this approach demonstrates that the required logic could be practically implemented within detector size constraints. Second, a novel deep learning architecture called LiDARNet is presented to localise surface returns from LiDAR waveforms with high throughput. This single data-driven processing approach can unify a wide range of scenarios, making use of a training-by-simulation methodology. This augments real datasets with challenging simulated conditions, such as multiple returns and high noise variance, while enabling rapid prototyping of fast data-driven processing approaches for arrayed LiDAR systems. Both approaches are fast and practical processing methodologies for arrayed LiDAR systems. They retrieve depth information with excellent depth resolution over wide operating ranges, and are demonstrated on real and simulated data. LiDARNet is a rapid approach for determining surface locations from LiDAR waveforms for efficient point cloud generation, while block sparse depth reconstruction is an efficient method for producing high-resolution depth maps at high frame rates with reduced power and memory requirements. Engineering and Physical Sciences Research Council (EPSRC).
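
    For context on what LiDARNet's learned surface-return localisation replaces, the classical baseline is a matched filter of the timing histogram against the outgoing pulse, followed by peak picking. The Gaussian pulse model and SNR threshold below are illustrative assumptions:

        # Classical baseline for surface-return localisation: matched-filter a
        # timing histogram with an (assumed Gaussian) pulse, then pick peaks.
        import numpy as np

        def find_returns(histogram, pulse_sigma_bins=2.0, min_snr=5.0):
            histogram = np.asarray(histogram, dtype=float)
            x = np.arange(-4 * pulse_sigma_bins, 4 * pulse_sigma_bins + 1)
            pulse = np.exp(-0.5 * (x / pulse_sigma_bins) ** 2)
            score = np.correlate(histogram - histogram.mean(), pulse, mode="same")
            noise = np.median(np.abs(score)) + 1e-9    # robust noise scale
            peaks = []
            for i in range(1, score.size - 1):
                # Local maxima of the filtered response above the SNR threshold
                # are taken as candidate surface returns (bin indices).
                if score[i] > score[i - 1] and score[i] >= score[i + 1] \
                        and score[i] / noise > min_snr:
                    peaks.append(i)
            return peaks

    Per the abstract, LiDARNet's appeal over a baseline of this kind is handling challenging conditions such as multiple returns and high noise variance within a single data-driven model.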