A Survey on Imitation Learning Techniques for End-to-End Autonomous Vehicles
Funding Agency: Jaguar Land Rover (10.13039/100016335) and the U.K. Engineering and Physical Sciences Research Council (EPSRC) (10.13039/501100000266; Grant Number: EP/N01300X/1), jointly funded Towards Autonomy: Smart and Connected Control (TASCC) Program. Peer reviewed. Postprint
Determining the ages of sub-fossil cetacean remains, found in the Carse of Stirling, Scotland
During the 19th and early 20th centuries, sub-fossil cetacean remains were often discovered in the Firth of Forth, Central Scotland. These bones and skeletons of "Whales" were excavated from a recent, estuarine deposit (named "carse clay") and, within the biological and geological sciences, were not judged to be important. That palaeontological evidence is re-evaluated in this thesis. These cetacean remains have been preserved in an unusual marine environment and form an exceptional fossil assemblage, with almost no geological precedents. Why is it there?
Whatever caused exceptional preservation in the Firth of Forth in the early Holocene (c. 9.5 – 2.5ka cal BP) can be best identified with chronological data. The ages of six sets of cetacean remains are determined in this thesis, by radiocarbon dating and stratigraphic inference. To reconstruct where a bone or skeleton had been found in the "carse" and then to identify any surviving elements in modern museum collections, archaic textual sources had to be thoroughly investigated. Radiocarbon dates from marine organisms require correction for "reservoir effects" and those applicable to mysticete cetaceans require careful consideration.
The absolute dating evidence shows that no two "Whales" are the same age and that each died, and was then preserved, at some point over the period 9.5–7.0 ka cal BP. Therefore, a "disaster" (e.g. a tsunami) or a mass mortality is unlikely to have caused these remains to accumulate. A combination of physical processes and stable environmental conditions is more likely responsible, and might still permit exceptional preservation in the modern Firth of Forth. Actualistic experiment (observing if, and how, a cetacean carcass is preserved or dispersed on a modern tidal foreshore) would allow further insights into this cryptic palaeontological assemblage.
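The "reservoir effect" correction mentioned above can be illustrated with a minimal sketch. This is not the thesis's calibration procedure: real marine dates are calibrated against a marine calibration curve (e.g. Marine20), and the global reservoir age and the local offset used below are purely illustrative values.

```python
# A minimal sketch of a first-order marine reservoir correction.
# Real practice calibrates against a marine calibration curve; the
# global reservoir age and the example delta_R below are illustrative.

GLOBAL_RESERVOIR_AGE = 400  # yr, approximate global-average marine offset

def reservoir_corrected_age(measured_age_bp, delta_r):
    """Subtract the marine reservoir offset from a conventional 14C age.

    measured_age_bp: conventional radiocarbon age of a marine sample (14C yr BP)
    delta_r:         local deviation from the global reservoir age (yr)
    """
    return measured_age_bp - (GLOBAL_RESERVOIR_AGE + delta_r)

# Example: a bone measured at 8600 14C yr BP with a local delta_R of -50 yr
print(reservoir_corrected_age(8600, -50))  # 8250
```

For mysticete whales the appropriate delta_R is itself uncertain, which is why the thesis flags it as requiring careful consideration.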
Natural and Technological Hazards in Urban Areas
Natural hazard events and technological accidents are separate causes of environmental impacts. Natural hazards are physical phenomena that have been active throughout geological time, whereas technological hazards result from actions or facilities created by humans. In our time, combined natural and man-made hazards have emerged. Overpopulation and urban development in areas prone to natural hazards increase the impact of natural disasters worldwide. Additionally, urban areas are frequently characterized by intense industrial activity and rapid, poorly planned growth that threatens the environment and degrades the quality of life. Therefore, proper urban planning is crucial to minimize fatalities and reduce the environmental and economic impacts that accompany both natural and technological hazardous events.
Relatively Absolute: Relative and Absolute Chronologies in the Neolithic of Southeast Europe
A collection of papers on the absolute and relative chronologies of the Neolithic period in Southeast Europe. Geographically, it covers the area from Greece to Croatia; chronologically, the period between 7000 and 4500 BC. The volume presents the latest approaches to and results of radiocarbon analyses, together with statistical and typological models that improve the precision of the results.
HaMuCo: Hand Pose Estimation via Multiview Collaborative Self-Supervised Learning
Recent advancements in 3D hand pose estimation have shown promising results, but their effectiveness has primarily relied on the availability of large-scale annotated datasets, the creation of which is a laborious and costly process. To alleviate this label-hungry limitation, we propose a self-supervised learning framework, HaMuCo, that learns a single-view hand pose estimator from multi-view pseudo 2D labels. However, one of the main challenges of self-supervised learning is the presence of noisy labels and the "groupthink" effect from multiple views. To overcome these issues, we introduce a cross-view interaction network that distills the single-view estimator by utilizing the cross-view correlated features and enforcing multi-view consistency to achieve collaborative learning. Both the single-view estimator and the cross-view interaction network are trained jointly in an end-to-end manner. Extensive experiments show that our method can achieve state-of-the-art performance on multi-view self-supervised hand pose estimation. Furthermore, the proposed cross-view interaction network can also be applied to hand pose estimation from multi-view input and outperforms previous methods under the same settings.
Comment: Accepted to ICCV 2023. Won first place in the HANDS22 Challenge Task 2. Project page: https://zxz267.github.io/HaMuC
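The multi-view consistency idea behind such self-supervised training can be sketched generically. The snippet below is not HaMuCo's actual loss; it is a minimal NumPy illustration of penalizing disagreement between per-view 3D predictions that have already been mapped into a shared frame.

```python
import numpy as np

def multiview_consistency_loss(poses):
    """Mean distance of each view's prediction from the cross-view consensus.

    poses: array of shape (V, J, 3) -- per-view predictions of J 3D joints,
    assumed already transformed into a shared world frame.
    """
    consensus = poses.mean(axis=0, keepdims=True)             # (1, J, 3)
    return float(np.linalg.norm(poses - consensus, axis=-1).mean())

# Views that agree perfectly incur zero loss
print(multiview_consistency_loss(np.zeros((4, 21, 3))))  # 0.0
```

Minimizing a penalty of this form pushes the single-view estimator toward predictions that all views can agree on, which is the collaborative-learning intuition described in the abstract.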
BDS GNSS for Earth Observation
For millennia, human communities have wondered about the possibility of observing phenomena in their surroundings, in particular those affecting the Earth on which they live. This activity can be conceptually defined as Earth observation (EO): the collection of information about the biological, chemical, and physical systems of planet Earth. It can be undertaken through sensors in direct contact with the ground, through airborne platforms (such as weather balloons and stations), or through remote-sensing technologies. However, EO has only become significant in the last 50 years, since it became possible to place artificial satellites in Earth orbit. Referring strictly to civil applications, satellites of this type were initially designed to provide satellite images; later, their purpose expanded to include the study of information on land characteristics, growing vegetation, crops, and environmental pollution. The data collected are used for several purposes, including the identification of natural resources and the production of accurate cartography. Satellite observations can cover the land, the atmosphere, and the oceans.
Remote-sensing satellites may be equipped with passive instrumentation, such as infrared or visible-band imaging cameras, or with active instrumentation such as radar. Generally, such satellites are non-geostationary: they move at a certain speed along orbits inclined with respect to the Earth's equatorial plane, often in polar orbit, at low or medium altitude (Low Earth Orbit, LEO, and Medium Earth Orbit, MEO), thus covering the entire Earth's surface in a certain scan time (properly called 'temporal resolution'), i.e., in a certain number of orbits around the Earth.
The first remote-sensing satellites belonged to the American NASA/USGS Landsat Program; subsequently, the European ENVISAT (ENVironmental SATellite), ERS (European Remote-Sensing satellite), and RapidEye, the French SPOT (Satellite Pour l'Observation de la Terre), and the Canadian RADARSAT satellites were launched. The IKONOS, QuickBird, and GeoEye-1 satellites were dedicated to cartography. The WorldView-1 and WorldView-2 satellites and the COSMO-SkyMed system are more recent. The latest generation consists of low-payload platforms called Small Satellites, e.g., the Chinese BuFeng-1 and Fengyun-3 series.
Global Navigation Satellite Systems (GNSSs) have also captured the attention of researchers worldwide for a multitude of Earth monitoring and exploration applications. Over the past 40 years, GNSSs have become an essential part of many human activities. As is widely noted, there are currently four fully operational GNSSs: two were developed for military purposes (the American NAVSTAR GPS and the Russian GLONASS), whilst the other two were developed for civil purposes (the Chinese BeiDou navigation satellite system, BDS, and the European Galileo). In addition, regional systems, such as the South Korean Positioning System (KPS), the Japanese Quasi-Zenith Satellite System (QZSS), and the Indian Regional Navigation Satellite System (IRNSS/NavIC), will become available in the next few years, offering enormous potential for scientific applications and geomatics professionals.
In addition to their traditional role of providing global positioning, navigation, and timing (PNT) information, GNSS navigation signals are now being used in new and innovative ways. Across the globe, new fields of scientific study are opening up to examine how these signals can provide information about the characteristics of the atmosphere, and even of the surfaces from which they are reflected before being collected by a receiver.
EO researchers monitor global environmental systems using in situ and remote monitoring tools. Their findings provide tools to support decision makers in various areas of interest, from security to the natural environment. GNSS signals are considered an important new source of information because they are a free, real-time, and globally available resource for the EO community.
Sonic heritage: listening to the past
History is so often told through objects, images and photographs, but the potential of sounds to reveal place and space is often neglected. Our research project 'Sonic Palimpsest' explores the potential of sound to evoke impressions and new understandings of the past, to embrace the sonic as a tool to understand what was, in a way that can complement and add to our predominant visual understandings. Our work includes the expansion of the Oral History archives held at Chatham Dockyard to include women's voices and experiences, and the creation of sonic works to engage the public with their heritage. Our research highlights the social and cultural value of oral history and field recordings in the transmission of knowledge to both researchers and the public. Together these recordings document how buildings and spaces within the dockyard were used and experienced by those who worked there. We can begin to understand the social and cultural roles of these buildings within the community, both past and present.
Visual Guidance for Unmanned Aerial Vehicles with Deep Learning
Unmanned Aerial Vehicles (UAVs) have been widely applied in the military and civilian domains. In recent years, the operation mode of UAVs has been evolving from teleoperation to autonomous flight. In order to fulfill the goal of autonomous flight, a reliable guidance system is essential. Since the combination of the Global Positioning System (GPS) and an Inertial Navigation System (INS) cannot sustain autonomous flight in situations where GPS is degraded or unavailable, using computer vision as a primary method for UAV guidance has been widely explored. Moreover, GPS does not provide the robot with any information on the presence of obstacles.
Stereo cameras have a complex architecture and need a minimum baseline to generate a disparity map. By contrast, monocular cameras are simple and require fewer hardware resources. Benefiting from state-of-the-art Deep Learning (DL) techniques, especially Convolutional Neural Networks (CNNs), a monocular camera is sufficient to extrapolate mid-level visual representations such as depth maps and optical flow (OF) maps from the environment. Therefore, the objective of this thesis is to develop a real-time visual guidance method for UAVs in cluttered environments using a monocular camera and DL.
The three major tasks performed in this thesis are investigating the development of DL techniques and monocular depth estimation (MDE), developing real-time CNNs for MDE, and developing visual guidance methods on the basis of the developed MDE system. A comprehensive survey is conducted, which covers Structure from Motion (SfM)-based methods, traditional handcrafted feature-based methods, and state-of-the-art DL-based methods. More importantly, it also investigates the application of MDE in robotics. Based on the survey, two CNNs for MDE are developed. In addition to promising accuracy performance, these two CNNs run at high frame rates (126 fps and 90 fps, respectively) on a single modest-power Graphics Processing Unit (GPU).
As regards the third task, the visual guidance for UAVs is first developed on top of the designed MDE networks. To improve the robustness of UAV guidance, OF maps are integrated into the developed visual guidance method. A cross-attention module is applied to fuse the features learned from the depth maps and OF maps. The fused features are then passed through a deep reinforcement learning (DRL) network to generate the policy for guiding the flight of the UAV. Additionally, a simulation framework is developed that integrates AirSim, Unreal Engine, and PyTorch. The effectiveness of the developed visual guidance method is validated through extensive experiments in the simulation framework.
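As an illustration of the fusion step described above (and not the thesis's actual network), a single-head cross-attention in which depth-feature tokens query optical-flow tokens can be sketched in a few lines of NumPy; the shapes and names are assumptions for the sketch.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(depth_feats, flow_feats):
    """Scaled dot-product cross-attention between two modalities.

    depth_feats: (N, d) tokens from the depth map (used as queries)
    flow_feats:  (M, d) tokens from the optical-flow map (keys and values)
    Returns (N, d) depth tokens enriched with flow information.
    """
    d = depth_feats.shape[-1]
    scores = depth_feats @ flow_feats.T / np.sqrt(d)   # (N, M) similarities
    weights = softmax(scores, axis=-1)                 # rows sum to 1
    return weights @ flow_feats                        # attention-weighted mix

fused = cross_attention(np.random.rand(4, 8), np.random.rand(6, 8))
print(fused.shape)  # (4, 8)
```

In a real module the queries, keys, and values would pass through learned projections and the fused tokens would feed the downstream DRL policy; the sketch only shows the attention arithmetic itself.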
Neural Reflectance Decomposition
Creating relightable objects from images or image collections is a fundamental challenge in computer vision and graphics. This problem is also known as inverse rendering. One of the main challenges in this task is the high ambiguity. The creation of images from 3D objects is well defined as rendering. However, multiple properties such as shape, illumination, and surface reflectiveness influence each other. Additionally, an integration of these influences is performed to form the final image. Reversing these integrated dependencies is highly ill-posed and ambiguous. However, solving the task is essential, as the automated creation of relightable objects has various applications in online shopping, augmented reality (AR), virtual reality (VR), games, or movies.
In this thesis, we propose two approaches to solve this task. First, a network architecture is discussed, which generalizes the decomposition of a two-shot capture of an object from large training datasets. The degree of novel view synthesis is limited as only a singular perspective is used in the decomposition. Therefore, the second set of approaches is proposed, which decomposes a set of 360-degree images. These multi-view images are optimized per object, and the result can be directly used in standard rendering software or games. We achieve this by extending recent research on Neural Fields, which can store information in a 3D neural volume. Leveraging volume rendering techniques, we can optimize a reflectance field from in-the-wild image collections without any ground truth (GT) supervision.
Our proposed methods achieve state-of-the-art decomposition quality and enable novel capture setups where objects can be under varying illumination or in different locations, which is typical for online image collections.
Advances in Modelling of Rainfall Fields
Rainfall is the main input for all hydrological models, such as rainfall–runoff models and models for forecasting landslides triggered by precipitation, and understanding it is clearly essential for effective water resource management as well. Improving the modeling of rainfall fields is a key requirement both for efficiently realizing early warning systems and for analyzing future scenarios of the occurrence and magnitude of all induced phenomena. The aim of this Special Issue was hence to provide a collection of innovative contributions on rainfall modeling, focusing on hydrological scales in a context of climate change. We believe that the latest research outcomes presented in this Special Issue can offer novel insights into the hydrological cycle and all the phenomena that are a direct consequence of rainfall. Moreover, the papers collected here can constitute a valid base of knowledge for improving specific key aspects of rainfall modeling, mainly concerning climate change and how it modifies properties such as the magnitude, frequency, duration, and spatial extension of different types of rainfall fields. The goal should also be to provide practitioners with useful tools for quantifying important design metrics in transient hydrological contexts (quantiles of assigned frequency, hazard functions, intensity–duration–frequency curves, etc.).
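As an example of the last design metric mentioned, intensity–duration–frequency (IDF) curves are often summarized by an empirical relation of the form i = a / (t + b)^n, with a, b, and n fitted per site and return period. The parameter values below are placeholders for illustration, not fitted values from any study in this Special Issue.

```python
def idf_intensity(duration_min, a=800.0, b=10.0, n=0.75):
    """Illustrative IDF relation: mean intensity (mm/h) for a storm of
    the given duration (minutes). a, b, n are site- and return-period-
    specific fitting parameters; the defaults here are placeholders."""
    return a / (duration_min + b) ** n

# Intensity decreases as storm duration grows, as IDF curves require
for t in (10, 30, 60):
    print(t, "min:", round(idf_intensity(t), 1), "mm/h")
```

Design applications then read the curve at the duration matching the catchment's time of concentration, one curve per return period.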