SPATIAL ANALYSES AND REMOTE SENSING FOR LAND COVER CHANGE DYNAMICS: ASSESSING IN A SPATIAL PLANNING
ABSTRACT (EN)
Spatial planning is a crucial discipline for identifying and implementing sustainable development strategies that take into account environmental impacts on the soil. In recent years, the rapid development of technologies such as remote sensing and GIS software has significantly increased our understanding of environmental components, highlighting their peculiarities and criticalities. Geographically referenced information on environmental and socio-economic components represents a fundamental database for identifying and monitoring vulnerable areas, and for distinguishing different levels of vulnerability.
This is even more relevant considering the increasingly significant impact of land transformation processes, consisting of rapid and frequent changes in land use patterns. In order to achieve some of the Sustainable Development Goals of the 2030 Agenda, the role of environmental planning is crucial in addressing spatial problems, such as agricultural land abandonment and land take, which cause negative impacts on
ecosystems. Remote sensing, and Earth Observation techniques in general, play a key role in achieving SDGs 11.3 and 15.3 of the 2030 Agenda. Through a series of applications and investigations in different areas of Basilicata, it has been demonstrated that the extensive use of remote sensing and spatial analysis in a GIS environment provides a substantial contribution to the SDGs, enabling an informed decision-making process and the monitoring of expected results, ensuring data reliability, and directly contributing to the calculation of SDG objectives and indicators by facilitating local administrations' work across different development and sustainability sectors. This thesis analyses the dynamics of land transformation in terms of land take and soil erosion in sample areas of the Basilicata Region, which represents an interesting case study for land use and land cover change (LULCC).
The socio-demographic evolutionary trends and the study of marginality and territorial fragility are fundamental aspects in the context of territorial planning, since they are important drivers of the LULCC and territorial transformation processes. In fact, in Basilicata, settlement dynamics over the years have occurred in an uncontrolled and unregulated manner, leading to a constant consumption of land not accompanied by
adequate demographic and economic growth. To better understand the evolution and dynamics of the LULCCs and provide useful tools for formulating territorial planning policies and strategies aimed at a sustainable use of the territory, the socio-economic aspects of the Region were investigated. A first phase involved the creation of a database and the study and identification of essential services in the area as a
fundamental parameter against which to evaluate the quality of life in a specific area. The supply of essential services can be understood as an assessment of the lack of minimum requirements with reference to the urban functions exercised by each territorial unit. From a territorial point of view, the level of peripherality of the territories with respect to the network of urban centres profoundly influences the quality of life of
citizens and the level of social inclusion. In these territories, the presence of essential services can act as an attractor capable of generating discrete catchment areas. The purpose of this first part of the work was above all to create a dataset useful for calculating various socio-economic indicators, in order to frame the demographic evolution and the evolution of the stock of public and private services. The first methodological
approach was to reconstruct the offer of essential services through the use of open data in a GIS environment and subsequently estimate the peripherality of each municipality by estimating the accessibility to essential services. The study envisaged the use of territorial analysis techniques aimed at describing the distribution of essential services on the regional territory. It is essential to understand the role of demographic dynamics
as a driver of urban land use change such as, for example, the increase in demand for artificial surfaces that occurs locally. Social and economic analyses are important in the spatial planning process. Comparison of socio-economic analyses with land use and land cover change can highlight the need to modify existing policies or implement new ones. A particular land use can degrade and thereby destroy other land resources.
If the economic analysis shows that a use is beneficial from the point of view of the land user, it is likely to continue, regardless of whether the process is environmentally friendly. It is important to investigate which drivers have been, and will be, the most decisive in these dynamics, which intrinsically contribute to land take, agricultural abandonment, and the consequent processes of land degradation, and to define policies or thresholds to mitigate and monitor the effects of these processes.
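The accessibility-based peripherality estimate described above can be sketched as a nearest-service distance per municipality. This is a minimal illustration with invented coordinates and service locations, not the thesis's actual dataset or indicator formula:

```python
# Sketch: estimating municipal peripherality as distance to the nearest
# essential service. All coordinates below are hypothetical illustrative
# data on an arbitrary local grid (km), not data from the thesis.
from math import dist

# (municipality, x, y) centroids
municipalities = [("A", 0.0, 0.0), ("B", 12.0, 5.0), ("C", 40.0, 30.0)]
# locations of essential services (schools, clinics, ...)
services = [(1.0, 1.0), (10.0, 6.0), (15.0, 4.0)]

def peripherality(mx, my):
    """Distance (km) from a municipality centroid to its nearest service."""
    return min(dist((mx, my), s) for s in services)

scores = {name: round(peripherality(x, y), 2) for name, x, y in municipalities}
# Municipalities far from every service get high scores -> more peripheral
```

A real implementation would use road-network travel times rather than straight-line distance, but the ranking logic is the same.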
Subsequently, the issues of land take and abandonment of agricultural land were analysed by applying remote sensing, GIS, and territorial analysis models and techniques for identifying and monitoring abandoned agricultural areas and sealed areas. Classic remote sensing methods were also complemented with geostatistical analyses, which provided further information on the investigated phenomenon. The aim was to create a rapid methodology for monitoring and analysing soil-consumption trends and for identifying degraded areas. The first methodology proposed allowed the automatic and rapid production of detailed
LULCC and Land Take maps with an overall accuracy of more than 90%, reducing costs and processing times.
The identification of abandoned agricultural areas undergoing degradation is among the most complicated LULCC and land degradation processes to identify and monitor, as it is driven by a multiplicity of anthropic and natural factors. The model used to estimate soil erosion as a degradation phenomenon is the Revised Universal Soil Loss Equation (RUSLE). To identify potentially degraded areas, two factors of the RUSLE were correlated: factor C, which describes the vegetation cover of the soil, and factor A, which represents the amount of potential soil erosion. Through statistical correlation analysis of the RUSLE factors, based on deviations from the average RUSLE values, and by mapping vegetation degradation on arable land through correlation with the vegetation factor C, areas susceptible to soil degradation were identified and mapped. The results allowed the creation of a database and a map of the degraded areas requiring attention.
FLARE: Fast Learning of Animatable and Relightable Mesh Avatars
Our goal is to efficiently learn personalized animatable 3D head avatars from
videos that are geometrically accurate, realistic, relightable, and compatible
with current rendering systems. While 3D meshes enable efficient processing and
are highly portable, they lack realism in terms of shape and appearance. Neural
representations, on the other hand, are realistic but lack compatibility and
are slow to train and render. Our key insight is that it is possible to
efficiently learn high-fidelity 3D mesh representations via differentiable
rendering by exploiting highly-optimized methods from traditional computer
graphics and approximating some of the components with neural networks. To that
end, we introduce FLARE, a technique that enables the creation of animatable
and relightable mesh avatars from a single monocular video. First, we learn a
canonical geometry using a mesh representation, enabling efficient
differentiable rasterization and straightforward animation via learned
blendshapes and linear blend skinning weights. Second, we follow
physically-based rendering and factor observed colors into intrinsic albedo,
roughness, and a neural representation of the illumination, allowing the
learned avatars to be relit in novel scenes. Since our input videos are
captured on a single device with a narrow field of view, modeling the
surrounding environment light is non-trivial. Based on the split-sum
approximation for modeling specular reflections, we address this by
approximating the pre-filtered environment map with a multi-layer perceptron
(MLP) modulated by the surface roughness, eliminating the need to explicitly
model the light. We demonstrate that our mesh-based avatar formulation,
combined with learned deformation, material, and lighting MLPs, produces
avatars with high-quality geometry and appearance, while also being efficient
to train and render compared to existing approaches.
Comment: 15 pages. Accepted: ACM Transactions on Graphics (Proceedings of
SIGGRAPH Asia), 202
Interactive visualizations of unstructured oceanographic data
The newly founded company Oceanbox is creating a novel oceanographic forecasting system to provide oceanography as a service. These services use mathematical models that generate large hydrodynamic data sets as unstructured triangular grids with high-resolution model areas. Oceanbox makes the model results accessible in a web application. New visualizations are needed to accommodate land-masking and large data volumes.
In this thesis, we propose using a k-d tree to spatially partition unstructured triangular grids to provide the look-up times needed for interactive visualizations. A k-d tree, called FsKDTree, is implemented in F#. This thesis also describes the implementation of dynamic tiling map layers to visualize current barbs, scalar fields, and particle streams. The current barb layer queries data from the data server with the help of the k-d tree and displays it in the browser. Scalar fields and particle streams are implemented using WebGL, which enables the rendering of triangular grids. Stream particle visualization effects are implemented as velocity advection computed on the GPU with textures.
The new visualizations are used in Oceanbox's production systems, and spatial indexing has been integrated into Oceanbox's archive retrieval system. FsKDTree improves tree creation times by up to 4x and search times by up to 13x over the equivalent .NET C# implementation. Finally, current barbs, scalar fields, and particle stream visualizations run at 60 FPS even for the largest model areas provided by the service.
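The spatial look-up idea behind the k-d tree layer can be sketched as follows: points are recursively split on alternating axes, so a nearest-point query can skip most of the grid. This is a minimal illustrative 2-d tree in Python, not the FsKDTree implementation:

```python
# Minimal 2-d tree: build by median split on alternating axes, then
# answer nearest-neighbour queries, pruning subtrees that cannot contain
# a closer point. Sample points are invented for illustration.
from math import dist

def build(points, depth=0):
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"pt": points[mid], "axis": axis,
            "left": build(points[:mid], depth + 1),
            "right": build(points[mid + 1:], depth + 1)}

def nearest(node, q, best=None):
    if node is None:
        return best
    if best is None or dist(q, node["pt"]) < dist(q, best):
        best = node["pt"]
    axis = node["axis"]
    diff = q[axis] - node["pt"][axis]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(near, q, best)
    if abs(diff) < dist(q, best):   # far side may still hold a closer point
        best = nearest(far, q, best)
    return best

tree = build([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
# nearest grid point to a query location such as (9, 2)
```

For a triangular grid, each leaf would hold triangle vertices (or centroids) so the renderer can fetch only the cells near the viewport.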
RenderMe-360: A Large Digital Asset Library and Benchmarks Towards High-fidelity Head Avatars
Synthesizing high-fidelity head avatars is a central problem for computer
vision and graphics. While head avatar synthesis algorithms have advanced
rapidly, the best ones still face great obstacles in real-world scenarios. One
of the vital causes is inadequate datasets -- 1) current public datasets can
only support researchers to explore high-fidelity head avatars in one or two
task directions; 2) these datasets usually contain digital head assets with
limited data volume, and narrow distribution over different attributes. In this
paper, we present RenderMe-360, a comprehensive 4D human head dataset to drive
advances in head avatar research. It contains massive data assets, with 243+
million complete head frames, and over 800k video sequences from 500 different
identities captured by synchronized multi-view cameras at 30 FPS. It is a
large-scale digital library for head avatars with three key attributes: 1) High
Fidelity: all subjects are captured by 60 synchronized, high-resolution 2K
cameras in 360 degrees. 2) High Diversity: The collected subjects vary from
different ages, eras, ethnicities, and cultures, providing abundant materials
with distinctive styles in appearance and geometry. Moreover, each subject is
asked to perform various motions, such as expressions and head rotations, which
further extend the richness of assets. 3) Rich Annotations: we provide
annotations with different granularities: cameras' parameters, matting, scan,
2D/3D facial landmarks, FLAME fitting, and text description.
Based on the dataset, we build a comprehensive benchmark for head avatar
research, with 16 state-of-the-art methods performed on five main tasks: novel
view synthesis, novel expression synthesis, hair rendering, hair editing, and
talking head generation. Our experiments uncover the strengths and weaknesses
of current methods. RenderMe-360 opens the door for future exploration in head
avatars.
Comment: Technical Report; Project Page: 36; Github Link:
https://github.com/RenderMe-360/RenderMe-36
Pixelated Interactions: Exploring Pixel Art for Graphical Primitives on a Pin Array Tactile Display
Two-dimensional pin array displays enable access to tactile graphics that are important for the education of students with visual impairments. Due to their prohibitive cost and limited availability, there is little research within HCI, and the rules for designing graphics on these low-resolution tactile displays are unclear. In this paper, eight tactile readers with visual impairments qualitatively evaluate the implementation of Pixel Art to create tactile graphical primitives on a pin array display. Every pin of the pin array is assumed to be a pixel on a pixel grid. Our findings suggest that Pixel Art tactile graphics on a pin array are clear and comprehensible to tactile readers, positively confirming their use for designing basic tactile shapes and line segments. The guidelines provide a consistent framework to create tactile media, which implies that they can be used to downsize basic shapes for refreshable pin-array displays.
Towards Object-Centric Scene Understanding
Visual perception for autonomous agents continues to attract community attention due to the disruptive technologies and the wide applicability of such solutions. Autonomous Driving (AD), a major application in this domain, promises to revolutionize our approach to mobility while bringing critical advantages in limiting accident fatalities.
Fueled by recent advances in Deep Learning (DL), more computer vision tasks are being addressed using a learning paradigm. Deep Neural Networks (DNNs) succeeded consistently in pushing performances to unprecedented levels and demonstrating the ability of such approaches to generalize to an increasing number of difficult problems, such as 3D vision tasks.
In this thesis, we address two main challenges arising from the current approaches. Namely, the computational complexity of multi-task pipelines, and the increasing need for manual annotations. On the one hand, AD systems need to perceive the surrounding environment on different levels of detail and, subsequently, take timely actions. This multitasking further limits the time available for each perception task. On the other hand, the need for universal generalization of such systems to massively diverse situations requires the use of large-scale datasets covering long-tailed cases. Such requirement renders the use of traditional supervised approaches, despite the data readily available in the AD domain, unsustainable in terms of annotation costs, especially for 3D tasks.
Driven by the nature of the AD environment, whose complexity (unlike indoor scenes) is dominated by the presence of other scene elements (mainly cars and pedestrians), we focus on the above-mentioned challenges in object-centric tasks. We then situate our contributions appropriately in a fast-paced literature, supporting our claims with extensive experimental analysis that leverages up-to-date state-of-the-art results and community-adopted benchmarks.
Durability of Wireless Charging Systems Embedded Into Concrete Pavements for Electric Vehicles
Point clouds are widely used in applications such as 3D modeling, geospatial analysis, robotics, and more. One key advantage of 3D point cloud data is that, unlike formats such as textures, it is independent of viewing angle, surface type, and parameterization. Since each point in the point cloud is independent of the others, point clouds are a suitable data source for tasks like object recognition, scene segmentation, and reconstruction. Point clouds are complex and verbose due to the numerous attributes they contain, many of which are not always necessary for rendering, making retrieval and parsing a heavy task.
As sensors become more precise and popular, effectively streaming, processing, and rendering the data is also becoming more challenging. In a hierarchical continuous LOD system, the previously fetched and rendered data for a region may become unavailable when revisiting it. To address this, we use a non-persistent cache based on a hash map, which stores the parsed point attributes. This still has limitations: the dataset must be refetched and reprocessed if the tab or browser is closed and reopened, which can be addressed by persistent caching. On the web, persistent caching typically involves storing data in server memory or in an intermediate caching server like Redis. This is not suitable for point cloud data, where large parsed and processed point data must be stored, leaving point cloud visualization to rely only on non-persistent caching.
The thesis aims to contribute toward better performance and suitability of point cloud rendering on the web by reducing the number of read requests to the remote file. We achieve this with a client-side LRU cache and Private File Open Space, combining persistent and non-persistent caching of data. We use a cloud-optimized data format, which is better suited for the web and for streaming hierarchical data structures. Our focus is to improve rendering performance using WebGPU by reducing access time and minimizing the amount of data loaded onto the GPU.
Preliminary results indicate that our approach significantly improves rendering performance and reduces network requests when compared to traditional caching methods using WebGPU.
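The client-side LRU tile cache described above can be sketched with an ordered map: recently used tiles stay resident, and the least recently used tile is evicted when capacity is exceeded. The tile ids and payloads below are invented for illustration, not the thesis's actual storage layer:

```python
# Sketch of an LRU cache for point-cloud tiles using OrderedDict.
# get/put both mark a tile as most recently used; put evicts the least
# recently used tile once capacity is exceeded.
from collections import OrderedDict

class TileCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self._tiles = OrderedDict()

    def get(self, tile_id):
        if tile_id not in self._tiles:
            return None                      # cache miss -> fetch remotely
        self._tiles.move_to_end(tile_id)     # mark as most recently used
        return self._tiles[tile_id]

    def put(self, tile_id, points):
        self._tiles[tile_id] = points
        self._tiles.move_to_end(tile_id)
        if len(self._tiles) > self.capacity:
            self._tiles.popitem(last=False)  # evict least recently used

cache = TileCache(capacity=2)
cache.put("tile/0/0", b"points-a")
cache.put("tile/0/1", b"points-b")
cache.get("tile/0/0")                  # touch: tile/0/0 becomes most recent
cache.put("tile/1/0", b"points-c")     # evicts tile/0/1
```

A browser implementation would back misses with persistent storage (the Private File Open Space) before falling through to a network request.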
ImMesh: An Immediate LiDAR Localization and Meshing Framework
In this paper, we propose a novel LiDAR(-inertial) odometry and mapping
framework to achieve the goal of simultaneous localization and meshing in
real-time. This proposed framework termed ImMesh comprises four tightly-coupled
modules: receiver, localization, meshing, and broadcaster. The localization
module utilizes the preprocessed sensor data from the receiver, estimates the
sensor pose online by registering LiDAR scans to maps, and dynamically grows
the map. Then, our meshing module takes the registered LiDAR scan for
incrementally reconstructing the triangle mesh on the fly. Finally, the
real-time odometry, map, and mesh are published via our broadcaster. The key
contribution of this work is the meshing module, which represents a scene by an
efficient hierarchical voxel structure, performs fast finding of voxels
observed by new scans, and reconstructs triangle facets in each voxel in an
incremental manner. This voxel-wise meshing operation is delicately designed
for the purpose of efficiency; it first performs a dimension reduction by
projecting 3D points to a 2D local plane contained in the voxel, and then
executes the meshing operation with pull, commit and push steps for incremental
reconstruction of triangle facets. To the best of our knowledge, this is the
first work in literature that can reconstruct online the triangle mesh of
large-scale scenes, just relying on a standard CPU without GPU acceleration. To
share our findings and make contributions to the community, we make our code
publicly available on our GitHub: https://github.com/hku-mars/ImMesh
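The per-voxel dimension reduction described above can be sketched as projecting the voxel's 3D points onto a local plane so that triangulation can run in 2D. The plane here is spanned by two in-voxel edge vectors orthonormalized with Gram-Schmidt; the coordinates are illustrative and this is not ImMesh's actual code:

```python
# Sketch: map 3-D points inside a voxel to (u, v) coordinates on the plane
# through the first three points, via a Gram-Schmidt basis. After this,
# standard 2-D triangulation can build the facets.
from math import sqrt

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def unit(a):
    n = sqrt(dot(a, a))
    return tuple(x / n for x in a)

def project_to_plane(points):
    p0 = points[0]
    e1 = unit(sub(points[1], p0))                       # first in-plane axis
    d = sub(points[2], p0)
    e2 = unit(sub(d, tuple(dot(d, e1) * x for x in e1)))  # orthogonal axis
    return [(dot(sub(p, p0), e1), dot(sub(p, p0), e2)) for p in points]

# four points of a (here exactly planar) voxel patch
voxel_points = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]
uv = project_to_plane(voxel_points)
# → [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
```

In practice the plane would be fit to all points in the voxel (e.g. by least squares), since LiDAR returns are only approximately planar.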
Efficient Autonomous Path Planning for Ultrasonic Non-Destructive Testing: A Graph Theory and K-Dimensional Tree Optimisation Approach
Data Availability Statement: Data are contained within the article.
Acknowledgements: This work was enabled through the National Structural Integrity Research Centre (NSIRC), a postgraduate engineering facility for industry-led research into structural integrity established and managed by TWI Ltd. through a network of both national and international universities.
Within the domain of robotic non-destructive testing (NDT) of complex structures, the existing methods typically utilise an offline robot-path-planning strategy. Commonly, for robotic inspection, this will involve full coverage of the component. An NDT probe oriented normal to the component surface is deployed in a raster scan pattern. Here, digital models are used, with the user decomposing complex structures into manageable scan path segments, while carefully avoiding obstacles and other geometric features. This is a manual process that requires a highly skilled robotic operator, often taking several hours or days to refine. This introduces several challenges to NDT, including the need for an accurate model of the component (which, for NDT inspection, is often not available), the requirement of skilled personnel, and careful consideration of both the NDT inspection method and the geometric structure of the component. This paper addresses the specific challenge of scanning complex surfaces by using an automated approach. An algorithm is presented which is able to learn an efficient scan path by taking into account the dimensional constraints of the footprint of an ultrasonic phased-array probe (a common inspection method for NDT) and the surface geometry. The proposed solution harnesses a digital model of the component, which is decomposed into a series of connected nodes representing the NDT inspection points within the NDT process; this step utilises graph theory. The connections to other nodes are determined using nearest neighbour search with k-d tree optimisation to improve the efficiency of node traversal. This enables a trade-off between simplicity and efficiency.
Next, movement restrictions are introduced to allow the robot to navigate the surface of a component in three-dimensional space, explicitly defining obstacles as prohibited areas. Our solution entails a two-stage planning process: a modified three-dimensional flood fill is combined with Dijkstra's shortest path algorithm, and the process is repeated iteratively until the entire surface is covered. The efficiency of the proposed approach is evaluated through simulations. The technique presented in this paper provides an improved and automated method for robotic NDT inspection, reducing the requirement for skilled robotic path-planning personnel while ensuring full component coverage.
This project was part of an initiative known as AEMRI (Advanced Engineering Materials Research Institute), which is funded by the Welsh European Funding Office (WEFO) using European Regional Development Funds (ERDF), WEFO contract no. 80854.
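The second planning stage described above, shortest paths over a graph of inspection nodes with obstacle nodes excluded, can be sketched with a standard Dijkstra implementation. The graph, edge weights, and obstacle set below are invented for illustration; this is not the paper's algorithm, only the textbook building block it combines with flood fill:

```python
# Dijkstra's shortest path over a weighted adjacency dict, skipping nodes
# marked as obstacles (prohibited areas on the component surface).
import heapq

def dijkstra(adj, start, goal, blocked=frozenset()):
    """Return the minimum path cost from start to goal, or inf if unreachable."""
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue                      # stale queue entry
        for nbr, w in adj.get(node, []):
            if nbr in blocked:
                continue                  # obstacle: prohibited area
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return float("inf")

# four surface nodes; marking "B" as an obstacle forces the detour via "C"
adj = {"A": [("B", 1.0), ("C", 2.0)],
       "B": [("D", 1.0)],
       "C": [("D", 2.0)]}
cost = dijkstra(adj, "A", "D", blocked={"B"})   # → 4.0
```

In the paper's setting the nodes would come from decomposing the digital model, with neighbours found via the k-d tree nearest-neighbour step.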
MobileNeRF: Exploiting the Polygon Rasterization Pipeline for Efficient Neural Field Rendering on Mobile Architectures
Neural Radiance Fields (NeRFs) have demonstrated amazing ability to
synthesize images of 3D scenes from novel views. However, they rely upon
specialized volumetric rendering algorithms based on ray marching that are
mismatched to the capabilities of widely deployed graphics hardware. This paper
introduces a new NeRF representation based on textured polygons that can
synthesize novel images efficiently with standard rendering pipelines. The NeRF
is represented as a set of polygons with textures representing binary opacities
and feature vectors. Traditional rendering of the polygons with a z-buffer
yields an image with features at every pixel, which are interpreted by a small,
view-dependent MLP running in a fragment shader to produce a final pixel color.
This approach enables NeRFs to be rendered with the traditional polygon
rasterization pipeline, which provides massive pixel-level parallelism,
achieving interactive frame rates on a wide range of compute platforms,
including mobile phones.
Comment: CVPR 2023. Project page: https://mobile-nerf.github.io, code:
https://github.com/google-research/jax3d/tree/main/jax3d/projects/mobilener