
    Flood dynamics derived from video remote sensing

    Get PDF
    Flooding is by far the most pervasive natural hazard, and the human impacts of floods are expected to worsen in the coming decades due to climate change. Hydraulic models are a key tool for understanding flood dynamics and play a pivotal role in unravelling the processes that occur during a flood event, including inundation flow patterns and velocities. In the realm of river basin dynamics, video remote sensing is emerging as a transformative tool that can offer insights into flow dynamics and thus, together with other remotely sensed data, has the potential to be deployed to estimate discharge. Moreover, the integration of video remote sensing data with hydraulic models offers a pivotal opportunity to enhance the predictive capacity of these models. Hydraulic models are traditionally built with accurate terrain, flow and bathymetric data and are often calibrated and validated against observed data to obtain meaningful and actionable predictions. Data for accurately calibrating and validating hydraulic models are not always available, leaving the predictive capabilities of some models deployed in flood risk management in question. Recent advances in remote sensing have heralded the availability of vast, high-resolution video datasets. The parallel evolution of computing capabilities, coupled with advancements in artificial intelligence, is enabling the processing of data at unprecedented scales and complexities, allowing us to glean meaningful insights from datasets that can be integrated with hydraulic models. The aims of the research presented in this thesis were twofold. The first aim was to evaluate and explore the potential applications of video from air- and space-borne platforms to comprehensively calibrate and validate two-dimensional hydraulic models. The second aim was to estimate river discharge using satellite video combined with high-resolution topographic data.
In the first of three empirical chapters, non-intrusive image velocimetry techniques were employed to estimate river surface velocities in a rural catchment. For the first time, a 2D hydraulic model was fully calibrated and validated using velocities derived from Unpiloted Aerial Vehicle (UAV) image velocimetry approaches. This highlighted the value of these data in mitigating the limitations associated with traditional data sources used in parameterizing two-dimensional hydraulic models. This finding inspired the subsequent chapter, where river surface velocities, derived using Large Scale Particle Image Velocimetry (LSPIV), and flood extents, derived using deep neural network-based segmentation, were extracted from satellite video and used to rigorously assess the skill of a two-dimensional hydraulic model. Harnessing the ability of deep neural networks to learn complex features and deliver accurate and contextually informed flood segmentation, the potential value of satellite video for validating two-dimensional hydraulic model simulations is demonstrated. In the final empirical chapter, the convergence of satellite video imagery and high-resolution topographical data bridges the gap between visual observations and quantitative measurements by enabling the direct extraction of velocities from video imagery, which is used to estimate river discharge. Overall, this thesis demonstrates the significant potential of emerging video-based remote sensing datasets and offers approaches for integrating these data into hydraulic modelling and discharge estimation practice. The incorporation of LSPIV techniques into flood modelling workflows signifies a methodological progression, especially in areas lacking robust data collection infrastructure. Satellite video remote sensing heralds a major step forward in our ability to observe river dynamics in real time, with potentially significant implications in the domain of flood modelling science.
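The image velocimetry techniques referenced above (LSPIV and UAV-based variants) share one computational core: cross-correlating an interrogation window between successive frames and reading the tracer displacement off the correlation peak. A minimal sketch of that step, assuming a synthetic tracer pattern and a purely translational shift (an illustration of the principle, not the thesis's implementation):

```python
import numpy as np

def piv_displacement(frame_a, frame_b):
    """Integer-pixel displacement between two interrogation windows,
    estimated via FFT-based circular cross-correlation."""
    fa = frame_a - frame_a.mean()
    fb = frame_b - frame_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(fa).conj() * np.fft.fft2(fb)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # shifts past half the window wrap around; map them to negative lags
    return tuple(int(p) if p <= s // 2 else int(p - s)
                 for p, s in zip(peak, corr.shape))

# synthetic tracer field advected 3 px down and 5 px right between frames
rng = np.random.default_rng(0)
frame1 = rng.random((64, 64))
frame2 = np.roll(frame1, (3, 5), axis=(0, 1))
dy, dx = piv_displacement(frame1, frame2)  # -> (3, 5)
```

Scaling the pixel displacement by the ground resolution and the frame interval gives a surface velocity; combined with a surveyed cross-section, the classical velocity-area method (discharge = mean velocity × cross-sectional area) then yields a discharge estimate of the kind pursued in the final chapter.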

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Get PDF
    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Large-scale Point Cloud Registration Based on Graph Matching Optimization

    Full text link
    Point cloud registration is a fundamental and challenging problem in 3D computer vision. It has been shown that the isometric transformation is an essential property in rigid point cloud registration, but existing methods only utilize it in the outlier rejection stage. In this paper, we emphasize that the isometric transformation is also important in the feature learning stage for improving registration quality. We propose a Graph Matching Optimization based Network (GMONet for short), which utilizes the graph matching method to explicitly exert isometry-preserving constraints in the point feature learning stage to improve the point representation. Specifically, we exploit the partial graph matching constraint to enhance the overlap region detection abilities of super points (i.e., down-sampled key points) and full graph matching to refine the registration accuracy at the fine-level overlap region. Meanwhile, we leverage mini-batch sampling to improve the efficiency of the full graph matching optimization. Given highly discriminative point features in the evaluation stage, we utilize the RANSAC approach to estimate the transformation between the scanned pairs. The proposed method has been evaluated on the 3DMatch/3DLoMatch benchmarks and the KITTI benchmark. The experimental results show that our method achieves competitive performance compared with the existing state-of-the-art baselines.
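The RANSAC stage mentioned above repeatedly fits a rigid transform to sampled correspondences; the closed-form inner solve is commonly the Kabsch/SVD estimator. A minimal sketch of that solve on synthetic correspondences (illustrative only; the graph matching stages and inlier scoring of GMONet are omitted):

```python
import numpy as np

def kabsch(src, dst):
    """Least-squares rigid transform (R, t) with dst ≈ src @ R.T + t,
    given already-matched 3D correspondences."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# recover a known rotation about z plus a translation
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.0, -2.0, 0.5])
src = np.random.default_rng(1).random((50, 3))
dst = src @ R_true.T + t_true
R_est, t_est = kabsch(src, dst)  # R_est ≈ R_true, t_est ≈ t_true
```

In a full RANSAC loop this solve runs on minimal three-point samples, and the hypothesis with the most inliers is refit on its inlier set.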

    CAROM Air -- Vehicle Localization and Traffic Scene Reconstruction from Aerial Videos

    Full text link
    Road traffic scene reconstruction from videos has long been desired by road safety regulators, city planners, researchers, and autonomous driving technology developers. However, it is expensive and unnecessary to cover every mile of road with cameras mounted on road infrastructure. This paper presents a method that can process aerial videos into vehicle trajectory data so that a traffic scene can be automatically reconstructed and accurately re-simulated using computers. On average, the vehicle localization error is about 0.1 m to 0.3 m using a consumer-grade drone flying at 120 meters. This project also compiles a dataset of 50 reconstructed road traffic scenes from about 100 hours of aerial videos to enable various downstream traffic analysis applications and facilitate further road traffic related research. The dataset is available at https://github.com/duolu/CAROM.
    Comment: Accepted to IEEE ICRA 202
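The reported 0.1 m to 0.3 m localization error is plausible given the ground resolution of a nadir-pointing camera at 120 m: a detection error of a few pixels corresponds to roughly that distance on the ground. A back-of-the-envelope sketch (the sensor width, focal length, and image width below are assumed, generic consumer-drone values, not figures from the paper):

```python
def ground_sampling_distance(altitude_m, sensor_width_mm, focal_mm, image_width_px):
    """Metres of ground covered by one pixel for a nadir-pointing camera
    (similar triangles: ground footprint / altitude = sensor width / focal length)."""
    return (altitude_m * sensor_width_mm) / (focal_mm * image_width_px)

# assumed values: 13.2 mm sensor, 8.8 mm focal length, 4000 px wide frames
gsd = ground_sampling_distance(120, 13.2, 8.8, 4000)  # 0.045 m/pixel
# a 2-6 pixel detection error then corresponds to ~0.09-0.27 m on the ground
```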

    Mapping the Focal Points of WordPress: A Software and Critical Code Analysis

    Get PDF
    Programming languages or code can be examined through numerous analytical lenses. This project is a critical analysis of WordPress, a prevalent web content management system, applying four modes of inquiry. The project draws on theoretical perspectives and areas of study in media, software, platforms, code, language, and power structures. The applied research is based on Critical Code Studies, an interdisciplinary field of study that holds potential as a theoretical lens and methodological toolkit to understand computational code beyond its function. The project begins with a critical code analysis of WordPress, examining its origins and source code and mapping selected vulnerabilities. An examination of the influence of digital and computational thinking follows this. The work also explores the intersection of code patching and vulnerability management and how code shapes our sense of control, trust, and empathy, ultimately arguing that a rhetorical-cultural lens can be used to better understand code's controlling influence. Recurring themes throughout these analyses and observations are the connections to power and vulnerability in WordPress' code and how cultural, processual, rhetorical, and ethical implications can be expressed through its code, creating a particular worldview. Code's emergent properties help illustrate how human values and practices (e.g., empathy, aesthetics, language, and trust) become encoded in software design and how people perceive the software through its worldview. These connected analyses reveal cultural, processual, and vulnerability focal points and the influence these entanglements have concerning WordPress as code, software, and platform. WordPress is a complex sociotechnical platform worthy of further study, as is the interdisciplinary merging of theoretical perspectives and disciplines to critically examine code.
Ultimately, this project helps further enrich the field by introducing focal points in code, examining sociocultural phenomena within the code, and offering techniques to apply critical code methods.

    NPC: Neural Point Characters from Video

    Full text link
    High-fidelity human 3D models can now be learned directly from videos, typically by combining a template-based surface model with neural representations. However, obtaining a template surface requires expensive multi-view capture systems, laser scans, or strictly controlled conditions. Previous methods avoid using a template but rely on a costly or ill-posed mapping from observation to canonical space. We propose a hybrid point-based representation for reconstructing animatable characters that does not require an explicit surface model, while being generalizable to novel poses. For a given video, our method automatically produces an explicit set of 3D points representing approximate canonical geometry, and learns an articulated deformation model that produces pose-dependent point transformations. The points serve both as a scaffold for high-frequency neural features and an anchor for efficiently mapping between observation and canonical space. We demonstrate on established benchmarks that our representation overcomes limitations of prior work operating in either canonical or in observation space. Moreover, our automatic point extraction approach enables learning models of human and animal characters alike, matching the performance of the methods using rigged surface templates despite being more general. Project website: https://lemonatsu.github.io/npc/

    Investigating the learning potential of the Second Quantum Revolution: development of an approach for secondary school students

    Get PDF
    In recent years we have witnessed important changes: the Second Quantum Revolution is in the spotlight of many countries, and it is creating a new generation of technologies. To unlock its potential, several countries have launched strategic plans and research programs that finance and set the pace of research and development of these new technologies (such as the Quantum Flagship and the National Quantum Initiative Act). The increasing pace of technological change is also challenging science education and institutional systems, requiring them to help prepare new generations of experts. This work is situated within physics education research and contributes to the challenge by developing an approach and a course about the Second Quantum Revolution. The aims are to promote quantum literacy and, in particular, to highlight the cultural and educational value of the Second Quantum Revolution. The dissertation is articulated in two parts. In the first, we unpack the Second Quantum Revolution from a cultural perspective and shed light on the main revolutionary aspects, which are elevated to the rank of principles implemented in the design of a course for secondary school students and prospective and in-service teachers. The design process and the educational reconstruction of the activities are presented, as well as the results of a pilot study conducted to investigate the impact of the approach on students' understanding and to gather feedback to refine and improve the instructional materials. The second part consists of the exploration of the Second Quantum Revolution as a context to introduce some basic concepts of quantum physics. We present the results of an implementation with secondary school students to investigate if, and to what extent, external representations can play a role in promoting students' understanding and acceptance of quantum physics as a personally reliable description of the world.

    FLARE: Fast Learning of Animatable and Relightable Mesh Avatars

    Full text link
    Our goal is to efficiently learn, from videos, personalized animatable 3D head avatars that are geometrically accurate, realistic, relightable, and compatible with current rendering systems. While 3D meshes enable efficient processing and are highly portable, they lack realism in terms of shape and appearance. Neural representations, on the other hand, are realistic but lack compatibility and are slow to train and render. Our key insight is that it is possible to efficiently learn high-fidelity 3D mesh representations via differentiable rendering by exploiting highly-optimized methods from traditional computer graphics and approximating some of the components with neural networks. To that end, we introduce FLARE, a technique that enables the creation of animatable and relightable mesh avatars from a single monocular video. First, we learn a canonical geometry using a mesh representation, enabling efficient differentiable rasterization and straightforward animation via learned blendshapes and linear blend skinning weights. Second, we follow physically-based rendering and factor observed colors into intrinsic albedo, roughness, and a neural representation of the illumination, allowing the learned avatars to be relit in novel scenes. Since our input videos are captured on a single device with a narrow field of view, modeling the surrounding environment light is non-trivial. Based on the split-sum approximation for modeling specular reflections, we address this by approximating the pre-filtered environment map with a multi-layer perceptron (MLP) modulated by the surface roughness, eliminating the need to explicitly model the light.
We demonstrate that our mesh-based avatar formulation, combined with learned deformation, material, and lighting MLPs, produces avatars with high-quality geometry and appearance, while also being efficient to train and render compared to existing approaches.
Comment: 15 pages, Accepted: ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia), 202
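Of the components above, linear blend skinning is the most standard: each vertex is posed by a weighted blend of per-bone rigid transforms. A minimal sketch of that operation (the two-bone setup is a toy example, unrelated to FLARE's learned weights):

```python
import numpy as np

def linear_blend_skinning(verts, weights, transforms):
    """Pose rest vertices by blending per-bone 4x4 rigid transforms:
    v_i' = sum_j w_ij * (T_j @ [v_i, 1])."""
    homo = np.concatenate([verts, np.ones((len(verts), 1))], axis=1)  # N x 4
    per_bone = np.einsum('jab,nb->jna', transforms, homo)             # J x N x 4
    blended = np.einsum('nj,jna->na', weights, per_bone)              # N x 4
    return blended[:, :3]

# two bones: identity, and a +1 translation along x; one vertex split 50/50
T = np.stack([np.eye(4), np.eye(4)])
T[1, 0, 3] = 1.0
verts = np.array([[0.0, 0.0, 0.0]])
weights = np.array([[0.5, 0.5]])
posed = linear_blend_skinning(verts, weights, T)  # -> [[0.5, 0.0, 0.0]]
```

Because the per-vertex weights sum to one, the homogeneous coordinate stays 1 after blending, so dropping it recovers a valid 3D position.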

    Desired sensory branding strategies in-store versus online: the skincare industry

    Get PDF
    Modern shoppers are inundated with purchasing options in every product category, with thousands of brands competing for their patronage. It has therefore become increasingly important for organisations to differentiate product offerings in the market if they want to be competitive. It has further been highlighted that an individual’s experience of a brand is of paramount importance, as it is directly linked to brand loyalty. A vehicle for creating memorable brand experiences is the utilisation of multi-sensory experiences or sensory branding. Within the context of traditional or in-store shopping, sensory branding encompasses the use of visual, auditory, olfactory, tactile and gustatory stimuli to adjust consumer purchasing behaviour. However, more and more consumers are opting for online shopping, spurred on by the effects of the global COVID-19 pandemic, and are no less demanding of brands online than they would be in-store. The cosmetics and personal care industry is one of the more predominant gainers from e-commerce. The skincare industry exhibited one of the largest growth rates from 2019 to 2025 and had an estimated market value of $155.8 billion in 2022. The South African skincare industry is no exception, characterised by high average growth rates and many competitive players in the market. This is apparent when considering that the skincare industry within South Africa is expected to grow annually by 5.48% from 2023 to 2027, translating to an industry value of $788.4 million by 2027 (Statista 2023). With reference to in-store shopping for skincare products, sensory marketing strategies have been known to be heavily relied on.
Therefore, with consumers moving towards online shopping, it is essential for skincare businesses to consider how to deliver sensory experiences online as well as in-store. Whilst the importance of the use of sensory branding and marketing in the skincare industry is notable, both in-store and online, it was established that while there is research available on sensory branding, there is very limited academic research on digital sensory branding and the sensory branding of skincare products. Moreover, to the researcher’s knowledge, no academic literature specifically investigates the digital sensory branding of skincare brands. Therefore, this study contributes not only by adding academic research to the topic being investigated but also through recommendations made, based on the outcomes of this study, to skincare brands in South Africa. From the comprehensive literature review, a conceptual model was constructed to investigate the relationship between traditional and digital sensory branding strategies (independent variables) and brand loyalty (dependent variable). Two sets of hypotheses were formulated relating to the identified variables of this study, and the empirical research conducted was utilised to deduce whether these hypotheses should be rejected or supported. To conduct the empirical research needed for this study, certain research methodology was employed. This study made use of a positivistic paradigm and a quantitative approach. The target population of this study constituted consumers who had purchased skincare products in-store as well as online and, as no true sample frame existed, respondents were selected through the use of non-probability sampling, more specifically, convenience sampling. To collect the data, an online survey was used, with the specific data collection instrument being a web-based self-administered questionnaire, which was distributed via social media platforms, such as Facebook and LinkedIn, as well as via email.
Section A of the questionnaire focused on the demographic details of the respondents, while Sections B to F related to the variables of the study. A total of 372 potential respondents started the questionnaire; however, only 321 questionnaires were deemed usable after the data had been coded and cleaned, indicating a response rate of 86.3%. This study made use of both descriptive (measures of central tendency as well as standard deviation and skewness) and inferential (SEM models, primary models, Pearson’s correlation coefficients, the Chi-Square test of association, ANOVAs and the Welch robust test, the Tukey test and the Games-Howell test, as well as Cohen’s d) statistics to interpret the data, which was graphically illustrated. The empirical investigation conducted in this study between the variables and sub-variables revealed that significant relationships exist between traditional sensory branding strategies (traditional olfactory and tactile stimuli) and digital sensory branding strategies (digital visual, olfactory and tactile stimuli) and brand loyalty, with reference to the skincare industry. It was further notable that, with specific reference to the skincare industry, the senses of sight, smell and touch are key factors for sensory branding, whereas auditory stimuli were found to only be useful when used in unison with the other senses. Moreover, with reference to in-store shopping, it was deduced that consumers shop for skincare mostly via retail outlets, which could lead to sensory overload. Furthermore, the results of this study suggest that younger consumers are price sensitive. Based on the pertinent empirical results, and corresponding literature findings, of this study, recommendations were provided to businesses operating in the skincare industry. With reference to in-store trading, it was noted that, because skincare is mostly sold via retail outlets, the brand itself does not have control over all sensory stimuli to which the consumer is exposed.
As a result, consumers may be subject to sensory overload, and skincare brands should keep their in-store sensory branding simple. Moreover, skincare brands could make use of an in-store aesthetician or beautician, which would facilitate consumer-product interaction. With regards to online trading, a recommendation for skincare brands would be to use moving images or GIFs, which allow the consumer to more easily imagine the feel of the product. Moreover, skincare brands can make use of brand ambassadors to create “unboxing” videos, which convey the sensory information of the product more clearly and instil confidence in consumers. Recommendations were also made with reference to the financial state of consumers, as the financial position of the respondents could influence their decision making. The limitations of this study comprised the availability of reliable existing sources to support the study, as the concept of digital sensory branding is still relatively new, and, due to the study being focused on the skincare industry, taste stimuli were excluded as they were found to have no relevance. Finally, based on all the literature findings and empirical results, recommendations for future areas of study were made. This study provides evidence that both traditional and digital sensory branding strategies have an influence on, or relationship with, brand loyalty. Through this study, the importance of sensory branding, with specific reference to the skincare industry, is brought to light. Furthermore, skincare brands can utilise the information provided to improve the experience of their consumers when shopping in-store, as well as online, thereby increasing their base of brand loyal consumers.
Thesis (PhD) -- Faculty of Business and Economic Sciences, 202
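Among the inferential statistics listed above, Cohen's d is the simplest to state precisely: the difference in group means scaled by the pooled standard deviation. A minimal sketch with made-up Likert-style scores (the data are illustrative, not the study's):

```python
import math

def cohens_d(a, b):
    """Cohen's d: standardised mean difference between two samples,
    using the pooled (n-weighted) sample standard deviation."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled

# illustrative satisfaction scores for two consumer groups (made-up data)
group_a = [4, 5, 4, 3, 5, 4]
group_b = [3, 3, 4, 2, 3, 3]
d = cohens_d(group_a, group_b)  # roughly 1.7: a large effect by convention
```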

    Introduction to Facial Micro Expressions Analysis Using Color and Depth Images: A Matlab Coding Approach (Second Edition, 2023)

    Full text link
    The book provides a gentle introduction to the field of Facial Micro Expressions Recognition (FMER) using Color and Depth images, with the aid of the MATLAB programming environment. FMER is a subset of image processing and a multidisciplinary topic to analyse, so it requires familiarity with other topics of Artificial Intelligence (AI) such as machine learning, digital image processing, psychology and more. It is therefore a great opportunity to write a book which covers all of these topics for readers from beginner to professional in the field of AI, even those without a background in AI. Our goal is to provide a standalone introduction to FMER analysis in the form of theoretical descriptions for readers with no background in image processing, with reproducible MATLAB practical examples. We also describe the basic definitions for FMER analysis and the MATLAB library used in the text, which helps the reader apply the experiments in real-world applications. We believe that this book is suitable for students, researchers, and professionals alike who need to develop practical skills, along with a basic understanding of the field. We expect that, after reading this book, the reader feels comfortable with different key stages such as color and depth image processing, color and depth image representation, classification, machine learning, facial micro-expressions recognition, feature extraction and dimensionality reduction.
    Comment: This is the second edition of the book