    Climate Change and Critical Agrarian Studies

    Climate change is perhaps the greatest threat to humanity today and plays out as a cruel engine of myriad forms of injustice, violence and destruction. The effects of climate change from human-made emissions of greenhouse gases are devastating and accelerating, yet are uncertain and uneven in terms of both geography and socio-economic impacts. Emerging from the dynamics of capitalism since the industrial revolution — as well as industrialisation under state-led socialism — the consequences of climate change are especially profound for the countryside and its inhabitants. The book interrogates the narratives and strategies that frame climate change and examines the institutionalised responses in agrarian settings, highlighting what exclusions and inclusions result. It explores how different people — in relation to class and other co-constituted axes of social difference such as gender, race, ethnicity, age and occupation — are affected by climate change, as well as the climate adaptation and mitigation responses being implemented in rural areas. The book also explores how climate change — and the responses to it — affects processes of social differentiation, trajectories of accumulation and, in turn, agrarian politics. Finally, the book examines what strategies are required to confront climate change, and the underlying political-economic dynamics that cause it, reflecting on what this means for agrarian struggles across the world. The 26 chapters in this volume explore how the relationship between capitalism and climate change plays out in the rural world and, in particular, the way agrarian struggles connect with the huge challenge of climate change. Through a wide variety of case studies alongside more conceptual chapters, the book makes the often-missing connection between climate change and critical agrarian studies. The book argues that making the connection between climate and agrarian justice is crucial.

    Reliable Sensor Intelligence in Resource Constrained and Unreliable Environment

    The objective of this research is to design sensor intelligence that is reliable in a resource-constrained, unreliable environment. There are various sources of variation and uncertainty involved in intelligent sensor systems, so it is critical to build reliable sensor intelligence. Many prior works seek to design reliable sensor intelligence by making the task model itself more robust and reliable. This thesis suggests that, along with improving the task itself, an early warning based on task reliability quantification can further improve sensor intelligence. A DNN-based early warning generator quantifies task reliability based on spatiotemporal characteristics of the input, and the early warning controls sensor parameters and avoids system failure. This thesis presents an early warning generator that predicts task failure due to sensor-hardware-induced input corruption and controls the sensor operation. Moreover, a lightweight uncertainty estimator is presented to account for DNN model uncertainty in task reliability quantification without the prohibitive computation of a stochastic DNN. Cross-layer uncertainty estimation is also discussed to consider the effect of PIM variations.
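The abstract does not give the estimator's exact form, but the general recipe behind such reliability-based early warnings can be sketched: run a few stochastic forward passes (e.g. MC dropout), measure the spread of the predictions, and raise a warning when the predictive entropy crosses a threshold. All names and the threshold value below are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def early_warning(logit_samples, entropy_threshold=0.5):
    """Flag an unreliable input from the spread of stochastic forward passes.

    logit_samples: (n_passes, n_classes) logits from repeated stochastic
    passes (e.g. MC dropout) over the same input.
    Returns (warn, predictive_entropy).
    """
    mean_probs = softmax(logit_samples).mean(axis=0)
    entropy = -np.sum(mean_probs * np.log(mean_probs + 1e-12))
    return bool(entropy > entropy_threshold), entropy

rng = np.random.default_rng(0)
# A confident, stable prediction: low spread, low entropy, no warning.
confident = np.array([5.0, 0.0, 0.0]) + 0.05 * rng.standard_normal((8, 3))
warn, h = early_warning(confident)
print(warn)  # False
```

In a deployed pipeline the warning would gate sensor reconfiguration rather than just print; the entropy threshold would be calibrated on held-out failure cases.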

    A new global media order? : debates and policies on media and mass communication at UNESCO, 1960 to 1980

    Defence date: 24 June 2019. Examining Board: Professor Federico Romero, European University Institute (Supervisor); Professor Corinna Unger, European University Institute (Second Reader); Professor Iris Schröder, Universität Erfurt (External Advisor); Professor Sandrine Kott, Université de Genève.
    The 1970s, a UNESCO report claimed, would be the “communication decade”. UNESCO had started research on new means of mass communication for development purposes in the 1960s. In the 1970s, the issue evolved into a debate on the so-called “New World Information and Communication Order” (NWICO) and the democratisation of global media, and it led UNESCO itself into a major crisis in the 1980s. My project traces a dual trajectory that shaped this global debate on transnational media. The first follows communications from being seen as a tool and goal of national development in the 1960s to being seen as a catalyst for recalibrated international political, cultural and economic relations. The second relates to the recurrent attempts, and eventual failure, of various actors to engage UNESCO as a platform to promote a new global order. I take UNESCO as an observation post to study national ambitions intersecting with internationalist claims to universality, changing understandings of the role of media in development and international affairs, and competing visions of world order. Looking at the modes of this debate, the project also sheds light on the evolving practices of internationalism. Located in the field of a new international history, this study relates to the recent rediscovery of the “new order” discourses of the 1970s as well as to the increasingly diversified literature on internationalism. With its focus on international communications and attempts at regulating them, it also contributes to an international media history of the late twentieth century. The emphasis on the role of international organisations, as well as on voices from the Global South, will contribute to our understanding of the historic macro-processes of decolonisation, globalisation and the Cold War.

    Synthetic Aperture Radar (SAR) Meets Deep Learning

    This reprint focuses on applications that combine synthetic aperture radar and deep learning, and aims to further promote the development of intelligent SAR image interpretation technology. A synthetic aperture radar (SAR) is an important active microwave imaging sensor whose all-day, all-weather operating capability gives it an important place in the remote sensing community. Since the United States launched the first SAR satellite, SAR has received much attention in the remote sensing community, e.g., in geological exploration, topographic mapping, disaster forecasting, and traffic monitoring. It is therefore valuable and meaningful to study SAR-based remote sensing applications. In recent years, deep learning, represented by convolutional neural networks, has driven significant progress in the computer vision community, e.g., in face recognition, driverless vehicles, and the Internet of Things (IoT). Deep learning enables computational models with multiple processing layers to learn data representations at multiple levels of abstraction, which can greatly improve the performance of various applications. This reprint provides a platform for researchers to address these significant challenges and present their innovative and cutting-edge research results when applying deep learning to SAR, in various manuscript types, e.g., articles, letters, reviews, and technical reports.

    Image Denoising via Style Disentanglement

    Image denoising is a fundamental task in low-level computer vision. While recent deep learning-based image denoising methods have achieved impressive performance, they are black-box models and the underlying denoising principle remains unclear. In this paper, we propose a novel approach to image denoising that offers both a clear denoising mechanism and good performance. We view noise as a type of image style and remove it by incorporating noise-free styles derived from clean images. To achieve this, we design novel losses and network modules to extract noisy styles from noisy images and noise-free styles from clean images. The noise-free style induces low-response activations for noise features and high-response activations for content features in the feature space. This leads to the separation of clean content from noise, effectively denoising the image. Unlike disentanglement-based image editing tasks that edit semantic-level attributes using styles, our main contribution lies in editing pixel-level attributes through global noise-free styles. We conduct extensive experiments on synthetic noise removal and real-world image denoising datasets (SIDD and DND), demonstrating the effectiveness of our method in terms of both PSNR and SSIM metrics. Moreover, we experimentally validate that our method offers good interpretability.
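The paper's specific losses and modules are not detailed in this abstract, but "style" machinery of this kind is commonly implemented as channel-wise feature statistics, as in AdaIN-style normalization: re-normalizing a noisy image's features to the statistics of a noise-free style suppresses noise-like responses. A minimal numpy sketch of that statistic swap (the function and variable names are illustrative, not the paper's modules):

```python
import numpy as np

def adain(content_feat, style_feat, eps=1e-5):
    """Adaptive instance normalization: re-normalize the content features
    to the channel-wise mean/std of the (noise-free) style features.

    Both inputs are (channels, height, width) feature maps.
    """
    c_mu = content_feat.mean(axis=(1, 2), keepdims=True)
    c_sd = content_feat.std(axis=(1, 2), keepdims=True)
    s_mu = style_feat.mean(axis=(1, 2), keepdims=True)
    s_sd = style_feat.std(axis=(1, 2), keepdims=True)
    return s_sd * (content_feat - c_mu) / (c_sd + eps) + s_mu

rng = np.random.default_rng(0)
noisy_feat = rng.standard_normal((4, 8, 8))          # stand-in "noisy" features
clean_style = 2.0 * rng.standard_normal((4, 8, 8)) + 1.0
restyled = adain(noisy_feat, clean_style)
```

After the swap, each channel of the output carries the clean style's statistics; in the paper's setting the noise-free style is presumably learned with dedicated losses rather than computed directly from a single clean feature map.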

    Revealing More Details: Image Super-Resolution for Real-World Applications


    From NeRFLiX to NeRFLiX++: A General NeRF-Agnostic Restorer Paradigm

    Neural radiance fields (NeRF) have shown great success in novel view synthesis. However, recovering high-quality details from real-world scenes is still challenging for existing NeRF-based approaches, due to potentially imperfect calibration information and scene representation inaccuracy. Even with high-quality training frames, the synthetic novel views produced by NeRF models still suffer from notable rendering artifacts, such as noise and blur. To address this, we propose NeRFLiX, a general NeRF-agnostic restorer paradigm that learns a degradation-driven inter-viewpoint mixer. Specifically, we design a NeRF-style degradation modeling approach and construct large-scale training data, making it possible for deep neural networks to effectively remove NeRF-native rendering artifacts. Moreover, beyond degradation removal, we propose an inter-viewpoint aggregation framework that fuses highly related high-quality training images, pushing the performance of cutting-edge NeRF models to entirely new levels and producing highly photo-realistic synthetic views. Based on this paradigm, we further present NeRFLiX++, with a stronger two-stage NeRF degradation simulator and a faster inter-viewpoint mixer, achieving superior performance with significantly improved computational efficiency. Notably, NeRFLiX++ is capable of restoring photo-realistic ultra-high-resolution outputs from noisy low-resolution NeRF-rendered views. Extensive experiments demonstrate the excellent restoration ability of NeRFLiX++ on various novel view synthesis benchmarks.
    Comment: 17 pages, 16 figures. Project Page: https://redrock303.github.io/nerflix_plus/. arXiv admin note: text overlap with arXiv:2303.0691
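The actual degradation simulator is specific to NeRF rendering artifacts and is not specified in this abstract. As a hedged illustration of the general recipe, corrupting a clean view to build (degraded, clean) training pairs, here is a crude box-blur-plus-noise stand-in; all names and parameters are assumptions, not NeRFLiX's model:

```python
import numpy as np

def simulate_render_degradation(clean, rng, sigma_noise=0.05):
    """Build a (degraded, clean) training pair by imitating rendering
    artifacts with a 3x3 box blur plus additive Gaussian noise.

    `clean` is a (H, W) image with values in [0, 1].
    """
    h, w = clean.shape
    padded = np.pad(clean, 1, mode="edge")  # edge-replicate the border
    blurred = sum(
        padded[i:i + h, j:j + w] for i in range(3) for j in range(3)
    ) / 9.0
    noisy = blurred + sigma_noise * rng.standard_normal(clean.shape)
    return np.clip(noisy, 0.0, 1.0), clean

rng = np.random.default_rng(0)
degraded, target = simulate_render_degradation(rng.random((8, 8)), rng)
```

A restorer network would then be trained to map `degraded` back to `target`; the paper's two-stage simulator replaces this toy corruption with learned, NeRF-specific degradations.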

    Improving robustness of image recognition through artificial image augmentation

    Deep learning-based computer vision technologies can offer a number of advantages over manual inspection methods, such as reduced operational costs and efficiency improvements. However, they are known to be unreliable in certain situations, especially when input images contain augmentations such as occlusion or distortion that computer vision models have not been trained on. While such augmentations can be mitigated by controlling the environment, this is not always possible, especially outdoors. To address this issue, one common approach is supplemental robustness training using augmented training data, which involves training models on images containing the expected augmentations to improve performance. However, this approach requires the collection of a substantial volume of augmented images for each expected augmentation, making it time-consuming and costly depending on the difficulty involved in reproducing each augmentation. This thesis explores the viability of using artificially rendered augmentations on unaugmented images as a substitute for the manual collection and preparation of naturally augmented data for image recognition and object detection models. Specifically, this thesis recreates nine environmental augmentations that commonly occur in outdoor environments and evaluates their impact on model performance on three datasets. The findings indicate potential for using artificially generated augmentations as substitutes for naturally occurring augmentations. It is anticipated that further research in this area will enable more reliable image recognition and object detection in less controllable environments, thus improving the results of these technologies in uncertain situations.
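As an illustration of rendering an augmentation artificially, a random-erasing style occluder can be pasted onto an unaugmented image. The nine augmentations studied in the thesis are not listed in this abstract, so this occlusion sketch is purely an assumed example:

```python
import numpy as np

def render_occlusion(image, rng, max_frac=0.3):
    """Paste a flat grey rectangle over a random region of the image,
    imitating a naturally occurring occlusion (random-erasing style)."""
    out = image.copy()
    h, w = image.shape[:2]
    occ_h = int(rng.integers(1, max(2, int(h * max_frac))))
    occ_w = int(rng.integers(1, max(2, int(w * max_frac))))
    top = int(rng.integers(0, h - occ_h + 1))
    left = int(rng.integers(0, w - occ_w + 1))
    out[top:top + occ_h, left:left + occ_w] = 127  # flat occluder colour
    return out

rng = np.random.default_rng(0)
augmented = render_occlusion(np.zeros((32, 32, 3), dtype=np.uint8), rng)
```

Applying such renderers to an existing unaugmented dataset yields robustness-training data without collecting naturally occluded photos.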

    A Multi-scale Generalized Shrinkage Threshold Network for Image Blind Deblurring in Remote Sensing

    Remote sensing images are essential for many earth science applications, but their quality can be degraded by limitations in sensor technology and complex imaging environments. To address this, various remote sensing image deblurring methods have been developed to restore sharp, high-quality images from degraded observational data. However, most traditional model-based deblurring methods require predefined hand-crafted prior assumptions, which are difficult to design for complex applications, while most deep learning-based deblurring methods are designed as black boxes, lacking transparency and interpretability. In this work, we propose a novel blind deblurring learning framework based on alternating iterations of shrinkage thresholds, alternately updating the blur kernel and the image, which provides a theoretical foundation for the network design. Additionally, we propose a learnable blur kernel proximal mapping module to improve blur kernel estimation in the kernel domain. We then propose a deep proximal mapping module in the image domain, which combines a generalized shrinkage threshold operator with a multi-scale prior feature extraction block. This module also introduces an attention mechanism to adaptively adjust the prior importance, thus avoiding the drawbacks of hand-crafted image prior terms. The resulting multi-scale generalized shrinkage threshold network (MGSTNet) is designed to specifically focus on learning deep geometric prior features to enhance image restoration. Experiments demonstrate the superiority of our MGSTNet framework on remote sensing image datasets compared to existing deblurring methods.
    Comment: 12 pages
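The generalized shrinkage threshold operator itself is not spelled out in the abstract, but it generalizes the classical soft-thresholding operator S_tau(x) = sign(x) * max(|x| - tau, 0), the proximal operator of the l1 norm used in ISTA-style alternating iterations:

```python
import numpy as np

def soft_threshold(x, tau):
    """Classical shrinkage operator S_tau(x) = sign(x) * max(|x| - tau, 0),
    the proximal operator of tau * ||x||_1: small entries are zeroed,
    large entries are pulled toward zero by tau."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

print(soft_threshold(np.array([-2.0, -0.5, 0.5, 2.0]), 1.0))
```

In an unrolled network like MGSTNet, the fixed scalar tau is replaced by learned, feature-dependent thresholds, which is what makes the operator "generalized".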

    Motion Offset for Blur Modeling

    Motion blur caused by relative movement between the camera and the subject is often an undesirable degradation of image quality. In most conventional deblurring methods, a blur kernel is estimated for image deconvolution. Due to the ill-posed nature of the problem, predefined priors are used to suppress the ill-posedness; however, such priors can only handle specific situations. To achieve better deblurring performance on dynamic scenes, deep learning-based methods have been proposed that learn a mapping function to restore the sharp image from a blurry image, with the blur implicitly modelled in the feature extraction module. However, blur modelled from a paired dataset does not generalize well to some real-world scenes. In summary, an accurate and dynamic blur model that more closely approximates real-world blur is needed. By revisiting the principle of camera exposure, we can model blur with the displacements between sharp pixels and the exposed pixel, namely motion offsets. Given specific physical constraints, motion offsets can form different exposure trajectories (e.g., linear or quadratic). Compared to a conventional blur kernel, our proposed motion offsets are a more rigorous approximation of real-world blur, since they can constitute a non-linear and non-uniform motion field. Through learning from a dynamic scene dataset, an accurate and spatially variant motion offset field is obtained. With accurate motion information and a compact blur modeling method, we explore ways of utilizing motion information to facilitate multiple blur-related tasks. By introducing recovered motion offsets, we build a motion-aware and spatially variant convolution. For extracting a video clip from a blurry image, motion offsets can provide an explicit (non-)linear motion trajectory for interpolation. We also work towards better image deblurring performance in real-world scenarios by improving the generalization ability of the deblurring model.
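The exposure-averaging view described above can be sketched directly: a blurry image is the average of the sharp image displaced along a trajectory of motion offsets. The integer-offset, wrap-around version below is a toy illustration, not the thesis's learned, spatially variant field:

```python
import numpy as np

def blur_from_offsets(sharp, offsets):
    """Average the sharp image shifted along a trajectory of (dy, dx)
    motion offsets; np.roll wraps at the borders, which is acceptable
    for a toy sketch."""
    acc = np.zeros_like(sharp, dtype=np.float64)
    for dy, dx in offsets:
        acc += np.roll(np.roll(sharp, dy, axis=0), dx, axis=1)
    return acc / len(offsets)

linear = [(0, dx) for dx in range(5)]          # uniform motion over exposure
quadratic = [(0, dx * dx) for dx in range(5)]  # accelerating motion

impulse = np.zeros((1, 16))
impulse[0, 8] = 1.0
blurry = blur_from_offsets(impulse, linear)    # impulse smeared over 5 pixels
```

A single-pixel impulse smeared this way traces out exactly the trajectory, which is why the offsets act as a spatially variant, possibly non-linear blur kernel.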