
    Analysis and Modelling of TTL ice crystals based on in-situ light scattering patterns

    Even though there are numerous studies on cirrus clouds and their influence on climate, detailed information on microphysical properties such as ice crystal geometry is still lacking. Instrumental limitations and the scarcity of observational data are likely reasons, and this knowledge gap has increased the error in climate model predictions. This study therefore focuses on the Tropical Tropopause Layer (TTL), where cirrus clouds occur and the temperature bias is higher. Since the shape and surface geometry of ice crystals greatly influence the temperature, a detailed understanding of these crystals is necessary. This paper thus looks in depth at the morphology of different types of ice crystals in the TTL. The primary objective of this research is to analyse the scattering patterns of ice crystals in TTL cirrus and to determine characteristics such as their shape and size distributions. As a high cloud, cirrus plays a crucial role in the Earth-atmosphere radiation balance, and knowing the scattering properties of its ice crystals allows their impact on that balance to be estimated. This research further broadens the understanding of the general scattering properties of TTL ice crystals, supporting climate modelling and contributing towards more accurate climate prediction. An investigation into the light scattering data is presented. The data consist of 2D scattering patterns of ice crystals of size 1–100 μm captured by the Aerosol Ice Interface Transition Spectrometer (AIITS) between scattering angles of 6° and 25° at a wavelength of 532 nm. The images were taken during the NERC and NASA Co-ordinated Airborne Studies in the Tropics and Airborne Tropical Tropopause Experiment (the CAST-ATTREX campaign) on 5 March 2015 at altitudes between 15 and 16 km over the Eastern Pacific.
The features in the scattering patterns are analysed to identify the crystal habit, as they vary with the geometry of the crystal. After the analysis phase, model crystals of specific types and sizes are generated using an appropriate computer program. The scattering data of the model crystals are then simulated using a Beam Tracing Model (BTM) based on physical optics, since geometric optics does not produce the required information and exact methods (such as the T-matrix or the Discrete Dipole Approximation) are either unsuitable for large size parameters or too time-consuming. The simulated scattering pattern of a model crystal is then compared against the AIITS pattern to determine characteristics such as the shape, surface texture and size of the ice crystals. By successive testing and further analysis, the crystal sizes are estimated. Since manual analysis of scattering patterns is time-consuming, a pilot study on a deep learning network has been undertaken to classify the scattering patterns. Previous studies have shown that there are high concentrations of small ice crystals in TTL cirrus. However, these crystals, especially those <30 μm, are often misclassified due to the limited resolution of imaging instruments, or even regarded as shattered ice. Through this research it was possible to explore both the crystal habit and its surface texture with greater accuracy, as the scattering patterns captured by the AIITS were analysed instead of crystal images. It was found that most of the crystals are quasi-spheroidal in shape and that there is indeed an abundance of smaller crystals <30 μm. It was also found that over a quarter of the crystal population has rough surfaces.
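The abstract notes that exact methods become unsuitable at large size parameters. As a quick illustration of why physical optics is needed here, the dimensionless size parameter x = 2πr/λ can be computed for the stated AIITS size range at 532 nm; this is an illustrative sketch, not part of the study's code.

```python
import math

def size_parameter(radius_um: float, wavelength_nm: float = 532.0) -> float:
    """Dimensionless size parameter x = 2*pi*r / lambda."""
    radius_nm = radius_um * 1000.0
    return 2.0 * math.pi * radius_nm / wavelength_nm

# For the 1-100 um (diameter) crystals observed by AIITS at 532 nm:
x_small = size_parameter(0.5)   # ~1 um crystal  -> r = 0.5 um, x ~ 6
x_large = size_parameter(50.0)  # ~100 um crystal -> r = 50 um, x ~ 590
```

Values in the hundreds are far beyond the practical reach of exact methods such as the DDA, which motivates the Beam Tracing Model used in the study.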

    Learning Object-Centric Neural Scattering Functions for Free-viewpoint Relighting and Scene Composition

    Photorealistic object appearance modeling from 2D images is a long-standing topic in vision and graphics. While neural implicit methods (such as Neural Radiance Fields) have shown high-fidelity view synthesis results, they cannot relight the captured objects. More recent neural inverse rendering approaches have enabled object relighting, but they represent surface properties as simple BRDFs and therefore cannot handle translucent objects. We propose Object-Centric Neural Scattering Functions (OSFs) for learning to reconstruct object appearance from only images. OSFs not only support free-viewpoint object relighting, but can also model both opaque and translucent objects. While accurately modeling subsurface light transport for translucent objects can be highly complex and even intractable for neural methods, OSFs learn to approximate the radiance transfer from a distant light to an outgoing direction at any spatial location. This approximation avoids explicitly modeling complex subsurface scattering, making learning a neural implicit model tractable. Experiments on real and synthetic data show that OSFs accurately reconstruct appearances for both opaque and translucent objects, allowing faithful free-viewpoint relighting as well as scene composition. Project website: https://kovenyu.com/osf/. Journal extension of arXiv:2012.08503. The first two authors contributed equally to this work.
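As a rough illustration (not the authors' implementation), the quantity an OSF learns is a function from a 3D location, an incoming light direction, and an outgoing direction to an RGB radiance transfer. The NumPy stub below, with random untrained weights and arbitrary layer sizes, only shows the interface such a neural implicit model exposes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer MLP standing in for the learned scattering function.
# Layer sizes and weights are illustrative assumptions, not from the paper.
W1 = rng.normal(size=(9, 64)) * 0.1   # input: 3D point + light dir + view dir
b1 = np.zeros(64)
W2 = rng.normal(size=(64, 3)) * 0.1   # output: RGB radiance transfer
b2 = np.zeros(3)

def osf_stub(x, light_dir, view_dir):
    """f(x, omega_light, omega_out) -> RGB: the mapping an OSF approximates."""
    inp = np.concatenate([x, light_dir, view_dir])
    h = np.maximum(W1.T @ inp + b1, 0.0)               # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(W2.T @ h + b2)))      # sigmoid keeps RGB in [0, 1]

rgb = osf_stub(np.zeros(3), np.array([0., 0., 1.]), np.array([0., 1., 0.]))
```

Because the function is evaluated per spatial location rather than per surface point, it can absorb subsurface transport that a surface BRDF cannot represent.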

    Machine learning in solar physics

    The application of machine learning in solar physics has the potential to greatly enhance our understanding of the complex processes that take place in the atmosphere of the Sun. By using techniques such as deep learning, we are now in a position to analyze large amounts of data from solar observations and identify patterns and trends that may not have been apparent using traditional methods. This can help us improve our understanding of explosive events like solar flares, which can have a strong effect on the Earth environment; predicting such hazardous events is crucial for our technological society. Machine learning can also improve our understanding of the inner workings of the Sun itself by allowing us to go deeper into the data and to propose more complex models to explain them. Additionally, machine learning can help automate the analysis of solar data, reducing the need for manual labor and increasing the efficiency of research in this field. 100 pages, 13 figures, 286 references; accepted for publication as a Living Review in Solar Physics (LRSP).

    LumiGAN: Unconditional Generation of Relightable 3D Human Faces

    Unsupervised learning of 3D human faces from unstructured 2D image data is an active research area. While recent works have achieved an impressive level of photorealism, they commonly lack control of lighting, which prevents the generated assets from being deployed in novel environments. To this end, we introduce LumiGAN, an unconditional Generative Adversarial Network (GAN) for 3D human faces with a physically based lighting module that enables relighting under novel illumination at inference time. Unlike prior work, LumiGAN can create realistic shadow effects using an efficient visibility formulation that is learned in a self-supervised manner. LumiGAN generates plausible physical properties for relightable faces, including surface normals, diffuse albedo, and specular tint, without any ground truth data. In addition to relightability, we demonstrate significantly improved geometry generation compared to state-of-the-art non-relightable 3D GANs and notably better photorealism than existing relightable GANs. Project page: https://boyangdeng.com/projects/lumiga

    Automated detection of tumoural cells with graph neural networks

    The detection of tumoural cells from whole-slide images is an essential task in medical diagnosis and research. In this thesis, we propose and analyse a novel approach that combines computer-vision-based models with graph neural networks to improve the accuracy of automated tumoural cell detection. Our proposal leverages the inherent structure of, and relationships between, cells in the tissue. Experimental results on our own curated dataset show that several different metrics improve by up to 15% compared to using the computer vision approach alone. The method has been shown to work with H&E-stained lung tissue and HER2-stained breast tissue. We believe that our proposed method has the potential to improve the accuracy of automated tumoural cell detection, which can lead to accelerated diagnosis and research in the field by reducing the workload of histopathologists.
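The first step of such a vision-plus-GNN pipeline is to turn detected cells into a graph. The sketch below builds a generic k-nearest-neighbour graph over cell centroids; it is not the thesis's actual code, and the choice k = 5 is an illustrative assumption.

```python
import numpy as np

def build_cell_graph(coords: np.ndarray, k: int = 5) -> list[tuple[int, int]]:
    """Connect each detected cell to its k nearest neighbours.

    coords: (n_cells, 2) array of cell-centroid positions from the
    vision-based detector. Returns undirected edges as (i, j) pairs, i < j.
    """
    diffs = coords[:, None, :] - coords[None, :, :]
    dist = np.linalg.norm(diffs, axis=-1)
    np.fill_diagonal(dist, np.inf)                 # exclude self-loops
    edges = set()
    for i in range(len(coords)):
        for j in np.argsort(dist[i])[:k]:          # k closest other cells
            edges.add((min(i, int(j)), max(i, int(j))))
    return sorted(edges)

rng = np.random.default_rng(1)
edges = build_cell_graph(rng.uniform(0, 100, size=(30, 2)))
```

A GNN then passes messages along these edges, so each cell's classification can use the arrangement of its neighbours rather than its appearance alone.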

    Online Neural Path Guiding with Normalized Anisotropic Spherical Gaussians

    The variance reduction speed of physically based rendering is heavily affected by the adopted importance sampling technique. In this paper we propose a novel online framework to learn a spatially varying density model with a single small neural network using stochastic ray samples. To achieve this, we propose a novel closed-form density model, the normalized anisotropic spherical Gaussian mixture, that can express complex irradiance fields with a small number of parameters. Our framework learns the distribution in a progressive manner and does not need any warm-up phase. Due to the compact and expressive representation of our density model, our framework can be implemented entirely on the GPU, allowing it to produce high quality images with limited computational resources.
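The paper's density model is an anisotropic spherical Gaussian mixture; the sketch below uses the simpler isotropic spherical Gaussian to illustrate the key property such models rely on: a closed-form normalization that turns the lobe into a valid sampling density. The concentration value is an illustrative assumption, and the Monte Carlo check is only a sanity test.

```python
import numpy as np

def sg_pdf(omega, mu, lam):
    """Isotropic spherical-Gaussian density exp(lam*(dot(omega, mu) - 1)) / Z.

    Closed-form normalization over the unit sphere:
    Z = 2*pi*(1 - exp(-2*lam)) / lam.
    """
    norm = 2.0 * np.pi * (1.0 - np.exp(-2.0 * lam)) / lam
    return np.exp(lam * (omega @ mu - 1.0)) / norm

# Monte Carlo check that the density integrates to ~1 over the sphere.
rng = np.random.default_rng(0)
v = rng.normal(size=(200_000, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)     # uniform directions
mu = np.array([0.0, 0.0, 1.0])                    # lobe axis (illustrative)
estimate = 4.0 * np.pi * sg_pdf(v, mu, lam=8.0).mean()
```

Having the normalization in closed form is what makes such lobes usable directly as importance-sampling PDFs without numerical integration at render time.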

    Neural Reflectance Decomposition

    Creating relightable objects from images or image collections is a fundamental challenge in computer vision and graphics. This problem is also known as inverse rendering. One of the main challenges in this task is its high ambiguity: the creation of images from 3D objects is well defined as rendering, but multiple properties such as shape, illumination, and surface reflectiveness influence each other, and an integration of these influences is performed to form the final image. Reversing these integrated dependencies is highly ill-posed and ambiguous. Solving the task is nevertheless essential, as the automated creation of relightable objects has various applications in online shopping, augmented reality (AR), virtual reality (VR), games, and movies. In this thesis, we propose two approaches to solve this task. First, a network architecture is discussed which generalizes the decomposition of a two-shot capture of an object from large training datasets. Its degree of novel view synthesis is limited, as only a single perspective is used in the decomposition. Therefore, a second set of approaches is proposed, which decomposes a set of 360-degree images into shape, reflectance, and illumination. These multi-view images are optimized per object, and the result can be used directly in standard rendering software or games. We achieve this by extending recent research on neural fields, which can store information in a 3D neural volume. Leveraging volume rendering techniques, we can optimize a reflectance field from in-the-wild image collections without any ground-truth (GT) supervision.
Our proposed methods achieve state-of-the-art decomposition quality and enable novel capture setups where objects can be under varying illumination or in different locations, which is typical for online image collections.
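The volume rendering step used to optimize such neural fields typically relies on the standard emission-absorption quadrature. A generic sketch (not the thesis's code) of the per-sample compositing weights along one ray:

```python
import numpy as np

def render_weights(sigma: np.ndarray, delta: np.ndarray) -> np.ndarray:
    """Per-sample compositing weights of the standard volume-rendering
    quadrature: w_i = T_i * (1 - exp(-sigma_i * delta_i)), where
    T_i = exp(-sum_{j<i} sigma_j * delta_j) is the transmittance."""
    alpha = 1.0 - np.exp(-sigma * delta)
    trans = np.concatenate([[1.0], np.cumprod(1.0 - alpha)[:-1]])
    return trans * alpha

sigma = np.array([0.1, 0.5, 2.0, 4.0])   # densities along one ray (illustrative)
delta = np.full(4, 0.25)                 # sample spacing
w = render_weights(sigma, delta)
# w sums to at most 1; any remainder is the light reaching the background.
```

Because these weights are differentiable in the densities, gradients from an image loss can flow back into the neural field, which is what enables optimizing reflectance without GT supervision.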

    Pathway to Future Symbiotic Creativity

    This report presents a comprehensive view of our vision of the development path of human-machine symbiotic art creation. We propose a classification of creative systems with a hierarchy of five classes, showing the pathway of creativity evolving from mimic-human artists (Turing Artists) to a machine artist in its own right. We begin with an overview of the limitations of Turing Artists, then focus on the top two levels of the hierarchy, Machine Artists, emphasizing machine-human communication in art creation. In art creation, machines need to understand humans' mental states, including desires, appreciation, and emotions, while humans also need to understand machines' creative capabilities and limitations. The rapid development of immersive environments, and their further evolution into the new concept of the metaverse, enables symbiotic art creation through unprecedented flexibility of bi-directional communication between artists and art manifestation environments. By examining the latest sensor and XR technologies, we illustrate novel ways of collecting art data to form the basis of a new form of human-machine bidirectional communication and understanding in art creation. Based on such communication and understanding mechanisms, we propose a novel framework for building future Machine Artists, guided by the philosophy that a human-compatible AI system should be based on the "human-in-the-loop" principle rather than the traditional "end-to-end" dogma. By proposing a new form of inverse reinforcement learning model, we outline the platform design of machine artists, demonstrate its functions, and showcase some examples of technologies we have developed. We also provide a systematic exposition of the ecosystem for AI-based symbiotic art forms and communities, with an economic model built on NFT technology. Ethical issues for the development of machine artists are also discussed.

    Long Range Gene Regulation in Human Health and Disease

    The human genome is capable of producing a vast number of phenotypically diverse cells, with incredibly unique roles that contribute to tissue- and developmental-specificity. As such, precise transcriptional control during biological processes such as differentiation, development, and response to environmental stimuli is required. A complex variety of regulatory elements is responsible for this regulation, many of which are still being characterized within the non-coding regions of the genome. In this work, I first investigate the function of the transcription factor Activator Protein 1 (AP-1) in loop-based gene regulation in a model of monocyte-to-macrophage differentiation. I utilized genome editing techniques to interrogate the role of AP-1 binding at Interleukin 1 beta (IL1) enhancers, and preliminary results suggest a mechanism in which a DNA loop connects enhancer-bound AP-1 to IL1, influencing gene expression. These data provide new insights into the mechanisms behind transcriptional control and 3D chromatin structure. I next assay the impact of genetic risk variants on target genes in an ex vivo model of osteoarthritis (OA), in which human chondrocytes are treated with fibronectin fragment (FN-f). This model allows for the study of disease-associated variants in the correct cellular and biological context. We integrated hits from OA genome-wide association studies (GWAS), maps of 3D chromatin structure and enhancer activity in chondrocytes, and previously collected RNA-seq data from our OA model. This work revealed a set of putative causal OA variants and their potential target genes, including suppressor of cytokine signaling 2 (SOCS2). These results provide unique putative OA risk genes for further research and therapeutic development. Finally, I describe my generation of high quality transcriptional and genotype data for use in expression quantitative trait locus (eQTL) analyses in an OA phenotype.
These data will serve as the basis for QTL studies that assess both gene expression and chromatin accessibility. The overlap with OA GWAS hits will contribute to the identification of novel putative target genes, risk variants, and their mechanisms. Doctor of Philosophy
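The core association test in an eQTL scan regresses gene expression on allele dosage (0/1/2 copies of the alternate allele). A minimal sketch on synthetic data, illustrative only and omitting covariates, normalization, and multiple-testing control used in real analyses:

```python
import numpy as np

def eqtl_slope(dosage: np.ndarray, expression: np.ndarray) -> float:
    """Ordinary least-squares slope of expression on allele dosage,
    the per-allele effect size a basic eQTL test estimates."""
    X = np.column_stack([np.ones_like(dosage), dosage])  # intercept + dosage
    beta, *_ = np.linalg.lstsq(X, expression, rcond=None)
    return float(beta[1])

# Synthetic cohort: each alternate allele adds ~0.5 units of expression.
rng = np.random.default_rng(0)
dosage = rng.integers(0, 3, size=500).astype(float)
expression = 1.0 + 0.5 * dosage + rng.normal(scale=0.2, size=500)
slope = eqtl_slope(dosage, expression)
```

Overlapping significant slopes with GWAS hits, as described above, is what links risk variants to their putative target genes.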