
    CGAMES'2009


    A Modular and Open-Source Framework for Virtual Reality Visualisation and Interaction in Bioimaging

    Life science today involves computational analysis of a large amount and variety of data, such as volumetric data acquired by state-of-the-art microscopes, or mesh data derived from the analysis of such data or from simulations. The advent of new imaging technologies, such as lightsheet microscopy, has confronted users with an ever-growing amount of data, with terabytes of imaging data sometimes created within a single day. With gentler and higher-performance imaging now possible, the spatiotemporal complexity of the model systems or processes of interest is increasing as well. Visualisation is often the first step in making sense of this data, and a crucial part of building and debugging analysis pipelines. It is therefore important that visualisations can be quickly prototyped, as well as developed or embedded into full applications. In order to better judge spatiotemporal relationships, immersive hardware, such as Virtual or Augmented Reality (VR/AR) headsets and associated controllers, is becoming an invaluable tool. In this work we present scenery, a modular and extensible visualisation framework for the Java VM that can handle mesh and large volumetric data containing multiple views, timepoints, and color channels. scenery is free and open-source software, works on all major platforms, and uses the Vulkan or OpenGL rendering APIs. We introduce scenery's main features, and discuss its use with VR/AR hardware and in distributed rendering. In addition to the visualisation framework, we present a series of case studies where scenery provides tangible benefit in developmental and systems biology: with Bionic Tracking, we demonstrate a new technique for tracking cells in 4D volumetric datasets via tracking eye gaze in a virtual reality headset, with the potential to speed up manual tracking tasks by an order of magnitude. We further introduce ideas for moving towards virtual reality-based laser ablation and perform a user study in order to gain insight into performance, acceptance and issues when performing ablation tasks with virtual reality hardware in fast-developing specimens. To tame the amount of data originating from state-of-the-art volumetric microscopes, we present ideas on how to render the highly efficient Adaptive Particle Representation, and finally, we present sciview, an ImageJ2/Fiji plugin making the features of scenery available to a wider audience.
    Contents: Abstract; Foreword and Acknowledgements; Overview and Contributions.
    Part I - Introduction: 1 Fluorescence Microscopy; 2 Introduction to Visual Processing; 3 A Short Introduction to Cross Reality; 4 Eye Tracking and Gaze-based Interaction.
    Part II - VR and AR for Systems Biology: 5 scenery — VR/AR for Systems Biology; 6 Rendering; 7 Input Handling and Integration of External Hardware; 8 Distributed Rendering; 9 Miscellaneous Subsystems; 10 Future Development Directions.
    Part III - Case Studies: 11 Bionic Tracking: Using Eye Tracking for Cell Tracking; 12 Towards Interactive Virtual Reality Laser Ablation; 13 Rendering the Adaptive Particle Representation; 14 sciview — Integrating scenery into ImageJ2 & Fiji.
    Part IV - Conclusion: 15 Conclusions and Outlook.
    Backmatter & Appendices: A Questionnaire for VR Ablation User Study; B Full Correlations in VR Ablation Questionnaire; C Questionnaire for Bionic Tracking User Study; List of Tables; List of Figures; Bibliography; Selbstständigkeitserklärung.
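
    The Bionic Tracking case study above links gaze samples recorded in a VR headset to cell positions in a 4D volume. The following is a minimal, self-contained sketch of that idea only: it snaps one approximate gaze position per timepoint to the nearest bright voxel and chains the results into a track. The function names, the data layout and the local-maximum refinement are illustrative assumptions, not the implementation described in the thesis.

```python
import numpy as np

def refine_to_local_max(volume_t, gaze_zyx, radius=3):
    """Snap a gaze sample to the brightest voxel in a small neighbourhood.

    volume_t : 3D array (Z, Y, X) for one timepoint
    gaze_zyx : approximate (z, y, x) voxel position derived from the gaze ray
    """
    z, y, x = (int(round(c)) for c in gaze_zyx)
    zs = slice(max(z - radius, 0), z + radius + 1)
    ys = slice(max(y - radius, 0), y + radius + 1)
    xs = slice(max(x - radius, 0), x + radius + 1)
    patch = volume_t[zs, ys, xs]
    dz, dy, dx = np.unravel_index(np.argmax(patch), patch.shape)
    return (zs.start + dz, ys.start + dy, xs.start + dx)

def gaze_track(volume_4d, gaze_per_timepoint, radius=3):
    """Turn one gaze sample per timepoint into a cell track (list of voxel positions)."""
    track = []
    for t, gaze in enumerate(gaze_per_timepoint):
        track.append(refine_to_local_max(volume_4d[t], gaze, radius))
    return track

# Toy usage: a synthetic 4D dataset with one bright, slowly drifting blob.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, Z, Y, X = 5, 32, 64, 64
    vol = rng.poisson(2.0, size=(T, Z, Y, X)).astype(float)
    centre = np.array([16.0, 32.0, 32.0])
    gazes = []
    for t in range(T):
        c = centre + t                      # the blob drifts by one voxel per frame
        vol[t, int(c[0]), int(c[1]), int(c[2])] += 100.0
        gazes.append(c + rng.normal(0, 1.0, size=3))   # noisy "gaze" samples
    print(gaze_track(vol, gazes))
```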

    Role of the anisotropy in the interactions between nano- and micro-sized particles

    The present Thesis focuses on the thermodynamic and dynamic behaviour of anisotropically interacting colloids by means of theoretical and numerical techniques. Colloidal suspensions, i.e. micro- and nano-sized particles dispersed in a continuous phase, are a topic of great interest in several fields, including material science, soft matter and biophysics. Common in everyday life in the form of soap, milk, cream, etc., colloids have been used for decades as models for atomic and molecular systems, since both classes of systems share many features like critical phenomena, crystallisation and glass transition. Experimental investigation of colloidal systems is made easier by the large size of colloids, which makes it possible to employ visible light as an experimental probe to investigate these systems. Moreover, since the mass of the particles controls the timescales of the dynamics, relaxation times of colloidal suspensions, ranging from seconds to years and orders of magnitude larger than their atomic counterparts, are more easily experimentally accessible. By exploiting this intrinsic slowness with respect to molecular liquids, present-day experimental techniques make it possible to follow in time the trajectories of ensembles of particles with tools like confocal microscopy, thus effectively allowing the reconstruction of the whole phase-space trajectory of the system. In addition, it is also possible to manipulate single and multiple objects using techniques like optical tweezers, magnetic tweezers and atomic force microscopy. With single-molecule force spectroscopy one can arrange particles in ordered structures or measure properties of particles and aggregates, like stiffness or mechanical responses (as in pulling experiments on RNA and DNA strands). A remarkable difference between the molecular and the colloidal world is that in the former the interactions between the basic constituents are fixed by nature, while in the latter the effective potential between two particles can be controlled by accurately designing and synthesizing the building blocks or tuned by changing the properties of the solvent. In the last decade many new sophisticated techniques for particle synthesis have been developed and refined. These recent advances allow for the creation of an incredible variety of non-spherically, i.e. anisotropically, interacting building blocks. The anisotropy can arise from shape, surface patterning, the form of the interactions or a combination thereof. Examples are colloidal cubes, Janus particles, triblock Janus particles, patchy particles, magnetic spheres and many others. The recent blossoming of experimental, theoretical and numerical studies on the role of the anisotropy has highlighted the richness of phenomena that these systems exhibit. Relevant examples for the present Thesis are valence-limited building blocks, i.e. colloids with a maximum number of bound neighbours, and non-spherical particles with an aspect ratio, i.e. the ratio of the width of a particle to its height, significantly different from 1. The simplest example of valence-limited colloids is given by the so-called patchy particles: colloids decorated with attractive spots (patches) on the surface. If the width and the range of the patches are chosen in such a way that each patch can form no more than one bond, then the total number of bound first neighbours per particle, M, cannot exceed the number of patches. For particles interacting through short-ranged isotropic potentials, M ≈ 12.
It has been shown that changing the valence M has dramatic effects, both qualitative and quantitative, on the dynamic and thermodynamic properties of such systems. At high densities patchy colloids can self-assemble into a large variety of crystal structures, depending on valence, geometry and external parameters. We will mostly focus on low-density systems. The second class of systems pertinent to the present work comprises anisotropically shaped particles that, depending on the aspect ratio and the values of the external parameters, can exhibit liquid crystal phases which may display orientational long-range order. Nematic phases, in which there is no translational order, smectic phases, in which particles are ordered in layers and thus exhibit translational order in one dimension, and columnar phases, in which particles self-assemble into cylindrical aggregates that can in turn become nematic or form two-dimensional lattices, do not exist in isotropic systems, since shape anisotropy is a prerequisite for the breaking of orientational symmetry. Liquid crystals, discovered at the end of the 19th century, have been thoroughly investigated for decades, leading to technological breakthroughs like liquid crystal displays (LCDs). Recently it has been suggested that liquid crystal phases occurring in dense solutions of short DNA double strands could have played a role in the prebiotic chemical generation of complementary H-bonded molecular assemblies. The main goal of the present Thesis is to study the structural, thermodynamic and, to a lesser extent, dynamic properties of systems interacting through anisotropic potentials at low densities and temperatures. In particular, we focus on the low-density phase behaviour of valence-limited systems. We use a varied approach, comprising state-of-the-art Monte Carlo and Molecular Dynamics techniques and theoretical approaches, to analyse and shed some light on the effect of the anisotropy on the phase diagram and on the dynamics of such systems. As the effect of the valence on the phase diagram plays a major role in the models investigated throughout this Thesis, each Chapter is devoted to the study of the dynamics and thermodynamics of systems having a fixed or effective maximum valence M. In recent years much effort has been devoted to the study of end-to-end stacking interactions between different strands of nucleic acids, which play an important role in both physical and biological applications of DNA and RNA. In Chapter 1, building on the experimental work of Bellini et al., we make use of a recently developed theoretical framework to tackle the problem of the isotropic-nematic phase coexistence in solutions of short DNA duplexes (DNADs). We compare the parameter-free theoretical predictions with results from large-scale numerical simulations on GPUs of a coarse-grained realistic model and find good quantitative agreement at low concentrations. We then predict the phase boundaries for different DNAD lengths and compare the results with experimental findings. In Chapter 2 we investigate the structural and thermodynamic properties of systems having M = 2, that is, systems that undergo an extensive formation of linear structures as temperature is lowered. We focus on bi-functional patchy particles whose interaction details are chosen, by analysing the outcomes of the simulations carried out in Chapter 3, to qualitatively mimic the behaviour of the low-density, low-temperature dipolar hard sphere (DHS) model.
In particular, we are interested in the interplay between chains and rings in equilibrium polymerization processes in a region of the phase diagram where the formation of the latter is favoured. The very good quantitative agreement found by comparing numerical results with theoretical, parameter-free predictions calls for an extension of the theory with the inclusion of branching, in order to understand how the presence of rings affects the phase separation. Chapter 3 is devoted to the investigation of the phase behaviour of dipolar fluids, i.e. systems interacting mainly through dipole-dipole potentials. For spheres, the lowest-energy configuration is the nose-to-tail contact geometry, and hence the ground state is an infinite chain or ring, as in regular M = 2 systems. At finite temperatures, on the other hand, thermal fluctuations allow for the appearance of defects like dangling ends and chain branching which, in the language of this Thesis, makes for a temperature-dependent valence. This general mechanism, under some specific conditions, can lead to a very peculiar phase separation, driven by a balance between these topological defects rather than by the energy/entropy competition usually responsible for regular gas-liquid phase transitions. This topological phase transition has recently been observed in a model system of patchy particles, but it is unclear whether such a mechanism still holds in dipolar fluids in general and in the DHS model in particular. We focus on the DHS model, whose phase behaviour at low densities and temperatures has been studied for decades but still remains largely unknown. In particular, we look for the gas-liquid critical point by means of state-of-the-art Monte Carlo simulations in a region where it has long been thought to be. We find no evidence of a phase transition and we speculate that this is due to an abundance of rings, providing a remarkable example of phase separation suppressed by self-assembly. In Chapter 4 we study the dynamics of tetravalent patchy particles in the optimal network density region. For this fixed value of density the system is able to form a fully connected random network, i.e. an ideal gel. Indeed, as the temperature is lowered, a percolating network forms and the dynamics slows down. Although the observed dynamical arrest is different from the glass case, where excluded volume interactions are dominant, the decay of the self- and collective correlation functions of the resulting fluid bears similarities with that observed in glassy systems. Remarkably, comparing the characteristic decay times of density-density correlation functions with the average bond lifetime, we find that only at very low T does the decay of the density fluctuations require the breakage of bonds. In Chapter 5 we introduce DNA as a building block that can be used to rationally design novel, self-assembling materials with tunable properties. In this Chapter, we study the phase behaviour and the dynamics of four-armed DNA constructs at low densities. We use the coarse-grained, realistic DNA model employed in Chapter 1 and state-of-the-art simulation techniques, as presented in Chapter 6, to investigate systems composed of thousands of nucleotides undergoing a two-step self-assembly process, and we quantitatively compare the outcome with experimental results obtained for a very similar system. In Chapter 6 we introduce Graphics Processing Units (GPUs) as valuable tools for present-day numerical investigations.
We outline both the architecture of NVIDIA GPUs and NVIDIA CUDA, the software layer built on top of the hardware that is required to program these devices. We then present the techniques employed to write an efficient, general Molecular Dynamics code and compare its performance with that of a regular CPU code. The observed performance boost allows us to tackle the analysis of the dynamics and thermodynamics of very large systems without having to resort to massive CPU clusters (see Chapters 1, 4 and 5). Our work shows that it is possible to predict the location of thermodynamic and dynamic loci of very complicated objects by means of numerical simulations. Since the available computational power keeps increasing at a steady pace, it will soon be possible to repeat the pioneering study presented in this Thesis on a more automated basis and for even more complicated systems. For example, it will be possible to directly study the isotropic-nematic phase transition of short DNA duplexes investigated in Chapter 1 or design self-assembling DNA strands able to reproduce the behaviour of the patchy colloids or dipolar fluids studied throughout this Thesis. Being able to carefully design the building blocks and then predict beforehand the properties of a compound will greatly simplify the process of synthesising tomorrow's materials.
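
    The valence-limited patchy particles described in this abstract are commonly modelled with pair potentials of the Kern-Frenkel type: a hard core plus a short-ranged attraction that is only active when a patch on each particle points towards the other, so that the number of bonds per particle is limited by the number of patches. The following is a minimal, hedged sketch of such an interaction in Python; the parameter values, the single-bond-per-patch convention and the function name are illustrative assumptions and do not reproduce the specific models used in the Thesis.

```python
import numpy as np

def kern_frenkel_energy(r_ij, patches_i, patches_j,
                        sigma=1.0, delta=0.2, cos_theta_max=0.95, eps=1.0):
    """Pair energy of two patchy hard spheres (Kern-Frenkel-like model).

    r_ij      : vector from particle i to particle j
    patches_i : unit vectors of the patches on particle i, shape (M, 3)
    patches_j : unit vectors of the patches on particle j, shape (M, 3)
    Returns np.inf for overlapping cores, -eps if some patch pair is mutually
    aligned while the particles are within the attractive range, 0 otherwise.
    """
    r = np.linalg.norm(r_ij)
    if r < sigma:
        return np.inf            # hard-core overlap
    if r > sigma + delta:
        return 0.0               # outside the short attraction range
    r_hat = r_ij / r
    for p_i in patches_i:
        for p_j in patches_j:
            # the patch on i must point towards j, and the patch on j towards i
            if np.dot(p_i, r_hat) > cos_theta_max and np.dot(p_j, -r_hat) > cos_theta_max:
                return -eps      # one bond formed
    return 0.0

# Toy usage: two divalent (M = 2) particles bonding head-on along x.
patches = np.array([[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0]])
r_ij = np.array([1.1, 0.0, 0.0])
print(kern_frenkel_energy(r_ij, patches, patches))   # -> -1.0 (bonded)
```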

    GPU Computing for Cognitive Robotics

    This thesis presents the first investigation of the impact of GPU computing on cognitive robotics by providing a series of novel experiments in the area of action and language acquisition in humanoid robots and computer vision. Cognitive robotics is concerned with endowing robots with high-level cognitive capabilities to enable the achievement of complex goals in complex environments. Reaching the ultimate goal of developing cognitive robots will require tremendous amounts of computational power, which was until recently provided mostly by standard CPU processors. CPU cores are optimised for serial code execution at the expense of parallel execution, which renders them relatively inefficient for high-performance computing applications. The ever-increasing market demand for high-performance, real-time 3D graphics has evolved the GPU into a highly parallel, multithreaded, many-core processor with extraordinary computational power and very high memory bandwidth. These vast computational resources of modern GPUs can now be used by most cognitive robotics models, as they tend to be inherently parallel. Various interesting and insightful cognitive models have been developed and have addressed important scientific questions concerning action-language acquisition and computer vision. While they have provided us with important scientific insights, their complexity and application have not improved much in recent years. The experimental tasks as well as the scale of these models are often minimised to avoid excessive training times, which grow exponentially with the number of neurons and the amount of training data. This impedes further progress and the development of complex neurocontrollers that would be able to take cognitive robotics research a step closer to reaching the ultimate goal of creating intelligent machines. This thesis presents several cases where the application of GPU computing to cognitive robotics algorithms resulted in the development of large-scale neurocontrollers of previously unseen complexity, enabling the novel experiments described herein. Funded by the European Commission Seventh Framework Programme.
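
    The argument above, that neural cognitive models are inherently parallel and therefore map well onto GPUs, comes down to the fact that their core workload is dense linear algebra. The sketch below illustrates only that point: a batched two-layer forward pass is expressed as matrix products, which a GPU array library can execute in parallel. CuPy is used here as a generic stand-in for GPU computing (it mirrors the NumPy API); the layer sizes and the fallback logic are illustrative assumptions, not the thesis's neurocontrollers.

```python
import numpy as np

try:
    import cupy as xp          # GPU arrays; fall back to NumPy if CuPy is absent
    on_gpu = True
except ImportError:
    xp = np
    on_gpu = False

def forward(x, w1, w2):
    """One hidden-layer forward pass: the work is two dense matrix products,
    which is exactly the kind of data-parallel workload a GPU accelerates."""
    h = xp.tanh(x @ w1)
    return xp.tanh(h @ w2)

rng = np.random.default_rng(0)
batch, n_in, n_hidden, n_out = 256, 4096, 4096, 512
x  = xp.asarray(rng.standard_normal((batch, n_in), dtype=np.float32))
w1 = xp.asarray(rng.standard_normal((n_in, n_hidden), dtype=np.float32))
w2 = xp.asarray(rng.standard_normal((n_hidden, n_out), dtype=np.float32))

y = forward(x, w1, w2)
print("backend:", "CuPy/GPU" if on_gpu else "NumPy/CPU", "output shape:", y.shape)
```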

    Evolutionary design of deep neural networks

    International Mention in the doctoral degree (Mención Internacional en el título de doctor). For three decades, neuroevolution has applied evolutionary computation to the optimization of the topology of artificial neural networks, with most works focusing on very simple architectures. However, times have changed, and nowadays convolutional neural networks are the industry and academia standard for solving a variety of problems, many of which remained unsolved before the advent of this kind of network. Convolutional neural networks involve complex topologies, and the manual design of these topologies for solving a problem at hand is expensive and inefficient. In this thesis, our aim is to use neuroevolution in order to evolve the architecture of convolutional neural networks. To do so, we have decided to try two different techniques: genetic algorithms and grammatical evolution. We have implemented a niching scheme for preserving genetic diversity, in order to ease the construction of ensembles of neural networks. These techniques have been validated against the MNIST database for handwritten digit recognition, achieving a test error rate of 0.28%, and the OPPORTUNITY data set for human activity recognition, attaining an F1 score of 0.9275. Both results have proven very competitive when compared with the state of the art. Also, in all cases, ensembles have proven to perform better than individual models. Later, the topologies learned for MNIST were tested on EMNIST, a database introduced in 2017 which includes more samples and a set of letters for character recognition. Results have shown that the topologies optimized for MNIST perform well on EMNIST, proving that architectures can be reused across domains with similar characteristics. In summary, neuroevolution is an effective approach for automatically designing topologies for convolutional neural networks. However, it remains a largely unexplored field due to hardware limitations. Current advances, however, should fuel the emergence of this field, and further research should start today. This Ph.D. dissertation has been partially supported by the Spanish Ministry of Education, Culture and Sports under FPU fellowship with identifier FPU13/03917. This research stay has been partially co-funded by the Spanish Ministry of Education, Culture and Sports under FPU short stay grant with identifier EST15/00260. Programa Oficial de Doctorado en Ciencia y Tecnología Informática. Committee - President: María Araceli Sanchís de Miguel; Secretary: Francisco Javier Segovia Pérez; Member: Simon Luca
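
    As a concrete illustration of the kind of search loop the abstract describes, the sketch below evolves a CNN topology encoded as a list of (filters, kernel size) layer descriptors with selection, crossover and mutation. The encoding, the operators and especially the surrogate fitness function are illustrative assumptions: the actual work trains each candidate network (on MNIST, OPPORTUNITY or EMNIST) to obtain its fitness and also uses grammatical evolution and a niching scheme, none of which is reproduced here.

```python
import random

# A genome is a list of conv-layer descriptors: (n_filters, kernel_size).
FILTER_CHOICES = [8, 16, 32, 64, 128]
KERNEL_CHOICES = [3, 5, 7]

def random_genome(max_layers=4):
    n = random.randint(1, max_layers)
    return [(random.choice(FILTER_CHOICES), random.choice(KERNEL_CHOICES)) for _ in range(n)]

def mutate(genome, p=0.3):
    out = []
    for filters, kernel in genome:
        if random.random() < p:
            filters = random.choice(FILTER_CHOICES)
        if random.random() < p:
            kernel = random.choice(KERNEL_CHOICES)
        out.append((filters, kernel))
    return out

def crossover(a, b):
    cut = random.randint(1, min(len(a), len(b)))
    return a[:cut] + b[cut:]

def fitness(genome):
    """Placeholder: in the real setting this would build the CNN, train it,
    and return validation accuracy. Here we simply reward moderate capacity."""
    capacity = sum(f * k * k for f, k in genome)
    return -abs(capacity - 2000)   # peak fitness near an arbitrary target capacity

def evolve(pop_size=20, generations=30):
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[: pop_size // 4]                 # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children                   # elitism plus offspring
    return max(population, key=fitness)

if __name__ == "__main__":
    random.seed(42)
    best = evolve()
    print("best topology:", best, "fitness:", fitness(best))
```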

    Towards Smarter Fluorescence Microscopy: Enabling Adaptive Acquisition Strategies With Optimized Photon Budget

    Fluorescence microscopy is an invaluable technique for studying the intricate process of organism development. The acquisition process, however, is associated with a fundamental trade-off between the quality and reliability of the acquired data. On one hand, the goal of capturing development in its entirety, often across multiple spatial and temporal scales, requires extended acquisition periods. On the other hand, the high doses of light required for such experiments are harmful to living samples and can introduce non-physiological artifacts into the normal course of development. Conventionally, a single set of acquisition parameters is chosen at the beginning of the acquisition and constitutes the experimenter's best guess of the overall optimal configuration within the aforementioned trade-off. In the paradigm of adaptive microscopy, in turn, one aims at achieving a more efficient photon budget distribution by dynamically adjusting the acquisition parameters to the changing properties of the sample. In this thesis, I explore the principles of adaptive microscopy and propose a range of improvements for two real imaging scenarios. Chapter 2 summarizes the design and implementation of an adaptive pipeline for efficient observation of the asymmetrically dividing neurogenic progenitors in the zebrafish retina. In the described approach the fast and expensive acquisition mode is automatically activated only when mitotic cells are present in the field of view. The method illustrates the benefits of adaptive acquisition in the common scenario where individual events of interest are sparsely distributed throughout the duration of the acquisition. Chapter 3 focuses on computational aspects of segmentation-based adaptive schemes for efficient acquisition of the developing Drosophila pupal wing. Fast sample segmentation is shown to provide a valuable output for the accurate evaluation of the sample morphology and dynamics in real time. This knowledge proves instrumental for adjusting the acquisition parameters to the current properties of the sample and reducing the required photon budget with minimal effect on the quality of the acquired data. Chapter 4 addresses the generation of synthetic training data for learning-based methods in bioimage analysis, making them more practical and accessible for smart microscopy pipelines. State-of-the-art deep learning models trained exclusively on the generated synthetic data are shown to yield powerful predictions when applied to real microscopy images. Finally, an in-depth evaluation of the segmentation quality of both real and synthetic data-based models illustrates the important practical aspects of the approach and outlines directions for further research.
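
    The event-triggered strategy of Chapter 2, switching to the fast and expensive acquisition mode only while mitotic cells are visible, can be summarised as a simple control loop. The sketch below is a minimal illustration of that logic under stated assumptions: the brightness-threshold detector, the cooldown behaviour and the microscope interface (acquire_frame, set_mode) are hypothetical placeholders rather than the actual pipeline, which uses a dedicated detector and real instrument control.

```python
import numpy as np

def mitosis_detected(frame, intensity_threshold=200, min_pixels=50):
    """Hypothetical event detector: flag a frame if enough bright pixels are present.
    In a real pipeline this would be a dedicated mitosis classifier or segmentation."""
    return int((frame > intensity_threshold).sum()) >= min_pixels

def adaptive_acquisition(acquire_frame, set_mode, n_timepoints, cooldown=3):
    """Event-triggered switching between a gentle 'slow' mode and a costly 'fast' mode.

    acquire_frame : callable returning the current image (2D array)
    set_mode      : callable accepting 'slow' or 'fast' (stand-in for microscope control)
    cooldown      : timepoints to stay in fast mode after the last detection
    """
    remaining_fast = 0
    for _ in range(n_timepoints):
        set_mode("fast" if remaining_fast > 0 else "slow")
        frame = acquire_frame()
        if mitosis_detected(frame):
            remaining_fast = cooldown        # keep imaging fast while the event lasts
        else:
            remaining_fast = max(0, remaining_fast - 1)

# Toy usage with a simulated microscope: a bright event appears at timepoints 5-7.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    t = {"now": 0}
    def acquire_frame():
        frame = rng.poisson(50, size=(128, 128)).astype(float)
        if 5 <= t["now"] <= 7:
            frame[:10, :10] += 500           # a bright "mitotic" region
        t["now"] += 1
        return frame
    def set_mode(mode):
        print(f"t={t['now']:2d} mode={mode}")
    adaptive_acquisition(acquire_frame, set_mode, n_timepoints=12)
```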

    Artificial neural networks for scattered light imaging

    Image formation is one of the most important aspects of our everyday life. Conventional optical imaging (and sensing) exploits light that reaches the detection system from a target or a scene of interest mainly unscattered. However, there are many practical situations in which unscattered light may be undetectable, insufficient or misrepresented. Nonetheless, if the considered system allows it, it may still be possible to exploit scattered light in order to extract relevant information. Problems arise from the fact that, in these cases, light propagation may undergo severe alterations, thus leading to challenging, and sometimes ill-posed, problems. In this thesis, two main scenarios involving scattered light are studied and addressed by means of artificial neural networks. In recent years, these powerful data-driven algorithms have been extensively employed in many scientific contexts for their ability to solve even complex problems implicitly. Precisely this characteristic is exploited, in the present work, in a non-line-of-sight scenario in order to simultaneously locate and identify people hidden behind a corner. Moreover, a complex-valued neural network algorithm is implemented and applied to the problem of transmission of images through a multimode fibre, demonstrating high-speed and high-resolution image restoration even without the need for any phase measurements. Finally, thanks to its formulation based on the physics of multimode fibres, a direct comparison is made between this algorithm and a more standard approach.
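
    The complex-valued network mentioned above operates on fields rather than intensities: weights and activations are complex numbers, and only the output is reduced to an intensity, so no phase measurement of the restored image is required. The sketch below shows just that building block, a complex dense layer with a magnitude-based, phase-preserving nonlinearity and a squared-modulus readout. The layer sizes, the specific nonlinearity and the random initialisation are illustrative assumptions, not the architecture used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def complex_dense(x, w, b):
    """A fully connected layer with complex weights: y = x @ w + b (all complex)."""
    return x @ w + b

def forward(speckle, w1, b1, w2, b2):
    """Map a complex speckle field to a real intensity image."""
    h = complex_dense(speckle, w1, b1)
    # magnitude-squashing nonlinearity that leaves the phase of each unit untouched
    h = h * np.tanh(np.abs(h)) / (np.abs(h) + 1e-12)
    y = complex_dense(h, w2, b2)
    return np.abs(y) ** 2          # predicted intensity (phase discarded at the output)

def cplx(shape, scale=0.1):
    """Random complex array used as a toy weight/bias initialiser."""
    return scale * (rng.standard_normal(shape) + 1j * rng.standard_normal(shape))

n_in, n_hidden, n_out = 256, 128, 64   # e.g. flattened speckle -> restored image pixels
w1, b1 = cplx((n_in, n_hidden)), cplx((n_hidden,))
w2, b2 = cplx((n_hidden, n_out)), cplx((n_out,))

speckle = cplx((1, n_in), scale=1.0)   # one flattened complex speckle pattern
print(forward(speckle, w1, b1, w2, b2).shape)   # -> (1, 64)
```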

    Bioinformatics

    This book is divided into different research areas relevant to Bioinformatics, such as biological networks, next-generation sequencing, high-performance computing, molecular modeling, structural bioinformatics and intelligent data analysis. Each book section introduces the basic concepts and then explains their application to problems of great relevance, so both novice and expert readers can benefit from the information and research works presented here.