
    Computational Analysis of Code-Multiplexed Coulter Sensor Signals

    Lab-on-a-chip (LoC) technology has been applied in a wide range of settings because of its capability to perform accurate microscale manipulation of cells for point-of-care diagnostics. However, the outcome of such manipulation is typically not directly readable from the LoC device itself and still requires post-inspection of the chip using traditional laboratory equipment such as a microscope, negating the advantages of LoC technology. To resolve this dilemma, my doctoral research focuses on developing portable and disposable biosensors for interfacing with and digitizing the information from an LoC system. Our sensor platform, integrated with multiple microfluidic impedance sensors, electrically monitors and tracks manipulated cells on an LoC device. The platform compresses the information from all of its sensors into a single one-dimensional electrical waveform, so further signal processing is required to recover the readout of each sensor and extract information about the detected cells. Building on this capability, we have also introduced integrated microfluidic cytometers to characterize cell properties such as surface expression and mechanical properties.
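
    The abstract does not spell out the recovery step, but for a code-multiplexed sensor array one common approach is matched filtering: correlate the combined waveform with each sensor's code template and look for peaks. The sketch below is a minimal illustration of that idea, with made-up codes, event times, and thresholds rather than the platform's actual design.

```python
import numpy as np

# Hedged sketch: recover per-sensor events from a code-multiplexed 1-D waveform
# by cross-correlating it with each sensor's code template. Codes, event times,
# and the detection threshold are illustrative placeholders.

rng = np.random.default_rng(0)

n_sensors, code_len = 4, 31
codes = rng.choice([-1.0, 1.0], size=(n_sensors, code_len))  # pseudo-random bipolar codes

# Simulate a combined waveform: each "cell transit" stamps its sensor's code.
waveform = np.zeros(2000)
events = [(0, 300), (2, 800), (1, 1400)]  # (sensor index, start sample)
for s, t0 in events:
    waveform[t0:t0 + code_len] += codes[s]
waveform += 0.2 * rng.standard_normal(waveform.size)  # measurement noise

# Matched-filter decoding: a correlation peak indicates which sensor fired and when.
for s in range(n_sensors):
    corr = np.correlate(waveform, codes[s], mode="valid")
    peak = int(np.argmax(corr))
    if corr[peak] > 0.6 * code_len:  # crude detection threshold
        print(f"sensor {s}: event near sample {peak} (score {corr[peak]:.1f})")
```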

    Text Similarity Between Concepts Extracted from Source Code and Documentation

    Context: The constant evolution of software systems often results in their documentation losing sync with the content of the source code. The traceability research field has long aimed to recover links between code and documentation when the two fall out of sync. Objective: The aim of this paper is to compare the concepts contained within the source code of a system with those extracted from its documentation, in order to detect how similar these two sets are. If the sets are vastly different, the difference might indicate considerable ageing of the documentation and a need to update it. Methods: We reduce the source code of 50 software systems to sets of key terms, one per system, each capturing the concepts of the system sampled. At the same time, we reduce the documentation of each system to another set of key terms. We then use four different approaches for set comparison to measure how similar the sets are. Results: Using the well-known Jaccard index as the benchmark for the comparisons, we found that the cosine distance has excellent comparative power, with results depending on the pre-training of the machine learning model. In particular, the SpaCy and FastText embeddings yield similarity scores of up to 80% and 90%, respectively. Conclusion: For most of the sampled systems, the source code and the documentation tend to contain very similar concepts. Given the accuracy of one pre-trained model (e.g., FastText), it also becomes evident that a few systems show a measurable drift between the concepts contained in the documentation and in the source code.
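
    As a rough illustration of the two families of measures compared in the paper, the sketch below computes a Jaccard index over two small key-term sets and a cosine similarity over their averaged word vectors. The term sets and the spaCy model name en_core_web_md are illustrative assumptions, not artifacts from the study.

```python
import numpy as np
import spacy

# Hedged sketch: compare a key-term set extracted from source code with one
# extracted from documentation, using (a) the Jaccard index over the raw sets
# and (b) cosine similarity over averaged spaCy word vectors.
# "en_core_web_md" is an assumed model name; any pipeline with vectors works.

code_terms = {"parser", "token", "compile", "syntax", "error", "buffer"}
doc_terms = {"parser", "token", "grammar", "syntax", "diagnostics"}

# (a) Jaccard index: size of the intersection over size of the union.
jaccard = len(code_terms & doc_terms) / len(code_terms | doc_terms)

# (b) Cosine similarity between the mean embedding vectors of the two sets.
nlp = spacy.load("en_core_web_md")
vec_code = np.mean([nlp(t).vector for t in code_terms], axis=0)
vec_doc = np.mean([nlp(t).vector for t in doc_terms], axis=0)
cosine = float(np.dot(vec_code, vec_doc) /
               (np.linalg.norm(vec_code) * np.linalg.norm(vec_doc)))

print(f"Jaccard: {jaccard:.2f}  cosine: {cosine:.2f}")
```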

    Doctor of Philosophy

    Digital image processing has wide-ranging applications in combustion research. The analysis of digital images is used at practically every scale of combustion study, from individual atoms to the diagnosis and control of large-scale combustors. Digital image processing is one of the fastest-growing scientific areas in the world today. From being able to reconstruct low-resolution grayscale images from transmitted signals, its capabilities have grown to enabling machines to carry out tasks that would normally require human vision, perception, and reasoning. Certain applications in combustion science benefit greatly from recent advances in image processing. Unfortunately, since the two fields - combustion and image processing research - stand relatively far from each other, the most recent results are often not well enough known in the areas where they could be applied with great benefit. This work aims to improve the accuracy and reliability of certain measurements in combustion science by selecting, adapting, and implementing appropriate techniques originally developed in the image processing area. A number of specific applications were chosen that cover a wide range of physical scales of combustion phenomena, and specific image processing methodologies were proposed to improve or enable measurements in studying such phenomena. The selected applications include the description and quantification of combustion-derived carbon nanostructure, the three-dimensional optical diagnostics of combusting pulverized-coal particles, and the optical flow velocimetry and quantitative radiation imaging of a pilot-scale oxy-coal flame. In the field of structural analysis of soot, new structural parameters were derived and the extraction and fidelity of existing ones were improved. In the field of pulverized-coal combustion, the developed methodologies allow for studying the detailed mechanisms of particle combustion in three dimensions. At larger scales, the simultaneous measurement of flame velocity, spectral radiation, and pyrometric properties was realized.

    Advances in Condition Monitoring, Optimization and Control for Complex Industrial Processes

    The book documents 25 papers collected for the Special Issue “Advances in Condition Monitoring, Optimization and Control for Complex Industrial Processes”, highlighting recent research trends in complex industrial processes. The book aims to stimulate the research field and to be of benefit to readers from both academic institutions and industrial sectors.

    Fast and interactive ray-based rendering

    This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London. Despite their age, ray-based rendering methods are still a very active field of research with many challenges when it comes to interactive visualization. In this thesis, we present our work on Guided High-Quality Rendering, Foveated Ray Tracing for Head Mounted Displays, and Hash-based Hierarchical Caching and Layered Filtering. Our system for Guided High-Quality Rendering allows for guiding the sampling rate of ray-based rendering methods by a user-specified Region of Interest (RoI). We propose two interaction methods for setting such an RoI when using a large display system and a desktop display, respectively. This makes it possible to compute images with a heterogeneous sample distribution across the image plane. Using such a non-uniform sample distribution, the rendering performance inside the RoI can be significantly improved in order to judge specific image features. However, a modified scheduling method is required to achieve sufficient performance. To solve this issue, we developed a scheduling method based on sparse matrix compression, which has shown significant improvements in our benchmarks. By filtering the sparsely sampled image appropriately, large brightness variations in areas outside the RoI are avoided and the overall image brightness is similar to the ground truth early in the rendering process. When using ray-based methods in a VR environment on head-mounted display devices, it is crucial to provide sufficient frame rates in order to reduce motion sickness. This is a challenging task when moving through highly complex environments and the full image has to be rendered for each frame. With our foveated rendering system, we provide a perception-based method for adjusting the sample density to the user's gaze, measured with an eye tracker integrated into the HMD. In order to avoid disturbances through visual artifacts from low sampling rates, we introduce a reprojection-based rendering pipeline that allows for fast rendering and temporal accumulation of the sparsely placed samples. In our user study, we analyse the impact our system has on visual quality. We then take a closer look at the recorded eye tracking data in order to determine tracking accuracy and connections between different fixation modes and perceived quality, leading to surprising insights. For previewing the global illumination of a scene interactively while allowing for free scene exploration, we present a hash-based caching system. Building upon the concept of linkless octrees, which allow for constant-time queries of spatial data, our framework is suited for rendering such previews of static scenes. Non-diffuse surfaces are supported by our hybrid reconstruction approach, which allows for the visualization of view-dependent effects. In addition to our caching and reconstruction technique, we introduce a novel layered filtering framework, acting as a hybrid method between path space and image space filtering, that allows for the high-quality denoising of non-diffuse materials. Also, being designed as a framework instead of a concrete filtering method, it is possible to adapt most available denoising methods to our layered approach instead of relying only on the filtering of primary hit points.
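
    To make the foveated sampling idea concrete, the sketch below builds a gaze-contingent sampling mask in which the probability of tracing a primary ray falls off with distance from the tracked gaze point. The resolution, falloff curve, and peripheral floor are illustrative assumptions, not the thesis's actual perception model or rendering pipeline.

```python
import numpy as np

# Hedged sketch: a gaze-contingent sampling mask for foveated ray tracing.
# Pixels near the gaze point are sampled densely; density falls off with
# eccentricity. All constants below are illustrative assumptions.

height, width = 1080, 1200           # one eye of an assumed HMD panel
gaze = np.array([600.0, 540.0])      # gaze position in pixels (from the eye tracker)

ys, xs = np.mgrid[0:height, 0:width]
dist = np.hypot(xs - gaze[0], ys - gaze[1])           # eccentricity in pixels
density = np.clip(np.exp(-dist / 300.0), 0.05, 1.0)   # keep a 5% floor in the periphery

rng = np.random.default_rng(1)
sample_mask = rng.random((height, width)) < density   # True where a primary ray is traced

print(f"rays this frame: {sample_mask.sum()} of {height * width} pixels "
      f"({100 * sample_mask.mean():.1f}%)")
```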

    Bayesian Modeling and Estimation Techniques for the Analysis of Neuroimaging Data

    Brain function is hallmarked by its adaptivity and robustness, arising from underlying neural activity that admits well-structured representations in the temporal, spatial, or spectral domains. While neuroimaging techniques such as electroencephalography (EEG) and magnetoencephalography (MEG) can record rapid neural dynamics at high temporal resolution, they face several signal processing challenges that hinder their full utilization in capturing these characteristics of neural activity. The objective of this dissertation is to devise statistical modeling and estimation methodologies that account for the dynamic and structured representations of neural activity and to demonstrate their utility in application to experimentally recorded data. The first part of this dissertation concerns spectral analysis of neural data. In order to capture the non-stationarities involved in neural oscillations, we integrate multitaper spectral analysis and state-space modeling in a Bayesian estimation setting. We also present a multitaper spectral analysis method tailored for spike trains that captures the non-linearities involved in neuronal spiking. We apply our proposed algorithms to both EEG and spike recordings, which reveal significant gains in spectral resolution and noise reduction. In the second part, we investigate the cortical encoding of speech as manifested in MEG responses. These responses are often modeled via a linear filter, referred to as the temporal response function (TRF). While the TRFs estimated from sensor-level MEG data have been widely studied, their cortical origins are not fully understood. We define the new notion of Neuro-Current Response Functions (NCRFs) for simultaneously determining the TRFs and their cortical distribution. We develop an efficient algorithm for NCRF estimation and apply it to MEG data, which provides new insights into the cortical dynamics underlying speech processing. Finally, in the third part, we consider the inference of Granger causal (GC) influences in high-dimensional time series models with sparse coupling. We consider a canonical sparse bivariate autoregressive model and define a new statistic for inferring GC influences, which we refer to as the LASSO-based Granger Causal (LGC) statistic. We establish non-asymptotic guarantees for robust identification of GC influences via the LGC statistic. Applications to simulated and real data demonstrate the utility of the LGC statistic in robust GC identification.
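
    The classical ingredient behind the first part is the multitaper estimator itself: average the periodograms of several DPSS-tapered copies of the signal to trade a small amount of spectral resolution for a large variance reduction. The sketch below shows that baseline estimator on a synthetic 10 Hz oscillation; it does not include the dissertation's Bayesian state-space extension, and the sampling rate and taper settings are illustrative.

```python
import numpy as np
from scipy.signal.windows import dpss

# Hedged sketch: a plain multitaper (DPSS) power spectral density estimate,
# the classical building block that the dissertation combines with state-space
# modeling. The signal and parameters below are illustrative.

fs = 250.0                       # assumed EEG sampling rate (Hz)
t = np.arange(0, 4.0, 1.0 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(0).standard_normal(t.size)

NW, K = 4.0, 7                   # time-bandwidth product and number of tapers (2*NW - 1)
tapers = dpss(t.size, NW, K)     # shape (K, N)

# Average the periodograms of the K tapered copies of the signal.
spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2 / fs
psd = spectra.mean(axis=0)
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
print("peak frequency:", freqs[np.argmax(psd)], "Hz")   # expected near 10 Hz
```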

    Image Processing and Simulation Toolboxes of Microscopy Images of Bacterial Cells

    Recent advances in microscopy imaging technology have allowed the characterization of the dynamics of cellular processes at the single-cell and single-molecule level. Particularly in bacterial cell studies, using E. coli as a case study, these techniques have been used to detect and track internal cell structures such as the nucleoid and the cell wall, and fluorescently tagged molecular aggregates such as FtsZ proteins, Min system proteins, inclusion bodies, and the different types of RNA molecules. These studies have been performed using multi-modal, multi-process, time-lapse microscopy, producing both morphological and functional images. To facilitate the finding of relationships between cellular processes, from small scale, such as gene expression, to large scale, such as cell division, an image processing toolbox was implemented with several automatic and/or manual features, such as cell segmentation and tracking, intra- and inter-modal image registration, and the detection, counting, and characterization of several cellular components. Two segmentation algorithms for cellular components were implemented, the first based on the Gaussian distribution and the second based on thresholding and morphological structuring functions. These algorithms were used to segment nucleoids and to identify the different stages of FtsZ ring formation (allied with the use of machine learning algorithms), which made it possible to understand how temperature influences the physical properties of the nucleoid and to correlate those properties with the exclusion of protein aggregates from the center of the cell. Another study used the segmentation algorithms to investigate how temperature affects the formation of the FtsZ ring. The validation of the developed image processing methods and techniques has been based on benchmark databases manually produced and curated by experts. When dealing with thousands of cells and hundreds of images, these manually generated datasets can become the biggest cost in a research project. To speed up these studies and lower the cost of the manual labour, an image simulation toolbox was implemented to generate realistic artificial images. The proposed image simulation toolbox can generate biologically inspired objects that mimic the spatial and temporal organization of bacterial cells and their processes, such as cell growth, division, and motility, as well as cell morphology (shape, size, and cluster organization). The image simulation toolbox was shown to be useful in the validation of three cell tracking algorithms: simple nearest-neighbour, nearest-neighbour with morphology, and the DBSCAN cluster identification algorithm. It was shown that simple nearest-neighbour tracking still performed with great reliability when simulating objects with small velocities, while the other algorithms performed better for higher velocities and when larger clusters were present.
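
    For a flavour of the thresholding-and-morphology style of segmentation described above, the sketch below runs an Otsu threshold followed by morphological clean-up and connected-component measurement with scikit-image. The input file name and the size and radius parameters are illustrative placeholders, not the toolbox's actual implementation.

```python
from skimage import filters, io, measure, morphology

# Hedged sketch: threshold-and-morphology segmentation of fluorescence spots,
# in the spirit of the second segmentation approach described in the abstract.
# The image file and parameters are illustrative placeholders.

image = io.imread("nucleoid_channel.tif", as_gray=True)  # hypothetical input

# A global Otsu threshold separates foreground fluorescence from background.
binary = image > filters.threshold_otsu(image)

# Morphological clean-up: remove speckle and close small gaps.
binary = morphology.remove_small_objects(binary, min_size=20)
binary = morphology.binary_closing(binary, morphology.disk(2))

# Label connected components and report per-object measurements.
labels = measure.label(binary)
for region in measure.regionprops(labels, intensity_image=image):
    print(region.label, region.area, region.mean_intensity)
```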

    Optic Flow for Obstacle Avoidance and Navigation: A Practical Approach

    This thesis offers contributions and innovations to the development of vision-based autonomous flight control systems for small unmanned aerial vehicles operating in cluttered urban environments. Although many optic flow algorithms have been reported, almost none have addressed the critical issue of accuracy and reliability over a wide dynamic range of optic flow. My aim is to rigorously develop improved optic flow sensing to meet realistic mission requirements for autonomous navigation and collision avoidance. A review of related work enabled the development of a new hybrid optic flow algorithm concept combining the best properties of image correlation and interpolation with additional innovations to enhance accuracy, computational speed, and reliability. Key analytical work yielded a methodology for determining optic flow dynamic range requirements from system and sensor design parameters, and a technique enabling a video sensor to operate as a passive ranging system for closed-loop flight control. Detailed testing led to the development of the hybrid image interpolation algorithm (HI2A), using improved correlation search strategies, sparse images to reduce processing loads, a solution tracking loop to bypass the more intensive initial estimation process, a frame look-back method to improve accuracy at low optic flow, a modified interpolation technique to improve robustness, and an extensive error checking system for validating outputs. A realistic simulation system was developed, incorporating independent, precision ground truthing to assess algorithm accuracy. Comparison testing of the HI2A against the commonly used Lucas-Kanade algorithm demonstrates a major improvement in accuracy over a greatly expanded dynamic range. A reactive flight controller using ranging data from a monocular, forward-looking video sensor and rules-based logic was developed and tested in Monte Carlo simulations of a hundred flights. At higher flight speeds than reported in similar tests, collision-free results were obtained in a realistic urban canyon environment. On a common PC, the HI2A algorithm and flight controller software ran up to eight times faster than real time while producing 250 measurements at 50 Hz. The feasibility of real-time terrain mapping was demonstrated using 3D ranging data from optic flow in an overflight of the urban simulation environment, indicating the potential for its use in path planning approaches to navigation and collision avoidance.
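
    As a point of reference for the comparison mentioned above, the sketch below runs OpenCV's pyramidal Lucas-Kanade tracker, the commonly used baseline, on two consecutive frames from a forward-looking camera. The frame file names, feature count, and window parameters are illustrative assumptions; the HI2A itself is not reproduced here.

```python
import cv2
import numpy as np

# Hedged sketch: the pyramidal Lucas-Kanade tracker used here only as the
# baseline the HI2A was benchmarked against. File names and parameters are
# illustrative assumptions.

prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frames
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Pick trackable corners in the previous frame.
pts_prev = cv2.goodFeaturesToTrack(prev, maxCorners=250, qualityLevel=0.01, minDistance=8)

# Track them into the current frame; each point's displacement is its optic flow.
pts_curr, status, err = cv2.calcOpticalFlowPyrLK(
    prev, curr, pts_prev, None, winSize=(21, 21), maxLevel=3)

flow = (pts_curr - pts_prev)[status.ravel() == 1].reshape(-1, 2)
magnitudes = np.linalg.norm(flow, axis=1)
print(f"tracked {len(flow)} points, median flow {np.median(magnitudes):.2f} px/frame")
```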

    Intelligent Sensor Networks

    In the last decade, wireless and wired sensor networks have attracted much attention. However, most designs target general sensor network issues, including the protocol stack (routing, MAC, etc.) and security. This book focuses on the close integration of sensing, networking, and smart signal processing via machine learning. Based on their world-class research, the authors present the fundamentals of intelligent sensor networks. They cover sensing and sampling, distributed signal processing, and intelligent signal learning. In addition, they present cutting-edge research results from leading experts.