119 research outputs found

    PyrSat - Prevention and response to wild fires with an intelligent Earth observation CubeSat

    Forest fires are a pervasive and serious problem. Besides loss of life and extensive environmental damage, fires result in substantial economic losses, property damage, injuries, displacement and hardship for the affected citizens. This project proposes a low-cost intelligent hyperspectral 3U CubeSat for the production of fire risk and burnt area maps. It applies machine learning algorithms to autonomously process images and obtain final data products on board the satellite for direct transmission to users on the ground. Used in combination with other services such as EFFIS or AFIS, the system could considerably reduce the extent and consequences of forest fires.
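    The abstract does not specify the on-board algorithm, so the following is only a minimal, hypothetical sketch of the kind of burnt-area product described, using the standard Normalized Burn Ratio on two assumed hyperspectral bands (the band indices and threshold are placeholders, not PyrSat values).

```python
import numpy as np

def burnt_area_mask(cube, nir_band, swir_band, threshold=0.1):
    """Toy burnt-area product from a hyperspectral cube (bands, rows, cols).

    Uses the Normalized Burn Ratio NBR = (NIR - SWIR) / (NIR + SWIR);
    low NBR values indicate burnt vegetation. Band indices and the
    threshold are placeholders, not values from the PyrSat project.
    """
    nir = cube[nir_band].astype(float)
    swir = cube[swir_band].astype(float)
    nbr = (nir - swir) / (nir + swir + 1e-9)   # avoid division by zero
    return nbr < threshold                      # True where likely burnt

# Example with random data standing in for a downlinked scene
cube = np.random.rand(50, 128, 128)
mask = burnt_area_mask(cube, nir_band=30, swir_band=45)
print(f"Burnt fraction: {mask.mean():.2%}")
```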

    Implementation of a neural network-based electromyographic control system for a printed robotic hand

    3D printing has revolutionized manufacturing by reducing costs and time, but only when combined with robotics and electronics can these structures develop their full potential. In order to improve the available printable hand designs, a control system based on electromyographic (EMG) signals has been implemented, so that different movement patterns can be recognized and replicated in the bionic hand in real time. This control system has been developed in Matlab/Simulink and comprises EMG signal acquisition, feature extraction, dimensionality reduction and pattern recognition through a trained neural network. Pattern recognition depends on the features used, their dimensions and the time spent in signal processing. Finding a balance between this execution time and the input features of the neural network is a crucial step for optimal classification.
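    The pipeline itself was developed in Matlab/Simulink; as a rough illustration of the same stages (time-domain feature extraction, dimensionality reduction, neural-network classification), here is a Python sketch using scikit-learn on synthetic windows. The feature choices, window size and network size are assumptions, not the thesis settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

def emg_features(window):
    """Common time-domain EMG features for one analysis window (samples x channels)."""
    mav = np.mean(np.abs(window), axis=0)                       # mean absolute value
    rms = np.sqrt(np.mean(window ** 2, axis=0))                 # root mean square
    zc = np.sum(window[:-1] * window[1:] < 0, axis=0)           # zero crossings
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)        # waveform length
    return np.concatenate([mav, rms, zc, wl])

# Synthetic stand-in data: 200 windows, 256 samples, 4 EMG channels, 3 gestures
rng = np.random.default_rng(0)
windows = rng.standard_normal((200, 256, 4))
labels = rng.integers(0, 3, size=200)

X = np.array([emg_features(w) for w in windows])
clf = make_pipeline(PCA(n_components=8),
                    MLPClassifier(hidden_layer_sizes=(20,), max_iter=500))
clf.fit(X, labels)
print("Training accuracy:", clf.score(X, labels))
```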

    The Design and Construction of Novel Near-Infrared Time-Correlated Single Photon Counting Devices for the Identification of Analytes in Multiplexed Applications.

    This manuscript details the design, construction, and application of novel near-infrared time-correlated single photon counting devices to the identification of analytes in analytical separations. The thrust of this research is to provide a simple, low-cost technique for the high-speed identification of DNA sequencing bases that are labeled with a series of unique near-infrared fluorophores. These fluorophores are unique because they possess the same emission and absorption maxima but different fluorescence lifetimes. Consequently, they allow analytes to be discriminated by fluorescence lifetime as opposed to color. The first goal of this dissertation research was to implement a time-correlated single photon counting system with the use of single-mode fiber optics. Utilizing a passively mode-locked Ti:Sapphire laser, a single-photon avalanche diode, single-mode fiber optics and a mechanical switch, a fiber-optic-based time-correlated single photon counting device with subnanosecond resolution was constructed. The experimental results showed that group velocity dispersion was low and that it was possible to perform multiple time-correlated single photon counting experiments with a limited number of excitation sources and detectors. The average instrumental response of each channel was determined to be 181 picoseconds. The fluorescence lifetime of a near-infrared dye, aluminum tetrasulfonated naphthalocyanine, was determined to be 3.08 nanoseconds. The second phase of this doctoral research involved the construction and characterization of a near-infrared time-correlated single photon counting scanning device. This integrated device consisted of a pulsed diode laser, a single-photon avalanche diode, and a time-correlated single photon counting board. The instrument response function of this system was determined to be less than 300 ps. The sensitivity and the ability to discriminate between various fluorophores were determined. In addition to its application for scanning solid surfaces such as DNA microarrays, the device was utilized to detect analytes in a micro-capillary electrophoresis separation. The fluorescence lifetimes of these analytes were determined on-line.
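    As an illustration of how a fluorescence lifetime such as the reported 3.08 ns can be extracted from a TCSPC decay histogram, the sketch below fits a mono-exponential model to synthetic data with scipy; neglecting the instrument response function and all numerical settings are simplifying assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, amplitude, tau, background):
    """Mono-exponential fluorescence decay model (IRF convolution neglected)."""
    return amplitude * np.exp(-t / tau) + background

# Synthetic TCSPC histogram: 3.08 ns lifetime (the value reported for the
# tetrasulfonated naphthalocyanine dye), 10 ps bins, Poisson counting noise
rng = np.random.default_rng(1)
t = np.arange(0, 25, 0.01)                      # time axis in ns
counts = rng.poisson(decay(t, 5000, 3.08, 20))

popt, _ = curve_fit(decay, t, counts, p0=(counts.max(), 2.0, 0.0))
print(f"Estimated lifetime: {popt[1]:.2f} ns")
```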

    Exploiting All-Programmable System on Chips for Closed-Loop Real-Time Neural Interfaces

    High-density microelectrode arrays (HDMEAs) feature thousands of recording electrodes in a single chip with an area of a few square millimeters. The resulting electrode density is comparable to, and even higher than, the typical density of neuronal cells in cortical cultures. Commercially available HDMEA-based acquisition systems are able to record the neural activity from the whole array at the same time with submillisecond resolution. These devices are a very promising tool and are increasingly used in neuroscience to tackle fundamental questions regarding the complex dynamics of neural networks. Even if electrical or optical stimulation is generally an available feature of such systems, they lack the capability of creating a closed loop between the biological neural activity and the artificial system. Stimuli are usually delivered in an open-loop manner, thus violating the inherent working basis of neural circuits, which in nature constantly react to the external environment. This prevents unraveling the real mechanisms behind the behavior of neural networks. The primary objective of this PhD work is to overcome this limitation by creating a fully reconfigurable processing system capable of providing real-time feedback to the ongoing neural activity recorded with HDMEA platforms. The potential of modern heterogeneous FPGAs has been exploited to realize the system. In particular, the Xilinx Zynq All Programmable System on Chip (APSoC) has been used. The device features reconfigurable logic, specialized hardwired blocks, and a dual-core ARM-based processor; the synergy of these components allows high processing performance to be achieved while maintaining a high level of flexibility and adaptivity. The developed system has been embedded in an acquisition and stimulation setup featuring the following platforms:
    • 3Brain BioCam X, a state-of-the-art HDMEA-based acquisition platform capable of recording in parallel from 4096 electrodes at 18 kHz per electrode.
    • PlexStim™ Electrical Stimulator System, able to generate electrical stimuli with custom waveforms on 16 different output channels.
    • Texas Instruments DLP® LightCrafter™ Evaluation Module, capable of projecting 608x684 pixel images with a refresh rate of 60 Hz; it serves as the optical stimulation source.
    All the features of the system, such as band-pass filtering and spike detection on all the recorded channels, have been validated by means of ex vivo experiments. Very low latency has been achieved while processing the whole input data stream in real time. In the case of electrical stimulation the total latency is below 2 ms; when optical stimuli are needed, the total latency is somewhat higher, 21 ms in the worst case. The final setup is ready to be used to infer cellular properties by means of closed-loop experiments. As a proof of concept, it has been successfully used for the clustering and classification of retinal ganglion cells (RGCs) in mouse retina. For this experiment, the light-evoked spikes from thousands of RGCs were correctly recorded and analyzed in real time. Around 90% of the total clusters were classified as ON- or OFF-type cells. In addition to the closed-loop system, a denoising prototype has been developed. The main idea is to exploit oversampling techniques to reduce the thermal noise recorded by HDMEA-based acquisition systems. The prototype is capable of processing in real time all the input signals from the BioCam X, and it is currently being tested to evaluate the performance in terms of signal-to-noise-ratio improvement.
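    The band-pass filtering and spike detection implemented on the FPGA are not detailed in the abstract; the sketch below only illustrates a conventional software equivalent for a single 18 kHz channel (a Butterworth band-pass followed by a threshold set as a multiple of a MAD noise estimate, both of which are assumptions here).

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 18_000  # Hz, per-electrode sampling rate of the BioCam X

def bandpass(signal, low=300.0, high=3000.0, fs=FS, order=4):
    """Zero-phase Butterworth band-pass, a typical choice for extracellular spikes."""
    sos = butter(order, [low, high], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, signal)

def detect_spikes(filtered, k=5.0):
    """Negative threshold crossing at k times a MAD-based noise estimate
    (an assumption, not necessarily the rule implemented on the FPGA)."""
    noise = np.median(np.abs(filtered)) / 0.6745
    crossings = np.flatnonzero(filtered < -k * noise)
    if crossings.size == 0:
        return crossings
    # keep only the first sample of each crossing event
    return crossings[np.insert(np.diff(crossings) > 1, 0, True)]

raw = np.random.standard_normal(FS)     # one second of stand-in data
spikes = detect_spikes(bandpass(raw))
print(f"{spikes.size} threshold crossings detected")
```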

    Design Techniques for Energy-Quality Scalable Digital Systems

    Energy efficiency is one of the key design goals in modern computing. Increasingly complex tasks are being executed in mobile devices and Internet of Things end-nodes, which are expected to operate for long time intervals, on the order of months or years, with the limited energy budgets provided by small form-factor batteries. Fortunately, many such tasks are error resilient, meaning that they can tolerate some relaxation in the accuracy, precision or reliability of internal operations without a significant impact on the overall output quality. The error resilience of an application may derive from a number of factors. The processing of analog sensor inputs measuring quantities from the physical world may not always require maximum precision, as the amount of information that can be extracted is limited by the presence of external noise. Outputs destined for human consumption may also contain small or occasional errors, thanks to the limited capabilities of our vision and hearing systems. Finally, some computational patterns commonly found in domains such as statistics, machine learning and operational research naturally tend to reduce or eliminate errors. Energy-Quality (EQ) scalable digital systems systematically trade off the quality of computations against energy efficiency, by relaxing the precision, the accuracy, or the reliability of internal software and hardware components in exchange for energy reductions. This design paradigm is believed to offer one of the most promising solutions to the pressing need for low-energy computing. Despite these high expectations, the current state of the art in EQ scalable design suffers from important shortcomings. First, the great majority of techniques proposed in the literature focus only on processing hardware and software components. Nonetheless, for many real devices, processing contributes only a small portion of the total energy consumption, which is dominated by other components (e.g. I/O, memory or data transfers). Second, in order to fulfill its promises and become diffused in commercial devices, EQ scalable design needs to achieve industrial-level maturity. This involves moving from purely academic research based on high-level models and theoretical assumptions to engineered flows compatible with existing industry standards. Third, the time-varying nature of error tolerance, both among different applications and within a single task, should become more central in the proposed design methods. This involves designing “dynamic” systems in which the precision or reliability of operations (and consequently their energy consumption) can be dynamically tuned at runtime, rather than “static” solutions, in which the output quality is fixed at design time. This thesis introduces several new EQ scalable design techniques for digital systems that take the previous observations into account. Besides processing, the proposed methods apply the principles of EQ scalable design also to interconnects and peripherals, which are often relevant contributors to the total energy in sensor nodes and mobile systems respectively. Regardless of the target component, the presented techniques pay special attention to the accurate evaluation of benefits and overheads deriving from EQ scalability, using industrial-level models, and to the integration with existing standard tools and protocols. Moreover, all the works presented in this thesis allow the dynamic reconfiguration of output quality and energy consumption.
    More specifically, the contribution of this thesis is divided into three parts. In a first body of work, the design of EQ scalable modules for processing hardware data paths is considered. Three design flows are presented, targeting different technologies and exploiting different ways to achieve EQ scalability, i.e. timing-induced errors and precision reduction. These works are inspired by previous approaches from the literature, namely Reduced-Precision Redundancy and Dynamic Accuracy Scaling, which are re-thought to make them compatible with standard Electronic Design Automation (EDA) tools and flows, providing solutions to overcome their main limitations. The second part of the thesis investigates the application of EQ scalable design to serial interconnects, which are the de facto standard for data exchanges between processing hardware and sensors. In this context, two novel bus encodings are proposed, called Approximate Differential Encoding and Serial-T0, that exploit the statistical characteristics of data produced by sensors to reduce the energy consumption on the bus at the cost of controlled data approximations. The two techniques achieve different results for data of different origins, but share the common features of allowing runtime reconfiguration of the allowed error and being compatible with standard serial bus protocols. Finally, the last part of the manuscript is devoted to the application of EQ scalable design principles to displays, which are often among the most energy-hungry components in mobile systems. The two proposals in this context leverage the emissive nature of Organic Light-Emitting Diode (OLED) displays to save energy by altering the displayed image, thus inducing an output quality reduction that depends on the amount of such alteration. The first technique implements an image-adaptive form of brightness scaling, whose outputs are optimized in terms of the balance between power consumption and similarity with the input. The second approach achieves concurrent power reduction and image enhancement by means of an adaptive polynomial transformation. Both solutions focus on minimizing the overheads associated with a real-time implementation of the transformations in software or hardware, so that these do not offset the savings in the display. For each of these three topics, results show that the aforementioned goal of building EQ scalable systems compatible with existing best practices and mature enough to be integrated in commercial devices can be effectively achieved. Moreover, they also show that very simple and similar principles can be applied to design EQ scalable versions of different system components (processing, peripherals and I/O), and to equip these components with knobs for the runtime reconfiguration of the energy versus quality tradeoff.
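    The abstract does not describe the internals of the proposed bus encodings, so the toy sketch below merely illustrates the general idea of a differential encoding with a runtime-configurable error bound; it should not be read as the actual Approximate Differential Encoding or Serial-T0 schemes.

```python
def approx_diff_encode(samples, max_error):
    """Toy differential encoding with a runtime-configurable error bound.

    Differences smaller than max_error are suppressed (sent as 0), so the
    transmitted stream contains fewer changing values at the cost of a
    bounded approximation on the receiver side. Purely illustrative; not
    the encodings proposed in the thesis.
    """
    encoded, last_sent = [], 0
    for s in samples:
        diff = s - last_sent
        if abs(diff) <= max_error:
            encoded.append(0)          # suppress small change
        else:
            encoded.append(diff)
            last_sent = s
    return encoded

def approx_diff_decode(encoded):
    out, value = [], 0
    for diff in encoded:
        value += diff
        out.append(value)
    return out

data = [100, 101, 103, 120, 121, 90]
enc = approx_diff_encode(data, max_error=2)
print(enc, approx_diff_decode(enc))    # decoded values stay within max_error of the input
```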

    Vision Sensors and Edge Detection

    The Vision Sensors and Edge Detection book reflects a selection of recent developments within the area of vision sensors and edge detection. There are two sections in this book. The first section presents vision sensors with applications to panoramic vision sensors, wireless vision sensors, and automated vision sensor inspection, and the second shows image processing techniques such as image measurements, image transformations, filtering, and parallel computing.
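    As a small, generic example of the edge-detection techniques surveyed in the second section (not taken from any specific chapter), a Sobel gradient-magnitude edge map can be computed as follows.

```python
import numpy as np
from scipy.ndimage import sobel

def edge_map(image, threshold=0.2):
    """Gradient-magnitude edge map using Sobel filters, a classic edge-detection technique."""
    gx = sobel(image.astype(float), axis=1)    # horizontal gradient
    gy = sobel(image.astype(float), axis=0)    # vertical gradient
    magnitude = np.hypot(gx, gy)
    magnitude /= magnitude.max() + 1e-12       # normalize to [0, 1]
    return magnitude > threshold

image = np.random.rand(64, 64)                 # stand-in for a sensor frame
print(edge_map(image).sum(), "edge pixels")
```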

    JUNO Conceptual Design Report

    The Jiangmen Underground Neutrino Observatory (JUNO) is proposed to determine the neutrino mass hierarchy using an underground liquid scintillator detector. It is located 53 km away from both the Yangjiang and Taishan Nuclear Power Plants in Guangdong, China. The experimental hall, spanning more than 50 meters, lies under a granite mountain with over 700 m of overburden. Within six years of running, the detection of reactor antineutrinos can resolve the neutrino mass hierarchy at a confidence level of 3-4σ, and determine the neutrino oscillation parameters sin²θ₁₂, Δm²₂₁, and |Δm²ₑₑ| to an accuracy of better than 1%. The JUNO detector can also be used to study terrestrial and extra-terrestrial neutrinos and new physics beyond the Standard Model. The central detector contains 20,000 tons of liquid scintillator in an acrylic sphere of 35 m in diameter. About 17,000 PMTs of 508 mm diameter with high quantum efficiency provide roughly 75% optical coverage. The current choice of the liquid scintillator is linear alkyl benzene (LAB) as the solvent, plus PPO as the scintillation fluor and Bis-MSB as a wavelength shifter. The number of detected photoelectrons per MeV is larger than 1,100 and the energy resolution is expected to be 3% at 1 MeV. The calibration system is designed to deploy multiple sources to cover the entire energy range of reactor antineutrinos and to achieve full-volume position coverage inside the detector. The veto system is used for muon detection and for the study and reduction of muon-induced backgrounds. It consists of a Water Cherenkov detector and a Top Tracker system. The readout system, the detector control system and the offline system ensure efficient and stable data acquisition and processing.
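    The quoted 3% resolution at 1 MeV is consistent with a purely photoelectron-statistics-limited estimate given more than 1,100 photoelectrons per MeV; the quick check below ignores all non-stochastic terms and is only a back-of-the-envelope calculation.

```python
import math

npe_per_mev = 1100                      # detected photoelectrons per MeV (from the abstract)
for energy_mev in (1.0, 2.0, 4.0):
    # statistics-only resolution: sigma/E = 1 / sqrt(N_pe)
    resolution = 1.0 / math.sqrt(npe_per_mev * energy_mev)
    print(f"{energy_mev:.0f} MeV: sigma/E ~ {resolution:.1%} (photoelectron statistics only)")
```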

    Magnetic Resonance Imaging (MRI) Biomarkers for Therapeutic Response Prediction in Rectal Cancer

    Prediction of chemoradiotherapy (CRT) response in rectal cancer would enable stratification of management, whereby responders could undergo ‘watch-and-wait’ to avoid surgical morbidity and non-responders could have early treatment intensification to improve therapeutic outcomes. Functional MRI can assess tumour function and heterogeneity, and may improve therapeutic response prediction. The aims of this PhD were to (i) prospectively evaluate multi-parametric MRI at 3.0 tesla in vivo, combining diffusion weighted imaging (DWI) and dynamic contrast enhanced (DCE) MRI, for prediction of CRT response and 2-year disease-free survival (DFS), and (ii) examine diffusion tensor imaging (DTI) MRI biomarkers of rectal cancer extent and heterogeneity at ultra-high field 11.7 tesla ex vivo in order to establish a pipeline for MRI biomarker discovery from ultra-high field to clinical field strength. Patients with locally advanced rectal cancer undergoing CRT followed by surgery underwent multi-parametric MRI before, during, and after CRT. A whole-tumour voxelwise histogram analysis of apparent diffusion coefficient (ADC) and Ktrans heterogeneity was performed and correlated with histopathology tumour regression grade. After CRT (before surgery), the ADC 75th and 90th quantiles were significantly higher in responders than in non-responders. Patients with higher Ktrans values after CRT, or a greater increase in Ktrans values from before to after CRT, had a significantly higher risk of distant metastases and lower 2-year DFS. Biobank tissue from patients with rectal cancer was examined at 11.7 tesla and DTI-MRI results were correlated with histopathology. This work established a discovery framework for screening Biobank cancer tissue for novel MRI biomarkers of tumour extent and heterogeneity, and resulted in good preservation of tissue integrity and MRI-histopathology alignment. DTI-MRI derived fractional anisotropy (FA) was able to differentiate between tumour and desmoplasia, fibrous tissue, and muscularis propria, allowing for more accurate delineation of rectal cancer tumour extent and stromal heterogeneity ex vivo. In conclusion, DWI-MRI was predictive of CRT response, DCE-MRI was predictive of 2-year DFS, and DTI-MRI was able to more accurately define tumour extent and heterogeneity in rectal cancer. These findings could be useful for stratification of patients for individualised treatment based on accurate assessment of tumour extent and therapeutic response prediction.
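    As an illustration of the whole-tumour voxelwise ADC histogram analysis described above (quantiles such as the 75th and 90th), the sketch below computes quantiles from a masked ADC map; the synthetic volume, mask and units are placeholders rather than study data.

```python
import numpy as np

def adc_quantiles(adc_map, tumour_mask, quantiles=(50, 75, 90)):
    """Whole-tumour voxelwise ADC histogram quantiles (units assumed: 1e-3 mm^2/s).

    Mirrors the histogram analysis described in the abstract; the inputs
    and quantile choices here are illustrative, not the study data.
    """
    voxels = adc_map[tumour_mask]
    return {q: np.percentile(voxels, q) for q in quantiles}

# Synthetic example: a 3D ADC map with a spherical "tumour" region of interest
rng = np.random.default_rng(42)
adc = rng.normal(1.1, 0.25, size=(64, 64, 32))
zz, yy, xx = np.indices(adc.shape)
mask = (xx - 16) ** 2 + (yy - 32) ** 2 + (zz - 32) ** 2 < 10 ** 2
print(adc_quantiles(adc, mask))
```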