
    Non-Classical Nucleation Phenomena Study And Following Process Monitoring and Optimization in Solution Crystallization Process

    Nucleation is a crucial step in the solution crystallization process. Despite extensive development, classical nucleation theory and two-step nucleation theory cannot explain all observed nucleation phenomena, in particular the non-classical phenomena of oiling out, gelation, and non-monotonic nucleation. As a result, crystallization processes for non-classical nucleation systems are seldom designed around nucleation monitoring and supervision. In this thesis, crystallization process optimization was carried out by studying the mechanisms of non-classical nucleation phenomena and by developing in-line process-monitoring technology. Two kinds of non-classical nucleation phenomena, a non-monotonic nucleation rate and gel formation, were investigated, and two corresponding nucleation pathways, self-induced nucleation and jellylike-phase-mediated nucleation, were proposed based on in-line spectral monitoring and off-line sample characterization. The results indicated that the agitation level affects the existence of pre-nucleation clusters in the non-monotonic nucleation system, and that solvent properties determine the formation of the jellylike phase and its transformation to crystals. A motion-based object-tracking model and the state-of-the-art Mask R-CNN neural network were introduced to monitor the onset of nucleation and to follow the subsequent crystallization process. Combined with a cost-effective camera probe, the developed real-time tracking system can detect nucleation onset accurately even under ultrasonic irradiation and can extract much richer information over the whole crystallization process. Subsequently, ultrasonic irradiation and seeding were used to optimize a non-classical nucleation system accompanied by an oiling-out phenomenon. Different ultrasonic frequencies and intensities and different seed-addition times were screened to optimize the nucleation step, demonstrating their effectiveness in promoting nucleation and narrowing the metastable zone widths of oiling out and nucleation. The nucleation step was then fine-tuned in a mixed suspension mixed product removal (MSMPR)-tubular crystallizer series: it was optimized in the MSMPR stage with the aid of principal component analysis, which enabled the growth of crystals with the preferred polymorph, shape, and size in the tubular crystallizer. This thesis provides insights into non-classical nucleation mechanisms and into nucleation-based crystallization process design and optimization.
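    The abstract does not give implementation details for the imaging pipeline. As a hedged illustration of how an instance-segmentation model such as Mask R-CNN could flag nucleation onset from an in-situ camera probe, the hypothetical sketch below counts detected particles per frame and reports the first frame whose count exceeds a threshold; the pretrained weights, score threshold, and onset criterion are assumptions, not the thesis's actual setup.

```python
# Hypothetical sketch: flag nucleation onset as the first video frame in which
# an instance-segmentation model detects "enough" particles. Not the thesis's
# actual pipeline; weights, score threshold and onset criterion are assumed.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

def detect_nucleation_onset(frames, score_thresh=0.7, min_particles=5):
    """frames: iterable of HxWx3 uint8 numpy arrays from the camera probe."""
    model = maskrcnn_resnet50_fpn(weights="DEFAULT")  # generic COCO weights; a real
    model.eval()                                      # system would be fine-tuned on crystals
    for idx, frame in enumerate(frames):
        img = torch.from_numpy(frame).permute(2, 0, 1).float() / 255.0
        with torch.no_grad():
            pred = model([img])[0]                    # dict with boxes/labels/scores/masks
        n_particles = int((pred["scores"] > score_thresh).sum())
        if n_particles >= min_particles:
            return idx, n_particles                   # first frame that looks like onset
    return None, 0
```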

    A Comprehensive Literature Review on Convolutional Neural Networks

    The fields of computer vision and image processing have dealt with problems of visual recognition since their earliest days. Convolutional Neural Networks (CNNs) are deep architectures built as feed-forward networks of perceptrons, inspired by research on visual analysis in the visual cortex of mammals such as cats. This work gives a detailed analysis of CNNs for computer vision tasks, natural language processing, problems in the fundamental sciences and engineering, and other miscellaneous tasks. It also presents the general CNN structure together with its mathematical intuition and operation, and offers a brief critical commentary on the advantages and disadvantages that lead researchers to search for alternatives to CNNs. The paper further serves as an appreciation of the work of past researchers that produced such a fecund architecture for handling multidimensional data, and of approaches to improve its performance further.
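    As a minimal, generic illustration of the convolution-pooling-classifier structure the review analyzes (not a model taken from the paper), the sketch below defines a tiny image classifier in PyTorch; the layer sizes and ten-class output are arbitrary assumptions.

```python
# Minimal generic CNN: convolution -> nonlinearity -> pooling, repeated, then a
# fully connected classifier head. Layer sizes and class count are arbitrary.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                     # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                     # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):                        # x: (N, 3, 32, 32)
        x = self.features(x)
        return self.classifier(x.flatten(1))

logits = TinyCNN()(torch.randn(4, 3, 32, 32))    # -> shape (4, 10)
```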

    Deep learning approach for epileptic seizure detection

    Epilepsy is the most common brain disorder, affecting approximately fifty million people worldwide according to the World Health Organization. The diagnosis of epilepsy relies on manual inspection of EEG, which is error-prone and time-consuming. Automated epileptic seizure detection from the EEG signal can reduce diagnosis time and facilitate targeting of treatment for patients. Current detection approaches mainly rely on features designed manually by domain experts, which are too inflexible to detect the variety of complex patterns in large amounts of EEG data. Moreover, the EEG is a non-stationary signal, seizure patterns vary across patients and recording sessions, and EEG data always contain numerous noise types that negatively affect the detection accuracy of epileptic seizures. To address these challenges, deep learning approaches are examined in this thesis. Deep learning methods were applied to a large publicly available dataset, the Children's Hospital Boston-Massachusetts Institute of Technology (CHB-MIT) dataset. The study comprises three experimental groups, distinguished by their pre-processing steps, each containing three to four experiments with different objectives. First, the time-series EEG data were pre-processed with standard filters and normalization techniques, and the pre-processed signal was segmented into a sequence of non-overlapping epochs. Second, the time-series data were transformed into different input representations: the raw time-series EEG signal, magnitude spectrograms, the 1D FFT, the 2D FFT, the 2D FFT magnitude spectrum, and the 2D FFT phase spectrum were investigated and compared. Third, these time-domain or frequency-domain signals were used separately as input to a VGG or DenseNet 1D model. The best result was achieved with magnitude spectrograms as the input to the VGG model: an accuracy of 0.98, a sensitivity of 0.71, and a specificity of 0.998 on subject-dependent data. VGG with magnitude spectrograms thus produced promising results for building a personalized epileptic seizure detector, whereas there was not enough data for VGG or DenseNet 1D to build a subject-independent classifier.
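    The abstract does not specify the filters or spectrogram parameters. As a hedged sketch of the described pre-processing pipeline (filtering, normalization, segmentation into non-overlapping epochs, and magnitude-spectrogram computation for one EEG channel), the code below uses SciPy with assumed parameter values.

```python
# Hypothetical pre-processing sketch for one EEG channel: band-pass filter,
# z-score normalize, cut into non-overlapping epochs, compute magnitude
# spectrograms. Filter band, epoch length and FFT sizes are assumed values.
import numpy as np
from scipy import signal

def eeg_to_spectrograms(x, fs=256, band=(0.5, 40.0), epoch_sec=4):
    sos = signal.butter(4, band, btype="bandpass", fs=fs, output="sos")
    x = signal.sosfiltfilt(sos, x)                      # band-pass filter
    x = (x - x.mean()) / (x.std() + 1e-8)               # normalize
    epoch_len = int(epoch_sec * fs)
    n_epochs = len(x) // epoch_len
    epochs = x[: n_epochs * epoch_len].reshape(n_epochs, epoch_len)
    specs = []
    for epoch in epochs:                                # one spectrogram per epoch
        _, _, Sxx = signal.spectrogram(epoch, fs=fs, nperseg=64, noverlap=32)
        specs.append(np.abs(Sxx))
    return np.stack(specs)                              # (n_epochs, freq, time)

specs = eeg_to_spectrograms(np.random.randn(256 * 60))  # one minute of synthetic EEG
```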

    NeuralField-LDM: Scene Generation with Hierarchical Latent Diffusion Models

    Automatically generating high-quality real world 3D scenes is of enormous interest for applications such as virtual reality and robotics simulation. Towards this goal, we introduce NeuralField-LDM, a generative model capable of synthesizing complex 3D environments. We leverage Latent Diffusion Models that have been successfully utilized for efficient high-quality 2D content creation. We first train a scene auto-encoder to express a set of image and pose pairs as a neural field, represented as density and feature voxel grids that can be projected to produce novel views of the scene. To further compress this representation, we train a latent-autoencoder that maps the voxel grids to a set of latent representations. A hierarchical diffusion model is then fit to the latents to complete the scene generation pipeline. We achieve a substantial improvement over existing state-of-the-art scene generation models. Additionally, we show how NeuralField-LDM can be used for a variety of 3D content creation applications, including conditional scene generation, scene inpainting and scene style manipulation. Comment: CVPR 202
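    The abstract only outlines the architecture, so the following is purely a structural skeleton of the three-stage pipeline it describes (scene auto-encoder to voxel grids, latent auto-encoder for compression, diffusion over the latents); every module body, tensor shape, and the toy diffusion objective are placeholders, not the NeuralField-LDM implementation.

```python
# Hypothetical structural skeleton of the three-stage pipeline described in the
# abstract; every module body is a stand-in, not the NeuralField-LDM code.
import torch
import torch.nn as nn

class SceneAutoEncoder(nn.Module):
    """Stage 1: image+pose pairs -> density/feature voxel grid (stand-in)."""
    def forward(self, images, poses):
        b = images.shape[0]
        return torch.randn(b, 8, 32, 32, 32)       # (B, feat, X, Y, Z) placeholder

class LatentAutoEncoder(nn.Module):
    """Stage 2: compress the voxel grid into a smaller latent set (stand-in)."""
    def __init__(self):
        super().__init__()
        self.down = nn.Conv3d(8, 4, kernel_size=4, stride=4)
    def forward(self, voxels):
        return self.down(voxels)                    # (B, 4, 8, 8, 8)

class LatentDiffusion(nn.Module):
    """Stage 3: denoising model fit to the latents (toy objective, stand-in)."""
    def __init__(self):
        super().__init__()
        self.eps = nn.Conv3d(4, 4, kernel_size=3, padding=1)
    def loss(self, z):
        noise = torch.randn_like(z)
        return ((self.eps(z + noise) - noise) ** 2).mean()

images, poses = torch.randn(2, 4, 3, 64, 64), torch.randn(2, 4, 7)
z = LatentAutoEncoder()(SceneAutoEncoder()(images, poses))
print(LatentDiffusion().loss(z))
```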

    Towards Intelligent Runtime Framework for Distributed Heterogeneous Systems

    Scientific applications strive for increased memory and computing performance, requiring massive amounts of data and time to produce results. They utilize large-scale, parallel computing platforms with advanced architectures to accommodate their needs. However, developing performance-portable applications for modern, heterogeneous platforms requires considerable effort and expertise in both the application and systems domains. This is even more relevant for unstructured applications, whose workflow is not statically predictable because of its heavily data-dependent nature. One possible solution to this problem is the introduction of an intelligent Domain-Specific Language (iDSL) that transparently helps maintain correctness, hides the idiosyncrasies of low-level hardware, and scales applications. An iDSL includes domain-specific language constructs, a compilation toolchain, and a runtime providing task scheduling, data placement, and workload balancing across and within heterogeneous nodes. In this work, we focus on the runtime framework. We introduce a novel design and extension of a runtime framework, the Parallel Runtime Environment for Multicore Applications. In response to ever-increasing intra- and inter-node concurrency, the runtime system supports efficient task scheduling and workload balancing at both levels while allowing the development of custom policies. Moreover, the new framework provides abstractions supporting the utilization of heterogeneous distributed nodes consisting of CPUs and GPUs, and it is extensible to other devices. We demonstrate that, by utilizing this work, an application (or the iDSL) can scale its performance on heterogeneous exascale-era supercomputers with minimal effort. A future goal for this framework (out of the scope of this thesis) is integration with machine learning to further improve its decision-making and performance. As a bridge to this goal, since the framework is still under development, we experiment with data from Nuclear Physics particle accelerators and demonstrate the significant improvements achieved by utilizing machine learning in the hit-based track reconstruction process.
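    The abstract does not expose the runtime's actual interface. As a hypothetical illustration of task scheduling with a pluggable placement policy across heterogeneous workers, the sketch below dispatches tasks to CPU or GPU pools according to a user-supplied policy function; all names, the load model, and the policy are assumptions, not the framework's API.

```python
# Hypothetical sketch of a runtime with a pluggable placement policy: tasks are
# queued and dispatched to per-device worker pools. Not the actual framework API.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Task:
    fn: Callable[..., Any]
    args: tuple = ()
    work_estimate: float = 1.0        # used by the placement policy

class Runtime:
    def __init__(self, policy: Callable[[Task, dict], str]):
        self.policy = policy
        self.pools = {"cpu": ThreadPoolExecutor(4), "gpu": ThreadPoolExecutor(1)}
        self.load = {"cpu": 0.0, "gpu": 0.0}

    def submit(self, task: Task):
        device = self.policy(task, self.load)       # custom placement decision
        self.load[device] += task.work_estimate
        return self.pools[device].submit(task.fn, *task.args)

def simple_policy(task, load):
    """Example policy: big tasks go to the GPU pool, the rest balance by load."""
    if task.work_estimate > 10:
        return "gpu"
    return min(load, key=load.get)

rt = Runtime(simple_policy)
futures = [rt.submit(Task(pow, (2, n), work_estimate=n)) for n in range(1, 16)]
print([f.result() for f in futures])
```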

    CNS-Net: Conservative Novelty Synthesizing Network for Malware Recognition in an Open-set Scenario

    We study the challenging task of malware recognition on both known and novel unknown malware families, called malware open-set recognition (MOSR). Previous works usually assume that the malware families are known to the classifier in a close-set scenario, i.e., the testing families are a subset of, or at most identical to, the training families. However, novel unknown malware families frequently emerge in real-world applications, which requires recognizing malware instances in an open-set scenario, i.e., one in which some unknown families are also included in the test set; this has rarely been thoroughly investigated in the cyber-security domain. One practical solution for MOSR is to jointly classify known and detect unknown malware families with a single classifier (e.g., a neural network), based on the variance of the predicted probability distribution over known families. However, conventional well-trained classifiers usually tend to produce overly high recognition probabilities, especially when the instance feature distributions are similar to each other, e.g., unknown vs. known malware families, which dramatically degrades recognition of novel unknown malware families. In this paper, we propose a novel model that can conservatively synthesize malware instances to mimic unknown malware families and support more robust training of the classifier. Moreover, we build a new large-scale malware dataset, named MAL-100, to fill the gap left by the lack of a large open-set malware benchmark dataset. Experimental results on two widely used malware datasets and our MAL-100 demonstrate the effectiveness of our model compared with other representative methods. Comment: 16 pages, 8 figure
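    The synthesis mechanism itself is not detailed in the abstract. As a generic baseline illustration of open-set recognition from a classifier's output distribution (the approach the abstract criticizes as over-confident, not the proposed CNS-Net model), the sketch below rejects a sample as an unknown family when its maximum softmax probability falls below a threshold; the class count and threshold are assumptions.

```python
# Generic open-set baseline (NOT the CNS-Net method): classify among known
# malware families, but reject as "unknown" when the classifier's maximum
# softmax probability is below a confidence threshold. Values are assumed.
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def open_set_predict(logits, threshold=0.8, unknown_label=-1):
    """logits: (n_samples, n_known_families). Returns family index or -1."""
    probs = softmax(logits)
    preds = probs.argmax(axis=-1)
    confident = probs.max(axis=-1) >= threshold
    return np.where(confident, preds, unknown_label)

logits = np.array([[4.0, 0.1, 0.2],    # confidently family 0
                   [1.1, 1.0, 0.9]])   # ambiguous -> flagged as unknown (-1)
print(open_set_predict(logits))        # [ 0 -1]
```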

    Deep learning for time series forecasting: The electric load case

    Management and efficient operation of critical infrastructures such as smart grids benefit greatly from accurate power load forecasting, which, due to its non-linear nature, remains a challenging task. Recently, deep learning has emerged in the machine learning field, achieving impressive performance in a vast range of tasks, from image classification to machine translation. Applications of deep learning models to the electric load forecasting problem are gaining interest among researchers as well as industry, but a comprehensive and sound comparison among different architectures, including traditional ones, is not yet available in the literature. This work aims to fill that gap by reviewing the most recent trends in electric load forecasting and by experimentally contrasting deep learning architectures on short-term forecasting (one-day-ahead prediction) across four real-world datasets. Specifically, the focus is on feedforward and recurrent neural networks, sequence-to-sequence models, and temporal convolutional neural networks, along with architectural variants that are known in the signal processing community but are novel to the load forecasting one.
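    The paper's models and hyperparameters are not listed in the abstract. As a hedged sketch of the one-day-ahead setting it describes, the code below frames hourly load as sliding windows (one week of history predicting the next 24 hours) and fits a small temporal convolutional network; the window lengths, architecture, and training settings are assumptions, not the authors' configuration.

```python
# Hypothetical one-day-ahead load forecasting sketch: a week of hourly load
# (168 points) predicts the next 24 hours with a small 1D convolutional net.
# Window sizes, architecture and training settings are assumed, not the paper's.
import math
import torch
import torch.nn as nn

HIST, HORIZON = 168, 24

class TinyTCN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv1d(16, 16, kernel_size=3, padding=4, dilation=4), nn.ReLU(),
        )
        self.head = nn.Linear(16 * HIST, HORIZON)

    def forward(self, x):                 # x: (batch, 1, HIST)
        return self.head(self.conv(x).flatten(1))

def make_windows(series):
    """series: 1D tensor of hourly load -> (inputs, targets) sliding windows."""
    xs, ys = [], []
    for t in range(len(series) - HIST - HORIZON):
        xs.append(series[t:t + HIST])
        ys.append(series[t + HIST:t + HIST + HORIZON])
    return torch.stack(xs).unsqueeze(1), torch.stack(ys)

# Synthetic daily-periodic load stands in for a real dataset.
load = torch.sin(torch.arange(24 * 60) * 2 * math.pi / 24) + 0.1 * torch.randn(24 * 60)
x, y = make_windows(load)
model = TinyTCN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):                        # a few illustrative training steps
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
```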

    Code Generation and Global Optimization Techniques for a Reconfigurable PRAM-NUMA Multicore Architecture
