15,449 research outputs found

    Kinetic Gas Molecule Optimization based Cluster Head Selection Algorithm for minimizing the Energy Consumption in WSN

    As the number of low-cost, low-power sensor nodes grows, so does the size of a wireless sensor network (WSN). Through self-organization, the sensor nodes connect to one another to form a wireless network. Sensor devices are considered extremely difficult to recharge under unfavourable conditions. Moreover, network longevity, coverage area, scheduling, and data aggregation are the major issues in WSNs. The success of data aggregation is demonstrated by its ability to extend the life of the network and by the dependability and scalability of the sensor nodes' data transmissions. As a result, clustering methods are considered ideal for making the most efficient use of resources while requiring less energy. All sensor nodes in a cluster communicate with each other via a cluster head (CH) node. In these settings, the primary responsibility of any clustering algorithm is to select the ideal CH subject to a variety of constraints, such as minimising energy consumption and delay. In this paper, Kinetic Gas Molecule Optimization (KGMO) is used to create a new model for CH selection that improves network lifetime and energy. Gas molecule agents move through a search space in pursuit of an optimal solution, with characteristics such as energy, distance, and delay serving as objective functions. On average, the KGMO algorithm yields a 20% increase in network life expectancy and a 19.84% increase in energy stability compared to the traditional Bacterial Foraging Optimization (BFO) algorithm.
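    To make the CH-selection idea concrete, here is a minimal Python sketch of a kinetic-gas-style search over candidate cluster-head positions, scoring each candidate with a weighted combination of residual energy, member distance, and delay. The weights, the toy fitness terms, and the simplified velocity/cooling update are illustrative assumptions for this sketch, not the KGMO formulation or parameters used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative weights for the composite objective; the paper's exact
# formulation and parameter values may differ.
W_ENERGY, W_DIST, W_DELAY = 0.5, 0.3, 0.2

def ch_fitness(ch_pos, node_pos, node_energy, delay):
    """Lower is better: penalize a low-energy CH, long member distances, and high delay."""
    dists = np.linalg.norm(node_pos - ch_pos, axis=1)
    nearest = int(np.argmin(dists))                   # node that would actually serve as CH
    energy_term = 1.0 - float(node_energy[nearest])   # favour a CH with high residual energy
    dist_term = float(dists.mean() / (dists.max() + 1e-12))
    return W_ENERGY * energy_term + W_DIST * dist_term + W_DELAY * delay

def kgmo_search(node_pos, node_energy, delay, n_agents=20, iters=100):
    """Kinetic-gas-style search: agents (candidate CH positions) move with a
    temperature-scaled random velocity plus a drift toward the best agent so far."""
    agents = node_pos[rng.choice(len(node_pos), n_agents)].astype(float)
    best = agents[0].copy()
    best_f = ch_fitness(best, node_pos, node_energy, delay)
    for t in range(iters):
        temp = 0.95 ** t                              # cooling: agents slow down over time
        for i in range(n_agents):
            velocity = temp * rng.normal(size=agents.shape[1]) + 0.5 * (best - agents[i])
            agents[i] += velocity
            f = ch_fitness(agents[i], node_pos, node_energy, delay)
            if f < best_f:
                best, best_f = agents[i].copy(), f
    return best, best_f

# Toy usage: 50 nodes on a 100 m x 100 m field with random residual energies.
nodes = rng.uniform(0, 100, size=(50, 2))
energies = rng.uniform(0.2, 1.0, size=50)
print(kgmo_search(nodes, energies, delay=0.1))
```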

    Computer-Aided Multi-Objective Optimization in Small Molecule Discovery

    Molecular discovery is a multi-objective optimization problem that requires identifying a molecule or set of molecules that balance multiple, often competing, properties. Multi-objective molecular design is commonly addressed by combining the properties of interest into a single objective function using scalarization, which imposes assumptions about their relative importance and reveals little about the trade-offs between objectives. In contrast to scalarization, Pareto optimization does not require knowledge of relative importance and exposes the trade-offs between objectives, but it introduces additional considerations in algorithm design. In this review, we describe pool-based and de novo generative approaches to multi-objective molecular discovery with a focus on Pareto optimization algorithms. We show how pool-based molecular discovery is a relatively direct extension of multi-objective Bayesian optimization, and how the plethora of different generative models extends from single-objective to multi-objective optimization in similar ways, using non-dominated sorting in the reward function (reinforcement learning) or to select molecules for retraining (distribution learning) or propagation (genetic algorithms). Finally, we discuss some remaining challenges and opportunities in the field, emphasizing the opportunity to adopt Bayesian optimization techniques in multi-objective de novo design.
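    As a concrete illustration of the Pareto machinery this review builds on, the sketch below implements plain non-dominated sorting (here only the first front) over a small set of candidate molecules scored on two objectives. The property names and numbers are placeholders; a real workflow would plug in predicted or measured molecular properties.

```python
import numpy as np

def non_dominated_front(scores):
    """Return indices of Pareto-optimal rows (assumes every objective is maximized)."""
    scores = np.asarray(scores, dtype=float)
    n = len(scores)
    is_efficient = np.ones(n, dtype=bool)
    for i in range(n):
        if not is_efficient[i]:
            continue
        # j dominates i if j is >= i on every objective and > i on at least one
        dominates_i = np.all(scores >= scores[i], axis=1) & np.any(scores > scores[i], axis=1)
        if dominates_i.any():
            is_efficient[i] = False
    return np.flatnonzero(is_efficient)

# Toy example: two objectives (e.g. predicted potency and synthetic accessibility),
# both to be maximized; the values are purely illustrative.
candidates = np.array([
    [0.9, 0.2],
    [0.6, 0.6],
    [0.4, 0.9],
    [0.5, 0.5],   # dominated by [0.6, 0.6]
])
print(non_dominated_front(candidates))   # -> [0 1 2]
```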

    Fast, Three-Dimensional Fluorescence Imaging of Living Cells

    This thesis focuses on multi-plane fluorescence microscopy for fast live-cell imaging. To improve the performance of multi-plane microscopy, I developed new image analysis methods and used them to measure and analyze the movements of cardiomyocytes and Dictyostelium discoideum cells. The multi-plane setup is based on a conventional wide-field microscope with a custom multiple beam-splitter prism in the detection path. This prism creates separate images of eight distinct focal planes in the sample. Since the 3D volume is imaged without scanning, three-dimensional imaging at very high speed becomes possible. However, as in conventional wide-field microscopy, the "missing cone" of spatial frequencies along the optical axis in the optical transfer function (OTF) prevents optical sectioning in such a microscope. This is in stark contrast to truly three-dimensional imaging modalities such as confocal and light-sheet microscopy. To overcome the lack of optical sectioning, I developed a new deconvolution method. Deconvolution describes methods that restore or sharpen an image based on physical assumptions and knowledge of the imaging process; such methods have been widely used to sharpen images from microscopes and telescopes. The recently developed SUPPOSe algorithm is a deconvolution algorithm that uses a set of numerous virtual point sources: it reconstructs an image by distributing these point sources in space and optimizing their positions so that the resulting image reproduces the measured data as well as possible. SUPPOSe had never been used for 3D images. Compared to other algorithms, this method performs particularly well when the number of pixels is increased by interpolation. In this work, I extended the method to 3D image data; the 3D-SUPPOSe program is suitable for analyzing data from our multi-plane setup, which has only eight vertically aligned image planes. Furthermore, for accurate reconstruction of 3D images, I studied a method for correcting the relative brightness of each image plane constituting an image, and I also developed a method for measuring the movement of point emitters in 3D space. Using these methods, I measured and analyzed the beating motion of cardiomyocytes and the chemotaxis of Dictyostelium discoideum. Cardiomyocytes are the cells of the heart muscle and consist of repetitive sarcomeres. These cells are characterized by fast, periodic movements, and so far their dynamics had been studied only with two-dimensional imaging. In this thesis, the beating motion was analyzed by tracing the spatial distribution of the so-called z-discs, one of the constituent components of cardiomyocytes. I found that the vertical distribution of α-actinin-2 in a single z-disc changes very rapidly, which may serve as a starting point for a better understanding of the motion of cardiomyocytes. Dictyostelium discoideum is a well-established single-cell model organism that migrates along the gradient of a chemoattractant. Much research has been conducted to understand the mechanism of chemotaxis, and many efforts have been made to understand the role of actin in chemotactic motion. By suppressing the motor protein myosin, a cell line was created in which the formation of normal actin filaments is prevented. In these myosin-null cells, F-actin moves in a flow-like manner and induces cell movement. In this study, I imaged the actin dynamics and analyzed the flow using the newly created deconvolution and flow estimation methods. The analysis investigated the spatio-temporal correlation between pseudopod formation and dynamics and actin flow.
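    To illustrate the SUPPOSe idea described above, here is a minimal 1D Python sketch: the deblurred signal is represented as a superposition of equal-intensity virtual point sources, and their positions are adjusted so that their convolution with the PSF matches the measurement. The Gaussian PSF, the 1D setting, and the simple accept-if-better random search are simplifications for illustration, not the optimizer or the 3D-SUPPOSe implementation developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

def gaussian_psf(grid, center, sigma=2.0):
    return np.exp(-0.5 * ((grid - center) / sigma) ** 2)

def forward_model(positions, grid, alpha, sigma=2.0):
    """Image produced by equal-intensity virtual point sources at the given positions."""
    return alpha * sum(gaussian_psf(grid, p, sigma) for p in positions)

def suppose_1d(measured, grid, n_sources=30, iters=3000, step=0.5):
    """Tiny accept-if-better random search: perturb one source position at a time
    and keep the move if the squared error against the measurement decreases."""
    positions = rng.uniform(grid.min(), grid.max(), n_sources)
    alpha = measured.sum() / (n_sources * gaussian_psf(grid, grid.mean()).sum())
    best_err = np.sum((forward_model(positions, grid, alpha) - measured) ** 2)
    for _ in range(iters):
        k = rng.integers(n_sources)
        trial = positions.copy()
        trial[k] += rng.normal(scale=step)
        err = np.sum((forward_model(trial, grid, alpha) - measured) ** 2)
        if err < best_err:
            positions, best_err = trial, err
    return positions, best_err

# Toy usage: two emitters blurred by the PSF on a 1D grid.
grid = np.arange(0.0, 100.0, 0.5)
measured = gaussian_psf(grid, 40.0) + gaussian_psf(grid, 60.0)
sources, err = suppose_1d(measured, grid)
print(f"final squared error: {err:.3f}")
```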

    Systematic Methods for Reaction Solvent Design and Integrated Solvent and Process Design

    Otto-von-Guericke-Universität Magdeburg, Fakultät für Verfahrens- und Systemtechnik, Dissertation, 2016, by M. Sc. Teng Zhou. Bibliography: pages 100-10

    Optimization of a Quantum Cascade Laser Operating in the Terahertz Frequency Range Using a Multiobjective Evolutionary Algorithm

    A quantum cascade (QC) laser is a specific type of semiconductor laser that operates on principles of quantum mechanics. In less than a decade, QC lasers have already become able to outperform previously designed double-heterostructure semiconductor lasers. Because there is a genuine lack of compact, coherent devices that can operate in the far-infrared region, the motivation exists for designing a terahertz QC laser. A device operating at this frequency is expected to be more efficient and cost-effective than currently existing devices. It has potential applications in spectroscopy, astronomy, medicine, and free-space communication, as well as in near-space radar and chemical/biological detection. The overarching goal of this research was to find QC laser parameter combinations that can be used to fabricate viable structures. To ensure operation in the THz region, the device must conform to the extremely small energy-level spacing range of ~10-15 meV. The time and expense of the design and production process are prohibitive, so an alternative to fabrication was necessary. To accomplish this goal, a model of a QC laser, developed at Worcester Polytechnic Institute with sponsorship from the Air Force Research Laboratory Sensors Directorate, and the General Multiobjective Parallel Genetic Algorithm (GenMOP), developed at the Air Force Institute of Technology, were integrated to form a computer simulation that stochastically searches for feasible solutions.
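    To show the kind of parameter screening described above, here is a small Python sketch that stochastically samples QC laser design parameters and keeps only the combinations whose energy-level spacing falls in the stated ~10-15 meV window. The level-spacing expression and parameter bounds are placeholders, not the WPI laser model, and the simple random sampling is only a stand-in for the GenMOP genetic search actually used.

```python
import numpy as np

rng = np.random.default_rng(2)

# Placeholder physics: a stand-in mapping from design parameters (well and
# barrier thicknesses in nm) to an energy-level spacing in meV. In the actual
# work this role is played by the WPI quantum cascade laser model.
def level_spacing_meV(well_nm, barrier_nm):
    return 120.0 / well_nm ** 2 + 2.0 * barrier_nm   # toy expression, not real physics

def feasible(well_nm, barrier_nm):
    """THz operation window (~10-15 meV) quoted in the abstract."""
    return 10.0 <= level_spacing_meV(well_nm, barrier_nm) <= 15.0

def random_screen(n_trials=10_000):
    """Stochastically sample parameter combinations and keep the feasible ones
    (a simple stand-in for the GenMOP genetic search)."""
    hits = []
    for _ in range(n_trials):
        well = rng.uniform(2.0, 20.0)                # nm, illustrative bounds
        barrier = rng.uniform(0.5, 5.0)              # nm, illustrative bounds
        if feasible(well, barrier):
            hits.append((well, barrier))
    return hits

print(len(random_screen()), "feasible parameter pairs found")
```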

    ASCR/HEP Exascale Requirements Review Report

    This draft report summarizes and details the findings, results, and recommendations derived from the ASCR/HEP Exascale Requirements Review meeting held in June 2015. The main conclusions are as follows. 1) Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of demand on the 2025 timescale is at least two orders of magnitude greater than what is currently available, and in some cases more. 2) The growth rate of data produced by simulations is overwhelming the current ability of both facilities and researchers to store and analyze it. Additional resources and new techniques for data analysis are urgently needed. 3) Data rates and volumes from HEP experimental facilities are also straining the ability to store and analyze large and complex data volumes. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets. 4) A close integration of HPC simulation and data analysis will aid greatly in interpreting results from HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows. 5) Long-range planning between HEP and ASCR will be required to meet HEP's research needs. To best use ASCR HPC resources, the experimental HEP program needs a) an established long-term plan for access to ASCR computational and data resources, b) the ability to map workflows onto HPC resources, c) the ability for ASCR facilities to accommodate workflows run by collaborations that can have thousands of individual members, d) to transition codes to the next-generation HPC platforms that will be available at ASCR facilities, and e) to build up and train a workforce capable of developing and using simulations and analysis to support HEP scientific research on next-generation systems. Comment: 77 pages, 13 figures; draft report, subject to further revision.

    NASA SBIR abstracts of 1991 phase 1 projects

    The objectives of 301 projects placed under contract by the Small Business Innovation Research (SBIR) program of the National Aeronautics and Space Administration (NASA) are described. These projects were selected competitively from among proposals submitted to NASA in response to the 1991 SBIR Program Solicitation. The basic document consists of edited, non-proprietary abstracts of the winning proposals submitted by small businesses. The abstracts are presented under the 15 technical topics within which Phase 1 proposals were solicited. Each project was assigned a sequential identifying number from 001 to 301, in order of its appearance in the body of the report. Appendixes are included to provide additional information about the SBIR program and to permit cross-referencing of the 1991 Phase 1 projects by company name, location by state, principal investigator, the NASA Field Center responsible for managing each project, and NASA contract number.

    Incremental Algorithms for Density Functional Theory (Algorithmes incrémentaux pour la théorie de la fonctionnelle de la densité)

    The ability to model molecular systems on a computer has become a crucial tool for chemists. Molecular simulations have helped to understand and predict properties of the nanoscopic world, and during the last decades they have had a large impact on domains such as biology, electronics, and materials development. Particle simulation is a classical method of molecular dynamics: molecules are split into atoms, their inter-atomic interactions are computed, and their trajectories in time are derived step by step. Unfortunately, the cost of computing inter-atomic interactions prevents large systems from being modeled in a reasonable time. In this context, our research team looks for new accurate and efficient molecular simulation models. One of the team's focuses is finding and eliminating useless computations in dynamical simulations. The team has therefore proposed a new adaptively restrained dynamical model in which the movement of the slowest particles is frozen; computational time is saved if the interaction method does not recompute interactions between static atoms. The team has also developed several interaction models that benefit from a restrained dynamical model; they often update interactions incrementally, using the results of the previous time step and the knowledge of which particles have moved. In the wake of our team's work, we propose in this thesis an incremental first-principles interaction model. Specifically, we have developed an incremental Orbital-Free Density Functional Theory (OF-DFT) method that benefits from an adaptively restrained dynamical model. The new OF-DFT model keeps the computation in real space and can therefore adaptively focus computations where they are necessary. The method is first proof-tested; we then show its ability to speed up computations when a majority of particles are static under a restrained particle dynamics model. This work is a first step toward combining incremental first-principles interaction models with adaptively restrained particle dynamics models.
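    To illustrate the incremental principle described in the abstract, the sketch below reuses previously computed interaction contributions for pairs of frozen particles and recomputes only the contributions touched by particles that moved. The toy pairwise energy is a stand-in chosen for brevity; the thesis applies the same reuse idea to a real-space orbital-free DFT functional rather than to pair potentials.

```python
import numpy as np

def pair_energy(ri, rj):
    """Toy pairwise contribution standing in for the real interaction model."""
    r = np.linalg.norm(ri - rj)
    return 1.0 / r ** 12 - 1.0 / r ** 6

def incremental_energy(positions, prev_positions, prev_pair_e):
    """Recompute only pairs that involve a moved particle; reuse all other pairs.

    prev_pair_e maps (i, j) with i < j to that pair's energy from the previous step.
    """
    moved = {i for i, (p, q) in enumerate(zip(positions, prev_positions))
             if not np.allclose(p, q)}
    n = len(positions)
    pair_e = {}
    for i in range(n):
        for j in range(i + 1, n):
            if i in moved or j in moved or (i, j) not in prev_pair_e:
                pair_e[(i, j)] = pair_energy(positions[i], positions[j])  # recompute
            else:
                pair_e[(i, j)] = prev_pair_e[(i, j)]                      # reuse
    return sum(pair_e.values()), pair_e

# Toy usage: the first call fills the cache; the second call moves only particle 0,
# so only the pairs involving particle 0 are recomputed.
rng = np.random.default_rng(3)
pos0 = rng.uniform(1.0, 6.0, size=(6, 3))
e0, cache = incremental_energy(pos0, pos0, {})
pos1 = pos0.copy()
pos1[0] += 0.1
e1, cache = incremental_energy(pos1, pos0, cache)
print(e0, e1)
```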

    Population-Based Optimization Algorithms for Solving the Travelling Salesman Problem

    [Extract] Population-based optimization algorithms are techniques belonging to the set of nature-inspired optimization algorithms. The creatures and natural systems that work and develop in nature are an interesting and valuable source of inspiration for designing and inventing new systems and algorithms in different fields of science and technology. Evolutionary Computation (Eiben & Smith, 2003), Neural Networks (Haykin, 1999), Time Adaptive Self-Organizing Maps (Shah-Hosseini, 2006), Ant Systems (Dorigo & Stutzle, 2004), Particle Swarm Optimization (Eberhart & Kennedy, 1995), Simulated Annealing (Kirkpatrick, 1984), Bee Colony Optimization (Teodorovic et al., 2006) and DNA Computing (Adleman, 1994) are among the problem-solving techniques inspired by observing nature. This chapter introduces population-based optimization algorithms. Some of these algorithms were mentioned above; others are the Intelligent Water Drops (IWD) algorithm (Shah-Hosseini, 2007), Artificial Immune Systems (AIS) (Dasgupta, 1999), and Electromagnetism-like Mechanisms (EM) (Birbil & Fang, 2003). Each section of the chapter briefly introduces one of these population-based optimization algorithms and applies it to solving the TSP. We also try to note the important points of each algorithm and state every contribution we make to them. Section nine shows experimental results for the algorithms introduced in the previous sections, which are implemented to solve different instances of the TSP using well-known datasets.
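    As a minimal, generic example of the population-based approach surveyed in this chapter, the sketch below evolves TSP tours with order crossover and swap mutation. It is not one of the specific algorithms discussed (IWD, AIS, EM, etc.); the operators, population size, and selection scheme are illustrative choices only.

```python
import random

random.seed(0)

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def order_crossover(p1, p2):
    """Keep a slice of parent 1 and fill the remaining cities in parent 2's order."""
    n = len(p1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]
    fill = [c for c in p2 if c not in p1[a:b]]
    k = 0
    for i in range(n):
        if child[i] is None:
            child[i] = fill[k]
            k += 1
    return child

def evolve_tsp(dist, pop_size=50, generations=300, mutation_rate=0.2):
    n = len(dist)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: tour_length(t, dist))
        survivors = pop[: pop_size // 2]                  # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            child = order_crossover(*random.sample(survivors, 2))
            if random.random() < mutation_rate:           # swap mutation
                i, j = random.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda t: tour_length(t, dist))

# Toy usage: 8 random cities with Euclidean distances.
pts = [(random.random(), random.random()) for _ in range(8)]
dist = [[((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 for (x2, y2) in pts] for (x1, y1) in pts]
best = evolve_tsp(dist)
print(best, tour_length(best, dist))
```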