OPTIMAL WATER QUALITY MANAGEMENT STRATEGIES FOR URBAN WATERSHEDS USING MACRO-LEVEL SIMULATION MODELS LINKED WITH EVOLUTIONARY ALGORITHMS
Urban watershed management poses a very challenging problem due to the various sources of pollution, and there is a need to develop optimal management models that can facilitate the process of identifying optimal water quality management strategies. A screening level, comprehensive, and integrated computational methodology is developed for the management of point and non-point sources of pollution in urban watersheds. The methodology is based on linking macro-level water quality simulation models with efficient nonlinear constrained optimization methods for urban watershed management. The use of macro-level simulation models in lieu of the traditional and complex deductive simulation models is investigated in the optimal management framework for urban watersheds. Two different types of macro-level simulation models are investigated for application to watershed pollution problems, namely explicit inductive models and simplified deductive models. Three different types of inductive modeling techniques are used to develop macro-level simulation models, ranging from simple regression methods to more complex and nonlinear methods such as artificial neural networks and genetic functions. A new genetic algorithm (GA) based technique of inductive model construction called the Fixed Functional Set Genetic Algorithm (FFSGA) is developed and used in the development of macro-level simulation models. A novel simplified deductive model approach is developed for modeling the response of dissolved oxygen in urban streams impaired by point and non-point sources of pollution. The utility of this inverse loading model in an optimal management framework for urban watersheds is investigated. In the context of the optimization methods, the research investigated the use of parallel methods of optimization for use in the optimal management formulation.
These included an evolutionary computing method called genetic optimization and a modified version of the direct search method of optimization called the Shuffled Box Complex method of constrained optimization. The resulting optimal management model obtained by linking macro-level simulation models with efficient optimization models is capable of identifying optimal management strategies for an urban watershed to satisfy water quality and economic related objectives. Finally, the optimal management model is applied to a real-world urban watershed to evaluate management strategies for water quality management, leading to the selection of near-optimal strategies.
An optically pumped magnetometer working in the light-shift dispersed Mz mode
We present an optically pumped magnetometer working in a new operational mode, the light-shift dispersed Mz (LSD-Mz) mode. It is realized by combining several features: (1) high power off-resonant optical pumping; (2) the Mz configuration, where pumping light and the magnetic field of interest are oriented parallel to each other; (3) the use of small alkali metal vapor cells of identical properties in integrated array structures, where two such cells are pumped by circularly polarized light of opposite helicity; and (4) subtraction of the Mz signals of these two cells. The LSD-Mz magnetometer's performance depends on the inherent and very complex interplay of input parameters. In order to find the configuration of optimal magnetometer resolution, a sensitivity analysis of the input parameters by means of Latin Hypercube Sampling was carried out. The resulting datasets of the multi-dimensional parameter space exploration were assessed by a subsequent physically reasonable interpretation. Finally, the best shot-noise-limited magnetic field resolution was determined within that parameter space. As a result, using two 50 mm³ integrated vapor cells, a magnetic field resolution below 10 fT/√Hz at Earth's magnetic field strength is possible.
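The Latin Hypercube Sampling step described above is simple to sketch. Below is a minimal, generic implementation (the two-parameter example and its bounds are invented for illustration and are not the actual LSD-Mz input parameters): each parameter range is cut into as many equal strata as there are samples, one point is drawn per stratum, and the strata are shuffled independently per dimension.

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """Latin Hypercube Sampling: each parameter range is split into
    n_samples equal strata; exactly one sample falls in each stratum,
    and strata are shuffled independently per dimension."""
    rng = random.Random(seed)
    samples = [[0.0] * len(bounds) for _ in range(n_samples)]
    for d, (lo, hi) in enumerate(bounds):
        # one random point inside each of the n_samples strata of [0, 1)
        points = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(points)  # decouple this dimension from the others
        for i in range(n_samples):
            samples[i][d] = lo + points[i] * (hi - lo)
    return samples

# hypothetical two-parameter design space, e.g. (pump power, temperature)
design = latin_hypercube(8, [(0.1, 5.0), (80.0, 160.0)])
```

Unlike plain random sampling, every marginal is guaranteed to be evenly covered, which is what makes the method attractive for exploring a multi-dimensional parameter space with few model runs.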
Parameter Optimization of Conceptual Hydrological Models
The simulation of the rainfall/runoff regime of catchment areas by mathematical models is of particular importance to civil engineers in the building of dams, river bridges and other works affected by high and low flows in rivers and streams. The parametric conceptual models can also be used in the management of water resources and as a basis for the assessment of long-term risks associated with water storage and transmission of supplies. The objectives of this research are to examine the problems arising from the conceptual modelling of catchment areas with large data sets, and the effective determination of model parameters using gradient and non-gradient optimization techniques in the field of hydrology.
A simple model package was developed from the application and modification of ideas current at the time, which allowed a good fit to observed hydrographs to be achieved with the input of rainfall data and data for an evaporation loss function. Nine parameters were available for optimization in this model. The practical demand for the assessment of land use and its variations on catchment water yield led to the development of a more complex model with thirty-five parameters based on the latest vegetation process studies.
One of the first modifications was to the criterion for convergence, which was changed from the rate of change of parameter values to that of the model coefficient of determination, or efficiency of fit. The least squares objective function was investigated, and retained for model explained variance. However, for parameters involved in the simulation of base flows it was found to be more effective to use a proportional function, whilst for intense storm events an eighth power function exaggerated the information available in the data for determination of surface runoff parameters. The models employ an input data 'overlay' technique which allowed the use of large data sets running over many years. The simulation results from land use changes with large data sets from the highlands of Scotland, a clay catchment in Buckinghamshire and montane rain forest in Kenya are compared and contrasted for both models.
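The effect of switching objective functions can be seen in a small numeric sketch (the forms below are generic illustrations; the exact formulations used in the thesis may differ):

```python
def objectives(obs, sim):
    """Three candidate calibration objectives over observed/simulated flows:
    least squares, a proportional (relative-error) function that gives base
    flows more influence, and an eighth-power function that exaggerates
    errors on storm peaks."""
    ls = sum((o - s) ** 2 for o, s in zip(obs, sim))
    prop = sum(((o - s) / o) ** 2 for o, s in zip(obs, sim) if o > 0)
    p8 = sum((o - s) ** 8 for o, s in zip(obs, sim))
    return ls, prop, p8

# a base flow of 1.0 simulated with a 0.1 error, and a storm peak of
# 100.0 simulated with a 10.0 error (both 10% relative errors):
ls, prop, p8 = objectives([1.0, 100.0], [0.9, 90.0])
# least squares is dominated by the peak residual; the proportional
# function weights both errors equally in relative terms; the eighth
# power is almost entirely determined by the storm-peak residual.
```

This is why the proportional form suits base-flow parameters while the eighth-power form sharpens the signal available for surface runoff parameters during intense storms.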
The results for these catchments using gradient and non-gradient optimization algorithms are also examined, including the use of a genetic algorithm, and recommendations are made for the values of algorithm parameters. Hybridized algorithms are developed and tested. A combination of the Rosenbrock and Nelder and Mead Simplex techniques was found to be an efficient hybrid, particularly with the land use model.
Sparse EEG Source Localization Using Bernoulli Laplacian Priors
Source localization in electroencephalography has received an increasing amount of interest in the last decade. Solving the underlying ill-posed inverse problem usually requires choosing an appropriate regularization. The usual l2 norm has been considered and provides solutions with low computational complexity. However, in several situations, realistic brain activity is believed to be focused in a few focal areas. In these cases, the l2 norm is known to overestimate the activated spatial areas. One solution to this problem is to promote sparse solutions, for instance based on the l1 norm, that are easy to handle with optimization techniques. In this paper, we consider the use of an l0 + l1 norm to enforce sparse source activity (by ensuring the solution has few nonzero elements) while regularizing the nonzero amplitudes of the solution. More precisely, the l0 pseudonorm handles the positions of the nonzero elements while the l1 norm constrains the values of their amplitudes. We use a Bernoulli-Laplace prior to introduce this combined l0 + l1 norm in a Bayesian framework. The proposed Bayesian model is shown to favor sparsity while jointly estimating the model hyperparameters using a Markov chain Monte Carlo sampling technique. We apply the model to both simulated and real EEG data, showing that the proposed method provides better results than the l2 and l1 norm regularizations in the presence of pointwise sources. A comparison with a recent method based on multiple sparse priors is also conducted.
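The sparsity-promoting effect of the l1 norm, as opposed to the l2 norm, can be seen directly from their shrinkage operators. This generic sketch is not the paper's Bernoulli-Laplace MCMC sampler, only an illustration of why l1 regularization yields exact zeros while l2 regularization does not:

```python
def prox_l1(x, t):
    """Soft-thresholding, the proximal operator of t * ||x||_1:
    entries with magnitude below t are set exactly to zero."""
    return [max(abs(v) - t, 0.0) * (1.0 if v >= 0 else -1.0) for v in x]

def shrink_l2(x, t):
    """Closed-form ridge (l2) shrinkage: every entry is scaled down
    uniformly, so no entry ever becomes exactly zero."""
    return [v / (1.0 + t) for v in x]

coeffs = [2.0, 0.3, -0.05, 1.5, -0.2]
sparse = prox_l1(coeffs, 0.5)    # small entries zeroed, large ones kept
smooth = shrink_l2(coeffs, 0.5)  # all entries survive, just smaller
```

The l1 operator maps the three small coefficients to exactly zero, which is the behavior the combined l0 + l1 prior exploits to recover focal source activity.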
Machine learning techniques implementation in power optimization, data processing, and bio-medical applications
The rapid progress and development of machine-learning algorithms has become a key factor in determining the future of humanity. These algorithms and techniques have been utilized to solve a wide spectrum of problems, extending from data mining and knowledge discovery to unsupervised learning and optimization. This dissertation consists of two study areas. The first area investigates the use of reinforcement learning and adaptive critic design algorithms in the field of power grid control. The second area in this dissertation, consisting of three papers, focuses on developing and applying clustering algorithms to biomedical data. The first paper presents a novel modelling approach for demand side management of electric water heaters using Q-learning and action-dependent heuristic dynamic programming. The implemented approaches provide an efficient load management mechanism that reduces the overall power cost and smooths the grid load profile. The second paper implements an ensemble statistical and subspace-clustering model for analyzing the heterogeneous data of the autism spectrum disorder. The paper implements a novel k-dimensional algorithm that shows efficiency in handling heterogeneous datasets. The third paper provides a unified learning model for clustering neuroimaging data to identify the potential risk factors for suboptimal brain aging. In the last paper, clustering and clustering validation indices are utilized to identify the groups of compounds that are responsible for plant uptake and contaminant transportation from roots to plants' edible parts.
Proceedings, MSVSCC 2015
The Virginia Modeling, Analysis and Simulation Center (VMASC) of Old Dominion University hosted the 2015 Modeling, Simulation, & Visualization Student Capstone Conference on April 16th. The Capstone Conference features students from undergraduate and graduate degree programs in Modeling and Simulation and related fields from many colleges and universities. Students present their research to an audience of fellow students, faculty, judges, and other distinguished guests. For the students, these presentations afford them the opportunity to impart their innovative research to members of the M&S community from academic, industry, and government backgrounds. Also participating in the conference are faculty and judges who have volunteered their time to directly support their students' research, facilitate the various conference tracks, serve as judges for each of the tracks, and provide overall assistance to this conference. 2015 marks the ninth year of the VMASC Capstone Conference for Modeling, Simulation and Visualization. This year our conference attracted a number of fine student-written papers and presentations, resulting in a total of 51 research works that were presented. This year's conference had record attendance thanks to the support from the various departments at Old Dominion University, other local universities, and the United States Military Academy at West Point. We greatly appreciate all of the work and energy that went into this year's conference; it truly was a highly collaborative effort that resulted in a very successful symposium for the M&S community and all of those involved. Below you will find a brief summary of the best papers and best presentations, with some simple statistics on the overall conference contributions. Following that is a table of contents, broken down by conference track category, with a copy of each included body of work.
Thank you again for your time and your contribution as this conference is designed to continuously evolve and adapt to better suit the authors and M&S supporters.
Dr. Yuzhong Shen, Graduate Program Director, MSVE Capstone Conference Chair
John Shull, Graduate Student, MSVE Capstone Conference Student Chair
Improving the Geotagging Accuracy of Street-level Images
Integrating images taken at street level with satellite imagery is becoming increasingly valuable in decision-making processes, not only for individuals but also in the business and governmental sectors. To perform this integration, images taken at street level need to be accurately georeferenced. This georeference information can be derived from a global positioning system (GPS). However, GPS data is prone to errors of up to 15 meters and needs to be corrected for the purpose of georeferencing. In this thesis, an automatic method is proposed for correcting the georeference information obtained from GPS data, based on image registration techniques. The proposed method uses an optimization technique to find local optimal solutions by matching high-level features and their relative locations. A global optimization method is then employed over all of the local solutions by applying a geometric constraint.
The main contribution of this thesis is the introduction of a new direction for correcting GPS data which is more economical and more consistent compared to the existing manual method. Other than high cost (labor and management), the main concern with manual correction is the low degree of consistency between different human operators. Our proposed automatic software-based method is a solution for these drawbacks.
Other contributions can be listed as: (1) a modified Chamfer matching (CM) cost function which improves the accuracy of standard CM for images with various misleading/disturbing edges; (2) a Monte-Carlo-inspired statistical analysis which made it possible to quantify the overall performance of the proposed algorithm; (3) a novel similarity measure for applying the normalized cross correlation (NCC) technique to multi-level thresholded images, which is used to compare multi-modal images more accurately as compared to the standard application of NCC on raw images; and (4) casting the problem of selecting an optimal global solution among a set of local minima into a problem of finding an optimal path in a graph using Dijkstra's algorithm.
We used our algorithm for correcting the georeference information of 20 chains containing more than 7000 fisheye images, and our experimental results show that the proposed algorithm can achieve an average error of 2 meters, which is acceptable for most applications.
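The path-selection idea in contribution (4) can be sketched with a layered graph in which each layer holds one image's candidate local solutions and edge weights combine matching cost with a geometric-consistency penalty. The graph below is a toy example with made-up node names and weights, not the thesis's actual construction:

```python
import heapq

def dijkstra(graph, source, target):
    """Dijkstra's shortest path. graph: {node: [(neighbor, weight), ...]}.
    Returns the optimal path and its total cost."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    done = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)
        if u == target:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path = [target]
    while path[-1] != source:
        path.append(prev[path[-1]])
    return list(reversed(path)), dist[target]

# S = start, a1/a2 and b1/b2 = candidate local solutions for two images,
# T = end; weights are hypothetical combined matching/consistency costs
graph = {
    "S": [("a1", 1.0), ("a2", 0.2)],
    "a1": [("b1", 0.1), ("b2", 2.0)],
    "a2": [("b1", 2.0), ("b2", 1.5)],
    "b1": [("T", 0.0)],
    "b2": [("T", 0.0)],
}
path, cost = dijkstra(graph, "S", "T")
```

Note that the globally cheapest chain here passes through the locally more expensive candidate a1, which is exactly why a per-image greedy choice is not enough and a shortest-path formulation pays off.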
A Multi-Objective Gaining-Sharing Knowledge-Based Optimization Algorithm for Solving Engineering Problems
Metaheuristics have proven their effectiveness in recent years; however, robust algorithms that can solve real-world problems are always needed. In this paper, we suggest the first extended version of the recently introduced gaining-sharing knowledge optimization (GSK) algorithm, named multiobjective gaining-sharing knowledge optimization (MOGSK), to deal with multiobjective optimization problems (MOPs). MOGSK employs an external archive population to store the nondominated solutions generated thus far, with the aim of guiding the solutions during the exploration process. Furthermore, fast nondominated sorting with crowding distance was incorporated to sustain the diversity of the solutions and ensure convergence towards the Pareto optimal set, while the e-dominance relation was used to update the archive population solutions. e-dominance helps provide a good boost to diversity, coverage, and convergence overall. The validation of the proposed MOGSK was conducted using five biobjective (ZDT) and seven three-objective (DTLZ) test problems, along with the recently introduced CEC 2021 suite, with fifty-five test problems in total, including power electronics, process design and synthesis, mechanical design, chemical engineering, and power system optimization. The proposed MOGSK was compared with seven existing optimization algorithms, including MOEAD, eMOEA, MOPSO, NSGAII, SPEA2, KnEA, and GrEA. The experimental findings show the good behavior of our proposed MOGSK against the comparative algorithms on particular real-world optimization problems.
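The nondominated-sorting ingredient that MOGSK relies on reduces, for a single front, to a Pareto-dominance filter. A minimal sketch for minimization problems (this is only the dominance filter, not the full fast nondominated sort with crowding distance, and certainly not the GSK search operators themselves):

```python
def dominates(a, b):
    """a Pareto-dominates b (minimization): a is no worse in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def nondominated(points):
    """O(n^2) filter returning the first Pareto front: the points
    that no other point dominates."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# five candidate solutions in a biobjective space
pts = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0), (2.5, 3.0)]
front = nondominated(pts)
```

In an archive-based algorithm like MOGSK, a filter of this kind decides which newly generated solutions are worth keeping; relations such as e-dominance relax the strict comparison to bound the archive size.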
Conditional Gradient Methods
The purpose of this survey is to serve both as a gentle introduction and a
coherent overview of state-of-the-art Frank--Wolfe algorithms, also called
conditional gradient algorithms, for function minimization. These algorithms
are especially useful in convex optimization when linear optimization is
cheaper than projections.
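The point about linear optimization being cheaper than projection is easiest to see on the probability simplex, where the linear minimization oracle just returns a vertex. A minimal illustrative sketch of the classic Frank--Wolfe iteration with the standard 2/(t+2) step size (a toy quadratic objective, not any particular variant from the survey):

```python
def frank_wolfe_simplex(grad, x0, steps=2000):
    """Frank-Wolfe (conditional gradient) over the probability simplex.
    The linear minimization oracle min_{s in simplex} <g, s> is solved by
    picking the vertex e_i with the smallest gradient entry -- far cheaper
    than computing a projection onto the simplex."""
    x = list(x0)
    for t in range(steps):
        g = grad(x)
        i = min(range(len(x)), key=lambda j: g[j])  # LMO: best vertex e_i
        gamma = 2.0 / (t + 2.0)                     # classic step size
        x = [(1.0 - gamma) * v for v in x]          # convex combination
        x[i] += gamma                               # ... with vertex e_i
    return x

# minimize f(x) = ||x - c||^2 over the simplex; c lies in the simplex,
# so the iterates should approach c while staying feasible throughout
c = [0.2, 0.5, 0.3]
x = frank_wolfe_simplex(lambda y: [2.0 * (yk - ck) for yk, ck in zip(y, c)],
                        [1.0, 0.0, 0.0])
```

Every iterate is a convex combination of simplex vertices, so feasibility is maintained for free; this projection-free property is the method's main appeal.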
The selection of the material has been guided by the principle of
highlighting crucial ideas as well as presenting new approaches that we believe
might become important in the future, with ample citations even of old works
imperative in the development of newer methods. Yet, our selection is sometimes
biased, and need not reflect the consensus of the research community, and we have
certainly missed recent important contributions. After all, the research area of
Frank--Wolfe is very active, making it a moving target. We apologize sincerely
in advance for any such distortions and we fully acknowledge: we stand on the
shoulders of giants.

Comment: 238 pages with many figures. The FrankWolfe.jl Julia package
(https://github.com/ZIB-IOL/FrankWolfe.jl) provides state-of-the-art
implementations of many Frank--Wolfe methods.
Comprehensive Taxonomies of Nature- and Bio-inspired Optimization: Inspiration versus Algorithmic Behavior, Critical Analysis and Recommendations
In recent years, a great variety of nature- and bio-inspired algorithms has
been reported in the literature. This algorithmic family simulates different
biological processes observed in Nature in order to efficiently address complex
optimization problems. In the last years the number of bio-inspired
optimization approaches in literature has grown considerably, reaching
unprecedented levels that dark the future prospects of this field of research.
This paper addresses this problem by proposing two comprehensive,
principle-based taxonomies that allow researchers to organize existing and
future algorithmic developments into well-defined categories, considering two
different criteria: the source of inspiration and the behavior of each
algorithm. Using these taxonomies we review more than three hundred
publications dealing with nature-inspired and bio-inspired algorithms, and
proposals falling within each of these categories are examined, leading to a
critical summary of design trends and similarities between them, and the
identification of the most similar classical algorithm for each reviewed paper.
From our analysis we conclude that a poor relationship is often found between
the natural inspiration of an algorithm and its behavior. Furthermore,
similarities in terms of behavior between different algorithms are greater than
what is claimed in their public disclosure: specifically, we show that more
than one-third of the reviewed bio-inspired solvers are versions of classical
algorithms. Grounded on the conclusions of our critical analysis, we give
several recommendations and points of improvement for better methodological
practices in this active and growing research field.

Comment: 76 pages, 6 figures.