
    An Integrated Method for Optimizing Bridge Maintenance Plans

    Bridges are vital civil infrastructure assets, essential for economic development and public welfare. Their large numbers, deteriorating condition, public demand for safe and efficient transportation networks, and limited maintenance and intervention budgets pose a challenge, particularly when coupled with the need to respect environmental constraints. This state of affairs creates a wide gap between critical needs for intervention actions and tight maintenance and rehabilitation funds. To meet this challenge, a newly developed integrated method for optimized maintenance and intervention plans for reinforced concrete bridge decks is introduced. The method encompasses five models: surface defect evaluation, corrosion severity evaluation, deterioration modeling, integrated condition assessment, and optimized maintenance planning. These models were automated as a set of standalone computer applications, coded in C#.net within the Matlab environment, and subsequently combined into an integrated method for optimized maintenance and intervention plans. Four bridges and a dataset of bridge images were used to test and validate the developed optimization method and its five models. The developed models have unique features and demonstrated noticeable gains in performance and accuracy over methods used in practice and those reported in the literature. For example, the surface defect detection and evaluation model outperforms widely recognized machine learning and deep learning models, reducing surface defect detection, recognition, and evaluation errors by 56.08%, 20.2%, and 64.23%, respectively. The corrosion evaluation model comprises a standardized amplitude rating system that circumvents the limitations of numerical amplitude-based corrosion maps. The integrated condition assessment model achieved consistent improvement over the visual inspection procedures in use by the Ministry of Transportation in Quebec. Similarly, the deterioration model improved prediction accuracy by 60% on average compared with the most commonly used Weibull distribution. The developed multi-objective optimization model yielded 49% and 25% improvement over a genetic algorithm in five-year and twenty-five-year study periods, respectively. For a thirty-five-year study period, unlike the developed model, classical meta-heuristics failed to find feasible solutions within the assigned constraints. The developed integrated platform is expected to provide an efficient tool that enables decision makers to formulate sustainable maintenance plans that optimize budget allocations and ensure efficient utilization of resources.
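To make the optimization step concrete, the sketch below enumerates candidate intervention plans for a few hypothetical deck elements, scores each plan on total cost and resulting average condition, and keeps the Pareto-optimal set. The elements, actions, costs, and condition gains are invented for illustration and are not the dissertation's models; they only show the budget-versus-condition trade-off that the developed multi-objective model negotiates.

```python
from itertools import product

# Hypothetical deck elements: (current condition on a 0-100 scale, area in m^2).
elements = {"deck_span_1": (62, 450), "deck_span_2": (48, 450), "deck_span_3": (71, 300)}

# Hypothetical maintenance actions: (unit cost $/m^2, condition change over the horizon).
actions = {"do_nothing": (0, -8), "patch_repair": (35, 12), "overlay": (90, 25), "deck_replacement": (320, 45)}

def evaluate(plan):
    """Return (total cost, mean end-of-horizon condition) for one plan."""
    cost, conditions = 0.0, []
    for (cond, area), act in zip(elements.values(), plan):
        unit_cost, gain = actions[act]
        cost += unit_cost * area
        conditions.append(min(100.0, max(0.0, cond + gain)))
    return cost, sum(conditions) / len(conditions)

def pareto_front(scored):
    """Keep plans not dominated in (minimize cost, maximize condition)."""
    front = []
    for plan, (c, q) in scored:
        dominated = any(c2 <= c and q2 >= q and (c2, q2) != (c, q) for _, (c2, q2) in scored)
        if not dominated:
            front.append((plan, (c, q)))
    return front

scored = [(plan, evaluate(plan)) for plan in product(actions, repeat=len(elements))]
for plan, (cost, cond) in sorted(pareto_front(scored), key=lambda x: x[1][0]):
    print(f"cost=${cost:>9,.0f}  mean condition={cond:5.1f}  plan={plan}")
```

A real maintenance optimizer works over multi-year horizons and many more elements, which is why meta-heuristic or exact search replaces this brute-force enumeration; the trade-off structure printed above is the part that carries over.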

    A generic framework for context-dependent fusion with application to landmine detection

    For complex detection and classification problems, involving data with large intra-class variations and noisy inputs, no single source of information can provide a satisfactory solution. As a result, the combination of multiple classifiers is playing an increasing role in solving these complex pattern recognition problems, and has proven to be a viable alternative to using a single classifier. Over the past few years, a variety of schemes have been proposed for combining multiple classifiers. Most of these were global, as they assign each classifier a degree of worthiness that is averaged over the entire training data. This may not be the optimal way to combine the different experts, since the behavior of each one may not be uniform over the different regions of the feature space. To overcome this issue, a few local methods have been proposed in recent years. Local fusion methods aim to adapt the classifiers' worthiness to different regions of the feature space. First, they partition the input samples. Then, they identify the best classifier for each partition and designate it as the expert for that partition. Unfortunately, current local methods are either computationally expensive and/or perform these two tasks independently of each other. However, feature-space partitioning and algorithm selection are not independent, and their optimization should be simultaneous. In this dissertation, we introduce a new local fusion approach, called Context Extraction for Local Fusion (CELF). CELF was designed to adapt the fusion to different regions of the feature space. It takes advantage of the strengths of the different experts and overcomes their limitations. First, we describe the baseline CELF algorithm. We formulate a novel objective function that combines context identification and multi-algorithm fusion criteria. The context identification component strives to partition the input feature space into different clusters (called contexts), while the fusion component strives to learn the optimal fusion parameters within each cluster. Second, we propose several variations of CELF to deal with different application scenarios. In particular, we propose an extension that includes a feature discrimination component (CELF-FD). This version is advantageous when dealing with high-dimensional feature spaces and/or when the number of features extracted by the individual algorithms varies significantly. CELF-CA is another extension of CELF that adds a regularization term to the objective function to introduce competition among the clusters and to find the optimal number of clusters in an unsupervised way. CELF-CA starts by partitioning the data into a large number of small clusters. As the algorithm progresses, adjacent clusters compete for data points, and clusters that lose the competition gradually become depleted and vanish. Third, we propose CELF-M, which generalizes CELF to support multi-class data sets. The baseline CELF and its extensions were formulated to use linear aggregation to combine the output of the different algorithms within each context. For some applications, this can be too restrictive and non-linear fusion may be needed. To address this potential drawback, we propose two other variations of CELF that use non-linear aggregation. The first one is based on Neural Networks (CELF-NN) and the second one is based on Fuzzy Integrals (CELF-FI).
The latter has the desirable property of assigning weights to subsets of classifiers to take into account the interactions between them. To test a new signature using CELF (or its variants), each algorithm extracts its set of features and assigns a confidence value. The features are then used to identify the best context, and the fusion parameters of that context are used to fuse the individual confidence values. For each variation of CELF, we formulate an objective function, derive the necessary conditions to optimize it, and construct an iterative algorithm. We then use examples to illustrate the behavior of the algorithm, compare it to global fusion, and highlight its advantages. We apply our proposed fusion methods to the problem of landmine detection, using data collected with Ground Penetrating Radar (GPR) and Wideband Electro-Magnetic Induction (WEMI) sensors. We show that CELF (and its variants) can identify meaningful and coherent contexts (e.g., mines of the same type, mines buried at the same site) and that different expert algorithms can be identified for the different contexts. In addition to the landmine detection application, we apply our approaches to semantic video indexing, image database categorization, and phoneme recognition. In all applications, we compare the performance of CELF with standard fusion methods and show that our approach outperforms them.
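A minimal sketch of the local-fusion idea described above (partition the feature space, then learn fusion weights per region) is given below on synthetic data. For simplicity it performs the two steps sequentially, k-means clustering followed by per-cluster least-squares weights over the classifier confidences, rather than optimizing CELF's joint objective; the data and base classifiers are invented for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic set-up: the class depends on x1, but each base classifier is
# trustworthy only in one half of the feature space (split on x0) and is
# confidently wrong in the other half; a third classifier is uninformative.
X = rng.uniform(-1, 1, size=(600, 2))
y = (X[:, 1] > 0).astype(float)
conf = np.column_stack([
    np.where(X[:, 0] > 0, y, 1 - y) + rng.normal(0, 0.1, 600),   # expert for x0 > 0
    np.where(X[:, 0] <= 0, y, 1 - y) + rng.normal(0, 0.1, 600),  # expert for x0 <= 0
    rng.uniform(0, 1, 600),                                       # uninformative classifier
])

# Step 1: partition the feature space into contexts.
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
labels = km.labels_

# Step 2: learn linear aggregation weights within each context.
weights = {}
for c in range(km.n_clusters):
    idx = labels == c
    w, *_ = np.linalg.lstsq(conf[idx], y[idx], rcond=None)
    weights[c] = w

# Fuse: route each sample to its context and apply that context's weights.
fused = np.array([conf[i] @ weights[labels[i]] for i in range(len(y))])
print("local fusion accuracy :", np.mean((fused > 0.5) == y))

# Global baseline: a single weight vector for the whole feature space.
w_glob, *_ = np.linalg.lstsq(conf, y, rcond=None)
print("global fusion accuracy:", np.mean((conf @ w_glob > 0.5) == y))
```

Because each expert is reliable only in part of the space, the per-context weights can recover the right expert for each region, whereas a single global weight vector cannot.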

    Optimization in bioinformatics

    In this work, we present novel optimization approaches for important bioinformatical problems. The first part deals mainly with the local optimization of molecular structures and its applications to molecular docking, while the second part discusses discrete global optimization. In the first part, we present a novel algorithm for an old task: find the next local optimum in a given direction on a molecular potential energy function (line search). We show that replacing a standard line search method with the new algorithm reduced the number of function/gradient evaluations in our test runs down to 47.7% (to 85% on average). We then include this method in our novel approach for locally optimizing flexible ligands in the presence of their receptors, which we describe in detail and which avoids the singularity problem of orientational parameters. We extend this approach to a full ligand-receptor docking program using a Lamarckian genetic algorithm. Our validation runs show that we gained an up to tenfold speedup in comparison to other tested methods. We then further incorporate side-chain flexibility of the receptor into our approach and introduce limited backbone flexibility by interpolating between known extremal conformations using spherical linear extrapolation. Our results show that this approach is very promising for flexible ligand-receptor docking. The drawback, however, is that known extremal backbone conformations are needed for the interpolation. In the last section of the first part, we allow a loop region to be fully flexible. We present a new method to find all possible conformations using the Go-Scheraga ring closure equations and interval arithmetic. Our results show that this algorithm reliably finds alternative conformations and is able to identify promising loop/ligand complexes of the studied example. In the second part of this work, we describe the bond order assignment problem for molecular structures. We present our novel linear 0-1-programming formulation for the very efficient computation of all optimal and suboptimal bond order assignments and show that our approach not only outperforms the original heuristic approach of Wang et al. but also commonly used software for determining bond orders on our test set, considering all optimal results. This test set consists of 761 thoroughly prepared drug-like molecules that were originally used for the validation of the Merck Molecular Force Field. We then present our filter method for feature subset selection, which is based on mutual information and uses second-order information. We present our mathematically well-motivated criterion and, in contrast to other methods, solve the resulting optimization problem exactly by quadratic 0-1-programming. In the validation runs, our method achieved the best classification accuracies in 18 out of 21 test scenarios. In the last section, we give our integer linear programming formulation for the detection of deregulated subgraphs in regulatory networks using expression profiles. Our approach identifies the subnetwork of a certain size of the regulatory network with the highest sum of node scores. To demonstrate the capabilities of our algorithm, we analyzed expression profiles from nonmalignant primary mammary epithelial cells derived from BRCA1 mutation carriers and epithelial cells without BRCA1 mutation. Our results suggest that oxidative stress plays an important role in epithelial cells with BRCA1 mutations that may contribute to the later development of breast cancer.
The application of our algorithm to already published data can yield new insights. As expression data and network data continue to grow, methods such as our algorithm will be valuable for detecting deregulated subgraphs under different conditions and will contribute to a better understanding of diseases.
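The deregulated-subgraph objective can be illustrated with a small brute-force stand-in for the integer linear program: on a toy network with hypothetical node scores (standing in for expression-derived scores), enumerate the connected subgraphs of a fixed size and keep the one with the highest score sum. The dissertation's ILP solves this exactly at realistic network sizes; the sketch below only conveys the objective.

```python
from itertools import combinations

# Toy regulatory network (treated as undirected here for simplicity) and
# hypothetical per-gene deregulation scores derived from expression profiles.
edges = {("A", "B"), ("B", "C"), ("C", "D"), ("B", "E"), ("E", "F"), ("D", "F"), ("A", "G")}
scores = {"A": 0.2, "B": 1.5, "C": 0.9, "D": -0.4, "E": 1.1, "F": 0.7, "G": -1.0}

def neighbors(u, nodes):
    """Neighbors of u restricted to the candidate node set."""
    for a, b in edges:
        if a == u and b in nodes:
            yield b
        elif b == u and a in nodes:
            yield a

def is_connected(nodes):
    """Check connectivity of the induced subgraph with a simple DFS."""
    nodes = set(nodes)
    start = next(iter(nodes))
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for v in neighbors(u, nodes):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen == nodes

def best_subnetwork(k):
    """Highest-scoring connected subgraph of size k (brute force)."""
    candidates = (c for c in combinations(scores, k) if is_connected(c))
    return max(candidates, key=lambda c: sum(scores[v] for v in c))

print(best_subnetwork(3))  # ('B', 'C', 'E') for these toy scores
```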

    Utilizing Converter-Interfaced Sources for Frequency Control with Guaranteed Performance in Power Systems

    To integrate renewable energy, converter-interfaced sources (CISs) continue to penetrate power systems and degrade the grid frequency response. Control synthesis with guaranteed performance is a challenging task. Meanwhile, the potential of highly controllable converters is far from fully exploited. With properly designed controllers, CISs can not only eliminate their negative impacts on the grid but also provide performance guarantees. First, the wind turbine generator (WTG) is chosen to represent CISs. An augmented system frequency response (ASFR) model is derived, consisting of the system frequency response model and a reduced-order model of the WTG that represents the supportive active power due to the supplementary inputs. Second, a framework for safety verification is introduced. A new concept, the region of safety (ROS), is proposed, and a safe switching principle is provided. Two approaches are proposed to estimate the largest ROS, both solvable using sum-of-squares programming. Third, the critical switching instants for adequate frequency response are obtained through study of the ASFR model. A safe switching window is identified, and a safe speed recovery strategy is proposed to ensure the safety of the second frequency dip caused by WTG speed recovery. Fourth, an adaptive safety supervisory control (SSC) is proposed with a two-loop configuration, where the supervisor is scheduled with respect to the varying renewable penetration level. For small-scale systems, a decentralized form of the SSC is proposed under rational approximations and verified on the IEEE 39-bus system. Fifth, a two-level control scheme is proposed so that the frequency of a microgrid satisfies temporal logic specifications (TLSs): the scheduling level guarantees satisfaction of the TLSs, and the triggering level determines the activation instant. Finally, a novel model-reference-control-based synthetic inertia emulation strategy is proposed. This strategy ensures precise inertia emulation by the WTGs, as opposed to the trial-and-error procedure of conventional methods, and safety bounds can be derived directly from the reference model under the worst-case scenario.
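As a rough illustration of the frequency support that such supplementary inputs provide, the sketch below simulates a single-machine swing equation with a governor and an optional WTG term combining emulated inertia and droop. The model structure and all gains are hypothetical and far simpler than the ASFR model and the sum-of-squares verification used in the work; the sketch only shows how converter-based support raises the frequency nadir after a load step.

```python
import numpy as np

H, D = 4.0, 1.0           # system inertia constant (s) and load damping (p.u.)
R = 0.05                  # governor droop
T_g = 8.0                 # simplified governor/turbine time constant (s)
dP_load = 0.1             # step load increase (p.u.)
K_in, K_dr = 6.0, 10.0    # hypothetical WTG inertia-emulation and droop gains

def simulate(with_wtg, dt=0.01, t_end=30.0):
    """Euler-integrate the frequency deviation (p.u.) after the load step."""
    df, p_gov, dfdt = 0.0, 0.0, 0.0
    traj = []
    for _ in np.arange(0.0, t_end, dt):
        # WTG supplementary power: emulated inertia (on df/dt) plus droop (on df).
        p_wtg = (-K_in * dfdt - K_dr * df) if with_wtg else 0.0
        dfdt = (p_gov + p_wtg - dP_load - D * df) / (2.0 * H)   # swing equation
        p_gov += dt * (-df / R - p_gov) / T_g                   # governor response
        df += dt * dfdt
        traj.append(df)
    return np.array(traj)

for flag in (False, True):
    dip_hz = simulate(flag).min() * 60.0   # worst per-unit deviation on a 60 Hz base
    print(f"WTG support {'on ' if flag else 'off'}: frequency nadir ~ {60.0 + dip_hz:.3f} Hz")
```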

    Basis Vector Model Method for Proton Stopping Power Estimation using Dual-Energy Computed Tomography

    Accurate estimation of the proton stopping power ratio (SPR) is important for treatment planning and dose prediction in proton beam therapy. The state-of-the-art clinical practice for estimating patient-specific SPR distributions is the stoichiometric calibration method using single-energy computed tomography (SECT) images, which in principle may introduce large intrinsic uncertainties into estimation results. One major factor that limits the performance of SECT-based methods is the Hounsfield unit (HU) degeneracy in the presence of tissue composition variations. Dual-energy computed tomography (DECT) has shown potential for reducing uncertainties in proton SPR prediction by scanning the patient with two different source energy spectra. Numerous methods have been studied to estimate the SPR with DECT techniques using either image-domain or sinogram-domain decomposition approaches. In this work, we implement and evaluate a novel DECT approach for proton SPR mapping, which integrates image reconstruction and material characterization using a joint statistical image reconstruction (JSIR) method based on a linear basis vector model (BVM). This method reconstructs two images of material parameters simultaneously from the DECT measurement data and then uses them to predict the electron densities and the mean excitation energies, which are required by the Bethe equation for computing proton SPR. The proposed JSIR-BVM method is first compared with image-domain and sinogram-domain decomposition approaches based on three available SPR models, including the BVM, in a well-controlled simulation framework that is representative of major uncertainty sources existing in practice. The intrinsic SPR modeling accuracy of the three DECT-SPR models is validated via theoretically computed radiological quantities for various reference human tissues. The achievable performance of the investigated methods in the presence of image formation uncertainties is evaluated using synthetic DECT transmission sinograms of virtual cylindrical phantoms and virtual patients, which consist of reference human tissues with known densities and compositions. The JSIR-BVM method is then experimentally commissioned using DECT measurement data acquired on a Philips Brilliance Big Bore CT scanner at 90 kVp and 140 kVp for two phantoms of different sizes, each of which contains 12 different soft and bony tissue surrogates. An image-domain decomposition method that utilizes the two HU images reconstructed via the scanner's software is implemented for comparison. The JSIR-BVM method outperforms the other investigated methods in both the simulation and experimental settings. Although all investigated DECT-SPR models exhibit low intrinsic modeling errors (i.e., less than 0.2% RMS errors for reference human tissues), the achievable accuracy of the image- and sinogram-domain methods is limited by the image formation uncertainties introduced by the reconstruction and decomposition processes. In contrast, by taking advantage of an accurate polychromatic CT data model and a joint DECT statistical reconstruction algorithm, the JSIR-BVM method accounts for both systematic bias and random noise in the acquired DECT measurement data.
Therefore, the JSIR-BVM method achieves much better accuracy and precision in proton SPR estimation than the image- and sinogram-domain methods for various materials and object sizes, with an overall RMS-of-mean error of 0.4% and a maximum absolute-mean error of 0.7% for test samples in the experimental setting. The JSIR-BVM method also reduces the pixel-wise random variation within homogeneous regions by a factor of 4 to 6 compared to the image- and sinogram-domain methods while exhibiting relatively higher spatial resolution. These results suggest that the JSIR-BVM method has the potential for better SPR prediction in clinical settings.
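The last step of the pipeline, mapping the reconstructed electron density and mean excitation energy to SPR through the Bethe equation, can be sketched as follows. The tissue values used here are illustrative round numbers, not JSIR-BVM estimates, and the formula omits shell, density-effect, and higher-order corrections.

```python
import math

M_E_C2 = 0.511e6      # electron rest energy (eV)
M_P_C2 = 938.272e6    # proton rest energy (eV)
I_WATER = 75.0        # mean excitation energy of water (eV)

def beta_squared(kinetic_energy_ev):
    """Relativistic beta^2 of a proton with the given kinetic energy."""
    gamma = 1.0 + kinetic_energy_ev / M_P_C2
    return 1.0 - 1.0 / gamma**2

def spr_vs_water(rel_electron_density, mean_excitation_ev, kinetic_energy_ev=175e6):
    """Proton stopping power ratio relative to water via the Bethe equation."""
    b2 = beta_squared(kinetic_energy_ev)
    arg = 2.0 * M_E_C2 * b2 / (1.0 - b2)
    l_medium = math.log(arg / mean_excitation_ev) - b2
    l_water = math.log(arg / I_WATER) - b2
    return rel_electron_density * l_medium / l_water

# Illustrative (not JSIR-BVM) inputs: electron density relative to water, I in eV.
for name, rho_e, i_med in [("adipose-like", 0.95, 63.0),
                           ("muscle-like", 1.04, 74.0),
                           ("cortical-bone-like", 1.78, 112.0)]:
    print(f"{name:20s} SPR ~ {spr_vs_water(rho_e, i_med):.3f}")
```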

    Localization and security algorithms for wireless sensor networks and the usage of signals of opportunity

    In this dissertation we consider the problem of localization of wireless devices in environments and applications where GPS (Global Positioning System) is not a viable option. The first part of the dissertation studies a novel positioning system based on narrowband radio frequency (RF) signals of opportunity, and develops near-optimum estimation algorithms for localization of a mobile receiver. It is assumed that a reference receiver (RR) with known position is available to aid with the positioning of the mobile receiver (MR). The new positioning system is reminiscent of GPS and involves two similar estimation problems. The first is localization using estimates of time-difference of arrival (TDOA). The second is TDOA estimation based on the received narrowband signals at the RR and the MR. In both cases, near-optimum estimation algorithms are developed in the sense of maximum likelihood estimation (MLE) under some mild assumptions, and both algorithms compute approximate MLEs in the form of a weighted least-squares (WLS) solution. The proposed positioning system is illustrated with simulation studies based on FM radio signals. The numerical results show that the position errors are comparable to those of other positioning systems, including GPS. Next, we present a novel algorithm for localization of wireless sensor networks (WSNs), called distributed randomized gradient descent (DRGD), and prove that in the case of noise-free distance measurements the algorithm converges and provides the true location of the nodes. For noisy distance measurements, the convergence properties of DRGD are discussed and a bound on the location estimation error is obtained. In contrast to several recently proposed methods, DRGD does not require that blind nodes be contained in the convex hull of the anchor nodes, and can accurately localize the network with only a few anchors. Performance of DRGD is evaluated through extensive simulations and compared with three other algorithms, namely the relaxation-based second-order cone programming (SOCP), simulated annealing (SA), and semi-definite programming (SDP) procedures. Similar to DRGD, SOCP and SA are distributed algorithms, whereas SDP is centralized. The results show that DRGD successfully localizes the nodes in all the cases, whereas in many cases SOCP and SA fail. We also present a modification of DRGD for mobile WSNs and demonstrate the efficacy of DRGD for localization of mobile networks with several simulation results. We then extend this method to secure localization in the presence of outlier distance measurements or distance spoofing attacks. In this case, we present a centralized algorithm to estimate the position of the nodes in WSNs where outlier distance measurements may be present.
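A minimal, centralized sketch of gradient-based localization from range measurements is shown below: a blind node's position estimate is moved down the gradient of the sum of squared distance residuals to a few anchors. The anchor layout, noise level, and step size are invented for illustration, and the sketch omits the distributed and randomized aspects that define DRGD.

```python
import numpy as np

rng = np.random.default_rng(1)

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])  # known positions
true_pos = np.array([6.5, 3.0])                                           # unknown blind node
ranges = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, 0.05, len(anchors))

def cost_gradient(p):
    """Gradient of sum_i (||p - a_i|| - r_i)^2 with respect to p."""
    diffs = p - anchors
    dists = np.linalg.norm(diffs, axis=1)
    residuals = dists - ranges
    return 2.0 * np.sum((residuals / dists)[:, None] * diffs, axis=0)

p = np.array([5.0, 5.0])          # rough initial guess
for _ in range(200):
    p -= 0.05 * cost_gradient(p)  # fixed-step gradient descent

print("estimated position:", np.round(p, 3))
print("localization error :", np.round(np.linalg.norm(p - true_pos), 3))
```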

    Image Registration Workshop Proceedings

    Automatic image registration has often been considered a preliminary step for higher-level processing, such as object recognition or data fusion. But with the unprecedented amounts of data being generated, and that will continue to be generated, by newly developed sensors, automatic image registration has become an important research topic in its own right. This workshop presents a collection of very high quality work grouped into four main areas: (1) theoretical aspects of image registration; (2) applications to satellite imagery; (3) applications to medical imagery; and (4) image registration for computer vision research.