
    MaxMinMax problem and sparse equations over finite fields

    Asymptotic complexity of sparse equation systems over a finite field $F_q$ is studied. Let the variable sets belong to a fixed family $\mathcal{X}=\{X_1,\ldots,X_m\}$ while the polynomials $f_i(X_i)$ are taken independently and uniformly at random from the set of all polynomials of degree $\leq q-1$ in each of the variables in $X_i$. In particular, for $|X_i|\le 3$ and $m=n$, we prove that the average complexity of finding all solutions to $f_i(X_i)=0$, $i=1,\ldots,m$, by the Gluing algorithm (Semaev, Des. Codes Cryptogr., vol. 49 (2008), pp. 47--60) is at most $q^{\frac{n}{5.7883}+O(\log n)}$ for arbitrary $\mathcal{X}$ and $q$. The proof results from a detailed analysis of the 3-MaxMinMax problem, a novel problem for hypergraphs.
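
    The Gluing algorithm referenced above combines the solution lists of individual equations by merging them over their shared variables. The following minimal Python sketch illustrates that merge step under simplifying assumptions: solutions are enumerated by brute force and stored as dictionaries, which is illustrative only and not Semaev's implementation.

        from itertools import product

        def solutions(f, variables, q):
            """Brute-force all assignments over F_q satisfying f = 0 (illustrative)."""
            return [dict(zip(variables, vals))
                    for vals in product(range(q), repeat=len(variables))
                    if f(*vals) % q == 0]

        def glue(sols_a, sols_b):
            """Merge two solution lists on their shared variables (the 'gluing' step)."""
            merged = []
            for a in sols_a:
                for b in sols_b:
                    if all(a[v] == b[v] for v in a.keys() & b.keys()):
                        merged.append({**a, **b})
            return merged

        # Example over F_3: f1(x, y) = x + y, f2(y, z) = y*z - 1.
        q = 3
        s1 = solutions(lambda x, y: x + y, ("x", "y"), q)
        s2 = solutions(lambda y, z: y * z - 1, ("y", "z"), q)
        print(glue(s1, s2))  # all (x, y, z) satisfying both equations

    In practice the lists are sorted or hashed on the common variables so the merge runs in near-linear time in the list sizes; the complexity analysis hinges on how large these intermediate lists can grow, which is where the MaxMinMax problem enters.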

    A Combinatorial Problem Related to Sparse Systems of Equations

    Nowadays, sparse systems of equations occur frequently in science and engineering. In this contribution we deal with sparse systems common in cryptanalysis. Given a cipher system, one converts it into a system of sparse equations, which is then solved to retrieve either a key or a plaintext. Raddum and Semaev proposed new methods for solving such sparse systems. It turns out that a combinatorial MaxMinMax problem provides bounds on the average computational complexity of sparse systems. In this paper we initiate a study of a linear-algebra variation of this MaxMinMax problem.

    Assessment of Linear Inverse Problems in Magnetocardiography and Lorentz Force Eddy Current Testing

    Linear inverse problems arise throughout a variety of branches of science and engineering. Efficient solution strategies for these inverse problems need to know whether a problem is ill-conditioned as well as its degree of ill-conditioning. In this thesis, a comprehensive theoretical analysis of known figures of merit is carried out, from which two new figures of merit are derived. Both can be applied to a large variety of linear inverse problems, including biomedical applications and nondestructive testing of materials. The theoretical considerations on the conditioning of linear inverse problems are applied to two examples. The first is magnetocardiography, where the optimization of magnetic sensors in a vest-like sensor array is considered. When measuring magnetic flux density, mono-axial magnetic sensors are usually arranged in an array, perfectly in parallel. It is shown that a random variation of their orientations can improve the condition of the corresponding linear inverse problem. The thesis therefore presents a theoretical characterization of the case in which random variations of mono-axial sensor orientations improve the condition of the kernel matrix with probability one; this observation is valid in general. Positions and orientations of the magnetic sensors around the torso are optimized by minimizing three figures of merit from the literature and a novel one proposed in the thesis. The best results are obtained for a non-uniform sensor distribution over the whole torso surface. Compared with previous findings, it can be concluded that quite different sensor sets can perform equally well. The second application example is a nondestructive testing method known as Lorentz force eddy current testing, in which the Lorentz force exerted on a permanent magnet moving relative to the specimen is measured. A novel approximation method for the calculation of the magnetic fields and Lorentz forces is proposed. Based on this approximation method, a new inverse procedure for defect reconstruction is developed; successful reconstructions are obtained using both finite element analysis data and measurements.
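
    The effect of random sensor orientations on conditioning can be illustrated numerically. The sketch below uses a randomly generated stand-in for the vector lead field rather than a real magnetocardiographic kernel, so the matrix sizes, tilt magnitude and perturbation model are assumptions for illustration only; whether a given perturbation actually helps depends on the lead field, which is precisely what the probability-one result above characterizes.

        import numpy as np

        rng = np.random.default_rng(0)
        n_sensors, n_sources = 64, 32

        # Stand-in "vector lead field": for each sensor, a 3 x n_sources block
        # mapping source amplitudes to the flux density vector at the sensor site.
        lead = rng.standard_normal((n_sensors, 3, n_sources))

        def kernel(axes):
            """Mono-axial kernel matrix: lead field projected on each sensor axis."""
            axes = axes / np.linalg.norm(axes, axis=1, keepdims=True)
            return np.einsum("si,sij->sj", axes, lead)

        parallel = np.tile([0.0, 0.0, 1.0], (n_sensors, 1))            # perfectly parallel
        tilted = parallel + 0.2 * rng.standard_normal(parallel.shape)  # random tilt

        print("condition number, parallel:", np.linalg.cond(kernel(parallel)))
        print("condition number, tilted:  ", np.linalg.cond(kernel(tilted)))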

    Optimized techniques for real-time microwave and millimeter wave SAR imaging

    Microwave and millimeter wave synthetic aperture radar (SAR)-based imaging techniques, used for nondestructive evaluation (NDE), have shown tremendous usefulness for the inspection of a wide variety of complex composite materials and structures. Studies were performed to optimize uniform and nonuniform sampling (i.e., measurement positions), since existing formulations of SAR resolution and sampling criteria do not account for all of the physical characteristics of a measurement (e.g., a 2D limited-size aperture, the electric field decreasing with distance from the measuring antenna, etc.), and nonuniform sampling criteria support sampling below the Nyquist rate. The results of these studies yield optimum sampling for given design requirements and fully explain the dependence of resolution on the sampling criteria. This work was then extended to manually selected and nonuniformly distributed samples, so that the intelligence of the user may be utilized by observing SAR images being updated in real time. Furthermore, a novel reconstruction method was devised that uses components of the SAR algorithm to exploit the inherent spatial information contained in the data, resulting in a superior final SAR image. Better SAR images can also be obtained if multiple frequencies are utilized rather than a single frequency. To this end, the design of an existing microwave imaging array was modified to support multiple-frequency measurement. Lastly, the data of interest in such an array may be corrupted by coupling among the closely spaced elements, resulting in images with an increased level of artifacts. A method for correcting or pre-processing the data by using an adaptation of the correlation canceling technique is presented as well.
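
    For orientation, the classical uniform-sampling rules that such optimized criteria improve upon can be written down in a few lines. The sketch below uses the commonly cited lambda/4 spacing bound for monostatic near-field SAR and the textbook far-field cross-range resolution estimate; the specific frequency and geometry are illustrative assumptions, and the point of the work above is exactly that such formulas ignore the limited 2D aperture and the field decay with distance.

        C = 3e8  # speed of light, m/s

        def wavelength(f_hz):
            return C / f_hz

        def max_sample_spacing(f_hz):
            """Classical monostatic near-field SAR sampling bound (~lambda/4)."""
            return wavelength(f_hz) / 4

        def cross_range_resolution(f_hz, standoff_m, aperture_m):
            """Textbook far-field estimate: lambda * R / (2 * L)."""
            return wavelength(f_hz) * standoff_m / (2 * aperture_m)

        f = 24e9  # an illustrative millimeter-wave NDE frequency (K-band)
        print(f"wavelength:       {wavelength(f) * 1e3:.2f} mm")
        print(f"sample spacing <= {max_sample_spacing(f) * 1e3:.2f} mm")
        print(f"resolution ~      {cross_range_resolution(f, 0.10, 0.15) * 1e3:.2f} mm")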

    A knowledge-based engineering tool for aiding in the conceptual design of composite yachts

    Proposed in this thesis is a methodology to enable yacht designers to develop innovative structural concepts even when the loads experienced by the yacht are highly uncertain; it has been implemented in sufficient detail to confirm the feasibility of this new approach. The new approach is required because today's yachts are generally lighter, larger and faster. The question arises as to how far the design envelope can be pushed given the highly uncertain loads experienced by the structure. What are the effects of this uncertainty, and what trade-offs in the structural design will best meet the overall design objectives? The new approach provides yacht designers with a means of developing innovative structural solutions that accommodate high levels of uncertainty but still focus on best meeting design objectives, constrained by trade-offs in weight, safety and cost. The designer's preferences have a large, and not always intuitive, influence on the necessary design trade-offs. This in turn invites research into ways to formally integrate decision algorithms into knowledge-based design systems. A lean and robust design system has been achieved by developing a set of tools blanketed by a fuzzy decision algorithm. The underlying tool set includes costing, material optimisation and safety analysis. Central to this is the innovative way in which the system allows non-discrete variables to be utilized, along with new subjective measures of structural reliability based on load path algorithms and topological (shape) optimisation. The originality of this work lies in the development of a knowledge-based framework and methodology that uses a fuzzy decision-making tool to navigate a design space and address trade-offs between high-level objectives when faced with limited design detail and uncertainty. In so doing, this work introduces topological optimisation and load path theory to the structural design of yachts as a means of overcoming the historical focus of knowledge-based systems and ensuring that innovative solutions can still evolve. A sensitivity analysis is also presented which can quantify a design's robustness in a system that takes a global approach to the measurement of objectives such as cost, weight and safety. Results from the application of this system show new and innovative structural solutions evolving that take into account the designer's preferences regarding cost, weight and safety while accommodating uncertain parameters such as the loading experienced by the hull.
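
    One common way to realize such a fuzzy decision algorithm is Bellman-Zadeh aggregation: each objective is mapped to a preference membership in [0, 1], and a candidate design is scored by its worst objective. The Python sketch below is a minimal illustration under assumed membership shapes, objectives and candidate designs; it is not the thesis's actual rule base.

        def membership(value, best, worst):
            """Linear preference: 1 at 'best', falling to 0 at 'worst' (illustrative)."""
            if best == worst:
                return 1.0
            mu = (worst - value) / (worst - best)
            return max(0.0, min(1.0, mu))

        def fuzzy_score(design, prefs):
            """Bellman-Zadeh aggregation: a design is only as good as its worst objective."""
            return min(membership(design[k], *prefs[k]) for k in prefs)

        # Designer preferences as (best, worst) per objective (hypothetical numbers).
        prefs = {"cost": (100e3, 300e3), "weight": (800, 1500), "safety_margin": (2.0, 1.0)}

        candidates = {
            "sandwich hull": {"cost": 220e3, "weight": 950, "safety_margin": 1.6},
            "single skin":   {"cost": 150e3, "weight": 1300, "safety_margin": 1.8},
        }
        scores = {name: round(fuzzy_score(d, prefs), 2) for name, d in candidates.items()}
        print(scores, "->", max(scores, key=scores.get))

    The min-aggregation makes the designer's preference bounds, rather than a fixed weighting, drive the trade-off, which matches the observation above that preferences influence the outcome in non-intuitive ways.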

    Simultaneous sizing, layout and topology optimization for buckling and postbuckling of stiffened panels

    This thesis aims to develop a computational scheme for simultaneous sizing, layout and topology optimization for buckling and postbuckling of stiffened panels. Many efforts have been made to improve the buckling and postbuckling behaviours of stiffened panels using structural optimization techniques, focusing on sizing and layout optimization. Stiffener internal topologies have, however, received little attention in optimization. This reduces the design space that can be searched and consequently limits the potential improvement in structural performance. In this thesis, a level-set-based topology optimization parameterization is developed, enabling the simultaneous optimization of the thicknesses of the skin and stiffeners together with the stiffener layout and internal topologies. Simultaneous sizing, layout and topology optimization for buckling of panels stiffened with straight stiffeners is investigated for the first time. Numerical investigations demonstrate the effectiveness of the proposed method, as well as the benefit of simultaneously conducting sizing, layout and topology optimization in the design of stiffened panels. Since stiffness is commonly considered in the topology optimization field, the difference between buckling-driven and stiffness-driven designs is investigated and discussed. Besides buckling, stress is another critical failure criterion for stiffened panels, so the proposed method is extended to stiffened panel design under both stress and buckling constraints. Simultaneous sizing, layout and topology optimization for postbuckling of panels with straight stiffeners is likewise investigated for the first time, with the proposed method extended to postbuckling optimization. The out-of-plane skin deformation and the load-carrying capability are considered to assess the postbuckling behaviours of stiffened panels. Compared with buckling optimization, postbuckling optimization can provide designs with more promising postbuckling behaviours of interest. The design of panels with curved stiffeners is also investigated: the level-set-based method is extended to simultaneously optimize both the stiffener curves and the internal topologies. Numerical investigations demonstrate and validate the proposed method for simultaneous layout and topology optimization of curved stiffened panels. Compared with panels with straight stiffeners, curved stiffened panels have the potential to yield lighter-weight designs.
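
    In a level-set parameterization of this kind, material occupies the region where an implicit function is positive, so internal topology can change simply by moving, merging or removing the zero contours, without remeshing. The toy sketch below illustrates the representation only; the circular holes and the grid are assumed examples, not the parameterization developed in the thesis.

        import numpy as np

        # Grid over a unit stiffener web cross-section (illustrative).
        x, y = np.meshgrid(np.linspace(0, 1, 101), np.linspace(0, 1, 101))

        # Level-set function: material where phi > 0; holes (phi < 0) define the
        # internal topology and can move or merge as the function evolves.
        holes = [(0.3, 0.5, 0.15), (0.7, 0.5, 0.12)]
        phi = np.full_like(x, np.inf)
        for cx, cy, r in holes:
            phi = np.minimum(phi, np.hypot(x - cx, y - cy) - r)

        material = phi > 0           # element-wise material indicator
        print(f"volume fraction: {material.mean():.3f}")  # a typical constraint quantity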

    A survey of the application of soft computing to investment and financial trading


    Application of PSO for optimization of power systems under uncertainty

    The primary objective of this dissertation is to develop a black-box optimization tool. The algorithm should be able to solve complex nonlinear, multimodal, discontinuous and mixed-integer power system optimization problems without any model reduction. Although there are many computational intelligence (CI) based algorithms which can handle these problems, they require intense human intervention in the form of parameter tuning, selection of a suitable algorithm for a given problem, etc. The idea here is to develop an algorithm that works relatively well on a variety of problems with minimum human effort. An adaptive particle swarm optimization (PSO) algorithm is presented in this thesis. The algorithm has special features such as an adaptive swarm size, parameter-free update strategies, progressive neighbourhood topologies and a self-learning, parameter-free penalty approach. The most significant optimization task in power system operation is the scheduling of the various generation resources (unit commitment, UC). The current practice in UC modelling is the binary approach, which results in a high-dimensional problem; this in turn leads to increased computational effort and decreased efficiency of the algorithm. A duty-cycle-based modelling proposed in this thesis results in an 80 percent reduction in the problem dimension. The strict minimum up-time and down-time requirements are also included in the modelling, so the search process mostly starts in a feasible solution space. Investigations on a benchmark problem show that the new modelling yields high-quality solutions along with improved convergence. The final focus of this thesis is to investigate the impact of the unpredictable nature of demand and renewable generation on power system operation. These quantities should be treated as stochastic processes evolving over time. A new PSO-based uncertainty modelling technique is used to remove the restrictions imposed by conventional modelling algorithms. The stochastic models are able to incorporate information regarding the uncertainties and generate day-ahead UC schedules that are optimal not just for the forecasted scenario of demand and renewable in-feed but for all possible sets of scenarios. These models will assist the operator in planning the operation of the power system while considering the stochastic nature of the uncertainties. The power system can therefore optimally handle a large penetration of renewable generation, providing economic operation while maintaining the same reliability as before the introduction of uncertainty.
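
    To make the baseline concrete, a minimal global-best PSO loop is sketched below on a standard multimodal test function. The fixed inertia and acceleration constants are textbook defaults, which are exactly the kind of hand-tuned parameters that the dissertation's adaptive, parameter-free update strategies are designed to eliminate.

        import numpy as np

        def pso(f, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0), seed=0):
            """Basic global-best PSO (illustrative baseline, not the adaptive variant)."""
            rng = np.random.default_rng(seed)
            lo, hi = bounds
            x = rng.uniform(lo, hi, (n_particles, dim))   # positions
            v = np.zeros_like(x)                          # velocities
            pbest = x.copy()                              # per-particle best positions
            pbest_f = np.apply_along_axis(f, 1, x)
            gbest = pbest[pbest_f.argmin()].copy()        # swarm-wide best position
            w, c1, c2 = 0.7, 1.5, 1.5                     # inertia, cognitive, social
            for _ in range(iters):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = np.clip(x + v, lo, hi)
                fx = np.apply_along_axis(f, 1, x)
                improved = fx < pbest_f
                pbest[improved], pbest_f[improved] = x[improved], fx[improved]
                gbest = pbest[pbest_f.argmin()].copy()
            return gbest, pbest_f.min()

        # Smoke test on the multimodal Rastrigin function.
        rastrigin = lambda z: 10 * len(z) + sum(z**2 - 10 * np.cos(2 * np.pi * z))
        print(pso(rastrigin, dim=2))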