
    Applications of Polyhedral Computations to the Analysis and Verification of Hardware and Software Systems

    Convex polyhedra are the basis for several abstractions used in static analysis and computer-aided verification of complex and sometimes mission-critical systems. For such applications, the identification of an appropriate complexity-precision trade-off is a particularly acute problem, so that the availability of a wide spectrum of alternative solutions is mandatory. We survey the range of applications of polyhedral computations in this area; give an overview of the different classes of polyhedra that may be adopted; outline the main polyhedral operations required by automatic analyzers and verifiers; and look at some possible combinations of polyhedra with other numerical abstractions that have the potential to improve the precision of the analysis. Areas where further theoretical investigations can result in important contributions are highlighted. (51 pages, 11 figures.)
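    As a rough illustration of the kind of operations the survey refers to, the sketch below implements the interval (box) domain, the simplest non-relational stand-in for the polyhedral family, with the join and widening operators that analyzers apply at control-flow merge points and loop heads. The analysed loop, the widening strategy, and all names are illustrative assumptions, not taken from the survey.

        # Minimal interval (box) abstract domain sketch -- a simplified,
        # non-relational stand-in for the polyhedral abstractions above.
        from dataclasses import dataclass

        NEG_INF, POS_INF = float("-inf"), float("inf")

        @dataclass(frozen=True)
        class Interval:
            lo: float
            hi: float

            def join(self, other):
                """Least upper bound: smallest interval containing both operands."""
                return Interval(min(self.lo, other.lo), max(self.hi, other.hi))

            def widen(self, other):
                """Widening: push unstable bounds to infinity to force convergence."""
                lo = self.lo if other.lo >= self.lo else NEG_INF
                hi = self.hi if other.hi <= self.hi else POS_INF
                return Interval(lo, hi)

        def analyse_counter_loop():
            """Abstractly execute `i = 0; while i < 100: i += 1` over intervals."""
            i = Interval(0, 0)
            while True:
                # Effect of one iteration under the guard i < 100 (illustrative).
                body = Interval(i.lo + 1, min(i.hi, 99) + 1)
                new = i.widen(i.join(body))
                if new == i:
                    return i            # fixpoint reached
                i = new

        # Over-approximates the reachable values of i; a narrowing pass
        # (not shown) would refine the upper bound back to 100.
        print(analyse_counter_loop())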

    Processing of Erroneous and Unsafe Data

    Statistical offices have to overcome many problems before they can publish reliable data. Two of these problems are examined in this thesis. The first problem is the occurrence of errors in the collected data. Due to these errors, publication figures cannot be based directly on the collected data. Before publication, the errors in the data have to be localised and corrected. In this thesis we focus on the localisation of errors in a mix of categorical and numerical data. The problem is formulated as a mathematical optimisation problem. Several new algorithms for solving this problem are proposed, and computational results of the most promising algorithms are compared to each other. The second problem that is examined in this thesis is the occurrence of unsafe data, i.e. data that would reveal too much sensitive information about individual respondents. Before publication of data, such unsafe data need to be protected. In the thesis we examine various aspects of the protection of unsafe data.
    Statistical offices must overcome numerous problems before they can publish the results of their surveys. This thesis addresses two of these problems. The first problem is that collected data may be erroneous. Because errors may be present, the data must first be checked and, where necessary, corrected before results are published. The thesis focuses in particular on locating the erroneous values. By assuming that as few errors as possible have been made, the localisation of erroneous values can be formulated as a mathematical optimisation problem. The thesis develops a number of methods to solve this complex problem efficiently. The second problem examined in the thesis is that no data may be published that would harm the privacy of individual respondents or small groups of respondents. To protect the data of individual respondents or small groups, protective measures must be taken, such as withholding certain information from publication. The thesis discusses the mathematical problems that protecting sensitive data entails. Solutions are described for a number of problems, such as computing the information loss caused by protecting sensitive data and minimising the amount of information withheld from publication.
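    As a hedged sketch of the error-localisation problem described above (find the smallest set of fields that must be changed so that the record satisfies all edit rules, in the spirit of the minimum-change criterion often associated with the Fellegi-Holt paradigm), the brute-force Python fragment below illustrates the optimisation formulation on an invented categorical record; the thesis itself develops far more efficient algorithms.

        from itertools import combinations, product

        # A record with categorical fields and explicit domains
        # (illustrative data, not taken from the thesis).
        record  = {"age_class": "child", "marital": "married", "employed": "yes"}
        domains = {"age_class": ["child", "adult"],
                   "marital":   ["single", "married"],
                   "employed":  ["yes", "no"]}

        # Edit rules: each returns True when a record is consistent.
        edits = [
            lambda r: not (r["age_class"] == "child" and r["marital"] == "married"),
            lambda r: not (r["age_class"] == "child" and r["employed"] == "yes"),
        ]

        def localise_errors(record, domains, edits):
            """Return a smallest set of fields whose values can be replaced
            so that the modified record satisfies every edit rule."""
            fields = list(record)
            for k in range(len(fields) + 1):          # try 0, 1, 2, ... changes
                for subset in combinations(fields, k):
                    for values in product(*(domains[f] for f in subset)):
                        candidate = dict(record, **dict(zip(subset, values)))
                        if all(edit(candidate) for edit in edits):
                            return set(subset)        # minimal by construction
            return set(fields)

        print(localise_errors(record, domains, edits))   # e.g. {'age_class'}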

    Combined optimization algorithms applied to pattern classification

    Accurate classification by minimizing the error on test samples is the main goal in pattern classification. Combinatorial optimization is a well-known method for solving minimization problems; however, only a few examples of classifiers are described in the literature where combinatorial optimization is used in pattern classification. Recently, there has been a growing interest in combining classifiers and improving the consensus of results for greater accuracy. In the light of the "No Free Lunch" theorems, we analyse the combination of simulated annealing, a powerful combinatorial optimization method that produces high quality results, with the classical perceptron algorithm. This combination is called the LSA machine. Our analysis aims at finding paradigms for problem-dependent parameter settings that ensure high classification results. Our computational experiments on a large number of benchmark problems lead to results that either outperform or are at least competitive with results published in the literature. Apart from parameter settings, our analysis focuses on a difficult problem in computation theory, namely the network complexity problem. The depth vs size problem of neural networks is one of the hardest problems in theoretical computing, with very little progress over the past decades. In order to investigate this problem, we introduce a new recursive learning method for training hidden layers in constant-depth circuits. Our findings make contributions to a) the field of Machine Learning, as the proposed method is applicable in training feedforward neural networks, and to b) the field of circuit complexity, by proposing an upper bound for the number of hidden units sufficient to achieve a high classification rate. One of the major findings of our research is that the size of the network can be bounded in terms of the input size of the problem, with approximately 8 + √(2n)/n threshold gates being sufficient for a small error rate, where n := log|SL| and SL is the training set.
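    The minimal Python sketch below conveys the general idea behind combining simulated annealing with a perceptron-style classifier: annealing searches the space of linear-threshold weights while directly minimising the number of misclassified training samples. The toy data, perturbation scheme, and cooling schedule are illustrative assumptions, not the LSA machine's actual parameter settings.

        import math
        import random

        def misclassified(weights, bias, data):
            """Number of samples on the wrong side of the separating hyperplane."""
            errors = 0
            for x, label in data:                      # label in {-1, +1}
                score = sum(w * xi for w, xi in zip(weights, x)) + bias
                if label * score <= 0:
                    errors += 1
            return errors

        def anneal_perceptron(data, dim, steps=20_000, t0=2.0, cooling=0.9995):
            """Simulated annealing over linear-threshold weights: propose a small
            random perturbation and accept it with the usual Metropolis rule."""
            w = [random.uniform(-1, 1) for _ in range(dim)]
            b = 0.0
            best_w, best_b = w[:], b
            best_err = err = misclassified(w, b, data)
            t = t0
            for _ in range(steps):
                i = random.randrange(dim + 1)
                delta = random.gauss(0.0, 0.1)
                cand_w, cand_b = w[:], b
                if i < dim:
                    cand_w[i] += delta
                else:
                    cand_b += delta
                cand_err = misclassified(cand_w, cand_b, data)
                if cand_err <= err or random.random() < math.exp((err - cand_err) / t):
                    w, b, err = cand_w, cand_b, cand_err
                    if err < best_err:
                        best_w, best_b, best_err = w[:], b, err
                t *= cooling
            return best_w, best_b, best_err

        # Toy linearly separable data: label is the sign of x0 + x1 - 1.
        data = [((x0, x1), 1 if x0 + x1 > 1 else -1)
                for x0 in (0.0, 0.25, 0.5, 0.75, 1.0)
                for x1 in (0.0, 0.25, 0.5, 0.75, 1.0)]
        print(anneal_perceptron(data, dim=2))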

    5th EUROMECH nonlinear dynamics conference, August 7-12, 2005 Eindhoven : book of abstracts

    Automatic design of the gravity-reducing propulsion system of the TALARIS Hopper Testbed

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, September 2012. Cataloged from PDF version of thesis. Includes bibliographical references (p. 87-93). This thesis describes a Systems Engineering tool for automatic design, presents the results of its application to the problem of designing Earth-based reduced-gravity simulators, and compares the performance of the optimal design solutions found with that of the MIT TALARIS Hopper Testbed. Earth-based reduced-gravity simulators are platforms that allow hosted vehicles to experience a dynamic environment that, from a guidance, navigation, and control perspective, is analogous to that of other planetary surfaces. Simulators are used for system development and operator training purposes. Specifically, reduced-gravity simulators produce a constant vertical thrust equal to a fraction of the weight of the studied vehicle, thus yielding a perceived gravity equal to the gravity of the celestial body of interest. Planetary hoppers explore planetary surfaces through hopping, i.e. low-altitude, short-duration flying. Recently, these systems have gained popularity as cost-effective means for planetary exploration due to their greater operational flexibility compared to other exploration systems. The tool developed as part of this thesis eases the compilation and use of parts catalogs in the design task, includes real-time visualization of the search process, supports the output of multiple solutions that optimize conflicting goals, efficiently calculates Pareto frontiers of solutions, and can integrate external solvers and simulators seamlessly. Chapter 1 overviews the engineering challenges associated with Earth-based reduced-gravity simulators applied to planetary hoppers. Chapter 2 provides the context knowledge required for the development of the individual tasks in this thesis. Chapter 3 discusses the engineering literature covering relevant previous work. Chapter 4 describes the selected approach and the tool that has been developed for the design of the propulsion system of the simulator. Chapter 5 discusses the applicability of the approach and design tool to the case of the MIT TALARIS Hopper Testbed. Chapter 6 summarizes the results and outlines avenues for future research in this field. By Jorge Cañizales Díaz. S.M.
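    The gravity-offset principle mentioned above (a constant vertical thrust equal to a fraction of the vehicle's weight, so that the perceived gravity matches that of the target body) reduces to T = m(g_local - g_target). Below is a small sketch of that calculation; the vehicle mass and the lunar target are illustrative assumptions, not figures taken from the thesis.

        G_EARTH = 9.81   # m/s^2
        G_MOON  = 1.62   # m/s^2 (illustrative target body)

        def gravity_offset_thrust(vehicle_mass_kg, g_target=G_MOON, g_local=G_EARTH):
            """Constant vertical thrust that cancels part of Earth's weight so the
            vehicle 'feels' the target body's gravity: T = m * (g_local - g_target)."""
            return vehicle_mass_kg * (g_local - g_target)

        mass = 50.0                                   # kg, hypothetical testbed mass
        thrust = gravity_offset_thrust(mass)
        print(f"thrust = {thrust:.1f} N "
              f"({thrust / (mass * G_EARTH):.0%} of Earth weight)")
        # With these numbers: ~409.5 N, about 83% of the vehicle's Earth weight,
        # leaving a net downward acceleration of ~1.62 m/s^2 (lunar gravity).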

    Nonlinear Constitutive Relations for High Temperature Application, 1984

    Nonlinear constitutive relations for high temperature applications were discussed. The state of the art in nonlinear constitutive modeling of high temperature materials was reviewed, and the need for future research and development efforts in this area was identified. Recent advances in high temperature materials technology and new demands on material and component performance make considerable research efforts in the development of nonlinear constitutive relations for high temperature applications urgently needed. Topics discussed include: constitutive modeling, numerical methods, material testing, and structural applications.

    PSA 2018

    These preprints were automatically compiled into a PDF from the collection of papers deposited in PhilSci-Archive in conjunction with PSA 2018.