3,239 research outputs found

    Distributed Utility Estimation with Heterogeneous Relative Information

    In this letter, we consider a scenario where a set of agents, interconnected by a network topology, aim at computing an estimate of their own utility, importance or value, based on pairwise relative information of a heterogeneous nature. In more detail, the agents are able to measure the difference between their value and the value of some of their neighbors, or have an estimate of the ratio between their value and the value of the remaining neighbors. This setting may find application in problems involving information provided by heterogeneous sensors (e.g., differences and ratios), as well as in scenarios where estimations provided by humans have to be merged with sensor measurements. Specifically, we develop a distributed algorithm that lets each agent asymptotically compute a utility value. To this end, we first characterize the task at hand as a least-squares minimization problem, providing a necessary and sufficient condition for the existence of a unique global minimum, and then we show that the proposed algorithm asymptotically converges to a global minimum. This letter is concluded by numerical analyses that corroborate the theoretical findings.
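    The estimation task can be illustrated with a small, centralized least-squares sketch. The network, the split of neighbors into difference and ratio measurements, and the noiseless measurement values below are all hypothetical; the letter's contribution is a distributed algorithm for this problem together with a uniqueness condition, and the sketch only shows the kind of mixed linear system the agents are jointly minimizing.

```python
import numpy as np

# Hypothetical example: 4 agents with true utilities x*.
# Difference edges give  x_i - x_j ~= d_ij; ratio edges give x_i / x_j ~= r_ij,
# rewritten here as the linear residual x_i - r_ij * x_j ~= 0.
x_true = np.array([1.0, 2.0, 3.0, 5.0])
diff_edges = [(0, 1), (1, 2)]     # pairs measured as differences
ratio_edges = [(2, 3), (3, 0)]    # pairs measured as ratios

rows, rhs = [], []
for i, j in diff_edges:
    row = np.zeros(4); row[i], row[j] = 1.0, -1.0
    rows.append(row); rhs.append(x_true[i] - x_true[j])   # noiseless d_ij
for i, j in ratio_edges:
    r_ij = x_true[i] / x_true[j]                          # noiseless r_ij
    row = np.zeros(4); row[i], row[j] = 1.0, -r_ij
    rows.append(row); rhs.append(0.0)

A, b = np.vstack(rows), np.array(rhs)
# Centralized least-squares estimate; here the minimizer is unique and
# recovers x_true exactly because the data are noiseless.
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x_hat)   # ~= [1. 2. 3. 5.]
```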

    Albany: Using Component-based Design to Develop a Flexible, Generic Multiphysics Analysis Code

    Abstract: Albany is a multiphysics code constructed by assembling a set of reusable, general components. It is an implicit, unstructured grid finite element code that hosts a set of advanced features that are readily combined within a single analysis run. Albany uses template-based generic programming methods to provide extensibility and flexibility; it employs a generic residual evaluation interface to support the easy addition and modification of physics. This interface is coupled to powerful automatic differentiation utilities that are used to implement efficient nonlinear solvers and preconditioners, and also to enable sensitivity analysis and embedded uncertainty quantification capabilities as part of the forward solve. Albany's flexible application programming interfaces couple to two different adaptive mesh libraries, and it internally employs generic integration machinery that supports tetrahedral, hexahedral, and hybrid meshes of user-specified order. We present the overall design of Albany and focus on the specifics of the integration of many of its advanced features. As Albany and the components that form it are openly available on the internet, it is our goal that the reader might find some of the design concepts useful in their own work. Albany results in a code that enables the rapid development of parallel, numerically efficient multiphysics software tools. In discussing the features and details of the integration of many of the components involved, we show the reader the wide variety of solution components that are available and what is possible when they are combined within a simulation capability. Key Words: partial differential equations, finite element analysis, template-based generic programming.
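    Albany realizes this design in C++ with template-based generic programming; the following Python sketch only illustrates the design concept of a generic residual evaluation interface driving a Newton solver, with a finite-difference Jacobian standing in for automatic differentiation (the physics, tolerances, and helper names are hypothetical).

```python
import numpy as np

def newton_solve(residual, x0, tol=1e-10, max_iter=50, eps=1e-7):
    """Newton's method driven only by a generic residual callable.

    The Jacobian is approximated by finite differences here; in a code
    like Albany the derivative information would instead come from
    automatic differentiation applied to the same residual evaluation.
    """
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            break
        J = np.empty((r.size, x.size))
        for j in range(x.size):               # column-by-column FD Jacobian
            xp = x.copy(); xp[j] += eps
            J[:, j] = (residual(xp) - r) / eps
        x -= np.linalg.solve(J, r)
    return x

# Any "physics" only needs to supply its residual, e.g. a toy nonlinear
# system whose solution is (1, 2).
f = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
print(newton_solve(f, [1.0, 1.0]))
```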

    Multidisciplinary Design Optimization for Space Applications

    Multidisciplinary Design Optimization (MDO) has been increasingly studied in aerospace engineering with the main purpose of reducing monetary and schedule costs. The traditional design approach of optimizing each discipline separately and manually iterating to achieve good solutions is replaced by exploiting the interactions between the disciplines and concurrently optimizing every subsystem. The target of the research was the development of a flexible software suite capable of concurrently optimizing the design of a rocket propellant launch vehicle for multiple objectives. The possibility of combining the advantages of global and local searches has been exploited both in the MDO architecture and in the selected and self-developed optimization methodologies, which have been compared according to computational efficiency and performance criteria. Results have been critically analyzed to identify the most suitable optimization approach for the targeted MDO problem.
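    The thesis's software suite and launch-vehicle models are not reproduced here, but the general idea of coupling a global search with a local refinement can be sketched with standard tools; the objective function, bounds, and solver choices below are illustrative placeholders for an actual MDO cost function.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

# Toy multimodal objective (Rastrigin) standing in for an MDO cost function.
def cost(x):
    x = np.asarray(x)
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

bounds = [(-5.12, 5.12)] * 4

# Global search locates a promising basin of attraction ...
global_res = differential_evolution(cost, bounds, seed=1)
# ... and a local gradient-based refinement polishes the best point found.
local_res = minimize(cost, global_res.x, method="L-BFGS-B", bounds=bounds)

print(global_res.x, local_res.x, local_res.fun)
```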

    Incomplete analytic hierarchy process with minimum weighted ordinal violations

    Incomplete pairwise comparison matrices offer a natural way of expressing preferences in decision making processes. Although ordinal information is crucial, there is a bias in the literature: cardinal models dominate. Ordinal models usually yield non-unique solutions; therefore, an approach blending ordinal and cardinal information is needed. In this work, we consider two cascading problems: first, we compute ordinal preferences, maximizing an index that combines ordinal and cardinal information; then, we obtain a cardinal ranking by enforcing ordinal constraints. Notably, we provide a sufficient condition (that is likely to be satisfied in practical cases) for the first problem to admit a unique solution, and we develop a provably polynomial-time algorithm to compute it. The effectiveness of the proposed method is analyzed and compared against other state-of-the-art approaches and criteria. Comment: preprint submitted to the International Journal of General Systems.
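    The paper's two-stage ordinal/cardinal method is not reproduced here; as a point of reference, a common purely cardinal baseline for incomplete matrices is the logarithmic least-squares estimate computed over the known comparisons only, sketched below on a hypothetical 4x4 incomplete matrix (uniqueness requires the comparison graph to be connected).

```python
import numpy as np

# Incomplete pairwise comparison matrix over 4 alternatives: a[i][j] is the
# judged ratio w_i / w_j; None marks a missing comparison (hypothetical data).
a = [[1,    2,    None, 4   ],
     [0.5,  1,    3,    None],
     [None, 1/3,  1,    2   ],
     [0.25, None, 0.5,  1   ]]

n = len(a)
rows, rhs = [], []
for i in range(n):
    for j in range(i + 1, n):
        if a[i][j] is not None:
            row = np.zeros(n)
            row[i], row[j] = 1.0, -1.0
            rows.append(row)
            rhs.append(np.log(a[i][j]))   # log w_i - log w_j ~= log a_ij
# Fix the scale (weights are defined up to a common factor): sum of logs = 0.
rows.append(np.ones(n)); rhs.append(0.0)

log_w, *_ = np.linalg.lstsq(np.vstack(rows), np.array(rhs), rcond=None)
w = np.exp(log_w)
print(w / w.sum())                        # normalised priority vector
```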

    ASSESSMENT OF FLOOD HAZARD SUSCEPTIBILITY IN SOUTH SUDAN’S UPPER NILE STATE USING GIS-BASED MULTICRITERIA ANALYSIS

    Floods are among the most ruinous of all natural hazards. Their adverse effects include damage to physical, social, and economic structures, and disruption of livelihoods. At the same time, owing to climate variations and extreme weather events induced by climate change, the frequency of flood occurrence has increased around the globe, heightening the need to understand the spatial and temporal dimensions of flood phenomena. The current study examines the spatial dimension of flood hazard in the Upper Nile state, South Sudan, a region acknowledged to be highly vulnerable to inundation, mainly due to its geographical position within a flood plain characterized by notable variability in flood discharge. The objective of this investigation is to map the potential spatial extent of floodwater within the study area under flood scenarios. The index-based flood hazard map was developed using GIS-based multicriteria decision analysis (MCDA) and the analytical hierarchy process (AHP). Eight flood-influencing factors were used in this study, namely: distance to rivers, topographic wetness index, drainage density, land cover (LULC), annual average rainfall, slope, elevation, and soil type. The flood hazard map developed for the study area consists of five flood hazard susceptibility zones: very high, high, moderate, low, and very low. These zones cover 12%, 26%, 29%, 22%, and 9% of the study area, respectively. The flood hazard map was further validated against a satellite-derived historical inundation map and was found satisfactory in depicting the probable spatial extent of inundation. The flood hazard model developed is anticipated to be instrumental in pre-flood preparedness measures, as well as a guide for future detailed investigations of the spatial-temporal dimension of flood incidents in the Upper Nile state.
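    The study's actual factor weights come from expert pairwise judgements over all eight factors; the sketch below only illustrates the two computational steps of the GIS-based AHP/MCDA workflow, deriving priority weights from a hypothetical three-factor pairwise comparison matrix and combining reclassified factor layers by weighted overlay on tiny stand-in rasters.

```python
import numpy as np

# Hypothetical AHP pairwise comparison matrix for three of the eight factors
# (distance to rivers, slope, drainage density).
pcm = np.array([[1.0, 3.0, 5.0],
                [1/3, 1.0, 2.0],
                [1/5, 1/2, 1.0]])

vals, vecs = np.linalg.eig(pcm)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()                                  # AHP priority weights

# Consistency check: CI = (lambda_max - n) / (n - 1), CR = CI / RI (RI = 0.58 for n = 3).
ci = (vals.real[k] - 3) / (3 - 1)
print("weights:", w, "CR:", ci / 0.58)

# Weighted overlay of factor rasters reclassified to a 1-5 hazard scale
# (tiny 2x2 arrays in place of real GIS layers).
rivers   = np.array([[5, 4], [2, 1]])
slope    = np.array([[4, 3], [2, 2]])
drainage = np.array([[5, 2], [3, 1]])
hazard = w[0] * rivers + w[1] * slope + w[2] * drainage
print(hazard)
```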

    Predictive analysis and optimisation of pipelined wavefront applications using reusable analytic models

    Pipelined wavefront computations are a ubiquitous class of high performance parallel algorithms used for the solution of many scientific and engineering applications. In order to aid the design and optimisation of these applications, and to ensure that during procurement platforms are chosen best suited to these codes, there has been considerable research in analysing and evaluating their operational performance. Wavefront codes exhibit complex computation, communication, and synchronisation patterns, and as a result there exist a large variety of such codes and possible optimisations. The problem is compounded by each new generation of high performance computing system, which has often introduced a previously unexplored architectural trait, requiring previous performance models to be rewritten and re-evaluated. In this thesis, we address the performance modelling and optimisation of this class of application as a whole. This differs from previous studies, in which bespoke models are applied to specific applications. The analytic performance models are generalised and reusable, and we demonstrate their application to the predictive analysis and optimisation of pipelined wavefront computations running on modern high performance computing systems. The performance model is based on the LogGP parameterisation and uses a small number of input parameters to specify the particular behaviour of most wavefront codes. The new parameters and model equations capture the key structural and behavioural differences among different wavefront application codes, providing a succinct summary of the operations for each application and insights into alternative wavefront application designs. The models are applied to three industry-strength wavefront codes and are validated on several systems, including a Cray XT3/XT4 and an InfiniBand commodity cluster. Model predictions show high quantitative accuracy (less than 20% error) for all high performance configurations and excellent qualitative accuracy. The thesis presents applications, projections and insights for optimisations using the model, which show the utility of reusable analytic models for performance engineering of high performance computing codes. In particular, we demonstrate the use of the model for: (1) evaluating application configuration and the resulting performance; (2) evaluating hardware platform issues, including platform sizing and configuration; (3) exploring hardware platform design alternatives and system procurement; and (4) considering possible code and algorithmic optimisations.
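    The thesis's validated model equations are not reproduced here; the sketch below is a deliberately simplified, illustrative LogGP-style cost for a pipelined wavefront sweep on a 2-D processor decomposition, with all parameter names and values assumed rather than taken from the thesis.

```python
# Simplified, illustrative LogGP-style model of a pipelined wavefront sweep
# on a Px x Py processor grid. Parameter names and the cost expression are
# assumptions for illustration, not the thesis's validated equations.

def loggp_msg(m, L, o, G):
    """End-to-end cost of one m-byte message under LogGP."""
    return o + (m - 1) * G + L + o

def wavefront_time(Px, Py, steps_per_proc, W, m, L, o, g, G):
    """Pipeline fill/drain plus steady-state wavefront steps.

    W is the compute time of one wavefront step on one processor; each step
    also sends an m-byte message to each of two downstream neighbours.
    """
    step_cost = W + 2 * max(loggp_msg(m, L, o, G), g)
    n_steps = steps_per_proc + (Px + Py - 2)   # extra steps to fill the pipeline
    return n_steps * step_cost

# Example: 16 x 16 processor grid with hypothetical parameters (seconds, bytes).
print(wavefront_time(Px=16, Py=16, steps_per_proc=1000, W=5e-5,
                     m=4096, L=5e-6, o=2e-6, g=4e-6, G=5e-9))
```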

    Algorithms, applications and systems towards interpretable pattern mining from multi-aspect data

    How do humans move around in urban space, and how does their movement differ when the city undergoes terrorist attacks? How do users behave in Massive Open Online Courses (MOOCs), and how do they differ if some of them achieve certificates while others do not? In what areas of the court do elite players, such as Stephen Curry and LeBron James, like to make their shots over the course of the game? How can we uncover the hidden habits that govern our online purchases? Are there unspoken agendas in how different states pass legislation of certain kinds? At the heart of these seemingly unconnected puzzles is the same mystery of multi-aspect mining, i.e., how can we mine and interpret the hidden patterns from a dataset that simultaneously reveals the associations, or changes of the associations, among various aspects of the data (e.g., a shot could be described with three aspects: player, time of the game, and area in the court)? Solving this problem could open gates to a deep understanding of the underlying mechanisms for many real-world phenomena. While much of the research in multi-aspect mining contributes a broad scope of innovations in the mining part, interpretation of patterns from the perspective of users (or domain experts) is often overlooked. Questions like what do they require of patterns, how good are the patterns, or how to read them, have barely been addressed. Without efficient and effective ways of involving users in the process of multi-aspect mining, the results are likely to be difficult for them to comprehend. This dissertation proposes the M^3 framework, which consists of multiplex pattern discovery, multifaceted pattern evaluation, and multipurpose pattern presentation, to tackle the challenges of multi-aspect pattern discovery. Based on this framework, we develop algorithms, applications, and analytic systems to enable interpretable pattern discovery from multi-aspect data. Following the concept of meaningful multiplex pattern discovery, we propose PairFac to close the gap between human information needs and naive mining optimization. We demonstrate its effectiveness in the context of impact discovery in the aftermath of urban disasters. We develop iDisc to target the crossing of multiplex pattern discovery with multifaceted pattern evaluation. iDisc meets the specific information need of understanding multi-level, contrastive behavior patterns. As an example, we use iDisc to predict student performance outcomes in Massive Open Online Courses given users' latent behaviors. FacIt is an interactive visual analytic system that sits at the intersection of all three components and enables interpretable, fine-tunable, and scrutinizable pattern discovery from multi-aspect data. We demonstrate each work's significance and implications in its respective problem context. As a whole, this series of studies is an effort to instantiate the M^3 framework and push the field of multi-aspect mining towards a more human-centric process in real-world applications.
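    PairFac, iDisc, and FacIt are not reproduced here; as a generic point of reference, multi-aspect data are often encoded as a tensor and factorized, and the sketch below shows a plain rank-R CP decomposition by alternating least squares on hypothetical (player, game period, court area) shot counts.

```python
import numpy as np

def unfold(T, mode):
    # Mode-m matricisation consistent with NumPy's C-order reshape.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    # Column-wise Kronecker product of two factor matrices.
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def cp_als(T, rank, iters=100, seed=0):
    """Rank-R CP decomposition of a 3-way tensor by alternating least squares."""
    rng = np.random.default_rng(seed)
    A = [rng.random((s, rank)) for s in T.shape]
    for _ in range(iters):
        for m in range(3):
            others = [i for i in range(3) if i != m]
            kr = khatri_rao(A[others[0]], A[others[1]])
            A[m] = unfold(T, m) @ np.linalg.pinv(kr).T
    return A

# Hypothetical multi-aspect data: shot counts indexed by (player, period, area).
rng = np.random.default_rng(1)
shots = rng.poisson(2.0, size=(5, 4, 6)).astype(float)
player_f, period_f, area_f = cp_als(shots, rank=2)
print(player_f.shape, period_f.shape, area_f.shape)   # (5, 2) (4, 2) (6, 2)
```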

    Least squares optimization: From theory to practice

    Nowadays, Nonlinear Least-Squares forms the foundation of many Robotics and Computer Vision systems. The research community has investigated this topic in depth over the last few years, and this has resulted in the development of several open-source solvers able to approach constantly growing classes of problems. In this work, we propose a unified methodology to design and develop efficient Least-Squares Optimization algorithms, focusing on the structures and patterns of each specific domain. Furthermore, we present a novel open-source optimization system that transparently addresses problems with different structures and is designed to be easy to extend. The system is written in modern C++ and runs efficiently on embedded systems. We validated our approach by conducting comparative experiments on several problems using standard datasets. The results show that our system achieves state-of-the-art performance in all tested scenarios.
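    The system described is written in modern C++ and is not reproduced here; the following minimal Gauss-Newton iteration, on a hypothetical exponential curve-fitting problem, only illustrates the core step shared by nonlinear least-squares solvers of this kind.

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=20):
    """Minimise 0.5 * ||residual(x)||^2 by Gauss-Newton iterations."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        r, J = residual(x), jacobian(x)
        # Solve the normal equations (J^T J) dx = -J^T r for the update.
        dx = np.linalg.solve(J.T @ J, -J.T @ r)
        x += dx
        if np.linalg.norm(dx) < 1e-12:
            break
    return x

# Toy problem: fit y = a * exp(b * t) to synthetic, lightly noisy samples.
t = np.linspace(0.0, 1.0, 30)
y = 2.0 * np.exp(1.5 * t) + np.random.default_rng(0).normal(0.0, 0.01, t.size)

res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)])

print(gauss_newton(res, jac, x0=[1.0, 1.0]))   # ~= [2.0, 1.5]
```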

    Performance modeling for systematic performance tuning
