
    A Descriptive Tolerance Nearness Measure for Performing Graph Comparison

    Get PDF
    Accepted version
    This article proposes the tolerance nearness measure (TNM) as a computationally reduced alternative to the graph edit distance (GED) for performing graph comparisons. The TNM is defined within the context of near set theory, where the central idea is that determining similarity between sets of disjoint objects is at once intuitive and practically applicable. The TNM between two graphs is produced using the Bron-Kerbosch maximal clique enumeration algorithm. The result is that the TNM approach is less computationally complex than the bipartite-based GED algorithm. The contribution of this paper is the application of the TNM to the problem of quantifying the similarity of disjoint graphs, and the demonstration that the maximal clique enumeration-based TNM produces results comparable to the GED when applied to the problem of content-based image processing, which becomes important as the number of nodes in a graph increases.
    "This research was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant 418413."
    https://content.iospress.com/articles/fundamenta-informaticae/fi174
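    The clique-enumeration step named in the abstract can be sketched in a few lines. This is an illustrative sketch only: the paper does not reproduce its TNM formula here, so `toy_nearness` below is a hypothetical stand-in that compares the maximal-clique size profiles of two graphs; only the Bron-Kerbosch enumeration follows the stated approach.

```python
def bron_kerbosch(r, p, x, adj, cliques):
    """Enumerate maximal cliques of an undirected graph (basic Bron-Kerbosch)."""
    if not p and not x:
        cliques.append(frozenset(r))
        return
    for v in list(p):
        bron_kerbosch(r | {v}, p & adj[v], x & adj[v], adj, cliques)
        p = p - {v}
        x = x | {v}

def maximal_cliques(adj):
    """adj maps each vertex to its set of neighbours."""
    cliques = []
    bron_kerbosch(set(), set(adj), set(), adj, cliques)
    return cliques

def clique_profile(adj):
    """Sorted multiset of maximal-clique sizes."""
    return sorted(len(c) for c in maximal_cliques(adj))

def toy_nearness(adj_a, adj_b):
    """Hypothetical [0, 1] nearness score: overlap of clique-size multisets.
    NOT the paper's TNM -- an illustrative proxy only."""
    a, b = clique_profile(adj_a), clique_profile(adj_b)
    shared = sum(min(a.count(s), b.count(s)) for s in set(a) | set(b))
    return 2.0 * shared / (len(a) + len(b))
```

    For a triangle with a pendant vertex (`{0,1,2}` fully connected, `3` attached to `2`), the enumeration finds exactly the cliques `{0,1,2}` and `{2,3}`.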

    Descriptive Topological Spaces for Performing Visual Search

    Get PDF
    Accepted version
    This article presents an approach to performing the task of visual search in the context of descriptive topological spaces. The presented algorithm forms the basis of a descriptive visual search system (DVSS) based on the guided search model (GSM), which is motivated by human visual search. This model, in turn, consists of bottom-up and top-down attention models and is implemented within the DVSS in three distinct stages. First, the bottom-up activation process is used to generate saliency maps and to identify salient objects. Second, perceptual objects, defined in the context of descriptive topological spaces, are identified and associated with feature vectors obtained from a VGG deep learning convolutional neural network. Lastly, the top-down activation process decides whether the object of interest is present in a given image through the use of descriptive patterns within the context of a descriptive topological space. The presented approach is tested with images from the ImageNet ILSVRC2012 and SIMPLIcity datasets. The contribution of this article is a descriptive pattern-based visual search algorithm.
    "This research has been supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant 418413, and the Faculty of Graduate Studies at the University of Winnipeg."
    https://link.springer.com/chapter/10.1007/978-3-662-58768-3_
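    The third (top-down) stage can be sketched as a descriptor-matching decision. This is a hypothetical skeleton, not the paper's algorithm: the saliency and VGG feature-extraction stages are assumed to have already produced descriptor vectors, and a simple cosine-similarity threshold stands in for the descriptive-pattern decision.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def visual_search(query_vec, object_vecs, threshold=0.8):
    """Top-down decision stage (sketch): is the object of interest present?
    query_vec   -- descriptor of the sought object (e.g. a VGG feature vector)
    object_vecs -- descriptors of the salient objects found bottom-up
    Returns (present, best_similarity); threshold is an illustrative value."""
    best = max(cosine_similarity(query_vec, v) for v in object_vecs)
    return best >= threshold, best
```

    A query descriptor matching one of the salient-object descriptors exactly yields similarity 1.0 and a positive decision.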

    Interval Type-2 Beta Fuzzy Near Sets Approach to Content-Based Image Retrieval

    Get PDF
    In computer-based search systems, similarity plays a key role in replicating the human search process. Indeed, the human search process underlies many natural abilities such as image recovery, language comprehension, decision making, and pattern recognition. The search for images consists of establishing a correspondence between the available image and that sought by the user, by measuring the similarity between the images. Image search by content is generally based on the similarity of the visual characteristics of the images. The distance function used to evaluate the similarity between images depends not only on the criteria of the search but also on the representation of the characteristics of the image. This is the main idea of a content-based image retrieval (CBIR) system. In this article, we first construct type-2 beta fuzzy memberships of descriptor vectors to help manage the inaccuracy and uncertainty of the features extracted from images. By analogy to type-2 fuzzy logic, and motivated by near set theory, we then advance a new fuzzy similarity measure (FSM), termed the interval type-2 fuzzy nearness measure (IT2FNM), and rank the retrieved images according to it. We propose three new IT2 FSMs and provide mathematical justification to demonstrate that the proposed FSMs satisfy proximity properties (i.e., reflexivity, transitivity, symmetry, and overlapping). Experimental results generated using three image databases show consistent and significant results.
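    The interval-valued idea can be made concrete with a toy model. The article's actual IT2FNM formula is not given in this abstract, so the sketch below is an assumption: each feature's type-2 membership is reduced to a (lower, upper) interval, and nearness is one minus the mean interval distance, which makes the score reflexive and symmetric by construction.

```python
def interval_distance(a, b):
    """Mean absolute endpoint difference between two membership intervals."""
    return (abs(a[0] - b[0]) + abs(a[1] - b[1])) / 2.0

def it2_nearness(desc_a, desc_b):
    """Toy interval type-2 nearness between two descriptor vectors,
    each a list of (lower, upper) membership intervals in [0, 1].
    Illustrative stand-in, NOT the article's IT2FNM."""
    d = sum(interval_distance(a, b) for a, b in zip(desc_a, desc_b)) / len(desc_a)
    return 1.0 - d
```

    Identical descriptions score 1.0 (reflexivity), and swapping the arguments leaves the score unchanged (symmetry).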

    Genome-wide Protein-chemical Interaction Prediction

    Get PDF
    The analysis of protein-chemical reactions on a large scale is critical to understanding the complex interrelated mechanisms that govern biological life at the cellular level. Chemical proteomics is a new research area aimed at genome-wide screening of such chemical-protein interactions. Traditional approaches to such screening involve in vivo or in vitro experimentation, which, while becoming faster with the application of high-throughput screening technologies, remains costly and time-consuming compared to in silico methods. Early in silico methods are dependent on knowing 3D protein structures (docking) or knowing binding information for many chemicals (ligand-based approaches). Typical machine learning approaches follow a global classification approach in which a single predictive model is trained for an entire data set, but such an approach is unlikely to generalize well to the protein-chemical interaction space, considering its diversity and heterogeneous distribution. In response to the global approach, work on local models has recently emerged to improve generalization across the interaction space by training a series of independent models, each localized to predict a single interaction. This work examines current approaches to genome-wide protein-chemical interaction prediction and explores new computational methods based on modifications to the boosting framework for ensemble learning. The methods are described and compared to several competing classification methods. Genome-wide chemical-protein interaction data sets are acquired from publicly available resources, and a series of experimental studies are performed in order to compare the performance of each method under a variety of conditions.
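    The global-versus-local contrast described above can be sketched minimally. This is not the dissertation's boosting method: as an assumption for illustration, a "local model" here is just a per-protein majority vote over that protein's known interaction labels, versus one global majority vote over all pairs; a real local model would train a proper classifier on each neighbourhood.

```python
from collections import defaultdict

def fit_global(pairs):
    """pairs: list of (protein, chemical, label) with label in {0, 1}.
    Returns a single predictor shared across the whole interaction space."""
    labels = [y for _, _, y in pairs]
    majority = int(sum(labels) * 2 >= len(labels))
    return lambda protein, chemical: majority

def fit_local(pairs):
    """Returns a predictor that votes within each protein's own labels,
    falling back to the global model for unseen proteins."""
    by_protein = defaultdict(list)
    for p, _, y in pairs:
        by_protein[p].append(y)
    global_model = fit_global(pairs)
    def predict(protein, chemical):
        labels = by_protein.get(protein)
        if not labels:                      # unseen protein: fall back
            return global_model(protein, chemical)
        return int(sum(labels) * 2 >= len(labels))
    return predict
```

    On a skewed toy set the global model predicts the overall majority class everywhere, while the local model recovers each protein's own pattern, which is the generalization argument the abstract makes.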

    Large-Scale Distributed Coalition Formation

    Get PDF
    The CyberCraft project is an effort to construct a large-scale Distributed Multi-Agent System (DMAS) to provide autonomous Cyberspace defense and mission assurance for the DoD. It employs a small but flexible agent structure that is dynamically reconfigurable to accommodate new tasks and policies. This document describes research into developing protocols and algorithms to ensure continued mission execution in a system of one million or more agents, focusing on protocols for coalition formation and Command and Control. It begins by building large-scale routing algorithms for a Hierarchical Peer-to-Peer structured overlay network, called Resource-Clustered Chord (RC-Chord). RC-Chord introduces the ability to efficiently locate agents by the resources that agents possess. Combined with a task model defined for CyberCraft, this technology feeds into an algorithm that constructs task coalitions in a large-scale DMAS. Experiments reveal the flexibility and effectiveness of these concepts for achieving maximum work throughput in a simulated CyberCraft environment.
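    As background for RC-Chord, the successor lookup of plain Chord (the overlay it builds on) can be sketched as follows; the resource-clustering extension itself is not reproduced here, and the ring size is an illustrative parameter.

```python
import bisect

def successor(node_ids, key, bits=16):
    """Return the first node id clockwise from key on a 2**bits Chord ring.
    node_ids -- the ids of the live overlay nodes
    key      -- the identifier being looked up (hashed into the same space)"""
    key %= 2 ** bits
    ids = sorted(node_ids)
    i = bisect.bisect_left(ids, key)
    return ids[i % len(ids)]          # wrap around past the highest id
```

    On a 4-bit ring with nodes {1, 4, 9, 14}, key 5 is owned by node 9, and key 15 wraps around to node 1; a node is its own successor for its exact id.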

    Impact of irrigation on poverty and environment in Ethiopia. Draft Proceeding of the Symposium and Exhibition held at Ghion Hotel, Addis Ababa, Ethiopia 27th -29th November, 2007

    Get PDF
    Poverty, Crop management, Irrigated farming, Rainfed farming, Irrigation systems, Food security, Water harvesting, Institutions, Environmental effects, Public health, Malaria, GIS, Remote sensing, Crop Production/Industries, Environmental Economics and Policy, Farm Management, Food Consumption/Nutrition/Food Safety, Food Security and Poverty, Health Economics and Policy, Institutional and Behavioral Economics, Resource/Energy Economics and Policy

    Robust Systems of Cooperation

    Full text link
    This dissertation examines the robustness of systems of cooperation—the ability to maintain levels of cooperation in the presence of a potentially disruptive force. I examine rankings as a potentially disruptive force that is commonplace in organizations. A ranking is the ordering of individuals according to their performance on a specific dimension. Systems of cooperation often operate in contexts that feature rankings (e.g., the ride-sharing company Uber uses a “rank and yank” performance evaluation system, yet still expects cooperation on complex cooperative coding tasks) and some explicitly use rankings to motivate cooperative contributions toward a collective goal (e.g., the character improvement app “Peeple” consists of members’ public evaluations of each other’s character and uses a public “positivity rating” to motivate members to maintain a more collegial environment). Yet, a growing body of research is highlighting potential downsides to rankings that could undermine the maintenance of systems of cooperation. This research suggests that rankings may unexpectedly introduce new dynamics into a system of cooperation that drive actors toward uncooperative behaviors and undermine the system as a whole. This dissertation aims to address this tension by exploring how systems of cooperation interact with rankings. Specifically, it explores how rankings can both enrich and perturb a system of cooperation and how systems can achieve robust cooperation in the presence of rankings. Chapter 1 introduces the dual role of rankings for systems of cooperation, reflects on the importance of identifying characteristics that make these systems robust, and discusses how the changing nature of work creates a new urgency for understanding how rankings affect cooperation. This introductory chapter is followed by two empirical chapters that examine distinct pieces of the puzzle for how rankings affect the maintenance of cooperation over time.
    Chapter 2 examines how the introduction of a performance ranking affects established systems of cooperation. Using a between-groups, no-deception experimental design that includes 74 groups, 594 participants, and over 11,000 cooperation decisions, it examines 1) whether the self-sustaining properties of systems of cooperation are naturally able to overcome the potentially disruptive effects of rankings, and 2) in the case of disruption, how managers may be able to restore cooperation in the presence of rankings, making these systems of cooperation more robust. Chapter 3 examines an online community that explicitly uses a ranking to promote cooperation. Using over 1.2 million observations of members’ weekly behaviors, this chapter examines how potential losses and gains in rank inspire individuals to perform both cooperative and uncooperative behaviors and explores how the system-level implications of these behaviors may affect the robustness of systems of cooperation. Chapter 4 concludes the dissertation by synthesizing findings from the empirical chapters, discussing their joint implications for building robust systems of cooperation, and detailing areas of future research.
    PhD
    Business Administration
    University of Michigan, Horace H. Rackham School of Graduate Studies
    https://deepblue.lib.umich.edu/bitstream/2027.42/145900/1/caceves_1.pd

    The Influence of Phase Distortion on Sound Quality

    Get PDF

    Assessment of the variability of spatial interpolation methods using elevation and drill hole data over the Magmont mine area, south-east Missouri

    Get PDF
    Spatial interpolation methods are widely used in fields of geoscience such as mineral exploration. Interpolation methods translate the distribution of discrete data into a continuous field over a given study area. Many methods exist and operate differently. Judiciously choosing the best interpolation method calls for an understanding of the algorithm, the intent or goal of the investigation, and knowledge of the study area. In the field of mineral exploration, accurate assessment is important because both overestimation and underestimation of spatially defined variables result in varied consequences. Assessment of methods' variability can be used as an additional criterion to help make an informed choice. Here, eight interpolation methods were tested on two spatial data sets consisting of topographic surface elevations and subsurface elevations of the top and bottom of a lead orebody at the Magmont mine area, in south-east Missouri. Variability between the interpolation methods was assessed based on a statistical paired t-test of each method against a reference value, geometric analysis using the map algebra tool in ArcMap 10.4.1, and a comparison of their algorithms. Two of the methods returned values not significantly different from the reference value, while the others were less robust. In testing model variability a second time on a reduced sample size, results suggest that interpolation methods are sensitive to sample size. Similarly, building the orebody top and bottom surfaces from information on the depths across the mineralized intersection showed dissimilarity among methods. Key words: spatial interpolation, GIS, Magmont mine area, variability, map algebra, paired t-test
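    The abstract does not list its eight methods, but inverse distance weighting (IDW) is a standard spatial interpolator of this kind and illustrates how scattered elevations become a continuous surface; the power parameter below is an illustrative default.

```python
def idw(x, y, samples, power=2.0):
    """Inverse distance weighting: interpolate a value at (x, y)
    from (sx, sy, value) samples such as elevations at drill holes."""
    num = den = 0.0
    for sx, sy, value in samples:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0.0:
            return value              # exactly on a sample point
        w = d2 ** (-power / 2.0)      # weight = 1 / distance**power
        num += w * value
        den += w
    return num / den
```

    IDW is exact at the sample points themselves, and a point equidistant from two samples receives the mean of their values, which is one reason different interpolators can disagree away from the data.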

    Systematic asset allocation using flexible views for South African markets

    Get PDF
    We implement a systematic asset allocation model using the Historical Simulation with Flexible Probabilities (HS-FP) framework developed by Meucci [142, 144, 145]. The HS-FP framework is a flexible non-parametric estimation approach that considers future asset class behavior to be conditional on time and market environments, and derives a forward-looking distribution that is consistent with this view while remaining as close as possible to the prior distribution. The framework derives the forward-looking distribution by applying unequal time- and state-conditioned probabilities to historical observations of asset class returns. This is achieved using relative entropy to find estimates with the least distortion to the prior distribution. Here, we use the HS-FP framework on South African financial market data for asset allocation purposes, estimating expected returns, correlations, and volatilities that are better represented through the measured market cycle. We demonstrate a range of state variables that can be useful for understanding market environments. Concretely, we compare the out-of-sample performance of a specific configuration of the HS-FP model relative to classic Mean-Variance Optimization (MVO) and Equally Weighted (EW) benchmark models. The framework displays a low probability of backtest overfitting, and the out-of-sample net returns and Sharpe ratio point estimates of the HS-FP model outperform the benchmark models. However, the results are inconsistent when training windows are varied, the Sharpe ratio is seen to be inflated, and the method does not demonstrate statistically significant outperformance on a gross or net basis.
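    The time-conditioning half of the framework can be sketched minimally: exponentially decaying probabilities over the historical scenarios, normalised to sum to one, with the effective number of scenarios (the exponential of the entropy) as a diagnostic. This is a simplified assumption-level sketch; the full HS-FP framework also conditions on state variables via relative-entropy minimisation, which is not reproduced here, and the half-life is an illustrative parameter.

```python
import math

def exp_decay_probabilities(T, half_life):
    """Flexible probabilities (time-conditioned): weight the T historical
    scenarios with exponential decay so recent observations dominate.
    Index t = T - 1 is the most recent scenario."""
    decay = math.log(2.0) / half_life
    w = [math.exp(-decay * (T - 1 - t)) for t in range(T)]
    s = sum(w)
    return [x / s for x in w]

def effective_scenarios(p):
    """Effective number of scenarios exp(entropy): T for uniform weights,
    smaller as the weights concentrate on few observations."""
    return math.exp(-sum(q * math.log(q) for q in p if q > 0.0))
```

    With a short half-life the most recent return observation carries the largest weight, and the effective number of scenarios falls well below the raw sample size, quantifying how much history the conditioned estimates really use.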