1,801,933 research outputs found

    Detecting wheels

    A wheel is a graph made of a cycle of length at least 4 together with a vertex that has at least three neighbours in the cycle. We prove that the problem whose instance is a graph G and whose question is "does G contain a wheel as an induced subgraph?" is NP-complete. We also settle the complexity of several similar problems.
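The abstract only states the definition and the hardness result; as an illustration of what the decision problem asks, here is a hypothetical brute-force checker (exponential time, so usable only on tiny graphs, and not the paper's construction): enumerate chordless cycles of length at least 4 and look for an outside vertex with at least three neighbours on the cycle.

```python
from itertools import permutations

def has_induced_wheel(adj):
    """Brute-force check for an induced wheel in a small graph.

    adj: dict mapping each vertex to its set of neighbours.
    A wheel is a cycle of length >= 4 plus a vertex with >= 3
    neighbours on that cycle; the cycle must be induced (chordless)
    for the whole configuration to appear as an induced subgraph.
    Exponential time -- an illustration, not the paper's method.
    """
    vs = list(adj)
    n = len(vs)
    for k in range(4, n + 1):                      # candidate cycle length
        for cyc in permutations(vs, k):
            if cyc[0] != min(cyc):                 # canonical start, fewer duplicates
                continue
            # consecutive vertices adjacent, including the closing edge
            ok = all(cyc[(i + 1) % k] in adj[cyc[i]] for i in range(k))
            # no chords: the cycle must be induced
            if ok:
                ok = all(cyc[j] not in adj[cyc[i]]
                         for i in range(k) for j in range(i + 2, k)
                         if not (i == 0 and j == k - 1))
            if not ok:
                continue
            cset = set(cyc)
            for hub in vs:                         # hub with >= 3 cycle neighbours
                if hub not in cset and sum(u in adj[hub] for u in cyc) >= 3:
                    return True
    return False
```

Note that K4 contains 4-cycles but no induced wheel, since every 4-cycle in it has a chord; that is why "induced" does the work in the definition.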

    Detecting Functional Requirements Inconsistencies within Multi-teams Projects Framed into a Model-based Web Methodology

    One of the most essential processes within the software project life cycle is the Requirements Engineering Process (REP), because it specifies the software product requirements. This specification should be as consistent as possible, because it allows estimating in a suitable manner the effort required to obtain the final product. REP is complex in itself, but this complexity is greatly increased in big, distributed and heterogeneous projects with multiple analyst teams and high integration between functional modules. This paper presents an approach for the systematic conciliation of functional requirements in big projects dealing with a web model-based approach, and shows how it may be implemented in the context of NDT (Navigational Development Techniques), a web methodology. The paper also describes an empirical evaluation in the CALIPSOneo project, analyzing the improvements obtained with our approach. Ministerio de Economía y Competitividad TIN2013-46928-C3-3-R; Ministerio de Economía y Competitividad TIN2015-71938-RED

    Detecting financial distress

    This paper examines two types of statistical tests, multiple discriminant analysis (MDA) and the logit model, to detect financially distressed companies. The two tests are compared to identify factors that could differentiate financially distressed companies from healthy ones. Among the fifteen explanatory variables, MDA shows that the current ratio, net income to total assets, and sales to current assets are the indicators of financially distressed companies. Other than net income to total assets, the logit model provides two different ratios, shareholders' fund to total liabilities and cash flow from financing to total liabilities, to identify financially distressed companies. It was found that the logit model could accurately predict 91.5% of the estimation sample and 90% of the holdout sample, whereas the discriminant model shows an overall accuracy rate of 84.5% and 80% for the estimation and the holdout sample respectively.
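To make the logit side concrete, here is a minimal sketch of logistic regression fitted by stochastic gradient descent on a single hypothetical financial ratio (the toy data, feature, and hyperparameters are invented for illustration; the paper's fifteen-variable models are not reproduced here):

```python
import math

def fit_logit(X, y, lr=0.1, epochs=2000):
    """Plain SGD logistic regression, no external libraries.

    X: list of feature rows (e.g. financial ratios), y: 0/1 labels
    (1 = financially distressed).  Returns weights w and bias b.
    """
    n_feat = len(X[0])
    w = [0.0] * n_feat
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))          # sigmoid probability
            g = p - yi                              # gradient of log-loss wrt z
            b -= lr * g
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
    return w, b

def predict(w, b, x):
    """Classify as distressed (1) when the log-odds are positive."""
    z = b + sum(wj * xj for wj, xj in zip(w, x))
    return 1 if z > 0 else 0
```

On a real dataset one would of course hold out a validation sample, as the paper does, rather than score the training data.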

    Detecting Fourier subspaces

    Let G be a finite abelian group. We examine the discrepancy between subspaces of l^2(G) which are diagonalized in the standard basis and subspaces which are diagonalized in the dual Fourier basis. The general principle is that a Fourier subspace whose dimension is small compared to |G| = dim(l^2(G)) tends to be far away from standard subspaces. In particular, the recent positive solution of the Kadison-Singer problem shows that within any Fourier subspace whose dimension is small compared to |G| there is a standard subspace which is essentially indistinguishable from its orthogonal complement.Comment: 8 pages
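A small numerical illustration of why Fourier subspaces sit far from standard ones (for the cyclic case G = Z/nZ, which is an assumption of this sketch, not the paper's general setting): every normalised character has the same overlap 1/sqrt(n) with every standard basis vector, so a one-dimensional Fourier subspace is spread evenly over all standard coordinates.

```python
import cmath
import math

def fourier_vector(k, n):
    """k-th character of Z/nZ, normalised to unit length in l^2."""
    return [cmath.exp(2j * math.pi * k * j / n) / math.sqrt(n)
            for j in range(n)]

# |<e_j, f_k>| = |f_k[j]| = 1/sqrt(n) for every j and k, so no standard
# coordinate direction is close to any single Fourier direction.
```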

    Detecting Sponsored Recommendations

    With a vast number of items, web-pages, and news to choose from, online services and customers both benefit tremendously from personalized recommender systems. Such systems, however, provide great opportunities for targeted advertisements, by displaying ads alongside genuine recommendations. We consider a biased recommendation system where such ads are displayed without any tags (disguised as genuine recommendations), rendering them indistinguishable to a single user. We ask whether it is possible for a small subset of collaborating users to detect such a bias. We propose an algorithm that can detect such a bias through statistical analysis of the collaborating users' feedback. The algorithm requires only binary information indicating whether a user was satisfied with each recommended item. This makes the algorithm widely appealing to real-world issues such as identification of search engine bias and pharmaceutical lobbying. We prove that the proposed algorithm detects the bias with high probability for a broad class of recommendation systems when a sufficient number of users provide feedback on a sufficient number of recommendations. We provide extensive simulations with real data sets and practical recommender systems, which confirm the trade-offs in the theoretical guarantees.Comment: Shorter version to appear in Sigmetrics, June 201
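As a caricature of detection from pooled binary feedback (this is a standard two-proportion z-test, not the paper's algorithm or its guarantees), collaborating users could compare the satisfaction rate on suspected sponsored items against the rate on ordinary recommendations:

```python
import math

def detect_bias(flagged, control, z_crit=2.58):
    """Hypothetical two-proportion z-test on pooled 0/1 satisfaction votes.

    flagged: binary feedback on suspected sponsored recommendations.
    control: binary feedback on ordinary recommendations.
    Returns True when the satisfaction rates differ significantly
    (|z| above z_crit, roughly the 1% two-sided level).
    """
    p1 = sum(flagged) / len(flagged)
    p2 = sum(control) / len(control)
    pooled = (sum(flagged) + sum(control)) / (len(flagged) + len(control))
    se = math.sqrt(pooled * (1 - pooled)
                   * (1 / len(flagged) + 1 / len(control)))
    if se == 0:
        return p1 != p2
    return abs(p1 - p2) / se > z_crit
```

The test only fires with enough votes on each side, loosely mirroring the abstract's requirement that sufficiently many users rate sufficiently many recommendations.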

    Detecting sequential structure

    Programming by demonstration requires detection and analysis of sequential patterns in a user’s input, and the synthesis of an appropriate structural model that can be used for prediction. This paper describes SEQUITUR, a scheme for inducing a structural description of a sequence from a single example. SEQUITUR integrates several different inference techniques: identification of lexical subsequences or vocabulary elements, hierarchical structuring of such subsequences, identification of elements that have equivalent usage patterns, inference of programming constructs such as looping and branching, generalisation by unifying grammar rules, and the detection of procedural substructure. Although SEQUITUR operates with abstract sequences, a number of concrete illustrations are provided.
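The hierarchical-structuring idea can be sketched with an offline Re-Pair-style loop: repeatedly replace the most frequent adjacent pair of symbols with a fresh rule. This is a simplification for illustration; real SEQUITUR works online and maintains digram-uniqueness and rule-utility invariants rather than rescanning the sequence.

```python
def induce_grammar(seq):
    """Offline sketch of grammar induction from one sequence.

    Repeatedly replaces the most frequent adjacent pair with a new
    nonterminal R0, R1, ...  Returns (compressed sequence, rules),
    where rules maps each nonterminal to the pair it stands for.
    """
    rules = {}
    seq = list(seq)
    next_id = 0
    while True:
        counts = {}
        for a, b in zip(seq, seq[1:]):
            counts[(a, b)] = counts.get((a, b), 0) + 1
        if not counts:
            return seq, rules
        pair, freq = max(counts.items(), key=lambda kv: kv[1])
        if freq < 2:                       # no pair repeats: stop
            return seq, rules
        sym = f"R{next_id}"
        next_id += 1
        rules[sym] = pair
        out, i = [], 0
        while i < len(seq):                # left-to-right replacement pass
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(sym)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
```

On "abcabcabc" this builds nested rules (a pair for "ab", then one for "abc", and so on), giving the kind of hierarchical description of repeated subsequences the abstract refers to.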

    sequenceLDhot: Detecting Recombination Hotspots.

    Motivation: There is much local variation in recombination rates across the human genome—with the majority of recombination occurring in recombination hotspots—short regions of around 2 kb in length that have much higher recombination rates than neighbouring regions. Knowledge of this local variation is important, e.g. in the design and analysis of association studies for disease genes. Population genetic data, such as that generated by the HapMap project, can be used to infer the location of these hotspots. We present a new, efficient and powerful method for detecting recombination hotspots from population data. Results: We compare our method with four current methods for detecting hotspots. It is orders of magnitude quicker, and has greater power, than two related approaches. It appears to be more powerful than HotspotFisher, though less accurate at inferring the precise positions of the hotspots. It was also more powerful than LDhot in some situations: particularly for weaker hotspots (10–40 times the background rate) when SNP density is lower (< 1/kb). Availability: Program, data sets, and full details of results are available at: http://www.maths.lancs.ac.uk/~fearnhea/Hotspot

    Detecting Weakly Simple Polygons

    A closed curve in the plane is weakly simple if it is the limit (in the Fréchet metric) of a sequence of simple closed curves. We describe an algorithm to determine whether a closed walk of length n in a simple plane graph is weakly simple in O(n log n) time, improving an earlier O(n^3)-time algorithm of Cortese et al. [Discrete Math. 2009]. As an immediate corollary, we obtain the first efficient algorithm to determine whether an arbitrary n-vertex polygon is weakly simple; our algorithm runs in O(n^2 log n) time. We also describe algorithms that detect weak simplicity in O(n log n) time for two interesting classes of polygons. Finally, we discuss subtle errors in several previously published definitions of weak simplicity.Comment: 25 pages and 13 figures, submitted to SODA 201
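For contrast with the harder weak-simplicity question, here is the natural O(n^2) baseline of checking ordinary simplicity (a strictly simple polygon is trivially weakly simple, but not conversely; this sketch tests only proper edge crossings and ignores degenerate overlaps, which is exactly the regime where weak simplicity becomes subtle):

```python
def is_simple_polygon(pts):
    """O(n^2) check that a polygon has no proper self-crossing.

    pts: list of (x, y) vertices in order.  Adjacent edges are allowed
    to share their common endpoint; collinear overlaps are not detected.
    """
    def orient(a, b, c):
        # > 0 if a,b,c make a left turn, < 0 for a right turn, 0 if collinear
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

    def segs_cross(p1, p2, q1, q2):
        d1, d2 = orient(q1, q2, p1), orient(q1, q2, p2)
        d3, d4 = orient(p1, p2, q1), orient(p1, p2, q2)
        return d1 * d2 < 0 and d3 * d4 < 0     # proper crossings only

    n = len(pts)
    edges = [(pts[i], pts[(i + 1) % n]) for i in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if j == i + 1 or (i == 0 and j == n - 1):
                continue                       # adjacent edges share a vertex
            if segs_cross(*edges[i], *edges[j]):
                return False
    return True
```

A "bowtie" quadrilateral fails this test, while any convex polygon passes; deciding weak simplicity for degenerate inputs is what the paper's O(n^2 log n) algorithm handles.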