
    Space-Efficient String Mining under Frequency Constraints

    Let D1 and D2 be two databases (i.e., multisets) of d strings, over an alphabet Σ, with overall length n. We study the problem of mining discriminative patterns between D1 and D2, e.g., patterns that are frequent in one database but not in the other, emerging patterns, or patterns satisfying other frequency-related constraints. Using the algorithmic framework by Hui (CPM 1992), one can solve several variants of this problem in optimal linear time with the aid of suffix trees or suffix arrays. This stands in stark contrast to other pattern domains such as itemsets or subgraphs, where super-linear lower bounds are known. However, the space requirement of existing solutions is O(n log n) bits, which is not optimal for |Σ| << n (in particular for constant |Σ|), as the databases themselves occupy only n log |Σ| bits. Because in many real-life applications space is a more critical resource than time, the aim of this article is to reduce the space at the cost of an increased running time. In particular, we give a solution for the above problems that uses O(n log |Σ| + d log n) bits, while the time requirement increases from the optimal linear time to O(n log n). Our new method is tested extensively on biologically relevant datasets and shown to be usable even on genome-scale data.
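    As a point of reference, the following naive Python sketch enumerates bounded-length substrings and filters them by document frequency in the two databases. It illustrates the kind of frequency constraint targeted here, not the paper's suffix-array-based, space-efficient method; all names and thresholds are illustrative.

```python
from collections import Counter

def doc_freq(db, max_len):
    """Document frequency of every substring of length <= max_len in a database of strings."""
    freq = Counter()
    for s in db:
        # Collect each distinct substring once per string, then count strings.
        seen = {s[i:i + k] for i in range(len(s))
                for k in range(1, max_len + 1) if i + k <= len(s)}
        for sub in seen:
            freq[sub] += 1
    return freq

def emerging_patterns(d1, d2, max_len, min_f1, max_f2):
    """Substrings occurring in >= min_f1 strings of d1 but <= max_f2 strings of d2."""
    f1, f2 = doc_freq(d1, max_len), doc_freq(d2, max_len)
    return sorted(p for p, c in f1.items() if c >= min_f1 and f2.get(p, 0) <= max_f2)

print(emerging_patterns(["abcab", "abc"], ["bca"], max_len=3, min_f1=2, max_f2=0))
# -> ['ab', 'abc']
```

    This baseline uses quadratic space in the worst case, which is exactly the overhead the suffix-tree/suffix-array framework and its succinct variants avoid.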

    The Challenges and Issues Facing the Deployment of RFID Technology

    Griffith Sciences, School of Information and Communication Technology

    A Survey on EOSIO Systems Security: Vulnerability, Attack, and Mitigation

    EOSIO, one of the most representative blockchain 3.0 platforms, introduces many new features, e.g., a delegated proof of stake consensus algorithm and updatable smart contracts, enabling much higher transactions per second and a prosperous ecosystem of decentralized applications (DApps). According to the statistics, its market value has reached nearly 18 billion USD, ranking third in the whole cryptocurrency market after Bitcoin and Ethereum. Loopholes, however, are hiding in the shadows. EOSBet, a famous gambling DApp, was attacked twice within a month and lost more than 1 million USD. No existing work has surveyed EOSIO from a security researcher's perspective. To fill this gap, in this paper we collected all attack events that have occurred against EOSIO and systematically studied their root causes, i.e., the vulnerabilities lurking in the components EOSIO relies on, as well as the corresponding attacks and mitigations. We also summarized best practices for DApp developers, the EOSIO official team, and security researchers as future directions. Comment: 34 pages, 12 figures.
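    For illustration only, here is a toy Python sketch of the stake-weighted producer election underlying delegated proof of stake. EOSIO elects 21 block producers this way, though the real protocol adds vote decay, proxies, and continuous re-election; this is not EOSIO code, and all names are invented.

```python
from collections import defaultdict

def elect_producers(votes, num_producers=21):
    """Tally stake-weighted votes and return the top producers.

    votes: list of (stake, [candidate, ...]) pairs; each holder's full stake
    backs every candidate on its ballot, in the style of EOSIO's DPoS voting.
    """
    tally = defaultdict(int)
    for stake, candidates in votes:
        for c in candidates:
            tally[c] += stake
    ranked = sorted(tally, key=lambda c: (-tally[c], c))  # stake desc, name tie-break
    return ranked[:num_producers]

# Example: three holders electing a (toy) top-2 producer set.
ballots = [(100, ["alice", "bob"]), (50, ["carol"]), (30, ["bob", "carol"])]
print(elect_producers(ballots, num_producers=2))  # ['bob', 'alice']
```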

    Mine schedule optimization with geotechnical constraints

    If a mining project consists of n stoping activities, these can be scheduled in up to n! ways according to the durations between the activities and their precedence. Mine schedule optimization manipulates the precedence relationships and the durations of the mining activities in order to maximize the Net Present Value (NPV). However, unexpected instabilities may impede or disrupt the schedule and thus reduce profitability, so geotechnical aspects of the operation need to be taken into account. The mine schedule optimization software considered in this work is the so-called Schedule Optimization Tool (SOT). This thesis reports the development of new geomechanical constraints for any mine scheduling tool to find the safest and most profitable schedule, exemplified within the SOT framework. The core hypothesis of this research is that there is a time-dependent aspect of the rock behaviour that leads to instability, a consequence of the dependence of geotechnical instability upon the sequence of, and duration between, stoping activities. Evidence presented in this work supports this hypothesis. An automated procedure for timely and computationally efficient calculation of the instability metrics is presented. This can be applied to evaluate the geomechanical stability of any of the n! schedules for excavating n stopes, or of schedules arising in the schedule optimization process. In practice, however, the number of feasible schedules is much less than n! due to the precedence constraints. The approach starts with computing the n elastic stress fields induced after excavating each individual stope independently within an identical computational domain using Compute3D. The stress-time series of each and/or every sequence of stoping are generated through superposition of these pre-computed stress fields, and the time stamps of blasting for excavation are allocated according to the stoping timetable. Different blasts are allowed to be timetabled at the same time. By means of Hooke's law and the 3D Kelvin-Voigt creep model, the elastic strain-time series and the viscoelastic strain-time series are produced from the stress-time series. Based on the Mohr-Coulomb failure criterion, three (in)stability indicators are defined:

    i. 'Strength Factor', to evaluate the state of stress at each stage of each schedule as a proportion of its strength;
    ii. 'Strainth Factor', to evaluate the state of elastic strain at each stage of each schedule as a proportion of a limiting 'rupture' strain;
    iii. 'Viscoelastic-strainth Factor', to evaluate the state of viscoelastic strain at each stage of each schedule as a proportion of a limiting 'rupture' strain.

    To provide a perspective of the (in)stability condition in the computational domain for all the feasible sequences of stoping, 12 (in)stability metrics were defined, and the results of each are illustrated in the form of '(in)stability indicator diagrams'. The overall methodology is applied to an example of excavating 6 open stopes. Additionally, a methodology is theorized to evaluate the stability condition in the rock mass surrounding the stopes for a series of stoping and backfilling schedules. This methodology is based on pre-computing one additional stress field for each stope, representing the effect of the fill loading on the rock mass. The calculations for this approach are consistent with the time and computational efficiency of the original methodology. The computational effort increases to 2n pre-computed stress fields rather than n: as the problem size doubles, the computational time doubles, rather than growing polynomially or exponentially.

    Master of Applied Science (M.Sc.) in Natural Resources Engineering
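    As a rough illustration of the viscoelastic component described above, the following Python sketch superimposes Kelvin-Voigt creep responses to stress increments added at blast times, producing a viscoelastic strain-time series. It assumes a linear Kelvin-Voigt solid with stiffness E and viscosity eta; all values and names are illustrative and not taken from the thesis.

```python
import math

def kelvin_voigt_strain(stress_steps, times, E, eta):
    """Viscoelastic strain-time series for a Kelvin-Voigt solid under a sum of
    stress steps: each blast at time t_i adds a constant increment dsigma_i,
    and linearity lets the individual creep responses be superimposed:

        strain(t) = sum_i (dsigma_i / E) * (1 - exp(-(t - t_i) * E / eta)),  t >= t_i
    """
    series = []
    for t in times:
        eps = 0.0
        for t_i, dsigma in stress_steps:
            if t >= t_i:  # only blasts that have already occurred contribute
                eps += (dsigma / E) * (1.0 - math.exp(-(t - t_i) * E / eta))
        series.append(eps)
    return series

# Two stopes blasted at t=0 and t=5 (stress in MPa, E in MPa, eta in MPa*day).
steps = [(0.0, 12.0), (5.0, 8.0)]
print(kelvin_voigt_strain(steps, times=[0, 1, 5, 10, 50], E=2.0e4, eta=1.0e5))
```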

    24th International Conference on Information Modelling and Knowledge Bases

    In the last three decades, information modelling and knowledge bases have become important subjects, not only in academic communities related to information systems and computer science but also in business areas where information technology is applied. The series of European-Japanese Conferences on Information Modelling and Knowledge Bases (EJC) originally started as a cooperation initiative between Japan and Finland in 1982. The practical operations were then organised by Professor Ohsuga in Japan and Professors Hannu Kangassalo and Hannu Jaakkola in Finland (Nordic countries). The geographical scope has since expanded to cover Europe as well as other countries. A workshop character is typical for the conference: discussion, ample time for presentations, and a limited number of participants (50) and papers (30). Suggested topics include, but are not limited to:

    1. Conceptual modelling: Modelling and specification languages; Domain-specific conceptual modelling; Concepts, concept theories and ontologies; Conceptual modelling of large and heterogeneous systems; Conceptual modelling of spatial, temporal and biological data; Methods for developing, validating and communicating conceptual models.
    2. Knowledge and information modelling and discovery: Knowledge discovery, knowledge representation and knowledge management; Advanced data mining and analysis methods; Conceptions of knowledge and information; Modelling information requirements; Intelligent information systems; Information recognition and information modelling.
    3. Linguistic modelling: Models of HCI; Information delivery to users; Intelligent informal querying; Linguistic foundations of information and knowledge; Fuzzy linguistic models; Philosophical and linguistic foundations of conceptual models.
    4. Cross-cultural communication and social computing: Cross-cultural support systems; Integration, evolution and migration of systems; Collaborative societies; Multicultural web-based software systems; Intercultural collaboration and support systems; Social computing, behavioral modeling and prediction.
    5. Environmental modelling and engineering: Environmental information systems (architecture); Spatial, temporal and observational information systems; Large-scale environmental systems; Collaborative knowledge base systems; Agent concepts and conceptualisation; Hazard prediction, prevention and steering systems.
    6. Multimedia data modelling and systems: Modelling multimedia information and knowledge; Content-based multimedia data management; Content-based multimedia retrieval; Privacy and context-enhancing technologies; Semantics and pragmatics of multimedia data; Metadata for multimedia information systems.

    Overall we received 56 submissions. After careful evaluation, 16 papers were selected as long papers, 17 as short papers, 5 as position papers, and 3 for presentation of perspective challenges. We thank all colleagues for their support of this issue of the EJC conference, especially the program committee, the organising committee, and the programme coordination team. The long and short papers presented at the conference are revised after the conference and published in the series "Frontiers in Artificial Intelligence and Applications" by IOS Press (Amsterdam). The books "Information Modelling and Knowledge Bases" are edited by the Editing Committee of the conference. We believe that the conference will be productive and fruitful in advancing the research and application of information modelling and knowledge bases.

    Bernhard Thalheim, Hannu Jaakkola, Yasushi Kiyoki

    Algorithms and Data Structures for Coding, Indexing, and Mining of Sequential Data

    In recent years, the production of sequential data has been increasing rapidly. This raises challenging problems about how to represent information, how to retrieve information, and how to extract knowledge from sequential data. These questions belong to the areas of coding, indexing, and mining, respectively. In this thesis, we investigate problems from all three areas.

    Coding refers to the way in which information is represented. Coding aims at generating optimal codes, that is, codes having minimum expected length. Codes can be generated for different purposes, from data compression to error detection/correction. The Lempel-Ziv 77 parsing produces an asymptotically optimal code in terms of compression. We study algorithms to efficiently decompress strings from the Lempel-Ziv 77 parsing, using memory proportional to the size of the parsing itself. We provide the first implementation of an algorithm by Bille et al., the only work we are aware of on this problem. We present a practical evaluation of this approach and several optimizations which improve the performance on all datasets we tested. Through the Ulam-Rényi game, it is possible to provide optimal adaptive error-correcting codes. The game consists of discovering an unknown m-bit number by asking membership questions, the answers to which can be erroneous. Questions are formulated knowing the answers to all previous ones. We want to find an optimal strategy, i.e., a strategy that can identify any m-bit number using the theoretical minimum number of questions. We study the case where questions are a union of up to a fixed number of intervals, and up to three answers can be erroneous. We first show that, for any sufficiently large m, there exists a strategy to identify an initially unknown m-bit number which uses at most four intervals per question. We further refine our main tool to turn the above asymptotic result into a complete characterization of those instances of the Ulam-Rényi game that admit optimal strategies.

    Indexing refers to the way in which information is retrieved. An index for a text permits finding all occurrences of any substring without traversing the whole text. Many applications require looking for approximate substrings. One of these is the problem of jumbled pattern matching, where two strings match if one is a permutation of the other. We study combinatorial aspects of prefix normal words, a class of binary words introduced in this context. These words can be used as indices for the Indexed Binary Jumbled Pattern Matching problem. We present a new recursive generation algorithm for prefix normal words that is competitive with the previous one but allows listing all prefix normal words sharing the same prefix. This sheds light on novel insights that may help in counting the number of prefix normal words of a given length. We then introduce infinite prefix normal words and show that one of the operations used by the algorithm, when repeatedly applied to extend a word, produces an infinite prefix normal word. This motivates the search for other operations that produce infinite prefix normal words. We found that one of these operations establishes a connection between prefix normal words and Sturmian words. We also explored the relationship between prefix normal words and Abelian complexity, as well as between prefix normal words and lexicographic order.
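    For concreteness, the defining property of prefix normal words admits a short quadratic checker, sketched below in Python: no factor of the word may contain more 1s than the prefix of the same length. This illustrates the definition only, not the generation algorithm studied in the thesis.

```python
def is_prefix_normal(w):
    """Check that no factor of the binary word w contains more 1s than the
    prefix of the same length (the defining property of prefix normal words)."""
    n = len(w)
    ones = [0] * (n + 1)                       # ones[i] = number of 1s in w[:i]
    for i, c in enumerate(w):
        ones[i + 1] = ones[i] + (c == "1")
    for k in range(1, n + 1):                  # factor length k
        max_ones = max(ones[i + k] - ones[i] for i in range(n - k + 1))
        if max_ones > ones[k]:                 # prefix of length k has ones[k] 1s
            return False
    return True

print(is_prefix_normal("1101"))   # True: every prefix dominates all factors
print(is_prefix_normal("1011"))   # False: factor "11" beats prefix "10"
```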
    Mining refers to the way in which information is converted into knowledge. The process of knowledge discovery covers several processing steps, including knowledge extraction. We analyze the problem of mining assertions for an embedded system from its simulation traces. This problem can be modeled as a pattern discovery problem on colored strings. We present two problems of pattern discovery on colored strings: patterns for one color only, or for all colors at the same time. We present two suffix tree-based algorithms. The first algorithm solves both the one-color problem and the all-colors problem. We then introduce modifications which improve the performance of the algorithm both on synthetic and on real data. We implemented and evaluated the proposed approaches, highlighting the time trade-offs that can be obtained. A different way of knowledge extraction is based on the information-theoretic perspective of Pearl's model of causality. It has been postulated that the true causality direction between two phenomena A and B is related to the problem of finding the minimum-entropy joint distribution between A and B. This problem is known to be NP-hard, and greedy algorithms have recently been proposed. We provide a novel analysis of one of the proposed heuristics, showing that this algorithm guarantees an additive approximation of 1 bit. We then provide a general criterion for guaranteeing an additive approximation factor of 1. This criterion may be of independent interest in other contexts where couplings are used.
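    As a sketch of the greedy flavor discussed here, the following Python function builds a coupling of two marginals by repeatedly matching their largest remaining masses. Heuristics of this kind are the ones shown to approximate the minimum-entropy coupling within 1 bit; this particular implementation is only an illustration, not the analyzed algorithm itself.

```python
def greedy_coupling(p, q, tol=1e-12):
    """Greedily build a joint distribution with marginals p and q by repeatedly
    matching the largest remaining masses of the two distributions."""
    p, q = list(p), list(q)
    joint = {}
    while True:
        i = max(range(len(p)), key=lambda k: p[k])   # heaviest remaining p-mass
        j = max(range(len(q)), key=lambda k: q[k])   # heaviest remaining q-mass
        m = min(p[i], q[j])
        if m <= tol:                                 # both marginals exhausted
            break
        joint[(i, j)] = joint.get((i, j), 0.0) + m
        p[i] -= m
        q[j] -= m
    return joint

print(greedy_coupling([0.5, 0.3, 0.2], [0.6, 0.4]))
# -> {(0, 0): 0.5, (1, 1): 0.3, (2, 0): 0.1, (2, 1): 0.1}
```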

    Effective And Efficient Preemption Placement For Cache Overhead Minimization In Hard Real-Time Systems

    Schedulability analysis for real-time systems has been the subject of prominent research over the past several decades. One of the key foundations of schedulability analysis is an accurate worst-case execution time (WCET) for each task. In preemption-based real-time systems, the cache-related preemption delay (CRPD) can represent a significant component (up to 44%, as documented in the research literature) of variability in overall task WCET. Several methods have been employed to calculate CRPD with significant levels of pessimism that may result in a task set being erroneously declared non-schedulable. Furthermore, they do not take into account that CRPD cost is inherently a function of where preemptions actually occur. Our approach of computing CRPD via loaded cache blocks (LCBs) is more accurate in the sense that the cache state reflects which cache blocks are reloaded and the specific program locations where they are reloaded. Limited preemption models attempt to minimize preemption overhead (CRPD) by reducing the number of allowed preemptions and/or allowing preemption only at program locations where the CRPD effect is minimized. These algorithms rely heavily on accurate CRPD measurements or estimation models in order to identify an optimal set of preemption points. Our approach improves the effectiveness of limited optimal preemption point placement algorithms by calculating the LCBs for each pair of adjacent preemptions, to more accurately model task WCET and maximize schedulability as compared to existing preemption point placement approaches. We utilize a dynamic programming technique to develop an optimal preemption point placement algorithm. Lastly, we demonstrate, using a case study, improved task set schedulability and optimal preemption point placement via our new LCB characterization. We propose a new CRPD metric, called loaded cache blocks (LCB), which accurately characterizes the CRPD a real-time task may be subjected to due to the preemptive execution of higher-priority tasks. We show how to integrate our new LCB metric into our newly developed algorithms that automatically place preemption points in linear control flow graphs (CFGs) for limited preemption scheduling applications. We then extend the derivation of loaded cache blocks from linear CFGs to conditional CFGs, and show how to integrate the revised LCB metric into algorithms that automatically place preemption points in conditional CFGs. For future work, we will verify the correctness of our framework against other measurable physical and hardware constraints. We also plan to complete our work on developing a generalized framework that can be seamlessly integrated into real-time schedulability analysis.
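    To make the dynamic-programming idea concrete, here is a hedged Python sketch of preemption point placement on a linear CFG: the crpd matrix stands in for an LCB-derived cost between adjacent preemption points, and Q bounds each non-preemptive region. All names and values are illustrative assumptions, not the dissertation's actual algorithm.

```python
import math

def place_preemption_points(wcet, crpd, Q):
    """Minimize WCET-plus-preemption-overhead on a linear CFG.

    wcet[k]    : WCET of basic block k (k = 0..n-1)
    crpd[i][j] : overhead of running blocks i..j-1 non-preemptively after a
                 preemption point at boundary i, then preempting at boundary j
                 (crpd[i][n] = 0: no preemption after the exit)
    Q          : longest tolerable non-preemptive region

    dp[j] = cheapest overhead for blocks 0..j-1 with a point at boundary j.
    Returns (total WCET incl. overhead, selected interior preemption points);
    the total is inf if some region cannot be kept within Q.
    """
    n = len(wcet)
    prefix = [0] * (n + 1)
    for k in range(n):
        prefix[k + 1] = prefix[k] + wcet[k]
    dp, choice = [math.inf] * (n + 1), [None] * (n + 1)
    dp[0] = 0                                    # boundary 0: task release
    for j in range(1, n + 1):
        for i in range(j):                       # last point before j is i
            if prefix[j] - prefix[i] <= Q and dp[i] + crpd[i][j] < dp[j]:
                dp[j], choice[j] = dp[i] + crpd[i][j], i
    points, j = [], n                            # recover selected boundaries
    while j is not None and j > 0:
        points.append(j)
        j = choice[j]
    return dp[n] + prefix[n], sorted(points[1:])

wcet = [2, 3, 2, 4]
crpd = [[0, 1, 2, 2, 0],                         # toy LCB-style overhead values
        [0, 0, 1, 2, 0],
        [0, 0, 0, 1, 0],
        [0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0]]
print(place_preemption_points(wcet, crpd, Q=6))  # (13, [2]): preempt after block 1
```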

    Fundamental Approaches to Software Engineering

    This open access book constitutes the proceedings of the 23rd International Conference on Fundamental Approaches to Software Engineering, FASE 2020, which took place in Dublin, Ireland, in April 2020, and was held as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2020. The 23 full papers, 1 tool paper, and 6 testing competition papers presented in this volume were carefully reviewed and selected from 81 submissions. The papers cover topics such as requirements engineering, software architectures, specification, software quality, validation, verification of functional and non-functional properties, model-driven development and model transformation, software processes, security, and software evolution.
