
    Applying SMT Solvers to the Test Template Framework

    The Test Template Framework (TTF) is a model-based testing method for the Z notation. In the TTF, test cases are generated from test specifications, which are predicates written in Z. In turn, the Z notation is based on first-order logic with equality and Zermelo-Fraenkel set theory. In this way, a test case is a witness satisfying a formula in that theory. Satisfiability Modulo Theories (SMT) solvers are software tools that decide the satisfiability of arbitrary formulas over a large number of built-in logical theories and their combinations. In this paper, we present the first results of applying two SMT solvers, Yices and CVC3, as the engines to find test cases from TTF test specifications. In doing so, shallow embeddings of a significant portion of the Z notation into the input languages of Yices and CVC3 are provided, given that they do not directly support Zermelo-Fraenkel set theory as defined in Z. Finally, the results of applying these embeddings to a number of test specifications from eight case studies are analysed.
    Comment: In Proceedings MBT 2012, arXiv:1202.582
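
    For illustration, the following is a minimal sketch of the idea of finding a test case as a satisfying witness of a set-theoretic predicate. It uses Z3's Python bindings only as a stand-in solver (the paper targets Yices and CVC3 through their native input languages, not Z3), and the test specification shown is a hypothetical example, not one from the eight case studies.

    # Minimal sketch: a test case as a witness of a set-theoretic predicate.
    # Z3's Python API is an illustrative stand-in for the Yices/CVC3
    # embeddings described in the paper; the specification below is hypothetical.
    from z3 import Array, BoolSort, IntSort, Ints, Not, Select, Solver, sat

    # Shallow embedding: a Z set of integers becomes its characteristic
    # function, i.e. an SMT array from Int to Bool.
    S = Array('S', IntSort(), BoolSort())
    x, y = Ints('x y')

    solver = Solver()
    solver.add(Select(S, x))        # x is a member of S
    solver.add(Not(Select(S, y)))   # y is not a member of S
    solver.add(x < y)

    if solver.check() == sat:
        model = solver.model()
        # The model is the test case: concrete values for x and y plus a
        # finite interpretation of S that satisfies the specification.
        print('test case:', model[x], model[y])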

    Dynamic Protocol Reverse Engineering: A Grammatical Inference Approach

    Round-trip engineering of software from source code and reverse engineering of software from binary files have both been extensively studied, and the state of practice has documented tools and techniques. Forward engineering of protocols has also been extensively studied, and there are firmly established techniques for generating correct protocols. While observation of protocol behavior for performance testing has been studied and techniques established, reverse engineering of protocol control flow from observations of protocol behavior has not received the same level of attention. The state of practice in reverse engineering the control flow of computer network protocols consists mostly of ad hoc approaches. We examine state-of-practice tools and techniques used in three open source projects: Pidgin, Samba, and rdesktop. We examine techniques proposed by computational learning researchers for grammatical inference. We propose to extend the state of the art by inferring protocol control flow using grammatical-inference-inspired techniques to reverse engineer automata representations from captured data flows. We present evidence that grammatical inference is applicable to the problem domain under consideration.
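
    As an illustrative sketch of the usual grammatical-inference starting point, the Python snippet below builds a prefix tree acceptor (PTA) from captured message sequences; state-merging inference algorithms typically generalize such an acceptor into a smaller automaton. The message names and traces are hypothetical and not drawn from Pidgin, Samba, or rdesktop.

    # Minimal sketch: building a prefix tree acceptor (PTA) from observed
    # protocol message sequences. A PTA is a common starting point for
    # state-merging grammatical-inference algorithms; the traces are hypothetical.
    def build_pta(traces):
        """Return (transitions, accepting) for a prefix tree acceptor.

        transitions maps (state, symbol) -> state; accepting is the set of
        states reached at the end of an observed trace.
        """
        transitions = {}
        accepting = set()
        next_state = 1  # state 0 is the root
        for trace in traces:
            state = 0
            for symbol in trace:
                if (state, symbol) not in transitions:
                    transitions[(state, symbol)] = next_state
                    next_state += 1
                state = transitions[(state, symbol)]
            accepting.add(state)
        return transitions, accepting

    # Hypothetical captured flows: each trace is the sequence of message
    # types observed in one session.
    traces = [
        ['HELLO', 'AUTH', 'AUTH_OK', 'DATA', 'BYE'],
        ['HELLO', 'AUTH', 'AUTH_FAIL', 'BYE'],
    ]
    transitions, accepting = build_pta(traces)
    print(len(transitions), 'transitions,', len(accepting), 'accepting states')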

    Image database system for glaucoma diagnosis support

    This master's thesis gives an overview of the standard and advanced eye examination methods used for glaucoma diagnosis in its early stage. Based on this theoretical background, a web-based information system for ophthalmologists with three main aims is implemented. The first aim is the ability to share a particular patient's medical data without sending personal data over the Internet. The second aim is to create a patient account based on a complete eye examination procedure. The last aim is to improve the HRT diagnostic method with an image registration algorithm for the fundus and intensity images and to create a web-based 3D visualization of the optic nerve head. This master's thesis is part of a project based on DAAD co-operation between the Department of Biomedical Engineering, Brno University of Technology, the Eye Clinic in Erlangen, and the Department of Computer Science, Friedrich-Alexander University Erlangen-Nurnberg.
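
    As a hedged illustration of the registration step, the sketch below aligns an intensity image to a fundus photograph using ORB feature matching and a RANSAC-estimated homography in OpenCV. The thesis does not state that this particular algorithm is used; the pipeline and the file names are assumptions for demonstration only.

    # Hedged sketch: registration of an intensity image to a fundus photograph
    # via ORB feature matching and a RANSAC homography. The algorithm choice
    # and the file names are assumptions, not details taken from the thesis.
    import cv2
    import numpy as np

    fundus = cv2.imread('fundus.png', cv2.IMREAD_GRAYSCALE)        # placeholder path
    intensity = cv2.imread('intensity.png', cv2.IMREAD_GRAYSCALE)  # placeholder path

    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(intensity, None)
    kp2, des2 = orb.detectAndCompute(fundus, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:100]

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Warp the intensity image into the fundus coordinate frame so the optic
    # nerve head regions overlap for the subsequent 3D visualization.
    registered = cv2.warpPerspective(intensity, H, fundus.shape[::-1])
    cv2.imwrite('registered.png', registered)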

    A heuristic-based approach to code-smell detection

    Encapsulation and data hiding are central tenets of the object-oriented paradigm. Deciding what data and behaviour to form into a class, and where to draw the line between its public and private details, can make the difference between a class that is an understandable, flexible and reusable abstraction and one which is not. This decision is a difficult one and may easily result in poor encapsulation, which can then have serious implications for a number of system qualities. It is often hard to identify such encapsulation problems within large software systems until they cause a maintenance problem (which is usually too late), and attempting to perform such analysis manually can also be tedious and error-prone. Two of the common encapsulation problems that can arise as a consequence of this decomposition process are data classes and god classes. Typically, these two problems occur together: data classes lack functionality that has been absorbed into an over-complicated and domineering god class. This paper describes the architecture of a tool, developed as a plug-in for the Eclipse IDE, which automatically detects data and god classes. The technique has been evaluated in a controlled study on two large open source systems, comparing the tool's results to similar work by Marinescu, who employs a metrics-based approach to detecting such features. The study provides some valuable insights into the strengths and weaknesses of the two approaches.
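
    To make the heuristic idea concrete, the sketch below applies Marinescu-style, metrics-based rules for god classes and data classes. The metric names and thresholds are illustrative assumptions and are not the exact heuristics implemented by the Eclipse plug-in described above.

    # Hedged sketch of metrics-based god-class and data-class heuristics in
    # the spirit of Marinescu's detection strategies. The thresholds are
    # illustrative assumptions, not the plug-in's exact rules.
    from dataclasses import dataclass

    @dataclass
    class ClassMetrics:
        name: str
        wmc: int     # weighted methods per class (complexity)
        atfd: int    # accesses to foreign data
        tcc: float   # tight class cohesion, in [0, 1]
        noa: int     # number of accessors (getters/setters)
        nom: int     # number of methods

    def is_god_class(m: ClassMetrics) -> bool:
        # A complex, uncohesive class that uses a lot of other classes' data.
        return m.wmc >= 47 and m.atfd > 5 and m.tcc < 0.33

    def is_data_class(m: ClassMetrics) -> bool:
        # A class that mostly exposes data and carries little behaviour.
        return m.nom > 0 and m.noa / m.nom > 0.5 and m.wmc < 10

    suspects = [
        ClassMetrics('OrderManager', wmc=60, atfd=12, tcc=0.2, noa=4, nom=30),
        ClassMetrics('OrderRecord', wmc=5, atfd=0, tcc=0.9, noa=8, nom=10),
    ]
    for m in suspects:
        verdict = ('god class' if is_god_class(m)
                   else 'data class' if is_data_class(m) else 'ok')
        print(m.name, verdict)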

    Code-Switched Urdu ASR for Noisy Telephonic Environment using Data Centric Approach with Hybrid HMM and CNN-TDNN

    Call centers have huge amounts of audio data that can be used to obtain valuable business insights, and transcribing phone calls manually is a tedious task. An effective Automatic Speech Recognition system can accurately transcribe these calls, allowing easy search through call history for specific context and content, automatic call monitoring, and improved QoS through keyword search and sentiment analysis. ASR for call centers requires greater robustness, as telephonic environments are generally noisy. Moreover, there are many low-resourced languages on the verge of extinction that can be preserved with the help of Automatic Speech Recognition technology. Urdu is the 10th most widely spoken language in the world, with 231,295,440 speakers worldwide, yet it remains a resource-constrained language in ASR. Regional call-center conversations operate in the local language, with a mix of English numbers and technical terms, generally causing a "code-switching" problem. Hence, this paper describes an implementation framework for a resource-efficient Automatic Speech Recognition / Speech-to-Text system in a noisy call-center environment using a chain hybrid HMM and CNN-TDNN for code-switched Urdu. Using the hybrid HMM-DNN approach allowed us to exploit the advantages of neural networks with less labelled data. Adding a CNN to the TDNN has been shown to work better in noisy environments, because the CNN's additional frequency dimension captures extra information from noisy speech, thus improving accuracy. We collected data from various open sources and labelled some of the unlabelled data after analysing its general context and content, drawing from the Urdu language as well as from commonly used words of other languages, primarily English, and were able to achieve a WER of 5.2% in both noisy and clean environments, for isolated words or numbers as well as for continuous spontaneous speech.
    Comment: 32 pages, 19 figures, 2 tables, preprint
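
    As a rough illustration of the CNN-TDNN structure, the PyTorch sketch below stacks a 2-D CNN front end (over frequency and time) on top of dilated 1-D convolutions over time. Layer sizes and the number of output states are assumptions; the actual system is a Kaldi-style chain hybrid HMM model rather than this standalone module.

    # Hedged sketch of a CNN-TDNN acoustic model: a 2-D CNN front end over
    # (frequency, time) followed by dilated 1-D convolutions over time.
    # Layer sizes and the output dimension are assumptions; the system in the
    # paper is a Kaldi chain hybrid HMM model, not this PyTorch module.
    import torch
    import torch.nn as nn

    class CNNTDNN(nn.Module):
        def __init__(self, n_mels=40, n_pdfs=2000):
            super().__init__()
            # CNN: captures local spectral patterns, which helps with noisy
            # telephone speech.
            self.cnn = nn.Sequential(
                nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            )
            # TDNN: 1-D convolutions with growing dilation for wider temporal context.
            self.tdnn = nn.Sequential(
                nn.Conv1d(32 * n_mels, 512, kernel_size=3, dilation=1), nn.ReLU(),
                nn.Conv1d(512, 512, kernel_size=3, dilation=2), nn.ReLU(),
                nn.Conv1d(512, 512, kernel_size=3, dilation=3), nn.ReLU(),
            )
            self.output = nn.Conv1d(512, n_pdfs, kernel_size=1)  # per-frame state scores

        def forward(self, feats):             # feats: (batch, 1, n_mels, frames)
            x = self.cnn(feats)               # (batch, 32, n_mels, frames)
            b, c, f, t = x.shape
            x = x.reshape(b, c * f, t)        # stack channels and mel bins
            return self.output(self.tdnn(x))  # (batch, n_pdfs, frames')

    model = CNNTDNN()
    scores = model(torch.randn(2, 1, 40, 200))  # 2 utterances, 200 frames of 40-dim fbank
    print(scores.shape)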

    UpMax: User Partitioning for MaxSAT

    It has been shown that Maximum Satisfiability (MaxSAT) problem instances can be effectively solved by partitioning the set of soft clauses into several disjoint sets. The partitioning methods can be based on clause weights (e.g., stratification) or on graph representations of the formula. Afterwards, a merge procedure is applied to guarantee that an optimal solution is found. This paper proposes a new framework called UpMax that decouples the partitioning procedure from the MaxSAT solving algorithms. As a result, new partitioning procedures can be defined independently of the MaxSAT algorithm to be used. Moreover, this decoupling also allows users who build new MaxSAT formulas to propose partition schemes based on their knowledge of the problem to be solved. We illustrate this approach on several problems and show that partitioning has a large impact on the performance of unsatisfiability-based MaxSAT algorithms.
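
    As a small illustration of one such partitioning scheme, the sketch below stratifies soft clauses into disjoint partitions by weight; a merge-based algorithm would then solve the heaviest stratum first and progressively merge in the lighter ones. The data structures are illustrative and do not follow UpMax's actual input format.

    # Hedged sketch: weight-based stratification of soft clauses into disjoint
    # partitions. The clause representation is illustrative (DIMACS-style
    # signed-integer literals), not UpMax's actual input format.
    from collections import defaultdict

    # Soft clauses as (clause, weight) pairs; hard clauses would be handled
    # separately by the MaxSAT solver.
    soft = [
        ([1, 2], 5),
        ([-1, 3], 5),
        ([2, -3], 2),
        ([-2], 1),
    ]

    def stratify(soft_clauses):
        """Partition soft clauses into disjoint sets, one per distinct weight,
        ordered from the heaviest stratum to the lightest."""
        strata = defaultdict(list)
        for clause, weight in soft_clauses:
            strata[weight].append(clause)
        return [strata[w] for w in sorted(strata, reverse=True)]

    # A merge-based algorithm would solve the heaviest partition first and
    # progressively merge in the lighter partitions, which is what guarantees
    # that the final solution is optimal.
    for i, part in enumerate(stratify(soft)):
        print(f'partition {i}: {part}')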