
    MARF: Modular Audio Recognition Framework (in French)

    The Modular Audio Recognition Framework (MARF), designed in 2002, is an open-source research platform and a collection of components with algorithms for processing voice, sound, speech, text, and natural language (NLP). MARF was created in Java and is organized as extensible modules that facilitate the addition of new algorithms. MARF can be used as a library in an application or as a basis for learning and extension. MARF has also been published in several conference papers containing the scientific details. Detailed documentation and an API reference in javadoc format are available, as the project strives to be well documented. MARF and its applications are released under a BSD license.
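The extensible-module design described above can be sketched as a pipeline of pluggable stages. The sketch below is illustrative only (MARF itself is a Java framework; none of these names come from its actual API): adding a new algorithm amounts to supplying a new stage function.

```python
# Illustrative sketch of a modular recognition pipeline in the spirit of
# MARF's extensible design. All names here are hypothetical, not MARF's API.

from typing import Callable, List

class Pipeline:
    """Chains pluggable stages: preprocessing -> feature extraction -> classification."""
    def __init__(self, preprocess: Callable, extract: Callable, classify: Callable):
        self.preprocess = preprocess
        self.extract = extract
        self.classify = classify

    def run(self, samples: List[float]) -> str:
        return self.classify(self.extract(self.preprocess(samples)))

# Example stages: normalization, a simple energy feature, a threshold classifier.
def normalize(xs):
    peak = max(abs(x) for x in xs) or 1.0
    return [x / peak for x in xs]

def energy(xs):
    return sum(x * x for x in xs) / len(xs)

def threshold(e):
    return "speech" if e > 0.1 else "silence"

pipeline = Pipeline(normalize, energy, threshold)
print(pipeline.run([0.0, 0.5, -0.5, 0.25]))  # -> speech
```

Swapping in a different feature extractor or classifier requires no change to the pipeline itself, which is the property the modular organization is meant to provide.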

    Collection and classification of services and their context

    SOA provides new means for the interoperability of business logic and the flexible integration of independent systems by introducing and promoting Web Services. Since its introduction in the previous decade, it has attracted considerable attention from industry and researchers. However, the novel idea of SOA encounters many problems. One of the initial problems is how service consumers find Web Services. The initial design of SOA proposed a service registry between consumers and providers, but in practice it was not respected or accepted by industry, and service providers are not registering their services. Much SOA research assumes that such a registry exists, yet a repository of services is a prerequisite for that research. The Internet is filled with Web Services, published every day by entities and individuals such as companies, public institutions, universities, and private developers. Because general-purpose search engines index all kinds of information, it is difficult for service consumers to use them to find desired services quickly and to restrict search results to Web Services. Vertical search engines specialized in Web Services have been proposed to address this. Another proposed solution is the notion of Brokerage, which assists service consumers in finding and choosing their desired services. A main requirement of both solutions is a repository of Web Services. In this thesis we employ methodologies to find services and to create this repository. We survey and harvest three main types of service descriptions: WSDL, WADL, and Web pages describing RESTful services. In this effort, we extract data from previously known repositories, query search engines, and use Web crawlers to find these descriptions.
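Harvesting at this scale implies triaging fetched documents into the three description types. The sketch below shows one simplified way such triage could look; the heuristics are illustrative assumptions, not the rules used in the thesis.

```python
# A simplified sketch of triaging harvested documents into the three
# description types surveyed above (WSDL, WADL, RESTful Web pages).
# The detection heuristics are illustrative assumptions only.

def classify_description(doc: str) -> str:
    """Guess the kind of service description from its raw content."""
    lowered = doc.lower()
    if "<definitions" in lowered and "wsdl" in lowered:
        return "WSDL"       # SOAP service described in WSDL XML
    if "<application" in lowered and "wadl" in lowered:
        return "WADL"       # RESTful service described in WADL XML
    if any(verb in doc for verb in ("GET", "POST", "PUT", "DELETE")):
        return "RESTful page"  # plain HTML/text documenting REST endpoints
    return "unknown"

print(classify_description('<definitions xmlns="http://schemas.xmlsoap.org/wsdl/">'))  # -> WSDL
```

A real crawler would parse the XML and validate against the schemas rather than match substrings, but the three-way split of the harvested corpus is the same.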
In order to increase the effectiveness and speed of finding compatible Web Services in the Brokerage when performing service composition or suggesting Web Services for requests, the high-level functionality of a service needs to be determined. Due to the lack of structured support for specifying such functionality, classification of services into a set of abstract categories is necessary. In this thesis we perform automatic classification of the Web Service descriptions we harvest. We employ a wide range of Machine Learning and Signal Processing algorithms and techniques in order to find the highest precision achievable, within the scope of this thesis, for the classification of each type of service description. In addition, we complement our approach by showing the importance and effect of contextual information on the classification of the service descriptions, and we show that it improves precision. To achieve this goal, we gather and store contextual information related to the service descriptions from their sources. Finally, the result of this effort is a repository of classified service descriptions.
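The effect of contextual information on classification can be illustrated with a toy text classifier: appending context harvested alongside a description (e.g. text from the hosting page) can change an ambiguous decision. This is a bare-bones multinomial Naive Bayes on made-up data, not one of the actual algorithms or corpora evaluated in the thesis.

```python
# Toy illustration: classifying service descriptions into abstract categories,
# and how appended contextual information can shift the decision.
# Minimal multinomial Naive Bayes with Laplace smoothing; made-up training data.

import math
from collections import Counter, defaultdict

class NaiveBayes:
    def __init__(self):
        self.word_counts = defaultdict(Counter)  # category -> word frequencies
        self.doc_counts = Counter()              # category -> number of docs
        self.vocab = set()

    def train(self, text: str, category: str) -> None:
        words = text.lower().split()
        self.word_counts[category].update(words)
        self.doc_counts[category] += 1
        self.vocab.update(words)

    def predict(self, text: str) -> str:
        total_docs = sum(self.doc_counts.values())
        best, best_score = None, float("-inf")
        for cat in self.doc_counts:
            # log prior + Laplace-smoothed log likelihood of each word
            score = math.log(self.doc_counts[cat] / total_docs)
            cat_total = sum(self.word_counts[cat].values())
            for w in text.lower().split():
                score += math.log((self.word_counts[cat][w] + 1)
                                  / (cat_total + len(self.vocab)))
            if score > best_score:
                best, best_score = cat, score
        return best

nb = NaiveBayes()
nb.train("currency exchange rate conversion", "finance")
nb.train("stock quote price ticker", "finance")
nb.train("weather forecast temperature humidity", "weather")
nb.train("rain snow forecast wind", "weather")

description = "daily rate service"                        # ambiguous on its own
context = "temperature and humidity readings for cities"  # harvested context
print(nb.predict(description))                   # -> finance
print(nb.predict(description + " " + context))   # -> weather
```

The description alone matches only the word "rate", so the classifier picks finance; with the contextual words appended, the weather evidence dominates, which is the kind of precision gain the contextual-information experiments measure.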

    Intensional Cyberforensics

    This work focuses on the application of intensional logic to cyberforensic analysis; its benefits and difficulties are compared with the finite-state-automata approach. This work extends the use of the intensional programming paradigm to the modeling and implementation of a cyberforensics investigation process with backtracing of event reconstruction, in which evidence is modeled by multidimensional hierarchical contexts, and proofs or disproofs of claims are undertaken in an eductive manner of evaluation. This approach is a practical, context-aware improvement over the finite-state-automata (FSA) approach seen in previous work. As a base implementation language model, we use a new dialect of the Lucid programming language, called Forensic Lucid, and we focus on defining hierarchical contexts based on intensional logic for the distributed evaluation of cyberforensic expressions. We also augment the work with credibility factors surrounding digital evidence and witness accounts, which have not been previously modeled. The Forensic Lucid programming language, used for this intensional cyberforensic analysis, is formally presented through its syntax and operational semantics. In large part, the language is based on its predecessor and codecessor Lucid dialects, such as GIPL, Indexical Lucid, Lucx, Objective Lucid, MARFL, and JOOIP, bound by the underlying intensional programming paradigm.
    Comment: 412 pages, 94 figures, 18 tables, 19 algorithms and listings; PhD thesis; v2 corrects some typos and refs; also available on Spectrum at http://spectrum.library.concordia.ca/977460
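The intensional evaluation model underlying the Lucid family can be sketched in a few lines: an expression's value depends on a multidimensional context (dimension -> tag), evaluation is demand-driven ("eductive"), and computed values are cached in a warehouse. The sketch below illustrates only this general paradigm, with made-up dimensions and data; it is not Forensic Lucid's actual syntax or semantics.

```python
# Minimal sketch of intensional, demand-driven (eductive) evaluation:
# expressions are functions of a context (dimension -> tag mapping),
# and the warehouse caches each (expression, context) result.
# Dimensions and data below are invented for illustration only.

warehouse = {}  # (expr name, frozen context) -> cached value

def at(expr, context, **overrides):
    """Evaluate expr at `context` with some dimensions switched (an @-like operator)."""
    new_ctx = dict(context, **overrides)
    key = (expr.__name__, frozenset(new_ctx.items()))
    if key not in warehouse:           # demand each value only once
        warehouse[key] = expr(new_ctx)
    return warehouse[key]

# An observation stream varying over hypothetical `event` and `credibility` dimensions.
def evidence(ctx):
    observations = {(1, "high"): "file deleted", (2, "high"): "log wiped",
                    (1, "low"): "unverified claim"}
    return observations.get((ctx["event"], ctx["credibility"]), "no evidence")

ctx = {"event": 1, "credibility": "high"}
print(at(evidence, ctx))            # -> file deleted   (value at the current context)
print(at(evidence, ctx, event=2))   # -> log wiped      (same expression, shifted context)
```

Navigating the same expression across context dimensions is what lets event reconstruction backtrace through evidence, and a credibility dimension is one place where the credibility factors described above could attach.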
