    Immersion Within CALL

    The purpose of this research study was to explore the idea of immersion and what constitutes immersion in Computer Assisted Language Learning (CALL). CALL has become increasingly important in the field of Second Language Acquisition (SLA) and continues to grow in usage each year. As a graduate instructor of a basic-level French course, my research focused on the immersion factor of CALL programs. This research was designed to obtain and analyze first-year French students' opinions of a CD-ROM CALL program by asking the following questions: (1) Did the participants feel immersed in the French language while using the CD-ROM? (2) Had the participants visited a French-speaking country, or did they plan on studying in a French-speaking country in the future? (3) Did the participants enjoy using the CD-ROM to learn French? (4) What did the participants like most and least about using the CD-ROM CALL program? The most substantial finding of the study was that a majority of the participants did feel immersed in the French language while using the CALL program. A secondary finding was that many of the likes and dislikes mentioned specifically by the participants coincided with the main advantages and disadvantages of CALL.

    Anomaly-based Correlation of IDS Alarms

    An Intrusion Detection System (IDS) is one of the major techniques for securing information systems and keeping pace with current and potential threats and vulnerabilities in computing systems. The art of detecting intrusions is, however, still far from perfect, and IDSs tend to generate a large number of false alarms. Hence, humans inevitably have to validate these alarms before any action can be taken. As IT infrastructures become larger and more complicated, the number of alarms that need to be reviewed can escalate rapidly, making this task very difficult to manage. The need for an automated correlation and reduction system is therefore evident. In addition, alarm correlation is valuable in providing operators with a more condensed view of potential security issues within the network infrastructure. The thesis comprises a comprehensive evaluation of the problem of false alarms and a proposal for an automated alarm correlation system. A critical analysis of existing alarm correlation systems is presented, along with a description of the need for an enhanced correlation system. The study concludes that whilst a large body of work has been carried out on improving correlation techniques, none of the existing approaches is perfect: they either require extensive domain knowledge from human experts to run effectively, or are unable to provide high-level information about false alerts for future tuning. The overall objective of the research has therefore been to establish an alarm correlation framework and system which enables the administrator to effectively group alerts from the same attack instance and subsequently reduce the volume of false alarms without the need for domain knowledge. The achievement of this aim has comprised the proposal of an attribute-based approach, which is used as a foundation to systematically develop an unsupervised two-stage correlation technique. On this foundation, a novel SOM K-Means Alarm Reduction Tool (SMART) architecture has been modelled as the framework within which a time- and attribute-based aggregation technique is offered. The thesis describes the design and features of the proposed architecture, focusing upon the key components forming the underlying architecture, the alert attributes, and the way they are processed and applied to correlate alerts. The architecture is strengthened by the development of a statistical tool, which offers a means of analysing and comparing results and alerts. The main concepts of the novel architecture are validated through the implementation of a prototype system. A series of experiments were conducted to assess the effectiveness of SMART in reducing false alarms, aiming to prove the viability of implementing the system in a practical environment and to show that the study has provided an appropriate contribution to knowledge in this field.
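    The two-stage, unsupervised SOM-then-K-Means correlation described above can be given a minimal sketch. This is not the SMART implementation itself: the alert attributes, the 10x10 map size, the cluster count, and the use of the MiniSom and scikit-learn libraries are all assumptions made purely for illustration.

    import numpy as np
    from minisom import MiniSom          # pip install minisom
    from sklearn.cluster import KMeans

    # Hypothetical alert records: each row is a numeric encoding of alert
    # attributes (e.g. source address, destination port, signature id, time).
    # Real IDS alerts would need normalisation of categorical fields first.
    rng = np.random.default_rng(0)
    alerts = rng.random((500, 4))

    # Stage 1: train a small self-organising map so that alerts with similar
    # attribute patterns land on neighbouring map units.
    som = MiniSom(10, 10, alerts.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
    som.train_random(alerts, 1000)
    mapped = np.array([som.winner(a) for a in alerts], dtype=float)

    # Stage 2: run K-Means over the map coordinates to group the mapped alerts
    # into a handful of candidate "attack instance" clusters.
    labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(mapped)

    # Alerts sharing a label are reported together, giving the operator a
    # condensed view instead of one line per raw alarm.
    for cluster_id in np.unique(labels):
        print(f"cluster {cluster_id}: {np.sum(labels == cluster_id)} alerts")

    In a real deployment the attribute vectors would come from the IDS alert log, and the per-cluster summaries would be the input to the kind of analysis and comparison tool the abstract mentions.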

    Software for the collaborative editing of the Greek New Testament

    This project was responsible for developing the Virtual Manuscript Room Collaborative Research Environment (VMR CRE), which supports the critical editing workflow from raw data collection, through processing, to publication, within an open, online collaborative framework for the Institut für Neutestamentliche Textforschung (INTF) and its global partners as they edit the Editio Critica Maior (ECM), the paramount critical edition of the Greek New Testament, which analyses over 5,600 Greek witnesses and includes a comprehensive apparatus of chosen manuscripts, weighted by quotations and early translations. Additionally, this project produced the first digital edition of the ECM. This case study of transitioning the workflow at the INTF to an online collaborative research environment seeks to convey successful methods and lessons learned by describing a professional software engineer's foray into the world of academic digital humanities. It compares development roles and practices in the software industry with those of the academic environment, offers insights into how this software engineer found a software team therein, suggests how a fledgling online community can successfully achieve critical mass, provides an outsider's perspective on what a digital critical scholarly edition might be, and hopes to offer useful software, datasets, and a thriving online community for manuscript researchers.

    Netgraph: A Tool for Searching in the Prague Dependency Treebank 2.0

    This thesis brings together three existing sides. First, there was the Prague Dependency Treebank 2.0, one of the most advanced treebanks in the linguistic world. Second, there was a very limited but extremely intuitive search tool, Netgraph 1.0. Third, there were users longing for a simple and intuitive tool powerful enough to search the Prague Dependency Treebank. In the thesis, we study the annotation of the Prague Dependency Treebank 2.0, especially on the tectogrammatical layer, which is by far the most complex layer of the treebank, and assemble a list of requirements for a query language that would allow searching for and studying all linguistic phenomena annotated in the treebank. We propose an extension to the query language of the existing search tool Netgraph 1.0 and show that the extended query language satisfies the list of requirements. We also show how all principal linguistic phenomena annotated in the treebank can be searched for with the query language. The proposed query language has also been implemented: we present the search tool itself and describe the data format used by the tool. An attached CD-ROM contains the installation of the tool.
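    To give a flavour of attribute-based querying over dependency trees, the sketch below matches nodes by attribute constraints and a parent-child relation. It is a conceptual illustration only, not Netgraph's actual query syntax or the PDT 2.0 data format; the node attributes shown (functor, lemma) and the toy tree are simplified assumptions.

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        # A drastically simplified treebank node: a few attributes and a
        # list of dependent nodes.
        attrs: dict
        children: list = field(default_factory=list)

    def matches(node, constraints):
        # A node matches when every required attribute has the required value.
        return all(node.attrs.get(k) == v for k, v in constraints.items())

    def find(node, parent_constraints, child_constraints):
        # Yield (parent, child) pairs where a node and one of its children
        # satisfy the respective attribute constraints.
        for child in node.children:
            if matches(node, parent_constraints) and matches(child, child_constraints):
                yield node, child
            yield from find(child, parent_constraints, child_constraints)

    # Toy tectogrammatical fragment: a predicate governing an actor.
    tree = Node({"functor": "PRED", "lemma": "read"},
                [Node({"functor": "ACT", "lemma": "student"})])

    for parent, child in find(tree, {"functor": "PRED"}, {"functor": "ACT"}):
        print(parent.attrs["lemma"], "->", child.attrs["lemma"])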

    Introductory Computer Forensics

    INTERPOL (the International Criminal Police Organization) built cybercrime programs to keep up with emerging cyber threats, and aims to coordinate and assist international operations for fighting crimes involving computers. Although significant international efforts are being made in dealing with cybercrime and cyber-terrorism, finding effective, cooperative, and collaborative ways to deal with complicated cases that span multiple jurisdictions has proven difficult in practice.

    Art unlimited: an investigation into contemporary digital arts and the free software movement.

    Computing technology has not only significantly shaped many of the contemporary artistic disciplines, it has also given birth to many new and exciting practices. Modest, low-cost hardware has enabled artists to manipulate real-time multimedia data and to coordinate large numbers of hardware devices, whilst high-bandwidth Internet connections have allowed them to communicate and distribute their work rapidly. For this reason, art practices in the digital domain have become highly decentralized. It is therefore not surprising that the rise of free and open source software (FLOSS) has been warmly welcomed and adopted by an increasing number of practitioners. The technical advantages of free software allow them to create works of art with greater freedom and flexibility. Its open and collaborative ideology, on the other hand, further embraces the increasingly autonomous and distributed character of the artistic community. This thesis aims to examine the impact of free and open source software in the context of contemporary digital arts. It will look at the current climate of both digital arts and the FLOSS movement, attempting to rationalize the implications of such a phenomenon. It will also provide concrete examples of ongoing activities in FLOSS digital arts, as evidence and documentation of its development to date. Lastly, the practical work in this research will offer a first-hand insight into developing a FLOSS project within the given context.

    A corpus-driven discourse analysis of transcripts of Hugo Chávez’s television programme ‘Aló Presidente’

    This study proposes a methodology that combines techniques from corpus linguistics with theory from the Discourse-Historical Approach (DHA) to Critical Discourse Analysis (CDA). The methodology is demonstrated using a corpus comprising transcripts of Hugo Chávez's television programme, Aló Presidente, broadcast between January 2002 and June 2007. In this thesis, I identify a number of criticisms of CDA and suggest that corpus linguistics can be used to reduce its principal risks: over- or under-interpretation of the data and the selection of unrepresentative examples. I then present a methodology designed to minimise these effects, based upon the hypothesis that semantic fields are used more frequently in periods when they are topical, and that one can therefore isolate instances which were produced at times of change. I use the Aló Presidente corpus to present a detailed description of three such semantic fields and then adopt the concept of discourse strategies from the DHA to demonstrate how Chávez's framing of the topics changes with time. This leads to a set of conclusions which seek to answer the research question: how is life in Venezuela framed as having changed under Chávez's presidency, by reference to his Aló Presidente television programme, during the period 2002-2007?
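    The frequency-over-time hypothesis behind the methodology can be sketched as follows: count how often lemmas from a semantic field occur in each broadcast period, normalise by period size, and flag periods in which the relative frequency spikes as candidates for closer qualitative reading. The field members, the transcript segmentation, and the threshold below are assumptions for illustration and do not reproduce the thesis's actual tooling.

    from collections import Counter

    # Hypothetical tokenised transcripts keyed by broadcast period.
    corpus = {
        "2002-H1": "petroleo pueblo revolucion petroleo trabajo".split(),
        "2002-H2": "pueblo salud educacion trabajo vivienda".split(),
        "2003-H1": "petroleo huelga petroleo petroleo economia".split(),
    }

    # An illustrative semantic field (oil / economy); in practice the field
    # membership would come from a semantic tagger or manual annotation.
    field_lemmas = {"petroleo", "economia", "huelga"}

    def relative_frequency(tokens, lemmas):
        # Share of the period's tokens that belong to the semantic field.
        counts = Counter(tokens)
        return sum(counts[lemma] for lemma in lemmas) / len(tokens)

    freqs = {period: relative_frequency(toks, field_lemmas)
             for period, toks in corpus.items()}
    mean = sum(freqs.values()) / len(freqs)

    # Flag periods in which the field is markedly more frequent than average,
    # i.e. candidate moments of topical change worth reading in detail.
    for period, f in sorted(freqs.items()):
        marker = "  <-- candidate period" if f > 1.5 * mean else ""
        print(f"{period}: {f:.2f}{marker}")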

    Variable Format: Media Poetics and the Little Database

    This dissertation explores the situation of twentieth-century art and literature becoming digital. Focusing on relatively small online collections, I argue for materially invested readings of works of print, sound, and cinema from within a new media context. With bibliographic attention to the avant-garde legacy of media specificity and the little magazine, I argue that the “films,” “readings,” “magazines,” and “books” indexed on a series of influential websites are marked by meaningful transformations that continue to shape the present through a dramatic reconfiguration of the past. I maintain that the significance of an online version of a work is not only transformed in each instance of use, but that these versions fundamentally change our understanding of each historical work in turn. Here, I offer the analogical coding of these platforms as “little databases,” after the little magazines that served as the vehicle of modernism and the historical avant-garde. Like the study of the full run of a magazine, these databases require a bridge between close and distant reading. Rather than contradicting each other, as is often argued, a combined macro- and microscopic mode of analysis in this instance yields valuable information not readily available through either method in isolation. In both directions, the social networks and technical protocols of database culture inscribe the limits of potential readings. Bridging the material orientation of bibliographic study with the format theory of recent media scholarship, this work constructs a media poetics for reading analog works situated within the windows, consoles, and networks of the twenty-first century.

    The Prime Machine: a user-friendly corpus tool for English language teaching and self-tutoring based on the Lexical Priming theory of language

    This thesis presents the design and evaluation of a new concordancer, The Prime Machine, which has been developed as an English language learning and teaching tool. The software is designed to provide learners with a multitude of examples from corpus texts, together with additional information about the contextual environment in which words and combinations of words tend to occur. The prevailing view of how language operates has been that grammar and lexis are separate systems and that sentences can be constructed merely by choosing any syntactic structure and slotting in vocabulary. Over the last few decades, however, corpus linguistics has presented challenges to this view of language, drawing on evidence which can be found in the patterning of language choices in texts. Nevertheless, despite some reports of success from researchers in this area, only a limited number of teachers and learners of a second language seem to make direct use of corpus software tools. The desire to develop a new corpus tool grew out of professional experience as an English language teacher and manager in China. This thesis begins by introducing some background information about the role of English in international higher education and the language learning context in China, and then goes on to describe the software architecture and the process by which corpus texts are transformed from their raw state into rows of data in a sophisticated database to be accessed by the concordancer. It then introduces innovations, including several aspects of the search screen interface, the concordance line display, and the use of collocation data. The software provides a rich learning platform on which language learners can independently look up and compare similar words, different word forms, different collocations, and the same words across two corpora. Underpinning the design is a view of language which draws on Michael Hoey's theory of Lexical Priming. The software is designed to make it possible to see tendencies of words and phrases which are not usually apparent in either dictionary examples or the output from other concordancing software. The design features are considered from a pedagogical perspective, focusing on English for Academic Purposes and incorporating important software design principles from Computer Aided Language Learning. Through a small evaluation involving undergraduate students, the software has been shown to have great potential as a tool for the writing process. It is believed that The Prime Machine will be a very useful corpus tool which, while simple to operate, provides a wealth of information for English language teaching and self-tutoring.
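    At the heart of any concordancer of this kind is the keyword-in-context (KWIC) lookup: indexing every token position in a corpus and returning each hit with a window of surrounding words. The sketch below is a toy illustration of that idea, not The Prime Machine's database-backed design; the sample text, window size, and tokenisation are assumptions.

    import re
    from collections import defaultdict

    # Toy corpus; the real tool stores pre-processed rows in a database.
    text = ("The results of the study suggest that the method is effective. "
            "Further results are reported in the appendix, and the results "
            "confirm the overall trend.")

    tokens = re.findall(r"[A-Za-z']+", text.lower())

    # Index: word -> list of token positions, so look-ups need not rescan the text.
    index = defaultdict(list)
    for pos, tok in enumerate(tokens):
        index[tok].append(pos)

    def kwic(word, window=4):
        # Return concordance lines: the node word with `window` tokens of
        # left and right context, aligned for easy scanning.
        lines = []
        for pos in index[word.lower()]:
            left = " ".join(tokens[max(0, pos - window):pos])
            right = " ".join(tokens[pos + 1:pos + 1 + window])
            lines.append(f"{left:>30} | {word.upper():^9} | {right}")
        return lines

    for line in kwic("results"):
        print(line)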