
    Semi-automated creation of converged iTV services: From Macromedia Director simulations to services ready for broadcast

    While sound and video may capture viewers’ attention, interaction can captivate them. This was not possible before the advent of Digital Television. In fact, what lies at the heart of the Digital Television revolution is this new type of interactive content, offered in the form of interactive Television (iTV) services. On top of that, the new world of converged networks has created demand for a new type of converged service on a range of mobile terminals (Tablet PCs, PDAs and mobile phones). This paper presents a new approach to service creation that allows simulations and rapid prototypes created in the accessible desktop multimedia authoring package Macromedia Director to be semi-automatically translated into services ready for broadcast. This is achieved by a series of tools that de-skill and speed up the process of creating digital TV user interfaces (UI) and applications for mobile terminals. The benefits of rapid prototyping are essential for the production of these new types of services, and are therefore discussed in the first section of this paper. The following sections present an overview of the operation of the content, service creation and management sub-systems, illustrating why these tools form an important and integral part of a system responsible for creating, delivering and managing converged broadcast and telecommunications services. The next section examines a number of metadata languages that are candidates for describing the iTV service user interface, together with the schema language adopted in this project. A detailed description of the operation of the two tools offers insight into how they can be used to de-skill and speed up the process of creating digital TV user interfaces and applications for mobile terminals. Finally, representative broadcast-oriented and telecommunications-oriented converged service components are introduced, demonstrating how these tools have been used to generate different types of services.
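
    The paper's adopted schema language is not reproduced in the abstract, but a minimal Python sketch of the general idea, an XML description of one iTV screen that tools could generate from a prototype, may make it concrete. Every element and attribute name below is hypothetical, not taken from the project's schema.

        # Toy generator for an XML UI descriptor; names are illustrative only.
        import xml.etree.ElementTree as ET

        def build_service_ui(title, buttons):
            """Emit a small XML descriptor for one iTV screen."""
            service = ET.Element("service", name=title)
            screen = ET.SubElement(service, "screen", id="main")
            for label, target in buttons:
                ET.SubElement(screen, "button", label=label, navigatesTo=target)
            return ET.tostring(service, encoding="unicode")

        print(build_service_ui("news", [("Headlines", "headlines"), ("Weather", "weather")]))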

    Defining and Evaluating Test Suite Consolidation for Event Sequence-based Test Cases

    This research presents a new test suite consolidation technique, called CONTEST, for automated GUI testing. A new probabilistic model of the GUI is developed to allow direct application of CONTEST. Multiple existing test suites are used to populate the model and compute probabilities based on the observed event sequences. These probabilities are used to generate a new test suite that consolidates the original ones. A new test suite similarity metric, called CONTeSSi(n), is introduced which compares multiple event sequence-based test suites using relative event positions. Results of empirical studies showed that CONTEST yields a test suite that achieves better fault detection and code coverage than the original suites, and that the CONTeSSi(n) metric is a better indicator of the similarity between sequence-based test suites than existing metrics.
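
    As a rough illustration of the consolidation idea, the sketch below builds first-order event-transition probabilities from existing suites and samples a new sequence from them. This is a toy model, not the paper's CONTEST algorithm, and the event names are made up.

        # Toy sketch: estimate event-transition probabilities from test suites,
        # then sample a consolidated test case from the resulting model.
        import random
        from collections import defaultdict

        def build_model(suites):
            counts = defaultdict(lambda: defaultdict(int))
            for suite in suites:
                for case in suite:
                    for a, b in zip(case, case[1:]):
                        counts[a][b] += 1
            # Normalize counts into transition probabilities.
            return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
                    for a, nxt in counts.items()}

        def sample_case(model, start, length, seed=0):
            rng = random.Random(seed)
            seq = [start]
            while len(seq) < length and seq[-1] in model:
                nxt = model[seq[-1]]
                seq.append(rng.choices(list(nxt), weights=list(nxt.values()))[0])
            return seq

        suites = [[["open", "edit", "save"], ["open", "close"]],
                  [["open", "edit", "edit", "save"]]]
        print(sample_case(build_model(suites), "open", 4))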

    Learning from Analysis of Japanese EFL Texts

    Japan has a long tradition of teaching English as a foreign language (EFL). A common feature of EFL courses is reliance on specific textbooks as a basis for graded teaching, and periods in Japanese EFL history are marked by the introduction of different textbook series. These sets of textbooks share the common goal of taking students from beginners through to able English language users, so one would expect to find common characteristics across such series. As part of an ongoing research programme in which Japanese EFL textbooks from different historical periods are compared and contrasted, we have recently focussed our efforts on using textual analysis tools to highlight distinctive characteristics of such textbooks. The present paper introduces one such analysis tool and describes some of the results from its application to three textbook series from distinct periods in Japanese EFL history. In so doing, we aim to encourage the use of textual analysis and seek to expose salient features of EFL texts which would likely remain hidden without such analytical techniques.
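
    A small sketch of the kind of corpus statistic such analysis tools typically report: vocabulary size and type/token ratio per textbook series. The sample texts below are placeholders, not the actual corpora, and the specific measures are assumptions about what the tool computes.

        # Toy comparison of vocabulary statistics across textbook series.
        import re

        def vocab_stats(text):
            tokens = re.findall(r"[a-z']+", text.lower())
            types = set(tokens)
            return len(tokens), len(types), len(types) / len(tokens)

        series = {
            "series_a": "This is a pen. That is a dog.",
            "series_b": "Yesterday I went shopping with my mother and bought fruit.",
        }
        for name, text in series.items():
            n_tokens, n_types, ttr = vocab_stats(text)
            print(f"{name}: {n_tokens} tokens, {n_types} types, TTR={ttr:.2f}")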

    Automated GUI performance testing

    A significant body of prior work has devised approaches for automating the functional testing of interactive applications. However, little work exists for automatically testing their performance. Performance testing imposes additional requirements upon GUI test automation tools: the tools have to be able to replay complex interactive sessions, and they have to avoid perturbing the application's performance. We study the feasibility of using five Java GUI capture and replay tools for GUI performance test automation. Besides confirming the severity of the previously known GUI element identification problem, we also describe a related problem, the temporal synchronization problem, which is of increasing importance for GUI applications that use timer-driven activity. We find that most of the tools we study have severe limitations when used for recording and replaying realistic sessions of real-world Java applications, and that all of them suffer from the temporal synchronization problem. However, we find that the most reliable tool, Pounder, causes only limited perturbation and thus can be used to automate performance testing. Based on an investigation of Pounder's approach, we further improve its robustness and reduce its perturbation. Finally, we demonstrate in a set of case studies that the conclusions about perceptible performance drawn from manual tests still hold when using automated tests driven by Pounder. Besides the significance of our findings to GUI performance testing, the results are also relevant to capture and replay-based functional GUI test automation approaches.
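
    To illustrate the temporal synchronization problem in the abstract's own terms: a naive replayer fires the next recorded event on a fixed schedule, while a safer one waits until the application has settled. The sketch below is a simplified Python illustration, not how Pounder or the other Java tools actually work; the is_idle() hook is hypothetical, since real tools must infer idleness from the GUI toolkit.

        # Sketch of a replayer that synchronizes on application quiescence
        # instead of fixed delays, to cope with timer-driven activity.
        import time

        def replay(events, dispatch, is_idle, timeout=5.0):
            for event in events:
                deadline = time.monotonic() + timeout
                # Wait out timer-driven activity so the event hits a stable state.
                while not is_idle():
                    if time.monotonic() > deadline:
                        raise RuntimeError(f"app never became idle before {event!r}")
                    time.sleep(0.01)
                dispatch(event)

        replay(["click:save"], dispatch=print, is_idle=lambda: True)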

    A Methodology for Engineering Collaborative and ad-hoc Mobile Applications using SyD Middleware

    Today’s web applications are more collaborative and utilize standard and ubiquitous Internet protocols. We have earlier developed the System on Mobile Devices (SyD) middleware to rapidly develop and deploy collaborative applications over heterogeneous and possibly mobile devices hosting web objects. In this paper, we present a software engineering methodology for developing SyD-enabled web applications and illustrate it through a case study on two representative applications: (i) a calendar of meetings application, which is a collaborative application, and (ii) a travel application, which is an ad-hoc collaborative application. SyD-enabled web objects allow us to create a collaborative application rapidly with limited coding effort. In this case study, the modular software architecture allowed us to hide the inherent heterogeneity among devices, data stores, and networks by presenting a uniform and persistent object view of mobile objects interacting through XML/SOAP requests and responses. The performance results we obtained show that the application scales well as we increase the group size and adapts well within the constraints of mobile devices.
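
    A minimal sketch of the uniform object view described above: a web object that accepts an XML request naming a method and returns an XML response, hiding the device behind it. This is an assumption-laden toy, not the SyD API; the message format and class below are invented for illustration.

        # Toy web object dispatching method calls carried in XML requests.
        import xml.etree.ElementTree as ET

        class CalendarObject:
            def __init__(self):
                self.meetings = []

            def add_meeting(self, when):
                self.meetings.append(when)
                return "ok"

            def handle(self, request_xml):
                call = ET.fromstring(request_xml)
                args = [a.text for a in call]
                result = getattr(self, call.get("method"))(*args)
                return f"<response>{result}</response>"

        obj = CalendarObject()
        print(obj.handle('<call method="add_meeting"><arg>2pm Friday</arg></call>'))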

    Layered evaluation of interactive adaptive systems : framework and formative methods

    Peer-reviewed postprint.

    JSB Composability and Web Services Interoperability Via Extensible Modeling & Simulation Framework (XMSF), Model Driven Architecture (MDA), Component Repositories, and Web-based Visualization

    Study report prepared for the U.S. Air Force, Joint Synthetic Battlespace Analysis of Technical Approaches (ATA) Studies & Prototyping. Overview: This paper summarizes research work conducted by organizations concerned with interoperable distributed information technology (IT) applications, in particular the Naval Postgraduate School (NPS) and Old Dominion University (ODU). Although the application focus is distributed modeling & simulation (M&S), the results and findings are in general easily applicable to other distributed concepts as well, in particular the support of operations by M&S applications, such as distributed mission operations. The core idea of this work is to show the necessity of applying open standards for component description, implementation, and integration, accompanied by aligned management processes and procedures, to enable continuous interoperability for legacy and new M&S components of the live, virtual, and constructive domain within the USAF Joint Synthetic Battlespace (JSB). JSB will be a common integration framework capable of supporting future emerging simulation needs ranging from training and battlefield rehearsal to research, system development and acquisition, in alignment with other operational requirements such as integration of command and control, support of operations, and integration of training ranges comprising real systems. To this end, the study describes multiple complementary Integrated Architecture Framework approaches and shows how the various parts must be orchestrated to support the vision of JSB effectively and efficiently. Topics of direct relevance include Web Services via the Extensible Modeling & Simulation Framework (XMSF), the Object Management Group (OMG)'s Model Driven Architecture (MDA), XML-based resource repositories, and Web-based X3D visualization. To this end, the report shows how JSB can (1) utilize Web Services throughout all components via XMSF methodologies, (2) compose diverse system visualizations using Web-based X3D graphics, (3) benefit from distributed modeling methods using MDA, and (4) best employ resource repositories for broad and consistent composability. Furthermore, the report recommends the establishment of management organizations responsible for the necessary alignment of management processes and procedures within the JSB as well as with neighboring domains. Continuous interoperability cannot be accomplished by technical standards alone. The application of technical standards targets the implementation level of the system of systems, which results in an interoperable solution valid only for the actual implementation. To ensure continuity, the influence of updates, upgrades and the introduction of components on the system of systems must be captured in the project management procedures of the participating systems. Finally, the report proposes an illustrative set of proof-of-capability demonstration prototypes and a five-year technical/institutional transformation plan. All key references are available online at http://www.movesinstitute.org/xmsf/xmsf.html (unless explicitly stated otherwise).

    Design of risk assessment methodology for IT/OT systems : Employment of online security catalogues in the risk assessment process

    The revolution brought about with the transition from Industry 1.0 to 4.0 has expanded cyber threats from Information Technology (IT) to Operational Technology (OT) systems. However, unlike IT systems, identifying the relevant threats in OT is more complex, as penetration testing applications highly restrict OT availability. The complexity is enhanced by the significant amount of information available in online security catalogues, like the Common Weakness Enumeration (CWE), Common Vulnerabilities and Exposures (CVE), and Common Attack Pattern Enumeration and Classification (CAPEC), and by the incomplete organisation of their relationships. These issues hinder the identification of relevant threats during risk assessment of OT systems. In this thesis, a methodology is proposed to reduce the aforementioned complexities and improve relationships among online security catalogues to identify the cybersecurity risk of IT/OT systems. The weaknesses, vulnerabilities and attack patterns stored in the online catalogues are extracted and categorised by mapping their potential mitigations to their security requirements, which are introduced in security standards that the system should comply with, like ISA/IEC 62443. The system's assets are connected to the potential threats through the security requirements, which, combined with the relationships established among the catalogues, offer the basis for graphical representation of the results by employing tree-shaped graphical models. The methodology is tested on the components of an Information and Communication Technology system, whose results verify the simplification of the threat identification process but highlight the need for an in-depth understanding of the system. Hence, the methodology offers a significant basis on which further work can be applied to standardise the risk assessment process of IT/OT systems.
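
    A toy sketch of the mapping idea: assets are tied to security requirements (for instance from ISA/IEC 62443), and requirements are tied to catalogue entries whose mitigations satisfy them, yielding a tree-shaped view per asset. The specific requirement labels, asset name, and entry pairings below are illustrative, not the thesis's actual mapping.

        # Illustrative asset -> requirement -> catalogue-entry mapping.
        requirement_to_entries = {
            "SR 1.1 user authentication": ["CWE-287", "CAPEC-16"],
            "SR 3.1 communication integrity": ["CWE-319", "CAPEC-94"],
        }
        asset_to_requirements = {
            "PLC gateway": ["SR 1.1 user authentication",
                            "SR 3.1 communication integrity"],
        }

        def threat_tree(asset):
            """Print a tree-shaped view of the threats reachable from one asset."""
            print(asset)
            for req in asset_to_requirements.get(asset, []):
                print(f"  {req}")
                for entry in requirement_to_entries.get(req, []):
                    print(f"    {entry}")

        threat_tree("PLC gateway")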

    Understanding the performance of interactive applications

    Many, if not most, computer systems are used by human users. The performance of such interactive systems ultimately affects those users. Thus, when measuring, understanding, and improving system performance, it makes sense to consider the human user's perspective. Essentially, the performance of interactive applications is determined by the perceptible lag in handling user requests. So, when characterizing the runtime of an interactive application, we need a new approach that focuses on the perceptible lags rather than on overall and general performance characteristics. Such a new characterization approach should enable a new way to profile and improve the performance of interactive applications. Imagine a way that would seek out these perceptible lags and then investigate their causes. Performance analysts could simply optimize the responsible parts of the software, thus eliminating perceptible lag for interactive applications. Unfortunately, existing profiling approaches either incur significant overhead that makes them impractical for an interactive scenario, or they lack the ability to provide insight into the causes of long latencies. An effective approach for interactive applications has to fulfill several requirements, such as an accurate view of the causes of performance problems and insignificant perturbation of the interactive application. We propose a new profiling approach that helps developers to understand and improve the perceptible performance of interactive applications and satisfies the above needs.
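
    A minimal sketch of the lag-oriented view proposed here: time each user-request handler and record only episodes above a perceptibility threshold. The 100 ms threshold is a common rule of thumb, used as an assumption; the abstract does not specify the paper's threshold or mechanism.

        # Toy lag profiler: record only handler runs a user could perceive.
        import time

        PERCEPTIBLE_MS = 100
        perceptible_lags = []

        def handle_with_lag_check(name, handler, *args):
            start = time.perf_counter()
            result = handler(*args)
            lag_ms = (time.perf_counter() - start) * 1000
            if lag_ms > PERCEPTIBLE_MS:
                # A real profiler would also capture a call stack to explain the lag.
                perceptible_lags.append((name, lag_ms))
            return result

        handle_with_lag_check("open-file", time.sleep, 0.15)  # simulated slow handler
        print(perceptible_lags)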

    Data Portraits and Intermediary Topics: Encouraging Exploration of Politically Diverse Profiles

    In micro-blogging platforms, people connect and interact with others. However, due to cognitive biases, they tend to interact with like-minded people and read only agreeable information. Many efforts to make people connect with those who think differently have not worked well. In this paper we hypothesize, first, that previous approaches have not worked because they have been direct -- they have tried to explicitly connect people with those holding opposing views on sensitive issues; and second, that neither recommendation nor presentation of information is enough by itself to encourage behavioral change. We propose a platform that combines a recommender algorithm with a visualization-based user interface for exploring recommendations. It recommends politically diverse profiles in terms of distance between latent topics, and displays those recommendations in a visual representation of each user's personal content. We performed an "in the wild" evaluation of this platform and found that people explored more recommendations when using a biased algorithm instead of ours. In line with our hypothesis, we also found that the mixture of our recommender algorithm and our user interface allowed politically interested users to exhibit an unbiased exploration of the recommended profiles. Finally, our results contribute insights on two aspects: first, which individual differences are important when designing platforms aimed at behavioral change; and second, which algorithms and user interfaces should be combined to help users avoid the cognitive mechanisms that lead to biased behavior. Comment: 12 pages, 7 figures. To be presented at ACM Intelligent User Interfaces 201
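
    A toy sketch of distance-based diversification over latent topic vectors: rank candidate profiles by how close their topic-space distance from the user is to an intermediate target, preferring neither near-duplicates nor extremes. The vectors, names, and target value are made up; the paper's recommender is more involved than this.

        # Toy "intermediary distance" ranking over latent topic vectors.
        import math

        def distance(u, v):
            return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

        def intermediary(user_vec, candidates, target=0.5):
            # Score each candidate by how close its distance is to the target.
            return sorted(candidates,
                          key=lambda c: abs(distance(user_vec, c[1]) - target))

        user = [0.8, 0.1, 0.1]
        profiles = [("alice", [0.7, 0.2, 0.1]),
                    ("bob", [0.1, 0.1, 0.8]),
                    ("eve", [0.4, 0.3, 0.3])]
        print(intermediary(user, profiles)[0][0])  # neither nearest nor farthest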