335 research outputs found

    Grid desktop computing for constructive battlefield simulation

    Gaming technology is a state-of-the-art tool for military training, not only in low-level simulations such as flight training, but also for strategic and tactical training. Users of these technologies demand increasingly realistic representations of the real world, a demand that strains both hardware and software capabilities and makes it almost impossible to keep up with the requirements. Many optimizations have been applied to simulation algorithms to meet these needs, but no definitive solution has yet been achieved. The question that naturally arises is: does a generic, global solution exist to the problem of computational power requirements growing faster than the available capacity? This paper presents the problem by describing a real situation, analyzing the Batalla Virtual3 case study and, in answering the motivating question, proposes a software architecture that employs a grid desktop computing (GDC) framework to empower constructive simulation systems, yielding an adaptive hardware infrastructure. Additionally, because constructive simulation scenarios do not fully fit the GDC frameworks available on the market, the solution recommends suitable modifications to those frameworks. Presented at the IX Workshop Procesamiento Distribuido y Paralelo (WPDP). Red de Universidades con Carreras en Informática (RedUNCI).
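    A minimal sketch of the kind of master/worker decomposition a grid desktop computing framework enables, assuming a generic scatter/gather model rather than the Batalla Virtual architecture itself; the names (WorkUnit, simulate_chunk, run_tick) and the toy dynamics are illustrative only:

```python
# Hypothetical master/worker tick for a constructive simulation distributed over
# "desktop nodes" (stand-ins here are local processes; a GDC framework would
# dispatch to remote idle desktops instead). Names and dynamics are illustrative.
from concurrent.futures import ProcessPoolExecutor
from dataclasses import dataclass
from typing import List

@dataclass
class WorkUnit:
    unit_id: int
    entities: List[dict]          # e.g. positions and velocities of simulated units

def simulate_chunk(unit: WorkUnit, dt: float) -> WorkUnit:
    """Advance every entity in one work unit by a single time step (toy dynamics)."""
    for e in unit.entities:
        e["x"] += e["vx"] * dt
        e["y"] += e["vy"] * dt
    return unit

def run_tick(units: List[WorkUnit], dt: float = 1.0) -> List[WorkUnit]:
    """Scatter work units to available nodes, gather results, return the new state."""
    with ProcessPoolExecutor() as pool:
        futures = [pool.submit(simulate_chunk, u, dt) for u in units]
        return [f.result() for f in futures]

if __name__ == "__main__":
    units = [WorkUnit(i, [{"x": 0.0, "y": 0.0, "vx": 1.0, "vy": 0.5}]) for i in range(4)]
    print(run_tick(units)[0].entities)
```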

    Distributed Interactive Simulation Baseline Study: Phase 1-FY96


    Delivery of Personalized and Adaptive Content to Mobile Devices: A Framework and Enabling Technology

    Many innovative wireless applications that aim to provide mobile information access are emerging. Since people have different information needs and preferences, one of the challenges for mobile information systems is to take advantage of the convenience of handheld devices and provide personalized information to the right person in a preferred format. However, the unique features of wireless networks and mobile devices pose challenges to personalized mobile content delivery. This paper proposes a generic framework for delivering personalized and adaptive content to mobile users. It introduces a variety of enabling technologies and highlights important issues in this area. The framework can be applied to many applications such as mobile commerce and context-aware mobile services.
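    A minimal sketch of the adaptation step such a framework might perform; the device-profile fields, preference fields, and format choices are assumptions for illustration, not the paper's API:

```python
# Illustrative content-adaptation step: choose a delivery format from a device
# profile and user preferences. All field names are assumptions for this sketch.
from dataclasses import dataclass

@dataclass
class DeviceProfile:
    screen_width: int        # pixels
    supports_html: bool
    bandwidth_kbps: int

@dataclass
class UserPreferences:
    language: str
    wants_images: bool

def adapt_content(article: dict, device: DeviceProfile, prefs: UserPreferences) -> dict:
    """Return a personalized, device-adapted version of one content item."""
    # Low-bandwidth devices get the summary; others get the full body.
    body = article["summary"] if device.bandwidth_kbps < 128 else article["body"]
    return {
        "title": article["title"],
        "body": body,
        "format": "xhtml" if device.supports_html else "plain-text",
        "include_images": prefs.wants_images and device.screen_width >= 320,
        "language": prefs.language,   # a real system would pick a translated variant here
    }

item = {"title": "Market update", "summary": "Stocks up.", "body": "Stocks rose broadly today..."}
print(adapt_content(item, DeviceProfile(240, False, 64), UserPreferences("en", True)))
```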

    An improved method for text summarization using lexical chains

    This work is directed toward the creation of a system for automatically summarizing documents by extracting selected sentences. Several heuristics, including position, cue words, and title words, are used in conjunction with lexical chain information to create a salience function that is used to rank sentences for extraction. Compiler technology, including the Flex and Bison tools, is used to create the AutoExtract summarizer that extracts and combines this information from the raw text. The WordNet database is used for the creation of the lexical chains. The AutoExtract summarizer performed better than the Microsoft Word 97 AutoSummarize tool and the Sinope commercial summarizer in tests against ideal extracts and in tests judged by humans.
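    A toy salience function in the spirit of the heuristics named above (position, cue words, title words, and lexical chains); the weights, cue-word list, and scoring details are illustrative assumptions, not the thesis's actual values:

```python
# Toy salience score combining sentence position, cue words, title-word overlap,
# and a lexical-chain contribution. Weights and the cue-word list are assumptions.
CUE_WORDS = {"significant", "in conclusion", "therefore", "results show"}

def salience(sentence, index, total, title_words, chain_scores,
             w_pos=1.0, w_cue=1.0, w_title=1.0, w_chain=2.0):
    words = {w.strip(".,;:").lower() for w in sentence.split()}
    pos_score = 1.0 - index / max(total - 1, 1)               # earlier sentences rank higher
    cue_score = sum(1 for c in CUE_WORDS if c in sentence.lower())
    title_score = len(words & title_words)                    # overlap with title words
    chain_score = sum(chain_scores.get(w, 0.0) for w in words)  # lexical-chain strength
    return w_pos * pos_score + w_cue * cue_score + w_title * title_score + w_chain * chain_score

def extract(sentences, title_words, chain_scores, k=3):
    """Rank sentences by salience and return the top k in document order."""
    ranked = sorted(range(len(sentences)), reverse=True,
                    key=lambda i: salience(sentences[i], i, len(sentences),
                                           title_words, chain_scores))
    return [sentences[i] for i in sorted(ranked[:k])]
```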

    AN ENERGY EFFICIENT CLUSTER-HEAD FORMATION AND MEDIUM ACCESS TECHNIQUE IN MULTI-HOP WBAN

    In the present era, the Wireless Body Area Network (WBAN) has emerged as one of the most desired healthcare technologies. Along with healthcare, its application areas include sports, entertainment, and the battlefield. Frequent posture and position changes of the wearer alter the node connectivity of the associated WBAN. To cope with this, the cluster heads should be changed and adjusted according to the distances between the sensor nodes. The cluster head must also be accessible to all neighbouring nodes so that each node can transfer its data packets to it, which ultimately increases the reliability of the WBAN. Energy efficiency is another key requirement in WBAN for increasing network lifetime, and the selection of the cluster head plays a crucial role in improving it. This paper presents an energy-efficient, integrated cluster formation and cluster-head selection method in which the cluster head is selected dynamically to achieve high fault tolerance. The work is relevant to multi-hop WBAN environments, since a cluster-based topology involves at least two-hop communication between a sensor node and the coordinator node. In the proposed technique, the cluster head selects the frames with the least interference. The main beneficial features of the technique are energy efficiency without any data discrimination, the use of probabilistic inter-cluster interference as a constraint in cluster creation to avoid collisions, the elimination of hard clusters, and the incorporation of a dynamic channel allocation scheme in CH selection for efficient bandwidth utilization and reduced adverse effects of clustering.
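    A simplified sketch of dynamic cluster-head selection that trades off residual energy against mean distance to neighbours; the scoring formula is an assumption for illustration and omits the paper's interference and channel-allocation criteria:

```python
# Simplified dynamic cluster-head selection for one WBAN cluster: favour nodes
# with high residual energy that are, on average, close to every neighbour.
# The scoring formula is an illustrative assumption, not the paper's method.
import math

def select_cluster_head(nodes):
    """nodes: list of dicts with 'id', 'pos' (x, y) in metres, 'energy' in joules."""
    def score(n):
        others = [m for m in nodes if m["id"] != n["id"]]
        mean_d = sum(math.dist(n["pos"], m["pos"]) for m in others) / max(len(others), 1)
        return n["energy"] / (1.0 + mean_d)   # trade off energy against reachability
    return max(nodes, key=score)

# Re-running the selection after each posture change lets the cluster head adapt
# as inter-node distances shift.
body_nodes = [{"id": 1, "pos": (0.0, 0.0), "energy": 4.8},
              {"id": 2, "pos": (0.3, 0.1), "energy": 5.0},
              {"id": 3, "pos": (0.6, 0.4), "energy": 3.9}]
print(select_cluster_head(body_nodes)["id"])
```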

    Augmented Reality Model for Pre-School Learning

    The Science subject is very important for creating scientific knowledge among students. In Malaysia, the Science Curriculum is normally implemented via a conventional approach. However, this approach does not attract students’ interest in exploring further knowledge, and students acquire only basic knowledge without being able to visualize the subject matter. This study therefore aims to apply Augmented Reality (AR) technology in the teaching and learning of the Basic Science subject to overcome these issues. AR is the augmentation of the real world through the addition of three-dimensional (3D) virtual objects, and it has been shown to be an effective method of delivering lessons to students compared with the conventional method. This study applied AR to the preschool Basic Science subject, focusing on the human body’s muscular system. It adapted AR with the Experiential Learning Model (ELM) theory to construct the requirement model of Augmented Reality for Learning in Muscular System (ARMS). The proposed model consists of three main components: i) Requirement to Implement AR in a Classroom (R-IARC), ii) High-Level Prototyping (HLP), and iii) Experiential Learning Model (ELM). The methodology involved five main phases: i) theoretical study, ii) preliminary study, iii) requirement model construction, iv) ARMS development, and v) model evaluation by users and experts respectively. The requirements of the proposed model were collected using multiple fact-finding techniques, namely interviews, observation, and document reviews. The proposed model was validated using a prototyping approach, and the prototype was evaluated through expert reviews and an end-user acceptance study. The results showed that ARMS is highly effective for the teaching and learning of the Basic Science subject because it assists in explaining difficult topics. The integration of AR technology in teaching and learning also creates an enjoyable environment, supported by the visualization of 3D virtual objects. As a result, the students were able to understand and recognize the functions, health, and diseases of the muscular system through ARMS. The study also found that the implementation of ARMS was able to increase the students’ cognitive development and enhance their learning ability.

    Knowledge visualizations: a tool to achieve optimized operational decision making and data integration

    The overabundance of data created by modern information systems (IS) has led to a breakdown in cognitive decision-making. Without authoritative source data, commanders’ decision-making processes are hindered as they attempt to paint an accurate shared operational picture (SOP). Further impeding the decision-making process is the lack of proper interface interaction to provide a visualization that aids in extracting the most relevant and accurate data. Utilizing a decision support system (DSS) to present visualizations based on OLAP-cube-integrated data allows decision-makers to rapidly glean information and build their situation awareness (SA), yielding a competitive advantage to the organization whether in garrison or in combat. Additionally, OLAP cube data integration enables analysis of an organization’s data flows, which is used to identify the critical path of data throughout the organization. Linking a decision-maker to the authoritative data along this critical path eliminates the many decision layers in a hierarchical command structure that can introduce latency or error into the decision-making process. Furthermore, the organization gains an integrated SOP from which to rapidly build SA and make effective and efficient decisions. http://archive.org/details/knowledgevisuali1094545877. Outstanding Thesis. Major, United States Marine Corps; Captain, United States Marine Corps. Approved for public release; distribution is unlimited.
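    A stand-in for the kind of OLAP-cube slice that would feed such a visualization, sketched with a pandas pivot table; the dimension names, measure, and figures are assumptions:

```python
# Stand-in for an OLAP-cube slice feeding a decision-support visualization:
# aggregate a small fact table along two dimensions with pandas. The column
# names and figures are invented for the sketch.
import pandas as pd

facts = pd.DataFrame({
    "unit":      ["1st Bn", "1st Bn", "2nd Bn", "2nd Bn"],
    "report":    ["supply", "personnel", "supply", "personnel"],
    "readiness": [0.92, 0.85, 0.78, 0.88],
})

# "Slice" of the cube: mean readiness by unit (rows) and report type (columns).
cube_slice = facts.pivot_table(index="unit", columns="report",
                               values="readiness", aggfunc="mean")
print(cube_slice)   # a DSS front end would render this table as a visualization
```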

    Artificial Intelligence in the Context of Human Consciousness

    Artificial intelligence (AI) can be defined as the ability of a machine to learn and make decisions based on acquired information. AI’s development has incited rampant public speculation regarding the singularity theory: a futuristic phase in which intelligent machines are capable of creating increasingly intelligent systems. Its implications, combined with the close relationship between humanity and its machines, make understanding both natural and artificial intelligence imperative. Researchers continue to discover the natural processes responsible for essential human skills such as decision-making, understanding language, and performing multiple processes simultaneously. Artificial intelligence attempts to simulate these functions through techniques such as artificial neural networks, Markov Decision Processes, Human Language Technology, and Multi-Agent Systems, which rely upon a combination of mathematical models and hardware.
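    As an illustration of one technique named above, a tiny Markov Decision Process solved by value iteration; the states, actions, rewards, and discount factor are invented for the example:

```python
# Tiny value-iteration example of the Markov Decision Process technique the
# abstract mentions. States, actions, and rewards are invented for illustration.
GAMMA = 0.9
STATES = ["rest", "work"]
ACTIONS = ["stay", "switch"]
# P[(state, action)] -> list of (next_state, probability); R[(state, action)] -> reward
P = {(s, "stay"): [(s, 1.0)] for s in STATES}
P.update({(s, "switch"): [("work" if s == "rest" else "rest", 1.0)] for s in STATES})
R = {("rest", "stay"): 0.0, ("rest", "switch"): 0.0,
     ("work", "stay"): 1.0, ("work", "switch"): 0.0}

V = {s: 0.0 for s in STATES}
for _ in range(100):                      # iterate the Bellman optimality update
    V = {s: max(R[(s, a)] + GAMMA * sum(p * V[s2] for s2, p in P[(s, a)])
                for a in ACTIONS)
         for s in STATES}
print(V)   # the "work" state accrues more long-run value than "rest"
```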

    Computational Abstraction of Films for Quantitative Analysis of Cinematography

    Currently, film viewers’ options for getting objective information about films before watching them are limited. Comparisons are even harder to find and often require extensive film knowledge from both the author and the reader. Such comparisons are inherently subjective and therefore limit the possibilities for scalable and effective statistical analyses. Apart from trailers, information about films does not reach viewers audibly or visibly, which seems absurd considering the very nature of film. The thesis examines repeatable quantification methods for computationally abstracting films in order to extract informative data for visualizations and further statistical analyses. The theoretical background, empowered by a multidisciplinary approach, and the design processes are described. Visualizations of the analyses are provided and evaluated for their accuracy and efficiency. Throughout the thesis, foundations for a future automated quantification player/plugin are described, aiming to facilitate further development. The theoretical structure of a website that may act as a gateway collecting and providing data for statistical cinematic research is also discussed.
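    One plausible quantification pass in the spirit of the thesis, sketched with OpenCV: per-frame mean brightness and a crude cut count from frame-to-frame differences; the metrics and threshold are assumptions, not the thesis's own pipeline:

```python
# One plausible film quantification: per-frame mean brightness and a crude cut
# detector based on inter-frame difference. Metrics and threshold are illustrative.
import cv2          # pip install opencv-python
import numpy as np

def quantify(path: str, cut_threshold: float = 30.0) -> dict:
    cap = cv2.VideoCapture(path)
    brightness, cuts, prev = [], 0, None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        brightness.append(float(gray.mean()))
        if prev is not None and np.abs(gray.astype(np.int16) - prev).mean() > cut_threshold:
            cuts += 1                      # a large inter-frame change approximates a shot cut
        prev = gray.astype(np.int16)
    cap.release()
    return {"mean_brightness": float(np.mean(brightness)) if brightness else 0.0,
            "estimated_cuts": cuts}
```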