
    Whole body interaction

    In this workshop we explore the notion of whole body interaction. We bring together different disciplines to create a new research direction for the study of this emerging form of interaction.

    Knowledge Extraction Using Probabilistic Reasoning: An Artificial Neural Network Approach

    The World Wide Web (WWW) has radically changed the way in which we access, generate and disseminate information. Its presence is felt daily, and with more internet-enabled devices being connected, the web of knowledge is growing. We are now moving into an era where the WWW is capable of ‘understanding’ the actual/intended meaning of our content. This is being achieved by creating links between distributed data sources using the Resource Description Framework (RDF). In order to find information in this web of interconnected sources, complex query languages are often employed, e.g. SPARQL. However, this approach is limited, as exact query matches are often required. In order to overcome this challenge, this paper presents a probabilistic approach to searching RDF documents. The developed algorithm converts RDF data into a matrix of features and treats searching as a machine learning problem. Using a number of artificial neural network algorithms, a prototype has been successfully developed that demonstrates the applicability of the approach. The results illustrate that the Voted Perceptron classifier (VPC), perceptron linear classifier (PERLC) and random neural network classifier (RNNC) performed particularly well, with accuracies of 100%, 98% and 93% respectively.
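    The pipeline the abstract describes — flattening RDF triples into a matrix of features and treating search as supervised classification — can be sketched in a few lines. The vocabulary, triples, and labels below are invented for illustration, and a plain perceptron stands in for the paper's classifier suite:

    ```python
    # Hypothetical sketch: one-hot encode each RDF subject by the predicates
    # and objects it appears with, then classify resources as relevant or not.

    def featurize(triples, vocabulary):
        """Binary feature vector per subject: 1 if a vocabulary term co-occurs."""
        vectors = {}
        for subject, predicate, obj in triples:
            vec = vectors.setdefault(subject, [0] * len(vocabulary))
            for term in (predicate, obj):
                if term in vocabulary:
                    vec[vocabulary.index(term)] = 1
        return vectors

    def train_perceptron(samples, labels, epochs=20, lr=1.0):
        """Classic perceptron update: w += lr * (y - y_hat) * x."""
        weights = [0.0] * len(samples[0])
        bias = 0.0
        for _ in range(epochs):
            for x, y in zip(samples, labels):
                activation = sum(w * xi for w, xi in zip(weights, x)) + bias
                y_hat = 1 if activation > 0 else 0
                error = y - y_hat
                weights = [w + lr * error * xi for w, xi in zip(weights, x)]
                bias += lr * error
        return weights, bias

    def predict(weights, bias, x):
        return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
    ```

    A query for "people" then reduces to labelling a few resources and letting the classifier score the rest, rather than requiring an exact SPARQL match.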

    Determining the stellar masses of submillimetre galaxies: the critical importance of star formation histories

    Submillimetre (submm) galaxies are among the most rapidly star-forming and most massive high-redshift galaxies; thus, their properties provide important constraints on galaxy evolution models. However, there is still a debate about their stellar masses and their nature in the context of the general galaxy population. To test the reliability of their stellar mass determinations, we used a sample of simulated submm galaxies for which we derived stellar masses via spectral energy distribution (SED) modelling (with Grasil, Magphys, Hyperz and LePhare) adopting various star formation histories (SFHs). We found that the assumption of SFHs with two independent components leads to the most accurate stellar masses. Exponentially declining SFHs (tau) lead to lower masses (albeit still consistent with the true values), while the assumption of single-burst SFHs results in a significant mass underestimation. Thus, we conclude that studies based on the higher masses inferred from fitting the SEDs of real submm galaxies with double SFHs are most likely to be correct, implying that submm galaxies lie on the high-mass end of the main sequence of star-forming galaxies. This conclusion appears robust to assumptions of whether or not submm galaxies are driven by major mergers, since the suite of simulated galaxies modelled here contains examples of both merging and isolated galaxies. We identified discrepancies between the true and inferred stellar ages (rather than the dust attenuation) as the primary determinant of the success/failure of the mass recovery. Regardless of the choice of SFH, the SED-derived stellar masses exhibit a factor of ~2 scatter around the true value; this scatter is an inherent limitation of the SED modelling due to simplified assumptions. Finally, we found that the contribution of active galactic nuclei does not have any significant impact on the derived stellar masses.

    Comment: Accepted to A&A. 11 pages, 9 figures, 1 table. V2 main changes: 1) discussion of the stellar age as the main parameter influencing the success of an SED model (Figs. 4, 5, 7); 2) discussion of the age-dust degeneracy (Fig. 9); 3) the comparison of real and simulated submm galaxies (Fig. 1).
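    The SFH families compared in the abstract have standard parameterizations; a minimal sketch, with generic symbols that are not taken from the paper itself:

    ```latex
    % Single burst: all the stellar mass M_* forms at one epoch t_0
    \psi_{\mathrm{burst}}(t) = M_* \, \delta(t - t_0)

    % Exponentially declining ("tau") model
    \psi_{\tau}(t) \propto e^{-t/\tau}

    % Two independent components: an old declining component plus a
    % recent burst switched on at t_{\mathrm{b}} (Heaviside step \Theta)
    \psi_{\mathrm{double}}(t) = A_{\mathrm{old}}\, e^{-t/\tau_{\mathrm{old}}}
        + A_{\mathrm{young}}\, \Theta(t - t_{\mathrm{b}})
    ```

    The abstract's ranking (double-component best, tau slightly low, single burst significantly low) reflects how much freedom each form gives the fit to decouple old mass from recent star formation.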

    FAST: A multi-processed environment for visualization of computational fluid dynamics

    Three-dimensional, unsteady, multi-zoned fluid dynamics simulations over full scale aircraft are typical of the problems being investigated at NASA Ames' Numerical Aerodynamic Simulation (NAS) facility on CRAY2 and CRAY-YMP supercomputers. With multiple processor workstations available in the 10-30 Mflop range, we feel that these new developments in scientific computing warrant a new approach to the design and implementation of analysis tools. These larger, more complex problems create a need for new visualization techniques not possible with the existing software or systems available as of this writing. The visualization techniques will change as the supercomputing environment, and hence the scientific methods employed, evolves even further. The Flow Analysis Software Toolkit (FAST), an implementation of a software system for fluid mechanics analysis, is discussed.

    FAST: A multi-processed environment for visualization of computational fluid dynamics

    Three-dimensional, unsteady, multi-zoned fluid dynamics simulations over full scale aircraft are typical of problems being computed at NASA-Ames on CRAY2 and CRAY-YMP supercomputers. With multiple processor workstations available in the 10 to 30 Mflop range, it is felt that these new developments in scientific computing warrant a new approach to the design and implementation of analysis tools. These large, more complex problems create a need for new visualization techniques not possible with the existing software or systems available as of this time. These visualization techniques will change as the supercomputing environment, and hence the scientific methods used, evolve even further. Visualization of computational aerodynamics requires flexible, extensible, and adaptable software tools for performing analysis tasks. FAST (Flow Analysis Software Toolkit), an implementation of a software system for fluid mechanics analysis based on this approach, is discussed.

    Scientific Visualization Using the Flow Analysis Software Toolkit (FAST)

    Over the past few years the Flow Analysis Software Toolkit (FAST) has matured into a useful tool for visualizing and analyzing scientific data on high-performance graphics workstations. Originally designed for visualizing the results of fluid dynamics research, FAST has demonstrated its flexibility by being used in several other areas of scientific research. These research areas include earth and space sciences, acid rain and ozone modelling, and automotive design, just to name a few. This paper describes the current status of FAST, including the basic concepts, architecture, existing functionality and features, and some of the known applications for which FAST is being used. A few of the applications, by both NASA and non-NASA agencies, are outlined in more detail. Described in the outlines are the goals of each visualization project, the techniques or 'tricks' used to produce the desired results, and custom modifications to FAST, if any, done to further enhance the analysis. Some of the future directions for FAST are also described.

    A User-Centred Approach to Reducing Sedentary Behaviour

    The use of digital technologies in the administration of healthcare is growing at a rapid rate. However, such platforms are often expensive. As people are living longer, the strain placed on hospitals is increasing. It is evident that a user-centric approach is needed, which aims to prevent illness before a hospital visit is required. As such, with the levels of obesity rising, preventing this illness before such resources are required has the potential to save an enormous amount of time and money, whilst promoting a healthier lifestyle. New and novel approaches are needed, which are inexpensive and pervasive in nature. One such approach is to use human digital memories. This outlet provides visual lifelogs, composed of a variety of data, which can be used to identify periods of inactivity. This paper explores how the DigMem system is used to successfully recognise activity and create temporal memory boxes of human experiences, which can be used to monitor sedentary behaviour.
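    One way periods of inactivity might be identified from lifelog data is by scanning for long runs of low activity. The abstract does not specify DigMem's data format or thresholds, so the sample shape, field meanings, and cut-offs below are all assumptions:

    ```python
    # Illustrative sketch only: each sample is (minute_index, activity_count);
    # a sedentary period is a run of consecutive low-activity minutes.

    def sedentary_periods(samples, threshold=10, min_minutes=30):
        """Return (start, end) minute pairs (inclusive) where activity stays
        below `threshold` for at least `min_minutes` consecutive minutes."""
        periods = []
        run_start = None
        run_end = None
        for minute, activity in samples:
            if activity < threshold:
                if run_start is None:
                    run_start = minute
                run_end = minute
            else:
                if run_start is not None and run_end - run_start + 1 >= min_minutes:
                    periods.append((run_start, run_end))
                run_start = None
        if run_start is not None and run_end - run_start + 1 >= min_minutes:
            periods.append((run_start, run_end))
        return periods
    ```

    A real system would also have to tolerate gaps and mixed data sources, but the core detection step reduces to this kind of run-length scan.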

    Capturing and Sharing Human Digital Memories with the Aid of Ubiquitous Peer-to-Peer Mobile Services

    The explosion of mobile computing and the sharing of content ubiquitously has enabled users to create and share memories instantly. Access to different data sources, such as location, movement, and physiology, has helped to create a data rich society where new and enhanced memories will form part of everyday life. Peer-to-Peer (P2P) systems have also increased in popularity over the years, due to their ad hoc and decentralized nature. Mobile devices are “smarter” and are increasingly becoming part of P2P systems, opening up a whole new dimension for capturing, sharing and interacting with enhanced human digital memories. This will require original and novel platforms that automatically compose data sources from ubiquitous ad hoc services that are prevalent within the environments we occupy. This is important for a number of reasons. Firstly, it will allow digital memories to be created that include richer information, such as how you felt when the memory was created and how you made others feel. Secondly, it provides a set of core services that can more easily manage and incorporate new sources as and when they become available. In this way, memories created in the same location and time are not necessarily similar – it depends on the data sources that are accessible. This paper presents DigMem, the initial prototype that is being developed to utilize distributed mobile services. DigMem captures and shares human digital memories in a ubiquitous P2P environment. We present a case study to validate the implementation and evaluate the applicability of the approach.
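    The point that two memories captured at the same place and time can differ depending on which ad hoc services were reachable can be shown with a tiny composition sketch. The "memory box" structure and field names here are invented, not DigMem's actual schema:

    ```python
    # Hypothetical memory box: a timestamped record composed from whichever
    # data sources happened to be reachable at capture time.

    def compose_memory_box(location, timestamp, available_sources):
        """Build a memory from only the currently reachable sources, so two
        memories at the same place and time can still hold different data.
        `available_sources` maps a source name to a zero-argument reader."""
        box = {"location": location, "timestamp": timestamp, "readings": {}}
        for name, read in available_sources.items():
            box["readings"][name] = read()
        return box
    ```

    For example, a user whose device can also reach a step-counter service at the same place and moment produces a richer box than one who cannot.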