
    Scientific support for an orbiter middeck experiment on solid surface combustion

    The objective is to determine the mechanism of gas-phase flame spread over solid fuel surfaces in the absence of any buoyancy or externally imposed gas-phase flow. Such understanding can be used to improve the fire safety of space travel by providing information that will allow judicious selection of spacecraft materials and environments. The planned experiment consists of measuring the flame spread rate over thermally thin and thermally thick fuels in a closed container in the low-gravity environment of the Space Shuttle. Measurements consist of the flame spread rate and flame shape, obtained from two views of the process recorded on movie film, and of surface and gas-phase temperatures, obtained from fine-wire thermocouples. The temperature measurements, along with appropriate modeling, provide information about the gas-to-solid heat flux. Environmental parameters to be varied are the oxygen concentration and pressure.

    A Cognitive Comparison of Modeling Behaviors Between Novice and Expert Information Analysts

    Empirical research into novice-expert differences in information requirement analysis has recognized that differences in knowledge and in modeling behaviors are the causes of differences in the quality of requirement specifications. However, there is no cognitive process model available for explaining the interactions among the three factors: knowledge, modeling behaviors, and the quality of requirement specifications. On the basis of the structure-mapping model of analogy, this article proposes a cognitive process model that views information requirement analysis as a process of conceptual mapping from the base structures (i.e., the knowledge structures of requirement analysis techniques) to the target structures (i.e., the knowledge structures of users' problem statements). Due to the differences in knowledge, novice and expert information analysts use different types of cognitive processes, relation mapping by experts versus object-attribute mapping by novices, to model information requirements. The different cognitive processes lead to different modeling behaviors, and in turn the different modeling behaviors result in different qualities of requirement specifications. On the basis of the cognitive process model, two ways to improve the performance of novice information analysts are suggested: encouraging novice information analysts to think in terms of relations rather than object-attributes, and providing domain-specific requirement analysis techniques that are similar to the problem domains in both relations and object-attributes.

    Integrating Multiple Uncertain Views of a Static Scene Acquired by an Agile Camera System

    This paper addresses the problem of merging multiple views of a static scene into a common coordinate frame, explicitly considering uncertainty. It assumes that a static world is observed by an agile vision system whose movements are known with limited precision and whose observations are inaccurate and incomplete. It concentrates on acquiring uncertain three-dimensional information from multiple views, rather than on modeling or representing the information at higher levels of abstraction. Two particular problems receive attention: identifying the transformation between two viewing positions, and understanding how errors and uncertainties propagate as a result of applying the transformation. The first is solved by identifying the forward kinematics of the agile camera system. The second is solved by first treating a measurement of camera position and orientation as a uniformly distributed random vector whose component variances are related to the resolution of the encoding potentiometers, then treating an object position measurement as a normally distributed random vector whose component variances are experimentally derived, and finally determining the uncertainty of the merged points as functions of these variances.
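    The propagation step described above can be illustrated with a minimal Monte Carlo sketch: a uniformly distributed rotation angle (standing in for encoder-limited pose knowledge) is applied to a Gaussian-perturbed point measurement, and the resulting spread of the transformed points is summarized. The 2-D setting, the specific parameter values, and the function name are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate_uncertainty(theta0, delta, p_mean, p_std, n=100_000):
    """Monte Carlo propagation of a 2-D point through an uncertain rotation.

    theta0, delta : rotation angle is uniform on [theta0-delta, theta0+delta]
                    (mimicking encoder-resolution uncertainty in camera pose)
    p_mean, p_std : point measurement is Gaussian with per-axis std p_std
    Returns the mean and covariance of the transformed point.
    """
    theta = rng.uniform(theta0 - delta, theta0 + delta, size=n)
    p = p_mean + rng.normal(0.0, p_std, size=(n, 2))
    c, s = np.cos(theta), np.sin(theta)
    # rotate each sampled point by its own sampled angle
    q = np.stack([c * p[:, 0] - s * p[:, 1],
                  s * p[:, 0] + c * p[:, 1]], axis=1)
    return q.mean(axis=0), np.cov(q.T)

# point (1, 0) rotated by ~45 degrees with small angle and position noise
mean, cov = propagate_uncertainty(np.pi / 4, 0.01, np.array([1.0, 0.0]), 0.05)
```

    A closed-form first-order (Jacobian-based) propagation would serve equally well here; the sampling version simply makes the uniform-plus-Gaussian error model explicit.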

    An algorithm and system for finding the next best view in a 3-D object modeling task

    Sensor placement for 3-D modeling is a growing area of computer vision and robotics. The objective of a sensor placement system is to make task-directed decisions for optimal pose selection. This thesis proposes a Next Best View (NBV) solution to the sensor placement problem. Our algorithm computes the next best view by optimizing an objective function that measures the quantity of unknown information in each of a group of potential viewpoints. The potential views are either placed uniformly around the object or are calculated from the surface normals of the occupancy grid model. For each iteration, the optimal pose from the objective function calculation is selected to initiate the collection of new data. The model is incrementally updated from the information acquired in each new view. This process terminates when the number of recovered voxels ceases to increase, yielding the final model. We tested two different algorithms on 8 objects of various complexity, including objects with simple concave, simple hole, and complex hole self-occlusions. The first algorithm chooses new views optimally but is slow to compute. The second algorithm is fast but not as effective as the first. Both NBV algorithms successfully model all 8 of the tested objects. The models compare well visually with the original objects within the constraints of occupancy grid resolution. Objects of greater complexity than those mentioned above were not tested due to the time required for modeling. A mathematical comparison was not made between the objects and their corresponding models, since we are concerned only with the acquisition of complete models, not their accuracy.
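    The core selection step, scoring each candidate view by how much unknown information it would observe and taking the maximum, can be sketched as follows. The voxel labels, the precomputed visibility map (a stand-in for a real ray-casting test), and the function names are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

UNKNOWN, EMPTY, OCCUPIED = 0, 1, 2

def next_best_view(grid, viewpoints, visible_voxels):
    """Pick the candidate view that sees the most UNKNOWN voxels.

    grid           : 3-D occupancy grid of {UNKNOWN, EMPTY, OCCUPIED} labels
    viewpoints     : list of candidate view identifiers
    visible_voxels : maps view id -> list of (i, j, k) voxels visible from it
    """
    def score(v):
        # objective: quantity of unknown information seen from view v
        return sum(grid[ijk] == UNKNOWN for ijk in visible_voxels[v])
    return max(viewpoints, key=score)

# toy example: a 2x2x2 grid with one voxel already recovered
grid = np.full((2, 2, 2), UNKNOWN)
grid[0, 0, 0] = OCCUPIED
vis = {"front": [(0, 0, 0), (0, 0, 1)],   # sees 1 unknown voxel
       "back":  [(1, 0, 0), (1, 1, 1)]}   # sees 2 unknown voxels
best = next_best_view(grid, ["front", "back"], vis)
```

    In the full pipeline this selection would alternate with sensing and grid updates until the count of recovered voxels stops increasing, matching the termination rule in the abstract.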

    Scaling edge parameters for topic-awareness in information propagation

    Social media platforms play a crucial role in regulating public discourse. Recognizing the importance of understanding this complex phenomenon, a large body of research has been published attempting to model how information spreads within these platforms. These models are termed information propagation models. The majority of existing information propagation models attempt to capture the causal relationship between two information-spreading events by modeling the probabilities of information transmission between the two users, or by capturing the temporal correlations that exist between the events. While these models have been successful in the past, they fail to capture various properties that have emerged more recently. One emerging property identified in recent analyses is the role the content of information plays in regulating the patterns of information spread. Specifically, social scientists believe that in the presence of large amounts of information, users tend to interact with items that help confirm their own views. This thesis explores a possible method to incorporate user-specific and event-specific features into existing information propagation models by scaling the edge parameters. By modeling the scaling factors to capture the phenomenon of selective exposure due to confirmation bias, we showcase the ability of our approach to capture complex social dynamics. Through experiments on both synthetic and real-world datasets, we validate the advantages that can be gained over existing models. The presented approach exhibits clearly visible performance gains on the network recovery task and performs competitively against the baselines.
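    The edge-scaling idea can be sketched concretely: a base per-edge transmission probability is multiplied by a factor derived from the topic affinity between the receiving user and the propagating item, so that belief-confirming content transmits more readily. The cosine-affinity scaling form below is an illustrative assumption, not the thesis's exact parameterization.

```python
import numpy as np

def scaled_transmission_prob(base_p, user_topics, item_topics):
    """Scale a base edge transmission probability by topic affinity.

    base_p      : learned base probability for the edge (u -> v)
    user_topics : receiving user's topic-preference vector
    item_topics : topic vector of the propagating item
    Models selective exposure (confirmation bias): items aligned with
    the user's own views are more likely to be transmitted.
    """
    affinity = float(np.dot(user_topics, item_topics)
                     / (np.linalg.norm(user_topics) * np.linalg.norm(item_topics)))
    scale = 0.5 + affinity          # illustrative: boosts aligned content
    return float(np.clip(base_p * scale, 0.0, 1.0))

u = np.array([1.0, 0.0])                                 # user prefers topic 0
p_aligned = scaled_transmission_prob(0.4, u, np.array([1.0, 0.0]))
p_opposed = scaled_transmission_prob(0.4, u, np.array([0.0, 1.0]))
```

    Plugging such scaled probabilities into a standard cascade model leaves the rest of the inference machinery unchanged, which is what makes the approach compatible with existing propagation models.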

    On multi-view learning with additive models

    In many scientific settings data can be naturally partitioned into variable groupings called views. Common examples include environmental (1st view) and genetic information (2nd view) in ecological applications, and chemical (1st view) and biological (2nd view) data in drug discovery. Multi-view data also occur in text analysis and proteomics applications, where one view consists of a graph with observations as the vertices and a weighted measure of pairwise similarity between observations as the edges. Further, in several of these applications the observations can be partitioned into two sets, one where the response is observed (labeled) and the other where the response is not (unlabeled). The problem of simultaneously addressing multi-view data and incorporating unlabeled observations in training is referred to as multi-view transductive learning. In this work we introduce and study a comprehensive generalized fixed-point additive modeling framework for multi-view transductive learning, where any view is represented by a linear smoother. The problem of view selection is discussed using a generalized Akaike Information Criterion, which provides an approach for testing the contribution of each view. An efficient implementation is provided for fitting these models with both backfitting and local-scoring type algorithms adjusted to semi-supervised graph-based learning. The proposed technique is assessed on both synthetic and real data sets and is shown to be competitive with state-of-the-art co-training and graph-based techniques. Published in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org); DOI: http://dx.doi.org/10.1214/08-AOAS202.
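    The backfitting procedure mentioned above has a compact generic form: each view's additive component is refit, via its linear smoother, to the partial residual left by the other views, and the passes are repeated to a fixed point. The sketch below assumes each smoother is given as an explicit matrix; this is a textbook backfitting loop, not the paper's semi-supervised graph-adjusted variant.

```python
import numpy as np

def backfit(smoothers, y, n_iter=20):
    """Backfitting for an additive model y ~ sum_v f_v, one component per view.

    smoothers : list of linear smoother matrices S_v (n x n), one per view
    y         : response vector over the n observations
    Each pass refits component f_v to the current partial residual
    y - sum_{w != v} f_w, iterating to a fixed point.
    """
    n = len(y)
    f = [np.zeros(n) for _ in smoothers]
    for _ in range(n_iter):
        for v, S in enumerate(smoothers):
            residual = y - sum(f[w] for w in range(len(f)) if w != v)
            f[v] = S @ residual
    return f

# two trivial views, each smoothed by shrinkage toward zero (S = 0.5 I)
y = np.array([3.0, 6.0])
S = 0.5 * np.eye(2)
f = backfit([S, S], y)   # fixed point: f[0] = f[1] = y / 3
```

    In the graph-based semi-supervised setting of the paper, a view's smoother would instead be built from the graph Laplacian over labeled and unlabeled vertices, but the outer loop keeps this shape.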

    Auto-Focus Contrastive Learning for Image Manipulation Detection

    Generally, current image manipulation detection models are built simply on manipulation traces. However, we argue that such models achieve sub-optimal detection performance because they tend to: 1) distinguish the manipulation traces from a large amount of noisy information within the entire image, and 2) ignore the trace relations among the pixels of each manipulated region and its surroundings. To overcome these limitations, we propose an Auto-Focus Contrastive Learning (AF-CL) network for image manipulation detection. It contains two main ideas, i.e., multi-scale view generation (MSVG) and trace relation modeling (TRM). Specifically, MSVG aims to generate a pair of views, each of which contains the manipulated region and its surroundings at a different scale, while TRM models the trace relations among the pixels of each manipulated region and its surroundings to learn a discriminative representation. After training the AF-CL network by minimizing the distance between the representations of corresponding views, the learned network is able to automatically focus on the manipulated region and its surroundings and sufficiently explore their trace relations for accurate manipulation detection. Extensive experiments demonstrate that, compared with the state of the art, AF-CL provides significant performance improvements, i.e., up to 2.5%, 7.5%, and 0.8% in F1 score on the CASIA, NIST, and Coverage datasets, respectively.
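    The training signal described above, minimizing the distance between the representations of the two multi-scale views, reduces to a simple objective on normalized embeddings. The sketch below shows only that distance term; the network architecture, view generation, and any negative-pair terms of the actual AF-CL objective are not reproduced here.

```python
import numpy as np

def view_distance(z1, z2):
    """Distance between L2-normalized representations of two views.

    z1, z2 : embedding vectors of the same image at two scales
    Minimizing this pulls the two views' representations together,
    which is the alignment term of a contrastive objective.
    (Simplified stand-in for the paper's full training loss.)
    """
    z1 = z1 / np.linalg.norm(z1)
    z2 = z2 / np.linalg.norm(z2)
    return float(1.0 - np.dot(z1, z2))   # 0 when the views agree perfectly

d_same = view_distance(np.array([1.0, 2.0, 3.0]), np.array([1.0, 2.0, 3.0]))
d_diff = view_distance(np.array([1.0, 0.0]), np.array([0.0, 1.0]))
```

    Because both views contain the manipulated region by construction, driving this distance to zero encourages the encoder to concentrate on that shared region, which is the "auto-focus" behavior the abstract describes.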