220,783 research outputs found

    Development and characterisation of injection moulded, all-polypropylene composites

    In this work, all-polypropylene composites (all-PP composites) were manufactured by injection moulding. Prior to injection moulding, pre-impregnated pellets were prepared by a three-step process (filament winding, compression moulding and pelletizing). A highly oriented polypropylene multifilament was used as the reinforcement material, and a random polypropylene copolymer (with ethylene) was used as the matrix material. Plaque specimens were injection moulded from the pellets with either a film gate or a fan gate. The compression moulded sheets and injection moulded plaques were characterised by shrinkage tests, static tensile tests, dynamic mechanical analysis and falling weight impact tests; the fibre distribution and fibre/matrix adhesion were analysed with light microscopy and scanning electron microscopy. The results showed that with increasing fibre content, both the yield stress and the perforation energy significantly increased. Of the two types of gates used, the fan gate made the mechanical properties of the plaque specimens more homogeneous (i.e., the differences in behaviour parallel and perpendicular to the flow direction became negligible).

    Parameterized complexity of machine scheduling: 15 open problems

    Machine scheduling problems are a long-time key domain of algorithms and complexity research. A novel approach to machine scheduling problems is the use of fixed-parameter algorithms. To stimulate this thriving research direction, we propose 15 open questions in this area whose resolution we expect to lead to the discovery of new approaches and techniques both in scheduling and parameterized complexity theory. Comment: Version accepted to Computers & Operations Research.
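    Not from the paper; a minimal sketch of the fixed-parameter idea on a classic scheduling problem. Exact makespan minimization on identical machines (P||Cmax) by brute force runs in time exponential only in the number of jobs n, so it is a (trivial) fixed-parameter algorithm for parameter n. The function name and instance below are illustrative choices, not taken from the paper.

    ```python
    from itertools import product

    def min_makespan(jobs, m):
        """Exact P||Cmax by brute force over all m**n job-to-machine assignments.

        Runtime is O(m**n * n): exponential only in the number of jobs n,
        i.e. a (trivial) fixed-parameter algorithm for parameter n.
        """
        best = float("inf")
        for assign in product(range(m), repeat=len(jobs)):
            loads = [0] * m
            for job, machine in zip(jobs, assign):
                loads[machine] += job
            best = min(best, max(loads))
        return best

    # Jobs {3, 3, 2, 2, 2} on 2 machines split perfectly as {3,3} / {2,2,2}.
    print(min_makespan([3, 3, 2, 2, 2], 2))  # → 6
    ```

    The open problems in the paper concern far more refined parameters (numbers of machines, job types, makespan bounds); this sketch only shows what "running time confined to a parameter" means.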

    Rapid modulation of sensory processing induced by stimulus conflict

    Humans are constantly confronted with environmental stimuli that conflict with task goals and can interfere with successful behavior. Prevailing theories propose the existence of cognitive control mechanisms that can suppress the processing of conflicting input and enhance that of the relevant input. However, the temporal cascade of brain processes invoked in response to conflicting stimuli remains poorly understood. By examining evoked electrical brain responses in a novel, hemifield-specific, visual-flanker task, we demonstrate that task-irrelevant conflicting stimulus input is quickly detected in higher level executive regions while simultaneously inducing rapid, recurrent modulation of sensory processing in the visual cortex. Importantly, however, both of these effects are larger for individuals with greater incongruency-related reaction-time (RT) slowing. The combination of neural activation patterns and behavioral interference effects suggests that this initial sensory modulation induced by conflicting stimulus inputs reflects performance-degrading attentional distraction due to their incompatibility, rather than any rapid task-enhancing cognitive control mechanisms. The present findings thus provide neural evidence for a model in which attentional distraction is the key initial trigger for the temporal cascade of processes by which the human brain responds to conflicting stimulus input in the environment.

    Stochastic accumulation of feature information in perception and memory

    It is now well established that the time course of perceptual processing influences the first second or so of performance in a wide variety of cognitive tasks. Over the last 20 years, there has been a shift from modeling the speed at which a display is processed, to modeling the speed at which different features of the display are perceived and formalizing how this perceptual information is used in decision making. The first of these models (Lamberts, 1995) was implemented to fit the time course of performance in a speeded perceptual categorization task and assumed a simple stochastic accumulation of feature information. Subsequently, similar approaches have been used to model performance in a range of cognitive tasks including identification, absolute identification, perceptual matching, recognition, visual search, and word processing, again assuming a simple stochastic accumulation of feature information from both the stimulus and representations held in memory. These models are typically fit to data from signal-to-respond experiments whereby the effects of stimulus exposure duration on performance are examined, but response times (RTs) and RT distributions have also been modeled. In this article, we review this approach and explore the insights it has provided about the interplay between perceptual processing, memory retrieval, and decision making in a variety of tasks. In so doing, we highlight how such approaches can continue to usefully contribute to our understanding of cognition.
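    Not the Lamberts (1995) model itself; a toy sketch of the core stochastic-accumulation assumption the abstract describes: each stimulus feature is independently sampled with some fixed probability per time step, so the fraction of features perceived grows with exposure duration. The parameter names (q for per-step sampling probability) are illustrative.

    ```python
    import random

    def sample_features(n_features, q, t_max, rng):
        """Return the time step at which each feature is first perceived.

        Each feature is sampled independently with probability q per step,
        so perception times are geometrically distributed -- the simple
        stochastic accumulation of feature information described above.
        """
        times = []
        for _ in range(n_features):
            t = 1
            while rng.random() >= q and t < t_max:
                t += 1
            times.append(t)
        return times

    def expected_fraction_perceived(q, t):
        # Closed form: P(a feature has been perceived by step t) = 1 - (1 - q)**t
        return 1 - (1 - q) ** t

    rng = random.Random(0)
    print(sample_features(4, 0.3, 50, rng))
    print(expected_fraction_perceived(0.2, 5))  # → 0.67232
    ```

    Fitting such a model to signal-to-respond data amounts to relating exposure duration t to accuracy through curves like `expected_fraction_perceived`.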

    What May Visualization Processes Optimize?

    In this paper, we present an abstract model of visualization and inference processes and describe an information-theoretic measure for optimizing such processes. In order to obtain such an abstraction, we first examined six classes of workflows in data analysis and visualization, and identified four levels of typical visualization components, namely disseminative, observational, analytical and model-developmental visualization. We noticed a common phenomenon at different levels of visualization, that is, the transformation of data spaces (referred to as alphabets) usually corresponds to the reduction of maximal entropy along a workflow. Based on this observation, we establish an information-theoretic measure of cost-benefit ratio that may be used as a cost function for optimizing a data visualization process. To demonstrate the validity of this measure, we examined a number of successful visualization processes in the literature, and showed that the information-theoretic measure can mathematically explain the advantages of such processes over possible alternatives. Comment: 10 pages.
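    Not the paper's full cost-benefit measure; a minimal sketch of the entropy-reduction phenomenon it builds on. Shannon entropy of an alphabet drops when a transformation maps it onto a smaller alphabet, e.g. binning raw values into a visual summary. The example distributions are illustrative.

    ```python
    from math import log2

    def shannon_entropy(probs):
        """Shannon entropy H(X) = -sum(p * log2(p)) over an alphabet's distribution."""
        return -sum(p * log2(p) for p in probs if p > 0)

    # A data alphabet of 8 equally likely values has maximal entropy 3 bits;
    # mapping it onto a 2-bin visual summary caps the entropy at 1 bit.
    raw = [1 / 8] * 8
    binned = [1 / 2, 1 / 2]
    print(shannon_entropy(raw), shannon_entropy(binned))  # → 3.0 1.0
    ```

    The paper's measure then weighs this entropy reduction (benefit) against the cost of the transformation and of any inference errors it introduces.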

    Parallel Implementation of the Discrete Green's Function Formulation of the FDTD Method on a Multicore Central Processing Unit

    A parallel implementation of the discrete Green's function formulation of the finite-difference time-domain (DGF-FDTD) method was developed on a multicore central processing unit. DGF-FDTD avoids computations of the electromagnetic field in free-space cells and does not require domain termination by absorbing boundary conditions. Computed DGF-FDTD solutions are compatible with the FDTD grid, enabling seamless hybridization of FDTD with time-domain integral equation methods. The developed implementation can be applied to simulations of antenna characteristics. As an example, arrays of Yagi-Uda antennas were simulated with the use of parallel DGF-FDTD. The efficiency of parallel computations was investigated as a function of the number of current elements in the FDTD grid. Although the developed method does not apply the fast Fourier transform for convolution computations, advantages stemming from the application of DGF-FDTD instead of FDTD can be demonstrated for one-dimensional wire antennas when simulation results are post-processed by the near-to-far-field transformation.
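    Not the DGF formulation itself; a minimal 1D Yee FDTD loop in normalized units, showing the conventional leapfrog update that DGF-FDTD sidesteps. Where this loop updates every cell of the grid each step (including empty free-space cells), the Green's-function formulation instead convolves source currents with precomputed grid responses. Grid size, source position, and the Gaussian pulse are illustrative choices.

    ```python
    import math

    def fdtd_1d(steps, n_cells=200, src_cell=100):
        """Minimal 1D Yee FDTD leapfrog (free space, Courant number 0.5).

        Every cell is updated at every time step -- exactly the free-space
        work that the DGF formulation avoids.
        """
        ez = [0.0] * n_cells  # electric field
        hy = [0.0] * n_cells  # magnetic field
        for t in range(steps):
            for k in range(n_cells - 1):
                hy[k] += 0.5 * (ez[k + 1] - ez[k])
            for k in range(1, n_cells):
                ez[k] += 0.5 * (hy[k] - hy[k - 1])
            # Soft Gaussian source injected at one cell
            ez[src_cell] += math.exp(-((t - 30) ** 2) / 100.0)
        return ez

    ez = fdtd_1d(60)
    ```

    The per-step cost here is O(n_cells) regardless of how few cells carry sources, which is why skipping free-space cells pays off for sparse structures such as wire antennas.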

    Computational Physics on Graphics Processing Units

    The use of graphics processing units for scientific computations is an emerging strategy that can significantly speed up a wide range of algorithms. In this review, we discuss advances made in the field of computational physics, focusing on classical molecular dynamics, and on quantum simulations for electronic structure calculations using density functional theory, wave function techniques, and quantum field theory. Comment: Proceedings of the 11th International Conference, PARA 2012, Helsinki, Finland, June 10-13, 2012.