5,289 research outputs found

    Interactive retrieval of video using pre-computed shot-shot similarities

    A probabilistic framework for content-based interactive video retrieval is described. The index over video fragments is derived from the probability that a user judges the key-frames of video shots to be relevant. Initial estimates of these probabilities are obtained from low-level feature representations. Only statistically significant estimates are retained; the rest are replaced by an appropriate constant, which allows efficient access at search time without loss of search quality and leads to improvements in most experiments. Over time, the probability estimates are updated from the relevance judgments of users performing searches, resulting in further substantial increases in mean average precision.
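A minimal sketch of the kind of relevance-feedback update this abstract describes, assuming a simple count-based model per key-frame; the names (`RelevanceIndex`, `default_prob`, `min_judgments`) are illustrative placeholders, not taken from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class ShotEntry:
    """Running counts of user judgments for one key-frame / shot."""
    positives: float = 0.0
    judgments: float = 0.0

@dataclass
class RelevanceIndex:
    """Illustrative index: P(relevant | shot), seeded from low-level features
    and refined by accumulated user relevance judgments."""
    default_prob: float = 0.1       # shared constant used when evidence is insufficient
    min_judgments: int = 5          # crude stand-in for a significance threshold
    shots: dict = field(default_factory=dict)

    def seed_from_features(self, shot_id: str, feature_prob: float, weight: float = 2.0):
        # Treat the low-level feature estimate as a small pseudo-count prior.
        self.shots[shot_id] = ShotEntry(positives=feature_prob * weight, judgments=weight)

    def record_judgment(self, shot_id: str, relevant: bool):
        entry = self.shots.setdefault(shot_id, ShotEntry())
        entry.positives += 1.0 if relevant else 0.0
        entry.judgments += 1.0

    def probability(self, shot_id: str) -> float:
        entry = self.shots.get(shot_id)
        if entry is None or entry.judgments < self.min_judgments:
            return self.default_prob    # fall back to the shared constant
        return entry.positives / entry.judgments
```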

    Does the motor system need intermittent control?

    Explanation of motor control is dominated by continuous neurophysiological pathways (e.g. trans-cortical, spinal) and the continuous control paradigm. Using new theoretical development, methodology and evidence, we propose that intermittent control, which incorporates a serial ballistic process within the main feedback loop, provides a more general and more accurate paradigm, one necessary to explain attributes highly advantageous for competitive survival and performance.
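As a rough illustration of the contrast between continuous and intermittent control, the toy simulation below samples the feedback error only at intervals and holds a pre-planned (ballistic) command open-loop in between; the first-order plant, gain, and sampling interval are assumptions for illustration, not the paper's model:

```python
import numpy as np

def simulate(intermittent: bool, dt=0.01, T=2.0, open_loop_interval=0.25, gain=4.0):
    """Toy first-order plant x' = u driven toward a reference of 0.
    Continuous control recomputes u at every step; intermittent control
    samples the error only every `open_loop_interval` seconds and runs
    the pre-planned (ballistic) command open-loop in between."""
    x, u = 1.0, 0.0
    next_sample = 0.0
    trajectory = []
    for k in range(int(T / dt)):
        t = k * dt
        if not intermittent or t >= next_sample:
            u = -gain * x                  # feedback recomputed at this sampling instant
            next_sample = t + open_loop_interval
        x += u * dt                        # command held open-loop until the next sample
        trajectory.append(x)
    return np.array(trajectory)

continuous = simulate(intermittent=False)
intermittent = simulate(intermittent=True)
```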

    Particle filter state estimator for large urban networks

    This paper applies a particle filter (PF) state estimator to urban traffic networks. The traffic network consists of signalized intersections, the roads that link these intersections, and sensors that detect the passage time of vehicles. The traffic state X(t) specifies at each time t the state of the traffic lights, the queue sizes at the intersections, and the location and size of all the platoons of vehicles inside the system. The basic entity of our model is a platoon of vehicles that travel close together at approximately the same speed. This leads to a discrete event simulation model that is much faster than microscopic models representing individual vehicles, so many random simulation runs can be executed in parallel. The PF assigns weights to each of these simulation runs according to how well they explain the observed sensor signals, and thus generates estimates at each time t of the location of the platoons and, more importantly, the queue size at each intersection. These estimates can be used for controlling the optimal switching times of the traffic lights.
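A minimal bootstrap particle-filter sketch in the spirit of this abstract, assuming a placeholder queue-size model and a Gaussian observation likelihood; the arrival/departure rates and observation model are illustrative, not the paper's traffic model:

```python
import numpy as np

rng = np.random.default_rng(0)

def step_platoon_model(state, dt=1.0):
    """Placeholder discrete-event step: each particle carries a queue size that
    grows with random arrivals and shrinks with departures during green."""
    arrivals = rng.poisson(0.3 * dt, size=state.shape)
    departures = rng.poisson(0.5 * dt, size=state.shape)
    return np.maximum(state + arrivals - departures, 0)

def likelihood(observed_count, state, sigma=2.0):
    """How well each particle's predicted detector count explains the sensor reading."""
    return np.exp(-0.5 * ((observed_count - state) / sigma) ** 2)

def particle_filter(observations, n_particles=1000):
    particles = np.zeros(n_particles)        # initial queue-size hypotheses
    estimates = []
    for z in observations:
        particles = step_platoon_model(particles)        # run simulations in parallel
        weights = likelihood(z, particles)               # weight by agreement with sensors
        weights /= weights.sum()
        estimates.append(np.sum(weights * particles))    # weighted queue-size estimate
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]                       # resample around likely states
    return estimates
```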

    Implementation of CUDA Accelerated Bayesian Network Learning

    Bayesian networks can be used to analyze and find relationships among genetic profiles. Unfortunately, Bayesian network learning is NP-hard and thus takes a significant amount of time to produce an output. There has been research aimed at making this learning faster, for example by utilizing consensus networks. Consensus networks are aggregations of many “cheaper” Bayesian networks that are used to formulate a bigger picture; these “cheaper” networks have their search spaces restricted, and thus more of them are required to extract the relationships among the data points. To accomplish this, I implemented Bayesian network learning in C++, using reference libraries programmed in C and MATLAB. The network learning was structured so that CUDA may be used to accelerate matrix operations, since the datasets are typically large enough to warrant GPGPU acceleration. However, after extensive testing, it was found that CUDA acceleration for Bayesian network learning does not significantly improve performance; in some cases, using the CUDA card is detrimental. This is mostly attributed to the fact that all the matrix operations performed are linear in nature (O(n)) and no matrix multiplication (an O(n^3) operation) is performed. The cost incurred by copying memory to and from the GPU simply outweighs the speed gained by using the GPU instead of the CPU. It is unfortunate that introducing matrix acceleration could not speed up the learning process by an order of magnitude, but this implementation may still be reused in the future for applications that rely heavily on matrix multiplication. I learned a significant amount from this research experience and will be able to apply the knowledge gained to my future work.
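A back-of-envelope cost model of the effect described here, assuming rough PCIe bandwidth and per-element throughput figures (all values are illustrative assumptions, not measurements): for a single O(n) element-wise pass, the round-trip host-device copy dominates, so the CPU comes out ahead.

```python
def gpu_worth_it(n_elements, bytes_per_elem=8,
                 pcie_bytes_per_s=12e9,   # assumed effective PCIe bandwidth
                 gpu_elem_per_s=5e10,     # assumed GPU element-wise throughput
                 cpu_elem_per_s=5e9):     # assumed CPU element-wise throughput
    """Rough cost model for one O(n) element-wise pass:
    copy to GPU + kernel + copy back, versus doing the work on the CPU."""
    transfer = 2 * n_elements * bytes_per_elem / pcie_bytes_per_s
    gpu_time = transfer + n_elements / gpu_elem_per_s
    cpu_time = n_elements / cpu_elem_per_s
    return gpu_time < cpu_time, gpu_time, cpu_time

# For a million doubles the copy alone (~1.3 ms) exceeds the CPU pass (~0.2 ms),
# matching the observation that linear-time operations gain nothing from offloading.
print(gpu_worth_it(1_000_000))
```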