
    Visual analytics with decision tree on network traffic flow for botnet detection

    Visual analytics (VA) is an integrative approach combining visualization, human factors, and data analysis. VA can synthesize information and derive insight from massive, dynamic, ambiguous, and often conflicting data, helping to discover both expected and unexpected information. The visualization also supports timely assessment so that pre-emptive action can be taken. This paper discusses the implementation of visual analytics with a decision tree model on network traffic flow for botnet detection. The discussion covers scenarios based on workstations, network traffic ranges, and times. The experiment consists of data modeling, analytics, and visualization using the Microsoft Power BI platform. Five VA views with different botnet-detection scenarios are examined and analyzed. The studies suggest that visual analytics is a flexible approach to botnet detection on network traffic flow: it can incorporate additional botnet-related information, open more paths for data exploration, and increase the effectiveness of the analytics tool. Moreover, learning communication patterns and identifying which behavior is normal and which is abnormal will be vital for security visual analysts as a future reference.
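
    The paper builds its decision tree and visualizations inside Power BI; purely as an illustration of the underlying idea, the sketch below trains a decision tree on hypothetical flow-level features. The feature names, value ranges, and synthetic labels are assumptions for demonstration, not the paper's dataset.

```python
# Illustrative sketch only: a decision tree over hypothetical NetFlow-style
# features (duration, bytes, packets, distinct destination ports). All values
# and labels below are synthetic assumptions; the paper performs its analysis
# in Microsoft Power BI on real traffic captures.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 2000

# Synthetic flows: botnet-like flows are short, low-volume, and fan out to many ports.
normal = np.column_stack([
    rng.exponential(30.0, n),      # flow duration (s)
    rng.exponential(5e4, n),       # bytes transferred
    rng.poisson(40, n),            # packet count
    rng.integers(1, 5, n),         # distinct destination ports contacted
])
botnet = np.column_stack([
    rng.exponential(2.0, n),
    rng.exponential(2e3, n),
    rng.poisson(6, n),
    rng.integers(10, 200, n),
])
X = np.vstack([normal, botnet])
y = np.array([0] * n + [1] * n)    # 0 = normal, 1 = botnet

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), target_names=["normal", "botnet"]))
```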

    Are Face and Object Recognition Independent? A Neurocomputational Modeling Exploration

    Are face and object recognition abilities independent? Although it is commonly believed that they are, Gauthier et al. (2014) recently showed that these abilities become more correlated as experience with nonface categories increases. They argued that there is a single underlying visual ability, v, that is expressed in performance with both face and nonface categories as experience grows. Using the Cambridge Face Memory Test and the Vanderbilt Expertise Test, they showed that the shared variance between performance on the two tests increases monotonically as experience increases. Here, we address why a shared resource across different visual domains does not lead to competition and an inverse correlation in abilities. We explain this conundrum using our neurocomputational model of face and object processing (The Model, TM). Our results show that, as in the behavioral data, the correlation between subordinate-level face and object recognition accuracy increases as experience grows. We suggest that different domains do not compete for resources because the relevant features are shared between faces and objects. The essential power of experience is to generate a "spreading transform" for faces that generalizes to objects that must be individuated. Interestingly, when the task of the network is basic-level categorization, no increase in the correlation between domains is observed. Hence, our model predicts that it is the type of experience that matters and that the source of the correlation lies in the fusiform face area (FFA), rather than in cortical areas that subserve basic-level categorization. This result is consistent with our previous modeling elucidating why the FFA is recruited for novel domains of expertise (Tong et al., 2008).
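
    The authors' TM network is not reproduced here; as a toy illustration of the shared-ability account the abstract describes (a single ability v whose expression in nonface performance grows with experience), the snippet below simulates subjects and shows the face/object correlation rising with experience. Every parameter is an invented assumption, not a value from the paper.

```python
# Toy simulation of the shared-ability account (not the authors' TM model).
# Each simulated subject has one visual ability v; face accuracy always
# reflects v, while object accuracy reflects v in proportion to experience.
import numpy as np

rng = np.random.default_rng(1)
n_subjects = 500
v = rng.normal(0.0, 1.0, n_subjects)    # shared visual ability

for experience in (0.0, 0.25, 0.5, 0.75, 1.0):
    face_acc = v + 0.5 * rng.normal(0.0, 1.0, n_subjects)
    object_acc = experience * v + 0.5 * rng.normal(0.0, 1.0, n_subjects)
    r = np.corrcoef(face_acc, object_acc)[0, 1]
    print(f"experience={experience:.2f}  face/object correlation r={r:.2f}")
```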

    Electrophysiological Correlates of Visual Object Category Formation in a Prototype-Distortion Task

    In perceptual learning studies, participants engage in extensive training in the discrimination of visual stimuli in order to modulate perceptual performance. Much of the perceptual learning literature has looked at the induction of reorganization of low-level representations in V1. However, much remains to be understood about the mechanisms by which the adult brain (an expert in visual object categorization) extracts high-level visual objects from the environment and represents them categorically in the cortical visual hierarchy. Here, I used event-related potentials (ERPs) to investigate the neural mechanisms involved in object representation formation during a hybrid visual search and prototype-distortion category learning task. EEG was recorded continuously while participants performed the hybrid task, in which a peripheral array of four dot patterns was briefly flashed on a computer screen. In half of the trials, one of the four dot patterns of the array contained the target, a distorted prototype pattern; the remaining trials contained only randomly generated patterns. After hundreds of trials, participants learned to discriminate the target pattern through corrective feedback. A multilevel modeling approach was used to examine the predictive relationship between behavioral performance over time and two ERP components, the N1 and the N250. The N1 is an early sensory component related to changes in visual attention and discrimination (Hopf et al., 2002; Vogel & Luck, 2000). The N250 is a component related to category learning and expertise (Krigolson et al., 2009; Scott et al., 2008; Tanaka et al., 2006). Results indicated that while N1 amplitudes did not change with improved performance, increasingly negative N250 amplitudes developed over time and were predictive of improvements in pattern detection accuracy.
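
    The study's multilevel model relates behavioral accuracy over time to ERP amplitudes; the sketch below shows what such a mixed-effects specification might look like with statsmodels on simulated data. The variable names, block structure, and effect sizes are placeholders and do not reflect the study's actual data or results.

```python
# Hypothetical mixed-effects (multilevel) specification in the spirit of the
# analysis described above: block-wise detection accuracy predicted from N1
# and N250 amplitude, with random intercepts per participant. Data are
# simulated; every coefficient below is an assumption, not a reported result.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
participants, blocks = 30, 12
rows = []
for p in range(participants):
    subject_offset = rng.normal(0.0, 0.05)
    for b in range(blocks):
        n250 = -1.0 - 0.3 * b + rng.normal(0.0, 0.5)   # grows more negative over blocks
        n1 = -2.0 + rng.normal(0.0, 0.5)               # assumed flat over time
        accuracy = 0.55 + subject_offset - 0.04 * n250 + rng.normal(0.0, 0.03)
        rows.append(dict(subject=p, block=b, n1=n1, n250=n250, accuracy=accuracy))
df = pd.DataFrame(rows)

model = smf.mixedlm("accuracy ~ n1 + n250", df, groups=df["subject"]).fit()
print(model.summary())
```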

    Beyond Physical Connections: Tree Models in Human Pose Estimation

    Simple tree models for articulated objects have prevailed over the last decade. However, it is also believed that these simple tree models cannot capture the large variations that arise in many scenarios, such as human pose estimation. This paper attempts to address three questions: 1) are simple tree models sufficient? More specifically, 2) how can tree models be used effectively in human pose estimation? And 3) how should combined parts be used together with single parts efficiently? Assume we have a set of single parts and combined parts, and the goal is to estimate a joint distribution of their locations. We surprisingly find that, when learning latent trees for a deformable model (which aims at approximating the joint distribution of body part locations with a minimal tree structure), no latent variables are introduced on the Leeds Sports Pose dataset (LSP). This suggests one can straightforwardly use a mixed representation of single and combined parts to approximate their joint distribution in a simple tree model. As such, one only needs to build visual categories of the combined parts and then perform inference on the learned latent tree. Our method outperformed the state of the art on LSP, both when the training images come from the same dataset and when they come from the PARSE dataset. Experiments on animal images from the VOC challenge further support our findings. Comment: CVPR 2013
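
    As background on why tree models are attractive here (exact inference by dynamic programming), the sketch below runs max-sum MAP inference over candidate part locations on a small tree of parts. The part names, unary scores, and spring-like pairwise costs are invented for illustration and are not the paper's learned latent tree or its features.

```python
# Minimal exact MAP inference on a tree-structured part model (illustrative
# only; parts, candidate locations, unary scores, and deformation costs are
# made-up assumptions, not the paper's model).
import numpy as np

# Tree: head -> torso -> {left_arm, right_arm}; each part has 3 candidate locations.
parts = ["head", "torso", "left_arm", "right_arm"]
children = {"head": ["torso"], "torso": ["left_arm", "right_arm"],
            "left_arm": [], "right_arm": []}
candidates = {p: np.array([[0.0, i] for i in range(3)]) for p in parts}

rng = np.random.default_rng(3)
unary = {p: rng.normal(0.0, 1.0, 3) for p in parts}   # appearance scores per candidate

def pairwise(parent_xy, child_xy):
    """Spring-like deformation cost between connected parts."""
    return -0.5 * np.sum((parent_xy - child_xy) ** 2)

def best_score(part, idx):
    """Best total score of the subtree rooted at `part`, with `part` at candidate idx."""
    score = unary[part][idx]
    for child in children[part]:
        score += max(
            pairwise(candidates[part][idx], candidates[child][j]) + best_score(child, j)
            for j in range(3)
        )
    return score

root_scores = [best_score("head", i) for i in range(3)]
print("best root placement:", int(np.argmax(root_scores)), "score:", max(root_scores))
```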

    Toward a Taxonomy and Computational Models of Abnormalities in Images

    The human visual system can spot an abnormal image and reason about what makes it strange. This task has not received enough attention in computer vision. In this paper we study various types of atypicalities in images more comprehensively than has been done before. We propose a new dataset of abnormal images showing a wide range of atypicalities. We design human subject experiments to discover a coarse taxonomy of the reasons for abnormality. Our experiments reveal three major categories of abnormality: object-centric, scene-centric, and contextual. Based on this taxonomy, we propose a comprehensive computational model that can predict all the different types of abnormality in images and outperforms prior art in abnormality recognition. Comment: To appear in the Thirtieth AAAI Conference on Artificial Intelligence (AAAI 2016)
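
    The paper's full model is not reproduced here; one simplistic way to think about the three-way taxonomy is as multiclass classification over separate object-centric, scene-centric, and contextual evidence scores, as in the hypothetical sketch below. The features and training data are synthetic assumptions.

```python
# Hypothetical sketch: abnormality-type prediction as multiclass classification
# over three coarse evidence channels (object-centric, scene-centric,
# contextual). Features and labels are synthetic; this is not the paper's model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
labels = ["object-centric", "scene-centric", "contextual"]
X, y = [], []
for k, _ in enumerate(labels):
    scores = rng.normal(0.2, 0.3, size=(200, 3))
    scores[:, k] += 1.0            # the matching evidence channel is elevated
    X.append(scores)
    y += [k] * 200
X, y = np.vstack(X), np.array(y)

clf = LogisticRegression(max_iter=1000).fit(X, y)
probe = np.array([[1.1, 0.2, 0.1]])    # strong object-centric evidence
print(labels[int(clf.predict(probe)[0])])
```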

    Dance-the-Music: an educational platform for the modeling, recognition and audiovisual monitoring of dance steps using spatiotemporal motion templates

    In this article, a computational platform entitled “Dance-the-Music” is presented that can be used in a dance-education context to explore and learn the basics of dance steps. Using a method based on spatiotemporal motion templates, the platform makes it possible to train basic step models from sequentially repeated dance figures performed by a dance teacher. Movements are captured with an optical motion capture system. The teachers’ models can be visualized from a first-person perspective to instruct students in how to perform the specific dance steps correctly. Moreover, recognition algorithms based on a template-matching method can assess the quality of a student’s performance in real time by means of multimodal monitoring techniques. The results of an evaluation study suggest that Dance-the-Music is effective in helping dance students master the basics of dance figures.
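
    As a rough illustration of template matching on motion-capture trajectories (not the Dance-the-Music implementation itself), the snippet below compares an incoming performance against stored spatiotemporal templates using mean joint-position error. The joint count, frame count, and template contents are invented assumptions.

```python
# Illustrative template matching for step recognition: each template is a
# (frames x joints x 3) array of joint positions; a performance is scored by
# mean Euclidean distance to each template. All arrays here are synthetic
# assumptions, not the platform's trained step models.
import numpy as np

rng = np.random.default_rng(5)
frames, joints = 60, 15

templates = {
    "basic_step": rng.normal(0.0, 1.0, (frames, joints, 3)),
    "side_step":  rng.normal(0.5, 1.0, (frames, joints, 3)),
}

def match(performance, templates):
    """Return (best_label, distances) for a (frames x joints x 3) performance."""
    distances = {
        name: float(np.mean(np.linalg.norm(performance - tmpl, axis=-1)))
        for name, tmpl in templates.items()
    }
    return min(distances, key=distances.get), distances

# A noisy rendition of the basic step should match its own template most closely.
student = templates["basic_step"] + rng.normal(0.0, 0.1, (frames, joints, 3))
label, dists = match(student, templates)
print(label, {k: round(v, 3) for k, v in dists.items()})
```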