
    Hidden and Unknown Object Detection in Video

    Object detection is used to find real-world objects such as faces, bicycles and buildings in images and videos. Object detection algorithms typically rely on extracted features and learning algorithms to recognize object categories, and are applied in tasks such as image retrieval, security, surveillance and automated vehicle parking systems. Objects can be detected with a range of models, including feature-based object detection, Viola-Jones object detection, SVM classification with histograms of oriented gradients (HOG) features, and image segmentation with blob analysis. For detecting hidden objects in video, object-class detection is commonly used, in which the object or objects must be defined in advance [1][2]. The proposed method is instead based on bitwise XOR comparison [3] and detects both moving and static hidden objects. It detects objects with high accuracy, including hidden objects whose color closely resembles the background and which are therefore undetectable to the human eye. There is no need to define or describe the searched object before detection, so the algorithm does not restrict the search to a particular object type; it is designed to detect objects of any type and size. The method also works under changing weather conditions and at any time of day, irrespective of the brightness of the sun (which increases or decreases the intensity of the image), so it operates dynamically. A system has been developed to implement the method.
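    The abstract gives no implementation details, but the core idea of detecting objects by XOR-comparing the current frame against a reference background frame can be sketched as follows. This is a minimal sketch assuming an OpenCV/NumPy pipeline with aligned grayscale frames; the function name, threshold and morphology step are illustrative and are not taken from the paper.

```python
# Minimal sketch: background subtraction via bitwise XOR on aligned grayscale frames.
# Parameters (min_area, the 3x3 opening kernel) are illustrative, not from the paper.
import cv2
import numpy as np

def detect_changed_regions(background, frame, min_area=50):
    """Return bounding boxes of regions whose pixels differ from the background."""
    # Bitwise XOR highlights bits that differ between the two images; even a small
    # intensity difference (e.g. a camouflaged object) yields a nonzero result.
    diff = cv2.bitwise_xor(background, frame)

    # Any nonzero XOR value marks a potentially changed pixel.
    _, mask = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY)

    # Suppress isolated noise pixels before extracting connected regions.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```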

    Temporal Localization of Fine-Grained Actions in Videos by Domain Transfer from Web Images

    We address the problem of fine-grained action localization from temporally untrimmed web videos. We assume that only weak video-level annotations are available for training. The goal is to use these weak labels to identify temporal segments corresponding to the actions, and learn models that generalize to unconstrained web videos. We find that web images queried by action names serve as well-localized highlights for many actions, but are noisily labeled. To solve this problem, we propose a simple yet effective method that takes weak video labels and noisy image labels as input, and generates localized action frames as output. This is achieved by cross-domain transfer between video frames and web images, using pre-trained deep convolutional neural networks. We then use the localized action frames to train action recognition models with long short-term memory networks. We collect a fine-grained sports action data set FGA-240 of more than 130,000 YouTube videos. It has 240 fine-grained actions under 85 sports activities. Convincing results are shown on the FGA-240 data set, as well as the THUMOS 2014 localization data set with untrimmed training videos. Comment: Camera-ready version for ACM Multimedia 2015.
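    The cross-domain selection step can be illustrated with a short sketch. It assumes CNN features have already been extracted for the video frames and for the web images retrieved by the action name; the function name, array shapes and similarity threshold are hypothetical and do not reproduce the authors' exact procedure.

```python
# Minimal sketch: pick "action-like" frames in a weakly labeled video by matching
# frame CNN features against CNN features of web images queried for the action name.
import numpy as np

def localize_action_frames(frame_feats, web_image_feats, threshold=0.6):
    """frame_feats: (T, D) features per video frame; web_image_feats: (M, D) features
    of retrieved web images. Returns indices of frames whose best image match
    exceeds the threshold."""
    # L2-normalize so a dot product equals cosine similarity.
    f = frame_feats / np.linalg.norm(frame_feats, axis=1, keepdims=True)
    w = web_image_feats / np.linalg.norm(web_image_feats, axis=1, keepdims=True)

    # Keep only the best-matching exemplar per frame, which tolerates noisy image
    # labels because a single good match is enough evidence.
    best_match = (f @ w.T).max(axis=1)
    return np.where(best_match >= threshold)[0]
```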

    ANALYSIS OF THE STATE OF CREDIT CONSUMER COOPERATIVES IN THE RUSSIAN FEDERATION

    The relevance of developing credit consumer cooperatives as a promising segment of the credit market is substantiated. The article analyzes the state of credit consumer cooperatives and presents their theoretical foundations. The main indicators of credit consumer cooperatives' activity are presented and analyzed, and the Central Bank's policy toward credit consumer cooperatives is described. The main problems constraining the development of credit consumer cooperatives are identified, with special attention paid to the insecurity of shareholders' property interests. Recommendations for improving the activities of credit consumer cooperatives are presented. The main conclusion is that credit consumer cooperatives have substantial financial and credit potential but remain insufficiently developed in Russia. The mechanisms described in the article could ensure steady growth of credit consumer cooperatives' activities.

    Evaluating Two-Stream CNN for Video Classification

    Videos contain very rich semantic information. Traditional hand-crafted features are known to be inadequate in analyzing complex video semantics. Inspired by the huge success of deep learning methods in analyzing image, audio and text data, significant efforts have recently been devoted to the design of deep nets for video analytics. Among the many practical needs, classifying videos (or video clips) based on their major semantic categories (e.g., "skiing") is useful in many applications. In this paper, we conduct an in-depth study to investigate important implementation options that may affect the performance of deep nets on video classification. Our evaluations are conducted on top of a recent two-stream convolutional neural network (CNN) pipeline, which uses both static frames and motion optical flows, and has demonstrated competitive performance against the state-of-the-art methods. In order to gain insights and to arrive at a practical guideline, many important options are studied, including network architectures, model fusion, learning parameters and the final prediction methods. Based on the evaluations, very competitive results are attained on two popular video classification benchmarks. We hope that the discussions and conclusions from this work can help researchers in related fields to quickly set up a good basis for further investigations along this very promising direction. Comment: ACM ICMR'15.
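    As an illustration of one of the options studied (late fusion of the two streams), a minimal sketch is given below. The stream weights and the clip-averaging scheme are assumptions of the sketch, not the configuration recommended by the paper.

```python
# Minimal sketch: weighted late fusion of spatial (RGB) and temporal (optical-flow)
# stream scores, followed by averaging over sampled clips to get a video-level label.
import numpy as np

def fuse_two_stream_scores(spatial_scores, temporal_scores,
                           w_spatial=0.4, w_temporal=0.6):
    """spatial_scores, temporal_scores: (num_clips, num_classes) softmax outputs.
    Returns the index of the predicted semantic category for the whole video."""
    fused = w_spatial * spatial_scores + w_temporal * temporal_scores
    video_scores = fused.mean(axis=0)    # average clip-level scores over the video
    return int(np.argmax(video_scores))  # predicted class index
```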

    Receptive Field Block Net for Accurate and Fast Object Detection

    Current top-performing object detectors depend on deep CNN backbones, such as ResNet-101 and Inception, benefiting from their powerful feature representations but suffering from high computational costs. Conversely, some detectors based on lightweight models achieve real-time processing, but their accuracy is often criticized. In this paper, we explore an alternative way to build a fast and accurate detector by strengthening lightweight features using a hand-crafted mechanism. Inspired by the structure of Receptive Fields (RFs) in human visual systems, we propose a novel RF Block (RFB) module, which takes the relationship between the size and eccentricity of RFs into account, to enhance feature discriminability and robustness. We further assemble the RFB module on top of SSD, constructing the RFB Net detector. To evaluate its effectiveness, experiments are conducted on two major benchmarks, and the results show that RFB Net is able to reach the performance of advanced very deep detectors while keeping real-time speed. Code is available at https://github.com/ruinmessi/RFBNet. Comment: Accepted by ECCV 2018.
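    A rough sketch of an RFB-style block is shown below, assuming PyTorch: parallel branches pair different kernel sizes with increasing dilation rates (modeling the size-eccentricity relationship of receptive fields), and their outputs are concatenated, fused and added back to the input. The channel split, kernel sizes and dilation rates are illustrative and do not reproduce the exact module in the paper or the linked repository.

```python
# Illustrative RFB-style module: multi-branch convolutions with growing dilation,
# concatenated and fused by a 1x1 conv, with a residual shortcut.
import torch
import torch.nn as nn

class SimpleRFB(nn.Module):
    def __init__(self, channels):
        super().__init__()
        branch_ch = channels // 4
        # Larger-kernel branches feed more strongly dilated 3x3 convolutions,
        # i.e. larger receptive fields sit at larger "eccentricities".
        self.branch1 = nn.Sequential(
            nn.Conv2d(channels, branch_ch, kernel_size=1),
            nn.Conv2d(branch_ch, branch_ch, kernel_size=3, padding=1, dilation=1))
        self.branch2 = nn.Sequential(
            nn.Conv2d(channels, branch_ch, kernel_size=3, padding=1),
            nn.Conv2d(branch_ch, branch_ch, kernel_size=3, padding=3, dilation=3))
        self.branch3 = nn.Sequential(
            nn.Conv2d(channels, branch_ch, kernel_size=5, padding=2),
            nn.Conv2d(branch_ch, branch_ch, kernel_size=3, padding=5, dilation=5))
        self.fuse = nn.Conv2d(3 * branch_ch, channels, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = torch.cat([self.branch1(x), self.branch2(x), self.branch3(x)], dim=1)
        return self.relu(x + self.fuse(out))  # residual shortcut preserves input scale
```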

    Towards Bottom-Up Analysis of Social Food

    In ACM Digital Health Conference 2016.

    The age of data-driven proteomics : how machine learning enables novel workflows

    A lot of energy in the field of proteomics is dedicated to the application of challenging experimental workflows, which include metaproteomics, proteogenomics, data independent acquisition (DIA), non-specific proteolysis, immunopeptidomics, and open modification searches. These workflows are all challenging because of ambiguity in the identification stage; they either expand the search space and thus increase the ambiguity of identifications, or, in the case of DIA, they generate data that is inherently more ambiguous. In this context, machine learning-based predictive models are now generating considerable excitement in the field of proteomics because these predictive models hold great potential to drastically reduce the ambiguity in the identification process of the above-mentioned workflows. Indeed, the field has already produced classical machine learning and deep learning models to predict almost every aspect of a liquid chromatography-mass spectrometry (LC-MS) experiment. Yet despite all the excitement, thorough integration of predictive models in these challenging LC-MS workflows is still limited, and further improvements to the modeling and validation procedures can still be made. In this viewpoint we therefore point out highly promising recent machine learning developments in proteomics, alongside some of the remaining challenges.
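    One way such predictive models reduce identification ambiguity can be sketched as follows: candidate peptide-spectrum matches from an expanded search space are re-scored by how well a predicted property (here, retention time) agrees with the observation. The function names, penalty form and tolerance are hypothetical; real rescoring pipelines use calibrated errors and many more predicted features (fragment intensities, ion mobility, and others).

```python
# Minimal sketch: penalize candidate peptide identifications whose predicted
# retention time disagrees with the observed one. All names and values are illustrative.
def rescore_candidates(candidates, observed_rt, predict_rt, rt_tolerance=2.0):
    """candidates: list of (peptide_sequence, search_engine_score) tuples.
    predict_rt: callable mapping a peptide sequence to a predicted retention time (min).
    Returns candidates sorted by the combined score, best first."""
    rescored = []
    for peptide, score in candidates:
        rt_error = abs(predict_rt(peptide) - observed_rt)
        # Extra evidence from the predictor prunes the expanded search space:
        # the larger the disagreement, the larger the penalty.
        rescored.append((peptide, score - rt_error / rt_tolerance))
    return sorted(rescored, key=lambda item: item[1], reverse=True)
```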

    Bose-Einstein Condensation of Helium and Hydrogen inside Bundles of Carbon Nanotubes

    Helium atoms or hydrogen molecules are believed to be strongly bound within the interstitial channels (between three carbon nanotubes) within a bundle of many nanotubes. The effects on adsorption of a nonuniform distribution of tubes are evaluated. The energy of a single particle state is the sum of a discrete transverse energy Et (that depends on the radii of neighboring tubes) and a quasicontinuous energy Ez of relatively free motion parallel to the axis of the tubes. At low temperature, the particles occupy the lowest energy states, the focus of this study. The transverse energy attains a global minimum value (Et=Emin) for radii near Rmin=9.95 Ang. for H2 and 8.48 Ang. for He-4. The density of states N(E) near the lowest energy is found to vary linearly above this threshold value, i.e. N(E) is proportional to (E-Emin). As a result, there occurs a Bose-Einstein condensation of the molecules into the channel with the lowest transverse energy. The transition is characterized approximately as that of a four dimensional gas, neglecting the interactions between the adsorbed particles. The phenomenon is observable, in principle, from a singular heat capacity. The existence of this transition depends on the sample having a relatively broad distribution of radii values that include some near Rmin. Comment: 21 pages, 9 figures.
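    To make the four-dimensional-gas analogy concrete, the standard ideal-gas estimate with the linear density of states quoted above can be sketched; the prefactor C, the neglect of interactions, and the normalization of the particle density n are assumptions of this sketch, not values from the paper.

```latex
% Ideal Bose gas with N(E) = C (E - E_min) near threshold; set mu -> E_min,
% write eps = E - E_min, and evaluate the particle density at the transition:
\begin{align}
  n &= \int_0^\infty \frac{C\,\varepsilon}{e^{\varepsilon/k_B T_c} - 1}\,d\varepsilon
     = C\,(k_B T_c)^2\,\Gamma(2)\,\zeta(2)
     = \frac{\pi^2}{6}\,C\,(k_B T_c)^2,\\
  k_B T_c &= \sqrt{\frac{6\,n}{\pi^2 C}}\,.
\end{align}
% Since T_c is finite, condensation into the lowest-Et channel occurs; a linear
% N(E) is the scaling of a free gas in d = 4 dimensions (N(E) ~ E^{d/2-1}),
% hence the "four-dimensional" character of the transition.
```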