
    Hidden and Unknown Object Detection in Video

    Full text link
    Object detection is used to find real-world objects such as faces, bicycles and buildings in images and videos. Object detection algorithms normally use extracted features and learning algorithms to distinguish the object category, and object detection is commonly applied in image retrieval, security, surveillance and automated vehicle parking systems. Objects can be detected with a range of models, including feature-based object detection, Viola-Jones object detection, SVM classification with histograms of oriented gradients (HOG) features, image segmentation and blob analysis. For the detection of hidden objects in video, the object-class detection approach is normally used, in which case the object or objects must be defined in advance [1][2]. The proposed method is instead based on bitwise XOR comparison [3]. The method (and the system implementing it) detects both moving and static hidden objects. It detects objects with high accuracy, including hidden objects whose colour closely resembles the background and which are therefore undetectable to the human eye. There is no need to define or describe the target object before detection, so the algorithm does not restrict the search to a particular object type; it is designed to detect objects of any type and size. It is also designed to work under changing weather conditions and at any time of day, irrespective of the brightness of the sun (which raises or lowers the intensity of the image), so the method operates dynamically. A system has been developed to implement the method.
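    The abstract gives no implementation details; the sketch below illustrates one plausible reading of frame comparison by bitwise XOR, assuming 8-bit greyscale frames held in NumPy arrays. The noise threshold and the toy "faint object" are illustrative assumptions, not parameters from the paper.

```python
# Minimal sketch of XOR-based frame comparison for flagging objects that
# barely differ from the background. Assumes 8-bit greyscale frames as
# NumPy arrays; the threshold is an assumed noise floor, not a value
# taken from the paper.
import numpy as np

def xor_change_mask(reference: np.ndarray, frame: np.ndarray,
                    threshold: int = 16) -> np.ndarray:
    """Boolean mask of pixels whose bit pattern differs from the reference."""
    diff = np.bitwise_xor(reference, frame)   # per-pixel XOR of intensities
    return diff > threshold                   # suppress small sensor noise

# Toy usage: an object only 3 grey levels away from the background, hard to
# see by eye, still flips low-order bits and is detected.
reference = np.zeros((240, 320), dtype=np.uint8)
frame = reference.copy()
frame[100:120, 150:180] = 3
mask = xor_change_mask(reference, frame, threshold=2)
print(mask.sum(), "changed pixels")           # 600 pixels flagged
```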

    Verification of Identity and Syntax Check of Verilog and LEF Files

    Full text link
    The Verilog and LEF files are units of the digital design flow [1][2]. They are developed at different stages: before the LEF file is created, the Verilog file passes through numerous steps during which partial loss of information is possible. The identity check makes it possible to verify that no information has been lost during the flow; the syntactic correctness of the Verilog and LEF files is checked as well. The scripting language Perl was selected for the program because it is flexible for working with text files [3]. The method developed in the present paper is important because integrated circuits today are used in many scientific, technical and other fields, and their steadily widening application creates a large demand.
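    The paper implements its checks in Perl; purely as an illustration of the idea, the sketch below uses Python to extract cell names from a Verilog netlist and macro names from a LEF file and report identifiers present in one file but not the other. The regular expressions and file names are simplified assumptions, not the paper's code.

```python
# Illustrative identity check between a Verilog netlist and a LEF file:
# collect cell/macro names from both and report mismatches. The regular
# expressions are heavily simplified and do not cover the full Verilog or
# LEF grammars; the original tool is written in Perl.
import re

def verilog_cells(path: str) -> set:
    text = open(path).read()
    # instantiated cell types look like "<cell_type> <instance_name> ( ... );"
    return set(re.findall(r"^\s*(\w+)\s+\w+\s*\(", text, re.MULTILINE)) - {"module"}

def lef_macros(path: str) -> set:
    text = open(path).read()
    # macro declarations look like "MACRO <name>"
    return set(re.findall(r"^\s*MACRO\s+(\S+)", text, re.MULTILINE))

def identity_check(verilog_path: str, lef_path: str) -> None:
    v, l = verilog_cells(verilog_path), lef_macros(lef_path)
    for name in sorted(v - l):
        print(f"cell {name} instantiated in Verilog but missing from LEF")
    for name in sorted(l - v):
        print(f"macro {name} defined in LEF but never instantiated in Verilog")

# identity_check("design.v", "cells.lef")   # hypothetical file names
```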

    Temporal Localization of Fine-Grained Actions in Videos by Domain Transfer from Web Images

    Full text link
    We address the problem of fine-grained action localization from temporally untrimmed web videos. We assume that only weak video-level annotations are available for training. The goal is to use these weak labels to identify temporal segments corresponding to the actions, and learn models that generalize to unconstrained web videos. We find that web images queried by action names serve as well-localized highlights for many actions, but are noisily labeled. To solve this problem, we propose a simple yet effective method that takes weak video labels and noisy image labels as input, and generates localized action frames as output. This is achieved by cross-domain transfer between video frames and web images, using pre-trained deep convolutional neural networks. We then use the localized action frames to train action recognition models with long short-term memory networks. We collect a fine-grained sports action data set FGA-240 of more than 130,000 YouTube videos. It has 240 fine-grained actions under 85 sports activities. Convincing results are shown on the FGA-240 data set, as well as the THUMOS 2014 localization data set with untrimmed training videos.
    Comment: Camera ready version for ACM Multimedia 201
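    The localization step described above can be read as: train a classifier on CNN features of (noisy) web images, score every frame of a weakly labelled video, and keep the highest-scoring frames as pseudo-localized action frames. The sketch below illustrates that reading with precomputed features standing in for a CNN; the classifier choice, shapes and keep ratio are assumptions, not the paper's settings.

```python
# Loose sketch of selecting localized action frames in a weakly labelled
# video using a classifier trained on web-image CNN features. Feature
# extraction is abstracted away as precomputed arrays; all shapes, the
# classifier and keep_ratio are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def select_action_frames(web_feats, web_labels, frame_feats, keep_ratio=0.2):
    """Train on web-image features, score video frames, keep the top fraction."""
    clf = LogisticRegression(max_iter=1000).fit(web_feats, web_labels)
    scores = clf.predict_proba(frame_feats)[:, 1]      # P(action) per frame
    n_keep = max(1, int(len(scores) * keep_ratio))
    return np.argsort(scores)[::-1][:n_keep]           # indices of kept frames

# Toy usage with random vectors standing in for CNN features.
rng = np.random.default_rng(0)
web_feats = rng.normal(size=(200, 128))
web_labels = rng.integers(0, 2, size=200)
frame_feats = rng.normal(size=(300, 128))
print(select_action_frames(web_feats, web_labels, frame_feats)[:10])
```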

    Analysis of Consequences of Faults in General Zero Transmission Lines of Power Supply Stations

    Full text link
    Zero transmission line faults in the power supply systems of large apartment houses, office blocks, offices and other structures cause voltage excursions, which are the focus of this research. The phenomenon is examined from a theoretical perspective, and the emergency situations likely to arise as a result of malfunction of single-phase power supply consumers, as well as the probable dangers of fire, are described. We propose installing an appropriate protective device to avoid such emergency situations.
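    The voltage excursions in question are the classic consequence of a broken "zero" (neutral) conductor in a three-phase, four-wire system: the neutral point shifts according to the imbalance of the single-phase loads, so lightly loaded phases see a large overvoltage while heavily loaded phases sag. The snippet below is a worked illustration of that effect under assumed load values; the numbers are not taken from the paper.

```python
# Illustrative computation of the load voltages after the neutral ("zero")
# conductor breaks in a 230/400 V three-phase, four-wire system. The load
# admittances are assumed values chosen to show the effect, not data from
# the paper.
import cmath

U = 230.0                                    # nominal phase voltage, volts
Ua = U
Ub = U * cmath.exp(-2j * cmath.pi / 3)
Uc = U * cmath.exp(2j * cmath.pi / 3)
Ya, Yb, Yc = 1 / 10, 1 / 50, 1 / 100         # unbalanced load admittances, siemens

# Neutral-point displacement voltage with the neutral conductor broken.
Un = (Ua * Ya + Ub * Yb + Uc * Yc) / (Ya + Yb + Yc)

for name, Uph in (("A", Ua), ("B", Ub), ("C", Uc)):
    print(f"phase {name}: load voltage = {abs(Uph - Un):6.1f} V")
# The heavily loaded phase A sags to roughly 80 V while phases B and C rise
# above 320 V, which is the overvoltage and fire hazard discussed above.
```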

    Evaluating Two-Stream CNN for Video Classification

    Full text link
    Videos contain very rich semantic information. Traditional hand-crafted features are known to be inadequate in analyzing complex video semantics. Inspired by the huge success of deep learning methods in analyzing image, audio and text data, significant efforts have recently been devoted to the design of deep nets for video analytics. Among the many practical needs, classifying videos (or video clips) based on their major semantic categories (e.g., "skiing") is useful in many applications. In this paper, we conduct an in-depth study to investigate important implementation options that may affect the performance of deep nets on video classification. Our evaluations are conducted on top of a recent two-stream convolutional neural network (CNN) pipeline, which uses both static frames and motion optical flows, and has demonstrated competitive performance against the state-of-the-art methods. In order to gain insights and to arrive at a practical guideline, many important options are studied, including network architectures, model fusion, learning parameters and the final prediction methods. Based on the evaluations, very competitive results are attained on two popular video classification benchmarks. We hope that the discussions and conclusions from this work can help researchers in related fields to quickly set up a good basis for further investigations along this very promising direction.
    Comment: ACM ICMR'1
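    One of the options studied, model fusion, combines the predictions of the spatial (RGB-frame) and temporal (optical-flow) networks. The sketch below shows the simplest such option, weighted late fusion of averaged class scores; the network outputs are stubbed with random arrays and the 0.4/0.6 weighting is an assumption, not a result from the paper.

```python
# Minimal sketch of late fusion in a two-stream video classifier: average
# the per-snippet class scores of each stream over the video, then take a
# weighted sum. Stream outputs are stubbed with random arrays; the weights
# and sizes are illustrative assumptions.
import numpy as np

def fuse_two_stream(spatial_scores, temporal_scores, w_spatial=0.4, w_temporal=0.6):
    """Weighted late fusion of class scores from the spatial and temporal streams."""
    video_spatial = spatial_scores.mean(axis=0)    # average over sampled RGB frames
    video_temporal = temporal_scores.mean(axis=0)  # average over optical-flow snippets
    fused = w_spatial * video_spatial + w_temporal * video_temporal
    return int(np.argmax(fused))                   # predicted class index

rng = np.random.default_rng(1)
spatial = rng.random((25, 101))     # 25 frames, 101 classes (UCF101-sized example)
temporal = rng.random((25, 101))    # 25 flow snippets, same class set
print("predicted class:", fuse_two_stream(spatial, temporal))
```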

    Assessment of Workers’ Level of Exposure to Work-Related Musculoskeletal Discomfort in Dewatered Cassava Mash Sieving Process

    Get PDF
    This study was undertaken to assess the level of exposure of processors to work-related musculoskeletal disorders when using the locally developed traditional sieve in the sieving process. A quick ergonomic checklist (QEC), combining the researcher's and the processors' assessments on a risk-assessment checklist, was used, and data were obtained from a sample of one hundred and eight (108) processors randomly selected from three senatorial districts of Rivers State. Thirty-six processors from each zone, comprising 14 males and 22 females, were selected and assessed on the basis of their back, shoulder/arm, wrist/hand and neck posture and frequency of movement during the traditional sieving process. The assessment showed that the highest risk of discomfort occurred in the wrist/hand region, followed by the back, shoulder/arm and neck. The posture used in the sieving process not only exposed the processors to discomfort and pain but also put them at high risk of musculoskeletal disorder, as indicated by a high percentage exposure of 66% on the QEC rating. The result indicates a need for immediate attention and a change to an improved method that reduces the discomfort in the body parts assessed.
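    The 66% figure is a percentage exposure score, i.e. the summed checklist scores expressed as a fraction of the maximum obtainable total. The snippet below shows that arithmetic with hypothetical per-region scores; none of the values are taken from the study.

```python
# Illustrative QEC-style percentage exposure calculation: summed region
# scores divided by the maximum obtainable total. The per-region scores
# and maxima below are hypothetical, not the study's data.
region_scores = {"back": 32, "shoulder/arm": 30, "wrist/hand": 38, "neck": 12}
region_maxima = {"back": 56, "shoulder/arm": 56, "wrist/hand": 46, "neck": 18}

total = sum(region_scores.values())
maximum = sum(region_maxima.values())
exposure_pct = 100 * total / maximum
print(f"percentage exposure: {exposure_pct:.0f}%")   # ~64% with these made-up scores
```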

    Efficient On-the-fly Category Retrieval using ConvNets and GPUs

    Full text link
    We investigate the gains in precision and speed that can be obtained by using Convolutional Networks (ConvNets) for on-the-fly retrieval - where classifiers are learnt at run time for a textual query from downloaded images, and used to rank large image or video datasets. We make three contributions: (i) we present an evaluation of state-of-the-art image representations for object category retrieval over standard benchmark datasets containing 1M+ images; (ii) we show that ConvNets can be used to obtain features which are incredibly performant, and yet much lower dimensional than previous state-of-the-art image representations, and that their dimensionality can be reduced further without loss in performance by compression using product quantization or binarization. Consequently, features with state-of-the-art performance on large-scale datasets of millions of images can fit in the memory of even a commodity GPU card; (iii) we show that an SVM classifier can be learnt within a ConvNet framework on a GPU in parallel with downloading the new training images, allowing for a continuous refinement of the model as more images become available, and simultaneous training and ranking. The outcome is an on-the-fly system that significantly outperforms its predecessors in terms of precision of retrieval, memory requirements, and speed, facilitating accurate on-the-fly learning and ranking in under a second on a single GPU.
    Comment: Published in proceedings of ACCV 201
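    Concretely, the on-the-fly pipeline amounts to fitting a linear SVM on ConvNet features of the query's downloaded positive images against a fixed pool of negatives, then ranking the whole dataset by classifier score. The sketch below illustrates that loop on precomputed features; the ConvNet feature extraction, compression and GPU training are abstracted away, and all names and sizes are assumptions.

```python
# Simplified sketch of on-the-fly category retrieval: fit a linear SVM on
# ConvNet features of the query's positive images versus a fixed negative
# pool, then rank the target dataset by decision value. Features are
# stubbed with random arrays; the real system computes and compresses
# them with a ConvNet and trains on the GPU.
import numpy as np
from sklearn.svm import LinearSVC

def rank_dataset(pos_feats, neg_feats, dataset_feats, top_k=5):
    X = np.vstack([pos_feats, neg_feats])
    y = np.r_[np.ones(len(pos_feats)), np.zeros(len(neg_feats))]
    clf = LinearSVC(C=1.0).fit(X, y)
    scores = clf.decision_function(dataset_feats)
    return np.argsort(scores)[::-1][:top_k]          # indices of best-ranked items

rng = np.random.default_rng(2)
pos = rng.normal(0.5, 1.0, size=(50, 128))           # downloaded query images
neg = rng.normal(0.0, 1.0, size=(500, 128))          # fixed negative pool
dataset = rng.normal(0.0, 1.0, size=(10000, 128))    # image/video dataset to rank
print("top-ranked items:", rank_dataset(pos, neg, dataset))
```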

    On Developing Trends in the Global Economy

    Get PDF
    This article addresses issues related to changes in the global model of production, which are associated with a change in the global economic paradigm, and attempts to characterize the primary causes behind crisis phenomena in the global economy.