47,831 research outputs found

    Fine recycled concrete aggregate as a material replacement in concrete production

    Get PDF
    As a fast-growing nation, Malaysia is undergoing a great deal of development, especially in the construction field. Most buildings nowadays are made mainly of concrete, as it provides many favorable features such as satisfactory compressive strength, durability, availability, versatility and cost effectiveness. However, in pursuing this development, the authorities have sometimes overlooked the construction and demolition (C&D) waste generated along the way. C&D waste is becoming a vital issue, especially in its environmental aspect, in many large cities around the world (Chen et al., 2002). Shen [1] describes C&D waste as the waste generated from renovation, site clearing, demolition, construction, roadwork, land excavation and civil and building construction. C&D waste constitutes a major portion of total solid waste production in the world, and most of it is disposed of in landfills.

    Deep View-Sensitive Pedestrian Attribute Inference in an end-to-end Model

    Full text link
    Pedestrian attribute inference is a demanding problem in visual surveillance that can facilitate person retrieval, search and indexing. To exploit semantic relations between attributes, recent research treats it as a multi-label image classification task. The visual cues hinting at attributes can be strongly localized, and inference of person attributes such as hair, backpack, shorts, etc. is highly dependent on the acquired view of the pedestrian. In this paper we assert this dependence in an end-to-end learning framework and show that a view-sensitive attribute inference is able to learn better attribute predictions. Our proposed model jointly predicts the coarse pose (view) of the pedestrian and learns specialized view-specific multi-label attribute predictions. We show in an extensive evaluation on three challenging datasets (PETA, RAP and WIDER) that our proposed end-to-end view-aware attribute prediction model provides competitive performance and improves on the published state-of-the-art on these datasets. Comment: accepted at BMVC 2017
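    To make the idea concrete, here is a minimal PyTorch sketch of the general scheme the abstract describes: a shared backbone feeds a coarse-view classifier plus one multi-label attribute head per view, and the per-view predictions are combined as a view-weighted sum. The layer sizes, the soft view-gating, and names such as `num_views` and `num_attrs` are illustrative assumptions, not the authors' exact BMVC architecture.

```python
# Minimal sketch of view-sensitive multi-label attribute inference.
# Illustrative only: sizes, the soft view-gating, and the loss pairing
# are assumptions, not the paper's exact model.
import torch
import torch.nn as nn

class ViewSensitiveAttributeNet(nn.Module):
    def __init__(self, backbone, feat_dim=2048, num_views=4, num_attrs=35):
        super().__init__()
        self.backbone = backbone              # any CNN returning (B, feat_dim)
        self.view_head = nn.Linear(feat_dim, num_views)
        # one multi-label attribute classifier per coarse view
        self.attr_heads = nn.ModuleList(
            [nn.Linear(feat_dim, num_attrs) for _ in range(num_views)])

    def forward(self, x):
        f = self.backbone(x)                        # (B, feat_dim)
        view_logits = self.view_head(f)             # (B, num_views)
        view_probs = view_logits.softmax(dim=1)     # soft view weights
        per_view = torch.stack([h(f) for h in self.attr_heads], dim=1)  # (B, V, A)
        # combine view-specific predictions, weighted by the predicted view
        attr_logits = (view_probs.unsqueeze(-1) * per_view).sum(dim=1)  # (B, A)
        return view_logits, attr_logits

# joint training: cross-entropy on the view, BCE on the attributes
view_loss = nn.CrossEntropyLoss()
attr_loss = nn.BCEWithLogitsLoss()
```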

    Multispectral Deep Neural Networks for Pedestrian Detection

    Full text link
    Multispectral pedestrian detection is essential for around-the-clock applications, e.g., surveillance and autonomous driving. We deeply analyze Faster R-CNN for the multispectral pedestrian detection task and then model it as a convolutional network (ConvNet) fusion problem. Further, we discover that ConvNet-based pedestrian detectors trained on color or thermal images separately provide complementary information for discriminating human instances. There is thus large potential to improve pedestrian detection by using color and thermal images in DNNs simultaneously. We carefully design four ConvNet fusion architectures that integrate two-branch ConvNets at different DNN stages, all of which yield better performance compared with the baseline detector. Our experimental results on the KAIST pedestrian benchmark show that the Halfway Fusion model, which performs fusion on the middle-level convolutional features, outperforms the baseline method by 11% and yields a miss rate 3.5% lower than the other proposed architectures. Comment: 13 pages, 8 figures, BMVC 2016 oral
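    The "halfway" fusion idea can be sketched in a few lines of PyTorch: separate early conv stages for color and thermal, a concatenation of mid-level feature maps, and shared later stages. Channel counts and the 1x1 reduction are assumptions; the paper's full Faster R-CNN plumbing is omitted.

```python
# Sketch of halfway fusion of a color branch and a thermal branch.
# Layer widths and the 1x1 reduction after concatenation are assumed.
import torch
import torch.nn as nn

class HalfwayFusion(nn.Module):
    def __init__(self):
        super().__init__()
        def early_stage(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
                nn.MaxPool2d(2),
                nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True),
                nn.MaxPool2d(2))
        self.color_branch = early_stage(3)    # RGB input
        self.thermal_branch = early_stage(1)  # single-channel thermal input
        self.reduce = nn.Conv2d(256, 128, 1)  # 1x1 conv after concatenation
        self.late_stage = nn.Sequential(
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(inplace=True))

    def forward(self, rgb, thermal):
        fc = self.color_branch(rgb)
        ft = self.thermal_branch(thermal)
        fused = torch.cat([fc, ft], dim=1)    # fuse middle-level features
        return self.late_stage(self.reduce(fused))
```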

    Deformable Part-based Fully Convolutional Network for Object Detection

    Full text link
    Existing region-based object detectors are limited to regions with fixed box geometry to represent objects, even if those are highly non-rectangular. In this paper we introduce DP-FCN, a deep model for object detection which explicitly adapts to the shapes of objects with deformable parts. Without additional annotations, it learns to focus on discriminative elements and to align them, and it simultaneously brings more invariance for classification and geometric information to refine localization. DP-FCN is composed of three main modules: a fully convolutional network to efficiently maintain spatial resolution, a deformable part-based RoI pooling layer to optimize the positions of parts and build invariance, and a deformation-aware localization module explicitly exploiting the displacements of parts to improve the accuracy of bounding box regression. We experimentally validate our model and show significant gains. DP-FCN achieves state-of-the-art performance of 83.1% and 80.9% on PASCAL VOC 2007 and 2012 with VOC data only. Comment: Accepted to BMVC 2017 (oral)
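    A simplified sketch of deformable part-based RoI pooling, under stated assumptions: each RoI grid cell ("part") may shift within a small window, the displacement maximizing the pooled response is kept, and the displacements are returned for a deformation-aware localization term. The grid size, search window, and mean-pooling are illustrative; DP-FCN's exact formulation differs in detail.

```python
# Simplified deformable part-based RoI pooling over a (C, H, W) feature map.
# Assumes the RoI lies inside the map and each cell fits within it.
import torch

def deformable_part_pool(feat, roi, grid=3, max_shift=1):
    """feat: (C, H, W) feature map; roi: (x0, y0, x1, y1) in feature coords."""
    C, H, W = feat.shape
    x0, y0, x1, y1 = [int(v) for v in roi]
    ph = max((y1 - y0) // grid, 1)
    pw = max((x1 - x0) // grid, 1)
    pooled, shifts = [], []
    for gy in range(grid):
        for gx in range(grid):
            best_score, best_vec, best_dxy = None, None, (0, 0)
            # search a small window of displacements for this part
            for dy in range(-max_shift, max_shift + 1):
                for dx in range(-max_shift, max_shift + 1):
                    ys = min(max(y0 + gy * ph + dy, 0), H - ph)
                    xs = min(max(x0 + gx * pw + dx, 0), W - pw)
                    vec = feat[:, ys:ys + ph, xs:xs + pw].mean(dim=(1, 2))  # (C,)
                    score = vec.mean()
                    if best_score is None or score > best_score:
                        best_score, best_vec, best_dxy = score, vec, (dx, dy)
            pooled.append(best_vec)
            shifts.append(best_dxy)
    # part features (grid*grid, C) plus per-part displacements, the latter
    # usable by a deformation-aware localization module
    return torch.stack(pooled), shifts
```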

    Play and Learn: Using Video Games to Train Computer Vision Models

    Full text link
    Video games are a compelling source of annotated data, as they can readily provide fine-grained ground truth for diverse tasks. However, it is not clear whether synthetically generated data resembles real-world images closely enough to improve the performance of computer vision models in practice. We present experiments assessing how well systems trained on synthetic RGB images extracted from a video game perform on real-world data. We collected over 60,000 synthetic samples from a modern video game under conditions similar to those of the real-world CamVid and Cityscapes datasets. We provide several experiments demonstrating that the synthetically generated RGB images can be used to improve the performance of deep neural networks on both image segmentation and depth estimation. These results show that a convolutional network trained on synthetic data achieves a test error similar to that of a network trained on real-world data for dense image classification. Furthermore, the synthetically generated RGB images can provide similar or better results than the real-world datasets if a simple domain adaptation technique is applied. Our results suggest that collaboration with game developers for an accessible interface to gather data is potentially a fruitful direction for future work in computer vision. Comment: To appear in the British Machine Vision Conference (BMVC), September 2016. v2: fixed a typo in the references
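    One plausible reading of the "simple domain adaptation technique" is a pretrain-then-fine-tune recipe, sketched below for a dense segmentation model. Dataset names (`synthetic_set`, `real_set`), epoch counts, and learning rates are placeholders, not the paper's exact schedule.

```python
# Sketch: pretrain on a large synthetic (game-derived) set, then adapt
# with a short fine-tune on a small real-world set. All hyperparameters
# here are placeholders.
import torch
from torch.utils.data import DataLoader

def train(model, loader, epochs, lr, device="cuda"):
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = torch.nn.CrossEntropyLoss(ignore_index=255)  # void label
    model.to(device).train()
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            out = model(images.to(device))           # (B, classes, H, W)
            loss = loss_fn(out, labels.to(device))   # dense per-pixel loss
            loss.backward()
            opt.step()

# 1) train on the synthetic set ...
# train(model, DataLoader(synthetic_set, batch_size=8, shuffle=True), 30, 1e-2)
# 2) ... then adapt with a short fine-tune on the real set
# train(model, DataLoader(real_set, batch_size=8, shuffle=True), 5, 1e-3)
```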

    Coulomb interaction and stability of CE-type structure in half-doped manganites, reply

    Full text link
    In his Comment (cond-mat/0104353), Shen points out that the on-site Coulomb interaction, which can cause charge order in half-doped manganites, also destabilizes the magnetic CE-phase observed in these systems. This is a valid observation, but it is not a priori clear whether, in the relevant parameter regime, the C-phase is indeed lower in energy than the CE-phase within our model. We conclude that the proposed model, which correctly captures the interplay of spin, charge and orbital degrees of freedom in the half-doped manganites and gives a reasonable description of their electronic structure, is by itself not sufficient for a precise determination of the regions of stability of the different phases. For this, several other factors should be taken into account. Comment: 1 page, to appear in Phys. Rev. Lett.

    An extension of the Beurling-Chen-Hadwin-Shen theorem for noncommutative Hardy spaces associated with finite von Neumann algebras

    Get PDF
    In 2015, Yanni Chen, Don Hadwin and Junhao Shen proved a noncommutative version of Beurling's theorem for a continuous unitarily invariant norm α on a tracial von Neumann algebra (M, τ) such that α is ‖·‖₁-dominating with respect to τ. The role of H^∞ is played by a maximal subdiagonal algebra A. In this talk, we first show that if α is a continuous normalized unitarily invariant norm on (M, τ), then there exist a faithful normal tracial state ρ on M and a constant c > 0 such that α is a c‖·‖₁-dominating norm on (M, ρ). Moreover, ρ(x) = τ(xg) for x in M, where g is positive in L¹(Z, τ) and Z is the center of M. Here c and ρ are not unique. However, if there are a c and a ρ such that the Fuglede-Kadison determinant of g is positive, then the Beurling-Chen-Hadwin-Shen theorem holds for L^α(M, τ). The key ingredients in the proof of our result include a factorization theorem and a density theorem for L^α(M, ρ).
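    For readers who want the norm relations at a glance, here is a display-form restatement of the abstract's conditions in LaTeX. The constant convention (how c enters the domination inequality) is an assumption inferred from context, not taken verbatim from the talk.

```latex
% Display-form restatement (constant convention assumed): alpha dominates
% a positive multiple of the trace norm induced by rho, and rho is given
% by a positive central density g.
\alpha(x) \;\ge\; c\,\|x\|_{1,\rho} \quad (x \in M), \qquad
\rho(x) \;=\; \tau(xg), \quad 0 \le g \in L^{1}(Z,\tau).
```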

    A slight improvement to Korenblum's constant

    Get PDF
    Let A²(D) be the Bergman space over the open unit disk D in the complex plane. Korenblum conjectured that there is an absolute constant c ∈ (0,1) such that whenever |f(z)| ≤ |g(z)| in the annulus c < |z| < 1, then ‖f‖ ≤ ‖g‖. In 2004 C. Wang gave an upper bound on c, namely c < 0.67795, and in 2006 A. Schuster gave a lower bound, c > 0.21. In this paper we slightly improve the upper bound for c.
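    Restated in display form (all constants as quoted in the abstract):

```latex
% Korenblum's maximum principle for the Bergman space A^2(D), with the
% bounds quoted in the abstract:
\exists\, c \in (0,1): \quad
|f(z)| \le |g(z)| \ \text{on } c < |z| < 1
\;\Longrightarrow\;
\|f\|_{A^{2}(D)} \le \|g\|_{A^{2}(D)},
\qquad 0.21 < c < 0.67795 .
```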

    Class-Weighted Convolutional Features for Visual Instance Search

    Get PDF
    Image retrieval in realistic scenarios targets large dynamic datasets of unlabeled images. In these cases, training or fine-tuning a model every time new images are added to the database is neither efficient nor scalable. Convolutional neural networks trained for image classification over large datasets have proven to be effective feature extractors for image retrieval. The most successful approaches are based on encoding the activations of convolutional layers, as they convey the image's spatial information. In this paper, we go beyond this spatial information and propose a local-aware encoding of convolutional features based on semantic information predicted in the target image. To this end, we obtain the most discriminative regions of an image using Class Activation Maps (CAMs). CAMs are based on the knowledge contained in the network, and our approach therefore has the additional advantage of not requiring external information. In addition, we use CAMs to generate object proposals during an unsupervised re-ranking stage after a first fast search. Our experiments on two publicly available datasets for instance retrieval, Oxford5k and Paris6k, demonstrate the competitiveness of our approach, outperforming the current state-of-the-art when using off-the-shelf models trained on ImageNet. The source code and model used in this paper are publicly available at http://imatge-upc.github.io/retrieval-2017-cam/. Comment: To appear in the British Machine Vision Conference (BMVC), September 2017
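    A minimal sketch of the class-weighted encoding idea, assuming a GAP+FC classifier (which is what CAM requires): weight the last conv activations by a class activation map before pooling them into a global descriptor. The layer choice, class selection, and sum-pooling are illustrative, not the paper's exact pipeline.

```python
# Sketch: CAM-weighted global descriptor from the last conv layer.
# Assumes the network ends in global average pooling + a linear classifier.
import torch
import torch.nn.functional as F

def cam_weighted_descriptor(conv_feats, fc_weights, class_idx):
    """conv_feats: (C, H, W) last conv activations;
    fc_weights: (num_classes, C) classifier weights;
    returns an L2-normalized global descriptor."""
    # CAM: class-specific weighted sum over channels
    cam = torch.einsum("c,chw->hw", fc_weights[class_idx], conv_feats)
    cam = cam.clamp(min=0)
    cam = cam / (cam.max() + 1e-8)        # normalize map to [0, 1]
    weighted = conv_feats * cam           # spatially re-weight the features
    desc = weighted.sum(dim=(1, 2))       # sum-pool to a (C,) vector
    return F.normalize(desc, dim=0)
```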