31 research outputs found

    Semantics-Driven Large-Scale 3D Scene Retrieval


    Impact of temperature and switching rate on forward and reverse conduction of GaN and SiC cascode devices: A technology evaluation

    This paper provides the first comprehensive study of the forward and reverse conduction and reliability performance of Gallium Nitride (GaN) and Silicon Carbide (SiC) power cascode devices, in comparison with standard silicon and SiC power MOSFETs and silicon superjunction MOSFETs. The impacts of temperature and external gate resistance are investigated, and a practical yet accurate analytical model is developed to calculate the switching rate of cascode devices. Third-quadrant operation of the devices through their body diodes is also studied, along with unclamped switching properties used to establish the avalanche breakdown limits of GaN and SiC cascodes.
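
    The paper's analytical switching-rate model is not reproduced in this abstract. As a rough illustration of the quantities involved, the sketch below uses the classic first-order gate-charge estimate, where the drain-voltage slew rate is set by the gate current charging the Miller (gate-drain) capacitance. This is a generic textbook approximation, not the paper's model, and all component values are hypothetical.

```python
# First-order estimate of drain-voltage slew rate (dv/dt) during the Miller
# plateau: the gate driver supplies current through the external gate
# resistance, and that current charges the gate-drain (Miller) capacitance.
# Generic textbook approximation, NOT the analytical model from the paper;
# all values below are hypothetical examples.

V_drive = 12.0      # gate driver voltage (V), hypothetical
V_plateau = 4.5     # Miller plateau voltage (V), hypothetical
R_g_ext = 10.0      # external gate resistance (ohm), hypothetical
C_gd = 15e-12       # gate-drain (Miller) capacitance (F), hypothetical

i_gate = (V_drive - V_plateau) / R_g_ext   # gate current at the plateau (A)
dv_dt = i_gate / C_gd                      # drain-voltage slew rate (V/s)

print(f"Estimated dv/dt ~ {dv_dt / 1e9:.1f} kV/us")  # 1e9 V/s == 1 kV/us
```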

    A Survey of Recent 3D Scene Analysis and Processing Methods

    With ubiquitous cameras and popular 3D scanning and capture devices making 2D/3D scene data easy to acquire, many scene-understanding applications have emerged, along with important and interesting research problems in processing, analyzing, and understanding the available scene data. In recent years there has been significant advancement across the different research directions in this field, and many novel 3D scene analysis and processing methods have been proposed in each direction. This paper provides a review and critical evaluation of the most recent (i.e., within the last five years) data-driven and semantics-driven 3D scene analysis and processing methods, as well as several relevant 3D scene datasets. For each method, its advantages and disadvantages are discussed after an overview and/or analysis of the approach. Finally, based on the review, we propose several promising future research directions in this field.

    Distinct tissue-specific transcriptional regulation revealed by gene regulatory networks in maize

    Background: Transcription factors (TFs) are proteins that bind to DNA sequences and regulate gene expression. Many TFs are master regulators in cells that contribute to tissue-specific and cell-type-specific gene expression patterns in eukaryotes. Maize has been a model organism for over one hundred years, but little is known about its tissue-specific gene regulation through TFs. In this study, we used a network approach to elucidate gene regulatory networks (GRNs) in four maize tissues (leaf, root, SAM, and seed). We utilized GENIE3, a machine-learning algorithm, combined with a large quantity of RNA-Seq expression data to construct four tissue-specific GRNs. Unlike some other techniques, this approach is not limited by the availability of high-quality Position Weight Matrices (PWMs) and can therefore predict GRNs for over 2000 TFs in maize.

    Results: Although many TFs were expressed across multiple tissues, a multi-tiered analysis predicted tissue-specific regulatory functions for many transcription factors. Some well-studied TFs emerged within the four tissue-specific GRNs, and the GRN predictions matched expectations based upon published results for many of these examples. Our GRNs were also validated by ChIP-Seq datasets (KN1, FEA4, and O2). Key TFs were identified for each tissue and matched expectations for key regulators in each tissue, based on GO enrichment and identity with known regulatory factors for that tissue. We also found functional modules in each network by clustering analysis with the MCL algorithm.

    Conclusions: By combining publicly available genome-wide expression data and network analysis, we can uncover GRNs at tissue-level resolution in maize. Since ChIP-Seq data and PWMs are still limited in several model organisms, our study provides a uniform platform that can be adapted to any species with genome-wide expression data to construct GRNs. We also present a publicly available database of maize tissue-specific GRNs (mGRN, https://www.bio.fsu.edu/mcginnislab/mgrn/) for easy querying. All source code and data are available on GitHub (https://github.com/timedreamer/maize_tissue-specific_GRN).
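
    As a hedged illustration of the GENIE3 idea referenced above (tree-ensemble importance scores as regulatory edge weights), the sketch below regresses each target gene's expression on TF expression with a random forest and keeps the feature importances as putative TF-to-target edges. It is a minimal re-implementation of the general technique, not the authors' pipeline; the expression values and gene names are made up, and a real analysis would use the published GENIE3 implementation on genome-wide RNA-Seq data.

```python
# Minimal sketch of the GENIE3-style idea: for every target gene, fit a
# random forest predicting its expression from TF expression, then use the
# forest's feature importances as putative TF -> target edge weights.
# NOT the authors' pipeline; data and gene names here are hypothetical.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

tf_names = ["TF1", "TF2", "TF3"]          # hypothetical transcription factors
target_names = ["geneA", "geneB"]         # hypothetical target genes
n_samples = 50                            # e.g. RNA-Seq libraries

tf_expr = pd.DataFrame(rng.normal(size=(n_samples, len(tf_names))), columns=tf_names)
target_expr = pd.DataFrame(rng.normal(size=(n_samples, len(target_names))), columns=target_names)

edges = []
for target in target_names:
    rf = RandomForestRegressor(n_estimators=200, random_state=0)
    rf.fit(tf_expr.values, target_expr[target].values)
    for tf, importance in zip(tf_names, rf.feature_importances_):
        edges.append((tf, target, importance))

grn = pd.DataFrame(edges, columns=["TF", "target", "importance"])
print(grn.sort_values("importance", ascending=False).head())
```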

    Sketch/Image-Based 3D Scene Retrieval: Benchmark, Algorithm, Evaluation

    Sketch/image-based 3D scene retrieval is the task of retrieving man-made 3D scene models given a user's hand-drawn 2D scene sketch or a 2D scene image, usually captured by a camera. It is a new and very challenging research topic in the field of 3D object retrieval due to the semantic gap between the representations: 3D scene models or views differ from both non-realistic 2D scene sketches and realistic 2D scene images. Owing to the intuitiveness of sketching and the ubiquitous availability of image capture, this research topic has vast applications, such as 3D scene reconstruction, autonomous driving, 3D geometry video retrieval, and 3D AR/VR entertainment. To advance this interesting and important research, we build the currently largest and most comprehensive 2D scene sketch/image-based 3D scene retrieval benchmark, develop a convolutional neural network (CNN)-based 3D scene retrieval algorithm, and finally conduct an evaluation on the benchmark.
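
    The benchmark and CNN architecture are detailed in the paper itself, not in this abstract. The sketch below only illustrates the general cross-domain retrieval recipe it describes: embed the 2D query (sketch or image) and rendered views of candidate 3D scenes with a CNN, then rank scenes by cosine similarity. The backbone choice and tensor shapes are illustrative assumptions, not the authors' network.

```python
# Hedged sketch of CNN-embedding-based cross-domain retrieval. The backbone
# and shapes are illustrative assumptions, not the architecture from the paper.
import torch
import torch.nn.functional as F
import torchvision.models as models

backbone = models.resnet18(weights=None)   # untrained backbone, for illustration only
backbone.fc = torch.nn.Identity()          # use pooled features as the embedding
backbone.eval()

def embed(images: torch.Tensor) -> torch.Tensor:
    """images: (N, 3, 224, 224) -> L2-normalised embeddings (N, 512)."""
    with torch.no_grad():
        return F.normalize(backbone(images), dim=1)

query = torch.rand(1, 3, 224, 224)          # a sketch or photo, hypothetical data
scene_views = torch.rand(10, 3, 224, 224)   # one rendered view per candidate 3D scene

similarity = embed(query) @ embed(scene_views).T          # (1, 10) cosine similarities
ranking = similarity.argsort(dim=1, descending=True)      # best-matching scenes first
print("retrieval order of scene indices:", ranking[0].tolist())
```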

    Semantic Tree-Based 3D Scene Model Recognition

    © 2020 IEEE. 3D scene recognition is important for many applications, including robotics, autonomous driving, augmented reality (AR), virtual reality (VR), and 3D movie and game production. A large amount of semantic information (i.e., objects, object parts, and object groups) exists in 3D scene models. To significantly improve 3D scene recognition accuracy, we incorporate such semantic information into the recognition process by building a semantic scene tree, and we propose a deep random field (DRF) model-based semantic 3D scene recognition approach. Experiments demonstrate that the semantic approach can effectively capture the semantic information of 3D scene models, accurately measure their similarities, and therefore greatly enhance recognition performance. Code, data, and experimental results can be found on the project homepage.
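
    The abstract does not spell out how the semantic scene tree is represented. As a minimal sketch, assuming the hierarchy it mentions (scene, object groups, objects, object parts), the structure below captures that nesting with a simple node type. The class and field names are hypothetical illustrations, not the paper's data structure, and the DRF model itself is not reproduced here.

```python
# Minimal, hypothetical sketch of a semantic scene tree: a scene node whose
# children are object groups, whose children are objects, whose children are
# object parts. Names and fields are illustrative assumptions only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SceneNode:
    label: str                                            # e.g. "object group", "object", "part"
    children: List["SceneNode"] = field(default_factory=list)

    def add(self, child: "SceneNode") -> "SceneNode":
        self.children.append(child)
        return child

    def leaves(self) -> List["SceneNode"]:
        """Collect leaf nodes (e.g. object parts), useful for similarity measures."""
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]

# Hypothetical example: a street scene with one object group.
scene = SceneNode("street scene")
vehicles = scene.add(SceneNode("object group: vehicles"))
car = vehicles.add(SceneNode("object: car"))
car.add(SceneNode("part: wheel"))
car.add(SceneNode("part: door"))
print([n.label for n in scene.leaves()])
```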

    I2S2: Image-To-Scene Sketch Translation Using Conditional Input and Adversarial Networks

    Image generation from sketches is a popular and well-studied computer vision problem. However, the inverse problem of image-to-sketch (I2S) synthesis remains open and challenging, let alone image-to-scene sketch (I2S2) synthesis, especially when full-scene sketch generation is desired. In this paper, we propose a framework for generating full-scene sketch representations from natural scene images, aiming to generate outputs that approximate hand-drawn scene sketches. Specifically, we exploit generative adversarial models to produce full-scene sketches from arbitrary input images, which serve as conditions that guide the distribution mapping during adversarial learning. To improve the use of such conditions, we further investigate edge detection solutions and propose to utilize Holistically-nested Edge Detection (HED) maps to condition the generative model. We conduct extensive experiments to validate the proposed framework and provide detailed quantitative and qualitative evaluations to demonstrate its effectiveness. In addition, we demonstrate the flexibility of the proposed framework by using different conditional inputs, such as the Canny edge detector.
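
    The paper conditions its adversarial generator on edge maps (HED primarily, with Canny as an alternative conditional input). The sketch below only shows a Canny-based conditioning input being prepared with OpenCV, since that step is standard; the thresholds and the idea of concatenating the edge map with the image as the generator's input are illustrative assumptions, not the paper's exact configuration, and the generator itself is not shown.

```python
# Hedged sketch: build a conditional input for an image-to-sketch generator
# from a Canny edge map. Thresholds and the channel-concatenation scheme are
# illustrative assumptions, not the configuration used in the paper
# (which primarily conditions on HED edge maps).
import cv2
import numpy as np

# Stand-in for a real scene photo (random uint8 image, hypothetical).
image = np.random.default_rng(0).integers(0, 256, size=(256, 256, 3)).astype(np.uint8)

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 100, 200)            # Canny edge map, uint8 values in {0, 255}

# One possible conditioning scheme: stack the image with its edge map so a
# conditional generator would see a 4-channel input (H, W, 4), scaled to [0, 1].
condition = np.concatenate([image, edges[..., None]], axis=-1).astype(np.float32) / 255.0
print("conditional generator input shape:", condition.shape)
```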

    Legislative Documents

    Also, variously referred to as: House bills; House documents; House legislative documents; legislative documents; General Court documents