
    LED receiver impedance and its effects on LED-LED visible light communications

    This paper experimentally demonstrates that the AC impedance spectrum of an LED used as a photodetector depends heavily on the received optical power, which may cause an impedance mismatch between the LED and the subsequent trans-impedance amplifier. The optical-power-dependent impedance of the LED is well fitted by a modified dispersive carrier transport model for inorganic semiconductors. The bandwidth of the LED-LED visible light communication link is further shown to decrease with the optical power received by the LED. This leads to a trade-off between link bandwidth and SNR, and consequently affects the choice of a proper data modulation scheme.
    Comment: 9 pages, 9 figures, submitted to Optics Express
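The bandwidth-SNR trade-off described above can be illustrated with a toy small-signal model. The parallel-RC photodetector model, the linear capacitance-vs-power law, and every parameter value below are assumptions chosen for illustration, not values from the paper:

```python
import math

def f3db(r_ohm, c_farad):
    """-3 dB bandwidth of a simple parallel-RC photodetector front end."""
    return 1.0 / (2.0 * math.pi * r_ohm * c_farad)

def led_small_signal(p_opt_mw, c0=50e-12, k=20e-12, r=10e3):
    """Toy model: effective LED capacitance rises with received optical
    power (hypothetical linear law), so link bandwidth falls while the
    received signal power, hence SNR, rises."""
    c = c0 + k * p_opt_mw                   # capacitance grows with carrier density
    bw = f3db(r, c)
    snr_db = 20.0 * math.log10(p_opt_mw)    # signal power ~ P_opt^2; noise ignored
    return bw, snr_db

for p in (0.1, 1.0, 5.0):
    bw, snr = led_small_signal(p)
    print(f"P_opt={p:4.1f} mW  BW={bw/1e3:8.1f} kHz  rel. SNR={snr:6.1f} dB")
```

Sweeping the optical power shows the two figures of merit moving in opposite directions, which is the trade-off the abstract says drives the choice of modulation scheme.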

    Phototransistor-like Light Controllable IoT Sensor based on Series-connected RGB LEDs

    An IoT optical sensor based on series-connected RGB LEDs is designed, which exhibits a light-controllable optical-to-electrical response like a phototransistor. The IoT sensor has maximal AC and DC responsivities to violet light mixed from blue and red light. Its responsivity to blue light is programmable by the impinging red or green light. A theoretical model based on the light-dependent impedance is developed to interpret this novel optoelectronic response. Such an IoT sensor can simultaneously serve as the transmitter and the receiver in an IoT optical communication network, thus significantly reducing system complexity.
    Comment: 4 pages, 2 figures, submitted to Electronic Device Letters
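The light-dependent-impedance interpretation can be sketched as an AC voltage divider over the series LED string: illuminating the red LED changes its impedance and so reprograms how strongly the output responds to blue light. The impedance law `z_led` and all constants are hypothetical, chosen only to make the programmability visible:

```python
def z_led(p_opt_mw, z_dark=100e3, k=5.0):
    """Hypothetical light-dependent LED impedance: falls with optical power."""
    return z_dark / (1.0 + k * p_opt_mw)

def blue_responsivity(p_red_mw, dp=1e-3):
    """Finite-difference responsivity of the divider output to blue light,
    with the output tapped across the blue LED of the R-G-B series string."""
    def vout(p_blue_mw):
        z_b = z_led(p_blue_mw)
        z_r = z_led(p_red_mw)
        z_g = z_led(0.0)                      # green LED kept dark here
        return z_b / (z_b + z_r + z_g)        # divider ratio across the blue LED
    return (vout(dp) - vout(0.0)) / dp

for p_red in (0.0, 2.0):
    print(f"P_red={p_red} mW -> blue responsivity {blue_responsivity(p_red):+.3f} per mW")
```

The two printed slopes differ, i.e. the red illumination acts as a control input, which is the phototransistor-like behavior the abstract describes.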

    Intra-Storm Temporal Patterns of Rainfall in China Using Huff Curves

    Intra-storm temporal distributions of precipitation are important for understanding and modeling infiltration, runoff, and erosion processes. A convenient and established method for characterizing precipitation hyetographs is the use of non-dimensional Huff curves. In this study, 11,801 erosive rainfall events with 1 min resolution data, collected over 30 to 40 years from 18 weather stations located across the central and eastern parts of China, were analyzed to produce Huff curves. Each event was classified according to the quartile period within the event that contained the greatest fraction of rainfall. The results showed that 38.3% of events had the maximum rainfall amounts in the first quartile, followed by the second (26.8%), third (22.4%), and fourth (12.5%) quartiles. Quartile I and II events were generally characterized by shorter durations and heavier intensities. Quartile I events averaged 23% shorter durations than quartile IV events, whereas their mean intensity (Iavg), mean maximum 30 min intensity (I30), and mean rainfall erosivity index (EI30) were 1.71, 1.22, and 1.23 times greater, respectively, than those for quartile IV; the differences were significant at the 5% level based on two-sample t-tests. The proportion of quartile I events was lower for events of longer duration, whereas the proportions of quartile III and IV events were greater. Two-sample Kolmogorov-Smirnov tests suggested that regional Huff curves can be derived for the central and eastern parts of China. Regional Huff curves developed in this study exhibited dissimilarities in the percentages of storms in different quartiles and in the shapes of the curves compared to those reported for Illinois, peninsular Malaysia, and Santa Catarina in Brazil.
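The two core operations described above, classifying an event by the quartile of its duration that holds the most rainfall and reducing it to a non-dimensional cumulative curve, can be sketched in a few lines. The function names and the toy event are illustrative, not the study's code:

```python
import numpy as np

def quartile_class(rain):
    """Quartile (1-4) of the event duration containing the largest rainfall share."""
    rain = np.asarray(rain, dtype=float)
    quarters = np.array_split(rain, 4)             # four equal-duration periods
    return int(np.argmax([q.sum() for q in quarters])) + 1

def huff_curve(rain, n_points=11):
    """Non-dimensional cumulative hyetograph: fraction of total storm depth
    versus fraction of storm duration, resampled to n_points."""
    rain = np.asarray(rain, dtype=float)
    cum = np.concatenate([[0.0], np.cumsum(rain)]) / rain.sum()
    t = np.linspace(0.0, 1.0, len(cum))
    return np.interp(np.linspace(0.0, 1.0, n_points), t, cum)

event = [8, 5, 3, 2, 1, 1, 0.5, 0.5]               # hypothetical 1-min depths (mm)
print(quartile_class(event))                        # front-loaded storm -> quartile 1
print(huff_curve(event))
```

A regional Huff curve is then a percentile (e.g. the median) of these per-event curves taken over all events of a given quartile class.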

    DeepSeq: Deep Sequential Circuit Learning

    Circuit representation learning is a promising research direction in the electronic design automation (EDA) field. With sufficient data for pre-training, the learned general yet effective representation can help to solve multiple downstream EDA tasks by fine-tuning it on a small set of task-related data. However, existing solutions only target combinational circuits, significantly limiting their applications. In this work, we propose DeepSeq, a novel representation learning framework for sequential netlists. Specifically, we introduce a dedicated graph neural network (GNN) with a customized propagation scheme to exploit the temporal correlations between gates in sequential circuits. To ensure effective learning, we propose to use a multi-task training objective with two sets of strongly related supervision: logic probability and transition probability at each node. A novel dual attention aggregation mechanism is introduced to facilitate learning both tasks efficiently. Experimental results on various benchmark circuits show that DeepSeq outperforms other GNN models for sequential circuit learning. We evaluate the generalization capability of DeepSeq on a downstream power estimation task. After fine-tuning, DeepSeq can accurately estimate power across various circuits under different workloads.
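The two supervision signals named in the abstract, logic probability and transition probability per node, can be computed from a simulated signal trace roughly as follows. The exact definitions are inferred from the abstract rather than taken from the paper, so treat this as a sketch:

```python
import numpy as np

def node_supervision(waveform):
    """Per-node supervision targets in the DeepSeq-style multi-task objective:
    logic probability  = fraction of cycles the signal is logic 1,
    transition probability = fraction of cycle boundaries where it toggles.
    (Definitions inferred from the abstract; not the authors' code.)"""
    w = np.asarray(waveform, dtype=int)
    logic_p = float(w.mean())
    trans_p = float(np.mean(w[1:] != w[:-1])) if len(w) > 1 else 0.0
    return logic_p, trans_p

print(node_supervision([0, 1, 1, 0, 1, 0, 0, 1]))
```

Both targets are cheap to obtain by logic simulation, which is what makes them practical labels for pre-training; transition probability is also the quantity that dominates dynamic power, linking the objective to the downstream power estimation task.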

    SGFormer: Semantic Graph Transformer for Point Cloud-based 3D Scene Graph Generation

    In this paper, we propose a novel model called SGFormer, Semantic Graph TransFormer for point cloud-based 3D scene graph generation. The task aims to parse a point cloud-based scene into a semantic structural graph, with the core challenge of modeling the complex global structure. Existing methods based on graph convolutional networks (GCNs) suffer from the over-smoothing dilemma and can only propagate information from limited neighboring nodes. In contrast, SGFormer uses Transformer layers as the base building block to allow global information passing, with two types of newly-designed layers tailored for the 3D scene graph generation task. Specifically, we introduce the graph embedding layer to best utilize the global information in graph edges while maintaining comparable computation costs. Furthermore, we propose the semantic injection layer to leverage linguistic knowledge from a large-scale language model (i.e., ChatGPT) to enhance objects' visual features. We benchmark our SGFormer on the established 3DSSG dataset and achieve a 40.94% absolute improvement in relationship prediction's R@50 and an 88.36% boost on the subset with complex scenes over the state-of-the-art. Our analyses further show SGFormer's superiority in the long-tail and zero-shot scenarios. Our source code is available at https://github.com/Andy20178/SGFormer.
    Comment: To be published in the Thirty-Eighth AAAI Conference on Artificial Intelligence
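The R@50 figure reported above is the standard recall-at-K metric over predicted relationship triplets: the fraction of ground-truth (subject, predicate, object) triplets that appear among the top-K scored predictions. A minimal sketch, where the tuple representation of triplets is an assumption for illustration:

```python
def recall_at_k(ranked_triplets, gt_triplets, k=50):
    """Recall@K: share of ground-truth (subject, predicate, object) triplets
    found among the K highest-scored predictions."""
    top_k = set(ranked_triplets[:k])
    gt = set(gt_triplets)
    return len(gt & top_k) / len(gt)

gt = [("chair", "standing_on", "floor"), ("lamp", "standing_on", "table")]
ranked = [("chair", "standing_on", "floor"), ("chair", "close_by", "lamp")]
print(recall_at_k(ranked, gt, k=50))  # -> 0.5: one of two GT triplets recovered
```

An absolute improvement in R@50 therefore means a larger share of true scene relationships surfacing in the model's top-50 predictions.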