464 research outputs found

    Design and Optimization of HVAC System of Spacecraft

    Get PDF

    The Warning of the Listed Company According to the Market Quality Indexes

    Get PDF
    All listed companies on the Shanghai Stock Exchange in 2008 were ranked by total market capitalization, revenue, and net profit. The bottom 100 listed companies formed one class, and 100 randomly selected listed companies formed a second class. A probability warning model of market quality for listed companies was established with the two classes as the categorical dependent variable and, as independent variables, the price impact index and the excess volatility rate, which were selected from 11 market quality indexes by the forward stepwise method. When the predicted probability exceeded 0.5, the listed company's market quality was flagged for warning; otherwise, the listed company's market quality was judged to be good. The accuracy rate of the model was 87.7%.
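
    As a rough illustration only, the sketch below fits a warning model of this kind with scikit-learn; the column names and the randomly generated data are hypothetical, and the paper does not specify its estimation software.

    ```python
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    # Hypothetical sample: one row per listed company, with the two selected
    # market quality indexes and a label (1 = bottom-100 class, 0 = random class).
    df = pd.DataFrame({
        "price_impact_index": rng.random(200),
        "excess_volatility_rate": rng.random(200),
        "warned": rng.integers(0, 2, 200),
    })

    X = df[["price_impact_index", "excess_volatility_rate"]]
    y = df["warned"]

    model = LogisticRegression().fit(X, y)

    # Flag a market-quality warning when the predicted probability exceeds 0.5.
    df["warning"] = model.predict_proba(X)[:, 1] > 0.5
    print(f"In-sample accuracy: {model.score(X, y):.1%}")  # paper reports 87.7%
    ```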

    The Discrimination Classification in the Listed Companies in Accordance with the Market Quality Indexes

    Get PDF
    All listed companies on the Shanghai Stock Exchange in 2008 were ranked by total market capitalization, revenue, and net profit. The top 100 and the bottom 100 listed companies were selected, and a further 100 listed companies were drawn at random. Two discriminant classification functions for listed companies based on market quality indexes were established with the three classes as the categorical dependent variable and, as independent variables, the price impact index, the liquidity index, the large-transaction cost, and the excess volatility ratio, which were selected from the 11 market quality indexes by the forward stepwise method. Verified by original back substitution, the accuracy rate of the discriminant classification functions was 77.74%, and the classification results were significant. Key words: price impact index; liquidity index; large-transaction cost; excess volatility ratio; discriminant classification function
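
    A minimal sketch of a comparable three-class discriminant analysis is shown below, assuming scikit-learn's linear discriminant analysis and hypothetical data; the paper's actual estimation procedure is not specified.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(0)
    # Hypothetical features per company: price impact index, liquidity index,
    # large-transaction cost, excess volatility ratio.
    X = rng.random((300, 4))
    # Classes: 0 = top 100, 1 = bottom 100, 2 = random 100.
    y = np.repeat([0, 1, 2], 100)

    lda = LinearDiscriminantAnalysis().fit(X, y)

    # "Original back substitution": score the model on the same companies it was
    # fit on (the paper reports 77.74% accuracy this way).
    print(f"Resubstitution accuracy: {lda.score(X, y):.2%}")
    ```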

    MOSFET Modulated Dual Conversion Gain CMOS Image Sensors

    Get PDF
    In recent years, vision systems based on CMOS image sensors have gained significant ground over those based on charge-coupled devices (CCD). The main advantages of CMOS image sensors are their high level of integration, random accessibility, and low-voltage, low-power operation. Previously proposed high dynamic range enhancement schemes focused mainly on extending the sensor dynamic range at the high illumination end; dynamic range extension at the low illumination end has not been addressed. Since most applications require low-noise, high-sensitivity characteristics for imaging the dark region as well as dynamic range expansion toward the bright region, the availability of a low-noise, high-sensitivity pixel device is particularly important. In this dissertation, a dual-conversion-gain (DCG) pixel architecture was proposed; this architecture increases the signal-to-noise ratio (SNR) and the dynamic range of CMOS image sensors at both the low and high illumination ends. The dual conversion gain pixel improves the dynamic range by changing the conversion gain based on the illumination level without increasing artifacts or raising the imaging readout noise floor. A MOSFET is used to modulate the capacitance of the charge sensing node. Under high illumination conditions, a low conversion gain is used to achieve higher full well capacity and wider dynamic range. Under low light conditions, a high conversion gain is enabled to lower the readout noise and achieve excellent low-light performance. A sensor prototype using the new pixel architecture with a 5.6 μm pixel pitch was designed and fabricated using Micron Technology’s 130 nm 3-metal, 2-poly silicon process. The peripheral circuitry was designed to read out the pixel and support the pixel characterization needs. The pixel design, readout timing, and operation voltage were optimized. A detailed sensor characterization was performed; a conversion gain of 127 μV/e⁻ was achieved for the high conversion gain mode and 30.8 μV/e⁻ for the low conversion gain mode. Characterization results confirm that a 42 ke⁻ linear full well was achieved for the low conversion gain mode and 10.5 ke⁻ for the high conversion gain mode. An average readout noise of 2.1 e⁻ was measured for the high conversion gain mode and 8.6 e⁻ for the low conversion gain mode. The total sensor dynamic range was extended to 86 dB by combining the two modes of operation, with a 46.2 dB maximum SNR. Several images were taken by the prototype sensor under different illumination levels. The simply processed color images show the clear advantage of the high conversion gain mode for low-light imaging.
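
    The reported figures can be checked with a short back-of-the-envelope calculation, assuming the conventional definitions of dynamic range and maximum SNR; the numbers below are taken from the abstract.

    ```python
    import math

    full_well_lcg = 42_000   # e-, linear full well, low conversion gain mode
    full_well_hcg = 10_500   # e-, linear full well, high conversion gain mode
    read_noise_hcg = 2.1     # e- rms, high conversion gain mode
    read_noise_lcg = 8.6     # e- rms, low conversion gain mode

    # Combining the two modes: largest signal (LCG) over the lowest noise (HCG).
    dr_combined = 20 * math.log10(full_well_lcg / read_noise_hcg)
    # Shot-noise-limited maximum SNR at full well.
    snr_max = 20 * math.log10(math.sqrt(full_well_lcg))

    print(f"Combined dynamic range ≈ {dr_combined:.1f} dB")  # ≈ 86 dB
    print(f"Maximum SNR            ≈ {snr_max:.1f} dB")      # ≈ 46.2 dB
    ```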

    Nowhere to Hide: Cross-modal Identity Leakage between Biometrics and Devices

    Get PDF
    Along with the benefits of the Internet of Things (IoT) come potential privacy risks, since billions of connected devices are granted permission to track information about their users and communicate it to other parties over the Internet. Of particular interest to an adversary is the user identity, which constantly plays an important role in launching attacks. While the exposure of a single type of physical biometric or device identity has been extensively studied, the compound effect of leakage from both sides remains unknown in multi-modal sensing environments. In this work, we explore the feasibility of compound identity leakage across cyber-physical spaces and unveil that co-located smart device IDs (e.g., smartphone MAC addresses) and physical biometrics (e.g., facial/vocal samples) are side channels to each other. We demonstrate that our method is robust to various observation noise in the wild and that an attacker can comprehensively profile victims along multiple dimensions with nearly zero analysis effort. Two real-world experiments on different biometrics and device IDs show that the presented approach can compromise more than 70% of device IDs and harvest multiple biometric clusters with ~94% purity at the same time.
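
    As a hedged illustration of the underlying idea (not the paper's actual algorithm), device IDs and biometric clusters observed in the same sensing sessions can be linked by simple co-occurrence counting; all identifiers below are made up.

    ```python
    from collections import Counter, defaultdict

    # Each session records the device IDs and biometric cluster IDs seen together.
    sessions = [
        ({"mac_A", "mac_B"}, {"face_1", "face_2"}),
        ({"mac_A"},          {"face_1"}),
        ({"mac_B", "mac_C"}, {"face_2", "face_3"}),
        ({"mac_C"},          {"face_3"}),
    ]

    # Count how often each device ID appears alongside each biometric cluster.
    co_occurrence = defaultdict(Counter)
    for devices, biometrics in sessions:
        for d in devices:
            co_occurrence[d].update(biometrics)

    # Link each device ID to the biometric cluster it co-occurs with most often.
    links = {d: counts.most_common(1)[0][0] for d, counts in co_occurrence.items()}
    print(links)  # e.g. {'mac_A': 'face_1', 'mac_B': 'face_2', 'mac_C': 'face_3'}
    ```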

    Functional evaluation of Asp76, 84, 102 and 150 in human arsenic(III) methyltransferase (hAS3MT) interacting with S-adenosylmethionine

    Get PDF
    We prepared eight mutants (D76P, D76N, D84P, D84N, D102P, D102N, D150P and D150N) to investigate the functions of residues Asp76, 84, 102 and 150 in human arsenic(III) methyltransferase (hAS3MT) in interacting with S-adenosylmethionine (SAM). The affinity of all the mutants for SAM was weakened. All the mutants except D150N completely lost their methylation activities. Residues Asp76, 84, 102 and 150 greatly influenced hAS3MT catalytic activity by affecting SAM binding or methyl transfer. Asp76 and 84 were located in the SAM-binding pocket, and Asp102 significantly affected SAM binding by forming hydrogen bonds with SAM.

    MatrixCity: A Large-scale City Dataset for City-scale Neural Rendering and Beyond

    Full text link
    Neural radiance fields (NeRF) and its subsequent variants have led to remarkable progress in neural rendering. While most recent neural rendering works focus on objects and small-scale scenes, developing neural rendering methods for city-scale scenes holds great potential for many real-world applications. However, this line of research is impeded by the absence of a comprehensive and high-quality dataset, and collecting such a dataset over real city-scale scenes is costly, sensitive, and technically difficult. To this end, we build a large-scale, comprehensive, and high-quality synthetic dataset for city-scale neural rendering research. Leveraging the Unreal Engine 5 City Sample project, we develop a pipeline to easily collect aerial and street city views, accompanied by ground-truth camera poses and a range of additional data modalities. Flexible controls over environmental factors such as lighting, weather, and human and car crowds are also available in our pipeline, supporting the needs of various tasks covering city-scale neural rendering and beyond. The resulting pilot dataset, MatrixCity, contains 67k aerial images and 452k street images from two city maps with a total area of 28 km². On top of MatrixCity, a thorough benchmark is also conducted, which not only reveals unique challenges of the task of city-scale neural rendering, but also highlights potential improvements for future works. The dataset and code will be publicly available at our project page: https://city-super.github.io/matrixcity/. Comment: Accepted to ICCV 2023.

    OmniCity: Omnipotent City Understanding with Multi-level and Multi-view Images

    Full text link
    This paper presents OmniCity, a new dataset for omnipotent city understanding from multi-level and multi-view images. More precisely, OmniCity contains multi-view satellite images as well as street-level panorama and mono-view images, constituting over 100K pixel-wise annotated images that are well aligned and collected from 25K geo-locations in New York City. To alleviate the substantial pixel-wise annotation effort, we propose an efficient street-view image annotation pipeline that leverages the existing label maps of the satellite view and the transformation relations between different views (satellite, panorama, and mono-view). With the new OmniCity dataset, we provide benchmarks for a variety of tasks including building footprint extraction, height estimation, and building plane/instance/fine-grained segmentation. Compared with the existing multi-level and multi-view benchmarks, OmniCity contains a larger number of images with richer annotation types and more views, provides more benchmark results of state-of-the-art models, and introduces a novel task for fine-grained building instance segmentation on street-level panorama images. Moreover, OmniCity provides new problem settings for existing tasks, such as cross-view image matching, synthesis, segmentation, detection, etc., and facilitates the development of new methods for large-scale city understanding, reconstruction, and simulation. The OmniCity dataset as well as the benchmarks will be available at https://city-super.github.io/omnicity.
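
    As a simplified, hypothetical sketch of one transformation relation between views (not the authors' annotation pipeline), the snippet below computes where a satellite-labeled ground point falls along the horizontal axis of an equirectangular street panorama, assuming a known camera geo-location and heading in local east/north metres.

    ```python
    import math

    def panorama_column(cam_xy, cam_heading_deg, point_xy, pano_width=2048):
        """Return the panorama pixel column where a ground point appears."""
        dx = point_xy[0] - cam_xy[0]   # east offset (m)
        dy = point_xy[1] - cam_xy[1]   # north offset (m)
        azimuth = math.degrees(math.atan2(dx, dy))    # 0 deg = north, clockwise
        rel = (azimuth - cam_heading_deg) % 360.0     # bearing relative to heading
        return int(rel / 360.0 * pano_width) % pano_width

    # A building-footprint centroid 30 m east and 40 m north of a camera
    # facing due east (heading 90 deg).
    print(panorama_column(cam_xy=(0.0, 0.0), cam_heading_deg=90.0,
                          point_xy=(30.0, 40.0)))
    ```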