
    Near-ideal spontaneous photon sources in silicon quantum photonics

    While integrated photonics is a robust platform for quantum information processing, architectures for photonic quantum computing place stringent demands on high-quality information carriers. Sources of single photons that are highly indistinguishable and pure, that are either near-deterministic or heralded with high efficiency, and that are suitable for mass manufacture have been elusive. Here, we demonstrate on-chip photon sources that simultaneously meet each of these requirements. Our photon sources are fabricated in silicon using mature processes, and exploit a novel dual-mode pump-delayed excitation scheme to engineer the emission of spectrally pure photon pairs through intermodal spontaneous four-wave mixing in low-loss spiralled multi-mode waveguides. We simultaneously measure a spectral purity of 0.9904 ± 0.0006, a mutual indistinguishability of 0.987 ± 0.002, and an intrinsic heralding efficiency above 90%. We measure on-chip quantum interference with a visibility of 0.96 ± 0.02 between heralded photons from different sources. These results represent a decisive step towards scaling quantum information processing in integrated photonics.

    Zofenopril and incidence of cough: a review of published and unpublished data.

    OBJECTIVE: Cough is a typical side effect of angiotensin-converting enzyme (ACE) inhibitors, though its frequency varies quantitatively among the different compounds. Data on the incidence of cough with the lipophilic third-generation ACE inhibitor zofenopril are scanty and have never been systematically analyzed. The purpose of this paper is to give an overview of the epidemiology, pathophysiology, and treatment of ACE inhibitor-induced cough and to assess the incidence of cough induced by zofenopril treatment. METHODS: Published and unpublished data from randomized and postmarketing zofenopril trials were merged and analyzed. RESULTS: Twenty-three studies including 5794 hypertensive patients and three studies including 1455 postmyocardial infarction patients, exposed for a median follow-up time of 3 months to zofenopril at doses of 7.5-60 mg once daily, were analyzed. The incidence of zofenopril-induced cough was 2.6% (range 0%-4.2%): 2.4% in the hypertension trials (2.4% in the double-blind randomized studies and 2.4% in the open-label postmarketing studies) and 3.6% in the double-blind randomized postmyocardial infarction trials. Zofenopril-induced cough was generally of mild to moderate intensity, occurred significantly (P < 0.001) more frequently in the first 3-6 months of treatment (3.0% vs 0.2% at 9-12 months), and always resolved or improved upon therapy discontinuation. Zofenopril doses of 30 mg and 60 mg resulted in a significantly (P = 0.042) greater rate of cough (2.1% and 2.6%, respectively) than doses of 7.5 mg and 15 mg (0.4% and 0.7%, respectively). In direct comparison trials (enalapril and lisinopril), the incidence of cough was not significantly different between zofenopril and other ACE inhibitors (2.4% vs 2.7%). CONCLUSION: Evidence from a limited number of studies indicates a relatively low incidence of zofenopril-induced cough. Large head-to-head comparison studies versus different ACE inhibitors are needed to highlight possible differences between zofenopril and other ACE inhibitors in the incidence of cough.
    Omboni S.; Borghi C.

    Learn to See by Events: Color Frame Synthesis from Event and RGB Cameras

    Event cameras are biologically-inspired sensors that capture the temporal evolution of the scene: they record pixel-wise brightness variations and output a corresponding stream of asynchronous events. Despite having multiple advantages over traditional cameras, their use is partially prevented by the limited applicability of traditional data processing and vision algorithms. To address this, we present a framework which exploits the output stream of event cameras to synthesize RGB frames, relying on an initial or periodic set of color key-frames and the sequence of intermediate events. Unlike existing work, we propose a deep learning-based frame synthesis method, consisting of an adversarial architecture combined with a recurrent module. Qualitative results and quantitative per-pixel, perceptual, and semantic evaluation on four public datasets confirm the quality of the synthesized images.
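The event model the abstract describes (pixel-wise brightness variations emitted as asynchronous events) is commonly formalized as each event signalling a fixed log-brightness step of ±C at one pixel. The sketch below shows the classic event-integration baseline that the paper's learned adversarial/recurrent method improves upon; it is not the paper's model, and the contrast value and event list are illustrative assumptions.

```python
import numpy as np

def integrate_events(key_frame, events, contrast=0.1):
    """Naive baseline: accumulate events on top of the last key-frame.

    key_frame: (H, W) log-intensity array.
    events: iterable of (x, y, p) with polarity p in {-1, +1}.
    contrast: assumed per-event log-brightness step C.
    """
    frame = key_frame.astype(np.float64).copy()
    for x, y, p in events:
        frame[y, x] += contrast * p  # each event bumps log-brightness by +/- C
    return frame

# Toy usage: two positive events at (x=1, y=2), one negative at (x=3, y=0).
key = np.zeros((4, 4))
events = [(1, 2, +1), (1, 2, +1), (3, 0, -1)]
out = integrate_events(key, events)
```

A learned synthesizer like the one in the paper replaces this fixed-contrast accumulation with a network that also restores color and texture from the key-frames.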

    Driver Face Verification with Depth Maps

    Face verification is the task of checking whether two provided images contain the face of the same person. In this work, we propose a fully-convolutional Siamese architecture to tackle this task, achieving state-of-the-art results on three publicly-released datasets, namely Pandora, High-Resolution Range-based Face Database (HRRFaceD), and CurtinFaces. The proposed method takes depth maps as input, since depth cameras have been proven to be more reliable under different illumination conditions. Thus, the system is able to work even in the total or partial absence of external light sources, which is a key feature for automotive applications. From the algorithmic point of view, we propose a fully-convolutional architecture with a limited number of parameters, capable of dealing with the small amount of depth data available for training and able to run in real time even on a CPU and embedded boards. The experimental results show an accuracy acceptable for exploitation in real-world applications with in-board cameras. Finally, exploiting the presence of faces occluded by various head garments and extreme head poses available in the Pandora dataset, we successfully test the proposed system under strong visual occlusions as well. The excellent results obtained confirm the efficacy of the proposed method.

    Semi-Perspective Decoupled Heatmaps for 3D Robot Pose Estimation from Depth Maps

    Knowing the exact 3D location of workers and robots in a collaborative environment enables several real applications, such as the detection of unsafe situations or the study of mutual interactions for statistical and social purposes. In this paper, we propose a non-invasive and light-invariant framework based on depth devices and deep neural networks to estimate the 3D pose of robots from an external camera. The method can be applied to any robot without requiring hardware access to its internal states. We introduce a novel representation of the predicted pose, namely Semi-Perspective Decoupled Heatmaps (SPDH), to accurately compute 3D joint locations in world coordinates by adapting efficient deep networks designed for 2D Human Pose Estimation. The proposed approach, which takes as input a depth representation based on XYZ coordinates, can be trained on synthetic depth data and applied to real-world settings without the need for domain adaptation techniques. To this end, we present the SimBa dataset, based on both synthetic and real depth images, and use it for the experimental evaluation. Results show that the proposed approach, made of a specific depth map representation and the SPDH, overcomes the current state of the art.
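The "depth representation based on XYZ coordinates" mentioned above is typically obtained by back-projecting each depth pixel into camera space with a pinhole model. A minimal sketch of that conversion follows; the intrinsics are illustrative assumptions, and the SPDH decoding itself is not reproduced here.

```python
import numpy as np

def depth_to_xyz(depth, fx, fy, cx, cy):
    """Back-project a metric depth map into an XYZ image.

    depth: (H, W) array of depths Z in meters.
    fx, fy, cx, cy: pinhole intrinsics (focal lengths and principal point).
    Returns an (H, W, 3) array with X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel column/row grids
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)

# Toy usage: a flat 2 m depth plane with unit focal lengths.
d = np.full((2, 2), 2.0)
xyz = depth_to_xyz(d, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```

Feeding the network XYZ channels instead of raw depth makes the input metric and camera-aware, which is what lets the synthetic-to-real transfer described in the abstract work without domain adaptation.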

    Mercury: a vision-based framework for Driver Monitoring

    In this paper, we propose a complete framework, namely Mercury, that combines Computer Vision and Deep Learning algorithms to continuously monitor the driver during the driving activity. The proposed solution complies with the requirements imposed by the challenging automotive context. First, light invariance is required, in order to have a system able to work regardless of the time of day and the weather conditions; therefore, infrared-based images, i.e. depth maps (in which each pixel corresponds to the distance between the sensor and that point in the scene), have been exploited in conjunction with traditional intensity images. Second, the non-invasivity of the system is required, since the driver's movements must not be impeded during the driving activity: in this context, the use of cameras and vision-based algorithms is one of the best solutions. Finally, real-time performance is needed, since a monitoring system must react immediately as soon as a situation of potential danger is detected.

    Improving Car Model Classification through Vehicle Keypoint Localization

    In this paper, we present a novel multi-task framework which aims to improve the performance of car model classification by leveraging visual features and pose information extracted from single RGB images. In particular, we merge the visual features obtained through an image classification network with the features computed by a model able to predict the pose in terms of 2D car keypoints. We show how this approach considerably improves performance on the model classification task, testing our framework on a subset of the Pascal3D dataset containing the car classes. Finally, we conduct an ablation study to demonstrate the performance improvement obtained with respect to a single visual classifier network.
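The fusion step described above, merging appearance features with keypoint-derived features before classification, can be sketched as a simple concatenation followed by a classification head. The dimensions and the idea of flattening raw 2D keypoint coordinates are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def fuse(visual_feat, keypoints_2d):
    """Concatenate an appearance feature vector with flattened 2D keypoints.

    visual_feat: (D,) vector from the image classification backbone.
    keypoints_2d: (K, 2) array of predicted keypoint coordinates.
    Returns a (D + 2K,) fused vector fed to the classification head.
    """
    return np.concatenate([visual_feat, keypoints_2d.ravel()])

# Toy usage: a 4-dim visual feature and 3 keypoints -> a 10-dim fused vector.
feat = np.ones(4)
kpts = np.zeros((3, 2))
fused = fuse(feat, kpts)
print(fused.shape)  # (10,)
```

The ablation in the abstract compares the classifier trained on `visual_feat` alone against one trained on the fused vector.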