    Recanalization of the Chronically Occluded Internal Carotid Artery: Review of the Literature.

    Introduction: We reviewed the literature on interventions for patients with medically refractory chronically occluded internal carotid artery (COICA) to assess the risks and benefits of recanalization via an endovascular technique (ET) or hybrid surgery (HS, i.e., ET plus carotid endarterectomy). Methods: A systematic search of the electronic databases was performed. Patients with COICA were classified into 4 categories according to the Hasan et al. classification. Results: Eighteen studies satisfied the inclusion criteria; only 6 involved an HS procedure. We identified 389 patients with COICA who underwent ET or HS; 91% were male. The overall perioperative complication rate was 10.1% (95% confidence interval [CI]: 7.4%-13.1%). For types A and B, the successful recanalization rate was 95.4% (95% CI: 86.5%-100%), with a 13.7% (95% CI: 2.3%-27.4%) complication rate. For type C, the success rate was 45.7% (95% CI: 17.8%-70.7%) for ET, with a complication rate of 46.0% (95% CI: 20.0%-71.4%), and 87.6% (95% CI: 80.9%-94.4%) for HS, with a complication rate of 14.0% (95% CI: 7.0%-21.8%). For type D, the success rate of recanalization was 29.8% (95% CI: 7.8%-52.8%), with a 29.8% (95% CI: 6.1%-56.3%) complication rate. Successful recanalization resulted in symmetrical perfusion between the two cerebral hemispheres, resolution of the penumbra, normalization of the mean transit time, and improvement in Montreal Cognitive Assessment (MoCA) score (ΔMoCA = 9.80 points; P = 0.004). Conclusions: Type A and B occlusions benefit from ET, especially in the presence of a large penumbra. Type C occlusions can benefit from HS. Unfortunately, we did not identify an intervention to help patients with type D occlusions. A phase 2b randomized controlled trial is needed to confirm these findings.

    Advanced digital SAR processing study

    A highly programmable, land-based, real-time synthetic aperture radar (SAR) processor requiring a processed pixel rate of 2.75 MHz or more in a four-look system was designed. Variations in range and azimuth compression, number of looks, range swath, range migration, and SAR mode were specified. Alternative range and azimuth processing algorithms were examined in conjunction with projected integrated circuit, digital architecture, and software technologies. The advanced digital SAR processor (ADSP) employs an FFT convolver algorithm for both range and azimuth processing in a parallel architecture configuration. Algorithm performance comparisons, system design, implementation tradeoffs, and the results of a supporting survey of integrated circuit and digital architecture technologies are reported. Cost tradeoffs and projections with alternate implementation plans are presented.
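    The ADSP's core operation, FFT-based fast convolution for range and azimuth compression, can be sketched as follows. The chirp parameters, signal lengths, and echo layout below are illustrative toys, not values from the study:

```python
import numpy as np

def fft_convolve(x, h):
    """Linear convolution of (possibly complex) sequences via the FFT.

    Zero-pads to the full output length so the circular convolution
    computed in the frequency domain equals the linear convolution.
    """
    n = len(x) + len(h) - 1
    nfft = 1 << (n - 1).bit_length()  # round up to the next power of two
    y = np.fft.ifft(np.fft.fft(x, nfft) * np.fft.fft(h, nfft))
    return y[:n]

# Toy range compression: matched-filter the echo against the chirp,
# i.e. convolve with the time-reversed complex conjugate of the chirp.
chirp = np.exp(1j * np.pi * 0.01 * np.arange(64) ** 2)
echo = np.concatenate([np.zeros(100), chirp, np.zeros(60)])
compressed = fft_convolve(echo, np.conj(chirp[::-1]))
```

    The matched filter collapses the 64-sample chirp into a sharp peak at the target's range position; a real processor repeats this per range line and again along azimuth, which is why a high sustained FFT throughput dominates the design.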

    Learning to infer: RL-based search for DNN primitive selection on Heterogeneous Embedded Systems

    Deep Learning is increasingly being adopted by industry for computer vision applications running on embedded devices. While Convolutional Neural Networks' accuracy has reached a mature and remarkable state, inference latency and throughput are a major concern, especially when targeting low-cost and low-power embedded platforms. CNNs' inference latency may become a bottleneck for Deep Learning adoption by industry, as it is a crucial specification for many real-time processes. Furthermore, deployment of CNNs across heterogeneous platforms presents major compatibility issues due to vendor-specific technology and acceleration libraries. In this work, we present QS-DNN, a fully automatic search based on Reinforcement Learning which, combined with an inference engine optimizer, efficiently explores the design space and empirically finds the optimal combinations of libraries and primitives to speed up the inference of CNNs on heterogeneous embedded devices. We show that an optimized combination can achieve a 45x speedup in inference latency on CPU compared to a dependency-free baseline and 2x on average on GPGPU compared to the best vendor library. Further, we demonstrate that the quality of results and time-to-solution are much better than with Random Search, achieving up to 15x better results for a short-time search.
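    As a rough illustration of this kind of search (not the paper's exact RL formulation), an epsilon-greedy agent can pick one primitive per layer, "measure" the resulting end-to-end latency, and bias future picks toward primitives seen in fast configurations. The layer names, primitive names, and latencies below are hypothetical; in QS-DNN such numbers would come from on-device measurements by the inference engine:

```python
import random

# Hypothetical per-layer latencies (ms) for each candidate primitive.
LATENCY = {
    ("conv1", "direct"): 4.0, ("conv1", "winograd"): 2.5, ("conv1", "im2col_gemm"): 3.1,
    ("conv2", "direct"): 6.0, ("conv2", "winograd"): 7.2, ("conv2", "im2col_gemm"): 3.8,
}
LAYERS = ["conv1", "conv2"]
PRIMS = ["direct", "winograd", "im2col_gemm"]

def epsilon_greedy_search(episodes=300, eps=0.3, seed=0):
    """Tabular epsilon-greedy search over per-layer primitive choices.

    q[(layer, prim)] tracks the best end-to-end latency observed so far
    when `prim` was selected for `layer`; exploitation picks the
    primitive with the lowest such value for each layer.
    """
    rng = random.Random(seed)
    q = {key: float("inf") for key in LATENCY}
    best_choice, best_latency = None, float("inf")
    for _ in range(episodes):
        choice = {}
        for layer in LAYERS:
            if rng.random() < eps:                                   # explore
                choice[layer] = rng.choice(PRIMS)
            else:                                                    # exploit
                choice[layer] = min(PRIMS, key=lambda p: q[(layer, p)])
        # "Measure" the configuration; on hardware this is a timed run.
        total = sum(LATENCY[(layer, choice[layer])] for layer in LAYERS)
        for layer in LAYERS:
            q[(layer, choice[layer])] = min(q[(layer, choice[layer])], total)
        if total < best_latency:
            best_choice, best_latency = dict(choice), total
    return best_choice, best_latency
```

    With additive per-layer costs this toy problem is trivially solvable by exhaustive search; a learned search only pays off when the configuration space is large and each measurement is expensive, which is the embedded-deployment setting the abstract describes.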

    Convolutions Die Hard: Open-Vocabulary Segmentation with Single Frozen Convolutional CLIP

    Open-vocabulary segmentation is a challenging task requiring segmenting and recognizing objects from an open set of categories. One way to address this challenge is to leverage multi-modal models, such as CLIP, to provide image and text features in a shared embedding space, which bridges the gap between closed-vocabulary and open-vocabulary recognition. Hence, existing methods often adopt a two-stage framework to tackle the problem, where the inputs first go through a mask generator and then through the CLIP model along with the predicted masks. This process involves extracting features from images multiple times, which can be ineffective and inefficient. By contrast, we propose to build everything into a single-stage framework using a shared frozen convolutional CLIP backbone, which not only significantly simplifies the current two-stage pipeline, but also remarkably yields a better accuracy-cost trade-off. The proposed FC-CLIP benefits from the following observations: the frozen CLIP backbone maintains the ability of open-vocabulary classification and can also serve as a strong mask generator, and the convolutional CLIP generalizes well to a larger input resolution than the one used during contrastive image-text pretraining. When trained on COCO panoptic data only and tested in a zero-shot manner, FC-CLIP achieves 26.8 PQ, 16.8 AP, and 34.1 mIoU on ADE20K; 18.2 PQ and 27.9 mIoU on Mapillary Vistas; and 44.0 PQ, 26.8 AP, and 56.2 mIoU on Cityscapes, outperforming the prior art by +4.2 PQ, +2.4 AP, and +4.2 mIoU on ADE20K, +4.0 PQ on Mapillary Vistas, and +20.1 PQ on Cityscapes, respectively. Additionally, the training and testing times of FC-CLIP are 7.5x and 6.6x faster than those of the same prior art, while using 5.9x fewer parameters. FC-CLIP also sets a new state-of-the-art performance across various open-vocabulary semantic segmentation datasets. Code and models are available at https://github.com/bytedance/fc-clip.
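    The CLIP-side mechanism the abstract relies on, scoring each predicted mask against arbitrary category names in a shared embedding space, reduces to cosine similarity plus a softmax. A minimal numpy sketch with made-up feature vectors (the real model pools these features from the frozen convolutional CLIP backbone, which is not reproduced here):

```python
import numpy as np

def open_vocab_classify(mask_feats, text_embeds, temperature=0.07):
    """Zero-shot classification of mask features against text embeddings.

    mask_feats:  (M, D) one pooled image feature per predicted mask
    text_embeds: (C, D) one text-encoder embedding per category name
    Returns an (M, C) array of softmax probabilities over categories.
    """
    f = mask_feats / np.linalg.norm(mask_feats, axis=1, keepdims=True)
    t = text_embeds / np.linalg.norm(text_embeds, axis=1, keepdims=True)
    logits = (f @ t.T) / temperature             # scaled cosine similarity
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum(axis=1, keepdims=True)
```

    Because the category set enters only through text_embeds, new classes can be added at inference time by encoding their names, which is what makes the vocabulary "open".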

    6. Schiller and Romanticism

    To define romanticism is to attempt something which the romantics themselves insist cannot be done. But we can try to identify and then describe it, first pointing out what it is not. One stable element in romanticism has been its consistent rejection of its opposite, classicism. While no great piece of art has ever existed which did not contain elements of both romanticism and classicism, the partisans of these two different points of view have insisted that different emphases made it great. Where classicism emphasised analysis, objectivity, harmony, wholeness, meaning, and discipline, romanticism stressed synthesis, subjectivity, disharmony, individuality, suggestiveness, and spontaneity. [excerpt]

    Functional connectivity within the voice perception network and its behavioural relevance

    Recognizing who is speaking is a cognitive ability characterized by considerable individual differences, which could relate to the inter-individual variability observed in voice-elicited BOLD activity. Since voice perception is sustained by a complex brain network involving temporal voice areas (TVAs) and, though less consistently, extra-temporal regions such as frontal cortices, functional connectivity (FC) during an fMRI voice localizer (passive listening to voices vs. non-voices) was computed within twelve temporal and frontal voice-sensitive regions ("voice patches") individually defined for each subject (N = 90) to account for inter-individual variability. Results revealed that voice patches were positively co-activated during voice listening and that they were characterized by different FC patterns depending on their location (anterior/posterior) and hemisphere. Importantly, FC between right frontal and temporal voice patches was behaviourally relevant: FC significantly increased with voice recognition abilities as measured in a voice recognition test performed outside the scanner. Hence, this study highlights the importance of frontal regions in voice perception and supports the idea that examining FC between stimulus-specific and higher-order frontal regions can help explain individual differences in processing social stimuli such as voices.
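    Operationally, FC of this kind is typically the Pearson correlation between region-averaged BOLD time series. A minimal sketch with synthetic signals (the study's voice patches were defined per subject from the localizer, which is not modelled here):

```python
import numpy as np

def functional_connectivity(region_ts):
    """Pairwise Pearson correlation between region time series.

    region_ts: (n_regions, n_timepoints) array of region-averaged BOLD
    signals. Returns a symmetric (n_regions, n_regions) FC matrix with
    ones on the diagonal.
    """
    return np.corrcoef(region_ts)

# Synthetic example: a "frontal" patch tracking a "temporal" patch,
# plus an unrelated region as a control.
rng = np.random.default_rng(0)
temporal = rng.standard_normal(200)
frontal = 0.8 * temporal + 0.6 * rng.standard_normal(200)  # correlated
unrelated = rng.standard_normal(200)
fc = functional_connectivity(np.vstack([temporal, frontal, unrelated]))
```

    Group analyses of the kind the abstract describes would then typically Fisher z-transform the off-diagonal entries (np.arctanh) before relating them to behavioural scores such as voice recognition accuracy.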