57 research outputs found

    Specific Visualization of Glioma Cells in Living Low-Grade Tumor Tissue

    BACKGROUND: The current therapy of malignant gliomas is based on surgical resection, radio-chemotherapy and chemotherapy. Recent retrospective case series have highlighted the significance of the extent of resection as a prognostic factor predicting the course of the disease. Complete resection is especially difficult in low-grade gliomas that show no contrast enhancement on MRI. The aim of this study was to develop a robust, specific new fluorescent probe for glioma cells that is easy to apply to live tumor biopsies and can distinguish tumor cells from normal brain cells at all levels of magnification.

    METHODOLOGY/PRINCIPAL FINDINGS: In this investigation we employed brightly fluorescent, photostable quantum dots (QDs) to specifically target the epidermal growth factor receptor (EGFR), which is upregulated in many gliomas. Living glioma and normal cells or tissue biopsies were incubated with QDs coupled to EGF and/or monoclonal antibodies against EGFR for 30 minutes, washed and imaged. The data include results from cell culture, an animal model and ex vivo human tumor biopsies of both low-grade and high-grade gliomas, and show high probe specificity. Tumor cells could be visualized from the macroscopic to the single-cell level with contrast ratios as high as 1000:1 compared to normal brain tissue.

    CONCLUSIONS/SIGNIFICANCE: The ability of the targeted probes to clearly distinguish tumor cells in low-grade tumor biopsies, where no enhanced MRI image was obtained, demonstrates the great potential of the method. We propose that future application of specifically targeted fluorescent particles during surgery could provide intraoperative guidance for the removal of residual tumor cells from the resection cavity and thus increase patient survival.
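    The reported tumor-to-normal contrast can be understood as a simple ratio of mean fluorescence intensities between labeled and unlabeled regions. The sketch below is illustrative only; the function and the intensity values are hypothetical and not taken from the study.

    ```python
    # Hypothetical sketch: tumor-to-normal fluorescence contrast ratio,
    # the quantity reported as reaching up to 1000:1 in the abstract.
    # Names and values are illustrative, not from the study.

    def contrast_ratio(tumor_intensities, normal_intensities):
        """Ratio of mean fluorescence in tumor vs. normal tissue regions."""
        mean_tumor = sum(tumor_intensities) / len(tumor_intensities)
        mean_normal = sum(normal_intensities) / len(normal_intensities)
        return mean_tumor / mean_normal

    # Example: bright QD-labeled tumor pixels vs. dim normal-tissue background
    print(contrast_ratio([980, 1020, 1000], [1.0, 1.0, 1.0]))  # → 1000.0
    ```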

    Automated Design of Application-Specific Smart Camera Architectures

    Parallel heterogeneous multiprocessor systems are often shunned in embedded system design, not only because of their design complexity but also because of the programming burden. Programs for such systems are architecture-dependent: the application developer needs architecture-specific knowledge to implement his algorithms, as each processor has its own characteristics and programming language. He will therefore often stick to the architectures he knows best instead of looking for the best one. This leads to suboptimal solutions, and to costly redesign efforts if the chosen architecture later proves to be insufficient. Our solution to this problem uses a programming model based on the concept of architecture independence through algorithm dependence. By limiting the expressiveness of a programming language to just those concepts needed to implement a given class of algorithms, it may be compiled to a variety of different (parallel) processor architectures. We introduce a new meta-programming language that can be used to compile these algorithm-specific languages. The user program then consists of a number of algorithms written in different languages, which are automatically mapped to the multiprocessor system, achieving architecture independence. We use this architecture independence to conduct an automated design space exploration of possible architectures, creating a Pareto front of optimal trade-offs between performance, area and power consumption. The developer can choose the final architecture from this set.
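    The Pareto front mentioned above keeps only architectures that no other candidate beats on every objective at once. A minimal sketch of this selection step, assuming three minimize-me objectives (latency, area, power) and hypothetical candidate values:

    ```python
    # Illustrative sketch of extracting a Pareto front from design points,
    # as in the described design space exploration. All names and numbers
    # are hypothetical; lower is better for every objective.

    def pareto_front(designs):
        """Keep only designs not dominated by any other design."""
        def dominates(a, b):
            # a dominates b: no worse in every objective, strictly better in one
            return (all(x <= y for x, y in zip(a, b)) and
                    any(x < y for x, y in zip(a, b)))
        return [d for d in designs
                if not any(dominates(o, d) for o in designs if o != d)]

    # (latency_ms, area_mm2, power_mW) for candidate architectures
    candidates = [(10, 4.0, 120), (8, 5.0, 150), (12, 3.5, 100), (11, 4.5, 130)]
    print(pareto_front(candidates))
    # → [(10, 4.0, 120), (8, 5.0, 150), (12, 3.5, 100)]
    ```

    The last candidate is dropped because (10, 4.0, 120) is better on all three objectives; the remaining three each win on at least one axis, which is exactly the trade-off set offered to the developer.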

    Smartcam Design Framework.


    SmartCam: Devices for Embedded Intelligent Cameras

    The advent and subsequent popularity of low-cost, low-power CMOS vision sensors enables us to integrate processing logic on the camera chip itself, thereby creating so-called smart sensors. They have an on-chip SIMD data processing array controlled by an off-chip controller. Smart sensors can execute low-level image processing routines as soon as one or more image lines are converted; they do not have to wait for the whole image. High-level image processing such as feature extraction and object detection and tracking can be performed with a separate powerful off-chip processor.
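    The line-by-line processing described above can be sketched as a stream of per-line filter operations that run as soon as each line arrives from the sensor, rather than after the full frame is buffered. This is a behavioral illustration only; function names and the gradient filter are assumptions, not the SmartCam implementation.

    ```python
    # Hypothetical sketch of line-based low-level processing: each image
    # line is filtered as soon as it is converted, without waiting for
    # the whole frame, mimicking an on-chip SIMD line-processing array.

    def process_line(line):
        """Simple 1-D horizontal gradient on one image line (low-level step)."""
        return [line[i + 1] - line[i] for i in range(len(line) - 1)]

    def stream_frame(lines):
        """Process lines as they stream in from the sensor."""
        for line in lines:
            yield process_line(line)

    frame = [[0, 0, 255, 255], [0, 128, 128, 0]]
    print(list(stream_frame(frame)))  # → [[0, 255, 0], [128, 0, -128]]
    ```

    High-level steps such as feature extraction would then consume these per-line results on a separate processor, as the abstract describes.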