Autonomous hazard detection and avoidance
During GFY 91, Draper Laboratory was awarded a task by NASA-JSC under contract number NAS9-18426 to study and evaluate the potential for achieving safe autonomous landings on Mars using an on-board autonomous hazard detection and avoidance (AHDA) system. This report describes the results of that study. The AHDA task had four objectives: to demonstrate, via a closed-loop simulation, the ability to autonomously select safe landing sites and to maneuver to the selected site; to identify key issues in the development of AHDA systems; to produce strawman designs for AHDA sensors and algorithms; and to perform initial trade studies leading to a better understanding of the effect of sensor, terrain, and viewing parameters on AHDA algorithm performance. This report summarizes the progress made during the first year, with primary emphasis on the tools developed for simulating a closed-loop AHDA landing. Some preliminary performance evaluation results are also presented.
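To make the site-selection objective concrete, here is a minimal Python sketch of the kind of decision an AHDA algorithm must make on board: given hazard maps (slope and roughness) derived from terrain sensing, pick the safe cell nearest the nominal touchdown point that is still within the lander's remaining divert capability. The function name, thresholds, and map format are illustrative assumptions, not Draper's actual design.

```python
# Hypothetical sketch of autonomous landing-site selection, the core step of
# an AHDA system. Thresholds and map layout are illustrative assumptions.
import numpy as np

def select_safe_site(slope_deg, roughness_m, nominal_rc, max_divert_m,
                     cell_size_m=1.0, max_slope_deg=10.0, max_roughness_m=0.3):
    """Pick the safe cell closest to the nominal landing point.

    slope_deg, roughness_m : 2-D hazard maps derived from terrain sensing
    nominal_rc             : (row, col) of the unguided touchdown point
    max_divert_m           : divert capability remaining in the descent
    """
    rows, cols = np.indices(slope_deg.shape)
    dist_m = cell_size_m * np.hypot(rows - nominal_rc[0], cols - nominal_rc[1])
    safe = ((slope_deg <= max_slope_deg) & (roughness_m <= max_roughness_m)
            & (dist_m <= max_divert_m))
    if not safe.any():
        return None                      # no reachable safe site: contingency
    dist_masked = np.where(safe, dist_m, np.inf)
    return np.unravel_index(np.argmin(dist_masked), dist_m.shape)

# Example: a 100 x 100 m map with random slope/roughness hazards.
rng = np.random.default_rng(0)
slope = rng.uniform(0.0, 15.0, (100, 100))
rough = rng.uniform(0.0, 0.5, (100, 100))
print(select_safe_site(slope, rough, nominal_rc=(50, 50), max_divert_m=40.0))
```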
High Fidelity System Modeling for High Quality Image Reconstruction in Clinical CT
Today, while many researchers focus on improving the regularization term in iterative reconstruction (IR) algorithms, less attention is paid to improving the fidelity term. In this paper, we hypothesize that improving the fidelity term will further improve IR image quality in low-dose scanning, which typically produces more noise. The purpose of this paper is to systematically test and examine the role of high-fidelity system models using raw data in the performance of an iterative image reconstruction approach that minimizes an energy functional. We first isolated the fidelity term and analyzed the importance of using focal spot area modeling, flying focal spot location modeling, and active detector area modeling, as opposed to modeling flying focal spot motion alone. We then compared images using different permutations of all three factors. Next, we tested the ability of the fidelity term to retain signals upon application of the regularization term with all three factors. We then compared the differences between images generated by the proposed method and filtered back-projection (FBP). Lastly, we compared images of low-dose in vivo data using FBP, Iterative Reconstruction in Image Space, and the proposed method using raw data. The initial comparison of difference maps of the reconstructed images showed that the focal spot area model and the active detector area model also have significant impacts on the quality of images produced. Upon application of the regularization term, images generated using all three factors substantially decreased model mismatch error, artifacts, and noise. When the images generated by the proposed method were tested, conspicuity greatly increased, noise standard deviation decreased by 90% in homogeneous regions, and resolution also greatly improved. In conclusion, improving the fidelity term to model clinical scanners is essential to generating higher-quality images in low-dose imaging.
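For readers unfamiliar with the energy-functional formulation, the sketch below illustrates the general structure such methods minimize: a data-fidelity term ||Ax - y||^2_W measuring agreement between the reconstruction x and the raw measurements y under a system model A, plus a regularization term. The toy matrices A, W, and D are stand-ins for the high-fidelity system model, statistical weights, and regularizer; the clinical models the paper studies are far richer.

```python
# Minimal sketch of iterative reconstruction by minimizing
#   E(x) = ||A x - y||_W^2 + lam * ||D x||^2
# with plain gradient descent. A, W, D are toy stand-ins, not clinical models.
import numpy as np

def iterative_reconstruction(A, y, W, lam, n_iters=200):
    n = A.shape[1]
    D = np.eye(n) - np.eye(n, k=1)          # finite-difference regularizer
    H = 2 * (A.T @ W @ A + lam * D.T @ D)   # Hessian of the quadratic energy
    step = 1.0 / np.linalg.norm(H, 2)       # safe step size for convergence
    x = np.zeros(n)
    for _ in range(n_iters):
        grad = 2 * (A.T @ W @ (A @ x - y) + lam * D.T @ (D @ x))
        x -= step * grad
    return x

# Toy 1-D example: 40 measurements of a 20-pixel object with Gaussian noise.
rng = np.random.default_rng(1)
A = rng.uniform(0, 1, (40, 20))             # stand-in for the system model
x_true = np.concatenate([np.zeros(10), np.ones(10)])
y = A @ x_true + rng.normal(0, 0.05, 40)
W = np.eye(40)                              # uniform statistical weights
print(np.round(iterative_reconstruction(A, y, W, lam=0.1), 2))
```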
31st Annual Meeting and Associated Programs of the Society for Immunotherapy of Cancer (SITC 2016) : part two
Background
The immunological escape of tumors represents one of the main obstacles to the treatment of malignancies. The blockade of PD-1 or CTLA-4 receptors represented a milestone in the history of immunotherapy. However, immune checkpoint inhibitors seem to be effective in specific cohorts of patients. It has been proposed that their efficacy relies on the presence of an immunological response. Thus, we hypothesized that disruption of the PD-L1/PD-1 axis would synergize with our oncolytic vaccine platform PeptiCRAd.
Methods
We used murine B16OVA in vivo tumor models and flow cytometry analysis to investigate the immunological background.
Results
First, we found that high-burden B16OVA tumors were refractory to combination immunotherapy. However, with a more aggressive schedule, tumors with a lower burden were more susceptible to the combination of PeptiCRAd and PD-L1 blockade. The therapy significantly increased the median survival of mice (Fig. 7). Interestingly, the reduced growth of contralaterally injected B16F10 cells suggested the presence of a long-lasting immunological memory also against non-targeted antigens. Concerning the functional state of tumor-infiltrating lymphocytes (TILs), we found that all the immune therapies enhanced the percentage of activated (PD-1pos TIM-3neg) T lymphocytes and reduced the amount of exhausted (PD-1pos TIM-3pos) cells compared to placebo. As expected, we found that PeptiCRAd monotherapy could increase the number of antigen-specific CD8+ T cells compared to other treatments. However, only the combination with PD-L1 blockade could significantly increase the ratio between activated and exhausted pentamer-positive cells (p = 0.0058), suggesting that by disrupting the PD-1/PD-L1 axis we could decrease the amount of dysfunctional antigen-specific T cells. We observed that the anatomical location deeply influenced the state of CD4+ and CD8+ T lymphocytes. In fact, TIM-3 expression was increased 2-fold on TILs compared to splenic and lymphoid T cells. In the CD8+ compartment, the expression of PD-1 on the surface seemed to be restricted to the tumor microenvironment, while CD4+ T cells had a high expression of PD-1 also in lymphoid organs. Interestingly, we found that the levels of PD-1 were significantly higher on CD8+ T cells than on CD4+ T cells in the tumor microenvironment (p < 0.0001).
Conclusions
In conclusion, we demonstrated that the efficacy of immune checkpoint inhibitors might be strongly enhanced by their combination with cancer vaccines. PeptiCRAd was able to increase the number of antigen-specific T cells, and PD-L1 blockade prevented their exhaustion, resulting in long-lasting immunological memory and increased median survival.
R&D management and the use of dynamic metrics
Thesis (M.S.)--Massachusetts Institute of Technology, Sloan School of Management, 1997. Includes bibliographical references (leaves 98-105). By Homer H. Pien.
Computationally efficient shape analysis via level sets
In recent years, curve evolution has been applied to the smoothing of shapes and to shape analysis with considerable success, especially in biomedical image analysis. The multiscale analysis provides information regarding parts of shapes, their axes or centers, and shape skeletons. In this paper, we show that the level sets of an edge-strength function provide essentially the same shape analysis as curve evolution. The new method has several advantages over the method of curve evolution. Since the governing equation is linear, the implementation is simpler and faster. The same equation applies to problems in higher dimensions. An important advantage is that, unlike the method of curve evolution, the new method is applicable to shapes which may have junctions such as triple points. The edge-strength function may be calculated from raw images without first extracting the shape outline; thus the method can be applied directly to raw images. The method provides a way to approach the segmentation problem and shape analysis within a common integrated framework.
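A minimal sketch of the core idea follows, under illustrative assumptions (the test shape, the diffusion width, and the use of Gaussian smoothing as the linear diffusion step are ours, not necessarily the paper's exact formulation): linearly diffuse the shape's characteristic function and read off level sets, which play the role of the evolving curve.

```python
# Sketch: level sets of a linearly diffused shape indicator act like the
# successively smoothed outlines produced by curve evolution.
import numpy as np
from scipy.ndimage import gaussian_filter

# Binary shape: a rectangle with a protruding "part"; no outline extraction.
shape = np.zeros((128, 128))
shape[40:90, 30:100] = 1.0
shape[20:40, 55:75] = 1.0

v = gaussian_filter(shape, sigma=6.0)   # one linear diffusion step

# Higher-level sets retreat toward the interior, exposing part structure.
for level in (0.3, 0.6, 0.9):
    print(f"level {level}: {int((v > level).sum())} interior pixels")
```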
Extraction of Shape Skeletons from Grayscale Images (Article No. IV970612)
Shape skeletons have been used in computer vision to represent shapes and discover their salient features. Earlier attempts were based on a morphological approach in which a shape is eroded successively and uniformly until it is reduced to its skeleton. The main difficulty with this approach is its sensitivity to noise, and several approaches have been proposed for dealing with this problem. In this paper, we propose a new method based on diffusion to smooth out the noise and extract shape skeletons in a robust way. In the process, we also obtain a segmentation of the shape into parts. The main tool for shape analysis is a function called the "edge-strength" function. Its level curves are smoothed analogs of the successive shape outlines obtained during the morphological erosion.
From the introduction: "... are not adequate for representing shapes [2, 3, 13]. For example, they fail to capture two-dimensional features such as necks and protrusions, or blobs versus ribbons. Alternative approaches have been suggested within both computer science and psychology to take into account the two-dimensional nature of the shape. One such approach is the well-known Blum transform [2, 3]. It is commonly visualized by the grassfire analogy, in which one imagines the interior of the shape filled with dry grass and fire is started simultaneously at all points on the shape boundary. The advancing front propagates with constant speed."
Extraction of Shape Skeletons from Grayscale Images. Computer Vision and Image Understanding
Shape skeletons have been used in computer vision to represent shapes and discover their salient features. Earlier attempts were based on a morphological approach in which a shape is eroded successively and uniformly until it is reduced to its skeleton. The main difficulty with this approach is its sensitivity to noise, and several approaches have been proposed for dealing with this problem. In this paper, we propose a new method based on diffusion to smooth out the noise and extract shape skeletons in a robust way. In the process, we also obtain a segmentation of the shape into parts. The main tool for shape analysis is a function called the "edge-strength" function. Its level curves are smoothed analogs of the successive shape outlines obtained during the morphological erosion. The new method is closely related to the popular method of curve evolution, but has several advantages over it. Since the governing equation is linear, the implementation is simpler and faster. The same equation applies to problems in higher dimensions. Unlike most other methods, the new method is applicable to shapes which may have junctions such as triple points. Another advantage is that the method is robust with respect to gaps in the shape outline. Since it is seldom possible to extract complete shape outlines from a noisy grayscale image, this robustness allows the method to be applied directly to raw images.
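The sketch below extends the previous one to the skeleton-extraction step this abstract describes: once a smooth edge-strength-like function exists, skeleton candidates can be approximated as its ridge points. Treating ridges as local maxima found with a maximum filter is an illustrative choice, not necessarily the paper's exact criterion.

```python
# Sketch: skeleton candidates as ridge points (local maxima) of a smooth
# edge-strength-like function obtained by linear diffusion.
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

shape = np.zeros((128, 128))
shape[58:70, 20:108] = 1.0                 # elongated strip, 12 pixels thick

v = gaussian_filter(shape, sigma=4.0)      # smooth edge-strength analog
ridge = (v == maximum_filter(v, size=3)) & (v > 0.3)   # ridge = local maxima

ys, xs = np.nonzero(ridge)
print(f"{int(ridge.sum())} skeleton pixels, rows {ys.min()}..{ys.max()}, "
      f"cols {xs.min()}..{xs.max()}")      # medial rows of the strip
```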
Multi GPU Implementation of Iterative Tomographic Reconstruction Algorithms
Although iterative reconstruction techniques (IRTs) have been shown to produce images of superior quality over conventional filtered back projection (FBP) based algorithms, the use of IRT in a clinical setting has been hampered by the significant computational demands of these algorithms. In this paper we present the results of our efforts to overcome this hurdle by exploiting the combined computational power of multiple graphics processing units (GPUs). We implemented the forward and backward projection steps of reconstruction on NVIDIA Tesla S870 hardware using CUDA, accelerating forward projection by 71x and backward projection by 137x, with no perceptible difference in image quality between the GPU and serial CPU implementations. This work illustrates the power of using relatively low-cost, commercial off-the-shelf GPUs, potentially allowing IRT tomographic image reconstruction to run in near real time, lowering the barrier to entry of IRT and enabling deployment in the clinic.
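As a hedged illustration of the multi-GPU work-splitting strategy (not the authors' CUDA implementation), the sketch below distributes parallel-beam forward projection across devices with CuPy: each GPU receives a chunk of projection angles and computes its slab of the sinogram by rotate-and-sum, a deliberately simplified projector. It requires CuPy and at least one CUDA device.

```python
# Sketch: split projection angles across GPUs; each device computes its
# share of the sinogram. Rotate-and-sum is a simplified forward projector.
import numpy as np
import cupy as cp
from cupyx.scipy import ndimage as cundi

def project_angles(image_gpu, angles_deg):
    """Parallel-beam forward projection by rotate-and-sum (simplified)."""
    rows = []
    for theta in angles_deg:
        rotated = cundi.rotate(image_gpu, float(theta), reshape=False, order=1)
        rows.append(rotated.sum(axis=0))   # line integrals along columns
    return cp.stack(rows)

def multi_gpu_forward_projection(image, angles_deg, n_gpus):
    chunks = np.array_split(np.asarray(angles_deg), n_gpus)
    parts = []
    for dev, chunk in enumerate(chunks):
        with cp.cuda.Device(dev):          # work below runs on GPU `dev`
            img = cp.asarray(image, dtype=cp.float32)
            parts.append(project_angles(img, chunk))
    # Gather per-device sinogram slabs back to the host and concatenate.
    return np.vstack([cp.asnumpy(p) for p in parts])

if __name__ == "__main__":
    phantom = np.zeros((256, 256), dtype=np.float32)
    phantom[96:160, 96:160] = 1.0          # simple square phantom
    angles = np.linspace(0.0, 180.0, 180, endpoint=False)
    n_gpus = cp.cuda.runtime.getDeviceCount()
    print(multi_gpu_forward_projection(phantom, angles, n_gpus).shape)
```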