
    Fluorescent-Antibody Targeting of Insulin-Like Growth Factor-1 Receptor Visualizes Metastatic Human Colon Cancer in Orthotopic Mouse Models.

    Fluorescent-antibody targeting of metastatic cancer has been demonstrated by our laboratory to enable tumor visualization and effective fluorescence-guided surgery. The goal of the present study was to determine whether insulin-like growth factor-1 receptor (IGF-1R) antibodies, conjugated with bright fluorophores, could enable visualization of metastatic colon cancer in orthotopic nude mouse models. IGF-1R antibody (clone 24-31) was conjugated with 550 nm, 650 nm, or PEGylated 650 nm fluorophores. Subcutaneous, orthotopic, and liver metastasis models of colon cancer in nude mice were targeted with the fluorescent IGF-1R antibodies. Western blotting confirmed IGF-1R expression in the HT-29 and HCT 116 human colon cancer cell lines, both of which express green fluorescent protein (GFP). Labeling with fluorophore-conjugated IGF-1R antibody produced fluorescent foci on the membrane of colon cancer cells. Subcutaneously and orthotopically transplanted HT-29-GFP and HCT 116-GFP tumors fluoresced brightly at the longer wavelengths after intravenous administration of fluorescent IGF-1R antibodies. Orthotopically transplanted HCT 116-GFP tumors were labeled brightly enough by the fluorescent IGF-1R antibodies to be imaged non-invasively at the longer wavelengths. In an experimental liver metastasis model, IGF-1R antibodies conjugated with the PEGylated 650 nm fluorophore selectively highlighted the liver metastases, which could then be imaged non-invasively. The labeled liver metastases were very bright compared with normal liver, and the fluorescent-antibody label co-localized with GFP expression in the colon cancer cells. The present study thus demonstrates that fluorophore-conjugated IGF-1R antibodies selectively visualize metastatic colon cancer and have clinical potential for improved diagnosis and fluorescence-guided surgery.

    Metabolic fingerprinting to assess the impact of salinity on carotenoid content in developing tomato fruits

    As the presence of health-promoting substances has become a significant aspect of tomato fruit appreciation, this study investigated nutrient solution salinity as a tool to enhance carotenoid accumulation in cherry tomato fruit (Solanum lycopersicum L. cv. Juanita). A key objective was to uncover the underlying mechanisms of carotenoid metabolism, moving away from typical black-box research strategies. To this end, a greenhouse experiment with five salinity treatments (ranging from 2.0 to 5.0 decisiemens per meter, dS m-1) was carried out, and a metabolomic fingerprinting approach was applied to gain insight into the complex interactions between the salinity treatments, environmental conditions, and the plant's genetic background. Several hundred metabolites were attributed a role in the plant's salinity response at the fruit level, and the overall impact turned out to be highly dependent on the developmental stage. In addition, 46 of these metabolites carried a dual significance, as they were also ascribed a prominent role in carotenoid metabolism. Based on the specific mediating actions of the retained metabolites, it could be determined that altered salinity had only marginal potential to enhance carotenoid accumulation in this tomato fruit cultivar. This study underscores the usefulness of metabolomics in modern agriculture, for instance in modeling tomato fruit quality. Moreover, the metabolome changes caused by the different salinity levels may hold valuable information about other salinity-related plant processes as well.
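The abstract does not specify the statistical pipeline behind the fingerprinting, but a common way to attribute metabolite features a role in a treatment response is to correlate each feature with the treatment level and control the false discovery rate. The sketch below assumes a hypothetical feature matrix with simulated data (one planted salinity-responsive feature) and uses Spearman correlation with Benjamini-Hochberg correction; it illustrates the analysis style, not the study's actual method.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical fingerprint: 100 metabolite features measured in fruits
# grown at five salinity levels (dS/m), n = 6 fruits per level.
# All values are simulated; feature 0 is planted as salinity-responsive.
salinity = np.repeat([2.0, 2.75, 3.5, 4.25, 5.0], 6)
X = rng.normal(size=(salinity.size, 100))
X[:, 0] += 2.0 * salinity  # feature 0 tracks the salinity treatment

# Flag salinity-responsive features: Spearman correlation per feature,
# then Benjamini-Hochberg control of the false discovery rate at 5%.
pvals = np.array([stats.spearmanr(salinity, X[:, j]).pvalue
                  for j in range(X.shape[1])])
order = np.argsort(pvals)
m = pvals.size
passed = pvals[order] <= 0.05 * np.arange(1, m + 1) / m
k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
responsive = order[:k]  # indices of features flagged as responsive
print(responsive)
```

In the study itself this kind of screen would be run per developmental stage, since the abstract notes the salinity impact was highly stage-dependent.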

    A Review and Characterization of Progressive Visual Analytics

    Progressive Visual Analytics (PVA) has gained increasing attention over the past years. It brings the user into the loop during otherwise long-running and non-transparent computations by producing intermediate partial results. These partial results can be shown to the user for early and continuous interaction with the emerging end result, even while it is still being computed. Yet as clear-cut as this fundamental idea seems, the existing body of literature puts forth various interpretations and instantiations that have created a research domain of competing terms, varying definitions, and long lists of practical requirements and design guidelines spread across different scientific communities. This makes it more and more difficult to get a succinct understanding of PVA's principal concepts, let alone an overview of this increasingly diverging field. The review and discussion of PVA presented in this paper address these issues and provide (1) a literature collection on this topic, (2) a conceptual characterization of PVA, and (3) a consolidated set of practical recommendations for implementing and using PVA-based visual analytics solutions.
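The core PVA idea described above, a long-running computation that emits intermediate partial results for early display, can be sketched as a generator. This is a minimal illustration of the pattern, not any particular PVA system; the data source and chunking scheme are assumptions.

```python
def progressive_mean(stream, chunk_size=1000):
    """Yield a running estimate of the mean after every processed chunk.

    Each yielded value is a partial result a visualization front end
    could render while the computation continues.
    """
    total, count = 0.0, 0
    chunk = []
    for x in stream:
        chunk.append(x)
        if len(chunk) == chunk_size:
            total += sum(chunk)
            count += len(chunk)
            chunk.clear()
            yield total / count  # intermediate partial result
    if chunk:  # flush the final, possibly smaller chunk
        total += sum(chunk)
        count += len(chunk)
        yield total / count

# A PVA front end would redraw on each yielded estimate instead of
# blocking until the full pass over the data finishes.
estimates = list(progressive_mean(range(10_000), chunk_size=2500))
print(estimates)
```

The estimates converge toward the final mean (4999.5 here) as more chunks are consumed, which is exactly what lets the user interact with the emerging end result early.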

    A comparative study using an autostereoscopic display with augmented and virtual reality

    Advances in display devices are facilitating the integration of stereoscopic visualization into our daily lives. However, autostereoscopic visualization has not been extensively exploited. In this paper, we present a system that combines Augmented Reality (AR) and autostereoscopic visualization. We also present the first study that compares different aspects of using an autostereoscopic display with AR and Virtual Reality (VR), in which 39 children from 8 to 10 years old participated. In our study, no statistically significant differences were found between AR and VR. However, the scores were very high for nearly all of the questions, and the children also scored the AR version higher in all cases. Moreover, the children explicitly preferred the AR version (81%). For the AR version, a strong and significant correlation was found between the use of the autostereoscopic screen in games and seeing the virtual object on the marker. For the VR version, two strong and significant correlations were found: the first between the ease of play and the use of the rotatory controller, and the second between depth perception and the global game score. Therefore, combining AR or VR with autostereoscopic visualization is a promising approach for developing edutainment systems for children.

This work was funded by the Spanish APRENDRA project (TIN2009-14319-C02). We would like to thank the following for their contributions: AIJU, the "Escola d'Estiu" and especially Ignacio Segui, Juan Cano, Miguelon Gimenez, and Javier Irimia. This work would not have been possible without their collaboration. The ALF3D project (TIN2009-14103-03) for the autostereoscopic display. Roberto Vivo, Rafa Gaitan, Severino Gonzalez, and M. Jose Vicent, for their help. The children's parents who signed the agreement to allow their children to participate in the study. The children who participated in the study.
The ETSInf for letting us use its facilities during the testing phase.

Arino, J.; Juan Lizandra, MC.; Gil Gómez, JA.; Mollá Vayá, RP. (2014). A comparative study using an autostereoscopic display with augmented and virtual reality. Behaviour and Information Technology, 33(6), 646-655. https://doi.org/10.1080/0144929X.2013.815277
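The item-level correlations reported in this study (e.g., ease of play vs. use of the rotatory controller) are the kind typically computed with a rank correlation on ordinal questionnaire scores. The sketch below uses invented Likert-scale responses, not the study's data, to show how such a correlation and its significance are obtained.

```python
from scipy import stats

# Hypothetical 1-5 Likert responses from a small group of children for
# two questionnaire items; the values are invented for illustration.
ease_of_play   = [5, 4, 5, 3, 4, 5, 2, 4, 5, 3]
controller_use = [5, 4, 4, 3, 4, 5, 2, 3, 5, 3]

# Spearman's rank correlation suits ordinal questionnaire data, since it
# does not assume equal spacing between scale points.
rho, p = stats.spearmanr(ease_of_play, controller_use)
print(round(rho, 2), p < 0.05)
```

A "strong and significant" correlation in the study's sense would correspond to a high rho with p below the chosen significance level.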