High level describable attributes for predicting aesthetics and interestingness
With the rise in popularity of digital cameras, the amount of visual data available on the web is growing exponentially. Some of these pictures are extremely beautiful and aesthetically pleasing, but the vast majority are uninteresting or of low quality. This paper demonstrates a simple yet powerful method to automatically select high aesthetic quality images from large image collections. Our aesthetic quality estimation method explicitly predicts some of the possible image cues that a human might use to evaluate an image and then uses them in a discriminative approach. These cues, or high-level describable image attributes, fall into three broad types: 1) compositional attributes related to image layout or configuration, 2) content attributes related to the objects or scene types depicted, and 3) sky-illumination attributes related to the natural lighting conditions. We demonstrate that an aesthetics classifier trained on these describable attributes provides a significant improvement over baseline methods for predicting human quality judgments. We also demonstrate our method for predicting the "interestingness" of Flickr photos, and introduce the novel problem of estimating query-specific "interestingness".
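As an illustration of the discriminative approach described in this abstract, the Python sketch below trains a simple classifier on pre-computed attribute scores. The attribute names, file names, and choice of classifier are hypothetical assumptions for illustration; the paper does not prescribe this implementation.

```python
# Minimal sketch: train a discriminative aesthetics classifier on
# high-level describable attributes (compositional, content, and
# sky-illumination cues). Data files and feature names are assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Each row is one image; columns are predicted attribute scores, e.g.
# [rule_of_thirds, shallow_depth_of_field, contains_people, is_landscape,
#  clear_sky, sunset_lighting, ...] -- illustrative names only.
X = np.load("attribute_scores.npy")   # shape: (n_images, n_attributes)
y = np.load("aesthetic_labels.npy")   # 1 = high quality, 0 = low quality

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

clf = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```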
Baby Talk: Understanding and Generating Simple Image Descriptions
We posit that visually descriptive language offers computer vision researchers both information about the world and information about how people describe the world. This benefit is made more significant by the enormous amount of language data readily available today. We present a system that automatically generates natural language descriptions from images by exploiting both statistics gleaned from parsing large quantities of text data and recognition algorithms from computer vision. The system is very effective at producing relevant sentences for images, and it generates descriptions that are notably more true to the specific image content than previous work.
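A minimal sketch of how detector outputs might be assembled into a simple sentence, in the spirit of the system described above. The detections, relation, and sentence template below are hypothetical placeholders, not the paper's actual pipeline, which additionally uses statistics mined from large text corpora.

```python
# Minimal sketch: fill a fixed sentence template with object detections
# and predicted attributes/prepositions. All inputs are illustrative.
detections = [
    {"object": "dog", "attribute": "brown", "score": 0.9},
    {"object": "sofa", "attribute": "red", "score": 0.8},
]
# Spatial relation predicted between the two detected regions
# (in practice chosen with the help of text-derived statistics).
relation = "on"

def describe(dets, rel):
    """Describe the two highest-scoring detections with a fixed template."""
    a, b = sorted(dets, key=lambda d: d["score"], reverse=True)[:2]
    return (f"There is a {a['attribute']} {a['object']} "
            f"{rel} the {b['attribute']} {b['object']}.")

print(describe(detections, relation))
# -> "There is a brown dog on the red sofa."
```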
1028 nm single mode Ytterbium-doped fiber laser
An efficient Ytterbium-doped fiber laser (YDFL) operating at 1028 nm is demonstrated using a fiber Bragg grating (FBG). The Ytterbium-doped fiber (YDF) is drawn from a Yb2O3-doped preform fabricated by depositing a porous SiO2-GeO2 layer via the MCVD process in conjunction with a solution-doping technique. The fabricated YDF has 0.1 mol% Yb2O3 in the core, a Ytterbium ion lifetime of 1.1 ms, and an absorption of 7.65 dB/m at 976 nm. The fiber laser achieves a maximum efficiency of 51% with a pump power threshold of 19 mW, using the FBG in conjunction with a Fresnel reflection to form a linear cavity resonator. Both the efficiency and the threshold are better than those of a similar YDFL configuration using a commercial YDF.
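For orientation, the standard linear above-threshold model relates the reported numbers as follows; this is an illustrative calculation only, assuming the quoted 51% can be read as a slope efficiency, and the 100 mW pump level is an arbitrary example value.

```latex
% Linear above-threshold model for laser output power:
% P_out = eta_s (P_pump - P_th), with eta_s = 0.51 and P_th = 19 mW.
\[
  P_{\mathrm{out}} = \eta_s \left( P_{\mathrm{pump}} - P_{\mathrm{th}} \right),
  \qquad
  P_{\mathrm{out}} \approx 0.51 \times (100 - 19)\,\mathrm{mW} \approx 41\,\mathrm{mW}
  \quad \text{at } P_{\mathrm{pump}} = 100\,\mathrm{mW}.
\]
```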
Exosome-based smart drug delivery tool for cancer theranostics
Exosomes are a phospholipid-membrane-bound subpopulation of extracellular vesicles derived from the plasma membrane. The main activity of exosomes is cellular communication. In cancer, exosomes play an important role from two distinct perspectives: one related to carcinogenesis and the other as theranostic and drug delivery tools. The outer phospholipid membrane of the exosome improves drug-targeting efficiency. Vital features of exosomes such as biocompatibility, low toxicity, and low immunogenicity make them an attractive drug delivery system. Exosome-based drug delivery is an innovative approach to cancer treatment, and exosome-associated biomarker analysis has heralded a new era of more specific cancer diagnostics. This review focuses on exosome biogenesis, sources, isolation, the interrelationship with cancer and exosome-related cancer biomarkers, drug loading methods, exosome-based biomolecule delivery, advances and limitations of exosome-based drug delivery, and clinical studies of exosome-based drug delivery. An exosome-based understanding of cancer will change diagnostic and therapeutic approaches in the future.