
    Role of Fluorescein angiography in evaluation of posterior segment disorders

    Objective: To study the role of fluorescein angiography in the evaluation of posterior segment diseases. Materials & Methods: A hospital-based prospective randomized study was conducted on 80 patients. A detailed patient history was taken, and a thorough ocular and systemic examination was performed. All patients were examined by ophthalmoscopy (direct, indirect and slit-lamp examination with a +90 D lens), followed by fluorescein angiography. Ophthalmoscopic and fluorescein angiography findings were analyzed and categorized. Patients were advised the necessary ocular and systemic treatment. Results: The 80 cases with posterior segment diseases were analyzed and subdivided into categories of diabetic retinopathy, vascular occlusive disorders, age-related macular degeneration, central serous chorioretinopathy, inflammatory disorders and miscellaneous conditions. Fundus Fluorescein Angiography (FFA) altered the diagnosis in 37.5% of cases and categorized the lesions in all cases. 11% of patients experienced adverse reactions such as nausea and vomiting. On statistical analysis, FFA proved to be a far superior diagnostic modality to clinical examination (ophthalmoscopy) in diagnosing fundus pathology. Conclusion: FFA is a superior diagnostic tool and a necessity for evaluating, localizing and categorizing lesions in retinal, macular and choroidal pathologies.

    Contemporary Operating Systems are not ready for Peer Computing: Peer systems should be treated as second-class citizens

    Delay tolerant collaborations among campus-wide wireless users

    The ubiquitous deployment of wireless LANs is allowing students to embrace laptops as their preferred computing platform. We investigated the viability of building collaborative applications to share content amongst student groups. In our application scenario, the university provides wireless infrastructure throughout the campus but not the storage infrastructure required to store the shared content. Laptops will likely exhibit weak availability. Hence, these collaborative applications need to tolerate long delays in propagating updates amongst the participants. In this paper, we present a preliminary analysis of message-forwarding behavior under realistic, resource-constrained node scenarios. Our experiments were based on the observed wireless user behavior at the University of Notre Dame. They showed the inherent limits of epidemic propagation in real campus wireless network scenarios.
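
    A toy model helps illustrate the epidemic-propagation limits the abstract reports. The sketch below is our own construction, not the paper's simulator; the node count, online probability, and contact probability are all assumed parameters. It spreads a single update among intermittently online nodes and reports how many time slots full propagation takes.

        import random

        def simulate_epidemic(num_nodes=50, p_online=0.2, p_contact=0.1,
                              max_slots=10000, seed=1):
            """Toy delay-tolerant epidemic: node 0 holds an update; in each
            time slot every online carrier may pass it to every other
            online node."""
            random.seed(seed)
            infected = {0}
            for slot in range(max_slots):
                online = [n for n in range(num_nodes) if random.random() < p_online]
                carriers = [n for n in online if n in infected]
                for _ in carriers:
                    for n in online:
                        if n not in infected and random.random() < p_contact:
                            infected.add(n)
                if len(infected) == num_nodes:
                    return slot + 1  # slots until everyone holds the update
            return None  # the update never fully propagated

        print("full propagation after", simulate_epidemic(), "slots")

    Lowering p_online models the weak laptop availability the abstract highlights; propagation time then grows sharply, which is consistent with the stated limits of epidemic propagation.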

    Trace driven analysis of the long term evolution of gnutella peer-to-peer traffic

    Peer-to-Peer (P2P) applications, such as Gnutella, are evolving to address some of the observed performance issues. In this paper, we analyze Gnutella behavior in 2003, 2005, and 2006. During this time, the protocol evolved from v0.4 to v0.6 to address problems with the overhead of overlay maintenance and query traffic bandwidth. The goal of this paper is to understand whether the newer protocols address the prior concerns. We observe that the new architecture alleviated the bandwidth consumption for low-capacity peers while increasing the bandwidth consumption at high-capacity peers. We measured a decrease in the incoming query rate. However, highly connected ultra-peers must maintain many connections to which they forward all queries, thereby increasing the outgoing query traffic. We also show that these changes have not significantly improved search performance. The effective success rate experienced at a forwarding peer has only increased from 3.5% to 6.9%. Over 90% of queries forwarded by a peer do not result in any query hits. With an average query size of over 100 bytes and 30 neighbors for an ultra-peer, this results in almost 1 GB of wasted bandwidth in a 24-hour session. We outline approaches to solve this problem and make P2P systems viable for a diverse range of applications.
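
    The 1 GB figure is easy to sanity-check with back-of-the-envelope arithmetic. In the sketch below, the query size, neighbor count, and hit rate come from the abstract, while the incoming query rate of 4 queries/s is our assumption, chosen so the totals line up with the reported number.

        # Sanity check of the "almost 1 GB wasted in 24 hours" claim.
        query_size_bytes = 100   # average query size (from the abstract)
        neighbors = 30           # ultra-peer connection count (from the abstract)
        success_rate = 0.069     # 6.9% of forwarded queries yield hits (from the abstract)
        queries_per_sec = 4.0    # assumed incoming query rate at an ultra-peer

        seconds = 24 * 3600
        forwarded = queries_per_sec * seconds * neighbors * query_size_bytes
        wasted = forwarded * (1 - success_rate)
        print(f"forwarded ~{forwarded / 1e9:.2f} GB, wasted ~{wasted / 1e9:.2f} GB per day")
        # -> forwarded ~1.04 GB, wasted ~0.97 GB per day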

    JPEG Compression Metric as a Quality Aware Image Transcoding

    Transcoding is becoming a preferred technique to tailor multimedia objects for delivery across variable network bandwidth and for storage and display on the destination device. This paper presents techniques to quantify the quality-versus-size tradeoff characteristics for transcoding JPEG images. We analyze the characteristics of images available in typical Web sites and explore how we can perform informed transcoding using the JPEG compression metric. We present the effects of this transcoding on the image storage size and image information quality. We also present ways of predicting the computational cost as well as the potential space benefits achieved by the transcoding. These results are useful in any system that uses transcoding to reduce access latencies, increase effective storage space, and reduce access costs.
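
    A minimal version of the quality-versus-size measurement can be sketched with the Pillow imaging library (the library choice, quality ladder, and file name are our assumptions, not the paper's setup). It re-encodes one image at several JPEG quality factors and records the resulting byte size, tracing the tradeoff curve the paper quantifies.

        from io import BytesIO
        from PIL import Image  # Pillow; our tool choice, not the paper's

        def quality_size_curve(path, qualities=(90, 75, 50, 25, 10)):
            """Re-encode a JPEG at several quality factors and report the
            transcoded byte size at each, sampling the quality-vs-size curve."""
            img = Image.open(path)
            curve = []
            for q in qualities:
                buf = BytesIO()
                img.save(buf, format="JPEG", quality=q)
                curve.append((q, buf.tell()))
            return curve

        for q, size in quality_size_curve("photo.jpg"):  # placeholder file name
            print(f"quality={q:3d} -> {size:8d} bytes")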

    Automated Storage Reclamation Using Temporal Importance Annotations

    This work focuses on scenarios that require the storage of large amounts of data. Such systems require the ability to either continuously increase the storage space or reclaim space by deleting content. Traditionally, storage systems relegated object reclamation to applications. In this work, content creators explicitly annotate each object with a temporal importance function. The storage system uses this information to evict less important objects. The challenge is to design importance functions that are simple yet expressive. We describe a two-step temporal importance function. We introduce the notion of storage importance density to quantify the importance levels at which the storage is full. Using extensive simulations and observations of a university-wide lecture video capture and storage application, we show that our abstraction allows users to express the desired persistence of each individual object.
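
    One plausible reading of the two-step temporal importance function (our interpretation; the levels and the window length are assumptions) is an importance that stays high for a retention window and then drops to a lower floor. The sketch below ranks objects by their current importance and evicts the least important ones until enough space is freed.

        def two_step_importance(created_at, now, high=1.0, low=0.2,
                                window=7 * 24 * 3600):
            """Assumed two-step shape: importance `high` for `window`
            seconds after creation, dropping to `low` afterwards."""
            return high if (now - created_at) < window else low

        def reclaim(objects, bytes_needed, now):
            """Evict the least important (then oldest) objects until enough
            space is freed. Each object is a (created_at, size_bytes)
            tuple in this hypothetical schema."""
            ranked = sorted(objects,
                            key=lambda o: (two_step_importance(o[0], now), o[0]))
            freed, victims = 0, []
            for obj in ranked:
                if freed >= bytes_needed:
                    break
                victims.append(obj)
                freed += obj[1]
            return victims

        now = 10_000_000
        store = [(now - 30 * 24 * 3600, 500),   # a month old: low importance
                 (now - 3600, 800),             # an hour old: high importance
                 (now - 10 * 24 * 3600, 300)]   # ten days old: low importance
        print(reclaim(store, 600, now))  # evicts the two old objects first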

    Towards portable multi-camera high definition video capture using smartphones

    Real-time tele-immersion requires low latency and synchronized multi-camera capture. Prior high definition (HD) capture systems were bulky. We investigate the suitability of using flocks of smartphone cameras for tele-immersion. Smartphones integrate capture and streaming into a single portable package. However, they archive the captured video into a movie file. Hence, we create a sequence of H.264 movies and stream them. Capture delay is reduced by minimizing the number of frames in each movie segment. However, fewer frames reduce compression efficiency. Also, smartphone video encoders do not sacrifice video quality to lower the compression latency or the stream size. On an iPhone 4S, our application that uses published APIs streams 1920x1080 video at 16.5 fps with a delay of 712 ms between a real-life event and the display of an uncompressed bitmap of that event on a local laptop. For comparison, the bulky Cisco Tandberg required a 300 ms delay. Stereoscopic video from two unsynchronized smartphones also showed minimal visual artifacts in an indoor setting. Keywords: portable tele-immersion, smartphone camera
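
    The segment-length tradeoff described above can be made concrete with simple arithmetic. In the sketch below, only the 16.5 fps capture rate comes from the abstract; the per-segment encode and upload overheads are assumed values for illustration. A segment cannot be shipped until its last frame is captured, so fewer frames per segment lowers the floor on end-to-end delay, at the cost of the compression efficiency noted above.

        # Segment-length vs. latency arithmetic (overheads are assumed values;
        # only the 16.5 fps capture rate comes from the abstract).
        fps = 16.5
        encode_overhead_s = 0.10   # assumed fixed cost to finalize each H.264 movie
        upload_overhead_s = 0.15   # assumed per-segment network cost

        for frames in (1, 2, 5, 10, 33):
            capture_s = frames / fps   # waiting for the segment to fill
            delay_s = capture_s + encode_overhead_s + upload_overhead_s
            print(f"{frames:2d} frames/segment -> ~{delay_s * 1000:4.0f} ms lower bound")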