560 research outputs found

    Medical Data Visual Synchronization and Information interaction Using Internet-based Graphics Rendering and Message-oriented Streaming

    The rapid technological advances in medical devices make it possible to generate vast amounts of data containing massive quantities of diagnostic information. Interactively accessing and sharing the acquired data over the Internet is critically important in telemedicine. However, due to the lack of efficient algorithms and the high computational cost, collaborative medical data exploration on the Internet remains a challenging task in clinical settings. We therefore develop a web-based medical image rendering and visual synchronization software platform, in which novel algorithms are created for parallel data computing and image feature enhancement, and the Node.js and Socket.IO libraries are used to establish bidirectional real-time connections between server and clients. In addition, we design a new methodology to stream medical information among all connected users, whose identities and input messages are automatically stored in a database and can be extracted in web browsers. The presented software framework provides multiple medical practitioners with immediate visual feedback and interactive information in applications such as collaborative therapy planning, distributed treatment, and remote clinical health care.
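    The message-streaming idea described above (identified users, messages stored server-side and replayed to connected clients) can be sketched in a few lines. This is a minimal, hypothetical in-memory model in Python, not the authors' Node.js/Socket.IO implementation; the class and method names are invented for illustration:

```python
from collections import defaultdict

class MessageHub:
    """Toy analogue of message-oriented streaming: each user is identified,
    every message is stored, and newly connected clients replay the history."""

    def __init__(self):
        self.log = []                      # stored messages (stands in for the database)
        self.clients = defaultdict(list)   # user_id -> messages received so far

    def connect(self, user_id):
        # a newly connected client receives the full stored history
        self.clients[user_id] = list(self.log)

    def broadcast(self, user_id, payload):
        record = {"user": user_id, "msg": payload}
        self.log.append(record)            # persist identity + message
        for uid in self.clients:           # push to every connected client
            self.clients[uid].append(record)

hub = MessageHub()
hub.connect("radiologist")
hub.broadcast("radiologist", "rotate volume 30 degrees")
hub.connect("surgeon")   # late joiner still sees the stored interaction history
```

    In the real system, the `broadcast` loop corresponds to a Socket.IO server emitting to all connected sockets, and `self.log` to the database the paper mentions.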

    Efficient, Distributed and Interactive Neuroimaging Data Analysis Using the LONI Pipeline

    The LONI Pipeline is a graphical environment for construction, validation and execution of advanced neuroimaging data analysis protocols (Rex et al., 2003). It enables automated data format conversion, allows Grid utilization, facilitates data provenance, and provides a significant library of computational tools. There are two main advantages of the LONI Pipeline over other graphical analysis workflow architectures. It is built as a distributed Grid computing environment and permits efficient tool integration, protocol validation and broad resource distribution. To integrate existing data and computational tools within the LONI Pipeline environment, no modification of the resources themselves is required. The LONI Pipeline provides several types of process submissions based on the underlying server hardware infrastructure. Only workflow instructions and references to data, executable scripts and binary instructions are stored within the LONI Pipeline environment. This makes it portable, computationally efficient, distributed and independent of the individual binary processes involved in pipeline data-analysis workflows. We have expanded the LONI Pipeline (V.4.2) to include server-to-server (peer-to-peer) communication and a 3-tier failover infrastructure (Grid hardware, Sun Grid Engine/Distributed Resource Management Application API middleware, and the Pipeline server). Additionally, the LONI Pipeline provides three layers of background-server executions for all users/sites/systems. These new LONI Pipeline features facilitate resource-interoperability, decentralized computing, construction and validation of efficient and robust neuroimaging data-analysis workflows. Using brain imaging data from the Alzheimer's Disease Neuroimaging Initiative (Mueller et al., 2005), we demonstrate integration of disparate resources, graphical construction of complex neuroimaging analysis protocols and distributed parallel computing. 
The LONI Pipeline, its features, specifications, documentation and usage are available online (http://Pipeline.loni.ucla.edu).
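    The 3-tier failover described above (Grid hardware, then Sun Grid Engine/DRMAA middleware, then the Pipeline server itself) amounts to trying execution tiers in order and falling back on failure. A minimal sketch of that cascade, with invented function names and error handling that stands in for whatever the Pipeline actually uses:

```python
def submit_with_failover(job, backends):
    """Try each execution tier in order; fall back to the next on failure.
    A toy model of a 3-tier failover, not the Pipeline's actual code."""
    errors = []
    for name, submit in backends:
        try:
            return name, submit(job)       # first tier that succeeds wins
        except RuntimeError as exc:
            errors.append((name, str(exc)))
    raise RuntimeError(f"all tiers failed: {errors}")

# hypothetical tiers: the first two are down, the local server works
def grid(job):   raise RuntimeError("grid hardware unavailable")
def drmaa(job):  raise RuntimeError("middleware timeout")
def local(job):  return f"ran {job} locally"

tier, result = submit_with_failover(
    "smooth_mri", [("grid", grid), ("drmaa", drmaa), ("local", local)]
)   # falls through to the local tier
```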

    Real time web-based toolbox for computer vision

    The last few years have been strongly marked by the presence of multimedia data (images and videos) in our everyday lives. These data are characterized by a fast pace of creation and sharing, since images and videos can come from different devices such as cameras, smartphones or drones. The latter are generally used to illustrate objects in different situations (airports, hospitals, public areas, sports games, etc.). As a result, image and video processing algorithms have gained increasing importance for several computer vision applications such as motion tracking, event detection and recognition, multimedia indexing and medical computer-aided diagnosis. In this paper, we propose a real-time cloud-based toolbox (platform) for computer vision applications. This platform integrates a toolbox of image and video processing algorithms that can be run in real time and in a secure way. The related libraries and hardware drivers are automatically integrated and configured in order to give users access to the different algorithms without the need to download, install and configure software or hardware. Moreover, the platform offers access to the integrated applications for multiple users thanks to the use of Docker (Merkel, 2014) containers and images. Experiments were conducted with three kinds of algorithms: (1) an image processing toolbox; (2) a video processing toolbox; (3) 3D medical methods such as computer-aided diagnosis for scoliosis and osteoporosis. These experiments demonstrated the value of our platform for sharing our scientific contributions in the computer vision domain. Scientific researchers can thus develop and share their applications easily, quickly and safely.
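    The core idea of exposing pre-configured algorithms to users by name, without client-side installation, can be illustrated with a small registry pattern. This is a hypothetical sketch in Python (the paper's platform uses Docker containers, not this mechanism); the names `TOOLBOX`, `register`, `invert` and `threshold` are invented for illustration:

```python
# Registry mapping algorithm names to ready-to-run implementations,
# mirroring how a toolbox platform exposes tools by name.
TOOLBOX = {}

def register(name):
    def wrap(fn):
        TOOLBOX[name] = fn
        return fn
    return wrap

@register("invert")
def invert(img):
    # invert an 8-bit grayscale image given as nested lists
    return [[255 - p for p in row] for row in img]

@register("threshold")
def threshold(img, t=128):
    # binarize: pixels at or above t become white, the rest black
    return [[255 if p >= t else 0 for p in row] for row in img]

# a client only needs the algorithm's name, not its implementation
result = TOOLBOX["invert"]([[0, 128], [255, 64]])
```

    In the actual platform, the role of the registry is played by Docker images: each tool ships with its own libraries and drivers, so clients invoke it remotely instead of importing it.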

    Accessing 3D Data

    The issue of access and discoverability is not simply a matter of permissions and availability. To identify, locate, retrieve, and reuse 3D materials requires consideration of a multiplicity of content types, as well as community and financial investment to resolve challenges related to usability, interoperability, sustainability, and equity. This chapter will cover modes, audiences, assets and decision points, technology requirements, and limitations impacting access, as well as providing recommendations for next steps

    Automatic Computation of Potential Tumor Regions in Cancer Detection using Fractal analysis techniques

    Radiology is one of the most active and technologically advanced fields in medicine. It was born from the most advanced physics concepts, and it became a reality thanks to the state of the art in electronics and computer science. The advances in medical imaging have made possible the early detection and diagnosis of multiple conditions that were not within our reach just a few years ago. However, progress comes at a price. The rise of imaging machinery has meant that the number and complexity of technical parameters have grown in the same proportion, and the amount of information generated by the imaging devices is much larger. In spite of technical progress, the medical imaging supply chain invariably ends at the same point: a human being, typically the radiologist or medical practitioner in charge of interpreting the obtained images. In the end, it is not unusual for human operators to check, one by one, two hundred slices of a computed tomography scan coming from a single routine control. It is not surprising if some tiny detail is missed when searching for “something wrong”, especially after some hours of continuous viewing, or due to insufficient time budgets. One of the milestones of this work is providing the reader with an overview of the field of volumetric medical imaging, in order to achieve a sufficient understanding of the problems involved in this discipline. This master's thesis is mainly an exercise in exploring a set of techniques, based on fractal analysis, aimed at providing computational help to the personnel in charge of interpreting volumetric medical images. Fractal analysis is a set of powerful tools which have been applied successfully in multiple fields. The goal of the thesis has been to apply these techniques to tumor detection in liver tissue and to evaluate their efficiency and adequacy.
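    A standard fractal-analysis tool of the kind the thesis explores is the box-counting dimension: cover a binary region with boxes of shrinking size and fit the slope of log(count) against log(1/size). A self-contained sketch of that computation (this is the generic technique, not the thesis's specific liver-tissue pipeline):

```python
import math

def box_count(grid, box):
    """Count boxes of side `box` containing at least one foreground pixel."""
    n = len(grid)
    count = 0
    for i in range(0, n, box):
        for j in range(0, n, box):
            if any(grid[y][x]
                   for y in range(i, min(i + box, n))
                   for x in range(j, min(j + box, n))):
                count += 1
    return count

def fractal_dimension(grid, sizes=(1, 2, 4, 8)):
    """Least-squares slope of log(count) vs log(1/size)."""
    xs = [math.log(1.0 / s) for s in sizes]
    ys = [math.log(box_count(grid, s)) for s in sizes]
    n = len(sizes)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# sanity check: a completely filled 16x16 region behaves as a 2-D object
filled = [[1] * 16 for _ in range(16)]
d = fractal_dimension(filled)   # ≈ 2.0
```

    Healthy and tumorous tissue textures tend to yield different dimensions, which is what makes the measure usable as a detection feature.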

    Information visualization of the stock market ticks : toward a new trading interface

    Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, February 2004. Includes bibliographical references (leaves 80-82). This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Ticks, the second-to-second trades and quotes of a market, might be considered the atoms of finance. They represent the basic, defining transactions that constitute an asset in the market. Almost all financial concepts, such as returns or risk, are essentially abstractions from tick data. Like atoms, ticks form a truly massive dataset - millions per day, far too numerous to represent with traditional graphical methods. Information-intensive disciplines such as medicine, bioinformatics, the earth sciences, and computational fluid dynamics have adopted modern visualization methods to manage and explore their respective mountains of data. Finance, however, has done little with the discipline of visualization to date. This research presents the hypothesis that information visualization of tick data can improve human performance in intraday equity trading. To test the hypothesis, the act of equity trading is broken into functional tasks, and the tasks are mapped to information requirements. Using the TAQ historical dataset, the research evaluates new 2D and 3D information designs for tick data, and creates a visual language for equity trading. The functional tasks and information designs are implemented in a visualization application, which provides an almost purely graphical trading interface to historical ticks. The user experience, trading performance, and analytical insight from this application are evaluated against numeric methods. Based upon this experiment, the research concludes by exploring the usability, potential issues and future directions of trading and tick visualization in general. By Pasha Roberts. S.M.
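    The claim that familiar quantities like returns are "abstractions from tick data" is concrete enough to show in code. A minimal illustration (generic, not the thesis's implementation, and using an invented function name):

```python
def simple_returns(ticks):
    """Per-trade simple returns derived from a sequence of tick prices:
    each higher-level quantity (returns, and from them risk) is built
    on top of the raw tick stream."""
    return [(b - a) / a for a, b in zip(ticks, ticks[1:])]

ticks = [100.0, 101.0, 100.0]   # three consecutive trade prices
rets = simple_returns(ticks)    # first return is +1%
```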

    Offline and Online Interactive Frameworks for MRI and CT Image Analysis in the Healthcare Domain : The Case of COVID-19, Brain Tumors and Pancreatic Tumors

    Medical imaging represents the organs, tissues and structures underneath the outer layers of skin and bone, and stores information on normal anatomical structures for abnormality detection and diagnosis. In this thesis, tools and techniques are used to automate the analysis of medical images, emphasizing the detection of brain tumor anomalies in brain MRIs, COVID-19 infections in lung CT images and pancreatic tumors in pancreatic CT images. Image processing methods such as filtering and thresholding models, geometric models, graph models, region-based analysis, connected component analysis, machine learning models, and recent deep learning models are used. This research considers the following problems for medical images: abnormality detection; abnormal region segmentation; an interactive user interface that presents the results of detection and segmentation while receiving feedback from healthcare professionals to improve the analysis procedure; and, finally, report generation. Complete interactive systems containing conventional models, machine learning, and deep learning methods for different types of medical abnormalities have been proposed and developed in this thesis. The experimental results show promising outcomes that have led to the incorporation of the methods into the proposed solutions, based on the observed performance metrics and their comparisons. Although separate systems have currently been developed for brain tumors, COVID-19 and pancreatic cancer, their success shows promising potential for combining them into a generalized system that analyzes medical images of different types, collected from any organ, to detect any type of abnormality.
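    Connected component analysis, one of the conventional methods listed above, is the step that turns a thresholded binary mask into distinct candidate regions. A self-contained sketch of 4-connected labeling via breadth-first search (the generic algorithm, not this thesis's specific code):

```python
from collections import deque

def connected_components(mask):
    """Label 4-connected foreground regions in a binary mask; returns the
    number of regions and a same-shaped grid of labels (0 = background)."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and not labels[sy][sx]:
                current += 1                   # found a new region; flood-fill it
                labels[sy][sx] = current
                q = deque([(sy, sx)])
                while q:
                    y, x = q.popleft()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = current
                            q.append((ny, nx))
    return current, labels

# two separate foreground blobs -> two labeled candidate regions
mask = [[1, 1, 0],
        [0, 0, 0],
        [0, 1, 1]]
n, lab = connected_components(mask)   # n == 2
```

    In a detection pipeline, each labeled region can then be filtered by size or shape before being flagged as a potential abnormality.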