The technology of image processing and its application in the business world.
by Choy Ho Yuk, Anthony & Lo Shin Sing, Samuel. Thesis (M.B.A.), Chinese University of Hong Kong, 1991. Bibliography: leaves 80-82.
Table of contents:
Chapter I. INTRODUCTION
  Evolution of Image Processing; Scope and Statement of the Problem; Methodology
Chapter II. TECHNOLOGY OF IMAGE PROCESSING
  Concept of Image: What is Image; Image as Non-coded Information; Types of Image
  General Flow of Image Processing
  Storage of Image Documents: Electromagnetic Devices; Optical Disks and Jukeboxes; Storage Management
  Image Management System
  Image Communication
Chapter III. APPLICATIONS OF IMAGE PROCESSING
  How to Implement an Image Processing System: Feasibility Study; Implementation Stages
  Benefits of Image Processing: Storage; Document Organization; Data Security; Data Integrity; Document Retrieval and Workflow Management; Concurrency
  Issues of Image Processing: Cost Justification; Paper Storage Elimination; Data Conversion; Legal Acceptance of Image Documents
  Environments Suitable for Image Processing: Banks; Hospitals; Insurance Companies; USAA Image Processing Case Study
Chapter IV. INTEGRATION OF IMAGE PROCESSING WITH OTHER TECHNOLOGY
  Interface with Data Processing
  Interface with Microfilm: Input; Storage Media; Output; Software; Comparison between Microfilm and Optical Disk; Integration of Microfilm with Optical Disk
  Interface with Facsimile
Chapter V. EVALUATION OF EXISTING IMAGING SYSTEMS AND PRODUCTS
  Image Management Systems: Wang Integrated Image Systems; IBM Imageplus; Philips Megadoc
  Scanners: Wang Laboratories; Ricoh
  Optical Disks: Storage Dimensions; Wang Laboratories; Limitations of Optical Disk Systems
  Printers: Wang Laboratories; IBM
  Workstations: Wang PC 200/300 Series Image Workstation; IBM PS/2 Imageplus Workstation
Chapter VI. CONCLUSION
BIBLIOGRAPHY
ImageJ2: ImageJ for the next generation of scientific image data
ImageJ is an image analysis program extensively used in the biological
sciences and beyond. Due to its ease of use, recordable macro language, and
extensible plug-in architecture, ImageJ enjoys contributions from
non-programmers, amateur programmers, and professional developers alike.
Enabling such a diversity of contributors has resulted in a large community
that spans the biological and physical sciences. However, a rapidly growing
user base, diverging plugin suites, and technical limitations have revealed a
clear need for a concerted software engineering effort to support emerging
imaging paradigms, to ensure the software's ability to handle the requirements
of modern science. Due to these new and emerging challenges in scientific
imaging, ImageJ is at a critical development crossroads.
We present ImageJ2, a total redesign of ImageJ offering a host of new
functionality. It separates concerns, fully decoupling the data model from the
user interface. It emphasizes integration with external applications to
maximize interoperability. Its robust new plugin framework allows everything
from image formats, to scripting languages, to visualization to be extended by
the community. The redesigned data model supports arbitrarily large,
N-dimensional datasets, which are increasingly common in modern image
acquisition. Despite the scope of these changes, backwards compatibility is
maintained such that this new functionality can be seamlessly integrated with
the classic ImageJ interface, allowing users and developers to migrate to these
new methods at their own pace. ImageJ2 provides a framework engineered for
flexibility, intended to support these requirements as well as accommodate
future needs.
Improvements to the alignment process in electron-beam lithography
Electron beam lithography is capable of defining structures with sub-10 nm linewidths. To exploit this capability to produce working devices with structures defined in multiple lithographic steps, a process of alignment must be used. The conventional method of scanning the electron beam across simple geometrically shaped markers will be shown to inherently limit the attainable alignment accuracy. Improvements to alignment allow precise placement of elements in complex multi-level devices and may be used to realise structures significantly smaller than the single-exposure resist limit.
Correlation based alignment has been used previously as an alignment technique, providing improvements to the attainable accuracy and noise immunity of alignment. It is well known that the marker pattern used in correlation based alignment has a strong influence on the magnitude of the improvements that can be realised. There has, to date, however, been no analytical study of how the design of marker pattern affects the correlation process and hence the alignment accuracy possible. This thesis analyses the correlation process to identify the features of marker patterns that are advantageous for correlation based alignment. Several classes of patterns have been investigated, with a range of metrics used to determine the suitability and performance of each type of pattern. Penrose tilings were selected on this basis as the most appropriate pattern type for use as markers in correlation based alignment.
A process for performing correlation based alignment has been implemented on a commercial electron beam lithography tool, and the improvements to alignment accuracy have been demonstrated. A method of measuring alignment accuracy at the nanometer scale, based on the Fourier analysis of inter-digitated gratings, has been introduced.
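The core of correlation based alignment is locating the peak of the cross-correlation between the scanned signal and the known marker pattern. The sketch below is an illustrative FFT-based implementation in NumPy, not the thesis's actual tool code; `correlation_align` is a hypothetical helper name.

```python
import numpy as np

def correlation_align(scan, marker):
    """Estimate the (dy, dx) offset of `marker` within `scan` by
    cross-correlation, computed via the FFT convolution theorem."""
    # Zero-mean both signals so a flat background does not dominate the peak.
    sc = scan - scan.mean()
    mk = marker - marker.mean()
    # Cross-correlation: inverse FFT of F(scan) * conj(F(marker)),
    # with the marker zero-padded to the scan's shape.
    corr = np.fft.ifft2(np.fft.fft2(sc) * np.conj(np.fft.fft2(mk, s=sc.shape))).real
    # The correlation peak gives the marker's offset within the scan.
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    return int(dy), int(dx)
```

With a well-chosen marker pattern (the thesis argues for Penrose tilings), the correlation peak is sharp and robust to noise, which is what drives the accuracy improvement.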
The improvements in alignment accuracy realised have been used to facilitate the fabrication of 'nanogap' and 'nanowire' devices, structures with applications in the fields of molecular electronics and quantum conduction. Fabrication procedures for such devices are demonstrated, and electrical measurements of the resulting structures are presented to show that this is a feasible fabrication method offering much greater flexibility than existing methods for creating these devices.
Records Management System: Indexing Standard, Document Standard, Technology Standard, Laboratory Configuration, 1997
The purpose of the Records Management System indexing standards document is to develop indexing standards that will best support the Iowa DOT's effort to build an agency-wide Records Management System.
Modular integration in medical image viewers and CE marking of a medical device
This thesis describes in detail the modular integration of a script into the OsiriX software. The purpose of the script is to determine the central diameter of the aorta from a computed tomography (CT) scan. Concepts related to digital image processing and its associated technologies, such as the DICOM standard and software development, are covered.
As a preliminary study, several medical image viewers, used for research or sold commercially, are analysed.
Two different implementations of the plug-in were produced. The first version invokes the processing script on the study file stored on disk; the second passes the data through a shared memory block and uses the Java Native Interface (JNI) framework.
Finally, the whole process of affixing the CE marking to a class IIa medical device and obtaining the declaration of conformity from a Notified Body is demonstrated.
The Mac OS X and Linux operating systems were used, together with the Java, Objective-C and Python programming languages.
Digital imaging technology assessment: Digital document storage project
An ongoing technical assessment and requirements definition project is examining the potential role of digital imaging technology at NASA's STI facility. The focus is on the basic components of imaging technology in today's marketplace, as well as the components anticipated in the near future. A requirement specification for a prototype project is presented, together with an initial examination of current image processing at the STI facility and an initial summary of image processing projects at other sites. Operational imaging systems incorporate scanners, optical storage, high-resolution monitors, processing nodes, magnetic storage, jukeboxes, specialized boards, optical character recognition gear, pixel-addressable printers, communications, and complex software processes.
Sparse Coral Classification Using Deep Convolutional Neural Networks
Autonomous repair of deep-sea coral reefs is a recently proposed idea to support the ocean ecosystem, which is vital for commercial fishing, tourism and other species. The idea is to use many small autonomous underwater vehicles (AUVs) and swarm intelligence techniques to locate and replace chunks of coral which have been broken off, thus enabling re-growth and maintaining the habitat. The aim of this project is to develop machine vision algorithms that enable an underwater robot to locate a coral reef and a chunk of coral on the seabed, and prompt the robot to pick the chunk up. Although there is no literature on this particular problem, related work on fish counting may give some insight into it. The technical challenges are principally due to the potential lack of clarity of the water, platform stabilization, and spurious artifacts (rocks, fish, and crabs). We present an efficient sparse classification of coral species using a supervised deep learning method, Convolutional Neural Networks (CNNs). We compute the Weber Local Descriptor (WLD), Phase Congruency (PC), and Zero Component Analysis (ZCA) whitening to extract shape and texture feature descriptors, which are employed as supplementary channels (feature-based maps) alongside the basic spatial color channels (spatial-based maps) of the coral input image. We also experiment with state-of-the-art underwater preprocessing algorithms for image enhancement, color normalization, and color conversion adjustment. Our proposed coral classification method is developed on the MATLAB platform and evaluated on two different coral datasets (the University of California San Diego's Moorea Labeled Corals and Heriot-Watt University's Atlantic Deep Sea).
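Of the feature maps mentioned, ZCA whitening has a compact closed form: decorrelate the data while staying as close as possible to the original pixel space, which is why whitened maps remain usable as image-like CNN channels. The following is a minimal NumPy sketch of that step, illustrative only (the authors worked in MATLAB); `zca_whiten` is a hypothetical name.

```python
import numpy as np

def zca_whiten(patches, eps=1e-5):
    """ZCA-whiten a set of flattened image patches.
    patches: (n, d) array, one flattened patch per row.
    Returns patches transformed so their covariance is ~identity."""
    x = patches - patches.mean(axis=0)              # zero-mean each feature
    cov = x.T @ x / x.shape[0]                      # feature covariance, (d, d)
    u, s, _ = np.linalg.svd(cov)                    # cov is symmetric PSD
    w = u @ np.diag(1.0 / np.sqrt(s + eps)) @ u.T   # ZCA transform matrix
    return x @ w

# A whitened map could then be stacked as an extra CNN input channel,
# e.g. np.dstack([r, g, b, zca_map]) (channel layout here is an assumption).
```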
Hardware, software, and applications of super-resolution microscopy
Modern microscopy techniques can image beyond the diffraction limit, in three spatial dimensions, and capture sub-cellular resolution videos, providing new biological insight and assisting in drug development.
However, such advanced instruments typically require expert engineers and physicists to operate them, limiting their throughput and practicality for answering biological questions.
Moreover, analysis of the raw data is complicated and there are significant barriers to publishing and sharing the data with others.
This thesis addresses these problems, presenting two tools designed to reduce the level of expertise required to acquire and publish modern microscopy data.
The development of a structured illumination microscope (SIM) is described, with a particular emphasis on control and reconstruction software designed to make SIM accessible to biologists who are new to super-resolution microscopy.
The microscope's ease-of-use has led to a wide variety of biological investigations, which are presented as case studies to assist readers of this thesis in designing their own SIM experiments.
The current practice for publishing 3D data is to show 2D intensity projections or fly-through videos, which present the data only from the author's perspective and do not give readers the opportunity to explore the results themselves.
To solve this problem, Chapter 3 introduces my new volumetric rendering program, FPBioimage, which runs in a web browser.
By creating a tool that is intuitive and easy to use, FPBioimage enables researchers around the world to immediately view their colleagues' experimental results, even when separated by thousands of miles.
Two biological studies are discussed in detail to highlight the ability of these tools to answer the latest questions in cell biology.
SIM's combination of high-speed and high-resolution video capture reveals a previously unknown pinching phenomenon in the endoplasmic reticulum that is responsible for active flow of luminal proteins.
FPBioimage is used to show metal organic frameworks successfully delivering sensitive drugs to cells, establishing a new method of cancer treatment.
All software presented in this thesis is freely available, and has been carefully written to be reusable by other researchers.
This is evidenced by OMERO, an online microscopy data repository, adopting FPBioimage as their default volumetric renderer.
The open-source license under which the software is distributed means that developers can continue to build on the programs, extending the capabilities as new technology becomes available.
Integrated Photonic and Electronic Systems Centre for Doctoral Training (IPES CDT)
Eigenimage Processing of Frontal Chest Radiographs
The goal of this research was to improve the speed and accuracy of reporting by clinical radiologists. By applying a technique known as eigenimage processing to chest radiographs, abnormal findings were enhanced and a classification scheme developed. Results confirm that the method is feasible for clinical use. Eigenimage processing is a popular face recognition routine that has only recently been applied to medical images, but it has not previously been applied to full size radiographs. Chest radiographs were chosen for this research because they are clinically important and are challenging to process due to their large data content. It is hoped that the success with these images will enable future work on other medical images such as those from CT and MRI. Eigenimage processing is based on a multivariate statistical method which identifies patterns of variance within a training set of images. Specifically it involves the application of a statistical technique called principal components analysis to a training set. For this research, the training set was a collection of 77 normal radiographs. This processing produced a set of basis images, known as eigenimages, that best describe the variance within the training set of normal images. For chest radiographs the basis images may also be referred to as 'eigenchests'. Images to be tested were described in terms of eigenimages. This identified patterns of variance likely to be normal. A new image, referred to as the remainder image, was derived by removing patterns of normal variance, thus making abnormal patterns of variance more conspicuous. The remainder image could either be presented to clinicians or used as part of a computer aided diagnosis system. For the image sets used, the discriminatory power of a classification scheme approached 90%. While the processing of the training set required significant computation time, each test image to be classified or enhanced required only a few seconds to process. 
Thus the system could be integrated into a clinical radiology department.
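The eigenimage pipeline described above (principal components analysis on a training set of normal radiographs, then subtraction of the reconstructed "normal" variance from a test image) can be sketched in a few lines of NumPy. This is an illustrative sketch, not the author's implementation; function names and the flattened-image layout are assumptions.

```python
import numpy as np

def fit_eigenimages(train, k):
    """PCA on a stack of flattened normal images.
    train: (n, d) array, one flattened image per row.
    Returns the training mean and the top-k eigenimages (k, d)."""
    mean = train.mean(axis=0)
    # SVD of the centred data: rows of vt are the eigenimages
    # ('eigenchests' for chest radiographs), ordered by variance.
    _, _, vt = np.linalg.svd(train - mean, full_matrices=False)
    return mean, vt[:k]

def remainder_image(test, mean, components):
    """Remove the patterns of normal variance captured by the eigenimages,
    leaving abnormal structure more conspicuous in the remainder."""
    centred = test - mean
    coeffs = components @ centred        # project onto the eigenimages
    normal_part = components.T @ coeffs  # reconstruct the 'normal' variance
    return centred - normal_part         # the remainder image
```

Fitting is the expensive step, as the abstract notes; projecting and subtracting for a single test image is just two small matrix products, consistent with the few-seconds-per-image figure reported.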