    Computer-assisted detection of lung cancer nodules in medical chest X-rays

    Diagnostic medicine was revolutionized in 1895 with Röntgen's discovery of X-rays. X-ray photography has played a very prominent role in diagnostics of all kinds since then and continues to do so. More sophisticated and successful medical imaging systems are certainly available, including Magnetic Resonance Imaging (MRI), Computerized Tomography (CT) and Positron Emission Tomography (PET); however, the hardware installation and operating costs of these systems remain considerably higher than those of X-ray systems. Conventional X-ray photography also has the advantage of producing an image in significantly less time than MRI, CT and PET, and it is still used extensively, especially in third world countries. The routine diagnostic tool for chest complaints is the X-ray, and lung cancer may be diagnosed by identifying a lung cancer nodule in a chest X-ray. Curing lung cancer depends on detection and diagnosis at an early stage. At present the five-year survival rate of lung cancer patients is approximately 10%. If lung cancer can be detected while the tumour is still small and localized, the five-year survival rate increases to about 40%; however, only 20% of lung cancer cases are currently diagnosed at this early stage. Giger et al. wrote that "detection and diagnosis of cancerous lung nodules in chest radiographs are among the most important and difficult tasks performed by radiologists".

    The investigation of the characterisation of flotation froths and design of a machine vision system for monitoring the operation of a flotation cell in ore concentration

    Electrical and Electronic Engineering. This dissertation investigates the application of digital image processing techniques in the development of a machine vision system capable of characterising the froth structures prevalent on the surface of industrial flotation cells. At present, no instrument is available that can measure the size and shape of the bubbles that constitute the surface froth. For this reason, research into a vision-based system for surface froth characterisation has been undertaken. Being able to measure bubble size and shape would have far-reaching consequences, not only in enhancing the understanding of the flotation process but also in the control and optimization of flotation cells.
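    To illustrate the kind of measurement such a vision system must perform, the sketch below segments bubbles in a froth image with a marker-based watershed and reports simple size and shape statistics. It is a minimal example using OpenCV and NumPy, not the method developed in the dissertation; the file name and all threshold values are assumptions.

```python
# Minimal sketch: estimate bubble size and shape from a froth image.
# This is NOT the dissertation's method; it only illustrates the kind of
# measurement a froth-characterisation vision system must perform.
# "froth.png" and all thresholds are assumed values.
import cv2
import numpy as np

img = cv2.imread("froth.png", cv2.IMREAD_GRAYSCALE)

# Smooth and binarise: bright bubble tops against darker lamellae.
blur = cv2.GaussianBlur(img, (5, 5), 0)
_, binary = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Distance transform + peaks give one marker per bubble for the watershed.
dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
_, peaks = cv2.threshold(dist, 0.4 * dist.max(), 255, cv2.THRESH_BINARY)
peaks = peaks.astype(np.uint8)
n_markers, markers = cv2.connectedComponents(peaks)

# Watershed needs a 3-channel image; region boundaries are labelled -1.
markers = cv2.watershed(cv2.cvtColor(img, cv2.COLOR_GRAY2BGR), markers)

# Per-bubble size (area) and shape (circularity = 4*pi*A / P^2).
for label in range(1, n_markers):
    mask = np.uint8(markers == label)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        continue
    area = cv2.contourArea(contours[0])
    perim = cv2.arcLength(contours[0], True)
    if perim > 0 and area > 10:          # ignore tiny spurious regions
        circularity = 4 * np.pi * area / (perim ** 2)
        print(f"bubble {label}: area={area:.0f}px, circularity={circularity:.2f}")
```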

    Multiscale Edge Detection using a Finite Element Framework for Hexagonal Pixel-based Images

    New Business Models for the Reuse of Secondary Resources from WEEEs

    This open access book summarizes research pursued within the FENIX project, funded by the EU under the H2020 programme, whose goal is to design a new product-service paradigm able to promote innovative business models, to add value to the vessels and to create new market segments. It experiments with and validates its approach on three new concepts of added-value specialized vessels able to run requested services for several maritime sectors in the most effective, efficient, economically valuable and eco-friendly way. The three vessels share the same lean design methodology, IoT tools and HPC simulation strategy: a lean, fact-based design model approach, which combines real operative data at sea with lean methodology, to support the development and implementation of the vessel concepts; customized IT tools to enable the acquisition, processing and usage of on-board and local weather data, through an IoT platform, to provide business services to different stakeholders; and HPC simulation, providing a virtual towing-tank environment for early vessel design improvement and testing. The book demonstrates how an integrated LCC analysis and LCC strategy can guarantee the sustainability of the vessel concepts and proper environmental attention within the maritime industry.

    Towards the automation of product geometric verification: An overview

    The paper aims at providing an overview of the current automation level of the geometric verification process, with reference to some aspects that can be considered crucial for achieving greater efficiency, accuracy and repeatability of the inspection process. Although we are still far from making this process completely automatic, several research efforts have been made in recent years to support and speed up geometric error evaluation and to make it less human-intensive. The paper, in particular, surveys: (1) models of specification developed for an integrated approach to tolerancing; (2) the state of the art of Computer-Aided Inspection Planning (CAIP); and (3) research efforts recently made to limit or eliminate the human contribution during the data processing aimed at geometric error evaluation. Possible future perspectives of research on the automation of the geometric verification process are finally described.
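    The data-processing step referred to under (3) can be made concrete with a small worked example: a hedged sketch, assuming a flatness evaluation, in which a least-squares plane is fitted to measured points and the flatness error is taken as the spread of the residual distances. The survey does not prescribe this method; the point set and the tolerance value are invented for illustration.

```python
# Hedged sketch of one geometric error evaluation step: flatness by
# least-squares plane fitting. The sample points and the 0.05 mm tolerance
# are illustrative assumptions, not data from the paper.
import numpy as np

# Simulated CMM points (x, y, z) in millimetres on a nominally flat face.
pts = np.array([
    [0.0, 0.0, 0.010], [10.0, 0.0, 0.022], [20.0, 0.0, 0.018],
    [0.0, 10.0, 0.005], [10.0, 10.0, 0.030], [20.0, 10.0, 0.025],
    [0.0, 20.0, 0.012], [10.0, 20.0, 0.020], [20.0, 20.0, 0.028],
])

# Fit z = a*x + b*y + c by linear least squares.
A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)

# Signed distances of the points from the fitted plane.
residuals = pts[:, 2] - A @ coeffs
distances = residuals / np.sqrt(coeffs[0] ** 2 + coeffs[1] ** 2 + 1.0)

# Flatness error = separation of the two parallel planes enclosing all points.
flatness = distances.max() - distances.min()
print(f"flatness error: {flatness:.4f} mm")
print("within tolerance" if flatness <= 0.05 else "out of tolerance")
```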

    An Alternative to Graph Matching for Locating Objects from their Salient Features

    The local-feature-focus (LFF) method has become a standard means for robustly locating objects in two dimensions. Yet it is not without its difficulties, since the maximal clique approach to graph matching which it employs is excessively computation intensive, belonging to the class of NP-complete problems. This raises the question of whether similar results could be obtained by other means. Here we attempt to answer this question, and in particular to compare the LFF and generalised Hough transform (GHT) schemes; the actual comparison is carried out in section 4, with sections 2 and 3 devoted to preliminary studies of the two methods respectively. The GHT approach is found to be essentially equivalent to graph matching, while permitting objects to be located in polynomial (O(n)) time.
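    The polynomial-time behaviour claimed for the generalised Hough transform comes from its voting structure: each detected feature casts votes for candidate object positions via a precomputed model table, so run time grows with the number of features rather than with the feature pairings a clique search must examine. Below is a minimal sketch of that voting scheme (translation only, no rotation or scale); the model and scene features are invented for illustration and the code does not reproduce the paper's experiments.

```python
# Minimal sketch of generalised-Hough-transform style voting for object
# location from point features (translation only). The model and scene
# features below are invented for illustration.
from collections import defaultdict

# Model: salient feature positions relative to the object's reference point.
model_offsets = [(0, 0), (4, 1), (2, 5), (-3, 2)]

# Scene: detected feature positions, including one clutter point.
scene_features = [(10, 10), (14, 11), (12, 15), (7, 12), (20, 3)]

# Each scene feature votes for every reference-point position that would
# explain it under some model feature: O(n_scene * n_model) votes in total.
accumulator = defaultdict(int)
for (sx, sy) in scene_features:
    for (dx, dy) in model_offsets:
        accumulator[(sx - dx, sy - dy)] += 1

# The accumulator peak is the hypothesised object location.
location, votes = max(accumulator.items(), key=lambda kv: kv[1])
print(f"object reference point at {location} with {votes} votes")
# -> (10, 10) with 4 votes: all four model features are present in the scene.
```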

    Management of object-oriented action-based distributed programs

    PhD thesis. This thesis addresses the problem of managing the runtime behaviour of distributed programs. The thesis of this work is that management is fundamentally an information-processing activity and that the object model, as applied to action-based distributed systems and database systems, is an appropriate representation of the management information. In this approach, the basic concepts of classes, objects, relationships and atomic transition systems are used to form object models of distributed programs. Distributed programs are collections of objects whose methods are structured using atomic actions, i.e., atomic transactions. Object models are formed of two submodels, each representing a fundamental aspect of a distributed program. The structural submodel represents a static perspective of the distributed program, and the control submodel represents a dynamic perspective of it. Structural models represent the program's objects, classes and their relationships. Control models represent the program's object states, events, guards and actions: a transition system. Resolution of queries on the distributed program's object model enables the management system to control certain activities of distributed programs. At a different level of abstraction, the distributed program can be seen as a reactive system in which two subprograms interact: an application program and a management program, which interact only through sensors and actuators. Sensors are methods used to probe an object's state, and actuators are methods used to change an object's state. The management program is capable of prodding the application program into action by activating sensors and actuators available at the interface of the application program. Actions are determined by management policies that are encoded in the management program. This way of structuring the management system encourages a clear modularization of application and management distributed programs, allowing better separation of concerns: management concerns can be dealt with by the management program, while functional concerns can be assigned to the application program. The object-oriented, action-based computational model adopted by the management system provides a natural framework for the implementation of fault-tolerant distributed programs. Object orientation provides modularity and extensibility through object encapsulation. Atomic actions guarantee the consistency of the objects of the distributed program despite concurrency and failures. Replication of the distributed program provides increased fault tolerance by guaranteeing the consistent progress of the computation, even though some of the replicated objects can fail. A prototype management system based on the management theory proposed above has been implemented atop Arjuna, an object-oriented programming system which provides a set of tools for constructing fault-tolerant distributed programs. The management system is composed of two subsystems: Stabilis, a management system for structural information, and Vigil, a management system for control information. Example applications have been implemented to illustrate the use of the management system and to gather experimental evidence in support of the thesis. Funding: CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico, Brazil); BROADCAST (Basic Research On Advanced Distributed Computing: from Algorithms to SysTems).
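    The sensor/actuator structuring described above can be illustrated with a short sketch: a hedged example, not code from Stabilis, Vigil or Arjuna, in which a management program observes an application object only through sensor methods and steers it only through actuator methods. All class, method and policy names are invented for illustration.

```python
# Hedged sketch of the sensor/actuator structuring idea: the management
# program interacts with the application object only through sensors
# (state probes) and actuators (state changes). Names are invented; this
# is not code from Stabilis, Vigil or Arjuna.

class BoundedQueue:
    """Application object: functional concerns only."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []

    # --- sensors: read-only probes of the object's state ---
    def sensor_length(self):
        return len(self.items)

    def sensor_is_full(self):
        return len(self.items) >= self.capacity

    # --- actuators: methods the manager may invoke to change state ---
    def actuator_grow(self, extra):
        self.capacity += extra

    # --- ordinary functional interface used by the application ---
    def enqueue(self, item):
        if self.sensor_is_full():
            raise OverflowError("queue full")
        self.items.append(item)


def management_program(obj):
    """Management concerns only: a policy encoded over sensors/actuators."""
    # Assumed policy: if the queue is more than 80% full, grow it by half.
    if obj.sensor_length() > 0.8 * obj.capacity:
        obj.actuator_grow(obj.capacity // 2)


# Application and management programs interact only via sensors/actuators.
q = BoundedQueue(capacity=5)
for i in range(5):
    q.enqueue(i)
    management_program(q)
print(q.capacity)  # capacity has grown under the management policy
```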