50 research outputs found

    Characterisation of atrial flutter variants based on the analysis of spatial vectorcardiographic trajectory from standard ECG

    Get PDF
    After atrial fibrillation, atrial flutter is the most common atrial tachyarrhythmia. Its diagnosis relies on analysing the distinctive waves in several leads of the twelve-lead electrocardiogram. Nonetheless, accurate identification of the type of atrial flutter still requires an invasive procedure. The treatment of atrial flutter consists of ablating a section of the atrial anatomy to interrupt the macroreentrant circuit, allowing the signal to travel to the ventricles instead of remaining in the atria. The region to ablate depends directly on the location of the macroreentrant circuit, which in turn depends on the type of atrial flutter. Being able to detect the atrial flutter variant non-invasively would therefore be a great advantage when treating this condition. The hypothesis stated in this dissertation is that slow conduction regions are the key factor for distinguishing the atrial flutter class. Testing this hypothesis, and unveiling further relations between cardiac illnesses and their signal counterparts, is the purpose of this research project. With this aim, different methods are developed based on the vectorcardiographic representation of electrocardiograms from patients suffering from different atrial flutter types. These methods consist of characterising vectorcardiographic signals from different standpoints. In addition, a mathematical model is implemented to create a large database of synthetic vectorcardiographic signals, allowing the validity of the methods to be tested. The results prove the importance of slow regions in the vectorcardiographic representation of the patients' signals for characterising the atrial flutter type non-invasively. Furthermore, the analysis of the outcomes of the different methods reveals a wide variety of features relating characteristics of the vectorcardiographic signal to the anatomy and physiology of this cardiac disease. Hence, not only were the results supporting the hypothesis successful (within some limitations), but a varied assortment of results also revealed remarkable relations between the vectorcardiographic signal and the characteristics of the atrial flutter disease.
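    As a rough illustration of the kind of analysis described above, the sketch below computes the spatial velocity of a vectorcardiographic (VCG) loop and flags low-velocity samples as candidate slow conduction regions. This is a minimal sketch under assumed inputs (an already-derived X/Y/Z VCG trajectory sampled at a known rate); it is not the dissertation's actual method, and the threshold is an arbitrary placeholder.

```python
import numpy as np

def slow_regions(vcg, fs, threshold_frac=0.2):
    """Flag candidate slow-conduction samples in a VCG loop.

    vcg : (N, 3) array of X, Y, Z leads (already derived from the 12-lead ECG).
    fs  : sampling frequency in Hz.
    threshold_frac : fraction of the median spatial velocity below which a
                     sample is considered "slow" (illustrative placeholder).
    Returns a boolean mask of slow samples.
    """
    # Spatial velocity: Euclidean norm of the time derivative of the loop.
    velocity = np.linalg.norm(np.gradient(vcg, axis=0), axis=1) * fs
    return velocity < threshold_frac * np.median(velocity)

# Example with a synthetic elliptical loop that slows down on one side.
fs = 1000  # Hz
t = np.linspace(0, 2 * np.pi, 1000)
warped = t + 0.8 * np.sin(t)  # non-uniform phase -> varying loop speed
loop = np.column_stack([np.cos(warped), np.sin(warped), 0.2 * np.sin(2 * warped)])
mask = slow_regions(loop, fs)
print(f"{mask.mean():.0%} of samples flagged as slow")
```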

    Expert knowledge for computerized ECG interpretation

    Get PDF
    In this study, two main questions are addressed: (1) Can the time-consuming and cumbersome development and refinement of (heuristic) ECG classifiers be alleviated, and (2) Is it possible to increase the diagnostic performance of ECG computer programs by combining knowledge from multiple sources? Chapters 2 and 3 are of an introductory character. In Chapter 2, the measurement part of MEANS is described and evaluated. This research largely depends on the earlier work of Talman [11]. In Chapter 3, different methods of diagnostic ECG classification are described and their pros and cons discussed. The issue is raised whether or not the ECG should be classified using as much prior information as possible, and our position is made clear. The first question, how to ease the transfer of cardiological knowledge into computer algorithms, is addressed in Chapters 4 and 5. The development and refinement of heuristic ECG classifiers is impeded by two problems: (1) it generally requires a computer expert to translate the cardiologist's reasoning into computer language, without the average cardiologist being able to verify whether his diagnostic intentions were properly realized, and (2) the classifiers are often so complex as to obscure insight into their doings when a particular case is processed by the classification program. To circumvent these problems, we developed a dedicated language, DTL (Decision Tree Language), together with an interpreter and compiler for that language. In Chapter 4, a comprehensive description of the DTL environment is given. In Chapter 5, the use of the environment to optimize MEANS, following a procedure of stepwise refinement, is described. The second question, whether it is feasible to combine knowledge from multiple sources in order to increase the diagnostic performance of an ECG computer program, is explored from several perspectives in Chapters 6 through
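    The abstract does not show what a DTL rule looks like, but the sketch below gives a rough idea of how a heuristic ECG classification criterion can be expressed as a small decision tree. The tree structure, feature names, and thresholds here are illustrative assumptions, not the actual DTL syntax or the MEANS criteria.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Node:
    """One node of a heuristic classification tree."""
    test: Optional[Callable[[dict], bool]] = None  # None marks a leaf
    if_true: Optional["Node"] = None
    if_false: Optional["Node"] = None
    label: str = ""

def classify(node: Node, features: dict) -> str:
    # Walk the tree until a leaf is reached.
    while node.test is not None:
        node = node.if_true if node.test(features) else node.if_false
    return node.label

# Illustrative tree: thresholds and diagnostic labels are invented placeholders.
tree = Node(
    test=lambda f: f["q_duration_ms"] >= 40,
    if_true=Node(
        test=lambda f: f["q_r_amplitude_ratio"] >= 0.25,
        if_true=Node(label="probable old myocardial infarction"),
        if_false=Node(label="possible old myocardial infarction"),
    ),
    if_false=Node(label="no infarction criteria met"),
)

print(classify(tree, {"q_duration_ms": 45, "q_r_amplitude_ratio": 0.3}))
```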

    Effects of Felled Shortleaf Pine (Pinus echinata Mill.) Moisture Loss on Oviposition Preferences and Survival of Sirex nigricornis F. (Hymenoptera: Siricidae)

    Get PDF
    The European woodwasp, Sirex noctilio F. (Hymenoptera: Siricidae), utilizes pine as its host during larval development. Females drill through pine bark to deposit eggs, a symbiotic fungus, Amylostereum, and phytotoxic mucus into the tree. In their native range, these insects are not viewed as primary pests because they attack dead or dying trees. Over the last century, this woodwasp has been accidentally introduced into several countries in the southern hemisphere. Some regions have incurred millions of dollars in damage to large plantations of the widely planted radiata pine (Pinus radiata D. Don). Sirex noctilio was discovered in the northeastern United States and Canada in 2004. Prior studies have focused on damage done to pine stands in the southern hemisphere and, because those pines are not native there, those studies may not be applicable to native pines in the U.S. The southeastern U.S. contains millions of hectares of possibly susceptible pine trees, and thus it is advisable to study the native Arkansas woodwasp, S. nigricornis F. (a species with similar biology), in preparation for a possible invasion by its exotic counterpart. The objectives of this research were to 1) examine how shortleaf pine (Pinus echinata Mill.) logs (bolts) lose moisture over time under the moderate drought conditions of Arkansas, and 2) determine the oviposition preferences of Sirex nigricornis females in aging pine bolts. To complete these objectives, shortleaf pines were felled and moisture content was measured over a period of 45 days. The moisture content results were used to set the parameters for oviposition choice experiments. After a cross-sectional cut was made, most moisture loss occurred within 3-4 cm of the bolt ends, while the center of the bolt stayed consistent during this period. Females preferred to oviposit in recently cut bolts. Using these results, trap tree methods can be altered to create more efficient methods of siricid capture and laboratory rearing.
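    The abstract does not state which convention was used for the moisture measurements, but wood moisture content is conventionally reported on either a dry or a wet basis. The sketch below shows both, purely as background to the measurements described; the masses are placeholder values.

```python
def moisture_content_dry_basis(wet_mass_g: float, oven_dry_mass_g: float) -> float:
    """Moisture content (%) on a dry basis: water mass relative to oven-dry wood mass."""
    return 100.0 * (wet_mass_g - oven_dry_mass_g) / oven_dry_mass_g

def moisture_content_wet_basis(wet_mass_g: float, oven_dry_mass_g: float) -> float:
    """Moisture content (%) on a wet basis: water mass relative to total wet mass."""
    return 100.0 * (wet_mass_g - oven_dry_mass_g) / wet_mass_g

# Placeholder masses for illustration only.
print(moisture_content_dry_basis(150.0, 100.0))  # 50.0 %
print(moisture_content_wet_basis(150.0, 100.0))  # ~33.3 %
```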

    Verification, slicing, and visualization of programs with contracts

    Get PDF
    Doctoral thesis in Informatics (area of specialisation: Computer Science). As a specification carries relevant information concerning the behaviour of a program, why not exploit this fact to slice a program in a semantic sense, aiming at optimizing it or easing its verification? It was this idea that Comuzzi introduced in 1996 with the notion of postcondition-based slicing: slice a program using the information contained in the postcondition (the condition Q that is guaranteed to hold at the exit of a program). After him, several advances were made and different extensions were proposed, bridging the two areas of Program Verification and Program Slicing: specifically, precondition-based slicing and specification-based slicing. The work reported in this Ph.D. dissertation explores further relations between these two areas, aiming at discovering mutual benefits. A deep study of specification-based slicing has shown that the original algorithm is not efficient and does not produce minimal slices. In this dissertation, traditional specification-based slicing algorithms are revisited and improved (their formalization is proposed under the name of assertion-based slicing), in a new framework that is appropriate for reasoning about imperative programs annotated with contracts and loop invariants. In the same theoretical framework, the semantic slicing algorithms are extended to work at the program level through a new concept called contract-based slicing. Contract-based slicing, constituting another contribution of this work, allows for the study of a program at an interprocedural level, enabling optimizations in the context of code reuse. Motivated by the lack of tools to demonstrate that the proposed algorithms work in practice, a tool (GamaSlicer) was also developed. It implements all the existing semantic slicing algorithms, in addition to the ones introduced in this dissertation. This third contribution is based on generic graph visualization and animation algorithms that were adapted to work with verification and slice graphs, two specific cases of labelled control flow graphs. Funded by Fundação para a Ciência e a Tecnologia (FCT) through doctoral grant SFRH/BD/33231/2007, the RESCUE project (FCT contract PTDC/EIA/65862/2006), and the CROSS project (FCT contract PTDC/EIACCO/108995/2008).
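    To make the idea of postcondition-based slicing concrete, the sketch below removes, from a straight-line toy program, the assignments that cannot affect the variables mentioned in a postcondition Q. This is a deliberately simplified backward dependence pass over assignments only (no loops, contracts, or weakest preconditions), so it illustrates the flavour of semantic slicing rather than the assertion-based or contract-based algorithms developed in the dissertation.

```python
# Each statement is (assigned_var, used_vars); the program is straight-line code.
program = [
    ("a", {"x"}),        # a := x + 1
    ("b", {"y"}),        # b := y * 2
    ("c", {"a"}),        # c := a - 3
    ("d", {"b", "z"}),   # d := b + z
]
postcondition_vars = {"c"}   # Q only mentions c

def slice_backward(stmts, relevant):
    """Keep only the statements that can influence the variables in `relevant`."""
    relevant = set(relevant)
    kept = []
    for assigned, used in reversed(stmts):
        if assigned in relevant:
            kept.append((assigned, used))
            relevant = (relevant - {assigned}) | used  # track data dependences
    return list(reversed(kept))

print(slice_backward(program, postcondition_vars))
# -> [('a', {'x'}), ('c', {'a'})]; b and d cannot affect Q, so they are sliced away.
```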

    DAG-Based Attack and Defense Modeling: Don't Miss the Forest for the Attack Trees

    Full text link
    This paper presents the current state of the art on attack and defense modeling approaches that are based on directed acyclic graphs (DAGs). DAGs allow for a hierarchical decomposition of complex scenarios into simple, easily understandable and quantifiable actions. Methods based on threat trees and Bayesian networks are two well-known approaches to security modeling. However, there exist more than 30 DAG-based methodologies, each having different features and goals. The objective of this survey is to present a complete overview of graphical attack and defense modeling techniques based on DAGs. This consists of summarizing the existing methodologies, comparing their features, and proposing a taxonomy of the described formalisms. This article also supports the selection of an adequate modeling technique depending on user requirements.
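    As a small illustration of the quantitative side of such DAG-based models, the sketch below evaluates the minimum attacker cost over a tiny attack tree with AND/OR refinements. The tree, node names, and costs are invented placeholders, and this bottom-up rule is only one of the many attribute domains covered by the surveyed formalisms.

```python
# Attack-tree nodes: leaves carry a cost; inner nodes are AND/OR refinements.
tree = {
    "steal data":   ("OR", ["phishing", "break-in"]),
    "break-in":     ("AND", ["pick lock", "bypass alarm"]),
    "phishing":     ("LEAF", 300),
    "pick lock":    ("LEAF", 120),
    "bypass alarm": ("LEAF", 450),
}

def min_cost(node, tree):
    """Minimum attacker cost: sum over AND children, min over OR children."""
    kind, payload = tree[node]
    if kind == "LEAF":
        return payload
    child_costs = [min_cost(child, tree) for child in payload]
    return sum(child_costs) if kind == "AND" else min(child_costs)

print(min_cost("steal data", tree))  # phishing (300) beats break-in (120 + 450)
```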

    Spacelab 3

    Get PDF
    The primary purpose of the Spacelab 3 mission is to conduct materials science experiments in a stable low-gravity environment. In addition, the crew will do research in life sciences, fluid mechanics, atmospheric science, and astronomy. Spacelab 3 and a mission scenario are described, along with mission development, management, and the crew. Summaries of the scientific investigations are also included.

    A tool for evaluating the early-stage design of corvettes

    Get PDF
    Thesis (S.M. in Naval Architecture and Marine Engineering), Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2010. Cataloged from PDF version of thesis. Includes bibliographical references (p. 140). In naval architecture terminology, the term "corvette" refers to a class of ships that are shorter than frigates and longer than patrol boats. Corvettes have always been the centerpiece of navies whose mission requirements are based on littoral combat, such as Anti-Submarine Warfare, Mine Warfare, and Anti-Surface Warfare. Numerous studies in the history of naval architecture have focused on frigates and patrol boats; however, few have applied to corvettes. There is a trend in the shipbuilding industry to design new ships as corvettes [1], since they can operate both independently and in joint missions. However, it is difficult for a naval architect to manage all the information flow throughout the corvette design process. As the displacement of the ship increases, the design process also becomes more complicated. Managing this process is made more efficient by using computer programs; however, programs for use in the design of corvettes do not exist. This thesis explains how early-stage estimations are made for corvettes. In order to cover this future trend in marine transportation, a Matlab model for estimating the main characteristics of corvettes in early-stage design is also developed. This Matlab model is based on a statistical analysis of existing ships that are classified as corvettes. The database used in this study is created from the public information available to the author. For this study, design lanes are created, trend lines are drawn, and relationships between the desired values are graphed. For validation of the code, the Kral J Petar Kresimir, Eilat (SAAR 5), and Robinson are used as reference ships. The customer requirements of these ships are entered into the model, and the results show that the data of these ships fall within the design lanes. by Mustafa Yasin Kara. S.M. in Naval Architecture and Marine Engineering.
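    As a rough analogue of the statistical approach described above (the thesis itself uses a Matlab model and a database of real corvettes), the sketch below fits a simple power-law trend line relating full-load displacement to length overall and uses it to estimate a new design point. The data pairs are invented placeholders, not values from the thesis database.

```python
import numpy as np

# Placeholder (displacement [t], length overall [m]) pairs; not real corvette data.
displacement = np.array([1200.0, 1500.0, 1900.0, 2300.0, 2700.0])
length_overall = np.array([82.0, 88.0, 95.0, 101.0, 107.0])

# Fit a power-law trend L = a * Disp^b by linear regression in log space,
# a common functional form for early-stage design lanes.
b, log_a = np.polyfit(np.log(displacement), np.log(length_overall), 1)
a = np.exp(log_a)

def estimate_length(disp_t: float) -> float:
    """Estimate length overall [m] for a given full-load displacement [t]."""
    return a * disp_t ** b

print(f"Estimated LOA for a 2000 t corvette: {estimate_length(2000.0):.1f} m")
```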