
    Fast and accurate classification of echocardiograms using deep learning

    Echocardiography is essential to modern cardiology. However, human interpretation limits high-throughput analysis, preventing echocardiography from reaching its full clinical and research potential for precision medicine. Deep learning is a cutting-edge machine-learning technique that has been useful in analyzing medical images but has not yet been widely applied to echocardiography, partly due to the complexity of echocardiograms' multi-view, multi-modality format. The essential first step toward comprehensive computer-assisted echocardiographic interpretation is determining whether computers can learn to recognize standard views. To this end, we anonymized 834,267 transthoracic echocardiogram (TTE) images from 267 patients (20 to 96 years, 51 percent female, 26 percent obese) seen between 2000 and 2017 and labeled them according to standard views. Images covered a range of real-world clinical variation. We built a multilayer convolutional neural network and used supervised learning to simultaneously classify 15 standard views. Eighty percent of the data was randomly chosen for training and 20 percent was reserved for validation and testing on never-before-seen echocardiograms. Using multiple images from each clip, the model classified among 12 video views with 97.8 percent overall test accuracy without overfitting. Even on single low-resolution images, test accuracy among 15 views was 91.7 percent, versus 70.2 to 83.5 percent for board-certified echocardiographers. Confusion matrices, occlusion experiments, and saliency mapping showed that the model finds recognizable similarities among related views and classifies using clinically relevant image features. In conclusion, deep neural networks can classify essential echocardiographic views simultaneously and with high accuracy. Our results provide a foundation for more complex deep learning-assisted echocardiographic interpretation. Comment: 31 pages, 8 figures
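    The abstract does not specify the network's exact configuration, so the following is only a minimal sketch, assuming PyTorch, of the general approach it describes: a small multilayer CNN trained with supervised cross-entropy over 15 view labels. The 64x64 grayscale input and layer sizes are illustrative assumptions, not the authors' architecture.

```python
# A minimal sketch (not the authors' published architecture): a small
# multilayer CNN classifying single low-resolution echo frames into 15
# standard views. Input resolution and layer sizes are assumptions.
import torch
import torch.nn as nn

class ViewClassifier(nn.Module):
    def __init__(self, num_views: int = 15):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 16 -> 8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, num_views),  # one logit per standard view
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# One supervised training step: cross-entropy over the view labels.
model = ViewClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
frames = torch.randn(8, 1, 64, 64)    # a batch of single low-res frames
labels = torch.randint(0, 15, (8,))   # integer view labels
loss = nn.functional.cross_entropy(model(frames), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

    Clip-level ("video view") classification in the same spirit could average per-frame probabilities across a clip, consistent with the abstract's use of multiple images per clip.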

    Negative Statements Considered Useful

    Knowledge bases (KBs), pragmatic collections of knowledge about notable entities, are an important asset in applications such as search, question answering, and dialogue. Rooted in a long tradition in knowledge representation, all popular KBs store only positive information, while abstaining from taking any stance toward statements not contained in them. In this paper, we make the case for explicitly stating interesting statements that are not true. Negative statements would be important for overcoming current limitations of question answering, yet due to their potential abundance, any effort toward compiling them needs a tight coupling with ranking. We introduce two approaches to compiling negative statements. (i) In peer-based statistical inference, we compare entities with highly related entities in order to derive potential negative statements, which we then rank using supervised and unsupervised features. (ii) In query-log-based text extraction, we use a pattern-based approach for harvesting search-engine query logs. Experimental results show that both approaches hold promising and complementary potential. Along with this paper, we publish the first datasets on interesting negative information, containing over 1.1M statements for 100K popular Wikidata entities.
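    As a rough illustration of approach (i), the sketch below derives negative-statement candidates by comparing an entity against its peers and ranks them by peer frequency; the toy knowledge base and the single frequency signal are illustrative stand-ins for the paper's supervised and unsupervised ranking features.

```python
# A minimal sketch of the peer-based idea: properties frequent among an
# entity's peers but absent for the entity become ranked candidates for
# negative statements. The toy data and the ranking signal are assumptions.
from collections import Counter

def peer_based_negations(entity, kb, peers):
    """kb maps each entity to its set of positive (predicate, object) pairs."""
    entity_facts = kb[entity]
    # Count how many peers assert each statement the entity lacks.
    support = Counter(
        fact
        for peer in peers
        for fact in kb[peer]
        if fact not in entity_facts
    )
    # Rank candidates by peer frequency (one of several possible signals).
    return sorted(support.items(), key=lambda kv: -kv[1])

kb = {
    "Stephen Hawking": {("occupation", "physicist"), ("award", "Copley Medal")},
    "Albert Einstein": {("occupation", "physicist"), ("award", "Nobel Prize")},
    "Richard Feynman": {("occupation", "physicist"), ("award", "Nobel Prize")},
}
# Candidate: Hawking did NOT win a Nobel Prize, supported by 2 of 2 peers.
print(peer_based_negations("Stephen Hawking", kb,
                           ["Albert Einstein", "Richard Feynman"]))
```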

    Simulation and Optimization of an Ant Colony Optimization Algorithm for the Stochastic Uncapacitated Location-Allocation Problem

    This study proposes a novel methodology for using ant colony optimization (ACO) with stochastic demand. In particular, an optimization-simulation-optimization approach is used to solve the stochastic uncapacitated location-allocation problem with an unknown number of facilities and an objective of minimizing the fixed and transportation costs. The ACO is modeled using discrete-event simulation to capture the randomness of customers' demand, with the objective of optimizing the costs; in turn, the simulated ACO's parameters are themselves optimized to guarantee superior solutions. The approach's performance is evaluated by comparing its solutions to those obtained using deterministic data. The results show that simulation was able to identify better facility allocations where the deterministic solutions would have been inadequate due to the real randomness of customers' demand.
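    A minimal sketch of this coupling, under illustrative assumptions (toy costs, Gaussian demand, a simple per-facility pheromone model): each ant proposes a set of open facilities, a small Monte Carlo simulation estimates the expected fixed-plus-transportation cost under random demand, and pheromone trails are reinforced toward the best solution found.

```python
# A hedged sketch of ACO coupled with demand simulation, not the study's
# exact algorithm. All numbers (costs, distances, demand distribution,
# ACO parameters) are illustrative.
import random

FIXED_COST = [40.0, 55.0, 35.0]                      # cost of opening each facility
DIST = [[4, 9, 7], [8, 3, 6], [5, 7, 2], [9, 4, 5]]  # customer-to-facility distances

def simulate_cost(open_facilities, samples=200):
    """Estimate expected cost: each customer's demand is random and is
    served by the nearest open facility (uncapacitated)."""
    if not open_facilities:
        return float("inf")
    fixed = sum(FIXED_COST[j] for j in open_facilities)
    transport = 0.0
    for _ in range(samples):
        for dists in DIST:
            demand = max(random.gauss(10, 3), 0.0)   # stochastic demand
            transport += demand * min(dists[j] for j in open_facilities)
    return fixed + transport / samples

def aco(iterations=100, n_ants=10, rho=0.1):
    pheromone = [1.0] * len(FIXED_COST)              # one trail per facility
    best, best_cost = None, float("inf")
    for _ in range(iterations):
        for _ in range(n_ants):
            # Each ant opens facility j with a probability from its trail.
            sol = [j for j, t in enumerate(pheromone)
                   if random.random() < t / (1.0 + t)]
            cost = simulate_cost(sol)
            if cost < best_cost:
                best, best_cost = sol, cost
        # Evaporate all trails, then reinforce the best-known solution.
        if best is not None:
            pheromone = [(1 - rho) * t + (rho if j in best else 0.0)
                         for j, t in enumerate(pheromone)]
    return best, best_cost

print(aco())  # e.g. ([2], ...): open the cheap, central facility
```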

    Enriching open-world knowledge graphs with expressive negative statements

    Machine knowledge about entities and their relationships has been a long-standing goal for AI researchers. Over the last 15 years, thousands of public knowledge graphs have been automatically constructed from various web sources. They are crucial for use cases such as search engines. Yet, existing web-scale knowledge graphs focus on collecting positive statements, and store very little to no negatives. Due to their incompleteness, the truth of absent information remains unknown, which compromises the usability of the knowledge graph. In this dissertation: First, I make the case for selective materialization of salient negative statements in open-world knowledge graphs. Second, I present our methods to automatically infer them from encyclopedic and commonsense knowledge graphs, by locally inferring closed-world topics from reference comparable entities. I then discuss our evaluation findings on metrics such as correctness and salience. Finally, I conclude with open challenges and future opportunities.

    Knowledge graphs about entities and their attributes are an important component of many AI applications. Web-scale knowledge graphs store almost exclusively positive statements and overlook negative ones. Due to the incompleteness of open-world knowledge graphs, missing statements are treated as unknown rather than false. This dissertation argues for enriching knowledge graphs with informative statements that do not hold, thereby increasing their value for applications such as question answering and entity summarization. With potentially billions of candidate negative statements, we address four main challenges:
    1. Correctness (or plausibility) of negative statements: under the open-world assumption (OWA), it is not enough to check that a negative candidate is not explicitly stated as positive in the knowledge graph, since it may simply be a missing statement. Methods for checking large sets of candidates and eliminating false positives are essential.
    2. Salience of negative statements: the set of correct negative statements is very large but full of trivial or nonsensical ones, e.g., "A cat cannot store data." Methods for quantifying the informativeness of negatives are required.
    3. Topic coverage: depending on the data source and the candidate retrieval methods, some topics or entities in the knowledge graph may receive no negative candidates. Methods must guarantee the ability to discover negatives about almost any existing entity.
    4. Complex negative statements: in some cases, expressing a negation requires more than one knowledge-graph triple. For example, "Einstein received no education" is an incorrect negation, but "Einstein received no education at a US university" is correct. Methods for generating complex negations are needed.
    This dissertation addresses these challenges as follows:
    1. We first make the case for the selective materialization of negative statements about entities in encyclopedic (well-canonicalized) open-world knowledge graphs, and formally define three kinds of negative statements: grounded, universally absent, and conditional negative statements. We introduce the peer-based negation inference method to produce lists of salient negations about entities. The method computes relevant peers for a given input entity and uses their positive properties to set expectations for the input entity. An expectation that is not met is an immediate negative candidate, which is then scored using frequency, importance, and unexpectedness metrics.
    2. We propose the pattern-based query-log extraction method to harvest salient negations from large text sources. This method extracts salient negations about an entity by mining large corpora, e.g., search-engine query logs, using a few handcrafted patterns with negative keywords.
    3. We introduce the UnCommonsense method to generate salient negative phrases about everyday concepts in less-canonicalized commonsense knowledge graphs. This method is designed for the negation inference, checking, and ranking of short natural-language phrases. It computes comparable concepts for a given target concept, derives negations by comparing their positive candidates, and checks these candidates against the knowledge graph itself as well as with language models (LMs) as an external knowledge source. Finally, the candidates are ranked using semantic similarity and occurrence frequency measures.
    4. To ease the exploration of our methods and their results, we implement two prototype systems. Wikinegata, a system showcasing the peer-based method, lets users explore negative statements about 500K entities from 11 classes and adjust the various parameters of the peer-based inference method; users can also query the knowledge graph through a search form with negated predicates. In the UnCommonsense system, users can inspect exactly what the method produces at each step and browse negations about 8K everyday concepts. Moreover, using the peer-based negation inference method, we create the first large-scale dataset on demographics and outliers in communities of interest and demonstrate its usefulness in use cases such as identifying underrepresented groups.
    5. We publish all datasets and source code produced in these projects at https://www.mpi-inf.mpg.de/negation-in-kbs and https://www.mpi-inf.mpg.de/Uncommonsense
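    As a hedged illustration of the candidate-scoring step in the peer-based method, the sketch below combines the three named signals (frequency, importance, unexpectedness) into a single ranking score with a weighted sum; the weights and toy statistics are assumptions, not the dissertation's exact formulas.

```python
# A hedged sketch of ranking unmet peer expectations. The weighted-sum
# combination and all numbers are illustrative assumptions.
def score_candidate(peer_frequency, predicate_importance, unexpectedness,
                    w=(0.5, 0.3, 0.2)):
    """Combine the three signals into one ranking score in [0, 1]."""
    signals = (peer_frequency, predicate_importance, unexpectedness)
    return sum(wi * si for wi, si in zip(w, signals))

candidates = {
    # (predicate, object): (share of peers asserting it,
    #                       importance of the predicate,
    #                       how surprising the absence is)
    ("award", "Nobel Prize"): (0.9, 0.8, 0.7),
    ("residence", "Berlin"): (0.2, 0.3, 0.1),
}
ranked = sorted(candidates, key=lambda c: -score_candidate(*candidates[c]))
print(ranked)  # most salient negation candidate first
```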

    Exploring the Effects of Cooperative Adaptive Cruise Control in Mitigating Traffic Congestion

    The aim of this research is to examine the impact of CACC (Cooperative Adaptive Cruise Control)-equipped vehicles on the traffic-flow characteristics of a multilane highway system. The research identifies how CACC vehicles affect the dynamics of traffic flow on a road network and demonstrates their potential for reducing the congestion caused by stop-and-go traffic conditions. An agent-based traffic simulation model is developed specifically to examine the effect of these intelligent vehicles on traffic-flow dynamics. Traffic performance metrics characterizing the evolution of congestion throughout the road network are analyzed, and different CACC penetration levels are studied. The positive impact of the CACC technology is demonstrated: it increases highway capacity and mitigates traffic congestion, though the effect is sensitive to the market penetration and the traffic arrival rate. In addition, a progressive deployment strategy for CACC is proposed and validated.
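    A minimal sketch of the agent-based idea under illustrative assumptions: vehicles on a single-lane ring update their speed from the gap to their leader with a constant time-gap rule, and CACC agents simply hold a much shorter time gap than manual drivers. The paper's actual multilane model and parameters are not shown here.

```python
# A hedged sketch of agent-based car following with mixed CACC/manual
# agents. The constant time-gap law and all parameters are assumptions.
import random

DT = 0.5       # simulation time step [s]
V_MAX = 30.0   # free-flow speed [m/s]

class Vehicle:
    def __init__(self, pos, cacc):
        self.pos, self.speed = pos, V_MAX / 2
        self.time_gap = 0.6 if cacc else 1.5   # CACC follows much closer
        self.gain = 0.8 if cacc else 0.4       # CACC reacts faster

    def update(self, leader, road_length):
        gap = (leader.pos - self.pos) % road_length
        desired = min(V_MAX, gap / self.time_gap)   # constant time-gap law
        self.speed = max(self.speed + self.gain * (desired - self.speed) * DT, 0.0)
        self.pos = (self.pos + self.speed * DT) % road_length

def run(penetration, n=30, road_length=1000.0, steps=2000):
    cars = [Vehicle(i * road_length / n, random.random() < penetration)
            for i in range(n)]
    for _ in range(steps):
        for i, car in enumerate(cars):
            car.update(cars[(i + 1) % n], road_length)
    # Flow = density * mean speed (vehicles per second past a point).
    return (n / road_length) * sum(c.speed for c in cars) / n

for p in (0.0, 0.2, 0.4, 0.8):
    print(f"penetration {p:.0%}: flow {run(p) * 3600:.0f} veh/h")
```

    Sweeping the penetration argument, as above, gives a crude view of how throughput responds to the CACC share.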

    Towards reducing traffic congestion using cooperative adaptive cruise control on a freeway with a ramp

    Purpose: In this paper, the impact of Cooperative Adaptive Cruise Control (CACC) systems on traffic performance is examined using microscopic agent-based simulation. Using a traffic simulation model of a freeway with an on-ramp, created to induce perturbations and trigger stop-and-go traffic, the CACC system's effect on traffic performance is studied. The previously proposed traffic simulation model is extended and validated. By embedding CACC vehicles at different penetration levels, the results show significance and indicate the potential of CACC systems to improve traffic characteristics and therefore reduce traffic congestion. The study shows that the impact of CACC is positive but highly dependent on the CACC market penetration. The flow rate of the traffic using CACC is proportional to the market penetration rate of CACC-equipped vehicles and to the density of the traffic.
    Design/methodology/approach: This paper uses microscopic simulation experiments followed by a quantitative statistical analysis. Simulation enables researchers to manipulate the system variables and straightforwardly predict the outcome on the overall system, giving them the unique opportunity to intervene and improve performance. Thus, with simulation, changes to variables that might require excessive time, or be unfeasible to carry out on real systems, are often completed within seconds.
    Findings: The findings of this paper are summarized as follows:
    • Provide and validate a platform (an agent-based microscopic traffic simulator) in which any CACC algorithm (current or future) may be evaluated.
    • Provide a detailed analysis associated with the implementation of CACC vehicles on freeways.
    • Investigate whether embedding CACC vehicles on freeways has a significant positive impact.
    Research limitations/implications: The main limitation of this research is that it has been conducted solely in a computer laboratory. Laboratory experiments and simulations provide a controlled setting, well suited for preliminary testing and calibration of the input variables. However, laboratory testing is by no means sufficient to validate the entire methodology; it must be complemented by fundamental field testing. As for the simulation model's limitations: accidents, weather conditions, and obstacles in the road were not taken into consideration, nor were failures in the operation of the sensors and communication equipment of the CACC design. Additionally, the special HOV lanes were limited to manual and CACC vehicles; emergency vehicles, buses, motorcycles, and other vehicle types were not considered. Finally, it is worth noting that human behavior is far more sophisticated, harder to predict, and more flexible than any traffic simulation model can capture exactly; some behavior that occurs in real life would not be reproduced by the proposed model.
    Practical implications: A high percentage of CACC market penetration will not occur in the near future, so reaching high penetration will remain a challenge for this type of research, as will public access to such technology. With such a small headway safety gap, even if the technology is proven to be efficient and safe, getting the public to accept it and feel comfortable using it will remain a challenge for the success of the CACC technology.
    Originality/value: The literature on the impact of CACC on traffic dynamics is limited. In addition, no previous work has proposed an open-source microscopic traffic simulator in which different CACC algorithms can easily be used and tested. We believe that the proposed model is more realistic than other traffic models, and it is one of the very first to model the behavior of CACC vehicles on freeways. Peer Reviewed
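    As a small illustration of the quantitative statistical analysis mentioned in the methodology, the sketch below compares mean flow from replicated simulation runs at two penetration levels with a Welch t-test; the flow numbers are made-up placeholders, not results from the paper.

```python
# A hedged sketch of comparing two penetration levels statistically.
# The replication counts and flow values are illustrative placeholders.
from math import sqrt
from statistics import mean, stdev

flow_0 = [1810, 1795, 1840, 1788, 1822]   # veh/h at 0% CACC, 5 replications
flow_40 = [2105, 2140, 2088, 2122, 2097]  # veh/h at 40% CACC, 5 replications

def welch_t(a, b):
    """Two-sample t-statistic without assuming equal variances."""
    va, vb = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    return (mean(b) - mean(a)) / sqrt(va + vb)

print(f"Welch t-statistic: {welch_t(flow_0, flow_40):.1f}")
# A large t-statistic supports a significant effect of penetration on flow.
```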

    Electric Machines: Tool in MATLAB

    This chapter presents an educational modeling and parametric study of specific types of transformers, generators, and motors used in power systems. Equivalent-circuit models are presented and basic equations are developed. Through tests and operating conditions, the essential parameters of each presented machine are extracted. A graphical user interface (GUI) built in MATLAB is used to study and analyze each element. The GUI allows better comprehension and a clearer view when analyzing the performance of each electric machine, making it a complementary educational tool. In addition, the GUI permits optimal collaborative learning situations when linked with theoretical instruction and thus supports a teaching process that forges the connection between traditional subjects and science education.
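    As one concrete example of the parameter extraction the chapter describes, the sketch below (in Python rather than the chapter's MATLAB GUI) computes a transformer's equivalent-circuit elements from standard open-circuit and short-circuit test readings; the test values are illustrative.

```python
# A minimal sketch of standard transformer parameter extraction from
# open-circuit (OC) and short-circuit (SC) tests. Test readings are
# illustrative, not the chapter's data.
from math import sqrt

def oc_test(v_oc, i_oc, p_oc):
    """Excitation branch from the OC test (readings on the excited side)."""
    r_core = v_oc ** 2 / p_oc                    # core-loss resistance Rc
    y_mag = i_oc / v_oc                          # excitation admittance |Y|
    b_mag = sqrt(y_mag ** 2 - (1 / r_core) ** 2) # magnetizing susceptance
    return r_core, 1 / b_mag                     # (Rc, magnetizing reactance Xm)

def sc_test(v_sc, i_sc, p_sc):
    """Series branch from the SC test (referred to the excited side)."""
    r_eq = p_sc / i_sc ** 2                      # total winding resistance
    z_mag = v_sc / i_sc                          # series impedance |Z|
    x_eq = sqrt(z_mag ** 2 - r_eq ** 2)          # total leakage reactance
    return r_eq, x_eq

rc, xm = oc_test(v_oc=230.0, i_oc=2.1, p_oc=50.0)
req, xeq = sc_test(v_sc=13.2, i_sc=6.0, p_sc=20.0)
print(f"Rc={rc:.1f} ohm, Xm={xm:.1f} ohm, Req={req:.2f} ohm, Xeq={xeq:.2f} ohm")
```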

    Writing Condition and Electronic Arbitration: A Comparative Study

    This research is concerned with the writing of the arbitration agreement, a formal condition required by comparative legislation for concluding an arbitration agreement. Its purpose is to identify all the legal aspects of this condition and to demonstrate its concept, nature, and facets. It then raises the question of the extent to which the traditional writing condition is needed for an electronic arbitration agreement to be legal and valid, while explaining the concept of this kind of arbitration: namely, how the writing condition is satisfied within it, and what conditions the electronic signature must meet for the electronic arbitration agreement to be effective and valid. This is done through a comparative study of the laws of Jordan, Egypt, and England and of the relevant international agreements and laws that treat this issue. The research is divided into an introduction and two parts: the first part tackles the writing condition, its aspects, and its nature, and the second looks into the writing condition for the electronic arbitration agreement. Finally, the conclusions and recommendations are presented.