10 research outputs found

    Applications

    Volume 3 describes how resource-aware machine learning methods and techniques are used to successfully solve real-world problems. The book provides numerous specific application examples: in health and medicine for risk modelling, diagnosis, and treatment selection for diseases; in electronics, steel production, and milling for quality control during manufacturing processes; and in traffic and logistics for smart cities and for mobile communications.

    Trust assessment in the context of unrepresentative information

    Trust and reputation algorithms are social methods, complementary to security protocols, that guide agents in multi-agent systems (MAS) in identifying trustworthy partners to communicate with. Agents need to interact to complete tasks, which requires delegating to an agent who has the time, resources, or information to achieve them. Existing trust and reputation assessment methods can be accurate when they are learning from representative information; however, representative information rarely exists for all agents at all times. Improving trust mechanisms can benefit many open and distributed multi-agent applications: for example, distributing subtasks to trustworthy agents in pervasive computing, or choosing whom to share safe and high-quality files with in a peer-to-peer network. Trust and reputation algorithms use the outcomes of past interaction experiences with agents to assess their behaviour. Stereotype models supplement trust and reputation methods when direct interaction experiences are lacking by inferring that the target will behave the same as agents who are observably similar. These mechanisms can be effective in MAS where behaviours and agents do not change, or change in a simplistic way, for example, if all agents changed their behaviour at the same rate. In real-world networks, agents experience fluctuations in their location, resources, knowledge, availability, time, and priorities. Existing work does not account for the resulting dynamic populations and dynamic agent behaviours. Additionally, trust, reputation, and stereotype models encourage repeat interactions with the same subset of agents, which increases the uncertainty about the behaviour of the rest of the agent population. In the long term, having a biased view of the population hinders the discovery of new and better interaction partners.
The diversity of agents and environments across MAS means that rigid approaches to maintaining and using data keep outdated information in some situations and retain too little data in others. A logical improvement is for agents to manage information flexibly and adapt to their situation. In this thesis we present the following contributions. We propose a method to improve partner selection by making agents aware of a lack of diversity in their own knowledge and showing how to then make alternative behavioural assessments. We present methods for detecting dynamic behaviour in groups of agents, and give agents the statistical tools to decide which data are relevant. We introduce a data-free stereotype method to be used when there are no representative data for a data-driven behaviour assessment. Finally, we consider how agents can summarise agent behaviours to learn and exploit in-depth behavioural patterns. The work presented in this thesis is evaluated in a synthetic environment designed to mimic characteristics of real-world networks, one comparable to evaluation environments from prominent trust and stereotype literature. The results show our work improves agents' average reward from interactions by selecting better partners. We show that the efficacy of our work is most noticeable in environments where agents have sparse data, because it improves agents' trust assessments under uncertainty.
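The outcome-driven trust assessment described above is often formalised as a beta-reputation-style score, a common baseline in this literature. The sketch below illustrates that baseline only, not the thesis's own model; all agent names and the `pick_partner` helper are illustrative.

```python
def beta_trust(successes, failures):
    """Expected probability of a good outcome under a Beta(s+1, f+1)
    posterior over the partner's behaviour (beta-reputation baseline)."""
    return (successes + 1) / (successes + failures + 2)

def pick_partner(history):
    """Choose the agent with the highest trust score.
    history maps agent id -> (successes, failures) from past interactions."""
    return max(history, key=lambda agent: beta_trust(*history[agent]))

history = {"alice": (8, 2), "bob": (1, 0), "carol": (0, 5)}
print(pick_partner(history))  # alice: 9/12 = 0.75 beats bob's 2/3 and carol's 1/7
```

Note that with no observations the score is 0.5 (maximal uncertainty), which is exactly the situation where the stereotype methods discussed above have to step in.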

    Algorithms and Software for the Analysis of Large Complex Networks

    The work presented intersects three main areas: graph algorithmics, network science, and applied software engineering. Each computational method discussed relates to one of the main tasks of data analysis: extracting structural features from network data, as with methods for community detection; transforming network data, as with methods that sparsify a network and reduce its size while keeping essential properties; or realistically modelling networks through generative models.
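As a concrete taste of the generative-model task mentioned above, here is a minimal preferential-attachment graph generator in pure Python. It is a standard Barabási–Albert-style sketch, not code from the work itself.

```python
import random

def preferential_attachment(n, m, seed=0):
    """Grow an undirected graph on n nodes: each new node attaches to m
    existing nodes chosen with probability proportional to their degree."""
    rng = random.Random(seed)
    edges = set()
    repeated = []             # each node appears here once per unit of degree
    targets = list(range(m))  # the first new node connects to nodes 0..m-1
    for new in range(m, n):
        for t in targets:
            edges.add((min(new, t), max(new, t)))
        repeated.extend(targets)       # each target gained one degree
        repeated.extend([new] * m)     # the new node starts with degree m
        targets = []
        while len(targets) < m:        # sample m distinct degree-weighted targets
            t = rng.choice(repeated)
            if t not in targets:
                targets.append(t)
    return edges

g = preferential_attachment(1000, 2)
print(len(g))  # exactly (1000 - 2) * 2 = 1996 edges
```

Because every edge involves the freshly added node, the generator produces exactly (n - m) * m edges, while the degree-weighted sampling yields the heavy-tailed degree distributions that realistic network models aim for.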

    Endemic Machines: Acoustic adaptation and evolutionary agents


    Coping with new Challenges in Clustering and Biomedical Imaging

    Recent years have seen a tremendous increase in data acquisition in scientific fields such as molecular biology, bioinformatics, and biomedicine. Novel methods are therefore needed for the automatic processing and analysis of this large amount of data. Data mining is the process of applying methods like clustering or classification to large databases in order to uncover hidden patterns. Clustering is the task of partitioning the points of a data set into distinct groups so as to maximize the intra-cluster similarity and minimize the inter-cluster similarity. In contrast to unsupervised learning like clustering, the classification problem is known as supervised learning, which aims at predicting the group membership of data objects on the basis of rules learned from a training set where the group membership is known. Specialized methods have been proposed for hierarchical and partitioning clustering. However, these methods suffer from several drawbacks. In the first part of this work, new clustering methods are proposed that cope with problems of conventional clustering algorithms. ITCH (Information-Theoretic Cluster Hierarchies) is a hierarchical clustering method based on a hierarchical variant of the Minimum Description Length (MDL) principle, which finds hierarchies of clusters without requiring input parameters. As ITCH may converge only to a local optimum, we propose GACH (Genetic Algorithm for Finding Cluster Hierarchies), which combines the benefits of genetic algorithms with information theory. In this way the search space is explored more effectively. Furthermore, we propose INTEGRATE, a novel clustering method for data with mixed numerical and categorical attributes. Supported by the MDL principle, our method integrates the information provided by heterogeneous numerical and categorical attributes and thus naturally balances the influence of both sources of information.
A competitive evaluation illustrates that INTEGRATE is more effective than existing clustering methods for mixed-type data. Besides clustering methods for single data objects, we provide a solution for clustering different data sets that are represented by their skylines. The skyline operator is a well-established database primitive for finding database objects which minimize two or more attributes with an unknown weighting between these attributes. In this thesis, we define a similarity measure, called SkyDist, for comparing skylines of different data sets that can directly be integrated into data mining tasks such as clustering or classification. The experiments show that SkyDist in combination with different clustering algorithms can give useful insights into many applications. In the second part, we focus on the analysis of high-resolution magnetic resonance images (MRI) that are clinically relevant and may allow for an early detection and diagnosis of several diseases. In particular, we propose a framework for the classification of Alzheimer's disease in MR images combining the data mining steps of feature selection, clustering, and classification. As a result, a set of highly selective features discriminating between patients with Alzheimer's disease and healthy people has been identified. However, the analysis of the high-dimensional MR images is extremely time-consuming. Therefore we developed JGrid, a scalable distributed computing solution designed to allow for a large-scale analysis of MRI and thus an optimized prediction of diagnosis. In another study we apply efficient algorithms for motif discovery to task-fMRI scans in order to identify patterns in the brain that are characteristic of patients with somatoform pain disorder. We find groups of brain compartments that occur frequently within the brain networks and discriminate well between healthy and diseased people.
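The skyline operator mentioned above is easy to state concretely: a point belongs to the skyline if no other point is at least as small in every attribute and strictly smaller in at least one. The minimal sketch below illustrates the operator itself only; the SkyDist measure built on top of it is defined in the thesis, and the hotel data are invented for illustration.

```python
def skyline(points):
    """Return the points not dominated by any other point, where q dominates p
    if q is <= p in every attribute and differs from p (so strictly < in one)."""
    def dominated(p):
        return any(q != p and all(qi <= pi for qi, pi in zip(q, p))
                   for q in points)
    return [p for p in points if not dominated(p)]

# (price, distance-to-beach): cheap-but-far and near-but-expensive both survive.
hotels = [(50, 3.0), (80, 1.0), (60, 2.0), (90, 2.5), (55, 3.5)]
print(skyline(hotels))  # [(50, 3.0), (80, 1.0), (60, 2.0)]
```

The "unknown weighting" in the abstract is exactly why all three survivors are returned: each of them is the best choice under some trade-off between the two attributes.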

    Developments toward a Silicon Strip Tracker for the PANDA Experiment

    The PANDA detector at the future FAIR facility in Darmstadt will be a key experiment for understanding the strong interaction at medium energies, where perturbative models fail to describe the quark-quark interaction. An important feature of the detector system is the ability to reconstruct secondary decay vertices of short-lived intermediate states by means of a powerful particle tracking system, with the Micro-Vertex Detector (MVD) as its central element, to perform high-resolution charmonium and open-charm spectroscopy. The MVD is conceived with pixel detectors in the inner parts and double-sided silicon strip detectors (DSSDs) in the outer half, in a very lightweight design. The PANDA detector system shall be operated in a self-triggering broadband acquisition mode. Implications for the read-out electronics and the construction of the front-end assemblies are analyzed, and the evaluation of prototype DSSD detectors with respect to signal-to-noise ratio, noise figures, charge-sharing behavior, spatial resolution, and radiation degradation is discussed. Methods of electrical sensor characterization with different measurement setups are investigated which may be useful for future large-scale QA procedures. A novel algorithm for recovering multiple degenerate cluster hit patterns of double-sided strip sensors is introduced, and a possible architecture of a Module Data Concentrator ASIC (MDC) aggregating multiple front-end data streams is conceived. A first integrative concept for the construction and assembly of DSSD modules for the barrel part of the MVD is introduced as a conclusion of the thesis.
Furthermore, a detailed description of a simplified procedure for the calculation of displacement damage in compound materials is given as a reference, which was found useful for the retrieval of the non-ionizing energy loss for materials other than silicon.
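The ambiguity that the cluster-recovery algorithm addresses can be shown in miniature: when two particles hit a double-sided strip sensor in the same event, the p-side and n-side clusters can be combined in more than one way (ghost hits), but both sides collect the same deposited charge, so charge correlation disambiguates the pairing. The sketch below is a hypothetical illustration of that principle only, not the algorithm developed in the thesis; positions, charges, and the tolerance are invented.

```python
def match_sides(p_clusters, n_clusters, rel_tol=0.1):
    """Pair p-side and n-side clusters, each given as (strip position, charge),
    by greedily matching the most similar charges, largest signals first;
    combinations that fail the charge correlation are rejected as ghost hits."""
    pairs = []
    used = set()
    for p_pos, p_q in sorted(p_clusters, key=lambda c: -c[1]):
        best, best_diff = None, rel_tol
        for j, (n_pos, n_q) in enumerate(n_clusters):
            diff = abs(p_q - n_q) / max(p_q, n_q)  # relative charge mismatch
            if j not in used and diff < best_diff:
                best, best_diff = j, diff
        if best is not None:
            used.add(best)
            pairs.append((p_pos, n_clusters[best][0]))
    return pairs

# Two hits: charges ~100 and ~55 identify the true (p, n) strip pairings.
print(match_sides([(10, 100.0), (40, 55.0)], [(7, 52.0), (3, 98.0)]))
# [(10, 3), (40, 7)]
```

A greedy charge match like this breaks down for truly degenerate patterns (hits with nearly equal charge), which is precisely the case the thesis's recovery algorithm targets.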

    Reflected Algorithmic Text Analysis: Interdisciplinary Work in the CRETA Workshop

    The Center for Reflected Text Analytics (CRETA) develops interdisciplinary mixed methods for text analytics in the research fields of the digital humanities. This volume is a collection of text analyses from specialty fields including literary studies, linguistics, the social sciences, and philosophy. It thus offers an overview of the methodology of the reflected algorithmic analysis of literary and non-literary texts.