86 research outputs found

    Automatic reproducibility and parallelism for biological image analysis workflows

    Current microscopy techniques profit hugely from modern microscopes that produce massive amounts of increasingly complex data, which are analysed by sophisticated algorithms. As a result, previously indistinguishable phenomena can now be observed. However, this development brings new challenges for the biologists running these experiments. Data storage, data processing, parallelisation, automation, and reproducibility are important factors in mastering these new techniques, as they incur additional effort that previously had little impact on the biologist's work. Existing solutions address these factors separately: image storage systems manage the storage of data, specialised tools solve individual processing problems, and workflow systems help with automation and ensure reproducibility. Finally, parallelisation is a topic that is slowly gaining traction among the specialised tools. However, gaps remain between these solutions that the biologist has to bridge by hand, which lowers overall efficiency. This work introduces new software whose design takes all of these aspects into account. It is a plugin for the microscopy image storage system OMERO and is called OPE (OMERO Processing Extension). By integrating processing, reproducibility, and parallelisation directly into the data storage system, this approach eliminates nearly all of the overhead the biologist otherwise faces.

    The development of new techniques and methods in computer-assisted microscopy has pushed the limits of what can be observed and measured ever further. Many of the methods used today rely on the complex analysis of large amounts of data. This gives rise to new, demanding processing steps that scientists in biological and clinical research have to carry out on the way to a final result. These additional steps make it harder for users to concentrate on their core competencies, since aspects such as choosing an appropriate processing tool, using it correctly, storing the results, and keeping every step reproducible all have to be considered. Approaches that address parts of these problems have increasingly been presented in recent years, but so far there has been no approach that addresses all of them as a whole. For this purpose, OPE (OMERO Processing Extension) was created in this work and is examined in the following. OPE is an extension for the OMERO microscopy image storage system. It takes all of the aspects mentioned above into account from the ground up and thus frees the user from additional work that can be automated.
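    The abstract does not spell out OPE's programming interface, but the core idea of combining a declarative, reproducible description of a processing step with automatic parallelisation over stored images can be illustrated with a small, hypothetical sketch. Everything below (the ImageAnalysisStep class, run_workflow, and the segment_nuclei placeholder) is an illustrative assumption, not OPE's actual API.

```python
from dataclasses import dataclass, field, asdict
from multiprocessing import Pool
import hashlib
import json
from typing import Any, Callable, Dict, List


@dataclass
class ImageAnalysisStep:
    """One workflow step, described declaratively so it can be re-run later."""
    tool: str                      # name of the processing tool
    version: str                   # pinned tool version for reproducibility
    parameters: Dict[str, Any] = field(default_factory=dict)

    def fingerprint(self) -> str:
        """Stable hash of the step description, usable as a provenance record."""
        blob = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(blob).hexdigest()


def segment_nuclei(image_id: int, parameters: Dict[str, Any]) -> Dict[str, Any]:
    """Placeholder analysis; a real system would fetch the image from storage here."""
    return {"image_id": image_id, "nuclei_count": 42, **parameters}


def run_workflow(step: ImageAnalysisStep,
                 analysis: Callable[[int, Dict[str, Any]], Dict[str, Any]],
                 image_ids: List[int],
                 workers: int = 4) -> List[Dict[str, Any]]:
    """Apply one step to many images in parallel, attaching provenance to each result."""
    with Pool(processes=workers) as pool:
        results = pool.starmap(analysis, [(i, step.parameters) for i in image_ids])
    provenance = step.fingerprint()
    return [{**r, "provenance": provenance} for r in results]


if __name__ == "__main__":
    step = ImageAnalysisStep(tool="nuclei-segmentation", version="1.2.0",
                             parameters={"threshold": 0.5})
    print(run_workflow(step, segment_nuclei, image_ids=[101, 102, 103]))
```

    Recording the hash of the step description with every result is one simple way to keep a run reproducible: the same fingerprint implies the same tool, version, and parameters were used.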

    Algorithmic and code optimizations of molecular dynamics simulations for process engineering

    The focus of this work lies on implementation improvements and, in particular, node-level performance optimization of the simulation software ls1-mardyn. Data structure improvements, SIMD vectorization and, especially, OpenMP parallelization enabled the world's first simulation of 2 × 10^13 molecules at over 1 PFLOP/s. To allow for long-range interactions, the Fast Multipole Method was introduced into ls1-mardyn. The algorithm was optimized for sequential, shared-memory, and distributed-memory execution on up to 32,768 MPI processes.

    The focus of this work lies on code optimizations and, in particular, node-level performance optimization for the simulation software ls1-mardyn. Improved data structures, SIMD vectorization and, above all, OpenMP parallelization enabled the world's first petaflop simulation of 2 × 10^13 molecules. The Fast Multipole Method was introduced into ls1-mardyn to simulate long-range interactions. Sequential, shared-memory, and distributed-memory optimizations were applied and allowed execution on up to 32,768 MPI processes.

    Multi-Context Reasoning in Continuous Data-Flow Environments

    In the field of artificial intelligence, research on knowledge representation and reasoning has produced a large variety of formats, languages, and formalisms. Over the decades, many different tools have emerged that build on these underlying concepts. Each was designed with some specific application in mind, and many are still in use today, in a world where the internet is expected to be sufficient as a service for the age of Industry 4.0 and the Internet of Things. In that vision of a connected world, with its many different formalisms and systems, a formal way to uniformly exchange information such as knowledge and beliefs is imperative. That alone is not enough: ever more systems are being integrated into the online world, and we are now confronted with huge amounts of continuously flowing data. A solution is therefore needed that allows both the integration of information and dynamic reaction to the data provided in such continuous data-flow environments. This work presents a pair of novel formalisms that tackle these two needs with an abstract and general solution. We introduce and discuss reactive Multi-Context Systems (rMCS), which allow one to use different knowledge representation formalisms, so-called contexts, each represented as an abstract logic framework, and to exchange beliefs between contexts through bridge rules. These contexts need to mutually agree on a common set of beliefs, an equilibrium of belief sets. Multi-Context Systems already exist, but they solve this agreement problem only once and neither consider external data streams nor reason continuously over time. rMCS address this by adding means of reacting to input streams and by allowing the bridge rules to reason with this new information. In addition, we propose two different kinds of bridge rules: declarative ones to find a mutual agreement and operational ones to adapt the current knowledge for future computations. The second framework is more abstract and allows computations to happen asynchronously. These asynchronous Multi-Context Systems are aimed at modelling and describing communication between contexts, with different levels of self-management and centralised management of communication and computation. In this thesis, rMCS are analysed with respect to usability, consistency management, and computational complexity, and we show how asynchronous Multi-Context Systems capture the asynchronous ideas and how an rMCS can be modelled with them. Finally, we show how rMCS are positioned in the current landscape of stream reasoning and that they can capture currently used technologies, which allows different systems of these kinds to be connected seamlessly with each other. This also shows that rMCS are expressive enough to simulate on their own the mechanisms these systems use to compute their results, as an alternative to existing ones. For asynchronous Multi-Context Systems, we discuss how to use them and show that they are a very versatile tool for describing communication and asynchronous computation.
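    The abstract stays at the level of the formalism, but the interplay of contexts, bridge rules, and an equilibrium can be made concrete with a small, hypothetical sketch. The classes and the toy scenario below (Context, BridgeRule, find_equilibrium, and the stream of sensor readings) are illustrative assumptions, not the thesis's definitions; in particular, real rMCS equilibria are defined over arbitrary logics, not the simple set-based contexts used here.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Set, Tuple


@dataclass
class Context:
    """A context with a deliberately simple 'logic': its belief set is a set of atoms."""
    name: str
    knowledge: Set[str] = field(default_factory=set)   # persistent knowledge base
    beliefs: Set[str] = field(default_factory=set)     # current belief set


@dataclass
class BridgeRule:
    """If every (context, atom) pair in `body` currently holds, add `head` to `target`."""
    target: str
    head: str
    body: List[Tuple[str, str]]
    operational: bool = False    # operational rules also update the knowledge base


def find_equilibrium(contexts: Dict[str, Context], rules: List[BridgeRule],
                     inputs: Set[str]) -> None:
    """Iterate bridge rules to a fixpoint; `inputs` are atoms observed on the stream."""
    for ctx in contexts.values():
        ctx.beliefs = set(ctx.knowledge)
    contexts["sensors"].beliefs |= inputs
    changed = True
    while changed:                      # naive fixpoint computation
        changed = False
        for rule in rules:
            if all(atom in contexts[c].beliefs for c, atom in rule.body):
                tgt = contexts[rule.target]
                if rule.head not in tgt.beliefs:
                    tgt.beliefs.add(rule.head)
                    changed = True
                if rule.operational and rule.head not in tgt.knowledge:
                    tgt.knowledge.add(rule.head)   # remembered for future time points


contexts = {"sensors": Context("sensors"), "planner": Context("planner")}
rules = [
    BridgeRule(target="planner", head="close_valve", body=[("sensors", "pressure_high")]),
    BridgeRule(target="planner", head="needs_inspection",
               body=[("sensors", "pressure_high")], operational=True),
]

# React to a stream: each element is the set of atoms observed at one time point.
for observation in [{"pressure_high"}, set()]:
    find_equilibrium(contexts, rules, observation)
    print(sorted(contexts["planner"].beliefs))
```

    The second time point illustrates the difference between the two rule kinds: the declarative conclusion close_valve disappears once the observation is gone, while the operational conclusion needs_inspection persists in the knowledge base.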

    On the role of Computational Logic in Data Science: representing, learning, reasoning, and explaining knowledge

    In this thesis we discuss in what ways computational logic (CL) and data science (DS) can jointly contribute to the management of knowledge within the scope of modern and future artificial intelligence (AI), and how technically sound software technologies can be realised along the path. An agent-oriented mindset permeates the whole discussion, stressing the pivotal role of autonomous agents in exploiting both means to reach higher degrees of intelligence. Accordingly, the goals of this thesis are manifold. First, we elicit the analogies and differences between CL and DS, looking for possible synergies and complementarities along four major knowledge-related dimensions, namely representation, acquisition (a.k.a. learning), inference (a.k.a. reasoning), and explanation. In this regard, we propose a conceptual framework through which bridges between these disciplines can be described and designed. We then survey the current state of the art of AI technologies with respect to their capability to support bridging CL and DS in practice. After identifying gaps and opportunities, we propose the notion of a logic ecosystem as a new conceptual, architectural, and technological solution supporting the incremental integration of symbolic and sub-symbolic AI. Finally, we discuss how our notion of logic ecosystem can be reified into actual software technology and extended towards many DS-related directions.
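    The abstract describes the CL–DS bridge only conceptually. As one concrete, purely illustrative reading of integrating symbolic and sub-symbolic AI, the sketch below lets a sub-symbolic predictor produce facts that a tiny symbolic rule base then reasons over and explains; none of the names (predict_risk, RULES, reason) come from the thesis.

```python
from typing import Dict, List, Tuple

# Sub-symbolic part: stands in for any trained model that maps features to a score.
def predict_risk(features: Dict[str, float]) -> float:
    """Toy stand-in for a learned model (e.g. a neural network)."""
    return 0.3 * features["age"] / 100 + 0.7 * features["blood_pressure"] / 200

# Symbolic part: rules of the form (head, body) over facts derived from the model output.
RULES: List[Tuple[str, List[str]]] = [
    ("recommend_checkup", ["high_risk"]),
    ("high_risk", ["risk_above_threshold"]),
]

def reason(facts: set) -> Tuple[set, List[str]]:
    """Forward-chain the rules and keep the firing order as an explanation trace."""
    trace: List[str] = []
    changed = True
    while changed:
        changed = False
        for head, body in RULES:
            if head not in facts and all(b in facts for b in body):
                facts.add(head)
                trace.append(f"{' & '.join(body)} -> {head}")
                changed = True
    return facts, trace

patient = {"age": 70, "blood_pressure": 160}
facts = set()
if predict_risk(patient) > 0.7:          # sub-symbolic output becomes a symbolic fact
    facts.add("risk_above_threshold")
conclusions, explanation = reason(facts)
print(conclusions)
print(explanation)
```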

    Software engineering perspectives on physiological computing

    Physiological computing is an interesting and promising concept for widening the communication channel between (human) users and computers, thereby increasing software systems' contextual awareness and rendering them smarter than they are today. Using physiological inputs in pervasive computing systems allows the information asymmetry between the human user and the computer system to be re-balanced: while pervasive computing systems are well able to flood the user with information and sensory input (such as sounds, lights, and visual animations), users have only a very narrow input channel to computing systems, most of the time restricted to keyboards, mice, touchscreens, accelerometers, and GPS receivers (through smartphone usage, for example). This information asymmetry often forces users to submit to the quirks of the computing system to achieve their goals; for example, users may have to provide information the software system demands through a narrow, time-consuming input mode, even though the system could sense it implicitly from the human body. Physiological computing is a way to circumvent these limitations; however, systematic means for developing and moulding physiological computing applications into software are still lacking. This thesis proposes a methodological approach to the creation of physiological computing applications that makes use of component-based software engineering. Components help impose a clear structure on software systems in general and can thus be used for physiological computing systems as well. As an additional benefit, components allow physiological computing systems to use reconfigurations as a means of controlling and adapting their own behaviour. This adaptation can be used to adjust the behaviour both to the human and to the available computing environment in terms of resources and devices, an activity that is crucial for complex physiological computing systems. With the help of components and reconfigurations, the functionality of physiological computing applications can be structured in a way that keeps them manageable and extensible, allowing a stepwise and systematic extension of a system's intelligence. Using reconfigurations entails a larger issue, however: understanding and fully capturing the behaviour of a system under reconfiguration is challenging, as the system may change its structure in ways that are difficult to predict. Therefore, this thesis also introduces a means for formal verification of reconfigurations based on assume-guarantee contracts. With the proposed assume-guarantee contract framework, it is possible to prove that a given system design (including component behaviours and reconfiguration specifications) satisfies real-time properties expressed as assume-guarantee contracts, using a variant of real-time linear temporal logic introduced in this thesis: metric interval temporal logic for reconfigurable systems. Finally, this thesis embeds both the practical approach to the realisation of physiological computing systems and the formal verification of reconfigurations into Scrum, a modern and agile software development methodology. The surrounding methodological approach is intended to provide a frame for the systematic development of physiological computing systems, from first psychological findings to a working software system with satisfactory functionality and software quality.
    By integrating practical and theoretical aspects of software engineering into a self-contained development methodology, this thesis proposes a roadmap and guidelines for the creation of new physiological computing applications.

    Physiological computing is an interesting and promising concept for widening the communication channel between (human) users and computers, thereby improving the consideration of user context in software systems and making them smarter than they are today. Using physiological input signals in ubiquitous computing systems makes it possible to re-balance the information asymmetry that exists today between humans and computing systems: while ubiquitous computing systems are very capable of flooding people with information and sensory stimuli (e.g. through sound, light, and visual animations), humans have only very limited means of influencing computing systems. Most of the time, only keyboards, the mouse, touch-sensitive screens, accelerometers, and GPS receivers (for example through mobile phones or digital assistants) are available. This information asymmetry forces users to submit to the conventions of the computing systems in order to achieve their goals; for example, users have to enter data manually that could also be collected unobtrusively from sensor data of the human body. Physiological computing is a way to circumvent this limitation. However, a systematic methodology for developing physiological computing systems all the way to finished software is missing. This dissertation presents a methodological approach to the development of physiological computing applications that builds on component-based software engineering. The component-based approach generally helps define a clear architecture of the software system and can therefore also be applied to physiological computing systems. As an additional advantage, component orientation allows physiological computing systems to use reconfigurations as a means of controlling and adapting their behaviour. This adaptation technique can be used to adjust the behaviour of physiological computing systems to the user as well as to the available computing infrastructure in terms of system resources and devices, a measure that is crucial in complex physiological computing systems. With the help of component orientation and reconfigurations it becomes possible to structure the functionality of physiological computing systems so that the system remains maintainable and extensible, which enables a stepwise and systematic extension of the system's functionality. The use of reconfigurations, however, brings problems of its own: understanding and fully capturing the behaviour of a software system subject to reconfigurations is challenging, since the system can change its structure in ways that are hard to predict. For this reason, this work introduces a method for the formal verification of reconfigurations based on assume-guarantee contracts. With the proposed assume-guarantee contract framework it is possible to prove that a given system design (including component behaviours and the specification of the reconfiguration behaviour) fulfils a real-time property specified as an assume-guarantee contract. For the specification of real-time properties, a variant of real-time linear temporal logic introduced in this work can be used: metric interval temporal logic for reconfigurable systems. Finally, this work embeds both a practical approach to the realisation of physiological computing systems and the formal verification of reconfigurations into Scrum, a modern and agile software development methodology. The methodological approach provides a frame for the systematic development of physiological computing systems, from insights into human physiology to working physiological software systems with satisfactory functional and quality properties. By integrating both practical and theoretical aspects of software engineering into a complete development methodology, this work offers a roadmap and guidelines for the creation of new physiological computing applications.
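    The thesis's contract framework and its logic (metric interval temporal logic for reconfigurable systems) are not reproduced in the abstract, so the sketch below only illustrates the general assume-guarantee idea on finite timed traces: a component satisfies a contract if, whenever the assumption holds on a trace, the guarantee holds too. The Contract class, the bounded-response check, and the example trace are assumptions made for illustration, not the formalism defined in the thesis.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

# A timed trace: list of (timestamp, set-of-events) pairs.
TimedTrace = List[Tuple[float, set]]


def bounded_response(trigger: str, response: str, bound: float) -> Callable[[TimedTrace], bool]:
    """Property: every `trigger` is followed by a `response` within `bound` time units."""
    def holds(trace: TimedTrace) -> bool:
        for i, (t, events) in enumerate(trace):
            if trigger in events:
                if not any(response in ev and t2 - t <= bound
                           for t2, ev in trace[i:]):
                    return False
        return True
    return holds


@dataclass
class Contract:
    assumption: Callable[[TimedTrace], bool]   # what the environment promises
    guarantee: Callable[[TimedTrace], bool]    # what the component must then deliver

    def satisfied_by(self, trace: TimedTrace) -> bool:
        # Assume-guarantee reading: if the assumption is violated, the contract holds trivially.
        return (not self.assumption(trace)) or self.guarantee(trace)


# Environment: a sensor reading arrives within 1.0 time units of each request.
# Component: every sensor reading is processed within 2.0 time units.
contract = Contract(
    assumption=bounded_response("request", "reading", 1.0),
    guarantee=bounded_response("reading", "processed", 2.0),
)

trace: TimedTrace = [
    (0.0, {"request"}),
    (0.8, {"reading"}),
    (2.5, {"processed"}),
]
print(contract.satisfied_by(trace))   # True: the reading at 0.8 is processed 1.7 time units later
```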

    Parallel Multiscale Contact Dynamics for Rigid Non-spherical Bodies

    The simulation of large numbers of rigid bodies of non-analytical shapes or vastly varying sizes which collide with each other is computationally challenging. The fundamental problem is the identification of all contact points between all particles at every time step. In the Discrete Element Method (DEM), this is particularly difficult for particles of arbitrary geometry that exhibit sharp features (e.g. rock granulates). While most codes avoid non-spherical or non-analytical shapes due to the computational complexity, we introduce an iterative contact detection method for triangulated geometries. The new method improves on a naive brute-force approach, which checks all possible geometric constellations of contact and thus exhibits a lot of execution branching. Our iterative approach has limited branching and a high number of floating-point operations per processed byte. It is thus well suited to modern Single Instruction Multiple Data (SIMD) CPU hardware. As only the naive brute-force approach is robust and always yields a correct solution, we propose a hybrid solution that combines the best of both worlds to produce fast and robust contact detection. In terms of the DEM workflow, we furthermore propose a multilevel tree-based data structure strategy that holds all particles in the domain on multiple scales in grids. Grids reduce the total computational complexity of the simulation. The data structure is combined with the DEM phases to form a single-touch tree-based traversal that identifies contact points between particle pairs and introduces concurrency to the system during particle comparisons in one multiscale grid sweep. Finally, a reluctant adaptivity variant is introduced which enables an improved time stepping scheme with larger time steps than standard adaptivity while still minimising the grid administration overhead. Four different parallelisation strategies that exploit multicore architectures are discussed for this triad of methodological ingredients. Each parallelisation scheme exhibits unique behaviour depending on the grid and particle geometry at hand. Fusing them into a task-based parallelisation workflow yields promising speedups. Our work shows that new computer architectures can push the boundary of DEM computability, but only if the right data structures and algorithms are chosen.
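    The abstract names the hybrid strategy but not its code; the sketch below illustrates the general pattern (a cheap iterative narrow-phase check with a robust brute-force fallback) on a deliberately simplified problem: finding the closest pair of vertices between two bodies. The function names, the convergence test, and the vertex-based geometry are illustrative assumptions, not the thesis's actual contact-detection algorithm for triangulated geometries.

```python
import itertools
import math
from typing import List, Optional, Tuple

Point = Tuple[float, float, float]


def dist(a: Point, b: Point) -> float:
    return math.dist(a, b)


def brute_force_closest(body_a: List[Point], body_b: List[Point]) -> Tuple[float, Point, Point]:
    """Robust but O(n*m): check every vertex pair."""
    return min(((dist(p, q), p, q) for p, q in itertools.product(body_a, body_b)),
               key=lambda t: t[0])


def iterative_closest(body_a: List[Point], body_b: List[Point],
                      max_iters: int = 32) -> Optional[Tuple[float, Point, Point]]:
    """Cheap local search: alternately pick the vertex nearest to the other body's candidate.
    Fast and branch-poor, but it may exhaust its iteration budget or settle on a locally
    closest pair for non-convex shapes, which is why a robust fallback is still needed."""
    p, q = body_a[0], body_b[0]
    for _ in range(max_iters):
        new_q = min(body_b, key=lambda v: dist(p, v))
        new_p = min(body_a, key=lambda v: dist(new_q, v))
        if (new_p, new_q) == (p, q):          # fixed point reached
            return dist(p, q), p, q
        p, q = new_p, new_q
    return None                               # no fixed point within the budget


def hybrid_closest(body_a: List[Point], body_b: List[Point]) -> Tuple[float, Point, Point]:
    """Hybrid: try the fast iterative method, fall back to brute force if it gives no answer."""
    result = iterative_closest(body_a, body_b)
    return result if result is not None else brute_force_closest(body_a, body_b)


cube = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
shifted = [(x + 1.5, y, z) for x, y, z in cube]
print(hybrid_closest(cube, shifted))   # closest vertices are 0.5 apart
```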