    The biomechanical importance of the scaphoid-centrale fusion during simulated knuckle-walking and its implications for human locomotor evolution

    Inferring the locomotor behaviour of the last common ancestor (LCA) of humans and African apes is still a divisive issue. An African great-ape-like ancestor using knuckle-walking is still the most parsimonious hypothesis for the LCA, despite diverse conflicting lines of evidence. Crucial to this hypothesis is the role of the centrale in the hominoid wrist, since the fusion of this bone with the scaphoid is among the clearest morphological synapomorphies of African apes and hominins. However, the exact functional significance of this fusion remains unclear. We address this question by carrying out finite element simulations of the hominoid wrist during knuckle-walking, virtually generating fused and unfused morphologies in a sample of hominoids. Finite element analysis was applied to test the hypothesis that a fused scaphoid-centrale better withstands the loads derived from knuckle-walking. The results show that fused morphologies display lower stress values, hence supporting a biomechanical explanation for the fusion as a functional adaptation for knuckle-walking. This functional interpretation contrasts with the currently inferred positional behaviour of the earliest hominins, suggesting that this morphology was probably retained from an LCA that exhibited knuckle-walking as part of its locomotor repertoire and was probably later exapted for other functions.

    From Algorithmic Computing to Autonomic Computing

    In algorithmic computing, the program follows a predefined set of rules – the algorithm. The analyst/designer of the program analyzes the intended tasks of the program, defines the rules for its expected behaviour and programs the implementation. The creators of algorithmic software must therefore foresee, identify and implement all possible behaviours of the future application! However, what if the problem is not fully defined? Or the environment is uncertain? What if situations are too complex to be predicted? Or the environment is changing dynamically? In many such cases algorithmic computing fails. In such situations, the software needs an additional degree of freedom: autonomy! Autonomy allows software to adapt to partially defined problems, to uncertain or dynamically changing environments and to situations that are too complex to be predicted. As more and more applications – such as autonomous cars and planes, adaptive power grid management, survivable networks, and many more – fall into this category, a gradual switch from algorithmic computing to autonomic computing is taking place. Autonomic computing has become an important software engineering discipline with a rich literature, an active research community, and a growing number of applications.
    Contents: Introduction 5
    1 A Process Data Based Autonomic Optimization of Energy Efficiency in Manufacturing Processes, Daniel Höschele 9
    2 An Autonomous Optimization of the Stability of Production Processes Based on Process Data, Richard Horn 25
    3 Assuring Safety in Autonomous Systems, Christian Rose 41
    4 MAPE-K in Practice - Basis for a Possible Automatic Resource Allocation in the Cloud, Michael Schneider 5
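
    The MAPE-K loop named in the fourth seminar paper above is the reference structure for such autonomic software: a Monitor-Analyze-Plan-Execute cycle over a shared Knowledge base. A minimal sketch of that loop, in which the load-balancing scenario, the sensor/effector callables and all thresholds are purely illustrative assumptions, might look like this:

```python
# Minimal sketch of a MAPE-K autonomic control loop: Monitor, Analyze,
# Plan and Execute phases over a shared Knowledge base. All names and
# the load-balancing scenario are illustrative assumptions.

class MapeKLoop:
    def __init__(self, read_load, scale_by, target_load=0.7):
        self.read_load = read_load    # sensor: returns current load in [0, 1]
        self.scale_by = scale_by      # effector: adds/removes resource units
        self.knowledge = {"target": target_load, "history": []}

    def monitor(self):
        load = self.read_load()
        self.knowledge["history"].append(load)
        return load

    def analyze(self, load):
        # Positive deviation means the system is overloaded.
        return load - self.knowledge["target"]

    def plan(self, deviation, deadband=0.05):
        if abs(deviation) <= deadband:
            return 0                  # within tolerance: do nothing
        return 1 if deviation > 0 else -1

    def execute(self, delta):
        if delta != 0:
            self.scale_by(delta)

    def run_once(self):
        self.execute(self.plan(self.analyze(self.monitor())))


# Example: one iteration against a fake sensor/effector pair.
if __name__ == "__main__":
    loop = MapeKLoop(read_load=lambda: 0.9, scale_by=lambda d: print("scale", d))
    loop.run_once()                   # prints: scale 1
```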

    Quantitative investigations on the human entorhinal area


    Different effect of mycorrhizal inoculation in direct and indirect reclamation of spoil banks

    Spoil banks generated during coal mining are usually reclaimed by layering fertile soil over the original barren clay (so-called indirect reclamation). This well-proven method is effective in terms of vegetation establishment and production, but it is very expensive. Direct reclamation of spoil bank clay promises a much cheaper approach, yet its success is uncertain and the process might be rather long-term. This two-year field study aimed to assess the effect of applying a commercially produced inoculum of arbuscular mycorrhizal fungi (AMF), Symbivit®, on the growth of two plant species commonly used for reclamation (Lotus corniculatus and Arrhenatherum elatius) sown on three different substrates: an organic substrate (a mixture of papermill waste, tree bark and compost), loess (both substrates typical for indirect reclamation), and original spoil bank clay (simulating direct reclamation). On the organic substrate and loess, A. elatius outcompeted the legume and established 100 % cover in all treatments; no effect of mycorrhizal inoculation was observed. In contrast, on clay both species established successfully, although the biomass produced and the cover were substantially lower than on the organic substrate and loess. On clay, a positive effect of the introduced AMF on the plants was observed. Mycorrhizal inoculation was thus useful for supporting plant growth in direct reclamation. Direct reclamation itself seems suitable for small-scale application, i.e. in patches where indirect reclamation is inconvenient or where more diverse vegetation is required. Key words: arbuscular mycorrhizal fungi; inoculum; clay; papermill waste; loess; Arrhenatherum elatius; Lotus corniculatus

    Transcranial Doppler ultrasound (TCD): the development of "fingerprint spectra" in transcranial Doppler sonography for the diagnosis of cerebral microangiopathy

    Background: There is a great need for screening methods for the presence of cerebral microangiopathy. Flow spectra obtained by TCD can give an indirect indication of the microcirculation within the brain. Methods: Flow spectra were recorded from the middle cerebral artery (MCA), standardized, and mean values were computed with AVERAGE. Acceleration curves were then derived, and from these the positive and negative extreme values were extracted. The method was validated in 5 healthy subjects with reproducible results. Furthermore, the discriminatory power between healthy and diseased subjects was likewise examined. Results: The reproducibility of the data and the discriminatory power were demonstrated. The data obtained showed no dependence on heart rate, blood pressure or the CO2 content of exhaled air. Conclusion: Doppler sonographic acquisition of MCA flow spectra and subsequent processing with AVERAGE constitute a reliable examination method.
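
    The processing chain described in the methods (standardize the flow envelopes, average them, derive acceleration curves, extract the positive and negative extrema) could be sketched as follows. The NumPy implementation, the array layout and the function name are assumptions for illustration, not the original AVERAGE-based workflow:

```python
import numpy as np

def fingerprint_features(beats, fs):
    """Illustrative sketch of the described pipeline.

    beats: array of shape (n_beats, n_samples); one TCD flow-velocity
           envelope per heartbeat, resampled to a common length
           (this layout is an assumption).
    fs:    sampling rate in Hz.
    Returns the positive and negative extrema of the acceleration
    curve of the averaged, standardized beat.
    """
    # Standardize each beat to zero mean and unit variance.
    z = (beats - beats.mean(axis=1, keepdims=True)) / beats.std(axis=1, keepdims=True)
    mean_beat = z.mean(axis=0)                # the averaging (AVERAGE) step
    accel = np.gradient(mean_beat, 1.0 / fs)  # acceleration = d(velocity)/dt
    return accel.max(), accel.min()           # positive and negative extremum
```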

    Processor Allocation for Optimistic Parallelization of Irregular Programs

    Optimistic parallelization is a promising approach for the parallelization of irregular algorithms: potentially interfering tasks are launched dynamically, and the runtime system detects conflicts between concurrent activities, aborting and rolling back conflicting tasks. However, parallelism in irregular algorithms is very complex. In a regular algorithm like dense matrix multiplication, the amount of parallelism can usually be expressed as a function of the problem size, so it is reasonably straightforward to determine how many processors should be allocated to execute a regular algorithm of a certain size (this is called the processor allocation problem). In contrast, parallelism in irregular algorithms can be a function of input parameters, and the amount of parallelism can vary dramatically during the execution of the irregular algorithm. Therefore, the processor allocation problem for irregular algorithms is very difficult. In this paper, we describe the first systematic strategy for addressing this problem. Our approach is based on a construct called the conflict graph, which (i) provides insight into the amount of parallelism that can be extracted from an irregular algorithm, and (ii) can be used to address the processor allocation problem for irregular algorithms. We show that this problem is related to a generalization of the unfriendly seating problem and, by extending Turán's theorem, we obtain a worst-case class of problems for optimistic parallelization, which we use to derive a lower bound on the exploitable parallelism. Finally, using some theoretically derived properties and some experimental facts, we design a quick and stable control strategy for solving the processor allocation problem heuristically.
    Comment: 12 pages, 3 figures, extended version of SPAA 2011 brief announcement
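
    To make the Turán-type bound concrete: a conflict graph on n tasks with average degree d always contains an independent set (a batch of mutually non-conflicting tasks) of size at least n/(d+1). The following sketch computes that bound and extracts one such batch greedily; it illustrates the concept only and is not the paper's control strategy:

```python
# Illustrative sketch: lower-bound the exploitable parallelism of a
# conflict graph via the Turan-type bound n/(d+1), where d is the
# average degree, and extract one batch of mutually non-conflicting
# tasks greedily. Concept demo only, not the paper's strategy.

def turan_bound(n_tasks, conflicts):
    """Guaranteed size of an independent set (schedulable batch)."""
    avg_degree = 2 * len(conflicts) / n_tasks if n_tasks else 0.0
    return n_tasks / (avg_degree + 1)

def greedy_batch(n_tasks, conflicts):
    """Greedily pick non-conflicting tasks, low-degree tasks first."""
    neighbours = {t: set() for t in range(n_tasks)}
    for u, v in conflicts:
        neighbours[u].add(v)
        neighbours[v].add(u)
    batch, blocked = [], set()
    for t in sorted(neighbours, key=lambda t: len(neighbours[t])):
        if t not in blocked:
            batch.append(t)
            blocked |= neighbours[t] | {t}
    return batch

# Four tasks with conflicts 0-1 and 1-2: the bound guarantees a batch
# of size 4 / (1 + 1) = 2; the greedy pass actually finds [3, 0, 2].
print(turan_bound(4, [(0, 1), (1, 2)]))   # 2.0
print(greedy_batch(4, [(0, 1), (1, 2)]))  # [3, 0, 2]
```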

    Autonomic Computing: State of the Art - Promises - Impact

    Software has never been as important as it is today – and its impact on life, work and society is growing at an impressive rate. We are in the flow of a software-induced transformation of nearly all aspects of our way of life and work. The dependence on software has become almost total. Malfunctions and unavailability may threaten vital areas of our society, life and work at any time. The two massive challenges of software are, on the one hand, the complexity of the software and, on the other hand, the disruptive environment. The complexity of software is a result of its size, its continuously growing functionality, more complicated technology and growing networking. The unfortunate consequence is that complexity leads to many problems in the design, development, evolution and operation of software systems, especially of large software systems. All software systems live in an environment. Many of today's environments can be disruptive and cause severe problems for the systems and their users. Examples of disruptions are attacks, failures of partner systems or networks, faults in communications or malicious activities. Traditionally, both growing complexity and disruptions from the environment have been tackled by better and better software engineering. The development and operating processes are constantly being improved and more powerful engineering tools are introduced. For defending against disruptions, predictive methods – such as risk analysis or fault trees – are used. All these techniques are based on the ingenuity, experience and skills of the engineers! However, the growing complexity and the increasing intensity of possible disruptions from the environment make it more and more questionable whether people are really able to cope successfully with this rising challenge in the future. Already, serious research suggests that this is no longer the case and that we need assistance from the software systems themselves! Here enters "autonomic computing" – a promising branch of software science which equips software systems with self-configuring, self-healing, self-optimizing and self-protecting capabilities. Autonomic computing systems are able to re-organize, optimize, defend and adapt themselves with no real-time human intervention. Autonomic computing relies on many branches of science – especially computer science, artificial intelligence, control theory, machine learning, multi-agent systems and more. Autonomic computing is an active research field which currently transfers many of its results into software engineering and many applications. This Hauptseminar offered the opportunity to learn about the fascinating technology of autonomic computing and to do some personal research guided by a professor and assisted by the seminar peers.
    Contents: Introduction 5
    1 What Knowledge Does a Taxi Need? – Overview of Rule Based, Model Based and Reinforcement Learning Systems for Autonomic Computing (Anja Reusch) 11
    2 Opportunities and Risks of Virtual Assistant Systems (Felix Hanspach) 23
    3 Evolution of a Microservice Architecture towards Autonomic Computing (Ilja Bauer) 37
    4 Possible Influences of Autonomous Information Services on their Users (Jan Engelmohr) 49
    5 The Benefits of Resolving the Trust Issues between Autonomic Computing Systems and their Users (Marc Kandler) 6
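
    As one concrete illustration of the self-healing capability listed above, a minimal supervisor that probes a component and restarts it on failure, with no human in the loop, might look like the following sketch; the probe/restart callables and the escalation policy are hypothetical:

```python
# Minimal sketch of a self-healing supervisor loop. `probe` and
# `restart` stand in for real health checks and recovery actions;
# both names and the escalation policy are illustrative assumptions.

import time

def supervise(probe, restart, interval_s=5.0, max_restarts=3):
    consecutive_failures = 0
    while consecutive_failures <= max_restarts:
        if probe():                       # component reports healthy
            consecutive_failures = 0      # reset the failure budget
        else:
            restart()                     # autonomic self-healing action
            consecutive_failures += 1
        time.sleep(interval_s)
    # Self-healing exhausted: escalate instead of flapping forever.
    raise RuntimeError("component keeps failing after restarts; escalating")
```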

    Cognitive Computing: Collected Papers

    'Cognitive computing' has initiated a new era in computer science. Cognitive computers are no longer rigidly programmed computers; instead, they learn from their interactions with humans, from the environment and from information. They are thus able to perform amazing tasks on their own, such as driving a car in dense traffic, piloting an aircraft in difficult conditions, taking complex financial investment decisions, analysing medical-imaging data, and assisting medical doctors in diagnosis and therapy. Cognitive computing is based on artificial intelligence, image processing, pattern recognition, robotics, adaptive software, networks and other modern computer science areas, but also includes sensors and actuators to interact with the physical world. Cognitive computers – also called 'intelligent machines' – emulate human cognitive, mental and intellectual capabilities. They aim to do for human mental power (the ability to use our brain to understand and influence our physical and information environment) what the steam engine and the combustion motor did for muscle power. We can expect a massive impact of cognitive computing on life and work. Many modern complex infrastructures, such as the electricity distribution grid, railway networks, the road traffic structure, information analysis (big data), the health care system, and many more will rely on intelligent decisions taken by cognitive computers. A drawback of cognitive computers will be a shift in employment opportunities: a rising number of tasks will be taken over by intelligent machines, thus erasing entire job categories (such as cashiers, mail clerks, call and customer assistance centres, taxi and bus drivers, pilots, grid operators, air traffic controllers, …). A possibly dangerous risk of cognitive computing is the threat posed by "super-intelligent machines" to mankind. As soon as they are sufficiently intelligent, deeply networked and have access to the physical world, they may endanger many areas of human supremacy and possibly even eliminate humans. Cognitive computing technology is based on new software architectures – the "cognitive computing architectures". Cognitive architectures enable the development of systems that exhibit intelligent behaviour.
    Contents: Introduction 5
    1. Applying the Subsumption Architecture to the Genesis Story Understanding System – A Notion and Nexus of Cognition Hypotheses (Felix Mai) 9
    2. Benefits and Drawbacks of Hardware Architectures Developed Specifically for Cognitive Computing (Philipp Schröppel) 19
    3. Language Workbench Technology For Cognitive Systems (Tobias Nett) 29
    4. Networked Brain-based Architectures for more Efficient Learning (Tyler Butler) 41
    5. Developing Better Pharmaceuticals – Using the Virtual Physiological Human (Ben Blau) 51
    6. Management of existential Risks of Applications leveraged through Cognitive Computing (Robert Richter) 6

    An efficient quantum algorithm for the hidden subgroup problem in extraspecial groups

    Extraspecial groups form a remarkable subclass of p-groups. They are also present in quantum information theory, in particular in quantum error correction. We give here a polynomial-time quantum algorithm for finding hidden subgroups in extraspecial groups. Our approach is quite different from the recent algorithms presented in [17] and [2] for the Heisenberg group, the extraspecial p-group of size p^3 and exponent p. Exploiting certain nice automorphisms of the extraspecial groups, we define specific group actions which are used to reduce the problem to hidden subgroup instances in abelian groups that can be dealt with directly.
    Comment: 10 pages
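
    For readers unfamiliar with the terminology, the defining property of these groups can be stated compactly; the following is a standard group-theoretic characterization, not notation taken from the paper:

```latex
% An extraspecial p-group G is a p-group whose centre, commutator
% subgroup and Frattini subgroup coincide and have order p:
Z(G) \;=\; [G,G] \;=\; \Phi(G) \;\cong\; \mathbb{Z}_p,
\qquad |G| = p^{1+2n} \ (n \ge 1),
\qquad G/Z(G) \cong (\mathbb{Z}_p)^{2n}.
% The Heisenberg group mentioned above is the extraspecial group of
% order p^3 and exponent p (for odd p).
```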