
    Behavior out of control: experimental evolution of resistance to host manipulation

    Many parasites alter their host's phenotype in ways that enhance their own fitness beyond the benefits they would gain from normal exploitation. Such host manipulation is rarely consistent with the host's best interests, resulting in suboptimal and often fatal behavior from the host's perspective. Hosts should therefore evolve resistance to host manipulation. The cestode Schistocephalus solidus manipulates the behavior of its first intermediate copepod host to reduce the host's predation susceptibility and thereby avoid fatal premature predation before the parasite is ready for transmission to its subsequent host. Thereafter, S. solidus increases host activity to facilitate transmission. If successful, this host manipulation is necessarily fatal for the host. I selected the copepod Macrocyclops albidus, a first intermediate host of S. solidus, for resistance or susceptibility to host manipulation in order to investigate the evolvability of these traits. Selection on the host indeed increased host manipulation in susceptible selection lines and reduced it in resistant ones. Interestingly, this seemed to be at least partly due to changes in the baseline levels of the modified trait (activity) rather than to actual changes in resistance or susceptibility to host manipulation. Hence, hosts appear restricted in how rapidly and efficiently they can evolve resistance to host manipulation.

    Der "deviante" Körper: die Verhandlung des weiblichen Körpers in alltäglichen Kleidungspraktiken medialer Selbstinszenierung

    Clothes can be regarded as the material by which we construct identities. By knowing the implicit and explicit rules of social settings, we can use this material to successfully stage the self. Deviant bodies have limited access to seasonal fashion trends, which also hinders their construction of a "fashionable persona". Using qualitative content analysis, the article investigates blog postings by the Curvy Sewing Collective (CSC). The self-presentations of authors who self-identify as curvy or fat illustrate how these women engage with clothes as material, and with dressing as a practice, as strategies for optimizing their own bodies in relation to a social environment. In cooperation with the CSC community, these women develop technologies of the self that enable them to dress in socially acceptable ways despite their perceived deviance. These technologies also support successful identity construction and produce images of attractive, fashion-conscious women. The collective thereby fosters individual empowerment for its participants, even though the attractiveness norms it presents reproduce notions of normative femininity.

    Feedback-Driven Data Clustering

    The acquisition and analysis of data have become common yet critical tasks in many areas of modern economy and research. Unfortunately, the ever-increasing scale of datasets has long outgrown the capacities and abilities humans can muster to extract information from them and gain new knowledge. For this reason, research areas like data mining and knowledge discovery steadily gain importance. The algorithms they provide for the extraction of knowledge are mandatory prerequisites that enable people to analyze large amounts of information. Among the approaches offered by these areas, clustering is one of the most fundamental. By finding groups of similar objects inside the data, it aims to identify meaningful structures that constitute new knowledge. Clustering results are also often used as input for other analysis techniques like classification or forecasting. As clustering extracts new and unknown knowledge, it obviously has no access to any form of ground truth. For this reason, clustering results have a hypothetical character and must be interpreted with respect to the application domain. This makes clustering very challenging and leads to an extensive and diverse landscape of available algorithms. Most of these are expert tools tailored to a single, narrowly defined application scenario. Over the years, this specialization has become a major trend, one that arose to counter the inherent uncertainty of clustering by including as many domain specifics as possible in the algorithms. While customized methods often improve result quality, they become more and more complicated to handle and lose versatility. This creates a dilemma, especially for amateur users, whose numbers are increasing as clustering is applied in more and more domains. While an abundance of tools is offered, guidance is severely lacking and users are left alone with critical tasks like algorithm selection, parameter configuration, and the interpretation and adjustment of results.
This thesis aims to solve this dilemma by structuring and integrating the necessary steps of clustering into a guided and feedback-driven process. In doing so, users are provided with a default modus operandi for the application of clustering. Two main components constitute the core of said process: the algorithm management and the visual-interactive interface. Algorithm management handles all aspects of actual clustering creation and the involved methods. It employs a modular approach for algorithm description that allows users to understand, design, and compare clustering techniques with the help of building blocks. In addition, algorithm management offers facilities for the integration of multiple clusterings of the same dataset into an improved solution. New approaches based on ensemble clustering not only allow the utilization of different clustering techniques, but also ease their application by acting as an abstraction layer that unifies individual parameters. Finally, this component provides a multi-level interface that structures all available control options and provides the docking points for user interaction. The visual-interactive interface supports users during result interpretation and adjustment. For this, the defining characteristics of a clustering are communicated via a hybrid visualization. In contrast to traditional data-driven visualizations that tend to become overloaded and unusable with increasing volume/dimensionality of data, this novel approach communicates the abstract aspects of cluster composition and relations between clusters. This aspect orientation allows the use of easy-to-understand visual components and makes the visualization immune to scale related effects of the underlying data. This visual communication is attuned to a compact and universally valid set of high-level feedback that allows the modification of clustering results. 
Instead of technical parameters that indirectly cause changes in the whole clustering by influencing its creation process, users can employ simple commands like merge or split to directly adjust clusters. The orchestrated cooperation of these two main components creates a modus operandi in which clusterings are no longer created and discarded as a whole until a satisfying result is obtained. Instead, users apply the feedback-driven process to iteratively refine an initial solution. Performance and usability of the proposed approach were evaluated in a user study. Its results show that the feedback-driven process enabled amateur users to easily create satisfying clustering results even from varied and suboptimal starting situations.
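The merge and split feedback mentioned above can be sketched in a few lines. This is an illustrative toy, not the thesis' actual implementation: the label-list representation, function names, and split criterion are assumptions.

```python
# Toy sketch of direct, high-level clustering feedback: a clustering is a
# list of integer labels, and users adjust it with merge/split commands
# instead of re-tuning algorithm parameters.

def merge(labels, a, b):
    """Merge cluster b into cluster a by relabeling b's members."""
    return [a if lab == b else lab for lab in labels]

def split(labels, target, predicate, new_label):
    """Split cluster `target`: members at indices where `predicate` holds
    are moved into a new cluster `new_label`."""
    return [new_label if lab == target and predicate(i) else lab
            for i, lab in enumerate(labels)]

labels = [0, 0, 1, 1, 2, 2]
labels = merge(labels, 0, 1)                    # clusters 0 and 1 become one
labels = split(labels, 2, lambda i: i == 5, 3)  # peel item 5 off cluster 2
print(labels)  # → [0, 0, 0, 0, 2, 3]
```

The point of the sketch is that both commands act on the result itself, so the effect of each feedback step is local and predictable for non-expert users.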

    The Discriminative Generalized Hough Transform for Localization of Highly Variable Objects and its Application for Surveillance Recordings

    This work is about the localization of arbitrary objects in 2D images in general, and the localization of persons in video surveillance recordings in particular. More precisely, it is about localizing specific landmarks. It evaluates the possibilities and limitations of localization approaches based on the Generalized Hough Transform (GHT), especially the Discriminative Generalized Hough Transform (DGHT). GHT-based approaches count matching model and feature points for each candidate position; the most likely target position is the one with the highest number of matches. In addition, the DGHT comprises a statistical learning approach for generating optimal DGHT models, which has achieved good results on medical images. This work shows that the DGHT is not restricted to medical tasks, but that it has issues with the large target-object variability that is frequent in video surveillance tasks. Like all GHT-based approaches, the DGHT considers only the number of matching model-feature-point combinations, which means that all model points are treated independently. This work shows that model points are not independent of each other and that treating them independently results in high error rates. This drawback is analyzed, and a universal solution is presented that is applicable not only to the DGHT but to all GHT-based approaches. The solution is based on an additional classifier that takes the whole set of matching model-feature-point combinations into account to estimate a confidence score. On all tested databases, this approach reduced the error rates drastically, by up to 94.9%. Furthermore, this work presents a general approach for combining multiple GHT models into a deeper model. This can be used to combine the localization results of different object landmarks such as the mouth, nose, and eyes. Similar to Convolutional Neural Networks (CNNs), this splits the target-object variability into multiple smaller, more manageable variabilities.
A comparison of GHT-based approaches with CNNs, together with a description of the advantages, disadvantages, and potential applications of both approaches, concludes this work.
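The voting scheme common to all GHT-based approaches, in which each matching model-feature pair casts a vote for a candidate reference-point position, can be sketched as follows. This is an illustrative toy only: the DGHT additionally learns weighted models, which this sketch omits, and the tiny model and feature sets are invented.

```python
# Minimal sketch of Generalized Hough Transform voting. A "model" is a set
# of displacement vectors from feature points to the object's reference
# point; the accumulator cell with the most votes wins.

from collections import Counter

def ght_vote(feature_points, model_displacements):
    """Each (feature point, displacement) pair votes for a candidate
    reference-point position; return the position with the most votes."""
    votes = Counter()
    for (fx, fy) in feature_points:
        for (dx, dy) in model_displacements:
            votes[(fx + dx, fy + dy)] += 1
    position, count = votes.most_common(1)[0]
    return position, count

# Hypothetical model: the reference point sits at displacement (1, 1) or
# (-1, 1) from each feature point.
model = [(1, 1), (-1, 1)]
# Image features consistent with an object whose reference point is (5, 5),
# plus one clutter feature at (9, 0).
features = [(4, 4), (6, 4), (9, 0)]
print(ght_vote(features, model))  # → ((5, 5), 2)
```

Note that the winner is chosen purely by vote count, treating every model point independently; this is exactly the limitation the confidence-score classifier described above addresses.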

    An Ontological Framework for Characterizing Hydrological Flow Processes

    The spatio-temporal processes that describe hydrologic flow, the movement of water above and below the surface of the Earth, are currently underrepresented in formal semantic representations of the water domain. This paper analyzes basic flow processes in the hydrology domain and systematically studies the hydrogeological entities that participate in them, such as different rock and water bodies, the ground surface, or subsurface zones. It identifies the source and goal entities and the transported water (the theme) as common participants in hydrologic flow, and constructs a taxonomy of different flow patterns based on differences in source and goal participants. The taxonomy and related concepts are axiomatized in first-order logic as refinements of DOLCE's participation relation, reusing hydrogeological concepts from the Hydro Foundational Ontology (HyFO). The formalization further enhances HyFO and contributes to improved knowledge integration in the hydrology domain.
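As a rough illustration of the kind of first-order axiom involved (the predicate names here are hypothetical placeholders, not the actual vocabulary of the paper, DOLCE, or HyFO), a flow process could be required to have a source, a goal, and a transported water body as participants:

```latex
% Hypothetical axiom sketch: every hydrologic flow process has a source,
% a goal, and a theme (the transported water) among its participants.
\forall p \, \big( \mathit{FlowProcess}(p) \rightarrow
  \exists s \, \exists g \, \exists w \, ( \mathit{source}(p, s)
  \wedge \mathit{goal}(p, g)
  \wedge \mathit{theme}(p, w) \wedge \mathit{WaterBody}(w) ) \big)
```

The taxonomy of flow patterns would then arise from restricting the types of the source and goal participants (e.g., surface water body vs. subsurface zone).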

    Flexible G1 Interpolation of Quad Meshes

    Transforming an arbitrary mesh into a smooth G1 surface has been the subject of intensive research. Obtaining a visually pleasing shape without any imperfection, even in the presence of extraordinary mesh vertices, is still a challenging problem, in particular when interpolation of the mesh vertices is required. We present a new local method that produces visually smooth shapes while solving the interpolation problem. It combines low-degree biquartic Bézier patches, using a minimal number of pieces per mesh face, assembled together with G1 continuity. All surface control points are given explicitly. The construction is local and free of zero twists. We further show that within this economical class of surfaces it is nevertheless possible to derive a sufficient number of meaningful degrees of freedom, so that standard optimization techniques result in high-quality surfaces.
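For context, G1 continuity is the standard tangent-plane continuity condition between adjacent patches; the parametrization conventions below are illustrative and not taken from the paper. Two patches $S_1(u,v)$ and $S_2(u,v)$ sharing the boundary curve $S_1(1,v) = S_2(0,v)$ join with G1 continuity when the cross-boundary derivative of $S_2$ lies in the tangent plane of $S_1$ along that curve:

```latex
% G^1 (tangent-plane) continuity across the shared boundary curve:
\frac{\partial S_2}{\partial u}(0, v) \;=\;
  \alpha(v)\,\frac{\partial S_1}{\partial u}(1, v)
  \;+\; \beta(v)\,\frac{\partial S_1}{\partial v}(1, v),
\qquad \alpha(v) > 0,
```

for some scalar functions $\alpha$ and $\beta$. This is weaker than C1 continuity ($\alpha \equiv 1$, $\beta \equiv 0$), which is what leaves the extra degrees of freedom the method exploits.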

    Formal Qualitative Spatial Augmentation of the Simple Feature Access Model

    The need to share and integrate heterogeneous geospatial data has resulted in the development of geospatial data standards such as the OGC/ISO standard Simple Feature Access (SFA), which standardizes operations and simple topological and mereotopological relations over various geometric features such as points, line segments, polylines, polygons, and polyhedral surfaces. While SFA's supplied relations enable qualitative querying over the geometric features, the relations' semantics are not formalized. This lack of formalization prevents automated reasoning with the geometric data beyond simple querying, either in isolation or in conjunction with external, purely qualitative information such as one might extract from textual sources like social media. To enable joint qualitative reasoning over geometric and qualitative spatial information, this work formalizes the semantics of SFA's geometric features and mereotopological relations by defining or restricting them in terms of the spatial entity types and relations provided by CODIB, a first-order logical theory from an existing logical formalization of multidimensional qualitative space.
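To make the kind of relations at stake concrete, here is a toy sketch of SFA-style topological predicates, specialized to axis-aligned rectangles. SFA actually defines these over general geometries via intersection patterns of interiors, boundaries, and exteriors (the DE-9IM); this sketch only captures the intuition and is not from the paper or the standard.

```python
# Toy versions of SFA-style predicates (Within, Disjoint, Intersects)
# for axis-aligned rectangles. Real SFA semantics use the DE-9IM over
# arbitrary geometries; this is an intuition-level illustration only.

from dataclasses import dataclass

@dataclass(frozen=True)
class Rect:
    xmin: float
    ymin: float
    xmax: float
    ymax: float

def within(a: Rect, b: Rect) -> bool:
    """a lies inside b (closure-based containment)."""
    return (b.xmin <= a.xmin and a.xmax <= b.xmax and
            b.ymin <= a.ymin and a.ymax <= b.ymax)

def disjoint(a: Rect, b: Rect) -> bool:
    """a and b share no points at all."""
    return (a.xmax < b.xmin or b.xmax < a.xmin or
            a.ymax < b.ymin or b.ymax < a.ymin)

def intersects(a: Rect, b: Rect) -> bool:
    return not disjoint(a, b)

room  = Rect(0, 0, 10, 10)
table = Rect(2, 2, 4, 4)
print(within(table, room), disjoint(table, Rect(20, 20, 21, 21)))  # → True True
```

The formalization described above would pin down exactly such truth conditions axiomatically, so that a reasoner can combine them with purely qualitative assertions (e.g., parthood facts extracted from text) rather than only evaluate them over concrete coordinates.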