
    An Efficient Multiway Mergesort for GPU Architectures

    Sorting is a primitive operation that is a building block for countless algorithms. As such, it is important to design sorting algorithms that approach peak performance on a range of hardware architectures. Graphics Processing Units (GPUs) are particularly attractive architectures as they provide massive parallelism and computing power. However, the intricacies of their compute and memory hierarchies make designing GPU-efficient algorithms challenging. In this work we present GPU Multiway Mergesort (MMS), a new GPU-efficient multiway mergesort algorithm. MMS employs a new partitioning technique that exposes the parallelism needed by modern GPU architectures. To the best of our knowledge, MMS is the first sorting algorithm for the GPU that is asymptotically optimal in terms of global memory accesses and that is completely free of shared memory bank conflicts. We realize an initial implementation of MMS, evaluate its performance on three modern GPU architectures, and compare it to competitive implementations available in state-of-the-art GPU libraries. Despite these implementations being highly optimized, MMS compares favorably, achieving performance improvements for most random inputs. Furthermore, unlike MMS, state-of-the-art algorithms are susceptible to bank conflicts. We find that for certain inputs that cause these algorithms to incur large numbers of bank conflicts, MMS can achieve up to a 37.6% speedup over its fastest competitor. Overall, even though its current implementation is not fully optimized, MMS outperforms the fastest comparison-based sorting implementations available to date, owing to its efficient use of the memory hierarchy.
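
    The multiway merge primitive that MMS builds on can be illustrated sequentially. Below is a minimal sketch of a k-way merge of sorted runs using a binary heap; it is only a CPU illustration of the primitive, not the paper's GPU partitioning scheme, and the function name is hypothetical.

```python
import heapq

def multiway_merge(runs):
    """Sequentially merge k sorted lists (the textbook multiway merge).

    The MMS paper's contribution is a partitioning technique that lets many
    GPU threads merge disjoint output ranges of such runs in parallel; this
    sketch only shows the underlying merge primitive.
    """
    heap = [(run[0], i, 0) for i, run in enumerate(runs) if run]
    heapq.heapify(heap)
    out = []
    while heap:
        value, run_idx, elem_idx = heapq.heappop(heap)
        out.append(value)
        if elem_idx + 1 < len(runs[run_idx]):
            heapq.heappush(heap, (runs[run_idx][elem_idx + 1], run_idx, elem_idx + 1))
    return out

# Example: four sorted runs merged into one sorted sequence.
print(multiway_merge([[1, 5, 9], [2, 6], [0, 3, 7], [4, 8]]))
```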

    Skeleton as a probe of the cosmic web: the 2D case

    We discuss the skeleton as a probe of the filamentary structures of a 2D random field. It can be defined for a smooth field as the ensemble of pairs of field lines departing from saddle points, initially aligned with the major axis of local curvature and connecting them to local maxima. This definition is thus non-local and makes analytical predictions difficult, so we propose a local approximation: the local skeleton is given by the set of points where the gradient is aligned with the local curvature major axis and where the second component of the local curvature is negative. We perform a statistical analysis of the length of the total local skeleton, chosen for simplicity as the set of all points of space where the gradient is either parallel or orthogonal to the main curvature axis. In all our numerical experiments, which include Gaussian and various non-Gaussian realizations such as \chi^2 fields and Zel'dovich maps, the differential length is found, within a normalization factor, to be very close to the probability distribution function of the smoothed field. This is in fact explicitly demonstrated in the Gaussian case. This result might be discouraging for using the skeleton as a probe of non-Gaussianity, but our analyses assume that the total length of the skeleton is a free, adjustable parameter. This total length could in fact be used to constrain cosmological models, in CMB maps but also in 3D galaxy catalogs, where it estimates the total length of filaments in the Universe. Making the link with other works, we also show how the skeleton can be used to study the dynamics of large-scale structure. Comment: 15 pages, 11 figures, submitted to MNRAS
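
    The local skeleton criterion described above (gradient aligned with the major curvature axis, second curvature component negative) can be evaluated pointwise on a gridded field. The sketch below is a minimal illustration assuming a regular grid and finite-difference derivatives: it flags pixels where the 2D alignment condition f_x f_y (f_xx - f_yy) + f_xy (f_y^2 - f_x^2) = 0 approximately holds and the smaller Hessian eigenvalue is negative. The threshold and test field are illustrative choices, not values from the paper.

```python
import numpy as np

def local_skeleton_mask(field, eps):
    """Flag grid points satisfying the local skeleton condition of a 2D field.

    Condition: the gradient is (approximately) an eigenvector of the Hessian,
    i.e. fx*fy*(fxx - fyy) + fxy*(fy**2 - fx**2) ~ 0, and the smaller Hessian
    eigenvalue is negative. `eps` depends on grid spacing and field amplitude.
    """
    gy, gx = np.gradient(field)        # derivatives along axis 0 (y) and axis 1 (x)
    gxy, gxx = np.gradient(gx)
    gyy, _ = np.gradient(gy)
    align = gx * gy * (gxx - gyy) + gxy * (gy ** 2 - gx ** 2)
    lam_minor = 0.5 * (gxx + gyy - np.sqrt((gxx - gyy) ** 2 + 4.0 * gxy ** 2))
    return (np.abs(align) < eps) & (lam_minor < 0)

# Toy example on an analytic field; a smoothed random field would be used in practice.
x = np.linspace(0.0, 4.0 * np.pi, 200)
X, Y = np.meshgrid(x, x)
mask = local_skeleton_mask(np.sin(X) * np.cos(Y), eps=1e-6)
print(mask.sum(), "candidate skeleton pixels")
```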

    A Process Modelling Framework Based on Point Interval Temporal Logic with an Application to Modelling Patient Flows

    This thesis considers an application of a temporal theory to describe and model the patient journey in the hospital accident and emergency (A&E) department. The aim is to introduce a generic but dynamic method applicable to any setting, including healthcare. Constructing a consistent process model can be instrumental in streamlining healthcare issues. Current process modelling techniques used in healthcare, such as flowcharts, the unified modelling language activity diagram (UML AD), and business process modelling notation (BPMN), are intuitive but imprecise. They cannot fully capture the complexities of the types of activities and the full extent of temporal constraints to an extent where one could reason about the flows. Formal approaches such as Petri nets have also been reviewed to investigate their applicability to modelling processes in the healthcare domain. Additionally, current modelling standards do not offer any formal mechanism for scheduling patient flows, so healthcare relies on the critical path method (CPM) and the program evaluation and review technique (PERT), which also have limitations, e.g. the finish-start barrier. It is imperative to specify the temporal constraints between the start and/or end of a process, e.g. the beginning of a process A precedes the start (or end) of a process B. However, these approaches fail to provide a mechanism for handling such temporal situations. A formal representation, if provided, can assist in effective knowledge representation and quality enhancement concerning a process. It would also help in uncovering the complexities of a system and assist in modelling it in a consistent way, which is not possible with the existing modelling techniques. The above issues are addressed in this thesis by proposing a framework that provides a knowledge base to model patient flows for accurate representation, based on point interval temporal logic (PITL), which treats points and intervals as primitives. These objects constitute the knowledge base for the formal description of a system. With the aid of the inference mechanism of the temporal theory presented here, the exhaustive temporal constraints derived from the components of the proposed axiomatic system serve as a knowledge base. The proposed methodological framework adopts a model-theoretic approach in which a theory is developed and considered as a model, while the corresponding instance is considered as its application. Using this approach assists in identifying the core components of the system and their precise operation, representing a real-life domain deemed suitable to the process modelling issues specified in this thesis. Thus, I have evaluated the modelling standards for their most-used terminologies and constructs to identify their key components. This also assists in the generalisation of the critical terms (of the process modelling standards) based on their ontology. The set of generalised terms proposed serves as an enumeration of the theory and subsumes the core modelling elements of the process modelling standards. The catalogue presents a knowledge base for the business and healthcare domains, and its components are formally defined (semantics). Furthermore, a resolution theorem proof is used to show the structural features of the theory (model) and to establish that it is sound and complete. After establishing that the theory is sound and complete, the next step is to provide the instantiation of the theory. This is achieved by mapping the core components of the theory to their corresponding instances.
Additionally, a formal graphical tool termed a point graph (PG) is used to visualise the cases of the proposed axiomatic system. PG facilitates modelling and scheduling of patient flows and enables analysis of existing models for possible inaccuracies and inconsistencies, supported by a reasoning mechanism based on PITL. Following that, a transformation is developed to map the core modelling components of the standards into the extended PG (PG*) based on the semantics presented by the axiomatic system. A real-life case (from the King’s College Hospital accident and emergency (A&E) department’s trauma patient pathway) is considered to validate the framework. It is divided into three patient flows depicting the journey of a patient with significant trauma: arriving at A&E, undergoing a procedure, and subsequently being discharged. The department’s staff relied upon UML AD and BPMN to model these patient flows. An evaluation of their representation is presented to show the shortfalls of the modelling standards in modelling patient flows. The last step is to model these patient flows using the developed approach, which is supported by enhanced reasoning and scheduling.
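
    The kind of temporal constraint mentioned above, e.g. "the beginning of a process A precedes the start (or end) of a process B", can be represented with points and intervals as primitives. The sketch below is only a minimal Python illustration of such point-level constraints between process intervals; it is not the thesis's PITL axiomatic system or its point graph formalism, and all names are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    """A process occurrence bounded by its start and end points (start < end)."""
    name: str
    start: float
    end: float

def start_precedes_start(a: Interval, b: Interval) -> bool:
    """Point constraint: the start of A precedes the start of B."""
    return a.start < b.start

def start_precedes_end(a: Interval, b: Interval) -> bool:
    """Point constraint: the start of A precedes the end of B."""
    return a.start < b.end

# A toy A&E-style flow: triage must start before treatment starts, and
# treatment must start before the discharge process ends.
triage = Interval("triage", 0.0, 10.0)
treatment = Interval("treatment", 12.0, 45.0)
discharge = Interval("discharge", 50.0, 55.0)

constraints = [
    ("triage before treatment", start_precedes_start(triage, treatment)),
    ("treatment before end of discharge", start_precedes_end(treatment, discharge)),
]
violated = [name for name, holds in constraints if not holds]
print(violated)  # an empty list means the schedule satisfies all constraints
```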

    Man-made Surface Structures from Triangulated Point Clouds

    Photogrammetry aims at reconstructing the shape and dimensions of objects captured with cameras, 3D laser scanners or other spatial acquisition systems. While many acquisition techniques deliver triangulated point clouds with millions of vertices within seconds, the interpretation is usually left to the user. Especially when reconstructing man-made objects, one is interested in the underlying surface structure, which is not inherently present in the data. This includes the geometric shape of the object, e.g. cubical or cylindrical, as well as corresponding surface parameters, e.g. width, height and radius. Applications are manifold and range from industrial production control to architectural on-site measurements to large-scale city models. The goal of this thesis is to automatically derive such surface structures from triangulated 3D point clouds of man-made objects. They are defined as a compound of planar or curved geometric primitives. Model knowledge about typical primitives and relations between adjacent pairs of them should affect the reconstruction positively. After formulating a parametrized model for man-made surface structures, we develop a reconstruction framework with three processing steps: During a fast pre-segmentation exploiting local surface properties, we divide the given surface mesh into planar regions. Making use of a model selection scheme based on minimizing the description length, this surface segmentation is free of control parameters and automatically yields an optimal number of segments. A subsequent refinement introduces a set of planar or curved geometric primitives and hierarchically merges adjacent regions based on their joint description length. A global classification and constrained parameter estimation combines the data-driven segmentation with high-level model knowledge. To this end, we represent the surface structure with a graphical model and formulate factors based on the likelihood as well as prior knowledge about parameter distributions and class probabilities. We infer the most probable setting of surface and relation classes with belief propagation and estimate an optimal surface parametrization with constraints induced by inter-regional relations. The process is specifically designed to work on noisy data with outliers and a few exceptional freeform regions not describable with geometric primitives. It yields full 3D surface structures with watertightly connected surface primitives of different types. The performance of the proposed framework is experimentally evaluated on various data sets. On small synthetically generated meshes we analyze the accuracy of the estimated surface parameters, the sensitivity w.r.t. various properties of the input data and w.r.t. model assumptions, as well as the computational complexity. Additionally we demonstrate the flexibility w.r.t. different acquisition techniques on real data sets. The proposed method turns out to be accurate, reasonably fast and not very sensitive to defects in the data or imprecise model assumptions.
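
    One building block of the pre-segmentation sketched above, grouping vertices into planar regions, is fitting a plane to a set of 3D points and measuring the residuals. The following is a minimal sketch of a least-squares plane fit via SVD under that assumption; it is not the thesis's implementation and omits the description-length-based model selection.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit to an (N, 3) array of 3D points.

    Returns (centroid, unit normal, RMS orthogonal residual). The normal is the
    right-singular vector of the centered points with the smallest singular value.
    """
    points = np.asarray(points, dtype=float)
    centroid = points.mean(axis=0)
    centered = points - centroid
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    rms = np.sqrt(np.mean((centered @ normal) ** 2))
    return centroid, normal, rms

# Noisy samples from the plane z = 0.2*x - 0.1*y + 3.
rng = np.random.default_rng(1)
xy = rng.uniform(-1.0, 1.0, size=(200, 2))
z = 0.2 * xy[:, 0] - 0.1 * xy[:, 1] + 3.0 + 0.01 * rng.standard_normal(200)
centroid, normal, rms = fit_plane(np.column_stack([xy, z]))
print(normal, rms)  # normal close (up to sign) to (0.2, -0.1, -1) normalized, small residual
```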

    odMLtables: A User-Friendly Approach for Managing Metadata of Neurophysiological Experiments

    An essential aspect of scientific reproducibility is a coherent and complete acquisition of metadata along with the actual data of an experiment. The high degree of complexity and heterogeneity of neuroscience experiments requires a rigorous management of the associated metadata. The odML framework represents a solution to organize and store complex metadata digitally in a hierarchical format that is both human- and machine-readable. However, this hierarchical representation of metadata is difficult to handle when metadata entries need to be collected and edited manually during the daily routines of a laboratory. With odMLtables, we present an open-source software solution that enables users to collect, manipulate, visualize, and store metadata in tabular representations (in xls or csv format) by providing functionality to convert these tabular collections to the hierarchically structured metadata format odML, and to either extract or merge subsets of a complex metadata collection. With this, odMLtables bridges the gap between handling metadata in an intuitive way that integrates well with daily lab routines and commonly used software products on the one hand, and the implementation of a complete, well-defined metadata collection for the experiment in a standardized format on the other hand. We demonstrate usage scenarios of the odMLtables tools in common lab routines in the context of metadata acquisition and management, and show how the tool can assist in exploring published datasets that provide metadata in the odML format.
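
    The core conversion odMLtables performs, between a flat tabular metadata collection and a hierarchical section/property structure, can be illustrated without its API. The sketch below is a generic standard-library illustration that nests CSV rows of the hypothetical form (section_path, property, value) into a dictionary tree; the real odML format additionally carries attributes such as definitions, units and types that are not modelled here.

```python
import csv
from collections import defaultdict

def table_to_tree(csv_path):
    """Nest flat metadata rows into a hierarchy.

    Expects a CSV with the columns section_path, property, value; a row like
    ("Subject/Implant", "depth", "2.1 mm") becomes
    tree["Subject"]["Implant"]["depth"] = "2.1 mm".
    """
    def nested():
        return defaultdict(nested)

    tree = nested()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            node = tree
            for section in row["section_path"].split("/"):
                node = node[section]
            node[row["property"]] = row["value"]
    return tree
```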

    Spellburst: A Node-based Interface for Exploratory Creative Coding with Natural Language Prompts

    Creative coding tasks are often exploratory in nature. When producing digital artwork, artists usually begin with a high-level semantic construct such as a "stained glass filter" and programmatically implement it by varying code parameters such as shape, color, lines, and opacity to produce visually appealing results. Interviews with artists suggest that translating semantic constructs into program syntax can be effortful and that current programming tools do not lend themselves well to rapid creative exploration. To address these challenges, we introduce Spellburst, a large language model (LLM) powered creative-coding environment. Spellburst provides (1) a node-based interface that allows artists to create generative art and explore variations through branching and merging operations, (2) expressive prompt-based interactions to engage in semantic programming, and (3) dynamic prompt-driven interfaces and direct code editing to seamlessly switch between semantic and syntactic exploration. Our evaluation with artists demonstrates Spellburst's potential to enhance creative coding practices and to inform the design of computational creativity tools that bridge semantic and syntactic spaces.
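
    The exploration model described above, in which each node holds a prompt and the code derived from it, and variations arise from branching and merging nodes, can be sketched as a small data structure. This is a conceptual illustration only, with a hypothetical generate_code callback standing in for the LLM; it is not Spellburst's implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Node:
    """One exploration step: a prompt, the code it produced, and its parent nodes."""
    prompt: str
    code: str
    parents: List["Node"] = field(default_factory=list)

def branch(parent: Node, prompt: str, generate_code: Callable[[str, str], str]) -> Node:
    """Create a variation of `parent` by applying a new prompt to its code."""
    return Node(prompt, generate_code(parent.code, prompt), [parent])

def merge(a: Node, b: Node, prompt: str, generate_code: Callable[[str, str], str]) -> Node:
    """Combine two variations into a single node with both as parents."""
    return Node(prompt, generate_code(a.code + "\n" + b.code, prompt), [a, b])

# A stand-in for the LLM's prompt-to-code step.
def fake_llm(code: str, prompt: str) -> str:
    return code + f"\n# edit requested: {prompt}"

root = Node("stained glass filter", "draw_filter()")
warm = branch(root, "use warmer colors", fake_llm)
coarse = branch(root, "larger glass shards", fake_llm)
combined = merge(warm, coarse, "combine both variations", fake_llm)
print(len(combined.parents))  # 2
```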