
    Semantic cognition: a re-examination of the recurrent network "hub" model

    This paper explores a model of “semantic cognition” first described in Rogers et al. (2004). This model was shown to reproduce the behaviour of neurological patients who perform poorly on a variety of tests of semantic knowledge, and it thus purports to provide a comprehensive explanation for the semantic deficits found in patients with semantic dementia and, as extended in Lambon Ralph, Lowe, and Rogers (2007), in individuals with herpes simplex virus encephalitis. The model therefore not only emulates these semantic impairments but also underpins a theoretical account of such memory disturbances. We report preliminary results arising from an attempted reimplementation of the Rogers et al. model. Specifically, while we were able to successfully reimplement the fully functioning model and recreate “normal” behaviour, our attempts to replicate the behaviour of semantically impaired patients by lesioning the model met with mixed success. Our results suggest that while semantic impairments reminiscent of patients may arise when the Rogers et al. model is lesioned, such impairments are not a necessary consequence of the model. We discuss the implications of these apparently negative results for the Rogers et al. account of semantic cognition.
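
    For readers unfamiliar with how connectionist models are “lesioned”, the Python sketch below shows the general technique: damage is simulated by removing a random fraction of connections from a trained weight matrix. This is a minimal illustration of the technique in general; the matrix size, the `lesion` helper, and the severity levels are assumptions for demonstration, not the Rogers et al. implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained weight matrix of a recurrent network's
# "hub" layer. Size and values are arbitrary illustrations.
W_hub = rng.normal(size=(64, 64))

def lesion(W, severity, rng):
    """Zero out a random fraction `severity` of the connections --
    the standard way connectionist models simulate brain damage."""
    mask = rng.random(W.shape) >= severity
    return W * mask

for severity in (0.1, 0.3, 0.5):
    W_damaged = lesion(W_hub, severity, rng)
    removed = 1.0 - np.count_nonzero(W_damaged) / W_hub.size
    print(f"severity {severity:.1f}: {removed:.1%} of hub connections removed")
```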

    PatTrieSort - External String Sorting based on Patricia Tries

    External merge sort belongs to the most efficient and widely used algorithms for sorting big data: as much data as fits into main memory is sorted there and afterwards swapped to external storage as a so-called initial run. After sorting all the data block-wise in this way, the initial runs are merged in a merging phase to retrieve the final sorted run containing the completely sorted original data. Patricia tries are one of the most space-efficient ways to store strings, especially those with common prefixes. Hence, we propose to use Patricia tries for initial run generation in an external merge sort variant, such that initial runs can become large compared to those of traditional external merge sort using the same main memory size. Furthermore, we store the initial runs as Patricia tries instead of lists of sorted strings. As we show in this paper, Patricia tries can be merged efficiently, with performance superior to merging runs of sorted strings. We complete our discussion with a complexity analysis as well as a comprehensive performance evaluation, where our new approach outperforms traditional external merge sort by a factor of 4 for sorting over 4 billion strings of real-world data.
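
    The Python sketch below illustrates the two phases of external merge sort that the paper builds on: initial run generation and the k-way merging phase. It uses a plain in-memory sorted list for run generation where PatTrieSort would use a Patricia trie; the function names and the toy run capacity are assumptions for illustration, not the paper's code.

```python
import heapq, os, tempfile

def make_runs(strings, run_capacity):
    """Run generation: buffer as many strings as fit in 'memory', then
    spill each sorted run to disk. PatTrieSort replaces this in-memory
    sorted list with a Patricia trie so a run packs more strings into
    the same amount of main memory."""
    run_files, buf = [], []

    def spill():
        f = tempfile.NamedTemporaryFile("w", delete=False, suffix=".run")
        f.writelines(s + "\n" for s in sorted(buf))
        f.close()
        run_files.append(f.name)
        buf.clear()

    for s in strings:
        buf.append(s)
        if len(buf) >= run_capacity:
            spill()
    if buf:
        spill()
    return run_files

def merge_runs(run_files, out_path):
    """Merging phase: k-way merge of all sorted runs into the final run."""
    files = [open(p) for p in run_files]
    with open(out_path, "w") as out:
        out.writelines(heapq.merge(*files))  # streams lines in sorted order
    for f in files:
        f.close()
        os.remove(f.name)

runs = make_runs(["banana", "apple", "cherry", "apricot", "blueberry"],
                 run_capacity=2)
merge_runs(runs, "sorted.txt")
```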

    Toxoplasma gondii Syntaxin 6 is required for vesicular transport between endosomal-like compartments and the Golgi Complex

    Apicomplexans are obligate intracellular parasites that invade the host cell in an active process that relies on unique secretory organelles (micronemes, rhoptries and dense granules) localized at the apical tip of these highly polarized eukaryotes. In order for the contents of these specialized organelles to reach their final destination, these proteins are sorted post-Golgi, and it has been speculated that they pass through endosomal-like compartments (ELCs), where they undergo maturation. Here, we characterize a Toxoplasma gondii homologue of Syntaxin 6 (TgStx6), a well-established marker for the early endosomes and the trans-Golgi network (TGN) in diverse eukaryotes. Indeed, TgStx6 appears to have a role in retrograde transport between the ELCs, the TGN and the Golgi, because overexpression of TgStx6 results in the development of abnormally shaped parasites with expanded ELCs, a fragmented Golgi and a defect in inner membrane complex maturation. Interestingly, other organelles such as the micronemes, rhoptries and the apicoplast are not affected, establishing the TGN as a major sorting compartment where several transport pathways intersect. It therefore appears that Toxoplasma has retained a plant-like secretory pathway.

    Limitations of Intra-operator Parallelism Using Heterogeneous Computing Resources

    The hardware landscape is changing from homogeneous multi-core systems towards wildly heterogeneous systems combining different computing units, like CPUs and GPUs. To utilize these heterogeneous environments, database query execution has to adapt to cope with different architectures and computing behaviors. In this paper, we investigate the simple idea of partitioning an operator’s input data and processing all data partitions in parallel, one partition per computing unit. For heterogeneous systems, data has to be partitioned according to the performance of the computing units. We define a way to calculate the partition sizes, analyze the parallel execution exemplarily for two database operators, and present limitations that could hinder significant performance improvements. The findings in this paper can help system developers to assess the possibilities and limitations of intra-operator parallelism in heterogeneous environments, leading to more informed decisions about whether this approach is beneficial for a given workload and hardware environment.
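
    As a rough illustration of performance-proportional partitioning, the Python sketch below splits an operator's input so that each computing unit's share is proportional to its measured throughput, the basic idea behind sizing partitions for heterogeneous units. The `partition_sizes` helper and the example throughput figures are hypothetical, not the paper's actual formula.

```python
def partition_sizes(total_rows, throughputs):
    """Split an operator's input so every computing unit finishes at
    roughly the same time: each partition's size is proportional to
    that unit's measured throughput (rows/sec)."""
    total_tp = sum(throughputs)
    sizes = [total_rows * tp // total_tp for tp in throughputs]
    sizes[0] += total_rows - sum(sizes)  # hand the rounding remainder to one unit
    return sizes

# Hypothetical units: a CPU at 50M rows/s and a GPU at 150M rows/s,
# so the GPU receives three quarters of the input.
print(partition_sizes(1_000_000, [50_000_000, 150_000_000]))  # [250000, 750000]
```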

    Focal adhesion disassembly requires clathrin-dependent endocytosis of integrins

    Cell migration requires the controlled disassembly of focal adhesions, but the underlying mechanisms remain poorly understood. Here, we show that adhesion turnover is mediated through dynamin- and clathrin-dependent endocytosis of activated β1 integrins. Consistent with this, clathrin and the clathrin adaptors AP-2 and disabled-2 (DAB2) distribute along with dynamin 2 to adhesion sites prior to adhesion disassembly. Moreover, knockdown of either dynamin 2 or both clathrin adaptors blocks β1 integrin internalization, leading to impaired focal adhesion disassembly and cell migration. Together, these results provide important insight into the mechanisms underlying adhesion disassembly and identify novel components of the disassembly pathway.

    An Application-Specific Instruction Set for Accelerating Set-Oriented Database Primitives

    The key task of database systems is to efficiently manage large amounts of data. A high query throughput and a low query latency are essential for the success of a database system. Lately, research has focused on exploiting hardware features like superscalar execution units, SIMD, or multiple cores to speed up processing. Apart from these software optimizations for given hardware, even tailor-made processing circuits running on FPGAs are built to run mostly stateless query plans with incredibly high throughput. A similar idea, already considered three decades ago, is to build tailor-made hardware like a database processor. Despite their superior performance, such application-specific processors were not considered to be beneficial because general-purpose processors eventually always caught up, so that the high development costs did not pay off. In this paper, we show that the development of a database processor is much more feasible nowadays through the availability of customizable processors. We illustrate exemplarily how to create an instruction set extension for set-oriented database primitives. The resulting application-specific processor not only provides high performance but also enables very energy-efficient processing. In various configurations, our processor requires more than 960x less energy than a high-end x86 processor while providing the same performance.
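
    As an example of the kind of set-oriented primitive such an instruction set extension could target, the Python sketch below shows a merge-style intersection of two sorted ID lists; the branchy compare-and-advance loop is the pattern a tailor-made instruction could execute on whole blocks of values at once. The function is an illustrative assumption, not one of the paper's actual primitives.

```python
def sorted_intersect(a, b):
    """Merge-style intersection of two sorted ID lists, a typical
    set-oriented database primitive. On a general-purpose CPU this is
    a branchy scalar loop; an application-specific instruction can
    perform the compare-and-advance step block-wise in hardware."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            out.append(a[i]); i += 1; j += 1
        elif a[i] < b[j]:
            i += 1
        else:
            j += 1
    return out

print(sorted_intersect([1, 3, 5, 7, 9], [3, 4, 5, 9, 11]))  # [3, 5, 9]
```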

    Hardware-conscious query processing for the many-core era

    Exploiting the opportunities given by modern hardware for accelerating query processing is no trivial task. Many DBMS and DSMS from past decades are based on fundamentals that have changed over time; for example, today's servers with terabytes of main memory capacity allow data spilling to disk to be avoided entirely, which prepared the ground for main-memory databases. One of the recent trends in hardware is many-core processors with hundreds of logical cores on a single CPU, providing a high degree of parallelism through multithreading as well as vectorized instructions (SIMD). Their demand for memory bandwidth has driven the development of high-bandwidth memory (HBM) to overcome the memory wall. However, many-core CPUs as well as HBM have many pitfalls that can nullify any performance gain with ease. In this work, we explore the many-core architecture along with HBM for database and data stream query processing. We demonstrate that a hardware-conscious cost model with a calibration approach allows reliable performance prediction of various query operations. Based on that information, we derive an adaptive partitioning and merging strategy for stream query parallelization as well as an ideal parameter configuration for one of the most common tasks in the history of DBMS: join processing. However, not all operations and applications can exploit a many-core processor or HBM. Stream queries optimized for low latency and quick individual responses usually benefit little from additional bandwidth and also suffer from penalties such as the low clock frequencies of many-core CPUs. Shared data structures between cores further lead to problems with cache coherence and high contention. Based on our insights, we give a rule of thumb for which data structures are suitable for parallelization with a focus on HBM usage. In addition, different parallelization schemes and synchronization techniques are evaluated using the example of a multiway stream join operation.
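
    To make the calibration idea concrete, here is a minimal Python sketch of a bandwidth-bound cost model: a calibration kernel measures the machine's effective memory bandwidth, and the runtime of a memory-bound operator is then predicted as the bytes it touches divided by that bandwidth. Both helpers and all the numbers are illustrative assumptions and far simpler than the thesis's actual model.

```python
import time
import numpy as np

def calibrate_bandwidth(n_bytes=200_000_000):
    """Calibration step: time a simple memory-bound kernel (an array
    copy) to measure effective bandwidth; a copy reads and writes
    every byte, hence the factor of 2."""
    src = np.empty(n_bytes, dtype=np.uint8)
    start = time.perf_counter()
    dst = src.copy()
    elapsed = time.perf_counter() - start
    return 2 * n_bytes / elapsed  # bytes moved per second

def predict_runtime(bytes_touched, bandwidth):
    """Bandwidth-bound cost model: a memory-bound operator's runtime
    is roughly the bytes it touches divided by calibrated bandwidth."""
    return bytes_touched / bandwidth

bw = calibrate_bandwidth()
print(f"calibrated bandwidth: {bw / 1e9:.1f} GB/s")
print(f"predicted scan of 8 GB: {predict_runtime(8e9, bw):.2f} s")
```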