2,418 research outputs found
Virtual communities as narrative processes
When the history of a virtual community is described as the sequence of events generated by its participants, a different perception of the meaning of communityware emerges. This paper proposes a virtual community system based on the narrative process that supports the social evolution of the community
Model-based design for self-sustainable sensor nodes
Long-term and maintenance-free operation is a critical feature for battery-operated sensor nodes deployed at large scale. Energy harvesting (EH) is the most promising technology to overcome the energy bottleneck of today’s sensors and to enable the vision of perpetual operation. However, relying on fluctuating environmental energy requires an application-specific analysis of the energy statistics combined with an in-depth characterization of circuits and algorithms, making design and verification complex. This article presents a model-based design (MBD) approach for EH-enabled devices that accounts for the dynamic behavior of components in the power generation, conversion, storage, and discharge paths. The extension of existing compact models, combined with data-driven statistical modeling of harvesting circuits, allows accurate offline analysis, verification, and validation. The presented approach facilitates application-specific optimization during the development phase and reliable long-term evaluation combined with environmental datasets. Experimental results demonstrate the accuracy and flexibility of this approach: the model verification of a solar-powered wireless sensor node shows a determination coefficient (R²) of 0.992, resulting in an energy error of only -1.57 % between measurement and simulation. Compared to state-of-practice methods, the MBD approach attains a reduction of the estimated state-of-charge error of up to 10.2 % in a real-world scenario. MBD offers non-trivial insights into critical design choices: the analysis of the storage element selection reveals a self-discharge-per-capacity ratio that is 2–3 times too high for supercapacitors and a peak-current constraint for lithium-ion polymer batteries
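To make the modeling idea concrete, the sketch below shows a minimal discrete-time state-of-charge simulation for an EH-powered node. It is an illustrative approximation only; the function name, parameters, and efficiency values are hypothetical placeholders and are not taken from the article's MBD framework.

```python
# Illustrative sketch only: a coarse, discrete-time energy model for an
# EH-powered sensor node. All names and parameter values are hypothetical.

def simulate_state_of_charge(harvest_mw, load_mw, dt_s=60.0,
                             capacity_mj=50.0, soc_mj=25.0,
                             conv_eff=0.85, self_discharge_per_s=1e-6):
    """Return the state-of-charge trace (in millijoules) over time.

    harvest_mw, load_mw  : per-step harvested and consumed power in mW
    dt_s                 : simulation step in seconds
    capacity_mj          : usable storage capacity in mJ
    conv_eff             : conversion efficiency of the harvesting path
    self_discharge_per_s : fraction of stored energy lost per second
    """
    trace = []
    for p_in, p_out in zip(harvest_mw, load_mw):
        # Energy gained through the conversion path minus the load draw
        # (mW * s = mJ, so no unit conversion is needed)
        soc_mj += (conv_eff * p_in - p_out) * dt_s
        # Storage self-discharge: the kind of effect the abstract flags as
        # critical when choosing supercapacitors vs. Li-Po batteries
        soc_mj -= soc_mj * self_discharge_per_s * dt_s
        # Clamp to the physical limits of the storage element
        soc_mj = max(0.0, min(capacity_mj, soc_mj))
        trace.append(soc_mj)
    return trace

# Example: constant indoor-light harvesting vs. a periodic sensing load
harvest = [0.5] * 1440                              # 0.5 mW, 24 h of 1-min steps
load = [2.0 if i % 10 == 0 else 0.1 for i in range(1440)]
soc = simulate_state_of_charge(harvest, load)
```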
More than Relata Refero: Representing the Various Roles of Reported Speech in Argumentative Discourse
Reported speech, or relata refero, although not always part of the argumentation tout court, can be an important element of argumentative discourse. It might, for instance, provide information on the position of another party in the discussion or function as part of the premise of an argument from authority. Whereas existing methods of representing argumentative discourse focus on arguments and their interrelations, this paper develops a method that enables the analyst to also include informative elements in the representation, focusing on reported speech. It does so by incorporating the notion of ‘voice’ into the representation framework of Adpositional Argumentation (AdArg). In particular, the paper explains how to formalize the constituents of this notion and illustrates its use in representing (1) an author’s report of the position of another party (including the supporting argumentation); (2) an author’s own position (including the supporting argumentation); and (3) source-based arguments such as the argument from authority, with an indication of the distance of the source from the author
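As a rough illustration of how a 'voice' attribute could be carried alongside argumentative content, the following sketch tags each statement with its voice and links supporting statements. The class and field names are hypothetical and do not reproduce the AdArg or adpositional-tree formalism.

```python
# Schematic sketch only: attaching a 'voice' attribute to statements in a
# representation of argumentative discourse. Names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Statement:
    text: str
    voice: str                                    # e.g. "author" or "reported: <source>"
    supports: list = field(default_factory=list)  # supporting statements

# (1) the author reports another party's position and its support
reported = Statement(
    text="We should lower the speed limit.",
    voice="reported: city council",
    supports=[Statement("It reduces accidents.", "reported: city council")],
)

# (3) a source-based argument: the author argues from the council's authority
authority_arg = Statement(
    text="The measure is advisable.",
    voice="author",
    supports=[reported],
)
```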
Annotation with adpositional argumentation: Guidelines for building a Gold Standard Corpus of argumentative discourse
This paper explains Adpositional Argumentation (AdArg), a new method for annotating arguments expressed in natural language. In describing this method, it provides guidelines for designing a Gold Standard Corpus (GSC) of argumentative discourse in terms of so-called argumentative adpositional trees (arg-adtrees). The theoretical starting points of AdArg draw on the combination of the linguistic representation framework of Constructive Adpositional Grammars (CxAdGrams) with the argument categorisation framework of the Periodic Table of Arguments (PTA). After an explanation of these two frameworks, it is shown how AdArg can be used for annotating arguments expressed in natural language. This is done by providing the arg-adtrees of four concrete examples of arguments, which instantiate the four basic argument forms distinguished in the PTA. The present exposition of the fundamental tenets of AdArg enables the building of a GSC of argumentative discourse, that is, an annotated corpus of texts and discussions of indisputably high quality according to argumentation theory experts. Annotating such a GSC in terms of arg-adtrees is a time-consuming process, as it requires highly skilled annotators and human supervision. Its role is nevertheless crucial for developing instruments for computer-assisted argumentation analysis and eventual applications based on machine-learning natural language processing algorithms
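For a sense of what a single GSC entry might look like in machine-readable form, here is a schematic annotation record. All field names and the example label are placeholders under the assumption of a simple JSON-like layout; they do not reproduce the arg-adtree notation or the PTA's exact terminology.

```python
# Illustrative sketch only: a minimal record for one annotated argument in a
# hypothetical GSC. Field names and the form label are placeholders.
annotation = {
    "id": "gsc-0001",
    "text": "Climate change is man-made, because CO2 levels track emissions.",
    "conclusion": "Climate change is man-made",
    "premise": "CO2 levels track emissions",
    "pta_form": "first-order predicate argument",   # hypothetical label
    "annotator": "expert-01",
    "adtree": {
        "governor": "because",
        "dependents": ["conclusion", "premise"],
    },
}
```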
Hardware optimizations of dense binary hyperdimensional computing: Rematerialization of hypervectors, binarized bundling, and combinational associative memory
Brain-inspired hyperdimensional (HD) computing models neural activity patterns of the very size of the brain's circuits with points of a hyperdimensional space, that is, with hypervectors. Hypervectors are D-dimensional (pseudo)random vectors with independent and identically distributed (i.i.d.) components constituting ultra-wide holographic words: D = 10,000 bits, for instance. At its very core, HD computing manipulates a set of seed hypervectors to build composite hypervectors representing objects of interest. It demands memory optimizations with simple operations for an efficient hardware realization. In this article, we propose hardware techniques for optimizations of HD computing, in a synthesizable open-source VHDL library, to enable co-located implementation of both learning and classification tasks on only a small portion of Xilinx UltraScale FPGAs: (1) We propose simple logical operations to rematerialize the hypervectors on the fly rather than loading them from memory. These operations massively reduce the memory footprint by directly computing the composite hypervectors whose individual seed hypervectors do not need to be stored in memory. (2) Bundling a series of hypervectors over time requires a multibit counter per hypervector component. We instead propose a binarized back-to-back bundling that requires no counters. This truly enables on-chip learning with minimal resources, as every hypervector component remains binary over the course of training, avoiding otherwise multibit components. (3) For every classification event, an associative memory is in charge of finding the closest match between a set of learned hypervectors and a query hypervector using a distance metric. The latency of this operation is proportional to the hypervector dimension (D) and hence may take O(D) cycles per classification event. Accordingly, we significantly improve the throughput of classification by proposing associative memories that steadily reduce the latency of classification to the extreme of a single cycle. (4) We perform a design space exploration incorporating the proposed techniques on FPGAs for a wearable biosignal processing application as a case study. Our techniques achieve up to 2.39× area saving, or 2,337× throughput improvement. The Pareto-optimal HD architecture is mapped on only 18,340 configurable logic blocks (CLBs) to learn and classify five hand gestures using four electromyography sensors
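The hardware techniques above target the basic binary-HD operations, which the following NumPy sketch models in software for orientation: random seed hypervectors, binding by XOR, bundling by component-wise majority (the step that needs per-component counters in hardware and that the binarized bundling avoids), and an associative-memory lookup by Hamming distance. This is an illustrative software model, not the synthesizable VHDL library described in the article.

```python
# Software sketch only: core binary-HD operations, modeled with NumPy.
import numpy as np

D = 10_000                                   # hypervector dimensionality
rng = np.random.default_rng(0)

def seed_hv():
    """A pseudo-random binary seed hypervector with i.i.d. components."""
    return rng.integers(0, 2, D, dtype=np.uint8)

def bind(a, b):
    """Binding of two hypervectors (component-wise XOR)."""
    return a ^ b

def bundle(hvs):
    """Bundling by component-wise majority vote over a list of hypervectors."""
    return (np.sum(hvs, axis=0) > len(hvs) / 2).astype(np.uint8)

def classify(query, class_hvs):
    """Associative memory: nearest class hypervector by Hamming distance."""
    dists = [np.count_nonzero(query ^ c) for c in class_hvs]
    return int(np.argmin(dists))

# Toy example: two classes, each learned by bundling a few bound samples
items = [seed_hv(), seed_hv()]
class_hvs = [bundle([bind(items[0], seed_hv()) for _ in range(5)]),
             bundle([bind(items[1], seed_hv()) for _ in range(5)])]

# Query: a noisy copy of class 0 (roughly 20% of the components flipped)
query = class_hvs[0].copy()
flip = rng.random(D) < 0.2
query[flip] ^= 1
print(classify(query, class_hvs))            # expected: 0
```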
NTX: An Energy-efficient Streaming Accelerator for Floating-point Generalized Reduction Workloads in 22 nm FD-SOI
Specialized coprocessors for Multiply-Accumulate (MAC) intensive workloads such as Deep Learning are becoming widespread in SoC platforms, from GPUs to mobile SoCs. In this paper we revisit NTX (an efficient accelerator developed for training Deep Neural Networks at scale) as a generalized MAC and reduction streaming engine. The architecture consists of a set of 32 bit floating-point streaming co-processors that are loosely coupled to a RISC-V core in charge of orchestrating data movement and computation. Post-layout results of a recent silicon implementation in 22 nm FD-SOI technology show the accelerator's capability to deliver up to 20 Gflop/s at 1.25 GHz and 168 mW. Based on these results we show that a version of NTX scaled down to 14 nm can achieve a 3× energy efficiency improvement over contemporary GPUs at 10.4× less silicon area, and a compute performance of 1.4 Tflop/s for training large state-of-the-art networks with full floating-point precision. An extended evaluation of MAC-intensive kernels shows that NTX can consistently achieve up to 87% of its peak performance across general reduction workloads beyond machine learning. Its modular architecture enables deployment at different scales, ranging from high-performance GPU-class to low-power embedded scenarios
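To indicate the kind of workload such a streaming engine targets, the sketch below expresses two different computations as the same multiply-accumulate reduction over operand streams. It is a plain NumPy illustration under assumed names and shapes, not the accelerator's actual programming interface.

```python
# Illustrative sketch only: a generalized MAC/reduction pattern in NumPy.
import numpy as np

def streaming_mac_reduce(a_stream, b_stream, init=0.0):
    """Fused multiply-accumulate over two operand streams: acc += a * b.
    In a streaming engine, each iteration corresponds to one MAC issued
    while address generators feed the operands without core intervention."""
    acc = init
    for a, b in zip(a_stream, b_stream):
        acc += a * b
    return acc

# Two workloads expressed as the same reduction pattern:
# 1) a dot product (the inner loop of a dense layer during DNN training)
x = np.random.rand(1024).astype(np.float32)
w = np.random.rand(1024).astype(np.float32)
dot = streaming_mac_reduce(x, w)

# 2) a general reduction beyond machine learning, e.g. a weighted mean of
#    sensor samples (the weights act as the second operand stream)
samples = np.linspace(0.0, 1.0, 256, dtype=np.float32)
weights = np.full(256, 1.0 / 256, dtype=np.float32)
mean = streaming_mac_reduce(samples, weights)
```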
- …