The Adaptive Priority Queue with Elimination and Combining
Priority queues are fundamental abstract data structures, often used to
manage limited resources in parallel programming. Several proposed parallel
priority queue implementations are based on skiplists, harnessing the potential
for parallelism of the add() operations. In addition, methods such as Flat
Combining have been proposed to reduce contention by batching together multiple
operations to be executed by a single thread. While this technique can decrease
lock-switching overhead and the number of pointer changes required by the
removeMin() operations in the priority queue, it can also create a sequential
bottleneck and limit parallelism, especially for non-conflicting add()
operations.
In this paper, we describe a novel priority queue design, harnessing the
scalability of parallel insertions in conjunction with the efficiency of
batched removals. Moreover, we present a new elimination algorithm suitable for
a priority queue, which further increases concurrency on balanced workloads
with similar numbers of add() and removeMin() operations. We implement and
evaluate our design using a variety of techniques including locking, atomic
operations, hardware transactional memory, as well as employing adaptive
heuristics given the workload.
Comment: Accepted at DISC'14 - this is the full version with appendices, including more algorithms.
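To make the elimination idea concrete, here is a minimal Java sketch, not the
paper's skiplist-based algorithm: if a removeMin() is waiting and an incoming
add() carries a value no larger than the current minimum, the value is handed
over directly and neither operation touches the shared structure. The class
name, the single elimination slot, and the single-remover assumption are all
simplifications for illustration.

    import java.util.PriorityQueue;
    import java.util.concurrent.atomic.AtomicReference;

    // Illustrative elimination between add() and removeMin(): a remover that
    // finds the heap empty parks a marker in a slot; an add() whose value
    // would become the new minimum hands it straight to the waiting remover.
    // Assumes a single remover at a time, for brevity.
    class EliminationPQ {
        private static final Integer WAITING = Integer.MIN_VALUE; // remover's marker
        private final PriorityQueue<Integer> heap = new PriorityQueue<>();
        private final AtomicReference<Integer> slot = new AtomicReference<>();

        public void add(int v) {
            synchronized (heap) {
                Integer min = heap.peek();
                // Elimination: a remover is waiting and v would be the new minimum.
                if ((min == null || v <= min) && slot.compareAndSet(WAITING, v)) {
                    return; // handed off directly; the heap is never touched
                }
                heap.add(v);
            }
        }

        public int removeMin() {
            slot.set(WAITING);                        // advertise the waiting remover
            while (true) {
                Integer v = slot.get();
                if (v != null && v != WAITING && slot.compareAndSet(v, null)) {
                    return v;                         // received an eliminated add()
                }
                synchronized (heap) {
                    // Withdraw the marker before taking from the heap, so no
                    // add() can hand a value to a remover that already left.
                    if (!heap.isEmpty() && slot.compareAndSet(WAITING, null)) {
                        return heap.poll();
                    }
                }
                Thread.onSpinWait();
            }
        }
    }

The paper's actual design combines parallel skiplist insertions with batched
removals and adaptive switching; the sketch only shows why balanced
add()/removeMin() workloads can benefit from elimination.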
The Evolution of Dust Disk Sizes from a Homogeneous Analysis of 1-10 Myr old Stars
We utilize ALMA archival data to estimate the dust disk sizes of 152
protoplanetary disks in Lupus (1-3 Myr), Chamaeleon I (2-3 Myr), and Upper-Sco
(5-11 Myr). We combine our sample with 47 disks from Tau/Aur and Oph whose dust
disk radii were estimated, as here, through fitting radial profile models to
visibility data. We use these 199 homogeneously derived disk sizes to identify
empirical disk-disk and disk-host property relations as well as to search for
evolutionary trends. In agreement with previous studies, we find that dust disk
sizes and millimeter luminosities are correlated, but show for the first time
that the relationship is not universal between regions. We find that disks in
the 2-3 Myr-old Cha I are not smaller than disks in other regions of similar
age, and confirm the Barenfeld et al. (2017) finding that the 5-10 Myr USco
disks are smaller than disks belonging to younger regions. Finally, we find
that the outer edge of the Solar System, as defined by the Kuiper Belt, is
consistent with a population of dust disk sizes which have not experienced
significant truncation.
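The size-luminosity correlation reported here is, in form, a power law fit in
log-log space. As a rough illustration only, with made-up placeholder numbers
rather than the paper's measurements, a least-squares estimate of the slope
looks like this:

    // Made-up numbers, not the paper's data: ordinary least squares in
    // log-log space for a power law R_dust ~ C * L_mm^alpha.
    public class SizeLuminosityFit {
        public static void main(String[] args) {
            double[] lmm   = {0.5, 1.2, 3.4, 7.9, 20.0};     // hypothetical mm luminosities
            double[] rdust = {18.0, 25.0, 40.0, 55.0, 90.0}; // hypothetical dust radii [au]
            int n = lmm.length;
            double sx = 0, sy = 0, sxx = 0, sxy = 0;
            for (int i = 0; i < n; i++) {
                double x = Math.log10(lmm[i]), y = Math.log10(rdust[i]);
                sx += x; sy += y; sxx += x * x; sxy += x * y;
            }
            double alpha = (n * sxy - sx * sy) / (n * sxx - sx * sx); // slope
            double logC  = (sy - alpha * sx) / n;                     // intercept
            System.out.printf("R_dust ~ %.1f * L_mm^%.2f%n", Math.pow(10, logC), alpha);
        }
    }

A non-universal relation, as found here, would show up as region-dependent
values of alpha and C when this fit is run per star-forming region.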
A framework for deriving semantic web services
Web service-based development represents an emerging approach for the development of distributed information systems. Web services have been mainly applied by software practitioners as a means to modularize system functionality that can be offered across a network (e.g., intranet and/or the Internet). Although web services have been
predominantly developed as a technical solution for integrating software systems, there is a more business-oriented aspect that developers and enterprises need to deal with in order to benefit from the full potential of web services in an electronic market. This "ignored" aspect is the representation of the semantics underlying the services themselves as well as the "things" that the services manage. Currently, languages like the Web Services Description Language (WSDL) provide the syntactic means to describe web services, but
lack in providing a semantic underpinning. In order to harvest all the benefits of web services technology, a framework has been developed for deriving business semantics from syntactic descriptions of web services. The benefits of such a framework are two-fold. Firstly, the framework provides a way to gradually construct domain ontologies from previously defined technical services. Secondly, the framework enables the
migration of syntactically defined web services toward semantic web services. The study follows a design research approach which (1) identifies the problem area and its relevance from an industrial case study and previous research, (2) develops the
framework as a design artifact, and (3) evaluates the application of the framework through a relevant scenario.
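As a toy illustration of the raw material such a framework starts from (this
is not the framework itself), one can harvest candidate concept names from a
service's syntactic WSDL description; the file name "service.wsdl" and the
choice of elements below are assumptions:

    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;
    import java.io.File;

    // Toy sketch: collect the names of a WSDL's operations and messages as
    // candidate domain concepts. A real derivation would also mine types,
    // message parts, and naming conventions.
    public class WsdlConceptHarvester {
        public static void main(String[] args) throws Exception {
            DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
            f.setNamespaceAware(true);                 // WSDL is namespace-qualified
            Document doc = f.newDocumentBuilder().parse(new File("service.wsdl"));
            for (String tag : new String[] {"operation", "message"}) {
                NodeList nodes = doc.getElementsByTagNameNS("*", tag);
                for (int i = 0; i < nodes.getLength(); i++) {
                    String name = ((Element) nodes.item(i)).getAttribute("name");
                    // e.g. an operation "GetPurchaseOrder" suggests the
                    // domain concept "PurchaseOrder"
                    if (!name.isEmpty()) System.out.println(tag + ": " + name);
                }
            }
        }
    }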
Amygdala-related electrical fingerprint is modulated with neurofeedback training and correlates with deep-brain activation: proof-of-concept in borderline personality disorder
Background: The modulation of brain circuits of emotion is a promising pathway to treat borderline personality disorder (BPD). Precise and scalable approaches have yet to be established. Two studies investigating the amygdala-related electrical fingerprint (Amyg-EFP) in BPD are presented: one study addressing the deep-brain correlates of Amyg-EFP, and a second study investigating neurofeedback (NF) as a means to improve brain self-regulation.
Methods: Study 1 combined electroencephalography (EEG) and simultaneous functional magnetic resonance imaging to investigate the replicability of Amyg-EFP-related brain activation found in the reference dataset (N = 24 healthy subjects, 8 female; re-analysis of published data) in the replication dataset (N = 16 female individuals with BPD). In the replication dataset, we additionally explored how the Amyg-EFP would map to neural circuits defined by the research domain criteria. Study 2 investigated a 10-session Amyg-EFP NF training in parallel to a 12-week residential dialectical behavior therapy (DBT) program. Fifteen patients with BPD completed the training; N = 15 matched patients served as DBT-only controls.
Results: Study 1 replicated previous findings and showed significant amygdala blood oxygenation level dependent activation in a whole-brain regression analysis with the Amyg-EFP. Neurocircuitry activation (negative affect, salience, and cognitive control) was correlated with the Amyg-EFP signal. Study 2 showed Amyg-EFP modulation with NF training, but patients received reversed feedback for technical reasons, which limited interpretation of results.
Conclusions: Recorded via scalp EEG, the Amyg-EFP picks up brain activation of high relevance for emotion. Administering Amyg-EFP NF in addition to standardized BPD treatment was shown to be feasible. Clinical utility remains to be investigated.
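The core of the Study 1 analysis is a regression of the EEG-derived Amyg-EFP
time course against voxel-wise BOLD time courses. A hedged sketch of that idea
follows, with hypothetical arrays; real analyses convolve the regressor with a
hemodynamic response function and correct for multiple comparisons:

    // Illustrative only: per-voxel Pearson correlation between a
    // scalp-derived regressor and hypothetical BOLD time courses.
    public class EfpVoxelCorrelation {
        static double pearson(double[] x, double[] y) {
            int n = x.length;
            double mx = 0, my = 0;
            for (int i = 0; i < n; i++) { mx += x[i]; my += y[i]; }
            mx /= n; my /= n;
            double sxy = 0, sxx = 0, syy = 0;
            for (int i = 0; i < n; i++) {
                sxy += (x[i] - mx) * (y[i] - my);
                sxx += (x[i] - mx) * (x[i] - mx);
                syy += (y[i] - my) * (y[i] - my);
            }
            return sxy / Math.sqrt(sxx * syy);
        }

        public static void main(String[] args) {
            double[] efp = {0.1, 0.4, 0.3, 0.9, 0.7};       // hypothetical Amyg-EFP course
            double[][] voxels = {{0.2, 0.5, 0.2, 1.0, 0.8}, // hypothetical BOLD courses
                                 {0.9, 0.1, 0.6, 0.2, 0.3}};
            for (int v = 0; v < voxels.length; v++)
                System.out.printf("voxel %d: r = %.2f%n", v, pearson(efp, voxels[v]));
        }
    }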
Mathematical practice, crowdsourcing, and social machines
The highest level of mathematics has traditionally been seen as a solitary
endeavour, to produce a proof for review and acceptance by research peers.
Mathematics is now at a remarkable inflexion point, with new technology
radically extending the power and limits of individuals. Crowdsourcing pulls
together diverse experts to solve problems; symbolic computation tackles huge
routine calculations; and computers check proofs too long and complicated for
humans to comprehend.
Mathematical practice is an emerging interdisciplinary field which draws on
philosophy and social science to understand how mathematics is produced. Online
mathematical activity provides a novel and rich source of data for empirical
investigation of mathematical practice - for example, the community question
answering system mathoverflow contains around 40,000 mathematical
conversations, and polymath collaborations provide transcripts of the
process of discovering proofs. Our preliminary investigations have demonstrated
the importance of "soft" aspects such as analogy and creativity, alongside
deduction and proof, in the production of mathematics, and have given us new
ways to think about the roles of people and machines in creating new
mathematical knowledge. We discuss further investigation of these resources and
what it might reveal.
Crowdsourced mathematical activity is an example of a "social machine", a new
paradigm, identified by Berners-Lee, for viewing a combination of people and
computers as a single problem-solving entity, and the subject of major
international research endeavours. We outline a future research agenda for
mathematics social machines, a combination of people, computers, and
mathematical archives to create and apply mathematics, with the potential to
change the way people do mathematics, and to transform the reach, pace, and
impact of mathematics research.
Comment: To appear, Springer LNCS, Proceedings of Conferences on Intelligent Computer Mathematics, CICM 2013, July 2013, Bath, UK.
Laws of order
Building correct and efficient concurrent algorithms is known to be a difficult problem of fundamental importance. To achieve efficiency, designers try to remove unnecessary and costly synchronization. However, not only is this manual trial-and-error process ad hoc, time-consuming and error-prone, but it often leaves designers pondering the question: is it inherently impossible to eliminate certain synchronization, or was I simply unable to eliminate it on this attempt and should I keep trying?
In this paper we respond to this question. We prove that it is impossible to build concurrent implementations of classic and ubiquitous specifications such as sets, queues, stacks, mutual exclusion and read-modify-write operations that completely eliminate the use of expensive synchronization. We prove that one cannot avoid the use of either: i) read-after-write (RAW), where a write to shared variable A is followed by a read of a different shared variable B without a write to B in between, or ii) atomic write-after-read (AWAR), where an atomic operation reads and then writes to shared locations.
Unfortunately, enforcing RAW or AWAR is expensive on all current mainstream processors. To enforce RAW, memory-ordering instructions, also called fences or barriers, must be used. To enforce AWAR, atomic instructions such as compare-and-swap are required. However, these instructions are typically substantially slower than regular instructions. Although algorithm designers frequently struggle to avoid RAW and AWAR, their attempts are often futile. Our result characterizes the cases where avoiding RAW and AWAR is impossible. On the flip side, our result can be used to guide designers towards new algorithms where RAW and AWAR can be eliminated.
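The two patterns are easy to see in code. Below is a small Java illustration,
not taken from the paper: a Dekker-style try-lock that needs RAW ordering
(supplied here by volatile), and the same exclusion achieved with a single
AWAR compare-and-set; release and retry logic are omitted.

    import java.util.concurrent.atomic.AtomicBoolean;

    // Not from the paper: a minimal demonstration of the RAW and AWAR patterns.
    class LawsOfOrderDemo {
        // RAW: each thread writes its own flag, then reads the other's.
        // volatile supplies the store-load fence; without it, the read could
        // be reordered before the write and both threads could "enter".
        static volatile boolean flag0 = false, flag1 = false;

        static boolean tryEnterRaw(int self) {
            if (self == 0) { flag0 = true; return !flag1; } // write A, then read B
            else           { flag1 = true; return !flag0; }
        }

        // AWAR: the same exclusion from one atomic read-then-write.
        static final AtomicBoolean taken = new AtomicBoolean(false);

        static boolean tryEnterAwar() {
            return taken.compareAndSet(false, true);        // atomic read+write
        }
        // A failed tryEnterRaw leaves the flag set; a full protocol would
        // withdraw it before retrying. Both variants preserve mutual exclusion.
    }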
Homogeneous Analysis of the Dust Morphology of Transition Disks Observed with ALMA: Investigating Dust Trapping and the Origin of the Cavities
We analyze the dust morphology of 29 transition disks (TDs) observed with
ALMA at (sub-)millimeter wavelengths. We perform the analysis in the visibility
plane to characterize the total flux, cavity size, and shape of the ring-like
structure. First, we find that the M_dust-M_star relation is much
flatter for TDs than the observed trends from samples of class II sources in
different star forming regions. This relation demonstrates that cavities open
in high (dust) mass disks, independent of the stellar mass. The flatness of
this relation contradicts the idea that TDs are a more evolved set of disks.
Two potential reasons (not mutually exclusive) may explain this flat relation:
the emission is optically thick or/and millimeter-sized particles are trapped
in a pressure bump. Second, we discuss our results of the cavity size and ring
width in the context of different physical processes for cavity formation.
Photoevaporation is an unlikely leading mechanism for the origin of the cavity
of any of the targets in the sample. Embedded giant planets or dead zones
remain as potential explanations. Although both models predict correlations
between the cavity size and the ring shape for different stellar and disk
properties, we demonstrate that with the current resolution of the
observations, it is difficult to obtain these correlations. Future
higher-angular-resolution ALMA observations of TDs will help to
discern between the different potential origins of cavities in TDs.
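To see why the visibility plane constrains cavity sizes, consider the simplest
model: a thin ring of radius r and total flux F has visibility amplitude
V(q) = F * J0(2*pi*q*r) at baseline q, so the location of the first null pins
down the ring radius directly. The following sketch uses illustrative numbers
only and evaluates J0 by direct numerical integration:

    // Thin-ring visibility: V(q) = F * J0(2*pi*q*r), the simplest case of
    // the Hankel transform used when fitting axisymmetric disk models.
    public class RingVisibility {
        // J0(x) = (1/pi) * integral over [0, pi] of cos(x * sin(theta)),
        // evaluated with the trapezoidal rule.
        static double besselJ0(double x) {
            int n = 1000;
            double h = Math.PI / n;
            double sum = 0.5 * (Math.cos(0.0) + Math.cos(x * Math.sin(Math.PI)));
            for (int k = 1; k < n; k++) sum += Math.cos(x * Math.sin(k * h));
            return sum * h / Math.PI;
        }

        public static void main(String[] args) {
            double fluxJy = 0.10;                  // hypothetical total ring flux [Jy]
            double rRad = 0.30 / 206265.0;         // 0.30 arcsec ring radius in radians
            for (double qKLam = 100; qKLam <= 1600; qKLam *= 2) {
                double q = qKLam * 1.0e3;          // baseline in units of wavelength
                double vis = fluxJy * besselJ0(2.0 * Math.PI * q * rRad);
                System.out.printf("q = %4.0f klambda: V = %+.4f Jy%n", qKLam, vis);
            }
        }
    }

Real analyses also fit the ring width and deproject the visibilities for
inclination; the sign change in V(q) with baseline is what localizes the ring
in this toy case.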