412 research outputs found
Object Segmentation in Images using EEG Signals
This paper explores the potential of brain-computer interfaces in segmenting
objects from images. Our approach is centered around designing an effective
method for displaying the image parts to the users such that they generate
measurable brain reactions. When an image region, specifically a block of
pixels, is displayed, we estimate the probability that the block contains the
object of interest using a score based on EEG activity. After several such
blocks are displayed, the resulting probability map is binarized and combined
with the GrabCut algorithm to segment the image into object and background
regions. This study shows that BCI and simple EEG analysis are useful in
locating object boundaries in images.

Comment: This is a preprint version, prior to submission for peer review, of the paper accepted to the 22nd ACM International Conference on Multimedia (November 3-7, 2014, Orlando, Florida, USA) for the High Risk High Reward session. 10 pages
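The pipeline described above (EEG-derived block probabilities, binarized and refined with GrabCut) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the function names, the threshold, and the per-pixel probability map are our assumptions, and the refinement step assumes OpenCV's GrabCut.

```python
import numpy as np

# GrabCut label codes (these match cv2.GC_PR_BGD and cv2.GC_PR_FGD).
GC_PR_BGD, GC_PR_FGD = 2, 3

def binarize_prob_map(prob_map, threshold=0.5):
    """Binarize an EEG-derived probability map into a GrabCut seed mask.

    Blocks whose EEG score suggests the object of interest become
    'probably foreground'; the rest become 'probably background'.
    """
    return np.where(np.asarray(prob_map) >= threshold,
                    GC_PR_FGD, GC_PR_BGD).astype(np.uint8)

def refine_with_grabcut(image, mask, iters=5):
    """Refine the seed mask with OpenCV's GrabCut (needs opencv-python)."""
    import cv2
    bgd = np.zeros((1, 65), np.float64)  # GrabCut's internal GMM buffers
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(image, mask, None, bgd, fgd, iters, cv2.GC_INIT_WITH_MASK)
    # Pixels labeled (probably) foreground form the object segment.
    return np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD))
```

In practice the probability map would be assembled block by block from the EEG scores before the binarization step.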
A process calculus with finitary comprehended terms
We introduce the notion of an ACP process algebra and the notion of a meadow
enriched ACP process algebra. The former notion originates from the models of
the axiom system ACP. The latter notion is a simple generalization of the
former notion to processes in which data are involved, the mathematical
structure of data being a meadow. Moreover, for all associative operators from
the signature of meadow enriched ACP process algebras that are not of an
auxiliary nature, we introduce variable-binding operators as generalizations.
These variable-binding operators, which give rise to comprehended terms, have
the property that they can always be eliminated. Thus, we obtain a process
calculus whose terms can be interpreted in all meadow enriched ACP process
algebras. Use of the variable-binding operators can have a major impact on the
size of terms.

Comment: 25 pages, combined with arXiv:0901.3012 [math.RA]; presentation improved, mistakes in Table 5 corrected
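The eliminability of the variable-binding operators over a finite data domain can be illustrated with a small sketch. The syntax and names below are ours, not the paper's ACP notation; the point is only that a binder over a finite domain expands into a plain finitary term built with the binary operator, which is also why elimination can blow up term size.

```python
from functools import reduce

def eliminate_binder(op, domain, body):
    """Expand a variable-binding operator into a plain finitary term.

    A comprehended term such as  +_{d in D} P(d)  binds a data variable d
    over a finite domain D, so it can always be eliminated by substituting
    each data value and combining the instances with the binary operator.
    """
    instances = [body(d) for d in domain]   # substitute each data value
    return reduce(lambda s, t: f"({s} {op} {t})", instances)

# Alternative composition over the finite domain {0, 1, 2}:
term = eliminate_binder("+", [0, 1, 2], lambda d: f"send({d})")
# term == "((send(0) + send(1)) + send(2))"
```

The expanded term grows linearly with the size of the domain, illustrating the remark that use of the binders can have a major impact on term size.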
Business Mereology: Imaginative Definitions of Insourcing and Outsourcing Transformations
Outsourcing, the passing on of tasks by organizations to other organizations,
often including the personnel and means to perform these tasks, has become an
important IT-business strategy over the past decades.
We investigate imaginative definitions of outsourcing relations and
outsourcing transformations. Abstract models of extreme and unrealistic
simplicity are considered in order to investigate possible definitions of
outsourcing. Rather than covering all relevant practical cases, an imaginative
definition of a concept provides obvious cases of its instantiation, from which
more refined or more liberal definitions may be derived.
A definition of outsourcing induces a complementary definition of
insourcing. Outsourcing and insourcing have more complex variations in which
multiple parties are involved. All of these terms refer both to state
transformations and to state descriptions pertaining to the state obtained
after such transformations. We attempt to disambiguate the terminology in that
respect, and we attempt to characterize the general concept of sourcing, which
captures some representative cases.
Because mereology is the most general theory of parthood relations, we coin
the term business mereology for the general theory in business studies that
concerns the full variety of sourcing relations and transformations.
Actors, actions, and initiative in normative system specification
The logic of norms, called deontic logic, has been used to specify normative constraints for information systems. For example, one can specify in deontic logic the constraints that a book borrowed from a library should be returned within three weeks, and that if it is not returned, the library should send a reminder. Thus, the notion of obligation to perform an action arises naturally in system specification. Intuitively, deontic logic presupposes the concept of an actor who undertakes actions and is responsible for fulfilling obligations. However, the concept of an actor has not been formalized until now in deontic logic. We present a formalization in dynamic logic, which allows us to express the actor who initiates actions or choices. This is then combined with a formalization, presented earlier, of deontic logic in dynamic logic, which allows us to specify obligations, permissions, and prohibitions to perform an action. The addition of actors allows us to express who has the responsibility to perform an action. In addition to the application of the concept of an actor in deontic logic, we discuss two other applications of actors. First, we show how to generalize an approach taken up by De Nicola and Hennessy, who eliminate the silent action τ from CCS in favor of internal and external choice. We show that our generalization allows a more accurate specification of system behavior than is possible without it. Second, we show that actors can be used to resolve a long-standing paradox of deontic logic, called the paradox of free-choice permission. Towards the end of the paper, we discuss whether the concept of an actor can be combined with that of an object to formalize the concept of active objects.
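The reduction of deontic operators to dynamic logic referred to in this abstract is, in the earlier formalization it builds on (due to Meyer), roughly the following; this is the standard presentation reproduced from general knowledge, not from the paper itself, and sign/notation conventions vary:

```latex
F(\alpha) \;\equiv\; [\alpha]V
  % doing \alpha inevitably leads to the violation state V
\qquad
P(\alpha) \;\equiv\; \neg F(\alpha) \;\equiv\; \langle\alpha\rangle\neg V
\qquad
O(\alpha) \;\equiv\; [\overline{\alpha}]V
  % doing anything other than \alpha leads to V
```

Here $[\alpha]\varphi$ is the dynamic-logic modality "after every execution of $\alpha$, $\varphi$ holds", $V$ is a designated violation proposition, and $\overline{\alpha}$ is the "not-$\alpha$" action; adding actors then lets one say *whose* action leads to violation.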
The age of data-driven proteomics: how machine learning enables novel workflows
A lot of energy in the field of proteomics is dedicated to the application of challenging experimental workflows, which include metaproteomics, proteogenomics, data independent acquisition (DIA), non-specific proteolysis, immunopeptidomics, and open modification searches. These workflows are all challenging because of ambiguity in the identification stage; they either expand the search space and thus increase the ambiguity of identifications, or, in the case of DIA, they generate data that is inherently more ambiguous. In this context, machine learning-based predictive models are now generating considerable excitement in the field of proteomics because these predictive models hold great potential to drastically reduce the ambiguity in the identification process of the above-mentioned workflows. Indeed, the field has already produced classical machine learning and deep learning models to predict almost every aspect of a liquid chromatography-mass spectrometry (LC-MS) experiment. Yet despite all the excitement, thorough integration of predictive models in these challenging LC-MS workflows is still limited, and further improvements to the modeling and validation procedures can still be made. In this viewpoint we therefore point out highly promising recent machine learning developments in proteomics, alongside some of the remaining challenges
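As a toy illustration of the kind of predictive model the abstract refers to (this is not any published tool; the composition features and synthetic data are deliberately simplistic), one of the simplest predictable LC-MS properties is a peptide's chromatographic retention time:

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(peptide):
    """Feature vector: count of each amino acid in the peptide."""
    return np.array([peptide.count(a) for a in AMINO_ACIDS], float)

def fit_rt_model(peptides, rts):
    """Linear least-squares fit of retention time on composition.

    Real predictors use far richer sequence models (e.g. deep learning),
    but the workflow shape is the same: train on confidently identified
    peptides, then use predicted retention times as extra evidence to
    rerank or filter ambiguous identifications.
    """
    X = np.stack([composition(p) for p in peptides])
    coef, *_ = np.linalg.lstsq(X, np.asarray(rts, float), rcond=None)
    return coef

def predict_rt(coef, peptide):
    return float(composition(peptide) @ coef)
```

The mismatch between predicted and observed retention time then becomes one more feature for discriminating correct from incorrect matches in the expanded search spaces the abstract describes.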
Soundness of Unravelings for Conditional Term Rewriting Systems via Ultra-Properties Related to Linearity
Unravelings are transformations from a conditional term rewriting system
(CTRS, for short) over an original signature into an unconditional term
rewriting system (TRS, for short) over an extended signature. They are not
sound w.r.t. reduction for every CTRS, while they are complete w.r.t.
reduction. Here, soundness w.r.t. reduction means that every reduction sequence
of the corresponding unraveled TRS, of which the initial and end terms are over
the original signature, can be simulated by the reduction of the original CTRS.
In this paper, we show that an optimized variant of Ohlebusch's unraveling for
a deterministic CTRS is sound w.r.t. reduction if the corresponding unraveled
TRS is left-linear or both right-linear and non-erasing. We also show that
soundness of the variant implies that of Ohlebusch's unraveling. Finally, we
show that soundness of Ohlebusch's unraveling is the weakest among those of
the other unravelings and of a transformation, proposed by Serbanuta and Rosu,
for (normal) deterministic CTRSs, i.e., soundness of each of them implies that
of Ohlebusch's unraveling.

Comment: 49 pages, 1 table; publication in Special Issue: Selected Papers of the 22nd International Conference on Rewriting Techniques and Applications (RTA'11)
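As a textbook-style illustration of the simplest case (one condition; this is the generic shape of such unravelings, not the optimized variant studied in the paper), a conditional rule and its unraveled counterpart look like:

```latex
\text{CTRS rule:}\quad
  f(x) \;\to\; y \;\Leftarrow\; g(x) \to^{*} h(y)
\\[4pt]
\text{unraveled TRS:}\quad
  f(x) \;\to\; U(g(x),\, x),
  \qquad
  U(h(y),\, x) \;\to\; y
```

Here $U$ is a fresh symbol of the extended signature that carries the variables of the left-hand side while the condition is being evaluated. Completeness w.r.t. reduction holds because every CTRS step can be mimicked through $U$; soundness can fail when the unraveled TRS rewrites inside $U$-terms in ways that have no counterpart in the original CTRS, which is why linearity-related conditions such as those in the title matter.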
The Challenge of Machine Learning in Space Weather Nowcasting and Forecasting
The numerous recent breakthroughs in machine learning (ML) make it imperative
to ponder carefully how the scientific community can benefit from a technology
that, although not necessarily new, is today living its golden age. This Grand
Challenge review paper is focused on the present and future role of machine
learning in space weather. The purpose is twofold. On one hand, we will discuss
previous works that use ML for space weather forecasting, focusing in
particular on the few areas that have seen the most activity: the forecasting of
geomagnetic indices, of relativistic electrons at geosynchronous orbits, of
solar flare occurrence, of coronal mass ejection propagation time, and of
solar wind speed. On the other hand, this paper serves as a gentle introduction
to the field of machine learning tailored to the space weather community and as
a pointer to a number of open challenges that we believe the community should
undertake in the next decade. The recurring themes throughout the review are
the need to shift our forecasting paradigm to a probabilistic approach focused
on the reliable assessment of uncertainties, and the combination of
physics-based and machine learning approaches, known as gray-box.

Comment: under review
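The probabilistic-forecasting paradigm argued for above can be made concrete with a standard verification metric such as the Brier score. This is a generic sketch; the event definition (e.g. "a geomagnetic index exceeds a storm threshold in the next window") and the numbers are purely illustrative.

```python
import numpy as np

def brier_score(probs, outcomes):
    """Mean squared difference between forecast probability and outcome.

    Lower is better: 0 for a perfect deterministic forecast, 0.25 for an
    uninformative constant 0.5 forecast on a balanced event.
    """
    p = np.asarray(probs, float)
    o = np.asarray(outcomes, float)  # 1 if the event occurred, else 0
    return float(np.mean((p - o) ** 2))

# A sharp, well-calibrated forecast beats a hedging one:
sharp   = brier_score([0.9, 0.1, 0.8, 0.2], [1, 0, 1, 0])  # 0.025
hedging = brier_score([0.5, 0.5, 0.5, 0.5], [1, 0, 1, 0])  # 0.25
```

Scoring rules like this (together with reliability diagrams and skill scores against a climatological baseline) are the kind of uncertainty-aware evaluation the review advocates over binary hit/miss statistics.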
Batalin-Vilkovisky Integrals in Finite Dimensions
The Batalin-Vilkovisky (BV) method is the most powerful method presently
known for analyzing functional integrals with (infinite-dimensional) gauge
symmetries. It was invented to fix gauges associated with symmetries that do
not close off-shell. Homological Perturbation Theory is introduced and used to
develop the integration theory behind BV and to describe the BV quantization of
a Lagrangian system with symmetries. Localization (illustrated in terms of
Duistermaat-Heckman localization) as well as anomalous symmetries are discussed
in the framework of BV.

Comment: 35 pages
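In the finite-dimensional setting the central objects are the odd BV Laplacian and the quantum master equation; in one common convention (signs, factors, and the placement of $\hbar$ vary across the literature, so this is indicative rather than the paper's exact notation) they read:

```latex
\Delta \;=\; \sum_{a} \frac{\partial}{\partial x^{a}}\,
             \frac{\partial}{\partial x^{*}_{a}},
\qquad
\Delta\, e^{-S/\hbar} \;=\; 0
\;\Longleftrightarrow\;
\tfrac{1}{2}\,(S,S) \;=\; \hbar\, \Delta S,
```

where $x^{a}$ are fields, $x^{*}_{a}$ their antifields, and $(\cdot\,,\cdot)$ is the antibracket. The master equation is what guarantees that the BV integral of $e^{-S/\hbar}$ over a Lagrangian submanifold is unchanged under deformations of that submanifold, which is the finite-dimensional statement of gauge-fixing independence.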