Neural scaling laws for an uncertain world
Autonomous neural systems must efficiently process information in a wide
range of novel environments, which may have very different statistical
properties. We consider the problem of how to optimally distribute receptors
along a one-dimensional continuum consistent with the following design
principles. First, neural representations of the world should obey a neural
uncertainty principle---making as few assumptions as possible about the
statistical structure of the world. Second, neural representations should
convey, as much as possible, equivalent information about environments with
different statistics. The results of these arguments resemble the structure of
the visual system and provide a natural explanation of the behavioral
Weber-Fechner law, a foundational result in psychology. Because the derivation
is extremely general, this suggests that similar scaling relationships should
be observed not only in sensory continua, but also in neural representations of
``cognitive'' one-dimensional quantities such as time or numerosity.
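The behavioral consequence the abstract refers to can be illustrated numerically. A minimal sketch (not the paper's derivation): if receptor centers tile a one-dimensional continuum with logarithmic spacing, the ratio between neighboring receptors is constant, so resolution scales with stimulus magnitude, the signature of the Weber-Fechner law. The function name and parameter values here are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch: receptors tiling a one-dimensional stimulus
# continuum with logarithmic spacing, the layout associated with the
# Weber-Fechner law.
def log_spaced_receptors(x_min, x_max, n):
    """Centers spaced evenly in log space, i.e. a constant ratio
    between neighboring receptor centers."""
    return np.logspace(np.log10(x_min), np.log10(x_max), n)

centers = log_spaced_receptors(1.0, 1000.0, 31)
ratios = centers[1:] / centers[:-1]

# Constant ratio between neighbors: the gap between adjacent receptors
# grows in proportion to magnitude, so the just-noticeable difference
# scales with stimulus intensity (Weber's law).
assert np.allclose(ratios, ratios[0])
```

The same construction applies to any one-dimensional quantity, which is why the abstract expects similar scaling for cognitive continua such as time or numerosity.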
Flexible provisioning of Web service workflows
Web services promise to revolutionise the way computational resources and business processes are offered and invoked in open, distributed systems, such as the Internet. These services are described using machine-readable meta-data, which enables consumer applications to automatically discover and provision suitable services for their workflows at run-time. However, current approaches have typically assumed that service descriptions are accurate and deterministic, and so have neglected the fact that services in these open systems are inherently unreliable and uncertain. Specifically, network failures, software bugs and competition for services may regularly lead to execution delays or even service failures. To address this problem, the process of provisioning services needs to be performed in a more flexible manner than has so far been considered, in order to proactively deal with failures and to recover workflows that have partially failed. To this end, we devise and present a heuristic strategy that varies the provisioning of services according to their predicted performance. Using simulation, we then benchmark our algorithm and show that it leads to a 700% improvement in average utility, while successfully completing up to eight times as many workflows as approaches that do not consider service failures.
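The abstract does not spell out the heuristic, but the idea of varying provisioning by predicted performance can be sketched with a simple redundancy rule: provision enough parallel providers for a task that the probability of all of them failing stays below a target threshold. The function name and thresholds below are illustrative assumptions, not the paper's algorithm.

```python
import math

# Illustrative sketch only: redundant provisioning driven by a service's
# predicted failure probability. Provision the smallest number of
# providers n such that the chance all n fail, p_fail**n, is at most
# the target task-failure threshold.
def providers_needed(p_fail, target_task_failure=0.01):
    """Smallest n with p_fail**n <= target_task_failure."""
    if p_fail <= 0:
        return 1  # a reliable service needs no redundancy
    n = math.ceil(math.log(target_task_failure) / math.log(p_fail))
    return max(1, n)

# An unreliable service (40% predicted failure rate) is provisioned with
# more redundancy than a dependable one (5%).
print(providers_needed(0.4))   # → 6
print(providers_needed(0.05))  # → 2
```

This captures the flexible-provisioning intuition: allocation adapts to predicted reliability instead of treating every service description as accurate and deterministic.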
Integration of decision support systems to improve decision support performance
The decision support system (DSS) is a well-established research and development area. Traditional isolated, stand-alone DSSs have recently been facing new challenges. In order to improve the performance of DSSs to meet these challenges, research has been actively carried out to develop integrated decision support systems (IDSS). This paper reviews current research efforts on the development of IDSS. The focus of the paper is on the integration aspect of IDSS, viewed from multiple perspectives, and on the technologies that support this integration. More than 100 papers and software systems are discussed. Current research efforts and the development status of IDSS are explained, compared and classified. In addition, future trends and challenges in integration are outlined. The paper concludes that by addressing integration, better support will be provided to decision makers, with the expectation of both better decisions and improved decision-making processes.
An eclectic quadrant of rule based system verification: work grounded in verification of fuzzy rule bases.
In this paper, we use a research approach based on grounded theory to classify methods proposed in the literature that try to extend the verification of classical rule bases to the case of fuzzy knowledge modeling. Within this area of verification we identify two dual lines of thought, leading respectively to what are termed static and dynamic anomaly detection methods. The major outcome of the confrontation of the two approaches is that their results, most often stated in terms of necessary and/or sufficient conditions, are difficult to reconcile. This paper addresses precisely this issue through the construction of a theoretical framework, which enables us to effectively evaluate the results of both static and dynamic verification theories. Things essentially go wrong when, in the quest for a good affinity, matching or similarity measure, one neglects to take into account the effect of the implication operator, an issue that rises above and beyond the fuzzy setting that initiated the research. The findings can easily be generalized to verification issues in any knowledge coding setting.
Redundancy in Systems which Entertain a Model of Themselves: Interaction Information and the Self-organization of Anticipation
Mutual information among three or more dimensions (mu-star = - Q) has been
considered as interaction information. However, Krippendorff (2009a, 2009b) has
shown that this measure cannot be interpreted as a unique property of the
interactions and has proposed an alternative measure of interaction information
based on iterative approximation of maximum entropies. Q can then be considered
as a measure of the difference between interaction information and redundancy
generated in a model entertained by an observer. I argue that this provides us
with a measure of the imprint of a second-order observing system -- a model
entertained by the system itself -- on the underlying information processing.
The second-order system communicates meaning hyper-incursively; an observation
instantiates this meaning-processing within the information processing. The net
results may add to or reduce the prevailing uncertainty. The model is tested
empirically for the case where textual organization can be expected to contain
intellectual organization in terms of distributions of title words, author
names, and cited references.
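The quantity at issue can be checked numerically. A minimal sketch (not the paper's empirical analysis): one common definition of Q uses the inclusion-exclusion sum of entropies, Q = H(X)+H(Y)+H(Z) - H(XY) - H(XZ) - H(YZ) + H(XYZ). For the classic XOR configuration, where any pair of variables looks independent but the triple is fully determined, Q is negative, illustrating the redundancy generated only at the three-way level; sign conventions for "interaction information" vary, which is part of Krippendorff's critique summarized above. Helper names below are assumptions for illustration.

```python
import itertools
from math import log2

# Illustrative numeric check of the three-way quantity
# Q = H(X)+H(Y)+H(Z) - H(XY) - H(XZ) - H(YZ) + H(XYZ)
# for the XOR example: X, Y fair coins, Z = X XOR Y.

def entropy(dist):
    """Shannon entropy (bits) of a dict mapping states to probabilities."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def marginal(joint, axes):
    """Marginalize a joint dict {(x, y, z): p} onto the given axes."""
    out = {}
    for state, p in joint.items():
        key = tuple(state[i] for i in axes)
        out[key] = out.get(key, 0.0) + p
    return out

# Joint distribution of (X, Y, Z): four equiprobable states with Z = X ^ Y.
joint = {(x, y, x ^ y): 0.25 for x, y in itertools.product((0, 1), (0, 1))}

q = (entropy(marginal(joint, (0,))) + entropy(marginal(joint, (1,)))
     + entropy(marginal(joint, (2,)))
     - entropy(marginal(joint, (0, 1))) - entropy(marginal(joint, (0, 2)))
     - entropy(marginal(joint, (1, 2))) + entropy(joint))

print(q)  # → -1.0 bit; mu-star = -Q = 1.0 bit
```

Each pairwise marginal here carries no mutual information, yet the triple is perfectly constrained, which is exactly why a single signed number is hard to interpret as a unique property of the interactions.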