GAMES: A new Scenario for Software and Knowledge Reuse
Games are a well-known test bed for search algorithms and learning methods, and many authors have presented numerous reasons for research in this area. Nevertheless, games have not received the attention they deserve as software projects. In this paper, we analyze the applicability of software and knowledge reuse in the games domain. Although finding a good evaluation function is necessary, search algorithms and interface design are arguably the primary concerns. In addition, we discuss the current state of the main statistical learning methods and how they can be addressed from a software engineering point of view. Finally, this paper proposes a reliable environment and adequate tools, necessary to achieve high levels of reuse in the games domain.
Combining centralised and distributed testing
Many systems interact with their environment at distributed interfaces (ports), and sometimes it is not possible to place synchronised local testers at the ports of the system under test (SUT). There are then two main approaches to testing: having independent local testers, or a single centralised tester that interacts asynchronously with the SUT. The power of using independent testers has been captured by the implementation relation \dioco. In this paper, we define an implementation relation \diococ for the centralised approach and prove that \dioco and \diococ are incomparable. This shows that the two frameworks detect different types of faults, and so we devise a hybrid framework and define an implementation relation \diocos for it. We prove that the hybrid framework is more powerful than both the distributed and centralised approaches. We then prove that the oracle problem is NP-complete for \diococ and \diocos but can be solved in polynomial time if we place an upper bound on the number of ports. Finally, we consider the problem of deciding whether there is a test case that is guaranteed to force a finite-state model into a particular state, or to distinguish two states, proving that both problems are undecidable for the centralised and hybrid frameworks.
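As a concrete illustration of why independent local testers are weaker than a centralised observer (a toy sketch, not the paper's formal \dioco machinery; the ports and events below are hypothetical):

```python
# Two global traces of a 2-port SUT that differ only in how events at
# different ports are interleaved. Independent local testers each see
# only their own port's projection, so they cannot tell the traces
# apart; a centralised tester observing the interleaving can.
# (Illustrative events; not taken from the paper.)

def project(trace, port):
    """Local view at one port: the subsequence of events at that port."""
    return [ev for p, ev in trace if p == port]

trace_a = [("p1", "x"), ("p2", "y"), ("p1", "z")]
trace_b = [("p2", "y"), ("p1", "x"), ("p1", "z")]

# The global traces differ...
assert trace_a != trace_b
# ...but every local projection is identical.
for port in ("p1", "p2"):
    assert project(trace_a, port) == project(trace_b, port)
```

This is the intuition behind the incomparability result: a fault visible only in the global ordering of events at different ports is invisible to the distributed framework.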
Autonomous Architectural Assembly And Adaptation
An increasingly common solution for systems deployed in unpredictable or dangerous environments is to provide the system with an autonomous or self-managing capability. This capability permits the software of the system to adapt to the environmental conditions encountered at runtime by deciding what changes need to be made to the system's behaviour in order to continue meeting the requirements imposed by the designer. The chief advantage of this approach comes from a reduced reliance on the brittle assumptions made at design time.

In this work, we describe mechanisms for adapting the software architecture of a system using a declarative expression of the functional requirements (derived from goals), structural constraints, and preferences over the space of non-functional properties possessed by the components of the system. The declarative approach places this work in contrast to existing schemes, which require more fine-grained, often procedural, specifications of how to perform adaptations. Our algorithm for assembling and re-assembling configurations chooses between solutions that meet both the functional requirements and the structural constraints by comparing the non-functional properties of the selected components against the designer's preferences between, for example, a high-performance or a highly reliable solution.

In addition to the centralised algorithm, we show how the approach can be applied to a distributed system with no central or master node that is aware of the full space of solutions. We use a gossip protocol as a mechanism by which peer nodes can propose what they think the component configuration is (or should be). Gossip ensures that the nodes will reach agreement on a solution, and will do so in a logarithmic number of steps. This latter property ensures the approach can scale to very large systems. Finally, the work is validated on a number of case studies.
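The gossip-based agreement described above can be sketched as a toy simulation (the configuration names and utility scores are made up for illustration; this is not the authors' algorithm, only the epidemic-spread idea behind the logarithmic convergence claim):

```python
import random

random.seed(0)  # deterministic run for the illustration

# Each node holds a proposed configuration with a utility score. In
# every round each node exchanges proposals with one random peer, and
# both keep the higher-scoring one. The best proposal thus spreads
# epidemically, reaching all nodes in roughly O(log n) rounds.
n_nodes = 64
proposals = [("config-%d" % i, random.random()) for i in range(n_nodes)]
best = max(proposals, key=lambda p: p[1])

for _ in range(12):  # log2(64) = 6; extra rounds for safety margin
    for i in range(n_nodes):
        j = random.randrange(n_nodes)
        winner = max(proposals[i], proposals[j], key=lambda p: p[1])
        proposals[i] = proposals[j] = winner

# All peers have converged on the highest-utility proposal.
assert all(p == best for p in proposals)
```

Doubling the node count adds only about one extra round, which is the scaling property the abstract relies on.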
An Atlas of Exotic Variability in IGR J17091-3624: A Comparison with GRS 1915+105
We performed an analysis of all RXTE observations of the Low Mass X-ray
Binary and Black Hole Candidate IGR J17091-3624 during the 2011-2013 outburst
of the source. From the light curves, hardness-intensity diagrams and power
density spectra of each observation, we define a set of 9 variability
`classes' that phenomenologically describe the range of types of variability
seen in this object. We compare our set of variability classes to those
established by Belloni et al. (2000) to describe the similar behaviour of the
LMXB GRS 1915+105, finding that some types of variability seen in IGR
J17091-3624 are not represented in data of GRS 1915+105. We also use all
available X-ray data of the 2011-2013 outburst of IGR J17091-3624 to analyse
its long-term evolution, presenting the first detection of IGR J17091-3624
above 150 keV as well as noting the presence of `re-flares' during the latter
stages of the outburst. Using our results we place new constraints on the mass
and distance of the object, and find that it accretes at <33% of its Eddington
limit. As such, we conclude that Eddington-limited accretion can no longer be
considered a sufficient or necessary criterion for GRS 1915+105-like
variability to occur in Low Mass X-ray Binaries.
Comment: 26 pages, 31 figures, 8 tables. Accepted to MNRAS.
An Automated tool to detect variable sources in the VISTA Variables in the Vía Láctea Survey. The VVV Variables (V^4) catalog of tiles d001 and d002
27 pages, 19 figures.
Time-varying phenomena are one of the most substantial sources of astrophysical information, and their study has led to many fundamental discoveries in modern astronomy. We have developed an automated tool to search for and analyze variable sources in the near-infrared Ks band using data from the VISTA Variables in the Vía Láctea (VVV) ESO Public Large Survey. The process relies on characterizing variable sources using different variability indices calculated from time series generated with point-spread function (PSF) photometry of the sources under analysis. In particular, we used two main indices, the total amplitude and the η index, to identify variable sources. Once the variable objects are identified, periods are determined with generalized Lomb-Scargle periodograms and the information potential metric. Variability classes are assigned according to a compromise between comparisons with VVV templates and the period of the variability. The automated tool was applied to VVV tiles d001 and d002 and led to the discovery of 200 variable sources: 70 irregular and 130 periodic. In addition, nine open-cluster candidates projected in the region are analyzed, and the infrared variable candidates found around these clusters are further scrutinized by cross-matching their locations against emission-star candidates from VPHAS+ survey Hα color cuts.
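The two variability indices named above can be sketched in a few lines (a minimal illustration on synthetic magnitudes, not the survey pipeline; the von Neumann η statistic is about 2 for uncorrelated noise and much smaller for smooth, correlated variability):

```python
def amplitude(mags):
    """Total amplitude: peak-to-peak range of the light curve."""
    return max(mags) - min(mags)

def eta_index(mags):
    """von Neumann eta: mean squared successive difference over variance.
    Smooth, correlated variation gives eta << 2; white noise gives ~2."""
    n = len(mags)
    mean = sum(mags) / n
    var = sum((m - mean) ** 2 for m in mags) / (n - 1)
    msd = sum((mags[i + 1] - mags[i]) ** 2 for i in range(n - 1)) / (n - 1)
    return msd / var

smooth = [0.1 * i for i in range(10)]  # slow monotonic trend
jumpy = [0.0, 1.0] * 5                 # uncorrelated flip-flop

assert abs(amplitude(smooth) - 0.9) < 1e-9
assert eta_index(smooth) < 2 < eta_index(jumpy)
```

A pipeline like the one described would flag a source whose amplitude exceeds the photometric noise and whose η deviates strongly from the white-noise value, then pass it on to period finding.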