
    Circuit FPGA for active rules selection in a transition P system region

    P systems, or Membrane Computing, are a class of distributed, massively parallel, non-deterministic computing models inspired by biological membranes. These systems compute through transitions between consecutive configurations. As is well known in membrane computing, a configuration consists of an m-tuple of the multisets present, at a given moment, in the m regions the system has at that moment. Transitions between two configurations are performed by applying the evolution rules present in each region of the system in a non-deterministic, maximally parallel manner. This article presents the development of a hardware circuit for the selection of active rules in a membrane of a transition P system. The development was carried out with the Quartus II tool from Altera Semiconductors. First, the initial specifications are defined in order to outline the synthesis of the active rules selection circuit. The design and synthesis of the circuit are then presented, together with the operation tests required to validate the obtained results.
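
    As a minimal software sketch of the rule-applicability and maximally parallel selection logic the abstract describes (the rule names and multisets below are illustrative; the article itself realises this logic in hardware):

        import random
        from collections import Counter

        def applicable(rule_lhs: Counter, region: Counter) -> bool:
            """A rule is active when the region contains every object its LHS consumes."""
            return all(region[obj] >= n for obj, n in rule_lhs.items())

        def select_active_rules(rules: dict[str, Counter], region: Counter) -> Counter:
            """Non-deterministic maximally parallel selection: keep choosing applicable
            rules (reserving the objects they consume) until no rule can fire."""
            remaining = Counter(region)
            selection = Counter()
            while True:
                active = [name for name, lhs in rules.items() if applicable(lhs, remaining)]
                if not active:
                    return selection            # maximal: nothing else can fire
                chosen = random.choice(active)  # non-deterministic choice
                remaining -= rules[chosen]
                selection[chosen] += 1

        # Region with objects a,a,a,a,b and two competing rules: different runs
        # can yield different (but always maximal) selections.
        rules = {"r1": Counter("ab"), "r2": Counter("aa")}
        print(select_active_rules(rules, Counter("aaaab")))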

    Prediction based task scheduling in distributed computing


    Achievements, Open Problems and Challenges for Search Based Software Testing

    Search Based Software Testing (SBST) formulates testing as an optimisation problem, which can be attacked using computational search techniques from the field of Search Based Software Engineering (SBSE). We present an analysis of the SBST research agenda, focusing on the open problems and challenges of testing non-functional properties, in particular a topic we call 'Search Based Energy Testing' (SBET), Multi-objective SBST, and SBST for Test Strategy Identification. We conclude with a vision of FIFIVERIFY tools, which would automatically find faults, fix them, and verify the fixes. We explain why we think such FIFIVERIFY tools constitute an exciting challenge for the SBSE community that could already be within its reach.
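
    A toy illustration of what "testing as an optimisation problem" means in practice: a hill climber searches for a test input that covers a target branch, guided by a branch-distance fitness. The program under test, the fitness function, and the search parameters below are illustrative, not from the paper.

        import random

        def program_under_test(x: int) -> bool:
            # The branch we want a generated test input to cover.
            return x == 4242

        def branch_distance(x: int) -> int:
            # Classic SBST fitness: how "close" an input is to taking the branch.
            return abs(x - 4242)

        def hill_climb(start: int, max_steps: int = 50_000) -> int:
            current = start
            for _ in range(max_steps):
                if branch_distance(current) == 0:
                    break                      # branch covered
                neighbour = current + random.choice([-100, -10, -1, 1, 10, 100])
                if branch_distance(neighbour) < branch_distance(current):
                    current = neighbour        # accept only improving moves
            return current

        test_input = hill_climb(random.randint(-100_000, 100_000))
        print(test_input, program_under_test(test_input))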

    Feature learning for stock price prediction shows a significant role of analyst rating

    Data Availability Statement: The code is available from https://mkhushi.github.io/ (accessed on 1 February 2021). Dataset license: CC0. The Efficient Market Hypothesis states that stock prices reflect all the information present in the world, and that excess returns cannot be generated by merely analysing trade data that is already available to the public. To further the research rejecting this idea, a rigorous literature review was conducted, and a set of five technical indicators and 23 fundamental indicators was identified to establish the possibility of generating excess returns on the stock market. Leveraging these data points and various classification machine learning models, trading data of the 505 equities in the US S&P 500 over the past 20 years was analysed to develop an effective classifier. From any given day, we were able to predict the direction of a 1% change in price up to 10 days into the future. The predictions had an overall accuracy of 83.62%, with a precision of 85% for buy signals and a recall of 100% for sell signals. Moreover, we grouped equities by sector and repeated the experiment to see whether grouping similar assets together positively affected the results, but concluded that it showed no significant improvement in performance, rejecting the idea of sector-based analysis. Using feature ranking, we also identified an even smaller set of six indicators that maintains accuracies similar to those of the original 28 features, and we uncovered the importance of buy, hold and sell analyst ratings, which emerged as the top contributors in the model. Finally, to evaluate the effectiveness of the classifier in real-life situations, it was backtested on the FAANG (Facebook, Amazon, Apple, Netflix and Google) equities using a modest trading strategy, where it generated high returns of above 60% over the term of the testing dataset. In conclusion, our proposed methodology, with its combination of purposefully picked features, improves on previous studies: our model predicts the direction of 1% price changes on the 10th day with high confidence and with enough buffer to build a robotic trading system. This research received no external funding.
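
    A minimal sketch of the experimental setup described above, with synthetic data standing in for the study's 5 + 23 indicators (all names, numbers, and the random labels below are illustrative, not the study's dataset or results):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import classification_report

        rng = np.random.default_rng(0)
        n = 5000
        X = rng.normal(size=(n, 28))   # stand-in for 5 technical + 23 fundamental indicators
        # Stand-in label: did the price move up by >=1% within 10 days?
        y = (X[:, 0] + 0.5 * X[:, 5] + rng.normal(scale=0.5, size=n) > 0).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
        model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
        print(classification_report(y_te, model.predict(X_te)))

        # Feature ranking, as used in the study to shrink 28 indicators to 6.
        top6 = np.argsort(model.feature_importances_)[::-1][:6]
        print("top features:", top6)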

    Zone-based formal specification and timing analysis of real-time self-adaptive systems

    Self-adaptive software systems are able to autonomously adapt their behavior at run-time to react to internal dynamics and to uncertain, changing environmental conditions. Formal specification and verification of self-adaptive systems are generally very difficult tasks, especially when time constraints are involved; in that case the correctness of the system also depends on the time associated with events. This article introduces the Zone-based Time Basic Petri nets specification formalism. The formalism adopts timed adaptation models to specify self-adaptive behavior with temporal constraints, and relies on a zone-based modeling approach to support separation of concerns. Zones identified during the modeling phase can then be used as modules, either in isolation, to verify intra-zone properties, or all together, to verify inter-zone properties over the entire system. In addition, the framework allows the verification of (timed) robustness properties to guarantee self-healing capabilities when higher levels of reliability and availability are required of the system, especially when dealing with time-critical systems. This article also presents the ZAFETY tool, a Java software implementation of the proposed framework, and the validation and experimental results obtained in modeling and verifying two time-critical self-adaptive systems: the Gas Burner system and the Unmanned Aerial Vehicle system.
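
    A minimal sketch of the time-constrained firing rule that Time Basic Petri nets build on (the API and the Gas-Burner-style transition below are illustrative, not the ZAFETY implementation): a transition may fire only while enabled and only at a time within its interval.

        from dataclasses import dataclass

        @dataclass
        class Transition:
            name: str
            pre: dict[str, int]    # tokens consumed per place
            post: dict[str, int]   # tokens produced per place
            lo: float              # earliest firing time after enabling
            hi: float              # latest firing time (deadline)

        def enabled(t: Transition, marking: dict[str, int]) -> bool:
            return all(marking.get(p, 0) >= n for p, n in t.pre.items())

        def fire(t: Transition, marking: dict[str, int], elapsed: float) -> dict[str, int]:
            if not enabled(t, marking):
                raise ValueError(f"{t.name} is not enabled")
            if not (t.lo <= elapsed <= t.hi):
                raise ValueError(f"{t.name} fired outside [{t.lo}, {t.hi}]")
            m = dict(marking)
            for p, n in t.pre.items():
                m[p] -= n
            for p, n in t.post.items():
                m[p] = m.get(p, 0) + n
            return m

        # Toy Gas-Burner-style transition: ignition must happen within 0.5 time units.
        ignite = Transition("ignite", pre={"gas_on": 1}, post={"flame": 1}, lo=0.0, hi=0.5)
        print(fire(ignite, {"gas_on": 1}, elapsed=0.2))   # within the timing bound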

    A Conceptual Framework for Adaptation

    We present a white-box conceptual framework for adaptation. We call it CODA, for COntrol Data Adaptation, since it is based on the notion of control data. CODA promotes a neat separation between application and adaptation logic through a clear identification of the set of data that is relevant for the latter. The framework provides an original perspective from which we survey a representative set of approaches to adaptation, ranging from programming languages and paradigms to computational models and architectural solutions.
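
    A minimal sketch of the control-data idea (the names and toy strategies below are illustrative): the application logic is fixed and merely consults control data, while the adaptation logic changes behaviour only by rewriting that data.

        # The data identified as relevant for adaptation: the control data.
        control_data = {"strategy": "fast"}

        STRATEGIES = {
            "fast": lambda xs: xs[:10],                # cheap, approximate
            "accurate": lambda xs: sorted(xs)[:10],    # costly, precise
        }

        def application_logic(xs):
            # Application logic never changes: behaviour varies only
            # through the control data it consults.
            return STRATEGIES[control_data["strategy"]](xs)

        def adaptation_logic(load: float):
            # Adaptation logic is the only component that writes control data.
            control_data["strategy"] = "fast" if load > 0.8 else "accurate"

        adaptation_logic(load=0.9)
        print(application_logic(list(range(100, 0, -1))))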

    A Conceptual Framework for Adaptation

    This paper presents a white-box conceptual framework for adaptation that promotes a neat separation of the adaptation logic from the application logic through a clear identification of control data and their role in the adaptation logic. The framework provides an original perspective from which we survey archetypal approaches to (self-)adaptation, ranging from programming languages and paradigms, to computational models, to engineering solutions.

    Generic access to symbolic computing services

    Symbolic computation is one of the computational domains that require large computational resources. Computer Algebra Systems (CAS), the main tools used for symbolic computations, are mainly designed as software tools installed on standalone machines, which do not provide the resources required for solving large symbolic computation problems. To support symbolic computations, an infrastructure built upon massively distributed computational environments must therefore be developed. Building an infrastructure for symbolic computations requires a thorough analysis of the most important requirements raised by the symbolic computation world, and the infrastructure must be built on the most suitable architectural styles and technologies. The architecture we propose is composed of several main components: the Computer Algebra System (CAS) Server, which exposes the functionality implemented by one or more supporting CASs through the generic interfaces of Grid Services; the Architecture for Grid Symbolic Services Orchestration (AGSSO) Server, which allows seamless composition of CAS Server capabilities; and client-side libraries to assist users in describing workflows for symbolic computations directly within the CAS environment. We have also designed and developed a framework for automatic data management of mathematical content that relies on OpenMath encoding. To support the validation and fine-tuning of the system, we have developed a simulation platform that mimics the environment on which the architecture is deployed.
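
    A minimal sketch of a client-side call such an infrastructure might expose (the endpoint URL and service interface are hypothetical): the request carries a symbolic expression in OpenMath XML, the encoding the framework's data-management layer relies on.

        import urllib.request

        def openmath_plus(a: int, b: int) -> str:
            # OpenMath encoding of a + b using the standard arith1 content dictionary.
            return (
                '<OMOBJ xmlns="http://www.openmath.org/OpenMath">'
                '<OMA><OMS cd="arith1" name="plus"/>'
                f"<OMI>{a}</OMI><OMI>{b}</OMI></OMA></OMOBJ>"
            )

        def call_cas_server(url: str, payload: str) -> str:
            # POST the OpenMath payload to a CAS Server endpoint (hypothetical URL).
            req = urllib.request.Request(
                url, data=payload.encode(), headers={"Content-Type": "application/xml"}
            )
            with urllib.request.urlopen(req) as resp:
                return resp.read().decode()

        # result = call_cas_server("https://example.org/cas-server/evaluate",
        #                          openmath_plus(1, 2))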