    Digital Ecosystems: Ecosystem-Oriented Architectures

    We view Digital Ecosystems as the digital counterparts of biological ecosystems. Here, we are concerned with the creation of these Digital Ecosystems, exploiting the self-organising properties of biological ecosystems to evolve high-level software applications. To this end, we created the Digital Ecosystem, a novel optimisation technique inspired by biological ecosystems, in which the optimisation works at two levels: a first optimisation, the migration of agents distributed in a decentralised peer-to-peer network, operating continuously in time; this process feeds a second optimisation, based on evolutionary computing, that operates locally on single peers and aims to find solutions satisfying locally relevant constraints. The Digital Ecosystem was then evaluated experimentally through simulations, using measures originating from theoretical ecology to assess its likeness to biological ecosystems. This included its responsiveness to requests for applications from the user base, as a measure of ecological succession (ecosystem maturity). Overall, we have advanced the understanding of Digital Ecosystems, creating Ecosystem-Oriented Architectures where the word ecosystem is more than just a metaphor.
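    The two-level optimisation described above can be pictured with a short sketch: agents carrying candidate solutions migrate between peers of a decentralised network, and each peer runs a local evolutionary loop against its own constraints. The Python sketch below is a minimal illustration under assumed representations (integer genomes, an invented local_fitness score, a fixed random migration rate); it is not the authors' implementation.

```python
import random

# Minimal sketch of the two-level optimisation, under assumed representations:
# genomes are lists of integers, local_fitness is an invented constraint-match
# score, and migration is a fixed random rate. Not the authors' implementation.

class Agent:
    def __init__(self, genome):
        self.genome = genome  # candidate solution carried by the agent

def local_fitness(agent, constraints):
    # Hypothetical score: fraction of a peer's constraints present in the genome.
    return sum(1 for c in constraints if c in agent.genome) / len(constraints)

def evolve_locally(population, constraints, generations=50):
    # Second level: a simple evolutionary loop running on a single peer.
    for _ in range(generations):
        population.sort(key=lambda a: local_fitness(a, constraints), reverse=True)
        survivors = population[: max(1, len(population) // 2)] if population else []
        offspring = []
        for parent in survivors:
            child = Agent(list(parent.genome))
            if random.random() < 0.3:  # point mutation
                child.genome[random.randrange(len(child.genome))] = random.randrange(100)
            offspring.append(child)
        population = survivors + offspring
    return population

def migrate(peers, rate=0.1):
    # First level: agents drift continuously between peers of the network.
    for source in peers:
        for agent in list(source["agents"]):
            if random.random() < rate:
                target = random.choice(peers)
                source["agents"].remove(agent)
                target["agents"].append(agent)

# One simulated epoch: migration feeds the local evolutionary optimisation.
peers = [{"constraints": random.sample(range(100), 5),
          "agents": [Agent(random.sample(range(100), 5)) for _ in range(10)]}
         for _ in range(4)]
migrate(peers)
for peer in peers:
    peer["agents"] = evolve_locally(peer["agents"], peer["constraints"])
    if peer["agents"]:
        best = max(peer["agents"], key=lambda a: local_fitness(a, peer["constraints"]))
        print(round(local_fitness(best, peer["constraints"]), 2))
```

    In a full system the migration step would run continuously over the peer-to-peer network while each peer evolves its local population in parallel; the sketch collapses this into a single epoch for readability.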

    Automated pairwise testing approach based on classification tree modeling and negative selection algorithm

    Generating test cases is an important activity in software testing because it increases users' trust in the software. The traditional way to generate test cases, exhaustive testing, is infeasible and time consuming because it produces far too many test cases. Combinatorial testing was introduced to address this problem; its most popular technique, pairwise testing, covers the interactions of every pair of parameters. Although pairwise testing addresses the shortcomings of exhaustive testing, several issues remain. The first concerns modeling the system under test (SUT) as a preprocessing step for test case generation, which existing automated approaches have yet to implement. The second is that different approaches generate different numbers of test cases for different covering arrays. These issues show that there is no single efficient way to find an optimal solution in pairwise testing that also accounts for invalid combinations, or constraints. Therefore, a combination of the Classification Tree Method and the Negative Selection Algorithm (CTM-NSA) was developed in this research. The CTM approach was revised and enhanced to serve as the automated modeling step, and the NSA approach was developed to optimize pairwise testing by generating a low number of test cases. The findings showed that CTM-NSA outperformed the other modeling methods in terms of easing the tester's work and generating a low number of test cases for small SUT sizes. Furthermore, it is comparable to efficient test case generation approaches for large SUT sizes, as it is good at distinguishing self from non-self samples. This distinction occurs during the detection stage of the NSA, which covers the best combination of values for all parameters and considers invalid combinations, or constraints, in order to achieve one hundred percent pairwise coverage. In addition, the approach was validated using the Wilcoxon signed-rank test. Based on these findings, CTM-NSA has been shown to perform modeling in an automated way and to achieve a minimal or low number of test cases for small SUT sizes.
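    To make the pairwise coverage goal concrete, here is a minimal greedy pairwise generator in Python that also respects a forbidden combination, in the spirit of the constraint handling discussed above. The parameter model, the violates check, and the greedy strategy are illustrative assumptions; this is not the CTM-NSA algorithm itself.

```python
from itertools import combinations, product

# Minimal greedy pairwise generator with one forbidden combination. The
# parameters, the constraint, and the greedy strategy are invented for
# illustration; this is not the CTM-NSA approach.
parameters = {
    "browser": ["Chrome", "Firefox"],
    "os": ["Windows", "Linux", "macOS"],
    "network": ["wifi", "ethernet"],
}
# Invalid combination (constraint): these two assignments must never co-occur.
invalid = {("browser", "Firefox"), ("os", "macOS")}

def violates(assignments):
    # True if the forbidden pair is fully contained in the given assignments.
    return invalid <= set(assignments.items())

def uncovered_pairs(tests):
    # All parameter-value pairs not yet covered by the generated test cases.
    pairs = set()
    for a, b in combinations(parameters, 2):
        for va, vb in product(parameters[a], parameters[b]):
            pairs.add(((a, va), (b, vb)))
    for case in tests:
        items = set(case.items())
        pairs = {p for p in pairs if not set(p) <= items}
    return pairs

tests = []
while True:
    # Pairs still to cover, excluding pairs ruled out by the constraint.
    remaining = {p for p in uncovered_pairs(tests) if not violates(dict(p))}
    if not remaining:
        break
    candidates = [dict(zip(parameters, values))
                  for values in product(*parameters.values())]
    candidates = [c for c in candidates if not violates(c)]
    gain = lambda c: sum(1 for p in remaining if set(p) <= set(c.items()))
    best = max(candidates, key=gain)
    if gain(best) == 0:  # no valid test case covers the leftover pairs
        break
    tests.append(best)

for case in tests:
    print(case)
```

    Exhaustive testing of this toy model would need 12 test cases; the greedy pairwise loop covers every coverable pair of values with far fewer, while never emitting the invalid Firefox/macOS combination.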

    Energy Efficient Virtual Machine Migration in Cloud Data Centers

    Cloud computing services have been on the rise over the past few decades, leading to an increase in the number of data centers worldwide, which in turn consume ever greater amounts of energy and thus produce high carbon dioxide emissions and high operating costs. Cloud computing infrastructures are designed to support the accessibility and deployment of various service-oriented applications by users. Computing resources are the major source of power consumption in data centers, along with air conditioning and cooling equipment. Moreover, energy consumption in the cloud is proportional to resource utilization, and data centers are among the world's largest consumers of electricity. It is therefore essential to devise efficient consolidation schemes for the cloud model that minimize energy and increase Return on Investment (ROI) for users by decreasing operating costs. The consolidation problem is NP-complete in nature and thus requires heuristic techniques to obtain a sub-optimal solution; the complexity of the problem grows with the size of the cloud infrastructure. We have proposed a new consolidation scheme for virtual machines (VMs) that improves the host overload detection phase of the scheme. The resulting scheme is effective in reducing both energy consumption and the level of Service Level Agreement (SLA) violations to a considerable extent. To test the performance of our implementation on the cloud, we need a simulation environment that models the system and behaviour of the actual cloud computing components and generates results that support analysis before deployment on actual clouds. CloudSim is one such simulation toolkit that allows us to test and analyse our allocation and selection algorithms. In this thesis we used CloudSim version 3.0.3 to test and analyse our policies and our modifications to the current policies. The advantage of using CloudSim 3.0.3 is that it takes little effort and time to implement a cloud-based application, and we can test the performance of application services in heterogeneous cloud environments. The observations are validated by simulating the experiment using the CloudSim framework and the data provided by PlanetLab.
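    A consolidation step of the kind described above typically has three phases: detect overloaded hosts, select a VM to migrate, and place it on another host. The Python sketch below illustrates these phases with an assumed median-absolute-deviation threshold, a minimum-RAM selection rule, and a least-utilised placement rule; these choices are for exposition only and are not the specific policy evaluated in the thesis, which was implemented on CloudSim 3.0.3.

```python
from statistics import median

# Illustrative consolidation step (detect -> select -> place). The MAD-based
# threshold, the minimum-RAM selection rule, and the least-utilised placement
# are assumptions for exposition, not the thesis's specific policy.

def overloaded(history, safety=2.5):
    # Dynamic threshold: median CPU utilisation plus a multiple of the median
    # absolute deviation; the host is overloaded if the latest value exceeds it.
    m = median(history)
    mad = median(abs(u - m) for u in history)
    return history[-1] > min(1.0, m + safety * mad)

def select_vm(vms):
    # Choose the VM with the smallest RAM footprint (shortest migration time).
    return min(vms, key=lambda vm: vm["ram"])

def place(vm, hosts):
    # Least-utilised host that still has CPU headroom for the VM.
    fits = [h for h in hosts if h["cpu"] + vm["cpu"] <= 1.0]
    return min(fits, key=lambda h: h["cpu"]) if fits else None

hosts = [
    {"name": "h1", "cpu": 0.95, "history": [0.55, 0.60, 0.62, 0.95],
     "vms": [{"id": "vm1", "cpu": 0.30, "ram": 1024},
             {"id": "vm2", "cpu": 0.40, "ram": 2048}]},
    {"name": "h2", "cpu": 0.30, "history": [0.20, 0.25, 0.30, 0.30], "vms": []},
]

for host in hosts:
    if host["vms"] and overloaded(host["history"]):
        vm = select_vm(host["vms"])
        target = place(vm, [h for h in hosts if h is not host])
        if target is not None:
            print(f"migrate {vm['id']} from {host['name']} to {target['name']}")
```

    Running the sketch migrates vm1 from the overloaded h1 to the underutilised h2; in a simulator such as CloudSim, the same detect/select/place loop is driven by recorded utilisation traces like those from PlanetLab.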

    Network Partitioning in Distributed Agent-Based Models

    Agent-Based Models (ABMs) are an emerging simulation paradigm for modeling complex systems comprised of autonomous, possibly heterogeneous, interacting agents. The utility of ABMs lies in their ability to represent such complex systems as self-organizing networks of agents. Modeling and understanding the behavior of complex systems usually occurs at large and representative scales, and obtaining and visualizing simulation results in real time is often critical. The real-time requirement necessitates the use of in-memory computing, as it is difficult to handle the latency and unpredictability of disk accesses. Combining this observation with the scale requirement emphasizes the need for parallel and distributed computing platforms, such as MPI-enabled CPU clusters. Consequently, the agent population must be partitioned across different CPUs in a cluster. Further, the typically high volume of interactions among agents can quickly become a significant bottleneck for real-time or large-scale simulations. The problem is exacerbated if the underlying ABM network is dynamic and the inter-process communication evolves over the course of the simulation. Therefore, it is critical to develop topology-aware partitioning mechanisms to support such large simulations. In this dissertation, we demonstrate that distributed agent-based model simulations benefit from the use of graph partitioning algorithms that take a local, neighborhood-based perspective. Such methods do not rely on global accesses to the network and are therefore more scalable. In addition, we propose two partitioning schemes that consider the bottom-up, individual-centric nature of agent-based modeling. The first technique utilizes label-propagation community detection to partition the dynamic agent network of an ABM. We propose a latency-hiding, seamless integration of community detection into the dynamics of a distributed ABM. To achieve this integration, we exploit the similarity in the process flow patterns of a label-propagation community-detection algorithm and self-organizing ABMs. In the second partitioning scheme, we apply a combination of the Guided Local Search (GLS) and Fast Local Search (FLS) metaheuristics in the context of graph partitioning. The main driving principle of GLS is the dynamic modification of the objective function to escape local optima: the algorithm augments the objective of a local search, thereby transforming the landscape structure and escaping a local optimum. FLS is a local search heuristic aimed at reducing the search space of the main search algorithm; it breaks the space down into sub-neighborhoods such that inactive sub-neighborhoods are removed from the search process. The combination of GLS and FLS allowed us to design a graph partitioning algorithm that is both scalable and sensitive to the inherent modularity of real-world networks.
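    As a concrete illustration of the first scheme's local, neighborhood-based perspective, the following Python sketch shows plain asynchronous label propagation on a small graph: each node repeatedly adopts the most frequent label among its neighbors until labels stabilize, and the surviving labels serve as partition identifiers. The example graph, the tie-breaking rule, and the iteration cap are assumptions for illustration, not the dissertation's latency-hiding integration.

```python
import random
from collections import Counter

# Plain asynchronous label propagation; the resulting labels act as partition
# identifiers for mapping agents to processes. Illustrative only.

def label_propagation(adjacency, max_iters=20, seed=0):
    rng = random.Random(seed)
    labels = {node: node for node in adjacency}  # start with unique labels
    nodes = list(adjacency)
    for _ in range(max_iters):
        changed = False
        rng.shuffle(nodes)  # asynchronous updates in random order
        for node in nodes:
            neighbours = adjacency[node]
            if not neighbours:
                continue
            counts = Counter(labels[n] for n in neighbours)
            best = max(counts.values())
            # Tie-break randomly among the most frequent neighbour labels.
            new = rng.choice([lab for lab, c in counts.items() if c == best])
            if new != labels[node]:
                labels[node] = new
                changed = True
        if not changed:
            break
    return labels  # node -> partition label

# Two loosely connected clusters; the labels that emerge become the partitions
# onto which a distributed ABM would map its agents.
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
         3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(label_propagation(graph))
```

    Because every update reads only a node's immediate neighborhood, the algorithm needs no global view of the network, which is what makes it attractive for partitioning a dynamic agent graph that is itself distributed across processes.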

    Automated Variability Analysis and Testing of an E-Commerce Site. An Experience Report

    In this paper, we report on our experience with the development of La Hilandera, an e-commerce site selling haberdashery products and craft supplies in Europe. The store has a huge input space in which customers can place almost three million different orders, which made testing an extremely difficult task. To address the challenge, we explored the applicability of some of the practices for variability management in software product lines. First, we used a feature model to represent the store input space, which provided us with a view of the variability that was easy to understand, share and discuss with all the stakeholders. Second, we used techniques for the automated analysis of feature models to detect and repair inconsistent and missing configuration settings. Finally, we used test selection and prioritization techniques to generate a manageable and effective set of test cases. Our findings, summarized in a set of lessons learnt, suggest that variability techniques could successfully address many of the challenges found when developing e-commerce sites. CICYT TIN2012-32273, Junta de Andalucía TIC-5906, Junta de Andalucía P12-TIC-186.
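    As a toy illustration of what automated feature-model analysis can detect, the following Python sketch encodes a handful of invented order-form features with two cross-tree constraints and brute-forces the valid configurations, reporting any dead features (features that appear in no valid configuration). The feature names and constraints are hypothetical; the paper's analysis relies on dedicated feature-model tooling rather than enumeration.

```python
from itertools import product

# Toy feature-model analysis: invented order-form features, two cross-tree
# constraints, and a brute-force search for dead features. Hypothetical model;
# real feature-model analysis uses dedicated tooling, not enumeration.

features = ["gift_wrap", "express_shipping", "pickup_in_store", "discount_code"]

def valid(cfg):
    # Cross-tree constraints over the (hypothetical) configuration.
    if cfg["express_shipping"] and cfg["pickup_in_store"]:
        return False  # the two delivery options exclude each other
    if cfg["gift_wrap"] and not (cfg["express_shipping"] or cfg["pickup_in_store"]):
        return False  # gift wrap requires some delivery option
    return True

configs = [dict(zip(features, bits))
           for bits in product([False, True], repeat=len(features))]
valid_configs = [c for c in configs if valid(c)]

dead = [f for f in features if not any(c[f] for c in valid_configs)]
print(f"{len(valid_configs)} valid configurations; dead features: {dead}")
```

    On a real store with millions of configurations this enumeration is infeasible, which is why the automated analyses referred to above encode the feature model as a constraint-satisfaction or SAT problem instead.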

    Human Computation and Convergence

    Humans are the most effective integrators and producers of information, directly and through the use of information-processing inventions. As these inventions become increasingly sophisticated, the substantive role of humans in processing information will tend toward capabilities that derive from our most complex cognitive processes, e.g., abstraction, creativity, and applied world knowledge. Through the advancement of human computation - methods that leverage the respective strengths of humans and machines in distributed information-processing systems - formerly discrete processes will combine synergistically into increasingly integrated and complex information-processing systems. These new, collective systems will exhibit an unprecedented degree of predictive accuracy in modeling physical and techno-social processes, and may ultimately coalesce into a single unified predictive organism with the capacity to address society's most wicked problems and achieve planetary homeostasis.

    Dissecting the Active Site of the Collagenolytic Cathepsin L3 Protease of the Invasive Stage of Fasciola hepatica

    Background: A family of secreted cathepsin L proteases with differential activities is essential for host colonization and survival in the parasitic flatworm Fasciola hepatica. While the blood-feeding adult secretes predominantly FheCL1, an enzyme with a strong preference for Leu at the S2 pocket of the active site, the infective stage produces FheCL3, a unique enzyme with collagenolytic activity that favours Pro at P2. Methodology/Principal Findings: Using a novel unbiased multiplex substrate profiling and mass spectrometry methodology (MSP-MS), we compared the preferences of FheCL1 and FheCL3 along the complete active site cleft and confirm that, while S2 imposes the greatest influence on substrate selectivity, preferences are also evident at other active site subsites. Notably, we discovered that the activities of the FheCL1 and FheCL3 enzymes are very different, sharing only 50% of the cleavage sites, supporting the idea of functional specialization. We generated variants of FheCL1 and FheCL3 by mutagenesis of residues in the S2 and S3 subsites and evaluated their substrate specificity using positional scanning synthetic combinatorial libraries (PS-SCL). Besides the rare P2 Pro preference, FheCL3 showed a distinctive specificity at the S3 pocket, preferentially accommodating the small Gly residue. Both the P2 Pro and P3 Gly preferences were strongly reduced when Trp67 of FheCL3 was replaced by Leu, rendering the enzyme incapable of digesting collagen. In contrast, the inverse Leu67Trp substitution in FheCL1 only slightly reduced its Leu preference and improved Pro acceptance at P2, but greatly increased accommodation of Gly at S3. Conclusions/Significance: These data reveal the significance of S2 and S3 interactions in substrate binding, emphasizing the role of residue 67 in modulating both sites and providing a plausible explanation for the FheCL3 collagenolytic activity essential to host invasion. The unique specificity of FheCL3 could be exploited in the design of inhibitors selectively directed at the proteinases of the infective-stage parasite. © 2013 Corvo et al.