
    Novel optimization schemes for service composition in the cloud using learning automata-based matrix factorization

    A thesis submitted to the University of Bedfordshire, in partial fulfilment of the requirements for the degree of Doctor of Philosophy. Service Oriented Computing (SOC) provides a framework for the realization of loosely coupled service oriented applications (SOA). Web services are central to the concept of SOC. They possess several benefits which are useful to SOA, e.g. encapsulation, loose coupling and reusability. Using web services, an application can embed its functionalities within the business process of other applications. This is made possible through web service composition. Web services are composed to provide more complex functions for a service consumer in the form of a value-added composite service. Currently, research into how web services can be composed to yield QoS (Quality of Service) optimal composite services has gathered significant attention. However, the number of services has risen, thereby increasing the number of possible service combinations and also amplifying the impact of the network on composite service performance. QoS-based service composition in the cloud addresses two important sub-problems: prediction of network performance between web service nodes in the cloud, and QoS-based web service composition. We model the former as a prediction problem, while the latter is modelled as an NP-hard optimization problem due to its complex, constrained and multi-objective nature. This thesis contributed to the prediction problem by presenting a novel learning automata-based non-negative matrix factorization algorithm (LANMF) for estimating end-to-end network latency of a composition in the cloud. LANMF encodes each web service node as an automaton, which allows it to estimate its network coordinate in such a way that prediction error is minimized. Experiments indicate that LANMF is more accurate than current approaches. The thesis also contributed to the QoS-based service composition problem by proposing four evolutionary algorithms: a network-aware genetic algorithm (INSGA), a K-means based genetic algorithm (KNSGA), a multi-population particle swarm optimization algorithm (NMPSO), and a non-dominated sort fruit fly algorithm (NFOA). The algorithms adopt different evolutionary strategies coupled with the LANMF method to search for low-latency and QoS-optimal solutions. They also employ a unique constraint handling method used to penalize solutions that violate user-specified QoS constraints. Experiments demonstrate the efficiency and scalability of the algorithms in a large-scale environment, and the algorithms outperform other evolutionary algorithms in terms of optimality and scalability. In addition, the thesis contributed to QoS-based web service composition in a dynamic environment. This is motivated by the ineffectiveness of the four proposed algorithms in a dynamically changing QoS environment, such as a real-world scenario. Hence, we propose a new cellular automata-based genetic algorithm (CellGA) to address the issue. Experimental results show the effectiveness of CellGA in solving QoS-based service composition in a dynamic QoS environment.
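The abstract does not spell out the LANMF update rules, but the underlying idea of recovering unmeasured pairwise latencies by factorizing a partially observed latency matrix can be sketched as follows. This is a plain non-negative matrix factorization with multiplicative updates, not the learning automata scheme from the thesis; the matrix, rank and names are illustrative.

```python
# Minimal sketch: predicting missing end-to-end latencies with plain
# non-negative matrix factorization (multiplicative updates).
# NOT the LANMF algorithm from the thesis -- the learning-automata update
# scheme is omitted; matrix, rank and names are illustrative.
import numpy as np

def nmf_latency_predict(D, mask, rank=4, iters=500, eps=1e-9):
    """D: n x n observed latency matrix (ms); mask: 1 where observed, 0 otherwise."""
    n = D.shape[0]
    rng = np.random.default_rng(0)
    W = rng.random((n, rank)) + eps   # latent coordinates of source nodes
    H = rng.random((rank, n)) + eps   # latent coordinates of destination nodes
    for _ in range(iters):
        WH = W @ H
        # multiplicative updates restricted to observed entries
        W *= ((mask * D) @ H.T) / (((mask * WH) @ H.T) + eps)
        WH = W @ H
        H *= (W.T @ (mask * D)) / ((W.T @ (mask * WH)) + eps)
    return W @ H                      # full predicted latency matrix

# Usage: fill in the unmeasured node pairs of a small latency matrix.
D = np.array([[0., 20., 0.], [20., 0., 35.], [0., 35., 0.]])
mask = (D > 0).astype(float)
print(np.round(nmf_latency_predict(D, mask), 1))
```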

    Simulating Land Use Land Cover Change Using Data Mining and Machine Learning Algorithms

    The objectives of this dissertation are to: (1) review the breadth and depth of land use land cover change (LUCC) issues that are being addressed by the land change science community by discussing how an existing model, Purdue's Land Transformation Model (LTM), has been used to better understand these very important issues; (2) summarize the current state of the art in LUCC modeling in an attempt to provide a context for the advances in LUCC modeling presented here; (3) use a variety of statistical, data mining and machine learning algorithms to model single LUCC transitions in diverse regions of the world (e.g. United States and Africa) in order to determine which tools are most effective in modeling common LUCC patterns that are nonlinear; (4) develop new techniques for modeling multiple class (MC) transitions at the same time using existing LUCC models, as such models are rare and in great demand; (5) reconfigure the existing LTM for urban growth boundary (UGB) simulation, because UGB modeling has been ignored by the LUCC modeling community; and (6) compare two rule-based models for urban growth boundary simulation for use in UGB land use planning. The review of LTM applications during the last decade indicates that a model like the LTM has addressed a majority of land change science issues, although it has not explicitly been used to study terrestrial biodiversity issues. The review of the existing LUCC models indicates that there is no unique typology to differentiate between LUCC model structures and that no models exist for UGB. Simulations designed to compare multiple models show that ANN-based LTM results are similar to Multivariate Adaptive Regression Spline (MARS)-based models, and both ANN- and MARS-based models outperform Classification and Regression Tree (CART)-based models for modeling single LULC transitions; however, for modeling MC, an ANN-based LTM-MC is similar in goodness of fit to CART and both models outperform MARS in different regions of the world. In simulations across three regions (two in the United States and one in Africa), the LTM had better goodness-of-fit measures, while the outcomes of CART and MARS were more interpretable and understandable than the ANN-based LTM. Modeling MC LUCC requires the examination of several class separation rules and is thus more complicated than single LULC transition modeling; more research is clearly needed in this area. One of the greatest challenges identified with MC modeling is evaluating error distributions and map accuracies for multiple classes. A modified ANN-based LTM and a simple rule-based UGBM outperformed a null model in all cardinal directions. For the UGBM model to be useful for planning, other factors need to be considered, including a separate routine that would determine urban quantity over time.
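As a rough illustration of the single-transition comparison described above (tree-based versus ANN-based models), the following sketch fits a CART-style decision tree and a small neural network to a synthetic binary land-transition dataset and compares them by AUC. The predictors, transition rule and data are invented for illustration only; MARS is omitted because it has no counterpart in scikit-learn's core distribution.

```python
# Synthetic comparison of a CART-style tree and a small ANN on a binary
# "did this cell transition to urban?" problem. Data and predictor names
# are illustrative, not from the dissertation.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 2000
X = rng.random((n, 3))                      # e.g. distance to road, slope, distance to urban
# nonlinear transition rule: close to roads AND close to existing urban
y = ((X[:, 0] < 0.3) & (X[:, 2] < 0.4)).astype(int)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("CART", DecisionTreeClassifier(max_depth=5)),
                    ("ANN", MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000))]:
    model.fit(Xtr, ytr)
    auc = roc_auc_score(yte, model.predict_proba(Xte)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```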

    Entropy-Based Resource Management in Complex Cloud Environment

    Resource management is an NP-complete problem, the complexity of which increases substantially in the Cloud environment. The complexity of cloud resource management can originate from many factors: the scale of the resources; the heterogeneity of the resource types and the interdependencies between them; as well as the variability, dynamicity, and unpredictability of resource run-time performance. Complexity has many negative effects in relation to satisfying the Quality of Service (QoS) requirements of cloud applications, such as cost, performance, availability, and reliability. If an application cannot guarantee its QoS, it will be hard for it to become popular. However, the vast majority of research efforts into cloud resource management implicitly assume the Cloud to be a simple technology and cloud resource performance to be deterministic and predictable. These incorrect assumptions may significantly affect the QoS of any cloud application developed under them, causing its resource management strategy to be less than robust. In spite of there being extensive research into complexity issues in many diverse fields, ranging from computational biology to decision making in economics, the study of complexity in cloud resource management systems is limited. In this thesis, I address the complexity problems of Cloud Resource Management Systems by introducing the use of Entropy Theory in relation to them. The main contributions of this thesis are as follows: 1. A cloud simulation toolkit, ComplexCloudSim, is implemented in order to help tackle the research question: what is the role of complexity in QoS-aware cloud resource management? Chaotic behaviour in Cloud Resource Management Systems is uncovered using the Damage Spreading Analysis method. 2. A comprehensive definition of complexity in Cloud Resource Management Systems is given; such complexity can be primarily classified into two categories: Global System Complexity and Local Resource Complexity. 3. An Entropy Theory based resource management model is proposed for the purposes of identifying, measuring, analyzing and controlling (i.e., reducing and avoiding) complexity. 4. A Cellular Automata Entropy based methodology is proposed as a solution to the Cloud resource allocation problem; this methodology is capable of managing Global System Complexity. 5. Once the root cause of the complexity has been identified using the Local Activity Principle, a Resource Entropy Based Local Activity Ranking system is proposed which solves the job scheduling problem by managing Local Resource Complexity. Finally, on this latter basis, I implement a system which I have termed an Entropy Scheduler within a popular real-world cloud analysis engine, Apache Spark. Experiments demonstrate that the new Entropy Scheduler significantly reduces the average query response time by 15-20% and the standard deviation by 30-45% compared with the native Fair Scheduler for running CPU-intensive applications in Apache Spark, when the Spark server is not overloaded.
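The Entropy Scheduler itself is not specified in the abstract; the sketch below only illustrates the underlying intuition of entropy-based resource ranking: prefer nodes whose recent performance samples have lower Shannon entropy, i.e. are more predictable. All node names, numbers and bin choices are hypothetical.

```python
# Illustration of entropy-based resource ranking (not the thesis' Entropy
# Scheduler): lower entropy of recent runtimes -> more predictable node.
import numpy as np

def shannon_entropy(samples, bin_edges):
    """Entropy (in bits) of samples discretised with shared bin edges."""
    hist, _ = np.histogram(samples, bins=bin_edges)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def rank_nodes(perf_history, bin_edges):
    """perf_history: dict node -> array of recent task runtimes (s).
    Returns node names ordered from most to least predictable."""
    return sorted(perf_history, key=lambda n: shannon_entropy(perf_history[n], bin_edges))

rng = np.random.default_rng(1)
history = {
    "node-a": rng.normal(1.0, 0.05, 200),   # stable runtimes
    "node-b": rng.normal(1.0, 0.40, 200),   # noisy runtimes
    "node-c": rng.normal(1.0, 0.15, 200),
}
edges = np.linspace(0.0, 2.5, 26)            # shared 0.1 s bins for all nodes
print(rank_nodes(history, edges))            # expected: ['node-a', 'node-c', 'node-b']
```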

    Turku Centre for Computer Science – Annual Report 2013

    Due to a major reform of the organization and responsibilities of TUCS, its role, activities, and even structures have been under reconsideration in 2013. The traditional pillar of collaboration at TUCS, doctoral training, was reorganized due to changes at both universities according to the renewed national system for doctoral education. Computer Science and Engineering and Information Systems Science are now accompanied by Mathematics and Statistics in newly established doctoral programmes at both the University of Turku and Åbo Akademi University. Moreover, both universities granted sufficient resources to their respective programmes for doctoral training in these fields, so that joint activities at TUCS can continue. The outcome of this reorganization has the potential of proving to be a success in terms of scientific profile as well as the quality and quantity of scientific and educational results. International activities that have been characteristic of TUCS since its inception continue strong. TUCS' participation in European collaboration through the EIT ICT Labs Master's and Doctoral School is now more active than ever. The new double degree programmes at MSc and PhD level between the University of Turku and Fudan University in Shanghai, P.R. China were successfully set up and are now running for their first year. The joint students will add to the already international atmosphere of the ICT House. The four new thematic research programmes set up according to the decision by the TUCS Board have now established themselves, and a number of events and other activities saw the light in 2013. The TUCS Distinguished Lecture Series managed to gather a large audience with its several prominent speakers. The development of these and other research centre activities continues, and new practices and structures will be initiated to support the tradition of close academic collaboration. The TUCS slogan, Where Academic Tradition Meets the Exciting Future, has proven true throughout these changes. Despite the dark clouds on the national and European economic sky, science and higher education in the field have managed to retain all the key ingredients for success. Indeed, the future of ICT and Mathematics in Turku seems exciting.

    Urban land cover change detection analysis and modeling spatio-temporal Growth dynamics using Remote Sensing and GIS Techniques: A case study of Dhaka, Bangladesh

    Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies. Dhaka, the capital of Bangladesh, has undergone radical changes in its physical form, not only in its vast territorial expansion, but also through internal physical transformations over the last decades. In the process of urbanization, the physical characteristics of Dhaka are gradually changing as open spaces have been transformed into built-up areas, and low land and water bodies into reclaimed built-up land. This new urban fabric should be analyzed to understand the changes that have led to its creation. The primary objective of this research is to predict and analyze the future urban growth of Dhaka City. Another objective is to quantify and investigate the characteristics of urban land cover changes (1989-2009) using the Landsat satellite images of 1989, 1999 and 2009. Dhaka City Corporation (DCC) and its surrounding impact areas have been selected as the study area. A Fisher supervised classification method has been applied to prepare the base maps with five land cover classes. To observe the change detection, different spatial metrics have been used for quantitative analysis. Moreover, some post-classification change detection techniques have also been implemented. It was found that the 'built-up area' land cover type has been increasing at a high rate over the years. The major contributors to this change are the 'fallow land' and 'water body' land cover types. In the next stage, three different models have been implemented to simulate the land cover map of Dhaka City for 2009: the 'Stochastic Markov (St_Markov)' model, the 'Cellular Automata Markov (CA_Markov)' model and the 'Multi Layer Perceptron Markov (MLP_Markov)' model. The best-fitted model has then been selected based on various Kappa statistics values and by applying other model validation techniques. In this way, the 'Multi Layer Perceptron Markov (MLP_Markov)' model has been identified as the most suitable model for this research. Later, using the MLP_Markov model, the land cover map of 2019 has been predicted. The MLP_Markov model shows that 58% of the total study area will be converted into the built-up area cover type in 2019. The interpretation of the future scenario in quantitative terms, as demonstrated in this research, will be of great value to urban planners and decision makers for the future planning of modern Dhaka City.
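The Markov-chain step shared by the St_Markov, CA_Markov and MLP_Markov models can be sketched as follows: class proportions are projected forward with a transition probability matrix estimated from two dated land cover maps. The matrix, class names and proportions below are invented for illustration and are not taken from the Dhaka study.

```python
# Minimal Markov-chain projection of land cover class proportions.
# Illustrative values only; not the Dhaka study's data.
import numpy as np

classes = ["built-up", "fallow land", "water body", "vegetation", "low land"]

# P[i, j] = probability that a cell in class i (map 1) is in class j (map 2)
P = np.array([
    [0.95, 0.02, 0.00, 0.02, 0.01],
    [0.30, 0.55, 0.02, 0.10, 0.03],
    [0.15, 0.05, 0.70, 0.05, 0.05],
    [0.10, 0.10, 0.01, 0.75, 0.04],
    [0.25, 0.10, 0.05, 0.05, 0.55],
])

share_now = np.array([0.40, 0.20, 0.10, 0.20, 0.10])  # observed proportions
share_next = share_now @ P                             # one-step projection
for c, s in zip(classes, share_next):
    print(f"{c}: {s:.2%}")
```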

    AI technology for remote clinical assessment and monitoring

    Objective: To report the clinical validation of an innovative, artificial intelligence (AI)-powered, portable and non-invasive medical device called Wound Viewer. The AI medical device uses dedicated sensors and AI algorithms to remotely collect objective and precise clinical data, including three-dimensional (3D) wound measurements, tissue composition and wound classification through the internationally recognised Wound Bed Preparation (WBP) protocol; these data can then be shared through a secure General Data Protection Regulation (GDPR)- and Health Insurance Portability and Accountability Act (HIPAA)-compliant data transfer system. This trial aims to test the reliability and precision of the AI medical device and its ability to aid health professionals in clinically evaluating wounds as efficiently remotely as at the bedside. Method: This non-randomised comparative clinical trial was conducted at the Clinica San Luca (Turin, Italy). Patients were divided into three groups: (i) patients with venous and arterial ulcers in the lower limbs; (ii) patients with diabetes and presenting with diabetic foot syndrome; and (iii) patients with pressure ulcers. Each wound was evaluated for area, depth, volume and WBP wound classification. Each patient was examined once and the results, analysed by the AI medical device, were compared against data obtained following visual evaluation by the physician and research team. The area and depth measurements were compared with a Kruskal–Wallis one-way analysis of variance of the obtained distributions (expected p-value>0.1 for both tests). The WBP classification and tissue segmentation were analysed by directly comparing the classification obtained by the AI medical device against that of the testing physician. Results: A total of 150 patients took part in the trial. The results demonstrated that the AI medical device's algorithm could acquire objective clinical parameters in a completely automated manner. The AI medical device reached 97% accuracy in the WBP classification and tissue segmentation analysis compared with that performed in person by the physician. Moreover, data regarding the measurements of the wounds, as analysed through the Kruskal–Wallis technique, showed that the data distribution proved comparable with the other methods of measurement previously clinically validated in the literature (p=0.9). Conclusion: These findings indicate that remote wound assessment undertaken by physicians is as effective through the AI medical device as bedside examination, and that the device was able to assess wounds and provide a precise WBP wound classification. Furthermore, there was no need for manual data entry, thereby reducing the risk of human error while preserving high-quality clinical diagnostic data.
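The statistical comparison described above can be illustrated with a small Kruskal–Wallis example using SciPy: the test asks whether device-derived and manually derived measurements could plausibly come from the same distribution, with a large p-value indicating no detectable difference. The data below are synthetic, not trial data.

```python
# Kruskal-Wallis comparison of two measurement methods on synthetic wound areas.
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(7)
manual_area_cm2 = rng.lognormal(mean=1.5, sigma=0.4, size=150)
device_area_cm2 = manual_area_cm2 * rng.normal(1.0, 0.05, size=150)  # small measurement noise

stat, p_value = kruskal(manual_area_cm2, device_area_cm2)
print(f"H = {stat:.3f}, p = {p_value:.3f}")   # p > 0.1 suggests comparable distributions
```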

    An Embryonics Inspired Architecture for Resilient Decentralised Cloud Service Delivery

    Data-driven artificial intelligence applications arising from Internet of Things technologies can have profound, wide-reaching societal benefits at the cross-section of the cyber and physical domains. Use cases are expanding rapidly. For example, smart homes and smart buildings provide intelligent monitoring, resource optimisation, safety, and security for their inhabitants; smart cities can manage transport, waste, energy, and crime on large scales; and smart manufacturing can autonomously produce goods through the self-management of factories and logistics. As these use cases expand further, the requirement to ensure data is processed accurately and in a timely manner becomes ever more crucial, as many of these applications are safety critical, where loss of life and economic damage are likely in the event of system failure. While the typical service delivery paradigm, cloud computing, is strong because it operates upon economies of scale, its physical distance from these applications creates network latency which is incompatible with such safety critical applications. To complicate matters further, the environments these applications operate in are becoming increasingly hostile, with resource-constrained and mobile wireless networking commonplace. These issues drive the need for new service delivery architectures which operate closer to, or even upon, the network devices, sensors and actuators which compose these IoT applications at the network edge. These hostile and resource-constrained environments require adaptation of traditional cloud service delivery models to decentralised mobile and wireless environments. Such architectures need to provide persistent service delivery in the face of a variety of internal and external changes: resilient decentralised cloud service delivery. While the current state of the art proposes numerous techniques to enhance the resilience of services in this manner, none provides an architecture capable of delivering data processing services in a cloud manner that is inherently resilient. Adopting techniques from autonomic computing, whose characteristics are resilient by nature, this thesis presents a biologically-inspired platform modelled on embryonics. Embryonic systems have an ability to self-heal and self-organise whilst showing the capacity to support decentralised data processing. An initial model for embryonics-inspired resilient decentralised cloud service delivery is derived according to both the decentralised cloud and resilience requirements given for this work. Next, this model is simulated using cellular automata, which illustrate the embryonic concept's ability to provide self-healing service delivery under varying degrees of system component loss. This highlights optimisation techniques, including application complexity bounds, differentiation optimisation, self-healing aggression, and varying system starting conditions, all of which can be adjusted to vary the resilience performance of the system depending upon different resource capabilities and environmental hostilities. Next, a proof-of-concept implementation is developed and validated which illustrates the efficacy of the solution. This proof-of-concept is evaluated on a larger scale, where batches of tests highlighted the different performance criteria and constraints of the system. One key finding was the considerable quantity of redundant messages produced under successful scenarios, which were helpful in terms of enabling resilience yet could increase network contention. Therefore, balancing these attributes is important according to the use case. Finally, graph-based resilience algorithms were executed across all tests to understand the structural resilience of the system and whether this enabled suitable measurement or prediction of the application's resilience. Interestingly, this study highlighted that although the system was not considered to be structurally resilient, the applications were still being executed in the face of many continued component failures. This shows that the autonomic embryonic functionality developed was succeeding in executing applications resiliently, illustrating that structural and application resilience do not necessarily coincide. Additionally, one graph metric, assortativity, was highlighted as being predictive of application resilience, although not of structural resilience.
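The assortativity finding can be illustrated with a small example: the sketch below computes degree assortativity for a random stand-in topology with networkx and runs a crude node-removal probe. It is not the embryonic platform or its evaluation, only the graph metric mentioned in the abstract applied to a synthetic graph.

```python
# Degree assortativity and a crude robustness probe on a synthetic topology.
# The graph is a random stand-in, not the embryonic platform.
import networkx as nx

G = nx.erdos_renyi_graph(n=100, p=0.05, seed=3)   # stand-in for a node topology
r = nx.degree_assortativity_coefficient(G)
print(f"degree assortativity: {r:.3f}")           # < 0: hubs tend to link to low-degree nodes

# Remove 10% of nodes and check the size of the remaining giant component.
G.remove_nodes_from(list(G.nodes)[:10])
giant = max(nx.connected_components(G), key=len)
print(f"giant component after failures: {len(giant)} / {G.number_of_nodes()} nodes")
```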

    Leveraging Relational Structure through Message Passing for Modelling Non-Euclidean Data

    Modelling non-Euclidean data is difficult since objects for comparison can be formed of different numbers of constituent parts with different numbers of relations between them, and traditional (Euclidean) methods are non-trivial to apply. Message passing enables such modelling by leveraging the structure of the relations within a given object (or between objects) in order to represent and compare structure in a vectorized form of fixed dimensions. In this work, we contribute novel message passing techniques that improve the state of the art for non-Euclidean modelling in a set of specifically chosen domains. In particular, (1) we introduce an attention-based, structure-aware global pooling operator for graph classification and demonstrate its effectiveness on a range of chemical property prediction benchmarks; we also show that our method outperforms state-of-the-art graph classifiers in a graph isomorphism test, and demonstrate the interpretability of our method with respect to the learned attention coefficients. (2) We propose a style similarity measure for Boundary Representations (B-Reps) that leverages the style signals in the second-order statistics of the activations in a pre-trained (unsupervised) 3D encoder, and learns their relative importance to an end-user through few-shot learning. Our approach differs from existing data-driven 3D style methods since it may be used in completely unsupervised settings. We show quantitatively that our proposed method with B-Reps is able to capture stronger style signals than alternative methods on meshes and point clouds, despite its significantly greater computational efficiency. We also show it is able to generate meaningful style gradients with respect to the input shape. (3) We introduce a novel message passing-based model of computation and demonstrate its effectiveness in expressing the complex dependencies of biological systems necessary to model life-like systems and to trace cell lineage during cancerous tumour growth, and we demonstrate the improvement over existing methods in terms of post-analysis.
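The contributed pooling operator is not described in detail in the abstract; the sketch below only shows the generic attention-based global pooling (readout) idea on which such operators build: score each node, softmax the scores, and take a weighted sum of node embeddings. Dimensions and parameters are arbitrary.

```python
# Generic attention-based global pooling over node embeddings (readout step).
# Illustrates the readout idea only, not the structure-aware operator
# contributed in the thesis.
import numpy as np

def attention_pool(H, w):
    """H: (num_nodes, d) node embeddings; w: (d,) attention parameter vector.
    Returns (graph_embedding, attention_weights)."""
    scores = H @ w                                    # one scalar score per node
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                              # softmax over nodes
    return alpha @ H, alpha                           # weighted sum of node embeddings

rng = np.random.default_rng(0)
H = rng.normal(size=(5, 8))                           # 5 nodes, 8-dim embeddings
w = rng.normal(size=8)
g, alpha = attention_pool(H, w)
print("graph embedding shape:", g.shape)              # (8,)
print("attention weights:", np.round(alpha, 3))       # interpretable per-node weights
```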

    Monitoring and modeling urban sprinkling: a new perspective of land take

    Building on the studies carried out to date on recent urban transformation dynamics, namely urban sprinkling, this thesis aims to investigate the phenomenon from different points of view to bring out its unsustainable character. The urban dispersion phenomenon, a specific characteristic of low-density territories, is examined through the sprinkling index, which includes new components in addition to the traditional settlement system components. It allows the evaluation of the shape of anthropic settlements and the distance between them, which often results in fragmentation of urban settlements and in turn generates landscape fragmentation. Nowadays, both in the proximity of large cities and in more external areas such as rural areas, there is often evidence of strong fragmentation of anthropic settlements in which, even if the amount of occupied surface (land take) may not seem worrying, its configuration determines a general decrease in ecological connectivity and landscape quality and a general degradation of soil functions. The general hypothesis is that fragmentation (of urban settlements, landscape and habitat) can become an indicator of land take. In fact, it is not enough to consider only the loss of natural or agricultural areas, but also the distribution of buildings in the landscape matrix, i.e., its spatial component. An emblematic case is that of the Basilicata region, whose transformation dynamics from the 1950s to the present day are investigated in this thesis. According to the latest report of the Italian Institute for Environmental Protection and Research (ISPRA 2020), the Basilicata region has a land consumption of only 3.15% of the entire regional surface. This indicator is in contrast with the shape of the anthropic settlements, which is fragmented and dispersed. It is essential that the effects of fragmentation, as well as ecosystem disaggregation, take on a "measurable" character, joining the list of indicators of urban and territorial quality, such as land take and land consumption, which the European Union addresses to national communities and which are currently considered essential and decisive in highlighting the efficiency or inefficiency of environmental and landscape management. It is crucial to understand and investigate which drivers have been, and will be, the most influential on these dynamics that contribute intrinsically to land consumption, and to define the guidelines or thresholds needed to contain this pulverized and disordered dissemination of anthropic settlements.
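As an illustration of quantifying settlement dispersion (not the thesis' sprinkling index, which is not specified in the abstract), the sketch below computes the Clark–Evans nearest-neighbour ratio for synthetic building centroids: values near 1 indicate a random pattern, below 1 clustering, and above 1 dispersion.

```python
# Clark-Evans nearest-neighbour ratio as a simple dispersion indicator.
# Coordinates are synthetic; this is not the sprinkling index from the thesis.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(5)
area_km2 = 100.0
points = rng.random((300, 2)) * 10.0          # building centroids in a 10 x 10 km window

tree = cKDTree(points)
dists, _ = tree.query(points, k=2)            # k=2: nearest neighbour other than the point itself
mean_nn = dists[:, 1].mean()
expected_random = 0.5 / np.sqrt(len(points) / area_km2)
clark_evans = mean_nn / expected_random        # <1 clustered, ~1 random, >1 dispersed
print(f"Clark-Evans ratio: {clark_evans:.2f}")
```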