91 research outputs found

    A service broker for Intercloud computing

    This thesis aims to assist users in finding the most suitable Cloud resources, taking into account their functional and non-functional SLA requirements. A key feature of the work is a Cloud service broker acting as a mediator between consumers and Clouds. The research involves the implementation and evaluation of two SLA-aware match-making algorithms using a simulation environment. The work also investigates the optimal deployment of Multi-Cloud workflows on Intercloud environments.
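    The abstract does not specify the two match-making algorithms, so the following is only a minimal sketch of the general idea of SLA-aware matching: filter Cloud offers by functional requirements, enforce non-functional SLA thresholds, and rank the survivors. All attribute names and values are hypothetical.

```python
# Illustrative sketch only: offer attributes and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class CloudOffer:
    provider: str
    vm_type: str            # functional capability
    availability: float     # non-functional SLA attribute (%)
    latency_ms: float       # non-functional SLA attribute
    price_per_hour: float

def match_offers(offers, required_vm_type, min_availability, max_latency_ms):
    """Keep offers that satisfy functional and SLA constraints, then rank by price."""
    feasible = [
        o for o in offers
        if o.vm_type == required_vm_type
        and o.availability >= min_availability
        and o.latency_ms <= max_latency_ms
    ]
    return sorted(feasible, key=lambda o: o.price_per_hour)

offers = [
    CloudOffer("CloudA", "m.large", 99.95, 40.0, 0.12),
    CloudOffer("CloudB", "m.large", 99.50, 25.0, 0.10),
]
print(match_offers(offers, "m.large", 99.9, 50.0))  # only CloudA satisfies the SLA
```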

    Development of a pattern library and a decision support system for building applications in the domain of scientific workflows for e-Science

    Karastoyanova et al. created eScienceSWaT (eScience SoftWare Engineering Technique), which aims to provide a user-friendly and systematic approach for creating applications for scientific experiments in the domain of e-Science. Even when eScienceSWaT is used, many choices about the scientific experiment model, IT experiment model and infrastructure still have to be made. Therefore, a collection of best practices for building scientific experiments is required. Additionally, these best practices need to be connected and organized. Finally, a Decision Support System (DSS) that is based on the best practices and enables decisions about the various choices for e-Science solutions needs to be developed. Hence, various e-Science applications are examined in this thesis. Best practices are recognised by abstracting from the identified problem-solution pairs in the e-Science applications. Knowledge and best practices from natural science, computer science and software engineering are stored in patterns. Furthermore, relationship types among patterns are worked out. Afterwards, relationships among the patterns are defined and the patterns are organized in a pattern library. In addition, the concept for a DSS that provisions the patterns, and its prototypical implementation, are presented.
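    As a rough illustration of how such a pattern library might be organized (the thesis's actual schema and relationship types are not given in the abstract, so all names here are hypothetical), a minimal sketch:

```python
# Minimal sketch of a pattern library with typed relationships; pattern names
# and relationship types are hypothetical placeholders.
from dataclasses import dataclass, field

@dataclass
class Pattern:
    name: str
    problem: str
    solution: str
    relations: dict = field(default_factory=dict)  # relation type -> [pattern names]

class PatternLibrary:
    def __init__(self):
        self.patterns = {}

    def add(self, pattern):
        self.patterns[pattern.name] = pattern

    def relate(self, source, relation, target):
        self.patterns[source].relations.setdefault(relation, []).append(target)

    def recommend(self, name, relation="refined_by"):
        """DSS-style lookup: follow one relationship type from a chosen pattern."""
        return self.patterns[name].relations.get(relation, [])

lib = PatternLibrary()
lib.add(Pattern("ParameterSweep", "explore a parameter space", "run the workflow once per parameter set"))
lib.add(Pattern("CloudBursting", "local resources exhausted", "offload tasks to a cloud"))
lib.relate("ParameterSweep", "refined_by", "CloudBursting")
print(lib.recommend("ParameterSweep"))  # ['CloudBursting']
```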

    A formal architecture-centric and model driven approach for the engineering of science gateways

    From n-Tier client/server applications, to more complex academic Grids, or even the most recent and promising industrial Clouds, the last decade has witnessed significant developments in distributed computing. In spite of this conceptual heterogeneity, Service-Oriented Architecture (SOA) seems to have emerged as the common underlying abstraction paradigm, even though different standards and technologies are applied across application domains. Suitable access to data and algorithms resident in SOAs via so-called ‘Science Gateways’ has thus become a pressing need in order to realize the benefits of distributed computing infrastructures. In an attempt to inform service-oriented systems design and development in Grid-based biomedical research infrastructures, the applicant has consolidated work from three complementary experiences in European projects, which have developed and deployed large-scale, production-quality infrastructures and, more recently, Science Gateways to support research in breast cancer, pediatric diseases and neurodegenerative pathologies respectively. In analyzing the requirements from these biomedical applications, the applicant was able to elaborate on commonly faced issues in Grid development and deployment, while proposing an adapted and extensible engineering framework. Grids implement a number of protocols, applications and standards, and attempt to virtualize and harmonize access to them. Most Grid implementations are therefore instantiated as superposed software layers, often resulting in a low quality of service and quality of applications, thus making design and development increasingly complex and rendering classical software engineering approaches unsuitable for Grid developments. The applicant proposes the application of a formal Model-Driven Engineering (MDE) approach to service-oriented developments, making it possible to define Grid-based architectures and Science Gateways that satisfy quality of service requirements, execution platform and distribution criteria at design time. A novel investigation is thus presented on the applicability of the resulting grid MDE (gMDE) to specific examples, and conclusions are drawn on the benefits of this approach and its possible application to other areas, in particular Distributed Computing Infrastructure (DCI) interoperability, Science Gateways and Cloud architecture developments.
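    The abstract does not describe concrete gMDE artefacts, so the following is only a toy illustration of the general idea of design-time checking of QoS requirements against a declarative service model; all model elements, attributes and numbers are invented.

```python
# Toy sketch of a design-time QoS check over a declarative service model.
# The model structure and platform profiles are hypothetical.
service_model = {
    "name": "ImageSegmentationService",
    "interface": ["submitJob", "getStatus", "fetchResult"],
    "qos": {"max_response_time_s": 2.0, "min_availability": 0.999},
    "deployment": {"platform": "grid", "replicas": 2},
}

def validate_qos(model, platform_profiles):
    """Reject a model whose target platform cannot meet its QoS requirements."""
    profile = platform_profiles[model["deployment"]["platform"]]
    ok_time = profile["typical_response_time_s"] <= model["qos"]["max_response_time_s"]
    ok_avail = profile["availability"] >= model["qos"]["min_availability"]
    return ok_time and ok_avail

platform_profiles = {"grid": {"typical_response_time_s": 1.5, "availability": 0.9995}}
print(validate_qos(service_model, platform_profiles))  # True: generation may proceed
```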

    Novel optimization schemes for service composition in the cloud using learning automata-based matrix factorization

    A thesis submitted to the University of Bedfordshire, in partial fulfilment of the requirements for the degree of Doctor of Philosophy. Service Oriented Computing (SOC) provides a framework for the realization of loosely coupled service-oriented applications (SOA). Web services are central to the concept of SOC. They possess several benefits which are useful to SOA, e.g. encapsulation, loose coupling and reusability. Using web services, an application can embed its functionalities within the business processes of other applications. This is made possible through web service composition. Web services are composed to provide more complex functions for a service consumer in the form of a value-added composite service. Currently, research into how web services can be composed to yield QoS (Quality of Service) optimal composite services has gathered significant attention. However, the number of services has risen, thereby increasing the number of possible service combinations and also amplifying the impact of the network on composite service performance. QoS-based service composition in the cloud addresses two important sub-problems: prediction of network performance between web service nodes in the cloud, and QoS-based web service composition. We model the former as a prediction problem, while the latter is modelled as an NP-hard optimization problem due to its complex, constrained and multi-objective nature. This thesis contributes to the prediction problem by presenting a novel learning automata-based non-negative matrix factorization algorithm (LANMF) for estimating the end-to-end network latency of a composition in the cloud. LANMF encodes each web service node as an automaton, which allows it to estimate its network coordinate in such a way that prediction error is minimized. Experiments indicate that LANMF is more accurate than current approaches. The thesis also contributes to the QoS-based service composition problem by proposing four evolutionary algorithms: a network-aware genetic algorithm (INSGA), a K-means based genetic algorithm (KNSGA), a multi-population particle swarm optimization algorithm (NMPSO), and a non-dominated sort fruit fly algorithm (NFOA). The algorithms adopt different evolutionary strategies coupled with the LANMF method to search for low-latency and QoS-optimal solutions. They also employ a unique constraint handling method used to penalize solutions that violate user-specified QoS constraints. Experiments demonstrate the efficiency and scalability of the algorithms in a large-scale environment; the algorithms also outperform other evolutionary algorithms in terms of optimality and scalability. In addition, the thesis contributes to QoS-based web service composition in a dynamic environment. This is motivated by the ineffectiveness of the four proposed algorithms in a dynamically changing QoS environment, such as a real-world scenario. Hence, we propose a new cellular automata-based genetic algorithm (CellGA) to address the issue. Experimental results show the effectiveness of CellGA in solving QoS-based service composition in a dynamic QoS environment.
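    LANMF couples learning automata with non-negative matrix factorization; the abstract does not give its update rules, so the sketch below illustrates only the underlying idea of factorizing a partially observed node-to-node latency matrix so that unmeasured latencies can be estimated. The rank, learning rate and data are invented.

```python
# Hedged sketch: plain non-negative matrix factorization of a partially
# observed latency matrix D ≈ W @ H (not the thesis's LANMF itself).
import numpy as np

def nmf_latency(D, mask, rank=2, steps=2000, lr=0.005, seed=0):
    """Fit non-negative W, H to the observed entries of D (where mask == 1)."""
    rng = np.random.default_rng(seed)
    n, m = D.shape
    W = rng.random((n, rank))
    H = rng.random((rank, m))
    for _ in range(steps):
        E = mask * (W @ H - D)              # error on observed entries only
        W = np.maximum(W - lr * E @ H.T, 1e-9)
        H = np.maximum(H - lr * W.T @ E, 1e-9)
    return W @ H                            # dense estimate incl. missing latencies

# Latencies in ms between three hypothetical service nodes; the (2,3) pair is unmeasured.
D = np.array([[0.0, 20.0, 35.0],
              [20.0, 0.0, 0.0],
              [35.0, 0.0, 0.0]])
mask = np.array([[0, 1, 1],
                 [1, 0, 0],
                 [1, 0, 0]])
print(nmf_latency(D, mask).round(1))
```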

    Artificial intelligence and smart vision for building and construction 4.0: Machine and deep learning methods and applications

    This article presents a state-of-the-art review of the applications of Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) in the building and construction industry 4.0, covering architectural design and visualization; material design and optimization; structural design and analysis; offsite manufacturing and automation; construction management, progress monitoring, and safety; smart operation, building management and health monitoring; and durability, life cycle analysis, and circular economy. The paper presents a unique perspective on applications of AI/DL/ML in these domains across the complete building lifecycle, from the conceptual stage, through design, construction, operation and maintenance, until the end of life. Furthermore, data collection strategies using smart vision and sensors, data cleaning methods (post-processing), and data storage for developing these models are discussed, and the challenges in model development and strategies to overcome them are elaborated. Future trends in these domains and possible research avenues are also presented.

    Towards A Computational Intelligence Framework in Steel Product Quality and Cost Control

    Steel is a fundamental raw material for all industries. It is widely used in various fields, including construction, bridges, ships, containers, medical devices and cars. However, the production process of iron and steel is very complex, consisting of four stages: ironmaking, steelmaking, continuous casting and rolling. It is also extremely complicated to control the quality of steel across the full manufacturing process, so quality control is considered a major challenge for the whole steel industry. This thesis studies quality control, taking the case of Nanjing Iron and Steel Group, and provides new approaches for quality analysis, management and control in the industry. At present, Nanjing Iron and Steel Group has established a quality management and control system that oversees many systems involved in steel manufacturing. It places high statistical demands on business professionals, resulting in limited use of the system. A large amount of quality data has been collected in each system. At present, all systems mainly focus on processing and analysing data after the manufacturing process, and product quality problems are mainly detected by sampling-based experiments. This approach cannot detect product quality issues or predict hidden quality problems in a timely manner. In the quality control system, the responsibilities and functions of the different information systems involved are intricate. Each information system is merely responsible for storing the data of its corresponding functions. Hence, the data in each information system is relatively isolated, forming data islands. Iron and steel production belongs to the process industry, and the data in multiple information systems can be combined to analyse and predict product quality in depth and to provide early warnings. Therefore, it is necessary to introduce new product quality control methods in the steel industry. With the waves of Industry 4.0 and intelligent manufacturing, intelligent technology has also been introduced in the field of quality control to improve the competitiveness of iron and steel enterprises. Applying intelligent technology can generate accurate quality analysis and optimal prediction results based on the data distributed across the factory, and determine online adjustments to the production process. This not only improves product quality control but also helps reduce product costs. Motivated by this, the thesis provides an in-depth discussion in three chapters: (1) how to use artificial intelligence algorithms to evaluate the quality grade of scrap steel used as raw material is studied in Chapter 3; (2) the probability that longitudinal cracks occur on the surface of continuous casting slabs is studied in Chapter 4; (3) the prediction of mechanical properties of finished steel plates is studied in Chapter 5. These three chapters serve as technical support for quality control in iron and steel production.
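    As a hedged illustration of the Chapter 4 style task, the sketch below fits a classifier that outputs a longitudinal-crack probability from casting process features; the features, data and model choice are hypothetical stand-ins, not the methods actually used in the thesis.

```python
# Illustrative sketch: probability of a longitudinal surface crack on a
# continuous-casting slab, predicted from made-up process features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 500
# Hypothetical features: mould level fluctuation (mm), casting speed (m/min), superheat (°C).
X = np.column_stack([
    rng.normal(3.0, 1.0, n),
    rng.normal(1.2, 0.2, n),
    rng.normal(25.0, 8.0, n),
])
# Synthetic labels: cracks more likely with large fluctuation and high superheat.
logits = 0.8 * (X[:, 0] - 3.0) + 0.05 * (X[:, 2] - 25.0) - 1.5
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

model = LogisticRegression().fit(X, y)
new_slab = np.array([[4.5, 1.3, 40.0]])
print("crack probability:", model.predict_proba(new_slab)[0, 1])
```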

    The University Defence Research Collaboration In Signal Processing

    This chapter describes the development of algorithms for the automatic detection of anomalies in multi-dimensional, undersampled and incomplete datasets. The challenge in this work is to identify and classify behaviours as normal or abnormal, safe or threatening, from an irregular and often heterogeneous sensor network. Many defence and civilian applications can be modelled as complex networks of interconnected nodes with unknown or uncertain spatio-temporal relations. The behaviour of such heterogeneous networks can exhibit dynamic properties, reflecting evolution in both network structure (new nodes appearing and existing nodes disappearing) and inter-node relations. The UDRC work has addressed not only the detection of anomalies, but also the identification of their nature and their statistical characteristics. Normal patterns and changes in behaviour have been incorporated to provide an acceptable balance between true positive rate, false positive rate, performance and computational cost. Data quality measures have been used to ensure the models of normality are not corrupted by unreliable and ambiguous data. The context of the activity of each node in complex networks offers an even more efficient anomaly detection mechanism. This has allowed the development of efficient approaches which not only detect anomalies but also go on to classify their behaviour.
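    As a generic illustration of the trade-off described above (not the UDRC algorithms themselves), the sketch below scores observations against a simple Gaussian model of normality and sweeps a detection threshold to expose the balance between true positive rate and false positive rate; the data and model are synthetic.

```python
# Generic anomaly-scoring illustration with a threshold sweep over TPR/FPR.
import numpy as np

rng = np.random.default_rng(1)
normal = rng.normal(0.0, 1.0, (1000, 3))        # training data: normal behaviour only
mu, sigma = normal.mean(axis=0), normal.std(axis=0)

def anomaly_score(x):
    """Distance-style score under an independent-Gaussian model of normality."""
    return np.sqrt((((x - mu) / sigma) ** 2).sum(axis=1))

# Labelled validation data (hypothetical): 0 = normal, 1 = anomalous.
X_val = np.vstack([rng.normal(0, 1, (200, 3)), rng.normal(3, 1, (20, 3))])
y_val = np.array([0] * 200 + [1] * 20)

scores = anomaly_score(X_val)
for thr in np.linspace(1.0, 5.0, 9):
    pred = scores > thr
    tpr = pred[y_val == 1].mean()
    fpr = pred[y_val == 0].mean()
    print(f"threshold={thr:.1f}  TPR={tpr:.2f}  FPR={fpr:.2f}")
```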

    Medium access control protocol for visible light communication in vehicular communication networks

    Recent achievements in the automotive industry related to lighting apparatus include the use of LED or laser technology to illuminate the vehicle environment. This advancement has resulted in greater energy efficiency and increased safety through selective illumination segments. A secondary effect was the creation of a new field for researchers, who can exploit fast LED modulation using a Pulse Width Modulation (PWM) signal. Using LEDs to encode and transmit data is a relatively new and innovative concept. In a related field, there have been advancements in vehicular communication using radio frequencies at 2.4 or 5 GHz. This research focuses mainly on a field in which visible light augments or replaces radio frequency communication between vehicles. It also investigates the effect of asymmetry on network performance when using Visible Light Communication (VLC) in vehicular networks. Different types of asymmetry were defined and tested in real-world simulation experiments. The results showed that asymmetry has a negative influence on network performance, although the effect is not significant. The main focus of the research is to develop a new, lightweight Medium Access Control (MAC) protocol for VLC in vehicular networks. To develop this MAC protocol, special software was developed on top of an existing Network Simulation Environment (NSE). The new VLC MAC protocol for Vehicle to Vehicle (V2V) communication was benchmarked using a defined set of metrics. The benchmark was conducted as a set of designed simulation experiments against the reference IEEE 802.11b MAC protocol. Both protocols used a newly defined VLC-equipped vehicle model. Each simulation experiment depicted a specific network and traffic situation; the total number of scenarios was eleven. The last set of simulations was conducted in real-world scenarios on the virtual streets of Suffolk, VA, USA. Using the defined metrics, the tests showed that the new VLC MAC protocol for V2V outperforms the reference protocol.

    Recent achievements in the automotive industry concerning lighting equipment include the use of LED or laser light sources to illuminate the surroundings. This brings savings in energy consumption as well as increased traffic safety. LED lighting is more uniform than conventional bulbs, so the illumination is more even and precise. Since LEDs are selective, it is possible to choose which segment of the road to illuminate. It is precisely this flexibility of LEDs that opens a new space for researchers, who can use a PWM signal to modulate data. PWM is a special signal with a variable pulse width at its output. Researchers and scientists can use LEDs to encode and transmit data between vehicles. The advantage of communication in the visible part of the electromagnetic spectrum (VLC) lies in the fact that this part of the spectrum is not licensed and is open for free use. In addition, visible, non-intense light has no negative biological effects. When a PWM signal is used for modulation, the existing light output and its function (illuminating the road) are not impaired, since the human eye cannot detect oscillations of such a high frequency (around 5 kHz). On the receiving side, the components that can pick up the transmitted signal are photodiodes or cameras; cameras are already present on modern vehicles in the form of front cameras or rear parking cameras, so the technology is already available on modern vehicles. In a related field, scientists are working on inter-vehicle communication using lower-frequency radio waves at 2.4 or 5 GHz. Communication between vehicles is the subject of standardization, and many countries already prescribe rules for the mandatory installation of equipment for this form of communication. The advantage of such a concept is the exchange of data, from entertainment content to critical safety information, e.g. about an upcoming location where a traffic accident has occurred. This research focuses on extending or replacing radio communication with communication using the visible part of the spectrum (e.g. LEDs and cameras). One of the main shortcomings of this concept is the lack of an adequate, specialized Medium Access Control (MAC) protocol. Another problem is the unknown effect of asymmetry in VLC communication on network performance. This research identified and classified different types of asymmetry, and each type was tested in a simulation experiment in realistic scenarios. It was shown that asymmetry negatively affects network performance, but the effect is not significant, since it causes less than 0.5% of messages to fail. The main focus of the research is the development of a new, simplified MAC protocol for VLC communication between vehicles. To develop the new MAC protocol on top of VLC technology in vehicular networks, it was necessary to build a new development environment based on existing network simulators. The new VLC MAC protocol for inter-vehicle communication was tested using a defined set of metrics; the tests were carried out as simulation experiments comparing the performance of the new protocol with that of a reference protocol, the IEEE 802.11b MAC protocol. As part of this work, a vehicle model equipped with VLC technology was defined and used for both protocols in the simulation experiments. Eleven simulation experiments were defined, each describing a specific network communication and traffic situation. The final simulation scenarios use a real-world environment, the street network of Suffolk, VA, USA. On these streets, vehicles moved and exchanged data using the complete ISO/OSI network stack with the MAC sublayer replaced. The development environment includes a precise check of physical characteristics at the level of individual light-ray paths; this precision was necessary to make the simulations as faithful as possible to real systems. Because this involves a large number of calculations, an ordinary computer is not sufficient for running the simulation experiments, so the experiments were run on the computing cluster of the University of Zagreb. Using the defined metrics, the research showed that the new VLC MAC protocol for inter-vehicle communication is better than the reference protocol.
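    The abstract does not describe the protocol's mechanics, so the sketch below is not the thesis's protocol; it is only a generic, hypothetical listen-before-talk decision loop with random backoff, of the kind a lightweight V2V MAC might use. The slot and contention-window parameters are made up.

```python
# Generic listen-before-talk MAC decision with random backoff (illustrative only).
import random

SLOT_MS = 2          # hypothetical slot duration
MAX_BACKOFF = 8      # hypothetical contention window, in slots

def try_send(channel_busy, frame, now_ms, state):
    """Return (action, wait_ms); state carries the remaining backoff slots."""
    if channel_busy(now_ms):
        return ("defer", SLOT_MS)
    if state["backoff"] > 0:
        state["backoff"] -= 1
        return ("wait", SLOT_MS)
    state["backoff"] = random.randint(0, MAX_BACKOFF - 1)  # re-arm for the next frame
    return ("transmit", 0)

# Tiny usage example with a channel that is busy for the first 6 ms.
state = {"backoff": random.randint(0, MAX_BACKOFF - 1)}
busy = lambda t: t < 6
t = 0
while True:
    action, wait = try_send(busy, b"hello", t, state)
    print(t, action)
    if action == "transmit":
        break
    t += wait
```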

    The 8th International Conference on Time Series and Forecasting

    The aim of ITISE 2022 is to create a friendly environment that could lead to the establishment or strengthening of scientific collaborations and exchanges among attendees. Therefore, ITISE 2022 is soliciting high-quality original research papers (including significant works-in-progress) on any aspect of time series analysis and forecasting, in order to motivate the generation and use of new knowledge, computational techniques and methods for forecasting in a wide range of fields.