
    Generic Methods for Adaptive Management of Service Level Agreements in Cloud Computing

    The adoption of cloud computing to build and deliver application services has been nothing less than phenomenal. Service oriented systems are being built using disparate sources composed of web services, replicable datastores, messaging, monitoring and analytics functions and more. Clouds augment these systems with advanced features such as high availability, customer affinity and autoscaling on a fair pay-per-use cost model. The challenge lies in taking the utility paradigm of the cloud beyond its current level of exploitation. Major trends show that multi-domain synergies are creating added-value service propositions. This raises two questions on autonomic behaviors, which are specifically addressed by this thesis. The first question deals with mechanism design that brings the customer and provider(s) together in the procurement process. The purpose is that, given customer requirements for quality of service and other non-functional properties, service dependencies are efficiently resolved and legally stipulated. The second question deals with effective management of cloud infrastructures such that commitments to customers are fulfilled and the infrastructure is optimally operated in accordance with provider policies. This thesis finds motivation in Service Level Agreements (SLAs) to answer these questions. The role of SLAs is explored as instruments to build and maintain trust in an economy where services are increasingly interdependent. The thesis takes a holistic approach and develops generic methods to automate SLA lifecycle management by identifying and solving the relevant research problems. The methods afford adaptiveness in a changing business landscape and can be localized through policy-based controls. A thematic vision that emerges from this work is that business models, services and the delivery technology are independent concepts that can be finely knitted together by SLAs. Experimental evaluations support the message of this thesis: exploiting SLAs as foundations for market innovation and infrastructure governance indeed holds win-win opportunities for both cloud customers and cloud providers.
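To make the idea of machine-actionable SLAs concrete, below is a minimal, hypothetical Python sketch of an SLA as a data object with a policy-driven compliance check; the field names, thresholds, and the autoscaling reaction are illustrative assumptions, not constructs taken from the thesis.

```python
# Hypothetical sketch: an SLA with service level objectives (SLOs) and a
# compliance check that a provider-side policy could react to. All names and
# thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class SLO:
    metric: str            # e.g. "latency_ms" or "availability"
    threshold: float       # agreed bound for the metric
    higher_is_better: bool

@dataclass
class SLA:
    customer: str
    provider: str
    slos: list

    def violations(self, measurements: dict) -> list:
        """Return the SLOs breached by the current measurements."""
        breached = []
        for slo in self.slos:
            value = measurements.get(slo.metric)
            if value is None:
                continue
            ok = value >= slo.threshold if slo.higher_is_better else value <= slo.threshold
            if not ok:
                breached.append(slo)
        return breached

# Example: a provider-side policy reacts to a breach by scaling out (illustrative).
sla = SLA("acme", "cloud-co", [SLO("latency_ms", 200, False), SLO("availability", 0.999, True)])
measured = {"latency_ms": 250, "availability": 0.9995}
for slo in sla.violations(measured):
    print(f"SLO breached: {slo.metric} -> trigger autoscaling per provider policy")
```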

    Computation in Complex Networks

    Complex networks are one of the most challenging research foci across disciplines including physics, mathematics, biology, medicine, engineering, and computer science. Interest in complex networks keeps growing, due to their ability to model many everyday systems, such as technology networks, the Internet, and communication, chemical, neural, social, political and financial networks. The Special Issue “Computation in Complex Networks” of Entropy offers a multidisciplinary view on how some complex systems behave, providing a collection of original and high-quality papers within the research fields of: • Community detection • Complex network modelling • Complex network analysis • Node classification • Information spreading and control • Network robustness • Social networks • Network medicine.
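As a concrete taste of one of the listed topics, the sketch below runs community detection on a toy benchmark graph using the networkx library; the graph and the modularity-based method are illustrative choices, not drawn from the Special Issue's papers.

```python
# Minimal community detection sketch on a classic benchmark network, assuming
# the networkx library is available; data and method are illustrative only.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.karate_club_graph()                      # classic toy benchmark network
communities = greedy_modularity_communities(G)  # modularity-maximising partition
for i, community in enumerate(communities):
    print(f"community {i}: {sorted(community)}")

# Modularity of the found partition, a standard quality measure for communities.
print("modularity:", nx.algorithms.community.modularity(G, communities))
```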

    Split Federated Learning for 6G Enabled-Networks: Requirements, Challenges and Future Directions

    Sixth-generation (6G) networks are anticipated to intelligently support a wide range of smart services and innovative applications. Such a context urges heavy usage of Machine Learning (ML) techniques, particularly Deep Learning (DL), to foster innovation and ease the deployment of intelligent network functions/operations that can fulfill the various requirements of the envisioned 6G services. Specifically, collaborative ML/DL consists of deploying a set of distributed agents that collaboratively train learning models without sharing their data, thus improving data privacy and reducing time/communication overhead. This work provides a comprehensive study on how collaborative learning can be effectively deployed over 6G wireless networks. In particular, our study focuses on Split Federated Learning (SFL), a recently emerged technique that promises better performance than existing collaborative learning approaches. We first provide an overview of three emerging collaborative learning paradigms, namely federated learning, split learning, and split federated learning, as well as an overview of 6G networks, their main vision, and the timeline of key developments. We then highlight the need for split federated learning in the upcoming 6G networks across every aspect, including 6G technologies (e.g., intelligent physical layer, intelligent edge computing, zero-touch network management, intelligent resource management) and 6G use cases (e.g., smart grid 2.0, Industry 5.0, connected and autonomous systems). Furthermore, we review existing datasets along with frameworks that can help in implementing SFL for 6G networks. We finally identify key technical challenges, open issues, and future research directions related to SFL-enabled 6G networks.
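For intuition about how the split federated learning workflow described above fits together, here is a minimal sketch assuming PyTorch and synthetic data: each client trains the layers before a cut point, a server trains the layers after it, and the client-side parts are federated-averaged between rounds. Layer sizes, the cut position, and the number of clients are illustrative assumptions, not values from the surveyed works.

```python
# Minimal Split Federated Learning (SFL) sketch with synthetic data (PyTorch).
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
NUM_CLIENTS, ROUNDS, CUT_DIM = 3, 5, 16

def make_client_part():   # layers before the cut, kept on each client device
    return nn.Sequential(nn.Linear(8, CUT_DIM), nn.ReLU())

server_part = nn.Sequential(nn.Linear(CUT_DIM, 1))           # layers after the cut
clients = [make_client_part() for _ in range(NUM_CLIENTS)]
data = [(torch.randn(32, 8), torch.randn(32, 1)) for _ in range(NUM_CLIENTS)]
loss_fn = nn.MSELoss()
server_opt = torch.optim.SGD(server_part.parameters(), lr=0.1)

for _ in range(ROUNDS):
    for client, (x, y) in zip(clients, data):
        client_opt = torch.optim.SGD(client.parameters(), lr=0.1)
        server_opt.zero_grad(); client_opt.zero_grad()
        smashed = client(x)                      # "smashed data" sent to the server
        loss = loss_fn(server_part(smashed), y)  # server completes forward pass
        loss.backward()                          # gradients flow back to the client part
        server_opt.step()
        client_opt.step()
    # Federated averaging of the client-side sub-models (the "Fed" part of SFL).
    avg = copy.deepcopy(clients[0].state_dict())
    for key in avg:
        avg[key] = torch.stack([c.state_dict()[key] for c in clients]).mean(dim=0)
    for c in clients:
        c.load_state_dict(avg)
```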

    ICE-B 2010:proceedings of the International Conference on e-Business

    The International Conference on e-Business, ICE-B 2010, aims at bringing together researchers and practitioners who are interested in e-Business technology and its current applications. This technology relates not only to lower-level technological issues, such as technology platforms and web services, but also to higher-level issues, such as context awareness and enterprise models, as well as the peculiarities of the different possible applications of such technology. These are all areas of theoretical and practical importance within the broad scope of e-Business, whose growing importance can be seen from the increasing interest of the IT research community. The areas of the conference are: (i) e-Business applications; (ii) Enterprise engineering; (iii) Mobility; (iv) Business collaboration and e-Services; (v) Technology platforms. Contributions range from research-driven to more practice-oriented work, reflecting innovative results in these areas. ICE-B 2010 received 66 submissions, of which 9% were accepted as full papers; additionally, 27% were presented as short papers and 17% as posters. All papers presented at the conference venue were included in the SciTePress Digital Library. Revised best papers are published by Springer-Verlag in a CCIS Series book.

    Recent Advances in Social Data and Artificial Intelligence 2019

    The importance and usefulness of subjects and topics involving social data and artificial intelligence are becoming widely recognized. This book contains invited review, expository, and original research articles dealing with, and presenting state-of-the-art accounts of, the recent advances in the subjects of social data and artificial intelligence, and potentially their links to Cyberspace.

    Semantic discovery and reuse of business process patterns

    Patterns currently play an important role in modern information systems (IS) development, although their use has mainly been restricted to the design and implementation phases of the development lifecycle. Given the increasing significance of business modelling in IS development, patterns have the potential to provide a viable solution for promoting the reusability of recurrent generalized models in the very early stages of development. As a statement of research in progress, this paper focuses on business process patterns and proposes an initial methodological framework for the discovery and reuse of business process patterns within the IS development lifecycle. The framework borrows ideas from the domain engineering literature and proposes the use of semantics to drive both the discovery of patterns and their reuse.

    Proceedings der 11. Internationalen Tagung Wirtschaftsinformatik (WI2013) - Band 1

    The two volumes represent the proceedings of the 11th International Conference on Wirtschaftsinformatik, WI2013 (Business Information Systems). They include 118 papers from ten research tracks, a general track and the Student Consortium. The selection of submissions was subject to a double-blind procedure with three reviews per paper and an overall acceptance rate of 25 percent. WI2013 was organized at the University of Leipzig between February 27th and March 1st, 2013, and followed the main themes Innovation, Integration and Individualization. Tracks included: Track 1: Individualization and Consumerization; Track 2: Integrated Systems in Manufacturing Industries; Track 3: Integrated Systems in Service Industries; Track 4: Innovations and Business Models; Track 5: Information and Knowledge Management.

    Genetic Algorithms in Software Architecture Synthesis

    The design of software architectures is a critical phase of software development, since the architecture defines the backbone of the software: how the program is divided into components and how those components are connected. Software can usually be implemented in many working ways, but a working implementation does not guarantee that the software is also of high quality; quality is ensured by a carefully and skilfully designed architecture. Designing an architecture is challenging. The design must take into account the requirements of many different stakeholders (e.g., users, implementers, marketing) and consider how as many of those requirements as possible can be realized in the architecture. Architecture design therefore calls for an experienced software architect who has accumulated know-how over years of software projects. In addition to such experience-based knowledge, architecture design practices have been collected into catalogues that present proven solutions, so-called architecture styles and design patterns, to common design problems. Architecture design can thus be viewed as searching, guided by experience, for the best possible combination of design patterns and styles; in other words, it is a kind of optimization problem. Software keeps growing more complex, and as applications become more complex, architecture design becomes harder and more time-consuming. Basing the design on tacit knowledge and architects' experience makes the process slower and less transparent, so automating architecture design would bring considerable savings; know-how would also not be lost with personnel turnover, since the design could be reproduced from scratch at any time. This dissertation investigates how this search for the best possible solution (i.e., the application of design patterns and styles) could be automated. Complex optimization problems are commonly attacked with search algorithms that explore the search space using some randomized method; one of the most popular is the genetic algorithm. Genetic algorithms work on a small set of solutions at a time and look for the best solution by combining parts of found solutions and by mutating them. A fitness value is computed for each solution, and, imitating natural selection, the best candidates are kept and developed further while the worst are discarded. Applying search algorithms to software engineering problems, e.g., software design, testing and project management, is called search-based software engineering. This dissertation belongs to the field of search-based software design and studies software architecture synthesis with genetic algorithms. Architecture synthesis starts from a so-called null architecture, which fulfils the functional requirements of the system but takes no stand on quality requirements. Quality is then improved by adding architecture styles and design patterns to this starting architecture; in this dissertation, modifiability, efficiency and understandability are used for quality evaluation. The end result is a proposed architecture that fulfils the functional requirements and is also of high quality.
Genetic algorithms have not previously been applied to design problems of this level, so the work develops a new way of modelling an architecture for a genetic algorithm as well as a formula for computing the quality of an architecture. Beyond the basic implementation, particular features of the genetic algorithm, namely the crossover operation and the fitness function, are studied in more detail and alternative implementations are developed for them. The results of the case studies show that, at present, genetic-algorithm-based architecture synthesis produces solutions roughly at the level of a third-year software engineering student.
This thesis presents an approach for synthesizing software architectures with genetic algorithms. Previously in the literature, genetic algorithms have mostly been used to improve existing architectures. The method presented here, however, focuses on upstream design. The chosen genetic construction of software architectures is based on a model which contains information on functional requirements only. Architecture styles and design patterns are used to transform the initial high-level model into a more detailed design. Quality attributes, here modifiability, efficiency and complexity, are encoded in the algorithm's fitness function for evaluating the produced solutions. The final solution is given as a UML class diagram. While the main contribution is introducing the method for architecture synthesis, basic tool support for the implementation is also presented. Two case studies are used for evaluation. One case study uses the sketch for an electronic home control system, which is a typical embedded system. The other case study is based on a robot war game simulator, which is a typical framework system. Evaluation is mostly based on fitness graphs and (subjective) evaluation of the produced class diagrams. In addition to the basic approach, variations and extensions regarding crossover and the fitness function have been made. While the standard algorithm uses a random crossover, asexual reproduction and complementary crossover are also studied. Asexual crossover corresponds to real-life design situations, where two architectures are rarely combined. Complementary crossover, in turn, attempts to purposefully combine good parts of two architectures. The fitness function is extended with the option to include modifiability scenarios, which enables more targeted design decisions, as critical parts of the architecture can be evaluated individually. In order to achieve a wider range of solutions that answer to competing quality demands, a multi-objective approach using Pareto optimality is given as an alternative to the single weighted fitness function. The multi-objective approach evaluates modifiability and efficiency, and gives as output the class diagrams of the whole Pareto front of the last generation. Thus, extremes for both quality attributes as well as solutions in the middle ground can be compared. An experimental study is also conducted where independent experts evaluate the produced solutions for the electronic home control system. Results show that genetic software architecture synthesis is indeed feasible, and that the quality of solutions at this stage is roughly at the level of third-year software engineering students.
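To illustrate the kind of search loop the abstract describes (a null architecture improved by applying patterns, scored by a weighted fitness over modifiability, efficiency and complexity), here is a minimal genetic-algorithm sketch in Python; the encoding, the scoring rules and the weights are invented for illustration and are not the dissertation's actual model or formulas.

```python
# Toy genetic algorithm for "architecture synthesis": each component is assigned
# one design solution (0 = none, 1 = Facade, 2 = Strategy, 3 = message dispatcher).
# Encoding, fitness rules and weights are illustrative assumptions only.
import random

random.seed(1)
NUM_COMPONENTS, PATTERNS = 10, [0, 1, 2, 3]
POP_SIZE, GENERATIONS, MUT_RATE = 30, 40, 0.1
W_MODIFIABILITY, W_EFFICIENCY, W_COMPLEXITY = 1.0, 1.0, 0.5

def fitness(arch):
    # Toy sub-fitnesses: patterns raise modifiability but cost some efficiency,
    # and every applied pattern adds complexity, which is penalised.
    modifiability = sum(1 for p in arch if p in (1, 2))
    efficiency = sum(1 for p in arch if p == 0) + 0.5 * arch.count(3)
    complexity = sum(1 for p in arch if p != 0)
    return (W_MODIFIABILITY * modifiability + W_EFFICIENCY * efficiency
            - W_COMPLEXITY * complexity)

def crossover(a, b):
    point = random.randrange(1, NUM_COMPONENTS)   # single-point crossover
    return a[:point] + b[point:]

def mutate(arch):
    return [random.choice(PATTERNS) if random.random() < MUT_RATE else p for p in arch]

# Start from a "null architecture": functional decomposition only, no patterns applied.
population = [[0] * NUM_COMPONENTS for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]       # selection: keep the better half
    children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

best = max(population, key=fitness)
print("best architecture (pattern per component):", best, "fitness:", round(fitness(best), 2))
```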

    Power control with Machine Learning Techniques in Massive MIMO cellular and cell-free systems

    This PhD thesis presents a comprehensive investigation into power control (PC) optimization in cellular (CL) and cell-free (CF) massive multiple-input multiple-output (mMIMO) systems using machine learning (ML) techniques. The primary focus is on enhancing the sum spectral efficiency (SE) of these systems by leveraging various ML methods. To begin with, two existing datasets are combined and extended, resulting in a unique dataset tailored to this research. The weighted minimum mean square error (WMMSE) method, a popular heuristic approach, is used as the baseline for the sum SE maximization problem, and its performance is compared with that of the deep Q-network (DQN) method by training on the complete dataset in both CL and CF-mMIMO systems. Furthermore, the PC problem in CL/CF-mMIMO systems is tackled with ML-based algorithms, which provide highly efficient solutions with significantly reduced computational complexity [3]. Several ML methods tailored to the PC problem in CL/CF-mMIMO systems are proposed, among them a Fuzzy/DQN method, a DNN/GA method, a support vector machine (SVM) method, an SVM/RBF method, a decision tree (DT) method, a K-nearest neighbour (KNN) method, a linear regression (LR) method, and a novel fusion scheme. The fusion schemes combine multiple ML methods, as in system model 1 (DNN, DNN/GA, DQN, Fuzzy/DQN, and SVM algorithms) and system model 2 (DNN, SVM-RBF, DQL, LR, KNN, and DT algorithms), which are thoroughly evaluated for maximizing the sum SE and offer a viable alternative to computationally intensive heuristic algorithms. Subsequently, the DNN method is singled out for its exceptional performance and subjected to in-depth analysis. Each of the ML methods is trained on a merged dataset to extract a novel feature vector, and their respective performances are compared against the WMMSE method in the context of CL/CF-mMIMO systems. This research promises to pave the way for more robust and efficient PC solutions, ensuring enhanced SE and ultimately advancing the field of CL/CF-mMIMO systems. The results reveal that the DNN method outperforms the other ML methods in terms of sum SE, while exhibiting significantly lower computational complexity than the WMMSE algorithm. The DNN method is therefore chosen for examining its transferability across two datasets (datasets A and B) based on their shared features. Three scenarios are devised for the transfer learning method: training the DNN on dataset B (S1), using model A with dataset B (S2), and retraining model A on dataset B (S3); these scenarios are evaluated to assess the effectiveness of the transfer learning approach. Furthermore, three different setups of the DNN architecture (DNN1, DNN2, and DNN3) are employed and compared to the WMMSE method using performance metrics such as mean squared error (MSE), root mean squared error (RMSE), and mean absolute error (MAE). Moreover, the research evaluates the impact of the number of base stations (BSs), access points (APs), and users on PC in CL/CF-mMIMO systems using the ML methodology. Datasets capturing diverse scenarios and configurations of mMIMO systems were carefully assembled.
Extensive simulations were conducted to analyze how an increasing number of BSs/APs affects the dimensionality of the input vector of the DNN algorithm. The observed improvements in system performance are quantified by the enhanced discriminative power of the model, illustrated through the cumulative distribution function (CDF), which captures the model's ability to distinguish patterns across diverse scenarios and configurations of mMIMO systems. The quantity reported by the CDF is a probability: a larger area under the CDF corresponds to a higher probability of the random variable falling below a given threshold, which here reflects improved model performance and greater precision in predicting outcomes. Interestingly, the number of users was found to have a limited effect on system performance. The comparison between the DNN-based PC method and the conventional WMMSE method revealed the superior performance and efficiency of the DNN algorithm. Lastly, a comprehensive assessment of the DNN method against the WMMSE method was conducted for the PC optimization problem in both CL and CF system architectures. In addition, this thesis focuses on enhancing SE in wireless communication systems, particularly within cell-free mmWave massive MIMO environments. It explores the challenges of optimizing SE through traditional methods, including the weighted minimum mean squared error (WMMSE), fractional programming (FP), water-filling, and max-min fairness approaches. The prevalence of access points (APs) over user equipment (UE) highlights the importance of zero-forcing precoding (ZFP) in CF-mMIMO; however, ZFP faces issues related to channel aging and resource utilization. To address these challenges, a novel scheme called delay-tolerant zero-forcing precoding (DT-ZFP) is introduced, leveraging deep learning-aided channel prediction to mitigate channel aging effects. Additionally, a PC method, HARP-PC, is proposed, combining a heterogeneous graph neural network (HGNN), an adaptive neuro-fuzzy inference system (ANFIS), and reinforcement learning (RL) to optimize SE in dynamic CF mmWave-mMIMO systems. This research advances the field by addressing these challenges and introducing innovative approaches to enhance PC and SE in contemporary wireless communication networks. Overall, this research contributes to the advancement of PC optimization in CL/CF-mMIMO systems through the application of ML techniques, demonstrating the potential of the DNN method and providing insights into system performance under various scenarios and network configurations.
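As a rough illustration of the "learn to optimize" idea underlying DNN-based power control, the sketch below trains a small network to map channel gains to per-user transmit powers in a supervised fashion; in practice the training targets would come from the WMMSE solver, whereas here they are synthetic placeholders, and the layer sizes and dataset are illustrative assumptions.

```python
# Hypothetical sketch: a small DNN learns a channel-gains -> power-allocation
# mapping, standing in for the iterative WMMSE solver at run time (PyTorch).
import torch
import torch.nn as nn

torch.manual_seed(0)
NUM_USERS = 4

# Synthetic dataset: |h|^2 channel gains as features; stand-in "WMMSE" powers as labels.
gains = torch.rand(2000, NUM_USERS)
targets = gains / gains.sum(dim=1, keepdim=True)   # placeholder for WMMSE outputs

dnn = nn.Sequential(
    nn.Linear(NUM_USERS, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, NUM_USERS), nn.Sigmoid(),        # powers normalised to (0, 1)
)
optimizer = torch.optim.Adam(dnn.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(dnn(gains), targets)            # regress onto the solver's allocations
    loss.backward()
    optimizer.step()

# At run time the trained DNN replaces the iterative solver: one forward pass
# per channel realisation instead of a costly WMMSE iteration.
print("training MSE:", loss.item())
print("predicted powers for one channel draw:", dnn(gains[:1]).detach().numpy().round(3))
```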