    A Systematic Mapping Study of MMOG Backend Architectures

    The advent of utility computing has revolutionized almost every sector of traditional software development. In particular, commercial cloud computing services, pioneered by the likes of Amazon, Google and Microsoft, have provided an unprecedented opportunity for the fast and sustainable development of complex distributed systems. Nevertheless, existing models and tools aim primarily at systems where resource usage—by humans and bots alike—is logically and physically dispersed, resulting in a low likelihood of conflicting resource access. However, a number of resource-intensive applications, such as Massively Multiplayer Online Games (MMOGs) and large-scale simulations, require a very large common state accessed simultaneously by many actors, and thus a high likelihood of conflicting resource access. This paper presents a systematic mapping study of the state of the art in software technology aiming explicitly to support the development of MMOGs, a class of large-scale, resource-intensive software systems. By examining the main focus of a diverse set of related publications, we identify a list of criteria that are important for MMOG development. We then categorize the selected studies based on the inferred criteria in order to compare their approaches, unveil the challenges each of them faces, and reveal research trends that may be present. Finally, we attempt to identify research directions that appear promising for enabling the use of standardized technology for this class of systems.
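
    The core difficulty the abstract points to — many actors simultaneously mutating one large shared state — can be illustrated with a minimal optimistic-concurrency sketch. This is a hypothetical example, not taken from the paper: each writer validates a version number before committing, and a writer that loses the race must retry.

```python
# Minimal, hypothetical sketch of optimistic concurrency over a shared
# game-world state: many actors read an entity, compute an update, and
# only commit if nobody else modified it in the meantime.
import threading

class SharedWorld:
    def __init__(self):
        self._lock = threading.Lock()
        self._entities = {}            # entity_id -> (version, data)

    def read(self, entity_id):
        with self._lock:
            return self._entities.get(entity_id, (0, {}))

    def try_commit(self, entity_id, expected_version, new_data):
        """Commit only if the entity was not changed since it was read."""
        with self._lock:
            current_version, _ = self._entities.get(entity_id, (0, {}))
            if current_version != expected_version:
                return False           # conflict: another actor won the race
            self._entities[entity_id] = (current_version + 1, new_data)
            return True

def actor_update(world, entity_id, change):
    """Retry loop: the more actors share the same state, the more retries occur."""
    while True:
        version, data = world.read(entity_id)
        updated = {**data, **change}
        if world.try_commit(entity_id, version, updated):
            return

world = SharedWorld()
threads = [threading.Thread(target=actor_update,
                            args=(world, "boss_1", {f"hit_by_{i}": True}))
           for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(world.read("boss_1"))
```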

    A framework for economic analysis of network architectures

    Indiana University-Purdue University Indianapolis (IUPUI). This thesis first surveys and summarizes state-of-the-art studies from two research areas in Software Defined Networking (SDN) architecture: (i) control plane scalability and (ii) Quality of Service (QoS)-related problems. It also outlines the potential challenges and open problems that need to be addressed further for more scalable SDN control planes and for better and more complete QoS capabilities in SDN networks. The thesis then presents a hierarchical SDN design along with an inter-AS QoS-guaranteed routing approach. This design addresses the scalability problems of the control plane and the privacy concerns of inter-AS QoS routing philosophies in SDN. After exploring the roots of the control plane scalability problems in SDN, the thesis proposes a metric to quantitatively evaluate control plane scalability in SDN. Later, the thesis presents a general framework for the economic analysis of network architectures and designs. To this end, the thesis defines and utilizes two metrics, Unit Service Cost Scalability and Cost-to-Service, to evaluate how the SDN architecture performs compared to the MPLS architecture in terms of the unit cost of a service and the cost of introducing a new service, along with giving mathematical models to calculate the Capital Expenditures (CAPEX) and Operational Expenditures (OPEX) of a network. Moreover, the thesis studies the problem of optimal final pricing for services by proposing an optimal pricing scheme for a service request with QoS in an SDN environment, aiming to maximize the benefits of both service providers and customers. Finally, the thesis investigates how programmable network architectures, i.e. SDN, affect network economics compared to traditional network architectures, i.e. MPLS, in case of failures, along with exploring the economic impact of failures in different SDN control plane models.
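
    The exact definitions of Unit Service Cost Scalability and Cost-to-Service are given in the thesis itself; the sketch below only illustrates the general shape of such an analysis, under the simplifying and purely hypothetical assumption that unit service cost is total cost of ownership (CAPEX plus cumulative OPEX) divided by the number of service requests served. All function names and figures are placeholders, not the thesis's models.

```python
# Hypothetical illustration of a unit-cost comparison between two network
# architectures. The formula and the cost figures are placeholders,
# not the CAPEX/OPEX models defined in the thesis.

def total_cost_of_ownership(capex, annual_opex, years):
    """Total cost over the study period: upfront CAPEX plus recurring OPEX."""
    return capex + annual_opex * years

def unit_service_cost(capex, annual_opex, years, services_served):
    """Cost attributed to a single served service request (assumed definition)."""
    return total_cost_of_ownership(capex, annual_opex, years) / services_served

# Invented inputs: an SDN-style design is often assumed to trade higher
# controller/software OPEX for cheaper forwarding hardware; MPLS the opposite.
sdn = unit_service_cost(capex=1_000_000, annual_opex=150_000, years=5,
                        services_served=80_000)
mpls = unit_service_cost(capex=1_400_000, annual_opex=220_000, years=5,
                         services_served=80_000)
print(f"unit cost per service: SDN={sdn:.2f}, MPLS={mpls:.2f}")
```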

    Architecture, techniques and models to enable Data Science in the Gaia Mission Archive

    Unpublished thesis of the Universidad Complutense de Madrid, Facultad de Informática, Departamento de Arquitectura de Computadores y Automática, defended on 26/05/2017. The massive amounts of data that the world produces every day pose new challenges to modern societies in terms of how to leverage their inherent value. Social networks, instant messaging, video, smart devices and scientific missions are just a few examples of the vast number of sources generating data every second. As the world becomes more and more digitalized, new needs arise for organizing, archiving, sharing, analyzing, visualizing and protecting the ever-increasing data sets, so that we can truly develop into a data-driven economy that reduces inefficiencies and increases sustainability, creating new business opportunities on the way. Traditional approaches for harnessing data are no longer suitable, as they lack the means for scaling to the larger volumes in a timely and cost-efficient manner. This has somewhat changed with the advent of Internet companies like Google and Facebook, which have devised new ways of tackling this issue. However, the variety and complexity of the value chains in the private sector, as well as the increasing demands and constraints under which the public sector operates, call for ongoing research that can yield newer strategies for dealing with data, facilitate the integration of providers and consumers of information, and guarantee a smooth and prompt transition when adopting these cutting-edge technological advances. This thesis aims at providing novel architectures and techniques that will help perform this transition towards Big Data in massive scientific archives. It highlights the common pitfalls that must be faced when embracing it and how to overcome them, especially when the data sets, their transformation pipelines and the tools used for the analysis are already present in the organizations. Furthermore, a new perspective for facilitating a smoother transition is laid out. It involves the usage of higher-level and use-case-specific frameworks and models, which will naturally bridge the gap between the technological and scientific domains. This alternative will effectively widen the possibilities of scientific archives and therefore contribute to the reduction of the time to science. The research will be applied to the European Space Agency cornerstone mission Gaia, whose final data archive will represent a tremendous discovery potential. It will create the largest and most precise three-dimensional chart of our galaxy (the Milky Way), providing unprecedented position, parallax and proper motion measurements for about one billion stars. The successful exploitation of this data archive will depend to a large degree on the ability to offer the proper architecture, i.e. infrastructure and middleware, upon which scientists will be able to perform exploration and modeling with this huge data set. In consequence, the approach taken needs to enable data fusion with other scientific archives, as this will produce the synergies leading to an increase in scientific output, both in volume and in quality. The set of novel techniques and frameworks presented in this work addresses these issues by contextualizing them with the data products that will be generated in the Gaia mission. All these considerations have led to the foundations of the architecture that will be leveraged by the Science Enabling Applications Work Package.
Last but not least, the effectiveness of the proposed solution will be demonstrated through the implementation of some ambitious statistical problems that require significant computational capabilities and use Gaia-like simulated data (the first Gaia data release took place on September 14th, 2016). These problems will be referred to as the Grand Challenge, a somewhat grandiloquent name for inferring, from a probabilistic point of view, a set of parameters of the Initial Mass Function (IMF) and the Star Formation Rate (SFR) of a given set of stars (with a huge sample size) from noisy estimates of their masses and ages, respectively. This will be achieved by using Hierarchical Bayesian Modeling (HBM). In principle, the HBM can incorporate stellar evolution models to infer the IMF and SFR directly, but as a first step this thesis starts with a somewhat less ambitious goal: inferring the present-day mass function (PDMF) and the present-day age distribution (PDAD). Moreover, the performance and scalability analyses carried out will also demonstrate the suitability of the models for the large amounts of data that will be available in the Gaia data archive.
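
    As an illustration of the kind of hierarchical Bayesian inference described above, the sketch below fits the slope of a power-law present-day mass function from noisy mass estimates using a simple Metropolis sampler. It is a minimal, hypothetical example: the power-law form, priors, noise model and sampler are assumptions made here for illustration, not the models developed in the thesis.

```python
# Hypothetical sketch: infer a power-law PDMF slope from noisy mass estimates
# by marginalizing each star's latent true mass and sampling the slope.
import numpy as np

rng = np.random.default_rng(42)

# Simulate "Gaia-like" data: true masses from a power law, observed with noise.
alpha_true, m_min, m_max, sigma = 2.35, 0.5, 10.0, 0.1
def sample_powerlaw(n, alpha, lo, hi):
    a = 1.0 - alpha
    u = rng.uniform(size=n)
    return (lo**a + u * (hi**a - lo**a)) ** (1.0 / a)
true_mass = sample_powerlaw(2000, alpha_true, m_min, m_max)
obs_mass = true_mass + rng.normal(0.0, sigma, size=true_mass.size)

# Marginal likelihood: integrate out each star's latent true mass on a grid.
grid = np.linspace(m_min, m_max, 300)
dm = grid[1] - grid[0]
# Gaussian noise kernel per star (constant normalization omitted; it cancels).
noise = np.exp(-0.5 * ((obs_mass[:, None] - grid[None, :]) / sigma) ** 2)

def log_likelihood(alpha):
    pdf = grid ** (-alpha)
    pdf /= pdf.sum() * dm                      # normalized PDMF on the grid
    per_star = (noise * pdf).sum(axis=1) * dm  # p(obs_i | alpha)
    return np.sum(np.log(per_star + 1e-300))

def log_posterior(alpha):                      # flat prior on alpha in (1, 4)
    return log_likelihood(alpha) if 1.0 < alpha < 4.0 else -np.inf

# Random-walk Metropolis over the slope alpha.
alpha, logp, chain = 2.0, log_posterior(2.0), []
for _ in range(2000):
    prop = alpha + rng.normal(0.0, 0.05)
    logp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < logp_prop - logp:
        alpha, logp = prop, logp_prop
    chain.append(alpha)
chain = np.array(chain[500:])                  # discard burn-in
print(f"posterior slope: {chain.mean():.3f} +/- {chain.std():.3f}")
```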

    Next generation control of transport networks

    It is widely understood by telecom operators and industry analysts that bandwidth demand is increasing dramatically, year on year, with typical growth figures of 50% for Internet-based traffic [5]. This trend means that consumers will have both a wide variety of devices attaching to their networks and a range of high-bandwidth service requirements. The corresponding impact falls on the traffic-engineered network (often referred to as the “transport network”), which must ensure that the current rate of growth of network traffic is supported and that predicted future demands are met. As traffic demands increase and new services continuously arise, novel network elements are needed to provide more flexibility, scalability, resilience, and adaptability in today’s transport network. The transport network provides transparent, traffic-engineered communication of user, application, and device traffic between attached clients (software and hardware), establishing and maintaining point-to-point or point-to-multipoint connections. The research documented in this thesis was based on three initial research questions, posed while performing research at British Telecom’s research labs and investigating the control of future transport networks: 1. How can we meet Internet bandwidth growth yet minimise network costs? 2. Which enabling network technologies might be leveraged to control network layers and functions cooperatively, instead of controlling each network layer and technology separately? 3. Is it possible to utilise both centralised and distributed control mechanisms for automation and traffic optimisation? This thesis aims to provide the classification, motivation, invention, and evolution of a next generation control framework for transport networks, with special consideration given to delivering broadcast video traffic to UK subscribers. The document outlines pertinent telecoms technology and current art, the requirements I gathered and the research I conducted, how the functional components of the transport control framework were identified and selected, and how the architecture was implemented and applied to key research projects requiring next generation control capabilities, both at British Telecom and in the wider research community. Finally, in the closing chapters, the thesis outlines the next steps for ongoing research and development of the transport network framework and key areas for further study.

    Aeronautical engineering: A continuing bibliography with indexes (supplement 269)

    This bibliography lists 539 reports, articles, and other documents introduced into the NASA scientific and technical information system in August 1991. Subject coverage includes: design, construction and testing of aircraft and aircraft engines; aircraft components, equipment and systems; ground support systems; and theoretical and applied aspects of aerodynamics and general fluid dynamics.

    Evolution of the subcontinental lithosphere during Mesozoic Tethyan rifting: constraints from the External Ligurian mantle section (Northern Apennine, Italy)

    Our study is focussed on mantle bodies from the External Ligurian ophiolites, within the Monte Gavi and Monte Sant'Agostino areas. Here, two distinct pyroxenite-bearing mantle sections were recognized, mainly based on their plagioclase-facies evolution. The Monte Gavi mantle section is nearly undeformed and records reactive melt infiltration under plagioclase-facies conditions. This process involved both peridotites (clinopyroxene-poor lherzolites) and enclosed spinel pyroxenite layers, and occurred at 0.7–0.8 GPa. In the Monte Gavi peridotites and pyroxenites, the spinel-facies clinopyroxene was replaced by Ca-rich plagioclase and new orthopyroxene, typically associated with secondary clinopyroxene. The reactive melt migration caused an increase in TiO2 contents in relict clinopyroxene and spinel, with the latter also recording a Cr2O3 increase. In the Monte Gavi peridotites and pyroxenites, geothermometers based on slowly diffusing elements (REE and Y) record high-temperature conditions (1200–1250 °C) related to the melt infiltration event, followed by subsolidus cooling to ca. 900 °C. The Monte Sant'Agostino mantle section is characterized by widespread ductile shearing with no evidence of melt infiltration. The deformation recorded by the Monte Sant'Agostino peridotites (clinopyroxene-rich lherzolites) occurred at 750–800 °C and 0.3–0.6 GPa, leading to protomylonitic to ultramylonitic textures with extreme grain size reduction (10–50 μm). Compared to the peridotites, the enclosed pyroxenite layers gave higher temperature-pressure estimates for the plagioclase-facies re-equilibration (870–930 °C and 0.8–0.9 GPa). We propose that the earlier plagioclase crystallization in the pyroxenites enhanced strain localization and the formation of mylonite shear zones in the entire mantle section. We subdivide the subcontinental mantle section from the External Ligurian ophiolites into three distinct domains, developed in response to the rifting evolution that ultimately formed a Middle Jurassic ocean-continent transition: (1) a spinel tectonite domain, characterized by subsolidus static formation of plagioclase, i.e. the Suvero mantle section (Hidas et al., 2020), (2) a plagioclase mylonite domain experiencing melt-absent deformation, and (3) a nearly undeformed domain that underwent reactive melt infiltration under plagioclase-facies conditions, exemplified by the Monte Sant'Agostino and the Monte Gavi mantle sections, respectively. We relate mantle domains (1) and (2) to a rifting-driven uplift in the Late Triassic accommodated by large-scale shear zones consisting of anhydrous plagioclase mylonites. Hidas K., Borghini G., Tommasi A., Zanetti A. & Rampone E. 2021. Interplay between melt infiltration and deformation in the deep lithospheric mantle (External Liguride ophiolite, North Italy). Lithos 380-381, 105855.

    Impact of Etna’s volcanic emission on major ions and trace elements composition of the atmospheric deposition

    Mt. Etna, on the eastern coast of Sicily (Italy), is one of the most active volcanoes on the planet and is widely recognized as a major source of volcanic gases (e.g., CO2 and SO2), halogens, and numerous trace elements to the atmosphere in the Mediterranean region. Especially during eruptive periods, Etna’s emissions can be dispersed over long distances and cover wide areas. A group of trace elements, the technology-critical elements, has recently been brought to attention for their possible environmental and human-health impacts. Current knowledge about their geochemical cycles is still scarce; nevertheless, recent studies (Brugnone et al., 2020) evidenced a contribution from volcanic activity for some of them (Te, Tl, and REE). In 2021, in the framework of the research project “Pianeta Dinamico” by INGV, a network of 10 bulk collectors was implemented to collect atmospheric deposition samples monthly. Four of these collectors are located on the flanks of Mt. Etna, two others are in the urban area of Catania, and three are in the industrial area of Priolo, all downwind of the main craters most of the time. The last one, close to Cesarò (Nebrodi Regional Park), represents the regional background. The research aims to produce a database of the major ion and trace element composition of the bulk deposition, and here we report the values of the main physical-chemical parameters and the deposition fluxes of major ions and trace elements from the first year of research. The pH ranged from 3.1 to 7.7, with a mean value of 5.6, in samples from the Etna area, while it ranged between 5.2 and 7.6, with a mean value of 6.4, in samples from the other study areas. The electrical conductivity (EC) showed values ranging from 5 to 1032 μS cm-1, with a mean value of 65 μS cm-1. The most abundant ions were Cl- and SO42- among the anions and Na+ and Ca2+ among the cations, whose mean deposition fluxes, considering all sampling sites, were 16.6, 6.8, 8.4, and 6.0 mg m-2 d-1, respectively. The highest deposition fluxes of volcanic refractory elements, such as Al, Fe, and Ti, were measured at the Etna sites, with mean values of 948, 464, and 34.3 μg m-2 d-1, respectively, higher than those detected at the other sampling sites, further away from the volcanic source (26.2, 12.4, and 0.5 μg m-2 d-1, respectively). The same trend was also observed for volatile elements of prevailing volcanic origin, such as Tl (0.49 μg m-2 d-1), Te (0.07 μg m-2 d-1), As (0.95 μg m-2 d-1), Se (1.92 μg m-2 d-1), and Cd (0.39 μg m-2 d-1). Our preliminary results show that, close to a volcanic area, volcanic emissions must be considered among the major contributors of ions and trace elements to the atmosphere. Their deposition may significantly impact the pedosphere, hydrosphere, and biosphere, and, directly or indirectly, human health.
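
    Deposition fluxes of the kind reported above (mg m-2 d-1 or μg m-2 d-1) are commonly derived from the concentration measured in the collected sample, the collected volume, the collector’s exposed area and the exposure time. The sketch below shows that conversion under those assumptions; the formula and all input values are illustrative placeholders, not data or methods taken from this study.

```python
# Illustrative conversion from a bulk-collector sample to a deposition flux
# in mg m-2 d-1. The formula and input values are assumptions for
# illustration, not data or methods from the study.

def deposition_flux_mg_m2_d(concentration_mg_per_l, sample_volume_l,
                            collector_area_m2, exposure_days):
    """Mass deposited per unit collector area and per day of exposure."""
    deposited_mass_mg = concentration_mg_per_l * sample_volume_l
    return deposited_mass_mg / (collector_area_m2 * exposure_days)

# Example: a monthly sample with 2.1 mg/L of chloride, 0.8 L collected,
# 0.05 m2 funnel area, 30 days of exposure (all values hypothetical).
flux_cl = deposition_flux_mg_m2_d(2.1, 0.8, 0.05, 30)
print(f"Cl- deposition flux: {flux_cl:.1f} mg m-2 d-1")   # ~1.1 mg m-2 d-1
```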