230 research outputs found

    A TxQoS-aware business transaction framework

    In this thesis, we propose a transaction framework that provides comprehensive and flexible transaction support for contract-driven, service-oriented business processes. The research follows the method outlined below. Initially, a thorough investigation of the current state of the art was conducted. Afterwards, we carried out a case study, which we used to identify the problems that are likely to occur during the execution of business processes. As the result of the solution design, concepts, scenarios, life cycles, reference architectures, and mechanisms were proposed to address these problems. The design took place at the conceptual level, while coding/programming and implementation are out of the scope of this thesis. The business-oriented solution design allows transaction qualities to be specified and guaranteed by a contractual approach named TxQoS (Transactional Quality of Service). The technology-oriented design enables flexible composition of ATCs (Abstract Transaction Constructs) into a transaction schema that supports the execution of complex processes. As the last step of the research, we validated the feasibility of our design through a utility study conducted in a large telecom project with complex processes that are service-oriented and contract-driven. Finally, we discussed the contributions and limitations of the research. The main contribution of the thesis is the BTF (Business Transaction Framework), which addresses process execution reliability. The TxQoS approach enables the specification of transaction qualities in terms of FIAT (Fluency, Interference, Alternation, Transparency) properties. This business-friendly approach allows providers and users to agree on transaction qualities before process execution time. The building blocks of the proposed framework, the ATCs, are reusable and configurable templates abstracted and generalized from existing transaction models. The various transaction requirements of sub-processes and process chunks can be represented by corresponding ATCs, which allow for flexible composition. Integrated, the TxQoS and ATC approaches work together to form a TxQoS-aware business transaction framework.
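    Since the thesis designs the framework at the conceptual level and explicitly leaves implementation out of scope, the short Python sketch below is only illustrative: it shows one possible way a TxQoS contract with FIAT properties and a composition of ATCs could be represented. Every class, field, and value here is a hypothetical example, not part of the thesis.

# Illustrative sketch only; class and attribute names are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class TxQoSContract:
    """FIAT qualities agreed between provider and user before execution."""
    fluency: float        # e.g. expected share of executions that run without stalling
    interference: float   # e.g. tolerated disturbance from concurrent transactions
    alternation: int      # e.g. number of allowed alternative/compensation paths
    transparency: str     # e.g. visibility of execution status ("full", "partial")

class ATC:
    """Abstract Transaction Construct: a reusable, configurable template."""
    def __init__(self, name: str):
        self.name = name
    def execute(self) -> bool:
        raise NotImplementedError

class CompensableATC(ATC):
    """Template generalized from compensation-based transaction models."""
    def __init__(self, name: str, action: Callable[[], None], compensation: Callable[[], None]):
        super().__init__(name)
        self.action, self.compensation = action, compensation
    def execute(self) -> bool:
        try:
            self.action()
            return True
        except Exception:
            self.compensation()  # undo partial effects
            return False

@dataclass
class TransactionSchema:
    """Flexible composition of ATCs covering the sub-processes of a business process."""
    contract: TxQoSContract
    constructs: List[ATC] = field(default_factory=list)

schema = TransactionSchema(
    contract=TxQoSContract(fluency=0.99, interference=0.05, alternation=2, transparency="full"),
    constructs=[CompensableATC("reserve-capacity", action=lambda: None, compensation=lambda: None)],
)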

    Arquitectura, técnicas y modelos para posibilitar la Ciencia de Datos en el Archivo de la Misión Gaia (Architecture, Techniques and Models to Enable Data Science in the Gaia Mission Archive)

    Unpublished doctoral thesis, Universidad Complutense de Madrid, Facultad de Informática, Departamento de Arquitectura de Computadores y Automática, defended on 26/05/2017. The massive amounts of data that the world produces every day pose new challenges to modern societies in terms of how to leverage their inherent value. Social networks, instant messaging, video, smart devices and scientific missions are just a few examples of the vast number of sources generating data every second. As the world becomes more and more digitalized, new needs arise for organizing, archiving, sharing, analyzing, visualizing and protecting the ever-increasing data sets, so that we can truly develop into a data-driven economy that reduces inefficiencies and increases sustainability, creating new business opportunities along the way. Traditional approaches for harnessing data are no longer suitable, as they lack the means to scale to these larger volumes in a timely and cost-efficient manner. This has changed somewhat with the advent of Internet companies like Google and Facebook, which have devised new ways of tackling the issue. However, the variety and complexity of the value chains in the private sector, as well as the increasing demands and constraints under which the public sector operates, call for ongoing research that can yield new strategies for dealing with data, facilitate the integration of providers and consumers of information, and guarantee a smooth and prompt transition when adopting these cutting-edge technological advances. This thesis aims to provide novel architectures and techniques that will help perform this transition towards Big Data in massive scientific archives. It highlights the common pitfalls that must be faced when embracing Big Data and how to overcome them, especially when the data sets, their transformation pipelines and the tools used for the analysis are already present in the organizations. Furthermore, a new perspective for facilitating a smoother transition is laid out. It involves the usage of higher-level, use-case-specific frameworks and models, which naturally bridge the gap between the technological and scientific domains. This alternative will effectively widen the possibilities of scientific archives and therefore contribute to reducing the time to science. The research is applied to the European Space Agency cornerstone mission Gaia, whose final data archive will represent a tremendous discovery potential. Gaia will create the largest and most precise three-dimensional chart of our galaxy (the Milky Way), providing unprecedented position, parallax and proper motion measurements for about one billion stars. The successful exploitation of this data archive will depend to a large degree on the ability to offer the proper architecture, i.e. infrastructure and middleware, upon which scientists will be able to explore and model this huge data set. Consequently, the approach taken needs to enable data fusion with other scientific archives, as this will produce the synergies leading to an increase in scientific outcome, both in volume and in quality. The set of novel techniques and frameworks presented in this work addresses these issues by contextualizing them with the data products that will be generated in the Gaia mission. All these considerations have led to the foundations of the architecture that will be leveraged by the Science Enabling Applications Work Package.
Last but not least, the effectiveness of the proposed solution will be demonstrated through the implementation of ambitious statistical problems that will require significant computational capabilities and will use Gaia-like simulated data (the first Gaia data release took place on September 14th, 2016). These problems are referred to as the Grand Challenge, a somewhat grandiloquent name for the task of inferring, from a probabilistic point of view, a set of parameters of the Initial Mass Function (IMF) and Star Formation Rate (SFR) of a given set of stars (with a huge sample size) from noisy estimates of their masses and ages respectively. This will be achieved by using Hierarchical Bayesian Modeling (HBM). In principle, the HBM can incorporate stellar evolution models to infer the IMF and SFR directly, but in this first step presented in the thesis we start with a somewhat less ambitious goal: inferring the Present-Day Mass Function (PDMF) and the Present-Day Age Distribution (PDAD). Moreover, the performance and scalability analyses carried out will also prove the suitability of the models for the large amounts of data that will be available in the Gaia data archive.
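    As a rough illustration of the hierarchical Bayesian setup sketched above (population-level parameters inferred from noisy per-star estimates), the toy model below fits a PDMF-like log-mass distribution with PyMC. The log-normal population, the fixed measurement noise, and the use of PyMC are assumptions made for illustration; they are not the models or tooling actually used in the thesis.

# Toy hierarchical Bayesian model (illustrative only, not the thesis pipeline):
# noisy observed masses -> latent true masses -> population (PDMF-like) parameters.
import numpy as np
import pymc as pm

rng = np.random.default_rng(42)
n_stars = 500
true_log_mass = rng.normal(0.0, 0.4, size=n_stars)                # latent log10(M/Msun)
obs_log_mass = true_log_mass + rng.normal(0.0, 0.1, size=n_stars)  # noisy estimates

with pm.Model():
    # Hyperpriors on the population parameters we actually want to infer.
    mu = pm.Normal("mu", mu=0.0, sigma=1.0)
    sigma = pm.HalfNormal("sigma", sigma=1.0)
    # Latent true log-masses drawn from the population distribution.
    log_mass = pm.Normal("log_mass", mu=mu, sigma=sigma, shape=n_stars)
    # Measurement model: each observation is the true value plus known noise.
    pm.Normal("obs", mu=log_mass, sigma=0.1, observed=obs_log_mass)
    trace = pm.sample(draws=1000, tune=1000, chains=2, random_seed=42)

print(float(trace.posterior["mu"].mean()), float(trace.posterior["sigma"].mean()))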

    Application-based authentication on an inter-VM traffic in a Cloud environment

    Cloud Computing (CC) is an innovative computing model in which resources are provided as a service over the Internet, on an as-needed basis. It is a large-scale distributed computing paradigm driven by economies of scale, in which a pool of abstracted, virtualized, dynamically scalable, managed computing power, storage, platforms, and services is delivered on demand to external customers over the Internet. Since clouds are typically enabled by virtualization and share a common characteristic, namely the pooled allocation of resources, applications, and even OSs, adequate safeguards and security measures are essential. In fact, virtualization creates new targets for intrusion due to the complexity of access and the difficulty of monitoring all interconnection points between systems, applications, and data sets. This raises many questions about the appropriate infrastructure, processes, and strategy for enacting detection of and response to intrusion in a Cloud environment. Hence, without strict controls put in place within the Cloud, guests could violate and bypass security policies, intercept unauthorized client data, and initiate or become the target of security attacks. This article sheds light on security issues within Cloud Computing, especially the visibility of inter-VM traffic. In addition, the paper proposes an Application-Based Security (ABS) approach to enforce application-based authentication between VMs through various security mechanisms, filtering, structures, and policies.
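    The abstract does not detail the ABS mechanisms, so the sketch below only illustrates one plausible building block of application-based authentication for inter-VM traffic: tagging each message with a shared-key HMAC over the sending application's identity and verifying it before the traffic is accepted. The function names, key handling, and the HMAC construction are illustrative assumptions, not the paper's design.

# Illustrative only: authenticate the sending application of inter-VM traffic
# with a shared-key HMAC before the receiving VM accepts the payload.
import hashlib
import hmac

APP_KEYS = {"billing-app": b"demo-secret"}  # hypothetical per-application keys

def sign_message(app_id: str, payload: bytes) -> bytes:
    """Sender side: tag the payload with the application's identity."""
    tag = hmac.new(APP_KEYS[app_id], app_id.encode() + payload, hashlib.sha256).hexdigest()
    return app_id.encode() + b"|" + tag.encode() + b"|" + payload

def verify_message(message: bytes) -> bytes | None:
    """Receiver side: accept the traffic only if the application tag checks out."""
    app_id, tag, payload = message.split(b"|", 2)
    key = APP_KEYS.get(app_id.decode())
    if key is None:
        return None  # unknown application: drop according to policy
    expected = hmac.new(key, app_id + payload, hashlib.sha256).hexdigest().encode()
    return payload if hmac.compare_digest(expected, tag) else None

msg = sign_message("billing-app", b'{"invoice": 42}')
assert verify_message(msg) == b'{"invoice": 42}'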

    A Pattern Language for Designing Application-Level Communication Protocols and the Improvement of Computer Science Education through Cloud Computing

    Networking protocols have been developed over time following layered architectures such as the Open Systems Interconnection model and the Internet model. These protocols are grouped in the Internet protocol suite. Most developers do not deal with low-level protocols; instead, they design application-level protocols on top of the low-level ones. Although each application-level protocol is different, there is commonality among them, and developers can apply lessons learned from one protocol to the design of new ones. Design patterns can help by gathering and sharing proven, reusable solutions to common, recurring design problems. The Application-level Communication Protocols Design Patterns language captures this knowledge about application-level protocol design, so developers can create better, more fitting protocols based on these common and well-proven solutions. Another aspect of contemporary development practice is the need to distribute software artifacts. Most development companies have started using Cloud Computing services to meet this need; both public and private clouds are widely used. Future developers will need to manage this technology: infrastructure, software, and platforms offered as services. These two aspects, communication protocol design and cloud computing, represent an opportunity to contribute to the software development community and to the software engineering education curriculum. The Application-level Communication Protocols Design Patterns language aims to help solve communication software design problems. The use of cloud computing in programming assignments aims to have a positive influence on the Analysis to Reuse skills of computer science students.
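    The abstract does not name the individual patterns in the language, so the sketch below only illustrates the kind of recurring application-level design problem such patterns address: framing discrete messages on top of a TCP byte stream with a length prefix. The helper names and the 4-byte prefix are illustrative choices, not patterns quoted from the language.

# Illustrative example of a recurring application-level protocol concern:
# length-prefixed message framing over a stream socket.
import socket
import struct

def send_message(sock: socket.socket, payload: bytes) -> None:
    """Frame each message with a 4-byte big-endian length prefix."""
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes, handling short reads on the stream."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection mid-message")
        buf += chunk
    return buf

def recv_message(sock: socket.socket) -> bytes:
    """Reverse the framing: read the length prefix, then the payload."""
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return recv_exact(sock, length)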

    The Foundations of American Distance Education: A Century of Collegiate Correspondence Study

    A century after correspondence study began in the United States, the Independent Study Division of the National Continuing Education Association has launched an ambitious project to record the history, achievements, ideas, issues, and research pertinent to practitioners, faculties, and students in distance education. The publication of The Foundations of American Distance Education: A Century of Collegiate Correspondence Study offers the profession an opportunity to gain a sense of perspective on the past, as well as on the present, that will help it prepare to meet future challenges. Within this field, it has been common to cite two periods of historic development, each of which was connected to the publication of a book that had important consequences. The first is Bittner and Mallory's University Teaching by Mail (1933), which describes the origins of the field and the integration of correspondence study into American universities, and the second is Wedemeyer and Childs' New Perspectives in University Correspondence Study (1961), which assesses the incorporation of new technologies. In addition, the two volumes of the Brandenburg Memorial Essays on Correspondence Instruction (1963 and 1966), which were products of a distance education "summit" seminar in the early 1960s, prompted a new professionalism. Correspondence study practitioners began to take a modest pride in their own profession, and to insist upon steadily raising the professional level of their own scholarship and teaching. It is my hope that this new volume will have a similar influence on the profession. The past century of correspondence instruction has been a remarkable period of growth and challenge. Present demands are equally enormous: integration of more sophisticated media in instruction and management, improvement of testing and evaluation, and meeting the educational needs of an increasingly diverse population. In the past century the proliferation of the correspondence study / independent study / distance education movement has generated educational change throughout the world. Today researchers and practitioners bring into the field new concepts, perceptions, and scholarship, as well as new teaching-learning models. The lessons of the past emphasize that much hard work, innovation, and initiative are necessary to keep pace with the challenges of the times. The articles in this volume provide opportunities for reflection, practical information, and guidance for independent study's second century.