
    A Methodology for Engineering Collaborative and ad-hoc Mobile Applications using SyD Middleware

    Today’s web applications are increasingly collaborative and utilize standard, ubiquitous Internet protocols. We earlier developed the System on Mobile Devices (SyD) middleware to rapidly develop and deploy collaborative applications over heterogeneous, possibly mobile devices hosting web objects. In this paper, we present a software engineering methodology for developing SyD-enabled web applications and illustrate it through a case study on two representative applications: (i) a calendar-of-meetings application, which is a collaborative application, and (ii) a travel application, which is an ad-hoc collaborative application. SyD-enabled web objects allow us to create a collaborative application rapidly with limited coding effort. In this case study, the modular software architecture allowed us to hide the inherent heterogeneity among devices, data stores, and networks by presenting a uniform and persistent object view of mobile objects interacting through XML/SOAP requests and responses. The performance results we obtained show that the application scales well as the group size increases and adapts well within the constraints of mobile devices.
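    Since the abstract describes the XML/SOAP interaction only at a high level, the following is a minimal sketch of how a client might invoke a SyD-hosted web object over HTTP; the endpoint, namespace, and operation name are hypothetical, not taken from the paper.

    import urllib.request

    # Hypothetical endpoint for a SyD-hosted calendar object.
    ENDPOINT = "http://device.example.org/syd/CalendarObject"

    # A minimal SOAP 1.1 envelope invoking a (hypothetical) GetMeetings operation.
    envelope = """<?xml version="1.0" encoding="utf-8"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <GetMeetings xmlns="urn:syd:calendar">
          <date>2005-03-14</date>
        </GetMeetings>
      </soap:Body>
    </soap:Envelope>"""

    request = urllib.request.Request(
        ENDPOINT,
        data=envelope.encode("utf-8"),
        headers={
            "Content-Type": "text/xml; charset=utf-8",
            "SOAPAction": "urn:syd:calendar#GetMeetings",
        },
    )

    # The response body is another XML/SOAP envelope carrying the object's reply.
    with urllib.request.urlopen(request) as response:
        print(response.read().decode("utf-8"))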

    Invest to Save: Report and Recommendations of the NSF-DELOS Working Group on Digital Archiving and Preservation

    Digital archiving and preservation are important areas for research and development, but there is no agreed-upon set of priorities or coherent plan for research in this area. Research projects in this area tend to be small and driven by particular institutional problems or concerns. As a consequence, proposed solutions from experimental projects and prototypes tend not to scale to millions of digital objects, nor do the results from disparate projects readily build on each other. It is also unclear whether it is worthwhile to seek general solutions or whether different strategies are needed for different types of digital objects and collections. The lack of coordination in both research and development means that there are some areas where researchers are reinventing the wheel while other areas are neglected. Digital archiving and preservation is an area that will benefit from an exercise in analysis, priority setting, and planning for future research. The Working Group aims to survey current research activities, identify gaps, and develop a white paper proposing future research directions in the area of digital preservation. Some of the potential areas for research include repository architectures and interoperability among digital archives; automated tools for capture, ingest, and normalization of digital objects; and harmonization of preservation formats and metadata. There may also be opportunities for development of commercial products in the areas of mass storage systems, repositories and repository management systems, and data management software and tools.

    Service Selection of Ensuring Transactional Reliability and QoS for Web Service Composition

    Service-Oriented Architecture (SOA) provides a flexible framework for service composition. Using standards-based protocols, a composite service can be constructed by independently integrating component services. As component services are developed by different organizations and offer diverse transactional properties and QoS characteristics, it is a challenging problem to select component services that ensure reliable execution of the composite Web service while constructing the optimal composition. In this paper, we propose a selection approach that combines transactional properties, which ensure reliability, with QoS characteristics. In this approach, we build an automaton model to implement transaction-aware service selection and use the model to guarantee reliable execution of the composite Web service. We also define aggregation functions and use a Multiple-Attribute Decision-Making approach for the utility function to achieve QoS-based optimal service selection. Finally, two experimental scenarios are presented to demonstrate the validity of the selection approach.
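    As an illustration of the Multiple-Attribute Decision-Making step described above, here is a minimal sketch of QoS-based selection using a simple additive weighting utility; the attribute names, weights, and candidate values are assumptions for illustration, not taken from the paper.

    # Minimal sketch of QoS-based service selection with a simple
    # additive weighting (SAW) utility. Attributes where lower is
    # better (price, response time) are inverted during normalization.
    # Candidate services, attribute values, and weights are hypothetical.

    candidates = {
        "serviceA": {"price": 0.08, "response_time": 120, "availability": 0.99},
        "serviceB": {"price": 0.05, "response_time": 200, "availability": 0.97},
        "serviceC": {"price": 0.10, "response_time": 90,  "availability": 0.995},
    }

    weights = {"price": 0.3, "response_time": 0.3, "availability": 0.4}
    lower_is_better = {"price", "response_time"}

    def normalize(attr, value):
        values = [qos[attr] for qos in candidates.values()]
        lo, hi = min(values), max(values)
        if hi == lo:
            return 1.0
        scaled = (value - lo) / (hi - lo)
        # Invert cost-type attributes so that 1.0 is always best.
        return 1.0 - scaled if attr in lower_is_better else scaled

    def utility(qos):
        return sum(weights[a] * normalize(a, v) for a, v in qos.items())

    best = max(candidates, key=lambda name: utility(candidates[name]))
    print(best, {name: round(utility(q), 3) for name, q in candidates.items()})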

    An agent based architecture to support monitoring in plug and produce manufacturing systems using knowledge extraction

    In recent years a set of production paradigms was proposed to enable manufacturers to meet new market requirements, such as the shift in demand from traditional mass-produced, standardized consumables towards highly customized products with shorter product life cycles. These paradigms advocate solutions that empower manufacturing systems with a high capacity to adapt, together with the flexibility and robustness needed to deal with disturbances such as unexpected orders or malfunctions. Evolvable Production Systems propose a solution based on fine-grained modularity and self-organization, supporting pluggability and thereby allowing companies to add and/or remove components during execution without extra re-programming effort. However, current monitoring software was not designed to fully support these characteristics: it is commonly based on centralized SCADA systems that cannot re-adapt during execution to the unexpected plugging/unplugging of devices or to changes in the system's topology. Considering these aspects, the work developed for this thesis encompasses a fully distributed agent-based architecture capable of performing knowledge extraction at different levels of abstraction without sacrificing the capacity to add and/or remove the monitoring entities, responsible for data extraction and analysis, at runtime.
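    To make the plug-and-produce idea concrete, here is a minimal sketch of monitoring agents being plugged and unplugged at runtime through an in-process registry; the class and method names are hypothetical illustrations, not the thesis's actual architecture.

    # Minimal sketch of runtime plug/unplug of monitoring agents, as in
    # a plug-and-produce setting. Registry and agent classes are
    # hypothetical; a real system would discover devices on a network.

    class MonitoringAgent:
        def __init__(self, device_id):
            self.device_id = device_id

        def on_sample(self, value):
            # A real agent would extract knowledge here (trends, alarms, ...).
            print(f"[{self.device_id}] observed {value}")

    class MonitoringRegistry:
        """Tracks which agents are plugged in; no re-programming needed."""

        def __init__(self):
            self.agents = {}

        def plug(self, agent):
            self.agents[agent.device_id] = agent

        def unplug(self, device_id):
            self.agents.pop(device_id, None)

        def dispatch(self, device_id, value):
            agent = self.agents.get(device_id)
            if agent is not None:
                agent.on_sample(value)

    registry = MonitoringRegistry()
    registry.plug(MonitoringAgent("drill-01"))   # device plugged at runtime
    registry.dispatch("drill-01", 42.0)
    registry.unplug("drill-01")                  # device removed, no restart
    registry.dispatch("drill-01", 43.0)          # silently ignored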

    Interconnection Architecture of Proximity Smart IoE-Networks with Centralised Management

    Interoperability between communicating objects is the main goal of the Internet of Things (IoT). Efforts to achieve it have produced several architecture proposals; however, no consensus has yet been reached. These architectures differ in structure, degree of centralisation, routing algorithm, routing metrics, discovery techniques, search algorithms, segmentation, quality of service, and security, among other aspects. Some are better than others depending on the environment in which they operate and the type of parameter used. The most popular are those oriented to events or rule-based actions, which has allowed IoT to enter the market and achieve rapid massification. However, their interoperability rests on alliances between manufacturers to achieve compatibility. This solution is realised in the cloud with a platform that unifies the different allied brands, which brings these technologies into users' everyday lives but does not solve problems of autonomy or interoperability. Moreover, it does not include the new generation of smart networks based on smart things. The architecture proposed in this thesis takes the most relevant aspects of the four most accepted IoT architectures and integrates them into one, separating the IoT layer (commonly present in these architectures) into three layers. It is also intended to cover proximity networks (integrating different IoT interconnection technologies) and to base its operation on artificial intelligence (AI). This proposal therefore increases the possibility of achieving the expected interoperability and increases the functionality of each object in the network, focused on providing a service to the user. Although the proposed system includes artificial-intelligence processing, it follows the same technical aspects as its predecessors, since its operation and communication are still based on the application and transport layers of the TCP/IP protocol stack. However, to take advantage of existing IoT protocols without modifying their operation, an additional protocol is created that is encapsulated in, and adapts to, their payload. This protocol discovers the features of an object (DFSP), divided into functions, services, capabilities, and resources, and extracts them to be centralised in the network manager (IoT-Gateway). With this information, the IoT-Gateway can make decisions such as creating autonomous workgroups that provide a service to the user and routing the objects in a group that provide the service. It also measures the quality of experience (QoE) of the service, manages Internet access, and integrates with other IoT networks using artificial intelligence in the cloud.
    Because this proposal is based on a new hierarchical system for interconnecting objects of different types controlled by AI under centralised management, fault tolerance and security are reduced while data processing is improved. Data is preprocessed at three levels depending on the type of service and sent through an interface. Data about an object's features, however, requires little processing, so each object preprocesses it independently, structures it, and sends it to the central administration. An IoT network based on this architecture can classify a new object arriving on the network into a workgroup without user intervention. It can also provide services that require heavy processing (e.g., multimedia) and track the user across other IoT networks through the cloud.
    González Ramírez, PL. (2022). Interconnection Architecture of Proximity Smart IoE-Networks with Centralised Management [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/181892
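    The abstract describes DFSP only at a high level; the following is a minimal sketch of what a feature-description payload centralised at the IoT-Gateway might look like. All field names and values are assumptions for illustration, not the thesis's actual wire format.

    import json

    # Hypothetical DFSP-style feature description for one object, split
    # into the four categories the abstract names: functions, services,
    # capabilities, and resources. Field names and values are illustrative.

    feature_description = {
        "object_id": "lamp-17",
        "functions": ["switch", "dim"],
        "services": ["ambient-lighting"],
        "capabilities": {"protocols": ["CoAP"], "max_brightness": 100},
        "resources": {"cpu_mhz": 48, "ram_kb": 256, "battery": False},
    }

    def register_with_gateway(description):
        """Preprocess and structure the description before sending it to
        the IoT-Gateway, which centralises it for workgroup formation."""
        payload = json.dumps(description)
        # A real object would transmit this over its IoT protocol; here
        # we simply return the serialized payload.
        return payload

    print(register_with_gateway(feature_description))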

    An automated OpenCL FPGA compilation framework targeting a configurable, VLIW chip multiprocessor

    Modern systems-on-chip augment their baseline CPU with coprocessors and accelerators to increase overall computational capacity and power efficiency, and have thus evolved into heterogeneous systems. Several languages have been developed to enable this paradigm shift, including CUDA and OpenCL. This thesis discusses a unified compilation environment to enable heterogeneous system design through the use of OpenCL and a customised VLIW chip multiprocessor (CMP) architecture, known as the LE1. An LLVM compilation framework was researched and a prototype developed to enable the execution of OpenCL applications on the LE1 CPU. The framework fully automates the compilation flow and supports work-item coalescing to better utilise the CPU cores and alleviate the effects of thread divergence. This thesis discusses in detail both the software stack and the target hardware architecture, and evaluates the scalability of the proposed framework on a cycle-accurate simulator. This is achieved through the execution of 12 benchmarks across 240 different machine configurations, supplemented by further results from an in-development branch of the compiler. It is shown that the problems generally scale well on the LE1 architecture up to eight cores, at which point the memory system becomes a serious bottleneck. Results demonstrate superlinear performance on certain benchmarks (9x for the bitonic sort benchmark with 8 dual-issue cores), with further improvements from compiler optimisations (14x for bitonic sort with the same configuration).
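    Work-item coalescing, mentioned above, serialises the work-items of an OpenCL work-group into loops around the kernel body so a few CPU cores can execute many logical threads. Below is a minimal Python sketch of the idea; the kernel, sizes, and buffers are illustrative assumptions, not the framework's actual output.

    # Minimal sketch of work-item coalescing: instead of launching one
    # thread per OpenCL work-item, the kernel body is wrapped in a loop
    # over the local index space so one core runs a whole work-group.

    def vec_add_body(gid, a, b, out):
        # Original per-work-item kernel body: out[gid] = a[gid] + b[gid]
        out[gid] = a[gid] + b[gid]

    def coalesced_work_group(group_id, local_size, a, b, out):
        # The coalescing loop replaces the implicit parallel launch.
        for local_id in range(local_size):
            gid = group_id * local_size + local_id
            vec_add_body(gid, a, b, out)

    n, local_size = 16, 4
    a = list(range(n))
    b = list(range(n))
    out = [0] * n

    # Each work-group would map to one core; here they run serially.
    for group_id in range(n // local_size):
        coalesced_work_group(group_id, local_size, a, b, out)

    print(out)  # [0, 2, 4, ..., 30]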

    Coordination and P2P computing

    Peer-to-Peer (P2P) refers to a class of systems and applications that use distributed resources in a decentralized and autonomous manner to achieve a goal. A number of successful applications, such as BitTorrent (file and content sharing) and SETI@home (distributed computing), have demonstrated the feasibility of this approach. As a new form of distributed computing, P2P computing faces the same coordination problems as other forms of distributed computing. Coordination has long been considered an important issue in distributed computing, and many coordination models and languages have been developed. This research focuses on how to solve coordination problems in P2P computing; in particular, it aims to provide a seamless P2P computing environment in which the migration of computation components is transparent. It extends Manifold, an event-driven coordination model, to meet P2P computing requirements and integrates the resulting P2P-Manifold model into an existing platform. The integration hides the complexity of the coordination model and makes the model easy to use.
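    In an event-driven coordination model such as Manifold, computation is separated from communication: components raise events, and a coordinator reacts by rewiring the connections between them. Here is a minimal generic sketch of that separation; the class, method, and event names are hypothetical and do not reflect Manifold's actual syntax or semantics.

    # Minimal sketch of event-driven coordination: workers raise events,
    # and a coordinator reacts to them without the workers knowing about
    # each other. Names are illustrative only.

    class Coordinator:
        def __init__(self):
            self.handlers = {}

        def on(self, event, handler):
            self.handlers.setdefault(event, []).append(handler)

        def raise_event(self, event, payload=None):
            for handler in self.handlers.get(event, []):
                handler(payload)

    coordinator = Coordinator()

    # Migration becomes just another event; the producer never learns
    # that its consumer moved to a different peer.
    coordinator.on("result_ready", lambda data: print("consumer got:", data))
    coordinator.on("peer_migrated", lambda peer: print("rewiring streams to", peer))

    coordinator.raise_event("result_ready", {"value": 7})
    coordinator.raise_event("peer_migrated", "peer-42")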