
    Low-power Wearable Healthcare Sensors

    Get PDF
    Advances in technology have produced a range of on-body sensors and smartwatches that can be used to monitor a wearer’s health, with the objective of keeping the user healthy. However, the real potential of such devices lies not only in monitoring but also in interactive communication with expert-system-based cloud services that offer personalized, real-time healthcare advice, enabling users to manage their health and, over time, reducing expensive hospital admissions. To meet this goal, the research challenges for the next generation of wearable healthcare devices include the need to offer a wide range of sensing, computing, communication, and human–computer interaction methods, all within a tiny device with limited resources and electrical power. This Special Issue presents a collection of six papers on a wide range of research developments that highlight the specific challenges in creating the next generation of low-power wearable healthcare sensors.

    Empirical studies of structural phenomena using a curated corpus of Java code

    Full text link
    Contrary to 50 years' worth of advice in the instructional literature on software design, long cyclic dependencies are found to be widespread in a sizeable, curated corpus of real Java software. Among their likely causes are overuse of static members, underuse of dependency injection, and poor tool support for avoiding them.
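
    The last two causes are easy to picture. Below is a minimal, hypothetical Java sketch (class and member names are invented for illustration, not drawn from the studied corpus): a static member lets a logging class reach back into the class that calls it, closing a cycle, while constructor injection of a small interface keeps the dependency pointing one way.

        // Cycle caused by a static member (all names hypothetical):
        // OrderService -> AuditLog -> OrderService.
        class OrderService {
            static int totalOrders = 0;                       // globally reachable state
            void placeOrder(String id) {
                totalOrders++;
                AuditLog.record("order " + id);               // OrderService depends on AuditLog
            }
        }

        class AuditLog {
            static void record(String msg) {
                // Reading OrderService's static counter closes the cycle.
                System.out.println(msg + " (orders so far: " + OrderService.totalOrders + ")");
            }
        }

        // Breaking the cycle with dependency injection: the log only sees an interface
        // it owns, so the dependency points one way again.
        interface OrderStats { int orders(); }

        class InjectedAuditLog {
            private final OrderStats stats;
            InjectedAuditLog(OrderStats stats) { this.stats = stats; }   // injected, not reached statically
            void record(String msg) {
                System.out.println(msg + " (orders so far: " + stats.orders() + ")");
            }
        }

        public class CycleDemo {
            public static void main(String[] args) {
                new OrderService().placeOrder("A-1");
                new InjectedAuditLog(() -> OrderService.totalOrders).record("order A-2");
            }
        }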

    Data Bandwidth Reduction Techniques For Distributed Embedded Simulation

    Get PDF
    Maintaining coherence between the independent views of multiple participants at distributed locations is essential in an Embedded Simulation environment. Currently, the Distributed Interactive Simulation (DIS) protocol maintains coherence by broadcasting the entity state streams from each simulation station. In this dissertation, a novel alternative to DIS that replaces the transmitting sources with local sources is developed, validated, and assessed by analytical and experimental means. The proposed Concurrent Model approach reduces the communication burden to transmission of only synchronization and model-update messages. Necessary and sufficient conditions for the correctness of Concurrent Models in a discrete event simulation environment are established by developing Behavioral Congruence B(EL, ER) and Temporal Congruence T(t, ER) functions. They indicate model discrepancies with respect to the simulation time t and the local and remote entity state streams EL and ER, respectively. Performance benefits were quantified in terms of the bandwidth reduction ratio BR = N/I, obtained by comparing the OneSAF Testbed Semi-Automated Forces (OTBSAF) simulator under DIS, requiring a total of N bits, with a testbed modified for the Concurrent Model approach, which required I bits. In the experiments conducted, a range of 100 ≤ BR ≤ 294 was obtained, representing two orders of magnitude of reduction in simulation traffic. Investigation showed that the models rely heavily on the priority data structure of the discrete event simulation and that performance of the overall simulation can be enhanced by an additional 6% by improving the queue management. A low run-time overhead, self-adapting storage policy called the Smart Priority Queue (SPQ) was developed and evaluated within the Concurrent Model. The proposed SPQ policies employ a low-complexity linear queue for near-head activities and a rapid-indexing, variable bin-width calendar queue for distant events. The SPQ configuration is determined by monitoring queue access behavior using cost-scoring factors and then applying heuristics to adjust the organization of the underlying data structures. Results indicate that matching the storage organization to the spatial distribution of queue accesses can decrease HOLD operation cost by between 25% and 250% compared with existing algorithms such as calendar queues. Taken together, these techniques provide an entity-state generation mechanism capable of overcoming the challenges of Embedded Simulation in harsh mobile communications environments with restricted bandwidth, increased message latency, and extended message drop-outs.
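
    The two-tier idea behind the SPQ can be pictured with a short sketch. The Java code below is illustrative only: the class and event names, the fixed bin width and the near/far threshold are assumptions made for the example, not the dissertation's implementation or its cost-scoring heuristics. Events close to the head sit in a small sorted list that is cheap to scan, while distant events are dropped into coarse time bins and only sorted when they migrate forward.

        import java.util.ArrayList;
        import java.util.LinkedList;
        import java.util.List;
        import java.util.TreeMap;

        /** Illustrative two-tier event queue: a sorted "near" list for events close to the
         *  head and coarse time bins for distant events (a stand-in for a calendar queue). */
        class TwoTierEventQueue {
            record Event(double time, String payload) {}

            private final LinkedList<Event> near = new LinkedList<>();          // kept sorted by time
            private final TreeMap<Long, List<Event>> farBins = new TreeMap<>(); // bin index -> events
            private final double horizon;   // events beyond now + horizon go to the far tier
            private final double binWidth;  // width of each far-tier bin
            private double now = 0.0;

            TwoTierEventQueue(double horizon, double binWidth) {
                this.horizon = horizon;
                this.binWidth = binWidth;
            }

            void enqueue(Event e) {
                if (e.time() <= now + horizon) {
                    insertSorted(e);                          // cheap: the near list stays short
                } else {
                    long bin = (long) Math.floor(e.time() / binWidth);
                    farBins.computeIfAbsent(bin, k -> new ArrayList<>()).add(e);  // O(1) amortised
                }
            }

            /** HOLD-style operation: pop the earliest event and advance simulated time.
             *  Simplified: far-tier bins are only consulted when the near list runs dry. */
            Event dequeue() {
                if (near.isEmpty()) migrateNextBin();
                if (near.isEmpty()) return null;
                Event e = near.removeFirst();
                now = e.time();
                return e;
            }

            private void insertSorted(Event e) {
                int i = 0;
                for (Event x : near) {
                    if (x.time() > e.time()) break;
                    i++;
                }
                near.add(i, e);
            }

            private void migrateNextBin() {
                var entry = farBins.pollFirstEntry();
                if (entry == null) return;
                for (Event e : entry.getValue()) insertSorted(e);  // sort only when events get close
            }

            public static void main(String[] args) {
                TwoTierEventQueue q = new TwoTierEventQueue(10.0, 50.0);
                q.enqueue(new Event(3.0, "near event"));
                q.enqueue(new Event(120.0, "distant event"));
                q.enqueue(new Event(5.5, "another near event"));
                for (Event e; (e = q.dequeue()) != null; ) System.out.println(e);
            }
        }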

    CazDataProvider: a solution to the object-relational mismatch

    Get PDF
    Master's dissertation in Informatics Engineering. Today, most software applications require mechanisms to store information persistently. For decades, Relational Database Management Systems (RDBMSs) have been the most common technology for providing efficient and reliable persistence. Because of the object-relational paradigm mismatch, object-oriented applications that store data in relational databases have to deal with Object-Relational Mapping (ORM) problems. Since the emergence of new ORM frameworks, there has been an attempt to lure developers into a radical paradigm shift. However, they still often have trouble finding the best persistence mechanism for their applications, especially when they have to cope with legacy database systems. The aim of this dissertation is to discuss the persistence problem in object-oriented applications and find the best solutions. The main focus lies on ORM limitations, patterns, technologies and alternatives. The project supporting this dissertation was implemented at Cachapuz under the project Global Weighting Solutions (GWS). Essentially, the objectives of GWS centred on finding the optimal persistence layer for CazFramework, mostly providing database interoperability and close-to-SQL (Structured Query Language) querying. Therefore, this work provides analyses of ORM patterns and frameworks, and of alternatives to ORM such as Object-Oriented Database Management Systems (OODBMSs). It also describes the implementation of CazDataProvider, a .NET library providing database interoperability and dynamic query features. Finally, there is a performance comparison of all the technologies discussed in this dissertation. The result of this dissertation provides guidance for adopting the best persistence technology or implementing the most suitable ORM architectures.
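
    CazDataProvider itself is a .NET library, but the mismatch it addresses is language-independent. The Java/JDBC sketch below (table, column and class names are invented for illustration) shows the row-to-object mapping boilerplate that an ORM framework or a data-provider layer is meant to absorb, and that grows with every entity and every schema change.

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;

        /** Hand-written mapping between a relational row and an object. A Connection would be
         *  obtained elsewhere, e.g. via DriverManager.getConnection(...). */
        class CustomerRepository {
            record Customer(long id, String name, String email) {}

            private final Connection connection;

            CustomerRepository(Connection connection) {
                this.connection = connection;
            }

            Customer findById(long id) throws SQLException {
                String sql = "SELECT id, name, email FROM customer WHERE id = ?";
                try (PreparedStatement st = connection.prepareStatement(sql)) {
                    st.setLong(1, id);
                    try (ResultSet rs = st.executeQuery()) {
                        if (!rs.next()) return null;
                        // Each column must be copied by hand into the object graph: this
                        // row-to-object translation is the object-relational mismatch in miniature.
                        return new Customer(rs.getLong("id"), rs.getString("name"), rs.getString("email"));
                    }
                }
            }

            void save(Customer c) throws SQLException {
                String sql = "INSERT INTO customer (id, name, email) VALUES (?, ?, ?)";
                try (PreparedStatement st = connection.prepareStatement(sql)) {
                    st.setLong(1, c.id());
                    st.setString(2, c.name());
                    st.setString(3, c.email());
                    st.executeUpdate();
                }
            }
        }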

    Efficient multilevel scheduling in grids and clouds with dynamic provisioning

    Get PDF
    Thesis of the Universidad Complutense de Madrid, Facultad de Informática, Departamento de Arquitectura de Computadores y Automática, defended on 12-01-2016. The consolidation of large Distributed Computing infrastructures has resulted in a High-Throughput Computing platform that is ready for high loads, whose best exponents are the current grid federations. On the other hand, Cloud Computing promises to be more flexible, usable, available and simple than Grid Computing, covering many more computational needs than those required to carry out distributed calculations. In any case, because of the dynamism and heterogeneity present in grids and clouds, calculating the best match between computational tasks and resources in an effectively characterised infrastructure is, by definition, an NP-complete problem, and only sub-optimal solutions (schedules) can be found for these environments. Nevertheless, the characterisation of the resources of both kinds of infrastructures is far from complete. The available information systems do not provide accurate data about the status of the resources, which prevents the advanced scheduling required by the different needs of distributed applications. The issue was not solved for grids during the last decade, and the recently established cloud infrastructures present the same problem. In this context, brokers can only improve the throughput of very long calculations, but do not provide estimations of their duration. Complex scheduling has traditionally been tackled by other tools such as workflow managers, self-schedulers and the production management systems of certain research communities. Nevertheless, the low performance achieved by these early-binding methods is well known. Moreover, the diversity of cloud providers and, mainly, their lack of standardised programming interfaces and brokering tools to distribute the workload hinder the massive portability of legacy applications to cloud environments...
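
    Because the optimal assignment is NP-complete, practical brokers fall back on heuristics. The Java sketch below is one such heuristic under simplifying assumptions (known per-resource speeds, invented names, an earliest-estimated-completion rule); it is meant only to illustrate why the resulting schedule is sub-optimal, not to reproduce any broker described here.

        import java.util.ArrayList;
        import java.util.Comparator;
        import java.util.List;

        /** Minimal scheduling heuristic: assign each task to the resource expected to finish it
         *  earliest, given an assumed performance factor per resource. */
        class GreedyScheduler {
            record Task(String name, double work) {}          // work in abstract units
            static final class Resource {
                final String name;
                final double speed;        // units of work per second (assumed known)
                double busyUntil = 0.0;    // when the resource becomes free
                Resource(String name, double speed) { this.name = name; this.speed = speed; }
            }

            static void schedule(List<Task> tasks, List<Resource> resources) {
                for (Task t : tasks) {
                    // Pick the resource with the earliest estimated completion time for this task.
                    Resource best = resources.stream()
                            .min(Comparator.comparingDouble(r -> r.busyUntil + t.work() / r.speed))
                            .orElseThrow();
                    best.busyUntil += t.work() / best.speed;
                    System.out.printf("%s -> %s (done at t=%.1f)%n", t.name(), best.name, best.busyUntil);
                }
            }

            public static void main(String[] args) {
                List<Resource> res = new ArrayList<>(List.of(
                        new Resource("grid-site-A", 2.0), new Resource("cloud-vm-B", 1.0)));
                schedule(List.of(new Task("job-1", 10), new Task("job-2", 4), new Task("job-3", 6)), res);
            }
        }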

    Strategies of development and maintenance in supervision, control, synchronization, data acquisition and processing in light sources

    Get PDF
    Programa Oficial de Doutoramento en Tecnoloxías da Información e as Comunicacións (5032V01). Particle accelerators and photon sources are constantly evolving, adopting cutting-edge technologies to push the limits forward and explore new domains. Control systems are a crucial part of these installations and are required to provide flexible solutions for new and challenging experiments, with different kinds of detectors, setups, sample environments and procedures. Experiment proposals become more ambitious at each call and often go a step beyond the capabilities of the instrumentation. Detectors must be faster and more efficient, with more resolution and more bandwidth, and able to synchronize with other detectors of all kinds, whether scalar, one- or two-dimensional, taking their singularities into account and homogenizing the data acquisition. This work examines the control and data acquisition systems of particle accelerators and X-ray / light sources and explores new requirements and challenges regarding synchronization and data acquisition bandwidth, as well as optimization and cost-efficiency in design, operation, support, maintenance and service management. It also studies different solutions adapted to each environment.
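
    One way to picture the "homogenizing" requirement is a merge of heterogeneous detector readings onto shared trigger identifiers. The Java sketch below is purely illustrative: the detector names, the Reading shape and the grouping rule are assumptions made for the example, not the design of any of the control systems studied here.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.Map;
        import java.util.TreeMap;

        /** Illustrative alignment of readings from heterogeneous detectors onto shared triggers,
         *  so that one "event" row carries all detectors' data for the same trigger. */
        class TriggerAlignedMerge {
            record Reading(long triggerId, String detector, double[] data) {}

            static Map<Long, List<Reading>> mergeByTrigger(List<Reading> readings) {
                Map<Long, List<Reading>> byTrigger = new TreeMap<>();   // sorted by trigger id
                for (Reading r : readings) {
                    byTrigger.computeIfAbsent(r.triggerId(), k -> new ArrayList<>()).add(r);
                }
                return byTrigger;
            }

            public static void main(String[] args) {
                List<Reading> readings = List.of(
                        new Reading(1, "scalar-counter", new double[]{42}),
                        new Reading(1, "1d-strip-detector", new double[]{0.1, 0.4, 0.2}),
                        new Reading(2, "scalar-counter", new double[]{40}));
                mergeByTrigger(readings).forEach((trigger, rs) ->
                        System.out.println("trigger " + trigger + ": " + rs.size() + " detector reading(s)"));
            }
        }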

    A software architecture for electro-mobility services: a milestone for sustainable remote vehicle capabilities

    Get PDF
    To face the tough competition, changing markets and changing technologies in the automotive industry, automakers have to be highly innovative. In recent decades, innovations were electronics- and IT-driven, which increased the complexity of the vehicle’s internal network exponentially. Furthermore, the growing expectations and preferences of customers oblige manufacturers to adapt their business models and also to propose mobility-based services. On the other hand, there is increasing pressure from regulators to significantly reduce the environmental footprint of transportation and mobility, down to zero in the foreseeable future. This dissertation investigates an architecture for communication and data exchange within a complex and heterogeneous ecosystem. This communication takes place between various third-party entities on one side, and between these entities and the infrastructure on the other. The proposed solution considerably reduces the complexity of vehicle communication and of the interactions among the parties involved in the ODX life cycle. In such a heterogeneous environment, particular attention is paid to the protection of confidential and private data. Confidential data here refers to the OEM’s know-how enclosed in vehicle projects; the data delivered by a car during a vehicle communication session might also contain customers’ private data. Our solution ensures that every entity in this ecosystem has access only to the data it has the right to see. We designed the solution to avoid technological coupling, so that it can be implemented on any platform and benefit from the environment best suited to each task. We also proposed a data model for vehicle projects that improves query time during a vehicle diagnostic session. Scalability and backwards compatibility were also taken into account during the design phase. We proposed the necessary algorithms and workflow to perform efficient vehicle diagnostics with considerably lower latency and substantially better time and space complexity than current solutions. To demonstrate the practicality of the design, we present a prototypical implementation, analyse the results of a series of tests performed on several vehicle models and projects, and evaluate the prototype against software-engineering quality attributes.
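
    The entitlement requirement (every entity has access only to the data it has the right to see) can be sketched as a per-record filter. The Java example below is a minimal illustration under assumed role names and record fields; it is not the dissertation's access-control design.

        import java.util.List;
        import java.util.Set;
        import java.util.stream.Collectors;

        /** Illustrative per-party filtering of diagnostic data: each record carries the roles
         *  allowed to read it, and a session only ever sees the subset it is entitled to. */
        class DiagnosticAccessFilter {
            enum Role { OEM, WORKSHOP, FLEET_OPERATOR }

            record DiagnosticRecord(String parameter, String value, Set<Role> readableBy) {}

            static List<DiagnosticRecord> visibleTo(Role requester, List<DiagnosticRecord> records) {
                return records.stream()
                        .filter(r -> r.readableBy().contains(requester))  // entitlement check per record
                        .collect(Collectors.toList());
            }

            public static void main(String[] args) {
                List<DiagnosticRecord> session = List.of(
                        new DiagnosticRecord("ecu.software.version", "1.4.2",
                                Set.of(Role.OEM, Role.WORKSHOP)),
                        new DiagnosticRecord("calibration.map", "<OEM know-how>", Set.of(Role.OEM)),
                        new DiagnosticRecord("odometer.km", "83214",
                                Set.of(Role.OEM, Role.WORKSHOP, Role.FLEET_OPERATOR)));
                visibleTo(Role.WORKSHOP, session)
                        .forEach(r -> System.out.println(r.parameter() + " = " + r.value()));
            }
        }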