
    Service Composition for IP Smart Object using Realtime Web Protocols: Concept and Research Challenges

    The Internet of Things (IoT) refers to a world-wide network of interconnected physical things using standardized communication protocols. Recent development of Internet Protocol (IP) stacks for resource-constrained devices unveils a possibility for a future IoT built on the stable and scalable IP technology, much like today's Internet of computers. One important question remains: how can data and events (denoted as services) introduced by a variety of IP-networked things be exchanged and aggregated efficiently in various application domains? Because the true value of the IoT lies in the interaction of services from many physical things, answers to this question are essential to support the rapid creation of new smart and ubiquitous IoT applications. The problem is known as service composition. This article examines the practicability of a future full-IP IoT built on realtime Web protocols, formally states the problem of service composition for IP smart objects, provides a literature review, and discusses the research challenges.
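    As a hedged illustration only (not taken from the article): the realtime Web protocols discussed in this line of work typically include CoAP, and a single service exposed by an IP smart object can be read with a few lines of Python using the third-party aiocoap library. The host name and resource path below are hypothetical.

        import asyncio
        from aiocoap import Context, Message, GET  # pip install aiocoap

        async def read_service():
            # Create a CoAP client context over UDP.
            protocol = await Context.create_client_context()
            # Hypothetical service endpoint exposed by an IP smart object.
            request = Message(code=GET, uri="coap://smart-object.local/sensors/temperature")
            response = await protocol.request(request).response
            print(response.payload.decode())

        asyncio.run(read_service())

    A composition layer would aggregate several such service reads from different physical things into one higher-level application service, which is the problem the article formalizes.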

    Insight into the Digital Health System of Ukraine (eHealth): Trends, Definitions, Standards, and Legislative Revisions

    Purpose. This article aims to provide an in-depth examination of the digital health system of Ukraine, focusing on the emerging trends, precise definitions, established standards, and recent legislative revisions that shape the practice and implementation of eHealth solutions within the country. Background. The digital health landscape in Ukraine has witnessed significant transformations, especially in the wake of the COVID-19 pandemic and subsequent military conflicts. These events have catalyzed the expansion of telemedicine services, leading to innovative approaches in healthcare delivery. The national strategy underscores the necessity for human-centric and accessible telemedicine, reinforced by technological neutrality and harmonization with global standards. Methods. A review of the current literature, national strategies, and legal documents was conducted, alongside an analysis of data usage and service provision patterns in various Ukrainian regions. Participation in the "Science for Safety and Sustainable Development of Ukraine" competition facilitated project initiatives such as the development of a cloud-based platform for patient-centered telerehabilitation for oncology patients. Findings. The utilization of telemedicine has significantly increased in conflict-affected regions, demonstrating the need for, and the effective deployment of, digital health strategies under crisis conditions. Private health facilities and entrepreneurs have been pivotal in the provision of telemedicine services. Legislative efforts have been geared toward framing telemedicine as an integral component of the national eHealth system, ensuring interoperability, and aligning with international standards and the Internet of Medical Things (IoMT). Interpretation. The findings underscore the resilience and adaptability of the Ukrainian healthcare system in the face of adversity. There is a clear movement towards a more integrated, patient-focused, and technologically advanced healthcare model, in line with international trends and prioritizing public health goals over private profits. This progress, however, is contingent upon continuous development, investment in technological infrastructure, and legislative support to sustain and advance digital health initiatives.

    Near Field Communication: From theory to practice

    This book provides the technical essentials, state-of-the-art knowledge, business ecosystem and standards of Near Field Communication (NFC), from NFC Lab - Istanbul, a research centre that conducts intense research on NFC technology. The authors present contemporary research on all aspects of NFC, addressing the related security aspects as well as various business models. In addition, the book provides the comprehensive information a designer needs to design an NFC project, an analyst needs to analyze the requirements of a new NFC-based system, and a programmer needs to implement an application. Furthermore, the authors introduce the technical and administrative issues related to NFC technology, standards, and global stakeholders, and offer use case studies for each NFC operating mode to convey the usage idea behind each mode thoroughly. Examples of NFC application development are provided using Java technology, and security considerations are discussed in detail. Key features:
    - Offers a complete understanding of NFC technology, including standards, technical essentials, operating modes, application development with Java, security and privacy, and business ecosystem analysis
    - Provides analysis, design and development guidance for professionals from administrative and technical perspectives
    - Discusses methods, techniques and modelling support, including UML, demonstrated with real cases
    - Contains case studies such as payment, ticketing, social networking and remote shopping
    This book will be an invaluable guide for business and ecosystem analysts, project managers, mobile commerce consultants, system and application developers, mobile developers and practitioners. It will also be of interest to researchers, software engineers, computer scientists, and information technology specialists, including students and graduates.
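    As an illustrative sketch only (the book's own examples use Java; this is a rough Python equivalent using the third-party nfcpy library, and it assumes a USB contactless reader is attached): reader/writer mode, one of the NFC operating modes covered in the book, amounts to connecting to a tag and reading its NDEF records.

        import nfc  # pip install nfcpy

        def on_connect(tag):
            # Print the NDEF records stored on the tag, if any.
            if tag.ndef:
                for record in tag.ndef.records:
                    print(record)
            return True  # hold the connection until the tag is removed

        # 'usb' selects the first USB contactless frontend found.
        with nfc.ContactlessFrontend('usb') as clf:
            clf.connect(rdwr={'on-connect': on_connect})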

    Internetworking Objects with RFID


    Some Multidimensional Unintended Consequences of Telehealth Utilization: A Multi-Project Evaluation Synthesis

    Background: Telehealth initiatives have bloomed around the globe, but their integration and diffusion remain challenging because of the complex issues they raise. Available evidence around telehealth usually deals with its expected effects and benefits, but its unintended consequences (UCs) and their influencing factors are little documented. This study aims to explore, describe and analyze the multidimensional UCs that have been associated with the use of telehealth. Methods: We performed a secondary analysis of the evaluations of 10 telehealth projects conducted over a 22-year period in the province of Quebec (Canada). All material was subjected to a qualitative thematic-pragmatic content analysis with triangulation of methodologies and data sources. We used the conceptual model of the UCs of health information technologies proposed by Bloomrosen et al. to structure our analysis. Results: Four major findings emerged from our analysis. First, telehealth utilization requires many adjustments, changes and negotiations that are often underestimated in the planning and initial phases of the projects. Second, telehealth may result in the emergence of new service corridors that disturb existing ones and involve several adjustments for organizations, such as additional investments and resources, but also the risk of fragmentation of services and the need to balance standardization of practices against local innovation. Third, telehealth may accentuate power relations between stakeholders. Fourth, it may lead to significant changes in the responsibilities of each actor in the supply chain of services. Finally, current legislative and regulatory frameworks appear ill-adapted to many of the new realities brought by telehealth. Conclusion: This study provides a first attempt at an overview of the UCs associated with the use of telehealth. Future research and evaluation studies should be more sensitive to the multidimensional and interdependent factors that influence telehealth implementation and utilization, as well as its impacts, intended or unintended, at all levels. Thus, a consideration of potential UCs should inform telehealth projects, from their planning through to their scaling-up.

    Architecture, Techniques and Models to Enable Data Science in the Gaia Mission Archive

    Unpublished doctoral thesis, Universidad Complutense de Madrid, Facultad de Informática, Departamento de Arquitectura de Computadores y Automática, defended on 26/05/2017. The massive amounts of data that the world produces every day pose new challenges to modern societies in terms of how to leverage their inherent value. Social networks, instant messaging, video, smart devices and scientific missions are just a few examples of the vast number of sources generating data every second. As the world becomes more and more digitalized, new needs arise for organizing, archiving, sharing, analyzing, visualizing and protecting the ever-increasing data sets, so that we can truly develop into a data-driven economy that reduces inefficiencies and increases sustainability, creating new business opportunities along the way. Traditional approaches for harnessing data are no longer suitable, as they lack the means to scale to the larger volumes in a timely and cost-efficient manner. This has changed somewhat with the advent of Internet companies like Google and Facebook, which have devised new ways of tackling this issue. However, the variety and complexity of the value chains in the private sector, as well as the increasing demands and constraints under which the public sector operates, call for ongoing research that can yield newer strategies for dealing with data, facilitate the integration of providers and consumers of information, and guarantee a smooth and prompt transition when adopting these cutting-edge technological advances. This thesis aims at providing novel architectures and techniques that will help perform this transition towards Big Data in massive scientific archives. It highlights the common pitfalls that must be faced when embracing it and how to overcome them, especially when the data sets, their transformation pipelines and the tools used for the analysis are already present in the organizations. Furthermore, a new perspective for facilitating a smoother transition is laid out. It involves the usage of higher-level, use-case-specific frameworks and models, which will naturally bridge the gap between the technological and scientific domains. This alternative will effectively widen the possibilities of scientific archives and therefore contribute to reducing the time to science. The research will be applied to the European Space Agency cornerstone mission Gaia, whose final data archive will represent a tremendous discovery potential. It will create the largest and most precise three-dimensional chart of our galaxy (the Milky Way), providing unprecedented position, parallax and proper motion measurements for about one billion stars. The successful exploitation of this data archive will depend to a large degree on the ability to offer the proper architecture, i.e. infrastructure and middleware, upon which scientists will be able to do exploration and modeling with this huge data set. In consequence, the approach taken needs to enable data fusion with other scientific archives, as this will produce the synergies leading to an increment in scientific outcome, both in volume and in quality. The set of novel techniques and frameworks presented in this work addresses these issues by contextualizing them with the data products that will be generated in the Gaia mission. All these considerations have led to the foundations of the architecture that will be leveraged by the Science Enabling Applications Work Package.
    Last but not least, the effectiveness of the proposed solution will be demonstrated through the implementation of some ambitious statistical problems that will require significant computational capabilities, and which will use Gaia-like simulated data (the first Gaia data release took place on September 14th, 2016). These problems are referred to as the Grand Challenge, a somewhat grandiloquent name for the task of inferring, from a probabilistic point of view, a set of parameters of the Initial Mass Function (IMF) and Star Formation Rate (SFR) of a given set of stars (with a huge sample size), from noisy estimates of their masses and ages respectively. This will be achieved using Hierarchical Bayesian Modeling (HBM). In principle, the HBM can incorporate stellar evolution models to infer the IMF and SFR directly, but in this first step presented in this thesis, we start with a somewhat less ambitious goal: inferring the Present-Day Mass Function (PDMF) and the Present-Day Age Distribution (PDAD). Moreover, the performance and scalability analyses carried out will also prove the suitability of the models for the large amounts of data that will be available in the Gaia data archive.
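    As a hedged illustration of the hierarchical Bayesian setup described above (a sketch under simplifying assumptions, not the thesis's actual model): let each star i have a true mass m_i drawn from a mass function with parameters \theta, of which only a noisy estimate \hat{m}_i is observed. Marginalizing the latent masses gives the posterior

        p(\theta \mid \hat{m}_{1:N}) \propto p(\theta) \prod_{i=1}^{N} \int p(\hat{m}_i \mid m_i)\, p(m_i \mid \theta)\, \mathrm{d}m_i

    Evaluating this product of integrals for a sample on the order of a billion stars is what drives the performance and scalability requirements mentioned in the abstract.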

    Content rendering and interaction technologies for digital heritage systems

    Existing digital heritage systems accommodate a huge amount of digital repository information; however, their content rendering and interaction components generally lack the more interesting functionality that allows better interaction with heritage contents. Many digital heritage libraries are simply collections of 2D images with associated metadata and textual content, i.e. little more than museum catalogues presented online. Over the last few years, however, largely as a result of EU framework projects, some 3D representations of digital heritage objects have begun to appear in a digital library context. In the cultural heritage domain, where researchers and museum visitors like to observe cultural objects as closely as possible and to feel their existence and use in the past, giving the user only 2D images along with textual descriptions significantly limits interaction, and hence understanding, of their heritage. The availability of powerful content rendering technologies, such as 3D authoring tools to create 3D objects and heritage scenes, grid tools for rendering complex 3D scenes, gaming engines to display 3D content interactively, and recent advances in motion capture technologies for embodied immersion, allows the development of unique solutions for enhancing user experience and interaction with digital heritage resources and objects, giving a higher level of understanding and greater benefit to the community. This thesis describes DISPLAYS (Digital Library Services for Playing with Shared Heritage Resources), a novel conceptual framework in which five unique services are proposed for digital content: creation, archival, exposition, presentation and interaction services. These services, or tools, are designed to allow the heritage community to create, interpret, use and explore digital heritage resources organised as an online exhibition (or virtual museum). This thesis presents innovative solutions for two of these services: a content creation service, where a cost-effective render grid is proposed; and an interaction service, where a heritage scenario is presented online using a real-time motion capture and digital puppeteer solution that lets users explore their digital heritage through embodied, immersive interaction.

    Integrating Data Science and Earth Science

    This open access book presents the results of three years of collaboration between earth scientists and data scientists in developing and applying data science methods for scientific discovery. The book will be highly beneficial for other researchers at senior and graduate level who are interested in applying visual data exploration, computational approaches and scientific workflows.