
    Challenges in Complex Systems Science

    The foundations of FuturICT are social science, complex systems science, and ICT. This paper lays out the main concerns and challenges in the science of complex systems in the context of FuturICT, with special emphasis on the complex-systems route to the social sciences. These include complex systems having: many heterogeneous interacting parts; multiple scales; complicated transition laws; unexpected or unpredicted emergence; sensitive dependence on initial conditions; path-dependent dynamics; networked hierarchical connectivities; interaction of autonomous agents; self-organisation; non-equilibrium dynamics; combinatorial explosion; adaptivity to changing environments; co-evolving subsystems; ill-defined boundaries; and multilevel dynamics. In this context, science is seen as the process of abstracting the dynamics of systems from data. This presents many challenges, including: data gathering by large-scale experiment, participatory sensing, and social computation; managing huge distributed, dynamic, and heterogeneous databases; moving from data to dynamical models; going beyond correlations to cause-effect relationships; understanding the relationship between simple and comprehensive models with appropriate choices of variables; ensemble modeling and data assimilation; modeling systems of systems of systems with many levels between micro and macro; and formulating new approaches to prediction, forecasting, and risk, especially in systems that can reflect on and change their behaviour in response to predictions, and systems whose apparently predictable behaviour is disrupted by apparently unpredictable rare or extreme events. These challenges are part of the FuturICT agenda.
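
    One of the listed properties, sensitive dependence on initial conditions, is easy to see concretely. The sketch below is a standard textbook illustration (the logistic map), not taken from the paper: two trajectories that start a millionth apart diverge to order one within a few dozen steps.

    ```python
    # Illustrative only: sensitive dependence on initial conditions
    # via the logistic map x_{n+1} = r * x_n * (1 - x_n), with r = 4.
    def logistic_trajectory(x0, r=4.0, steps=50):
        """Iterate the logistic map from x0 and return the full trajectory."""
        xs = [x0]
        for _ in range(steps):
            xs.append(r * xs[-1] * (1.0 - xs[-1]))
        return xs

    a = logistic_trajectory(0.400000)
    b = logistic_trajectory(0.400001)  # perturbed by 1e-6

    for n in (0, 10, 20, 30):
        print(f"n={n:2d}  |a-b| = {abs(a[n] - b[n]):.6f}")
    # The gap grows from 1e-6 to order 1 within a few dozen iterations,
    # which is why long-horizon point prediction fails in such systems.
    ```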

    Coordination and Computation in distributed intelligent MEMS

    Over the last few decades, research on microelectromechanical systems (MEMS) has focused on the engineering process, which has led to major advances. Future challenges will consist in adding embedded intelligence to MEMS to obtain distributed intelligent MEMS. One intrinsic characteristic of MEMS is their ability to be mass-produced. This, however, poses scalability problems, because a significant number of MEMS can be placed in a small volume. Managing this scalability requires paradigm shifts in both hardware and software. Furthermore, the need for actuated synchronization, programming, communication, and mobility management raises new challenges in both control and programming. Finally, MEMS are prone to faulty behaviors, as they are mechanical systems produced by a batch fabrication process. A new programming paradigm that can meet these challenges is therefore needed. In this article, we present CO2Dim, which stands for Coordination and Computation in Distributed Intelligent MEMS. CO2Dim is a new programming environment comprising a language based on the joint development of programming and control capabilities, a simulator, and real hardware.

    Data locality in Hadoop

    Current market trends show the need to store and process rapidly growing amounts of data, which implies demand for distributed storage and data-processing systems. Apache Hadoop is an open-source framework for managing such computing clusters in an effective, fault-tolerant way. When dealing with large volumes of data, Hadoop and its storage system HDFS (Hadoop Distributed File System) face challenges in keeping efficiency high while computing in reasonable time. A typical Hadoop deployment transfers computation to the data rather than shipping data across the cluster, since moving large quantities of data through the network could significantly delay data-processing tasks. Accordingly, while a task is running, Hadoop favours local data access and chooses blocks from the nearest nodes; the necessary blocks are moved only when they are needed by the given task. To support Hadoop's data-locality preferences, this thesis proposes adding a new capability to its distributed file system (HDFS) that enables moving data blocks on request. In-advance shipping of data makes it possible to forcibly redistribute data between nodes so that it can easily be adapted to the given processing tasks. The new functionality enables the instructed movement of data blocks within the cluster: data can be shifted either by a user running the proper HDFS shell command or programmatically by another module, such as an appropriate scheduler. To develop this functionality, a detailed analysis of the Apache Hadoop source code and its components (specifically HDFS) was conducted. This research resulted in a deep understanding of the internal architecture, which made it possible to compare the candidate approaches and develop the chosen one.
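
    The abstract does not give the concrete interface of the proposed extension, so the sketch below is purely hypothetical: it imagines a wrapper around an added HDFS admin command (the name "-relocateBlock" is invented here for illustration and does not exist in stock Hadoop) that a scheduler could call before launching a task.

    ```python
    # Hypothetical sketch of the on-request block movement described above.
    # The command "hdfs dfsadmin -relocateBlock" is invented for illustration;
    # the thesis implements the real mechanism inside HDFS itself.
    import subprocess

    def relocate_block(block_id: str, target_datanode: str) -> None:
        """Ask the (hypothetical) extended HDFS to move one block to the
        node where an upcoming task is scheduled to run."""
        subprocess.run(
            ["hdfs", "dfsadmin", "-relocateBlock",
             block_id, "-target", target_datanode],
            check=True,
        )

    # A scheduler could ship data in advance, so the task finds its input
    # block node-local instead of fetching it over the network:
    # relocate_block("blk_1073741825", "datanode-07.cluster.local")
    ```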

    Integrated scientific workflow management for the Emulab network testbed

    The main forces that shaped current network testbeds were the needs for realism and scale. Now that several testbeds support large and complex experiments, managing experimentation processes and results has become more difficult and a barrier to high-quality systems research. The popularity of network testbeds means that new tools for managing experiment workflows, addressing the ready-made base of testbed users, can have significant impact. We are now evolving Emulab, our large and popular network testbed, to support experiments that are organized around scientific workflows. This paper summarizes the opportunities in this area, the new approaches we are taking, our implementation in progress, and the challenges in adapting scientific-workflow concepts for testbed-based research. With our system, we expect to demonstrate that a network testbed with integrated scientific workflow management can be an important tool to aid research in networking and distributed systems.
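
    The paper's interface is not described in the abstract; the sketch below only illustrates the general scientific-workflow concept it builds on, and is not Emulab's actual API: an experiment as a dependency graph of steps executed in topological order, so each run's process and artifacts stay traceable.

    ```python
    # Generic workflow-DAG sketch: experiment steps with dependencies,
    # executed in an order that respects them. All step names are invented.
    from collections import deque

    steps = {
        "allocate_nodes": [],
        "install_software": ["allocate_nodes"],
        "run_trial": ["install_software"],
        "collect_logs": ["run_trial"],
        "analyze": ["collect_logs"],
    }

    def topological_order(dag):
        """Return step names so each step follows all of its dependencies."""
        indeg = {s: len(deps) for s, deps in dag.items()}
        ready = deque(s for s, d in indeg.items() if d == 0)
        order = []
        while ready:
            s = ready.popleft()
            order.append(s)
            for t, deps in dag.items():
                if s in deps:
                    indeg[t] -= 1
                    if indeg[t] == 0:
                        ready.append(t)
        return order

    print(topological_order(steps))
    # ['allocate_nodes', 'install_software', 'run_trial', 'collect_logs', 'analyze']
    ```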

    A self-integration testbed for decentralized socio-technical systems

    The Internet of Things (IoT) comes with new challenges for experimenting with, testing, and operating decentralized socio-technical systems at large scale. In such systems, autonomous agents interact locally with their users, and remotely with other agents, to make intelligent collective choices. Via these interactions they self-regulate the consumption and production of distributed (common) resources, e.g., self-management of traffic flows and power demand in Smart Cities. While such complex systems are often deployed and operated using centralized computing infrastructures, the socio-technical nature of these decentralized systems requires new value-sensitive design paradigms empowering trust, transparency, and alignment with citizens' social values, such as privacy preservation, autonomy, and fairness among citizens' choices. Currently, instruments and tools to study such systems and to guide the prototyping process from simulation, to live deployment, and ultimately to robust operation at a high Technology Readiness Level (TRL) are missing, or are not practical in this distributed socio-technical context. This paper bridges that gap by introducing a novel testbed architecture for decentralized socio-technical systems running on IoT. The new architecture is designed for seamless reusability of (i) application-independent decentralized services by an IoT application, and (ii) different IoT applications by the same decentralized service. This dual self-integration promises IoT applications that are simpler to prototype and that can interoperate with decentralized services at runtime to self-integrate more complex functionality, e.g., data analytics or distributed artificial intelligence. Such integration also provides stronger validation of IoT applications and improves resource utilization, as computational resources are shared, cutting deployment and operational costs. Pressure and crash tests during continuous operation over several weeks, with more than 80K network join and leave events by agents, 2.4M parameter changes, and 100M communicated messages, confirm the robustness and practicality of the testbed architecture. This work promises new pathways for managing the prototyping and deployment complexity of decentralized socio-technical systems running on IoT, whose complexity has so far hindered the adoption of value-sensitive self-management approaches in Smart Cities.
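
    As a toy picture of the self-regulation the abstract describes (not the testbed's actual protocol), the sketch below has autonomous agents converge on a common consumption level purely through pairwise local interactions, with no central coordinator; all values and the interaction count are invented.

    ```python
    # Illustrative sketch: decentralized self-regulation by gossip averaging.
    import random

    class Agent:
        def __init__(self, demand: float):
            self.demand = demand  # local resource demand, e.g. power in kW

        def gossip(self, peer: "Agent") -> None:
            """Pairwise averaging: both agents move to a common level."""
            avg = (self.demand + peer.demand) / 2.0
            self.demand = peer.demand = avg

    agents = [Agent(random.uniform(0.5, 5.0)) for _ in range(100)]

    for _ in range(2000):  # repeated random local interactions
        a, b = random.sample(agents, 2)
        a.gossip(b)

    levels = [ag.demand for ag in agents]
    print(f"spread after gossip: {max(levels) - min(levels):.4f} kW")
    # Demands converge toward the network-wide average without any central
    # coordinator, flattening peaks in aggregate consumption.
    ```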

    Applying Blockchain Solutions to Address Research Reproducibility and Enable Scientometric Analysis

    A worldwide reproducibility crisis around published scientific studies has gained attention from academics, journalists, and concerned citizens in recent decades. The inability to reliably reproduce experiments from scholarly research—especially in areas of high-impact science—has far-reaching social and economic implications. Fraud may seem an obvious culprit, but in our data-intensive world, vague methods, unclear standards, and even accidental mismanagement of digital resources can all be contributing factors. Reproducibility is an area of increasing focus within the scientometrics community, and looking to emerging technologies to help mitigate reproducibility challenges makes practical sense. In the Web 3.0 era, the promise of distributed computing, the maturation of cloud services, and other novel convergences point toward new ways to enable bibliometric reproducibility. Concurrently, research artifacts beyond the peer-reviewed article are growing in prominence—datasets, algorithms, and pre-prints all serve an expanding role in research dissemination and discovery. In this paper we present an overview of some new approaches—with particular focus on the benefits of blockchain-based software systems—for managing research information and improving scientometric reproducibility.
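
    A minimal sketch of the core mechanism such systems rely on, offered as an illustration rather than any specific platform: an append-only hash chain that binds each research artifact's digest to everything recorded before it, so later tampering is detectable.

    ```python
    # Illustrative hash chain for research artifacts (not a real blockchain
    # platform): each block commits to the previous one, so a silent edit
    # to any earlier artifact breaks every later link.
    import hashlib, json, time

    chain = []

    def record_artifact(name: str, content: bytes) -> dict:
        """Append a block binding an artifact's SHA-256 digest to the chain."""
        prev_hash = chain[-1]["block_hash"] if chain else "0" * 64
        block = {
            "artifact": name,
            "artifact_sha256": hashlib.sha256(content).hexdigest(),
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        block["block_hash"] = hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()
        ).hexdigest()
        chain.append(block)
        return block

    record_artifact("dataset-v1.csv", b"col_a,col_b\n1,2\n")
    record_artifact("analysis.py", b"print('results')\n")
    print(chain[-1]["prev_hash"] == chain[0]["block_hash"])  # True: linked
    ```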

    A Novel Blockchain-based Trust Model for Cloud Identity Management

    Secure and reliable management of identities has become one of the greatest challenges facing cloud computing today, mainly due to the huge number of new cloud-based applications generated by this model, which means more user accounts, passwords, and personal information to provision, monitor, and secure. Currently, identity federation is the most useful solution for overcoming these issues and simplifying the user experience, by allowing efficient authentication mechanisms and the use of identity information from data distributed across multiple domains. However, this approach creates considerable complexity in managing trust relationships for both cloud service providers and their clients. Poor management of trust in federated identity management systems brings with it many security, privacy, and interoperability issues, contributing to the reluctance of organizations to move their critical identity data to the cloud. In this paper, we aim to address these issues by introducing a novel trust and identity management model based on blockchain for cloud identity management, with security and privacy improvements.
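
    The abstract leaves the protocol to the paper itself, so the following is only a conceptual sketch of the trust idea: providers register verification keys on a shared ledger, and any relying party checks assertions against that ledger instead of maintaining pairwise trust agreements. A stdlib HMAC stands in for real digital signatures, and all names are invented.

    ```python
    # Conceptual sketch, not the paper's protocol. HMAC replaces real
    # asymmetric signatures only to keep this example stdlib-only.
    import hmac, hashlib

    ledger = {}  # simplified ledger: provider id -> registered key

    def register_provider(provider_id: str, key: bytes) -> None:
        ledger[provider_id] = key  # an on-chain registration in the real model

    def issue_assertion(key: bytes, claim: str) -> str:
        return hmac.new(key, claim.encode(), hashlib.sha256).hexdigest()

    def verify_assertion(provider_id: str, claim: str, tag: str) -> bool:
        key = ledger.get(provider_id)
        if key is None:
            return False  # unknown provider: no trust relationship exists
        expected = hmac.new(key, claim.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, tag)

    register_provider("idp.example.org", b"provider-secret")
    tag = issue_assertion(b"provider-secret", "user=alice")
    print(verify_assertion("idp.example.org", "user=alice", tag))  # True
    ```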

    Enabling Usable and Performant Trusted Execution

    A plethora of major security incidents, in which personal identifiers belonging to hundreds of millions of users were stolen, demonstrates the importance of improving the security of cloud systems. To increase security in the cloud environment, where resource sharing is the norm, we need to rethink existing approaches from the ground up. This thesis analyzes the feasibility and security of trusted execution technologies as the cornerstone of secure software systems, to better protect users' data and privacy. Trusted Execution Environments (TEEs), such as Intel SGX, have the potential to minimize the Trusted Computing Base (TCB), but they also introduce many challenges for adoption. Among these challenges are TEEs' significant impact on application performance and the non-trivial effort required to migrate legacy systems to these secure execution technologies. Other challenges include managing a trustworthy state across a distributed system and ensuring that individual machines are resilient to micro-architectural attacks. In this thesis, I first characterize the performance bottlenecks imposed by SGX and suggest optimization strategies. I then address two main adoption challenges for existing applications: managing permissions across a distributed system and scaling SGX's mechanism for proving authenticity and integrity. I then analyze the resilience of trusted execution technologies to speculative-execution micro-architectural attacks, which put cloud infrastructure at risk. This analysis revealed a devastating security flaw in Intel's processors, known as Foreshadow/L1TF. Finally, I propose a new architectural design for out-of-order processors that defeats all known speculative execution attacks.
    PhD thesis, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/155139/1/oweisse_1.pd
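
    The thesis itself works at the hardware and systems level; as a conceptual picture only, the sketch below reduces attestation to its essential check: a verifier compares a measurement (hash) of the enclave's contents against an expected value before trusting it with secrets. Real SGX attestation additionally involves CPU-signed quotes, which this simplification omits.

    ```python
    # Conceptual attestation sketch, not the SGX API: trust code only if
    # its measurement matches the value the verifier expects.
    import hashlib

    EXPECTED_MEASUREMENT = hashlib.sha256(b"enclave-binary-v1").hexdigest()

    def measure(enclave_binary: bytes) -> str:
        """Stand-in for the hash the CPU computes over enclave contents."""
        return hashlib.sha256(enclave_binary).hexdigest()

    def attest(enclave_binary: bytes) -> bool:
        return measure(enclave_binary) == EXPECTED_MEASUREMENT

    print(attest(b"enclave-binary-v1"))    # True: expected code, proceed
    print(attest(b"enclave-binary-evil"))  # False: refuse to send secrets
    ```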

    Managing Distributed Information: Implications for Energy Infrastructure Co-production

    The Internet and climate change are two forces poised both to cause and to enable changes in how we provide our energy infrastructure. The Internet has catalyzed enormous changes across many sectors by shifting the feedback and organizational structure of systems toward more decentralized users. Today's energy systems require colossal shifts toward a more sustainable future. However, energy systems face enormous socio-technical lock-in and, thus far, have been largely unaffected by these destabilizing forces. More distributed information offers not only the ability to craft new markets, but also to accelerate learning processes that respond to emerging user- or prosumer-centered design needs. These may include values and needs such as local reliability, transparency and accountability, integration into the built environment, and reduction of local pollution. The same institutions (rules, norms, and strategies) that dominated in the hierarchical infrastructure system of the twentieth century are unlikely to be a good fit if a more distributed infrastructure grows in dominance. As information is produced at more distributed points, it becomes more difficult to coordinate and manage as an interconnected system. This research examines several aspects of these historically dominant infrastructure-provisioning strategies to understand the implications of managing more distributed information. The first chapter experimentally examines information search and sharing strategies under different information-protection rules. The second and third chapters focus on strategies to model and compare the effects of distributed energy production on shared electricity grid infrastructure. Finally, the fourth chapter dives into the co-production literature and explores connections between concepts in co-production and modularity (an engineering approach to information encapsulation), using the distributed energy resource regulations for San Diego, CA. Each of these sections highlights different aspects of how information rules offer a design space for enabling a more adaptive, innovative, and sustainable energy system that can more easily react to the shocks of the twenty-first century.
    Doctoral Dissertation, Sustainability, 201
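
    As a rough, invented illustration of the kind of modelling the second and third chapters compare (not the dissertation's actual models), the snippet below computes the net load a shared feeder sees once distributed generation at each node is taken into account.

    ```python
    # Toy net-load calculation for a feeder with distributed solar.
    # All numbers are invented for illustration.
    households = [
        {"demand_kw": 1.2, "solar_kw": 0.0},
        {"demand_kw": 0.9, "solar_kw": 1.5},  # net exporter at midday
        {"demand_kw": 2.1, "solar_kw": 0.8},
    ]

    net_load = sum(h["demand_kw"] - h["solar_kw"] for h in households)
    exporters = sum(1 for h in households if h["solar_kw"] > h["demand_kw"])

    print(f"feeder net load: {net_load:.1f} kW; exporting households: {exporters}")
    # Repeating this per time step and per feeder segment is the basic
    # building block for comparing distributed-generation scenarios.
    ```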

    Distributed network and service architecture for future digital healthcare

    According to the World Health Organization (WHO), the worldwide prevalence of chronic diseases is increasing fast and new threats, such as the Covid-19 pandemic, continue to emerge, while the aging population continues to erode the dependency ratio. These challenges will put huge pressure on the efficacy and cost-efficiency of healthcare systems worldwide. Thanks to emerging technologies, such as novel medical imaging and monitoring instrumentation and the Internet of Medical Things (IoMT), more accurate and versatile patient data than ever is available for medical use. To transform these technological advances into better outcomes and improved efficiency in healthcare, seamless interoperation of the underlying key technologies needs to be ensured. Novel IoT and communication technologies, edge computing, and virtualization have a major role in this transformation. In this article, we explore the combined use of these technologies for managing the complex tasks of connecting patients, personnel, hospital systems, electronic health records, and medical instrumentation. We summarize our joint effort across four recent scientific articles that together demonstrate the potential of the edge-cloud continuum as the base approach for providing efficient and secure distributed e-health and e-welfare services. Finally, we provide an outlook on future research needs.
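
    As a hypothetical illustration of the edge-cloud continuum the article argues for (all names and thresholds invented), the sketch below shows an edge gateway that filters patient-monitor readings locally and forwards only clinically relevant events to a cloud-side service, saving bandwidth and keeping routine data near the patient.

    ```python
    # Hypothetical edge-gateway sketch: pre-filter IoMT readings at the
    # edge, forward only anomalies to the cloud. The threshold is a toy.
    from dataclasses import dataclass

    @dataclass
    class Reading:
        patient_id: str
        heart_rate: int  # beats per minute

    def is_anomalous(r: Reading) -> bool:
        """Trivial threshold standing in for real clinical logic."""
        return r.heart_rate < 40 or r.heart_rate > 140

    def edge_gateway(readings, forward):
        for r in readings:
            if is_anomalous(r):
                forward(r)  # only relevant events cross the network

    stream = [Reading("p-001", 72), Reading("p-001", 150), Reading("p-002", 38)]
    edge_gateway(stream, forward=lambda r: print("forward to cloud:", r))
    ```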