The Path to Fault- and Intrusion-Resilient Manycore Systems on a Chip
The hardware computing landscape is changing. What used to be distributed
systems can now be found on a chip with highly configurable, diverse,
specialized, and general-purpose units. Such Systems-on-a-Chip (SoCs) are used to
control today's cyber-physical systems, serving as the building blocks of critical
infrastructures. They are deployed in harsh environments and connected to
cyberspace, which exposes them to both accidental faults and targeted
cyberattacks. This is in addition to the changing fault landscape that
continued technology scaling, emerging devices and novel application scenarios
will bring. In this paper, we discuss how the very features (distributed,
parallelized, reconfigurable, heterogeneous) that cause many of the imminent
and emerging security and resilience challenges also open avenues for their
cure through SoC replication, diversity, rejuvenation, adaptation, and
hybridization. We show how to leverage these techniques at different levels
across the entire SoC hardware/software stack, calling for more research on the
topic.
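Of the techniques named above, replication is the simplest to illustrate. As a minimal sketch (our illustration, not the paper's implementation), the snippet below shows triple modular redundancy (TMR) with majority voting, the classic pattern behind replica-based fault masking: run the same task on three cores and vote, so any single faulty replica is masked. The task and fault model here are assumptions for illustration.

```python
from collections import Counter

def tmr_vote(outputs):
    """Majority vote over replica outputs; masks one faulty replica out of three."""
    value, votes = Counter(outputs).most_common(1)[0]
    if votes < 2:
        raise RuntimeError("no majority: replicas disagree beyond the fault model")
    return value

def run_replicated(task, data, replicas=3):
    """Run the same task on several cores (sequentially here) and vote."""
    return tmr_vote([task(data) for _ in range(replicas)])

# A result survives one corrupted replica:
assert tmr_vote([42, 42, 41]) == 42
```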
Simulating Charged Defects in Silicon Dangling Bond Logic Systems to Evaluate Logic Robustness
Recent research interest in emerging logic systems based on quantum dots has
been sparked by the experimental demonstration of nanometer-scale logic devices
composed of atomically sized quantum dots made of silicon dangling bonds
(SiDBs), along with the availability of SiQAD, a computer-aided design tool
designed for this technology. The latest design automation frameworks have
enabled the synthesis of SiDB circuits orders of magnitude more complex than their
hand-designed counterparts. However, current SiDB simulation engines do not
take defects into account, which is important to consider for these sizable
systems. This work proposes a formulation for incorporating fixed-charge
simulation into established ground state models to cover an important class of
defects that has a non-negligible effect on nearby SiDBs at the relevant length
scales. The formulation is validated by implementing it in SiQAD's
simulation engine and computationally reproducing experiments on multiple
defect types, revealing a high level of accuracy. The new capability is applied
towards studying the tolerance of several established logic gates against the
introduction of a single nearby defect to establish the corresponding minimum
required clearance. These findings are compared against existing metrics to
form a foundation for logic robustness studies.
Comment: 7 pages, 5 figures, 2 tables
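To make the formulation concrete, here is a hedged sketch of how a fixed point charge can enter an SiDB ground-state model. SiQAD-style engines score a charge configuration with screened Coulomb pair energies; a defect of fixed charge then contributes one additional single-body term per SiDB. The Thomas-Fermi screening form and the symbols below are common modeling conventions, not necessarily the paper's exact notation.

```latex
% Screened Coulomb interaction between charges at distance r
% (relative permittivity \epsilon_r, screening length \lambda_{TF}):
%   V(r) = \frac{e^2}{4 \pi \epsilon_0 \epsilon_r} \cdot \frac{e^{-r/\lambda_{TF}}}{r}
% Ground-state energy of SiDB charge configuration n = (n_1, \dots, n_N),
% n_i \in \{-1, 0, +1\}, local potentials \mu_i, plus a fixed defect of
% charge q_d (in units of e) at distance r_{id} from SiDB i:
E(\mathbf{n}) = \sum_{i<j} V(r_{ij}) \, n_i n_j
              + \sum_i q_d \, V(r_{id}) \, n_i
              - \sum_i \mu_i \, n_i
```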
Artificial Intelligence for the Management of Servitization 5.0
Purpose: The sale of physical products has been manufacturing companies' main revenue source. A trend known as servitization shifts revenue toward services. With the convergence of servitization and digitization, many manufacturing organizations are undergoing digital servitization. In parallel, the digitization of industry is pushing new technological solutions to the top of the business agenda. Artificial intelligence can play a substantial role in this digital business transformation. This evolution is referred to in this paper as Servitization 5.0 and requires substantial changes.
Aim: This paper explores the applications of artificial intelligence to Servitization 5.0 strategies and its role, particularly in moving organizations toward Everything as a Service. The paper underlines the contribution that A.I. can make in moving to a human-centric, sustainable, and resilient servitization.
Method used: The basis of the work is a literature review supported by information collected from business case studies by the authors. A follow-up study defined the models. The validity of the model was tested by collecting the opinions of ten experts who currently work within servitization contracts.
Findings: For manufacturing companies, selling services requires completely different business models. In this situation, it is essential to consider advanced solutions to support these new business models; Artificial Intelligence can make this possible. On the inter-organizational side, empirical evidence also points to the support of A.I. in collaborating with ecosystems to support sustainability and resilience, as requested by Industry 5.0.
Original value: Regarding theoretical implications, this paper contributes to interdisciplinary research in corporate marketing and operational servitization. It is part of the growing literature that deals with the applications of artificial-intelligence-based solutions in different areas of organizational management. The approach is interesting because it highlights that digital solutions require an integrated business model approach: it is necessary to implement the technological platform with appropriate processes, people, and partners (the four Ps). The outcome of this study can be generalized for industries in high-value manufacturing.
Implications: For management, this paper defines how to organize the structure and support for Servitization 5.0 and how to work with the external business environment to support sustainability.
Operationalising learning from rare events: framework for middle humanitarian operations managers
The purpose of this paper is to investigate learning from rare events and the knowledge management process involved, which presents a significant challenge to many organizations. This is primarily attributed to the inability to interpret these events in a systematic and “rich” manner, which this paper seeks to address. We start by summarizing the relevant literature on humanitarian operations management (HOM), outlining the evolution of the socio-technical disaster lifecycle and its relationship with humanitarian operations, using a supply chain resilience theoretical lens. We then outline theories of organizational learning (and unlearning) from disasters and the impact on humanitarian operations. Subsequently, we theorize the role of middle managers in humanitarian operations, which is the main focus of our paper. The main methodology incorporates a hybrid of two techniques for root cause analysis, applied to two related case studies. The cases were specifically selected because, despite occurring twenty years apart, there are many similarities in the chain of causation and supporting factors, potentially suggesting that adequate learning from experience and failures is not occurring. This provides a novel learning experience within the HOM paradigm. Hence, the proposed approach is based on a multilevel structure that facilitates the operationalization of learning from rare events in humanitarian operations. The results show that we are able to provide an environment for multiple interpretations and effective learning, with emphasis on middle managers within a humanitarian operations and crisis/disaster management context.
From Microbial Communities to Distributed Computing Systems
A distributed biological system can be defined as a system whose components are
located in different subpopulations, which communicate and coordinate their actions
through interpopulation messages and interactions. We see that distributed systems
are pervasive in nature, performing computation across all scales, from microbial
communities to a flock of birds. We often observe that information processing within
communities exhibits a complexity far greater than any single organism. Synthetic
biology is an area of research which aims to design and build synthetic biological
machines from biological parts to perform a defined function, in a manner similar
to the engineering disciplines. However, the field has reached a bottleneck in the
complexity of the genetic networks that we can implement using monocultures, facing
constraints from metabolic burden and genetic interference. This makes building
distributed biological systems an attractive prospect for synthetic biology that would
alleviate these constraints and allow us to expand the applications of our systems
into areas including complex biosensing and diagnostic tools, bioprocess control and
the monitoring of industrial processes. In this review we will discuss the fundamental
limitations we face when engineering functionality with a monoculture, and the key
areas where distributed systems can provide an advantage. We cite evidence from
natural systems that supports arguments in favor of distributed systems to overcome
the limitations of monocultures. Following this, we provide a comprehensive overview
of the synthetic communities that have been built to date, and the components that
have been used. The potential computational capabilities of communities are discussed,
along with some of the applications that these will be useful for. We discuss some of
the challenges with building co-cultures, including the problem of competitive exclusion
and maintenance of desired community composition. Finally, we assess computational
frameworks currently available to aid in the design of microbial communities and identify
areas where we lack the necessary tools.
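The competitive exclusion problem mentioned above can be made concrete with the classic two-species Lotka-Volterra competition model; the sketch below is our illustration with arbitrary parameters, not a tool from the review. When interspecific competition is stronger than intraspecific competition, one strain eventually drives the other extinct unless the co-culture is actively rebalanced.

```python
def lv_competition(x, y, r1=1.0, r2=0.9, a12=1.4, a21=1.3,
                   dt=0.01, steps=20000):
    """Forward-Euler integration of two-species Lotka-Volterra competition.

    dx/dt = r1 * x * (1 - x - a12 * y)
    dy/dt = r2 * y * (1 - y - a21 * x)
    (carrying capacities normalized to 1). With a12, a21 > 1 the
    coexistence equilibrium is an unstable saddle, so initial
    conditions decide which strain excludes the other.
    """
    for _ in range(steps):
        dx = r1 * x * (1 - x - a12 * y)
        dy = r2 * y * (1 - y - a21 * x)
        x, y = x + dt * dx, y + dt * dy
    return x, y

# Nearly equal inocula still end in exclusion: one abundance approaches
# its carrying capacity (~1) while the other collapses toward 0.
print(lv_competition(0.50, 0.48))
```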
The Complexity of Infinite-Horizon General-Sum Stochastic Games
We study the complexity of computing stationary Nash equilibria (NE) in n-player infinite-horizon general-sum stochastic games. We focus on the problem of computing NE in such stochastic games when each player is restricted to choosing a stationary policy and rewards are discounted. First, we prove that computing such NE is in PPAD (in addition to clearly being PPAD-hard). Second, we consider turn-based specializations of such games, where at each state there is at most a single player that can take actions, and show that these (seemingly simpler) games remain PPAD-hard. Third, we show that under further structural assumptions on the rewards, computing NE in such turn-based games is possible in polynomial time. Towards achieving these results, we establish structural facts about stochastic games of broader utility, including monotonicity of utilities under single-state single-action changes and reductions to settings where each player controls a single state.
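For readers outside the area, a brief statement of the standard objects involved (our notation, not necessarily the paper's): each player i fixes a stationary policy \pi_i mapping states to action distributions, utilities are expected discounted reward sums, and a stationary NE is a profile from which no player gains by a unilateral stationary deviation.

```latex
% Discounted utility of player i under stationary profile \pi = (\pi_1, \dots, \pi_n),
% discount factor \gamma \in [0,1); s_t, a_t are the state and joint action at
% time t, distributed according to \pi and the game's transition kernel:
u_i(\pi) = \mathbb{E}\left[ \sum_{t=0}^{\infty} \gamma^t \, r_i(s_t, a_t) \right]
% Stationary Nash equilibrium: for every player i and every stationary deviation \pi_i',
u_i(\pi_i, \pi_{-i}) \geq u_i(\pi_i', \pi_{-i})
```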
Design of Hardware with Quantifiable Security against Reverse Engineering
Semiconductors are a $412 billion industry, and integrated circuits take on important roles in human life, from everyday use in smart devices to critical applications like healthcare and aviation. Protecting today's hardware systems from attackers is a major concern, considering the budget spent on designing these chips and the sensitive information they may contain. In particular, after fabrication, the chip can be subject to a malicious reverse engineer who tries to invasively figure out the function of the chip or other sensitive data. Subsequent to an attack, a system can be subject to cloning, counterfeiting, or IP theft. This dissertation addresses some issues concerning the security of hardware systems in such scenarios.
First, the issue of privacy risks from approximate computing is investigated in Chapter 2. Simulation experiments show that the erroneous outputs produced on each chip instance can reveal the identity of the chip that performed the computation, which jeopardizes user privacy.
The next two chapters deal with camouflaging, a technique to prevent reverse engineering from extracting circuit information from the layout. Chapter 3 provides a design automation method to protect camouflaged circuits against an adversary with prior knowledge about the circuit's viable functions. Chapter 4 provides a method to reverse engineer camouflaged circuits. The proposed reverse engineering formulation uses Boolean Satisfiability (SAT) solving in a way that incorporates laser fault injection and laser voltage probing capabilities to figure out the function of an aggressively camouflaged circuit with unknown gate functions and connections.
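For orientation, Chapter 4's SAT-based formulation follows the general shape of oracle-guided attacks on obfuscated circuits. The loop below is a rough structural sketch only (it omits the laser fault injection and voltage probing aspects, and solve_miter, add_constraint, and the oracle are hypothetical helpers): repeatedly find an input on which two still-viable completions of the camouflaged netlist disagree, query the real chip, and constrain the candidate space until only functionally equivalent completions remain.

```python
def oracle_guided_deobfuscation(netlist, oracle, solve_miter, add_constraint):
    """Prune candidate gate assignments of a camouflaged netlist.

    netlist: circuit model with unknown (camouflaged) gate identities
    oracle: callable returning the real chip's output for an input pattern
    solve_miter: hypothetical SAT query; returns an input pattern on which
        two still-viable gate assignments differ, or None if none exists
    add_constraint: hypothetical; requires every viable assignment to
        reproduce the observed (pattern, response) pair
    """
    while True:
        pattern = solve_miter(netlist)       # distinguishing input pattern
        if pattern is None:
            break                            # remaining candidates all agree
        response = oracle(pattern)           # one query to the working chip
        add_constraint(netlist, pattern, response)
    return netlist                           # any surviving assignment matches the chip
```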
Chapter 5 addresses the challenge of secure key storage in hardware by proposing a new key storage method that applies threshold-defined behavior of memory cells to store secret information in a way that achieves a high degree of protection against invasive reverse engineering. This approach requires foundry support to encode the secrets as threshold voltage offsets in transistors. In Chapter 6, a secret key storage approach is introduced that does not rely on a trusted foundry. This approach only relies on the foundry to fabricate the hardware infrastructure for key generation but not to encode the secret key. The key is programmed by the IP integrator or the user after fabrication via directed accelerated aging of transistors. Additionally, this chapter presents the design of a working hardware prototype on PCB that demonstrates this scheme.
Finally, Chapter 7 concludes the dissertation and summarizes possible future research.