Making sense of project value from a value-co-creation perspective: an exploratory conceptual framework
This paper proposes a conceptual framework to make sense of how project value is created in projects. We study the extant project management value creation literature through a value co-creation lens based on service-dominant (S-D) logic. We explore how project value is proposed, exchanged, and then realized over a project life-cycle, leading to the identification of an exploratory "value co-creation life-cycle" framework. This framework shows that value as a whole transcends the limitations of the measurable product value normally used to define project value. In particular, it shows how operant resources (or actors), typically referred to as stakeholders, within the project management system exchange services and integrate resources in order to co-create value. The exploratory framework, in turn, would enable future investigation of real projects with a view to unpacking the complex dynamic behavior of project value creation.
Lattice gauge theories simulations in the quantum information era
The many-body problem is ubiquitous in the theoretical description of physical phenomena, ranging from the behavior of elementary particles to the physics of electrons in solids. Most of our understanding of many-body systems comes from analyzing the symmetry properties of Hamiltonians and states: the most striking examples are gauge theories such as quantum electrodynamics, where a local symmetry strongly constrains the microscopic dynamics. The physics of such gauge theories is relevant for the understanding of a diverse set of systems, including frustrated quantum magnets and the collective dynamics of elementary particles within the standard model. In the last few years, several approaches have been put forward to tackle the complex dynamics of gauge theories using quantum information concepts. In particular, quantum simulation platforms have been proposed for the realization of synthetic gauge theories, and novel classical simulation algorithms based on quantum information concepts have been formulated. In this review we present an introduction to these approaches, illustrate the basic concepts, highlight the connections between apparently very different fields, and report recent developments in this new, thriving field of research.
Comment: Pedagogical review article. Originally submitted to Contemporary Physics; the final version will appear soon in the on-line version of the journal. 34 pages
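The local-symmetry constraint the abstract alludes to can be made concrete: in a lattice formulation of quantum electrodynamics, physical states are those that satisfy Gauss's law at every site. A schematic form (notation illustrative, not taken from the review itself) is:

```latex
% Gauss's law on the lattice: at each site x, the lattice divergence of the
% electric field E must match the local charge q_x. Physical states are
% those annihilated by the generator G_x of the local U(1) symmetry.
G_x \,=\, \sum_{\mu} \left( E_{x,\mu} - E_{x-\hat{\mu},\mu} \right) \;-\; q_x ,
\qquad
G_x \,\lvert \psi_{\mathrm{phys}} \rangle \,=\, 0 \quad \forall x .
```

It is this local constraint that both quantum simulation platforms and quantum-information-inspired classical algorithms must encode or enforce.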
Going one step further: towards cognitively enhanced problem-solving teaming agents
Operating current advanced production systems, including Cyber-Physical Systems, often requires profound programming skills and configuration knowledge, creating a disconnect between human cognition and system operations. To address this, we suggest developing cognitive algorithms that can simulate and anticipate teaming partners' cognitive processes, enhancing and smoothing collaboration in problem-solving processes. Our proposed solution entails creating a cognitive system that minimizes human cognitive load and stress by developing models reflecting humans individual problem-solving capabilities and potential cognitive states. Further, we aim to devise algorithms that simulate individual decision processes and virtual bargaining procedures that anticipate actions, adjusting the system’s behavior towards efficient goal-oriented outcomes. Future steps include the development of benchmark sets tailored for specific use cases and human-system interactions. We plan to refine and test algorithms for detecting and inferring cognitive states of human partners. This process requires incorporating theoretical approaches and adapting existing algorithms to simulate and predict human cognitive processes of problem-solving with regards to cognitive states. The objective is to develop cognitive and computational models that enable production systems to become equal team members alongside humans in diverse scenarios, paving the way for more efficient, effective goal-oriented solutions
Approximate Waveforms for Extreme-Mass-Ratio Inspirals in Modified Gravity Spacetimes
Extreme-mass-ratio inspirals, in which a stellar-mass compact object spirals into a supermassive black hole, are prime candidates for detection with space-borne millihertz gravitational wave detectors, such as the Laser Interferometer Space Antenna. The gravitational waves generated during such inspirals encode information about the background in which the small object is moving, providing a tracer of the spacetime geometry and a probe of strong-field physics. In this paper, we construct approximate, "analytic-kludge" waveforms for such inspirals with parameterized post-Einsteinian corrections that allow for generic, model-independent deformations of the supermassive black hole background away from the Kerr metric. These approximate waveforms include all of the qualitative features of true waveforms for generic inspirals, including orbital eccentricity and relativistic precession. The deformations of the Kerr metric are modeled using a recently proposed, modified gravity bumpy metric, which parametrically deforms the Kerr spacetime while ensuring that three approximate constants of the motion remain for geodesic orbits: a conserved energy, azimuthal angular momentum, and Carter constant. The deformations represent modified gravity effects and have been analytically mapped to several modified gravity black hole solutions in four dimensions. In the analytic-kludge waveforms, the conservative motion is modeled by a post-Newtonian expansion of the geodesic equations in the deformed spacetimes, which in turn induces modifications to the radiation-reaction force. These analytic-kludge waveforms serve as a first step toward complete and model-independent tests of General Relativity with extreme-mass-ratio inspirals.
Comment: v1: 28 pages, no figures; v2: minor changes for consistency with accepted version, 2 figures added showing sample waveforms; accepted by Phys. Rev.
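The parameterized post-Einsteinian (ppE) strategy the abstract builds on is usually introduced, for quasi-circular inspirals in the frequency domain, as a GR waveform dressed with parameterized amplitude and phase deformations. A schematic leading-order form (symbols illustrative; the analytic-kludge waveforms of this paper generalize the idea to eccentric, precessing orbits) is:

```latex
% GR waveform multiplied by parameterized amplitude and phase corrections;
% (\alpha, a) and (\beta, b) are the ppE deformation parameters, with the
% GR limit recovered for \alpha = \beta = 0.
\tilde{h}(f) \,=\, \tilde{h}_{\mathrm{GR}}(f)\,
\bigl[\, 1 + \alpha\, u^{a} \,\bigr]\, e^{\,i \beta\, u^{b}},
\qquad u \equiv \left( \pi \mathcal{M} f \right)^{1/3},
```

where \(\mathcal{M}\) is the chirp mass. Constraining \((\alpha, \beta)\) from data then yields model-independent bounds on departures from General Relativity.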
Formal Verification of the Adversarial Robustness Property of Deep Neural Networks Through Dimension Reduction Heuristics, Refutation-based Abstraction, and Partitioning
Neural networks are tools that are often used to perform functions such as object recognition in images, speech-to-text, and general data classification. Because neural networks have been successful at approximating these functions that are difficult to write explicitly, they are seeing increased usage in fields such as autonomous driving, airplane collision avoidance systems, and other safety-critical applications. Due to the risks involved with safety-critical systems, it is important to provide guarantees about the network's performance under certain conditions. As an example, it is critically important that self-driving cars with neural-network-based vision systems correctly identify pedestrians 100% of the time. The ability to identify pedestrians correctly is considered a safety property of the neural network, and this property must be rigorously verified to produce a guarantee of safe functionality. This thesis focuses on a safety property of neural networks called local adversarial robustness. Often, small changes or noise on the input of the network can cause it to behave unexpectedly. Water droplets on the lens of a camera that feeds images to a network for classification may render the classification output useless. When a network is locally robust to adversarial inputs, it means that small changes to a known input do not cause the network to behave erratically. Due to some characteristics of neural networks, safety properties like local adversarial robustness are extremely difficult to verify. For example, changing the color of the pedestrian's shirt to blue should not affect the network's classification. What about if the shirt is red? What about all the other colors? What about all the possible color combinations of shirts and pants? The complexity of verifying these safety properties grows very quickly.
This thesis proposes three novel methods for tackling some of the challenges related to verifying safety properties of neural networks. The first is a method to strategically select which dimensions of the input will be searched first. These dimensions are chosen by approximating how much each dimension contributes to the classification output. This helps to manage the issue of high dimensionality. The proposed method is compared with a state-of-the-art technique and shows improvements in efficiency and quality. The second contribution of this work is an abstraction technique that models regions in the input space by a set of potential adversarial inputs. This set of potential adversarial inputs can be generated and verified much more quickly than the entire region. If an adversarial input is found in this set, then more expensive verification techniques can be skipped because the result is already known. This thesis introduces the randomized fast gradient sign method (RFGSM), which models regions better than its predecessor through increased output variance while maintaining its high success rate of adversarial input generation. The final contribution of this work is a framework that adds these previously mentioned optimizations to existing verification techniques. The framework also splits the region under test into smaller regions that can be verified simultaneously. The framework focuses on finding as many adversarial inputs as possible so that the network can be retrained to be more robust to them.
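The randomized variant of the fast gradient sign method described above can be sketched in a few lines: each candidate starts from a random point inside the allowed perturbation ball (which spreads the candidates out, increasing output variance across the set), then takes one signed-gradient step and is clipped back into the ball. This is a minimal sketch under assumed names; `rfgsm_candidates` and `grad_fn` are illustrative, not the thesis's actual API:

```python
import numpy as np

def rfgsm_candidates(x, grad_fn, eps=0.1, alpha=0.05, n=8, seed=0):
    """Generate n candidate adversarial inputs inside the eps-ball around x.

    Each candidate begins at a random offset inside the ball (randomized
    start), takes one signed-gradient step of size alpha in the direction
    that increases the loss, and is clipped back into the eps-ball.
    """
    rng = np.random.default_rng(seed)
    candidates = []
    for _ in range(n):
        x0 = x + rng.uniform(-(eps - alpha), eps - alpha, size=x.shape)
        x_adv = x0 + alpha * np.sign(grad_fn(x0))
        candidates.append(np.clip(x_adv, x - eps, x + eps))
    return candidates

# Toy example: loss(x) = w . x, so the gradient is the constant vector w.
w = np.array([1.0, -2.0, 0.5])
cands = rfgsm_candidates(np.zeros(3), lambda x: w, eps=0.1, alpha=0.05)
```

If any candidate flips the classifier's output, the region is known to contain an adversarial input and the expensive complete verifier can be skipped for that region.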
Evaluation of the selectivity and sensitivity of isoform- and mutation-specific RAS antibodies
Researchers rely largely on antibodies to measure the abundance, activity, and localization of a protein, information that provides critical insight into both normal and pathological cellular functions. However, antibodies are not always reliable or universally valid for the methods in which they are used; in particular, the reliability of commercial antibodies against RAS is highly variable. Waters et al. rigorously assessed 22 commercially available RAS antibodies for their ability to detect the distinct RAS isoforms in various cell types and for their use in specific analytical methods. Their findings show how reliably one can interpret the data acquired from each reagent.