A taxonomy of asymmetric requirements aspects
The early aspects community has received increasing attention among researchers and practitioners and has developed a set of meaningful terminology and concepts in recent years, including the notion of requirements aspects. Aspects at the requirements level represent stakeholder concerns that crosscut the problem domain, with the potential for a broad impact on questions of scoping, prioritization, and architectural design. Although many existing requirements engineering approaches advocate integral support for early aspects analysis, one challenge is that the notion of a requirements aspect is not yet well enough established to serve the community effectively. Instead of defining the term once and for all in a typically arduous and unproductive conceptual unification stage, we present a preliminary taxonomy, based on a literature survey, that shows the different features of an asymmetric requirements aspect. Existing approaches that handle requirements aspects are compared and classified according to the proposed taxonomy. In addition, we study crosscutting security requirements to exemplify the taxonomy's use, substantiate its value, and explore its future directions.
Toward a More Coherent Doctrine of Trademark Genericism and Functionality: Focusing on Fair Competition
The doctrines of trademark genericism and functionality serve similar functions under the Lanham Act and the common law of unfair competition. Genericism, in the context of word marks, and functionality, for trade dress, bar trademark registration under the Lanham Act and, both under the Act and at common law, render a trademark unprotectable and invalid. In the word mark context, genericism stands for the proposition that certain parts of vocabulary cannot be cordoned off as trademarks; all competitors must be able to use words that consumers understand to identify the goods or services that they are selling. Functionality likewise demands that certain aspects of product design cannot be legally protected as trade dress, as to do so would potentially limit competitors’ ability to make products that work as well at the same price. The core concern, for both doctrines, is or should be the preservation of free and fair market competition. Part I of this Article explains the theoretical parallels between the doctrines of genericism and functionality, and examines the history and purpose of these doctrines. A finding that a word is or has become generic, or that a form of trade dress is functional, negates a mark’s registration and protection under the Lanham Act, as well as under state and common law. Even incontestable marks can be declared invalid, regardless of the passage of time, under either doctrine. The types of trademarks typically at issue when making genericism and functionality determinations—word marks that are, at best, descriptive, or product design functioning as trade dress—are correctly described as weak. The genericism and functionality doctrines therefore play a critical role in marking the boundaries of trademark law. To properly draw those lines, decision makers need to correctly define and understand the theory underlying both doctrines. 
In Part II, this Article argues that both genericism and functionality, in their practical interpretation and purpose, should more clearly reflect the core principle of protecting fair competition. In particular, the concept of viable, competitive alternatives—either in the form of words or alternative designs—should play an enhanced role in determining whether an erstwhile trademark is generic or functional. The various tests for genericism and functionality currently employed by the courts often attempt to draw formalistic distinctions among categories of words or product features that may confound business owners (and their lawyers) and divert the focus of the courts’ inquiry away from the core value at the heart of both doctrines: preserving fair competition.
Design Space Exploration in Cyber-Physical Systems
Cyber-physical systems (CPS) integrate a variety of engineering areas, such as control, mechanical, and computer engineering, in a holistic design effort. While interdependencies between the different disciplines are key attributes of CPS design science, little is known about the impact of design decisions in the cyber part on the overall system qualities. To investigate these interdependencies, this paper proposes a simulation-based Design Space Exploration (DSE) framework that considers detailed cyber-system parameters such as cache size, bus width, and voltage levels in addition to the physical and control parameters of the CPS. We propose an exploration algorithm that traverses the parameter configurations of the cyber-physical sub-systems in order to approximate the Pareto-optimal design points with regard to the trade-offs among design objectives such as energy consumption and control stability. We apply the proposed framework to a networked control system for an inverted-pendulum application. The presented holistic evaluation of the identified Pareto points reveals the presence of non-trivial trade-offs imposed by the control, physical, and detailed cyber parameters. For instance, the identified energy- and control-optimal design points comprise configurations with a wide range of CPU speeds, sample times, and cache configurations, following non-trivial zig-zag patterns. The proposed framework can identify and manage those trade-offs and is, as a result, an imperative first step toward automating the search for superior CPS configurations.
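The core of such a DSE loop is filtering simulated configurations down to the non-dominated (Pareto) set over the two objectives. The sketch below illustrates that step only; the parameter names (cache size, bus width, voltage) and the toy cost model are illustrative placeholders, not the paper's simulation.

```python
# Illustrative non-dominated filtering step of a simulation-based DSE loop.
# The cost model below is a stand-in for the CPS co-simulation, not the
# paper's actual evaluation.
import itertools

def simulate(cfg):
    """Toy stand-in returning (energy, control_error) for a configuration."""
    cache_kb, bus_width, voltage = cfg
    energy = voltage**2 * (cache_kb / 8 + bus_width / 16)
    control_error = 1.0 / (cache_kb * bus_width) + 0.1 / voltage
    return energy, control_error

def pareto_front(points):
    """Keep points not weakly dominated in both objectives (minimisation)."""
    return [p for p in points
            if not any(q != p and q[0] <= p[0] and q[1] <= p[1]
                       for q in points)]

# Enumerate a small grid of cyber parameters and extract the Pareto set.
configs = list(itertools.product([8, 16, 32], [16, 32], [0.9, 1.1, 1.3]))
results = {cfg: simulate(cfg) for cfg in configs}
front = pareto_front(list(results.values()))
```

In a realistic framework the exhaustive grid would be replaced by the paper's guided exploration, but the dominance check stays the same.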
Restructuring the rotor analysis program C-60
The continuing evolution of the rotary wing industry demands increasing analytical capabilities. To keep up with this demand, software must be structured to accommodate change. The approach discussed for meeting this demand is to restructure an existing analysis. The motivational factors, basic principles, application techniques, and practical lessons from experience with this restructuring effort are reviewed
Matching Code and Law: Achieving Algorithmic Fairness with Optimal Transport
Increasingly, discrimination by algorithms is perceived as a societal and legal problem. In response, a number of criteria for implementing algorithmic fairness in machine learning have been developed in the literature. This paper proposes the Continuous Fairness Algorithm (CFA), which enables a continuous interpolation between different fairness definitions. More specifically, we make three main contributions to the existing literature. First, our approach allows the decision maker to continuously vary between specific concepts of individual and group fairness. As a consequence, the algorithm enables the decision maker to adopt intermediate "worldviews" on the degree of discrimination encoded in algorithmic processes, adding nuance to the extreme cases of "we're all equal" (WAE) and "what you see is what you get" (WYSIWYG) proposed so far in the literature. Second, we use optimal transport theory, and specifically the concept of the barycenter, to maximize decision-maker utility under the chosen fairness constraints. Third, the algorithm is able to handle cases of intersectionality, i.e., multi-dimensional discrimination of certain groups on grounds of several criteria. We discuss three main examples (credit applications, college admissions, insurance contracts) and map out the legal and policy implications of our approach. The explicit formalization of the trade-off between individual and group fairness allows this post-processing approach to be tailored to different situational contexts in which one or the other fairness criterion may take precedence. Finally, we evaluate our model experimentally.
Modelling of cryogenic cooling system design concepts for superconducting aircraft propulsion
Distributed propulsion concepts are promising in terms of improved fuel burn, better aerodynamic performance, and greater control. Superconducting networks are being considered for their superior power density and efficiency. This study discusses the design of the cryogenic cooling systems that are essential for normal operation of superconducting materials. The research project identified six key requirements, such as maintaining temperature and low weight, and two critical components that dramatically affect mass: the heat exchanger and the compressors. Qualitatively, the most viable cryocooling concept was found to be the reverse-Brayton cycle (RBC), for its superior reliability and flexibility. Single- and two-stage reverse-Brayton systems were modelled, showing that two-stage concepts are preferable in terms of specific mass and future development in all cases except when using liquid hydrogen as the heat sink. Finally, the component-level design space was considered, with the most critical components affecting mass identified as the reverse-Brayton compressor and turbine.
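A quick thermodynamic bound shows why cryocooler input power (and hence compressor and heat-exchanger mass) dominates such designs: even an ideal Carnot refrigerator lifting heat at superconducting temperatures needs several watts of input per watt lifted, and real reverse-Brayton machines achieve only a fraction of the Carnot COP. The temperatures below are illustrative, not values from the study.

```python
# Carnot coefficient of performance (COP) for refrigeration:
# COP_max = T_cold / (T_hot - T_cold). Real cryocoolers reach only a
# fraction of this bound, so input power per watt lifted is far higher.
def carnot_cop(t_cold_k, t_hot_k):
    """Ideal (maximum) refrigeration COP between two temperatures in kelvin."""
    return t_cold_k / (t_hot_k - t_cold_k)

cop_65k = carnot_cop(65.0, 300.0)   # ~0.277: >3.6 W input per W lifted, ideally
cop_20k = carnot_cop(20.0, 300.0)   # ~0.071: >14 W input per W lifted, ideally
```

The steep drop in COP at lower sink temperatures is one reason a liquid-hydrogen heat sink changes the single- versus two-stage trade so markedly.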
Selfishness versus functional cooperation in a stochastic protocell model
How can one design an "evolvable" artificial system capable of increasing in complexity? Although Darwin's theory of evolution by natural selection obviously offers a firm foundation, little hope of success seems to come from the explanatory adequacy of modern evolutionary theory, which does a good job of explaining what has already happened but remains practically helpless at predicting what will occur. However, the study of the major transitions in evolution clearly suggests that increases in complexity have occurred on those occasions when the conflicting interests of competing individuals were partly subjugated. This immediately raises the issue of "levels of selection" in evolutionary biology and the idea that multi-level selection scenarios are required for complexity to emerge. After analyzing the dynamical behaviour of competing replicators within compartments, we show here that a proliferation of differentiated catalysts and/or an improvement in the catalytic efficiency of ribozymes can potentially evolve in properly designed artificial cells. Experimental evolution in these systems will likely stand as a beautiful example of artificial adaptive systems and will provide new insights into possible evolutionary paths to metabolic complexity.
On cost-effective reuse of components in the design of complex reconfigurable systems
Design strategies that benefit from the reuse of system components can reduce costs while maintaining or increasing dependability—we use the term dependability to tie together reliability and availability. D3H2 (aDaptive Dependable Design for systems with Homogeneous and Heterogeneous redundancies) is a methodology that supports the design of complex systems with a focus on reconfiguration and component reuse. D3H2 systematizes the identification of heterogeneous redundancies and optimizes the design of fault detection and reconfiguration mechanisms by enabling the analysis of design alternatives with respect to dependability and cost. In this paper, we extend D3H2 for application to repairable systems. The method is extended with analysis capabilities allowing dependability assessment of complex reconfigurable systems; analysed scenarios include time-dependencies between failure events and the corresponding reconfiguration actions. Via application to a realistic railway case study, we demonstrate how D3H2 can support decisions about fault detection and reconfiguration that seek to improve dependability while reducing costs.
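The intuition behind reusing a heterogeneous redundancy can be sketched with a back-of-the-envelope availability model: a repairable component has steady-state availability A = MTTF / (MTTF + MTTR), and a subsystem with a reused standby is up when the primary is up, or when the primary is down but the failure was detected (with coverage c) and the standby is up. The formula and all numbers below are illustrative assumptions, not D3H2's analysis or the case-study data.

```python
# Back-of-the-envelope model of availability gain from a heterogeneous
# standby with imperfect fault-detection/reconfiguration coverage.
# Illustrative sketch only; not the D3H2 methodology itself.
def availability(mttf_h, mttr_h):
    """Steady-state availability of a repairable component (hours)."""
    return mttf_h / (mttf_h + mttr_h)

def with_heterogeneous_standby(a_primary, a_standby, coverage):
    """Up if primary is up, or failure detected (coverage) and standby up."""
    return a_primary + (1 - a_primary) * coverage * a_standby

a_p = availability(mttf_h=5_000, mttr_h=8)   # dedicated component
a_s = availability(mttf_h=2_000, mttr_h=8)   # reused, less reliable component
a_sys = with_heterogeneous_standby(a_p, a_s, coverage=0.95)
```

Even a standby less reliable than the primary raises subsystem availability markedly, which is why reusing already-present components is attractive; the coverage term is where the cost and quality of the fault detection and reconfiguration mechanisms enter.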