High-capacity preconscious processing in concurrent groupings of colored dots.
Grouping is a perceptual process in which a subset of stimulus components (a group) is selected for a subsequent, typically implicit, perceptual computation. Grouping is a critical precursor to segmenting objects from the background and ultimately to object recognition. Here, we study grouping by color. We present subjects with 300-ms exposures of 12 dots of the same, unknown color interspersed among 14 dots of seven different colors. To indicate grouping, subjects point-click the remembered centroid ("center of gravity") of the set of homogeneous dots, of heterogeneous dots, or of all dots. Subjects accurately judge all of these centroids. Furthermore, after a single stimulus exposure, subjects can judge both the heterogeneous and homogeneous centroids; that is, subjects simultaneously group by similarity and by dissimilarity. The centroid paradigm reveals the relative weight of each dot among targets and distractors in the underlying grouping process, offering a more detailed, quantitative description of grouping than was previously possible. A change-detection experiment reveals that conscious memory contains fewer than two dots and their locations, whereas an ideal detector would have to perfectly process at least 15 of the 26 dots to match the subjects' centroid judgments, indicating an extraordinary capacity for preconscious grouping. A different color set yielded identical results. Grouping theories that rely on predefined feature maps would fail to explain these results. Rather, the results indicate that preconscious grouping is automatic, flexible, and rapid, and a far more complex process than previously believed.
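The centroid judgments described above are straightforward to compute. The sketch below (a minimal illustration: the dot counts follow the paradigm, but the 100×100 display coordinates and variable names are assumptions) contrasts the homogeneous, heterogeneous, and overall centroids of a simulated stimulus:

```python
import random

def centroid(points):
    """Unweighted centroid ('center of gravity') of a set of (x, y) dots."""
    n = len(points)
    return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)

# Hypothetical stimulus mirroring the paradigm: 12 target dots of one color
# interspersed among 14 distractor dots of seven other colors (2 each).
random.seed(0)
targets = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(12)]
distractors = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(14)]

homog = centroid(targets)                  # grouping by similarity
heterog = centroid(distractors)            # grouping by dissimilarity
overall = centroid(targets + distractors)  # grouping all dots
```

By construction, the overall centroid is the count-weighted average of the two subgroup centroids, which is the quantity an ideal detector would have to approximate from a single 300-ms exposure.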
Process Measurement in Business Process Management: Theoretical Framework and Analysis of Several Aspects
Process measurement deals with the quantification of business process models using process model metrics. This book presents a theoretical framework for the prediction of external process model attributes (such as error-proneness and understandability) based on internal (structural) attributes. The properties of the proposed metrics are analyzed. A visualization technique for metric values is introduced, and metrics for process model understandability and granularity are evaluated.
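As a minimal illustration of internal (structural) attributes, the sketch below computes two metrics commonly used in process measurement, model size and Cardoso's Control-Flow Complexity (CFC), on a toy process graph; the encoding of the model is an assumption for illustration, not the book's notation:

```python
# Hypothetical process model: a list of (node_id, node_type, successors).
# Node types: 'task', 'xor', 'or', 'and'.
model = [
    ("start", "task", ["split"]),
    ("split", "xor", ["a", "b"]),  # exclusive choice with 2 branches
    ("a", "task", ["join"]),
    ("b", "task", ["join"]),
    ("join", "task", []),
]

def size(model):
    """Structural metric: number of nodes in the model."""
    return len(model)

def cfc(model):
    """Cardoso's Control-Flow Complexity: states induced by split nodes."""
    total = 0
    for _, kind, succ in model:
        fan_out = len(succ)
        if kind == "xor":
            total += fan_out           # one state per exclusive branch
        elif kind == "or":
            total += 2 ** fan_out - 1  # any non-empty subset of branches
        elif kind == "and":
            total += 1                 # a single parallel state
    return total

print(size(model), cfc(model))  # 5 2
```

A prediction framework of the kind the book describes would then relate such internal values to measured external attributes like error-proneness.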
Doing good while making profit: perspectives on reconciling multiple objectives in social enterprises
This dissertation looks at social ventures that create social impact whilst being self-sustainable. By adopting three different theoretical perspectives, it highlights various aspects of organizations with multiple social and economic objectives. Study 1 examines the hybrid nature of social ventures: the conditions under which social ventures develop hybrid value-creating activities to deal with their economic and social goals. The findings draw on data from 11 social ventures and combine an inductive analysis with an fsQCA analysis. The focal point of the study is how the interplay of the exclusiveness of the beneficiary target group, the overlap between customers and beneficiaries, and the visibility of the social mission in the value offering influences the extent to which social ventures hybridize their means. The study contributes to the literature on social ventures specifically, and on hybrid ventures more generally.
Study 2 is based on a qualitative and inductive ethnographic study of a social venture. It sheds light on how organizational members have an imprinting effect on a venture beyond the founding phase. The model proposed in this study illuminates how the imprinting process is an ongoing, two-way interaction between the individual and the organizational level. The analysis shows how the initial imprint of the venture attracts people with specific social identities, and how bottom-up involvement of organizational members shapes the imprint through three processes: projecting, sharing and contextualizing. This study adds to the literature on imprinting on the one hand, and to the literature on social ventures on the other.
Study 3 is a comparative study of two social ventures bringing electrification to rural communities in a bottom-of-the-pyramid market. The study unpacks how these ventures design governance models to align the heterogeneous interests of their stakeholders, including customers, employees and local communities, with their own organizational social and economic objectives. On the one hand, the results of the analysis show how the two ventures differ in terms of a customer versus community focus in their governance approach. On the other hand, the analysis shows how this divergent take on governance is driven by a different perception of stakeholder categories, a dissimilar conceptualization of the beneficiaries of the social mission, and a different extent of adopting relational versus transactional approaches towards the stakeholders. The study is a response to calls for research on governance in the context of organizations with multiple social and economic objectives.
Exploiting Contextual Independence In Probabilistic Inference
Bayesian belief networks have grown to prominence because they provide compact representations for many problems for which probabilistic inference is appropriate, and there are algorithms to exploit this compactness. The next step is to allow compact representations of the conditional probabilities of a variable given its parents. In this paper we present such a representation, which exploits contextual independence in terms of parent contexts; which variables act as parents may depend on the values of other variables. The internal representation is in terms of contextual factors (confactors), each of which is simply a pair of a context and a table. The algorithm, contextual variable elimination, is based on the standard variable elimination algorithm, which eliminates the non-query variables in turn; but when eliminating a variable, the tables that need to be multiplied can depend on the context. This algorithm reduces to standard variable elimination when there is no contextual independence structure to exploit. We show how it can be much more efficient than variable elimination when there is structure to exploit, and we explain why this new method can exploit more structure than previous methods for structured belief-network inference and than an analogous algorithm that uses trees.
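A minimal sketch of the confactor representation described above (variable names and the lookup helper are assumptions, not the paper's code): the conditional probability of Y splits into two compact confactors because Z acts as a parent only in the context X = 1, instead of one full table over X, Z, and Y.

```python
# A confactor pairs a context (a variable assignment) with a table over
# some variables. P(Y | X, Z) with contextual independence: Z matters
# only when X = 1.
confactors = [
    # Context X=0: Y is independent of Z, so a 1-variable table suffices.
    ({"X": 0}, ("Y",), {(0,): 0.9, (1,): 0.1}),
    # Context X=1: Y depends on Z, so the table ranges over Y and Z.
    ({"X": 1}, ("Y", "Z"), {(0, 0): 0.7, (1, 0): 0.3,
                            (0, 1): 0.2, (1, 1): 0.8}),
]

def lookup(confactors, assignment):
    """P(Y = y | parents) under a full assignment {variable: value}."""
    for context, variables, table in confactors:
        if all(assignment[v] == val for v, val in context.items()):
            return table[tuple(assignment[v] for v in variables)]
    raise ValueError("no confactor covers this context")

p = lookup(confactors, {"X": 0, "Z": 1, "Y": 0})  # Z is ignored when X = 0
```

During contextual variable elimination, only confactors whose contexts are compatible are multiplied, which is where the savings over full-table variable elimination come from.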
Inference and Learning with Planning Models
Inference and learning are the acts of reasoning about collected evidence in order to reach logical conclusions about the process that originated it. In the context of a state-space model, inference and learning are usually concerned with explaining an agent's past behaviour, predicting its future actions, or identifying its model. In this thesis, we present a framework for inference and learning in the state-space model underlying the classical planning model, and formulate a palette of inference and learning problems under this unifying umbrella. We also develop effective planning-based approaches to solve these problems using off-the-shelf, state-of-the-art planning algorithms. We show that several core inference and learning problems that previous research has treated as disconnected can be formulated in a cohesive way and solved following homogeneous procedures using the proposed framework. Further, our work opens the way for new applications of planning technology, as it highlights the features that make the state-space model of classical planning different from other models.
The work developed in this doctoral thesis has been possible thanks to the FPU16/03184 fellowship held for the duration of the author's PhD studies, with additional support from grants TIN2017-88476-C2-1-R, TIN2014-55637-C2-2-R-AR, and RYC-2015-18009. Aineto García, D. (2022). Inference and Learning with Planning Models [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/18535
A comprehensive part model and graphical schema representation for object-oriented databases
Part-whole modeling plays an important role in the development of database schemata in data-intensive application domains such as manufacturing, design, computer graphics, text document processing, and so on. Object-oriented databases (OODBs) have been targeted for use in such areas. Thus, it is essential that OODBs incorporate a part relationship as one of their modeling primitives. In this dissertation, we present a comprehensive OODB part model which expands the boundaries of OODB part-whole modeling along three fronts. First, it identifies and codifies new semantics for the OODB part relationship. Second, it provides two novel realizations for part relationships and their associated modeling constructs in the context of OODB data models. Third, it provides an extensive graphical notation for the development of OODB schemata.
The heart of the part model is a part relationship that imposes part-whole interaction on the instances of an OODB. The part relationship is divided into four characteristic dimensions: (1) exclusive/shared, (2) cardinality/ordinality, (3) dependency, and (4) value propagation. The last of these forms the basis for the definition of derived attributes in a part hierarchy.
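The four dimensions can be sketched as attributes of a part-relationship object (a minimal illustration: the class and attribute names are assumptions, not the dissertation's constructs):

```python
class PartRelationship:
    """Toy encoding of the four dimensions of a part relationship."""
    def __init__(self, whole, part, exclusive, cardinality,
                 dependent, propagated_attrs):
        self.whole = whole                        # the composite class
        self.part = part                          # the component class
        self.exclusive = exclusive                # (1) exclusive vs. shared
        self.cardinality = cardinality            # (2) (min, max) parts/whole
        self.dependent = dependent                # (3) part deleted with whole?
        self.propagated_attrs = propagated_attrs  # (4) value propagation:
                                                  #     attrs derived from parts

wheel_of_car = PartRelationship(
    whole="Car", part="Wheel",
    exclusive=True,               # a wheel belongs to at most one car
    cardinality=(4, 4),           # exactly four wheels per car
    dependent=False,              # a wheel can outlive its car
    propagated_attrs=["weight"],  # a car's weight is derived from its parts
)
```

Value propagation (dimension 4) is what a derived attribute such as `weight` would use: the whole's value is computed from the corresponding values of its parts.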
To demonstrate the viability of our part model, we present two novel realizations for it in the context of existing OODBs. The first realizes the part relationship as an object class and utilizes only a basic set of OODB constructs. The second realization, an implementation of which is described in this dissertation, uses the unique metaclass mechanism of the VODAK Model Language (VML). This implementation shows that our part model can be incorporated into an existing OODB without having to rewrite a substantial subsystem of the OODB, and it also shows that the VML metaclass facility can indeed support extensions in terms of new semantic relationships.
To facilitate the creation of part-whole schemata, we introduce an extensive graphical notation for the part relationship and its associated constructs. This notation complements our more general OODB graphical schema representation, which includes symbols for classes, attributes, methods, and a variety of relationships. OO-dini, a graphical schema editor that employs our notation and supports conversion of the graphical schema into textual formats, is also discussed.
Ontology Validation for Managers
Ontology-driven conceptual modeling focuses on accurately representing a domain of interest, instead of making information fit an arbitrary set of constructs. It may be used for different purposes, such as achieving semantic interoperability (Nardi, Falbo and Almeida, 2013), developing knowledge representation models (Guizzardi and Zamborlini, 2012) and evaluating languages (Santos, Almeida and Guizzardi, 2010). Regardless of its final application, a model must be accurately defined in order for it to be a successful solution.
This new branch of conceptual modeling improves on traditional techniques by taking into consideration ontological properties, such as rigidity, identity and dependence, which are derived from a foundational ontology. The growing interest in more expressive languages for conceptual modeling is shown by OMG's request for language proposals for the Semantic Information Model Federation (SIMF) (OMG, 2011). OntoUML (Guizzardi, 2005) is an example of a language designed for that purpose. Its metamodel (Carraretto, 2010) is designed to comply with the Unified Foundational Ontology (UFO) and focuses on structural aspects of individuals and universals. Grounded in human cognition and linguistics, it aims to provide the most basic categories in which humans understand and classify the things around them. In (Guizzardi, 2010), Guizzardi quotes Dijkstra's famous lecture on the humble programmer and draws an analogy he calls the humble ontologist. He argues that the task of ontology-driven conceptual modeling is extremely complex and that modelers should therefore surround themselves with as many tools as possible to aid in the development of the ontology. These complexities arise from different sources. Some come from the foundational ontology itself: its modal nature, which requires modelers to deal with possibilities, and the many different restrictions of each ontological category. Others come from the need to accurately define instance-level constraints, which require additional rules outside of the language's graphical notation.
To help modelers develop high-quality OntoUML models, a number of tools have been proposed to aid in different phases of conceptual modeling, from the construction of the models themselves using design-pattern questions (Guizzardi et al., 2011), to automatic syntax verification (Benevides, 2010) and model validation through simulation (Benevides et al., 2010).
The importance of a domain specification that accurately captures the intended conceptualization has been recognized by both the traditional conceptual modeling community (Moody et al., 2003) and the ontology community (Vrandečić, 2009). In this research we want to improve on the initiative of Benevides et al. (2010), but focus exclusively on the validation of ontology-driven conceptual models, not on verification. With the complexity of the modeling activity in mind, we want to help modelers systematically produce high-quality ontologies, improving the precision and coverage (Gangemi et al., 2005) of their models. We intend to make the simulation-based approach available to users who are not experts in the formal method, relieving them of the need to learn yet another language solely for the purpose of validating their models.
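The idea behind simulation-based validation can be sketched in a few lines (a toy encoding, not the tooling of Benevides et al., 2010): exhaustively instantiate a small model, keep the instances that satisfy the declared constraints, and let the modeler inspect them for unintended instances.

```python
from itertools import combinations, product

# Toy model: two persons and a 'married_to' relation. The only declared
# constraint is symmetry; simulation then reveals unintended valid
# instances, e.g. a person married to themselves, exposing a missing
# irreflexivity rule.
persons = ["p1", "p2"]
pairs = list(product(persons, persons))

def satisfies(instance):
    # Declared constraint: marriage is symmetric.
    return all((b, a) in instance for a, b in instance)

# Enumerate every subset of the relation and keep the constraint-satisfying
# ones; these are the instances shown to the modeler for inspection.
valid = [inst for r in range(len(pairs) + 1)
         for inst in combinations(pairs, r)
         if satisfies(set(inst))]

self_marriages = [inst for inst in valid if ("p1", "p1") in inst]
```

Because `self_marriages` is non-empty, a modeler inspecting the simulation output would notice that the model admits self-marriage and add the missing instance-level rule, which is precisely the kind of feedback loop the validation approach aims to support.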