
    Passives are not hard to interpret but hard to remember: evidence from online and offline studies

    Passive sentences are considered more difficult to comprehend than active ones. Previous online-only studies cast doubt on this generalization. The current paper directly compares online and offline processing of passivization and manipulates verb type: state vs. event. Stative passives are temporarily ambiguous (adjectival vs. verbal), whereas eventive passives are not (always verbal). Across four experiments (self-paced reading with comprehension questions), passives were consistently read faster than actives. This contradicts the claim that passives are difficult to parse and/or interpret, as argued by the main perspectives on passive processing (heuristic or syntactic). The reading-time facilitation is compatible with broader expectation/surprisal theories. When comprehension questions targeted theta-role assignment, passives were more error-prone, regardless of verb type. Verbal working-memory measures did not correlate with the difference in accuracy, excluding working memory as an explanation. The accuracy effect is argued to reflect a post-interpretive difficulty associated with generating and maintaining a propositional representation of passives, as required by specific tasks.

    21st Century STEM Reasoning

    The Georgia Southern University Real STEM Project is collaborating with 10 middle and high schools on the development and implementation of an interdisciplinary STEM research and design course that actively engages students in real-world problem solving. We will discuss the expected outcome of such interdisciplinary STEM experiences: the development of 21st-century STEM reasoning. Examples of authentic teaching strategies that promote complex systems reasoning, model-based reasoning, computational reasoning, engineering design-based reasoning, and quantitative reasoning will be shared with participants.

    Adaptive User Interfaces for Intelligent E-Learning: Issues and Trends

    Adaptive user interfaces have a long history rooted in the emergence of such eminent technologies as artificial intelligence, soft computing, graphical user interfaces, Java, the Internet, and mobile services. More specifically, the advent and advancement of web and mobile learning services has brought forward adaptivity as an immensely important issue for both the efficacy and the acceptability of such services. The success of such a learning process depends on the intelligent, context-oriented presentation of domain knowledge and its adaptivity in terms of complexity and granularity, consistent with the learner’s cognitive level and progress. Researchers have long deemed adaptive user interfaces a promising solution in this regard. However, the richness of human behavior, technological opportunities, and the contextual nature of information pose daunting challenges. These require creativity, cross-domain synergy, cross-cultural and cross-demographic understanding, and an adequate representation of the mission and conception of the task. This paper reviews the state of the art in adaptive user interface research in intelligent multimedia educational systems and related areas, with an emphasis on core issues and future directions.

    Incommensurability and Theory Change

    The paper explores the relativistic implications of the thesis of incommensurability. A semantic form of incommensurability due to semantic variation between theories is distinguished from a methodological form due to variation in methodological standards between theories. Two responses to the thesis of semantic incommensurability are dealt with: the first challenges the idea of untranslatability to which semantic incommensurability gives rise; the second holds that relations of referential continuity eliminate semantic incommensurability. It is then argued that methodological incommensurability poses little risk to the rationality or objectivity of science. For rational theory choice need neither be dictated by an algorithm nor governed by a binding set of rules. The upshot of the discussion is deflationary. There is little prospect for a relativistic conception of science based on inflated claims about the incommensurability of scientific theories

    Quantum Mechanical Reality: Entanglement and Decoherence

    We look into the ontology of quantum theory as distinct from that of classical theory in the sciences, following a broadly Kantian tradition and distinguishing between noumenal and phenomenal realities, where the former is independent of our perception while the latter is assembled from the former by means of fragmentary bits of interpretation. Within this framework, theories are conceptual constructs applying to models generated in the phenomenal world within limited contexts. The ontology of quantum theory principally rests on the view that entities in the world are pervasively correlated with one another, not by means of probabilities as in classical theory, but by means of probability amplitudes involving finely tuned phases of quantum mechanical states (entanglement). The quantum correlations are shared globally in the process of environment-induced decoherence, whereby locally generated correlations are removed; the removal is especially manifest in the case of systems that appear classical, in which case the process is almost instantaneous, being, in all likelihood, driven by field fluctuations in the Planck regime. This points to factors of an unknown nature determining its finest details, since Planck-scale physics remains an obscure terrain. In other words, present-day quantum theory holds within a limited context set by the Planck scale.

    Local Belief Dynamics in Network Knowledge Bases

    People are becoming increasingly connected to each other as social networks continue to grow in both number and variety, and this is true for autonomous software agents as well. Taken as a collection, such social platforms can be seen as one complex network with many different types of relations, different degrees of strength for each relation, and a wide range of information at each node. In this context, social media posts made by users are reflections of the content of their own individual (or local) knowledge bases; modeling how knowledge flows over the network, or how this can possibly occur, is therefore of great interest from a knowledge representation and reasoning perspective. In this article, we provide a formal introduction to the network knowledge base model, and then focus on the problem of how a single agent's knowledge base changes when exposed to a stream of news items coming from other members of the network. We do so by taking the classical belief revision approach: first proposing desirable properties for how such a local operation should be carried out (theoretical characterization), arriving at three different families of local operators; then exploring concrete algorithms (algorithmic characterization) for two of the families; and finally proving properties about the relationship between the two characterizations (representation theorem). One of the most important differences between our approach and the classical models of belief revision is that in our case the input is more complex, containing additional information about each piece of information.
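    The setting described above can be illustrated with a toy model. The sketch below is an assumption-laden illustration, not the operators defined in the article: agents hold signed literals, each neighbor has a trust weight, and a contradiction is resolved in favor of the more trusted source.

```python
# Toy sketch of a "network knowledge base" agent. The Agent class, its
# fields, and the trust-based revision policy are all illustrative
# assumptions, not the local operators characterized in the article.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    beliefs: dict = field(default_factory=dict)   # atom -> (polarity, trust of its source)
    trust: dict = field(default_factory=dict)     # neighbor name -> trust in [0, 1]

    def receive(self, atom: str, polarity: bool, source: str) -> bool:
        """Incorporate (atom, polarity) from `source`; return True if adopted."""
        incoming_trust = self.trust.get(source, 0.0)
        if atom not in self.beliefs:
            self.beliefs[atom] = (polarity, incoming_trust)
            return True
        old_polarity, old_trust = self.beliefs[atom]
        if old_polarity == polarity:
            # Reinforcement: keep the belief, remember the strongest support.
            self.beliefs[atom] = (polarity, max(old_trust, incoming_trust))
            return True
        # Contradiction: side with the more trusted source.
        if incoming_trust > old_trust:
            self.beliefs[atom] = (polarity, incoming_trust)
            return True
        return False

a = Agent("a", trust={"b": 0.9, "c": 0.3})
a.receive("rain", True, "c")    # adopted: no prior belief about `rain`
a.receive("rain", False, "b")   # adopted: `b` is trusted more than `c`
print(a.beliefs["rain"])        # (False, 0.9)
```

    A real operator family would of course carry more structure (relation types, degrees of strength, provenance), which is precisely the extra input complexity the article emphasizes.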

    Do you want cookies? Trust dynamics and educational gaps in the datafied risk society

    Accepting cookies is an activity that most internet users routinely perform multiple times a day. Cookies are just one of the many ways in which our behaviours and preferences are tracked online and rendered into quantified bits: data. This process of rendering all aspects of life into data is called datafication, and it is pervasive today thanks to powerful digital tools and inexpensive storage solutions. However, the datafication process also generates risks, as individuals may lose their ability to control the flow of information about themselves, i.e., their privacy. In the thesis I apply the Risk Society perspective proposed by Ulrich Beck to the study of datafication, to explain the uneven acknowledgment of privacy risks unfolding within the datafied society. In particular, I investigate the datafied risk society by focusing on two main questions: first, are risks induced by datafication acknowledged, and what happens once they are uncovered? And second, what is the role of knowledge in recognizing such risks? These questions are addressed in four studies, based on a mix of quantitative analytical techniques applied to secondary data and to survey data specifically collected for the purposes of the thesis. The results of the empirical chapters show that privacy risks are often not acknowledged: for instance, people tend to be willing to accept a privacy-intrusive vaccination certificate in the context of the COVID-19 pandemic, especially when they have high trust in the government and in science. In addition, even when people were confronted with the misuse of data following the Cambridge Analytica scandal, I do not detect a large drop in trust in social media. Results also show that education plays a role in shielding against privacy risks: more highly educated individuals appear more skeptical of online surveillance and are more prone to protect their privacy online.
    While datafication facilitates many societal operations, it also induces privacy risks, which are made more acute if citizens keep ‘accepting cookies’ uncritically. In addition, measures should be introduced to ensure that vulnerable individuals are not further penalized by datafication processes, for instance by introducing online privacy skills early in educational curricula.

    The Complexity of Repairing, Adjusting, and Aggregating of Extensions in Abstract Argumentation

    We study the computational complexity of problems that arise in abstract argumentation in the context of dynamic argumentation, minimal change, and aggregation. In particular, we consider the following problems, in each of which an argumentation framework F and a small positive integer k are given.
    - The Repair problem asks whether a given set of arguments can be modified into an extension by at most k elementary changes (i.e., the extension is at distance at most k from the given set).
    - The Adjust problem asks whether a given extension can be modified, by at most k elementary changes, into an extension that contains a specified argument.
    - The Center problem asks whether, given two extensions at distance k, there is a "center" extension at distance at most (k-1) from both.
    We study these problems in the framework of parameterized complexity, taking the distance k as the parameter. Our results cover several different semantics, including admissible, complete, preferred, semi-stable, and stable semantics.
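    The distance notion underlying these problems can be made concrete in a few lines: an elementary change adds or removes one argument, so the distance between two sets is the size of their symmetric difference. The sketch below is illustrative only, using admissible semantics and a naive brute-force search; the paper's contribution is the parameterized complexity analysis, not this algorithm.

```python
# Illustrative brute-force Repair under admissible semantics: an elementary
# change toggles one argument, so distance = |symmetric difference|.
from itertools import combinations

def is_admissible(S, args, attacks):
    """S is conflict-free and defends each of its members."""
    S = set(S)
    # Conflict-free: no member of S attacks another member of S.
    if any((a, b) in attacks for a in S for b in S):
        return False
    for a in S:
        for b in args:
            if (b, a) in attacks:
                # Each attacker of S must be counter-attacked by S.
                if not any((c, b) in attacks for c in S):
                    return False
    return True

def repair(S, k, args, attacks):
    """Return an admissible set within distance k of S, or None."""
    S = set(S)
    for d in range(k + 1):
        for changed in combinations(args, d):
            candidate = S.symmetric_difference(changed)
            if is_admissible(candidate, args, attacks):
                return candidate
    return None

# Toy framework: a and b attack each other, and b attacks c.
args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "a"), ("b", "c")}
print(repair({"a", "c"}, 1, args, attacks))   # {"a", "c"} is already admissible
```

    Enumerating all sets within distance k costs roughly O(n^k) candidates, which is why treating k as the parameter, as the paper does, is the natural way to study these problems.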