
    Tutorial on Latent Growth Models for Longitudinal Data Analysis

    This tutorial introduces Latent Growth Modeling (LGM), a promising method for analyzing longitudinal data when the goal is to understand processes of change over time. Given the need to move beyond cross-sectional models in Information Systems (IS) research, explore complex longitudinal IS phenomena, and test IS theories over time, LGM is proposed as a complementary method to help IS researchers frame time-dependent hypotheses and draw longitudinal inferences about IS theories. The facilitators will explain the importance of theorizing patterns of change over time, how to propose longitudinal hypotheses, and how LGM can help test them. They will describe the tenets of LGM and offer guidelines for applying it in IS research, including framing time-dependent hypotheses that can be readily tested with LGM. They will also demonstrate LGM in SAS 9.2 with a hands-on application that models the complex longitudinal relationship between IT and firm performance using longitudinal data from Fortune 1000 firms. Finally, they will compare LGM with other existing methods for modeling longitudinal data and discuss its advantages and disadvantages for identifying longitudinal patterns in data.
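    The growth-curve idea behind LGM can be illustrated outside SAS. The sketch below is a simplified random-coefficients illustration on simulated data, not the SEM-based estimation the tutorial covers: each subject gets a fitted linear trajectory, and the intercepts (initial status) and slopes (rate of change) are then summarized across subjects. All parameter values are invented for the simulation.

    ```python
    import numpy as np

    # Simplified growth-curve sketch (not full SEM-based LGM): fit a linear
    # trajectory per subject, then summarize the distribution of intercepts
    # (initial status) and slopes (rate of change) across subjects.
    rng = np.random.default_rng(42)

    n_subjects, n_waves = 200, 5
    time = np.arange(n_waves)                            # measurement occasions 0..4

    true_intercepts = rng.normal(2.0, 0.5, n_subjects)   # latent initial status
    true_slopes = rng.normal(0.5, 0.2, n_subjects)       # latent growth rate
    y = (true_intercepts[:, None] + true_slopes[:, None] * time
         + rng.normal(0.0, 0.3, (n_subjects, n_waves)))  # observed panel data

    # Estimate each subject's trajectory by ordinary least squares, then pool.
    fits = np.array([np.polyfit(time, y[i], 1) for i in range(n_subjects)])
    mean_slope, mean_intercept = fits[:, 0].mean(), fits[:, 1].mean()
    print(f"mean intercept ~ {mean_intercept:.2f}, mean slope ~ {mean_slope:.2f}")
    ```

    In a full LGM, the intercept and slope are latent factors estimated jointly with their variances and covariance, which is what makes time-dependent hypotheses testable; the per-subject regression above only conveys the intuition.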

    Identifying the time profile of everyday activities in the home using smart meter data

    Activities are a descriptive term for the common ways households spend their time. Examples include cooking, doing laundry, or socialising. Smart meter data can be used to generate time profiles of activities that are meaningful to households’ own lived experience. Activities are therefore a lens through which energy feedback to households can be made salient and understandable. This paper demonstrates a multi-step methodology for inferring hourly time profiles of ten household activities using smart meter data, supplemented by individual appliance plug monitors and environmental sensors. First, household interviews, video ethnography, and technology surveys are used to identify appliances and devices in the home, and their roles in specific activities. Second, ‘ontologies’ are developed to map out the relationships between activities and technologies in the home. One or more technologies may indicate the occurrence of certain activities. Third, data are collected from smart meters, plug monitors, and environmental sensors. Smart meter data measuring aggregate electricity use are disaggregated and processed together with the plug monitor and sensor data to identify when and for how long different activities are occurring. Sensor data are particularly useful for activities that are not always associated with an energy-using device. Fourth, the ontologies are applied to the disaggregated data to make inferences about hourly time profiles of ten everyday activities. These include washing, doing laundry, and watching TV (reliably inferred), and cleaning, socialising, and working (inferred with uncertainties). Fifth, activity time diaries and structured interviews are used to validate both the ontologies and the inferred activity time profiles. Two case study homes are used to illustrate the methodology using data collected as part of a UK trial of smart home technologies.
The methodology is demonstrated to produce reliable time profiles of a range of domestic activities that are meaningful to households. The methodology also emphasises the value of integrating coded interview and video ethnography data into both the development and the validation of the activity inference process.
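The core of the fourth step, applying an ontology to disaggregated readings, can be sketched in a few lines. The activity and appliance names below are illustrative stand-ins, not the study's actual ontology, and real disaggregation would supply the per-hour appliance sets.

```python
# Minimal sketch of ontology-based activity inference: map indicator
# technologies to activities, then flag an activity in any hour where at
# least one of its indicator devices was observed in use.
ONTOLOGY = {
    "cooking":       {"oven", "hob", "microwave"},
    "doing laundry": {"washing_machine", "tumble_dryer"},
    "watching TV":   {"tv"},
}

def infer_activities(hourly_on):
    """hourly_on: dict mapping hour -> set of appliances observed in use."""
    profile = {}
    for hour, appliances in hourly_on.items():
        profile[hour] = sorted(
            activity for activity, devices in ONTOLOGY.items()
            if devices & appliances          # any indicator device active
        )
    return profile

readings = {18: {"oven", "tv"}, 19: {"tv"}, 20: {"washing_machine"}}
print(infer_activities(readings))
# {18: ['cooking', 'watching TV'], 19: ['watching TV'], 20: ['doing laundry']}
```

The paper's approach is richer than this: activities without a dedicated appliance rely on environmental sensors, and the inferences carry uncertainty rather than the binary flags shown here.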

    An alternative to Kitcher's theory of conceptual progress and his account of the change of the gene concept

    The present paper discusses Kitcher’s framework for studying conceptual change and progress. Kitcher’s core notion of reference potential is hard to apply to concrete cases. In addition, an account of conceptual change as change in reference potential misses some important aspects of conceptual change and conceptual progress. I propose an alternative framework that focuses on the inferences and explanations supported by scientific concepts. The application of my approach to the history of the gene concept offers a better account of the conceptual progress that occurred in the transition from the Mendelian to the molecular gene than Kitcher’s theory.

    Mathematics in the National Curriculum for Wales: Key Stages 2-4 = Mathemateg yng Nghwricwlwm Cenedlaethol Cymru: Cyfnodau Allweddol 2-4


    Two Kinds of Concept: Implicit and Explicit

    In his refreshing and thought-provoking book, Edouard Machery (2009) argues that people possess different kinds of concept. This is probably true and important. Before I get to that, I will briefly disagree on two other points.

    What does semantic tiling of the cortex tell us about semantics?

    Recent use of voxel-wise modeling in cognitive neuroscience suggests that semantic maps tile the cortex. Although this impressive research establishes distributed cortical areas active during the conceptual processing that underlies semantics, it tells us little about the nature of this processing. While mapping concepts between Marr's computational and implementation levels to support neural encoding and decoding, this approach ignores Marr's algorithmic level, central for understanding the mechanisms that implement cognition, in general, and conceptual processing, in particular. Following decades of research in cognitive science and neuroscience, what do we know so far about the representation and processing mechanisms that implement conceptual abilities? Most basically, much is known about the mechanisms associated with: (1) features and frame representations, (2) grounded, abstract, and linguistic representations, (3) knowledge-based inference, (4) concept composition, and (5) conceptual flexibility. Rather than explaining these fundamental representation and processing mechanisms, semantic tiles simply provide a trace of their activity over a relatively short time period within a specific learning context. Establishing the mechanisms that implement conceptual processing in the brain will require more than mapping it to cortical (and sub-cortical) activity, with process models from cognitive science likely to play central roles in specifying the intervening mechanisms. More generally, neuroscience will not achieve its basic goals until it establishes algorithmic-level mechanisms that contribute essential explanations to how the brain works, going beyond simply establishing the brain areas that respond to various task conditions.

    Getting to know you: Accuracy and error in judgments of character

    Character judgments play an important role in our everyday lives. However, decades of empirical research on trait attribution suggest that the cognitive processes that generate these judgments are prone to a number of biases and cognitive distortions. This gives rise to a skeptical worry about the epistemic foundations of everyday characterological beliefs that has deeply disturbing and alienating consequences. In this paper, I argue that this skeptical worry is misplaced: under the appropriate informational conditions, our everyday character-trait judgments are in fact quite trustworthy. I then propose a mindreading-based model of the socio-cognitive processes underlying trait attribution that explains both why these judgments are initially unreliable, and how they eventually become more accurate.

    Real science for young scientists

