
    Sensitivity of Species Habitat-Relationship Model Performance to Factors of Scale

    Researchers have come to different conclusions about the usefulness of habitat-relationship models for predicting species presence or absence. This difference frequently stems from a failure to recognize the effects of the spatial scales at which the models are applied. We examined the effects of model complexity, spatial data resolution, and scale of application on the performance of bird habitat relationship (BHR) models on the Craig Mountain Wildlife Management Area and on the Idaho portion of the U.S. Forest Service's Northern Region. We constructed and tested BHR models for 60 bird species detected on the study areas. The models varied by three levels of complexity (amount of habitat information) and three spatial data resolutions (0.09 ha, 4 ha, 10 ha). We tested these models at two levels of analysis: the site level (a homogeneous area <0.5 ha) and the cover-type level (an aggregation of many similar sites of a similar land-cover type), using correspondence between model predictions and species detections to calculate kappa coefficients of agreement. Model performance initially increased as models became more complex, until a point was reached where omission errors increased faster than commission errors decreased. Heterogeneity of the study areas appeared to influence the effect of model complexity. Changes in model complexity resulted in a greater decrease in commission error than increase in omission error. The effect of spatial data resolution on the performance of BHR models was influenced by the variability of the study area. BHR models performed better at the cover-type level of analysis than at the site level for both study areas. Correct-presence estimates (1 − percentage omission error) decreased slightly as the number of species detections increased on each study area. Correct-absence estimates (1 − percentage commission error) increased as the number of species detections increased on each study area. This suggests that a large number of detections may be necessary to achieve reliable estimates of model accuracy.
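    The evaluation quantities in this abstract — kappa coefficients of agreement, omission and commission errors, and the correct-presence/correct-absence estimates derived from them — can be sketched from a 2×2 confusion matrix of model predictions versus species detections. This is a minimal illustration, not the authors' code; the counts in the usage example are invented.

    ```python
    # Cohen's kappa and omission/commission error rates for a binary
    # presence/absence habitat model, from a 2x2 confusion matrix:
    #   tp = predicted present, detected present
    #   fp = predicted present, not detected (commission)
    #   fn = predicted absent, detected present (omission)
    #   tn = predicted absent, not detected
    def kappa(tp, fp, fn, tn):
        """Cohen's kappa coefficient of agreement for binary predictions."""
        n = tp + fp + fn + tn
        observed = (tp + tn) / n
        # Agreement expected by chance, from the marginal totals
        expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / (n * n)
        return (observed - expected) / (1 - expected)

    def error_rates(tp, fp, fn, tn):
        """Return (omission, commission) error rates as fractions."""
        omission = fn / (tp + fn)    # detected sites the model missed
        commission = fp / (fp + tn)  # predicted-present sites with no detection
        return omission, commission

    # Invented example counts for one species:
    k = kappa(40, 10, 5, 45)            # 0.7
    om, com = error_rates(40, 10, 5, 45)
    correct_presence = 1 - om            # the abstract's correct-presence estimate
    correct_absence = 1 - com            # the abstract's correct-absence estimate
    ```

    With these counts, observed agreement is 0.85 against a chance expectation of 0.5, giving kappa = 0.7.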

    A predictive processing theory of sensorimotor contingencies: explaining the puzzle of perceptual presence and its absence in synesthesia

    Normal perception involves experiencing objects within perceptual scenes as real, as existing in the world. This property of “perceptual presence” has motivated “sensorimotor theories,” which understand perception to involve the mastery of sensorimotor contingencies. However, the mechanistic basis of sensorimotor contingencies and their mastery has remained unclear. Sensorimotor theory also struggles to explain instances of perception, such as synesthesia, that appear to lack perceptual presence and for which relevant sensorimotor contingencies are difficult to identify. On alternative “predictive processing” theories, perceptual content emerges from probabilistic inference on the external causes of sensory signals; however, this view has addressed neither the problem of perceptual presence nor synesthesia. Here, I describe a theory of predictive perception of sensorimotor contingencies which (1) accounts for perceptual presence in normal perception, as well as its absence in synesthesia, and (2) operationalizes the notion of sensorimotor contingencies and their mastery. The core idea is that generative models underlying perception incorporate explicitly counterfactual elements related to how sensory inputs would change on the basis of a broad repertoire of possible actions, even if those actions are not performed. These “counterfactually rich” generative models encode sensorimotor contingencies related to repertoires of sensorimotor dependencies, with counterfactual richness determining the degree of perceptual presence associated with a stimulus. While the generative models underlying normal perception are typically counterfactually rich (reflecting a large repertoire of possible sensorimotor dependencies), those underlying synesthetic concurrents are hypothesized to be counterfactually poor. In addition to accounting for the phenomenology of synesthesia, the theory naturally accommodates phenomenological differences between a range of experiential states, including dreaming and hallucination. It may also lead to a new view of the (in)determinacy of normal perception.

    CLIVAR Mode Water Dynamics Experiment (CLIMODE) fall 2005, R/V Oceanus voyage 419, November 9, 2005–November 27, 2005

    CLIMODE (CLIVAR Mode Water Dynamics Experiment) is a program designed to understand and quantify the processes responsible for the formation and dissipation of North Atlantic subtropical mode water, also called Eighteen Degree Water (EDW). Among these processes, the amount of buoyancy loss at the ocean-atmosphere interface is still uncertain and needs to be accurately quantified. In November 2005, a cruise was made aboard R/V Oceanus in the region of the separated Gulf Stream, where intense oceanic heat loss to the atmosphere is believed to trigger the formation of EDW. During that cruise, one surface mooring with IMET meteorological instruments was anchored in the core of the Gulf Stream, and two moored profilers were placed on its southeastern edge. Surface drifters, APEX floats, and bobber RAFOS floats were also deployed, along with two other moorings carrying sound sources. CTD profiles and water samples were also collected. This array of instruments will permit a characterization of EDW at high spatial and temporal resolution, along with accurate in-situ measurements of air-sea fluxes in the formation region. The present report documents the cruise, the instruments that were deployed, and the array of measurements that was set in place. Funding was provided by the National Science Foundation under Grant No. OCE 04-24536.

    Cognitive Computation sans Representation

    The Computational Theory of Mind (CTM) holds that cognitive processes are essentially computational, and hence that computation provides the scientific key to explaining mentality. The Representational Theory of Mind (RTM) holds that representational content is the key feature distinguishing mental from non-mental systems. I argue that there is a deep incompatibility between these two theoretical frameworks, and that the acceptance of CTM provides strong grounds for rejecting RTM. The focal point of the incompatibility is the fact that representational content is extrinsic to formal procedures as such, and the intended interpretation of syntax makes no difference to the execution of an algorithm. So the unique 'content' postulated by RTM is superfluous to the formal procedures of CTM. And once these procedures are implemented in a physical mechanism, it is exclusively the causal properties of the physical mechanism that are responsible for all aspects of the system's behaviour. So once again, postulated content is rendered superfluous. To the extent that semantic content may appear to play a role in behaviour, it must be syntactically encoded within the system; and just as in a standard computational artefact, so too with the human mind/brain: it is pure syntax all the way down to the level of physical implementation. Hence 'content' is at most a convenient meta-level gloss, projected from the outside by human theorists, which itself can play no role in cognitive processing.

    SentiBench - a benchmark comparison of state-of-the-practice sentiment analysis methods

    In the last few years, thousands of scientific papers have investigated sentiment analysis, several startups that measure opinions on real data have emerged, and a number of innovative products related to this theme have been developed. There are multiple methods for measuring sentiment, including lexicon-based and supervised machine-learning methods. Despite the vast interest in the theme and the wide popularity of some methods, it is unclear which one is better for identifying the polarity (i.e., positive or negative) of a message. Accordingly, there is a strong need for a thorough apples-to-apples comparison of sentiment analysis methods, as they are used in practice, across multiple datasets originating from different data sources. Such a comparison is key for understanding the potential limitations, advantages, and disadvantages of popular methods. This article aims at filling this gap by presenting a benchmark comparison of twenty-four popular sentiment analysis methods (which we call the state-of-the-practice methods). Our evaluation is based on a benchmark of eighteen labeled datasets, covering messages posted on social networks, movie and product reviews, and opinions and comments in news articles. Our results highlight that the prediction performance of these methods varies considerably across datasets. Aiming to boost the development of this research area, we release the methods' code and the datasets used in this article, deploying them in a benchmark system that provides an open API for accessing and comparing sentence-level sentiment analysis methods.
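    One family of methods the abstract mentions — lexicon-based polarity classification — and the kind of per-dataset accuracy evaluation a benchmark like this performs can be sketched as follows. This is an invented illustration, not SentiBench code: the tiny lexicon, the example messages, and their labels are all made up for demonstration.

    ```python
    # A minimal lexicon-based sentiment method: count positive and negative
    # lexicon hits and classify by the sign of the difference.
    POSITIVE = {"good", "great", "love", "excellent", "happy", "best"}
    NEGATIVE = {"bad", "terrible", "hate", "awful", "sad", "worst"}

    def polarity(message: str) -> str:
        """Classify a message as positive, negative, or neutral."""
        tokens = message.lower().split()
        score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
        return "positive" if score > 0 else "negative" if score < 0 else "neutral"

    def accuracy(labeled_messages):
        """Fraction of (text, gold_label) pairs classified correctly."""
        correct = sum(polarity(text) == gold for text, gold in labeled_messages)
        return correct / len(labeled_messages)

    # Benchmark-style evaluation: run the same method on datasets from
    # different sources to see how performance varies with the text domain.
    reviews = [("the best movie I have seen", "positive"),
               ("the worst service ever", "negative")]
    tweets = [("so happy today", "positive"),
              ("this is bad and sad", "negative"),
              ("meeting at noon", "neutral")]
    scores = {"reviews": accuracy(reviews), "tweets": accuracy(tweets)}
    ```

    Real lexicon methods use lexicons of thousands of scored terms and handle negation and intensifiers, but the evaluation loop — one method, many labeled datasets, one score per dataset — is the shape of the comparison the article describes.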