
    A HAND-HELD STRUCTURE FROM MOTION PHOTOGRAMMETRIC APPROACH TO RIPARIAN AND STREAM ASSESSMENT AND MONITORING

    Two of the biggest weaknesses in stream restoration and monitoring are: 1) subjective estimation and subsequent comparison of changes in channel form, vegetative cover, and in-stream habitat; and 2) the high costs in terms of financing, human resources, and time necessary to make these estimates. Remote sensing can be used to remedy these weaknesses and save organizations focused on restoration both money and time. However, implementing traditional remote sensing approaches via autonomous aerial systems or light detection and ranging systems is either prohibitively expensive or impossible along small streams with dense vegetation. Hand-held Structure from Motion Multi-view Stereo (SfM-MVS) photogrammetric technology can solve these problems by offering a resource-efficient approach for producing 3D models of a variety of environments. SfM-MVS photogrammetric technology is the result of cutting-edge advances in computer vision algorithms and discipline-specific research in the geosciences. This study found that images taken by GoPro, iPhone, and Digital Single-Lens Reflex cameras were all capable of producing 3D representations of heavily vegetated stream corridors with minimal image post-processing using workflows within Agisoft Metashape™. Analysis within Agisoft Metashape™ produced measurements from 3D textured mesh models, digital elevation models, and orthomosaics that were comparable to the physical measurements taken at the time of each survey, using an arbitrary latitude, longitude, and elevation classification scheme. The methods described in this study could be applied in future stream restoration and monitoring efforts to complement in-person collection and measurement while limiting the effort and money spent.
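
    The processing chain named above (alignment, mesh, DEM, orthomosaic) can also be scripted. The sketch below is a minimal illustration using the Python module shipped with Agisoft Metashape™ Professional; the image paths and project name are hypothetical placeholders, exact method names and parameters vary between releases, and this is not the study's actual processing script.

        # Illustrative Agisoft Metashape Pro scripting sketch, not the
        # authors' workflow. Assumes the `Metashape` module bundled with
        # Metashape Professional; file paths are hypothetical.
        import Metashape

        doc = Metashape.Document()
        chunk = doc.addChunk()

        # Hand-held images from any of the tested cameras (GoPro, iPhone, DSLR)
        chunk.addPhotos(["IMG_0001.JPG", "IMG_0002.JPG", "IMG_0003.JPG"])

        chunk.matchPhotos()       # feature detection and matching (SfM)
        chunk.alignCameras()      # camera poses and sparse point cloud
        chunk.buildDepthMaps()    # dense matching (MVS)
        chunk.buildPointCloud()   # dense cloud ("buildDenseCloud" in 1.x releases)
        chunk.buildModel()        # mesh geometry
        chunk.buildUV()           # texture coordinates
        chunk.buildTexture()      # 3D textured mesh model
        chunk.buildDem()          # digital elevation model
        chunk.buildOrthomosaic()  # orthomosaic draped on the DEM

        doc.save("stream_survey.psx")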

    A hand-held photogrammetric approach to riparian and stream assessment and monitoring

    Over 37,000 riparian restoration projects have been reported in the United States since 1980, but only 38% of those projects are currently being monitored; 70% of those being monitored reported that the restoration actions were not accomplishing their intended purposes. Because riparian zones serve as the connection between terrestrial and flowing freshwater ecosystems, it is important to identify and quantify their structural and functional roles in natural systems and to monitor the restoration efforts being implemented. A rigorous understanding of the connection between riparian, geomorphic, and hydraulic processes provides a sound ecological foundation when identifying riparian management objectives and evaluating current and future land-use practices. It is known that vegetative cover in riparian zones influences stream ecosystem function by stabilizing stream banks, providing habitat and food for aquatic and terrestrial biota, and improving water quality. However, quantifying these influences demands a high cost in terms of financing, human resources, and time. Two of the biggest weaknesses in stream restoration and monitoring are: 1) subjective estimation and subsequent comparison of changes in channel form, vegetative cover, and in-stream habitat; and 2) the high costs in terms of financing, human resources, and time necessary to make these estimates. Hand-held Structure from Motion Multi-view Stereo (SfM/MVS) photogrammetric technology can solve these problems by offering a resource-efficient approach for producing hyperspatial 3D models of a variety of environments. SfM/MVS photogrammetric technology is the result of cutting-edge advances in computer vision algorithms and discipline-specific research in the geosciences. By expanding the application of hand-held photogrammetric technology to stream assessment and restoration monitoring projects, it should be possible both to increase and to improve data collection in terms of accuracy and efficiency. 3D models produced via hand-held cameras at close range may help to further standardize the way stream attributes are measured and quantified. The traditional approach to camera-based stream mapping uses an Unmanned Aerial Vehicle (UAV) from above, and well-established methodologies and workflows are in place for UAV studies. However, there is a gap in the literature when attempting to implement similar workflows from an oblique or on-the-ground perspective, especially with consumer-grade sensors such as digital cameras and cell phones. Therefore, the primary goal of this research is to develop a standardized approach to stream assessment and monitoring using 3D visualizations created from consumer-grade, hand-held cameras. Ideally, this approach will allow more frequent surveying and produce 3D models capable of detecting changes in bank structure, in-stream habitat, and vegetation within heavily vegetated riparian and stream environments. This presentation will include findings on camera selection, software workflows, and the measurements that can actually be determined from the 3D visualizations of the stream.

    Sex With Robots and Human-Machine Sexualities: Encounters Between Human-Machine Communication and Sexuality Studies

    Sex robots are a controversial topic. Understood as artificial-intelligence-enhanced humanoid robots designed for use in partnered and solo sex, sex robots offer ample opportunities for theorizing from a Human-Machine Communication (HMC) perspective. This comparative literature review conjoins the seemingly disconnected literatures of HMC and sexuality studies (SeS) to explore questions surrounding intimacy, love, desire, sex, and sexuality among humans and machines. In particular, I argue for understanding human-machine sexualities as communicative sexuotechnical-assemblages, extending previous efforts in both HMC and SeS for more-than-human, ecological, and more fluid approaches to humans and machines, as well as to sex and sexuality. This essay continues and expands the critical turn in HMC by engaging in an interdisciplinary exercise with theoretical, design, and use/effect implications in the context of sex robots.

    Archipelagic Human-Machine Communication: Building Bridges amidst Cultivated Ambiguity

    In this commentary, I call for maintaining the archipelagic character of human-machine communication (HMC). Utilizing the metaphor of the archipelago, a chain of connected islands, indicates that HMC entails a variety of islands differing in shape, size, location, and proximity to one another. Rather than aiming for conceptual unity and definitional homogeneity, I call for embracing a cultivated ambiguity around HMC's key concepts: ambiguity in the sense of allowing these concepts to be flexible enough to be explored in different contexts, and cultivated in the sense of demanding resonance across individual studies and theoretical lineages so as to allow for cumulative and collaborative theorizing. My hope is that HMC scholars can continue to build bridges that traverse the paradigmatic, methodological, theoretical, and technological archipelago of HMC.

    Distributional constraints on cognitive architecture

    Mental chronometry is a classical paradigm in cognitive psychology that uses response time and accuracy data in perceptual-motor tasks to elucidate the architecture and mechanisms of the cognitive processes underlying human decisions. The redundant signals paradigm investigates response behavior in experimental tasks where an integration of signals is required for successful performance. The common finding is that responses are speeded in the redundant signals condition compared to single signals conditions. On a mean level, this redundant signals effect can be accounted for by several cognitive architectures, which exhibit considerable model mimicry. Jeff Miller formalized the maximum speed-up explainable by separate activations or race models in the form of a distributional bound – the race model inequality (RMI). Whenever data violates this bound, race models are excluded as a viable account of the redundant signals effect. The common alternative is a coactivation account, in which the signals are integrated at some stage of processing. Coactivation models, however, have mostly been inferred indirectly and rarely explicated. Where coactivation is explicitly modeled, it is assumed to have a decisional locus. However, there are indications in the literature that coactivation might have an at least partial (if not entire) locus in the nondecisional or motor stage. No studies have tried to compare the fit of these coactivation variants to empirical data in order to test different effect-generating loci. Ever since its formulation, the race model inequality has been used as a test to infer the cognitive architecture underlying observers' performance in redundant signals experiments. Subsequent theoretical and empirical analyses of this RMI test revealed several challenges. On the one hand, it is considered a conservative test, as it compares data to the maximum speed-up possible under a race model account. Moreover, simulation studies have shown that the base time component can further reduce the power of the test, as violations are filtered out when this component has a high variance. On the other hand, another simulation study revealed that the common practice of the RMI test can introduce an estimation bias that effectively facilitates violations and increases the type I error of the test. Also, as the RMI bound is usually tested at multiple points of the same data, the inflation of type I errors can become substantial. Due to the lack of overlap in scope and the use of atheoretic, descriptive reaction time models, the degree to which these results can be generalized is limited. State-of-the-art models of decision making provide a means to overcome these limitations and to implement both race and coactivation models in large-scale simulation studies. By applying a state-of-the-art model of decision making (namely, the Ratcliff diffusion model) to the investigation of the redundant signals effect, the present study addresses research questions at different levels. On a conceptual level, it asks at which stage coactivation occurs – decisional, nondecisional, or a combination of both – and to what extent. To that end, two bimodal detection tasks were conducted. As the reaction time data exhibit violations of the RMI at multiple time points, they provide the basis for a comparative fitting analysis of coactivation model variants representing different loci of the effect.
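
    For reference, the bound Miller derived is standard and worth stating explicitly: under any race model, the response time distribution in the redundant condition can never exceed the sum of the single-signal distributions,

        F_{AB}(t) \le F_A(t) + F_B(t) \quad \text{for all } t \ge 0,

    where F_X(t) = P(RT <= t | condition X) is the cumulative response time distribution in condition X. An empirical violation at any t rules out the entire class of race models, which is what makes the test diagnostic despite its conservativeness.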
    On a test-theoretic level, the present study integrates and extends the scope of previous studies within a coherent simulation framework. The effect of experimental and statistical parameters on the performance of the RMI test (in terms of type I errors, power rates, and biases) is analyzed via Monte Carlo simulations. Specifically, the simulations addressed the following questions: (i) what is the power of the RMI test; (ii) is there an estimation bias for coactivated data as well and, if so, in what direction; (iii) what is the effect of a highly varying base time component on the estimation bias, type I errors, and power rates; and (iv) are the results of previous simulation studies (at least qualitatively) replicable when current models of decision making are used for the reaction time generation? For this purpose, the Ratcliff diffusion model was used to implement race models with a controllable amount of correlation and coactivation models with varying integration strength, while independently specifying the base time component. The results of the fitting suggest that, for the two bimodal detection tasks, coactivation has a shared decisional and nondecisional locus. In the focused attention experiment the decisional part prevails, whereas in the divided attention task the motor component dominates the redundant signals effect. The simulation study reaffirmed the conservativeness of the RMI test, as latent coactivation is frequently missed. An estimation bias was found for coactivated data as well; however, both biases become negligible once more than 10 samples per condition are taken to estimate the respective distribution functions. A highly varying base time component reduces both the type I errors and the power of the test, while not affecting the estimation biases. The outcome of the present study has theoretical and practical implications for the investigation of decisions in a multisignal context. Theoretically, it contributes to the locus question of coactivation and offers evidence for a combined decisional and nondecisional coactivation account. On a practical level, the modular simulation approach developed in the present study enables researchers to further investigate the RMI test within a coherent and theoretically grounded framework. It effectively provides a means to optimally set up the RMI test and thus helps to solidify and substantiate its outcomes. On a conceptual level, the present study advocates the application of current formal models of decision making to the mental chronometry paradigm and develops future research questions in the field of the redundant signals paradigm.
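
    To make the logic of such a Monte Carlo check concrete, the sketch below simulates a true race architecture and evaluates the RMI at the percentiles of the redundant condition, as is common practice. The exponential channels and normal base time stand in for the diffusion-model machinery of the study, which is not reproduced here; all parameter values are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 1000  # trials per condition

        def chan(mean_ms):
            """One accumulation channel: exponential finishing time (ms)."""
            return rng.exponential(mean_ms, n)

        def base():
            """Nondecisional (base/motor) component."""
            return rng.normal(300.0, 50.0, n)

        rt_a = chan(200) + base()                           # single signal A
        rt_b = chan(250) + base()                           # single signal B
        rt_ab = np.minimum(chan(200), chan(250)) + base()   # redundant: race

        def ecdf(sample, t):
            """Empirical CDF of `sample` evaluated at times `t`."""
            return np.searchsorted(np.sort(sample), t, side="right") / len(sample)

        # Test the bound at several quantiles of the redundant condition.
        t = np.percentile(rt_ab, np.arange(5, 100, 10))
        slack = ecdf(rt_a, t) + ecdf(rt_b, t) - ecdf(rt_ab, t)
        print(np.round(slack, 3))  # negative values would violate the RMI

    Because the data here are generated by a genuine race model, the printed slack should stay non-negative up to sampling noise; repeating the experiment many times with a coactivated generator instead would yield the power and type I error rates the abstract describes.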

    How to get into flow with IT: measuring the paradoxes in digital knowledge work

    Digitized knowledge workers are exposed to various technology-, individual- and work-related factors resulting in multiple paradoxes that may promote or hinder their capacity to work. This paper elaborates on how emerging paradoxes of IT usage impact the flow experience for daily planning tasks of knowledge workers. To study impacts on flow beyond the effective use of IT, we conducted a survey study with 336 participants in a mixed-method approach combining PLS-SEM and fsQCA. Our results show that the digital working method could positively influence the flow experience overall. A full mediation on flow was confirmed for perceived behavioral control (representing the paradox of control and chaos) and representational fidelity (representing clarity and ambiguity). Our fsQCA results support the conclusion that increasing IT penetration alone is insufficient to experience flow at work. It depends on how knowledge workers interact with the IT in their specific task environment, balancing the dialectical tensions at work, with some differences between genders and within specific industries. We discuss the study's implications for research and practice.
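
    As a pointer for readers unfamiliar with fsQCA, its core sufficiency metrics are simple set-theoretic ratios. The sketch below computes Ragin's consistency and coverage on synthetic fuzzy memberships; the variable names and data are hypothetical and do not reproduce the paper's calibration or model.

        import numpy as np

        def consistency(x, y):
            """Consistency of 'X is sufficient for Y' on fuzzy memberships
            in [0, 1]: sum(min(x, y)) / sum(x) (Ragin's measure)."""
            return np.minimum(x, y).sum() / x.sum()

        def coverage(x, y):
            """Share of the outcome Y accounted for by condition X."""
            return np.minimum(x, y).sum() / y.sum()

        rng = np.random.default_rng(1)
        # Hypothetical calibrated memberships for 336 respondents:
        # x = "effective IT use", y = "flow experience".
        x = rng.uniform(0.0, 1.0, 336)
        y = np.clip(x + rng.normal(0.0, 0.15, 336), 0.0, 1.0)
        print(f"consistency = {consistency(x, y):.2f}, "
              f"coverage = {coverage(x, y):.2f}")

    High consistency with low coverage would mean the condition is sufficient but rare, which is why fsQCA reports both figures per configuration rather than a single effect size.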

    Becoming Human? Ableism and Control in Detroit: Become Human and the Implications for Human-Machine Communication

    In human-machine communication (HMC), machines are communicative subjects in the creation of meaning. The Computers are Social Actors and constructivist approaches to HMC postulate that humans communicate with machines as if they were people. From this perspective, communication is understood as heavily scripted, with humans mindlessly applying human-to-human scripts in HMC. We argue that a critical approach to communication scripts reveals how humans may rely on ableism as a means of sense-making in their relationships with machines. Using the choose-your-own-adventure game Detroit: Become Human as a case study, we demonstrate (a) how ableist communication scripts render machines as both less-than-human and superhuman and (b) how such scripts manifest in control and cyborg anxiety. We conclude with theoretical and design implications for rescripting ableist communication scripts.

    A methodology for workflow modeling: From business process modeling towards sound workflow specification

    The use of workflow management systems (WFMS) in companies or public administrations with simply structured, automatable processes offers high potential for optimizing business processes. To coordinate business processes at run time, WFMS require workflow specifications that describe the automatable portion of the business processes in a machine-readable form. In practice, workflow specifications have so far often been created independently of existing business process models, and there is no methodically founded process model that supports both the modeling of business processes and the reuse of the resulting models for the workflow specification [GHS95, AaHe02]. This thesis proposes an end-to-end process model for the specification of workflows in the form of Petri nets. The five-step process model focuses on the modeling of control-flow aspects and supports the following steps: 1. modeling of the business processes; 2. formalization by Petri nets; 3. correctness testing and error correction; 4. selection and integration of an execution strategy; 5. control refinement. The result is a process model with a formally founded, operational semantics that is, moreover, sound [Aal98]. Such a model meets the requirements of a workflow specification whose use in a WFMS guarantees reliable execution of the business processes at run time. The first step, modeling of the business processes, supports the use of semiformal modeling techniques, which give the modeler latitude in describing the processes. In the next step, the resulting model is internally formalized; the formalization is based on a mapping to Petri nets, in which ambiguities are not eliminated but made explicit. In the third step, the model is checked for correctness. New, pragmatic criteria are introduced for this purpose, and precise error messages are returned, enabling iterative improvement of the business process models. In steps four and five, the model is mapped to a workflow specification, drawing on the Petri net formalization already created: the Petri nets are first extended so as to fix an execution strategy, whose integration removes all remaining ambiguities, and finally the activities are refined. The proposed process model incorporates techniques proven in practice and provides appropriate criteria for error correction. The entire process model is methodically grounded and draws on results from Petri net theory, game theory, and controller synthesis.

    Supporting business processes with the help of workflow management systems is a necessary prerequisite for many companies to stay competitive. An important task is the specification of workflows, i.e., those parts of a business process that can be supported by a computer system. A workflow specification mainly refines a business process description, incorporating details of the implementation. Despite the close relation between the two process descriptions, there is still no satisfactory link between their modeling. This is mainly due to their assignment to different people (IT vs. domain experts) with different modeling cultures.
    The thesis provides a methodically well-founded approach for the specification of functional workflow requirements. It supports domain experts in their modeling of business processes in a semiformal manner and guides them stepwise towards a formal workflow specification, helping to bridge the gap between business process modeling and workflow specification. The proposed approach acknowledges the need to describe business processes at different levels of abstraction and combines the advantages of different modeling languages that have proved to fit the respective requirements. A semiformal modeling language is proposed for use by the domain expert; a prominent example, widely accepted in practice, is Event-driven Process Chains (EPCs). For the definition of the workflow specification, we use a particular type of Petri net. The strength of Petri nets is their formally founded, operational semantics, which enables their use as an input format for workflow management systems. The key concept of the proposed process model is the use of pragmatic correctness criteria, namely relaxed soundness and robustness. They fit the correctness requirements at this first level of abstraction and make it possible to provide feedback to the modeler. To support the execution of the business process at run time, the resulting process description must be refined to fit the requirements of a workflow specification. The proposed process model supports this refinement step by applying methods from controller synthesis: a sound WF-system is automatically generated on the basis of a relaxed sound and robust process description. Only within this step do performance issues become relevant; the information incorporated relates to a certain scheduling strategy. The late determination of performance issues is especially desirable, as the corresponding information (the occurrence probability of a certain failure, the costs of failure compensation, or priorities) often only becomes available at run time. Its incorporation towards the end of the proposed process model extends the possibility to reuse modeling results under changing priorities. The resulting process description is sound; using it as a basis for execution support at run time, reliable processing can be guaranteed.
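
    Since it is the operational semantics of Petri nets that makes them executable as workflow specifications, a minimal token-game sketch may help to fix ideas. The net below is a toy approval workflow, not one of the thesis's models, and no soundness checking is performed; names are illustrative.

        from collections import Counter

        class PetriNet:
            """Minimal place/transition net with the standard firing rule:
            a transition is enabled iff each of its input places holds a
            token; firing consumes input tokens and produces output tokens."""
            def __init__(self, transitions, initial_marking):
                self.transitions = transitions  # name -> (inputs, outputs)
                self.marking = Counter(initial_marking)

            def enabled(self, name):
                inputs, _ = self.transitions[name]
                return all(self.marking[p] >= 1 for p in inputs)

            def fire(self, name):
                if not self.enabled(name):
                    raise ValueError(f"transition {name!r} is not enabled")
                inputs, outputs = self.transitions[name]
                self.marking.subtract(inputs)
                self.marking.update(outputs)

        # Toy workflow net: i -> check -> (approve | reject) -> o
        net = PetriNet(
            transitions={
                "check":   (["i"], ["ready"]),
                "approve": (["ready"], ["o"]),
                "reject":  (["ready"], ["o"]),
            },
            initial_marking=["i"],
        )
        net.fire("check")
        net.fire("approve")
        print(+net.marking)  # Counter({'o': 1}): one token on the final place

    In van der Aalst's terms, a WF-net is sound when every run from the initial place can always still complete, completes with exactly one token on the final place, and has no dead transitions; the thesis's step 3 tests relaxed variants of this property, and steps 4-5 restore full soundness via controller synthesis.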