139 research outputs found
Effects of Water Parameters on Container Mosquito (Diptera: Culicidae) Oviposition and Performance
Water body parameters have a considerable effect on the communities that develop within them. In small container habitats such as tires, the depth, surface area, and volume affect the development and success of mosquito communities. This study investigated the choices of adult female Aedes albopictus and Culex quinquefasciatus mosquitoes between different depths and surface areas. In addition, larval performance was determined under differing depths and larval densities. Oviposition studies showed that Ae. albopictus preferred deeper habitats (χ2 = 14.2902, p = 0.0139) but showed no significant preference among surface areas (χ2 = 7.2321, p = 0.0649), although there was a trend suggesting a possible preference for larger surface areas. Conversely, Culex quinquefasciatus was sensitive to surface area (χ2 = 11.1419, p = 0.0110) but not depth (χ2 = 9.9828, p = 0.0757). Larval density affected population growth, represented by λ', for Aedes albopictus (F3,15 = 19.3786, p < 0.0001), with higher larval densities depressing λ' values. Culex quinquefasciatus showed a significant interaction of larval density and depth (F9,15 = 3.2870, p = 0.0204) between the low-λ' 10:10 and high-λ' 0:5 densities. Within the 10:10 density, λ' differed by depth, with higher growth at the 7 cm depth than at the 14 cm depth. Additionally, the 14 cm depth produced heavier female Ae. albopictus than the 7 cm depth (F3,15 = 3.3160, p = 0.0488). Overall, Ae. albopictus preferred deeper habitats while ovipositing; although this does not appear to confer greater population growth, it does result in larger female mosquitoes. In addition, Ae. albopictus depressed the population growth of Cx. quinquefasciatus at high larval densities.
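As a minimal sketch of the kind of analysis reported above (not the authors' code), the oviposition preference comparisons can be run as a chi-square goodness-of-fit test against an even distribution of eggs across treatments. The egg counts and depth levels below are hypothetical illustration values.

```python
# Hedged sketch: chi-square goodness-of-fit test of oviposition preference,
# analogous to the depth-preference comparisons reported above.
from scipy.stats import chisquare

# Hypothetical egg counts collected from containers at four water depths (cm)
depths = [3.5, 7, 14, 28]
egg_counts = [52, 61, 88, 99]

# Null hypothesis: females distribute eggs evenly across the depth treatments
stat, p_value = chisquare(egg_counts)
print(f"chi-square = {stat:.4f}, p = {p_value:.4f}")
```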
Selective Sharing is Caring: Toward the Design of a Collaborative Tool to Facilitate Team Sharing
Temporary teams are commonly limited by their members' lack of experience with new teammates, leading to poor understanding and coordination. Collaborative tools can promote teammate-related team mental models (e.g., teammate attitudes, tendencies, and preferences) by sharing personal information between teammates during team formation. The current study utilized 89 participants engaged in real-world temporary teams to better understand user perceptions of sharing personal information. Qualitative and quantitative results revealed several findings: 1) users perceived personality and conflict management style assessments to be accurate and sharing these assessments to be helpful, but had mixed perceptions regarding the appropriateness of sharing; 2) users of the collaborative tool had higher perceptions of sharing in terms of helpfulness and appropriateness; and 3) user feedback highlighted the need for tools to selectively share less data with more context in order to improve appropriateness and helpfulness while reducing reading time.
Understanding Human-AI Cooperation Through Game-Theory and Reinforcement Learning Models
For years, researchers have demonstrated the viability and applicability of game theory principles to the field of artificial intelligence. Furthermore, game theory has been shown to be a useful tool for researching human-machine interaction, specifically cooperation, by creating an environment where cooperation can initially form before reaching a continuous and stable presence in a human-machine system. Additionally, recent developments in reinforcement learning have led to artificial agents that cooperate more effectively with humans, especially in more complex environments. This research conducts an empirical study of how different modern reinforcement learning algorithms and game theory scenarios create different levels of cooperation in human-machine teams. Three reinforcement learning algorithms (Vanilla Policy Gradient, Proximal Policy Optimization, and Deep Q-Network) and two game theory scenarios (Hawk-Dove and Prisoner's Dilemma) were examined in a large-scale experiment. The results indicated that different reinforcement learning models interact differently with humans, with the Deep Q-Network engendering higher cooperation levels. The Hawk-Dove scenario elicited significantly higher levels of cooperation in the human-artificial intelligence system. A multiple regression using these two independent variables also significantly predicted cooperation in the human-artificial intelligence systems. The results highlight the importance of social and task framing in human-artificial intelligence systems and the importance of choosing appropriate reinforcement learning models.
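For context, the two game theory scenarios named above are typically defined by payoff matrices over a joint cooperate/defect action. The sketch below (an assumption for illustration, not the study's implementation) uses conventional textbook payoff values to score a single round of play between a human and an artificial agent.

```python
# Hedged sketch: standard payoff matrices for Prisoner's Dilemma and Hawk-Dove,
# used to score one joint action. Payoff values are conventional, not the study's.
from typing import Dict, Tuple

Action = str  # "C" = cooperate (Dove), "D" = defect (Hawk)

PRISONERS_DILEMMA: Dict[Tuple[Action, Action], Tuple[int, int]] = {
    ("C", "C"): (3, 3),    # mutual cooperation
    ("C", "D"): (0, 5),    # sucker's payoff vs. temptation
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),    # mutual defection
}

HAWK_DOVE: Dict[Tuple[Action, Action], Tuple[int, int]] = {
    ("C", "C"): (2, 2),    # Dove meets Dove: share the resource
    ("C", "D"): (0, 4),    # Dove retreats from Hawk
    ("D", "C"): (4, 0),
    ("D", "D"): (-1, -1),  # Hawk vs. Hawk: costly fight
}

def payoff(game: Dict[Tuple[Action, Action], Tuple[int, int]],
           human: Action, agent: Action) -> Tuple[int, int]:
    """Return (human_reward, agent_reward) for one joint action."""
    return game[(human, agent)]

print(payoff(PRISONERS_DILEMMA, "C", "D"))  # (0, 5)
print(payoff(HAWK_DOVE, "D", "D"))          # (-1, -1)
```

In an experiment like the one described, a reinforcement learning agent would receive its entry from such a matrix as the per-round reward signal, which is why the choice of scenario can shift how readily cooperation emerges.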
The Effect of AI Teammate Ethicality on Trust Outcomes and Individual Performance in Human-AI Teams
This study improves the understanding of trust in human-AI teams by investigating the relationship of AI teammate ethicality to individual outcomes of trust (i.e., monitoring, confidence, fear) in AI teammates and human teammates over time. Specifically, a synthetic task environment was built to support a three-person team with two human teammates and one AI teammate (simulated by a confederate). The AI teammate performed either an ethical or unethical action in three missions, and measures of trust in the human and AI teammates were taken after each mission. Results revealed that unethical actions by the AI teammate had a significant effect on nearly all of the measured outcomes of trust and that levels of trust were dynamic over time for both the AI and human teammates, with the AI teammate recovering trust to Mission 1 levels by Mission 3. AI ethicality was mostly unrelated to participants' trust in their fellow human teammate but did decrease perceptions of fear, paranoia, and skepticism toward them. Trust in the human and AI teammates was not significantly related to individual performance outcomes; both findings diverge from previous trust research in human-AI teams utilizing competency-based trust violations.
The astrobiology primer: An outline of general knowledge - Version 1, 2006
Peer reviewed
