629 research outputs found
Fostering Quality of Reflection in First-Year Honours Students in a Bachelor Engineering Program Technology, Liberal Arts & Science (ATLAS)
This study focused on fostering the quality of reflection displayed in semester self-evaluation reports (SERs) of first-year honours students in the bachelor engineering program Technology, Liberal Arts and Science (ATLAS). In a first pilot study, students' conceptions of reflection and educational needs regarding reflection were explored. Based on the results and relevant theory, an intervention was designed. Twenty-nine participants, none previously exposed to academic training on reflection, received a Reflection Guide on how to write reflections in their SERs. Two online interactive lectures were provided as support. Quality of reflection in the SERs was assessed using a standardized rubric, and quality scores in the intervention group were compared with scores of the student cohort of the previous academic year (n = 33). Results showed that the intervention group reflected on a higher level than the comparison group. Perceived usefulness and value of the proposed reflection method were measured in both students and assessors. In general, both students and assessors were positive about the reflection method
Unraveling the influence of domain knowledge during simulation-based inquiry learning
This study investigated whether the mere knowledge of the meaning of variables can facilitate inquiry learning processes and outcomes. Fifty-seven college freshmen were randomly allocated to one of three inquiry tasks. The concrete task had familiar variables from which hypotheses about their underlying relations could be inferred. The intermediate task used familiar variables that did not invoke underlying relations, whereas the abstract task contained unfamiliar variables that did not allow for inference of hypotheses about relations. Results showed that concrete participants performed more successfully and efficiently than intermediate participants, who in turn were as successful and efficient as abstract participants. From these findings it was concluded that students learning by inquiry benefit little from knowledge of the meaning of variables per se. Some additional understanding of the way these variables are interrelated seems required to enhance inquiry learning processes and outcomes
Trust in everyday life
Although trust plays a pivotal role in many aspects of life, very little is known about the manifestation of trust and distrust in everyday life. In this work, we integrated several prior approaches to trust and investigated the prevalence and key determinants of trust (vs. distrust) in people's natural environments, using preregistered experience-sampling methodology. Across more than 4,500 social interactions from a heterogeneous sample of 427 participants, results showed high average levels of trust, but also considerable variability in trust across contexts. This variability was attributable to aspects of trustee perception, social distance, as well as three key dimensions of situational interdependence: conflict of interests, information (un)certainty, and power imbalance. At the dispositional level, average everyday trust was shaped by general trust, moral identity, and zero-sum beliefs. The social scope of most trust-related traits, however, was moderated by social distance: Whereas moral identity buffered against distrusting distant targets, high general distrust and low social value orientation amplified trust differences between close vs. distant others. Furthermore, a laboratory-based trust game predicted everyday trust only with regard to more distant but not close interaction partners. Finally, everyday trust was linked to self-disclosure and to cooperation, particularly in situations of high conflict between interaction partners' interests. We conclude that trust can be conceptualized as a relational hub that interconnects the social perception of the trustee, the relational closeness between trustor and trustee, key structural features of situational interdependence, and behavioral response options such as self-disclosure
Semimechanistic Clearance Models of Oncology Biotherapeutics and Impact of Study Design: Cetuximab as a Case Study
This study aimed to explore the currently competing and new semimechanistic clearance models for monoclonal antibodies and the impact of clearance model misspecification on exposure metrics under different study designs exemplified for cetuximab. Six clearance models were investigated under four different study designs (sampling density and single/multiple-dose levels) using a rich data set from two cetuximab clinical trials (226 patients with metastatic colorectal cancer) and using the nonlinear mixed-effects modeling approach. A two-compartment model with parallel Michaelis-Menten and time-decreasing linear clearance adequately described the data, the latter being related to post-treatment response. With respect to bias in exposure metrics, the simplified time-varying linear clearance (CL) model was the best alternative. Time-variance of the linear CL component should be considered for biotherapeutics if response impacts pharmacokinetics. Rich sampling at steady-state was crucial for unbiased estimation of Michaelis-Menten elimination in case of the reference (parallel Michaelis-Menten and time-varying linear CL) model
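As a hedged illustration of the model class described above (not the estimated cetuximab model; the symbols In(t), Q, V_c, V_p, CL_0, CL_ss, k_des, V_max and K_m are assumed names), a two-compartment model with parallel Michaelis-Menten and time-decreasing linear clearance can be written as

\[
\frac{dA_c}{dt} = \mathrm{In}(t) - Q\left(\frac{A_c}{V_c} - \frac{A_p}{V_p}\right) - \left(\mathrm{CL}(t) + \frac{V_{\max}}{K_m + A_c/V_c}\right)\frac{A_c}{V_c},
\qquad
\frac{dA_p}{dt} = Q\left(\frac{A_c}{V_c} - \frac{A_p}{V_p}\right),
\]

with the linear component declining from a baseline toward a steady value, for instance \(\mathrm{CL}(t) = \mathrm{CL}_{ss} + (\mathrm{CL}_0 - \mathrm{CL}_{ss})\,e^{-k_{\mathrm{des}} t}\); an exponential decline is one common way to parameterize the time-varying linear CL referred to in the abstract.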
Efficiency and stability in Euclidean network design
Network Design problems typically ask for a minimum cost sub-network from a given host network. This classical point of view assumes a central authority enforcing the optimum solution. But how should networks be designed to cope with selfish agents that own parts of the network? In this setting, minimum cost networks may be very unstable in that agents will deviate from a proposed solution if this decreases their individual cost. Hence, designed networks should be both efficient in terms of total cost and stable in terms of the agents' willingness to accept the network.
We study this novel type of Network Design problem by investigating the creation of (β,γ)-networks, which are in β-approximate Nash equilibrium and have a total cost of at most γ times the optimal cost, for the recently proposed Euclidean Generalized Network Creation Game by Bilò et al. [SPAA 2019]. There, n agents corresponding to points in Euclidean space create costly edges among themselves to optimize their centrality in the created network. Our main result is a simple O(n^2)-time algorithm that computes a (β,β)-network with low β for any given set of points. Moreover, on integer grid point sets or random point sets our algorithm achieves a low constant β. Besides these results for the Euclidean model, we discuss a generalization of our algorithm to instances with arbitrary, even non-metric, edge lengths. Moreover, in contrast to these algorithmic results, we show that no such positive results are possible when focusing on either optimal networks, i.e., (β,1)-networks, or perfectly stable networks, i.e., (1,γ)-networks, as in both cases NP-hard problems arise, there exist instances with very unstable optimal networks, and there are instances for perfectly stable networks with high total cost. Along the way, we significantly improve several results from Bilò et al. and we asymptotically resolve their conjecture about the Price of Anarchy by providing a tight bound
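Read directly from the definition above (the cost notation is an assumption, not the paper's exact formalization), a network G on the agent set is a (β,γ)-network if

\[
\mathrm{cost}_u(G) \le \beta \cdot \mathrm{cost}_u\bigl(G_{u \to s'}\bigr) \quad \text{for every agent } u \text{ and unilateral deviation } s',
\qquad
\mathrm{cost}(G) \le \gamma \cdot \mathrm{cost}(G^{*}),
\]

where G_{u→s'} is the network after u alone changes its bought edges to s' and G^{*} is a minimum-total-cost network; β = γ = 1 would correspond to a perfectly stable optimal network.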
Towards Finding Optimal Solutions For Constrained Warehouse Layouts Using Answer Set Programming
A minimum requirement of feasible order picking layouts is the accessibility of every storage location. Obeying only this requirement typically leads to a vast number of different layouts that are theoretically possible. Being able to generate all of these layouts automatically opens the door for new layouts and provides valuable training data for reinforcement learning, e.g., for operating strategies of automated guided vehicles. We propose an approach using answer set programming that is able to generate and select optimal order picking layouts with regard to a defined objective function for given warehouse structures in a short amount of time. This constitutes a significant step towards reliable artificial intelligence. In a first step, all feasible layout solutions are generated; in a second step, an objective function is applied to obtain an optimal layout with regard to a defined layout problem. In brownfield projects this can lead to non-traditional layouts that are hard to find manually. The implementation can be customized for different use cases in the field of order picking layout generation, while the core logic stays the same
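As a minimal sketch of the generate-and-optimize idea (not the paper's encoding; the grid size, depot position, predicate names and objective below are assumptions for illustration), an answer set program can generate every layout in which all storage locations are accessible and then select one that maximizes storage capacity, driven here via the clingo Python API:

import clingo

# Illustrative ASP encoding: generate aisle/storage assignments on a small grid,
# require every storage location to border an aisle cell reachable from the depot,
# and maximize the number of storage locations. Names and sizes are assumptions.
ENCODING = """
cell(1..4, 1..4).
depot(1, 1).

% Generate: each cell is either an aisle cell or a storage location.
{ aisle(X, Y) } :- cell(X, Y).
aisle(X, Y) :- depot(X, Y).
storage(X, Y) :- cell(X, Y), not aisle(X, Y).

% 4-neighbourhood adjacency (symmetric).
adj(X1, Y, X2, Y) :- cell(X1, Y), cell(X2, Y), X2 = X1 + 1.
adj(X, Y1, X, Y2) :- cell(X, Y1), cell(X, Y2), Y2 = Y1 + 1.
adj(X1, Y1, X2, Y2) :- adj(X2, Y2, X1, Y1).

% Aisle cells reachable from the depot through aisle cells.
reach(X, Y) :- depot(X, Y).
reach(X2, Y2) :- reach(X1, Y1), adj(X1, Y1, X2, Y2), aisle(X2, Y2).

% Feasibility: every storage location must border a reachable aisle cell.
served(X, Y) :- storage(X, Y), adj(X, Y, X2, Y2), reach(X2, Y2).
:- storage(X, Y), not served(X, Y).

% Select: maximize the number of storage locations.
#maximize { 1, X, Y : storage(X, Y) }.
#show aisle/2. #show storage/2.
"""

ctl = clingo.Control()
ctl.add("base", [], ENCODING)
ctl.ground([("base", [])])
best = None
with ctl.solve(yield_=True) as handle:
    for model in handle:  # clingo reports successively better layouts during optimization
        best = (model.symbols(shown=True), model.cost)
if best:
    print("optimal layout:", best[0], "objective:", best[1])

Swapping the objective, for instance to minimize travel distance to the depot, only changes the #maximize/#minimize directive, which mirrors the separation of layout generation from the layout-selection objective described above.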
- …