2 research outputs found

    Dijet azimuthal correlations and conditional yields in pp and p+Pb collisions at √s_NN = 5.02 TeV with the ATLAS detector

    This paper presents a measurement of forward-forward and forward-central dijet azimuthal angular correlations and conditional yields in proton-proton (pp) and proton-lead (p+Pb) collisions as a probe of the nuclear gluon density in regions where the fraction of the average momentum per nucleon carried by the parton entering the hard scattering is low. In these regions, gluon saturation can modify the rapidly increasing gluon parton distribution function. The analysis utilizes 25 pb^{-1} of pp data and 360 μb^{-1} of p+Pb data, both at √s_NN = 5.02 TeV, collected in 2015 and 2016, respectively, with the ATLAS detector at the Large Hadron Collider. The measurement is performed in the center-of-mass frame of the nucleon-nucleon system in the rapidity range between -4.0 and 4.0, using the two highest transverse-momentum jets in each event, with the highest transverse-momentum jet restricted to the forward rapidity range. No significant broadening of azimuthal angular correlations is observed for forward-forward or forward-central dijets in p+Pb compared to pp collisions. For forward-forward jet pairs in the proton-going direction, the ratio of conditional yields in p+Pb collisions to those in pp collisions is suppressed by approximately 20%, with no significant dependence on the transverse momentum of the dijet system. No modification of conditional yields is observed for forward-central dijets.
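    For orientation, below is a minimal sketch of the two observables the abstract refers to, assuming each event has been reduced to a pT-ordered list of jet (pT, y, φ) tuples: the azimuthal separation Δφ between the two leading jets (whose distribution would broaden under saturation effects) and the conditional yield, the per-leading-jet probability of finding a qualifying subleading jet. The kinematic thresholds and event format here are illustrative placeholders, not the selections used in the ATLAS analysis.

```python
import numpy as np

def delta_phi(phi1, phi2):
    """Azimuthal separation |phi1 - phi2| folded into [0, pi]."""
    dphi = np.abs(np.asarray(phi1) - np.asarray(phi2))
    return np.where(dphi > np.pi, 2.0 * np.pi - dphi, dphi)

def conditional_yield(events, lead_min_pt=30.0, sub_min_pt=30.0,
                      lead_y_range=(2.7, 4.0)):
    """N(events with a qualifying dijet pair) / N(events with a qualifying
    forward leading jet). Thresholds are hypothetical placeholders."""
    n_lead = n_pair = 0
    for jets in events:            # jets: pT-ordered list of (pt, y, phi)
        if not jets:
            continue
        pt0, y0, _ = jets[0]
        if pt0 < lead_min_pt or not (lead_y_range[0] <= y0 <= lead_y_range[1]):
            continue
        n_lead += 1                # event has a qualifying leading forward jet
        if len(jets) > 1 and jets[1][0] >= sub_min_pt:
            n_pair += 1            # ...and a qualifying subleading jet
    return n_pair / n_lead if n_lead else float("nan")

# The nuclear modification reported above is then the ratio
# rho = conditional_yield(ppb_events) / conditional_yield(pp_events),
# found to be ~0.8 for forward-forward pairs in the proton-going direction.
```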

    Towards Human-Level Semantics Understanding of Human-Centered Object Manipulation Tasks for HRI: Reasoning About Effect, Ability, Effort and Perspective Taking

    In its lifetime, a robot should be able to autonomously understand the semantics of different tasks in order to perform them effectively in different situations. In this context, it is important to distinguish the meaning of a task (in terms of its desired effect) from the means of achieving it. Our focus is on tasks in which one agent is required to perform a task for another agent, such as give, show, hide, and make-accessible. In this paper, we identify that high-level human-centered combined reasoning, based on perspective taking and analyses of effort and ability, is the key to understanding the semantics of such tasks. By combining these aspects, the robot infers hierarchies of facts, which serve to analyze the effect of a task. We adapt the explanation-based learning approach, enabling task understanding from the very first demonstration and continuous refinement with new demonstrations. We argue that such symbolic-level understanding of a task, which is not bound to the trajectory, kinematic structure, or shape of the robot, facilitates generalization to novel situations and eases the transfer of acquired knowledge among heterogeneous robots. Further, knowledge of tasks at such a human-understandable level of abstraction will enrich natural human–robot interaction.
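    As a rough illustration of the effect-centric representation this abstract describes, here is a minimal Python sketch; all task and predicate names are hypothetical stand-ins, not the paper's actual formalism. The meaning of a task is encoded as the set of symbolic facts that must hold for the target agent once the task succeeds, independent of the motion used to achieve them.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Fact:
    """A symbolic predicate about the world state, e.g. visible(human, cube)."""
    predicate: str
    args: tuple

def desired_effect(task: str, target_agent: str, obj: str) -> set:
    """Illustrative mapping from a task name to the symbolic facts that must
    hold for the target agent after the task succeeds. The predicates stand
    in for the perspective-taking (visible) and ability/effort (reachable)
    analyses described in the paper."""
    effects = {
        "show":            [Fact("visible", (target_agent, obj))],
        "hide":            [Fact("not_visible", (target_agent, obj))],
        "give":            [Fact("in_hand", (target_agent, obj))],
        "make_accessible": [Fact("reachable", (target_agent, obj))],
    }
    return set(effects.get(task, []))

# A "show" succeeds when the object becomes visible from the human's
# perspective, regardless of the trajectory or robot embodiment used.
print(desired_effect("show", "human", "cube"))
```

    Because the representation carries no trajectory, kinematic, or shape information, the same effect set can be checked or pursued by heterogeneous robots, which is the generalization and transfer argument made above.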