
    Neuroprediction and A.I. in Forensic Psychiatry and Criminal Justice: A Neurolaw Perspective

    Advances in the use of neuroimaging in combination with A.I., and specifically the use of machine learning techniques, have led to the development of brain-reading technologies which, in the near future, could have many applications, such as lie detection, neuromarketing or brain-computer interfaces. Some of these could, in principle, also be used in forensic psychiatry. The application of these methods in forensic psychiatry could, for instance, help to increase the accuracy of risk assessment and to identify possible interventions. This technique could be referred to as ‘A.I. neuroprediction’, and involves identifying potential neurocognitive markers for the prediction of recidivism. However, the future implications of this technique and the role of neuroscience and A.I. in violence risk assessment remain to be established. In this paper, we review and analyze the literature concerning the use of brain-reading A.I. for neuroprediction of violence and rearrest to identify possibilities and challenges in the future use of these techniques in the fields of forensic psychiatry and criminal justice, considering legal implications and ethical issues. The analysis suggests that additional research is required on A.I. neuroprediction techniques, and there is still a great need to understand how they can be implemented in risk assessment in the field of forensic psychiatry. Besides the alluring potential of A.I. neuroprediction, we argue that its use in criminal justice and forensic psychiatry should be subjected to thorough harms/benefits analyses not only once these technologies are fully available, but also while they are being researched and developed.

    Evolution of central dark matter of early-type galaxies up to z ~ 0.8

    We investigate the evolution of dark and luminous matter in the central regions of early-type galaxies (ETGs) up to z ~ 0.8. We use a spectroscopically selected sample of 154 cluster and field galaxies from the EDisCS survey, covering a wide range in redshifts (z ~ 0.4-0.8), stellar masses (\log M_{\star}/M_{\odot} ~ 10.5-11.5 dex) and velocity dispersions (\sigma_{\star} ~ 100-300 km/s). We obtain central dark matter (DM) fractions by determining the dynamical masses from Jeans modelling of galaxy aperture velocity dispersions and the M_{\star} from galaxy colours, and compare the results with local samples. We discuss how the correlations of central DM with galaxy size (i.e. the effective radius, R_{\rm e}), M_{\star} and \sigma_{\star} evolve as a function of redshift, finding clear indications that local galaxies are, on average, more DM dominated than their counterparts at larger redshift. This DM fraction evolution with z can be only partially interpreted as a consequence of the size-redshift evolution. We discuss our results within galaxy formation scenarios, and conclude that the growth in size and DM content which we measure within the last 7 Gyr is incompatible with passive evolution, while it is well reproduced in the multiple minor merger scenario. We also discuss the impact of the IMF on our DM inferences and argue that this can be non-universal with the lookback time. In particular, we find the Salpeter IMF can be better accommodated by low-redshift systems, while producing stellar masses at high z which are unphysically larger than the estimated dynamical masses (particularly for lower-\sigma_{\star} systems). Comment: 14 pages, 6 figures, 3 tables, MNRAS in press
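    The virial shortcut behind such central dark-matter fraction estimates can be sketched as follows. This is a minimal illustration with made-up numbers: the paper uses full Jeans modelling of aperture velocity dispersions, not this simple estimator, and the calibration coefficient K below is an assumption.

    ```python
    # Sketch: central DM fraction from a simple virial mass estimator,
    # M_dyn = K * sigma^2 * R_e / G (K ~ 5 is a commonly quoted calibration;
    # the paper itself performs full Jeans modelling).
    G = 4.301e-6  # gravitational constant in kpc * (km/s)^2 / M_sun

    def dm_fraction(sigma_kms, r_e_kpc, m_star_msun, k=5.0):
        """Central DM fraction f_DM = 1 - M_star / M_dyn."""
        m_dyn = k * sigma_kms**2 * r_e_kpc / G
        return 1.0 - m_star_msun / m_dyn

    # Hypothetical ETG: sigma = 200 km/s, R_e = 5 kpc, log M_star/M_sun = 11
    print(round(dm_fraction(200.0, 5.0, 1e11), 2))
    ```

    A more DM-dominated galaxy at fixed stellar mass shows up directly as a larger f_DM under this estimator, which is the quantity whose redshift evolution the abstract describes.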

    Finding Strong Gravitational Lenses in the Kilo Degree Survey with Convolutional Neural Networks

    The volume of data that will be produced by new-generation surveys requires automatic classification methods to select and analyze sources. Indeed, this is the case for the search for strong gravitational lenses, where the population of the detectable lensed sources is only a very small fraction of the full source population. We apply for the first time a morphological classification method based on a Convolutional Neural Network (CNN) for recognizing strong gravitational lenses in 255 square degrees of the Kilo Degree Survey (KiDS), one of the current-generation optical wide surveys. The CNN is currently optimized to recognize lenses with Einstein radii \gtrsim 1.4 arcsec, about twice the r-band seeing in KiDS. In a sample of 21789 colour-magnitude selected Luminous Red Galaxies (LRG), of which three are known lenses, the CNN retrieves 761 strong-lens candidates and correctly classifies two out of three of the known lenses. The misclassified lens has an Einstein radius below the range on which the algorithm is trained. We down-select the most reliable 56 candidates by a joint visual inspection. This final sample is presented and discussed. A conservative estimate based on our results shows that with our proposed method it should be possible to find \sim 100 massive LRG-galaxy lenses at z \lesssim 0.4 in KiDS when completed. In the most optimistic scenario this number can grow considerably (to maximally \sim 2400 lenses), when widening the colour-magnitude selection and training the CNN to recognize smaller image-separation lens systems. Comment: 24 pages, 17 figures. Published in MNRAS
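    A schematic of CNN-based candidate scoring, reduced to its bare ingredients: one convolution, a ReLU, a global pool, and a sigmoid score in [0, 1]. This is not the authors' architecture; the layer shapes, weights, and input size below are arbitrary stand-ins.

    ```python
    import numpy as np

    # Toy forward pass of a one-layer "CNN" lens scorer (illustrative only).
    rng = np.random.default_rng(0)

    def conv2d_valid(img, kernel):
        """Valid (no padding) 2D cross-correlation."""
        kh, kw = kernel.shape
        h, w = img.shape
        out = np.empty((h - kh + 1, w - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
        return out

    def lens_score(img, kernel, bias=0.0):
        fmap = np.maximum(conv2d_valid(img, kernel), 0.0)  # ReLU
        pooled = fmap.mean()                               # global average pool
        return 1.0 / (1.0 + np.exp(-(pooled + bias)))      # sigmoid in [0, 1]

    img = rng.normal(size=(32, 32))        # stand-in for a galaxy cutout
    kernel = rng.normal(size=(3, 3)) * 0.1  # untrained weights
    print(lens_score(img, kernel))
    ```

    In a real pipeline the weights are learned from labelled lens/non-lens cutouts, and sources whose score exceeds a chosen threshold become the candidate list that is then visually inspected, as in the abstract.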

    Constraining decaying dark energy density models with the CMB temperature-redshift relation

    We discuss the thermodynamic and dynamical properties of a variable dark energy model with density scaling as \rho_x \propto (1+z)^{m}, z being the redshift. These models lead to the creation/disruption of matter and radiation, which affect the cosmic evolution of both matter and radiation components in the Universe. In particular, we have studied the temperature-redshift relation of radiation, which has been constrained using a recent collection of cosmic microwave background (CMB) temperature measurements up to z \sim 3. We find that, within the uncertainties, the model is indistinguishable from a cosmological constant which does not exchange any particles with other components. Future observations, in particular measurements of CMB temperature at large redshift, will allow us to place firmer bounds on the effective equation of state parameter w_{eff} for such types of dark energy models. Comment: 9 pages, 1 figure, to appear in the Proceedings of the 3rd Italian-Pakistani Workshop on Relativistic Astrophysics, Lecce 20-22 June 2011, published in Journal of Physics: Conference Series (JPCS)
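    A common parametrization of the modified temperature-redshift relation for photon-creating dark energy is T(z) = T_0 (1+z)^{1-\beta}, where \beta = 0 recovers the standard adiabatic scaling T(z) = T_0 (1+z). A minimal sketch follows; this parametrization is an assumption for illustration, not necessarily the exact model constrained in the paper.

    ```python
    T0 = 2.7255  # present-day CMB temperature in K (Fixsen 2009)

    def cmb_temperature(z, beta=0.0):
        """T(z) = T0 * (1+z)**(1-beta).

        beta = 0 is the standard adiabatic relation; beta != 0 mimics
        dark energy that creates photons (an assumed parametrization).
        """
        return T0 * (1.0 + z) ** (1.0 - beta)

    # Standard scaling at z = 2 versus a mildly photon-creating model
    print(cmb_temperature(2.0), cmb_temperature(2.0, beta=0.05))
    ```

    Measurements of T(z) at several redshifts then constrain \beta (and hence the particle exchange) by comparing the observed temperatures to this curve; consistency with \beta = 0 corresponds to the cosmological-constant-like behaviour reported in the abstract.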

    Do software models based on the UML aid in source-code comprehensibility? Aggregating evidence from 12 controlled experiments

    In this paper, we present the results of long-term research conducted in order to study the contribution made by software models based on the Unified Modeling Language (UML) to the comprehensibility of Java source-code deprived of comments. We have conducted 12 controlled experiments in different experimental contexts and on different sites with participants with different levels of expertise (i.e., Bachelor’s, Master’s, and PhD students and software practitioners from Italy and Spain). A total of 333 observations were obtained from these experiments. The UML models in our experiments were those produced in the analysis and design phases. The models produced in the analysis phase were created with the objective of abstracting the environment in which the software will work (i.e., the problem domain), while those produced in the design phase were created with the goal of abstracting implementation aspects of the software (i.e., the solution/application domain). Source-code comprehensibility was assessed with regard to correctness of understanding, time taken to accomplish the comprehension tasks, and efficiency as regards accomplishing those tasks. In order to study the global effect of UML models on source-code comprehensibility, we aggregated results from the individual experiments using a meta-analysis. We made every effort to account for the heterogeneity of our experiments when aggregating the results obtained from them. The overall results suggest that the use of UML models affects the comprehensibility of source-code when it is deprived of comments. Indeed, models produced in the analysis phase might reduce source-code comprehensibility, while increasing the time taken to complete comprehension tasks. That is, browsing source code and this kind of models together negatively impacts the time taken to complete comprehension tasks without having a positive effect on the comprehensibility of source code.
One plausible justification for this is that the UML models produced in the analysis phase focus on the problem domain. That is, models produced in the analysis phase say nothing about source code and there should be no expectation that they would, in any way, be beneficial to comprehensibility. On the other hand, UML models produced in the design phase improve source-code comprehensibility. One possible justification for this result is that models produced in the design phase are more focused on implementation details. Therefore, although the participants had more material to read and browse, this additional effort was paid back in the form of an improved comprehension of source code.
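    The inverse-variance pooling that underlies this kind of aggregation can be sketched as follows. This is the fixed-effect version with made-up effect sizes; the paper accounts for between-experiment heterogeneity, which a random-effects model would handle, so take this only as the core arithmetic.

    ```python
    # Sketch: fixed-effect (inverse-variance) meta-analysis of per-experiment
    # effect sizes. Values below are hypothetical, not the paper's data.
    def fixed_effect_meta(effects, variances):
        """Pool effect sizes, weighting each by the inverse of its variance."""
        weights = [1.0 / v for v in variances]
        pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
        pooled_var = 1.0 / sum(weights)  # variance of the pooled estimate
        return pooled, pooled_var

    effects = [0.30, 0.10, 0.25]    # hypothetical standardized effects
    variances = [0.04, 0.02, 0.08]  # hypothetical sampling variances
    pooled, pooled_var = fixed_effect_meta(effects, variances)
    print(pooled, pooled_var)
    ```

    Larger, more precise experiments (smaller variance) pull the pooled estimate toward their effect, which is why aggregating 12 experiments with 333 observations yields a tighter conclusion than any single experiment.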

    Surface alignment and anchoring transitions in nematic lyotropic chromonic liquid crystal

    The surface alignment of lyotropic chromonic liquid crystals (LCLCs) can be not only planar (tangential) but also homeotropic, with self-assembled aggregates perpendicular to the substrate, as demonstrated by mapping optical retardation and by three-dimensional imaging of the director field. With time, the homeotropic nematic undergoes a transition into a tangential state. The anchoring transition is discontinuous and can be described by a double-well anchoring potential with two minima corresponding to tangential and homeotropic orientation. Comment: Accepted for publication in Phys. Rev. Lett. (Accepted Wednesday Jun 02, 2010)

    SEAGLE - III: Towards resolving the mismatch in the dark-matter fraction in early-type galaxies between simulations and observations

    The central dark-matter fraction of galaxies is sensitive to feedback processes during galaxy formation. Strong gravitational lensing has been effective in the precise measurement of the dark-matter fraction inside massive early-type galaxies. Here, we compare the projected dark-matter fraction of early-type galaxies inferred from the SLACS (Sloan Lens ACS Survey) strong-lens survey with those obtained from the Evolution and Assembly of GaLaxies and their Environment (EAGLE), Illustris, and IllustrisTNG hydrodynamical simulations. Previous comparisons with some simulations revealed a large discrepancy, with considerably higher inferred dark-matter fractions - by factors of ≈2-3 - inside half of the effective radius in observed strong-lens galaxies as compared to simulated galaxies. Here, we report good agreement between EAGLE and SLACS for the dark-matter fractions inside both half of the effective radius and the effective radius as a function of the galaxy's stellar mass, effective radius, and total mass-density slope. However, for IllustrisTNG and Illustris, the dark-matter fractions are lower than observed. This work consistently assumes a Chabrier initial mass function (IMF), which suggests that a different IMF (although not excluded) is not necessary to resolve this mismatch. The differences in the stellar feedback model between EAGLE and Illustris and IllustrisTNG are likely the dominant cause of the difference in their dark-matter fractions and density slopes.

    Performance Characterization of ESA's Tropospheric Delay Calibration System for Advanced Radio Science Experiments

    Media propagation noise is among the main error sources of radiometric observables for deep space missions, with fluctuations of the tropospheric excess path length representing a relevant contributor to the Doppler noise budget. Microwave radiometers currently represent the most accurate instruments for the estimation of the tropospheric delay and delay rate along a slant direction. A prototype of a tropospheric delay calibration system (TDCS), using a 14-channel Ka/V band microwave radiometer, was developed under a European Space Agency contract and installed at the deep space ground station in Malargüe, Argentina, in February 2019. After its commissioning, the TDCS was involved in an extensive testbed campaign, recording a total of 44 tracking passes of the Gaia spacecraft, which were used to perform an orbit determination analysis. This work presents the first statistical characterization of the end-to-end performance of the TDCS prototype in an operational scenario. The results show that using TDCS-based calibrations instead of the standard GNSS-based calibrations leads to a significant reduction of the residual Doppler noise and instability.
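    The leverage of calibrating the tropospheric excess path can be illustrated with the textbook relation between path-length rate and the fractional frequency shift it imprints on a two-way Doppler link. The numeric rate below is a hypothetical figure for illustration, not a value from the paper.

    ```python
    C = 299_792_458.0  # speed of light, m/s

    def two_way_doppler_shift(path_rate_m_s):
        """Fractional frequency shift from a tropospheric excess
        path-length rate; the factor 2 reflects a two-way link,
        where the signal crosses the troposphere twice."""
        return 2.0 * path_rate_m_s / C

    # A hypothetical 1 mm/s excess-path rate
    print(two_way_doppler_shift(1e-3))  # ~6.7e-12 fractional shift
    ```

    Uncalibrated fluctuations at this level can dominate the Doppler noise budget of a radio science experiment, which is why subtracting a radiometer-based delay estimate (as the TDCS does) directly lowers the residual noise and instability.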