
    Analysis of individual mouse activity in group housed animals of different inbred strains using a novel automated home cage analysis system.

    Central nervous system disorders such as autism, as well as neurodegenerative diseases such as Huntington's disease, are commonly investigated using genetically altered mouse models. The current approach to characterizing these mice usually involves removing the animals from their home-cage environment and placing them into novel environments, where they undergo a battery of tests measuring a range of behavioral and physical phenotypes. These tests are often conducted only for short periods of time and in social isolation. However, human manifestations of such disorders are often characterized by multiple phenotypes, presented over long periods of time and leading to significant social impacts. Here, we have developed a system that allows automated monitoring of individual mice housed socially in the cage in which they were reared, within established social groups and over long periods of time. We demonstrate that the system accurately reports individual locomotor behavior within the group and that the measurements taken can provide unique insights into previously unrecognized effects of genetic background on individual and group behavior.
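
    As an illustration of the kind of per-animal readout such a system produces, here is a minimal sketch that computes each mouse's locomotor activity from timestamped position samples. The (t, x, y) tracking format and all names are hypothetical assumptions, not taken from the paper.

        import math

        def distance_travelled(track):
            """Total path length for one mouse from (t, x, y) samples.

            `track` is a hypothetical list of (time_s, x_cm, y_cm) tuples,
            as an automated home-cage tracker might emit per animal.
            """
            return sum(
                math.hypot(x2 - x1, y2 - y1)
                for (_, x1, y1), (_, x2, y2) in zip(track, track[1:])
            )

        # Compare individual activity within one group-housed cage.
        cage = {
            "mouse_A": [(0, 0.0, 0.0), (1, 3.0, 4.0), (2, 3.0, 10.0)],
            "mouse_B": [(0, 5.0, 5.0), (1, 5.0, 5.0), (2, 6.0, 5.0)],
        }
        for mouse, track in cage.items():
            print(mouse, f"{distance_travelled(track):.1f} cm")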

    Production of ⁴He and anti-⁴He in Pb–Pb collisions at √s_NN = 2.76 TeV at the LHC

    Results on the production of ⁴He and anti-⁴He nuclei in Pb–Pb collisions at √s_NN = 2.76 TeV in the rapidity range |y| < 1, using the ALICE detector, are presented in this paper. The rapidity densities corresponding to 0-10% central events are found to be dN/dy(⁴He) = (0.8 ± 0.4 (stat) ± 0.3 (syst)) × 10⁻⁶ and dN/dy(anti-⁴He) = (1.1 ± 0.4 (stat) ± 0.2 (syst)) × 10⁻⁶, respectively. This is in agreement with the statistical thermal model expectation assuming the same chemical freeze-out temperature (T_chem = 156 MeV) as for light hadrons. The measured ratio of anti-⁴He/⁴He is 1.4 ± 0.8 (stat) ± 0.5 (syst).
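
    As a rough consistency check on the quoted numbers, naive uncorrelated error propagation on the two rapidity densities approximately reproduces the published ratio and its uncertainties. This simple propagation is an illustration only, not the collaboration's actual procedure.

        import math

        # Rapidity densities quoted above, in units of 1e-6: (value, stat, syst)
        he4      = (0.8, 0.4, 0.3)   # 4He
        anti_he4 = (1.1, 0.4, 0.2)   # anti-4He

        ratio = anti_he4[0] / he4[0]
        # For a ratio of independent quantities, relative errors add in quadrature.
        stat = ratio * math.hypot(anti_he4[1] / anti_he4[0], he4[1] / he4[0])
        syst = ratio * math.hypot(anti_he4[2] / anti_he4[0], he4[2] / he4[0])

        print(f"anti-4He / 4He = {ratio:.1f} +/- {stat:.1f} (stat) +/- {syst:.1f} (syst)")
        # -> 1.4 +/- 0.9 (stat) +/- 0.6 (syst), close to the published
        #    1.4 +/- 0.8 (stat) +/- 0.5 (syst); naive propagation slightly
        #    overestimates, as expected if some systematics are correlated.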

    A scoring system for the evaluation of the mutated Crb1/rd8-derived retinal lesions in C57BL/6N mice [version 1; referees: 2 approved]

    As part of the International Mouse Phenotyping Consortium (IMPC) programme, MRC Harwell is conducting a large eye morphology phenotyping screen on genetically modified mice, compared to the baseline phenotype observed in the background strain C57BL/6NTac. The C57BL/6NTac strain is known to carry a spontaneous mutation in the Crb1 gene that causes retinal degeneration characterized by the presence of white spots (flecks) in the fundus. These flecks potentially represent a confounding factor, masking similar retinal phenotype abnormalities that may be detected in mutants. We therefore investigated the frequency, position, and extent of the flecks in a large population of C57BL/6NTac mice to provide a basis for evaluating the presence of flecks in mutant mice with the same genetic background. We found that in our facility males were more severely affected than females and that, in both sexes, the most common localisation of the flecks was the inferior hemicycle of the fundus.
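
    To make the "frequency, position and extent" idea concrete, here is a minimal sketch of how such lesion scores could be recorded and summarised by sex. The record fields and the 0-3 grade scale are hypothetical assumptions, not the paper's actual scoring system.

        from collections import Counter
        from dataclasses import dataclass

        @dataclass
        class FundusScore:
            mouse_id: str
            sex: str         # "M" or "F"
            hemicycle: str   # fleck localisation: "inferior" or "superior"
            grade: int       # hypothetical 0-3 extent scale (0 = no flecks)

        scores = [
            FundusScore("B6N-001", "M", "inferior", 3),
            FundusScore("B6N-002", "F", "inferior", 1),
            FundusScore("B6N-003", "M", "inferior", 2),
        ]

        # Mean severity by sex, mirroring the male-vs-female comparison above.
        total, n = Counter(), Counter()
        for s in scores:
            total[s.sex] += s.grade
            n[s.sex] += 1
        for sex in sorted(n):
            print(sex, total[sex] / n[sex])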

    A flexible and scalable architecture for human-robot interaction

    Recent developments and advancements in several areas of Computer Science, such as the Semantic Web, Natural Language Understanding, Knowledge Representation, and, more generally, Artificial Intelligence, have enabled the development of automatic, smart systems able to address various challenges and tasks. In this paper, we present a scalable and flexible humanoid robot architecture which employs artificial intelligence technologies and is built on top of the programmable humanoid robot called Zora. The framework is composed of three modules which enable interaction between Zora and a human for tasks such as Sentiment Understanding, Question Answering, and automatic Object Recognition. The framework is flexible and extensible, and can be augmented with other modules. Moreover, the embedded modules we present are general, in the sense that they can easily be enriched by adding training resources for the presented sub-components. The design of each module consists of two components: (i) a front-end system which is responsible for the interaction with humans, and (ii) a back-end component which resides on the server side and performs the heavy computation.
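
    A minimal sketch of the front-end/back-end module split described above, where each pluggable module exposes an interaction-facing entry point that delegates heavy computation to a server-side component. The class and method names are illustrative assumptions, not the actual Zora framework API.

        from abc import ABC, abstractmethod

        class Module(ABC):
            """One pluggable capability: a user-facing front-end plus a back-end."""

            @abstractmethod
            def backend(self, utterance: str) -> str:
                """Heavy computation; would run on the server side in practice."""

            def frontend(self, utterance: str) -> str:
                """Front-end: interacts with the human, delegates to the back-end."""
                return self.backend(utterance)

        class SentimentUnderstanding(Module):
            def backend(self, utterance: str) -> str:
                # Placeholder classifier; a real back-end would call a trained model.
                return "positive" if "good" in utterance.lower() else "neutral"

        class QuestionAnswering(Module):
            def backend(self, utterance: str) -> str:
                return "I don't know yet."  # stub for a server-side QA engine

        # The architecture is extensible: registering a new Module adds a capability.
        modules = {"sentiment": SentimentUnderstanding(), "qa": QuestionAnswering()}
        print(modules["sentiment"].frontend("That was a good answer"))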

    Agile Methodologies in Web Programming: a Survey

    This paper reports the results of a survey on the use of Agile Methodologies (AMs), techniques, and tools for Web programming. The survey was conducted from October to December 2013 and involved 112 Web application developers from 32 countries. Its main purpose was to assess the usage of AMs, and of specific practices and tools, in the context of Web programming and of related technologies such as Content Management Systems. The results confirm a broad adoption of AMs among Web developers, and the prevalence of Scrum among AMs.

    Web framework points: an effort estimation methodology for Web application development using a content management framework

    Web applications are among the most popular and relevant kinds of application. Most Web applications are developed using a content management framework (CMF), which helps accelerate the publication of large amounts of information and the development of Web applications. However, developing Web applications with a CMF is not exempt from cost and time overruns, just as in traditional software projects. Currently, no estimation model adequately measures the effort of developing such Web applications. This work presents a new methodology, called web framework points, to estimate the effort of Web applications developed with a CMF. Web framework points is a hybrid methodology composed of a sizing phase, which follows specific guidelines, and an effort estimation phase, obtained by applying a cost model to the size model of the project to be estimated. The sizing of the project takes into account not only the usual functional requirements, as in function point analysis, but also elements specific to developing a Web application with a CMF. We also present an experimental validation of the proposed methodology, performed on a dataset of 29 real-world projects, of which 83% show an estimation error of less than 25%.
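
    A sketch of how such a methodology is evaluated: compute the relative estimation error per project and the fraction of projects within 25%, which is the figure the 83% result above refers to. The size-to-effort cost model and the data below are generic placeholders, not the actual web framework points model or dataset.

        # Hypothetical (size, actual_effort) pairs in (web framework points, person-hours).
        projects = [(120, 610), (80, 390), (200, 1040), (60, 340), (150, 700)]

        def estimate_effort(size, a=5.0, b=0.0):
            """Placeholder cost model: effort = a * size + b (the real model's
            coefficients would be calibrated on historical CMF projects)."""
            return a * size + b

        errors = []
        for size, actual in projects:
            predicted = estimate_effort(size)
            mre = abs(predicted - actual) / actual   # magnitude of relative error
            errors.append(mre)

        pred25 = sum(e < 0.25 for e in errors) / len(errors)
        print(f"mean MRE = {sum(errors)/len(errors):.2f}, PRED(25) = {pred25:.0%}")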

    Managing a heterogeneous scientific computing cluster with cloud-like tools: ideas and experience

    Obtaining CPU cycles on an HPC cluster is nowadays relatively simple, and sometimes even cheap, for academic institutions. However, in most cases providers of HPC services do not allow changes to the configuration, the implementation of special features, or lower-level control of the computing infrastructure, for example for testing experimental configurations. The variety of use cases proposed by several departments of the University of Torino, including solid-state chemistry, computational biology, genomics, and many others, called for different and sometimes conflicting configurations; furthermore, several R&D activities in the field of scientific computing, with topics ranging from GPU acceleration to Cloud Computing technologies, needed a platform on which to be carried out. The Open Computing Cluster for Advanced data Manipulation (OCCAM) is a multi-purpose, flexible HPC cluster designed and operated by a collaboration between the University of Torino and the Torino branch of the Istituto Nazionale di Fisica Nucleare. It aims to provide a flexible and reconfigurable infrastructure catering to a wide range of scientific computing needs, as well as a platform for R&D activities on computational technologies themselves. We describe some of the use cases that prompted the design and construction of the system, its architecture, and a first characterisation of its performance using synthetic benchmark tools and a few realistic use-case tests.