
    Properties of Faint Distant Galaxies as seen through Gravitational Telescopes

    This paper reviews the most recent developments related to the use of lensing clusters of galaxies as gravitational telescopes in deep-Universe studies. We summarize the state of the art and the most recent results aimed at studying the physical properties of distant galaxies beyond the limits of conventional spectroscopy. The application of photometric-redshift techniques in the context of gravitational lensing is emphasized for the study of both the lensing structures and the background population of lensed galaxies. An ongoing search for the first building blocks of galaxies behind lensing clusters is presented and discussed.
    Comment: Review lecture given at "Gravitational Lensing: a unique tool for cosmology", Aussois, France, January 2003. To appear in ASP Conf. Ser., eds. D. Valls-Gabaud & J.-P. Kneib; 26 pages, 8 figures.

    Characterizing Service Level Objectives for Cloud Services: Motivation of Short-Term Cache Allocation Performance Modeling

    Service level objectives (SLOs) stipulate performance goals for cloud applications, microservices, and infrastructure. SLOs are widely used, in part, because system managers can tailor goals to their products, companies, and workloads. Systems research intended to support strong SLOs should target realistic performance goals used by system managers in the field; evaluations conducted with uncommon SLO goals may not translate to real systems. Some textbooks discuss the structure of SLOs, but (1) they only sketch SLO goals and (2) they use outdated examples. We mined real SLOs published on the web, extracted their goals, and characterized them. Many web documents discuss SLOs loosely, but few provide details that reflect real settings. Systematic literature review (SLR) prunes results and reduces bias by (1) modeling the expected SLO structure and (2) detecting and removing outliers. We collected 75 SLOs in which response time, query percentile, and reporting period were all specified, and used them to confirm or refute common perceptions. For example, we found few SLOs with response-time guarantees below 10 ms for 90% or more of queries. This reality bolsters the perception that single-digit-millisecond SLOs face fundamental research challenges.
    This work was funded by NSF Grants 1749501 and 1350941.
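    The three fields the study required of each mined SLO (response-time goal, query percentile, reporting period) can be captured in a small record type, and the paper's example perception tested as a filter. A minimal sketch; the records below are invented for illustration and are not drawn from the paper's 75-SLO corpus:

    ```python
    from dataclasses import dataclass

    @dataclass
    class SLO:
        response_time_ms: float  # response-time goal
        percentile: float        # fraction of queries that must meet the goal
        period_days: int         # reporting period

    # Hypothetical SLOs, invented for illustration only.
    slos = [
        SLO(100.0, 0.99, 30),
        SLO(250.0, 0.95, 30),
        SLO(8.0, 0.90, 7),
        SLO(500.0, 0.999, 90),
    ]

    # The perception tested in the paper: SLOs guaranteeing under 10 ms
    # for 90% or more of queries are rare.
    single_digit = [s for s in slos
                    if s.response_time_ms < 10 and s.percentile >= 0.90]
    print(f"{len(single_digit)} of {len(slos)} are single-digit-ms SLOs")
    ```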

    Continuous Improvement Through Knowledge-Guided Analysis in Experience Feedback

    Continuous improvement of industrial processes is increasingly a key element of competitiveness for industrial systems. Managing experience feedback in this framework is designed to build, analyze, and facilitate knowledge sharing among the problem-solving practitioners of an organization in order to improve process and product achievement. During problem-solving processes, the intellectual investment of experts is often considerable, and the opportunities for exploiting expert knowledge are numerous: decision making, problem solving under uncertainty, and expert configuration. In this paper, our contribution concerns the structuring of a cognitive experience-feedback framework that allows flexible exploitation of expert knowledge during problem-solving processes and the reuse of such collected experience. To that purpose, the proposed approach uses the general principles of root cause analysis to identify the root causes of problems or events, the conceptual-graphs formalism for the semantic conceptualization of the domain vocabulary, and the Transferable Belief Model for the fusion of information from different sources. The formal reasoning mechanisms underlying conceptual graphs (logic-based semantics) enable intelligent information retrieval for the effective exploitation of lessons learned from past projects. An example illustrates the application of the proposed formalization of experience-feedback processes in the transport industry sector.
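    The fusion step the abstract mentions, the Transferable Belief Model, combines evidence via the unnormalized conjunctive rule, assigning any conflicting mass to the empty set. A minimal sketch of that rule; the root-cause frame and the mass values are invented for illustration, not taken from the paper:

    ```python
    from itertools import product

    def conjunctive_combine(m1, m2):
        """Combine two mass functions (dicts: frozenset -> mass) with the
        unnormalized conjunctive rule; conflict accumulates on the empty set."""
        out = {}
        for (a, ma), (b, mb) in product(m1.items(), m2.items()):
            inter = a & b
            out[inter] = out.get(inter, 0.0) + ma * mb
        return out

    # Two experts assessing root causes {machine, operator} of a defect:
    M, O = frozenset({"machine"}), frozenset({"operator"})
    frame = M | O
    m1 = {M: 0.6, frame: 0.4}  # expert 1 leans toward a machine fault
    m2 = {O: 0.5, frame: 0.5}  # expert 2 leans toward operator error
    fused = conjunctive_combine(m1, m2)
    # fused[frozenset()] holds the conflict mass (0.6 * 0.5 = 0.3)
    ```

    Keeping the conflict mass explicit, rather than renormalizing it away as Dempster's rule does, is the distinctive TBM design choice: a large empty-set mass signals that the sources disagree.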

    Report of the user requirements and web based access for eResearch workshops

    The User Requirements and Web Based Access for eResearch Workshop, organized jointly by NeSC and NCeSS, was held on 19 May 2006. The aim was to identify lessons learned from e-Science projects that would contribute to our capacity to make Grid infrastructures and tools usable and accessible for diverse user communities. Its focus was on providing an opportunity for a pragmatic discussion between e-Science end users and tool builders in order to understand usability challenges, technological options, community-specific content and needs, and methodologies for design and development. We invited members of six UK e-Science projects and one US project, trying as far as possible to pair a user and a developer from each project in order to discuss their contrasting perspectives and experiences. Three breakout sessions covered the topics of user-developer relations, commodification, and functionality. There was also extensive post-meeting discussion, summarized here. Additional information on the workshop, including the agenda, participant list, and talk slides, can be found online at http://www.nesc.ac.uk/esi/events/685/
    Reference: NeSC report UKeS-2006-07, available from http://www.nesc.ac.uk/technical_papers/UKeS-2006-07.pd

    The VIPERS Multi-Lambda Survey. I. UV and NIR Observations, multi-color catalogues and photometric redshifts

    We present observations collected in the CFHTLS-VIPERS region in the ultraviolet (UV) with the GALEX satellite (far- and near-UV channels) and in the near-infrared with the CFHT/WIRCam camera (K_s band), over areas of 22 and 27 deg^2, respectively. The depth of the photometry was optimized to measure the physical properties (e.g., SFR, stellar masses) of all the galaxies in the VIPERS spectroscopic survey. The large volume explored by VIPERS will enable a unique investigation of the relationship between galaxy properties and their environment (density field and cosmic web) at high redshift (0.5 < z < 1.2). In this paper, we present the observations, the data reduction, and the construction of the multi-color catalogues. The CFHTLS-T0007 (gri-chi^2) images are used as the reference to detect and measure the K_s-band photometry, while the T0007 u-selected sources are used as priors to perform the GALEX photometry with a dedicated software package (EMphot). Our final sample reaches NUV_AB ~ 25 (at 5 sigma) and K_AB ~ 22 (at 3 sigma). The large spectroscopic sample (~51,000 spectroscopic redshifts) allows us to demonstrate the robustness of our star/galaxy separation and the reliability of our photometric redshifts, with a typical accuracy sigma_z <= 0.04 and a catastrophic-failure rate eta < 2% down to i ~ 23. We present various tests of the K_s-band completeness and of the photometric-redshift accuracy by comparison with existing, overlapping deep photometric catalogues. Finally, we discuss the BzK sample of passive and active galaxies at high redshift and the evolution of galaxy morphology in the (NUV - r) vs. (r - K_s) diagram at low redshift (z < 0.25), made possible by the high image quality of the CFHTLS. The images, catalogues, and photometric redshifts for 1.5 million sources (down to NUV <= 25 or K_s <= 22) are released and available at http://cesam.lam.fr/vipers-mls/
    Comment: 14 pages, 16 figures. Accepted for publication in A&A. Version to be published.
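    The quoted quality figures, a scatter sigma_z and a catastrophic-failure rate eta, are conventionally computed from the normalized residual dz/(1+z): sigma_z via the NMAD estimator and eta as the fraction of objects with |dz|/(1+z) above a fixed cut. A sketch under those standard conventions; the redshift arrays below are synthetic, and the 0.15 cut is a common community choice, not necessarily the one used in this paper:

    ```python
    import numpy as np

    def photoz_metrics(z_spec, z_phot, eta_cut=0.15):
        """Return (sigma_NMAD, eta) for photometric vs. spectroscopic redshifts."""
        dz = (z_phot - z_spec) / (1.0 + z_spec)
        # NMAD: robust scatter estimate, insensitive to catastrophic outliers
        sigma_nmad = 1.4826 * np.median(np.abs(dz - np.median(dz)))
        # eta: fraction of catastrophic failures beyond the cut
        eta = float(np.mean(np.abs(dz) > eta_cut))
        return sigma_nmad, eta

    # Synthetic sample: 1000 galaxies at 0.5 < z < 1.2 with 3% photo-z noise.
    rng = np.random.default_rng(0)
    z_spec = rng.uniform(0.5, 1.2, 1000)
    z_phot = z_spec + 0.03 * (1 + z_spec) * rng.standard_normal(1000)
    sigma, eta = photoz_metrics(z_spec, z_phot)
    ```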

    Agents in Bioinformatics

    The scope of the Technical Forum Group (TFG) on Agents in Bioinformatics (BIOAGENTS) was to inspire collaboration between the agent and bioinformatics communities, with the aim of creating an opportunity to propose a different (agent-based) approach to the development of computational frameworks, both for data analysis in bioinformatics and for system modelling in computational biology. During the day, the participants examined the future of research on agents in bioinformatics, primarily through 12 invited talks selected to cover the most relevant topics. From the discussions, it became clear that there are many perspectives on the field, ranging from bio-conceptual languages for agent-based simulation, to the definition of bio-ontology-based declarative languages for use by information agents, to the use of Grid agents, each of which requires further exploration. The interactions between participants encouraged the development of applications that describe a way of creating agent-based simulation models of biological systems, starting from a hypothesis and inferring new knowledge (or relations) by mining and analysing the huge amount of public biological data. In this report we summarise and reflect on the presentations and discussions.
