
    Oceanus.

    v. 26, no. 2 (1983)

    Review of precision cancer medicine: Evolution of the treatment paradigm.

    In recent years, biotechnological breakthroughs have led to the identification of complex and unique biologic features associated with carcinogenesis. Tumor and cell-free DNA profiling, immune markers, and proteomic and RNA analyses are used to identify these characteristics for optimization of anticancer therapy in individual patients. Consequently, clinical trials have evolved, shifting from tumor type-centered to gene-directed and histology-agnostic, with innovative adaptive designs tailored to biomarker profiling, with the goal of improving treatment outcomes. A plethora of precision medicine trials have been conducted. The majority of these trials demonstrated that matched therapy is associated with superior outcomes compared to non-matched therapy across tumor types and in specific cancers. To improve the implementation of precision medicine, this approach should be used early in the course of the disease, and patients should have complete tumor profiling and access to effective matched therapy. To overcome the complexity of tumor biology, clinical trials combining gene-targeted therapy with immune-targeted approaches (e.g., checkpoint blockade, personalized vaccines, and/or chimeric antigen receptor T-cells), hormonal therapy, chemotherapy, and/or novel agents should be considered. These studies should target dynamic changes in tumor biologic abnormalities, eliminate minimal residual disease, and eradicate significant subclones that confer resistance to treatment. Mining and expansion of real-world data, facilitated by the use of advanced computer data processing capabilities, may contribute to validation of information to predict new applications for medicines. In this review, we summarize the clinical trials and discuss challenges and opportunities to accelerate the implementation of precision oncology.

    Applications of satellite and marine geodesy to operations in the ocean environment

    The requirements for marine and satellite geodesy technology are assessed, with emphasis on the development of marine geodesy. Various programs and missions for identifying the satellite geodesy technology applicable to marine geodesy are analyzed, along with national and international marine programs, to identify the roles of satellite/marine geodesy techniques in effectively meeting the objectives of those programs and other objectives of national interest. The case for marine geodesy is developed from requirements documented by authoritative technical and industrial experts, professional geodesists, government agency personnel, and applicable technology reports.

    Planetary rover technology development requirements

    Planetary surface (including lunar) mobility and sampling capability is required to support proposed future National Aeronautics and Space Administration (NASA) solar system exploration missions. The NASA Office of Aeronautics and Space Technology (OAST) is addressing some of these technology needs in its base research and development program, the Civil Space Technology Initiative (CSTI), and a new technology initiative entitled Pathfinder. The Pathfinder Planetary Rover (PPR) and Sample Acquisition, Analysis and Preservation (SAAP) programs will develop and validate the technologies needed to enable both robotic and piloted rovers on various planetary surfaces. The technology requirements for a planetary roving vehicle and the development plans of the PPR and SAAP programs are discussed.

    Why We Read Wikipedia

    Wikipedia is one of the most popular sites on the Web, with millions of users relying on it to satisfy a broad range of information needs every day. Although it is crucial to understand what exactly these needs are in order to be able to meet them, little is currently known about why users visit Wikipedia. The goal of this paper is to fill this gap by combining a survey of Wikipedia readers with a log-based analysis of user activity. Based on an initial series of user surveys, we build a taxonomy of Wikipedia use cases along several dimensions, capturing users' motivations to visit Wikipedia, the depth of knowledge they are seeking, and their knowledge of the topic of interest prior to visiting Wikipedia. Then, we quantify the prevalence of these use cases via a large-scale user survey conducted on live Wikipedia with almost 30,000 responses. Our analyses highlight the variety of factors driving users to Wikipedia, such as current events, media coverage of a topic, personal curiosity, work or school assignments, or boredom. Finally, we match survey responses to the respondents' digital traces in Wikipedia's server logs, enabling the discovery of behavioral patterns associated with specific use cases. For instance, we observe long and fast-paced page sequences across topics for users who are bored or exploring randomly, whereas those using Wikipedia for work or school spend more time on individual articles focused on topics such as science. Our findings advance our understanding of reader motivations and behavior on Wikipedia and can have implications for developers aiming to improve Wikipedia's user experience, editors striving to cater to their readers' needs, third-party services (such as search engines) providing access to Wikipedia content, and researchers aiming to build tools such as recommendation engines. (Published in WWW'17.)
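The matching step the abstract describes — joining each survey response to that respondent's pageview trace and summarising behaviour per motivation — can be sketched roughly as below. The field names (`token`, `motivation`), the toy data, and the dwell-time summary are invented for illustration and are not the paper's actual schema or metrics.

```python
# Hypothetical sketch: join survey responses to server-log traces via a
# shared per-reader token, then summarise behaviour by stated motivation.
from collections import defaultdict
from statistics import mean

surveys = [  # one survey response per reader token (invented schema)
    {"token": "a1", "motivation": "work/school"},
    {"token": "b2", "motivation": "bored/random"},
]
logs = [  # (token, article, seconds spent on the page) -- toy data
    ("a1", "Photosynthesis", 210), ("a1", "Chlorophyll", 180),
    ("b2", "Llama", 12), ("b2", "Yodeling", 8), ("b2", "Kites", 15),
]

# Group per-page dwell times by reader token.
traces = defaultdict(list)
for token, article, secs in logs:
    traces[token].append(secs)

# Per-motivation behavioural summary: session length and mean dwell time.
for resp in surveys:
    dwell = traces[resp["token"]]
    print(resp["motivation"], "pages:", len(dwell),
          "mean dwell:", round(mean(dwell), 1))
```

On this toy data the "work/school" reader views fewer pages for longer, while the "bored/random" reader produces a longer, faster-paced sequence, mirroring the pattern the abstract reports.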

    Guidelines for the presentation and visualisation of lifelog content

    Lifelogs offer rich, voluminous sources of personal and social data for which visualisation is ideally suited to providing access, overview, and navigation. We explore, through examples of our visualisation work within the domain of lifelogging, the major axes on which lifelogs operate and, therefore, on which their visualisations should be contingent. We also explore the concept of ‘events’ as a way to significantly reduce the complexity of the lifelog for presentation and make it more human-oriented. Finally, we present some guidelines and goals which should be considered when designing presentation modes for lifelog content.

    Evaluating trust in electronic commerce : a study based on the information provided on merchants' websites

    Lack of trust has been identified as a major problem hampering the growth of Electronic Commerce (EC). Many studies report that a large number of online shoppers abandon their transactions because they do not trust the website when they are asked to provide personal information. To support trust, we developed an information framework model based on research on EC trust. The model is based on the information a consumer expects to find on an EC website, which the literature shows increases his/her trust towards online merchants. An information extraction system is then developed to help the user find this information. In this paper, we present the development of the information extraction system and its evaluation. This is followed by a study looking at the use of the identified variables on a sample of EC websites.

    Binary Particle Swarm Optimization based Biclustering of Web usage Data

    Web mining is the nontrivial process of discovering valid, novel, potentially useful knowledge from web data using data mining techniques. It can yield information useful for improving the services offered by web portals and by information access and retrieval tools. With the rapid development of biclustering, more researchers have applied the technique to different fields in recent years. When the biclustering approach is applied to web usage data, it automatically captures the hidden browsing patterns in the form of biclusters. In this work, a swarm intelligence technique is combined with the biclustering approach to propose an algorithm called Binary Particle Swarm Optimization (BPSO) based Biclustering for Web Usage Data. The main objective of this algorithm is to retrieve the globally optimal bicluster from the web usage data. These biclusters capture relationships between web users and web pages that are useful for E-Commerce applications such as web advertising and marketing. Experiments are conducted on a real dataset to demonstrate the efficiency of the proposed algorithm.
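The abstract names the technique but gives no pseudocode, so the following is only a minimal sketch of binary PSO applied to biclustering, under stated assumptions: each particle is a 0/1 vector selecting users then pages, and fitness uses the mean squared residue (a standard bicluster-coherence measure) with a small volume reward. The toy matrix, constants, and fitness weighting are illustrative, not the paper's.

```python
# Sketch of BPSO-based biclustering of a toy user-page usage matrix.
import math
import random

random.seed(0)

N_USERS, N_PAGES = 8, 6
# Toy "web usage" matrix: hit counts of users (rows) on pages (columns).
M = [[random.randint(0, 5) for _ in range(N_PAGES)] for _ in range(N_USERS)]

def msr(rows, cols):
    """Mean squared residue of a bicluster; lower means more coherent."""
    if not rows or not cols:
        return float("inf")
    row_mean = {i: sum(M[i][j] for j in cols) / len(cols) for i in rows}
    col_mean = {j: sum(M[i][j] for i in rows) / len(rows) for j in cols}
    all_mean = sum(row_mean[i] for i in rows) / len(rows)
    return sum((M[i][j] - row_mean[i] - col_mean[j] + all_mean) ** 2
               for i in rows for j in cols) / (len(rows) * len(cols))

def decode(bits):
    """A particle is a 0/1 vector selecting users first, then pages."""
    rows = [i for i in range(N_USERS) if bits[i]]
    cols = [j for j in range(N_PAGES) if bits[N_USERS + j]]
    return rows, cols

def fitness(bits):
    rows, cols = decode(bits)
    # Coherence (MSR) penalised, with a small reward for larger volume.
    return msr(rows, cols) - 0.01 * len(rows) * len(cols)

DIM, SWARM, ITERS = N_USERS + N_PAGES, 20, 60
W, C1, C2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients

particles = [[random.randint(0, 1) for _ in range(DIM)] for _ in range(SWARM)]
velocity = [[0.0] * DIM for _ in range(SWARM)]
pbest = [p[:] for p in particles]          # personal bests
gbest = min(particles, key=fitness)[:]     # global best

for _ in range(ITERS):
    for k in range(SWARM):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            v = (W * velocity[k][d]
                 + C1 * r1 * (pbest[k][d] - particles[k][d])
                 + C2 * r2 * (gbest[d] - particles[k][d]))
            velocity[k][d] = max(-4.0, min(4.0, v))  # clamp against saturation
            # Binary PSO: velocity sets the probability that the bit is 1.
            prob = 1.0 / (1.0 + math.exp(-velocity[k][d]))
            particles[k][d] = 1 if random.random() < prob else 0
        if fitness(particles[k]) < fitness(pbest[k]):
            pbest[k] = particles[k][:]
        if fitness(pbest[k]) < fitness(gbest):
            gbest = pbest[k][:]

rows, cols = decode(gbest)
print("users:", rows, "pages:", cols, "MSR:", round(msr(rows, cols), 3))
```

The key departure from real-valued PSO is the update rule: velocity is passed through a sigmoid and used as a bit-flip probability rather than added to the position, which keeps particles on the binary lattice of candidate biclusters.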