ON THE LOGIC, METHOD AND SCIENTIFIC DIVERSITY OF TECHNICAL SYSTEMS: AN INQUIRY INTO THE DIAGNOSTIC MEASUREMENT OF HUMAN SKIN
This dissertation explores the scientific, technical and cultural history of human skin measurement and diagnostics. Through a significant collection of primary texts and case studies, I track the changing technologies and methods used to measure skin, as well as their scientific and sociotechnical applications. I then map these histories onto the diverse understandings of the human body, physics, biology, natural philosophy and language that underpinned the scientific enterprise of skin measurement. The main argument of the thesis demonstrates how these diverse histories of science historically and theoretically inform the succeeding methods and applications for skin measurement, from early Greek medicine, to the beginnings of Anthropology as a scientific discipline, to the emergence of scientific racism, to the age of digital imaging analysis, remote sensing, algorithms, massive databases and biometric technologies. Further, these new digital applications go beyond health diagnostics and are creating new technical categorizations of human skin divorced from the established ethical mechanisms of modern science. Based on this research, I inquire how communication practices within the scientific enterprise address the ethical and historical implications of a growing set of digital biometric applications with industrial, military, sociopolitical and public functions.
Unsupervised word embeddings capture latent knowledge from materials science literature.
The overwhelming majority of scientific knowledge is published as text, which is difficult to analyse by either traditional statistical analysis or modern machine learning methods. By contrast, the main source of machine-interpretable data for the materials research community has come from structured property databases [1,2], which encompass only a small fraction of the knowledge present in the research literature. Beyond property values, publications contain valuable knowledge regarding the connections and relationships between data items as interpreted by the authors. To improve the identification and use of this knowledge, several studies have focused on the retrieval of information from scientific literature using supervised natural language processing [3-10], which requires large hand-labelled datasets for training. Here we show that materials science knowledge present in the published literature can be efficiently encoded as information-dense word embeddings [11-13] (vector representations of words) without human labelling or supervision. Without any explicit insertion of chemical knowledge, these embeddings capture complex materials science concepts such as the underlying structure of the periodic table and structure-property relationships in materials. Furthermore, we demonstrate that an unsupervised method can recommend materials for functional applications several years before their discovery. This suggests that latent knowledge regarding future discoveries is to a large extent embedded in past publications. Our findings highlight the possibility of extracting knowledge and relationships from the massive body of scientific literature in a collective manner, and point towards a generalized approach to the mining of scientific literature.
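As a rough sketch of the technique this abstract describes (not the authors' actual pipeline), unsupervised skip-gram embeddings can be trained on tokenized abstracts with gensim; the toy corpus and the query term below are invented for illustration:

```python
# Minimal sketch: training skip-gram word embeddings on tokenized abstracts
# with gensim, then probing the vector space for related terms.
from gensim.models import Word2Vec

# Hypothetical corpus: each abstract pre-tokenized into a list of words.
abstracts = [
    ["thermoelectric", "materials", "convert", "heat", "into", "electricity"],
    ["Bi2Te3", "is", "a", "well", "known", "thermoelectric", "material"],
    # ... thousands more abstracts in practice
]

model = Word2Vec(
    sentences=abstracts,
    vector_size=200,   # dimensionality of the embeddings
    window=8,          # context window
    min_count=1,       # keep rare tokens in this toy corpus
    sg=1,              # skip-gram training, common for this kind of task
)

# Terms close to an application keyword can suggest candidate materials.
print(model.wv.most_similar("thermoelectric", topn=5))
```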
Utilization of Scientific Publication Media to Improve the Quality of Scientific Work
The publication of scientific work is now essential for academics. The Minister of Administrative and Bureaucratic Reform (PAN RB) Regulation No. 17 of 2013 and the Minister of Education and Culture Regulation No. 92 of 2004 state that promotion to a higher academic rank requires lecturers to publish in accredited national journals and in internationally reputable journals in their field. In addition, through the Director General of Higher Education the government requires S1, S2 and S3 students to publish a summary of their scientific work, online or in print, as a graduation requirement. In response, Raharja College provides publication media for scientific works, particularly online, one of which is the iLearning Journal Center (iJC). To date, the iLearning Journal Center oversees 5 (five) journals with different research scopes. The current problem, however, is that the general public, especially within the higher-education environment, is still largely unaware of the iLearning Journal Center (iJC) as a publication medium for online scientific work. This study discusses the steps and methods taken to maximize the use of the iLearning Journal Center (iJC) as an online journal publication medium and to improve the quality and quantity of scientific works. The study uses the SWOT analysis method, models the system with the Unified Modeling Language (UML), and applies the Open Journal System (OJS), software for managing and publishing online journals. The result is a governance and management approach that can maximize the publication of online scientific works by the academic community.
Keywords: iLearning Journal Center (iJC), Scientific Work Publication, Online Journal, Open Journal System (OJS)
DAS: a data management system for instrument tests and operations
The Data Access System (DAS) is a metadata and data management software system, providing a reusable solution for the storage of data acquired both from telescopes and from auxiliary data sources during the instrument development phases and operations. It is part of the Customizable Instrument WorkStation system (CIWS-FW), a framework for the storage, processing and quick-look analysis of the data acquired from scientific instruments. The DAS provides a data access layer mainly targeted at software applications: quick-look displays, pre-processing pipelines and scientific workflows. It is logically organized in three main components: an intuitive and compact Data Definition Language (DAS DDL) in XML format, aimed at user-defined data types; an Application Programming Interface (DAS API), automatically adding classes and methods supporting the DDL data types and providing an object-oriented query language; and a data management component, which maps the metadata of the DDL data types onto a relational Data Base Management System (DBMS) and stores the data in a shared (network) file system. With the DAS DDL, developers define the data model for a particular project, specifying for each data type the metadata attributes, the data format and layout (if applicable), and named references to related or aggregated data types. Together with the DDL user-defined data types, the DAS API acts as the only interface to store, query and retrieve the metadata and data in the DAS system, providing both an abstract interface and a data-model-specific one in C, C++ and Python. The mapping of metadata in the back-end database is automatic and supports several relational DBMSs, including MySQL, Oracle and PostgreSQL.
Comment: Accepted for publication in the ADASS Conference Series.
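A minimal toy sketch of the storage architecture the abstract describes, using sqlite3 and the local file system as stand-ins for the relational DBMS and the shared network file system; none of this is DAS code, and the table and helper names are invented:

```python
# Toy sketch of the split the abstract describes: metadata for a
# user-defined data type goes into a relational database, while the bulk
# data payload is stored as a file on a (shared) file system.
import sqlite3
import pathlib

DATA_DIR = pathlib.Path("das_store")   # stand-in for the shared network FS
DATA_DIR.mkdir(exist_ok=True)

con = sqlite3.connect("das_meta.db")   # stand-in for MySQL/Oracle/PostgreSQL
con.execute("""CREATE TABLE IF NOT EXISTS raw_frame (
    id INTEGER PRIMARY KEY, instrument TEXT, obs_date TEXT, path TEXT)""")

def store(instrument: str, obs_date: str, payload: bytes) -> int:
    """Store metadata in the DBMS and the payload in the file store."""
    cur = con.execute(
        "INSERT INTO raw_frame (instrument, obs_date, path) VALUES (?, ?, '')",
        (instrument, obs_date))
    path = DATA_DIR / f"raw_frame_{cur.lastrowid}.dat"
    path.write_bytes(payload)
    con.execute("UPDATE raw_frame SET path = ? WHERE id = ?",
                (str(path), cur.lastrowid))
    con.commit()
    return cur.lastrowid

frame_id = store("SPEC-01", "2014-03-02", b"\x00" * 1024)
print(con.execute("SELECT * FROM raw_frame WHERE id = ?",
                  (frame_id,)).fetchone())
```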
Multi-Architecture Monte-Carlo (MC) Simulation of Soft Coarse-Grained Polymeric Materials: SOft coarse grained Monte-carlo Acceleration (SOMA)
Multi-component polymer systems are important for the development of new materials because of their ability to phase-separate or self-assemble into nano-structures. The Single-Chain-in-Mean-Field (SCMF) algorithm in conjunction with a soft, coarse-grained polymer model is an established technique to investigate these soft-matter systems. Here we present an implementation of this method: SOft coarse grained Monte-carlo Acceleration (SOMA). It is suitable to simulate large system sizes with up to billions of particles, yet versatile enough to study properties of different kinds of molecular architectures and interactions. We achieve efficient simulations by employing accelerators such as GPUs, on workstations as well as supercomputers. The implementation remains flexible and maintainable because it is written in a scientific programming language enhanced by OpenACC pragmas for the accelerators. We present implementation details and features of the program package, investigate the scalability of our implementation SOMA, and discuss two applications, which cover system sizes that are difficult to reach with other, common particle-based simulation methods.
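For readers unfamiliar with the underlying method, the toy sketch below shows a plain Metropolis Monte-Carlo move loop for a single coarse-grained bead-spring chain; it is not SOMA's SCMF algorithm and omits the mean-field density coupling and GPU acceleration entirely:

```python
# Toy single-chain Metropolis Monte-Carlo sketch in the spirit of soft,
# coarse-grained polymer models (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
N, k_bond, beta = 32, 3.0, 1.0      # beads, harmonic bond strength, 1/kT
pos = np.cumsum(rng.normal(scale=0.5, size=(N, 3)), axis=0)  # random-walk start

def bond_energy(p):
    """Harmonic bonds between successive beads: (k/2) * sum |r_{i+1}-r_i|^2."""
    bonds = np.diff(p, axis=0)
    return 0.5 * k_bond * np.sum(bonds ** 2)

accepted = 0
steps = 20000
for step in range(steps):
    i = rng.integers(N)                        # pick a random bead
    trial = pos.copy()
    trial[i] += rng.normal(scale=0.2, size=3)  # local displacement move
    dE = bond_energy(trial) - bond_energy(pos)
    if dE <= 0 or rng.random() < np.exp(-beta * dE):  # Metropolis criterion
        pos, accepted = trial, accepted + 1

end_to_end = np.linalg.norm(pos[-1] - pos[0])
print(f"acceptance: {accepted / steps:.2f}, end-to-end distance: {end_to_end:.2f}")
```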
LINGUOCULTURAL PECULIARITIES OF ABBREVIATIONS IN THE POLITICAL DISCOURSE
The purpose of the article: to define the linguocultural peculiarities of abbreviations in political discourse.
Materials and methods: The leading approach to the study of this problem is scientific. General scientific research methods were used: a descriptive-analytical method, the method of continuous sampling, and contextual analysis.
Results of the research: A detailed investigation of the active language processes in modern English electronic media is relevant. Abbreviations are language tools that help create a picture of the day, and the newspaper is the first source in which new abbreviations are fixed. In English-language electronic newspapers, generally accepted abbreviations are used in political discourse, and the use of abbreviations in political articles is outlined. The materials of the article can be useful for undergraduate, Master's and postgraduate students of English. Data on the "language picture of the world" of the analyzed linguistic and cultural community can be applied in the methodology and teaching practice of foreign languages.
Applications: This research can be of use to universities, teachers, and students.
Novelty/Originality: In this research, the model of linguocultural peculiarities of abbreviations in political discourse is presented in a comprehensive and complete manner.
Discovering Patterns of Definitions and Methods from Scientific Documents
The difficulties of automatic extraction of definitions and methods from scientific documents lie in two aspects: (1) the complexity and diversity of natural language texts, which require an analysis method that supports the discovery of patterns; and (2) a complete definition or method presented in a scientific paper is usually distributed across the text, so an effective approach should not only extract single-sentence definitions and methods but also integrate the sentences to obtain a complete definition or method. This paper proposes an analysis method for discovering patterns of definitions and methods and applies it to discover such patterns. Completeness of the patterns at the semantic level is guaranteed by a complete set of semantic relations that identify definitions and methods respectively. Completeness of the patterns at the syntactic and lexical levels is guaranteed by syntactic and lexical constraints. Experiments on a self-built dataset and two public definition datasets show that the discovered patterns are effective. The patterns can be used to extract definitions and methods from scientific documents and can be tailored or extended to suit other applications.
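A minimal sketch of what lexico-syntactic extraction patterns of this kind can look like; the regular expressions and example text below are illustrative stand-ins, not the patterns discovered in the paper:

```python
# Illustrative lexico-syntactic patterns for pulling candidate definition
# and method sentences out of text.
import re

DEFINITION_PATTERNS = [
    re.compile(r"\b(?P<term>[A-Z][\w-]*(?:\s[\w-]+){0,3})\s+is defined as\s+(?P<body>[^.]+)\."),
    re.compile(r"\b(?P<term>[A-Z][\w-]*(?:\s[\w-]+){0,3})\s+refers to\s+(?P<body>[^.]+)\."),
]
METHOD_PATTERNS = [
    re.compile(r"\bwe (?:propose|present|use)\s+(?P<body>[^.]+)\.", re.IGNORECASE),
]

text = ("A word embedding is defined as a vector representation of a word. "
        "We propose an unsupervised approach to mine such representations.")

for pat in DEFINITION_PATTERNS:
    for m in pat.finditer(text):
        print("definition:", m.group("term"), "->", m.group("body"))
for pat in METHOD_PATTERNS:
    for m in pat.finditer(text):
        print("method:", m.group("body"))
```

In a full pipeline, sentences matched by such patterns would then be linked and merged to recover definitions and methods that span several sentences, as the abstract describes.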
Saiph, a domain specific language for computational fluid dynamics simulations
Nowadays, High-Performance Computing (HPC) is assuming an increasingly central role in scientific research, while computer architectures are becoming more and more heterogeneous and use different parallel programming models and techniques. Under this scenario, successfully exploiting an HPC system requires that computer and domain scientists work closely towards producing applications that solve domain problems, ensuring productivity and performance at the same time. To this end, Saiph is a Domain Specific Language designed to ease the task of solving coupled and uncoupled Partial Differential Equations (PDEs), with a primary focus on Computational Fluid Dynamics (CFD) applications. Saiph allows users to model complex physical phenomena described by PDEs, easing the use of numerical methods and optimizations on different computer architectures.
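As a generic illustration (not Saiph code) of the kind of low-level kernel such a PDE-level DSL spares its users from writing by hand, here is an explicit finite-difference step for the 1D heat equation:

```python
# Explicit Euler step for the 1D heat equation u_t = alpha * u_xx,
# the sort of kernel a PDE-level DSL generates and optimizes for the user.
import numpy as np

nx, alpha, dx, dt = 101, 0.01, 0.01, 0.004   # dt chosen so alpha*dt/dx**2 <= 0.5
u = np.zeros(nx)
u[nx // 2] = 1.0                             # initial heat spike in the middle

for _ in range(1000):
    # central second difference in space, forward difference in time
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    u[0] = u[-1] = 0.0                       # Dirichlet boundary conditions

print(f"peak temperature after diffusion: {u.max():.4f}")
```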
Polyhedral+Dataflow Graphs
This research presents an intermediate compiler representation that is designed for optimization, and emphasizes the temporary storage requirements and execution schedule of a given computation to guide optimization decisions. The representation is expressed as a dataflow graph that describes computational statements and data mappings within the polyhedral compilation model. The targeted applications include both the regular and irregular scientific domains.
The intermediate representation can be integrated into existing compiler infrastructures. A specification language, implemented as a domain specific language in C++, describes the graph components and the transformations that can be applied. The visual representation allows users to reason about optimizations, and graph variants can be translated into source code or other representations. The language, intermediate representation, and associated transformations have been applied to improve the performance of differential equation solvers, sparse matrix operations, tensor decomposition, and structured multigrid methods.
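A toy sketch of the general idea, with statements as graph nodes, data mappings as edges, and a topological sort standing in for an execution schedule; the graph and names are invented and this is not the paper's C++ specification language:

```python
# Toy dataflow graph of computational statements; a topological order of
# the nodes gives one legal execution schedule.
from collections import defaultdict, deque

edges = {  # producer statement -> consumer statements, via named arrays
    "init_A":  ["stencil"],       # writes A, read by the stencil statement
    "init_B":  ["stencil"],
    "stencil": ["reduce"],        # writes C, read by the reduction
    "reduce":  [],
}

def schedule(graph):
    """Kahn's algorithm: returns one legal execution order of the statements."""
    indeg = defaultdict(int)
    for src in graph:
        for dst in graph[src]:
            indeg[dst] += 1
    ready = deque(n for n in graph if indeg[n] == 0)
    order = []
    while ready:
        n = ready.popleft()
        order.append(n)
        for dst in graph[n]:
            indeg[dst] -= 1
            if indeg[dst] == 0:
                ready.append(dst)
    return order

print(schedule(edges))   # e.g. ['init_A', 'init_B', 'stencil', 'reduce']
```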