AN ERROR ANALYSIS OF SCIENTIFIC PAPERS WRITTEN BY PUBLIC HEALTH STUDY PROGRAM STUDENTS OF STIK BINA HUSADA PALEMBANG
One of the requirements students must meet in order to graduate is to write a thesis. Scientific writing is written work arranged systematically, according to established conventions, from the results of scientific thinking. There are several types of scientific reports, namely research reports, scientific papers, dissertations, and proposals. The abstract is an important part of a scientific work, located on its first page: a brief description of the work covering the background, research questions, methods used, results obtained, conclusions, and recommendations. This paper discusses the results of an analysis of the errors in the English abstracts written by postgraduate students of the Public Health Study Program at STIK Bina Husada Palembang for the Scientific Writing course. The research design was observational with a descriptive approach: data were collected through observation of scientific documents in order to assess how appropriately the English abstracts were written. All (100%) of the abstracts were appropriate in giving background detail, 70% were appropriate in describing the research activity and stating the conclusion, 60% in reporting results, and only 40% in describing the methods. Google Translate sometimes cannot convert word meanings appropriately, such as Posyandu, Gasurkes, and Perwali; students must therefore recheck their work and not rely solely on the translation application.
Keywords: error, analysis, scientific paper
HUDDL for description and archive of hydrographic binary data
Many of the attempts to introduce a universal hydrographic binary data format have failed or have been only partially successful. In essence, this is because such formats either have to simplify the data to such an extent that they only support the lowest common subset of all the formats covered, or they attempt to be a superset of all formats and quickly become cumbersome. Neither choice works well in practice. This paper presents a different approach: a standardized description of (past, present, and future) data formats using the Hydrographic Universal Data Description Language (HUDDL), a descriptive language implemented using the Extensible Markup Language (XML). That is, XML is used to provide a structural and physical description of a data format, rather than the content of a particular file. Done correctly, this opens the possibility of automatically generating both multi-language data parsers and documentation for format specifications based on their HUDDL descriptions, as well as providing easy version control of them. This solution also provides a powerful approach for archiving a structural description of data along with the data themselves, so that binary data will remain easy to access in the future. Intending to provide a relatively low-effort way to index the wide range of existing formats, we suggest the creation of a catalogue of format descriptions, each capturing the logical and physical specifications for a given data format (with its subsequent upgrades). A C/C++ parser code generator is used as an example prototype of one of the possible advantages of adopting such a hydrographic data format catalogue.
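To make the idea concrete, here is a minimal sketch, in Python rather than the C/C++ generator the paper prototypes, of how an XML format description can drive a generic binary decoder. The element names (format, field), the type vocabulary, and the toy sounding record are hypothetical stand-ins, not actual HUDDL syntax.

    # A minimal sketch of the HUDDL idea with a hypothetical XML vocabulary:
    # an XML description of a binary record drives a generic decoder.
    import struct
    import xml.etree.ElementTree as ET

    # Hypothetical HUDDL-style description of a tiny sounding record.
    FORMAT_XML = """
    <format name="toy_sounding" endian="little">
      <field name="ping_id"  type="u32"/>
      <field name="depth_m"  type="f64"/>
      <field name="beam_num" type="u16"/>
    </format>
    """

    # Map the description's type names onto struct format codes.
    TYPE_CODES = {"u32": "I", "f64": "d", "u16": "H"}

    def build_decoder(xml_text):
        root = ET.fromstring(xml_text)
        order = "<" if root.get("endian") == "little" else ">"
        fields = root.findall("field")
        names = [f.get("name") for f in fields]
        fmt = order + "".join(TYPE_CODES[f.get("type")] for f in fields)
        return names, struct.Struct(fmt)

    names, codec = build_decoder(FORMAT_XML)
    raw = codec.pack(42, 123.5, 7)              # stand-in for bytes read from disk
    print(dict(zip(names, codec.unpack(raw))))  # {'ping_id': 42, 'depth_m': 123.5, 'beam_num': 7}

The same description that drives the decoder could equally drive documentation generation or emit C/C++ parser source, which is the advantage the catalogue approach aims for.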
Huddl: the Hydrographic Universal Data Description Language
Since many of the attempts to introduce a universal hydrographic data format have failed or have been only partially successful, a different approach is proposed. Our solution is the Hydrographic Universal Data Description Language (HUDDL), a descriptive XML-based language that permits the creation of a standardized description of (past, present, and future) data formats, and allows for applications like HUDDLER, a compiler that automatically creates drivers for data access and manipulation. HUDDL also represents a powerful solution for archiving data along with their structural description, as well as for cataloguing existing format specifications and their version control. HUDDL is intended to be an open, community-led initiative to simplify the issues involved in hydrographic data access.
Leveraging legacy codes to distributed problem solving environments: A web service approach
This paper describes techniques used to leverage high performance legacy codes as CORBA components in a distributed problem solving environment. It first briefly introduces the software architecture adopted by the environment. It then presents a CORBA oriented wrapper generator (COWG) which can be used to automatically wrap high performance legacy codes as CORBA components. Two legacy codes have been wrapped with COWG: one is an MPI-based molecular dynamics simulation (MDS) code, the other a finite-element-based computational fluid dynamics (CFD) code for simulating incompressible Navier-Stokes flows. Performance comparisons between runs of the MDS CORBA component and the original MDS legacy code on a cluster of workstations and on a parallel computer are also presented. Wrapped as CORBA components, these legacy codes can be reused in a distributed computing environment. The first case shows that high performance can be maintained with the wrapped MDS component. The second case shows that a Web user can submit a task to the wrapped CFD component through a Web page without knowing the exact implementation of the component. In this way, a user's desktop computing environment can be extended to a high performance computing environment using a cluster of workstations or a parallel computer.
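As an illustration of the wrapping idea, the sketch below hides a legacy command-line code behind a small Python class, in the spirit of COWG's generated wrappers. It is deliberately not CORBA, and the executable name, its flags, and the mpirun invocation are hypothetical.

    # Not actual CORBA: a minimal stand-in for the wrapper idea, exposing a
    # legacy solver behind a small interface so callers need not know how
    # the component is implemented. Paths and flags are hypothetical.
    import subprocess

    class LegacySolverComponent:
        """Wraps a legacy binary, analogous to COWG wrapping codes as components."""

        def __init__(self, executable="./mds_solver"):  # hypothetical executable
            self.executable = executable

        def run(self, input_file, n_procs=4):
            # Launch the MPI-based legacy code as a black box and return its output.
            cmd = ["mpirun", "-np", str(n_procs), self.executable, input_file]
            result = subprocess.run(cmd, capture_output=True, text=True, check=True)
            return result.stdout

    # A client submits a task without knowing the component's implementation:
    #   out = LegacySolverComponent().run("protein.in", n_procs=8)

A real CORBA component would additionally publish this interface through an IDL definition and an object request broker; the wrapper generator's job is to produce that plumbing automatically.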
Extracting and re-using research data from chemistry e-theses: the SPECTRa-T project
Scientific e-theses are data-rich resources, but much of the information they contain is not readily accessible. For chemistry, the SPECTRa-T project has addressed this problem by developing data-mining techniques to extract experimental data, creating RDF (Resource Description Framework) triples for exposure to sophisticated Semantic Web searches.
We used OSCAR3, an open-source chemistry text-mining tool, to parse and extract data from theses in PDF and in Office Open XML (.docx) document format.
Theses in PDF suffered data corruption and a loss of formatting that prevented the identification of chemical objects. Theses in .docx yielded semantically rich SciXML that enabled the additional extraction of associated data. Chemical objects were placed in a data repository, and RDF triples deposited in a triplestore.
Data-mining from chemistry e-theses is both desirable and feasible; but the use of PDF, the de facto standard format for deposit in most repositories, prevents the optimal extraction of data for semantic querying. In order to facilitate this, we recommend that universities also require deposition of chemistry e-theses in an XML document format. Further work is required to clarify the complex IPR issues and ensure that they do not become an unwarranted barrier to data extraction and re-use.
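As a concrete illustration, the sketch below builds and serializes a few RDF triples with the rdflib Python library; the namespace, URIs, and melting-point predicate are invented for the example and are not the SPECTRa-T project's actual vocabulary.

    # A minimal sketch of depositing extracted chemistry data as RDF triples.
    # The namespace and predicates below are illustrative assumptions.
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDF

    CHEM = Namespace("http://example.org/chem#")  # hypothetical vocabulary

    g = Graph()
    compound = URIRef("http://example.org/thesis42#compound7")
    g.add((compound, RDF.type, CHEM.Compound))
    g.add((compound, CHEM.meltingPointCelsius, Literal(134.5)))
    g.add((compound, CHEM.extractedFrom, URIRef("http://example.org/thesis42")))

    # Turtle output of this graph is what a triplestore would ingest,
    # making the extracted data available to Semantic Web queries.
    print(g.serialize(format="turtle"))

Once deposited, such triples can be queried with SPARQL, which is what makes the extracted experimental data searchable in ways the original PDF never was.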
IVOA Recommendation: IVOA Photometry Data Model
The Photometry Data Model (PhotDM) standard describes photometry filters, photometric systems, magnitude systems, and zero points, and their interrelation with the other IVOA data models, through a simple data model. Particular attention is necessarily given to optical photometry, where specifications of magnitude systems and photometric zero points are required to convert photometric measurements into physical flux density units.
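As a worked example of the zero-point relation such a model must capture, the snippet below converts a magnitude to a physical flux density via f = f0 * 10^(-0.4 m). It is a minimal sketch using the AB-system zero point of 3631 Jy, not code from the PhotDM standard.

    # Convert a magnitude to flux density using a photometric zero point:
    # f = f0 * 10**(-0.4 * m). The default is the AB-system zero point.
    def magnitude_to_flux_density(mag, zero_point_jy=3631.0):
        """Return flux density in janskys for the given magnitude."""
        return zero_point_jy * 10 ** (-0.4 * mag)

    print(magnitude_to_flux_density(0.0))   # 3631.0 Jy, by definition of the zero point
    print(magnitude_to_flux_density(20.0))  # ~3.631e-05 Jy

The point of the data model is precisely that the zero point and magnitude system are explicit, machine-readable metadata, so a conversion like this can be done automatically for any filter.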
The Grid[Way] Job Template Manager, a tool for parameter sweeping
Parameter sweeping is a widely used algorithmic technique in computational science. It is especially well suited to high-throughput computing, since the jobs evaluating the parameter space are loosely coupled or independent.

A tool that integrates the modeling of a parameter study with the control of jobs in a distributed architecture is presented. Its main task is to facilitate the creation and deletion of job templates, the elements describing the jobs to be run. Extra functionality relies upon the GridWay Metascheduler, acting as the middleware layer for job submission and control. The tool supports features such as a multi-dimensional sweeping space, wildcarding of parameters, functional evaluation of ranges, value-skipping, and automatic indexation of job templates.

The use of this tool increases the reliability of a parameter sweep study thanks to the systematic bookkeeping of job templates and their respective job statuses. Furthermore, it simplifies porting the target application to the grid, reducing the required amount of time and effort.
Comment: 26 pages, 1 figure
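As a minimal sketch of what multi-dimensional sweeping with value-skipping and automatic indexation can look like, the Python snippet below expands a hypothetical job template over a two-dimensional parameter space. The template text and the skip rule are invented, and a real run would hand the generated files to GridWay for submission.

    # Expand a hypothetical job template over a 2-D parameter space,
    # with value-skipping and automatic numbering of the templates.
    from itertools import product

    temperatures = range(100, 401, 100)  # functional evaluation of a range
    pressures = [1.0, 2.5, 5.0]

    TEMPLATE = "EXECUTABLE=sim\nARGUMENTS=-T {T} -P {P}\n"  # hypothetical template

    templates = []
    for T, P in product(temperatures, pressures):  # multi-dimensional sweep space
        if T == 400 and P == 5.0:                  # value-skipping: drop one point
            continue
        templates.append(TEMPLATE.format(T=T, P=P))

    # Automatic indexation: one numbered template per point in the swept space.
    for i, body in enumerate(templates):
        print(f"--- job_{i}.jt ---\n{body}")

Keeping the expansion and numbering systematic is what gives the bookkeeping benefit the abstract describes: every job file maps unambiguously to one point in the parameter space.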