67,402 research outputs found

    Toward the consolidation of a multi-metric-based journal ranking and categorization system for computer science subject areas

    The evaluation of scientific journals is challenging because of the many available impact measures: journal ranking is a multidimensional construct that cannot be assessed effectively with a single metric such as an impact factor. A few studies have proposed an ensemble of metrics to prevent the bias induced by any individual metric. In this study, a multi-metric journal ranking method based on the standardized average index (SA index) was adopted to develop an extended standardized average index (ESA index). The ESA index utilizes six metrics: the CiteScore, Source Normalized Impact per Paper (SNIP), SCImago Journal Rank (SJR), Hirsch index (H-index), Eigenfactor Score, and Journal Impact Factor, drawn from three well-known databases (Scopus, SCImago Journal & Country Rank, and Web of Science). Experiments were conducted in two computer science subject areas: (1) artificial intelligence and (2) computer vision and pattern recognition. Comparing the multi-metric journal ranking with the SA index showed that the ESA index correlates highly with all other indicators and significantly outperforms the SA index. To further evaluate the model and determine the aggregate impact of bibliometric indices with the ESA index, we employed unsupervised machine learning techniques: clustering coupled with principal component analysis (PCA) and t-distributed stochastic neighbor embedding (t-SNE). These techniques were used to measure the clustering impact of the bibliometric indicators on both the complete and the reduced sets of bibliometric features. Furthermore, the ESA index results were compared with those of other ranking systems, including the internationally recognized Scopus and SJR rankings and the HEC Journal Recognition System (HJRS) used in Pakistan. These comparisons demonstrate that the multi-metric ESA index can serve as a valuable reference for publishers, journal editors, researchers, policymakers, librarians, and practitioners in journal selection, decision making, and professional assessment.
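The SA/ESA family of indices rests on a simple idea: standardize each metric across journals so their scales become comparable, then average the standardized values per journal. A minimal sketch of that aggregation step, with entirely hypothetical metric values (the actual ESA index normalization and weighting may differ):

```python
import statistics

# Hypothetical values for three journals across six indicators
# (CiteScore, SNIP, SJR, H-index, Eigenfactor, JIF) -- illustrative only.
journals = {
    "Journal A": [12.1, 3.2, 2.9, 180, 0.05, 8.3],
    "Journal B": [5.4, 1.1, 0.8, 95, 0.01, 3.0],
    "Journal C": [8.7, 2.0, 1.5, 130, 0.02, 5.1],
}

def standardized_average(data):
    """Z-score each metric across journals, then average per journal."""
    names = list(data)
    cols = list(zip(*data.values()))          # one tuple per metric
    zcols = []
    for col in cols:
        mu, sd = statistics.mean(col), statistics.pstdev(col)
        zcols.append([(x - mu) / sd for x in col])
    rows = list(zip(*zcols))                  # back to one row per journal
    return {n: sum(r) / len(r) for n, r in zip(names, rows)}

scores = standardized_average(journals)
ranking = sorted(scores, key=scores.get, reverse=True)
```

Sorting journals by the resulting score yields the multi-metric ranking; z-scoring keeps a metric with a large numeric range (such as the H-index) from dominating the average.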

    Experience with the EURECA Packet Telemetry and Packet Telecommand system

    The European Retrievable Carrier (EURECA) was launched on its first flight on 31 July 1992 and retrieved on 29 June 1993. EURECA featured several new on-board capabilities, most notably packet telemetry and a partial implementation of packet telecommanding, making it the first ESA packetised spacecraft. Today, more than one year after retrieval, the data from the EURECA mission has largely been analysed, and we can present some of the interesting results. This paper concentrates on the implementation and operational experience with the EURECA Packet Telemetry and Packet Telecommand systems. We discovered already during the design of the ground system that the use of packet telemetry has a major impact on the overall design, and that processing of packet telemetry can significantly affect computer loading and sizing. During the mission a number of problems were identified with the on-board implementation, resulting in anomalous behaviour; many of these problems directly violated basic assumptions made in the design of the ground segment, compounding the anomalies. The paper shows that the design of a packet telemetry system should be flexible enough to allow rapid reconfiguration of the telemetry processing in order to adapt to a new situation in case of an on-board failure. The experience gained with EURECA mission control should be used to improve ground systems for future missions.
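As an illustration of what ground-segment packet processing involves, the following sketch decodes the 6-byte primary header of a CCSDS space packet, the generic standard format on which ESA packetisation is based (EURECA's exact on-board implementation may have differed):

```python
import struct

def parse_ccsds_primary_header(raw: bytes):
    """Decode the 6-byte CCSDS space packet primary header.

    Fields: version (3 bits), type (1), secondary-header flag (1),
    APID (11), sequence flags (2), sequence count (14), and a 16-bit
    length field that stores the data field length minus one.
    """
    w1, w2, length = struct.unpack(">HHH", raw[:6])
    return {
        "version": w1 >> 13,
        "type": (w1 >> 12) & 0x1,
        "sec_hdr_flag": (w1 >> 11) & 0x1,
        "apid": w1 & 0x07FF,
        "seq_flags": w2 >> 14,
        "seq_count": w2 & 0x3FFF,
        "data_length": length + 1,   # convention: field stores length - 1
    }

# Example: APID 0x123, unsegmented (flags 0b11), count 42, 10-byte data field
hdr = parse_ccsds_primary_header(bytes([0x09, 0x23, 0xC0, 0x2A, 0x00, 0x09]))
```

The APID is what lets the ground system route each packet to the correct processing chain, which is one reason packetisation reshapes ground-segment design and sizing.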

    Automatic coding of short text responses via clustering in educational assessment

    Automatic coding of short text responses opens new doors in assessment. We implemented and integrated baseline methods of natural language processing and statistical modelling by means of software components that are available under open licenses. The accuracy of automatic text coding is demonstrated using data collected in the Programme for International Student Assessment (PISA) 2012 in Germany. Free-text responses to 10 items, with Formula responses in total, were analyzed. We further examined the effect of different methods, parameter values, and sample sizes on the performance of the implemented system. The system reached fair to good, and in some cases excellent, agreement with human codings (Formula). In particular, items that are solved by naming specific semantic concepts appeared to be coded properly. The system performed equally well with Formula, and somewhat poorer but still acceptably down to Formula. Based on our findings, we discuss potential innovations for assessment that are enabled by automatic coding of short text responses. (DIPF/Orig.)
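One simple instance of the underlying idea (not the authors' actual pipeline) is to represent responses as bag-of-words vectors and assign each response the code of the most similar human-coded seed response by cosine similarity. All responses and labels below are invented for illustration:

```python
from collections import Counter
import math

def bow(text):
    """Bag-of-words term frequencies."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-frequency Counters."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

responses = [
    "the area of the rectangle is length times width",
    "area equals length multiplied by width",
    "i do not know",
    "i really do not know",
]
# Seed responses a human has already coded (hypothetical labels)
seeds = {"correct": bow(responses[0]), "missing": bow(responses[2])}

# Each response inherits the code of its nearest seed
codes = [max(seeds, key=lambda c: cosine(bow(r), seeds[c])) for r in responses]
```

A production system would add tokenization, stemming, and a proper statistical model, but the nearest-neighbour assignment above is the core of clustering-based coding: responses that name the same semantic concepts land near the same seed.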

    LARES-lab: a thermovacuum facility for research and e-learning. Tests of LARES satellite components and small payloads for e-learning

    LARES, an Italian Space Agency satellite, was launched successfully in 2012. A small thermovacuum facility was designed and built specifically for performing tests on the optical components of the satellite. Owing to the extremely demanding performance of the optical cube-corner reflectors, space conditions were simulated using the most up-to-date technology available; in particular, the Sun, the Earth, and deep space can be simulated in an ultra-high vacuum. It is planned to automate the facility so that it can be operated remotely over the internet. Students during lectures, and researchers from home, will be able to perform thermal tests on specimens by exposing them, for a specified amount of time, toward the Earth, the Sun, or deep space. They will collect pressures and temperatures and will input additional thermal power through resistive heaters. The paper first describes the facility and its capabilities, showing the tests performed on LARES satellite components, but focuses mainly on the planned upgrades that improve its remote use both for research and for e-learning.

    An investigation into reducing the spindle acceleration energy consumption of machine tools

    Machine tools are widely used in the manufacturing industry and consume a large amount of energy. Spindle acceleration occurs frequently while machine tools are working; it produces a power peak that is highly energy-intensive. As a result, a considerable amount of energy is consumed by this acceleration during the use phase of machine tools. However, the energy consumption of spindle acceleration is still poorly understood. This research therefore aims to model the spindle acceleration energy consumption of computer numerical control (CNC) lathes and to investigate potential approaches to reduce this part of the consumption. The proposed model is based on the principle of spindle motor control and includes the calculation of the moment of inertia of the spindle drive system. Experiments on a CNC lathe were carried out to validate the proposed model, and approaches for reducing the spindle acceleration energy consumption were developed. On the machine level, the approaches include avoiding unnecessary stopping and restarting of the spindle, shortening the acceleration time, lightweight design, and proper use and maintenance of the spindle. On the system level, a machine tool selection criterion is developed for energy saving. Results show that energy consumption can be reduced by 10.6% to more than 50% using these approaches, most of which are practical and easy to implement.
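The physical core of such a model is the rotational kinetic energy ½Jω² that the drive must supply each time the spindle accelerates from rest, scaled by drive efficiency. A minimal sketch with illustrative values (the inertia, speed, and efficiency below are hypothetical, not taken from the paper):

```python
import math

def spindle_accel_energy(J, n_rpm, efficiency=0.85):
    """Approximate electrical energy (joules) drawn to accelerate a
    spindle drive system with moment of inertia J (kg*m^2) from rest
    to n_rpm, assuming a constant overall drive efficiency."""
    omega = 2 * math.pi * n_rpm / 60.0   # spindle speed in rad/s
    return 0.5 * J * omega ** 2 / efficiency

# Illustrative values: J = 0.08 kg*m^2, accelerating 0 -> 3000 rpm
e = spindle_accel_energy(0.08, 3000)   # roughly 4.6 kJ per start
```

It makes clear why avoiding unnecessary spindle stops and restarts saves energy: every restart pays the full ½Jω² again, and lightweight design saves energy by reducing J itself.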

    The natural history of bugs: using formal methods to analyse software related failures in space missions

    Space missions force engineers to make complex trade-offs between many different constraints including cost, mass, power, functionality and reliability. These constraints create a continual need to innovate. Many advances rely upon software, for instance to control and monitor the next generation ‘electron cyclotron resonance’ ion-drives for deep space missions. Programmers face numerous challenges. It is extremely difficult to conduct valid ground-based tests for the code used in space missions. Abstract models and simulations of satellites can be misleading. These issues are compounded by the use of ‘band-aid’ software to fix design mistakes and compromises in other aspects of space systems engineering. Programmers must often re-code missions in flight. This introduces considerable risks. It should, therefore, not be a surprise that so many space missions fail to achieve their objectives. The costs of failure are considerable. Small launch vehicles, such as the U.S. Pegasus system, cost around $18 million. Payloads range from $4 million up to $1 billion for security-related satellites. These costs do not include consequent business losses. In 2005, Intelsat wrote off $73 million from the failure of a single uninsured satellite. It is clearly important that we learn as much as possible from those failures that do occur. The following pages examine the roles that formal methods might play in the analysis of software failures in space missions.

    THE COLUMBUS GROUND SEGMENT – A PRECURSOR FOR FUTURE MANNED MISSIONS

    In the beginning, space programmes were self-standing national activities, often in competition with other nations. Today, space flight is becoming more and more an international task. Complex space missions and deep space explorations can no longer be carried out by one agency or nation alone; they are joint activities of several nations. The best current example of such a joint (ad-)venture is the International Space Station (ISS). Such international activities define completely new requirements for the supporting ground segments. The world-wide distribution of a ground segment is no longer limited to a network of ground stations with the aim of providing good coverage of the spacecraft; the coverage is sometimes – as for the ISS – ensured by using a relay satellite system instead. In addition to the enhanced down- and uplink methods, a ground segment must connect the different centres of competence of all participating agencies/nations. From the spacecraft operations point of view, such transnational ground segments are required to support distributed and shared operations in a predefined decision/commanding hierarchy. This has to be taken into account in the technical topology as well as in the operational set-up and teaming. Last but not least, mission durations are increasing, which requires a certain flexibility of the ground segment and long-term maintenance strategies with a special emphasis on non-intrusive replacements. The Russian space station MIR was in orbit for about 15 years; the ISS is currently targeted for 2020, which would mean over 20 years in space.

    Metadata impact on research paper similarity

    While collaborative filtering and citation analysis have been well studied for research paper recommender systems, content-based approaches typically restrict themselves to a straightforward application of the vector space model. However, various types of metadata containing potentially useful information are usually available as well. Our work explores several methods to exploit this information in combination with different similarity measures.
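As a hedged sketch of the general approach (the specific measures and weights in the paper are not reproduced here), one can combine a text similarity over titles with a set overlap over author metadata:

```python
from collections import Counter
import math

def bow(text):
    """Bag-of-words term frequencies."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-frequency Counters."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def jaccard(a, b):
    """Set-overlap similarity, e.g. for author lists."""
    return len(a & b) / len(a | b) if a | b else 0.0

def paper_similarity(p, q, w_text=0.7, w_auth=0.3):
    """Weighted combination of title cosine similarity and author
    Jaccard overlap; the weights are illustrative assumptions."""
    return (w_text * cosine(bow(p["title"]), bow(q["title"]))
            + w_auth * jaccard(set(p["authors"]), set(q["authors"])))

p = {"title": "research paper recommender systems",
     "authors": ["A. Smith", "B. Jones"]}
q = {"title": "content based recommender systems",
     "authors": ["B. Jones", "C. Lee"]}
sim = paper_similarity(p, q)
```

The choice of cosine plus Jaccard and the 0.7/0.3 weighting are placeholders; in practice each metadata field would get its own measure and tuned weight.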