75,878 research outputs found
Profiling a decade of information systems frontiers' research
This article analyses the first ten years of research published in Information Systems Frontiers (ISF), from 1999 to 2008. The analysis of the published material includes examining variables such as most productive authors, citation analysis, universities associated with the most publications, geographic diversity, authors' backgrounds and research methods. The keyword analysis suggests that ISF research has evolved from establishing the concepts and domain of information systems (IS), technology and management to contemporary issues such as outsourcing, web services and security. The analysis presented in this paper has identified intellectually significant studies that have contributed to the development and accumulation of the intellectual wealth of ISF. The analysis has also identified authors published in other journals whose work largely shaped and guided the research published in ISF. This research has implications for researchers, journal editors, and research institutions.
Profiling research published in the journal of enterprise information management (JEIM)
Purpose – The purpose of this paper is to analyse research published in the Journal of Enterprise Information Management (JEIM) in the last ten years (1999 to 2008).
Design/methodology/approach – Employing a profiling approach, the analysis of the 381 JEIM publications includes examining variables such as the most active authors, geographic diversity, authors' backgrounds, co-author analysis, research methods and keyword analysis.
Findings – All the findings relate to the period of analysis (1999 to 2008). (a) Research categorised under descriptive, theoretical and conceptual methods is the most dominant research approach followed by JEIM authors, followed by case study research. (b) The largest proportion of contributions came from researchers and practitioners with an information systems background, followed by those with a background in business and computer science and IT. (c) The keyword analysis suggests that "information systems", "electronic commerce", "internet", "logistics", "supply chain management", "decision making", "small to medium-sized enterprises", "information management", "outsourcing", and "modelling" were the most frequently investigated keywords. (d) The paper presents and discusses the findings obtained from the citation analysis that determines the impact of the research published in the JEIM.
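The keyword-frequency ranking described in finding (c) amounts to counting keyword occurrences across all analysed articles. A minimal sketch of that counting step, using hypothetical keyword lists rather than the JEIM dataset itself:

```python
from collections import Counter

# Hypothetical per-article keyword lists; placeholders, not the JEIM data.
article_keywords = [
    ["information systems", "internet"],
    ["electronic commerce", "internet"],
    ["information systems", "supply chain management"],
]

# Flatten all keyword lists and tally how often each keyword appears.
freq = Counter(kw for kws in article_keywords for kw in kws)
print(freq.most_common(2))  # [('information systems', 2), ('internet', 2)]
```

Sorting the tally by frequency yields the "most frequently investigated keywords" ranking reported in the findings.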
Originality/value – The primary value of this paper lies in extending the understanding of the evolution and patterns of IS research. This has been achieved by analysing and synthesising existing JEIM publications.
Classification of information systems research revisited: A keyword analysis approach
A number of studies have previously been conducted on keyword analysis in order to provide a comprehensive scheme to classify information systems (IS) research. However, these studies appeared prior to 1994, and IS research has clearly developed substantially since then with the emergence of areas such as electronic commerce, electronic government, electronic health and numerous others. Furthermore, the majority of European IS outlets - such as the European Journal of Information Systems and Information Systems Journal - were founded in the early 1990s, and keywords from these journals were not included in any previous work. Given that a number of studies have raised the issue of differences in European and North American IS research topics and approaches, it is arguable that any such analysis must consider sources from both locations to provide a representative and balanced view of IS classification. Moreover, it has also been argued that there is a need for further work in order to create a comprehensive keyword classification scheme reflecting the current state of the art. Consequently, the aim of this paper is to present the results of a keyword analysis utilizing keywords appearing in major peer-reviewed IS publications from 1990 through 2007. This aim is realized by means of the two following objectives: (1) collect all keywords appearing in 24 peer-reviewed IS journals after 1990; and (2) identify keywords not included in the previous IS keyword classification scheme. This paper also describes further research required in order to place new keywords in appropriate IS research categories. The paper makes an incremental contribution toward a contemporary means of classifying IS research. This work is important and useful for researchers in understanding the area and evolution of the IS field and also has implications for improving information search and retrieval activities.
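Objective (2) above is, at its core, a set difference: keywords collected from the journals minus those already in the earlier classification scheme. A minimal sketch with placeholder keyword sets (not the study's actual data):

```python
# Placeholder keyword sets; not the keywords collected in the study.
previous_scheme = {"decision support", "database design", "expert systems"}
collected = {"electronic commerce", "expert systems", "electronic government"}

# Keywords present in the post-1990 journals but absent from the old scheme.
new_keywords = sorted(collected - previous_scheme)
print(new_keywords)  # ['electronic commerce', 'electronic government']
```

The resulting list is exactly the set of candidate keywords that the paper's further research would place into appropriate IS research categories.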
Strategies and mechanisms for electronic peer review
This article, presented at the October 2000 Frontiers in Education Conference, discusses strategies and mechanisms for electronic peer review. It outlines Peer Grader, a peer-grading system for review of student assignments over the World Wide Web. The system allows authors and reviewers to communicate and authors to update their submissions. This system facilitates collaborative learning and makes it possible to break up a large project into smaller portions. The article summarizes a unique and innovative method of peer review. Educational levels: Graduate or professional.
Introductory programming: a systematic literature review
As computing becomes a mainstream discipline embedded in the school curriculum and acts as an enabler for an increasing range of academic disciplines in higher education, the literature on introductory programming is growing. Although there have been several reviews that focus on specific aspects of introductory programming, there has been no broad overview of the literature exploring recent trends across the breadth of introductory programming.
This paper is the report of an ITiCSE working group that conducted a systematic review in order to gain an overview of the introductory programming literature. Partitioning the literature into papers addressing the student, teaching, the curriculum, and assessment, we explore trends, highlight advances in knowledge over the past 15 years, and indicate possible directions for future research.
Asynchronous spiking neurons, the natural key to exploit temporal sparsity
Inference of Deep Neural Networks for stream signal (video/audio) processing in edge devices is still challenging. Unlike most state-of-the-art inference engines, which are efficient for static signals, our brain is optimized for real-time dynamic signal processing. We believe one important feature of the brain, asynchronous stateful processing, is the key to its excellence in this domain. In this work, we show how asynchronous processing with stateful neurons allows exploitation of the sparsity present in natural signals. This paper explains three different types of sparsity and proposes an inference algorithm which exploits all of them in the execution of already trained networks. Our experiments in three different applications (handwritten digit recognition, autonomous steering and hand-gesture recognition) show that this model of inference reduces the number of required operations for sparse input data by one to two orders of magnitude. Additionally, because processing is fully asynchronous, this type of inference can be run on fully distributed and scalable neuromorphic hardware platforms.
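The core idea, recomputing only what changed between consecutive frames of a stream, can be illustrated with a minimal delta-style update for one dense layer. This is a sketch of the general event-driven principle, not the paper's actual algorithm; all names and values here are assumptions:

```python
import numpy as np

def delta_inference_step(W, state, changed_idx, deltas):
    """Event-driven update: recompute only what the changed inputs touch.

    W           -- (out, in) weight matrix
    state       -- cached pre-activations from the previous frame, shape (out,)
    changed_idx -- indices of inputs whose value changed since the last frame
    deltas      -- the corresponding input changes
    Cost scales with the number of changed inputs, not the layer width.
    """
    for i, d in zip(changed_idx, deltas):
        state += W[:, i] * d           # propagate only the change
    return np.maximum(state, 0.0)      # ReLU on the updated pre-activations

# Frame update: input goes from all-zero to x = [1, 0], so only input 0 changed.
W = np.array([[1.0, 2.0],
              [3.0, 4.0]])
state = np.zeros(2)
out = delta_inference_step(W, state, [0], [1.0])
```

For temporally sparse streams, where most inputs are unchanged frame to frame, this skips the bulk of the multiply-accumulates a dense recompute would perform, which is the source of the operation-count reductions the abstract reports.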
Accuracy and feasibility of an Android-based digital assessment tool for post-stroke visual disorders - The StrokeVision App
Background: Visual impairment affects up to 70% of stroke survivors. We designed an app (StrokeVision) to facilitate screening for common post-stroke visual issues (acuity, visual fields and visual inattention). We sought to describe the test time, feasibility, acceptability and accuracy of our app-based digital visual assessments against (a) current methods used for bedside screening, and (b) gold standard measures.
Methods: Patients were prospectively recruited from acute stroke settings. Index tests were app-based assessments of fields and inattention performed by a trained researcher. We compared against usual clinical screening practice of visual fields to confrontation, including inattention assessment (simultaneous stimuli). We also compared the app to gold standard assessments of formal kinetic perimetry (Goldmann or Octopus visual field assessment) and pencil-and-paper tests of inattention (Albert's, Star Cancellation, and Line Bisection). Results of inattention and field tests were adjudicated by a specialist neuro-ophthalmologist. All assessors were masked to each other's results. Participants and assessors graded acceptability using a bespoke scale ranging from 0 (completely unacceptable) to 10 (perfect acceptability).
Results: Of 48 stroke survivors recruited, the complete battery of index and reference tests for fields was successfully completed in 45. Similar acceptability scores were observed for app-based testing (assessor median score 10 [IQR:9-10]; patient 9 [IQR:8-10]) and traditional bedside testing (assessor 10 [IQR:9-10]; patient 10 [IQR:9-10]). Median test time was longer for app-based testing (combined time-to-completion of all digital tests 420 seconds [IQR:390-588]) than for conventional bedside testing (70 seconds [IQR:40-70]) but shorter than gold standard testing (1260 seconds [IQR:1005-1620]). Compared with gold standard assessments, usual screening practice demonstrated 79% sensitivity and 82% specificity for detection of a stroke-related field defect, compared with 79% sensitivity and 88% specificity for the StrokeVision digital assessment.
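The quoted sensitivity and specificity figures follow the standard definitions from a 2x2 table of screening results against the gold standard. As an illustration with hypothetical counts chosen only to reproduce the app's quoted percentages (they are not the study's raw data):

```python
def sensitivity(tp, fn):
    # share of true stroke-related field defects the screen detects
    return tp / (tp + fn)

def specificity(tn, fp):
    # share of defect-free patients correctly screened negative
    return tn / (tn + fp)

# Hypothetical 2x2 counts, not the study's actual table:
# 19 true positives, 5 false negatives, 15 true negatives, 2 false positives.
tp, fn, tn, fp = 19, 5, 15, 2
print(round(sensitivity(tp, fn), 2), round(specificity(tn, fp), 2))  # 0.79 0.88
```

Equal sensitivity (79%) with higher specificity (88% vs 82%) means the app flags the same share of true field defects while producing fewer false positives than usual bedside screening.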
Conclusion: StrokeVision shows promise as a screening tool for visual complications in the acute phase of stroke. The app is at least as good as usual screening and offers other functionality that may make it attractive for use in acute stroke.
- …