Usage of Asp.Net Ajax for Binus School Serpong Web Applications
Today, web applications have become a necessity, and many companies use them as a communication tool to keep in touch with their customers. The usage of web applications has increased as the number of internet users has risen. To deliver Rich Internet Applications, desktop application developers have moved to web application development with AJAX technology. BINUS School Serpong is a Cambridge Curriculum-based international school that uses a web application to access every piece of information about the school. By using AJAX, the performance of a web application should improve and bandwidth usage should decrease. The problem at BINUS School Serpong is that not all parts of the web application use AJAX. This paper introduces the usage of AJAX in ASP.NET with the C# programming language in the BINUS School Serpong web application. It is expected that by using ASP.NET AJAX, the BINUS School Serpong website will perform faster because of reduced web page reloads. The methodology used in this paper is a literature study. The results of this study show that ASP.NET AJAX can be used easily and improves the performance of the BINUS School Serpong website. The conclusion of this paper is that the implementation of ASP.NET AJAX improves the performance of the web application at BINUS School Serpong.
Smart forms: a survey to state and test the most major electronic forms technologies that are based on W3C standards
Smart forms are efficient and powerful electronic forms that can be used for interaction between end users and web application systems. Several electronic forms software products that use W3C technologies are presented to meet the demands of users. This thesis aims to study and test the major electronic forms technologies that are based on W3C standards. It discusses the main electronic forms features and experiments with them in some essential applications. This research produces a deep understanding of the major electronic forms technologies that are based on W3C standards and of the important features that make an electronic form a smart form. In addition, it opens development prospects for other researchers to develop application ideas that could contribute to the electronic forms domain.
Implementation of an Autocomplete Feature and the Levenshtein Distance Algorithm to Improve the Effectiveness of Word Search in the Kamus Besar Bahasa Indonesia (KBBI)
This research was conducted to implement an autocomplete feature and the Levenshtein distance algorithm in a KBBI application and to determine the effectiveness of their use in the word search feature. The software development method used was the waterfall method, consisting of five stages: requirement definitions, system and software design, implementation and unit testing, integration and system testing, and operation and maintenance. Black-box testing of the autocomplete feature showed that suggestions appeared for every word entered. In testing the Levenshtein distance algorithm, suggestions did appear, although not all of them matched expectations, and testing of the overall application produced valid output for every menu tested. The measured effectiveness of the autocomplete implementation was 84.615%, meaning the feature is highly effective, and that of the Levenshtein distance algorithm was 76.04%, meaning it is effective for use in the KBBI application. The recommendation from this research is to add search menus for regional words and expressions, foreign words and expressions, and synonyms and acronyms, so that this digital dictionary becomes as complete as the printed version.
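The Levenshtein distance at the heart of the suggestion feature above can be sketched as follows (the abstract does not give an implementation; this is the standard dynamic-programming formulation, with an illustrative suggestion helper whose threshold of 2 edits is an assumption):

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions,
    and substitutions needed to turn string a into string b."""
    # prev[j] holds the distance between a[:i-1] and b[:j]
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def suggest(query: str, dictionary: list, max_distance: int = 2) -> list:
    """Return dictionary words within max_distance edits, closest first."""
    scored = sorted((levenshtein(query, w), w) for w in dictionary)
    return [w for d, w in scored if d <= max_distance]
```

A misspelled query such as `suggest("efektip", words)` would then surface the dictionary entry "efektif" as its closest suggestion.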
Dynamic web forms development using RuleML. Building a framework using metadata driven rules to control Web forms generation and appearance.
Web forms development for Web-based applications is often expensive, laborious, error-prone and time-consuming. Web forms are used by many different people with different backgrounds and many demands, and there is a very high cost associated with updating Web application systems to meet these demands.
A wide range of techniques and ideas to automate the generation of Web forms exist. These techniques and ideas, however, are not capable of generating the most dynamic behaviour of form elements, and make insufficient use of database metadata to control Web forms' generation and appearance.
In this thesis different techniques are proposed that use RuleML and database metadata to build rulebases to improve the automatic and dynamic generation of Web forms.
First this thesis proposes the use of a RuleML format rulebase using Reaction RuleML that can be used to support the development of automated Web interfaces. Database metadata can be extracted from system catalogue tables in typical relational database systems, and used in conjunction with the rulebase to produce appropriate Web form elements. Results show that this mechanism successfully insulates application logic from code and suggests that
the method can be extended from generic metadata rules to more domain specific rules.
Second, it proposes the use of common sense rules and domain-specific rules rulebases in Reaction RuleML format, in conjunction with database metadata rules, to extend support for the development of automated Web forms.
Third, it proposes the use of rules that involve code to implement more semantics for Web forms. Separation between the content, logic and presentation of Web applications has become an important issue for faster development and easier maintenance. Just as CSS is applied on the client side to control the overall presentation of Web applications, a set of rules can give a similar consistency to the appearance and operation of any set of forms that interact with the same database. We develop rules to order Web form elements and query forms using the Reaction RuleML format in conjunction with database metadata rules. The results show the potential of RuleML formats for representing database structural and active semantics.
Fourth, it proposes the use of a RuleML-based approach to provide support for greater semantics, for example advanced domain support, even when this is not a DBMS feature. The approach is to specify most of the semantics associated with data stored in an RDBMS, to overcome some RDBMS limitations. RuleML can thus be used to represent database metadata in an external format.
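The core idea of driving form generation from database metadata can be illustrated with a small sketch (Python is used here purely for illustration; the thesis itself works with RuleML rulebases and relational catalogue tables, and the column names and type-to-widget mappings below are assumptions, not taken from the thesis):

```python
# Simplified catalogue metadata, as might be extracted from a relational
# system's catalogue tables (column names are illustrative).
COLUMNS = [
    {"name": "username", "type": "varchar", "max_length": 30, "nullable": False},
    {"name": "birth_date", "type": "date", "nullable": True},
    {"name": "is_active", "type": "boolean", "nullable": False},
]

# Rule table mapping SQL types to form widgets -- the kind of generic
# metadata rule a RuleML rulebase would encode declaratively.
TYPE_RULES = {
    "varchar": lambda c: f'<input type="text" name="{c["name"]}" maxlength="{c["max_length"]}">',
    "date":    lambda c: f'<input type="date" name="{c["name"]}">',
    "boolean": lambda c: f'<input type="checkbox" name="{c["name"]}">',
}

def generate_form(columns):
    """Produce form elements from metadata, marking NOT NULL columns required."""
    elements = []
    for col in columns:
        element = TYPE_RULES[col["type"]](col)
        if not col.get("nullable", True):
            element = element.replace(">", " required>", 1)
        elements.append(element)
    return "\n".join(elements)
```

Because both the metadata and the rules live outside the application code, changing a column's type or nullability changes the generated form without touching application logic, which is the insulation the thesis reports.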
A Comparative Look at Entity Framework Code First
The motivation behind this project is to examine what “Entity Framework Code First” brings to the world of object-relational mappers and data access, and how it compares to the more traditional methods of the past. The problem is whether Entity Framework’s high level of abstraction from the database schema is useful to developers in reducing code development, or whether traditional approaches, with their robust, custom data layers, provide better overall performance. To analyze Entity Framework, a real-world business web application was developed implementing an Entity Framework Code First approach to data access. Using this implementation of a business application, comparisons were drawn between Entity Framework and more traditional data access techniques. The results of the research conclude that, while it does have some criticisms, Entity Framework is an improvement upon traditional approaches. It greatly reduces the time spent writing code for the application’s data access layer, makes managing database relationships and data objects easier, provides a level of abstraction to isolate the database from the developer’s application, and translates queries at runtime, allowing minimal code impact with regard to database storage changes.
Manual and automatic authoring for adaptive hypermedia
Adaptive Hypermedia allows online content to be tailored specifically to the needs of the user. This is particularly valuable in educational systems, where a student might benefit from a learning experience which only displays (or recommends) content that they need to know.
Authoring for adaptive systems requires content to be divided into stand-alone fragments which must then be labelled with sufficient pedagogical metadata. Authors must also create a pedagogical strategy that selects the appropriate content depending on (amongst other things) the learner's profile. This authoring process is time-consuming and unfamiliar to most non-technical authors. Therefore, to ensure that students (of all ages, ability levels and interests) can benefit from Adaptive Educational Hypermedia, authoring tools need to be usable by a range of educators. The overall aim of this thesis is therefore to identify the ways in which this authoring process can be simplified.
The research in this thesis describes the changes that were made to the My Online Teacher (MOT) tool in order to address issues such as functionality and usability. The thesis also describes usability and functionality changes that were made to the GRAPPLE Authoring Tool (GAT), which was developed as part of a European FP7 project. These two tools (which utilise different authoring paradigms) were then used within a usability evaluation, allowing the research to draw a comparison between the two toolsets.
The thesis also describes how educators can reuse their existing non-adaptive (linear) material (such as presentations and Wiki articles) by importing content into an adaptive authoring system.
Visualisation of quality information for geospatial and remote sensing data: providing the GIS community with decision support tools for geospatial dataset quality evaluation
The evaluation of geospatial data quality and trustworthiness presents a major challenge to geospatial data users when making a dataset selection decision. The research presented here therefore focused on defining and developing a GEO label – a decision support mechanism to assist data users in efficient and effective geospatial dataset selection on the basis of quality, trustworthiness and fitness for use. This thesis thus presents six phases of research and development conducted to: (1) identify the informational aspects upon which users rely when assessing geospatial dataset quality and trustworthiness; (2) elicit initial user views on the GEO label's role in supporting dataset comparison and selection; (3) evaluate prototype label visualisations; (4) develop a Web service to support GEO label generation; (5) develop a prototype GEO label-based dataset discovery and intercomparison decision support tool; and (6) evaluate the prototype tool in a controlled human-subject study. The results of the studies revealed, and subsequently confirmed, eight geospatial data informational aspects that users considered important when evaluating geospatial dataset quality and trustworthiness, namely: producer information, producer comments, lineage information, compliance with standards, quantitative quality information, user feedback, expert reviews, and citation information. Following an iterative user-centred design (UCD) approach, it was established that the GEO label should visually summarise the availability of, and allow interrogation of, these key informational aspects. A Web service was developed to support the generation of dynamic GEO label representations and was integrated into a number of real-world GIS applications. The service was also utilised in the development of the GEO LINC tool – a GEO label-based dataset discovery and intercomparison decision support tool.
The results of the final evaluation study indicated that (a) the GEO label effectively communicates the availability of dataset quality and trustworthiness information, and (b) GEO LINC successfully facilitates ‘at a glance’ dataset intercomparison and fitness-for-purpose-based dataset selection.
Knowledge extraction from unstructured data and classification through distributed ontologies
The World Wide Web has changed the way humans use and share any kind of information. The Web removed several barriers to accessing published information and has become an enormous space where users can easily navigate through heterogeneous resources (such as linked documents) and can easily edit, modify, or produce them. Documents implicitly enclose information and relationships among themselves which are accessible only to human beings. Indeed, the Web of documents evolved towards a space of data silos, linked to each other only through untyped references (such as hypertext references) that only humans were able to understand. A growing desire to programmatically access the pieces of data implicitly enclosed in documents has characterized the recent efforts of the Web research community. Direct access means structured data, thus enabling computing machinery to easily exploit the linking of different data sources. It has become crucial for the Web community to provide a technology stack for easing data integration at large scale, first structuring the data using standard ontologies and afterwards linking it to external data. Ontologies became the best practice for defining axioms and relationships among classes, and the Resource Description Framework (RDF) became the basic data model chosen to represent ontology instances (i.e. an instance is a value of an axiom, class or attribute). Data has become the new oil; in particular, extracting information from semi-structured textual documents on the Web is key to realizing the Linked Data vision. In the literature these problems have been addressed with several proposals and standards, which mainly focus on technologies to access the data and on formats to represent the semantics of the data and their relationships. With the increasing volume of interconnected and serialized RDF data, RDF repositories may suffer from data overloading and may become a single point of failure for the overall Linked Data vision.
One of the goals of this dissertation is to propose a thorough approach to managing large-scale RDF repositories and to distributing them in a redundant and reliable peer-to-peer RDF architecture. The architecture consists of a logic to distribute and mine the knowledge and of a set of physical peer nodes organized in a ring topology based on a Distributed Hash Table (DHT). Each node shares the same logic and provides an entry point that enables clients to query the knowledge base using atomic, disjunctive and conjunctive SPARQL queries. The consistency of the results is increased using a data redundancy algorithm that replicates each RDF triple across multiple nodes so that, in the case of a peer failure, other peers can retrieve the data needed to resolve the queries. Additionally, a distributed load balancing algorithm is used to maintain a uniform distribution of the data among the participating peers by dynamically changing the key space assigned to each node in the DHT. Recently, the process of data structuring has gained more and more attention when applied to the large volume of textual information spread across the Web, such as legacy data, newspapers, scientific papers or (micro-)blog posts. This process mainly consists of three steps: (i) the extraction from the text of atomic pieces of information, called named entities; (ii) the classification of these pieces of information through ontologies; (iii) their disambiguation through Uniform Resource Identifiers (URIs) identifying real-world objects. As a step towards interconnecting the Web to real-world objects via named entities, different techniques have been proposed. The second objective of this work is to propose a comparison of these approaches in order to highlight strengths and weaknesses in different scenarios, such as scientific papers, newspapers, or user-generated content.
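The ring placement with triple replication described above can be sketched in a few lines of Python (a minimal consistent-hashing toy, assuming SHA-1 placement and three replicas; the dissertation's actual DHT, key-space rebalancing, and SPARQL layer are far richer):

```python
import hashlib
from bisect import bisect_right

def key_hash(value: str) -> int:
    """Place a key or node identifier on the ring via a stable hash."""
    return int(hashlib.sha1(value.encode()).hexdigest(), 16)

class Ring:
    """Toy DHT ring: a triple is stored on the node that follows its hash
    on the ring, plus the next (replicas - 1) successors, so that a peer
    failure leaves the data retrievable from another peer."""

    def __init__(self, nodes, replicas=3):
        self.replicas = replicas
        self.ring = sorted((key_hash(n), n) for n in nodes)

    def successors(self, key: str):
        points = [h for h, _ in self.ring]
        start = bisect_right(points, key_hash(key)) % len(self.ring)
        count = min(self.replicas, len(self.ring))
        return [self.ring[(start + i) % len(self.ring)][1] for i in range(count)]

store = {}  # node -> set of RDF triples held by that node

def put_triple(ring, triple):
    """Replicate one RDF triple on the responsible node and its successors."""
    for node in ring.successors(str(triple)):
        store.setdefault(node, set()).add(triple)
```

With three replicas on distinct successors, any two peer failures still leave one copy of each triple available for query resolution.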
We created the Named Entity Recognition and Disambiguation (NERD) web framework, publicly accessible on the Web (through a REST API and a web user interface), which unifies several named entity extraction technologies. Moreover, we proposed the NERD ontology, a reference ontology for comparing the results of these technologies. Recently, the NERD ontology was included in the NIF (Natural language processing Interchange Format) specification, part of the Creating Knowledge out of Interlinked Data (LOD2) project. Summarizing, this dissertation defines a framework for the extraction of knowledge from unstructured data and its classification via distributed ontologies. A detailed study of the Semantic Web and knowledge extraction fields is provided to define the issues under investigation in this work. The dissertation then proposes an architecture to tackle the single-point-of-failure issue introduced by the RDF repositories spread across the Web. Although the use of ontologies enables a Web where data is structured and comprehensible by computing machinery, human users may take advantage of it, especially for the annotation task. Hence, this work describes an annotation tool for web editing and for audio and video annotation, with a web front-end user interface built on top of a distributed ontology. Furthermore, this dissertation details a thorough comparison of the state of the art in named entity technologies. The NERD framework is presented as a technology that encompasses existing solutions in the named entity extraction field, and the NERD ontology is presented as a reference ontology for the field.
Finally, this work highlights three use cases whose purpose is to reduce the number of data silos spread across the Web: a Linked Data approach to augment the automatic classification task in a Systematic Literature Review, an application to lift educational data stored in Sharable Content Object Reference Model (SCORM) data silos to the Web of data, and a scientific conference venue enhancer built on top of several live data collectors. Significant research efforts have been devoted to combining the efficiency of a reliable data structure with the power of data extraction techniques. This dissertation opens different research doors that mainly join two research communities: the Semantic Web and Natural Language Processing communities. The Web provides a considerable amount of data on which NLP techniques may shed light. The use of the URI as a unique identifier may provide one milestone for the materialization, as real-world objects, of entities lifted from raw text.
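The three extraction steps named above — spotting named entities, classifying them against an ontology, and disambiguating them with URIs — can be illustrated with a deliberately naive gazetteer lookup (the real NERD extractors are statistical services; every entry and class name below is an illustrative assumption, not NERD's actual output):

```python
# Toy gazetteer: surface form -> (ontology class, disambiguating URI).
# Real systems learn or query these mappings; this table is illustrative.
GAZETTEER = {
    "Tim Berners-Lee": ("nerd:Person", "http://dbpedia.org/resource/Tim_Berners-Lee"),
    "W3C": ("nerd:Organization", "http://dbpedia.org/resource/World_Wide_Web_Consortium"),
    "Geneva": ("nerd:Location", "http://dbpedia.org/resource/Geneva"),
}

def extract_entities(text):
    """(i) spot known surface forms in the text, (ii) attach an ontology
    class, (iii) attach a URI identifying the real-world object."""
    found = []
    for surface, (cls, uri) in GAZETTEER.items():
        pos = text.find(surface)
        if pos != -1:
            found.append({"surface": surface, "class": cls,
                          "uri": uri, "offset": pos})
    return sorted(found, key=lambda e: e["offset"])
```

Running the extractor over a sentence mentioning known surface forms yields, for each spotted entity, its ontology class and a URI anchoring it to a real-world object — the shape of result the NERD ontology makes comparable across extractors.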
Designing and Evaluating Accessible E-Learning for Students with Visual Impairments in K-12 Computing Education
This dissertation explores pathways for making K-12 computing education more accessible to blind or visually impaired (BVI) learners. As computer science (CS) expands into K-12 education, more concerted efforts are required to ensure all students have equitable access to opportunities to pursue a career in computing. To determine their viability with BVI learners, I conducted three studies to assess the current accessibility of CS curricula, materials, and learning environments. Study one consisted of interviews with visually impaired developers; study two consisted of interviews with K-12 teachers of visually impaired students; study three was a remote observation within a computer science course. My exploration revealed that most of CS education lacks the necessary accommodations for BVI students to learn at an equitable pace with sighted students. However, electronic learning (e-learning) emerged as the theme that provided the most accessible learning experience for BVI students, although even there, usability and accessibility challenges were present in online learning platforms.
My dissertation engaged in a human-centered approach across three studies towards designing, developing, and evaluating an online learning management system (LMS) with the critical design elements needed to improve navigation and interaction for BVI users. Study one was a survey exploring the perception of readiness for taking online courses among sighted and visually impaired students. The findings from the survey fueled study two, which employed participatory design with storytelling with K-12 teachers and BVI students to learn more about their experiences using LMSs and how they imagine such systems could be more accessible. The findings led to the development of the accessible learning content management system (ALCMS), a web-based platform for managing courses, course content, and course rosters, evaluated in study three with high school students, both sighted and visually impaired, to determine its usability and accessibility. This research contributes recommendations for including features and design elements that improve accessibility in existing LMSs and in building new ones.