Newspeak: A Secure Approach for Designing Web Applications
Internet applications are being used for increasingly important business and personal purposes. Despite efforts to lock down web servers and isolate databases, there is an inherent problem in the web application architecture that leaves databases necessarily exposed to possible attack from the Internet. We propose a new design that removes the web server as a trusted component of the architecture and provides an extra layer of protection against database attacks. We have created a prototype system that demonstrates the feasibility of the new design.
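The abstract does not detail Newspeak's mechanism, but the class of database attack that a compromised or untrusted web tier typically passes through is SQL injection. The sketch below is general context only, not the Newspeak design: it contrasts string-spliced SQL with a parameterized query, using an in-memory SQLite database with made-up table and user names.

```python
import sqlite3

# Illustrative only: a classic database attack a web tier can forward
# to the database is SQL injection. Not the Newspeak design itself.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_unsafe(name: str):
    # Vulnerable: attacker-controlled input is spliced into the SQL text.
    return conn.execute(
        f"SELECT secret FROM users WHERE name = '{name}'").fetchall()

def lookup_safe(name: str):
    # Parameterized query: the driver keeps data separate from SQL code.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(lookup_unsafe(payload))  # leaks every secret: [('s3cret',)]
print(lookup_safe(payload))    # matches no user: []
```

An architecture that distrusts the web server pushes this kind of query mediation out of the web tier entirely, so a compromised server cannot issue arbitrary SQL in the first place.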
Consideration of availability when designing web applications
An increasing number of information systems are so critical to operations that their services must, in practice, always be available. Despite this, availability requirements are rarely assessed with sufficient accuracy while an application is being implemented, and the criticality of the application's availability and its significance to the business are only discovered when problems occur. A monetary value can, however, be calculated for service availability, and it can be used to estimate how critical an application is to an organization's operations and to weigh that value against the costs of building a high-availability infrastructure.

This master's thesis introduces the basic concepts of high availability and shows how the value of availability can be calculated in monetary terms. In addition, it examines what kinds of threats services face with respect to their availability and how these threats can be prepared for. For each problem, the thesis presents several solution models of different kinds and at different levels, since the problems can be approached in very different ways in different scenarios.

When high-availability services are delivered as web applications, the special characteristics of web applications must be taken into account. These characteristics include the stateless HTTP protocol, the network connection to the client being critical to the service, and the fact that there are typically several external dependencies. Most web applications do not store data themselves; instead, data is read from one or more external databases, whose availability is as important to the service as that of the application itself. These problems and solution models are made concrete by presenting a high-availability infrastructure for a web application that meets the requirements of the Finnish government's VAHTI regulations.

The thesis concludes that a high-availability system is usually very complex and contains a considerable number of different components, each with its own area of responsibility. It was observed that the higher the requirements a system aims to meet, the more complex the infrastructure becomes, and that this complexity can itself become a threat to the system's availability. In particular, the challenges of synchronizing data between different nodes were found to be a recurring problem at different layers of the architecture.
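The monetary value of availability mentioned above can be sketched with a simple calculation: an availability target implies an expected amount of downtime per year, and multiplying that by an estimated cost per hour of outage gives an annual figure to weigh against infrastructure costs. The numbers below are illustrative assumptions, not figures from the thesis.

```python
# Illustrative sketch: converting an availability target into an
# expected annual downtime cost. All figures are assumed examples.

HOURS_PER_YEAR = 365 * 24  # 8760

def annual_downtime_hours(availability: float) -> float:
    """Expected downtime per year for a given availability fraction."""
    return HOURS_PER_YEAR * (1.0 - availability)

def annual_downtime_cost(availability: float, cost_per_hour: float) -> float:
    """Estimated money lost per year at the given availability level."""
    return annual_downtime_hours(availability) * cost_per_hour

# Example: a service losing 1000 (currency units) per hour of outage,
# compared at "three nines" vs "four nines" availability.
for target in (0.999, 0.9999):
    hours = annual_downtime_hours(target)
    cost = annual_downtime_cost(target, cost_per_hour=1000.0)
    print(f"{target:.4%} uptime -> {hours:.2f} h down/year, cost {cost:.0f}")
```

Comparing the resulting figure against the cost of the next tier of redundancy is the kind of trade-off analysis the thesis describes.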
A Process Framework for Semantics-aware Tourism Information Systems
The growing sophistication of user requirements in tourism, driven by new technologies such as the Semantic Web and mobile computing, has opened new possibilities for improved intelligence in Tourism Information Systems (TIS). Traditional software engineering and web engineering approaches do not suffice, hence the need for new product development approaches that can sufficiently enable the next generation of TIS. The next generation of TIS is expected, among other things, to: enable semantics-based information processing, exhibit natural language capabilities, facilitate seamless inter-organization exchange of information, and evolve proactively in tandem with dynamic user requirements. In this paper, a product development approach called Product Line for Ontology-based Semantics-Aware Tourism Information Systems (PLOSATIS), a novel hybridization of software product line engineering and Semantic Web engineering concepts, is proposed. PLOSATIS is presented as potentially effective, predictable, and amenable to software process improvement initiatives.
Web Application Models Are More Than Conceptual Models
In this paper we argue that Web applications are a particular kind of hypermedia application and show how to model their navigational structure. We argue that if we need to design applications combining hypermedia navigation with complex transactional behaviors (as in E-commerce systems), we need a systematic development approach. We present the main ideas underlying the Object-Oriented Hypermedia Design Method (OOHDM) and show that Web applications are built as views of conceptual models. We present the abstraction primitives used to design the conceptual and navigational structure of Web applications and describe the view definition language. We introduce navigational contexts as the structuring mechanism for the navigational space. Further work on designing Web applications with OOHDM is also presented.
Published in the Lecture Notes in Computer Science book series (LNCS, vol. 1727). Laboratorio de Investigación y Formación en Informática Avanzada.
A Brief History of Web Crawlers
Web crawlers visit internet applications, collect data, and learn about new web pages from the pages they visit. Web crawlers have a long and interesting history. Early web crawlers collected statistics about the web. In addition to collecting statistics and indexing applications for search engines, modern crawlers can be used to perform accessibility and vulnerability checks on an application. The rapid expansion of the web and the complexity added to web applications have made crawling a very challenging process. Throughout the history of web crawling, many researchers and industrial groups have addressed the various issues and challenges that web crawlers face, and different solutions have been proposed to reduce the time and cost of crawling. Performing an exhaustive crawl remains a challenging problem, and automatically capturing the model of a modern web application and extracting data from it is another open question. What follows is a brief history of the different techniques and algorithms used from the early days of crawling up to the present. We introduce criteria to evaluate the relative performance of web crawlers, and based on these criteria we plot the evolution of web crawlers and compare their performance.
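The basic crawl loop the abstract describes, visiting a page, discovering new URLs from it, and enqueuing the unseen ones, can be sketched as a breadth-first traversal. In this sketch `fetch_links` is a stand-in for real HTTP fetching and HTML parsing, and the tiny in-memory "web" is an invented example.

```python
from collections import deque

def crawl(seed: str, fetch_links, max_pages: int = 100) -> list[str]:
    """Breadth-first crawl: visit pages, learn new URLs from each
    visited page, and stop after max_pages. fetch_links(url) must
    return the list of URLs linked from that page."""
    seen = {seed}          # URLs already discovered (avoids re-enqueueing)
    queue = deque([seed])  # frontier of pages still to visit
    visited = []
    while queue and len(visited) < max_pages:
        url = queue.popleft()
        visited.append(url)
        for link in fetch_links(url):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return visited

# A tiny in-memory "web" used in place of real HTTP requests:
web = {
    "a": ["b", "c"],
    "b": ["a", "d"],
    "c": [],
    "d": [],
}
print(crawl("a", lambda u: web.get(u, [])))  # → ['a', 'b', 'c', 'd']
```

Real crawlers layer politeness delays, URL normalization, and prioritization on top of this loop, and crawling modern JavaScript-heavy applications additionally requires executing client-side code to expose the application's states.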
Towards a support tool for web modeling with accessibility
Web Accessibility is a basic attribute of quality in use and a key factor in a successful Web application. The principles of design-for-all, or universal design, aim at products and environments that are usable by as many people as possible, without the need for special adaptation or redesign. There are various tools and approaches that assist in the Accessibility evaluation of existing Web applications. In contrast, there are no similar efforts for early design with Accessibility principles in mind. Designing Web applications for improved Accessibility implies analyzing different concerns that can be linked through techniques from Aspect-Oriented Software Development (AOSD). In this work we propose a tool architecture based on AOSD design concepts and the Web Content Accessibility Guidelines 1.0 (WCAG 1.0) to support building accessible user interfaces.
VI Workshop Ingeniería de Software (WIS). Red de Universidades con Carreras en Informática (RedUNCI).