    Media Gateway: an e-commerce service model for managing distributed multimedia applications and data

    The electronic service of the live example we examine in this thesis requires a large installation of computers exposed to the Internet. Consequently, such an application is threatened by serious Internet-borne risks. The most essential benefit we could offer to the users of this service is security and the protection of their personal data; neglecting this area would negatively affect the operation of the service. The first step in shaping a security architecture is identifying and defining the security goals of the system. The most fundamental concepts examined in the security domain are data availability, which gives the user the advantage of using the service from anywhere at any time, as well as data confidentiality and integrity, which are important factors in the decision to adopt the new service, since they assure the user that their personal data will not be accessible to, and cannot be modified or intercepted by, malicious third parties. In addition, other important concepts that guarantee the user's security while performing any action in the service were examined, such as authentication, authorization, accountability and non-repudiation. Finally, to build security architectures it is necessary to consider additional countermeasures for looming risks, since more and more threats endanger systems year after year. The security measures taken should not restrict users' actions or their collaboration, but should instead support them in the most effective and secure way possible.
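
    As a small illustration of the integrity and authentication goals listed above (an addition, not part of the original thesis), the sketch below uses Python's standard hmac module; the shared key and messages are hypothetical stand-ins.

```python
import hmac, hashlib

SECRET_KEY = b"hypothetical-shared-key"   # assumed to be distributed out of band

def sign(message: bytes) -> bytes:
    # Integrity + authentication: only holders of the key can produce this tag.
    return hmac.new(SECRET_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    # compare_digest avoids leaking information through timing differences.
    return hmac.compare_digest(sign(message), tag)

msg = b"update user profile"
tag = sign(msg)
assert verify(msg, tag)              # untampered message passes
assert not verify(msg + b"!", tag)   # any modification is detected
```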

    An Improved Particle Swarm Optimization Based on Deluge Approach for Enhanced Hierarchical Cache Optimization in IPTV Networks

    In recent years, IP networks have been considered as a new delivery network for TV services. A majority of the telecommunication industry has used IP networks to offer on-demand and linear TV services, since they provide two-way, high-speed communication. To use the IP network effectively and economically, caching is the usually preferred technique. In an IPTV system, a managed network is used to deliver TV services; requests for Video on Demand (VOD) objects tend to arrive intensively within limited periods, and user preferences fluctuate dynamically. Furthermore, the VOD content is updated often under the control of IPTV providers. To minimize this traffic and the overall network cost, a segment of the video content is stored in caches closer to subscribers, for example at a Digital Subscriber Line Access Multiplexer (DSLAM), a Central Office (CO) or an Intermediate Office (IO). The major problem addressed in this approach is determining the optimal cache memory that should be assigned in order to attain maximum cost effectiveness. The approach uses a Great Deluge algorithm based Particle Swarm Optimization (GDPSO) method to find the optimal cache memory size, which in turn minimizes the overall network cost. The analysis shows that hierarchical distributed caching can save significant network cost through the utilization of the GDPSO algorithm.
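
    The abstract does not reproduce the algorithm itself, so the following is only a minimal sketch of the named technique: particle swarm optimization whose acceptance rule is relaxed by a Great Deluge "water level". The cost model, parameter values and the three-level hierarchy below are illustrative assumptions, not the paper's formulation.

```python
import random

def network_cost(sizes):
    # Hypothetical cost model: storage cost grows with cache size, while
    # transport cost falls as more requests hit caches closer to subscribers.
    storage = sum(0.5 * s for s in sizes)
    transport = sum(100.0 / (1.0 + s) for s in sizes)
    return storage + transport

def gdpso(n_levels=3, n_particles=20, iters=200, seed=1):
    """Great Deluge based PSO over cache sizes for DSLAM/CO/IO levels."""
    rng = random.Random(seed)
    pos = [[rng.uniform(1, 50) for _ in range(n_levels)] for _ in range(n_particles)]
    vel = [[0.0] * n_levels for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pos, key=network_cost)[:]
    level = 1.2 * network_cost(gbest)              # initial "water level"
    decay = (level - network_cost(gbest)) / iters  # level drops every iteration
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_levels):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], 0.0), 50.0)
            cost = network_cost(pos[i])
            # Deluge twist: also accept non-improving positions under the level,
            # keeping diversity early and tightening as the level decays.
            if cost < network_cost(pbest[i]) or cost < level:
                pbest[i] = pos[i][:]
            if cost < network_cost(gbest):
                gbest = pos[i][:]
        level -= decay
    return gbest, network_cost(gbest)

sizes, cost = gdpso()
print("cache size per level:", [round(s, 1) for s in sizes], "cost:", round(cost, 2))
```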

    Interactive Visualization on High-Resolution Tiled Display Walls with Network Accessible Compute- and Display-Resources

    Papers 2-7 and appendices B and C of this thesis are not available in Munin:
    2. Hagen, T-M.S., Johnsen, E.S., Stødle, D., Bjorndalen, J.M. and Anshus, O.: 'Liberating the Desktop', First International Conference on Advances in Computer-Human Interaction (2008), pp. 89-94. Available at http://dx.doi.org/10.1109/ACHI.2008.20
    3. Tor-Magne Stien Hagen, Oleg Jakobsen, Phuong Hoai Ha, and Otto J. Anshus: 'Comparing the Performance of Multiple Single-Cores versus a Single Multi-Core' (manuscript)
    4. Tor-Magne Stien Hagen, Phuong Hoai Ha, and Otto J. Anshus: 'Experimental Fault-Tolerant Synchronization for Reliable Computation on Graphics Processors' (manuscript)
    5. Tor-Magne Stien Hagen, Daniel Stødle and Otto J. Anshus: 'On-Demand High-Performance Visualization of Spatial Data on High-Resolution Tiled Display Walls', Proceedings of the International Conference on Imaging Theory and Applications and International Conference on Information Visualization Theory and Applications (2010), pp. 112-119. Available at http://dx.doi.org/10.5220/0002849601120119
    6. Bård Fjukstad, Tor-Magne Stien Hagen, Daniel Stødle, Phuong Hoai Ha, John Markus Bjørndalen and Otto Anshus: 'Interactive Weather Simulation and Visualization on a Display Wall with Many-Core Compute Nodes', Para 2010 – State of the Art in Scientific and Parallel Computing. Available at http://vefir.hi.is/para10/extab/para10-paper-60
    7. Tor-Magne Stien Hagen, Daniel Stødle, John Markus Bjørndalen, and Otto Anshus: 'A Step towards Making Local and Remote Desktop Applications Interoperable with High-Resolution Tiled Display Walls', Lecture Notes in Computer Science (2011), Volume 6723/2011, pp. 194-207. Available at http://dx.doi.org/10.1007/978-3-642-21387-8_15

    The vast volume of scientific data produced today requires tools that enable scientists to explore large amounts of data and extract meaningful information. One such tool is interactive visualization. The amount of data that can be simultaneously visualized on a computer display is proportional to the display's resolution. While computer systems in general have seen a remarkable increase in performance over the last decades, display resolution has not evolved at the same rate. Increased resolution can be provided by tiling several displays in a grid. A system comprised of multiple displays tiled in such a grid is referred to as a display wall. Display walls provide orders of magnitude more resolution than typical desktop displays, and can provide insight into problems not possible to visualize on desktop displays. However, their distributed and parallel architecture creates several challenges for designing systems that can support interactive visualization. One challenge is compatibility issues with existing software designed for personal desktop computers. Another set of challenges includes identifying characteristics of visualization systems that can: (i) maintain synchronous state and display output when executed over multiple display nodes; (ii) scale to multiple display nodes without being limited by shared interconnect bottlenecks; (iii) utilize additional computational resources such as desktop computers, clusters and supercomputers for workload distribution; and (iv) use data from local and remote compute- and data-resources with interactive performance. This dissertation presents Network Accessible Compute (NAC) resources and Network Accessible Display (NAD) resources for interactive visualization of data on displays ranging from laptops to high-resolution tiled display walls.
A NAD is a display with functionality that enables it to be used over a network connection. A NAC is a computational resource that can produce content for network accessible displays. A system consisting of NACs and NADs is either push-based (NACs provide NADs with content) or pull-based (NADs request content from NACs). To attack the compatibility challenge, a push-based system was developed. The system enables several simultaneous users to mirror multiple regions from the desktop of their computers (NACs) onto nearby NADs (among them a 22 megapixel display wall) without requiring separate DVI/VGA cables, permanent installation of third-party software, or open firewall ports. The system has lower performance than a DVI/VGA cable approach, but increases flexibility, such as the possibility of sharing network accessible displays from multiple computers. At a resolution of 800 by 600 pixels, the system can mirror dynamic content between a NAC and a NAD at 38.6 frames per second (FPS). At 1600x1200 pixels, the refresh rate is 12.85 FPS. The bottleneck of the system is frame buffer capturing and encoding/decoding of pixels. These two functional parts are executed in sequence, limiting the usage of additional CPU cores. By pipelining and executing these parts on separate CPU cores, higher frame rates can be expected, by up to a factor of two in the best case. To attack all presented challenges, a pull-based system, WallScope, was developed. WallScope enables interactive visualization of local and remote data sets on high-resolution tiled display walls. The WallScope architecture comprises a compute-side and a display-side. The compute-side comprises a set of static and dynamic NACs. Static NACs are considered permanent to the system once added. This type of NAC typically has strict underlying security and access policies. Examples of such NACs are clusters, grids and supercomputers. Dynamic NACs are compute resources that can register on-the-fly to become compute nodes in the system. Examples of this type of NAC are laptops and desktop computers. The display-side comprises a set of NADs and a data set containing data customized for the particular application domain of the NADs. NADs are based on a sort-first rendering approach where a visualization client is executed on each display-node. The state of these visualization clients is provided by a separate state server, enabling central control of load and refresh-rate. Based on the state received from the state server, the visualization clients request content from the data set. The data set is live in that it translates these requests into compute messages and forwards them to available NACs. Results of the computations are returned to the NADs for the final rendering. The live data set is close to the NADs, both in terms of bandwidth and latency, to enable interactive visualization. WallScope can visualize the Earth, gigapixel images, and other data available through the live data set. When visualizing the Earth on a 28-node display wall by combining the Blue Marble data set with the Landsat data set using a set of static NACs, the bottleneck of WallScope is the computation involved in combining the data sets. However, the time used to combine data sets on the NACs decreases by a factor of 23 when going from 1 to 26 compute nodes. The display-side can decode 414.2 megapixels of images per second (19 frames per second) when visualizing the Earth.
The decoding process is multi-threaded, and higher frame rates are expected using multi-core CPUs. WallScope can rasterize a 350-page PDF document into 550 megapixels of image-tiles and display these image-tiles on a 28-node display wall in 74.66 seconds (PNG) and 20.66 seconds (JPG) using a single quad-core desktop computer as a dynamic NAC. This time is reduced to 4.20 seconds (PNG) and 2.40 seconds (JPG) using 28 quad-core NACs. This shows that the application output from personal desktop computers can be decoupled from the resolution of the local desktop and display for usage on high-resolution tiled display walls. It also shows that the performance can be increased by adding computational resources, giving a resulting speedup of 17.77 (PNG) and 8.59 (JPG) using 28 compute nodes. Three principles are formulated based on the concepts and systems researched and developed: (i) establishing the end-to-end principle through customization is a principle stating that the setup and interaction between a display-side and a compute-side in a visualization context can be performed by customizing one or both sides; (ii) Personal Computer (PC) – Personal Compute Resource (PCR) duality states that a user's computer is both a PC and a PCR, implying that desktop applications can be utilized locally using attached interaction devices and display(s), or remotely by other visualization systems for domain-specific production of data based on a user's personal desktop install; and (iii) domain-specific best-effort synchronization states that for distributed visualization systems running on tiled display walls, state handling can be performed using a best-effort synchronization approach, where visualization clients will eventually get the correct state after a given period of time. Compared to state-of-the-art systems presented in the literature, the contributions of this dissertation enable utilization of a broader range of compute resources from a display wall, while at the same time providing better control over where to provide functionality and where to distribute workload between compute-nodes and display-nodes in a visualization context.
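
    The push-based mirroring bottleneck described above (frame-buffer capture and pixel encoding running in sequence) invites the pipelining fix the author expects to roughly double frame rates. Below is a minimal sketch of that idea only, with synthetic frames standing in for frame-buffer capture and zlib standing in for the real pixel codec (both assumptions); a production system would spread the stages over separate cores or processes.

```python
import threading, queue, time, zlib

frames = queue.Queue(maxsize=4)   # small buffer decouples capture from encoding

def capture(n_frames, width=800, height=600):
    # Stage 1 stand-in: synthesize raw RGB frames instead of grabbing
    # the real frame buffer.
    for i in range(n_frames):
        frames.put(bytes([i % 256]) * (width * height * 3))
    frames.put(None)              # sentinel: no more frames

def encode():
    # Stage 2 stand-in: compress pixels; runs concurrently with capture.
    n = 0
    while (frame := frames.get()) is not None:
        zlib.compress(frame, level=1)
        n += 1
    return n

start = time.time()
producer = threading.Thread(target=capture, args=(60,))
producer.start()
encoded = encode()                # consumer runs on the main thread
producer.join()
print(f"{encoded} frames captured+encoded in {time.time() - start:.2f}s")
```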

    Context prediction-based prefetching in software-defined wireless networks

    In this master's thesis we focus on improving in-network caching for mobile users in a large campus WiFi network. First we pinpoint the negative effects of mobility on network conditions and user experience. We then propose a method that leverages SDN technology to redirect users' requests to optimally located cache servers, resulting in improved user experience and a lower burden on the backhaul and core network links. Our contribution is a network application that controls the flows in the network via an SDN controller. The application takes users' movement traces as input, computes the optimal location of cache servers in the network, and redirects users' flows accordingly. We tested our solution in the Mininet network emulator. We devised multiple scenarios using real-world movement traces from Dartmouth Campus. We measured request delay as the main characteristic of user experience, and data traffic over core and backhaul links as an indicator of network health. Our experiments show that for mobile users our dynamic redirection approach provides noticeable improvements over traditional, static caching methods.
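
    A minimal sketch of the redirection idea follows. The topology, the stay-at-the-last-AP predictor (the thesis uses context prediction over real traces) and the dictionary "flow rule" are all illustrative stand-ins; a real deployment would install an OpenFlow rule through the SDN controller instead.

```python
# Hop counts from each WiFi access point to each candidate cache server;
# a hypothetical topology standing in for the campus network.
HOPS = {
    "ap1": {"cache_a": 1, "cache_b": 4},
    "ap2": {"cache_a": 3, "cache_b": 2},
    "ap3": {"cache_a": 5, "cache_b": 1},
}

def predict_next_ap(trace):
    # Trivial predictor: assume the user stays at the last observed AP.
    # The thesis instead predicts movement from real Dartmouth traces.
    return trace[-1]

def place_and_redirect(user, trace):
    ap = predict_next_ap(trace)
    server = min(HOPS[ap], key=HOPS[ap].get)   # closest cache in hops
    # In the real system this would become a flow rule installed via the
    # SDN controller; here a dict merely describes the intended rule.
    return {"match": {"src": user, "in_ap": ap}, "action": f"redirect:{server}"}

print(place_and_redirect("alice", ["ap1", "ap2", "ap3"]))
# {'match': {'src': 'alice', 'in_ap': 'ap3'}, 'action': 'redirect:cache_b'}
```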

    Third Workshop and Tutorial on Practical Use of Coloured Petri Nets and the CPN Tools, Aarhus, Denmark, August 29-31, 2001

    This booklet contains the proceedings of the Third Workshop on Practical Use of Coloured Petri Nets and the CPN Tools, August 29-31, 2001. The workshop is organised by the CPN group at Department of Computer Science, University of Aarhus, Denmark. The papers are also available in electronic form via the web pages: http://www.daimi.au.dk/CPnets/workshop01

    Formulations and identification of algorithmic solutions for enabling opportunistic networks - M4.1

    Milestone M4.1 of the European project OneFIT (ICT-2009-257385). This document contains a detailed description of the algorithms to be implemented to manage opportunistic networks. They are defined in accordance with the functional and system architecture (WP2) to address the technical challenges. These algorithms will be implemented during WP4.2 and validated during WP4.3. Postprint (published version)

    Mining a Small Medical Data Set by Integrating the Decision Tree and t-test

    Although several researchers have used statistical methods to prove that aspiration followed by the injection of 95% ethanol left in situ (retention) is an effective treatment for ovarian endometriomas, very few discuss the different conditions that could generate different recovery rates for the patients. Therefore, this study adopts statistical methods and decision tree techniques together to analyze the postoperative status of ovarian endometriosis patients under different conditions. Since our collected data set is small, containing only 212 records, we use all of these data as the training data. Therefore, instead of using the resultant tree to generate rules directly, we first use the value of each node as a cut point to generate all possible rules from the tree. Then, using the t-test, we verify the rules to discover useful description rules after all possible rules from the tree have been generated. Experimental results show that our approach can find new, interesting knowledge about recurrent ovarian endometriomas under different conditions.
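
    A sketch of the cut-point-rules-plus-t-test idea follows; the real 212-record clinical data set is not available here, so random features and a made-up recurrence label stand in for it, and scikit-learn/SciPy are assumed to be installed.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
X = rng.normal(size=(212, 4))   # hypothetical patient features
y = (X[:, 0] + 0.5 * rng.normal(size=212) > 0).astype(int)  # hypothetical label

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
t = tree.tree_

# Every internal node contributes a candidate rule "feature <= threshold";
# the t-test then checks whether outcomes differ across the cut point.
for node in range(t.node_count):
    f = t.feature[node]
    if f < 0:                    # leaf node, no cut point
        continue
    thr = t.threshold[node]
    left, right = y[X[:, f] <= thr], y[X[:, f] > thr]
    if len(left) > 1 and len(right) > 1:
        stat, p = ttest_ind(left, right, equal_var=False)
        if p < 0.05:
            print(f"rule: x[{f}] <= {thr:.3f}  (p = {p:.4f})")
```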

    Telecommunications Networks

    This book guides readers from the basics of rapidly emerging networks to more advanced concepts and future expectations of telecommunications networks. It identifies and examines the most pressing research issues in telecommunications, and it contains chapters written by leading researchers, academics and industry professionals. Telecommunications Networks - Current Status and Future Trends covers surveys of recent publications that investigate key areas of interest such as IMS, eTOM, 3G/4G, optimization problems, modeling, simulation, and quality of service. The book, which is suitable for both PhD and master's students, is organized into six sections: New Generation Networks, Quality of Services, Sensor Networks, Telecommunications, Traffic Engineering and Routing.

    Adapting Scientific Computing Algorithms to Distributed Computing Frameworks

    Scientific computing uses computers and algorithms to solve problems in various sciences such as genetics, biology and chemistry. Often the goal is to model and simulate different natural phenomena which would otherwise be very difficult to study in real environments. For example, it is possible to create a model of a solar storm or a meteor impact and run computer simulations to assess the impact of the disaster on the environment. The more sophisticated and accurate the simulations are, the more computing power is required. It is often necessary to use a large number of computers, all working simultaneously on a single problem. These kinds of computations are called parallel or distributed computing. However, creating distributed computing programs is complicated and requires much more time and resources, because it is necessary to synchronize the work done simultaneously on different computers. A number of software frameworks have been created to simplify this process by automating part of the distributed programming. The goal of this research was to assess the suitability of such distributed computing frameworks for complex scientific computing algorithms. The results showed that existing frameworks are very different from each other and none of them is suitable for all types of algorithms.
Some frameworks are only suitable for simple algorithms; others are not suitable when the data does not fit into computer memory. Choosing the most appropriate distributed computing framework for an algorithm can be a very complex task, because it requires studying and applying the existing frameworks. While searching for a solution to this problem, it was decided to create a Dynamic Algorithms Modelling Application (DAMA), which is able to simulate the implementation of an algorithm in different distributed computing frameworks. DAMA helps to estimate which distributed framework is the most appropriate for a given algorithm, without actually implementing the algorithm in any of the available frameworks. The main contribution of this study is simplifying the adoption of distributed computing frameworks for researchers who are not yet familiar with them, which should save significant time and resources, since it is no longer necessary to study and apply each of the available distributed computing frameworks in detail.
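
    In the spirit of DAMA, a toy cost model can rank frameworks for an algorithm without implementing it on any of them. The framework profiles, overhead constants and formulas below are illustrative assumptions only:

```python
def estimate_runtime(iterations, data_gb, nodes, framework):
    # (job startup seconds, disk cost per GB per pass, keeps data in memory?)
    profiles = {
        "mapreduce_like": (15.0, 2.0, False),  # re-reads input every iteration
        "in_memory_like": (1.0, 0.1, True),    # caches the working set in RAM
    }
    startup, io_cost, resident = profiles[framework]
    passes = 1 if resident else iterations      # how often data crosses the disk
    compute = iterations * data_gb / nodes      # idealized parallel compute time
    io = passes * data_gb * io_cost / nodes
    return startup * passes + compute + io

# An iterative algorithm (50 passes over 100 GB on 10 nodes) strongly favours
# the in-memory profile; a single-pass job would narrow the gap considerably.
for fw in ("mapreduce_like", "in_memory_like"):
    print(fw, round(estimate_runtime(50, 100, 10, fw), 1))
```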

    NASA SBIR abstracts of 1990 phase 1 projects

    The research objectives of the 280 projects placed under contract in the National Aeronautics and Space Administration (NASA) 1990 Small Business Innovation Research (SBIR) Phase 1 program are described. The basic document consists of edited, non-proprietary abstracts of the winning proposals submitted by small businesses in response to NASA's 1990 SBIR Phase 1 Program Solicitation. The abstracts are presented under the 15 technical topics within which Phase 1 proposals were solicited. Each project was assigned a sequential identifying number from 001 to 280, in order of its appearance in the body of the report. The document also includes appendixes that provide additional information about the SBIR program and permit cross-referencing of the 1990 Phase 1 projects by company name, location by state, principal investigator, NASA field center responsible for management of each project, and NASA contract number.