95 research outputs found

    Naming and discovery in networks : architecture and economics

    In less than three decades, the Internet was transformed from a research network available to the academic community into an international communication infrastructure. Despite its tremendous success, there is a growing consensus in the research community that the Internet has architectural limitations that need to be addressed in an effort to design a future Internet. Among the main technical limitations are the lack of mobility support and the lack of security and trust. The Internet, and particularly TCP/IP, identifies endpoints using a location/routing identifier, the IP address. Coupling the endpoint identifier to the location identifier hinders mobility and poorly identifies the actual endpoint. The lack of security, on the other hand, has been attributed to limitations in both the network and the endpoint. Authentication, for example, is one of the main concerns in the architecture and is hard to implement, partly due to the lack of identity support. The general problem that this dissertation is concerned with is that of designing a future Internet. Towards this end, we focus on two specific sub-problems. The first problem is the lack of a framework for thinking about architectures and their design implications. It was obvious after surveying the literature that the majority of the architectural work remains idiosyncratic and that descriptions of network architectures are mostly idiomatic. This has led to the overloading of architectural terms and to the emergence of a large body of network architecture proposals with no clear understanding of their cross-similarities, compatibility points, unique properties, and architectural performance and soundness. The second problem concerns the limitations of traditional naming and discovery schemes in terms of service differentiation and economic incentives.
One of the recurring themes in the community is the need to separate an entity's identifier from its locator to enhance mobility and security. Separation of identifier and locator is a widely accepted design principle for a future Internet. Separation, however, requires a process to translate from the identifier to the locator when discovering a network path to some identified entity. We refer to this process as identifier-based discovery, or simply discovery, and we recognize two limitations that are inherent in the design of traditional discovery schemes. The first limitation is the homogeneity of the service, where all entities are assumed to have the same discovery performance requirements. The second limitation is the inherent incentive mismatch as it relates to sharing the cost of discovery. This dissertation addresses both sub-problems: the architectural framework as well as the naming and discovery limitations.
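The translation step described above can be made concrete with a small sketch. Everything below (the Resolver class and its method names) is illustrative, not from the dissertation; it shows only the core contract of identifier-based discovery: identifiers stay fixed while locators change.

```python
class Resolver:
    """Toy identifier -> locator table (illustrative, not the thesis's design)."""

    def __init__(self):
        self._table = {}  # identifier -> current locator

    def register(self, identifier, locator):
        # An entity (or its agent) publishes its current locator.
        self._table[identifier] = locator

    def resolve(self, identifier):
        # Translate the stable identifier into a locator before path discovery.
        return self._table.get(identifier)

r = Resolver()
r.register("entity:alice", "192.0.2.10")
r.register("entity:alice", "198.51.100.7")  # mobility: locator changes, identifier does not
```

A homogeneous table like this also exhibits the two limitations the dissertation targets: every lookup receives identical service, and nothing in the scheme accounts for who bears the cost of maintaining the mappings.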

    Doctor of Philosophy

    Network emulation has become an indispensable tool for the conduct of research in networking and distributed systems. It offers more realism than simulation and more control and repeatability than experimentation on a live network. However, emulation testbeds face a number of challenges, most prominently realism and scale. Because emulation allows the creation of arbitrary networks exhibiting a wide range of conditions, there is no guarantee that emulated topologies reflect real networks; the burden of selecting parameters to create a realistic environment is on the experimenter. While there are a number of techniques for measuring the end-to-end properties of real networks, directly importing such properties into an emulation has been a challenge. Similarly, while there exist numerous models for creating realistic network topologies, the lack of addresses on these generated topologies has been a barrier to using them in emulators. Once an experimenter obtains a suitable topology, that topology must be mapped onto the physical resources of the testbed so that it can be instantiated. A number of restrictions make this an interesting problem: testbeds typically have heterogeneous hardware, scarce resources which must be conserved, and bottlenecks that must not be overused. User requests for particular types of nodes or links must also be met. In light of these constraints, the network testbed mapping problem is NP-hard. Though the complexity of the problem increases rapidly with the size of the experimenter's topology and the size of the physical network, the runtime of the mapper must not; long mapping times can hinder the usability of the testbed. This dissertation makes three contributions towards improving realism and scale in emulation testbeds. First, it meets the need for realistic network conditions by creating Flexlab, a hybrid environment that couples an emulation testbed with a live-network testbed, inheriting strengths from each.
Second, it attends to the need for realistic topologies by presenting a set of algorithms for automatically annotating generated topologies with realistic IP addresses. Third, it presents a mapper, assign, that is capable of assigning experimenters' requested topologies to testbeds' physical resources in a manner that scales well enough to handle large environments.
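To make the mapping problem concrete, here is a deliberately naive first-fit sketch of the node-assignment core. The function name and the single-capacity model are assumptions for illustration; the dissertation's mapper also handles link bandwidth, node types, and bottleneck conservation, which is what makes the full problem NP-hard.

```python
def first_fit_map(virtual, physical):
    """virtual: {vnode: slots needed}; physical: {pnode: capacity}.
    Returns {vnode: pnode}, or None if some vnode cannot be placed."""
    remaining = dict(physical)
    mapping = {}
    # Place the largest requests first to reduce fragmentation.
    for vnode, need in sorted(virtual.items(), key=lambda kv: -kv[1]):
        for pnode, cap in remaining.items():
            if cap >= need:
                mapping[vnode] = pnode
                remaining[pnode] = cap - need
                break
        else:
            return None  # no physical node can host this virtual node
    return mapping
```

A greedy pass like this can fail on instances a backtracking or randomized search would solve, which is one reason practical mappers use more sophisticated heuristics.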

    Best effort measurement based congestion control

    Abstract available: p.

    The Development of an Automated Testing Framework for Data-Driven Testing Utilizing the UML Testing Profile

    The development of increasingly complex Web 2.0 applications, along with a rise in end-user expectations, has not only made the testing and quality assurance processes of web application development an increasingly important part of the SDLC, but has also made these processes more complex and resource-intensive. One way to effectively test these applications is by implementing an automated testing solution along with manual testing, as automation solutions have been shown to increase the total amount of testing that can be performed and to help testing teams achieve consistency in their testing efforts. The difficulty, though, lies in how best to go about developing such a solution. The use of a framework is shown to help, by decreasing the amount of duplicate code and maintenance required and increasing the separation among the various elements of the testing solution. This research examines the use of the UML Testing Profile (UTP), including the use of UML diagrams, in the creation of such a framework. Using an Action Design Research methodology, a framework is developed for an automated testing solution that utilizes the Selenium WebDriver with a data-driven methodology, used in an organizational context, and evaluated over the course of multiple iterations. Design principles, including the use of a test architecture and test context, the use of UML diagrams for the creation of Page Objects, and the identification and implementation of workflows, are distilled from these iterations, and their impact on the larger context, the delivery of a robust application that meets end-user expectations, is examined.
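The combination of Page Objects with data-driven testing can be sketched as follows. To keep the example self-contained and runnable, a stub stands in for the Selenium WebDriver; the page class, field names, and test data are hypothetical, not from the research.

```python
class StubDriver:
    """Stand-in for a Selenium WebDriver (illustrative only)."""

    def __init__(self):
        self.fields = {}
        self.submitted = None

    def fill(self, name, value):
        self.fields[name] = value

    def submit(self):
        self.submitted = dict(self.fields)

class LoginPage:
    """Page Object: one class per page, hiding locators and actions from tests."""

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.fill("username", user)
        self.driver.fill("password", password)
        self.driver.submit()

# Data-driven: test cases are rows of data, not duplicated scripts.
test_data = [("alice", "s3cret"), ("bob", "hunter2")]
for user, pw in test_data:
    d = StubDriver()
    LoginPage(d).login(user, pw)
    assert d.submitted == {"username": user, "password": pw}
```

The separation matters for maintenance: when a page changes, only its Page Object is edited, and adding a test case means adding a data row rather than new code.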

    Navigating the IoT landscape: Unraveling forensics, security issues, applications, research challenges, and future

    Given the exponential expansion of the internet, the possibilities of security attacks and cybercrimes have increased accordingly. However, poorly implemented security mechanisms in Internet of Things (IoT) devices make them susceptible to cyberattacks, which can directly affect users. IoT forensics is thus needed for investigating and mitigating such attacks. While many works have examined IoT applications and challenges, only a few have focused on both the forensic and security issues in IoT. Therefore, this paper reviews forensic and security issues associated with IoT in different fields. Future prospects and challenges in IoT research and development are also highlighted. As demonstrated in the literature, most IoT devices are vulnerable to attacks due to a lack of standardized security measures. Unauthorized users could gain access, compromise data, and even take control of critical infrastructure. To fulfil the security-conscious needs of consumers, IoT can be used to develop a smart home system by designing a FLIP-based system that is highly scalable and adaptable. Utilizing a blockchain-based authentication mechanism with a multi-chain structure can provide additional security protection between different trust domains. Deep learning can be utilized to develop a network forensics framework with a high-performing system for detecting and tracking cyberattack incidents. Moreover, researchers should consider limiting the amount of data created and delivered when using big data to develop IoT-based smart systems. The findings of this review will stimulate academics to seek potential solutions for the identified issues, thereby advancing the IoT field.
    Comment: 77 pages, 5 figures, 5 tables

    Remote maintenance of real time controller software over the internet

    The aim of the work reported in this thesis is to investigate how to establish a standard platform for remote maintenance of controller software, which provides remote monitoring, remote fault identification and remote performance recovery services for geographically distributed controller software over the Internet. A Linear Quadratic Gaussian (LQG) controller is used as the benchmark for control performance assessment; the LQG benchmark variances are estimated based on the Lyapunov equation and subspace matrices. The LQG controller is also utilized as a reference model of the actual controller to detect controller failures. Discrepancies between the control signals of the LQG controller and the actual controller are fed to a Generalized Likelihood Ratio (GLR) test, and controller failure detection is cast as detecting sudden jumps in the mean or variance of the discrepancies. To restore the degraded control performance caused by controller failures, a compensator is designed and inserted into the post-fault control loop; it is serially linked with the faulty controller and recovers the degraded control performance to an acceptable range. Techniques of controller performance monitoring, controller failure detection and maintenance are extended into the Internet environment. An Internet-based maintenance system for controller software is developed, which provides remote control performance assessment and recovery services, and a remote fault identification service, over the Internet for geographically distributed controller software. The integration between mobile agent technology and controller software maintenance is investigated. A mobile agent based controller software maintenance system is established; the mobile agent structure is designed to be flexible, and the travelling agents can be remotely updated over the Internet.
Also, the issue of heavy data processing and transfer over the Internet is probed, and a novel data processing and transfer scheme is introduced. All the proposed techniques are tested on simulations or a process control unit. Simulation and experimental results illustrate the effectiveness of the proposed techniques. EThOS - Electronic Theses Online Service. United Kingdom.
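The GLR-based failure detection described above can be sketched for its simplest case: a jump in the mean of the discrepancy signal, with known pre-fault statistics. The function name, the known-variance assumption, and the threshold handling are illustrative choices, not the thesis's exact formulation.

```python
def glr_mean_shift(x, mu0, sigma, threshold):
    """Return the index of the most likely jump in the mean of the
    discrepancy sequence x, or None if the maximum GLR statistic
    stays below threshold. Assumes known pre-fault mean mu0 and
    known standard deviation sigma (illustrative simplification)."""
    n = len(x)
    best_k, best_stat = None, 0.0
    for k in range(1, n):               # candidate change point
        tail = x[k:]
        m = sum(tail) / len(tail)       # ML estimate of the post-change mean
        # Log-likelihood ratio of "mean shifted to m at k" vs "no change".
        stat = len(tail) * (m - mu0) ** 2 / (2 * sigma ** 2)
        if stat > best_stat:
            best_stat, best_k = stat, k
    return best_k if best_stat > threshold else None
```

In the thesis's setting, x would be the discrepancy between the LQG reference controller's output and the actual controller's output, and a detected jump triggers the compensator design.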

    Spatial consensus-building through access to web-based GIS : an online planning tool for Leipzig

    Thesis (M.C.P.) -- Massachusetts Institute of Technology, Dept. of Urban Studies and Planning, 1997. Includes bibliographical references (p. 143-149). By Matthias Baxmann. M.C.P.

    Sharing our digital aura through social and physical proximity

    Thesis (S.M.) -- Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2008. Includes bibliographical references (p. 153-160). People are quite good at establishing a social style and using it in different communication contexts, but they do less well when the communication is mediated by computer networks. It is hard to control what information is revealed and how one's digital persona will be presented or interpreted. In this thesis, we ameliorate this problem by creating a "Virtual Private Milieu", a "VPM", that allows networked devices to act on our behalf and project a "digital aura" to other people and devices around us in a manner analogous to the way humans naturally interact with one another. The dynamic aggregation of the different auras and facets that the devices expose to one another creates social spheres of interaction between sets of active devices, and consequently between people. We focus on the subset of networking that deals with proximate communication, which we dub Face-to-Face Networking (FtFN). Network interaction in this space is often analogous to human face-to-face interaction, and increasingly, our devices are being used in local situations. We describe a VPM framework, key features of which include the incorporation of trust and context parameters into the discovery and communication process, the incorporation of multiple context-unique identities, and the support for multiple degrees of security and privacy. We also present the "Social Dashboard", a readily usable control for one's aura. Finally, we review "Comm.unity", a software package that allows developers and researchers easy implementation and deployment of local and distant social applications, and present two applications developed over this platform. Nadav Aharony. S.M.

    Guaranteed Time Slot Performance, Synchronous Data Acquisition, and Synchronization Error in the IEEE 802.15.4 Standard

    Tez (Yüksek Lisans) -- İstanbul Teknik Üniversitesi, Fen Bilimleri Enstitüsü, 2008. Thesis (M.Sc.) -- İstanbul Technical University, Institute of Science and Technology, 2008. In this study, the performance of GTS (Guaranteed Time Slot) is measured on 13192 Evolution Kit modules from Freescale Semiconductor. This performance is compared with the theoretical throughput and theoretical maximum goodput values. Besides the performance measurements, synchronous data acquisition with two sensor nodes has been successfully realized; GTS is used while transmitting the acquired data to the coordinator. Furthermore, results obtained from the performance measurements are used to tune the synchronization of the two nodes. To arrive at a synchronization scheme, the beacon notify indication primitive defined in the IEEE 802.15.4 standard is used in the developed applications. The synchronization error introduced by the nodes is also examined. Yüksek Lisans. M.Sc.
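The distinction between theoretical throughput and maximum goodput that the thesis measures against can be illustrated with back-of-the-envelope arithmetic. The constants below (250 kbit/s PHY rate at 2.4 GHz, 127-byte maximum PHY payload, an assumed 17 bytes of PHY/MAC overhead per frame) are illustrative, not the thesis's measured values.

```python
PHY_RATE = 250_000   # bits/s, 802.15.4 at 2.4 GHz
FRAME = 127          # max PHY payload in bytes
OVERHEAD = 6 + 11    # assumed bytes: PHY sync header + MAC header/FCS

def throughput_goodput(frames_per_second):
    """Throughput counts every bit on air; goodput only application bits."""
    throughput = frames_per_second * FRAME * 8
    goodput = frames_per_second * (FRAME - OVERHEAD) * 8
    return throughput, goodput
```

The gap between the two numbers grows with per-frame overhead, which is why GTS measurements are reported against both bounds.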

    Availability and Preservation of Scholarly Digital Resources

    The dynamic, decentralized world-wide-web has become an essential part of scientific research and communication, representing a relatively new medium for the conveyance of scientific thought and discovery. Researchers create thousands of web sites every year to share software, data and services. Unlike books and journals, however, the preservation systems are not yet mature. This carries implications that go to the core of science: the ability to examine another's sources to understand and reproduce their work. These valuable resources have been documented as disappearing over time in several subject areas. This dissertation examines the problem by performing a cross-disciplinary investigation, testing the effectiveness of existing remedies and introducing new ones. As part of the investigation, 14,489 unique web pages found in the abstracts within Thomson Reuters' Web of Science citation index were accessed. The median lifespan of these web pages was found to be 9.3 years, with 62% of them being archived. Survival analysis and logistic regression identified significant predictors of URL lifespan, including the year a URL was published, the number of times it was cited, its depth, and its domain. Statistical analysis revealed biases in current static web-page solutions.