Ubiquitous Semantic Applications
As Semantic Web technology evolves, many open areas emerge that attract increasing research focus. In addition to the quickly expanding Linked Open Data (LOD) cloud, various embeddable metadata formats (e.g. RDFa, microdata) are becoming more common. Corporations are already using the existing Web of Data to create technologies that were not possible before; Watson by IBM, an artificial intelligence computer system capable of answering questions posed in natural language, is a prominent example.
On the other hand, ubiquitous devices equipped with large numbers of sensors and integrated peripherals are becoming increasingly powerful, fully featured computing platforms in our pockets and homes. For many people, smartphones and tablet computers have already replaced traditional computers as their window to the Internet and the Web. Hence, managing and presenting information that is useful to the user is a core requirement for today's smartphones, and providing access to the emerging Web of Data from ubiquitous devices is becoming extremely important.
In this thesis we investigate how ubiquitous devices can interact with the Semantic Web. We identify five different approaches for bringing the Semantic Web to ubiquitous devices, and we outline and discuss in detail the challenges of implementing these approaches in section 1.2. We describe a conceptual framework for ubiquitous semantic applications in chapter 4. We distinguish three client approaches for accessing semantic data from ubiquitous devices, depending on how much of the semantic data processing is performed on the device itself (thin, hybrid, and fat clients); these are discussed in chapter 5 along with solutions to the related challenges. Two provider approaches (fat and hybrid) can be distinguished for exposing data from ubiquitous devices on the Semantic Web; these are discussed in chapter 6 along with solutions to the related challenges. We conclude with a discussion of each contribution of the thesis and propose future work for each of the discussed approaches in chapter 7.
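To make the thin-client approach concrete, here is a minimal sketch (not code from the thesis) in which the device offloads all semantic processing to a remote SPARQL endpoint and merely renders the results; the public DBpedia endpoint stands in for any LOD data source.

```python
# Thin-client sketch: all semantic processing happens server-side; the
# device only sends a SPARQL query and displays the bindings it gets back.
# Assumes a public SPARQL endpoint (DBpedia here) that returns JSON results.
import json
import urllib.parse
import urllib.request

ENDPOINT = "https://dbpedia.org/sparql"
QUERY = """
SELECT ?label WHERE {
  <http://dbpedia.org/resource/Berlin> rdfs:label ?label .
  FILTER (lang(?label) = "en")
} LIMIT 1
"""

url = ENDPOINT + "?" + urllib.parse.urlencode(
    {"query": QUERY, "format": "application/sparql-results+json"})
with urllib.request.urlopen(url) as resp:
    results = json.load(resp)
for binding in results["results"]["bindings"]:
    print(binding["label"]["value"])
```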
Different Forensic Tools on a Single SSD and HDD, Their Differences and Drawbacks
With the increase in technology comes great innovation. One such transformation is the shift from hard disk drives (HDDs) to solid state drives (SSDs). SSDs are non-volatile storage devices that have become a key storage technology: they serve the same role as hard disks but are many times faster, consume far less power, and are smaller and more efficient. However, the mechanism by which SSDs store and modify data is intrinsically different from that of hard disk drives, and every innovation has drawbacks as well as advantages. Digital forensics on SSDs is relatively new, and SSDs have challenged cyber-crime investigators ever since their introduction: whereas deleted data was easy to retrieve from hard disks, SSDs can automatically erase or alter data whenever they are connected to power, even without a host interface, resulting in major evidence loss or contamination. That different types of SSDs do not behave identically is a further challenge for investigators. The main purpose of this paper is to describe the evolution of SSDs, create image files of a single SSD and a single hard disk using different forensic tools, and compare the results. We place an evidence file on the SSD and the HDD under multiple permutations and combinations, format the disks, and create image files of both disks for analysis with a forensic tool. We also analyze how many evidence files are completely deleted from each device by comparing against the original number of files we placed and the original hits obtained while analyzing a single evidence folder.
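As an illustration of the verification step such image comparisons rely on, here is a hedged Python sketch (not one of the paper's forensic tools) that hashes disk images chunk by chunk; the file names are hypothetical.

```python
# Sketch: compare cryptographic hashes of two acquisitions of the same
# drive. Image paths are illustrative placeholders.
import hashlib

def image_hash(path: str, algo: str = "sha256", chunk: int = 1 << 20) -> str:
    """Hash a disk image in fixed-size chunks to bound memory use."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

# An SSD image may legitimately differ between acquisitions: garbage
# collection and TRIM can alter cells even with no host writes.
if image_hash("ssd.dd") != image_hash("ssd_verify.dd"):
    print("image mismatch: possible TRIM/GC activity on the SSD")
```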
A Survey on Resource Management in IoT Operating Systems
Recently, the Internet of Things (IoT) concept has attracted a lot of attention due to its capability to translate our physical world into a digital cyber world of meaningful information. IoT devices are small in size, sheer in number, contain little memory, use little energy, and have limited computational capability. These scarce resources are managed by small operating systems (OSs) specially designed to support IoT devices' diverse applications and operational requirements. These IoT OSs are responsible for managing the constrained resources of IoT devices efficiently and in a timely manner. In this paper, we discuss IoT devices and OS resource management, investigating in detail the resource management mechanisms of state-of-the-art IoT OSs such as Contiki, TinyOS, and FreeRTOS. We study the different dimensions of their resource management approaches (including process, memory, energy, communication, and file management) and highlight their advantages and limitations.
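The event-driven, cooperative scheduling that OSs like Contiki use for process management on a single small stack can be illustrated with a small simulation; the sketch below is conceptual Python, not code from any of the surveyed OSs.

```python
# Conceptual sketch of cooperative, event-driven scheduling of the kind
# Contiki's protothreads provide: tasks yield instead of blocking, so one
# scheduler loop and one stack serve every task. Pure simulation.
from collections import deque

def blink():
    while True:
        print("led toggle")    # stand-in for toggling a GPIO pin
        yield                  # return control to the scheduler

def sample_sensor():
    while True:
        print("read sensor")   # stand-in for an ADC read
        yield

ready = deque([blink(), sample_sensor()])
for _ in range(4):             # run four scheduling rounds
    task = ready.popleft()     # cooperative round-robin dispatch
    next(task)
    ready.append(task)
```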
Contrasting Views of Complexity and Their Implications For Network-Centric Infrastructures
There exists a widely recognized need to better understand and manage complex "systems of systems," ranging from biology, ecology, and medicine to network-centric technologies. This is motivating the search for universal laws of highly evolved systems and driving demand for new mathematics and methods that are consistent, integrative, and predictive. However, the theoretical frameworks available today are not merely fragmented but sometimes contradictory and incompatible. We argue that complexity arises in highly evolved biological and technological systems primarily to provide mechanisms that create robustness. However, this complexity can itself be a source of new fragility, leading to "robust yet fragile" tradeoffs in system design. We focus on the role of robustness and architecture in networked infrastructures, and we highlight recent advances in the theory of distributed control driven by network technologies. This view of complexity in highly organized technological and biological systems is fundamentally different from the dominant perspective in the mainstream sciences, which downplays function, constraints, and tradeoffs, and tends to minimize the role of organization and design.
A Case for Bundle Protocol in Space
NASA, through the Advanced Exploration Systems (AES) project, is investing in the development and infusion of delay tolerant networking (DTN) protocols for use on future space flight missions. The cornerstone of the DTN suite is the Bundle Protocol, which provides network layer addressing and routing of data blocks. In 2017, the Plankton, Aerosol, Cloud, and Ocean Ecosystem (PACE) mission was selected as the first in-house robotic science mission to implement the Bundle Protocol for downlink of housekeeping telemetry. One year into the design and incorporation of the Bundle Protocol on PACE, this presentation makes a case for using the Bundle Protocol for communication with future space assets. Specifically, the use of the Bundle Protocol (1) simplifies relaying data through store-and-forward routing and custody transfer; (2) simplifies downlink management through delivery guarantees; and (3) simplifies storage services through block level interactions with memory devices.
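The store-and-forward behaviour at the heart of the Bundle Protocol can be sketched in a few lines; the Python model below is purely illustrative, with hypothetical names, and is not drawn from any flight implementation.

```python
# Sketch of Bundle Protocol store-and-forward: a node takes custody of
# bundles while no contact is available and forwards them when a link
# to the next hop comes up. A real node would use persistent storage.
from collections import deque

class BundleNode:
    def __init__(self, name):
        self.name = name
        self.store = deque()

    def receive(self, bundle):
        self.store.append(bundle)      # take custody: hold until forwarded

    def contact(self, next_hop):
        while self.store:              # drain the store during the contact
            next_hop.receive(self.store.popleft())

relay, ground = BundleNode("relay"), BundleNode("ground")
relay.receive({"src": "PACE", "payload": "housekeeping telemetry"})
relay.contact(ground)                  # a link becomes available
print(len(ground.store), "bundle(s) delivered")
```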
Halting Neotropical Deforestation: Do the Forest Principles Have What It Takes?
INTRODUCTION I crashed into the thick secondary growth, stopping suddenly to duck a certain branch in my path: a fat black bullet ant crawled along it with indifference, an attitude that would have quickly changed had I brushed up against him. I headed toward the large patch of Heliconia just to the right. We had earlier mapped out the clump, and finding it to contain seventeen flower clusters, it was one of the prize patches in the study plot. I took my spot ten paces from the outer clusters, started my stop watch, and waited with field book in hand. The Birds of Paradise were dripping nectar from their red fingertips. With such a gold mine, I did not have to wait long for a hummingbird. Like an Evinrude-powered flat bottom whizzing up a winding lagoon, the bird's sound reached me before I saw him. He appeared from the back of the patch, taking a drink here, then there, then here again, then at some other spot, then there again and back to here. He did not sit and sip for long at each spot, but he did pause long enough for me to see him gleam green and deep violet. He was a red-footed plumeleteer, emerald green on the head, changing to dark purple through his body and on to his tail. His feet and straight bill were distinctively red. Without a doubt he owned this lucrative Heliconia patch. But then from my right came another whir. A ..
The Centripetal Network: How the Internet Holds Itself Together, and the Forces Tearing It Apart
Two forces are in tension as the Internet evolves. One pushes toward interconnected common platforms; the other pulls toward fragmentation and proprietary alternatives. Their interplay drives many of the contentious issues in cyberlaw, intellectual property, and telecommunications policy, including the fight over network neutrality for broadband providers, debates over global Internet governance, and battles over copyright online. These are more than just conflicts between incumbents and innovators, or between openness and deregulation. Their roots lie in the fundamental dynamics of interconnected networks.
Fortunately, there is an interdisciplinary literature on network properties, albeit one virtually unknown to legal scholars. The emerging field of network formation theory explains the pressures threatening to pull the Internet apart, and suggests responses. The Internet as we know it is surprisingly fragile. To continue the extraordinary outpouring of creativity and innovation that the Internet fosters, policy-makers must protect its composite structure against both fragmentation and excessive concentration of power.
This paper, the first to apply network formation models to Internet law, shows how the Internet pulls itself together as a coherent whole. This very process, however, creates and magnifies imbalances that encourage balkanization. By understanding how networks behave, governments and other legal decision-makers can avoid unintended consequences and target their actions appropriately. A network-theoretic perspective holds great promise to inform the law and policy of the information economy.
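One dynamic the network formation literature studies, preferential attachment, shows how interconnection can concentrate a network around a few hubs; the sketch below is a generic illustration of that dynamic, not a model taken from the paper.

```python
# Preferential attachment sketch: new nodes link to existing nodes with
# probability proportional to degree, so a few hubs come to dominate.
import random

random.seed(1)
degree = {0: 1, 1: 1}                 # start with two linked nodes
edges = [(0, 1)]
for new in range(2, 200):
    # choose a target weighted by its current degree
    target = random.choices(list(degree), weights=degree.values())[0]
    edges.append((new, target))
    degree[new] = 1
    degree[target] += 1

print("top-3 node degrees:", sorted(degree.values(), reverse=True)[:3])
```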
Reconfigurable Optically Interconnected Systems
With the immense growth of data consumption in today's data centers and high-performance computing systems, driven by the constant influx of new applications, the network infrastructure supporting this demand is under increasing pressure to deliver higher bandwidth, lower latency, and greater flexibility. Optical interconnects, able to support high bandwidth wavelength division multiplexed signals with extreme energy efficiency, have become the basis for long-haul and metro-scale networks around the world, while photonic components are being rapidly integrated within rack and chip-scale systems. However, optical and photonic interconnects are not a direct replacement for electronic-based components. Rather, the integration of optical interconnects with electronic peripherals allows for unique functionalities that can improve the capacity, compute performance, and flexibility of current state-of-the-art computing systems. This requires physical layer methodologies for their integration with electronic components, as well as system level control planes that incorporate the optical layer characteristics. This thesis explores various network architectures and the associated control planes, hardware infrastructure, and other supporting software modules needed to integrate silicon photonics and MEMS-based optical switching into conventional datacom network systems, ranging from intra-data center and high-performance computing systems to metro-scale networks between data centers. In each of these systems, we demonstrate dynamic bandwidth steering and compute resource allocation capabilities that enable significant performance improvements. The key accomplishments of this thesis are as follows.
In Part 1, we present high-performance computing network architectures that integrate silicon photonic switches for optical bandwidth steering, enabling multiple reconfigurable topologies that result in significant system performance improvements. As high-performance systems rely on increased parallelism by scaling up to greater numbers of processor nodes, communication between these nodes grows rapidly and the interconnection network becomes a bottleneck to overall system performance. It has been observed that many scientific applications running on high-performance computing systems produce highly skewed traffic, congesting only a small percentage of the available links while other links are underutilized. This mismatch between the traffic and the bandwidth allocation of the physical layer network presents an opportunity to optimize the system's bandwidth utilization by using silicon photonic switches to perform bandwidth steering, allowing individual processors to perform at their maximum compute potential and thereby improving overall system performance. We show various testbeds that integrate both microring resonator and Mach-Zehnder based silicon photonic switches within Dragonfly and Fat-Tree topology networks built with conventional equipment, and demonstrate a 30-60% reduction in the execution time of real high-performance benchmark applications.
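A hedged sketch of the bandwidth-steering decision itself: given a measured traffic matrix and a pool of steerable links, assign the spare links to the hottest node pairs. The numbers and names below are illustrative, not testbed data.

```python
# Greedy bandwidth steering sketch: reassign spare optical links to the
# node pairs carrying the most traffic. Traffic values are made up.
traffic = {("g0", "g1"): 42.0, ("g0", "g3"): 17.5,
           ("g0", "g2"): 3.1, ("g1", "g2"): 1.2}   # Gb/s, measured

SPARE_LINKS = 2   # links a photonic switch can re-point
hot_pairs = sorted(traffic, key=traffic.get, reverse=True)[:SPARE_LINKS]
for pair in hot_pairs:
    # in a testbed this step would drive the silicon photonic switch
    print(f"steer one spare link onto {pair} ({traffic[pair]} Gb/s)")
```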
Part 2 presents a flexible network architecture and control plane that enable autonomous bandwidth steering and IT resource provisioning between metro-scale, geographically distributed data centers. A software-defined control plane autonomously provisions both network and IT resources to support different quality-of-service requirements and optimizes resource utilization under dynamically changing loads. By actively monitoring both the bandwidth utilization of the network and the CPU and memory usage of the end hosts, the control plane autonomously provisions background or dynamic connections with different levels of quality of service using optical MEMS switching, and initiates live migrations of virtual machines to consolidate or distribute workload. Together these functionalities provide flexibility, maximize efficiency in processing and transferring data, and enable energy and cost savings by scaling down the system when resources are not needed. An experimental testbed of three data center nodes was built to demonstrate the feasibility of these capabilities.
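The control-plane policy can be summarized as threshold-driven monitoring; the Python sketch below is illustrative, with hypothetical thresholds and action strings rather than the actual software-defined controller.

```python
# Illustrative control-plane step: watch link and host utilization and
# decide on provisioning or consolidation. Thresholds are hypothetical.
LINK_HIGH = 0.8   # provision extra capacity above 80% link utilization
CPU_LOW = 0.2     # hosts under 20% CPU are candidates for consolidation

def control_step(link_util, host_cpu):
    actions = []
    if link_util > LINK_HIGH:
        actions.append("provision dynamic optical circuit via MEMS switch")
    idle = [host for host, util in host_cpu.items() if util < CPU_LOW]
    if idle:
        actions.append(f"live-migrate VMs off {idle} and scale down")
    return actions

print(control_step(0.9, {"h1": 0.1, "h2": 0.15, "h3": 0.7}))
```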
Part 3 presents Lightbridge, a communications platform designed to integrate processor nodes more seamlessly with an optically switched network. It addresses some of the crucial issues related to optical switching faced by the work presented in the previous chapters: when optical switches perform switching operations they change the physical topology of the network, and because they cannot buffer packets, certain optical circuits become temporarily unavailable, raising the question of whether it is safe for end hosts to transmit packets at any given time. Lightbridge coordinates switching and routing of optical circuits across the network by letting processors learn the current state of the optical network before transmitting, and by buffering packets while an optical circuit is unavailable. This part describes the details of Lightbridge, which consists of a loadable Linux kernel module along with supporting modifications to the Linux kernel to achieve the necessary functionality.
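A user-space caricature of that send path (the real Lightbridge is a Linux kernel module; all names here are hypothetical):

```python
# Sketch: consult optical network state before transmitting; buffer
# packets while the circuit to the destination is down, flush on setup.
from collections import defaultdict, deque

circuit_up = {"node7": False}            # state learned from the switch
pending = defaultdict(deque)

def send(dst, pkt):
    if circuit_up.get(dst):
        print(f"tx {len(pkt)}B to {dst}")
    else:
        pending[dst].append(pkt)         # hold until the circuit exists

def circuit_established(dst):
    circuit_up[dst] = True
    while pending[dst]:                  # flush buffered packets in order
        send(dst, pending[dst].popleft())

send("node7", b"payload")                # buffered: circuit is down
circuit_established("node7")             # switch reconfigures; flush
```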
CheriOS: Designing an untrusted single-address-space capability operating system utilising capability hardware and a minimal hypervisor
This thesis presents the design, implementation, and evaluation of a novel capability operating system: CheriOS. The guiding motivation behind CheriOS is to provide strong security guarantees to programmers, even allowing them to continue to program in fast, but typically unsafe, languages such as C. Furthermore, it does this in the presence of an extremely strong adversarial model: in CheriOS, every compartment -- and even the operating system itself -- is considered actively malicious. Building on top of the architecturally enforced capabilities offered by the CHERI microprocessor, I show that only a few more capability types and enforcement checks are required to provide a strong compartmentalisation model that can facilitate mutual distrust. I implement these new primitives in software, in a new abstraction layer I dub the nanokernel. Among the new OS primitives I introduce are the Reservation, which provides integrity and confidentiality (allowing private memory to be allocated without trusting the allocator), and the Foundation, which provides attestation about the state of the system (a key to sign and protect capabilities based on a signature of the starting state of a program). I show that, using these new facilities, it is possible to design an operating system without having to trust that its implementation is correct.
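As a loose conceptual model of the Reservation primitive (Python standing in for CHERI hardware capabilities, so this captures only the one-shot hand-off, not the architectural enforcement):

```python
# Conceptual model, not CHERI code: a Reservation lets a program obtain
# private memory from an untrusted allocator, because the memory becomes
# usable exactly once, to whoever takes the reservation.
class Reservation:
    def __init__(self, size):
        self._memory = bytearray(size)   # backing store, not yet usable
        self._taken = False

    def take(self):
        """One-shot: yields the memory exactly once, to the holder."""
        if self._taken:
            raise PermissionError("reservation already taken")
        self._taken = True
        return self._memory

res = Reservation(4096)   # handed out by an untrusted allocator
buf = res.take()          # only the recipient gains access to the bytes
```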
CheriOS is fundamentally fail-safe; there are no assumptions about the behaviour of the system, apart from the CHERI processor and the nanokernel, that can be broken. Using CHERI and the new nanokernel primitives, programmers can expect full isolation at scopes ranging from a whole program to a single function, and not just with respect to other programs but to the system itself. Programs compiled for and run on CheriOS get full memory safety, both spatial and temporal, enforced control flow integrity between compartments, and protection against common vulnerabilities such as buffer overflows, code injection, and Return-Oriented-Programming attacks. I achieve this by designing a new CHERI-based ABI (Application Binary Interface) which includes a novel stack structure that offers temporal safety. I evaluate how practical the new designs are by prototyping them and offering a detailed performance evaluation, and I contrast them with existing offerings from both industry and academia.
CHERI capabilities can be used to restrict access to system resources, such as memory, with the required dynamic checks performed by hardware in parallel with normal operation. Using the accelerating features of CHERI, I show that many of the security guarantees that CheriOS offers can come at little to no cost. I present a novel and secure IO/IPC layer that allows secure marshalling of multiple data streams through mutually distrusting compartments, with fine-grained authenticated access control for end-points, and without either copying or encryption. For example, CheriOS can restrict its TCP stack from having access to packet contents, or restrict an open socket to ensure that data sent on it arrives at an endpoint signed as a TLS implementation. Even with the added security requirements, CheriOS performs well on real workloads. I showcase this by running a state-of-the-art webserver, NGINX, atop both CheriOS and FreeBSD, and show improvements in performance ranging from 3x to 6x when running on a small-scale low-power FPGA implementation of CHERI-MIPS.
Introductory Computer Forensics
INTERPOL (International Police) built cybercrime programs to keep up with emerging cyber threats, and aims to coordinate and assist international operations for fighting crimes involving computers. Although significant international efforts are being made in dealing with cybercrime and cyber-terrorism, finding effective, cooperative, and collaborative ways to deal with complicated cases that span multiple jurisdictions has proven difficult in practice.
- …