152 research outputs found
A Survey of Green Networking Research
Reducing unnecessary energy consumption is becoming a major concern in wired networking, because of the potential economic benefits and the expected environmental impact. These issues, usually referred to as "green networking", relate to embedding energy-awareness in the design, the devices and the protocols of networks. In this work, we first formulate a more precise definition of the "green" attribute. We furthermore identify a few paradigms that are the key enablers of energy-aware networking research. We then overview the current state of the art and provide a taxonomy of the relevant work, with a special focus on wired networking. At a high level, we identify four branches of green networking research that stem from different observations on the root causes of energy waste, namely (i) Adaptive Link Rate, (ii) Interface Proxying, (iii) Energy-aware Infrastructures and (iv) Energy-aware Applications. In this work, we not only explore specific proposals pertaining to each of the above branches, but also offer a perspective for research.
Comment: Index Terms: Green Networking; Wired Networks; Adaptive Link Rate; Interface Proxying; Energy-aware Infrastructures; Energy-aware Applications. 18 pages, 6 figures, 2 tables
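The Adaptive Link Rate branch mentioned above can be sketched in a few lines: a link selects the lowest supported rate that still carries the current demand, and power draw scales with the selected rate. The rates, per-rate power figures, and function names below are illustrative assumptions, not values from the survey.

```python
RATES_MBPS = [10, 100, 1000]                # supported Ethernet rates
POWER_W = {10: 0.5, 100: 1.0, 1000: 4.0}    # assumed per-rate power draw

def select_rate(demand_mbps):
    """Return the lowest supported rate that can carry the demand."""
    for rate in RATES_MBPS:
        if demand_mbps <= rate:
            return rate
    return RATES_MBPS[-1]  # saturate at the top rate

def energy_joules(demand_trace, interval_s=1.0):
    """Energy used over a trace of per-interval demands (in Mbps)."""
    return sum(POWER_W[select_rate(d)] * interval_s for d in demand_trace)

trace = [3, 50, 800, 20, 5]
adaptive = energy_joules(trace)
fixed = POWER_W[1000] * len(trace)   # always-on gigabit baseline
print(adaptive, fixed)
```

Even this toy model shows the motivating observation: most intervals need far less than the peak rate, so matching rate to demand cuts the energy bill substantially.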
Digital evidence on solid-state drives (SSDs): a review
The massive use of electronic devices (phones, tablets, computers, laptops, among others), and people's dependence on them, has created a need to stay permanently connected through these technological tools; in the event of an incident, this makes such devices very useful as evidentiary material. Given the absence of academic literature, this article reviews computer forensics and the collection and handling of digital evidence in Argentina, Chile, Colombia and Mexico over the last decade. Sources are drawn from the IEEE database and from organisations such as the International Telecommunication Union (ITU), the Attorney General's Office, the Ministry of Information Technologies and Communications (MINTIC), and specialised websites. An interpretative study is made of the sources related to cybersecurity and their orientation toward SSDs and the physical and logical recovery of information from this type of device.
Network Simulation Cradle
This thesis proposes the use of real world network stacks instead of protocol
abstractions in a network simulator, bringing the actual code used in
computer systems inside the simulator and allowing for greater simulation
accuracy. Specifically, a framework called the Network Simulation
Cradle is created that supports the kernel source code from FreeBSD, OpenBSD
and Linux to make the network stacks from these systems available to the
popular network simulator ns-2.
Simulating with these real world network stacks reveals situations where the result differs significantly from ns-2's TCP models. The simulated network stacks can be directly compared to the same operating system running on an actual machine, making validation simple. When measuring the packet traces produced on a test network and in simulation, the results are nearly identical, a level of accuracy previously unavailable with traditional TCP simulation models. The results of simulations comparing ns-2 TCP models and our framework are presented in this dissertation, along with validation studies of our framework showing how closely simulation resembles real world computers.
Using real world stacks to simulate TCP is a complementary approach to using the existing TCP models and provides an extra level of validation. This way of simulating TCP and other protocols offers the network researcher or engineer new possibilities. One example is using the framework as a protocol development environment, which allows user-level development of protocols with a standard set of reproducible tests, the ability to test scenarios that are costly or impossible to build physically, and the ability to trace and debug the protocol code without affecting results.
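The validation approach described above (comparing a trace captured on a test network with the trace produced in simulation) can be sketched as a simple packet-by-packet comparison. The trace tuple layout, field names, and tolerance below are illustrative assumptions, not the thesis's actual methodology.

```python
def traces_match(real, simulated, time_tol=1e-3):
    """True if both traces contain the same packets (seq, size), with
    timestamps agreeing within time_tol seconds."""
    if len(real) != len(simulated):
        return False
    for (t1, seq1, sz1), (t2, seq2, sz2) in zip(real, simulated):
        if seq1 != seq2 or sz1 != sz2 or abs(t1 - t2) > time_tol:
            return False
    return True

# Traces as (timestamp_s, sequence_number, size_bytes) tuples.
real_trace = [(0.0000, 1, 1500), (0.0120, 2, 1500)]
sim_trace  = [(0.0005, 1, 1500), (0.0121, 2, 1500)]
print(traces_match(real_trace, sim_trace))
```

A comparison like this only makes sense because the simulator runs the same kernel code as the real host; with abstract TCP models, packet timings and even packet counts routinely diverge.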
Pacific Review Fall 2013
Performance Trees: Implementation And Distributed Evaluation
In this paper, we describe the first realisation of an evaluation environment for Performance Trees, a recently proposed formalism for the specification of performance properties and measures. In particular, we present the architecture and implementation of this environment, which comprises a client-side model and performance query specification tool, and a server-side distributed evaluation engine supported by a dedicated computing cluster. The evaluation engine combines the analytic capabilities of a number of distributed tools for steady-state, passage time and transient analysis, and also incorporates a caching mechanism to avoid redundant calculations. We demonstrate in the context of a case study how this analysis pipeline allows remote users to design their models and performance queries in a sophisticated yet easy-to-use framework, and subsequently evaluate them by harnessing the computing power of a Grid cluster back-end.
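The caching mechanism mentioned above amounts to memoising results keyed by the model and query, so a repeated query never re-runs a solver. This is a generic memoisation sketch under assumed names, not the engine's actual implementation.

```python
class EvaluationCache:
    """Caches performance-query results keyed by (model, query)."""

    def __init__(self):
        self._results = {}
        self.hits = 0

    def evaluate(self, model_id, query, compute):
        key = (model_id, query)
        if key in self._results:
            self.hits += 1              # redundant calculation avoided
        else:
            self._results[key] = compute()
        return self._results[key]

cache = EvaluationCache()
slow_passage_time = lambda: 0.42        # stand-in for a distributed solver run
cache.evaluate("model-1", "passage_time(start, end)", slow_passage_time)
cache.evaluate("model-1", "passage_time(start, end)", slow_passage_time)
print(cache.hits)                       # second call is served from the cache
```

In a distributed setting the payoff is large: a single passage-time analysis may occupy the cluster for minutes, so every cache hit saves a full solver run.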
Mirror - Vol. 30, No. 15 - January 27, 2005
The Mirror (sometimes called the Fairfield Mirror) is the official student newspaper of Fairfield University, and is published weekly during the academic year (September - May). It runs from 1977 to the present; current issues are available online.
Design and Implementation of Algorithms for Traffic Classification
Traffic analysis is the practice of using inherent characteristics of a network flow, such as the timings, sizes, and orderings of its packets, to derive sensitive information about it. Traffic analysis techniques are used because the extensive adoption of encryption and content-obfuscation mechanisms makes it impossible to infer any information about a flow by analyzing its content. In this thesis, we use traffic analysis to infer sensitive information for different objectives in different applications: p2p cryptocurrencies, flow correlation, and messaging applications. For each of these applications, our goal is to tailor traffic analysis algorithms that best capture the intrinsic characteristics of its network traffic; the objective of the analysis also differs between applications. In Bitcoin, our goal is to evaluate the resilience of Bitcoin traffic to blocking by powerful entities such as governments and ISPs. Bitcoin and similar cryptocurrencies play an important role in electronic commerce and other trust-based distributed systems because of their significant advantages over traditional currencies, including open access to global e-commerce. It is therefore essential to consumers and to industry to have reliable access to their Bitcoin assets. We also examine stepping-stone attacks for flow correlation. A stepping stone is a host that an attacker uses to relay her traffic in order to hide her identity. We introduce two fingerprinting systems, TagIt and FINN. TagIt embeds a secret fingerprint into a flow by moving packets to specific time intervals, whereas FINN uses DNNs to embed the fingerprint by changing the inter-packet delays (IPDs) of the flow. In messaging applications, we analyze the WhatsApp messaging service to determine whether its traffic leaks sensitive information, such as the identities of the members of a particular conversation, to adversaries who watch the encrypted traffic. The privacy of these messaging applications is essential because they provide an environment to discuss politically sensitive subjects, making them a target of government surveillance and censorship in totalitarian countries. We take two technical approaches to designing our traffic analysis techniques. The increasing use of DNN-based classifiers inspires the first: we train DNN classifiers to perform specific traffic analysis tasks. Our second approach is to inspect and model the shape of traffic in the target application and design a statistical classifier for the expected shape of traffic. DNN-based methods are useful when the network is complex and the noise underlying the traffic is not linear; these models also do not need meticulous analysis to extract features. However, deep learning techniques need a vast amount of training data to work well, so they are not beneficial when insufficient data is available to train a generalized model. Statistical methods, on the other hand, have the advantage of no training overhead.
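The flow-fingerprinting idea behind systems like TagIt and FINN, as summarized above, can be illustrated by encoding fingerprint bits as small perturbations of inter-packet delays (IPDs). The encoding scheme here (add a small delay for a 1 bit, none for a 0) and all names are illustrative assumptions, not either system's actual design.

```python
def embed_fingerprint(ipds, bits, delta=0.005):
    """Return IPDs (seconds) with one fingerprint bit added per delay."""
    return [ipd + (delta if bit else 0.0) for ipd, bit in zip(ipds, bits)]

def extract_fingerprint(original, observed, delta=0.005):
    """Recover the bits by comparing observed IPDs against the originals."""
    return [1 if obs - orig >= delta / 2 else 0
            for orig, obs in zip(original, observed)]

ipds = [0.020, 0.031, 0.018, 0.025]   # original inter-packet delays
bits = [1, 0, 1, 1]                   # secret fingerprint
tagged = embed_fingerprint(ipds, bits)
print(extract_fingerprint(ipds, tagged))
```

Real systems must work without knowing the original IPDs and must survive network jitter; that is where the interval-based design of TagIt and the learned encodings of FINN come in.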
University of San Diego News Print Media Coverage 2006.07
Printed clippings housed in folders with a table of contents arranged by topic.