
    A Remote Capacity Utilization Estimator for WLANs

    In WLANs, the capacity of a node is not fixed and can vary dramatically due to the shared nature of the medium under the IEEE 802.11 MAC mechanism. There are two main approaches to capacity estimation in WLANs: active methods, based upon probing packets, which consume channel bandwidth and do not scale well; and passive methods, based upon analysing transmitted packets, which avoid the overhead of probe traffic and achieve greater accuracy. Furthermore, passive methods can be implemented locally or remotely. Local passive methods require an additional dissemination mechanism to communicate the capacity information to other network nodes, which adds complexity and can be unreliable under adverse network conditions. Remote passive methods, on the other hand, require no dissemination mechanism, so they are simpler to implement and do not suffer from such communication reliability issues. Many applications (e.g. ANDSF) can benefit from this capacity information. Therefore, in this thesis we propose a new remote passive Capacity Utilization estimator performed by neighbour nodes. However, there is an error associated with the measurements owing to differences in the wireless medium as observed from the different nodes' locations, and the main undertaking of this thesis is to address this issue. An error model is developed to analyse the main sources of error and to determine their impact on the accuracy of the estimator. Arising from this model, a number of modifications are implemented to improve the accuracy of the estimator. The network simulator ns-2 is used to investigate the performance of the estimator, and the results from a range of different test scenarios indicate its feasibility and accuracy as a passive remote method. Finally, the estimator is deployed in a node saturation detection scheme, where it is shown to outperform two other similar schemes based upon queue observation and probing with ping packets.
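
    The estimation principle lends itself to a compact illustration. The following Python sketch shows one way a neighbour node could passively derive a target node's capacity utilization from overheard frame airtimes; the Frame type, the window accounting, and the utilization formula are illustrative assumptions, not the estimator actually developed in the thesis.

```python
# A minimal sketch, assuming overheard frames carry a source address and a
# measured time-on-air; names and formula are illustrative only.
from dataclasses import dataclass

@dataclass
class Frame:
    src: str           # transmitter address as seen by the neighbour
    airtime_us: float  # time on air in microseconds, incl. MAC/PHY overheads

def estimate_utilization(frames, target, window_us):
    """Fraction of the airtime available to `target` that it actually used."""
    target_busy = sum(f.airtime_us for f in frames if f.src == target)
    total_busy = sum(f.airtime_us for f in frames)
    idle = max(window_us - total_busy, 0.0)
    # The target could have used its own airtime plus any idle time;
    # utilization is its share of that available airtime.
    available = target_busy + idle
    return target_busy / available if available > 0 else 0.0

# Example: in a 1 s window the target transmitted for 300 ms while another
# node occupied 500 ms, leaving 200 ms idle -> utilization 0.6.
frames = [Frame("aa:bb", 300_000), Frame("cc:dd", 500_000)]
print(estimate_utilization(frames, "aa:bb", 1_000_000))
```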

    Proceedings of the 2004 ONR Decision-Support Workshop Series: Interoperability

    In August of 1998 the Collaborative Agent Design Research Center (CADRC) of the California Polytechnic State University in San Luis Obispo (Cal Poly) approached Dr. Phillip Abraham of the Office of Naval Research (ONR) with the proposal for an annual workshop focusing on emerging concepts in decision-support systems for military applications. The proposal was considered timely by the ONR Logistics Program Office for at least two reasons. First, rapid advances in information systems technology over the past decade had produced distributed collaborative computer-assistance capabilities with profound potential for providing meaningful support to military decision makers. Indeed, some systems based on these new capabilities, such as the Integrated Marine Multi-Agent Command and Control System (IMMACCS) and the Integrated Computerized Deployment System (ICODES), had already reached the field-testing and final product stages, respectively. Second, over the past two decades the US Navy and Marine Corps had been increasingly challenged by missions demanding the rapid deployment of forces into hostile or devastated territories with minimal or non-existent indigenous support capabilities. Under these conditions Marine Corps forces had to rely mostly, if not entirely, on sea-based support and sustainment operations. Particularly today, operational strategies such as Operational Maneuver From The Sea (OMFTS) and Ship-To-Objective Maneuver (STOM) are very much in need of intelligent, near real-time and adaptive decision-support tools to assist military commanders and their staff under conditions of rapid change and overwhelming data loads. In the light of these developments the Logistics Program Office of ONR considered it timely to provide an annual forum for the interchange of ideas, needs and concepts that would address the decision-support requirements and opportunities in combined Navy and Marine Corps sea-based warfare and humanitarian relief operations. The first ONR Workshop was held April 20-22, 1999 at the Embassy Suites Hotel in San Luis Obispo, California. It focused on advances in technology, with particular emphasis on an emerging family of powerful computer-based tools, and concluded that the most able members of this family of tools appear to be computer-based agents that are capable of communicating within a virtual environment of the real world. From 2001 onward the venue of the Workshop moved from the West Coast to Washington, and in 2003 the sponsorship was taken over by ONR’s Littoral Combat/Power Projection (FNC) Program Office (Program Manager: Mr. Barry Blumenthal). Themes and keynote speakers of past Workshops have included:
    1999: ‘Collaborative Decision Making Tools’ - Vadm Jerry Tuttle (USN Ret.); LtGen Paul Van Riper (USMC Ret.); Radm Leland Kollmorgen (USN Ret.); and Dr. Gary Klein (Klein Associates)
    2000: ‘The Human-Computer Partnership in Decision-Support’ - Dr. Ronald DeMarco (Associate Technical Director, ONR); Radm Charles Munns; Col Robert Schmidle; and Col Ray Cole (USMC Ret.)
    2001: ‘Continuing the Revolution in Military Affairs’ - Mr. Andrew Marshall (Director, Office of Net Assessment, OSD); and Radm Jay M. Cohen (Chief of Naval Research, ONR)
    2002: ‘Transformation ...’ - Vadm Jerry Tuttle (USN Ret.); and Steve Cooper (CIO, Office of Homeland Security)
    2003: ‘Developing the New Infostructure’ - Richard P. Lee (Assistant Deputy Under Secretary, OSD); and Michael O’Neil (Boeing)
    2004: ‘Interoperability’ - MajGen Bradley M. Lott (USMC), Deputy Commanding General, Marine Corps Combat Development Command; Donald Diggs, Director, C2 Policy, OASD (NII)

    Online learning on the programmable dataplane

    This thesis makes the case for managing computer networks with data-driven methods (automated statistical inference and control based on measurement data and runtime observations) and argues for their tight integration with programmable dataplane hardware to make management decisions faster and from more precise data. Optimisation, defence, and measurement of networked infrastructure are each challenging tasks in their own right, currently dominated by hand-crafted heuristic methods. These become harder to reason about and deploy as networks scale in rates and number of forwarding elements, while their design requires expert knowledge and care around unexpected protocol interactions. This makes tailored, per-deployment or per-workload solutions infeasible to develop. Recent advances in machine learning offer capable function approximation and closed-loop control which suit many of these tasks. New, programmable dataplane hardware enables more agility in the network: runtime reprogrammability, precise traffic measurement, and low-latency on-path processing. The synthesis of these two developments allows complex decisions to be made on previously unusable state, and made quicker by offloading inference to the network. To justify this argument, I advance the state of the art in data-driven defence of networks, in novel dataplane-friendly online reinforcement learning algorithms, and in in-network data reduction to allow classification of switch-scale data. Each requires co-design aware of the network and of the failure modes of systems and carried traffic. To make online learning possible in the dataplane, I use fixed-point arithmetic and modify classical (non-neural) approaches to take advantage of the SmartNIC compute model and make use of rich device-local state. I show that data-driven solutions still require great care to design correctly, but with the right domain expertise they can improve on pathological cases in DDoS defence, such as protecting legitimate UDP traffic. In-network aggregation to histograms is shown to enable accurate classification from fine temporal effects, and allows hosts to scale such classification to far larger flow counts and traffic volumes. Moving reinforcement learning to the dataplane is shown to offer substantial benefits in state-action latency and online learning throughput versus host machines, allowing policies to react faster to fine-grained network events. The dataplane environment is key in making reactive online learning feasible; to port further algorithms and learnt functions, I collate and analyse the strengths of current and future hardware designs, as well as individual algorithms.
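
    As a concrete illustration of the fixed-point, non-neural approach mentioned above, the sketch below performs one Q-learning update using only integer arithmetic, as a SmartNIC or switch pipeline would require. The Q16.16 format, table shape, and hyperparameters are assumptions for illustration; the thesis's actual algorithms and memory layouts are not reproduced here.

```python
# A minimal sketch of an integer-only (fixed-point) Q-learning step.
FRAC_BITS = 16
ONE = 1 << FRAC_BITS               # 1.0 in Q16.16 fixed point

def to_fx(x: float) -> int:
    """Convert a float to Q16.16 (host-side only; the dataplane sees ints)."""
    return int(round(x * ONE))

def fx_mul(a: int, b: int) -> int:
    """Fixed-point multiply: full-width product, then rescale."""
    return (a * b) >> FRAC_BITS

def q_update(q, s, a, r_fx, s_next, alpha_fx, gamma_fx):
    """Integer-only TD step: Q[s][a] += alpha * (r + gamma * max Q[s'] - Q[s][a])."""
    td_target = r_fx + fx_mul(gamma_fx, max(q[s_next]))
    q[s][a] += fx_mul(alpha_fx, td_target - q[s][a])

# Example: two states x two actions, alpha = 0.1, gamma = 0.9.
q = [[0, 0], [0, 0]]
q_update(q, 0, 1, to_fx(1.0), 1, to_fx(0.1), to_fx(0.9))
print(q[0][1] / ONE)  # ~0.1: the reward has begun to propagate
```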

    WSN based sensing model for smart crowd movement with identification: a conceptual model

    With the advancement of IT and the increase in the world population, Crowd Management (CM) has become a subject of intense study among researchers. Technology provides fast, easily available means of transport and up-to-date information access, which draws crowds to public places. This poses a big challenge for crowd safety and security at public places such as airports, railway stations and checkpoints. An example is the crowd of pilgrims during Hajj and Umrah crossing the borders of Makkah, Kingdom of Saudi Arabia. To minimize such risks to crowd safety and security, identification and verification of people is necessary, but it adds unwanted processing time. Managing a crowd during a specific time period (Hajj and Umrah) with identification and verification is therefore a challenge. At present, many advanced technologies such as the Internet of Things (IoT) are being used to address the crowd management problem with minimal processing time. In this paper, we present a Wireless Sensor Network (WSN) based conceptual model for smart crowd movement with minimal processing time for people identification. The model handles the crowd by forming groups and provides proactive support to manage them in an organized manner. As a result, the crowd can be moved safely from one place to another with group identification. Group identification minimizes the processing time and moves the crowd in a smart way.
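
    To make the group-identification idea concrete, here is a minimal Python sketch in which a checkpoint verifies one token per group instead of each individual, which is the mechanism by which the model reduces processing time. The token scheme, names, and registry structure are hypothetical, not taken from the paper.

```python
# A minimal sketch: one lookup per group rather than one per person.
# The SHA-256 token derivation and the in-memory registry are assumptions.
import hashlib

def group_token(member_ids):
    """Derive a single verifiable token for a registered group of pilgrims."""
    digest = hashlib.sha256("|".join(sorted(member_ids)).encode())
    return digest.hexdigest()[:16]

def checkpoint_pass(registry, token):
    """A checkpoint verifies one token per group rather than each member."""
    return token in registry

# Registration: groups are formed once and enrolled with the WSN backend.
groups = {"g1": ["p001", "p002", "p003"], "g2": ["p004", "p005"]}
registry = {group_token(members) for members in groups.values()}

# At a checkpoint, one lookup clears a whole group of members.
print(checkpoint_pass(registry, group_token(groups["g1"])))  # True
```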

Networks, complexity and internet regulation: scale-free law

    This book starts with a general statement: that regulators should try, wherever possible, to use the methodological tools of the physical sciences presently available in order to draft better legislation. While such an assertion may be applied to the law in general, this work will concentrate on the much narrower area of Internet regulation and the science of complex networks. The Internet is the subject of this book not only because it is my main area of research, but also because (without over-emphasising the importance of the Internet to everyday life) one cannot deny that the growth and popularisation of the global communications network has had a tremendous impact on the way in which we interact with one another. The Internet is, however, just one of many interactive networks. One way of looking at the complex and chaotic nature of society is to see it as a collection of different nodes of interaction. Humans are constantly surrounded by networks: the social network, the financial network, the transport network, the telecommunications network and even the network of our own bodies. Understanding how these systems operate and interact with one another has been the realm of physicists, economists, biologists and mathematicians. Until recently, the study of networks has been mainly theoretical and academic, because it is difficult to gather data about large and complex systems that is sufficiently reliable to support proper empirical application. In recent years, though, the Internet has given researchers the opportunity to study and test the mathematical descriptions of these vast complex systems. The growth rate and structure of cyberspace has allowed researchers to map and test several previously unproven theories about how links and hubs within networks interact with one another. The Web now provides the means with which to test the organisational structures, architecture and growth of networks, and even permits some limited prediction about their behaviour, strengths and vulnerabilities. The main objective of this book is first and foremost to serve as an introduction for the wider legal audience to some of the theories of complexity and networks. The second objective is more ambitious. By looking at the application of complexity theory and network science in various areas of Internet regulation, it is hoped that there will be enough evidence to postulate a theory of Internet regulation based on network science. To achieve these two goals, Chapter 2 will look in detail at the science of complex networks to set the stage for the legal and regulatory arguments to follow. With the increase in reliability of the descriptive (and sometimes predictive) nature of network science, a logical next step for legal scholars is to look at the legal implications of the characteristics of networks. Chapter 3 highlights the efforts of academics and practitioners who have started to find potential uses for network science tools. Chapter 4 takes this idea further, and explores how network theory can shape Internet regulation. The following chapters analyse the potential application of the tools described in the previous chapters, applying complexity theory to specific areas of study related to Internet law. Chapter 5 deals with the subject of copyright in the digital world. Chapter 6 explores the issue of peer-production and user-generated content using network science as an analytical framework. Chapter 7 finishes the evidence section of the work by studying the impact of network architecture in the field of cybercrime, and asks whether the existing architecture hinders or assists efforts to tackle those problems. It is clear that these are very disparate areas of study. It is not the intention of this book to be overreaching in its scope, although I am mindful that it covers a lot of ground and attempts to study and describe some disciplines that fall outside of my intellectual comfort zone. While the focus of the work is the Internet, its applications may extend beyond mere electronic bits. Without trying to be over-ambitious, it is my strong belief that legal scholarship has been neglectful in that it has been slow to respond to the wealth of research into complexity. That is not to say that there has been no legal research on the topic, but it would seem that lawyers, legislators and policy-makers are reluctant to consider technical solutions to legal problems. It is hoped, then, that this work will serve as a stepping stone that will lead to new interest in some of the theories that I describe.
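
    For readers new to the network science invoked throughout the book, the sketch below implements the preferential-attachment growth process that generates the scale-free topologies of the title: nodes that already have many links attract new ones, producing the hubs discussed in Chapter 2. The parameters and the simplified degree-proportional sampling are illustrative assumptions, not drawn from the book.

```python
# A minimal sketch of Barabasi-Albert preferential attachment.
import random

def barabasi_albert(n, m, seed=42):
    """Grow an n-node graph where each new node attaches to m existing
    nodes chosen with probability proportional to their current degree."""
    rng = random.Random(seed)
    edges = []
    endpoints = []            # node ids, repeated once per incident edge
    targets = list(range(m))  # new nodes first attach to the m seed nodes
    for new in range(m, n):
        for t in set(targets):            # dedupe repeated draws
            edges.append((new, t))
            endpoints.extend([new, t])
        # Degree-proportional choice: sample uniformly from edge endpoints.
        targets = [rng.choice(endpoints) for _ in range(m)]
    return edges

edges = barabasi_albert(1000, 2)
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1
print(max(degree.values()))  # a few hubs dominate: the scale-free signature
```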

    Urban Informatics

    This open access book is the first to systematically introduce the principles of urban informatics and its application to every aspect of the city that involves its functioning, control, management, and future planning. It introduces new models and tools being developed to understand and implement these technologies that enable cities to function more efficiently – to become ‘smart’ and ‘sustainable’. The smart city has quickly emerged as computers have become ever smaller to the point where they can be embedded into the very fabric of the city, as well as being central to new ways in which the population can communicate and act. When cities are wired in this way, they have the potential to become sentient and responsive, generating massive streams of ‘big’ data in real time as well as providing immense opportunities for extracting new forms of urban data through crowdsourcing. This book offers a comprehensive review of the methods that form the core of urban informatics, from various kinds of urban remote sensing to new approaches to machine learning and statistical modelling. It provides a detailed technical introduction to the wide array of tools information scientists need to develop the key urban analytics that are fundamental to learning about the smart city, and it outlines ways in which these tools can be used to inform design and policy so that cities can become more efficient with a greater concern for environment and equity.
