
    Size-Based Routing Policies: Non-Asymptotic Analysis and Design of Decentralized Systems

    Size-based routing policies are known to perform well when the variance of the job-size distribution is very high. We consider two size-based policies in this paper: Task Assignment with Guessing Size (TAGS) and Size Interval Task Assignment (SITA). The latter assumes that job sizes are known, whereas the former does not. In previous work, we showed that when the ratio of the largest to the shortest job tends to infinity and the system load is fixed and low, the mean waiting time of TAGS is at most twice that of SITA. In this article, we first analyze the ratio between the mean waiting time of TAGS and the mean waiting time of SITA in a non-asymptotic regime, and we show that for two servers, when the job size distribution is Bounded Pareto with parameter α=1, this ratio is unbounded from above. We then consider a system with an arbitrary number of servers and compare the mean waiting time of TAGS with that of Size Interval Task Assignment with Equal load (SITA-E), a SITA policy in which the loads of all servers are equal. We show that in the light-traffic regime, the performance ratio under consideration is unbounded from above (i) when the job size distribution is Bounded Pareto with parameter α=1 and the number of servers is arbitrary, and (ii) when job sizes are Bounded Pareto with α∈(0,2)\{1} and the number of servers tends to infinity. Finally, we use the results of our previous work to show how to design decentralized systems with quality-of-service constraints.

Josu Doncel has received funding from the Department of Education of the Basque Government through the Consolidated Research Group MATHMODE (IT1294-19), from the Marie Sklodowska-Curie grant agreement No 777778, and from the Spanish Ministry of Science and Innovation with reference PID2019-108111RB-I00 (FEDER/AEI). Eitan Bachmat's work was supported by the German Research Foundation (DFG) through the grant "Airplane Boarding" (JA 2311/3-1).
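As an illustration of the SITA-E policy described above, the sketch below numerically computes the size cutoff that equalizes the offered load between two servers for a Bounded Pareto job-size distribution. This is my own minimal sketch, not the paper's analysis; the function names are illustrative.

```python
import math

def bp_partial_mean(s, alpha, lo, hi):
    """E[X * 1{X <= s}] for a BoundedPareto(alpha, lo, hi) job size."""
    norm = alpha * lo**alpha / (1 - (lo / hi)**alpha)
    if alpha == 1:
        return norm * math.log(s / lo)
    return norm * (s**(1 - alpha) - lo**(1 - alpha)) / (1 - alpha)

def sita_e_cutoff(alpha, lo, hi, tol=1e-10):
    """Size cutoff splitting the offered load equally between two servers."""
    half = bp_partial_mean(hi, alpha, lo, hi) / 2
    a, b = lo, hi
    while b - a > tol * hi:         # bisection on the partial mean
        m = (a + b) / 2
        if bp_partial_mean(m, alpha, lo, hi) < half:
            a = m
        else:
            b = m
    return (a + b) / 2
```

For α=1 the equal-load condition reduces to ln(s/lo) = ln(hi/lo)/2, so the cutoff is √(lo·hi); e.g. with lo=1 and hi=10⁶ the search returns ≈1000.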

    Load Balancing with Job-Size Testing: Performance Improvement or Degradation?

    In the context of decision making under explorable uncertainty, scheduling with testing is a powerful technique used in the management of computer systems to improve performance via better job-dispatching decisions. Upon job arrival, a scheduler may run a \emph{testing algorithm} against the job to extract information about its structure, e.g., its size, and classify it accordingly. Acquiring this knowledge comes at a cost, because the testing algorithm delays the dispatching decisions, though this delay is under the scheduler's control. In this paper, we analyze the impact of this extra cost in a load balancing setting by investigating the following questions: does it really pay off to test jobs? If so, under which conditions? Under mild assumptions relating the information extracted by the testing algorithm to its running time, we show that whether scheduling with testing degrades or improves performance depends strongly on the traffic conditions, the system size, and the coefficient of variation of job sizes. The general answer to the above questions is therefore non-trivial, and some care is needed when deploying a testing policy. Our results are obtained from a load balancing model for scheduling with testing that we analyze in two limiting regimes. When the number of servers grows to infinity in proportion to the network demand, we show that job-size testing actually degrades performance unless short jobs can be predicted reliably almost instantaneously and the network load is sufficiently high. When the coefficient of variation of job sizes grows to infinity, we construct testing policies that induce an arbitrarily large performance gain with respect to running jobs untested.
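To make the trade-off concrete, here is a toy FCFS simulation of my own construction (not the paper's model) contrasting blind random dispatch with size-testing dispatch that pays a fixed testing delay before routing short and long jobs to dedicated server pools. All parameters and names are illustrative.

```python
import random

def simulate(sizes, k, lam, test_time=0.0, cutoff=None, n_short=0, seed=1):
    """Mean response time of k FCFS servers under two dispatch policies.

    cutoff=None : blind routing, uniform at random, no testing delay.
    otherwise   : each job is held test_time, then jobs with size <= cutoff
                  go to servers 0..n_short-1 and the rest to the remaining ones.
    """
    rng = random.Random(seed)
    free_at = [0.0] * k                           # next idle time per server
    t = total = 0.0
    for x in sizes:
        t += rng.expovariate(lam)                 # Poisson arrivals
        if cutoff is None:
            s, ready = rng.randrange(k), t
        else:
            ready = t + test_time                 # dispatch delayed by the test
            s = rng.randrange(n_short) if x <= cutoff else rng.randrange(n_short, k)
        start = max(ready, free_at[s])            # FCFS queueing at the server
        free_at[s] = start + x
        total += start + x - t                    # response = wait + test + service
    return total / len(sizes)
```

With a high-variability mix (e.g. 95% jobs of size 0.1, 5% of size 10) and moderate load, isolating short jobs typically outweighs the testing delay, matching the paper's message that the answer hinges on the job-size variability and the traffic conditions.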

    The effects of routing and scoring within a computer adaptive multi-stage framework

    This dissertation examined the overall effects of routing and scoring within a computer adaptive multi-stage testing framework (ca-MST). Testing in a ca-MST environment has become extremely popular in the testing industry. Testing companies value its efficiency benefits over traditional linear testing and its quality-control advantages over computer adaptive testing (CAT). Test takers appreciate being able to go back and change responses during review time before being assigned to the next module. Lord (1980) outlined a few salient characteristics that should be investigated before the implementation of multi-stage testing. Of these characteristics, decisions on routing mechanisms have received the least attention. This dissertation varied both item pool characteristics, such as the location of information, and ca-MST configuration characteristics, such as the configuration design (e.g., 1-3, 1-2-3, 1-2-3-4). The study aims to show that number-correct scoring can serve as a capable surrogate for IRT calibration at each routing step, and that even if a three-parameter scoring model is used for final scoring, the number-correct method will not misroute examinees compared to traditional methods.
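Number-correct routing of the kind discussed above can be sketched as a simple threshold lookup: an examinee's raw score on the current module is compared against ascending cut scores to pick the next-stage module. The cut scores below are hypothetical, not the dissertation's.

```python
def route(num_correct, thresholds):
    """Return the index of the next-stage module for a number-correct score.

    thresholds: ascending cut scores; scoring below thresholds[0] routes to
    the easiest module, at or above thresholds[-1] to the hardest.
    """
    for i, cut in enumerate(thresholds):
        if num_correct < cut:
            return i
    return len(thresholds)
```

In a 1-2-3 design, stage 3 with three modules would use two cut scores, e.g. `route(score, [10, 18])` mapping scores below 10, from 10 to 17, and 18 or above to modules 0, 1, and 2 respectively.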

    Load Balancing of Elastic Data Traffic in Heterogeneous Wireless Networks

    The increasing amount of mobile data traffic has resulted in an architectural innovation in cellular networks through the introduction of heterogeneous networks. In heterogeneous networks, the deployment of macrocells is accompanied by the use of low-power pico and femtocells (referred to as microcells) in hot spot areas inside the macrocell, which increase the data rate per unit area. The purpose of this thesis is to study the load balancing problem of elastic data traffic in heterogeneous wireless networks. These networks consist of different types of cells with different characteristics. Individual cells are modelled as M/G/1-PS queueing systems. This results in a multi-server queueing model consisting of a single macrocell with multiple microcells within its area. Both static and dynamic load balancing schemes are developed to balance the data flows between the macrocell and microcells so that the mean flow-level delay is minimized. Both analytical and numerical methods are used for static policies. For dynamic policies, the performance is evaluated by simulations. The results of the study indicate that all dynamic policies can significantly improve the flow-level delay performance in the system under consideration compared to the optimal static policy. The results also indicate that MJSQ and MP are the best policies, although MJSQ needs less state information. The performance gain of most of the dynamic policies is insensitive with respect to the flow size distribution. In addition, further experiments are conducted, such as studying the effect of increasing the number of microcells and the impact of the service rate difference between the macrocell and the microcells.
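The M/G/1-PS model underlying the static analysis has a convenient insensitivity property: the mean flow-level delay depends on the flow-size distribution only through its mean. An optimal static split between a macrocell and a single microcell can then be found by a one-dimensional search, as in this sketch under assumed units (my own construction, not the thesis's exact model):

```python
def ps_delay(lam, mean_size, capacity):
    """Mean flow-level delay in an M/G/1-PS queue; depends on the size
    distribution only through its mean (insensitivity)."""
    rho = lam * mean_size / capacity
    if rho >= 1:
        return float('inf')       # unstable split
    return (mean_size / capacity) / (1 - rho)

def best_static_split(lam, mean_size, c_macro, c_micro, grid=10001):
    """Fraction of flows sent to the microcell minimizing overall mean delay."""
    best_p, best_d = 0.0, float('inf')
    for i in range(grid):
        p = i / (grid - 1)        # grid search over the routing probability
        d = p * ps_delay(p * lam, mean_size, c_micro) + \
            (1 - p) * ps_delay((1 - p) * lam, mean_size, c_macro)
        if d < best_d:
            best_p, best_d = p, d
    return best_p, best_d
```

With equal capacities the optimal split is 1/2 by symmetry; with a faster microcell the optimizer shifts traffic toward it, which is the flavor of trade-off the thesis's static policies formalize.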

    Research on Immediate, Sustainable, and Scalable Edge Computing for Agile Disaster Management

    This dissertation addresses several problems in agile disaster management. Assuming the existing network infrastructure no longer works because of direct disaster damage or power outages, it introduces several emerging ICT technologies to build a next-generation disaster response system. The research consists of three parts. The first part concerns emergency networking after a disaster. I design an information-centric fog computing architecture to quickly build a temporary emergency network when the original one cannot be used, and I focus on name-based routing for disaster relief, applying ideas from the six-degrees-of-separation theory.
I first put forward a 2-tier information-centric fog network architecture for the post-disaster scenario. Then I model the relationships among ICN nodes based on delivered files and propose a name-based routing strategy to enable fast networking and emergency communication. I compare it with DNRP under the same experimental settings and show that my strategy achieves higher performance. The second part concerns efficiency optimization: I introduce edge caching to prolong the lifetime of the rebuilt network, focusing on improving the energy efficiency of edge caching through in-memory storage and processing. I build a 3-tier heterogeneous network structure and propose two edge caching methods using different TTL designs and cache replacement policies. Using total energy consumption and backhaul rate as metrics, I compare the in-memory caching method with a conventional method based on disk storage. The simulation results show that in-memory storage and processing save energy in edge caching and can take over a considerable share of the workload. The third part concerns coverage expansion: I apply UAV technology and real-time image recognition to user search and autonomous navigation, focusing on designing a navigation strategy based on airborne vision for UAV disaster relief. A survey of related work on UAV flight control in disaster management shows that, given current UAV manufacturing technology and the actual demands of unmanned search and rescue, a lightweight solution is urgently needed. I therefore design a lightweight navigation strategy based on visual recognition using transfer learning. In the simulation, I evaluate my solution using 1/150 miniature models and test the feasibility of the navigation strategy.
The results show that my design for visual recognition has the potential for a breakthrough in performance, and that the idea of lightweight UAV navigation can realize real-time flight adjustment based on feedback.

Muroran Institute of Technology (室蘭工業大学), Doctor of Engineering (博士(工学))
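As a rough illustration of combining a TTL design with a cache replacement policy at the edge (the dissertation's concrete designs may differ), here is a minimal in-memory cache with per-entry TTL and LRU eviction, using a logical clock for determinism:

```python
class TTLCache:
    """Tiny edge-cache sketch: fixed capacity, per-entry TTL, LRU eviction."""

    def __init__(self, capacity, ttl):
        self.capacity, self.ttl = capacity, ttl
        self.store = {}  # key -> (value, expires_at); dict insertion order
                         # doubles as the LRU order (oldest first)

    def get(self, key, now):
        entry = self.store.get(key)
        if entry is None or entry[1] <= now:
            self.store.pop(key, None)             # expired or missing
            return None
        self.store[key] = self.store.pop(key)     # move to MRU position
        return entry[0]

    def put(self, key, value, now):
        self.store.pop(key, None)                 # replace in place if present
        if len(self.store) >= self.capacity:
            self.store.pop(next(iter(self.store)))  # evict LRU entry
        self.store[key] = (value, now + self.ttl)
```

Expiry bounds how stale emergency content can get, while LRU keeps the limited in-memory budget focused on recently requested items.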

    TOKEN-BASED APPROACH FOR SCALABLE TEAM COORDINATION

    To form a cooperative multiagent team, autonomous agents are required to harmonize activities and make the best use of exclusive resources to achieve their common goal. In addition, to handle uncertainty and quickly respond to external environmental events, they should share knowledge and sensor information. Unlike small-team coordination, agents in a scalable team must limit the amount of their communication while maximizing team performance. Communication decisions are critical to scalable-team coordination because agents should target their communications, but these decisions cannot be supported by a precise model or by complete team knowledge.

    The hypothesis of my thesis is: local routing of tokens encapsulating discrete elements of control, based only on decentralized local probability decision models, will lead to efficient scalable coordination with several hundred agents. In my research, coordination controls, including all domain knowledge, tasks, and exclusive resources, are encapsulated into tokens. By passing tokens around, agents transfer the team controls encapsulated in the tokens. The team benefits when a token is passed to an agent who can make use of it, but communications incur costs. Hence, no single agent has sole responsibility over any shared decision. The key problem lies in how agents make the correct decisions to target communications and pass tokens so that they will benefit the team most while accounting for communication costs.

    My research on token-based coordination algorithms starts from an investigation of the random-walk movement of tokens. I found that with even a small increase in the probability that agents make the right decision when passing a token, the overall efficiency of token movement can be greatly enhanced. Moreover, modeling token movements as a Markov chain, I found that the efficiency of passing tokens varies significantly with the network topology.

    My token-based algorithm starts from an investigation of individual decision-theoretic agents. Although, under the uncertainty present in large multiagent teams, agents cannot act optimally, it is still feasible to build a probability model that lets each agent pass tokens rationally. Specifically, this decision only allows an agent to pass tokens over an associates network in which only a few team members are considered as token receivers.

    My proposed algorithm builds each agent's individual decision model from all of its previously received tokens. This model does not require complete knowledge of the team. The key idea is to exploit the domain relationships between pairs of coordination controls: previously received tokens help the receiver infer whether the sender would benefit the team if a related token were received. Therefore, each token is used to improve the routing of other tokens, leading to a dramatic performance improvement as more tokens are added. By exploring the relationships between different types of coordination controls, an integrated coordination algorithm is built, and an improvement in one aspect of coordination enhances the performance of the others.
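The observation that a small increase in the probability of a correct pass greatly improves token-movement efficiency can be illustrated with a biased random walk on a line of agents. This is a toy model of my own, not the thesis's algorithm: each agent forwards the token toward its target with probability q, otherwise to a uniformly random neighbor.

```python
import random

def hops_to_target(n, q, trials=2000, seed=0):
    """Average hops for a token to cross a line of n agents when each
    agent forwards it toward the target with probability q, and to a
    uniformly random neighbor otherwise (a biased random walk)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        pos, hops = 0, 0
        while pos != n - 1:
            if rng.random() < q or pos == 0:
                pos += 1                       # informed (or boundary) move
            else:
                pos += rng.choice((-1, 1))     # uninformed move
            hops += 1
        total += hops
    return total / trials
```

For a line of n agents the expected hop count scales roughly as (n-1)/q, so doubling the per-hop accuracy roughly halves the travel time; this is the random-walk effect the abstract refers to.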

    Real-Time Sensor Networks and Systems for the Industrial IoT

    The Industrial Internet of Things (Industrial IoT—IIoT) has emerged as the core construct behind the various cyber-physical systems constituting a principal dimension of the fourth Industrial Revolution. While initially born as the concept behind specific industrial applications of generic IoT technologies, for the optimization of operational efficiency in automation and control, it quickly enabled the achievement of the total convergence of Operational (OT) and Information Technologies (IT). The IIoT has now surpassed the traditional borders of automation and control functions in the process and manufacturing industry, shifting towards a wider domain of functions and industries, embraced under the dominant global initiatives and architectural frameworks of Industry 4.0 (or Industrie 4.0) in Germany, Industrial Internet in the US, Society 5.0 in Japan, and Made-in-China 2025 in China. As real-time embedded systems are quickly achieving ubiquity in everyday life and in industrial environments, and many processes already depend on real-time cyber-physical systems and embedded sensors, the integration of IoT with cognitive computing and real-time data exchange is essential for real-time analytics and realization of digital twins in smart environments and services under the various frameworks’ provisions. In this context, real-time sensor networks and systems for the Industrial IoT encompass multiple technologies and raise significant design, optimization, integration and exploitation challenges. The ten articles in this Special Issue describe advances in real-time sensor networks and systems that are significant enablers of the Industrial IoT paradigm. In the relevant landscape, the domain of wireless networking technologies is centrally positioned, as expected

    The Prom Problem: Fair and Privacy-Enhanced Matchmaking with Identity Linked Wishes

    In the Prom Problem (TPP), Alice wishes to attend a school dance with Bob and needs a risk-free, privacy-preserving way to find out whether Bob shares that same wish. If not, no one should know that she inquired about it, not even Bob. TPP represents a special class of matchmaking challenges, augmenting the properties of privacy-enhanced matchmaking by further requiring fairness and support for identity-linked wishes (ILW) – wishes involving specific identities that are only valid if all involved parties have those same wishes. The Horne-Nair (HN) protocol was proposed as a solution to TPP along with a sample pseudo-code embodiment leveraging an untrusted matchmaker. Neither identities nor pseudo-identities are included in any messages or stored in the matchmaker’s database. Privacy-relevant data stay within user control. A security analysis and proof-of-concept implementation validated the approach, fairness was quantified, and a feasibility analysis demonstrated practicality in real-world networks and systems, thereby bounding risk prior to incurring the full costs of development. The SecretMatch™ Prom app leverages one embodiment of the patented HN protocol to achieve privacy-enhanced and fair matchmaking with ILW. The endeavor led to practical lessons learned and recommendations for privacy engineering in an era of rapidly evolving privacy legislation. Next steps include design of SecretMatch™ apps for contexts like voting negotiations in legislative bodies and executive recruiting. The roadmap toward a quantum-resistant SecretMatch™ began with design of a Hybrid Post-Quantum Horne-Nair (HPQHN) protocol. Future directions include enhancements to HPQHN, a fully post-quantum HN protocol, and more.

    A Brave New World: Studies on the Deployment and Security of the Emerging IPv6 Internet.

    Recent IPv4 address exhaustion events are ushering in a new era of rapid transition to the next generation Internet protocol---IPv6. Via Internet-scale experiments and data analysis, this dissertation characterizes the adoption and security of the emerging IPv6 network. The work includes three studies, each the largest of its kind, examining various facets of the new network protocol's deployment, routing maturity, and security. The first study provides an analysis of ten years of IPv6 deployment data, including quantifying twelve metrics across ten global-scale datasets, and affording a holistic understanding of the state and recent progress of the IPv6 transition. Based on cross-dataset analysis of relative global adoption rates and across features of the protocol, we find evidence of a marked shift in the pace and nature of adoption in recent years and observe that higher-level metrics of adoption lag lower-level metrics. Next, a network telescope study covering the IPv6 address space of the majority of allocated networks provides insight into the early state of IPv6 routing. Our analyses suggest that routing of average IPv6 prefixes is less stable than that of IPv4. This instability is responsible for the majority of the captured misdirected IPv6 traffic. Observed dark (unallocated destination) IPv6 traffic shows substantial differences from the unwanted traffic seen in IPv4---in both character and scale. Finally, a third study examines the state of IPv6 network security policy. We tested a sample of 25 thousand routers and 520 thousand servers against sets of TCP and UDP ports commonly targeted by attackers. We found systemic discrepancies between intended security policy---as codified in IPv4---and deployed IPv6 policy. Such lapses in ensuring that the IPv6 network is properly managed and secured are leaving thousands of important devices more vulnerable to attack than before IPv6 was enabled. 
Taken together, findings from our three studies suggest that IPv6 has reached a level and pace of adoption, and shows patterns of use, that indicate serious production deployment of the protocol on a broad scale. However, weaker IPv6 routing and security are evident, and these are leaving early dual-stack networks less robust than the IPv4 networks they augment.

PhD, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/120689/1/jczyz_1.pd
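The discrepancy check in the third study amounts to a per-host comparison of which ports answer over each protocol; a minimal sketch of that set-difference analysis, with hypothetical field names:

```python
def policy_gaps(scan):
    """Ports reachable over IPv6 but filtered over IPv4, per dual-stack host.

    scan: {host: {'v4': set_of_open_ports, 'v6': set_of_open_ports}}
    Hosts whose IPv6 policy matches (or is stricter than) IPv4 are omitted.
    """
    return {host: sorted(results['v6'] - results['v4'])
            for host, results in scan.items()
            if results['v6'] - results['v4']}
```

A non-empty entry flags exactly the kind of lapse the study reports: intended policy codified in IPv4 that was never mirrored when IPv6 was enabled.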