
    Robustness and Randomness

    Robustness problems of computational geometry algorithms are a topic that has been subject to intensive research efforts from both the computer science and mathematics communities. Robustness problems are caused by the lack of precision in computations involving floating-point instead of real numbers. This paper reviews methods for dealing with robustness and inaccuracy problems. It discusses approaches based on exact arithmetic, interval arithmetic, and probabilistic methods. The paper investigates the possibility of using randomness at certain levels of reasoning to make geometric constructions more robust.
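The kind of failure the paper targets can be made concrete with the classic 2D orientation predicate. The sketch below is not taken from the paper; it merely illustrates the exact-arithmetic approach the paper reviews, comparing a plain floating-point evaluation with an exact rational one.

```python
# Illustrative sketch (not from the paper): the 2D orientation predicate
# evaluated in floating point can report the wrong sign for nearly collinear
# points, whereas exact rational arithmetic returns the true sign for the
# given coordinates.
from fractions import Fraction

def orient2d_float(a, b, c):
    # Sign of det([[b.x-a.x, b.y-a.y], [c.x-a.x, c.y-a.y]]) in floating point.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def orient2d_exact(a, b, c):
    # The same determinant, computed exactly over the rationals.
    a, b, c = [tuple(map(Fraction, p)) for p in (a, b, c)]
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

# Points collinear in the reals; their float representations are slightly off,
# so the float result is a tiny value whose sign depends on rounding, while
# the exact result is unambiguous for the stored coordinates.
a, b, c = (0.1, 0.2), (0.3, 0.6), (0.5, 1.0)
print(orient2d_float(a, b, c), orient2d_exact(a, b, c))
```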

    A reference architecture for archival systems with application to product models

    Nowadays, a major part of information is in digital form. Digital preservation is essential to allow people to access information over time. From a computer science perspective, two major objectives have to be met to enable digital preservation: developing archival systems to manage the preserved digital information, and selecting information representations that will facilitate the preservation. For complex information such as product models, these two objectives are particularly hard to meet. Archival systems have to operate in a complex environment, interact with many different systems, and support many different business functions. Product model representations do not use all the possibilities of computer interpretation.

    Regarding the development of archival systems, the key is to determine what has to be described to prove that the archival system can effectively support digital preservation. The Reference Model for an Open Archival Information System (OAIS) proposes a terminology to describe and compare archives. The Audit and Certification of Trustworthy Digital Repositories (ACTDR) provides criteria for the certification of archives. One issue with these efforts is that there is no guidance on how to use them within archival system descriptions.

    This thesis proposes a method called Reference Architecture for Archival Systems (RAAS) to describe archival system implementations. RAAS relies on the DoD Architecture Framework to describe the various aspects of archival systems. Moreover, RAAS provides an archival-specific terminology inspired by the OAIS Reference Model. RAAS also explains how the archival system description can help with ACTDR certification.

    RAAS is applied to a product model preservation case to describe the various aspects of the archival system. This description includes the interactions involving the archival system, the archival system functions, the definition of the preserved content, and the definition of the metadata. The description formally refers to the OAIS terminology and provides ACTDR certification evidence.

    This thesis also addresses the representation of product models by proposing the translation of product models from STEP to OWL. STEP is a standard for product model representation. The use of OWL enables semantic relationships to enrich product information and to improve the search and understanding of this information through data integration. The methodology used in this thesis can be applied to other types of information, such as medical records.
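As a rough illustration of the STEP-to-OWL idea mentioned above (not the thesis' actual mapping), the sketch below turns a hypothetical product entity and one of its attributes into OWL statements with rdflib; all names such as ex:Bolt and ex:hasDiameter are invented for the example.

```python
# Hedged sketch of translating a product-model entity into OWL: the entity
# becomes an OWL class with an individual, and an attribute becomes a datatype
# property, so generic semantic tooling can query and integrate the data.
# The names below (ex:Bolt, ex:Bolt_12, ex:hasDiameter) are hypothetical.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/product#")
g = Graph()
g.bind("ex", EX)

# A product class and one concrete part, as an OWL class and individual.
g.add((EX.Bolt, RDF.type, OWL.Class))
g.add((EX.Bolt_12, RDF.type, EX.Bolt))

# An attribute of the entity mapped to an OWL datatype property.
g.add((EX.hasDiameter, RDF.type, OWL.DatatypeProperty))
g.add((EX.hasDiameter, RDFS.domain, EX.Bolt))
g.add((EX.Bolt_12, EX.hasDiameter, Literal(8.0)))

print(g.serialize(format="turtle"))
```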

    Re-parameterization reduces irreducible geometric constraint systems

    You recklessly told your boss that solving a non-linear system of size n (n unknowns and n equations) requires a time proportional to n, as you were not very attentive during algorithmic complexity lectures. So now you have only one night to solve a problem of big size (e.g., 1000 equations/unknowns), otherwise you will be fired the next morning. The system is well-constrained and structurally irreducible: it does not contain any strictly smaller well-constrained subsystems. Its size is big, so the Newton–Raphson method is too slow and impractical. The most frustrating thing is that if you knew the values of a small number k << n of key unknowns, then the system would be reducible to small square subsystems and easily solved. You wonder if it would be possible to exploit this reducibility, even without knowing the values of these few key unknowns. This article shows that it is indeed possible. This is done at the lowest level, that of the linear algebra routines, so that numerous solvers (Newton–Raphson, homotopy, and also p-adic methods relying on Hensel lifting) widely used in geometric constraint solving and CAD applications can benefit from this decomposition with minor modifications. For instance, with k << n key unknowns, the cost of a Newton iteration becomes O(kn^2) instead of O(n^3). Several experiments showing a significant performance gain of our re-parameterization technique are reported in this paper to consolidate our theoretical findings and to motivate its practical usage for bigger systems.
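To make the complexity claim concrete, here is a plain Newton–Raphson loop (not the paper's re-parameterized solver): each iteration performs a dense linear solve, which is the O(n^3) step the paper reduces to O(kn^2) when only k << n key unknowns couple the otherwise decomposable subsystems.

```python
# Plain Newton-Raphson iteration; the dense n x n solve inside the loop is the
# O(n^3) bottleneck that the paper's re-parameterization addresses.
import numpy as np

def newton(f, jac, x0, tol=1e-10, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = f(x)
        if np.linalg.norm(r) < tol:
            break
        # O(n^3) dense solve: the dominant cost for large n.
        x = x - np.linalg.solve(jac(x), r)
    return x

# Tiny well-constrained example: intersection of a circle and a line.
f = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
jac = lambda x: np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])
print(newton(f, jac, [1.0, 0.0]))  # converges to ~(sqrt(2)/2, sqrt(2)/2)
```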

    An Arabic Text-to-Picture Mobile Learning System

    Handheld devices and software applications have the potential to improve learning effectiveness, awareness, and career development. Many mobile learning applications are available on the market, but the shortage of Arabic learning applications has not been addressed. We present an Arabic Text-to-Picture (TTP) mobile educational application that performs knowledge extraction and concept analysis to generate pictures representing the content of an Arabic text. The knowledge extraction is based on Arabic semantic models covering important scopes for young children and new Arabic learners (e.g., grammar, nature, animals). The concept analysis uses semantic reasoning, semantic rules, and an Arabic natural language processing (NLP) tool to identify word-to-word relationships. The retrieval of images is done automatically from a local repository and an online search engine (e.g., Google or Bing). The instructor can select the Arabic educational content, obtain semi-automatically generated pictures, and use them for explanation. Preliminary results show an improvement in Arabic learning strength and memorization.
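A deliberately simplified, hypothetical sketch of the pipeline described above: recognized concepts are mapped to pictures in a local repository, with a fallback to an online image search. The concept table, file paths, and search stub are invented; the real system relies on Arabic NLP and semantic models.

```python
# Hypothetical sketch of a text-to-picture lookup; every name and path below
# is invented for illustration.
LOCAL_REPOSITORY = {          # concept -> local image file (hypothetical)
    "cat": "images/cat.png",
    "tree": "images/tree.png",
}

def online_image_search(concept):
    # Placeholder for a call to an online search engine (e.g., Google or Bing).
    return f"https://example.org/search?q={concept}"

def text_to_pictures(tokens):
    """Return one picture reference per recognized concept."""
    pictures = {}
    for token in tokens:
        concept = token.lower()
        pictures[concept] = LOCAL_REPOSITORY.get(concept) or online_image_search(concept)
    return pictures

print(text_to_pictures(["cat", "tree", "river"]))
```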

    Fuzzy Distributed Genetic Approaches for Image Segmentation

    This paper presents a new image segmentation algorithm (called FDGA-Seg) based on a combination of fuzzy logic, multi-agent systems, and genetic algorithms. We propose to use a fuzzy representation of the image site labels by introducing some imprecision in the gray-tone values. The distributed nature of FDGA-Seg comes from the fact that it is designed around a MultiAgent System (MAS) working with two different architectures based on the master-slave and island models. A rich set of experimental segmentation results given by FDGA-Seg is discussed and compared to ICM results in the last section.
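The fuzzy label representation can be illustrated with a small sketch (not FDGA-Seg itself): instead of a hard label per pixel, each gray tone receives a degree of membership in every label class. The triangular membership functions and class centers below are illustrative choices, not the paper's.

```python
# Illustrative fuzzy labeling of gray tones; class centers and the triangular
# shape are assumptions made for this sketch.
import numpy as np

def triangular(x, center, width):
    # Membership decreases linearly from 1 at the center to 0 at center +/- width.
    return np.clip(1.0 - np.abs(x - center) / width, 0.0, 1.0)

label_centers = {"dark": 40, "medium": 128, "bright": 215}   # hypothetical

def fuzzy_labels(gray_value, width=90):
    memberships = {name: triangular(gray_value, c, width)
                   for name, c in label_centers.items()}
    total = sum(memberships.values())
    return {name: m / total for name, m in memberships.items()}  # normalize

print(fuzzy_labels(100))   # a pixel partly "dark", mostly "medium"
```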

    Adaptative Network Topology for Data Centers

    Data centers play an important role in supporting cloud computing services (such as email, social networking, and web search), enterprise computing needs, and infrastructure-based services. Data center networking is a research topic that aims at improving the overall performance of data centers. It is a topic of high interest and importance for both academia and industry. Several architectures such as FatTree, FiConn, DCell, BCube, and SprintNet have been proposed. However, these topologies try to improve scalability without any concern for the energy that data centers use or for the network infrastructure cost, which are critical parameters that impact the performance of data centers. In fact, companies suffer from the huge amount of energy their data centers use and from the network infrastructure cost, which is seen by operators as a key driver for maximizing data center profits; according to industry estimates, the United States data center market reached almost US$39 billion in 2009, growing from US$16.2 billion in 2005. Moreover, studies show that the installed base of servers has been increasing by 12 percent a year, from 14 million in 2000 to 35 million in 2008. Yet that growth is not keeping up with the demands placed on data centers for computing power and the amount of data they can handle. Almost 30 percent of respondents to a 2008 survey of data center managers said their centers would reach their capacity limits in three years or sooner. Infrastructure cost and power consumption are first-order design concerns for data center operators. In fact, they represent an important fraction of the initial capital investment while not contributing directly to future revenues. Thus, the design goals of data center architectures, as seen by operators, are high scalability, low latency, low average path length, and especially low energy consumption and low infrastructure cost (the number of interface cards, switches, and links). Motivated by these challenges, we propose a new data center architecture, called VacoNet, that combines the advantages of previous architectures while avoiding their limitations. VacoNet is a reliable, high-performance, and scalable data center topology that improves network performance in terms of average path length, network capacity, and network latency. In fact, VacoNet can connect more than 12 times the number of nodes in FlatNet without increasing the average path length. It also achieves a good network capacity even with a bottleneck effect (greater than 0.3 even for 1,000 servers). Furthermore, VacoNet reduces the infrastructure cost by about 50%, and its power consumption is more than 50,000 watts lower than that of all the previous architectures. In addition, thanks to the proposed fault-tolerant algorithm, the new architecture performs well even when the failure rate equals 0.3: when about one third of the links fail, the connection failure rate is only 15%. By using VacoNet, operators can save up to 2 million US dollars compared to FlatNet, DCell, BCube, and FatTree. Both theoretical analysis and simulation experiments have been conducted to evaluate and validate the overall performance of the proposed architecture.
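Topology metrics such as the average path length quoted above can be computed for any candidate graph; the short sketch below (not VacoNet itself) does so for a toy hypercube topology with networkx.

```python
# Compute basic topology metrics for a small stand-in data center graph.
# The 3-dimensional hypercube here is only a toy example, not VacoNet.
import networkx as nx

topology = nx.hypercube_graph(3)          # 8 nodes, toy example
apl = nx.average_shortest_path_length(topology)
print(f"nodes: {topology.number_of_nodes()}, "
      f"links: {topology.number_of_edges()}, APL: {apl:.2f}")
```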

    On Dependability Traffic Load and Energy Consumption Tradeoff in Data Center Networks

    Mega data centers (DCs) are considered efficient and promising infrastructures for supporting numerous cloud computing services such as online office suites, online social networking, web search, and IT infrastructure outsourcing. The scalability of these services is influenced by the performance and dependability characteristics of the DCs. Consequently, DC networks are constructed with a large number of network devices and links in order to achieve high performance and reliability. As a result, these requirements increase the energy consumption in DCs. In fact, in 2010, the total energy consumed by DCs was projected to reach about 120 billion kilowatt-hours of electricity by 2012, which is about 2.8% of the total electricity bill in the USA. According to industry estimates, the USA data center market reached almost US$39 billion in 2009, growing from US$16.2 billion in 2005. One of the primary reasons behind this issue is that all the links and devices are always powered on regardless of the traffic status. Statistics show that the traffic varies drastically, especially between mornings and nights, and between working days and weekends. Thus, the network utilization depends on the time period, and generally the peak capacity of the network is reached only at rush times. This non-proportionality between traffic load and energy consumption is caused by the fact that, most of the time, only a subset of the network devices and links is enough to forward the data packets to their destinations, while the remaining idle nodes are simply wasting energy. Such observations inspired us to propose a new approach that powers off the unused links by deactivating the end-ports of each one of them to save energy. The deactivation of ports has been proposed in many studies. However, these solutions suffer from high computational complexity, increased network delay, and reduced network reliability. In this paper, we propose a new approach to reduce the power consumption in DCs. By exploiting the correlation in time of the network traffic, the proposed approach uses the traffic matrix of the current network state and manages the state of switch ports (on/off) at the beginning of each period, while making sure to keep the data center fully connected. During the rest of each time period, the network must be able to forward its traffic through the active ports. The decision to power a port on or off depends on a predefined threshold value: a port is powered off only if the sum of the traffic generated by its connected node is less than the threshold. We also investigate the minimum period of time during which a port should not change its status; this minimum period is necessary given that it takes time and energy to switch a port on and off. One of the major challenges in this work is powering off idle devices to save more energy while guaranteeing the connectivity of each server, so we propose a new traffic-aware algorithm that offers a tradeoff between energy saving and reliability. For instance, in HyperFlatNet, simulation results show that the proposed approach reduces the energy consumption by 1.8*10^4 WU (watts per unit of time) for a correlated network with 1,000 servers (38% energy saving). In addition, thanks to the proposed traffic-aware algorithm, the new approach performs well even in the case of a high failure rate (up to 30%): when one third of the links fail, the connection failure rate is only 0.7%. Both theoretical analysis and simulation experiments are conducted to evaluate and verify the performance of the proposed approach compared to state-of-the-art techniques.
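A simplified sketch of the threshold rule described above (not the paper's full algorithm): a link's end-ports are powered off only if its traffic is below the threshold and removing the link keeps the network connected. The topology and traffic values are illustrative.

```python
# Threshold-based port deactivation with a connectivity guard; the ring
# topology and traffic figures are made up for this sketch.
import networkx as nx

def ports_to_power_off(g, traffic, threshold):
    """Return links whose end-ports can be switched off for this period."""
    active = g.copy()
    off = []
    # Consider the least-loaded links first.
    for u, v in sorted(g.edges, key=lambda e: traffic.get(e, 0.0)):
        if traffic.get((u, v), 0.0) < threshold:
            active.remove_edge(u, v)
            if nx.is_connected(active):
                off.append((u, v))          # safe to deactivate both ports
            else:
                active.add_edge(u, v)       # needed for connectivity, keep it
    return off

g = nx.cycle_graph(6)                        # toy ring of 6 switches
traffic = {e: t for e, t in zip(g.edges, [5, 1, 9, 2, 7, 1])}
print(ports_to_power_off(g, traffic, threshold=3))
```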

    Sparsity-aware Multiple Relay Selection In Large Decode-and-forward Relay Networks

    Cooperative communication is a promising technology that has attracted significant attention recently thanks to its ability to achieve spatial diversity in wireless networks with only single-antenna nodes. The different nodes of a cooperative system can share their resources so that a virtual Multiple Input Multiple Output (MIMO) system is created, which leads to spatial diversity gains. To exploit this diversity, a variety of cooperative protocols have been proposed in the literature under different design criteria and channel information availability assumptions. Among these protocols, two of the most widely used are the amplify-and-forward (AF) and decode-and-forward (DF) protocols. However, in large-scale relay networks, the relay selection process becomes highly complex. In fact, in many applications such as device-to-device (D2D) communication networks and wireless sensor networks, a large number of cooperating nodes are used, which leads to a dramatic increase in the complexity of the relay selection process. To solve this problem, the sparsity of the relay selection vector has been exploited to reduce the multiple relay selection complexity for large AF cooperative networks while also improving the bit error rate performance. In this work, we extend the study from AF to large-scale decode-and-forward (DF) relay networks. Based on exploiting the sparsity of the relay selection vector, we propose and compare two different techniques (referred to as T1 and T2) that aim to improve the performance of multiple relay selection in large-scale decode-and-forward relay networks. In fact, when only a few relays are selected from a large number of relays, the relay selection vector becomes sparse. Hence, utilizing recent advances in sparse signal recovery theory, we propose to use different signal recovery algorithms such as Orthogonal Matching Pursuit (OMP) to solve the relay selection problem. Our theoretical and simulation results demonstrate that the two proposed sparsity-aware relay selection techniques are able to improve the outage performance and reduce the computational complexity at the same time compared with the conventional exhaustive search (ES) technique. In fact, compared to the ES technique, T1 reduces the selection complexity by O(K^2 N) (where N is the number of relays and K is the number of selected relays) while outperforming it in terms of outage probability irrespective of the relays' positions. Technique T2 yields a higher outage probability than T1 but further reduces the complexity, offering a compromise between complexity and outage performance. The best selection threshold for T2 is also theoretically derived and validated by simulations, which enables T2 to also improve the outage probability compared with the ES technique.
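For readers unfamiliar with OMP, the bare-bones routine below shows how a K-sparse selection vector (K selected relays out of N) can be recovered greedily instead of by exhaustive search. It is a generic sketch, not the paper's T1 or T2; the dictionary A and measurement y are random stand-ins for the channel-dependent quantities used in the paper.

```python
# Minimal orthogonal matching pursuit (OMP) for a k-sparse selection vector.
# A and y below are random placeholders, not the paper's actual model.
import numpy as np

def omp(A, y, k):
    """Greedy recovery of a k-sparse x with A @ x ~= y."""
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # Re-fit on the current support and update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = y - A @ x
    return x, sorted(support)

rng = np.random.default_rng(0)
N, K = 50, 3                                  # 50 relays, select 3
A = rng.standard_normal((30, N))
true_support = rng.choice(N, K, replace=False)
y = A[:, true_support] @ rng.standard_normal(K)
_, selected = omp(A, y, K)
print("selected relays:", selected, "true:", sorted(true_support))
```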

    Towards a better integration of modelers and black box constraint solvers within the Product Design Process

    This paper presents a new way of interaction between modelers and solvers to support the Product Development Process (PDP). The proposed approach extends the functionalities and the power of the solvers by taking into account procedural constraints. A procedural constraint requires calling a procedure or a function of the modeler. This procedure performs a series of actions and geometric computations in a certain order. The modeler calls the solver to solve a main problem, the solver calls the modeler's procedures, and, similarly, procedures of the modeler can call the solver to solve sub-problems. The features, specificities, advantages, and drawbacks of the proposed approach are presented and discussed. Several examples are also provided to illustrate this approach.
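A toy sketch of this modeler/solver interaction, with made-up modeler procedures: one constraint is procedural in that its residual is only available by calling a modeler routine, while scipy's fsolve plays the role of the black box solver.

```python
# Illustrative only: modeler_offset_curve_y is a hypothetical modeler
# procedure; in the paper's setting it could itself call the solver on a
# sub-problem.
import numpy as np
from scipy.optimize import fsolve

def modeler_offset_curve_y(x):
    # Stand-in for a geometric computation performed by the modeler that the
    # solver cannot express algebraically.
    return np.sin(x) + 0.5

def residuals(v):
    x, y = v
    algebraic = x + y - 2.0                      # ordinary algebraic constraint
    procedural = y - modeler_offset_curve_y(x)   # requires calling the modeler
    return [algebraic, procedural]

solution = fsolve(residuals, x0=[1.0, 1.0])
print(solution, residuals(solution))
```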

    Exploiting Sparsity in Amplify-and-Forward Broadband Multiple Relay Selection

    Cooperative communication has attracted significant attention in the last decade due to its ability to increase the spatial diversity order with only single-antenna nodes. However, most of the techniques in the literature are not suitable for large cooperative networks such as device-to-device and wireless sensor networks, which are composed of a massive number of active devices that significantly increase the relay selection complexity. Therefore, to solve this problem and enhance the spatial and frequency diversity orders of large amplify-and-forward cooperative communication networks, in this paper we develop three multiple relay selection and distributed beamforming techniques that exploit sparse signal recovery theory to process the subcarriers using the low-complexity orthogonal matching pursuit (OMP) algorithm. In particular, by separating all the subcarriers or some subcarrier groups from each other and by optimizing the selection and beamforming vector(s) using the OMP algorithm, a higher level of frequency diversity can be achieved. This increased diversity order allows the proposed techniques to outperform existing techniques in terms of bit error rate at a lower computational complexity. A detailed performance-complexity tradeoff analysis, as well as Monte Carlo simulations, is presented to quantify the performance and efficiency of the proposed techniques. This publication was made possible by NPRP grant 8-627-2-260 and NPRP grant 6-070-2-024 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors.
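The per-group idea can be sketched structurally as follows (this is not the paper's algorithm): partition the subcarriers into groups and run the relay selection independently on each group, so different groups may select different relays, which is the source of the extra frequency diversity. A simple "pick the K strongest" rule stands in for the OMP-based selection.

```python
# Structural sketch of per-subcarrier-group relay selection; the gains matrix
# and the selection rule are stand-ins, not the paper's model or algorithm.
import numpy as np

rng = np.random.default_rng(1)
n_subcarriers, n_relays, K, n_groups = 16, 20, 2, 4
# Hypothetical per-subcarrier relay "quality" scores (e.g., channel gains).
gains = np.abs(rng.standard_normal((n_subcarriers, n_relays)))

def select_relays(block, k):
    # Stand-in selection rule: relays with the largest aggregate gain.
    return sorted(np.argsort(block.sum(axis=0))[-k:])

whole_band = select_relays(gains, K)                       # one set for all subcarriers
per_group = [select_relays(g, K) for g in np.split(gains, n_groups)]
print("whole band:", whole_band)
print("per group: ", per_group)
```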