
    Nonlinear normal modes and spectral submanifolds: Existence, uniqueness and use in model reduction

    We propose a unified approach to nonlinear modal analysis in dissipative oscillatory systems. This approach eliminates conflicting definitions, covers both autonomous and time-dependent systems, and provides exact mathematical existence, uniqueness and robustness results. In this setting, a nonlinear normal mode (NNM) is a set filled with small-amplitude recurrent motions: a fixed point, a periodic orbit or the closure of a quasiperiodic orbit. In contrast, a spectral submanifold (SSM) is an invariant manifold asymptotic to an NNM, serving as the smoothest nonlinear continuation of a spectral subspace of the linearized system along the NNM. The existence and uniqueness of SSMs turn out to depend on a spectral quotient computed from the real part of the spectrum of the linearized system. This quotient may well be large even for small dissipation; thus the inclusion of damping is essential for firm conclusions about NNMs, SSMs and the reduced-order models they yield. (To appear in Nonlinear Dynamics.)

    C# and the .NET framework: ready for real time

    © 2003 IEEE. Microsoft's integrated development environment, Visual Studio .NET, includes a new programming language, C# (pronounced "C sharp"), which targets the .NET Framework. Both the .NET Framework and C# are fairly well-documented technologies, but the platform's appropriateness for real-time systems has not received much attention. Microsoft doesn't specifically claim that C# and .NET are intended for real-time systems, but many of the platform's general-purpose features, including type-unsafe features, thread synchronization, and overflow-sensitive arithmetic, apply to real-time systems. This article explores C# and the .NET Framework's suitability for real-time systems. Practitioners categorize real-time systems as hard, firm, and soft. Hard real-time systems include those in which a single missed deadline might cause catastrophic repercussions. Firm real-time systems tolerate one or more missed deadlines without catastrophic repercussions. In soft real-time systems, missed deadlines only result in performance degradation. Bart Van Beneden says, "All too often, real-time behavior is associated with raw speed. Popular conclusions are that the faster a system responds or processes data, the more real-time it is." However, these conclusions are incorrect. Real-time systems must foremost address schedulability and determinism, especially under load. Schedulability indicates a system's ability to satisfy all deadlines. Determinism lets an observer predict the system's next state at any time given its current state and a set of inputs. Real-time Java systems have been studied extensively. When examining C# and .NET for real-time systems, you should also note the characteristics of the underlying platform, which primarily means Microsoft operating systems.
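The schedulability notion above is often checked with closed-form utilization tests. A minimal sketch (not from the article) of the classic Liu and Layland bound for rate-monotonic scheduling of periodic tasks; the task parameters below are hypothetical:

```python
def rm_schedulable(tasks):
    """tasks: list of (worst_case_execution_time, period) pairs.
    Returns True if total utilization is under the Liu & Layland
    bound n * (2**(1/n) - 1), a sufficient (not necessary) condition
    for rate-monotonic schedulability."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return utilization <= bound

# Three hypothetical periodic tasks: (execution time, period) in ms
tasks = [(1, 4), (1, 5), (2, 10)]     # U = 0.65, bound ≈ 0.78
print(rm_schedulable(tasks))
```

Because the test is only sufficient, a task set that fails it may still be schedulable; an exact response-time analysis would then be needed.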

    Cashless Societies and the Rise of the Independent Cryptocurrencies: How Governments Can Use Privacy Laws to Compete with Independent Cryptocurrencies

    Many individuals (including governments) envision a future world in which physical currency is a thing of the past. Many countries have made great strides in their efforts to go cashless. At the same time, citizens are increasingly aware of the decreasing amount of privacy in their lives. The potential hazards cashless societies pose to financial privacy may incentivize citizens to hold some of their money in independent cryptocurrencies. This article argues that, in order for governments in cashless societies to keep firm control over their money supply, they should enact stronger privacy law protections for their citizens in order to decrease the real or perceived loss of (financial) privacy. This paper compares the privacy laws that exist today in the United States and the European Union and suggests combining elements of both legal systems in order to create a more privacy-friendly legal framework that can enable governments to compete against independent cryptocurrencies.

    Forecasting model selection through out-of-sample rolling horizon weighted errors

    Demand forecasting is an essential process for any firm, whether it is a supplier, manufacturer or retailer. A large body of research on time-series forecasting techniques exists in the literature, and many time-series forecasting tools are available. In many cases, however, selecting the best forecasting model for each time series to be dealt with remains a complex problem. In this paper, a new automatic selection procedure for time-series forecasting models is proposed. The selection criterion has been tested on the set of monthly time series of the M3 Competition with two basic forecasting models, obtaining interesting results. This selection criterion has been implemented in a forecasting expert system and applied to a real case, a firm that produces steel products for construction, which automatically performs monthly forecasts on tens of thousands of time series. As a result, the firm has increased the level of success in its demand forecasts. © 2011 Elsevier Ltd. All rights reserved. Poler Escoto, R.; Mula, J. (2011). Forecasting model selection through out-of-sample rolling horizon weighted errors. Expert Systems with Applications, 38(12), 14778-14785. doi:10.1016/j.eswa.2011.05.072
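A rolling-horizon, out-of-sample weighted-error selection loop of the kind the abstract describes can be sketched as follows; the two toy models, the linear weighting and the demand series are illustrative assumptions, not the paper's actual procedure:

```python
def naive(history):
    # forecast = last observed value
    return history[-1]

def moving_avg(history, k=3):
    # forecast = mean of the last k observations
    window = history[-k:]
    return sum(window) / len(window)

def rolling_weighted_error(series, model, n_origins=6):
    """One-step-ahead rolling-origin evaluation: forecast from each of
    the last n_origins time points, with linearly larger weights on
    more recent origins (an assumed weighting scheme)."""
    total = weight_sum = 0.0
    for i, t in enumerate(range(len(series) - n_origins, len(series))):
        w = float(i + 1)                       # linearly increasing weight
        total += w * abs(series[t] - model(series[:t]))
        weight_sum += w
    return total / weight_sum

series = [float(x) for x in range(20)]         # hypothetical trending demand
models = {"naive": naive, "ma3": moving_avg}
best = min(models, key=lambda m: rolling_weighted_error(series, models[m]))
print(best)
```

On a steadily trending series the naive model wins here, since the trailing average lags the trend; a forecasting expert system would run this comparison per series and per period.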

    Blink and it's done: Interactive queries on very large data

    In this demonstration, we present BlinkDB, a massively parallel, sampling-based approximate query processing framework for running interactive queries on large volumes of data. The key observation in BlinkDB is that one can make reasonable decisions in the absence of perfect answers. BlinkDB extends the Hive/HDFS stack and can handle the same set of SPJA (selection, projection, join and aggregate) queries as supported by these systems. BlinkDB provides real-time answers along with statistical error guarantees, and can scale to petabytes of data and thousands of machines in a fault-tolerant manner. Our experiments using the TPC-H benchmark and on an anonymized real-world video content distribution workload from Conviva Inc. show that BlinkDB can execute a wide range of queries up to 150x faster than Hive on MapReduce and 10-150x faster than Shark (Hive on Spark) over tens of terabytes of data stored across 100 machines, all with an error of 2-10%. Supported by the National Science Foundation (CISE Expeditions Award CCF-1139158), the Defense Advanced Research Projects Agency (Contract FA8650-11-C-7136), and by Qualcomm, Amazon, Google, SAP, Blue Goji, Cisco, Cloudera, Ericsson, General Electric, Hewlett-Packard, Intel, MarkLogic, Microsoft, NetApp, Oracle, Splunk and VMware.
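The core idea, answering an aggregate from a uniform sample and attaching a statistical error bar, can be sketched as follows; the CLT-based interval, the synthetic column and the 1% sample rate are assumptions for illustration, not BlinkDB's implementation:

```python
import random
import statistics

def approx_avg(data, sample_frac=0.01, z=1.96, seed=0):
    """AVG over a uniform sample, with a CLT-style 95% error bar
    (± z standard errors of the sample mean)."""
    rng = random.Random(seed)
    n = max(2, int(len(data) * sample_frac))
    sample = rng.sample(data, n)
    mean = statistics.fmean(sample)
    stderr = statistics.stdev(sample) / n ** 0.5
    return mean, z * stderr

data = [float(i % 100) for i in range(100_000)]   # synthetic "table" column
est, err = approx_avg(data)
exact = statistics.fmean(data)
print(f"approx={est:.2f} ±{err:.2f}, exact={exact:.2f}")
```

Only the 1,000-row sample is scanned, so the cost is roughly 1% of a full scan, at the price of a quantified error on the answer.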

    The Information Systems (IS) Role of Accountants: A Case Study of an On-line Analytical Processing (OLAP) Implementation

    Today's organisations place heavy reliance on computerised information systems (CIS) for the provision of timely and quality information to management. The quality of an accounting information system (AIS) is critical to the success of a firm. Executives now require real-time information with multidimensional views to manage firms operating in a dynamic and competitive environment. The use of OLAP technology in financial reporting will greatly improve the flexibility of information available from various databases. This study reports a case of implementing an OLAP tool to build complex financial reports for the use of senior management. The case illustrates the importance of the IS role of accountants, with the emergence of the systems accounting role, and the benefits of OLAP to accountants.

    Knowing when you're wrong: Building fast and reliable approximate query processing systems

    Modern data analytics applications typically process massive amounts of data on clusters of tens, hundreds, or thousands of machines to support near-real-time decisions. The quantity of data and limitations of disk and memory bandwidth often make it infeasible to deliver answers at interactive speeds. However, it has been widely observed that many applications can tolerate some degree of inaccuracy. This is especially true for exploratory queries on data, where users are satisfied with "close-enough" answers if they can come quickly. A popular technique for speeding up queries at the cost of accuracy is to execute each query on a sample of data, rather than the whole dataset. To ensure that the returned result is not too inaccurate, past work on approximate query processing has used statistical techniques to estimate "error bars" on returned results. However, existing work in the sampling-based approximate query processing (S-AQP) community has not validated whether these techniques actually generate accurate error bars for real query workloads. In fact, we find that error bar estimation often fails on real-world production workloads. Fortunately, it is possible to quickly and accurately diagnose the failure of error estimation for a query. In this paper, we show that it is possible to implement a query approximation pipeline that produces approximate answers and reliable error bars at interactive speeds. Supported by the National Science Foundation (CISE Expeditions Award CCF-1139158), Lawrence Berkeley National Laboratory (Award 7076018), the Defense Advanced Research Projects Agency (XData Award FA8750-12-2-0331), and by Amazon, Google, SAP, the Thomas and Stacey Siebel Foundation, Apple, Cisco, Cloudera, EMC, Ericsson and Facebook.
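One standard way to obtain an error bar without a closed-form estimator, and a useful cross-check when closed-form estimation is suspect, is the bootstrap; this sketch uses made-up Gaussian data and is not the paper's actual diagnostic:

```python
import random
import statistics

def bootstrap_error(sample, stat=statistics.fmean, n_boot=200, seed=1):
    """Estimate the standard error of `stat` by resampling the sample
    with replacement and taking the spread of the replicates."""
    rng = random.Random(seed)
    n = len(sample)
    reps = []
    for _ in range(n_boot):
        resample = [sample[rng.randrange(n)] for _ in range(n)]
        reps.append(stat(resample))
    return statistics.stdev(reps)

rng = random.Random(2)
sample = [rng.gauss(10.0, 3.0) for _ in range(500)]  # synthetic sample
se = bootstrap_error(sample)   # should sit near 3 / sqrt(500) ≈ 0.13
```

Because the bootstrap only assumes the sample is representative, a large gap between it and a closed-form error bar is one signal that the closed-form estimate has failed for that query.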

    Feedback-Based Admission Control for Firm Real-Time Task Allocation with Dynamic Voltage and Frequency Scaling

    Feedback-based mechanisms can be employed to monitor the performance of Multiprocessor Systems-on-Chips (MPSoCs) and steer task execution even when exact knowledge of the workload is unavailable a priori. In particular, traditional proportional-integral controllers can be used with firm real-time tasks either to admit them to the processing cores or to reject them, so as not to violate the timeliness of the already admitted tasks. During periods with lower computational power demand, dynamic voltage and frequency scaling (DVFS) can be used to reduce energy dissipation in the cores while still meeting the tasks' time constraints. Depending on the workload pattern and weight, the platform size and the granularity of DVFS, energy savings can reach up to 60% at the cost of a slight performance degradation.
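A minimal sketch of a proportional-integral admission test of the kind described above; the gains, utilization setpoint and load values are illustrative assumptions, and the DVFS policy itself is not modeled:

```python
class PIAdmission:
    """Admit a firm real-time task only if it fits within the headroom
    computed by a PI controller tracking a core-utilization setpoint."""

    def __init__(self, setpoint=0.8, kp=0.5, ki=0.1):
        self.setpoint, self.kp, self.ki = setpoint, kp, ki
        self.integral = 0.0

    def admit(self, measured_util, task_util):
        error = self.setpoint - measured_util   # positive = spare capacity
        self.integral += error
        headroom = self.kp * error + self.ki * self.integral
        # Under sustained positive headroom, DVFS could additionally
        # lower the core frequency; that policy is not modeled here.
        return task_util <= max(0.0, headroom)

ctrl = PIAdmission()
print(ctrl.admit(measured_util=0.5, task_util=0.1))   # light load: admit
print(ctrl.admit(measured_util=0.95, task_util=0.1))  # overload: reject
```

The integral term accumulates persistent under- or over-utilization, so the controller gradually opens or closes admission rather than reacting only to the latest measurement.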

    Performance Evaluation of Time-Critical Smart Grid Applications

    This paper focuses on the firm real-time requirements of time-critical Wide Area Measurement and Control systems, which are expected to play a major role in future Smart Grids. It analyses the operation of these systems and identifies their communication traffic characteristics. It shows that these characteristics are significantly different from those of current near real-time Wide Area Measurement applications, which provide visualization to support manual grid control. It then discusses the performance evaluation of these time-critical systems and presents the first stage in a body of work aimed at developing models and techniques to carry out the performance evaluation process. It presents some preliminary results and outlines the direction of future work.