
    The costs of environmental regulation in a concentrated industry

    The typical cost analysis of an environmental regulation consists of an engineering estimate of the compliance costs. In industries where fixed costs are an important determinant of market structure, this static analysis ignores the dynamic effects of the regulation on entry, investment, and market power. I evaluate the welfare costs of the 1990 Amendments to the Clean Air Act on the US Portland cement industry, accounting for these effects through a dynamic model of oligopoly in the tradition of Ericson and Pakes (1995). Using a recently developed two-step estimator, I recover the entire cost structure of the industry, including the distribution of sunk entry costs and adjustment costs of investment. I find that the Amendments have significantly increased the sunk cost of entry. I solve for the Markov perfect Nash equilibrium (MPNE) of the model and simulate the welfare effects of the Amendments. A static analysis misses the welfare penalty on consumers and obtains the wrong sign on the welfare effects on incumbent firms.

    Cloud Based IoT Architecture

    The Internet of Things (IoT) and cloud computing have grown in popularity over the past decade as the internet has become faster and more ubiquitous. Cloud platforms are well suited to handle IoT systems as they are accessible and resilient, and they provide a scalable solution to store and analyze large amounts of IoT data. IoT applications are complex software systems, and software developers need a thorough understanding of the capabilities, limitations, architecture, and design patterns of cloud platforms and cloud-based IoT tools to build an efficient, maintainable, and customizable IoT application. As the IoT landscape is constantly changing, research into cloud-based IoT platforms is either lacking or out of date. The goal of this thesis is to describe the basic components and requirements of a cloud-based IoT platform, to provide useful insights and experiences from implementing a cloud-based IoT solution using Microsoft Azure, and to discuss some of the shortcomings of combining IoT with a cloud platform.
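The device-to-cloud telemetry path such a platform ingests can be sketched minimally. The envelope fields and device name below are illustrative conventions, not the schema of the Azure implementation the thesis describes:

```python
import json
import time

def build_telemetry(device_id, readings):
    """Package sensor readings as a device-to-cloud telemetry message.

    The envelope (deviceId, timestamp, body) is a common cloud-IoT
    convention used here for illustration, not a specific Azure
    IoT Hub message schema.
    """
    return json.dumps({
        "deviceId": device_id,
        "timestamp": int(time.time()),
        "body": readings,
    })

# hypothetical device and readings
msg = build_telemetry("sensor-01", {"temperature_c": 21.4, "humidity_pct": 55})
decoded = json.loads(msg)
print(decoded["deviceId"])  # sensor-01
```

In a real deployment this JSON payload would be sent over MQTT, AMQP, or HTTPS to the platform's ingestion endpoint, which routes it to storage and analytics components.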

    Big data : evolution, components, challenges and opportunities

    Thesis (S.M. in Management of Technology), Massachusetts Institute of Technology, Sloan School of Management, 2013. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 122-126). This work reviews the evolution and current state of the "Big Data" industry and examines the key components, challenges, and opportunities that Big Data and analytics face in today's business environment, analyzed along seven dimensions. Historical Background: the historical evolution and milestones in data management that eventually led to what we know today as Big Data. What is Big Data?: the key concepts around Big Data, including Volume, Variety, and Velocity, and the key components of successful Big Data initiatives. Data Collection: the most important issue to consider before any Big Data initiative is to identify the "business case" or "question" to be answered; no Big Data initiative should be launched without clearly identifying the business problem to tackle, and the data collection strategy has to be defined with that business case in mind. Data Analysis: the techniques available to create value by aggregating, manipulating, analyzing, and visualizing Big Data, including predictive modeling, data mining, and statistical inference models. Data Visualization: visualization is one of the most powerful and appealing techniques for data exploration; this section explores the main visualization techniques so that the characteristics of the data and the relationships among data items can be reported and analyzed. Impact: the potential impact and implications of Big Data for value creation in five domains: insurance, healthcare, politics, education, and marketing. Human Capital: how Big Data will influence business processes and human capital, the role of the "data scientist," and a potential shortage of data experts in coming years. Infrastructure and Solutions: the current professional services and infrastructure offerings, with a review of the vendors available in different specialties around Big Data. By Alejandro Zarate Santovena. S.M. in Management of Technology.

    A comparison of statistical machine learning methods in heartbeat detection and classification

    In health care, patients with heart problems require quick responsiveness in a clinical setting or in the operating theatre. Towards that end, automated classification of heartbeats is vital, as some heartbeat irregularities are time consuming to detect. Therefore, analysis of electrocardiogram (ECG) signals is an active area of research. The methods proposed in the literature depend on the structure of a heartbeat cycle. In this paper, we use interval- and amplitude-based features together with a few samples from the ECG signal as a feature vector. We studied a variety of classification algorithms, focusing especially on a type of arrhythmia known as the ventricular ectopic beat (VEB). We compare the performance of the classifiers against algorithms proposed in the literature and make recommendations regarding features, sampling rate, and choice of classifier for a real-time clinical setting. The extensive study is based on the MIT-BIH arrhythmia database. Our main contributions are the evaluation of existing classifiers over a range of sampling rates, the recommendation of a detection methodology to employ in a practical setting, and an extension of the notion of a mixture of experts to a larger class of algorithms.
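The interval- and amplitude-based feature vector the abstract describes can be sketched as follows. The exact features (pre/post RR intervals, R-peak amplitude, a small window of raw samples) are a plausible reading of the abstract, not the paper's exact pipeline:

```python
def heartbeat_features(signal, r_peaks, i, n_samples=8):
    """Feature vector for beat i: RR interval to the previous and next
    beat, R-peak amplitude, and a few raw samples around the R peak.

    Illustrative feature set only -- the paper's exact features and
    sampling choices may differ.
    """
    rr_pre = r_peaks[i] - r_peaks[i - 1]    # samples since previous beat
    rr_post = r_peaks[i + 1] - r_peaks[i]   # samples until next beat
    amp = signal[r_peaks[i]]                # R-peak amplitude
    half = n_samples // 2
    window = signal[r_peaks[i] - half : r_peaks[i] + half]
    return [rr_pre, rr_post, amp] + window

# toy signal with three evenly spaced unit-amplitude "beats"
sig = [0.0] * 300
peaks = [50, 150, 250]
for p in peaks:
    sig[p] = 1.0

fv = heartbeat_features(sig, peaks, 1)
print(len(fv))  # 11 (2 intervals + 1 amplitude + 8 samples)
```

Vectors of this form would then be fed to each candidate classifier (e.g. from scikit-learn) and the classifiers compared on held-out MIT-BIH records.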

    The roots and fruits of the Nordic Consumer Research

    Authors: Grønhaug Kjell, Grunert Klaus G., Uusitalo Klaus, Wikström Solveig R., Brembeck Helene, Johansson Barbro, Bergström Kerstin, Hillén Sandra, Jonsson Lena, Ossiansson Eva, Shnahan Helena, Halkoaho Jenniina, Hansson Niklas, Holmberg Ulrika, Kotro Tanja, Repo Petteri, Bråtå Hans Olav, Hagen Svein erik, Hauge Atle, Lammi Minna, Pantzar Mika, Timonen Päivi, Mickelsson Jacob, Mauri Aurelio G., Soone Ivar, Ryynänen Toni, Varjonen Johanna. Non-peer-reviewed.

    Elastic techniques to handle dynamism in real-time data processing systems

    Real-time data processing is a crucial component of cloud computing today. It is widely adopted to provide an up-to-date view of data for social networks, cloud management, web applications, edge, and IoT infrastructures. Real-time processing frameworks are designed for time-sensitive tasks such as event detection, real-time data analysis, and prediction. Compared to handling offline, batched data, real-time data processing applications tend to be long-running and are prone to performance issues caused by many unpredictable environmental variables, including (but not limited to) job specification, user expectation, and available resources. To cope with this challenge, it is crucial for system designers to improve frameworks' ability to adjust their resource usage to changing environmental variables, defined as system elasticity. This thesis investigates how elastic resource provisioning helps cloud systems today process real-time data while maintaining predictable performance under workload influence in an automated manner. We explore new algorithms, framework design, and efficient system implementation to achieve this goal. At the same time, distributed systems today need to continuously handle various application specifications, hardware configurations, and workload characteristics. Maintaining stable performance requires systems to explicitly plan resource allocation upon starting an application and to tailor allocation dynamically at run time. In this thesis, we show how system elasticity can help systems provide tunable performance under the dynamism of many environmental variables without compromising resource efficiency. Specifically, this thesis focuses on the two following aspects: i) Elasticity-aware Scheduling: Real-time data processing systems today are often designed in a resource- and workload-agnostic fashion.
As a result, most users are unable to perform resource planning before launching an application or to adjust resource allocation (both within and across application boundaries) intelligently at run time. The first part of this thesis (Stela [1], Henge [2], Getafix [3]) explores efficient mechanisms to conduct performance analysis while also enabling elasticity-aware scheduling in today's cloud frameworks. ii) Resource-efficient Cloud Stack: The second line of work in this thesis aims to improve underlying cloud stacks to support self-adaptive, highly efficient resource provisioning. Today's cloud systems enforce full isolation, which prevents fine-grained resource sharing among applications over time. This work (Cameo [4], Dirigo) builds real-time data processing systems for emerging cloud infrastructures that achieve high resource utilization through fine-grained resource sharing. Given that the market for real-time data analysis is expected to grow at an annual rate of 28.2% and reach 35.5 billion by 2024 [5], improving system elasticity can significantly reduce deployment cost and increase resource utilization. Our work improves the performance of real-time data analytics applications within resource constraints. We highlight some of the improvements as follows: i) Stela explores elastic techniques for single-tenant, on-demand dataflow scale-out and scale-in operations. It improves post-scale throughput by 45-120% during on-demand scale-out and by 2-5× during on-demand scale-in. ii) Henge develops a mechanism to map an application's performance onto a unified scale of resource needs. It reduces resource consumption by 40-60% while maintaining the same level of SLO achievement throughout the cluster.
iii) Getafix implements a strategy to analyze workloads dynamically and guides the system to adaptively determine the number of replicas to generate and where to place them. It achieves comparable query latency (both average and tail) while delivering 1.45-2.15× memory savings. iv) Cameo proposes a scheduler that supports data-driven, fine-grained operator execution guided by user expectations. It improves cluster utilization by 6× and reduces performance violations by 72% while packing more jobs into a shared cluster. v) Dirigo performs fully decentralized, function-state-aware, global message scheduling for stateful functions. It reduces tail latency by 60% compared to a local scheduling approach and remote state accesses by 19× compared to a scheduling approach that is unaware of function states. These works can potentially lead to profound cost savings for both cloud providers and end-users.
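The core idea of elasticity-aware scheduling, mapping observed performance against an SLO and adjusting replica counts, can be sketched with a deliberately simple proportional rule. This is an illustration of the general technique, not the actual Stela, Henge, or Getafix algorithm, and the thresholds are made up for the example:

```python
import math

def scale_decision(observed_latency_ms, slo_latency_ms, replicas,
                   min_replicas=1, max_replicas=32):
    """Return a new replica count from SLO attainment.

    A toy proportional controller: scale out when latency exceeds the
    SLO, scale in when there is comfortable headroom, hold otherwise.
    The 0.5 headroom threshold and 0.75 scale-in factor are arbitrary
    illustration parameters.
    """
    ratio = observed_latency_ms / slo_latency_ms
    if ratio > 1.0:      # SLO violated: scale out proportionally
        target = replicas * ratio
    elif ratio < 0.5:    # large headroom: scale in gradually
        target = replicas * 0.75
    else:                # within band: hold steady
        target = replicas
    return max(min_replicas, min(max_replicas, math.ceil(target)))

print(scale_decision(300, 200, 4))  # 6: 50% over SLO, scale out 4 -> 6
print(scale_decision(80, 200, 4))   # 3: large headroom, scale in 4 -> 3
print(scale_decision(180, 200, 4))  # 4: within band, hold
```

Real systems such as those in the thesis must additionally handle multi-tenancy, stateful operators, and the cost of reconfiguration itself, which is what makes elasticity-aware scheduling a research problem rather than a control loop.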

    Finanses un kredīts: problēmas, koncepcijas, vadība (Finance and Credit: Problems, Concepts, Management)

    The economic situation in the Baltic States is investigated; in particular, the development of economies in transition is analysed for Latvia, Lithuania, Estonia, and Poland. The following topics are studied: monetary and exchange-rate policy; crediting and bank management; development of the securities market; management of taxes and finance; development of accounting policy; pension reform perspectives; etc.