
    Crummer SunTrust Portfolio Recommendations: Crummer Investment Management [2019]

    SunTrust endowed this portfolio to provide scholarships for future Crummer students and to give current students a practical, hands-on learning opportunity. This year we are pleased to be able to award $50,000 in scholarships. We are extremely grateful for SunTrust’s generosity and investment in higher education. We have all learned a great deal from this experience and the responsibility of managing real money. Our first challenge is to establish a portfolio position that takes advantage of economic opportunities while avoiding unnecessary risk and conforming to the Crummer SunTrust Investment Policy Statement (IPS). The IPS also tasks us with operating at two levels simultaneously: tactical for the near term, and strategic for the long run. Additionally, this portfolio presents some unusual portfolio management challenges by trading only once a year, in early May. Our tactical approach began with a top-down sector analysis. We established an economic forecast based on research and consultation with economists, including Professor William Seyfried of the Crummer School. That forecast then drove our allocation among the eleven S&P sectors: Communication Services, Consumer Discretionary, Consumer Staples, Energy, Financials, Healthcare, Industrials, Information Technology, Materials, Telecommunications, and Utilities. This year we have forecast slowing economic growth and tilted the allocation towards defensive sectors that are less sensitive to the business cycle. Our asset class allocation embodies the long-run strategy of our portfolio. The IPS sets asset class ranges from low to moderate risk to keep the portfolio from being whipsawed by transitory market cycles. Our equity allocations entail a moderate level of risk, consistent with our view that the stock market will continue a modest upward trend between now and April 2020. We maintain an allocation to a sector ETF in each sector to ensure diversification.
Additionally, as a practical matter, we are limiting each sector to a maximum of two individual stocks. Fixed income is our anchor sector, providing a hedge against the risk of an economic slowdown adversely impacting our equity holdings. We are at the middle of our IPS range for fixed income at 15%, an increase from last year's allocation of 10%. Furthermore, we have incorporated a new theme into our portfolio selection process related to the rise of the global middle class. Inspired by Hans Rosling's Factfulness, we believe there are systematic misunderstandings about the state of the world. The biases and ignorance of rich nations obscure the tremendous human progress that has taken place across the globe, record-low poverty levels providing one noteworthy example. Our investment team is committed to capitalizing on opportunities hidden in plain sight. Regardless of a security's consistency with our theme, all recommendations must be undervalued after rigorous quantitative and qualitative analysis. Lastly, we believe that the economic merits of capitalism will prevail against the negative sentiments unfortunately gaining support in the United States. The innovative capacity of the free market will avail itself and continue to raise living standards across the globe.

    Design Guidelines for High-Performance SCM Hierarchies

    With emerging storage-class memory (SCM) nearing commercialization, there is evidence that it will deliver the much-anticipated high density and access latencies within only a few factors of DRAM. Nevertheless, the latency-sensitive nature of memory-resident services makes seamless integration of SCM in servers questionable. In this paper, we ask the question of how best to introduce SCM for such servers to improve overall performance/cost over existing DRAM-only architectures. We first show that even with the most optimistic latency projections for SCM, the higher memory access latency results in prohibitive performance degradation. However, we find that deployment of a modestly sized high-bandwidth 3D stacked DRAM cache makes the performance of an SCM-mostly memory system competitive. The high degree of spatial locality that memory-resident services exhibit not only simplifies the DRAM cache's design as page-based, but also enables the amortization of increased SCM access latencies and the mitigation of SCM's read/write latency disparity. We identify the set of memory hierarchy design parameters that plays a key role in the performance and cost of a memory system combining an SCM technology and a 3D stacked DRAM cache. We then introduce a methodology to drive provisioning for each of these design parameters under a target performance/cost goal. Finally, we use our methodology to derive concrete results for specific SCM technologies. With PCM as a case study, we show that a two bits/cell technology hits the performance/cost sweet spot, reducing the memory subsystem cost by 40% while keeping performance within 3% of the best performing DRAM-only system, whereas single-level and triple-level cell organizations are impractical for use as memory replacements. Comment: Published at MEMSYS'1
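    The performance/cost trade-off described in this abstract can be illustrated with a first-order model: a small DRAM cache filters most accesses, amortizing the slower SCM behind it. The following sketch uses entirely hypothetical latencies, prices, capacities, and hit rates (none of these numbers come from the paper) to show why a modest DRAM cache can make an SCM-mostly system competitive.

    ```python
    # Hypothetical first-order model of a DRAM cache in front of SCM-mostly
    # memory. All latencies, $/GB figures, capacities, and the hit rate are
    # illustrative assumptions, not results from the paper.

    def avg_access_latency(hit_rate, dram_ns, scm_ns):
        """Average memory latency when a DRAM cache filters SCM accesses."""
        return hit_rate * dram_ns + (1.0 - hit_rate) * scm_ns

    def subsystem_cost(dram_gb, scm_gb, dram_per_gb, scm_per_gb):
        """Total cost of a memory subsystem mixing DRAM and SCM capacity."""
        return dram_gb * dram_per_gb + scm_gb * scm_per_gb

    # Assumed parameters (illustrative only).
    DRAM_NS, SCM_NS = 80.0, 240.0      # access latencies in nanoseconds
    DRAM_COST, SCM_COST = 8.0, 2.0     # $/GB; multi-bit/cell SCM is cheaper
    CAPACITY_GB = 256                  # total memory the server must provide

    dram_only_cost = subsystem_cost(CAPACITY_GB, 0, DRAM_COST, SCM_COST)
    hybrid_cost = subsystem_cost(16, CAPACITY_GB, DRAM_COST, SCM_COST)
    hybrid_latency = avg_access_latency(0.95, DRAM_NS, SCM_NS)

    print(f"DRAM-only cost: ${dram_only_cost:.0f}")
    print(f"Hybrid cost:    ${hybrid_cost:.0f}")
    print(f"Hybrid latency: {hybrid_latency:.1f} ns vs {DRAM_NS} ns DRAM-only")
    ```

    Under these assumed numbers, spatial locality (a high cache hit rate) keeps average latency close to DRAM while the bulk of capacity moves to cheaper SCM, which is the intuition behind the paper's provisioning methodology.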

    Crummer SunTrust Portfolio Recommendations: Crummer Investment Management [2018]

    The Crummer SunTrust Portfolio’s Investment Policy Statement requires that the management team determine portfolio allocations based on a consensus estimate of the market’s behavior throughout the coming year. This team has conducted thorough economic research using a variety of respected sources, developed a comprehensive market analysis, and heard from a well-rounded selection of industry experts (including economists, portfolio managers, and financial advisors) to inform this year’s investment decision. The team analyzed and discussed a range of likely economic possibilities for the upcoming year to shape a consensus that would serve to inform portfolio decisions. The team also evaluated the potential upsides and downsides relative to each economic factor to guide appropriate responses regarding individual stock selections and portfolio design. Finally, the team’s investment strategy has been to select securities trading at a significant discount to market value. We believe this strategy will mitigate market volatility while providing a larger total return.

    Amazon: David Becomes Goliath

    From humble beginnings, Amazon.com and its founder Jeff Bezos have grown to become one of the most successful companies and individuals, respectively. This paper will explain how Amazon came to be, what has made the company so successful, and where it may go from here. This paper will also look at six competitors of Amazon and their strengths and weaknesses in the current business world. Finally, I will share some of my personal recommendations for the company and the lessons I have learned while studying this topic. The information in this paper has been retrieved from various sources and mostly includes quotes from people who helped build this organization, as well as SWOT analyses prepared by professionals in the business analysis field.

    Data center's telemetry reduction and prediction through modeling techniques

    Nowadays, Cloud Computing is widely used to host and deliver services over the Internet. The architecture of clouds is complex due to the heterogeneous nature of their hardware, and they are hosted in large-scale data centers. To manage such complex infrastructure effectively and efficiently, constant monitoring is needed. This monitoring generates large amounts of telemetry data streams (e.g. hardware utilization metrics) which are used for multiple purposes including problem detection, resource management, workload characterization, resource utilization prediction, capacity planning, and job scheduling. These telemetry streams require costly bandwidth and storage space, particularly over the medium to long term for large data centers. Moreover, accurate future estimation of these telemetry streams is a challenging task due to multi-tenant co-hosted applications and dynamic workloads. Inaccurate estimation leads to either under- or over-provisioning of data center resources. In this Ph.D. thesis, we propose to improve prediction accuracy and reduce the bandwidth utilization and storage space requirements with the help of modeling and prediction methods from machine learning. Most existing methods are based on a single model, which often does not appropriately estimate different workload scenarios. Moreover, these prediction methods use a fixed-size observation window, which cannot produce accurate results because it is not adaptively adjusted to capture local trends in the recent data. An estimation method trained on fixed sliding windows therefore uses an irrelevantly large number of observations, which yields inaccurate estimations. In summary, we: C1) efficiently reduce bandwidth and storage for telemetry data through real-time modeling using a Markov chain model; C2) propose a novel method to adaptively and automatically identify the most appropriate model to accurately estimate data center resource utilization; C3) propose a deep learning-based adaptive window size selection method which dynamically limits the sliding window size to capture the local trend in the latest resource utilization for building the estimation model. Postprint (published version)
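    The C1 contribution (telemetry reduction via Markov chain modeling) can be sketched in a few lines: discretize utilization samples into states, count state transitions, and ship the resulting transition probabilities instead of every raw sample. The bin count, the toy stream, and the function name below are illustrative assumptions, not the thesis's actual design.

    ```python
    # Minimal sketch of telemetry reduction via a Markov chain model:
    # replace a raw utilization stream with a compact transition model.
    # Bin count and sample stream are assumptions for illustration.
    from collections import defaultdict

    def build_markov_model(samples, n_bins=10):
        """Discretize [0, 100] utilization samples into n_bins states and
        return per-state transition probabilities."""
        states = [min(int(s * n_bins / 100), n_bins - 1) for s in samples]
        counts = defaultdict(lambda: defaultdict(int))
        for a, b in zip(states, states[1:]):
            counts[a][b] += 1
        # Normalize transition counts into probabilities.
        model = {}
        for s, nxt in counts.items():
            total = sum(nxt.values())
            model[s] = {t: c / total for t, c in nxt.items()}
        return model

    # A toy CPU-utilization stream (%); the few model entries below replace
    # the full stream, reducing bandwidth and storage for long-term telemetry.
    stream = [12, 14, 15, 45, 47, 46, 44, 90, 92, 91, 13, 15]
    model = build_markov_model(stream)
    print(model)
    ```

    The model can then regenerate a statistically similar stream on demand, which is what makes it a viable substitute for storing or transmitting every sample.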

    Digital Transformation

    The amount of literature on Digital Transformation is staggering—and it keeps growing. Why, then, come out with yet another such document? Moreover, any text aiming at explaining the Digital Transformation by presenting a snapshot is going to become obsolete in a blink of an eye, most likely to be already obsolete at the time it is first published. The FDC Initiative on Digital Reality felt there is a need to look at the Digital Transformation from the point of view of a profound change that is pervading the entire society—a change made possible by technology and that keeps changing due to technology evolution opening new possibilities but is also a change happening because it has strong economic reasons. The direction of this change is not easy to predict because it is steered by a cultural evolution of society, an evolution that is happening in niches and that may expand rapidly to larger constituencies and as rapidly may fade away. This creation, selection by experimentation, adoption, and sudden disappearance, is what makes the whole scenario so unpredictable and continuously changing.

    Journal of Telecommunications in Higher Education

    In This Issue: 6 Cultivating New Revenue Sources 12 Outsourcing: A Viable Alternative for Telecom? 18 Partnerships: Chemistry + Communications 28 Campus Cable Comes to Millsaps College 32 SHIP Comes in for UC Berkeley Internet Users 40 Operating a Pager System to Generate Revenue