Multimedia delivery in the future internet
The term "Networked Media" implies that all kinds of media, including text, images, 3D graphics, audio
and video, are produced, distributed, shared, managed and consumed on-line through various networks,
such as the Internet, fibre, WiFi, WiMAX, GPRS, 3G and so on, in a convergent manner [1]. This white
paper is the contribution of the Media Delivery Platform (MDP) cluster and aims to cover the
challenges of Networked Media in the transition to the Future Internet.
The Internet has evolved and changed the way we work and live. End users of the Internet have been confronted
with a bewildering range of media, services and applications, and with technological innovations concerning
media formats, wireless networks, and terminal types and capabilities. There is little evidence that the pace
of this innovation is slowing. Today, over one billion users access the Internet on a regular basis, more
than 100 million users have downloaded at least one (multi)media file, and over 47 million of them do so
regularly, searching in more than 160 Exabytes of content. These numbers are expected to rise exponentially
in the near future. Internet content is expected to grow by at least a factor of six, rising
to more than 990 Exabytes before 2012, fuelled mainly by the users themselves. Moreover, it is envisaged
that in the near- to mid-term future, the Internet will provide the means to share and distribute (new)
multimedia content and services with superior quality and striking flexibility, in a trusted and personalised
way, improving citizens' quality of life, working conditions, edutainment and safety.
In this evolving environment, new transport protocols, new multimedia encoding schemes, cross-layer
in-network adaptation, machine-to-machine communication (including RFIDs), rich 3D content, as well as
community networks and the use of peer-to-peer (P2P) overlays, are expected to generate new models of
interaction and cooperation and to support enhanced perceived quality of experience (PQoE) and
innovative applications "on the move", such as virtual collaboration environments, personalised services/
media, virtual sport groups, on-line gaming, and edutainment. In this context, interaction with content,
combined with interactive multimedia search capabilities across distributed repositories, opportunistic P2P
networks and dynamic adaptation to the characteristics of diverse mobile terminals, is expected to
contribute towards such a vision.
Based on work that has taken place in a number of EC co-funded projects in Framework Programme 6 (FP6)
and Framework Programme 7 (FP7), a group of experts and technology visionaries have voluntarily
contributed to this white paper, which aims to describe the status, the state of the art, the challenges and
the way ahead in the area of content-aware media delivery platforms.
Overcoming engineering challenges of providing an effective user interface to a large scale distributed synthetic environment on the US teragrid: a systems engineering success story
Over recent years, large-scale distributed synthetic environment enterprises have been evolving in a diverse range of scientific and engineering fields. These computer modelling and simulation systems are increasing in scale and dimension in order to allow scientists and engineers to explore the attributes and emergent properties of a given system design. Within the field of computational science, the grid facilitates very large-scale collaborative simulation enterprises. The grid is similar to distributed interactive simulation/high-level architecture (DIS/HLA) in that it supports interconnectivity, but differs in that it supports intercommunication between large supercomputing resources. An important factor in the rapid adoption of the grid has been its role in enabling access to significant supercomputing resources not usually available at a single institution. However, the major challenge for the grid has been the lack of an effective and ubiquitous interface to this huge computational resource (which can comprise over 6000 CPUs distributed across the globe) at any time and from any location. This paper describes a unique user interface, built on systems engineering principles and practices, that solves the problem of delivering real-time interaction with simulations producing high-resolution 3D images, from lightweight computing devices such as personal digital assistants and tablet devices up to high-end computing platforms. The application of our work has far-reaching benefits for many sectors, including aerospace, medical informatics, engineering design, distributed simulation, and modelling.
Analysing relationship quality and its contribution to consumer relationship proneness
Relationship marketing has been the dominant paradigm in the sphere of marketing in recent decades. However, aspects such as globalisation, the development of information technologies, and growing competitive pressure have changed the way relationship management with consumers is approached. Consumers see themselves as the lead character and demand personalised treatment, customised to their needs and specific characteristics. In this context, relationship quality (RQ) makes it possible to understand the proneness of consumers to keep their commercial relationships alive. Several studies analyse RQ antecedents, but none has used a comprehensive management approach that includes the resources and capabilities (such as market orientation or knowledge management) that a company has available in order to enhance RQ. Furthermore, we analyse the effect of said perceived quality on the consumer's proneness to maintain the relationship.
Screening of energy efficient technologies for industrial buildings' retrofit
This chapter discusses screening of energy efficient technologies for industrial buildings' retrofit
Doctor of Philosophy dissertation
The embedded system space is characterized by a rapid evolution in the complexity and functionality of applications. In addition, the short time-to-market nature of the business motivates the use of programmable devices capable of meeting the conflicting constraints of low energy, high performance, and short design times. The keys to achieving these conflicting constraints are specialization and maximally extracting available application parallelism. General-purpose processors are flexible but are either too power hungry or lack the necessary performance. Application-specific integrated circuits (ASICs) efficiently meet the performance and power needs but are inflexible. Programmable domain-specific architectures (DSAs) are an attractive middle ground, but their design requires significant time, resources, and expertise in a variety of specialties, ranging from application algorithms to architecture and, ultimately, circuit design. This dissertation presents CoGenE, a design framework that automates the design of energy-performance-optimal DSAs for embedded systems. For a given application domain and a user-chosen initial architectural specification, CoGenE consists of a Compiler to generate the execution binary, a simulator Generator to collect performance/energy statistics, and an Explorer that modifies the current architecture to improve energy-performance-area characteristics. This process repeats automatically until the user-specified constraints are achieved, removing or alleviating the time needed to understand the application, manually design the DSA, and generate object code for the DSA. Thus, CoGenE is a new design methodology that represents a significant improvement in performance, energy dissipation, design time, and resources. This dissertation employs the face recognition domain to showcase a flexible architectural design methodology that creates "ASIC-like" DSAs.
The DSAs are instruction set architecture (ISA)-independent and achieve good energy-performance characteristics by co-scheduling the often conflicting constraints of data access, data movement, and computation through a flexible interconnect. This flexibility, however, represents a significant increase in programming complexity and code generation time. To address this problem, the CoGenE compiler employs integer linear programming (ILP)-based, interconnect-aware scheduling techniques for automatic code generation. The CoGenE explorer employs an iterative technique to search the complete design space and select a set of energy-performance-optimal candidates. When compared to manual designs, results demonstrate that CoGenE produces superior designs for three application domains: face recognition, speech recognition, and wireless telephony. While CoGenE is well suited to applications that exhibit streaming behavior, multithreaded applications like ray tracing present a different but important challenge. To demonstrate its generality, CoGenE is evaluated in designing a novel multicore N-wide SIMD architecture, known as StreamRay, for the ray tracing domain. CoGenE is used to synthesize the SIMD execution cores, the compiler that generates the application binary, and the interconnection subsystem. Further, separating address and data computations in space reduces data movement and contention for resources, thereby significantly improving performance compared to existing ray tracing approaches.
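The iterate-until-constraints-met loop at the heart of this framework can be sketched in a few lines. The following is a hypothetical illustration, not CoGenE's actual code: the cost function, the architecture tuple (interconnect width, cache size), and the numeric targets are all invented, and the real Explorer searches a far richer space while also regenerating the compiler and binary for each candidate.

```python
import random

def simulate(arch):
    """Stand-in for the simulator Generator: return an energy score for a
    candidate architecture (interconnect width, cache size in KB). The
    cost surface here is made up purely for illustration."""
    width, cache_kb = arch
    return abs(width - 8) * 3 + abs(cache_kb - 64) * 0.5

def explore(initial_arch, energy_budget, max_iters=1000, seed=0):
    """Hill-climbing Explorer: mutate one architectural parameter per
    iteration and keep the change only if the simulated score improves,
    stopping once the user-specified constraint is achieved."""
    rng = random.Random(seed)
    arch, best = initial_arch, simulate(initial_arch)
    for _ in range(max_iters):
        if best <= energy_budget:            # constraint met: stop exploring
            return arch, best
        width, cache_kb = arch
        if rng.random() < 0.5:               # perturb interconnect width
            cand = (max(1, width + rng.choice((-1, 1))), cache_kb)
        else:                                # perturb cache size
            cand = (width, max(8, cache_kb + rng.choice((-8, 8))))
        score = simulate(cand)
        if score < best:                     # keep improvements only
            arch, best = cand, score
    return arch, best

# Starting from a deliberately poor initial specification:
arch, energy = explore((2, 16), energy_budget=5.0)
```

A real design-space explorer would score energy, performance, and area jointly and would re-run compilation and simulation for every candidate; this sketch collapses all of that into one scalar to show only the control flow.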
Decision support in car leasing: a forecasting model for residual value estimation
The paper proposes a methodology to support pricing decisions in the car leasing industry. In particular, the price is given by the monthly fee to be paid by the lessee as compensation for using a car over some contract horizon. After contract expiration, lessors are obliged to take back the vehicle, which is then sold in the used car market. Therefore, lessors require an accurate estimate of cars' residual values to manage the risk inherent in their business and to determine profitable prices. We explore the organizational and technical requirements associated with this forecasting task and develop a prediction model that complies with the identified application constraints. The model is rigorously tested within an empirical study and compared to established benchmarks. The results obtained in several experiments provide strong evidence that the proposed model is effective in generating accurate predictions of cars' residual values and efficient in requiring little user intervention.
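To make the forecasting task concrete, here is a deliberately simple sketch of residual value estimation. It is not the paper's model: it fits a linear depreciation curve (resale price as a fraction of list price versus vehicle age) by ordinary least squares, and the historical observations are invented for illustration.

```python
def fit_depreciation(ages, ratios):
    """Least-squares fit of ratio = a + b * age_months (closed form
    for simple linear regression)."""
    n = len(ages)
    mean_x = sum(ages) / n
    mean_y = sum(ratios) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(ages, ratios))
         / sum((x - mean_x) ** 2 for x in ages))
    a = mean_y - b * mean_x
    return a, b

def predict_residual(list_price, age_months, a, b):
    """Forecast the used-car sale price after age_months on contract."""
    return list_price * (a + b * age_months)

# Made-up historical observations: (age at resale, resale price / list price)
ages = [12, 24, 36, 48, 60]
ratios = [0.80, 0.68, 0.57, 0.47, 0.38]

a, b = fit_depreciation(ages, ratios)
residual = predict_residual(30000, 36, a, b)  # forecast for a 36-month lease
```

A production model would condition on mileage, brand, equipment and market indicators, and the paper's contribution lies precisely in meeting those organizational and accuracy requirements; this fragment only shows the shape of the estimation problem.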
Research and Education in Computational Science and Engineering
Over the past two decades the field of computational science and engineering
(CSE) has penetrated both basic and applied research in academia, industry, and
laboratories to advance discovery, optimize systems, support decision-makers,
and educate the scientific and engineering workforce. Informed by centuries of
theory and experiment, CSE performs computational experiments to answer
questions that neither theory nor experiment alone is equipped to answer. CSE
provides scientists and engineers of all persuasions with algorithmic
inventions and software systems that transcend disciplines and scales. Carried
on a wave of digital technology, CSE brings the power of parallelism to bear on
troves of data. Mathematics-based advanced computing has become a prevalent
means of discovery and innovation in essentially all areas of science,
engineering, technology, and society; and the CSE community is at the core of
this transformation. However, a combination of disruptive
developments---including the architectural complexity of extreme-scale
computing, the data revolution that engulfs the planet, and the specialization
required to follow the applications to new frontiers---is redefining the scope
and reach of the CSE endeavor. This report describes the rapid expansion of CSE
and the challenges to sustaining its bold advances. The report also presents
strategies and directions for CSE research and education for the next decade.
Comment: Major revision, to appear in SIAM Review.
Enhancing the employability of fashion students through the use of 3D CAD
The textile and apparel industry has one of the longest and most intricate supply chains within manufacturing. Advancement in technology has facilitated its globalisation, enabling companies to span geographical borders. This has led to new methods of communication using electronic data formats. Throughout the latter part of the 20th Century, 2D CAD technology established itself as an invaluable tool within design and product development. More recently 3D virtual simulation software has made small but significant steps within this market. The technological revolution has opened significant opportunities for those forward thinking companies that are beginning to utilise 3D software. This advanced technology requires designers with unique skill sets. This paper investigates the skills required by fashion graduates from an industry perspective.
To reflect current industrial working practices, it is essential for educational establishments to incorporate technologies that will enhance the employability of graduates. This study developed an adapted action research model based on the work of Kurt Lewin, which reviewed the learning and teaching of 3D CAD within higher education. It encompassed the selection of 3D CAD software, analysis of industry requirements, and the implementation of 3D CAD into the learning and teaching of a selection of fashion students over a three-year period. Six interviews were undertaken with industrial design and product development specialists to determine current working practices, opinions of virtual 3D software, and graduate skill requirements.
It was found that the companies had similar working practices independent of the software utilised within their product development process. The companies which employed 3D CAD software considered that further developments were required before the technology could be fully integrated. Further to this, it was concluded that it was beneficial for graduates to be furnished with knowledge of emerging technologies which reflect industry and enhance their employability skills.