227 research outputs found
A survey of handover algorithms in DVB-H
Digital Video Broadcasting for Handhelds (DVB-H) is a standard for broadcasting IP Datacast (IPDC) services to mobile handheld terminals. Based on the DVB-T standard, DVB-H adds new features such as time slicing, MPE-FEC, in-depth interleavers, a mandatory cell identifier, an optional 4K modulation mode, and the use of 5 MHz bandwidth in addition to the usual 6, 7, or 8 MHz raster. IPDC over DVB-H has been proposed to ETSI to complement the DVB-H standard by combining IPDC and DVB-H in an end-to-end system. Handover in such unidirectional broadcast networks is a novel issue. In the years since the birth of DVB-H technology, great attention has been given to the performance analysis of DVB-H mobile terminals, and handover is one of the main research topics for DVB-H in mobile scenarios. Better reception quality and greater power efficiency are considered the main targets of handover research for DVB-H. New algorithms for the different handover stages in DVB-H have been the subject of recent research and are still being studied; further novel algorithms need to be designed to improve mobile reception quality. This article provides a comprehensive survey of handover algorithms in DVB-H. A systematic evaluation and categorization approach is proposed, based on the problems the algorithms solve and the handover stages they focus on. Criteria identified from the author's research are proposed and analyzed to facilitate the design of better handover algorithms for DVB-H.
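A common building block in the handover algorithms the survey covers is a hysteresis-based decision rule: hand over only when a neighbour cell's averaged signal strength exceeds the serving cell's by a margin, to avoid ping-ponging between cells. The sketch below illustrates that idea; the hysteresis margin, the moving-average window, and the class interface are illustrative assumptions, not values from any surveyed algorithm.

```python
# Illustrative hysteresis-based handover decision (assumed parameters).
from collections import deque

HYSTERESIS_DB = 3.0   # neighbour must be this much stronger (assumed margin)
WINDOW = 5            # samples averaged to smooth out fast fading (assumed)

class HandoverDecider:
    def __init__(self):
        self.serving = deque(maxlen=WINDOW)
        self.neighbour = deque(maxlen=WINDOW)

    def update(self, serving_dbm, neighbour_dbm):
        """Feed one signal-strength sample per cell; return True to hand over."""
        self.serving.append(serving_dbm)
        self.neighbour.append(neighbour_dbm)
        if len(self.serving) < WINDOW:
            return False  # not enough history yet to decide reliably
        avg_s = sum(self.serving) / WINDOW
        avg_n = sum(self.neighbour) / WINDOW
        return avg_n > avg_s + HYSTERESIS_DB
```

Averaging before comparing trades decision latency for stability, which mirrors the reception-quality versus power-efficiency tension the survey identifies.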
A Fresh Green Index in the World: Building and optimizing a Vegan and Sustainable Index Fund using a Genetic Algorithm and a Heuristic Local Search
Dissertation presented as the partial requirement for obtaining a Master's degree in Statistics and Information Management, specialization in Risk Analysis and Management. Investor curiosity regarding Environmental, Social and Governance (ESG) factors has grown in the last few years (Alcoforado, 2016), as the world faces some of its biggest problems to date, such as climate change and ecological collapse. As these issues are not to be taken lightly, individuals have started to act in the hope of creating a ‘greener’ world. As individuals hope to align with principles such as sustainability and veganism, the proposed project aims to build a Vegan and Sustainable Index Fund, because “An investment is not an investment if it is destroying our planet.” (Shiva, 2017).
The aim of the proposed work is, consequently, to build and optimize an industry- and geographically diversified Index Fund using a Genetic Algorithm (GA), demonstrated through the incorporation of Vegan and Sustainable companies in addition to the global top-50 ESG-ranked firms. Index Funds, whether mutual funds or Exchange-Traded Funds (ETFs), are passively managed portfolios that have been broadly used in hedge trading (Orito, Inoguchi, & Yamamoto, 2008).
This study uses historical data from Vegan, Sustainable and ESG-ranked companies as sample data, replacing traditional optimization methods with a Genetic Algorithm.
The GA was applied to a sample of 61 assets drawn from vegan and sustainable companies, obtaining a well-diversified, non-centred asset allocation. The results confirm the potential efficiency of genetic algorithms, given their fast convergence towards a better solution. Several functions were incorporated into the algorithm, for example the penalty-function method, to perform portfolio optimization that aims to maximize profits and minimize risks. Some flaws were also identified in the method applied.
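The penalty-function method mentioned above folds constraint violations into the fitness score so that infeasible portfolios are penalised rather than discarded. A minimal sketch of that idea follows; the sample returns, penalty weight, concentration cap, and GA hyper-parameters are all illustrative assumptions, not values from the dissertation.

```python
# Minimal GA with a penalty-function fitness for portfolio weights (assumed setup).
import random

def fitness(weights, mean_returns, cap=0.4, penalty=10.0):
    """Expected return minus penalties for violating the budget (weights sum
    to 1) and per-asset concentration constraints."""
    ret = sum(w * r for w, r in zip(weights, mean_returns))
    violation = abs(sum(weights) - 1.0)                   # budget constraint
    violation += sum(max(0.0, w - cap) for w in weights)  # concentration cap
    return ret - penalty * violation

def evolve(mean_returns, pop_size=40, generations=100, seed=1):
    rng = random.Random(seed)
    n = len(mean_returns)

    def random_weights():
        raw = [rng.random() for _ in range(n)]
        s = sum(raw)
        return [x / s for x in raw]   # start feasible: weights sum to 1

    pop = [random_weights() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda w: fitness(w, mean_returns), reverse=True)
        parents = pop[: pop_size // 2]           # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)            # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n)                 # small Gaussian point mutation
            child[i] = max(0.0, child[i] + rng.gauss(0, 0.05))
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda w: fitness(w, mean_returns))
```

Because the penalty term dominates the return term for even modest violations, the population converges towards allocations that respect the constraints without any explicit repair step.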
Lossless, Persisted Summarization of Static Callgraph, Points-To and Data-Flow Analysis
Static analysis is used to automatically detect bugs and security breaches, and aids compiler optimization. Whole-program analysis (WPA) can yield high precision, but it causes long analysis times and thus does not match common software-development workflows, often making it impractical for large, real-world applications.
This paper thus presents the design and implementation of ModAlyzer, a novel static-analysis approach that aims at accelerating whole-program analysis by making the analysis modular and compositional. It shows how to compute lossless, persisted summaries for callgraph, points-to and data-flow information, and it reports under which circumstances this function-level compositional analysis outperforms WPA.
We implemented ModAlyzer as an extension to LLVM and PhASAR, and applied it to 12 real-world C and C++ applications. At analysis time, ModAlyzer modularly and losslessly summarizes the analysis effect of the library code those applications share, hence avoiding its repeated re-analysis. The experimental results show that the reuse of these summaries can save, on average, 72% of analysis time over WPA. Moreover, because it is lossless, the module-wise analysis fully retains precision and recall. Surprisingly, as our results show, it sometimes even yields precision superior to WPA. The initial summary generation, on average, takes about 3.67 times as long as WPA.
Four-Octave Six-Port Receiver and its Calibration for Broadband Communications and Software Defined Radios
This paper presents a software defined radio six-port receiver for a novel broadband mobile communications system. The prototype covers the frequency range from 0.3 GHz to 6 GHz, and operates with up to 100 MHz-wide channels. The multi-band and multi-mode demodulation capabilities of the six-port architecture have been experimentally demonstrated. The six-port receiver has been successfully demonstrated at high data rates (up to 93.75 Mb/s, limited by the available test instruments). An efficient six-port auto-calibration method suitable for large instantaneous bandwidth systems is presented and validated.
Multifidelity Information Fusion Algorithms for High-Dimensional Systems and Massive Data sets
We develop a framework for multifidelity information fusion and predictive inference in high-dimensional input spaces and in the presence of massive data sets. Hence, we tackle simultaneously the “big N” problem for big data and the curse of dimensionality in multivariate parametric problems. The proposed methodology establishes a new paradigm for constructing response surfaces of high-dimensional stochastic dynamical systems, simultaneously accounting for multifidelity in physical models as well as multifidelity in probability space. Scaling to high dimensions is achieved by data-driven dimensionality reduction techniques based on hierarchical functional decompositions and a graph-theoretic approach for encoding custom autocorrelation structure in Gaussian process priors. Multifidelity information fusion is facilitated through stochastic autoregressive schemes and frequency-domain machine learning algorithms that scale linearly with the data. Taken together, these new developments lead to linear-complexity algorithms, as demonstrated in benchmark problems involving deterministic and stochastic fields in up to 10⁵ input dimensions and 10⁵ training points on a standard desktop computer.
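The stochastic autoregressive schemes mentioned above model a high-fidelity response as a scaled low-fidelity response plus a learned discrepancy, in the spirit of the classical Kennedy–O'Hagan construction. The toy model pair and the plain least-squares fit below are illustrative assumptions; the paper itself uses Gaussian-process priors with graph-encoded autocorrelation structure rather than this closed-form fit.

```python
# Two-level autoregressive fusion sketch: y_high(x) ≈ rho * y_low(x) + delta.
# The model functions and the least-squares estimator are assumed for illustration.

def low_fidelity(x):
    return 0.5 * x * x          # cheap, biased surrogate (assumed)

def high_fidelity(x):
    return x * x + 1.0          # expensive ground truth (assumed)

def fit_autoregressive(xs):
    """Estimate rho and a constant discrepancy delta from paired evaluations,
    minimising sum((y_H - rho*y_L - delta)^2) in closed form."""
    yl = [low_fidelity(x) for x in xs]
    yh = [high_fidelity(x) for x in xs]
    n = len(xs)
    mean_l = sum(yl) / n
    mean_h = sum(yh) / n
    cov = sum((a - mean_l) * (b - mean_h) for a, b in zip(yl, yh))
    var = sum((a - mean_l) ** 2 for a in yl)
    rho = cov / var
    delta = mean_h - rho * mean_l
    return rho, delta

def predict(x, rho, delta):
    """Fused prediction: scaled cheap model plus learned discrepancy."""
    return rho * low_fidelity(x) + delta
```

The appeal of the scheme is that only a few expensive high-fidelity evaluations are needed to calibrate `rho` and the discrepancy, while the cheap model carries the bulk of the predictive work.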
Efficient Cell Phone Keypad Designing for Bangla SMS Using English Alphabets
Mobile phone networks increasingly support the transmission of textual messages between individuals. In this paper we introduce a new approach that enhances the speed of typing Bangla using an English mobile keypad. An example of a Bangla sentence written on an English keypad is “Ami valo achi”. The traditional cell phone keypad is not well suited to typing Bangla with English alphabets, and the number of key presses needed to compose such a Bangla SMS (Short Message Service) is high. The proposed approach speeds up Bangla typing with English alphabets by rearranging the letters according to the priority of their frequencies: a letter's frequency is determined by how often it is used in SMS messages, and the most-used letters are recognized as having higher frequency. The proposed design consumes less time for typing Bangla SMS in English letter format. Keywords: Mobile keypad, unitap, multitap, Bangla SMS, frequency
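On a multitap keypad, a letter's cost is its position within its key group, so placing the most frequent letters first in each group reduces the total number of key presses. The sketch below demonstrates that rearrangement; the standard layout encoding and the tiny romanised-Bangla sample corpus are illustrative assumptions, not the paper's measured SMS frequencies.

```python
# Frequency-driven multitap key rearrangement sketch (assumed layout and corpus).
from collections import Counter

STANDARD = ["abc", "def", "ghi", "jkl", "mno", "pqrs", "tuv", "wxyz"]

def presses(text, layout):
    """Total key presses to type `text` under a multitap layout: a letter in
    position k of its key group costs k presses; other characters are skipped."""
    cost = {}
    for group in layout:
        for pos, letter in enumerate(group, start=1):
            cost[letter] = pos
    return sum(cost[ch] for ch in text if ch in cost)

def reorder_by_frequency(corpus, layout):
    """Within each key group, sort letters by descending corpus frequency."""
    freq = Counter(ch for ch in corpus if ch.isalpha())
    return ["".join(sorted(g, key=lambda c: -freq[c])) for g in layout]

corpus = "ami valo achi ami bari jabo"   # romanised Bangla sample (assumed)
optimised = reorder_by_frequency(corpus, STANDARD)
```

By the rearrangement inequality, sorting each group by descending frequency minimises the frequency-weighted press count for that corpus, so the optimised layout can never be worse than the standard one.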
Improving query performance on dynamic graphs
Querying large models efficiently often imposes high demands on system resources such as memory, processing time, disk access or network latency. The situation becomes more complicated when data are highly interconnected, e.g. in the form of graph structures, and when data sources are heterogeneous, partly coming from dynamic systems and partly stored in databases. These situations are now common in many existing social networking applications and geo-location systems, which require specialized and efficient query algorithms in order to make informed decisions on time. In this paper, we propose an algorithm to improve the memory consumption and time performance of this type of query by reducing the number of elements to be processed, focusing only on the information that is relevant to the query without compromising the accuracy of its results. To this end, the reduced subset of data is selected depending on the type of query and its constituent filters. Three case studies are used to evaluate the performance of our proposal, obtaining significant speedups in all cases. This work is partially supported by the European Commission (FEDER) and the Spanish Government under projects APOLO (US-1264651), HORATIO (RTI2018-101204-B-C21), EKIPMENT-PLUS (P18-FR-2895) and COSCA (PGC2018-094905B-I00).
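The core idea of selecting a reduced subset of data from the query's constituent filters can be sketched as follows: before evaluating a query, drop every element whose type the query never mentions and project the survivors down to the attributes the filters actually touch. The node-record and query representations below are illustrative assumptions, not the paper's actual model or query language.

```python
# Query-driven data reduction sketch over a heterogeneous node set (assumed schema).

def relevant_subset(nodes, query):
    """Keep only nodes whose type the query mentions, projected down to the
    attributes used by the query's filters (plus the type tag itself)."""
    wanted_types = {f["type"] for f in query["filters"]}
    wanted_attrs = {f["attr"] for f in query["filters"]}
    return [
        {k: v for k, v in node.items() if k in wanted_attrs or k == "type"}
        for node in nodes
        if node["type"] in wanted_types
    ]

def run_query(nodes, query):
    """Apply the filters to the (already reduced) node set."""
    out = nodes
    for f in query["filters"]:
        out = [n for n in out if n["type"] == f["type"] and f["pred"](n[f["attr"]])]
    return out
```

Because the reduction only discards elements and attributes the filters can never inspect, the final result set is identical to querying the full data, which is the accuracy-preservation property the abstract claims.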
The Quantum Adiabatic Algorithm applied to random optimization problems: the quantum spin glass perspective
Among the various algorithms designed to exploit the specific properties of quantum computers with respect to classical ones, the quantum adiabatic algorithm is a versatile proposition to find the minimal value of an arbitrary cost function (ground-state energy). Random optimization problems provide a natural testbed to compare its efficiency with that of classical algorithms. These problems correspond to mean-field spin glasses that have been extensively studied in the classical case. This paper reviews recent analytical works that extended these studies to incorporate the effect of quantum fluctuations, and also presents some original results in this direction. (Comment: 151 pages, 21 figures)
Business and/or Ethics? A Framework for Resolving Multicriteria Decision Dilemmas
Corporate leadership is often in the unenviable position of balancing ethical choices and profit. Business decisions consider alternatives and make choices to further strategic business goals. Measures of business success are likely to be financial, including profit, revenue, sales, market share, cost of production, quality of products, and innovative product development. Ethical decisions are choices among right and wrong outcomes or processes. Assessment of ethical choices may or may not be easily quantified, including consideration of positive and negative consequences, moral principles, and fair process. Inevitably, then, the inherent nature of business-ethics decisions will involve multiple decision criteria, including both business criteria and ethics criteria. These criteria may conflict, creating dilemmas that may be difficult to resolve. Sometimes ethical business decisions will be profitable; sometimes ethical business decisions will be more costly than less ethical alternatives and therefore less profitable. Multicriteria analysis tools are designed for such decision dilemmas, yet responsibility inheres to the people who must choose. Conclusions are drawn for individual, corporate, and algorithmic decisions. Decision processes should answer these questions: Are units of measure comparable? Is the system open or closed? Is it deterministic or stochastic? Is there a risk to life? Who is responsible? Is the decision process transparent? Who cares about the outcome? What are their criteria for successful consequences? What ethical principles apply?