Multi Agent Modelling: Evolution and Skull Thickness in Hominids
Within human evolution, the period of Homo erectus is particularly interesting because during this period our ancestors carried thicker skulls than the species both before and after them. There are competing theories as to the reasons for this thickening and for its later reversal. One theory holds that Homo erectus males fought for females by clubbing each other on the head. Another holds that because Homo erectus did not cook their food at all, they needed strong jaw muscles attached to ridges on either side of the skull, which prohibited brain and skull growth but required the skull to be thick.
The subsequent re-thinning of the skull, on the other hand, might be due to the fact that a thick skull provided poor cooling for the brain, or to the fact that once hominids started using tools to cut their food and fire to cook it, they no longer required strong jaw muscles; this trait was then actively selected against, since the brain tended to grow and the ridges and thick skull were preventing it. In this paper we simulated both fighting and diet as mechanisms by which the hominid skull grew thicker. We also gave our agents additional properties, such as cooperation, selfishness and vision, and analyzed how these change over generations.
Keywords: Evolution, Skull Thickness, Hominids, Multi-Agent Modeling, Genetic Algorithm
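The abstract gives no implementation details, but the described setup lends itself to a compact genetic-algorithm skeleton. The sketch below is a minimal illustration only: the trait names, fitness weights, and the mid-run diet shift are invented assumptions, not the authors' model.

```python
import random

# Toy multi-agent genetic-algorithm sketch (illustrative assumptions only).
TRAITS = ["skull_thickness", "cooperation", "selfishness", "vision"]

def random_agent():
    return {t: random.random() for t in TRAITS}

def fitness(agent, cooked_diet):
    # Fighting hypothesis: a thicker skull survives head blows.
    f = agent["skull_thickness"]
    # Diet hypothesis: raw food favors thick skulls / strong jaws; once food
    # is cooked, thickness is selected against (brain growth, cooling).
    f += -2.0 * agent["skull_thickness"] if cooked_diet else agent["skull_thickness"]
    f += 0.5 * agent["cooperation"] - 0.2 * agent["selfishness"] + 0.3 * agent["vision"]
    return f

def evolve(pop, generations=200, mutation=0.05, cooked_after=100):
    for g in range(generations):
        cooked = g >= cooked_after                  # diet shift mid-simulation
        pop.sort(key=lambda a: fitness(a, cooked), reverse=True)
        parents = pop[: len(pop) // 2]              # truncation selection
        children = []
        for _ in range(len(pop) - len(parents)):
            mom, dad = random.sample(parents, 2)
            child = {t: random.choice((mom[t], dad[t])) for t in TRAITS}  # crossover
            for t in TRAITS:                        # Gaussian mutation, clipped to [0, 1]
                child[t] = min(1.0, max(0.0, child[t] + random.gauss(0, mutation)))
            children.append(child)
        pop = parents + children
    return pop

population = evolve([random_agent() for _ in range(100)])
mean_thickness = sum(a["skull_thickness"] for a in population) / len(population)
print(f"mean skull thickness after diet shift: {mean_thickness:.2f}")
```

With the diet shift enabled halfway through, mean skull thickness should rise and then fall across generations, mirroring the thickening-then-thinning pattern the abstract discusses.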
Zigzag Codes: MDS Array Codes with Optimal Rebuilding
MDS array codes are widely used in storage systems to protect data against
erasures. We address the \emph{rebuilding ratio} problem, namely, in the case
of erasures, what is the fraction of the remaining information that needs to be
accessed in order to rebuild \emph{exactly} the lost information? It is clear
that when the number of erasures equals the maximum number of erasures that an
MDS code can correct then the rebuilding ratio is 1 (access all the remaining
information). However, the interesting and more practical case is when the
number of erasures is smaller than the erasure correcting capability of the
code. For example, consider an MDS code that can correct two erasures: What is
the smallest amount of information that one needs to access in order to correct
a single erasure? Previous work showed that the rebuilding ratio is bounded
between 1/2 and 3/4, however, the exact value was left as an open problem. In
this paper, we solve this open problem and prove that for the case of a single
erasure with a 2-erasure correcting code, the rebuilding ratio is 1/2. In
general, we construct a new family of $r$-erasure correcting MDS array codes
that has optimal rebuilding ratio of $e/r$ in the case of $e$ erasures,
$1 \le e \le r$. Our array codes have efficient encoding and decoding
algorithms (for the case $r=2$ they use a finite field of size 3) and an
optimal update property.
Comment: 23 pages, 5 figures, submitted to IEEE Transactions on Information Theory
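To make the rebuilding-ratio notion concrete, here is a minimal toy sketch in the spirit of (but not reproducing) the paper's general construction: a 2x2 array code over GF(3) with two data columns and two parity columns, where one erased data column is rebuilt by accessing exactly half of the surviving elements. The parity coefficients are chosen here purely for illustration.

```python
# Toy 2x2 MDS-style array code over GF(3): data columns a, b; a row parity
# and a "zigzag" parity. One erased data column is rebuilt by reading only
# 3 of the 6 surviving elements (rebuilding ratio 1/2).
P = 3  # arithmetic over GF(3)

def encode(a, b):
    row = [(a[0] + b[0]) % P, (a[1] + b[1]) % P]      # row parity
    zig = [(a[0] + b[1]) % P, (a[1] + 2 * b[0]) % P]  # zigzag parity
    return row, zig

def rebuild_a(b, row, zig):
    # Access only b[0], row[0], zig[1]: half of each surviving column.
    a0 = (row[0] - b[0]) % P          # from a0 + b0 = row[0]
    a1 = (zig[1] - 2 * b[0]) % P      # from a1 + 2*b0 = zig[1]
    return [a0, a1]

a, b = [1, 2], [2, 1]
row, zig = encode(a, b)
assert rebuild_a(b, row, zig) == a
print("column a rebuilt from 3 of 6 surviving elements (ratio 1/2)")
```

The point of the zigzag structure is visible even at this scale: the second parity mixes elements across rows, so each parity column contributes one useful equation per half-column read, instead of requiring full-column access.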
IDENTIFYING IT SOLUTIONS ON FRAUD IN ELECTRONIC TRANSACTIONS OF FUNDS FROM BANKING SYSTEM
Although we hear about fraud daily, most cases are never reported. Some reports estimate that approximately 90% of attacks are not reported outside the organizations that were attacked, and only a fraction of reported cases end in punishment. In fact, for fear of losing customers, some companies (usually banks and large corporations) prefer to come to an understanding with the attackers, recovering part of the stolen money in exchange for keeping silent. Over the last four decades, the development and modernization of the world's economies, together with the simultaneous global expansion of banking, have been strongly influenced by the introduction of new computer technology, which has had a strong impact on both providers and consumers.
Keywords: IT security, fraud, electronic transactions, banking system
Integrated learning programme 2016-2017 : term 2
This Integrated Learning Programme (ILP) guidebook provides details about the ILP courses offered during the second term of 2016-2017 at Lingnan University, Hong Kong.
PROMOTING PRIVATE EQUITY FUNDS
The recent development of the capital market has intensified investors' attraction to profit opportunities. Involvement in the capital market has become a widespread activity among investors regardless of their financial power. This has led to a diversification of the capital market, and also to a specialization and hybridization of the financial instruments traded. Since investment funds act as intermediaries between investors and investees, they can be promoted in relation to their funding and investment activities involving the investors and certain target companies (also known as portfolio companies or investee companies). In addition to the traditional ways of promoting investment funds (i.e. direct selling, printed publications, advertisements and straplines, public relations actions), other methods particular to their activity (fund raising, investing and financing) can be identified, namely the use of business angels and netpreneurs. Therefore, this paper presents the importance of promoting investment funds in relation to all the participants involved.
Keywords: private equity, investment funds, promoting, business angels, netpreneurs
STRATEGIC APPROACH IN MODEL OF SCHOOLING "K-12"
Electronic learning is the form of modern learning in which lectures, examinations and instruction are delivered exclusively through the Internet, and in which the share of learning carried out using ICT exceeds 80%. The key elements of the e-learning pattern are the technological platform, curriculum, interaction, strategic management and marketing. The K-12 model contributes to the organization of education and to time flexibility, provides quality communication and yields higher profit. The paper addresses the K-12 model of schooling and compares it with the current situation in the Republic of Croatia. It is puzzling that the term e-learning industry still does not appear in Croatian economic terminology, although the industry's value in 2008 was estimated at 38 billion euros.
Keywords: e-learning, model K-12, education management, KM, ICT
Group-Lasso on Splines for Spectrum Cartography
The unceasing demand for continuous situational awareness calls for
innovative and large-scale signal processing algorithms, complemented by
collaborative and adaptive sensing platforms to accomplish the objectives of
layered sensing and control. Towards this goal, the present paper develops a
spline-based approach to field estimation, which relies on a basis expansion
model of the field of interest. The model entails known bases, weighted by
generic functions estimated from the field's noisy samples. A novel field
estimator is developed based on a regularized variational least-squares (LS)
criterion that yields finitely-parameterized (function) estimates spanned by
thin-plate splines. Robustness considerations motivate well the adoption of an
overcomplete set of (possibly overlapping) basis functions, while a sparsifying
regularizer augmenting the LS cost endows the estimator with the ability to
select a few of these bases that ``better'' explain the data. This parsimonious
field representation becomes possible, because the sparsity-aware spline-based
method of this paper induces a group-Lasso estimator for the coefficients of
the thin-plate spline expansions per basis. A distributed algorithm is also
developed to obtain the group-Lasso estimator using a network of wireless
sensors, or, using multiple processors to balance the load of a single
computational unit. The novel spline-based approach is motivated by a spectrum
cartography application, in which a set of sensing cognitive radios collaborate
to estimate the distribution of RF power in space and frequency. Simulated
tests corroborate that the estimated power spectrum density atlas yields the
desired RF state awareness, since the maps reveal spatial locations where idle
frequency bands can be reused for transmission, even when fading and shadowing
effects are pronounced.
Comment: Submitted to IEEE Transactions on Signal Processing
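As a rough illustration of the group-Lasso machinery underlying the estimator (not the paper's distributed algorithm, nor its thin-plate spline bases), the sketch below solves a generic group-Lasso by proximal gradient with block soft-thresholding on synthetic data. The group size, regularization weight, and step rule are assumed values.

```python
import numpy as np

# Generic group-Lasso via proximal gradient (ISTA): one coefficient group per
# basis, mirroring the paper's per-basis spline-coefficient grouping.
rng = np.random.default_rng(0)
n_samples, n_groups, group_size = 200, 10, 8
A = rng.standard_normal((n_samples, n_groups * group_size))
w_true = np.zeros(n_groups * group_size)
w_true[:2 * group_size] = rng.standard_normal(2 * group_size)  # 2 active bases
y = A @ w_true + 0.1 * rng.standard_normal(n_samples)

lam = 8.0                                  # group-sparsity weight (assumed)
step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L, L = Lipschitz const of gradient
w = np.zeros_like(w_true)

for _ in range(500):
    w = w - step * A.T @ (A @ w - y)       # gradient step on the LS cost
    for g in range(n_groups):              # prox step: block soft-threshold
        blk = slice(g * group_size, (g + 1) * group_size)
        norm = np.linalg.norm(w[blk])
        if norm > 0.0:
            w[blk] *= max(0.0, 1.0 - step * lam / norm)

active = [g for g in range(n_groups)
          if np.linalg.norm(w[g * group_size:(g + 1) * group_size]) > 1e-6]
print("bases selected:", active)           # with these settings, ideally [0, 1]
```

The block soft-threshold either shrinks a whole group or zeroes it out, which is exactly the basis-selection behavior the abstract attributes to the sparsifying regularizer.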
Quantization of Prior Probabilities for Hypothesis Testing
Bayesian hypothesis testing is investigated when the prior probabilities of
the hypotheses, taken as a random vector, are quantized. Nearest neighbor and
centroid conditions are derived using mean Bayes risk error as a distortion
measure for quantization. A high-resolution approximation to the
distortion-rate function is also obtained. Human decision making in segregated
populations is studied assuming Bayesian hypothesis testing with quantized
priors.
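A rough numerical sketch of the quantizer-design loop the abstract alludes to: a Lloyd-style alternation between a nearest-neighbor rule and a (numerically searched) centroid rule, using Bayes risk error as the distortion, for a binary Gaussian detection problem. The signal means, noise level, and codebook size are assumptions, and the brute-force centroid search stands in for the paper's derived conditions.

```python
import numpy as np
from scipy.stats import norm

mu0, mu1, sigma = 0.0, 1.0, 1.0  # assumed binary Gaussian hypotheses

def bayes_risk(p0, q0):
    """Risk when the true prior of H0 is p0 but the LRT is designed for q0."""
    q0 = np.clip(q0, 1e-9, 1 - 1e-9)
    thr = (mu0 + mu1) / 2 + sigma**2 / (mu1 - mu0) * np.log(q0 / (1 - q0))
    p_fa = 1 - norm.cdf(thr, mu0, sigma)   # false alarm: decide H1 given H0
    p_md = norm.cdf(thr, mu1, sigma)       # miss: decide H0 given H1
    return p0 * p_fa + (1 - p0) * p_md

def distortion(p0, q0):
    # Bayes risk error: excess risk from designing for q0 instead of p0 (>= 0).
    return bayes_risk(p0, q0) - bayes_risk(p0, p0)

ps = np.linspace(0.01, 0.99, 99)           # priors, assumed uniformly drawn
reps = np.array([0.2, 0.4, 0.6, 0.8])      # 2-bit quantizer, initial codebook

for _ in range(20):                        # Lloyd iterations
    D = np.stack([distortion(ps, q) for q in reps], axis=1)
    cells = D.argmin(axis=1)               # nearest-neighbor condition
    for k in range(len(reps)):             # centroid condition (grid search)
        members = ps[cells == k]
        if members.size:
            cand = np.linspace(members.min(), members.max(), 51)
            avg = [distortion(members, c).mean() for c in cand]
            reps[k] = cand[int(np.argmin(avg))]

print("quantized prior representatives:", np.round(reps, 3))
```

Note that the centroid under mean Bayes risk error is generally not the arithmetic mean of the cell, which is why the inner loop searches for the risk-minimizing representative rather than averaging.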