Recent advancements in Large Language Models (LLMs) have revealed new
capabilities and opportunities across the technological landscape. However, the
practicality of very large LLMs is challenged by their high compute cost, which
is not justified by their benefits given their still-limited capability compared
to humans. Smaller, more practical LLMs have shown potential in financial
analysis, though they are not yet fully proficient, as evidenced by their
near-passing performance on the Chartered Financial Analyst (CFA) exam. In this
work, we present Financial Analyst Extension to our Text Hyperlocally Augmented
Large Language Extension (THaLLE), a series of 8B LLMs that consistently achieve
the highest performance on mock CFA exams among models of comparable size. We
thoroughly document the fine-tuning techniques used to facilitate future
research. Additionally, we introduce the use of Flare CFA, a publicly available
dataset for evaluating LLMs as financial advisors.