Authors:
(1) Arindam Mitra;
(2) Luciano Del Corro, work done while at Microsoft;
(3) Shweti Mahajan, work done while at Microsoft;
(4) Andres Codas, equal contribution;
(5) Clarisse Simoes, equal contribution;
(6) Sahaj Agarwal;
(7) Xuxi Chen, work done while at Microsoft;
(8) Anastasia Razdaibiedina, work done while at Microsoft;
(9) Erik Jones, work done while at Microsoft;
(10) Kriti Aggarwal, work done while at Microsoft;
(11) Hamid Palangi;
(12) Guoqing Zheng;
(13) Corby Rosset;
(14) Hamed Khanpour;
(15) Ahmed Awadallah.
Teaching Orca 2 to be a Cautious Reasoner
B. BigBench-Hard Subtask Metrics
C. Evaluation of Grounding in Abstractive Summarization
F. Illustrative Example from Evaluation Benchmarks and Corresponding Model Outputs
For Orca 2, we created a new dataset with ~817K training instances, which we will refer to as the Orca 2 dataset. Following Orca 1, Orca 2 has been trained with progressive learning, using subsets of data obtained by combining the original FLAN [33] annotations, the Orca 1 dataset, and the Orca 2 dataset. We also describe the details of the progressive learning below.
The Orca 2 dataset has four main sources:
FLAN: Our main source of prompts for synthetic data generation is the FLAN-v2 Collection [33], which consists of five sub-collections, namely CoT, NiV2, T0, Flan 2021, and Dialogue. Each sub-collection contains multiple tasks. Following Orca 1 [42], we consider tasks only from the CoT, NiV2, T0, and Flan 2021 sub-collections, which contain a total of 1913 tasks. Each task in FLAN-v2 is a collection of queries with associated answers. Some of the 1913 tasks in FLAN are created synthetically by inverting another task, for example, converting a question answering task into a question generation task. To construct the Cautious-Reasoning-FLAN dataset, we selected ~602K zero-shot user queries from the training splits of 1448 high-quality tasks out of the 1913, filtering out many of the synthetically generated tasks.
We manually grouped the selected 1448 tasks into 23 categories (e.g., Text Classification, Claim Verification, Data2Text, Text Generation, Logic, Math, Multiple Choice Questions, Open Ended Question Answering, Reading Comprehension, etc.). Each category is further divided into sub-categories, for a total of 126 sub-categories. Sub-categories are defined so that all tasks within a sub-category share the same system instruction.
For alignment towards cautious reasoning, we replace all the system instructions with a single generic system instruction, which we will refer to as the cautious system instruction.
Few-Shot Data: The dataset above does not contain any demonstrations in the prompts. To encourage the model to learn to use few-shot demonstrations, we constructed a Few-Shot dataset consisting of 55K samples. These samples are constructed by re-purposing the zero-shot data from the Orca 1 dataset. Specifically, we structure the Orca 1 data into (task, system instruction, user prompt, answer) tuples and group them by task and system instruction. For each group and each user prompt, we randomly select 3-5 (user prompt, answer) pairs from the rest of the group and use them as in-context examples.
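The few-shot construction can be summarized by a short sketch. The Python below is illustrative only: the function and field names (`build_few_shot`, `records`, `user_prompt`, etc.) and the "Q:/A:" prompt template are our own assumptions, not artifacts from the paper.

```python
import random
from collections import defaultdict

def build_few_shot(records, k_min=3, k_max=5, seed=0):
    """Re-purpose zero-shot (task, system_instruction, user_prompt, answer)
    tuples into few-shot samples by drawing demonstrations from the same
    (task, system_instruction) group. Structure is illustrative."""
    rng = random.Random(seed)
    groups = defaultdict(list)
    for rec in records:
        groups[(rec["task"], rec["system_instruction"])].append(rec)

    few_shot = []
    for (task, system_instruction), items in groups.items():
        for i, target in enumerate(items):
            pool = items[:i] + items[i + 1:]      # exclude the target itself
            if len(pool) < k_min:
                continue                          # not enough demonstrations in this group
            k = rng.randint(k_min, min(k_max, len(pool)))
            demos = rng.sample(pool, k)
            prompt = "\n\n".join(
                f"Q: {d['user_prompt']}\nA: {d['answer']}" for d in demos
            ) + f"\n\nQ: {target['user_prompt']}\nA:"
            few_shot.append({
                "system_instruction": system_instruction,
                "user_prompt": prompt,
                "answer": target["answer"],
            })
    return few_shot
```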
Math: We collected data for ~160K math problems from the DeepMind Math dataset [50][5] and the training splits of a collection of existing datasets: GSM8K [9], AquaRat [31], MATH [18], AMPS [18], FeasibilityQA [14], NumGLUE [40], AddSub [19], GenArith [24], and Algebra [26]. For NumGLUE, AddSub, GenArith, and Algebra, we used the training splits from the LILA [39] benchmark. Note that including prompts from the training split of a dataset (e.g., GSM8K) renders it in-domain for the sake of evaluation; datasets like GSM8K are considered in-domain for many of our baselines as well.
Fully synthetic data: We synthetically created 2000 doctor-patient conversations with GPT-4. We then instruct the model to create a summary of each conversation with four sections: HISTORY OF PRESENT ILLNESS, PHYSICAL EXAM, RESULTS, and ASSESSMENT AND PLAN. We used two different prompts: one with a high-level task instruction and another with detailed instructions that encourage the model to avoid omissions or fabrications. We use this data to assess the learning of specialized skills.
This section provides an overview of the training process for Orca 2, covering different aspects of tokenization, sequencing, and loss computation.
Progressive Learning: We start with the LLaMA-2-7B or LLaMA-2-13B checkpoint and finetune it on the train split of the FLAN-v2 dataset for one epoch. Note that the FLAN-v2 dataset contains both zero-shot and few-shot problems. We then train on the 5 million ChatGPT samples from Orca 1 for 3 epochs, and finally on the combination of 1 million GPT-4 samples from Orca 1 and Orca 2's 817K samples for 4 epochs.
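This three-stage curriculum can be written out as a small configuration, purely for illustration; the stage labels, data identifiers, and the `train_fn` hook below are placeholders we introduce here, not artifacts released with the paper.

```python
# Progressive-learning schedule described above, written as a plain config.
# Data identifiers are illustrative placeholders, not released dataset names.
PROGRESSIVE_SCHEDULE = [
    {"stage": 1, "data": "flan_v2_train",              "epochs": 1},
    {"stage": 2, "data": "orca1_chatgpt_5M",           "epochs": 3},
    {"stage": 3, "data": "orca1_gpt4_1M + orca2_817K", "epochs": 4},
]

def run_curriculum(train_fn, checkpoint):
    """Fine-tune sequentially, carrying the checkpoint forward at each stage."""
    for stage in PROGRESSIVE_SCHEDULE:
        checkpoint = train_fn(checkpoint, stage["data"], epochs=stage["epochs"])
    return checkpoint
```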
Tokenization: We utilize the LLaMA Byte Pair Encoding (BPE) tokenizer for processing the input examples. Notably, the LLaMA tokenizer splits all numbers into individual digits and falls back to bytes to decompose unknown UTF-8 characters. To deal with variable-length sequences, we add a padding token “[[PAD]]” to the LLaMA tokenizer vocabulary. We also add the ChatML special tokens “<|im_start|>” and “<|im_end|>”. The resulting vocabulary contains 32,003 tokens.
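For concreteness, the tokenizer extension can be reproduced with Hugging Face `transformers` roughly as follows. The paper does not specify the exact library calls used, so this is a sketch under that assumption; the checkpoint identifier `meta-llama/Llama-2-7b-hf` is the public Hugging Face name, not necessarily what the authors used internally.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
# Add the padding token and the ChatML markers described above.
tokenizer.add_special_tokens({
    "pad_token": "[[PAD]]",
    "additional_special_tokens": ["<|im_start|>", "<|im_end|>"],
})

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
# Grow the embedding matrix to cover the new tokens:
# 32,000 base tokens + 3 added = 32,003.
model.resize_token_embeddings(len(tokenizer))
print(len(tokenizer))  # expected: 32003
```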
Packing: To optimize the training process and utilize computational resources efficiently, we employ the packing technique [25]. This method concatenates multiple input examples into a single sequence, which is then used for training the model. The packing is performed such that the total length of the concatenated sequence does not exceed max_len = 4096 tokens. Specifically, we shuffle the input examples and then partition them into groups such that the length of the concatenated sequence in each group is at most max_len. Padding tokens are then added to the concatenated sequence to achieve a uniform input sequence length of max_len.
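A simplified version of this greedy packing step is sketched below. Real implementations also track attention masks and example boundaries, which are omitted here; `PAD_ID` is only a placeholder for the id of the added “[[PAD]]” token.

```python
import random

MAX_LEN = 4096   # token budget per packed sequence, as described above
PAD_ID = 32000   # placeholder id for the added "[[PAD]]" token

def pack_examples(tokenized_examples, max_len=MAX_LEN, pad_id=PAD_ID, seed=0):
    """Shuffle examples, greedily group them so each concatenated sequence
    stays within max_len, then pad every packed sequence to max_len."""
    examples = list(tokenized_examples)
    random.Random(seed).shuffle(examples)

    packed, current = [], []
    for ids in examples:
        ids = ids[:max_len]                      # guard against oversize examples
        if len(current) + len(ids) > max_len:
            packed.append(current)
            current = []
        current.extend(ids)
    if current:
        packed.append(current)

    return [seq + [pad_id] * (max_len - len(seq)) for seq in packed]
```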
Loss: For the purpose of training Orca 2, we compute the loss only on the tokens generated by the teacher model, i.e., it learns to generate responses conditioned on the system instruction and task instructions. This approach ensures that the model focuses on learning from the most relevant and informative tokens, improving the overall efficiency and effectiveness of the training process.
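In typical PyTorch training code, this selective loss is implemented by masking the prompt positions in the label tensor with the index that `CrossEntropyLoss` ignores. The helper below is a minimal sketch of that idea, not the authors' code; `prompt_len` is an assumed per-example bookkeeping value.

```python
import torch

IGNORE_INDEX = -100  # ignored by torch.nn.CrossEntropyLoss by default

def make_labels(input_ids, prompt_len):
    """Mask the system/task-instruction prompt so the loss is computed only
    on the teacher-generated response tokens that follow it."""
    labels = input_ids.clone()
    labels[:prompt_len] = IGNORE_INDEX
    return labels

# Example: a 10-token example whose first 6 tokens are the prompt.
input_ids = torch.arange(10)
labels = make_labels(input_ids, prompt_len=6)
print(labels)  # tensor([-100, -100, -100, -100, -100, -100, 6, 7, 8, 9])
```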
Compute: We trained Orca 2 on 32 NVIDIA A100 GPUs with 80GB of memory, using bfloat16. For the 13B checkpoint, it took ~17 hours to train Orca 2 on the FLAN dataset for one epoch, ~40 hours to train on the 5 million ChatGPT samples for 3 epochs, and ~23 hours to continue training on the ~1.8 million GPT-4 and Orca 2 samples for 4 epochs.
[5] We sampled only from the arithmetic div, arithmetic mul, and arithmetic add or sub tasks of https://huggingface.co/datasets/math_dataset