Overview

  • Large Language Models (LLMs) and Vision-and-Language Models (VLMs) are evaluated across a wide array of benchmarks, which test their abilities in language understanding, reasoning, coding, and, in the case of VLMs, multimodal understanding.
  • These benchmarks are crucial for the development of AI models as they provide standardized challenges that help identify both strengths and weaknesses, driving improvements in future iterations.
  • This primer offers an overview of these benchmarks, attributes of their datasets, and relevant papers.

Large Language Models (LLMs)

General Benchmarks

Language Understanding

  • GLUE (General Language Understanding Evaluation): A set of nine tasks including question answering and textual entailment, designed to gauge general language understanding.
  • SuperGLUE: A more challenging version of GLUE intended to push language models to their limits.
  • MMLU (Massive Multitask Language Understanding): Assesses model performance across a broad range of subjects and task formats to test general knowledge.
  • MMLU-Pro (Massive Multitask Language Understanding Pro): A more robust and challenging successor to MMLU, designed to rigorously benchmark large language models’ capabilities. It contains 12K complex questions across disciplines and raises the number of answer options from 4 to 10, making random guessing far less effective. Whereas the original MMLU is largely knowledge-driven, MMLU-Pro emphasizes harder, reasoning-focused problems, on which chain-of-thought (CoT) prompting can score roughly 20% higher than perplexity-based (PPL) evaluation. The added difficulty also yields more stable results: prompt-induced score variance falls to within about 1% (e.g., for Llama-2-7B), compared with 4-5% on the original MMLU.
    • Dataset Attributes: 12K questions with 10 options each. Sources include Original MMLU, STEM websites, TheoremQA, and SciBench. Covers disciplines such as Math, Physics, Chemistry, Law, Engineering, Health, Psychology, Economics, Business, Biology, Philosophy, Computer Science, and History. Focus on reasoning, increased problem difficulty, and manual expert review by a panel of over ten experts.
    • Reference: Hugging Face: MMLU-Pro.
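
As a rough illustration of how MMLU-Pro is typically consumed, the sketch below loads the dataset from Hugging Face and computes plain accuracy over the ten answer options. The dataset id "TIGER-Lab/MMLU-Pro", the field names, and the placeholder predict() function are assumptions to check against the dataset card.

# Minimal MMLU-Pro evaluation sketch (dataset id and field names are assumptions;
# verify them against the Hugging Face dataset card before relying on them).
import random
from datasets import load_dataset

def predict(question: str, options: list[str]) -> int:
    """Placeholder: swap in a real LLM call that returns the index of its chosen option."""
    return random.randrange(len(options))

test = load_dataset("TIGER-Lab/MMLU-Pro", split="test")

correct = 0
for row in test:
    # Each question carries up to 10 options; "answer_index" marks the gold option.
    correct += int(predict(row["question"], row["options"]) == row["answer_index"])

print(f"Accuracy: {correct / len(test):.3f}")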

Common-Sense Reasoning

  • HellaSwag: A dataset designed to evaluate common-sense reasoning through completion of context-dependent scenarios.
    • Dataset Attributes: Challenges models to choose the most plausible continuation among four options, requiring nuanced understanding of everyday activities and scenarios. It uses Adversarial Filtering to generate wrong endings that are easy for humans to dismiss but difficult for models, ensuring the benchmark stays challenging.
    • Reference: “HellaSwag: Can a Machine Really Finish Your Sentence?”.
  • WinoGrande: A large-scale dataset for evaluating common-sense reasoning through Winograd schema challenges.
    • Dataset Attributes: Includes a diverse set of sentences that require resolving ambiguous pronouns, emphasizing subtle distinctions in language understanding. The dataset is designed to address the limitations of smaller Winograd Schema datasets by providing scale and diversity.
    • Reference: “WinoGrande: An Adversarial Winograd Schema Challenge at Scale”.
  • ARC Challenge (ARC-c) and ARC Easy (ARC-e): The AI2 Reasoning Challenge (ARC) tests models on science exam questions, designed to be challenging for AI.
  • OpenBookQA (OBQA): Focuses on science-based question answering that requires both retrieval of relevant facts and reasoning.
    • Dataset Attributes: Challenges models to answer questions using both retrieved facts and reasoning, focusing on scientific knowledge. The dataset includes a small “open book” of 1,326 elementary-level science facts to aid in answering the questions.
    • Reference: “Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering”.
  • CommonsenseQA (CQA): A benchmark designed to probe models’ ability to reason about everyday knowledge.
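
The benchmarks in this section (HellaSwag, WinoGrande, ARC, OpenBookQA, CommonsenseQA) are usually scored by having the language model assign a likelihood to each candidate answer and picking the highest-scoring one. The sketch below illustrates the idea with GPT-2 as a stand-in scorer; it is a simplification (evaluation harnesses typically add prompt templates and length normalization).

# Minimal multiple-choice scoring sketch: pick the continuation with the highest
# total log-likelihood under a causal LM (GPT-2 used here as a stand-in).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def continuation_logprob(context: str, continuation: str) -> float:
    """Sum of log-probabilities the model assigns to the continuation tokens given the context."""
    ctx_len = tokenizer(context, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(context + continuation, return_tensors="pt").input_ids
    with torch.no_grad():
        log_probs = torch.log_softmax(model(full_ids).logits[:, :-1], dim=-1)
    cont_ids = full_ids[0, ctx_len:]                       # tokens belonging to the continuation
    scores = log_probs[0, ctx_len - 1:, :].gather(-1, cont_ids.unsqueeze(-1))
    return scores.sum().item()

context = "A man is standing on a ladder cleaning the gutters. He"
choices = [" climbs down and puts the ladder away.",
           " throws the ladder into the gutter.",
           " starts reading a book about gutters.",
           " turns into a gutter."]
best = max(range(len(choices)), key=lambda i: continuation_logprob(context, choices[i]))
print("Predicted choice:", best)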

Contextual Comprehension

  • LAMBADA: Focuses on predicting the last word of a passage, requiring a deep understanding of the context.
  • BoolQ: A dataset for boolean question answering, focusing on reading comprehension.

General Knowledge and Skills

  • TriviaQA: A widely used dataset consisting of trivia questions collected from various sources. It evaluates a model’s ability to answer open-domain questions with detailed and accurate responses. The dataset includes a mix of web-scraped and curated questions.
  • Natural Questions (NQ): Developed by Google, this benchmark consists of real questions posed by users to the Google search engine. It assesses a model’s ability to retrieve and generate accurate answers based on a comprehensive understanding of the query and relevant documents.
    • Dataset Attributes: Includes about 300,000 training examples, each pairing a real user query with a Wikipedia page and both long-answer (passage) and short-answer (span) annotations, providing a rich resource for training and evaluating LLMs on real-world information retrieval and comprehension.
    • Reference: “Natural Questions: a Benchmark for Question Answering Research”.
  • WebQuestions (WQ): A dataset created to test a model’s ability to answer questions using information found on the web. The questions were obtained via the Google Suggest API, ensuring they reflect genuine user queries.
    • Dataset Attributes: Comprises around 6,000 question-answer pairs, with answers derived from Freebase, allowing models to leverage structured knowledge bases to provide accurate responses. The dataset focuses on factual questions requiring specific, often entity-centric answers.
    • Reference: “Semantic Parsing on Freebase from Question-Answer Pairs” (the paper that introduced the WebQuestions dataset).
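
TriviaQA, Natural Questions, and WebQuestions are commonly scored with exact match (EM) and token-level F1 against the reference answers after light normalization. A minimal sketch of these metrics, following the SQuAD-style normalization convention, is shown below.

# SQuAD-style normalization plus exact-match and token-level F1, as commonly used
# for open-domain QA benchmarks such as TriviaQA, NQ, and WebQuestions.
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """Lowercase, drop punctuation and articles, collapse whitespace."""
    text = "".join(ch for ch in text.lower() if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, references: list[str]) -> float:
    return float(any(normalize(prediction) == normalize(ref) for ref in references))

def f1(prediction: str, references: list[str]) -> float:
    def f1_single(pred: str, ref: str) -> float:
        pred_toks, ref_toks = normalize(pred).split(), normalize(ref).split()
        if not pred_toks or not ref_toks:
            return float(pred_toks == ref_toks)
        overlap = sum((Counter(pred_toks) & Counter(ref_toks)).values())
        if overlap == 0:
            return 0.0
        precision, recall = overlap / len(pred_toks), overlap / len(ref_toks)
        return 2 * precision * recall / (precision + recall)
    return max(f1_single(prediction, ref) for ref in references)

print(exact_match("The Eiffel Tower", ["Eiffel Tower"]))      # 1.0
print(round(f1("the tower in Paris", ["Eiffel Tower"]), 2))   # 0.4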

Specialized Knowledge and Skills

  • HumanEval: Tests models on generating code snippets to solve programming tasks, evaluating coding abilities.
    • Dataset Attributes: Programming problems requiring synthesis of function bodies, testing understanding of code logic and syntax. The dataset consists of prompts and corresponding reference solutions in Python, ensuring a clear standard for evaluation.
    • Reference: “Evaluating Large Language Models Trained on Code”.
  • Physical Interaction Question Answering (PIQA): Evaluates understanding of physical properties through problem-solving scenarios.
    • Dataset Attributes: Focuses on questions that require reasoning about everyday physical interactions, pushing models to understand and predict physical outcomes. The scenarios involve practical physical tasks and common sense, making the benchmark unique in testing physical reasoning.
    • Reference: “PIQA: Reasoning about Physical Commonsense in Natural Language”.
  • Social Interaction Question Answering (SIQA): Tests the ability of models to navigate social situations through multiple-choice questions.
    • Dataset Attributes: Challenges models with scenarios involving human interactions, requiring understanding of social norms and behaviors. The questions are designed to assess social commonsense reasoning, with multiple plausible answers to evaluate nuanced understanding.
    • Reference: “Social IQa: Commonsense Reasoning about Social Interactions”.

Mathematical and Scientific Reasoning

  • MATH: A comprehensive set of mathematical problems designed to challenge models on various levels of mathematics.
    • Dataset Attributes: Contains complex, multi-step mathematical problems from various branches of mathematics, requiring advanced reasoning and problem-solving skills. Problems range from algebra and calculus to number theory and combinatorics, emphasizing detailed solutions and proofs.
    • Reference: “Measuring Mathematical Problem Solving With the MATH Dataset”.
  • GSM8K (Grade School Math 8K): A benchmark for evaluating the reasoning capabilities of models through grade school level math problems.
    • Dataset Attributes: Consists of arithmetic and word problems typical of elementary school mathematics, emphasizing logical and numerical reasoning. The dataset aims to test foundational math skills and the ability to apply them to multi-step word problems; a minimal answer-extraction and scoring sketch follows this list.
    • Reference: “Training Verifiers to Solve Math Word Problems”.
  • MetaMathQA: A diverse collection of mathematical reasoning questions that aim to evaluate and improve the problem-solving capabilities of models.
    • Dataset Attributes: Features a wide range of question types, from elementary to advanced mathematics, emphasizing not only the final answer but also the reasoning process leading to it. The dataset includes step-by-step solutions to foster reasoning and understanding in mathematical problem-solving.
    • Reference: “MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models”.
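
GSM8K reference solutions end with a final line of the form "#### <answer>"; generated solutions are usually scored by extracting the last number in the model's output and comparing it with that gold value. A minimal sketch of this answer-extraction and exact-match check:

# Minimal GSM8K-style scoring sketch: compare the last number in a generated
# solution with the gold answer that follows the "####" marker.
import re

def extract_gold(solution: str) -> str:
    """GSM8K reference solutions end with a line like '#### 72'."""
    return solution.split("####")[-1].strip().replace(",", "")

def extract_prediction(generation: str) -> str:
    """Take the last number appearing in the model's generated reasoning."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", generation.replace(",", ""))
    return numbers[-1] if numbers else ""

def is_correct(generation: str, solution: str) -> bool:
    try:
        return float(extract_prediction(generation)) == float(extract_gold(solution))
    except ValueError:
        return False

gold = "Natalia sold 48 / 2 = 24 clips in May.\nIn total she sold 48 + 24 = 72 clips.\n#### 72"
print(is_correct("April: 48 clips, May: 24 clips, so she sold 72 clips in total.", gold))  # True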

Medical Benchmarks

  • In the medical and biomedical field, benchmarks play a critical role in evaluating the ability of AI models to handle domain-specific tasks such as clinical decision support, medical image analysis, and processing of biomedical literature. This section covers common benchmarks in these areas, the attributes of their datasets, and references to the papers in which they were proposed.

Clinical Decision Support and Patient Outcomes

  • MIMIC-III (Medical Information Mart for Intensive Care): A widely used dataset comprising de-identified health data associated with over forty thousand patients who stayed in critical care units. This dataset is used for tasks such as predicting patient outcomes, extracting clinical information, and generating clinical notes.
    • Dataset Attributes: Includes notes, lab test results, vital signs, medication records, diagnostic codes, and demographic information, requiring comprehensive understanding of medical terminology, clinical narratives, and patient history.
    • Reference: “The MIMIC-III Clinical Database”

Biomedical Question Answering

  • BioASQ: A challenge for testing biomedical semantic indexing and question answering capabilities. The tasks include factoid, list-based, yes/no, and summary questions based on biomedical research articles.
  • MedQA (USMLE): A question answering benchmark based on the United States Medical Licensing Examination, which assesses a model’s ability to reason with medical knowledge under exam conditions.
  • MultiMedQA: A benchmark collection that integrates multiple datasets for evaluating question answering across various medical fields, including consumer health, clinical medicine, and genetics.
  • PubMedQA: A dataset for natural language question answering using abstracts from PubMed as the context, focusing on yes/no questions.
    • Dataset Attributes: Questions derived from PubMed article titles with answers provided in the abstracts, emphasizing models’ ability to extract and verify factual information from scientific texts. Includes a balanced distribution of yes, no, and maybe answers.
    • Reference: “PubMedQA: A Dataset for Biomedical Research Question Answering”
  • MedMCQA: A medical multiple-choice question answering benchmark that evaluates comprehensive understanding and application of medical concepts.
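
These question-answering benchmarks largely reduce to classification-style scoring: MedQA and MedMCQA use multiple-choice accuracy, while PubMedQA is typically reported with accuracy and macro-F1 over its yes/no/maybe labels. A minimal scoring sketch using scikit-learn, with placeholder labels, is shown below.

# Minimal sketch: accuracy and macro-F1 over PubMedQA-style yes/no/maybe labels.
# The gold labels and predictions are placeholders for illustration only.
from sklearn.metrics import accuracy_score, f1_score

gold = ["yes", "no", "maybe", "yes", "no", "yes"]
pred = ["yes", "no", "yes", "yes", "maybe", "yes"]

print("Accuracy:", accuracy_score(gold, pred))
print("Macro-F1:", f1_score(gold, pred, average="macro"))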

Biomedical Language Understanding

  • BLUE (Biomedical Language Understanding Evaluation): A benchmark consisting of several diverse biomedical NLP tasks such as named entity recognition, relation extraction, and sentence similarity in the biomedical domain.
    • Dataset Attributes: Utilizes various biomedical corpora, including PubMed abstracts, clinical trial reports, and electronic health records, emphasizing specialized language understanding and entity relations. Tasks are designed to evaluate both generalization and specialization in biomedical contexts.
    • Reference: “Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets” (the paper that introduced BLUE)

Code LLM Benchmarks

  • In the domain of code synthesis and understanding, benchmarks play a pivotal role in assessing the performance of Code LLMs, challenging models on aspects such as code generation, comprehension, and debugging. This section covers common benchmarks for evaluating code LLMs, the attributes of their datasets, and references to the papers in which they were proposed.

Code Generation and Synthesis

  • HumanEval: This benchmark is designed to test the ability of language models to generate code. It consists of a set of Python programming problems that require writing function definitions from scratch.
    • Dataset Attributes: Includes 164 hand-crafted programming problems covering a range of difficulty levels, requiring understanding of problem statements and generation of functionally correct and efficient code. Problems are evaluated based on correctness and execution results.
    • Reference: “Evaluating Large Language Models Trained on Code”
  • Mostly Basic Programming Problems (MBPP): A benchmark consisting of simple Python coding problems intended to evaluate the capabilities of code generation models in solving basic programming tasks.
    • Dataset Attributes: Contains 974 Python programming problems, focusing on basic functionalities and common programming tasks that are relatively straightforward to solve. Problems range from simple arithmetic to basic data manipulation and control structures.
    • Reference: “Program Synthesis with Large Language Models”
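
Both HumanEval and MBPP are scored functionally: each generated program is executed against the benchmark's test cases, and results are aggregated with the unbiased pass@k estimator introduced in the HumanEval paper. The sketch below illustrates both pieces; it runs candidates in a bare subprocess purely for illustration (real harnesses use proper sandboxing), and in practice each of the n samples would be a different model generation.

# Minimal sketch: execution-based checking plus the unbiased pass@k estimator
# (pass@k = 1 - C(n-c, k) / C(n, k), averaged over problems).
import math
import os
import subprocess
import sys
import tempfile

def passes_tests(candidate_code: str, test_code: str, timeout_s: float = 10.0) -> bool:
    """Run candidate + asserts in a fresh Python process. Not sandboxed: illustration only."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(candidate_code + "\n\n" + test_code + "\n")
        path = f.name
    try:
        result = subprocess.run([sys.executable, path], capture_output=True, timeout=timeout_s)
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        return False
    finally:
        os.remove(path)

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimator of pass@k, given n samples of which c passed the tests."""
    if n - c < k:
        return 1.0
    return 1.0 - math.prod(1.0 - k / i for i in range(n - c + 1, n + 1))

candidate = "def add(a, b):\n    return a + b"
tests = "assert add(2, 3) == 5\nassert add(-1, 1) == 0"
n_samples = 20
n_correct = sum(passes_tests(candidate, tests) for _ in range(n_samples))
print("pass@1 estimate:", pass_at_k(n_samples, n_correct, k=1))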

Data Science Code Generation

  • DS-1000: A benchmark for data science code generation, built from real StackOverflow questions that cover seven widely used Python libraries (NumPy, SciPy, Pandas, Scikit-learn, Matplotlib, PyTorch, and TensorFlow).
    • Dataset Attributes: Contains 1,000 problems with execution-based evaluation and perturbed problem variants to reduce memorization, testing models’ ability to write practical data-manipulation and analysis code.
    • Reference: “DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation”

Comprehensive Code Understanding and Multi-language Evaluation

  • CodeXGLUE: A comprehensive benchmark that includes multiple tasks like code completion, code translation, and code repair across various programming languages.

Algorithmic Problem Solving

  • LeetCode Problems: A widely used benchmark for algorithmic problem solving, offering a comprehensive set of problems that test various algorithmic and data structure concepts.
    • Dataset Attributes: Features thousands of problems across different categories such as arrays, linked lists, dynamic programming, and more. Problems range from easy to hard, providing a robust platform for evaluating algorithmic problem-solving skills.
    • Reference: The LeetCode Solution Dataset on Kaggle
  • Codeforces Problems: This benchmark includes competitive programming problems from Codeforces, a platform known for its challenging contests and diverse problem sets.
    • Dataset Attributes: Contains problems that are designed to test deep algorithmic understanding and optimization skills. The problems vary in difficulty and cover a wide range of topics including graph theory, combinatorics, and computational geometry.
    • Reference: “Competition-Level Problems are Effective LLM Evaluators”

Vision-Language Models (VLMs)

General Benchmarks

  • VLMs combine visual data with language, and their benchmarks probe how well models can interpret images and generate accurate, grounded responses about them. This section covers key benchmarks that test these multimodal capabilities:

Visual Question Answering

  • Visual Question Answering (VQA) and VQAv2: Requires models to answer questions about images, testing both visual comprehension and language processing.
    • Dataset Attributes: Combines real and abstract images with questions that require understanding of object properties, spatial relationships, and activities. VQA includes open-ended questions, while VQAv2 provides a balanced dataset to reduce language biases.
    • Reference: “VQA: Visual Question Answering” and, for VQAv2, “Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering”.
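
The standard VQA accuracy metric credits an answer in proportion to how many of the ten human annotators gave it, with full credit for matching at least three annotators. The sketch below is a simplified version; the official implementation additionally normalizes answers and averages over all 9-annotator subsets.

# Simplified VQA accuracy: min(#matching human answers / 3, 1).
def vqa_accuracy(predicted: str, human_answers: list[str]) -> float:
    matches = sum(ans.strip().lower() == predicted.strip().lower() for ans in human_answers)
    return min(matches / 3.0, 1.0)

humans = ["2", "2", "two", "2", "2", "3", "2", "2", "2", "2"]
print(vqa_accuracy("2", humans))            # 1.0
print(round(vqa_accuracy("3", humans), 2))  # 0.33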

Image Captioning

  • MSCOCO Captions: Models generate captions for images, focusing on accuracy and relevance of the visual descriptions.
    • Dataset Attributes: Real-world images with annotations requiring descriptive and detailed captions that cover a broad range of everyday scenes and objects. The dataset includes over 330,000 images with five captions each, emphasizing diversity in descriptions.
    • Reference: “Microsoft COCO: Common Objects in Context”
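
Generated captions are scored against the five human references per image; the official COCO caption evaluation toolkit reports BLEU, METEOR, ROUGE-L, and CIDEr. As a rough stand-in, the sketch below computes BLEU-4 for a single caption using NLTK (the captions shown are illustrative, not taken from the dataset).

# Rough stand-in for COCO caption scoring: BLEU-4 of one generated caption
# against five (illustrative) reference captions, via NLTK.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

references = [
    "a man riding a wave on top of a surfboard".split(),
    "a surfer rides a large wave in the ocean".split(),
    "a person on a surfboard riding a wave".split(),
    "a man surfing on a big ocean wave".split(),
    "a surfer balancing on a breaking wave".split(),
]
candidate = "a man is riding a wave on a surfboard".split()

score = sentence_bleu(references, candidate,
                      weights=(0.25, 0.25, 0.25, 0.25),
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU-4: {score:.3f}")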

Visual Reasoning

  • NLVR2 (Natural Language for Visual Reasoning for Real): Evaluates reasoning about the relationship between textual descriptions and image pairs.
  • MMMU (Massive Multi-discipline Multimodal Understanding and Reasoning): Tests models’ ability to answer college-level questions that require jointly understanding images and text.
    • Dataset Attributes: Contains approximately 11.5K multimodal questions drawn from college exams, quizzes, and textbooks, spanning six core disciplines (Art & Design, Business, Science, Health & Medicine, Humanities & Social Science, and Tech & Engineering) and a wide variety of image types such as charts, diagrams, maps, and chemical structures.
    • Reference: “MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI”

Video Understanding

  • Perception Test: A diagnostic benchmark designed to evaluate the perception and reasoning abilities of multimodal video models.
    • Dataset Attributes: Comprises roughly 11.6K real-world videos filmed by crowd-sourced participants, probing skills such as memory, intuitive physics, abstraction, and semantics through tasks including multiple-choice and grounded video question answering, object and point tracking, and temporal action and sound localization.
    • Reference: “Perception Test: A Diagnostic Benchmark for Multimodal Video Models”

Medical VLM Benchmarks

  • Medical VLMs merge visual and linguistic analysis for healthcare applications, supporting systems that interpret complex medical imagery alongside textual data to improve diagnostic accuracy and treatment efficiency. This section covers major benchmarks that test these interdisciplinary skills:

Medical Image Annotation and Retrieval

  • ImageCLEFmed: Part of the ImageCLEF challenge, this benchmark tests image-based information retrieval, automatic annotation, and visual question answering using medical images.
    • Dataset Attributes: Contains a wide array of medical imaging types, including radiographs, histopathology images, and MRI scans, necessitating the interpretation of complex visual medical data. Tasks range from multi-label classification to segmentation and retrieval.
    • Reference: “ImageCLEF - the CLEF 2009 Cross-Language Image Retrieval Track”

Disease Classification and Detection

  • CheXpert: A large dataset of chest radiographs for identifying and classifying key thoracic pathologies. This benchmark is often used for tasks that involve reading and interpreting X-ray images.
  • Diabetic Retinopathy Detection: Focused on the classification of retinal images to diagnose diabetic retinopathy, a common cause of vision loss.
    • Dataset Attributes: Features high-resolution retinal images, where models need to detect subtle indicators of disease progression, requiring high levels of visual detail recognition. The dataset includes labels for different stages of retinopathy, emphasizing early detection and severity assessment.
    • Reference: Diabetic Retinopathy Detection on Kaggle
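
The associated Kaggle competition scored submissions with quadratic weighted kappa, which rewards predictions that land close to the true severity grade on the ordinal 0-4 scale (0 = no retinopathy, 4 = proliferative). A minimal sketch with scikit-learn, using placeholder labels:

# Quadratic weighted kappa over 0-4 diabetic retinopathy severity grades.
# Gold labels and predictions below are placeholders for illustration.
from sklearn.metrics import cohen_kappa_score

gold = [0, 0, 1, 2, 4, 3, 0, 2]
pred = [0, 1, 1, 2, 3, 3, 0, 1]

print("Quadratic weighted kappa:", cohen_kappa_score(gold, pred, weights="quadratic"))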

Common Challenges Across Benchmarks

  • Generalization: Assessing how well models can generalize from the training data to unseen problems.
  • Robustness: Evaluating the robustness of models against edge cases and unusual inputs.
  • Execution Correctness: Beyond generating syntactically correct code, the emphasis is also on whether the code runs correctly and solves the problem as intended.
  • Bias and Fairness: Ensuring that models do not inherit or perpetuate biases that could impact patient care outcomes, especially given the diversity of patient demographics.
  • Data Privacy and Security: Addressing concerns related to the handling and processing of sensitive health data in compliance with regulations such as HIPAA.
  • Domain Specificity: Handling the high complexity of medical and biomedical terminologies and imaging, which requires not only technical accuracy but also clinical relevancy.

Citation

If you found our work useful, please cite it as:

@article{Chadha2020DistilledLLMVLMBenchmarks,
  title   = {LLM/VLM Benchmarks},
  author  = {Chadha, Aman},
  journal = {Distilled AI},
  year    = {2020},
  note    = {\url{https://aman.ai}}
}