Pedagogical Ability Assessment of AI-powered Tutors @ACL-BEA-25
A fine-tuned BERT model for classifying the pedagogical guidance in tutor responses produced by LLMs and human experts. Specifically, this model was used for ACL-BEA-25 Track 3.
Pedagogical Guidance: classify whether the tutor’s response offers correct and relevant guidance, such as an explanation, elaboration, hint, or examples. The following categories are included (a minimal fine-tuning sketch follows the list):
- Yes: the tutor provides guidance that is correct and relevant to the student’s mistake
- To some extent: guidance is provided but it is fully or partially incorrect, incomplete, or somewhat misleading
- No: the tutor’s response does not include any guidance, or the guidance provided is irrelevant to the question or factually incorrect
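For orientation, here is a minimal fine-tuning sketch showing how these three categories can map to integer class ids for `BertForSequenceClassification`. The toy examples and hyperparameters are illustrative assumptions, not the exact recipe used to train this model.

```python
import torch
from torch.optim import AdamW
from transformers import BertForSequenceClassification, BertTokenizer

# The three guidance categories and their integer class ids.
label2id = {"Yes": 0, "To some extent": 1, "No": 2}
id2label = {v: k for k, v in label2id.items()}

# Hypothetical toy pairs; the real training data comes from the BEA 2025 shared task.
train_pairs = [
    ("Remember that dividing by 5 means splitting into 5 equal groups.", "Yes"),
    ("Just check your work again.", "No"),
]

tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "google-bert/bert-base-uncased", num_labels=3, id2label=id2label, label2id=label2id
)

texts = [text for text, _ in train_pairs]
labels = torch.tensor([label2id[label] for _, label in train_pairs])
batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True, max_length=128)

# One illustrative training step: cross-entropy loss over the 3 classes.
optimizer = AdamW(model.parameters(), lr=2e-5)
model.train()
outputs = model(**batch, labels=labels)
outputs.loss.backward()
optimizer.step()
```

Passing `id2label`/`label2id` at model creation stores the label names in the model config, so downstream users can recover them via `model.config.id2label`.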
Usage
```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

# Load the fine-tuned model and tokenizer from the Hugging Face Hub.
repo_id = "alonsopg/BEA-25-pedagogical-guidance"
model = BertForSequenceClassification.from_pretrained(repo_id)
tokenizer = BertTokenizer.from_pretrained(repo_id)

# Move the model to the appropriate device.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

# Map class indices to guidance labels.
label_mapping = {0: "Yes", 1: "To some extent", 2: "No"}

def predict_guidance(response_text):
    """
    Tokenizes an input response, moves the tensors to the device,
    performs inference, and returns the predicted guidance label.
    """
    inputs = tokenizer(
        response_text,
        return_tensors="pt",
        padding="max_length",
        truncation=True,
        max_length=128,
    )
    inputs = {key: value.to(device) for key, value in inputs.items()}
    model.eval()
    with torch.no_grad():
        outputs = model(**inputs)
    logits = outputs.logits
    pred = torch.argmax(logits, dim=1).item()
    return label_mapping[pred]

# Example usage:
sample_response = (
    "I appreciate your effort, but let's think about this carefully: if we divide 10 into 5 equal groups, "
    "how many would be in each group?"
)
prediction = predict_guidance(sample_response)
print("Predicted Pedagogical Guidance:", prediction)
```
Base model: google-bert/bert-base-uncased