a system to automate marking tests in school
Repository files:

  • answer_key.csv
  • LICENSE
  • README.md
  • responses.csv
  • results.csv
  • score_mark_01.py
  • score_mark_02.py
  • test_data.json

01 a system to automate marking tests in school (CSV-based)

Automating test marking for a school involves processing test submissions, evaluating answers, and calculating scores. The following Python code demonstrates a basic implementation using CSV files for input and output, assuming objective-type questions (e.g., multiple choice, true/false):

Structure of Input Files:

Answer Key (answer_key.csv):

Question,Answer
1,A
2,B
3,C
4,D

Student Responses (responses.csv):

StudentID,1,2,3,4
101,A,B,C,D
102,A,A,C,D
103,A,B,B,D

Python Script:


import csv

def load_answer_key(file_path):
    """Load the answer key from a CSV file."""
    answer_key = {}
    with open(file_path, mode='r') as file:
        reader = csv.DictReader(file)
        for row in reader:
            answer_key[int(row['Question'])] = row['Answer']
    return answer_key

def load_responses(file_path):
    """Load student responses from a CSV file."""
    responses = []
    with open(file_path, mode='r') as file:
        reader = csv.DictReader(file)
        for row in reader:
            responses.append(row)
    return responses

def evaluate_responses(answer_key, responses):
    """Evaluate responses and calculate scores for each student."""
    results = []
    for response in responses:
        student_id = response['StudentID']
        score = 0
        for question, correct_answer in answer_key.items():
            if str(question) in response and response[str(question)] == correct_answer:
                score += 1
        results.append({'StudentID': student_id, 'Score': score})
    return results

def save_results(file_path, results):
    """Save the results to a CSV file."""
    with open(file_path, mode='w', newline='') as file:
        fieldnames = ['StudentID', 'Score']
        writer = csv.DictWriter(file, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(results)

def main():
    # File paths
    answer_key_file = 'answer_key.csv'
    responses_file = 'responses.csv'
    results_file = 'results.csv'

    # Load data
    answer_key = load_answer_key(answer_key_file)
    responses = load_responses(responses_file)

    # Evaluate responses
    results = evaluate_responses(answer_key, responses)

    # Save results
    save_results(results_file, results)
    print(f"Results saved to {results_file}")

if __name__ == "__main__":
    main()

Explanation:

Input:

  • The answer_key.csv contains the correct answers.
  • The responses.csv has student responses.

Process:

  • Load the answer key and responses using csv.DictReader.
  • Compare each student's answers to the key and calculate the score (a small illustration follows below).
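
For reference, each row that csv.DictReader yields is a plain dictionary keyed by the CSV header, which is what makes the per-question comparison simple. A minimal illustration, assuming the sample responses.csv shown above:

import csv

# Illustration only: the first row csv.DictReader yields from responses.csv,
# and the per-question check that evaluate_responses performs.
with open('responses.csv', mode='r') as file:
    first_row = next(csv.DictReader(file))

print(first_row)               # {'StudentID': '101', '1': 'A', '2': 'B', '3': 'C', '4': 'D'}
print(first_row['1'] == 'A')   # True, so student 101 earns the mark for question 1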

Output:

A new CSV file results.csv is generated with students' IDs and their scores:

StudentID,Score
101,4
102,3
103,3
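
The script itself only reports where the results were written. If a quick class summary is useful, a small sketch like the one below (not part of the script above) could read results.csv back and print the average score:

import csv

# Sketch: read results.csv back and print a simple class summary.
with open('results.csv', mode='r', newline='') as file:
    scores = [int(row['Score']) for row in csv.DictReader(file)]

if scores:
    print(f"Students marked: {len(scores)}")
    print(f"Average score: {sum(scores) / len(scores):.2f}")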

Next Steps:

  • Expand the script to handle other types of questions (e.g., short answers, essays); a rough sketch for short answers follows after this list.
  • Integrate the system with a database for large-scale usage.
  • Develop a GUI or web interface for ease of use.
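
As a rough sketch for the first item, short-answer questions could be credited after simple normalization (trimming whitespace, lower-casing, stripping punctuation). The grade_short_answer helper and its example data are illustrative assumptions, not part of the script above:

import string

def grade_short_answer(student_answer, accepted_answers):
    """Illustrative helper: credit a short answer if it matches any accepted
    answer after lower-casing and stripping whitespace and punctuation."""
    def normalize(text):
        return text.strip().lower().strip(string.punctuation + " ")
    return normalize(student_answer) in {normalize(a) for a in accepted_answers}

# Example usage with made-up data:
print(grade_short_answer("  Photosynthesis. ", ["photosynthesis"]))  # True
print(grade_short_answer("respiration", ["photosynthesis"]))         # False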

02 a system to automate marking tests in school (JSON-based)

Here's an example of a Python-based system to automate marking tests. This simple implementation uses a JSON file to store the correct answers and student responses, then calculates a score for each student.

Code Example


import json

# Sample JSON file for answers and student responses
test_data = {
    "answers": {
        "Q1": "A",
        "Q2": "B",
        "Q3": "C",
        "Q4": "D"
    },
    "students": {
        "Student1": {
            "Q1": "A",
            "Q2": "B",
            "Q3": "D",
            "Q4": "D"
        },
        "Student2": {
            "Q1": "A",
            "Q2": "C",
            "Q3": "C",
            "Q4": "D"
        }
    }
}

def mark_tests(test_data):
    correct_answers = test_data["answers"]
    student_scores = {}

    for student, responses in test_data["students"].items():
        score = sum(1 for question, answer in responses.items() if correct_answers.get(question) == answer)
        student_scores[student] = score

    return student_scores

# Write the JSON data to a file (for demonstration purposes)
with open("test_data.json", "w") as file:
    json.dump(test_data, file)

# Load the JSON data
with open("test_data.json", "r") as file:
    loaded_data = json.load(file)

# Mark the tests
scores = mark_tests(loaded_data)

# Display the results
print("Test Results:")
for student, score in scores.items():
    print(f"{student}: {score}/{len(test_data['answers'])}")

Features of This System:

  1. Data Structure: Uses JSON for easy storage and handling of questions, answers, and student responses.
  2. Flexibility: Supports multiple students and questions.
  3. Scoring: Matches responses with correct answers and calculates scores.

How to Use:

  1. Update the test_data dictionary with the correct answers and student responses.
  2. Run the script to process and output scores (expected output shown below).
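
With the sample test_data shown above, running the script prints the following (Student1 misses Q3 and Student2 misses Q2):

Test Results:
Student1: 3/4
Student2: 3/4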

Expand this system with features like:

  • Uploading test data from a file or database.
  • Exporting scores to a spreadsheet (see the sketch after this list).
  • Handling different question types (e.g., multiple-choice, open-ended).
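
For instance, exporting the dictionary returned by mark_tests to a spreadsheet-friendly CSV could look roughly like the sketch below; export_scores and the scores.csv filename are illustrative choices, not existing parts of this system:

import csv

def export_scores(scores, file_path="scores.csv"):
    """Sketch: write the {student: score} dictionary from mark_tests to a CSV
    file that can be opened in any spreadsheet application."""
    with open(file_path, mode="w", newline="") as file:
        writer = csv.writer(file)
        writer.writerow(["Student", "Score"])
        for student, score in scores.items():
            writer.writerow([student, score])

# Example usage (assuming `scores` comes from mark_tests above):
# export_scores(scores)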