AI Reports

Understanding HUMA's AI-generated evaluation reports — structure, scores, and how to act on them.

Overview

After every AI voice call or panel interview, HUMA automatically generates a structured evaluation report. Reports are available within seconds of a session ending.


Report Structure

Every HUMA report contains the following fields:

| Field | Description |
|---|---|
| Overall Score | A 0–100 score representing the candidate's overall performance |
| Evaluation Factors | Per-factor scores with category name, score, and notes |
| Strengths | Key positive observations from the interview |
| Weaknesses | Areas of concern or improvement |
| Recommendation | A hire / no-hire / further review recommendation from the AI |

Evaluation Factors

Evaluation factors are configurable per workspace and position. Each factor has:

  • A category name (e.g. "Communication Skills", "Technical Knowledge", "Problem Solving")
  • A weight (0–100) that determines its contribution to the overall score
  • A score (0–10) given by the AI based on the interview transcript
  • Notes — the AI's reasoning for the score

Example:

{
  "category": "Communication Skills",
  "weight": 30,
  "score": 8,
  "notes": "Candidate articulated ideas clearly and concisely. Demonstrated active listening."
}
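The documentation does not state the exact formula HUMA uses to combine factor scores into the overall 0–100 score. As an illustration only, a weight-normalized average of the 0–10 factor scores, scaled to the 0–100 range, is one plausible reading of the fields above:

```python
# Illustrative sketch only: HUMA's actual aggregation formula is not
# documented here. This assumes a weight-normalized average of the
# 0-10 factor scores, scaled up to the 0-100 overall range.
def overall_score(factors):
    total_weight = sum(f["weight"] for f in factors)
    if total_weight == 0:
        return 0.0
    weighted = sum(f["weight"] * f["score"] for f in factors)
    # Average score is 0-10; multiply by 10 to land on the 0-100 scale.
    return round(weighted / total_weight * 10, 1)

factors = [
    {"category": "Communication Skills", "weight": 30, "score": 8},
    {"category": "Technical Knowledge", "weight": 50, "score": 7},
    {"category": "Problem Solving", "weight": 20, "score": 9},
]
print(overall_score(factors))  # 77.0
```

Whatever the real formula, the takeaway is the same: a factor's weight controls how strongly its score moves the overall number.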

AI Engine Options

HUMA supports multiple advanced AI engines for report generation. Your workspace admin can select and switch the active engine at any time — without re-running any interviews.

Go to Settings → AI Configuration to manage your engine preference.


Regenerating a Report

If you want to re-evaluate a candidate with a different AI engine or updated evaluation factors:

  1. Open the report
  2. Click Regenerate Report
  3. Select the AI engine you want to use (or leave the default)
  4. Confirm. The new report replaces the previous one; the original transcript is unchanged

Regeneration consumes credits based on the model used.


Configuring Evaluation Factors

Workspace-Level Defaults

Go to Settings → Evaluation Factors to set default factors for your workspace. These apply to all positions unless overridden.

Per-Position Override

On any position, go to the Evaluation Factors tab to customize factors and weights for that role.

Tips for good evaluation factors:

  • Use 3–6 factors for most roles (more factors = longer reports)
  • Assign higher weights to the most critical skills for the role
  • Keep factor names concise and specific ("Python proficiency" vs "technical skills")
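The tips above can be turned into quick sanity checks. The helper below is hypothetical and not part of HUMA; it is a sketch of how you might lint a per-position factor configuration before saving it:

```python
# Hypothetical lint helper for a per-position factor configuration.
# Not part of HUMA; it just encodes the rules of thumb above.
def lint_factors(factors):
    warnings = []
    # Rule of thumb: 3-6 factors for most roles.
    if not 3 <= len(factors) <= 6:
        warnings.append("aim for 3-6 factors")
    # Rule of thumb: weight critical skills higher than the rest.
    weights = [f["weight"] for f in factors]
    if weights and max(weights) == min(weights):
        warnings.append("all weights equal: weight critical skills higher")
    # Rule of thumb: concise, specific factor names.
    for f in factors:
        if len(f["category"].split()) > 4:
            warnings.append("factor name too long: " + f["category"])
    return warnings

draft = [
    {"category": "Python proficiency", "weight": 50},
    {"category": "Communication", "weight": 50},
]
print(lint_factors(draft))
# ['aim for 3-6 factors', 'all weights equal: weight critical skills higher']
```

The thresholds (four words per name, equal-weight check) are arbitrary choices for the example; adjust them to your own conventions.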

Sharing Reports

Reports can be shared with hiring managers who don't have a HUMA account:

  1. Open the report
  2. Click Share
  3. Copy the shareable link (no login required for view-only access)

Batch Report Generation

If you have multiple candidates to evaluate:

  1. Go to Candidates list
  2. Select the candidates you want to evaluate
  3. Click Batch Actions → Generate Reports

This queues report generation for all selected candidates. Credits are consumed per report.


FAQ

How accurate are the AI scores? AI scores are based purely on the interview transcript. They are designed to provide a structured starting point for human decision-making — not to replace it. Always review the reasoning notes alongside the score.

Can I edit a report? Reports are read-only. You can add notes to a candidate's profile separately. To change evaluation outcomes, regenerate the report with updated factors.

How long does report generation take? Most reports are generated within 10–30 seconds of a session ending. Complex reports with many evaluation factors may take slightly longer.