Customer Story · Education & Training · Saudi Arabia
Learning Space × Evalufy
From 1,500 Applicants to 80 Ready Learners
Without a Single Extra Interview
How Learning Space used Evalufy to run high-volume Arabic assessments, cut through noise at scale, and select top talent — faster, fairer, and fully in Arabic.
1,500
Applicants assessed
in one intake
~80
Selected for
training
95%
Filter rate —
only top performers
Days
From weeks to days
to shortlist
Client Snapshot
About Learning Space
Learning Space is a professional training organization operating across Saudi Arabia, running competitive cohort-based programs for working professionals. Each intake receives hundreds — often over a thousand — applicants competing for a limited number of seats.
Organization: Learning Space
Sector: Education & Professional Training
Location: Saudi Arabia
Use Case: High-volume applicant filtering + AI Video Interview in Arabic
Assessment Scale: Up to 1,500 applicants per intake
Evalufy Features: Pre-defined cognitive assessments · Custom MCQ tests · Arabic AI Video Interview
The Challenge
Screening at Scale Without the Right Tools
Screening more than 1,000 applicants manually isn't just slow — it's structurally broken. Without an objective filter at intake, HR teams are forced to make decisions on instinct at exactly the moment when instinct fails most: when volume is high, time is short, and no two reviewers use the same criteria.
📊
No scalable method to evaluate 1,000+ applicants objectively
Every new cohort arrived with hundreds of applicants and no automated way to separate signal from noise. The team was spending days doing what a properly configured assessment tool handles in hours.
🎲
Pre-interview screening relied on gut feel, not data
Without standardized cognitive benchmarks, shortlisting decisions were inconsistent. Different reviewers applied different standards — the quality of each cohort depended on who happened to be reviewing that week.
🌐
No Arabic-language assessment tool built for volume
For a predominantly Arabic-speaking candidate base, language support is not a feature — it's a prerequisite for fairness. Most tools either lacked Arabic support entirely or couldn't handle intake volumes at scale.
🔁
Rebuilding assessments from scratch every cohort
Without a reusable question library, the team spent significant time constructing each assessment cycle — time that compounded across every intake, every year.
Why Evalufy
One Requirement Was Non-Negotiable
Learning Space had used Evalufy successfully for earlier intakes. The platform had proven it could handle volume without failure. When they were ready to add AI Video Interviews, one question determined everything:
"
Can we run the AI Interview fully in Arabic?
— Latifah Mohammed, Project Manager, Learning Space
The answer was yes. For organizations assessing Arabic-speaking candidates, language support is the difference between a fair, legally defensible process and one that systematically disadvantages the majority of your applicant pool. Evalufy's AI Video Interview runs natively in Arabic — not as a translated overlay, but as a purpose-built Arabic-language experience.
The Solution
A Four-Stage Assessment Process — Fully in Arabic
Learning Space deployed a three-test cognitive battery from Evalufy's pre-defined library, followed by an AI Video Interview for shortlisted candidates. Together, these create a complete picture of each applicant before a single human conversation happens.
1
Logical Reasoning Skills Test
Measures structured thinking: pattern recognition, logical deduction, and problem-solving without prior domain knowledge. Predictive across roles because it tests raw reasoning ability, not memorized facts.
Cognitive Foundation
2
Quantitative Ability Test
Tests numerical fluency and the capacity to work with data under time pressure. Even in non-technical programs, this signals learning agility and comfort with structured problem-solving.
Numerical Thinking
3
Verbal Ability Test
Assesses reading comprehension, clarity of expression, and the ability to process written information quickly — critical in education contexts where candidates must absorb and communicate learning effectively.
Communication Clarity
4
Arabic AI Video Interview
Shortlisted candidates completed an AI-assessed video interview, fully in Arabic. Every candidate could express themselves at their best — removing language as a hidden, unfair filter in the selection process.
Native Arabic Support
Key design detail: Evalufy randomizes question pools per candidate, so no two applicants see identical sequences. This reduces the risk of answer sharing while maintaining statistical fairness and comparability across the full cohort.
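To make the randomization idea concrete, here is a minimal sketch of per-candidate question sampling. This is an illustrative approximation, not Evalufy's actual implementation; the function name, the candidate-ID seeding, and the pool structure are all assumptions. Seeding the draw with the candidate ID keeps each sequence reproducible for auditing while still varying it across applicants:

```python
import random

def build_candidate_test(question_pool, num_questions, candidate_id):
    """Draw a per-candidate question sequence from a shared pool.

    Hypothetical sketch: seeding the RNG with the candidate ID makes
    the draw reproducible (auditable) while giving each applicant a
    different question sequence, which limits answer sharing.
    """
    rng = random.Random(candidate_id)  # deterministic per candidate
    return rng.sample(question_pool, num_questions)

# Illustrative 50-question pool; every candidate answers 10 of them.
pool = [f"Q{i}" for i in range(1, 51)]
test_a = build_candidate_test(pool, 10, candidate_id="A-001")
test_b = build_candidate_test(pool, 10, candidate_id="B-002")
# Different candidates almost always see different sequences,
# but all questions come from the same calibrated pool, so scores
# remain comparable across the cohort.
```

Drawing from one shared, calibrated pool is what preserves comparability: candidates see different items, but every item has a known difficulty, so ranked results remain fair across the full intake.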
Results
95% Filtered. 100% Confident.
The numbers tell the story. But the outcome wasn't just efficiency — it was a fundamentally better quality of selection.
1,500
Applicants assessed
in one intake
~80
Selected for training
(passed all 3 tests)
95%
Filter rate —
only top performers advanced
Area
Before Evalufy
With Evalufy
Screening method
Manual review, inconsistent criteria
Standardized cognitive + verbal battery
Time to shortlist
Weeks
Days
Interview load
High volume, low signal
Only validated candidates reach interview
Decision confidence
Gut feel
Data-backed scores with full audit trail
Arabic support
Not available
Full Arabic assessments + AI Video Interview
A 95% filter rate doesn't mean 95% of applicants weren't good enough. It means the system worked exactly as designed. The 80 who made it through were selected on ability, not assumption.
FAQ
Frequently Asked Questions
Everything you need to know about running high-volume assessments in Arabic with Evalufy.
Does Evalufy support Arabic-language assessments?
Evalufy offers full Arabic-language support across both its cognitive assessment library and AI Video Interview feature. Assessments are not simply translated — they are adapted for Arabic-speaking candidates, including right-to-left interface support and culturally appropriate content, making them suitable for GCC hiring at scale.
How does Evalufy handle high-volume hiring assessments?
Evalufy is built to handle intake volumes of hundreds to thousands of applicants simultaneously. The platform randomizes question pools per candidate to prevent answer sharing, delivers assessments asynchronously so candidates complete them on their own schedule, and surfaces ranked results automatically — so HR teams spend time reviewing outcomes, not administering tests.
What cognitive assessments does Evalufy offer?
Evalufy's pre-defined library includes Logical Reasoning, Quantitative Ability, Verbal Ability, and a range of domain-specific and role-based tests. Organizations can also upload custom MCQ assessments for program-specific or sector-specific knowledge requirements, giving full flexibility alongside a ready-to-deploy library.
What is an AI Video Interview and how does it work?
An AI Video Interview is a structured, asynchronous video assessment where candidates record responses to a predefined set of questions. Evalufy's AI analyzes responses for competency indicators and generates a scored report that hiring teams can review before deciding whom to advance to a live interview. The process runs fully in Arabic for Arabic-speaking candidate populations.
How long does it take to shortlist from 1,000+ applicants using Evalufy?
Timelines vary based on candidate response rates and assessment configuration, but Learning Space moved from 1,500 applicants to a validated shortlist of 80 in days — compared to weeks when the process was manual. The bulk of time savings comes from eliminating the need for initial phone screening at scale.
Is Evalufy suitable for education and training organizations in Saudi Arabia?
Yes. Evalufy is widely used across the GCC for both corporate hiring and training program selection. Its Arabic-language support, high-volume capability, and cognitive assessment library make it particularly well-suited for organizations in the education and professional development sector operating in Saudi Arabia, UAE, and wider MENA markets.