ACADEMIC RESEARCH

T.E.A. – the Test of English for Aviation – is designed to assess the
plain English language proficiency of pilots and air traffic
controllers in an aviation context. It is designed to elicit language
that can be assessed against the ICAO band descriptors, and is
conducted in the form of a one-to-one interview between candidate
and examiner.

Research has been carried out to measure the validity and
effectiveness of TEA. A summary of this research follows.
As of May 2018, approximately 48,000 TEA tests have
been delivered by 294 examiners in 139 test centres
around the world.
TEA has been developed in accordance with the ICAO
guidelines outlined in Document 9835 (2010). TEA Ltd has
conducted research and produced a variety of reports
pertaining to test specifications, language competences,
task design, item content, assessment and administration:
taken together, they present a strong argument for test
validity.
Test Design and Construct
A designated report has been produced to describe the test
purpose and the principles underlying TEA’s development.
Each aspect of construction is correlated to ICAO’s
language proficiency requirements.
The test measures plain language proficiency in an aviation
context only, and not intelligence, logical thinking,
operational knowledge or any other construct which would
unfairly affect the assessment.
TEA incorporates both direct and semi-direct testing methods: a
CD-delivered listening section, with individual test items, is
integrated into the face-to-face interview. Candidates respond to all
items orally and there is no discrete-point testing.
The final score for each test-taker is the lowest of the
scores on each of the six ICAO language proficiency skills.
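The scoring rule above can be sketched in a few lines of code. This is a minimal illustration only, not TEA Ltd's actual software; the six skill names are taken from the ICAO rating scale in Document 9835:

```python
# Sketch of the TEA overall-scoring rule: the final score is the
# lowest of the scores on the six ICAO language proficiency skills.
# (Illustrative only; not TEA Ltd's actual implementation.)

ICAO_SKILLS = (
    "pronunciation", "structure", "vocabulary",
    "fluency", "comprehension", "interactions",
)

def overall_score(skill_scores: dict) -> int:
    """Return the overall ICAO level: the minimum across all six skills."""
    return min(skill_scores[skill] for skill in ICAO_SKILLS)

# A candidate rated level 5 in every skill except fluency (level 4)
# receives an overall level of 4.
scores = {
    "pronunciation": 5, "structure": 5, "vocabulary": 5,
    "fluency": 4, "comprehension": 5, "interactions": 5,
}
print(overall_score(scores))  # → 4
```

This "weakest skill" rule means a single low sub-score caps the overall result, which is why no individual skill can be compensated for by strength elsewhere.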
Test Validity and Reliability
TEA Ltd has conducted a number of studies which provide
a picture of the test’s validity and reliability. A designated
report illustrating the development process is available, as
are reports describing item writing, item trialling and item
revision.
Reports on the language abilities measured, and on the functions and
topic domains elicited by TEA, show the range to be wide and
appropriate. Expert judges confirm that the key linguistic
competences – grammatical, lexical and phonological – are engaged in
every task.
A study of the performance of different test versions post-
operationalisation demonstrates that, in each part of the test,
different versions perform equally, giving candidates neither an
advantage nor a disadvantage.
Rating Reliability
TEA examiners are selected on the basis of their language proficiency
and operational experience. Following training (rating and
interlocution) and certification, their tests are monitored by Senior
Examiners. A third rater is consulted in the case of divergent scores.
In high-stakes language testing, an examiner reliability coefficient
of 0.9 is considered the minimum acceptable level. In 2013, agreement
between Senior Examiners on Overall Score ratings was high, ranging
from 0.97 to 1.00.
In 2013, approximately 1 in 4 tests were double-marked
and in 98% of cases there was agreement in the overall
score between fully-trained Examiners and Senior
Examiners.
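The double-marking figure above is a percent-agreement statistic, which can be computed directly. The rating pairs below are invented for illustration; they are not TEA data:

```python
# Percent agreement between two raters on double-marked tests:
# the fraction of tests where the Examiner and the Senior Examiner
# award the same overall score. (Rating pairs are hypothetical.)

def percent_agreement(pairs):
    """pairs: list of (examiner_score, senior_examiner_score) tuples."""
    agree = sum(1 for a, b in pairs if a == b)
    return agree / len(pairs)

pairs = [(4, 4), (5, 5), (4, 4), (3, 4), (6, 6)]  # hypothetical ratings
print(f"{percent_agreement(pairs):.0%}")  # → 80%
```

Simple percent agreement does not correct for chance agreement; chance-corrected statistics such as Cohen's kappa are often reported alongside it in rater-reliability studies.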
Examiners complete annual standardisation and must re-
certify every 2 years.
Test Administration and Security

Since security is a major issue in the high-stakes environment of
aviation testing, TEA incorporates a number of features to reduce
security risks, including:

- numerous test-day identification checks
- multiple test versions
- centralised certificate printing (a test-day photograph appears
  on the certificate)
- a certificate verification website
- a secure database (all candidate and test data are stored in the
  TEA Database, an online secure MySQL database accessed by TEA
  Centre Administrators)