Future directions and conclusion
The need for more evaluations of word lists and courses of learning and the need for replication studies
As new word lists are developed, it is useful to compare and contrast new lists with existing ones; such comparisons benefit researchers carrying out replication studies or trialling new techniques for identifying specialised vocabulary. For example, Gardner and Davies (2014) compare their AVL with Coxhead’s AWL, and new medical word lists can compare their results with existing studies such as Lei and Liu (2016) and Wang, Liang and Ge (2008). Importantly, these word lists are available for other researchers to use and explore, as Nelson does with his Business English Lexis website (http://users.utu.fi/micnel/business_english_lexis_site.htm), which also contains data from Nelson’s (2000) PhD research. Corpora, by contrast, are often not publicly available, which means it can be difficult to obtain them to run a replication study or to check findings, as Neufeld, Hancioğlu and Eldridge (2011) did in their reanalysis of Li and Qian’s (2010) study of the AWL in a Finance corpus. There are also gaps in the word list field: more specialised lists are needed, following examples such as Yang (2015) in Nursing (see Chapter 6), as is more research on multi-word units in word lists.
Little word list research focuses on postgraduate research; instead, the majority of this research focuses on EGAP, some academic-specific purposes and some professional purposes (for example, Business). It is not clear, for example, what happens to vocabulary between undergraduate studies and postgraduate studies in the same field, let alone what vocabulary demands there are for learners who have studied in one undergraduate field and then change to another for their postgraduate studies. Furthermore, there does not seem to be much of a balance in word list research in terms of Becher’s (1989) framework of hard-pure, soft-pure, hard-applied and soft-applied areas of academia, but see Dang et al. (in press).
Nation’s (2016) timely volume on word list research provides a particularly strong framework for considering some of the main issues of word list research, such as units of counting, critical analysis of a word list study, multiword units and corpus selection and design. At a time when there seem to be many word lists available or being developed for teaching and learning purposes, it is particularly important that we understand what hallmarks there might be in word list development, what considerations need to be taken into account depending on whether a word list is general or specific in nature and what kind of corpus approach was taken in the development. The representation of the corpus is particularly important. Miller and Biber (2015) call for more consideration of internal representation in corpora to take variation in lexis and word list research into account, while acknowledging through their own experience that this is not a particularly easy task.
Current thinking around high, mid and low frequency bands is based on word lists for English for general purposes (Schmitt & Schmitt, 2012; Nation, 2013). An important question to ask is whether, or to what extent, this framework holds for English for specific or academic purposes. The high-mid-low frequency framework is based on Nation’s (2006) extensive work on frequency lists using the BNC. Readily available tools can help anyone who is interested to apply this frequency framework to their own learner or professional texts, for example, Heatley, Nation and Coxhead’s (2002) RANGE programme, Tom Cobb’s Lex Tutor (Cobb, n.d.) or Laurence Anthony’s AntConc (Anthony, 2016). Questions to consider when looking at the output of a frequency analysis from these programmes include the following: Which frequency band (high, mid or low) do most of the specialist words in the text belong to? Which words occur most often, and why? Which words are closely related to the topic of the text and to the field? Which of these words would you expect someone with basic knowledge of the area or field to know already? What is specialised vocabulary in this text: all the words which are closely related to the subject, or only those that are unique to the subject area? Note that Schmitt, Cobb, Horst and Schmitt (2017) call for replication studies into the question of the amount of vocabulary needed for use in English, such as Nation’s (2006) work.
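The kind of frequency-band profiling these programmes carry out can be illustrated with a short script. This is a minimal sketch only: the tiny word sets below are invented stand-ins for Nation’s BNC/COCA frequency lists, which a real analysis would load from file.

```python
# Sketch of frequency-band profiling in the spirit of RANGE or AntWordProfiler.
# HIGH and MID are illustrative stand-ins for real frequency lists
# (e.g. the 1st-3rd and 4th-9th 1,000-word families).
HIGH = {"the", "of", "and", "a", "in", "patient", "blood"}
MID = {"diagnosis", "chronic", "syndrome"}
# Anything else is treated here as low frequency or off-list.

def profile(text):
    """Return the proportion of tokens falling in each frequency band."""
    tokens = [t.strip(".,;:!?").lower() for t in text.split()]
    tokens = [t for t in tokens if t]
    counts = {"high": 0, "mid": 0, "low/off-list": 0}
    for t in tokens:
        if t in HIGH:
            counts["high"] += 1
        elif t in MID:
            counts["mid"] += 1
        else:
            counts["low/off-list"] += 1
    total = len(tokens)
    return {band: n / total for band, n in counts.items()}

print(profile("The patient presented with a chronic syndrome of the blood."))
```

Running the same logic with full frequency lists over a learner or professional text gives the band proportions that the questions above ask about.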
Research that focuses on the international generalisability of word list findings misses opportunities to uncover local language use. For example, a Geography textbook from Aotearoa/New Zealand contains a range of commonly used words from te reo Māori, such as tangata (people), waka (canoe), pohutukawa (the so-called New Zealand Christmas tree), Rangitoto (an island off the coast of Auckland) and wharenui (meeting house). Other examples include the names of iwi or tribes, leaders, regions, towns and cities, food and so on. These examples illustrate how corpora can reflect the origin and purpose of the texts within them, and such lexical items, commonly used words from one language occurring in another, could well be the focus of word list research. Rather than being seen as a limitation of locally based research, in that it is not globally generalisable, this kind of research can inform and even inspire research projects in other regions.
Vocabulary in ESP courses of learning
A serious gap in this field is research into pedagogical and institutional decisions based on word lists and the effect on learning and teaching. In a research project involving specialised vocabulary in secondary schools, Coxhead (2012b) gathered interview and survey data from teachers from a range of subject areas in New Zealand. She found a wide range of views on specialised vocabulary and classroom practices involving the selection of items for teaching and learning, the references and sources used in the classroom and the kinds of activities that teachers use with their students to focus on the specialised vocabulary of a subject area. One teacher remarked that the most important thing for her was the creation of a learning and teaching environment where the main aim is to discuss vocabulary rather than focusing on learning a ‘preconceived’ list of words (Coxhead, 2012a). One of the challenges of word list research, outlined by Byrd and Coxhead (2010), is bringing published word lists into learning environments. Much more research needs to be done to measure the effectiveness of word lists for language learning and to find out more about how they are used in everyday teaching and learning.
There are examples of pedagogically oriented studies which draw on concordancing and qualitative analysis of vocabulary in context. Csomay and Petrović (2012) present research into specialised vocabulary in legal dramas and movies, taking a corpus-based quantitative approach. Examples of concordances are presented in the study, as they are in other studies such as Breeze (2015) on Legal English. Evaluations of the success of such initiatives in courses are needed and could be carried out with pre- and post-testing of specialised vocabulary, as well as interviews or surveys to find out more about what learners and teachers say about concordancing as an approach and how it leads (or not) to more learning. The use of television dramas and movies could also be evaluated in its own right, especially in fields such as Medicine, where there is plenty of opportunity to build on this initial research idea.
Researchers such as Ward (2009) have focused on the specialised vocabulary of learners in particular contexts, in his case, Engineering students in a university in Thailand. A large-scale project could use the same basic framework and collect and analyse all texts to be read in a course of study. Based on the commonly occurring specialised vocabulary in the texts, a series of diagnostic tasks or tests for incoming students could be developed so that they could find out what vocabulary they need for each course before the course starts, and then work on developing their knowledge of this vocabulary to help prepare them for their studies. A post-test would also be needed to measure any gains in knowledge of this specialised vocabulary after their studies. The tests could also be used to investigate the size of technical vocabulary knowledge for this group, and the results of the pre-test could be used for learning and pedagogical purposes. Follow-up studies could use Watson-Todd’s (2017) study of opaque vocabulary (see Chapter 6) to identify lexis which carries everyday meanings and specialised meanings in particular contexts.
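One early step in such a project, identifying commonly occurring candidate words in course texts for a diagnostic test, could be sketched as follows. This is an illustration only, not Ward’s (2009) method: the stoplist is a stand-in for a general high-frequency list, and the course texts are invented.

```python
# Sketch: rank words in course readings by frequency after removing general
# high-frequency items, so candidate specialised words surface for a
# diagnostic pre-test. Stoplist and texts are illustrative stand-ins.
from collections import Counter

GENERAL = {"the", "of", "and", "a", "in", "is", "to", "for", "are"}

def diagnostic_candidates(texts, n=5):
    """Return the n most frequent non-general words across the texts."""
    counts = Counter()
    for text in texts:
        for token in text.lower().split():
            token = token.strip(".,;:!?")
            if token and token not in GENERAL:
                counts[token] += 1
    return [word for word, _ in counts.most_common(n)]

course_texts = [
    "The torque of the engine is measured in newton metres.",
    "Engine torque and voltage are key variables for the load.",
]
print(diagnostic_candidates(course_texts, n=3))
```

Items surfacing at the top of such a list would still need checking by a subject specialist before being used in a diagnostic task, in line with the rating approaches discussed elsewhere in this book.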
Chapter 9 presents an example of analysing the vocabulary of a commonly used website, TED Talks, to find out more about the specialised nature of these texts, whether they are more like academic written or academic spoken texts in terms of vocabulary, and what the website might offer in terms of support for coping with vocabulary for EAP students (Coxhead & Walls, 2012). This study focused on six-minute TED Talks for pedagogical reasons: we did not think teachers would use longer TED Talks in a listening class, given constraints on having to listen multiple times in a limited, perhaps hour-long, class. Longer talks would give more opportunities for vocabulary to occur. This research did not focus on the specialised vocabulary of the TED Talks, but this could be a useful area of possible research, particularly since the categorisation of the talks on the TED Talks website helps with the organisation of any corpus and transcriptions of the talks are available to download. Future research could also examine the effect of a ‘book flood’ (Elley & Mangubhai, 1981) type experiment, where learners could undertake a programme of extensive listening in a particular subject area on TED Talks and reading in the same subject areas, and their development of specialised vocabulary through that programme could be tracked (Coxhead & Walls, 2012).
Like the TED Talks study, which looked into texts which are commonly used in teaching, a written corpus study could involve selecting a text which represents typical reading for a particular group of professionals or students in a field, such as a book chapter, a journal article, a handout used in a class, or a lab manual from the sciences, making sure the text is in electronic form. Laurence Anthony’s website (www.laurenceanthony.net/software.html) has freeware for converting files and tools for analysing texts, such as AntConc and AntWordProfiler. These tools can be used to help identify the vocabulary in a text, for example through a Key Word In Context (KWIC) analysis, which shows the vocabulary in context in concordance lines, or by running the text against Paul Nation’s BNC/COCA (2012) lists. Tom Cobb’s Compleat Lexical Tutor website also provides concordancing and vocabulary profiling (see www.lextutor.ca/).
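The core of a KWIC analysis, as produced by concordancers such as AntConc, can be shown in a few lines. This is a minimal sketch, not AntConc’s implementation, and the example sentence is invented.

```python
# Minimal Key Word In Context (KWIC) sketch: each occurrence of a search
# word is shown with a window of surrounding words, as in concordance lines.

def kwic(text, keyword, window=3):
    """Return (left context, keyword, right context) tuples."""
    tokens = text.split()
    lines = []
    for i, token in enumerate(tokens):
        if token.strip(".,;:!?").lower() == keyword.lower():
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            lines.append((left, tokens[i], right))
    return lines

for left, word, right in kwic(
    "The court awarded damages because the damages claimed were reasonable.",
    "damages",
):
    print(f"{left:>25} | {word} | {right}")
```

Reading down the aligned keyword column in output like this is what lets a researcher or learner compare the everyday and specialised uses of a word across its contexts.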
Finally, research to evaluate Nation’s (2007) Four Strands in vocabulary in ESP is needed. The Four Strands offer a framework for curriculum design that emphasises a balance among meaning-focused input, meaning-focused output, language-focused learning and fluency development. Applying this framework to a course of study and evaluating its effectiveness is important, and ESP and EAP learning and teaching programmes would be a very useful place to carry out that research.
Porte (2012) has called for more replication studies in Applied Linguistics. The journal Language Teaching now has a dedicated slot for these, thanks to Porte’s considerable work as its editor. Schmitt et al. (2017) on vocabulary load research and Coxhead (2015), a replication of Jones and Haywood’s (2004) study of formulaic sequences in an EAP course, are examples of responses in vocabulary studies to these calls for replication. There are plenty of studies in this book that lend themselves to replication, for example, the opaque vocabulary study by Watson-Todd (2017). Miller and Biber (2015) point out that word list research, in particular, would benefit from replication studies to find out whether the same list of words would be generated using a new corpus.
Figure 10.1 Examples of teacher talk from university lectures (Hunter & Coxhead, 2007)
Chung and Nation’s (2003) scale for identifying and categorising technical vocabulary is a study which could usefully be replicated. The main findings concern the amount of technical vocabulary in an Anatomy textbook (one word in three in a line of text) and an Applied Linguistics textbook (20%). A replication study could select the same subject areas and textbooks and repeat the study; another could select different textbooks from the same areas. A small-scale study could also apply Chung and Nation’s (2003) scale (see Table 2.3) to other kinds of texts: Figure 10.1 shows two examples of first-year university lectures from Hunter and Coxhead (2007), the first from Business Law and the second from Media Studies. If we use the scale to identify words from these texts, we will see that damages in the Business Law lecture is a candidate for a word that has both general and specialised meanings in this context. When using the scale, questions that could be considered in further research include the following: Which steps of the scale does most of the vocabulary in the lectures belong to? What proportion of the texts is specialised vocabulary? How might a larger-scale study be carried out, with what kinds of texts, and in what subject areas?
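Once each word in a lecture transcript has been rated on the scale by a human coder, the proportion question above reduces to a simple calculation. The sketch below assumes hypothetical codings (the ratings are invented, not from Hunter and Coxhead, 2007) and treats steps 3 and 4 of the four-step scale as technical, in line with Chung and Nation’s approach.

```python
# Sketch: summarising human-coded text on Chung and Nation's (2003)
# four-step scale. The word-to-step ratings below are invented examples.
ratings = {
    "the": 1, "court": 3, "awarded": 2, "damages": 4,
    "because": 1, "claim": 3, "was": 1, "reasonable": 2,
}

def technical_proportion(tokens, ratings, technical_steps=(3, 4)):
    """Proportion of rated tokens coded at the technical steps of the scale."""
    rated = [ratings[t] for t in tokens if t in ratings]
    technical = sum(1 for step in rated if step in technical_steps)
    return technical / len(rated)

tokens = "the court awarded damages because the claim was reasonable".split()
print(round(technical_proportion(tokens, ratings), 2))
```

A figure computed this way over full lecture transcripts could then be compared directly with the one-word-in-three and 20% findings from the original textbook study.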