Google, Baidu, the Library, and the ACRL Framework: Making a Difference

Teaching information literacy and writing studies - Grace Veach 2018

Recent years have witnessed a rapid development of information technologies and their increasingly significant and ubiquitous impact on academic information literacy. Writing in the Handbook of Reading Research, educational scholar Don Leu (2000) argued that technology would change the pace, form, and function of literacy, and that digital technologies were rapidly and continuously redefining the nature of literacy. He further discussed how quickly classrooms would become irrelevant if instructors could not keep up with students as they explored digital technologies and their associated writing spaces. Leu anticipated a reality we have come to inhabit almost 20 years later: we must provide students with opportunities for writing and research that embrace the digital. To better prepare students for the opportunities and challenges of navigating a world of academia mediated by information technologies, researchers and practitioners have moved from simple bibliographic instruction or one-shot library instruction (Spievak & Hayes-Bohanan, 2013; Wang, 2016) to an ecological approach in which information literacy skills are fully integrated into writing curricula (Bohannon, 2015; Brown, Murphy, & Nanny, 2003; Kress, 2003; Pinto, Antonio Cordón, & Gómez Díaz, 2010; Purdy, 2010; Valmont, 2003). Defined as a set of skills for locating, evaluating, and using information effectively for various purposes in academic settings (Behrens, 1994; Bruce, 1997; Doyle, 1994; Huston, 1999), information literacy has been shown to be essential for students’ success in academic writing across disciplines (Jordan & Kedrowicz, 2011; McDowell, 2002). More importantly, there is growing awareness that effective access to and use of information resources indicate students’ abilities to learn and are indispensable to the production of new knowledge (Leu, Kinzer, Coiro, & Cammack, 2004; Markauskaite, 2006).

However, most of these studies focus on native English speakers. With growing numbers of multilingual students in U.S. universities, it is important for both writing instructors and librarians to better understand and scaffold the development of these students’ information-seeking behaviors, particularly in the first-year writing (FYW) classroom, their gateway to academic writing and research. These classrooms are typically where most college students are introduced to information literacy as they prepare to write an argumentative or research essay. A few studies have examined the information literacy of multilingual students. For example, in their survey study of 27 international undergraduate students, Mina and Walker (2016) examined the extent to which information literacy instruction may benefit students’ information-seeking behavior and found important gaps between what students said they were learning and what they were expected to do. Although most students reported that they had received adequate instruction, they felt their information literacy skills were inadequate for many academic tasks. Other studies have also identified unique information literacy challenges facing international students (e.g., Zhao & Mawhinney, 2015). Nevertheless, most previous studies in this field are based on students’ self-reported experiences from surveys or interviews. To examine students’ actual information-seeking behaviors, the LILAC (Learning Information Literacy across the Curriculum) Project collects and analyzes screen-captured data containing a video record of screen activity and students’ voice narratives while they conduct online bibliographic research on a topic. LILAC researchers collect these qualitative data in addition to survey data that aim to unpack students’ experiences, attitudes, and evaluations of their information literacy.
As a networked component of the LILAC Project, this chapter uses three frames from the Framework for Information Literacy for Higher Education produced by the Association of College and Research Libraries (ACRL) as points of reference to examine perceived information-seeking behaviors and possible challenges multilingual students face. We also discuss pedagogical implications by addressing these research questions:

1. What are the information-seeking behaviors of multilingual writers in first-year writing courses?

2. What are the gaps in information-seeking behaviors of multilingual writers as plotted against three ACRL frames and their knowledge practices?

3. How can these findings inform specific pedagogical approaches to improve information-seeking behaviors of multilingual writers in first-year writing courses?

In answering these questions through an empirical study, we describe students’ information-seeking behaviors and summarize the findings based on selected ACRL threshold concepts in an effort to help both writing instructors and librarians see where students are in the trajectory of information literacy development and provide effective instruction and assistance accordingly.

METHODOLOGY

Over the course of two semesters (spring and fall 2015), Lilian collected data for the LILAC Project from multilingual students at a Midwestern research-intensive university. At the time of data collection, that institution was a partner in the LILAC Project, with a special interest in exploring multilingual students’ information literacy skills and behaviors. The deliberate focus on that student population had two causes: the relatively large population of multilingual students at the institution (about 2,000 in 2015), and Lilian’s teaching of FYW classes designated for multilingual students. This context made it possible to recruit students to participate in the LILAC Project. Participation consists of two consecutive parts: completing an online survey about the participant’s training, experience, and self-assessment of information literacy skills, followed by a RAP (Research Aloud Protocol) screen-recording session. The screen recordings capture screen activity and the student’s voice narrative while the student conducts online research on a topic. Each screen recording averages 15 minutes.

Students were recruited, and research sessions were held, as students began the unit on argumentative writing that required bibliographic research to construct a research-based argument on a given topic. This unit is part of the program-set curriculum and typically starts around week seven of the semester. By the time students came to participate in the study, they had selected a topic for their argumentative essay. At the beginning of the research session, students were informed that they would conduct online research on the topic they were writing about in their writing class. This timing was deliberate because it meant students would not waste time thinking up or fabricating a topic to research during the RAP session. Although we acknowledge that this setting is not ideal for collecting extensive or more situated data, the authenticity of students’ topics, and the reality that students were indeed researching these topics for argumentative essays in their FYW classes, compensated for the short time and relatively controlled conditions of data gathering.

Data for our study come from 50 RAP recordings, totaling about 650 minutes of screen-captured data. The recordings were collected from Chinese undergraduate students enrolled at the Midwestern university, most of whom were recruited from Lilian’s FYW classes at the time of data collection. At the beginning of the data analysis process, our team calculated intercoder reliability to ensure the reliability of findings: we converted the questions and notes on the LILAC RAP Coding Scheme into a series of codes that each of us then used to code five RAP recordings (10% of the total data) independently; the Cronbach alpha of the three sets of codes was 0.86, indicating high reliability. After establishing intercoder reliability with the coding scheme, we coded the 50 RAP recordings, taking copious research notes that we later used for qualitative analysis of participants’ information-seeking behaviors. As an integrated part of coding, each of us also took into account that these participants were first-year writers, who would not be expected to command all aspects of information literacy. Instead, we sought trends in how participants’ information-seeking skills could be mapped against specific frames and knowledge practices of the ACRL Framework.
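For readers who wish to reproduce the reliability check, Cronbach’s alpha treats each coder as an “item” and each coded segment as an observation. The following is a minimal sketch of the calculation in Python, assuming the codes can be expressed numerically; the three coder lists below are hypothetical illustrations, not our actual data:

```python
from statistics import pvariance

def cronbach_alpha(ratings):
    """Cronbach's alpha for k coders' numeric codes over the same segments.

    ratings: a list of k lists, one per coder, each holding that coder's
    numeric code for every coded segment (all lists the same length).
    """
    k = len(ratings)
    # Variance of each coder's codes across segments.
    item_vars = [pvariance(r) for r in ratings]
    # Variance of the per-segment totals summed across coders.
    totals = [sum(segment) for segment in zip(*ratings)]
    return (k / (k - 1)) * (1 - sum(item_vars) / pvariance(totals))

# Hypothetical codes from three coders over eight RAP segments.
coder_a = [1, 2, 2, 3, 1, 2, 3, 3]
coder_b = [1, 2, 2, 3, 1, 2, 3, 2]
coder_c = [1, 2, 3, 3, 1, 2, 3, 3]
alpha = cronbach_alpha([coder_a, coder_b, coder_c])
```

An alpha near 1 indicates that the coders’ codes rise and fall together across segments; values around 0.8 or above are conventionally read as high reliability.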

Upon comparing our research notes, we plotted the information-seeking behaviors identified in the RAP recordings against seven knowledge practices under three frames of the 2016 ACRL Framework: Searching as Strategic Exploration (p. 9), Research as Inquiry (p. 7), and Authority Is Constructed and Contextual (p. 4). Table 19.1 illustrates the synergy we created between the three ACRL frames and the LILAC RAP Coding Scheme. We use the three frames as thematic discussion points under which we present a synergy of the seven selected knowledge practices and the LILAC RAP Coding Scheme. Our implications are outgrowths of both the findings and the acknowledged limitations of the controlled nature of our Research Aloud Protocols (RAPs).

TABLE 19.1 A Synergy of the ACRL Framework and the LILAC RAP Coding Scheme

ACRL Frames                               LILAC RAP Coding

Searching as Strategic Exploration        First Search Source; Type of Search

Research as Inquiry                       Determining Search Scope; Refining Search Scope; Using Search Results

Authority Is Constructed and Contextual   Evaluating Search Results; Evaluating Sources

Searching as Strategic Exploration

The ACRL (2016) frame Searching as Strategic Exploration centers on the cognitive processes that drive information-seeking behaviors. This frame distinguishes between novice and expert searchers. While the expert searcher is expected to be more attuned to the context surrounding the search process, with its limitations and challenges, the novice searcher may not always acknowledge the context of his or her search or the limitations of the search process. Another key difference lies in the range of search strategies each utilizes: an expert searcher attempts a wide range of search strategies and may “search more widely and deeply” before deciding on the most suitable sources for the information needed (p. 9). A novice searcher, on the other hand, employs only a few search strategies and resorts to “a limited set of resources,” thus demonstrating little flexibility during the search process (p. 9). The following discussion of participants’ broad scope of search, difficulty accessing sources, and reliance on limited search types demonstrates that most participants in this study sit closer to the novice than the expert end of this frame.

Broad Scope of Search

The first knowledge practice states that “[l]earners who are developing their information literate abilities determine the initial scope of the task required to meet their information needs” (ACRL, 2016, p. 9). Even though this knowledge practice does not have a corresponding question in the LILAC RAP codes, we were able to capture it in participants’ RAP sessions. Most participants started their searching process by identifying the topic they were researching for a particular assignment in their FYW class. The topics were articulated in mostly broad terms, such as “[m]y topic is the global warming and the greenhouse effect.” While very few participants used indirect questions to express the scope of their search, most of these questions were still quite general. For example, one participant stated that his topic was “how digital technology affect our health.” Such a loosely identified search scope may help explain why most participants were not clear about the type of information they needed for their essays or the best sources for locating that information (as we discuss later); both are features of novice searchers.

(Access to) Sources of Information

When participants started their online search, Google was the first choice for almost half of them (48%), while the school library was the first stop for half that number (24%), as Figure 19.1 illustrates. Although Wikipedia was not the first place participants searched for information, it was a preferred destination for participants seeking definitions of keywords or difficult terms relevant to their search. Surprisingly, many Chinese participants turned to Baidu, the major Chinese search engine, to understand the basics of the topic they were researching. Although these participants acknowledged that they would not use the information found through Baidu in their research papers, most explained that reading sources in Chinese was an essential stage in understanding more about their topics. As one participant put it, “Chinese information gets me thinking before I search in English” (21028).

Figure 19.1 Sources of information.

These participants’ choice to use Baidu to better understand the topics and key terms they were searching highlights a serious access problem for the multilingual students in this study. The linguistic barrier many students appeared to face may have led them to unexpected and nontraditional search strategies, such as using a Chinese search engine, reading sources in Chinese, and using Google Translate for assistance with difficult terms in search results. When Mina and Walker (2016) examined the survey data collected from a portion of participants in this study, they found that “most students resort to using the web for their research needs,” and they concluded that “many websites offer translation services” that multilingual students may have found easier to use because of the language barrier (p. 70). Our finding not only supports and consolidates Mina and Walker’s but also extends it by providing examples of the websites and help tools multilingual students utilize.

Another access problem we identified in this study is confusion about the technicalities of the search process. Many participants showed a good deal of confusion about locating and accessing various sources of information. Several participants, for instance, did not know how to locate the school library website to start their search. A few used Google to search for the library website, whereas others started from the school home page. Once on the library website, many participants were puzzled about which of the multiple available tabs to choose to start searching. While some participants searched the article database directly, many struggled between using the catalog and journal tabs and were frustrated when their keyword searches yielded few or no sources. In one particular case, a participant gave up on the library and switched to Google because his search was not yielding relevant results; he had been searching under the “Catalogue” tab instead of the “Articles and more” tab. One of the fundamental information literacy abilities expert learners should possess is understanding the organization of sources “in order to access relevant information” (ACRL, 2016, p. 9). When multilingual students know they should use their school library to find good, credible information for their writing assignments but struggle to locate and access that information, instructors and librarians should intervene with explicit instruction and specially tailored materials that facilitate students’ access to sources of information.

Search Type

As Figure 19.2 displays, almost all participants in this study (90%) relied on keywords in their search for online information for their FYW assignments. Given Mina and Walker’s (2016) study cited earlier, we did not expect participants to use Boolean operators, because only 4% of the multilingual participants in that study reported using this type of search in their responses to the LILAC survey. However, the minimal reliance on natural-language queries (6%) among participants in this study was quite surprising, because we expected participants to use questions or natural-language phrases in their searches. One interpretation of this finding may be that most participants did not have a specific question to guide their online search, as we noted earlier. Another may be that participants did not trust their linguistic abilities to correctly articulate natural-language phrases, preferring simple one- or two-keyword searches instead. Most participants used relatively simple keywords that were very similar, if not identical, to the topics they said they were researching. Keywords included phrases such as “digital cameras,” “food price,” and “healthy food.”

Figure 19.2 Type of search.
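The three search types tallied in Figure 19.2 differ in surface form. As an illustrative sketch only (the LILAC coding was performed by human raters watching RAP recordings, not by software), a rough heuristic for telling the types apart might look like this; the function name and rules are our own invention:

```python
def classify_query(query):
    """Rough heuristic for the three search types discussed above."""
    q = query.strip()
    words = q.split()
    # Boolean search: explicit operators such as AND, OR, NOT.
    if any(w in ("AND", "OR", "NOT") for w in words):
        return "boolean"
    # Natural-language query: a question or full interrogative phrase.
    question_words = {"how", "what", "why", "who", "when", "where",
                      "does", "do", "is", "are", "can", "should"}
    if q.endswith("?") or (words and words[0].lower() in question_words):
        return "natural language"
    # Default: a bare keyword search.
    return "keyword"

classify_query("healthy food")                                    # keyword
classify_query("diet AND obesity NOT surgery")                    # boolean
classify_query("how does digital technology affect our health")   # natural language
```

A heuristic like this obviously misses edge cases; it is meant only to make concrete what distinguishes a keyword search from a Boolean or natural-language one.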

Regardless of the type of search a participant used, we noticed two issues. The first is spelling errors in search terms. Despite the simplicity of the keywords participants used, many struggled with spelling them. Some students appeared to rely on Google’s suggestions and would select the words they wanted despite struggling with their spelling. When a student used another search source (e.g., a school library database), search suggestions were not available; if the student could not spell a keyword correctly, the search yielded no results, and the student would assume that no sources were available on that topic. The student would then either try a new search source (e.g., moving from the library database to Google) or come up with new search terms that might or might not be spelled correctly.
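The Google-style suggestions students relied on are, at bottom, fuzzy matching against a vocabulary, which is something instructors can demonstrate, and database interfaces could offer. A minimal sketch using Python’s standard library difflib; the controlled vocabulary below is hypothetical:

```python
import difflib

# Hypothetical controlled vocabulary of database subject terms.
SUBJECT_TERMS = [
    "global warming", "greenhouse effect", "digital technology",
    "online shopping", "food prices", "public health",
]

def suggest(query, vocabulary=SUBJECT_TERMS, cutoff=0.6):
    """Return up to three subject terms closest to a possibly misspelled query."""
    return difflib.get_close_matches(query, vocabulary, n=3, cutoff=cutoff)

suggest("gloabl warmming")  # → ['global warming']
```

A database search box backed by even this crude matcher would have spared the students above from concluding that no sources existed on their topics.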

The second issue we noticed was literal translation. Some keywords students used appeared to be literal translations of words in the student’s native language. These words and phrases did not always make sense in English and thus did not return good or relevant results. This frustrated the students, who wrongly concluded that there were no resources on their topic. One student, for instance, used “shopping by computer” as her keywords when searching for information for her project on online shopping. Another student, researching the negatives of online shopping, used “shopping online judge” as her search phrase. Both searches yielded mostly irrelevant sources, and the two students sounded puzzled by the scarcity of sources on their popular topics.

Research as Inquiry

Another important frame for first-year multilingual writers is Research as Inquiry. Specifically, the ACRL emphasizes that it is critical for students to understand the iterative nature of research and to be able to ask “increasingly complex or new questions whose answers in turn develop additional questions or lines of inquiry in any field” (p. 7). Major differences between experts and novices in this area can be observed in their abilities to (1) determine and refine the scope of investigation and (2) synthesize ideas from multiple sources. Whereas experts manage the iterative process of research effectively by adjusting the research scope and synthesizing information from multiple sources, novices usually lack both an understanding of research and the abilities to navigate the process. Segments of RAP videos coded for the corresponding questions on the LILAC coding scheme were used to describe the participants’ abilities and challenges in this area (see Table 19.1).

To examine the participants’ abilities to refine the scope of their research, we looked at the changes between the initial scope of research students determined and that of their additional search(es). An overwhelming majority of the participants started with a general research purpose and broad scope, as discussed in the previous section, usually taken from the instructor or the requirements of an assignment. After the 15-minute research session recorded in the RAP videos, most of these students had made little progress toward refining the research scope. A few students attempted to narrow the research scope, but all these attempts ended with either a quick answer or a complete change of direction. In other words, instead of further refining the research scope and/or search terms based on an evaluation of the search results, the students would give up and switch to a new direction. For instance, when one search returned no relevant sources, a participant carried out more searches on different websites (i.e., Google, New York Times, TED Talk videos, and YouTube) but kept using the same search term (#21038).

To understand the participants’ abilities to synthesize information from multiple sources, we examined the extent to which they were able to manage the search process and keep track of the information they found for future use. In coding the RAP recordings, we first evaluated and characterized each participant’s entire search process, asking (1) whether the participant demonstrated effective control of the process, and (2) whether the participant was looking for quick answers. Among all the participants, only two (4%) demonstrated effective control of the search process; the rest encountered various challenges to different degrees. Language-related issues caused some problems in choosing search engines, formulating and varying search terms, and understanding sources, but more importantly, these issues and a lack of information literacy skills seemed to compound each other, resulting in ineffective searches.

Perhaps one of the most important factors underlying the futile search processes is the participants’ understanding of the nature and purpose of research. Our observations of the characteristics of the participants’ search processes indicate that 15 participants (30%) seemed to believe that research should be a straightforward path linking a question directly to an answer, and their search process was very much driven by the desire to seek quick answers.

The participants’ plans for using the sources they identified further confirmed this inclination: only two participants (4%) attempted to paraphrase the information identified; four (8%) indicated that they would copy and paste what they found into their own papers; and none took the time to identify more specific sections to quote or summarize, or to consider how relevant information from the sources could be integrated into their own writing. Although we do not have enough information to make valid inferences about how the students might actually use the information in their writing, it is likely that their desire for quick answers during the research process will lead to patchwriting (Pecorari, 2008) or plagiarism concerns.

In terms of managing the research process, 19 participants (38%) indicated some awareness of the need to keep track of their research. However, only a few employed strategies to do so: three (6%) copied and pasted the URLs of sources into a Word document, and two (4%) used bibliography generators. The rest would either download the full text, note the title of the article, or simply indicate that they would bookmark the webpage if doing research on their own computers. Among all 50 participants, none used or indicated that they would use reference management programs. It is clear that explicit instruction would help students develop their ability to manage the research process for the purpose of writing. That said, two alternative explanations may also account for the inadequate use of record-keeping strategies. First, the time constraint (15 minutes) of the RAP session did not allow participants to fully demonstrate the actual process of research and writing. Second, participants might not have had access to all the record-keeping tools they needed when conducting research on someone else’s computer.

Authority Is Constructed and Contextual

In plotting how LILAC data align with ACRL practices through this frame’s knowledge practices, we specifically focus on questions five and six of the RAP coding scheme: “Evaluating Search Results” and “Evaluating Sources.” Findings from these two questions can help us better understand how students determine the credibility of sources and use their own information-seeking mental toolboxes to scan search engine result lists for sources, evaluate those sources (both multimodal and print), and determine markers of credibility. These data can also help us understand how students practice their judgment of credibility, specifically as they search for and analyze a range of sources in their research processes.

Drawing first from one of this frame’s knowledge practices, that learners “use research tools to determine the credibility of sources” and further understand “the elements that might temper this credibility” (ACRL, 2016, p. 4), we found that participants displayed shallow, simple behaviors that did not necessarily point to an understanding of how source credibility may be tempered. For example, when choosing a source from a search engine results list, participants were 47% more likely to choose a source based on its title than to choose a source for its relevance to their topic. This was the closest relationship in the data; it is visualized in Figure 19.3, which shows the frequency of source selection based on participants’ choices from their results lists. In fact, participants were 63% more likely to treat Title as a valid source-selection factor than Popularity of search results based primarily on a keyword search. This finding is significant because anecdotal research and scholarly “stories around the campfire” have long speculated that students choose sources based on list popularity. It’s the “Google thinks the source is important because it’s number one on the list” phenomenon. Our data show those stories may not be true, at least not at the frequencies we thought. The most distant significant relationship among the factors participants used to choose sources lies between Title and Brief Summary of Search Results, which we also call metadata: the information that authors code into their webpages to summarize their content and include relevant keywords. We found that participants were almost twice as likely to choose a source based on its title as on its metadata. What this means for instructors and librarians is that students either are not considering metadata from websites at all or do not consider that information relevant to credibility in their digital information-seeking practices.
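The “brief summary” a search engine displays typically comes from a page’s meta description, which authors embed in the HTML head (search engines generally extract their own snippet when it is missing). A minimal sketch of where that metadata lives and how it can be read, using only Python’s standard library; the sample page below is hypothetical:

```python
from html.parser import HTMLParser

class MetaDescriptionParser(HTMLParser):
    """Collects the content of <meta name="description"> from a page."""

    def __init__(self):
        super().__init__()
        self.description = None

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if tag == "meta" and attributes.get("name") == "description":
            self.description = attributes.get("content")

# Hypothetical page source showing the summary an author codes in.
page = ('<html><head>'
        '<meta name="description" content="A primer on the greenhouse effect.">'
        '</head><body><h1>Greenhouse Effect</h1></body></html>')

parser = MetaDescriptionParser()
parser.feed(page)
print(parser.description)  # A primer on the greenhouse effect.
```

Showing students that this one authored line is what populates a results-list summary may help them see metadata as a deliberate, evaluable signal rather than noise.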

Figure 19.3 Evaluating search results.

Our findings further inform ACRL’s knowledge practices in how participants evaluated the sources they chose from search results. Figure 19.4 shows the frequencies with which participants evaluated the sources they chose; these information-seeking behaviors are findings from question six of LILAC’s coding scheme. Out of 50 participants, 38 (76%) evaluated the sources they found based on the “relevance to topic” category on LILAC’s coding scheme. This finding suggests that self-perceived topic relevance plays a primary role in source evaluation. Although we have no means of measuring what participants considered relevant, many of them actually articulated the word “relevance” and its synonyms as they evaluated sources during RAP sessions. Because we coded this finding based on verbal cues from participants, we can conclude that participants did express an intent to evaluate a source based on self-perceived relevance.

Figure 19.4 Evaluating sources.

Interestingly, the frequency data in Figure 19.4 also show that participants evaluated sources based on credibility 9% of the time during recorded RAP sessions and based on titles 7% of the time. We see a clear decrease in the use of titles for source evaluation versus search-results evaluation, a decrease of 20%, which is significant because it marks both a clear reduction and a clear distinction in behaviors. Curiously, this finding also suggests that participants may view metadata and source abstracts (summaries) differently, with more ethos placed on abstracts than on website metadata. We noted only one instance in which a participant evaluated search results based on metadata, while we logged 13 instances of participants using abstracts to evaluate sources. This finding may point to a lack of digital literacy, that is, knowing how to use metadata in digital spaces. It could also imply that participants quickly skim search results instead of evaluating them more deeply. These findings could also suggest that FYW students may, indeed, act on instruction they have received regarding the effective use of abstracts to evaluate a source’s argument.

Further, participants favored sources that supported their arguments over sources that represented opposing views by two to one. This finding may indicate a need to encourage FYW students to seek viewpoints that oppose their stated arguments. Self-perceived credibility in search-results evaluation and source evaluation sits at frequencies of seven and nine, respectively. This is the most consistent finding between the two questions in terms of what participants articulated in RAP sessions as “credible” search results and sources. We might infer, then, that students have a notion of what credibility means as it relates to digital bibliographic research.

Drawing from another notable knowledge practice in this frame, that learners “recognize that authoritative content may be packaged formally or informally and may include sources of all media types” (ACRL, 2016, p. 4), we found that more than 8% of participants in our case study verbally asserted the credibility of visual sources. These sources included YouTube videos, Instagram images, specific image searches in Google, and TED Talk videos. For example, participant 21023 chose a YouTube video as a foundational (first) source and was even able to analyze one such video source in her own words when she had trouble doing so for textual sources. This participant, like many others, also recognized credible information on blogs, both professional and academic ones. Further, participant 21025 articulated the importance of videos not only in source selection but also as part of a more in-depth research process, saying: “TED is a really credible video and website. It is a good resource to support my claim.” This participant also scrolled through search results to click on a link to videos and photos of her topic. Participant 21032 searched specifically for videos as sources for her topic, titled “Race Issues in America,” from YouTube’s website. Using keywords, she generated a results list on YouTube. She chose a speech from President Obama as a credible source. She also articulated that Facebook pages from nonprofit organizations were credible sources. We may view this RAP session as a lesson in how students seek multimodal sources, both informal and formal, to curate credible information for bibliographic research.

PEDAGOGICAL IMPLICATIONS

This study aimed to examine the information literacy skills of 50 international multilingual students enrolled in first-year writing courses by analyzing their recorded RAP sessions. Creating a synergy between the LILAC Project and the ACRL Framework (2016) has enabled us to plot these participants’ information-seeking behaviors against three ACRL frames and their knowledge practices. Our findings indicate that these participants sit on the novice end of the information literacy continuum, with very few exceptions that move toward the expert end. Participants demonstrated narrow search scope and had difficulty accessing the information they needed for their writing projects. They also used limited search strategies without being able to refine or modify their searches. Further, most participants lacked strong search and source evaluation skills, which resulted in their determining credibility randomly rather than systematically or consistently. These results add to the empirical evidence concerning multilingual writers’ information literacy and contribute to the ongoing discussion on integrating IL into the writing curriculum. Another interesting finding of this study is that many participants showed appreciation of multimodal digital sources and diligently sought those sources to use in their FYW writing assignments. Not only do these findings answer the first two research questions, but they also serve as a springboard for pedagogical implications for both librarians and instructors working with international multilingual students in FYW courses in U.S. universities. These implications address our third research question and offer what we hope are points of consideration for both groups. Although we organize these implications by the setting in which each is most likely to be applied, we do not mean to separate librarians from writing instructors in training this growing and important student population in information literacy skills. On the contrary, we hope these implications can be a starting point for valuable and meaningful conversations and collaborations between the two groups, leading to more solid, situated, and reinforced skills.

In the Library

When instructing students on information literacy regarding the library itself, instructors and librarians should provide students with easy ways to access library websites. Many school libraries set the home page of computers housed in the library to the library website, and librarians start their instruction from that page without adequate direction on how to locate and access that website from other computers. We observed that LILAC participants sometimes struggled with nonintuitive navigation on library websites. A key partner that is often overlooked in these situations is the university’s information technology services unit. Many universities have Web designers who specialize in ease of navigation for instructional sites. Regardless of school size, we assert the importance of collaborating across work units, including those we might not usually work with but that can bring specialized expertise to our students’ learning.

Furthermore, preparing print or digital handouts, as well as videos and podcasts, that demonstrate the different search processes on a school library website would be helpful and would reach diverse learners and digital natives, who often obtain their instructional information in multimodal ways. Examples of possible instructional resources include catalog searches, database use, journal results, and interlibrary loan services.

Information literacy classes provided to students in FYW classes should not be limited to the traditional one session per class, or the “one-shot library sessions” as Artman, Frisicaro-Pawlowski, and Monge (2010) described them (p. 99). Extending these sessions and spreading them throughout the semester, instead of offering them once before students start their research-driven papers, should address students’ different writing, research, and rhetorical needs pertaining to information literacy. These multiple sessions are particularly important for multilingual students, whose information literacy needs and challenges can severely impede their information-seeking efforts and the desired outcomes of those efforts, as the findings of this study show.

For more success in these multiple sessions, we’d like to encourage librarians to collaborate closely with language instructors or specialists who may better understand how language barriers may complicate the task of navigation. For many multilingual writers enrolled in first-year writing courses, both the information literacy concepts and the technical terms that are used to encode the concepts can be fairly foreign. Therefore, a one-shot library instruction session may not be effective for multilingual writers. Instead, librarians can work with language instructors to create materials and offer regular workshops to help these students map technical terms to information literacy concepts and strengthen their information literacy skills in the process.

In the Classroom

Writing instructors are encouraged to create curricula that “bridge students’ prior and future experiences” (Albert & Sinkinson, 2016, p. 120). These curricula should not only include digital composition assignments that require research but should also encourage students to consider novel approaches to online searching for sources in different modes and media to complete those projects. These digital assignments should require students to “evaluate and reflect upon how scholarship is communicated in these formats” (Kalker, 2016, p. 220). Furthermore, incorporating digital composing projects is likely to introduce students to new information literacy conceptions and practices that they will increasingly need in their academic and professional futures. Our research shows the importance of implementing information literacy instruction across modalities of sources. As students search for new genres of sources, they should be “attentive to the fluidity of emerging communication formats” (Albert & Sinkinson, 2016, p. 120). Throughout the online searches examined in this study, many multilingual learners were keen on finding visual sources (videos, images, TED Talks) to support their arguments, demonstrating two significant dispositions: a recognition of the validity and authority of visuals in a research paper, and an understanding of the fluid concepts of authority and credibility.

As instructors and librarians collaborate to better understand how students process information literacies in digital spaces, we must consider our findings, which point toward a need to teach students how to evaluate credible visual sources. Instructors also need to consider how information literacy instruction can meet students at the point of need, given the reality that they do, indeed, search for visuals as academic sources. How writing instructors approach this specific knowledge practice is especially significant, given our findings that students search for multimodal sources and consider those sources relevant inclusions in their research processes and end products.

An essential part of recognizing the iterative nature of the research process is being able to reflect upon and learn from failed research attempts. For example, students may start a research process with the goal of identifying differences in the educational systems of two different countries. When students cannot identify self-perceived “useful” sources in these cases, instructors may find it important to help them evaluate the challenge and diagnose its cause, such as ineffective use of search terms or library databases, other technical issues, or simply that students are trying to find answers to their research questions in just one or two sources. This type of intervention may then lead to a discussion of how to refine the scope of research based on search results. Through large group, instructor-led class talks on effective evaluation of search results, complemented by small group experiential learning with revision of questions and purposes, students will come to understand that research is highly recursive and that it is through such iterative processes that they advance their own research.

Our findings further demonstrate the need to train students in the use of Boolean operators, effective search strategies that are likely to help them find more relevant online search results. Learning to use Boolean operators to yield more focused results would not only minimize the frustration we observed many participants articulate during their search processes but would also foster more effective and efficient information-seeking behaviors. Accordingly, learning which words should go together and which words or phrases should be left out of a search is a valuable critical thinking skill that instructors and librarians should encourage students to practice.
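To make the narrowing and broadening effects of these operators concrete, the short sketch below is our own illustration, not part of the study: the miniature “catalog” of document titles and the `search()` helper are invented for demonstration, and real library databases add further operators such as NOT and phrase quoting.

```python
# Illustrative sketch of Boolean search logic. The document titles
# and the search() helper are invented for demonstration only.
documents = [
    "bilingual education policy in the united states",
    "race issues in america: a historical overview",
    "multilingual writers and information literacy",
    "information literacy instruction in first-year writing",
]

def search(terms, mode="AND"):
    """Return documents containing ALL terms (AND) or ANY term (OR)."""
    if mode == "AND":
        return [doc for doc in documents if all(t in doc for t in terms)]
    return [doc for doc in documents if any(t in doc for t in terms)]

# AND narrows: only documents mentioning BOTH phrases survive.
print(search(["information literacy", "writing"], mode="AND"))
# OR broadens: a document matching EITHER term is kept.
print(search(["bilingual", "multilingual"], mode="OR"))
```

The principle students need is exactly this trade-off: combining terms with AND shrinks the result list toward relevance, while OR expands it to capture synonyms and related phrasings.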

CONCLUSION

The above discussion and implications indicate a clear purpose for empirical information literacy research across academic fields, as well as a need to continue this type of work in tandem with classroom and library instructional applications. During the course of this study, as we engaged in data analysis and interpretation, we identified some future directions other researchers may consider. Examining bibliographic research behavior beyond the 15-minute RAP sessions is an important endeavor that would reveal more about the online research strategies (or lack thereof) that multilingual students employ. Similarly significant is unpacking students’ thinking about their online research behavior through post-session interviews and focus groups, a triangulation method that would help us understand more about students’ conceptualization of online information-seeking behaviors. Another possible direction is to track the sources students identify in research sessions, such as those captured in RAP, to study how students use them in their writing.

LILAC researchers continue to conduct studies at diverse institutions with participants of varying matriculations, ranging from first-year writers to graduate students, in order to gain a better understanding of the information literacy skills of students at different levels of academic institutions. Given the collaborative, multi-institutional nature of the LILAC Project, we invite instructors and librarians to network with our group. Explore LILAC projects, presentations, publications, and collaboration opportunities on our website, http://lilac-group.blogspot.com/, or follow us on Twitter: @LILACProject.

REFERENCES

Albert, M., & Sinkinson, C. (2016). Composing information literacy through pedagogical partnerships. In R. McClure & J. P. Purdy (Eds.), The future scholar: Researching and teaching the frameworks for writing and information literacy (pp. 111–129). Medford, NJ: Information Today.

Artman, M., Frisicaro-Pawlowski, E., & Monge, R. (2010). Not just one shot: Extending the dialogues about information literacy in composition classes. Composition Studies/Freshman English News, 38(2), 93–110.

Association of College and Research Libraries. (2016). Framework for information literacy for higher education. Retrieved from http://www.ala.org/acrl/standards/ilframework

Behrens, S. J. (1994). A conceptual analysis and historical overview of information literacy. College and Research Libraries, 55(4), 309–322.

Bohannon, J. (2015). Not a stitch out of place: Assessing students’ attitudes towards multimodal composition. Bellaterra Journal of Teaching & Learning Language & Literature, 8(2), 33–47.

Brown, C., Murphy, T. J., & Nanny, M. (2003). Turning techno-savvy into info-savvy: Authentically integrating information literacy into the college curriculum. Journal of Academic Librarianship, 29(6), 386–398.

Bruce, C. S. (1997). The relational approach: A new model for information literacy. New Review of Information and Library Research, 3, 1–22.

CWPA, NCTE, & NWP. (2011). Framework for success in postsecondary writing. Retrieved from http://www.ncte.org/positions/statements/collwritingframework

Doyle, C. S. (1994). Information literacy in an information society: A concept for the information age. Syracuse, NY: ERIC Clearinghouse. ED 372763.

Huston, M. M. (1999). Towards information literacy: Innovative perspective for the 90s. Library Trends, 39(3), 187–362.

Jordan, J., & Kedrowicz, A. (2011). Attitudes about graduate L2 writing in engineering: Possibilities for more integrated instruction. Across the Disciplines, 8(4). Retrieved November 20, 2016, from http://wac.colostate.edu/atd/ell/jordan-kedrowicz.cfm

Kalker, F. (2016). Digital/critical: Reimagining digital information literacy assignments around the ACRL framework. In R. McClure & J. P. Purdy (Eds.), The future scholar: Researching and teaching the frameworks for writing and information literacy (pp. 205–222). Medford, NJ: Information Today.

Kress, G. (2003). Literacy in the new media age. New York: Routledge.

Leu, D. (2000). Literacy and technology: Deictic consequences for literacy education in an information age. In R. Barr, M. Kamil, P. Mosenthal, & D. Pearson (Eds.), Handbook of reading research (pp. 743–770). New York: Routledge.

Leu, D. J., Kinzer, C. K., Coiro, J. L., & Cammack, D. W. (2004). Toward a theory of new literacies emerging from the Internet and other information and communication technologies. In R. B. Ruddell & N. Unrau (Eds.), Theoretical models and processes of reading. Newark, DE: International Reading Association.

Markauskaite, L. (2006). Towards an integrated analytical framework of information and communications technology literacy: From intended to implemented and achieved dimensions. Information Research, 11(3).

McDowell, L. (2002). Electronic information resources in undergraduate education: An exploratory study of opportunities for student learning and independence. British Journal of Educational Technology, 33(3), 255–266.

Mina, L. W., & Walker, J. R. (2016). International students as future scholars: Information literacy skills, self-assessment, and needs. In R. McClure & J. P. Purdy (Eds.), The future scholar: Researching and teaching the frameworks for writing and information literacy (pp. 63–88). Medford, NJ: Information Today.

Pecorari, D. (2008). Academic writing and plagiarism: A linguistic analysis. New York: Continuum.

Pinto, M., Antonio Cordón, J., & Gómez Díaz, R. (2010). Thirty years of information literacy (1977–2007): A terminological, conceptual and statistical analysis. Journal of Librarianship and Information Science, 42(1), 3–19.

Purdy, J. P. (2010). The changing space of research: Web 2.0 and the integration of research and writing environments. Computers and Composition, 27(1), 48–58.

Spievak, E. R., & Hayes-Bohanan, P. (2013). Just enough of a good thing: Indications of long-term efficacy in one-shot library instruction. Journal of Academic Librarianship, 39, 488–499.

Valmont, W. J. (2003). Technology for literacy teaching and learning. Boston: Houghton Mifflin.

Wang, R. (2016). Assessment for one-shot library instruction: A conceptual approach. Libraries and the Academy, 16(3), 619–648.

Zhao, J. C., & Mawhinney, T. (2015). Comparison of native Chinese-speaking and native English-speaking engineering students’ information literacy challenges. Journal of Academic Librarianship, 41, 712–724.