An In-depth Exploration of the Praxis of Computer-assisted Qualitative Data Analysis Software (CAQDAS)

NVivo is a methodological tool for analyzing qualitative data. The software belongs to the genre of Computer-assisted Qualitative Data Analysis Software (CAQDAS). This qualitative case study aimed to explore the practices of NVivo adopters in a state university in the Philippines during Academic Year 2019-2020. Researcher-made and foreshadowed Focus Group Discussion (FGD), in-depth interview, and observation protocols were utilized to gather data from the participants. The conceptual analysis was done using NVivo 12 Plus, while the researcher did the analytic analysis. The study revealed that the NVivo adopters are generally tech-savvy and stationed in the research office. The adopters varied in the speed with which they learned the software: some were fast learners, while others were slow. They regularly asked questions of the researcher-resource speaker, even though the NVivo operation manual had been distributed to them. Seven (7) themes emerged that may optimize NVivo's functionalities in qualitative data analysis, including requirements for improved coding practices; the need for enhanced visualization skills; the importance of knowledge of the NVivo research cycle; qualities of NVivo adopters; motivation in using NVivo; and practices for practical qualitative data analysis using NVivo. It is thus concluded that the state university administration needs to consider several mechanisms to improve the use of CAQDAS, specifically NVivo, in qualitative data analysis.


Introduction
'Not everything that counts can be counted, and not everything that can be counted counts' (W. B. Cameron). This statement relates to the need for research to determine how counting affects the nature and principles of qualitative data. Since the late 20th century, qualitative research has been gaining momentum, and numerous qualitative studies have since been conducted in various disciplines and fields. As the volume of and demand for data surge, time- and cost-efficient qualitative data analysis software becomes highly sought after (Glaser, 1998; Costa, Reis & Moreira, 2019; Silver & Bulloch, 2017). These necessities gave rise to and popularized Computer-assisted Qualitative Data Analysis Software (CAQDAS) packages, the counterpart of the Statistical Package for the Social Sciences (SPSS) in quantitative research. NVivo, one of the CAQDAS packages, is considered the most popular and most-used package in the Philippines and abroad, primarily because of its user-friendly interface similar to Microsoft Outlook, advanced querying functions, and powerful source and coding comparison tools (Hoover & Koerber, 2011; QSR International, 2018; Lewis & Silver, 2017). In a state university in the Philippines, a qualitative research course is offered in the graduate school programs. However, only the Doctor of Philosophy in Social Science program of the College of Arts and Sciences offers two (2) qualitative research courses: SSC 617, Qualitative Research, and SSC 618, Qualitative Data Analysis. The Qualitative Data Analysis course discusses the aspects of data analysis and incorporates the use of CAQDAS NVivo as a digital dimension in teaching-learning processes and in analyzing qualitative research. Nevertheless, some Qualitative Research courses in other programs also incorporate the use of CAQDAS NVivo in analyzing qualitative data.
In 2018, through the University Research Development Center (URDC), the state university acquired eight (8) perpetual licenses of this software, similar to other leading universities and research institutions in and outside the country. This is because research articles that utilize this software have gained greater acceptability in recognized peer-reviewed journals, such as those indexed in Scopus, Web of Science, and Academic Search Premier (QSR International, 2018; MacMillan & Koenig, 2004; Miles & Huberman, 2014). Despite the NVivo training provided by the University, some adopters still experienced difficulty running NVivo on their own, exploring or 'playing' with the data through autocoding, and producing visualizations that could be corroborated with the themes when reporting the findings of a study. Similarly, the researchers were still stumbling in managing and analyzing their qualitative data despite the use of NVivo 12 Plus. According to Miles and Huberman (2014), such loose analysis may result in weak qualitative data analysis and output. Besides, limited information is available on the nature and practical use of these programs in the Philippine context, which has led to various misconceptions regarding the use of the program (MacMillan & Koenig, 2004), even though CAQDAS by no means remains a novel approach in qualitative data analysis (Costa, De Sousa, Moreira & De Souza, 2017). Hence, more research was needed to determine how researchers were leveraging the features, functionality, and methodological operations offered by NVivo in relation to the principles of qualitative data analysis (Blismas & Dainty, 2003, in Woods, Paulus, Atkins & Macklin, 2015; MacMillan & Koenig, 2004). The preceding scenarios propelled the researcher to conduct a case study on the practice of NVivo in managing and analyzing qualitative data among its adopters in a state university in the Philippines during the Academic Year 2019-2020.
This research explored how the adopters leverage NVivo to analyze qualitative data and sought to ascertain their satisfaction and/or complaints about the software. Through this, it is hoped that such practices will be enhanced in agreement with the claims of Elliott (2018), Miles, Huberman, and Saldaña (2014), Zamawe (2015), and Sinkovics and Alfoldi (2012), as well as Hoover and Koerber's (2011) principles of credibility, efficiency, multiplicity, and transparency in research using NVivo. This is considered a vital tool for insightful discussion and thoughtful evaluation of the research findings.

Evolution of Qualitative Data Analysis
Qualitative data usually come in voluminous, messy, unwieldy, and discursive form, 'an attractive nuisance' (Miles, 1979, in Ritchie & Lewis, 2003). These may be derived from extensive field notes, hundreds or thousands of pages of transcripts from individual interviews or focus groups, documents, photographs, and videos. Hence, researchers must find a way to handle these data alongside a rigorous and high-quality research process. That is why qualitative data analysis has been tagged as a labor-intensive undertaking, from gathering data to analyzing and reporting findings. One technique for coping is to read and review the qualitative data before coding, until themes are derived (Welsh, 2002; Beecham, n.d.).
Unlike quantitative analysis, qualitative analysis has no exact recipe and no defined end, with few fixed rules or procedures for conducting it. Data analysis need only be creative, logical, and systematic in organizing and analyzing the data. The analysis follows these processes: reading through the text; coding chunks; rereading the data; changing codes; combining codes; adding codes; deleting codes; combining codes into categories; deriving concepts from the categories; looking at new data; and so on (Lichtman, 2014; Ritchie & Lewis, 2003).
Conventionally, qualitative data analysis requires multiple cards with codes, hung on a wall or placed on a table. These codes are then placed under others to create categories along with subcategories. They can be moved and regrouped until concepts emerge. Another way of organizing the data into categories is to use markers or pencils of different colors and to sort like colors together. In the same vein, Tilahun Nigatu (2009) held that manual qualitative analysis requires: a) transcription in a word processor; b) reproducing multiple photocopies of the text; c) painstakingly reading through and assigning codes to the material; d) cutting the pages up into coded passages; and e) manually sorting the coded text in order to analyze the patterns found. For this reason, Lichtman (2014) reminded analysts that the above-mentioned methods work best with a small amount of data but not with a large amount.
In a similar vein, there are a number of manual methods of data analysis. One traditional method identified by Russell and Gregory (1993) involves manually assigning quotations to a category, cutting and pasting them onto different colored paper, and reorganizing them into sub-themes. The result is a mountain of paper that has to be managed and interpreted, a method that tends to be time-consuming and messy. A related study by McLafferty and Farley (2006) revealed that the manual method of data analysis can easily lose or overlook data hidden in a mountain of paper and does not allow researchers to revisit their analysis visually, which would enhance their familiarity with the data. These circumstances pose a big challenge to social scientists in handling a large amount of data efficiently and building a well-structured data management system, and CAQDAS packages have been viewed as a strong advantage here (Miles & Huberman, 1994, in Wickham & Woods, 2005). Despite the diversity of qualitative methods, data are often obtained through participant interviews. The subsequent analysis is based on a common set of principles: transcribing the interviews; immersing oneself in the data to gain detailed insights into the phenomena being explored; developing a data coding system; and linking codes or units of data to form overarching categories/themes, which may lead to the development of theory (Morse & Richards, 2002). Analytical frameworks such as the framework approach (Ritchie & Lewis, 2003) and thematic networks (Attride-Stirling, 2001) are gaining popularity because they systematically and explicitly apply the principles of qualitative analysis to a series of interconnected stages that guide the process.
A related study by Welsh (2002) provides a good analogy of how computer software can enhance the task of qualitative analysis: it is useful to think of the qualitative research project as a rich tapestry. The software is the loom that facilitates the knitting together of the tapestry, but the loom cannot determine the final picture on the tapestry. Through its advanced technology it can speed up the process of producing the tapestry, and it may also limit the weaver's errors, but for the weaver to succeed in making the tapestry, she or he needs an overview of what she or he is trying to produce. It is quite possible, and quite legitimate, that different researchers would weave different tapestries from the same available material, depending on the questions asked of the data. However, they would have to agree on the material they begin with. Software programs can be used to systematically explore this basic material, creating broad agreement amongst researchers about what is being dealt with. Hence, the quality, rigor, and trustworthiness of the research are enhanced.
Related literature authored by Firestone and Dawson (1982) found that there are three general approaches to QDA: intuitive, procedural, and intersubjective. In actuality, these approaches are generally used in combination because each has distinct strengths and weaknesses and so contributes differently to the research process. Individual intuition is the richest and primary source of subjective understanding in qualitative research. A major problem with the intuitive approach, however, is that intuition is such a private process that it is difficult to convey the methodology to a reader and allow it to be subjected to external scrutiny. The reader knows little about how the researcher arrived at the conclusions or how firmly they are grounded. Hence, research reports in which intuition is used alone sometimes lack credibility. Another, and perhaps more serious, problem is that the findings may not have undergone the sorts of confirmatory checks that are common in procedural and intersubjective approaches. For this reason, individual intuition should almost always be combined with other, more explicit and deliberately confirmatory, approaches. Procedures are essentially rule-bound. In the extreme case, the researcher withholds belief and follows a procedure to its logical end before accepting or rejecting a conclusion. In practice, however, procedures vary in the extent to which they allow judgment to intervene as they are being carried out. A variety of procedures exist to help discipline qualitative inquiry, including data display techniques, triangulation, guidelines for induction, and quantitative techniques. Intersubjective approaches require interaction among researchers, or between researchers and setting participants, about the research findings. Depending on the developmental stage of the research effort, these approaches can both enhance understanding and help verify findings.
In fact, both often take place simultaneously through the give and take of discussion and joint work. Almost all of the procedures employed in qualitative research are subject to multiple interpretations, not unlike procedures used in quantitative research (Cronbach, 1980). Intersubjective approaches provide a way of "negotiating" these interpretations. They force researchers to confront alternative explanations and often surface new data at the same time. In the process, a consensus on a "best possible" interpretation usually emerges. However, there is the possibility that error will result from the group process (Firestone & Dawson, 1982).

Coding Process
"The need for coding is simple: Text data are dense data, and it takes a long time to go through them and make sense of them" (Creswell, 2015, p. 152). In qualitative research, coding need not be deferred to the data analysis stage; it can begin during data collection. Preliminary jottings and pre-coding are very important to tie the research to a solid analysis. Researchers do not have to wait until all their data have been collected and assembled to begin preliminary coding. As they transcribe interviews, write up field notes, or file relevant documents, they can jot down preliminary phrases or words as "analytic memos" in a research journal for future reference. Memory doesn't always serve us as well as we might like! Such analytic memos should be distinctively marked to avoid mixing them with the raw data. Researchers may also choose to "precode" (Layder, 1998) by underlining, highlighting, circling, bolding, or coloring salient passages that are deemed striking (De Witt, 2013).
According to Saldaña (2009, p. 3), a code in qualitative inquiry symbolically assigns a summative, salient, essence-capturing, and/or evocative attribute to a portion of language-based or visual data, guided by the formula 'from codes and categories to theory'. Miles and Huberman (2014) also said, "Codes are tags or labels for assigning units of meaning to the descriptive or inferential information compiled during a study." These are usually attached to 'chunks' of varying size: words, phrases, sentences, or whole paragraphs, connected or unconnected to a specific setting. According to the study of Glaser and Laudel (2013), the coding process is inevitably influenced by the researcher's purpose, approach, personal background, and experience. As Miles and Huberman put it, some codes do not work; others decay: 'No field material fits them, or the way they slice up the phenomenon is not the way the phenomenon appears empirically. This issue calls for doing away with the code or changing its level' (1994, p. 61). It is always difficult to convey examples of coding without recounting the entire scope of the research (Cope, 2009).
A study by Kawulich (n.d.) stated that a good code has five elements: 1) a label (i.e., a name); 2) a definition of what the theme concerns; 3) a description of how to know when the theme occurs; 4) a description of any qualifications or exclusions to the identification of the theme; and 5) a listing of examples to eliminate confusion. The label should be developed last and should be conceptually meaningful, clear and concise, and close to the data.
Before proceeding to code the data, Saldaña (2010) said that researchers must possess the following personal attributes: organization, perseverance, the ability to deal with ambiguity, flexibility, creativity, rigorous ethics, and an extensive vocabulary. There are likewise indicators for coding, which include: word repetitions, indigenous categories, key-words-in-context, compare and contrast, social science queries, searching for missing information, metaphors and analogies, transitions, connectors, unmarked text, pawing, and cutting and sorting (Gibbs, Clarke, Taylor, Silver & Lewins, 2012).
Actually, Huberman and Miles (1994) divide data analysis into three stages: data reduction, data display, and conclusion drawing and verification. Codes, according to them, may fall within or involve all three stages. A related study by Coffey and Atkinson (1996) stressed that although coding may be part of the analysis process, it should not be thought of as a substitute for analysis. Coding links data fragments to concepts, but the analytic work lies in establishing and thinking about such linkages, not in the mundane processes of coding. Similarly, the study of Thompson (2002) contends that the coding process does not merely consist of a random division into smaller units, but requires skilled perception and artful transformation. Different analysts may use different coding systems for the same data, and the same analyst may apply different coding systems at different stages; there is no one ideal coding structure. According to Saldaña (2016), a minimum of codes will do, since there is no standardized or magic number to aim for; besides, the number of codes depends on the research question. Moreover, not everything needs to be coded, based on the study done by Cunningham (2004). Saldaña (2016) also believed that every recorded fieldwork detail is worthy of consideration, but the most salient portions of the corpus should be prioritized, and much else can be deleted. Deleting data requires great courage, though it is hardly necessary in this age of digital storage.
Coding is regarded differently by different scholars. It has been considered an act of 'winnowing' (Creswell, 2015, p. 160) and of 'data condensation' (Miles, Huberman & Saldaña, 2014, p. 73). Apart from this, a related study by Elliott (2018) emphasized that coding is also a method of discovery. This can be attained by reading and re-reading a chunk of data carefully until it yields an intimate and interpretative familiarity with the data. Henceforth, familiarity with the data has potential value in deciding whether to code or not to code data. It must be remembered that familiarity with code labels breeds not contempt but understanding. Nonetheless, these codes are oftentimes composed of a word or a short phrase that symbolically assigns a summative, salient, essence-capturing, and/or evocative attribute to a portion of language-based or visual data (Saldaña, 2016, p. 4). Creswell (2013) also suggests coding in rounds: coding paragraphs in a rough first draft, then refining the labels into smaller pieces through further re-readings. Miles, Huberman, and Saldaña say codes depend on the study and your aims within it. In the same vein, Saldaña distinguishes between "lumpers" and "splitters": the former takes a "lump," or large excerpt, and gives it one code, while a "splitter" splits the data into smaller codable moments. "Lumping is an expedient coding method, while splitting generates a more nuanced analysis from the start" (Saldaña, 2016, p. 24).
Further, Miles, Huberman, and Saldaña (2014) advocated multi-factorial coding. A descriptive code assigns a label in the form of a noun, while a process code "uses gerunds ('-ing' words) exclusively to connote observable and conceptual action in the data" (2014, p. 75). Codes must be intimately related both to the research questions and to the procedures adopted to generate them. The results of Cunningham's (2004) investigation suggested that codes should provide "precision of name" to convey their true meaning to the reader (2004, p. 67). A priori codes, set beforehand, can be categorized and made consistent within categories, while emergent codes are more likely to require editing. However, changing a label does not necessarily mean that it no longer describes the data. In essence, Miles, Huberman, and Saldaña (2014) espouse the idea that codes should have some conceptual and structural unity; that is, codes should relate to one another in coherent, study-important ways and "be part of a unified structure" (2014, in Elliott, 2018). Moreover, codes should be written in the margin, right next to the indicator. Creating two codes for the same slice of data is acceptable.
Comparing codes during the memo-writing phase will help you develop new, more exacting codes for the data received (Glaser, 1998). "Counting is easy; thinking is hard work" (Saldaña, 2016, p. 41). Whether or not to count codes is rarely asked, but it remains a critical question among qualitative researchers, especially those who are against the principle of counting. They believe that counting conveys a quantitative orientation of magnitude and frequency contrary to qualitative research (Creswell, 2013, p. 185). On the other hand, counting provides a useful indicator of the importance of a given code. A related study by Harding (2013) contends that the number of times a code appears in the data is not in itself significant; it is the scope of the data it covers that makes it significant. Saldaña (2016, p. 41) further pointed out that frequency of occurrence is not a necessary indicator of significance. Creswell (2013, p. 185) supports this and claims that counting codes contrasts with the very principle of coding, namely that 'all codes should be given equal emphasis'. Counting heightens the risk of overlooking significant and interesting data. Saldaña (2016) likewise warns of the possibility that words appearing frequently in a dataset may not be a key to unlocking the analysis but can instead suggest something unimportant, inconsequential, and unrelated to the research questions and purpose (2016, p. 25). Nevertheless, researchers are not prevented from counting codes, as long as they know how to use counts and their implications in later analysis (Elliott, 2018).
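As a minimal illustration of the counting debate above: the tally itself is mechanically trivial, and the interpretive work lies elsewhere. The Python sketch below (with hypothetical codes and excerpts, not data from this study) shows how code frequencies are computed:

```python
from collections import Counter

# Hypothetical coded excerpts: each pair links a data chunk to the codes applied to it.
coded_segments = [
    ("I ask the trainer whenever I get stuck", ["seeking help"]),
    ("The autocoding feature confused me at first", ["learning curve", "autocoding"]),
    ("I explore the word cloud before coding", ["visualization", "exploring data"]),
    ("Autocoding saved me hours of work", ["autocoding", "efficiency"]),
]

# Tally how often each code appears across the corpus.
code_counts = Counter(code for _, codes in coded_segments for code in codes)

for code, count in code_counts.most_common():
    print(f"{code}: {count}")
```

In line with Harding (2013) and Saldaña (2016), the fact that "autocoding" tops this toy tally does not by itself make it the most significant theme; frequency is only one input to the analysis.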

Finding Patterns, Categories, and Themes
A related study conducted by Akinyoade (2013) contends that in coding qualitative data and writing memos, the researcher should look for the themes, patterns, and relationships that are emerging across the data. This involves looking for similarities and differences in different segments and sets of data and seeing what different groups of participants are saying. The researcher also needs to focus on data that do not fit the emerging themes, patterns, and relationships. The process involves moving back and forth as the analyst looks for exceptions, contradictions, and surprises around a particular theme in the data. Researchers may realize that they can collapse different categories under one main overarching theme. Once they have established overarching themes and their relationships, they may notice that some cases in the data do not comply with the themes. These are "outlying" data, or negative cases. The analyst should focus on the "outlying" data and look for explanations of why they do not fit the patterns. The memos are part of the data to be analyzed at this stage, and researchers may even need to write more memos. These activities are the culmination of qualitative data analysis, and they can be overwhelming at times.
Based on the investigation of Green, Willis, Hughes, Small, Welch, Gibbs, & Daly (2007), the detailed examination of the data is carried out to categorize the ways in which research participants speak about aspects of the issue under investigation. This linking of codes aims to create coherent categories and is the third step in the analysis of interview data. It is concerned with looking for a 'good fit' between codes that share a relationship. Data usually contain contradictions and exceptions and these need to be sorted into different categories, generating an explanation for everything that is observed or recorded in the data. Analytic categories are 'saturated' when there is sufficient information for the experience to be seen as coherent and explicable. Eventually, through a process of diversifying and intensifying the data generated and analyzed, researchers should be able to make sense of the experience of all people in all categories in the study or explain the conditions under which exceptions occur. Many researchers stop at this step and report findings based on categories. This is acceptable if made explicit. A simple, descriptive study is one that can make a modest knowledge claim with appropriately limited conclusions. At this level, the focus is on dominant categories (frequently mistaken as themes), with the presentation of illustrative quotes. A relatively common problem with the description of categories is that researchers enumerate responses to research questions ('most', 'some', 'a few' participants) to help explain differences in the data, rather than maintaining the emphasis on understanding the meanings in the data based on the full range of accounts. Selectively analyzing the data in this way provides only partial evidence and one-sided meaning. It fails to account for the full range of experiences and provides no explanation of those not included in the category.
A related study by Green, Willis, Hughes, Small, Welch, Gibbs, and Daly (2007) revealed that the final step in the analysis of qualitative data is the identification of themes. A theme is more than a category. The generation of themes requires moving beyond a description of a range of categories; it involves shifting to an explanation or, even better, an interpretation of the issue under investigation. Generating themes requires testing the explanation both against the data and against theory, specifically the theoretical concepts relevant to the study. It is this step that is crucial to linking the results of an interview with what we know about people in other settings, and the extent to which this is achieved determines the extent to which the study is generalizable to other groups and settings. The identification of themes, rather than categories, is therefore the litmus test of a study that produces stronger evidence. The authors argue that a high-quality paper identifies themes by linking the categories with social theory until an overriding explanation is eventually arrived at which makes sense of the various patterns that have emerged at the descriptive level. It is this capacity to explain the social phenomena observed in the study that makes the findings generalizable to other settings, thus providing better evidence. A theme involves five aspects: 1) the overall entity or experience; 2) the structure, or the basis of the experience; 3) the function, or the nature of the experience as a meaningful whole; 4) the form, or the stability or variability of the various manifestations of the experience; and 5) the mode, or the recurrence of the experience (De Santis & Ugarriza, 2000, in Kawulich, n.d.).

Origin and Specifications of NVivo
Non-numerical Unstructured Data Indexing, Searching and Theorizing (NUD*IST) was originally created by Tom Richards and Lyn Richards at La Trobe University, Australia, in the early 1980s. It was later renamed NVivo, with QSR International as its developer (Andrade, Costa, Linhares, De Almeida & Reis, 2019). NVivo is an abbreviation of NUD*IST Vivo. Since then, the program has undergone numerous modifications, and its successive versions have been enriched with newer functions better tailored to the needs of qualitative researchers; a new version of the application is launched every 2-3 years. There are three (3) editions of the NVivo for Windows software: NVivo Starter, NVivo Pro, and NVivo Plus. Each edition features a different level of functionality to support a range of projects and research needs (Sanchez, 2018). NVivo 12 Plus is the newest version of the software for qualitative and mixed-methods research.
NVivo belongs to the genre of CAQDAS programs, where CAQDAS stands for Computer-assisted Qualitative Data Analysis Software. The studies of Barry (1998) and Friese (2012) support the idea that CAQDAS does not actually analyze data; it is simply a tool for supporting the process of qualitative data analysis. A related study by Cabrera (2018) further contends that NVivo does not favor a particular methodology; rather, it is designed to facilitate common qualitative techniques no matter what method is used. It is also one of the paid-license packages that stand out; others are Dedoose, WebQDA, MAXQDA, and QDA Miner. Besides, NVivo is very useful for discourse analysis, ethnography, phenomenology, grounded theory, mixed methods, and many more (Cabrera, 2018).
NVivo has become a more popular CAQDAS package because of its user-friendly, comprehensible, and easy-to-learn interface. This allows even those not proficient in technology to navigate and learn its functionalities easily, with minimal help from skilled users or none at all (QSR International, 2019). Similarly, NVivo works best within this research cycle: Import, Explore, Code, Query, Reflect, Visualize, and Memo. Essentially, the process follows the cycle from import to memo but allows an analyst to go back to any of the steps as the project progresses. This cycle serves as a guide for researchers who wish to uphold the core CAQDAS principles of transparency, expediency, and replicability of findings (QSR International, 2019).
Along with this, researchers must know the different terms used in NVivo (Sanchez, 2018). Sources are the research materials, including documents, PDFs, datasets, audio, video, pictures, memos, and frameworks. Source classifications allow you to record information about your sources. Coding is the process of gathering material by topic, theme, or case. Nodes serve as containers for your coding that represent themes, topics, or other concepts; they allow you to gather related material in one place so that you can look for emerging patterns and ideas. Cases are containers for your coding that represent your 'units of observation'. Case classifications allow you to record information about your cases.
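The relationships among these terms, with sources holding raw material and nodes gathering coded references to it, can be pictured with a small data structure. The following Python sketch is only an illustration of the terminology, not NVivo's actual internal model; all names and excerpts are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Source:
    """A research material, e.g. an interview transcript or a PDF."""
    name: str
    text: str

@dataclass
class Node:
    """A container for coding that represents a theme, topic, or concept."""
    label: str
    references: list = field(default_factory=list)  # (source name, excerpt) pairs

    def code(self, source: Source, excerpt: str) -> None:
        # Gather a chunk of a source under this node, so related
        # material sits in one place for pattern-finding.
        self.references.append((source.name, excerpt))

# Coding an excerpt from a hypothetical FGD transcript to a thematic node.
transcript = Source("FGD-01", "We still ask the resource speaker when NVivo crashes.")
help_seeking = Node("seeking help")
help_seeking.code(transcript, "ask the resource speaker")
```

Cases and classifications would be analogous containers and attribute records attached to sources or units of observation.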
A related study by Cervantes, García, and Trigueros (2018) presents three (3) contributions of NVivo to data production and analysis: economy, credibility, and quality. Economy, because it facilitates the most arduous tasks of qualitative research, especially the identification of central categories and subcategories and their subsequent codification (Cervantes, García & Trigueros, 2018). The second contribution is credibility: NVivo restores rigor to the analysis process, and this credibility can be made visible through different query options such as matrix coding, coding, compound, and word frequency queries. The query strategies employed will also depend on the paradigm within which the investigation is positioned and on the method or techniques used to produce the information. The third contribution of the NVivo software is the improvement of the overall quality of the investigation. The software serves as a tool, and the decisions of how, when, and for what to use it will always be in the hands of the researcher.
Using NVivo to support a research project, the researcher is able to work more efficiently; save time; quickly organize, store, and retrieve data; uncover connections in ways that are not possible manually; and rigorously back up findings with evidence. With NVivo, the research becomes portable: work can be done in the field, at work, or at home. NVivo can also keep projects secure. The budget required for this software was $6,000 or P320,000.00 (6 licenses/users), sourced from the 2017 Unexpected URDC Balance.
NVivo is the leading CAQDAS research software used by leading universities, government agencies, non-governmental organizations, research institutions, and business organizations in analyzing qualitative and mixed-methods data, as well as in program or product development, assessment, and monitoring and evaluation (Sanchez, 2017). In fact, aside from WVSU, NVivo is also used in some leading universities in the Philippines. A related study by Auld, Diken, Bock, Boushey, Bruhn, Cluskey, Edlefsen, Goldberg, Misner, Olson, Reicks, Wang, and Zaghloiul (2007) revealed that the decision whether to learn NVivo may be influenced by how frequently the researcher does qualitative analysis. If qualitative research is done infrequently and data sets are limited (fewer than 20 interviews or focus groups), then analysis by hand may prove more efficient and inexpensive. If a researcher frequently conducts qualitative analysis, the following need to be considered: personal preference for conducting analysis by hand or by computer, software cost, the expectation of using NVivo for future projects, and the time investment needed to learn NVivo.

Potentialities of NVivo
Related studies by Welsh (2002) and Beecham (n.d.) suggested that NVivo is a very good data management tool for qualitative research. Before the introduction of computer software programs, qualitative researchers were required to transcribe focus group discussions and interviews, categorize themes, concepts, and ideas by cutting up transcripts, and sort particular sections into themes using a color-coding system. NVivo also allows researchers to analyze text, images, and videos within the same research project (QSR International Pty Ltd, n.d.). The intrinsic value of NVivo lies in its capacity to code and categorize various data formats. Similar to other CAQDAS, using NVivo can minimize the researcher's bias. However, a disadvantage ascribed to qualitative data analysis is that it does not always include the evidence needed to support observability and measurability. Using particular programmatic functions overcomes these limitations. The word count function permits the measurement and logical arrangement of qualitative data. Such an approach can be useful in analyzing qualitative data from reflective writing, interviews, open-ended survey responses, and the like. NVivo also provides a matrix coding feature that allows researchers to compare qualitative data across and within categories. Another study (2002) found that NVivo 12 Plus has the following features: the capacity to store, organize, categorize, analyze, view, and discover the data; the ability to analyze and organize qualitative data in the formats of audio, video, digital photo, Word, PDF, spreadsheet, rich text, simple text, social media data, and web data; the ability to destructure and restructure data, especially in the coding process; and the ability to transfer data from programs such as Microsoft Excel, Microsoft Word, IBM SPSS Statistics, EndNote, Microsoft OneNote, SurveyMonkey, Evernote, and TranscribeMe.
According to the findings of Bourdon's (2002) study, using this software reduced the need for an army of research assistants and lowered the amount spent on research, thus allowing the researchers to spend ample time on analytic analysis.
Moreover, the results of Auerbach and Silverstein's (2003) investigation revealed that the following should be done before importing data into NVivo. Making the text manageable: Step 1, explicitly state your research concerns and conceptual framework; Step 2, select the relevant text for further analysis. Hearing what was said: Step 3, record repeating ideas by grouping together related passages of relevant text; Step 4, organize themes by grouping repeating ideas into coherent categories. Developing theory: Step 5, develop theoretical constructs by grouping themes into units consistent with your theoretical framework; Step 6, create a theoretical narrative by retelling the participant's story in terms of the theoretical constructs. According to Edhlund (2011), analysis in NVivo comes in four stages: 1) Descriptive, which involves entering the project details and data into NVivo; 2) Topic, which includes coding the data and organizing it in containers called nodes; 3) Analytic, which involves merging nodes and running queries; and 4) Conclusions, where NVivo assists in organizing the data so that the analyst can draw conclusions that are reliable and unproblematic.
A related study by Zamawe (2015) discussed that the search tools in NVivo allow researchers to interrogate and explore data at a particular level. This can improve the rigor of the analysis process by validating the researcher's own impressions of the data. It is also very useful when working with a big project, since it improves the accuracy of qualitative studies. Some queries are useful for identifying common words and codes, which can be the starting point for coding. Welsh's (2002) study confirmed this but also contended that the software is less useful in addressing issues of validity and reliability in the thematic ideas that emerge during data analysis, owing to the fluid and creative way in which these themes emerge. Hence, it is imperative that researchers recognize the benefits of both manual and electronic tools in qualitative data analysis and management and become open to, and make use of, the advantages of each.
A related study by Bong (2002) found that the following should be done when analyzing qualitative data in the software: a) investing in NVivo and learning its basic functions; b) attaching codes to the data; c) refining the codes; d) manually finding connections between codes toward theory-building; and e) mapping this web of connections for the presentation of an audit trail.

Specific Functions of NVivo
The study of Ahmad & Newman (2010) revealed that NVivo has several functionalities, coding being one of them. Codes in NVivo are called 'nodes'. Nodes work as receptacles or containers for the storage of coded information. They can be further divided into subcategories, called 'sub-nodes', which organize a node's hierarchical structure within an NVivo file (dubbed a 'project'). This ensures that the source document remains intact for future use. In open coding, for instance, the data is dissected into categories through the creation of free nodes. In other words, NVivo supports seamless data management through simple drag-and-drop functions. While doing this, it is advisable to revisit the full transcripts when reviewing the coding framework.
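To make the container metaphor concrete for readers without access to NVivo, the hierarchical node structure described above can be sketched in a few lines of Python. This is only an illustrative model, not part of any NVivo API: the Node class, its field names, and the sample excerpts below are all hypothetical. The sketch shows the key property noted above, namely that coding stores references to excerpts inside a node while leaving the source text itself untouched.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A hypothetical container for coded references, mirroring NVivo's node concept."""
    name: str
    references: list = field(default_factory=list)  # coded excerpts gathered here
    children: list = field(default_factory=list)    # sub-nodes forming the hierarchy

    def add_child(self, name: str) -> "Node":
        # Sub-nodes give a node its hierarchical structure within a "project".
        child = Node(name)
        self.children.append(child)
        return child

    def code(self, excerpt: str) -> None:
        # "Coding" only stores a reference; the source document stays intact.
        self.references.append(excerpt)

    def total_references(self) -> int:
        # A parent node aggregates its own references and those of its sub-nodes.
        return len(self.references) + sum(c.total_references() for c in self.children)

# A parent node with two sub-nodes, as produced in open coding (sample data is invented)
motivation = Node("Motivation")
motivation.add_child("Intrinsic").code("I enjoy exploring the software on my own.")
motivation.add_child("Extrinsic").code("The university encourages us to publish.")
print(motivation.total_references())  # → 2
```

Gathering related material "in one place" then amounts to reading a node's references together with those of its sub-nodes, which is what makes emerging patterns visible.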
A similar study conducted by Bazeley (2007) pointed out that another feature of NVivo is its query tools. Queries help probe the data, find patterns, and pursue ideas. Researchers can save queries, re-run them on new data, and track the progress of results. Some of the queries that can be created and saved in NVivo are as follows. Text search query: provides a quick way of coding your sources by searching for words and coding each occurrence at a particular node. Word frequency query: lists words and the number of times they occur in selected items; the most frequent words appear bigger and can help you identify themes and concepts. Coding query: gathers content based on how it was coded. Matrix coding query: creates a matrix of nodes based on search criteria, allowing you to view the number of cases and references coded at free nodes by the selected attributes and guiding the creation of parent and child nodes. Compound query: combines text and coding queries, searching for specified text in or near coded content. More importantly, from a word frequency query, researchers can create 'word clouds' that display the most frequently occurring words in their selected paragraphs, which is very useful for showing the most common lexical choices in the representation of each topic. These lists of the most frequent terms are also useful for revealing the activated semantic field in the representation of a specific topic. Besides, the pitfalls associated with qualitative data analysis can be avoided by running queries as the analysis progresses rather than leaving them to a stage at which data analysis is well advanced (Bergin, 2011).
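As a rough illustration of what a word frequency query computes, the counting step can be approximated outside NVivo in a few lines of Python. This is a conceptual sketch only, not NVivo's implementation; the function name, the sample transcript, and the short stop-word list are all illustrative assumptions.

```python
import re
from collections import Counter

def word_frequency(text, stop_words=None, top_n=5):
    """Tally word occurrences, as a word frequency query does before drawing a word cloud."""
    # A tiny illustrative stop-word list; real CAQDAS stop lists are far more extensive.
    stop_words = stop_words or {"the", "a", "an", "and", "of", "to", "in", "is", "with"}
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in stop_words)
    # The most frequent words (drawn largest in a word cloud) suggest candidate themes.
    return counts.most_common(top_n)

# An invented snippet of transcript text for demonstration
transcript = ("Coding is difficult. Coding takes time. "
              "The workshop helped with coding and visualization.")
print(word_frequency(transcript))  # 'coding' tops the list with 3 occurrences
```

This also makes Bergin's (2011) point tangible: because the count is cheap to re-run, the analyst can repeat it as coding progresses instead of deferring it to a late stage.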
Akin to the previous findings, Jemmont (2002) and Di Gregario (2000) revealed that NVivo, like any other CAQDAS package, does not eliminate the need for the researcher to think. For success, it is recommended that (a) the basic tool functionalities be clearly understood and (b) the researcher creatively seek ways to apply these functionalities for different purposes within different contexts. In terms of tool functionalities, training resources (books, user manuals, online help, and workshops) are very useful. Tutorials embedded in the tools are also useful assets for this purpose.
Moreover, the study conducted by Darmody & Byrne (2006) revealed that the conflicting opinions and suspicions regarding the use of CAQDAS like NVivo are often caused by limited knowledge of these programs and their capabilities, as well as by insufficient training in the use of qualitative research methods on the part of the researcher. It is also a common error for novice users to mistake qualitative coding for quantitative coding. To avoid excessive coding, the researcher should keep in mind the research design and the theoretical framework in order to decide what information is useful to code, depending on the research questions and methods chosen. After all, qualitative coding is interpretative by nature (Morse & Richards, 2002). In this context, it is important to remember that using qualitative software is not analysis. Qualitative software is a methodological tool that greatly facilitates analysis by allowing easy access to the data for continuous reviewing and theory building. Analysis, on the other hand, consists of coding, categorizing, and theorizing within the framework of a theoretical body of literature, and is not merely description. Another study (2007) revealed the following steps in analyzing data in NVivo: 1) Import the videos and audios, categorize the videos in different files, and arrange a separate folder for field reflections and memos; this allows researchers to keep track of what they have and run queries whenever needed. 2) Transcripts need to be added to the corresponding video and audio files so that the analysis covers not only what is said but also how spatial language is used, since it allows researchers to be "…sensorially closer to the data". 3) Start coding by choosing a document (video, transcript) that is a typical representation of the sample.
As suggested by Bazeley and Jackson (2013), "the first source you code can have a significant influence in determining the categories you create and the ideas you carry through the analysis as it will sensitize you to watch for certain types of detail" (p. 69). Codes will be reviewed multiple times; in the end, some of them will be merged into a category while others will be deleted or extended as a category. 4) After coding is completed for all files, "…with no new categories being developed and no new ideas being generated, it will be time to stop or at least to review your sampling strategy". 5) The review process includes running queries (text search), diagrams, and comparison charts. This process of seeing the parts and seeing the whole, or visualizing the data, allows researchers to make better interpretations. As recommended by Paulus, Jackson, and Davidson (2017), such visualizations are good for revealing the relations among main themes. 6) Take notes during the analysis and write memos spontaneously in the software as the analysis proceeds.
Nevertheless, a parallel study conducted by Hilal and Alabri (2013) reveals that thorough knowledge of and skills in applying NVivo are needed before researchers start using the software. These can be gained through workshops that emphasize the application and techniques of the NVivo software.

Methodology
This study employed a qualitative case study approach (Merriam, 1988, 1998). A qualitative case study was appropriate since the researcher wanted to fully understand how the NVivo 12 Plus adopters used and explored the software's functionalities in relation to the principles of qualitative data analysis. The participants were selected through purposive sampling. This sampling technique was applied to satisfy the purposes and needs of this study. The participants were chosen based on the following inclusion criteria: a) must be an NVivo 12 Plus adopter in a state university in the Philippines; b) must have hands-on experience using NVivo 12 Plus in analyzing qualitative data; and c) must have expressed willingness to participate, as confirmed by their signature on the informed consent forms. This study also employed theoretical sampling to determine the number of participants to be included. The final participants underwent observation, in-depth interviews, and Focus Group Discussion, respectively. The researcher utilized researcher-made interview and observation protocols which were foreshadowed beforehand with non-participants who exhibited conditions similar to those of the actual participants. To provide a high level of evidence, the actual participants underwent a 2-day NVivo workshop on November 21-22, 2019. Data collection and analysis were done simultaneously in this research methodology. The case study design generally follows the idea of Bogdan and Biklen (1982, in Merriam, 1988). The analyses of interview transcripts began by listening to the participants' responses in corroboration with the audio-video recordings and photographs. Emerging insights, hunches, and tentative hypotheses directed the next phase of data collection, which in turn led to the refinement or reformulation of questions, and so on. The raw data were transcribed in toto, based on the dialect used by the participants, using the Microsoft Word program.
The Hiligaynon transcription was then meaningfully translated into English and arranged according to the research questions. Similarly, the transcripts from the observation protocol and the interview protocol were corroborated into one transcript arranged according to the statement of the problem. The organized data were then imported into NVivo 12 Plus, following the iterative process recommended by QSR International, the developer of the software. Moreover, the 'word cloud' and other visuals from NVivo 12 Plus served as a blueprint for the presentation of results. The conceptual analysis was done in NVivo 12 Plus, while the analytic analysis was done by the researcher, supported by some functionalities of NVivo 12 Plus, with strong adherence to the principles of qualitative data analysis. In this way, the presentation of research findings became more structured, logical, and systematic. The final product of the case study research was a narrative report that told the story of the case and enabled the reader to fully understand the case from the narrative (Taylor & Thomas Gregory, 2015).

NVivo Adopters' Practices during Workshop
The NVivo adopters' profiles are shown in Table 1. The table presents the details of the eight (8) adopters, namely: Femey, Grace, Levelyn, Mary Cone, Melinda, Monique, Pearlane, and Zoe. It also shows their respective ages, areas of assignment, the type of NVivo used, the number of NVivo workshops attended, whether they ran NVivo after the workshop, and their years in the university. The participants were specifically observed on the basis of the NVivo Research Cycle introduced by QSR International, the developer of NVivo, with each stage given emphasis. With the help of facilitators and the other support staff of the researcher-resource speaker, the following observations were noted. The NVivo adopters are generally tech-savvy; they know how to navigate a computer application like NVivo. However, their competencies are dependent on, and limited to, their previous experiences with NVivo and their preconceived ideas on qualitative data analysis. The majority of them do not have qualitative research experience, which somehow affected the way they perceived and navigated the software. More importantly, the hands-on activity was very effective in gauging the proficiency and challenges of the NVivo adopters through non-participant observation, in-depth interviews, and FGD. Having a conducive venue and contingency laptops affected the practices exhibited by the NVivo adopters. Aside from this, other personal factors influenced the kinds of practices shown by the adopters. As discovered, most of them are assigned to the research office, with a balanced distribution of ages and years served in the University across all the case attributes. Thus, the researcher recognized some tendencies among the adopters. One of these is that the distilled data employed and the availability of all the resources posed a great advantage for the adopters in properly executing whatever expectations were laid on them.
Nevertheless, the real and actual exploration of the NVivo adopters was deliberately noted in the presence of facilitators who were also adopters of the software. In addition, the researcher acknowledges the possibility of different observations on the practices of NVivo adopters had the explorative activity been conducted individually, in a non-formal setting, or with different methods. The observations show that NVivo adopters can efficiently manage their qualitative data using the software. Words, for them, are very powerful. These qualitative words are equally important and meaningful in the context of Qualitative Data Analysis (QDA). They also acknowledge the importance of non-numeric data in qualitative research, since the strength of the outputs lies in them. As shown, the adopters are proficient in querying data, especially with Word Cloud, and in importing datasets. They are aware of what files to import by clicking the data sources. For them, querying data is the most feasible route to answering their research questions, which is why adopters usually stop upon seeing the Word Cloud. These adopters use Word Clouds to build insights for further analysis of qualitative data. To some extent, these adopters are likewise inclined to visualize their data using maps, but their visualizations are apparently very limited.

NVivo Adopters' Challenges
Figure 2. Word cloud on adopters' NVivo deficiencies
As shown in Figure 2, the NVivo adopters apparently had a hard time coding their qualitative data. The majority of them do not know what to code and how to do it properly. They preferred to auto-code their data to yield quick codes. These adopters also experienced difficulty in coding due to their insufficient 'immersion' in the datasets (transcripts). They are likewise deficient in visualizing data; some of them consider this stage time-consuming and tedious. This situation is attributed to the unclear and shallow instructions and mechanics provided for visualization. Some of these adopters wondered and were perplexed about what to do and how visualization should be done properly.
Figure 3. Word cloud on qualitative data analysis using NVivo
As shown in Figure 3, NVivo is a very important methodological tool in Qualitative Data Analysis (QDA). These adopters know that the software can be fully maximized if the exploration follows the QSR NVivo Research Cycle. Adequate knowledge of qualitative research can help achieve this goal. Thus, the NVivo workshop is very instrumental for the adopters in appreciating and deepening their competence in using the software. These adopters likewise recognize the importance of spontaneous use of NVivo to gain adequate knowledge and skills. This is because, unlike quantitative analysis, qualitative data analysis is dynamic, logical, systematic, and flexible.

Requirements for Improved Coding Exercises
Coding is the adopters' prime stumbling block. As such, the majority of the NVivo adopters requested coding exercises to address their difficulties in analyzing qualitative data. Coding, for them, is vital to producing strong, valid, and credible qualitative findings. Hence, the NVivo adopters strongly believe that coding exercises are the key to capacitating them. As shown, this mechanism includes: practicing coding exercises; asking somebody to check and critique the coding; reading the data transcript thoroughly; being reflective and serious in coding data; making researcher-generated codes; and generating heuristic codes. "We conducted qualitative research recently and our experience helped us a lot to code qualitative data correctly. Now, we somehow know how to do it properly." These two (2) adopters agreed that codes should be researcher-generated and derived heuristically.
Monique said: "My codes just depend on the words of the participants, like adjustments, I code 'adjusted'. I think I need coding exercises and I need a priori codes to hasten my coding process in NVivo. With a priori codes, I can easily determine what code to assign upon reading the text or transcript."

Realizations in Querying Data
Querying data is where most adopters are very fascinated. However, this practice is surrounded by misconceptions and misconstructions. Despite being deeply fascinated with Word Cloud results, the NVivo adopters realize that the latter is not the ultimate answer to the research question. As shown in Figure 5, doing a query of the data, according to them, requires adopters to be mindful of the following: never stop at the Word Cloud; maximize the query options; explore other visualizations; and ensure that query results are relevant to the Statement of the Problem (SOP).

Figure 5. Realizations in querying data
Based on Figure 5, querying data provides a wide range of insights among NVivo adopters. However, these adopters admit that they usually stop after seeing the Word Cloud. Monique disclosed: "I know somebody who suddenly stops using NVivo when she sees that Word Cloud does not answer the research questions. There are also researchers whom I know, right there in Word Cloud, they explain and use it (Word Cloud) in reporting their findings." Pearlane nodded and said: "Some of the researchers usually stop when they see Word Cloud. They don't even explore other visualization options." With such practices, some NVivo adopters realized that there is more to query. Hence, every adopter needs to maximize the query options, like text search, word search, and code search queries. Nevertheless, NVivo adopters should ensure that query results answer the Statement of the Problem (SOP).

Needs for Enhanced Visualization Skills
Visualizing data provides a distinct chance for an NVivo adopter to enhance the presentation of results. However, visualizing data is also the least mastered skill of the adopters. Hence, Figure 6 gives suggestions to enhance the visualization skills of NVivo adopters. These suggestions include: knowing how to use the visualization options; emphasizing the correct use of connectors, shapes, and colors; conducting training and seminars/workshops on creating visualizations; and knowing the parameters of visualization.
Figure 6. Needs for enhanced visualization skills
Based on Figure 6, visualization is relatively new to the adopters. Some of its options were introduced but, according to some, the introduction was shallow and purely an overview. Unlike the others, Grace has somewhat broad exposure to visualization because she has conducted qualitative research. She remarked: "NVivo was very instrumental in my recent qualitative research. It helped me organize and visualize my data. I could even make mind maps, concept maps, and graphs." Nevertheless, almost all of the participants were doubtful yet curious about how to fully leverage the visualization options.
Melinda said: "Like in quantitative research, we teach when to use a line diagram or line chart. When to use the horizontal bar graph, the vertical bar graph, pie charts, something like that, because I'm sure not all data can be used there. We need to be specific about visualization. Like for example the pie chart, should not use it if you only have two (2) slices of the pie. That is taught in the Biostatistics subject." Grace added: "For me, I think what is important from the tipping point of research is that everything has a meaning. The variables are not just for the title. It contains meaning and requires skills rather than spiritual thinking."
The preceding finding confirms the idea of Woolf and Silver (2018), that the more complete ideas can be displayed and shared, the more universal their meanings become. Visualization of connections in their data aids qualitative researchers in exploring and in reporting meanings to others.

Importance of NVivo Research Cycle
The NVivo research cycle provides a range of benefits to its adopters. Figure 7 provides a sneak peek at the importance of this cycle in exploring NVivo. For them, the NVivo research cycle is very important because it helps them understand NVivo operations; provides guidance and direction, especially for novices; and makes the teaching of NVivo easy.
Figure 7. Importance of NVivo research cycle
As shown in Figure 7, the NVivo research cycle is the QSR-introduced concept used in leveraging the software. It starts from import and runs through to the visuals. Adopters can go back and forth to any phase of the cycle as the project progresses. This cycle makes the teaching of NVivo easy, as it provides guidance and direction and thus keeps the adopters on the right track. The cycle is also user-friendly, which makes it useful even for novice adopters.
Levelyn said: "NVivo is very helpful, starting from import, you can follow through your data. You will never go wrong. Once you follow the cycle, it would be easy for you to go back if you miss something. You just have to look at the cycle and go back to it. You will be guided properly. The NVivo cycle is very relevant to me. For example, if my student inquires and asks for help, I will just show the NVivo cycle." Monique supported Pearlane and said: "This (NVivo cycle) is very useful, especially to novice ones! If you explore it oftentimes, more or less, you know it and you know where to find this, and that. But, if you do not have prior experience, do not dare to explore it alone. You need to undergo an intensive training workshop to capacitate yourself about these things. The adopters need to be guided and have a background in qualitative research. They should have an idea of how it works and how to work on it."

Qualities of NVivo Adopters
Being an adopter of NVivo is never an easy endeavor. It calls for both knowledge and skills in order to maximize the use of NVivo in qualitative data analysis. As shown in Figure 8, the WVSU NVivo adopters identified some qualities of a software user. These include: attendance at an NVivo workshop; computer literacy; critical thinking skills; a heart for qualitative research; open-mindedness; strong knowledge of Qualitative Data Analysis (QDA); and knowledge of the technicalities and mechanics of NVivo.

Figure 8. Qualities of NVivo adopters
Based on Figure 8, using NVivo requires special competencies and qualities of its adopters. Without these competencies, NVivo adopters will be seriously challenged. Most of them agreed that having sound qualities helps them optimize the potentialities of NVivo. For them, an NVivo adopter is open-minded, has a heart for qualitative research, has good critical thinking skills, and has adequate knowledge of Qualitative Data Analysis (QDA). Monique said: "No. 1 quality of NVivo adopters is computer literacy. Has to focus, has heart in quali, has good critical thinking skills even those non-adopters. Open-minded, patience, like your patience in doing manual (hehehe). Attendance at training workshops should be a pre-requisite in using NVivo (all said yes). You should not explore on your own the qualitative research and NVivo, because they have their own technicalities and mechanics." Grace added: "NVivo cannot do the thinking for you; at the end of the day, you as the researcher will analyze and decide about your data. NVivo is only a methodological tool." Femey affirmed: "I agree with you because the researcher is the one interpreting the result of the study. NVivo only guides us. As I understand qualitative research, you are getting the experiences and narratives of the participants." Moreover, being computer literate is indeed a necessity for an NVivo adopter. This paves the way to understanding the rudimentary requirements of the software relative to its technicalities and mechanics. This result conforms to the research of Cervantes, García, and Trigueros (2018): the software serves as a tool, and the decisions on how, when, and for what to use it will always be in the hands of the researcher. Hence, as stated by Woolf & Silver (2018), there is a need for continuous practice with the software and sustained engagement with the research study in order to maintain satisfactory progress. If you don't use it, you will lose it!
A high intention-to-use outcome can only be achieved if the user adopts and continues to use the QDAS technology.
Figure 9. Word cloud on the mechanisms to be undertaken by the WVSU administration
As shown in Figure 9, the WVSU administration is strongly encouraged to institutionalize the use of NVivo in the conduct of qualitative research in the University. The NVivo adopters are strongly convinced that using the software can fortify the research publication goals of the University. For most of them, this software is a very timely tool for staying relevant, especially in penetrating high-tier international publication indexes such as Scopus, Web of Science, and the ASEAN Citation Index. Thus, these adopters see the importance of conducting enhanced NVivo workshops with relevant resource speakers who are knowledgeable in both Qualitative Data Analysis (QDA) and NVivo functionalities to fully capacitate the University's faculty-researchers. This tool is viewed as a significant added value to the University's quest to become a research university in the years to come.

Computer Literacy as a Requirement for Adopters
Computer literacy is a necessity in this technological age. This skill enables a person to effectively and efficiently maximize a software's functionalities. Amidst the developments in CAQDAS, it is already a must for adopters to be computer literate in order to gain a technological advantage. That is why the NVivo adopters are amenable to the following schemes to enhance users' capability: provide a step-by-step discussion of procedures, especially for novice adopters; learn the technical terms, with which the NVivo manual can help; undergo hands-on NVivo activities with close monitoring; and conduct a diagnostic test for participants prior to the workshop.
Figure 10. Computer literacy as a requirement for adopters
The participants should be subjected to a diagnostic test to determine who is tech-savvy and who is not. As such, separate NVivo workshops must be conducted for the two groups. Grace lamented: "For technical terms, non-tech savvy will surely stumble. Hence, we should be scientific about it, you have a diagnostic test to identify who is tech-savvy and not. Then, you cannot mix the tech-savvy and not in a workshop. We should make diagnostic tests per college, we can say, this group is grade 1, this is grade 2 and this is grade 3. This will help the facilitator plan for a specific approach to a group of participants in a manner that will address their needs. Just like other things, there is pretest and posttest, etc." For the non-tech-savvy, tutors and step-by-step procedures are needed to help them navigate the software. Monique agreed and said: "If you are not computer literate, the ideal thing is that you should have at least a tutor beside you. There is a step-by-step discussion, what to click, next, and so forth, especially participants aged 50 and above who are less interested in technology. Even turning on and off of computer set they don't like, since it has many things to operate, like AVR, battery, and unit.
Hence, the adopters should have a background in computer literacy."

Challenges in the Diffusion of Innovation
Coupled with every technological explosion is people's resistance to embrace change. Bit by bit, however, the perceptions of non-adopters can be changed by first understanding existing practices and what can be done to help them. Researchers nowadays experience the following conditions: not being fully aware of the software, avoiding qualitative research, and suffering through traditional qualitative data analysis. The WVSU NVivo adopters are convinced that these conditions can be mitigated by mentoring non-adopters of NVivo and conducting in-depth training/workshops on the use of NVivo.

Figure 11. Challenges in the diffusion of innovation

As shown in Figure 11, not all qualitative researchers are NVivo-conversant or accepting; some are genuinely resistant to it. Some researchers are not actually using NVivo because they lack awareness of the software and avoid qualitative research. When they do conduct qualitative research, adopters are convinced that these non-adopters suffer through the traditional, manual method of qualitative analysis. Grace said: "For the resistant to NVivo, what I know is they suffer from the traditional methods, or they are not fully aware of NVivo." Melinda agreed and lamented: "Those resisting NVivo maybe they are not aware that their life will be easier if they use NVivo, just like the use of endnotes instead of index cards." Despite this, NVivo adopters are determined to help non-adopters and to be instrumental in guiding them to navigate the software thoroughly until they become full-fledged adopters. The mechanisms proposed by adopters include educating non-adopters on the potentialities of the software, conducting in-depth and thorough training-workshops on the software, and providing tutors to sit beside non-adopters.
Monique said: "Those resistant on NVivo need to be educated and must understand the process. We usually become apprehensive about a certain thing if we do not fully understand something, on how it works! Right? This is what the state university lacks (hehehe). We fail to give an in-depth understanding of the software!" These findings conform to the idea of Woolf & Silver (2018) that the decision about whether or not to use technology is ultimately the researcher's.

Use of NVivo Manual
The use of technology fundamentally requires a manual, one developed in a simplified manner with step-by-step procedures. These requirements are met by the researcher-developed NVivo manual, which, as shown in Figure 12, has the following features: it conforms with the NVivo research cycle; provides a step-by-step guide with illustrations; keeps adopters from getting lost; and guides adopters with or without a qualitative background. The NVivo manual is thus instrumental to the adopters' successful exploration of the software. All adopters recommended using the researcher-developed NVivo manual whenever exploring the software. Grace and Monique strongly affirmed: "We strongly recommend the use of NVivo manual in exploring NVivo".
According to these adopters, the manual follows the QSR NVivo research cycle, starting from import through visualize, and offers a step-by-step guide with photographs. These features make NVivo easier to navigate and to follow. In the same manner, the manual guides those with or without a qualitative background so that they do not get lost. Monique added: "In manual analysis, the researcher has a tendency to go back and forth, but in NVivo, they can go directly as long as they properly do each stage. NVivo manual is very useful since we can smoothly follow it! It is a step-by-step guide, different from the PowerPoint that was given to us last time (referring to previous NVivo workshops). Thereof, we assumed what is next, we didn't even know where the parts came from. Worse is when you are not fast in following the sample in front! You will surely get lost! When you don't have an idea and you don't have a background in qualitative and NVivo, you might quit easily (hehehe). If you get lost, you will easily withdraw, if you cannot understand, your tendency is to stop!" Pearlane stated: "The NVivo manual is easy to follow because of its step-by-step guide with matching photographs. Whatever appears on screen, it is there and you can actually compare it with your work if it's right or not. Without the NVivo manual, we might get lost (hehehe). You may easily get bored if you don't have a cycle to follow." This finding agrees with Miles & Huberman (2014), who hold that the analysis of qualitative data is a cyclical, iterative, and non-linear undertaking.

Conclusion
The NVivo research cycle and the NVivo manual of operation are very effective tools in using the software for qualitative data analysis. This could be because adopters are well guided on what to do and how to maximize the software's potentialities. Added to this is the adopters' prior NVivo experience before the conduct of the workshop.
NVivo adopters are both relatively proficient in and challenged by the stages of the QSR NVivo research cycle. They are particularly proficient in the import, explore, and query stages, while challenged in the code, reflect, visualize, and memo stages. This could probably be because of adopters' limited exposure to NVivo's functionalities and inadequate knowledge of the nature of qualitative data analysis. Import, explore, and query were the most mastered stages of NVivo. Word Cloud, for instance, is the most often used feature of the software. This could mean that adopters are easily amazed and fascinated by the Word Cloud result; they somehow consider Word Cloud the ultimate answer to their research questions. In the area of import, adopters found it easy to explore since the datasets were all prepared and organized by the researcher-resource speaker. On the other hand, code, reflect, visualize, and memo were the least mastered stages of NVivo. This could be due to adopters' insufficient training and knowledge in leveraging NVivo and the limited mentoring they received in qualitative research. Likewise, the majority of the adopters had no experience conducting comprehensive qualitative research with NVivo as a methodological tool. In addition, some adopters did not undergo coding exercises, were unfamiliar with the parameters of visualizations, and lacked time to write memos in NVivo.
NVivo analysis is researcher-driven analysis. This may be attributed to the limited capacity of NVivo and the iterative nature of Qualitative Data Analysis (QDA). Hence, NVivo adopters are advised to acquire the necessary knowledge and skills in qualitative data analysis before navigating the software. Computer literacy is a must for an NVivo adopter, given the software's technical requirements, which are learned and understood most easily by the tech-savvy. Having Information Technology (IT) skills is thus a paramount consideration.
Finally, it is timely for the WVSU administration to institutionalize CAQDAS-NVivo in analyzing qualitative data among researchers in the system. NVivo is a very useful methodological tool for analyzing qualitative data and brings a wide range of benefits to its adopters. Aside from its user-friendly interface, NVivo is widely accepted in ISI-indexed journals, improving the prospects of university research being published internationally.